
Adapting mem0ai to Ollama

mem0 ValueError: shapes (0,1536) and (768,) not aligned: 1536 (dim 1) != 768 (dim 0)

Adapting to Ollama

  • mem0 has become extremely popular lately; in just a few days its GitHub star count shot up to 16.9K. The official documentation includes an OpenAI example, but there is no corresponding example for Ollama, which is just as much in the spotlight right now. After checking the code in the project repository, I found that the Ollama adaptation had not been completed, so I spent some time and adapted it to Ollama myself.

Environment

  • python 3.10

  • ollama 0.2.8

    • Assumed to be installed on the local machine
    • Pull the Ollama models mistral-nemo and nomic-embed-text ahead of time (commands below)
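
If the two models are not available locally yet, they can be pulled in advance with the ollama CLI (assuming ollama is installed and its server is running):

ollama pull mistral-nemo
ollama pull nomic-embed-text
ollama list   # verify that both models are listed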

Steps

  • Clone the fork of the mem0ai repository
git clone https://github.com/Galileo2017/mem0.git
  • Install the dependencies
pip install ollama
pip install mem0ai
  • Replace the installed mem0ai package
Copy all files and directories under the mem0/mem0 directory of the fork into the Python site-packages directory Lib/site-packages/mem0, overwriting the existing files. One way to script this is sketched below.
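
As one possible way to do the replacement, here is a minimal Python sketch; the path mem0/mem0 is assumed to be relative to where the fork was cloned, so adjust it to your environment:

import os
import shutil

import mem0

# Locate the installed package, e.g. .../Lib/site-packages/mem0
installed_dir = os.path.dirname(mem0.__file__)
# Directory mem0/mem0 inside the cloned fork (adjust the path if needed)
fork_dir = "mem0/mem0"
# Overwrite the installed files with the files from the fork
shutil.copytree(fork_dir, installed_dir, dirs_exist_ok=True)
print("Replaced", installed_dir)

Note that this only patches the local installation; reinstalling or upgrading mem0ai with pip will overwrite the change.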
  • Example script
import os
from mem0 import Memory

# User ID for this demo
USER_ID = "deshraj"

# Ollama configuration
# Set the environment variable OLLAMA_HOST=http://127.0.0.1:11434
os.environ['OLLAMA_HOST'] = "http://127.0.0.1:11434"

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "mistral-nemo",
            "temperature": 0.2,
            "max_tokens": 1500
        }
    },
    "embedder": {
        "provider": "ollama"
    },
    "embedding_model_dims": 768
}

# Initialize mem0
memory = Memory.from_config(config)

# User data to store
USER_DATA = """
About me
- I'm Deshraj Yadav, Co-founder and CTO at Mem0 (f.k.a Embedchain). I am broadly interested in the field of Artificial Intelligence and Machine Learning Infrastructure.
- Previously, I was Senior Autopilot Engineer at Tesla Autopilot where I led the Autopilot's AI Platform which helped the Tesla Autopilot team to track large scale training and model evaluation experiments, provide monitoring and observability into jobs and training cluster issues.
- I had built EvalAI as my masters thesis at Georgia Tech, which is an open-source platform for evaluating and comparing machine learning and artificial intelligence algorithms at scale.
- Outside of work, I am very much into cricket and play in two leagues (Cricbay and NACL) in San Francisco Bay Area.
"""

# Add the user data to mem0
memory.add(USER_DATA, user_id=USER_ID)
print("User data added to memory.")

# Query the stored memories
command = "Find papers on arxiv that I should read based on my interests."
relevant_memories = memory.search(command, user_id=USER_ID, limit=3)
relevant_memories_text = '\n'.join(mem['text'] for mem in relevant_memories)
print("Relevant memories:")
print(relevant_memories_text)

Troubleshooting

  • Saving the embeddings generated by Ollama raises the following error:
ValueError: shapes (0,512) and (768,) not aligned: 512 (dim 1) != 768 (dim 0)

Root cause:

# mem0\memory\main.py
self.vector_store.create_col(
    name=self.collection_name, vector_size=self.embedding_model.dims
)

The vector_size is initialized from embedding_model.dims, which does not match the dimensionality of the embedding model actually in use, hence the error. Adding the configuration entry "embedding_model_dims": 768 (as shown in the example script above) resolves the problem.
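
To confirm the dimensionality that the embedding model actually produces, here is an illustrative check using the ollama Python client (this is only a verification aid, not part of the fix; it assumes the ollama package is installed and nomic-embed-text has been pulled):

import ollama

# Ask the model for one embedding and inspect its length.
resp = ollama.embeddings(model="nomic-embed-text", prompt="dimension check")
print(len(resp["embedding"]))  # nomic-embed-text is expected to report 768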
