
Getting Started with Chroma and Vector Database Queries

Reference:
https://python.langchain.com/docs/integrations/vectorstores/chroma
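Before diving into the LangChain integration below, it helps to see what a vector store like Chroma does at its core: it keeps (embedding, document) pairs and answers a query by ranking documents by similarity to the query's embedding. The following is a toy in-memory sketch of that idea in plain Python; the class and method names (`ToyVectorStore`, `similarity_search`) are illustrative only and are not Chroma's actual API.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class ToyVectorStore:
    """Minimal in-memory vector store: stores (embedding, text) pairs
    and returns the top-k texts most similar to a query embedding."""

    def __init__(self):
        self.items = []  # list of (embedding, text) pairs

    def add(self, embedding, text):
        self.items.append((embedding, text))

    def similarity_search(self, query_embedding, k=1):
        # Rank all stored documents by similarity to the query, best first
        ranked = sorted(
            self.items,
            key=lambda item: cosine_similarity(item[0], query_embedding),
            reverse=True,
        )
        return [text for _, text in ranked[:k]]

store = ToyVectorStore()
store.add([1.0, 0.0], "doc about apples")
store.add([0.0, 1.0], "doc about oranges")
print(store.similarity_search([0.9, 0.1], k=1))  # → ['doc about apples']
```

In the real setup below, the embedding model (Jina) produces the vectors and Chroma stores and searches them, but the retrieval principle is the same: nearest neighbors in embedding space.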


import ChatGLM

from langchain_community.document_loaders import PyPDFLoader

from langchain_community.vectorstores import Chroma
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.embeddings import JinaEmbeddings
from langchain_core.runnables import RunnableParallel, RunnablePassthrough


llm = ChatGLM.ChatGLM_LLM()
# loader = PyPDFLoader("唐诗三百首.pdf")
loader = PyPDFLoader("西游记.pdf")
documents = loader.load_and_split()

embeddings = JinaEmbeddings(
    jina_api_key="your_jina_api_key",  # use your own key; never publish a real API key
    model_name="jina-embeddings-v2-base-zh",
)

# First run: embed the documents and persist the index to disk
# vectorstore = Chroma.from_documents(documents, embeddings,persist_directory="./chroma_db")

# Subsequent runs: load the persisted index from disk
vectorstore = Chroma(persist_directory="./chroma_db", embedding_function=embeddings)

retriever = vectorstore.as_retriever()
template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
output_parser = StrOutputParser()
setup_and_retrieval = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}
)
chain = setup_and_retrieval | prompt | llm | output_parser
print(chain.invoke("第二十二回讲了什么"))  # "What happens in Chapter 22?"
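The `|` piping in the chain above is LangChain's LCEL composition: each runnable's output becomes the next one's input, so `retriever → prompt → llm → parser` runs as a single pipeline. A minimal sketch of that mechanism in plain Python (the `Runnable` class here is a hypothetical analogue, not LangChain's implementation):

```python
class Runnable:
    """Toy analogue of LCEL piping: `a | b` builds a runnable that
    feeds a's output into b."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # Compose: first run self, then pass the result to other
        return Runnable(lambda x: other.invoke(self.invoke(x)))

# Stand-ins for the retriever, prompt template, and LLM
retrieve = Runnable(lambda q: {"context": f"<docs for {q!r}>", "question": q})
prompt = Runnable(lambda d: f"Answer from {d['context']}: {d['question']}")
fake_llm = Runnable(lambda p: p.upper())

chain = retrieve | prompt | fake_llm
print(chain.invoke("who wrote chapter 22?"))
```

The real chain works the same way, with `RunnableParallel` fanning the input question out to both the retriever (for `{context}`) and a passthrough (for `{question}`) before the prompt template fills its slots.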



