LangChain is a framework for developing applications powered by language models. Install it with pip:
pip install langchain
Error:
ERROR: Could not find a version that satisfies the requirement langchain (from versions: none)
ERROR: No matching distribution found for langchain
Solution: install from a mirror instead:
pip install langchain -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com
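A quick way to confirm the mirror install worked (a sanity check, not from the original log):
python -c "import langchain; print(langchain.__version__)"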
Integrating llama.cpp with LangChain:
pip install llama-cpp-python -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com
报错:ERROR: Cannot unpack file C:\Users\96584\AppData\Local\Temp\pip-unpack-izfgtfwa\simple (downloaded from C:\Users\96584\AppData\Local\Temp\pip-req-build-1raavtqr, content-type: text/html; charset=utf-8); cannot detect archive format
ERROR: Cannot determine archive format of C:\Users\96584\AppData\Local\Temp\pip-req-build-1raavtqr
Solution: switch to a different mirror:
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple --trusted-host pypi.tuna.tsinghua.edu.cn llama-cpp-python
Error:
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
(The default build generator expects nmake, which is not recognized.)
Solution: switch the generator and compiler in PowerShell (type powershell in cmd):
$env:CMAKE_GENERATOR = "MinGW Makefiles"
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on -DCMAKE_C_COMPILER=C:/w64devkit/bin/gcc.exe -DCMAKE_CXX_COMPILER=C:/w64devkit/bin/g++.exe"
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple --trusted-host pypi.tuna.tsinghua.edu.cn llama-cpp-python
Error:
:thread.c:(.text+0x103f): multiple definition of `pthread_self' ../../libllama.dll.a(d000850.o):(.text+0x0): first defined here collect2.exe: error: ld returned 1 exit status
Solution:
A GitHub issue for the project suggests adding -DLLAVA_BUILD=OFF to CMAKE_ARGS, which disables LLaVA support.
This is a bug in llama-cpp-python.
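Putting the flags together, the PowerShell setup before retrying the install would look roughly like this (compiler paths as above):
$env:CMAKE_GENERATOR = "MinGW Makefiles"
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on -DCMAKE_C_COMPILER=C:/w64devkit/bin/gcc.exe -DCMAKE_CXX_COMPILER=C:/w64devkit/bin/g++.exe -DLLAVA_BUILD=OFF"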
A simple test run:
import llama_cpp

model = llama_cpp.Llama(
    model_path="D:/researchPJ/llamacpp/llama.cpp/models/llama-2-7b/ggml-model-q4_0.gguf",
)
print(model("The quick brown fox jumps ", stop=["."])["choices"][0]["text"])
Error:
RuntimeError: Failed to load shared library 'D:\code\langchain-llama\.venv\lib\site-packages\llama_cpp\libllama.dll': [WinError 193] %1 is not a valid Win32 application.
Solution:
libllama.dll was compiled as 32-bit; it must be rebuilt as 64-bit to match 64-bit Windows and the 64-bit Python interpreter.
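A quick check (not in the original log): the DLL must match the interpreter's bitness, which you can print from Python:
import struct
print(struct.calcsize("P") * 8)  # 64 = 64-bit interpreter, 32 = 32-bit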
Add -DCMAKE_GENERATOR_PLATFORM=x64 to the CMake arguments to generate a 64-bit DLL.
Error:
MinGW Makefiles does not support platform specification, but platform x64 was specified.
Solution:
Create a toolchain file such as "mingw64.cmake" containing:
set(CMAKE_GENERATOR_PLATFORM x64)
# Specify compilers
set(CMAKE_C_COMPILER C:/w64devkit/bin/gcc.exe)
set(CMAKE_CXX_COMPILER C:/w64devkit/bin/g++.exe)
Then point the build at it by adding -DCMAKE_TOOLCHAIN_FILE=path/to/mingw64.cmake to CMAKE_ARGS.
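With the toolchain file now supplying the compilers and platform, the environment before the rebuild would look roughly like:
$env:CMAKE_GENERATOR = "MinGW Makefiles"
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on -DLLAVA_BUILD=OFF -DCMAKE_TOOLCHAIN_FILE=path/to/mingw64.cmake"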
Rebuild:
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple --trusted-host pypi.tuna.tsinghua.edu.cn llama-cpp-python --upgrade --force --reinstall --no-cache-dir
After this, the test script runs successfully.
For chat-style calls, use create_chat_completion(); the reply text is at response['choices'][0]['message']['content'].
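A minimal sketch, reusing the model object from the test script above (the message contents here are illustrative, not from the original log):
response = model.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},  # illustrative system prompt
        {"role": "user", "content": "What is LangChain?"},
    ],
)
print(response["choices"][0]["message"]["content"])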
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="D:/researchPJ/llamacpp/llama.cpp/models/llama-2-7b/ggml-model-q4_0.gguf",
    temperature=0.75,
    n_gpu_layers=20,  # gpu accelerate
    n_threads=6,
)

prompt = """
what is langsmith?
"""
print(llm.invoke(prompt))
Using a prompt template for input:
- template = """Question: {question}
- Answer: Let's work this out in a step by step way to be sure we have the right answer."""
-
- prompt = PromptTemplate.from_template(template)
-
- llm_chain = LLMChain(prompt=prompt, llm=llm)
-
- question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
- print(llm_chain.run(question))
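To see the exact text the model receives, the filled-in template can be rendered on its own (a side check, not part of the original script):
print(prompt.format(question=question))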
Prompt + model + output parser:
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a world class technical documentation writer."),
    ("user", "{input}")
])

output_parser = StrOutputParser()

llm_chain = LLMChain(prompt=prompt, llm=llm, output_parser=output_parser)

print(llm_chain.invoke({"input": "how can langsmith help with testing?"}))
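Note that llm_chain.invoke returns a dict rather than a bare string; assuming LLMChain's default output key "text", the parsed answer can be pulled out like this:
result = llm_chain.invoke({"input": "how can langsmith help with testing?"})
print(result["text"])  # parsed string output ("text" is assumed to be the default output key)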
Embedding model + vector store + prompt + model + output parser:
pip install beautifulsoup4
pip install faiss-cpu
from langchain_community.llms import LlamaCpp
from langchain_core.prompts import ChatPromptTemplate
from langchain_community.document_loaders import WebBaseLoader
from langchain.embeddings import LlamaCppEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain

model_path = "D:/researchPJ/llamacpp/llama.cpp/models/llama-2-7b/ggml-model-q4_0.gguf"

embeddings = LlamaCppEmbeddings(model_path=model_path)

llm = LlamaCpp(
    model_path=model_path,
    temperature=0.75,
    n_gpu_layers=20,  # gpu accelerate
    n_threads=6,
    n_ctx=2048,
)

loader = WebBaseLoader("https://docs.smith.langchain.com")
docs = loader.load()

text_splitter = RecursiveCharacterTextSplitter()
documents = text_splitter.split_documents(docs)
vector = FAISS.from_documents(documents, embeddings)

prompt = ChatPromptTemplate.from_template("""Answer the following question based only on the provided context:
<context>
{context}
</context>
Question: {input}""")

document_chain = create_stuff_documents_chain(llm, prompt)

retriever = vector.as_retriever()
retrieval_chain = create_retrieval_chain(retriever, document_chain)

response = retrieval_chain.invoke({"input": "how can langsmith help with testing?"})
print(response["answer"])

# LangSmith offers several features that can help with testing:...
Load the data to index with WebBaseLoader.
Use the embedding model to ingest the documents into the vector store.
Then build a retrieval chain: it takes an incoming question, looks up relevant documents, and passes those documents together with the original question to the LLM, asking it to answer the original question.
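If an answer looks off, the retriever can be sanity-checked on its own, reusing the retriever object from the script above (the preview length here is arbitrary; on older langchain versions, get_relevant_documents is the equivalent call):
docs = retriever.invoke("how can langsmith help with testing?")
print(len(docs))                   # number of retrieved chunks
print(docs[0].page_content[:200])  # preview of the top match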