To sum up my experience with LlamaIndex: Part 1 covers building a knowledge base and persisting it, Part 2 covers using a local LLM and a local embedding model, and Part 3 covers using the query engine and the chat engine (with custom prompts).
LlamaIndex version: v0.10.20.post1
Knowledge base format: a folder containing only .txt documents
Libraries to install:
- pip install llama-index
- pip install llama-index-embeddings-huggingface
The LlamaIndex codebase changes quite a lot, including the import paths (the code in other articles I found on CSDN, Zhihu, and elsewhere no longer matches the current version), so the code below is for reference only and may well be out of date after the next official change. You can still follow the structure and approach of this post and look up the corresponding, up-to-date code on the official LlamaIndex site.
All of the imports:
```python
import os
import torch
from typing import Optional, List, Mapping, Any

from transformers import BitsAndBytesConfig, AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig

from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.huggingface import HuggingFaceLLM
from llama_index.core import (
    Settings,
    SimpleDirectoryReader,
    VectorStoreIndex,
    SummaryIndex,
    DocumentSummaryIndex,
    StorageContext,
    load_index_from_storage,
    get_response_synthesizer,
)
from llama_index.core.prompts import PromptTemplate
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.callbacks import CallbackManager
from llama_index.core.llms.callbacks import llm_completion_callback
from llama_index.core.llms import (
    CustomLLM,
    CompletionResponse,
    CompletionResponseGen,
    LLMMetadata,
)
```
```python
# load the documents
documents = SimpleDirectoryReader(
    input_dir="your/dirs/path").load_data(show_progress=True)

# check the documents
print("Total number of documents:", len(documents))
print("First document:", documents[0])

# build a vector index over the documents
index = VectorStoreIndex.from_documents(documents)
```
If you want to control the chunk size when the documents are split:
```python
Settings.chunk_size = 4096

# local settings
index = VectorStoreIndex.from_documents(
    documents, transformations=[SentenceSplitter(chunk_size=4096)]
)
```
A summary index can match against the generated document summaries first during retrieval.
```python
splitter = SentenceSplitter(chunk_size=1024)

# collect the paths of all txt files under the directory
for dirpath, dirnames, filenames in os.walk("your/dir/path"):
    filepaths = [os.path.join(dirpath, name) for name in filenames]
    print(filepaths)

docs = []
# iterate over the file paths, using each txt file's name as its doc_id
for filepath in filepaths:
    _docs = SimpleDirectoryReader(
        input_files=[f"{filepath}"]
    ).load_data()
    _docs[0].doc_id = filepath.split('/')[-1].split('.txt')[0]
    docs.extend(_docs)
print(docs)

# default mode of building the index
response_synthesizer = get_response_synthesizer(
    response_mode="tree_summarize", use_async=True
)
doc_summary_index = DocumentSummaryIndex.from_documents(
    docs,
    llm=OurLLM(),  # the local LLM; its construction is covered in Part 2
    transformations=[splitter],
    response_synthesizer=response_synthesizer,
    show_progress=True,
)
```
About llm=OurLLM(): here I use my own local LLM to summarize the document contents; you can plug in another model to generate the summaries instead.
Persist the index to disk:
```python
doc_summary_index.storage_context.persist("your/persist/path")
# or, for the plain vector index:
index.storage_context.persist(persist_dir='your/persist/path')
```
To load a persisted index back later:
```python
# for the plain vector index:
# storage_context = StorageContext.from_defaults(persist_dir='your/persist/dir')
# index = load_index_from_storage(storage_context)

# for the document summary index:
storage_context = StorageContext.from_defaults(persist_dir='your/persist/dir')
doc_summary_index = load_index_from_storage(storage_context)
```
Set the local embedding model (I use bge-large-zh-v1.5):
```python
Settings.embed_model = HuggingFaceEmbedding(
    model_name="your/embed/model/path"
)
```
```python
# load the model in 4-bit (my GPU memory is limited; skip this if you can load the full model)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)

model_name = "your/llm/model/path"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, low_cpu_mem_usage=True, quantization_config=quantization_config).eval()
```
```python
# custom local LLM wrapper
class OurLLM(CustomLLM):
    context_window: int = 4096
    num_output: int = 1024
    model_name: str = "custom"

    @property
    def metadata(self) -> LLMMetadata:
        """Get LLM metadata."""
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name=self.model_name,
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        text, history = model.chat(tokenizer, prompt, history=[], temperature=0.1)
        return CompletionResponse(text=text)

    @llm_completion_callback()
    def stream_complete(
        self, prompt: str, **kwargs: Any
    ) -> CompletionResponseGen:
        raise NotImplementedError()
```
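The snippets above never register OurLLM with LlamaIndex, and engines that are not given llm= explicitly will typically fall back to the default (OpenAI) LLM. A minimal sketch of wiring it in globally via the same Settings object already used for the embedding model:

```python
# register the local LLM globally so the query/chat engines below use it
Settings.llm = OurLLM()

# quick sanity check that the custom LLM responds
print(Settings.llm.complete("你好,请介绍一下你自己。"))  # "Hello, please introduce yourself."
```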
1.1 A query engine built on the simplest, default knowledge base
```python
query_engine = index.as_query_engine(similarity_top_k=3)

# custom QA prompt (in Chinese: "Answer the user's question using the reference
# knowledge given below", followed by the context and the question)
qa_prompt_tmpl_str = (
    "请结合给出的参考知识,回答用户的问题。"
    "参考知识如下:\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "用户的问题如下:\n"
    "human: {query_str}\n"
    "Assistant: "
)
qa_prompt_tmpl = PromptTemplate(qa_prompt_tmpl_str)
query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": qa_prompt_tmpl}
)
```
You can also skip the custom prompt, in which case the default prompt is used automatically:
```python
query_engine = index.as_query_engine(similarity_top_k=3)
```
The default prompts can be found in your installed llama_index package under llama_index/core/prompts/ (e.g. chat_prompts.py, mixin.py, default_prompts.py); that folder contains many prompt template files, so just search through it. You can also print the active prompts directly, as shown below.
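A small sketch for inspecting the templates a query engine is actually using, via its get_prompts() method, rather than digging through the package source:

```python
# print every prompt template registered on the query engine
prompts_dict = query_engine.get_prompts()
for key, prompt in prompts_dict.items():
    print(key)
    print(prompt.get_template())
    print("-" * 40)
```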
Using the query engine:
```python
response = query_engine.query('狗头表情包代表什么感情?')  # "What emotion does the dog-head meme express?"
```
1.2 A query engine built on the summary-index knowledge base
Same as 1.1, except that index is replaced by doc_summary_index:
```python
query_engine = doc_summary_index.as_query_engine(
    response_mode="tree_summarize", use_async=True
)

# custom QA prompt (in Chinese: "Answer the user's question using the reference knowledge")
qa_prompt_tmpl_str = (
    "结合参考知识,回答用户的问题\n"
    "参考知识如下:\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "用户的问题如下:\n"
    "{query_str}\n"
)
qa_prompt_tmpl = PromptTemplate(qa_prompt_tmpl_str)
query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": qa_prompt_tmpl}
)

# or skip the custom prompt and simply use:
query_engine = doc_summary_index.as_query_engine(
    response_mode="tree_summarize", use_async=True
)
```
Using the query engine:
```python
response = query_engine.query('狗头表情包代表什么感情?')
print(response.metadata)
```
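Besides response.metadata, the retrieved chunks are available on response.source_nodes; a small sketch for checking what was actually matched:

```python
# inspect the retrieved chunks and their similarity scores
for node_with_score in response.source_nodes:
    print(node_with_score.node.node_id, node_with_score.score)
    print(node_with_score.node.get_content()[:200])  # first 200 characters of the chunk
```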
When using a chat_engine, you may hit an "initial token count exceeds token limit" error. One fix is to edit the token_limit in llama_index/core/chat_engine/condense_plus_context.py inside your installed llama_index package; I set it to 30000. (A cleaner alternative is to pass your own memory buffer; see the sketch below.)
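Instead of editing the installed package, you can usually raise the same limit by passing an explicit ChatMemoryBuffer to the chat engine; a minimal sketch, assuming the doc_summary_index from Part 1:

```python
from llama_index.core.memory import ChatMemoryBuffer

# give the chat engine a larger conversation memory so the initial context fits
memory = ChatMemoryBuffer.from_defaults(token_limit=30000)

chat_engine = doc_summary_index.as_chat_engine(
    chat_mode='condense_plus_context',
    memory=memory,
    verbose=True,
)
```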
What you change for the chat_engine is its chat_mode; the available modes live in the llama_index/core/chat_engine folder of your installed package.
2.1 chat_engine with chat_mode: condense_plus_context
```python
# system prompt (in Chinese: "Answer the question using the reference knowledge given below")
system_prompt = '''请结合给出的参考知识,回答问题。\n
参考内容如下:\n
---------------------\n
{context_str}\n
---------------------\n
Query: {query_str}\n
Answer:
'''
chat_engine = doc_summary_index.as_chat_engine(
    chat_mode='condense_plus_context', use_async=True, system_prompt=system_prompt,
    verbose=True
)
# verbose=True also prints the retrieved documents and other info along with the answer; default is False
# only chat engines have verbose; for query engines, use print(response.metadata) instead
```
Usage:
```python
response = chat_engine.chat('表达开心应该发什么表情包?')  # "Which meme should I send to express happiness?"
print(response)
response = chat_engine.chat('要高冷地表达')  # "Express it in an aloof way"
print(response)

# view the chat history:
print(chat_engine.chat_history)
```
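To start a fresh conversation without rebuilding the engine, chat engines also expose reset(); for example:

```python
# clear the conversation state before starting a new dialogue
chat_engine.reset()
print(chat_engine.chat_history)  # now empty
```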
2.2 chat_engine with chat_mode: condense_question
```python
DEFAULT_TEMPLATE = """\
Given a conversation (between Human and Assistant) and a follow up message from Human, \
rewrite the message to be a standalone question that captures all relevant context \
from the conversation. If the answer cannot be confirmed, ask the user follow-up questions \
based on the reference knowledge, limited to 3 questions.

<Chat History>
{chat_history}

<Follow Up Message>
{question}

<Standalone question>
"""

DEFAULT_PROMPT = PromptTemplate(DEFAULT_TEMPLATE)

chat_engine = doc_summary_index.as_chat_engine(
    chat_mode='condense_question', use_async=True, condense_question_prompt=DEFAULT_PROMPT,
    verbose=True
)
# verbose=True also prints the retrieved documents and other info; default is False
```
Usage:
```python
response = chat_engine.chat('表达开心应该发什么表情包?')
print(response)
response = chat_engine.chat('要高冷地表达')
print(response)

# view the chat history:
print(chat_engine.chat_history)
```