
Open-Source Model Deployment in Practice: Integrating ChatGLM3-6B with LangChain (Part 10)


1. Introduction

    The LangChain framework can drive a locally deployed model, so users can simply ask questions or issue instructions without worrying about the underlying steps or pipeline. By integrating LangChain with the chatglm3-6b model, conversations can be handled more effectively, producing smarter and more accurate responses and thereby improving both the performance and the user experience of a dialogue system.


2. Terminology

2.1. ChatGLM3

    ChatGLM3 is a conversational pre-trained model jointly released by Zhipu AI and the KEG Lab of Tsinghua University. ChatGLM3-6B is the open-source model in the ChatGLM3 series. While retaining many strengths of the previous two generations, such as fluent dialogue and a low deployment barrier, ChatGLM3-6B introduces the following features:

  1. A stronger base model: ChatGLM3-6B's base model, ChatGLM3-6B-Base, was trained with more diverse data, more training steps, and a more reasonable training strategy. Evaluations on datasets covering semantics, mathematics, reasoning, code, and knowledge show that ChatGLM3-6B-Base has the strongest performance among base models under 10B parameters.
  2. More complete feature support: ChatGLM3-6B adopts a newly designed prompt format. In addition to normal multi-turn dialogue, it natively supports complex scenarios such as tool invocation (Function Call), code execution (Code Interpreter), and Agent tasks.
  3. A more comprehensive open-source lineup: besides the dialogue model ChatGLM3-6B, the base model ChatGLM3-6B-Base, the long-context dialogue model ChatGLM3-6B-32K, and ChatGLM3-6B-128K (which further strengthens long-text understanding) have also been open-sourced. All of these weights are fully open for academic research, and free commercial use is also permitted after registering via a questionnaire.
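The "newly designed prompt format" mentioned above is role-tagged. As a hedged illustration only: the role names below follow the ChatGLM3 model card, but the real prompt is assembled by the model's own tokenizer, and `build_prompt` here is a hypothetical helper, not part of the model's API:

```python
# Illustrative sketch only: ChatGLM3 wraps each turn in role tags such as
# <|system|>, <|user|>, <|assistant|>, <|observation|>. The actual prompt is
# built by the model's tokenizer; build_prompt is a hypothetical helper.
def build_prompt(history, query):
    parts = []
    for turn in history:
        parts.append(f"<|{turn['role']}|>\n{turn['content']}")
    parts.append(f"<|user|>\n{query}")
    parts.append("<|assistant|>")  # the model continues from here
    return "\n".join(parts)

print(build_prompt([{"role": "system", "content": "You are a helpful assistant."}], "你好"))
```

The extra roles (`observation` in particular) are what make the tool-calling examples in section 5 possible.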

2.2. LangChain

    LangChain is a comprehensive application-development toolkit built on the predictive capabilities of large language models. Its pre-built chains work like Lego bricks: whether you are a beginner or an experienced developer, you can pick the pieces that fit and assemble a project quickly. For developers who want to go deeper, LangChain's modular components let you customize and create the functional chains your application needs.

    In essence, LangChain is a wrapper around the APIs exposed by various large models: a set of frameworks, modules, and interfaces built to make those APIs easier to use.

    Key features of LangChain:
        1. It can connect to many data sources, such as web links, local PDF files, and vector databases.
        2. It allows a language model to interact with its environment.
        3. It encapsulates core components such as Model I/O, Retrieval, Memory, and Agents (decision-making and scheduling).
        4. These components can be assembled as chains to best accomplish a specific use case.
        5. Built around these design principles, LangChain addresses real pain points in today's AI application development.
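Conceptually, a "chain" is just prompt formatting, a model call, and output handling composed into one callable. A minimal plain-Python sketch of that idea (no LangChain dependency; `fake_llm` and `make_chain` are stand-ins for illustration, not LangChain APIs):

```python
# Plain-Python sketch of what a chain composes: template -> model -> output.
def make_chain(template, llm):
    def run(**kwargs):
        prompt = template.format(**kwargs)  # Model I/O: render the prompt
        return llm(prompt)                  # call the underlying model
    return run

def fake_llm(prompt):
    # stand-in for a real LLM, so the sketch runs without a GPU
    return f"echo: {prompt}"

chain = make_chain("问题: {question}", fake_llm)
print(chain(question="hello"))  # echo: 问题: hello
```

The examples below do exactly this, except the template is a PromptTemplate and the model is a custom LLM subclass wrapping chatglm3-6b.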


3. Prerequisites

3.1. Base environment and prerequisites

  1.  Operating system: CentOS 7
  2.  GPU: Tesla V100-SXM2-32GB, CUDA Version: 12.2

3.2. Download the chatglm3-6b model

Download from Hugging Face: https://huggingface.co/THUDM/chatglm3-6b/tree/main

Download from ModelScope: https://www.modelscope.cn/models/ZhipuAI/chatglm3-6b/files

3.3. Set up a virtual environment

  1. conda create --name langchain python=3.10
  2. conda activate langchain
  3. pip install langchain accelerate

Note that the examples below also import transformers, so make sure it (and a CUDA-enabled build of PyTorch) is installed in the same environment.

4. Implementation

4.1. Example 1

# -*- coding: utf-8 -*-
from langchain.llms.base import LLM
from langchain import LLMChain, PromptTemplate
from transformers import AutoTokenizer, AutoModelForCausalLM
from typing import List, Optional

modelPath = "/model/chatglm3-6b"


class ChatGLM3(LLM):
    # generation parameters
    temperature: float = 0.45
    top_p = 0.8
    repetition_penalty = 1.1
    max_token: int = 8192
    do_sample: bool = True
    tokenizer: object = None
    model: object = None
    history: List = []

    def __init__(self):
        super().__init__()

    @property
    def _llm_type(self) -> str:
        return "ChatGLM3"

    def load_model(self, model_name_or_path=None):
        self.tokenizer = AutoTokenizer.from_pretrained(
            model_name_or_path,
            trust_remote_code=True
        )
        self.model = AutoModelForCausalLM.from_pretrained(
            model_name_or_path, trust_remote_code=True, device_map="auto").cuda()

    def _call(self, prompt: str, history: List = [], stop: Optional[List[str]] = ["<|user|>"]):
        # delegate generation to the model's built-in chat() interface
        response, self.history = self.model.chat(
            self.tokenizer,
            prompt,
            history=self.history,
            do_sample=self.do_sample,
            max_length=self.max_token,
            temperature=self.temperature,
            top_p=self.top_p,
            repetition_penalty=self.repetition_penalty
        )
        history.append((prompt, response))
        return response


if __name__ == "__main__":
    llm = ChatGLM3()
    llm.load_model(modelPath)

    template = """
问题: {question}
"""
    prompt = PromptTemplate.from_template(template)
    llm_chain = LLMChain(prompt=prompt, llm=llm)
    question = "广州有什么特色景点?"
    print(llm_chain.run(question))

Output:
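Note that the custom `_call` above declares `stop=["<|user|>"]` but never applies it to the generated text. A minimal hedged sketch of what honoring stop sequences would look like (`apply_stop` is a hypothetical helper for illustration, not a LangChain API):

```python
def apply_stop(text, stop=None):
    # truncate the generated text at the earliest occurrence of any stop sequence
    cut = len(text)
    for s in stop or []:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(apply_stop("答案是广州塔。<|user|>继续", ["<|user|>"]))  # 答案是广州塔。
```

In practice `model.chat()` usually stops on its own end tokens, so this mainly matters if the model echoes role tags into the response.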

4.2. Example 2

# -*- coding: utf-8 -*-
from langchain.llms.base import LLM
from langchain import LLMChain
from langchain.prompts import SystemMessagePromptTemplate, HumanMessagePromptTemplate, ChatPromptTemplate
from transformers import AutoTokenizer, AutoModelForCausalLM
from typing import List, Optional

modelPath = "/model/chatglm3-6b"


class ChatGLM3(LLM):
    # generation parameters
    temperature: float = 0.45
    top_p = 0.8
    repetition_penalty = 1.1
    max_token: int = 8192
    do_sample: bool = True
    tokenizer: object = None
    model: object = None
    history: List = []

    def __init__(self):
        super().__init__()

    @property
    def _llm_type(self) -> str:
        return "ChatGLM3"

    def load_model(self, model_name_or_path=None):
        self.tokenizer = AutoTokenizer.from_pretrained(
            model_name_or_path,
            trust_remote_code=True
        )
        self.model = AutoModelForCausalLM.from_pretrained(
            model_name_or_path, trust_remote_code=True, device_map="auto").cuda()

    def _call(self, prompt: str, history: List = [], stop: Optional[List[str]] = ["<|user|>"]):
        # print(f'prompt: {prompt}')
        # print(f'history: {history}')
        response, self.history = self.model.chat(
            self.tokenizer,
            prompt,
            history=self.history,
            do_sample=self.do_sample,
            max_length=self.max_token,
            temperature=self.temperature,
            top_p=self.top_p,
            repetition_penalty=self.repetition_penalty
        )
        history.append((prompt, response))
        return response


if __name__ == "__main__":
    llm = ChatGLM3()
    llm.load_model(modelPath)

    # build a chat-style prompt with a system message and a human message
    template = "你是一个数学专家,很擅长解决复杂的逻辑推理问题。"
    system_message_prompt = SystemMessagePromptTemplate.from_template(template)
    human_template = "问题: {question}"
    human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
    prompt_template = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

    llm_chain = LLMChain(prompt=prompt_template, llm=llm)
    print(llm_chain.run(question="若一个三角形的两条边长度分别为3和4,且夹角为直角,最后一条边的长度是多少?"))

Output:
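When a ChatPromptTemplate is driven through a plain string-in, string-out LLM like the ChatGLM3 wrapper above, LangChain flattens the messages into a single prompt string before calling `_call`. A hedged approximation of that rendering (the exact format is LangChain's buffer-string behavior; this sketch only mimics the "System:"/"Human:" prefixes):

```python
# Approximate sketch of how chat messages are flattened into one prompt string
# for a plain LLM; role prefixes mirror LangChain's "System:"/"Human:" style.
messages = [
    ("System", "你是一个数学专家,很擅长解决复杂的逻辑推理问题。"),
    ("Human", "问题: 若一个三角形的两条边长度分别为3和4,且夹角为直角,最后一条边的长度是多少?"),
]
prompt = "\n".join(f"{role}: {content}" for role, content in messages)
print(prompt)
```

Uncommenting the `print(f'prompt: {prompt}')` line in `_call` shows the actual flattened string your model receives.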


5. Additional notes

5.1. The ChatGLM3 class in the examples can be extended to implement more complex functionality

See the official example:

ChatGLM3.py

import ast
import json

from langchain.llms.base import LLM
from transformers import AutoTokenizer, AutoModel, AutoConfig
from typing import List, Optional


class ChatGLM3(LLM):
    max_token: int = 8192
    do_sample: bool = True
    temperature: float = 0.8
    top_p = 0.8
    tokenizer: object = None
    model: object = None
    history: List = []
    has_search: bool = False

    def __init__(self):
        super().__init__()

    @property
    def _llm_type(self) -> str:
        return "ChatGLM3"

    def load_model(self, model_name_or_path=None):
        model_config = AutoConfig.from_pretrained(
            model_name_or_path,
            trust_remote_code=True
        )
        self.tokenizer = AutoTokenizer.from_pretrained(
            model_name_or_path,
            trust_remote_code=True
        )
        self.model = AutoModel.from_pretrained(
            model_name_or_path, config=model_config, trust_remote_code=True, device_map="auto").eval()

    def _tool_history(self, prompt: str):
        # rebuild the agent prompt into ChatGLM3's structured history format
        ans = []

        tool_prompts = prompt.split(
            "You have access to the following tools:\n\n")[1].split("\n\nUse a json blob")[0].split("\n")
        tools_json = []

        for tool_desc in tool_prompts:
            name = tool_desc.split(":")[0]
            description = tool_desc.split(", args:")[0].split(":")[1].strip()
            parameters_str = tool_desc.split("args:")[1].strip()
            parameters_dict = ast.literal_eval(parameters_str)
            params_cleaned = {}

            for param, details in parameters_dict.items():
                params_cleaned[param] = {'description': details['description'], 'type': details['type']}

            tools_json.append({
                "name": name,
                "description": description,
                "parameters": params_cleaned
            })

        ans.append({
            "role": "system",
            "content": "Answer the following questions as best as you can. You have access to the following tools:",
            "tools": tools_json
        })

        dialog_parts = prompt.split("Human: ")
        for part in dialog_parts[1:]:
            if "\nAI: " in part:
                user_input, ai_response = part.split("\nAI: ")
                ai_response = ai_response.split("\n")[0]
            else:
                user_input = part
                ai_response = None

            ans.append({"role": "user", "content": user_input.strip()})
            if ai_response:
                ans.append({"role": "assistant", "content": ai_response.strip()})

        query = dialog_parts[-1].split("\n")[0]
        return ans, query

    def _extract_observation(self, prompt: str):
        # feed the tool's result back to the model as an "observation" turn
        return_json = prompt.split("Observation: ")[-1].split("\nThought:")[0]
        self.history.append({
            "role": "observation",
            "content": return_json
        })
        return

    def _extract_tool(self):
        if len(self.history[-1]["metadata"]) > 0:
            metadata = self.history[-1]["metadata"]
            content = self.history[-1]["content"]

            lines = content.split('\n')
            for line in lines:
                if 'tool_call(' in line and ')' in line and self.has_search is False:
                    # extract the string inside the parentheses
                    params_str = line.split('tool_call(')[-1].split(')')[0]

                    # parse the parameter pairs
                    params_pairs = [param.split("=") for param in params_str.split(",") if "=" in param]
                    params = {pair[0].strip(): pair[1].strip().strip("'\"") for pair in params_pairs}

                    action_json = {
                        "action": metadata,
                        "action_input": params
                    }
                    self.has_search = True
                    print("*****Action*****")
                    print(action_json)
                    print("*****Answer*****")
                    return f"""
Action:
```
{json.dumps(action_json, ensure_ascii=False)}
```"""
        final_answer_json = {
            "action": "Final Answer",
            "action_input": self.history[-1]["content"]
        }
        self.has_search = False
        return f"""
Action:
```
{json.dumps(final_answer_json, ensure_ascii=False)}
```"""

    def _call(self, prompt: str, history: List = [], stop: Optional[List[str]] = ["<|user|>"]):
        if not self.has_search:
            self.history, query = self._tool_history(prompt)
        else:
            self._extract_observation(prompt)
            query = ""
        _, self.history = self.model.chat(
            self.tokenizer,
            query,
            history=self.history,
            do_sample=self.do_sample,
            max_length=self.max_token,
            temperature=self.temperature,
        )
        response = self._extract_tool()
        history.append((prompt, response))
        return response
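The `tool_call(...)` argument parsing inside `_extract_tool` can be exercised on its own. Below is the same string-splitting logic lifted into a standalone function (`parse_tool_call` is just an extracted helper for illustration, not part of the official example):

```python
def parse_tool_call(line):
    # same splitting as _extract_tool: grab the text inside tool_call(...)
    params_str = line.split('tool_call(')[-1].split(')')[0]
    # split the "key='value'" pairs and strip quotes from the values
    pairs = [p.split("=") for p in params_str.split(",") if "=" in p]
    return {k.strip(): v.strip().strip("'\"") for k, v in pairs}

print(parse_tool_call("tool_call(location='厦门')"))  # {'location': '厦门'}
```

Like the original, this naive split breaks on values containing commas or `=` signs, which is one reason multi-parameter tool calls are fragile (see the note in main.py below).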

main.py

  1. """
  2. This script demonstrates the use of the LangChain's StructuredChatAgent and AgentExecutor alongside various tools
  3. The script utilizes the ChatGLM3 model, a large language model for understanding and generating human-like text.
  4. The model is loaded from a specified path and integrated into the chat agent.
  5. Tools:
  6. - Calculator: Performs arithmetic calculations.
  7. - Weather: Provides weather-related information based on input queries.
  8. - DistanceConverter: Converts distances between meters, kilometers, and feet.
  9. The agent operates in three modes:
  10. 1. Single Parameter without History: Uses Calculator to perform simple arithmetic.
  11. 2. Single Parameter with History: Uses Weather tool to answer queries about temperature, considering the
  12. conversation history.
  13. 3. Multiple Parameters without History: Uses DistanceConverter to convert distances between specified units.
  14. 4. Single use Langchain Tool: Uses Arxiv tool to search for scientific articles.
  15. Note:
  16. The model calling tool fails, which may cause some errors or inability to execute. Try to reduce the temperature
  17. parameters of the model, or reduce the number of tools, especially the third function.
  18. The success rate of multi-parameter calling is low. The following errors may occur:
  19. Required fields [type=missing, input_value={'distance': '30', 'unit': 'm', 'to': 'km'}, input_type=dict]
  20. The model illusion in this case generates parameters that do not meet the requirements.
  21. The top_p and temperature parameters of the model should be adjusted to better solve such problems.
  22. Success example:
  23. *****Action*****
  24. {
  25. 'action': 'weather',
  26. 'action_input': {
  27. 'location': '厦门'
  28. }
  29. }
  30. *****Answer*****
  31. {
  32. 'input': '厦门比北京热吗?',
  33. 'chat_history': [HumanMessage(content='北京温度多少度'), AIMessage(content='北京现在12度')],
  34. 'output': '根据最新的天气数据,厦门今天的气温为18度,天气晴朗。而北京今天的气温为12度。所以,厦门比北京热。'
  35. }
  36. ****************
  37. """
  38. import os
  39. from langchain import hub
  40. from langchain.agents import AgentExecutor, create_structured_chat_agent, load_tools
  41. from langchain_core.messages import AIMessage, HumanMessage
  42. from ChatGLM3 import ChatGLM3
  43. from tools.Calculator import Calculator
  44. from tools.Weather import Weather
  45. from tools.DistanceConversion import DistanceConverter
  46. MODEL_PATH = os.environ.get('MODEL_PATH', 'THUDM/chatglm3-6b')
  47. if __name__ == "__main__":
  48. llm = ChatGLM3()
  49. llm.load_model(MODEL_PATH)
  50. prompt = hub.pull("hwchase17/structured-chat-agent")
  51. # for single parameter without history
  52. tools = [Calculator()]
  53. agent = create_structured_chat_agent(llm=llm, tools=tools, prompt=prompt)
  54. agent_executor = AgentExecutor(agent=agent, tools=tools)
  55. ans = agent_executor.invoke({"input": "34 * 34"})
  56. print(ans)
  57. # for singe parameter with history
  58. tools = [Weather()]
  59. agent = create_structured_chat_agent(llm=llm, tools=tools, prompt=prompt)
  60. agent_executor = AgentExecutor(agent=agent, tools=tools)
  61. ans = agent_executor.invoke(
  62. {
  63. "input": "厦门比北京热吗?",
  64. "chat_history": [
  65. HumanMessage(content="北京温度多少度"),
  66. AIMessage(content="北京现在12度"),
  67. ],
  68. }
  69. )
  70. print(ans)
  71. # for multiple parameters without history
  72. tools = [DistanceConverter()]
  73. agent = create_structured_chat_agent(llm=llm, tools=tools, prompt=prompt)
  74. agent_executor = AgentExecutor(agent=agent, tools=tools)
  75. ans = agent_executor.invoke({"input": "how many meters in 30 km?"})
  76. print(ans)
  77. # for using langchain tools
  78. tools = load_tools(["arxiv"], llm=llm)
  79. agent = create_structured_chat_agent(llm=llm, tools=tools, prompt=prompt)
  80. agent_executor = AgentExecutor(agent=agent, tools=tools)
  81. ans = agent_executor.invoke({"input": "Describe the paper about GLM 130B"})
  82. print(ans)
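The `tools/` modules imported above are not shown in this post. As a hypothetical sketch of the conversion logic a DistanceConverter-style tool might implement (the unit table and the `convert_distance` helper are assumptions for illustration, not the official example's code):

```python
# Hypothetical conversion logic for a DistanceConverter-style tool:
# normalize the input through meters, then scale into the target unit.
UNIT_IN_METERS = {"m": 1.0, "km": 1000.0, "feet": 0.3048}

def convert_distance(distance, unit, to):
    return distance * UNIT_IN_METERS[unit] / UNIT_IN_METERS[to]

print(convert_distance(30, "km", "m"))  # 30000.0
```

Keeping the tool body this simple helps isolate failures: if the agent errors out, the problem is almost always in the model's generated action JSON, not the tool itself.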
