
LangChain Agent Type - JSON Chat: qwen "prompt missing required variables"

When using qwen with a translated prompt, `create_json_chat_agent` can fail with "Prompt missing required variables". The prompt in use:

prompt = hub.pull("hwchase17/react-chat-json")
SYSTEM

Assistant is a large language model trained by OpenAI.

Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.

Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.

Overall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.

PLACEHOLDER

chat_history

HUMAN

TOOLS
------
Assistant can ask the user to use tools to look up information that may be helpful in answering the user's original question. The tools the human can use are:

{tools}

RESPONSE FORMAT INSTRUCTIONS
----------------------------

When responding to me, please output a response in one of two formats:

**Option 1:**
Use this if you want the human to use a tool.
Markdown code snippet formatted in the following schema:

```json
{{
    "action": string, \ The action to take. Must be one of {tool_names}
    "action_input": string \ The input to the action
}}
```

**Option #2:**
Use this if you want to respond directly to the human.
Markdown code snippet formatted in the following schema:

```json
{{
    "action": "Final Answer",
    "action_input": string \ You should put what you want to return to use here
}}
```

USER'S INPUT
--------------------
Here is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):

{input}

PLACEHOLDER

agent_scratchpad
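The error in the title comes from `create_json_chat_agent` checking that the prompt's `input_variables` cover `tools`, `tool_names`, and `agent_scratchpad`. A translated prompt that drops or renames a `{placeholder}` no longer satisfies that check. The following is a minimal stdlib sketch of the idea, not langchain's actual implementation; the regex-based extraction and the sample translated string are illustrative assumptions:

```python
import re

# Variables create_json_chat_agent insists on finding in prompt.input_variables
# (agent_scratchpad normally comes from a MessagesPlaceholder, not template text).
REQUIRED_VARS = {"tools", "tool_names", "agent_scratchpad"}

def template_variables(template: str) -> set:
    """Collect single-brace {placeholders}; '{{' / '}}' are escaped literal
    braces, so the JSON examples in the prompt do not count as variables."""
    cleaned = template.replace("{{", "").replace("}}", "")
    return set(re.findall(r"\{(\w+)\}", cleaned))

# Hypothetical translated human message that kept {input} but dropped the rest:
translated = "这是用户的输入(请用 JSON 回复): {input}"

# Placeholders contributed by the two MessagesPlaceholder slots of the prompt:
found = template_variables(translated) | {"agent_scratchpad", "chat_history"}
missing = REQUIRED_VARS - found
print(sorted(missing))  # ['tool_names', 'tools'] -> would raise ValueError
```

The fix, therefore, is to keep `{tools}`, `{tool_names}`, and `{input}` verbatim when translating the prompt text, since langchain resolves them by name.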
Agent code
```python
from typing import List, Sequence, Union

from langchain_core.language_models import BaseLanguageModel
from langchain_core.prompts.chat import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnablePassthrough
from langchain_core.tools import BaseTool

from langchain.agents.format_scratchpad import format_log_to_messages
from langchain.agents.json_chat.prompt import TEMPLATE_TOOL_RESPONSE
from langchain.agents.output_parsers import JSONAgentOutputParser
from langchain.tools.render import ToolsRenderer, render_text_description


def create_json_chat_agent(
    llm: BaseLanguageModel,
    tools: Sequence[BaseTool],
    prompt: ChatPromptTemplate,
    stop_sequence: Union[bool, List[str]] = True,
    tools_renderer: ToolsRenderer = render_text_description,
) -> Runnable:
    """Create an agent that uses JSON to format its logic, built for Chat Models.

    Args:
        llm: LLM to use as the agent.
        tools: Tools this agent has access to.
        prompt: The prompt to use. See Prompt section below for more.
        stop_sequence: bool or list of str.
            If True, adds a stop token of "Observation:" to avoid hallucinations.
            If False, does not add a stop token.
            If a list of str, uses the provided list as the stop tokens.
            Default is True. You may want to set this to False if the LLM you
            are using does not support stop sequences.
        tools_renderer: This controls how the tools are converted into a string
            and then passed into the LLM. Default is `render_text_description`.

    Returns:
        A Runnable sequence representing an agent. It takes as input all the
        same input variables as the prompt passed in does. It returns as output
        either an AgentAction or AgentFinish.
    """  # noqa: E501
    # Make sure the required variables are present in the prompt
    missing_vars = {"tools", "tool_names", "agent_scratchpad"}.difference(
        prompt.input_variables
    )
    if missing_vars:
        raise ValueError(
            "Prompt missing required variables: {}".format(missing_vars)
        )
    # Add the tool names and render the tools to the prompt
    prompt = prompt.partial(
        tools=tools_renderer(list(tools)),
        tool_names=", ".join([t.name for t in tools]),
    )
    # Set up the stop sequence if needed
    if stop_sequence:
        # If True, set the stop sequence to "\nObservation"
        # If a list of strings, use those strings as the stop sequence
        stop = ["\nObservation"] if stop_sequence is True else stop_sequence
        # Bind the stop sequence to the language model
        llm_to_use = llm.bind(stop=stop)
    else:
        # If False, don't add a stop sequence
        llm_to_use = llm
    # Create the agent
    agent = (
        # Add a step to take the intermediate_steps and format them into
        # a Message object with a "tool_response" type
        RunnablePassthrough.assign(
            agent_scratchpad=lambda x: format_log_to_messages(
                x["intermediate_steps"],
                template_tool_response=TEMPLATE_TOOL_RESPONSE,
            )
        )
        # Run the prompt with the input variables
        | prompt
        # Run the LLM with the output of the prompt
        | llm_to_use
        # Parse the output of the LLM as JSON to get the action and input
        | JSONAgentOutputParser()
    )
    return agent
```
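The `stop_sequence` branch above is easy to trip over with a translated prompt: the default stop token is the literal string `"\nObservation"`, so if the scratchpad template no longer emits "Observation", you must pass a matching list yourself (or `False`). A small stand-alone sketch of that branching; `resolve_stop` is a hypothetical helper name, not part of langchain's API, and the Chinese stop token is an illustrative assumption:

```python
from typing import List, Optional, Union

def resolve_stop(stop_sequence: Union[bool, List[str]]) -> Optional[List[str]]:
    """Mirror the stop-sequence branching in create_json_chat_agent."""
    if stop_sequence is True:
        return ["\nObservation"]   # default stop token bound to the LLM
    if stop_sequence is False:
        return None                # nothing bound; the LLM is used as-is
    return stop_sequence           # caller-supplied list, used verbatim

print(resolve_stop(True))          # ['\nObservation']
print(resolve_stop(["\n观察"]))    # e.g. a stop token matching a translated prompt
```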
Inference output
> Entering new AgentExecutor chain...
{
    "action": "tavily_search_results_json",
    "action_input": "LangChain"
}
[{'url': 'https://www.ibm.com/topics/langchain', 'content': 'LangChain is essentially a library of abstractions for Python and Javascript, representing common steps and concepts LangChain is an open source orchestration framework for the development of applications using large language models other LangChain features, like the eponymous chains. LangChain provides integrations for over 25 different embedding methods, as well as for over 50 different vector storesLangChain is a tool for building applications using large language models (LLMs) like chatbots and virtual agents. It simplifies the process of programming and integration with external data sources and software workflows. It supports Python and Javascript languages and supports various LLM providers, including OpenAI, Google, and IBM.'}]
{
    "action": "Final Answer",
    "action_input": "LangChain is an open source orchestration framework for the development of applications using large language models. It simplifies the process of programming and integration with external data sources and software workflows. It supports Python and Javascript languages and supports various LLM providers, including OpenAI, Google, and IBM."
}
> Finished chain.

{'input': 'what is LangChain?',
 'output': 'LangChain is an open source orchestration framework for the development of applications using large language models. It simplifies the process of programming and integration with external data sources and software workflows. It supports Python and Javascript languages and supports various LLM providers, including OpenAI, Google, and IBM.'}
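In the trace, the model first emits a tool-call blob, then a "Final Answer" blob, which `JSONAgentOutputParser` turns into an `AgentAction` and an `AgentFinish` respectively. The following is a simplified stdlib stand-in for that last step, to show why the blob must be a JSON object with `action`/`action_input` keys; `parse_action` is a hypothetical name, not langchain's API:

```python
import json
import re

def parse_action(text: str) -> dict:
    """Extract the first JSON blob from a model reply, fenced or bare."""
    match = re.search(r"```json\s*(\{.*?\})\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text.strip()
    return json.loads(payload)

fenced = '```json\n{"action": "tavily_search_results_json", "action_input": "LangChain"}\n```'
print(parse_action(fenced)["action"])       # tavily_search_results_json

bare = '{"action": "Final Answer", "action_input": "LangChain is ..."}'
print(parse_action(bare)["action_input"])   # LangChain is ...
```

This also shows why translating the literal string "Final Answer" inside the prompt breaks the run: the parser decides between action and finish by matching that exact value.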

                