from langchain import hub

prompt = hub.pull("hwchase17/react-chat-json")
SYSTEM
Assistant is a large language model trained by OpenAI.

Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.

Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative answers to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.

Overall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.

PLACEHOLDER
chat_history

HUMAN
TOOLS
------
Assistant can ask the user to use tools to look up information that may be helpful in answering the user's original question. The tools the human can use are:

{tools}

RESPONSE FORMAT INSTRUCTIONS
----------------------------

When responding to me, please output a response in one of two formats:

**Option 1:**
Use this if you want the human to use a tool.
Markdown code snippet formatted in the following schema:

```json
{{
    "action": string, \ The action to take. Must be one of {tool_names}
    "action_input": string \ The input to the action
}}
```

**Option #2:**
Use this if you want to respond directly to the human.
Markdown code snippet formatted in the following schema:

```json
{{
    "action": "Final Answer",
    "action_input": string \ You should put what you want to return to use here
}}
```

USER'S INPUT
--------------------
Here is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):

{input}

PLACEHOLDER
agent_scratchpad
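The two response formats above are what the agent's output parser consumes. As a minimal stdlib sketch (not LangChain's actual `JSONAgentOutputParser`, which handles more edge cases), extracting the action blob from a model reply looks roughly like this:

```python
import json
import re

def parse_agent_reply(text: str) -> dict:
    """Extract the JSON action blob from a (possibly fenced) model reply.

    Illustrative only -- a simplified stand-in for JSONAgentOutputParser.
    """
    # Pull the payload out of a ```json ... ``` fence if one is present
    match = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    payload = match.group(1) if match else text
    blob = json.loads(payload.strip())
    if blob["action"] == "Final Answer":
        # Option #2: respond directly to the human
        return {"finish": True, "output": blob["action_input"]}
    # Option 1: ask for a tool invocation
    return {"finish": False, "tool": blob["action"], "tool_input": blob["action_input"]}

reply = '```json\n{"action": "tavily_search_results_json", "action_input": "LangChain"}\n```'
print(parse_agent_reply(reply))
```

In LangChain this parsing step is the `JSONAgentOutputParser()` at the end of the agent's runnable sequence shown below.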
# Imports as in langchain.agents.json_chat.base (exact module paths may vary by version)
from typing import List, Sequence, Union

from langchain_core.language_models import BaseLanguageModel
from langchain_core.prompts.chat import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnablePassthrough
from langchain_core.tools import BaseTool
from langchain.agents.format_scratchpad import format_log_to_messages
from langchain.agents.json_chat.prompt import TEMPLATE_TOOL_RESPONSE
from langchain.agents.output_parsers import JSONAgentOutputParser
from langchain.tools.render import ToolsRenderer, render_text_description


def create_json_chat_agent(
    llm: BaseLanguageModel,
    tools: Sequence[BaseTool],
    prompt: ChatPromptTemplate,
    stop_sequence: Union[bool, List[str]] = True,
    tools_renderer: ToolsRenderer = render_text_description,
) -> Runnable:
    """Create an agent that uses JSON to format its logic, built for Chat Models.

    Args:
        llm: LLM to use as the agent.
        tools: Tools this agent has access to.
        prompt: The prompt to use. See Prompt section below for more.
        stop_sequence: bool or list of str.
            If True, adds a stop token of "Observation:" to avoid hallucinated
            observations.
            If False, does not add a stop token.
            If a list of str, uses the provided list as the stop tokens.

            Default is True. You may want to set this to False if the LLM you
            are using does not support stop sequences.
        tools_renderer: This controls how the tools are converted into a string
            and then passed into the LLM. Default is `render_text_description`.

    Returns:
        A Runnable sequence representing an agent. It takes as input all the same
        input variables as the prompt passed in does. It returns as output either
        an AgentAction or AgentFinish.
    """  # noqa: E501

    # Make sure the required variables are present in the prompt
    missing_vars = {"tools", "tool_names", "agent_scratchpad"}.difference(
        prompt.input_variables
    )
    if missing_vars:
        raise ValueError(
            "Prompt missing required variables: {}".format(missing_vars)
        )

    # Add the tool names and render the tools into the prompt
    prompt = prompt.partial(
        tools=tools_renderer(list(tools)),
        tool_names=", ".join([t.name for t in tools]),
    )

    # Set up the stop sequence if needed
    if stop_sequence:
        # If True, use "\nObservation" as the stop sequence;
        # if a list of strings, use those strings as the stop sequence
        stop = ["\nObservation"] if stop_sequence is True else stop_sequence
        # Bind the stop sequence to the language model
        llm_to_use = llm.bind(stop=stop)
    else:
        # If False, don't add a stop sequence
        llm_to_use = llm

    # Create the agent
    agent = (
        # Add a step to take the intermediate_steps and format them into
        # chat messages using the tool-response template
        RunnablePassthrough.assign(
            agent_scratchpad=lambda x: format_log_to_messages(
                x["intermediate_steps"], template_tool_response=TEMPLATE_TOOL_RESPONSE
            )
        )
        # Run the prompt with the input variables
        | prompt
        # Run the LLM with the output of the prompt
        | llm_to_use
        # Parse the output of the LLM as JSON to get the action and input
        | JSONAgentOutputParser()
    )

    return agent
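The `agent_scratchpad` step above converts `(agent log, observation)` pairs from previous turns back into chat messages. A hypothetical, simplified stand-in for what `format_log_to_messages` does (the real helper builds `AIMessage`/`HumanMessage` objects and uses LangChain's full `TEMPLATE_TOOL_RESPONSE` text; the template string and function name here are illustrative):

```python
# Simplified stand-in for LangChain's TEMPLATE_TOOL_RESPONSE (abridged text)
TOOL_RESPONSE_TEMPLATE = (
    "TOOL RESPONSE:\n"
    "---------------------\n"
    "{observation}\n\n"
    "Remember to respond with a markdown code snippet of a json blob with a "
    "single action, and NOTHING else."
)

def format_log_to_messages_sketch(intermediate_steps, template=TOOL_RESPONSE_TEMPLATE):
    """Turn (agent log, observation) pairs into alternating chat messages."""
    messages = []
    for log, observation in intermediate_steps:
        # What the agent said last turn (its JSON action blob)
        messages.append(("ai", log))
        # The tool result, fed back as if the human reported it
        messages.append(("human", template.format(observation=observation)))
    return messages
```

This back-and-forth framing is why the prompt describes tools as something "the human can use": tool results re-enter the conversation as human-role messages.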
> Entering new AgentExecutor chain...
{
    "action": "tavily_search_results_json",
    "action_input": "LangChain"
}[{'url': 'https://www.ibm.com/topics/langchain', 'content': 'LangChain is essentially a library of abstractions for Python and Javascript, representing common steps and concepts LangChain is an open source orchestration framework for the development of applications using large language models other LangChain features, like the eponymous chains. LangChain provides integrations for over 25 different embedding methods, as well as for over 50 different vector storesLangChain is a tool for building applications using large language models (LLMs) like chatbots and virtual agents. It simplifies the process of programming and integration with external data sources and software workflows. It supports Python and Javascript languages and supports various LLM providers, including OpenAI, Google, and IBM.'}]{
    "action": "Final Answer",
    "action_input": "LangChain is an open source orchestration framework for the development of applications using large language models. It simplifies the process of programming and integration with external data sources and software workflows. It supports Python and Javascript languages and supports various LLM providers, including OpenAI, Google, and IBM."
}

> Finished chain.
{'input': 'what is LangChain?',
 'output': 'LangChain is an open source orchestration framework for the development of applications using large language models. It simplifies the process of programming and integration with external data sources and software workflows. It supports Python and Javascript languages and supports various LLM providers, including OpenAI, Google, and IBM.'}
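The trace above is driven by the `AgentExecutor` loop: ask the agent for a JSON action, run the named tool, feed the observation back, and repeat until it returns "Final Answer". A toy stdlib version of that loop with a stubbed agent and tool (all names here are illustrative, not LangChain APIs):

```python
def run_agent_loop(plan, tools, max_steps=5):
    """Toy version of the AgentExecutor loop: plan -> act -> observe."""
    intermediate_steps = []
    for _ in range(max_steps):
        blob = plan(intermediate_steps)  # the agent's JSON decision
        if blob["action"] == "Final Answer":
            return blob["action_input"]
        # Run the named tool and record (action, observation) for the next turn
        observation = tools[blob["action"]](blob["action_input"])
        intermediate_steps.append((blob, observation))
    raise RuntimeError("agent did not finish")

# Stubbed agent: search once, then answer from the last observation
def plan(steps):
    if not steps:
        return {"action": "tavily_search_results_json", "action_input": "LangChain"}
    return {"action": "Final Answer", "action_input": steps[-1][1]}

tools = {"tavily_search_results_json": lambda q: f"results for {q}"}
print(run_agent_loop(plan, tools))  # -> "results for LangChain"
```

In the real trace, `plan` is the runnable sequence built by `create_json_chat_agent`, and the recorded steps flow back in as the `agent_scratchpad` messages.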