
LangChain: Calling Tools with AgentExecutor


When we build our own features with LangChain and want an AI to help process data and call our methods in a defined order, we need an Agent.

Because my use case involves only a single function, I use OpenAI Functions here. That said, I recommend switching to OpenAI Tools, which supports parallel tool calls.

The essential first ingredient is the model:

    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

First we define the method the agent can call:

    from langchain.agents import tool

    @tool
    def get_word_length(word: str) -> int:
        """Returns the length of a word."""
        return len(word)
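To see what the decorator derives from the function without installing LangChain, here is a minimal stand-in (the `simple_tool` helper below is hypothetical, for illustration only, not a LangChain API): like the real `@tool`, it takes the tool's name from the function name and its description from the docstring.

```python
def simple_tool(fn):
    # Hypothetical stand-in for LangChain's @tool decorator: attach the
    # name and description metadata the real decorator would derive.
    fn.name = fn.__name__
    fn.description = (fn.__doc__ or "").strip()
    return fn

@simple_tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

print(get_word_length.name)         # get_word_length
print(get_word_length.description)  # Returns the length of a word.
print(get_word_length("hello"))     # 5
```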

Next, define what this AI is actually supposed to do with a custom system prompt. If you also want to include chat history, define the prompt like this:

    from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

    prompt = ChatPromptTemplate.from_messages(
        [
            (
                "system",
                "You are a very powerful assistant, but bad at calculating lengths of words.",
            ),
            MessagesPlaceholder(variable_name="chat_history"),
            ("user", "{input}"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ]
    )
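Conceptually, the template expands into a flat message list at invocation time: each `MessagesPlaceholder` is replaced by the messages bound to its variable name. A rough plain-Python sketch of that expansion (not the LangChain implementation):

```python
def render_prompt(input_text, chat_history, agent_scratchpad):
    # Mirrors the template order: system message, chat_history placeholder,
    # user input, agent_scratchpad placeholder.
    return (
        [("system", "You are a very powerful assistant, but bad at "
                    "calculating lengths of words.")]
        + chat_history
        + [("user", input_text)]
        + agent_scratchpad
    )

messages = render_prompt(
    "how long is 'educa'?",
    chat_history=[("user", "hi"), ("assistant", "hello!")],
    agent_scratchpad=[],  # filled with tool calls/results on later turns
)
print(len(messages))  # system + 2 history messages + user input
```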

Next, bind our tool list to the model:

    tools = [get_word_length]
    llm_with_tools = llm.bind_functions(tools)

If you want to return a tool's result to the user directly, you can add a custom output parser. Setting a breakpoint here and stepping through the internals is well worth doing to understand the flow.

    import json

    from langchain_core.agents import AgentActionMessageLog, AgentFinish

    def parse(output):
        # If no function was invoked, return to user
        if "function_call" not in output.additional_kwargs:
            return AgentFinish(return_values={"output": output.content}, log=output.content)

        # Parse out the function call
        function_call = output.additional_kwargs["function_call"]
        name = function_call["name"]
        inputs = json.loads(function_call["arguments"])

        # If the cusResponse function was invoked, return to the user with the function inputs
        if name == "cusResponse":
            return AgentFinish(return_values=inputs, log=str(function_call))

        # Otherwise, return an agent action
        return AgentActionMessageLog(
            tool=name, tool_input=inputs, log="", message_log=[output]
        )
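You can exercise the three branches of `parse` without calling a model by feeding it stand-in objects. The sketch below (the `FakeOutput` class is invented for illustration; plain tuples stand in for `AgentFinish` / `AgentActionMessageLog`) replicates the same routing logic:

```python
import json

class FakeOutput:
    # Stand-in for the model's AIMessage (illustration only).
    def __init__(self, content="", additional_kwargs=None):
        self.content = content
        self.additional_kwargs = additional_kwargs or {}

def route(output):
    # Same branching as parse(), but returning plain tuples for inspection.
    if "function_call" not in output.additional_kwargs:
        return ("finish", output.content)
    call = output.additional_kwargs["function_call"]
    name = call["name"]
    inputs = json.loads(call["arguments"])
    if name == "cusResponse":
        return ("finish", inputs)
    return ("action", name, inputs)

print(route(FakeOutput(content="done")))  # ('finish', 'done')
print(route(FakeOutput(additional_kwargs={"function_call": {
    "name": "get_word_length",
    "arguments": '{"word": "hello"}',
}})))  # ('action', 'get_word_length', {'word': 'hello'})
```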

Create the agent:

    from langchain.agents.format_scratchpad import format_to_openai_function_messages

    agent = (
        {
            "input": lambda x: x["input"],
            "agent_scratchpad": lambda x: format_to_openai_function_messages(
                x["intermediate_steps"]
            ),
            "chat_history": lambda x: x["chat_history"],
        }
        | prompt
        | llm_with_tools
        | parse
    )
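The leading dict in that chain is LCEL shorthand for a parallel mapping step: each lambda pulls one key out of the raw call input and the resulting dict of prompt variables is piped onward. A plain-Python sketch of that data flow (the scratchpad formatting is omitted here):

```python
# Each entry maps the raw agent input onto one prompt variable,
# mirroring the dict step at the head of the LCEL chain.
mapping = {
    "input": lambda x: x["input"],
    "agent_scratchpad": lambda x: x["intermediate_steps"],  # formatting omitted
    "chat_history": lambda x: x["chat_history"],
}

raw = {"input": "how long is 'educa'?", "chat_history": [], "intermediate_steps": []}
prompt_vars = {key: fn(raw) for key, fn in mapping.items()}
print(prompt_vars["input"])  # how long is 'educa'?
```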

Run the agent through an executor:

    from langchain.agents import AgentExecutor

    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=False).with_config(
        {"run_name": "Agent"}
    )

A normal (non-streaming) run uses invoke (astream only returns an async iterator and does no work until you iterate it):

    result = agent_executor.invoke({"input": content, "chat_history": chat_history})

Here, I want to stream the results instead:

    async for event in agent_executor.astream_events(
        {"input": content, "chat_history": chat_history},
        version="v1",
    ):
        kind = event["event"]
        if kind == "on_chain_start":
            if event["name"] == "Agent":  # assigned via .with_config({"run_name": "Agent"})
                print(
                    f"Starting agent: {event['name']} with input: {event['data'].get('input')}"
                )
        elif kind == "on_chain_end":
            if event["name"] == "Agent":  # assigned via .with_config({"run_name": "Agent"})
                print()
                print("--")
                print(
                    f"Done agent: {event['name']} with output: {event['data'].get('output')['output']}"
                )
        elif kind == "on_chat_model_stream":
            chunk_content = event["data"]["chunk"].content
            # Empty content in the context of OpenAI means the model is
            # asking for a tool to be invoked, so only print non-empty content.
            if chunk_content:
                print(chunk_content, end="|")
        elif kind == "on_tool_start":
            print("--")
            print(
                f"Starting tool: {event['name']} with inputs: {event['data'].get('input')}"
            )
        elif kind == "on_tool_end":
            print(f"Done tool: {event['name']}")
            print(f"Tool output was: {event['data'].get('output')}")
            print("--")
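The dispatch logic can be tested without an LLM by replaying canned events. The event dicts below are hand-made for illustration and only approximate the real astream_events v1 payloads:

```python
import asyncio

async def fake_events():
    # Hand-made events approximating astream_events v1 shapes (illustrative).
    yield {"event": "on_chain_start", "name": "Agent", "data": {"input": {"input": "hi"}}}
    yield {"event": "on_tool_start", "name": "get_word_length", "data": {"input": {"word": "hi"}}}
    yield {"event": "on_tool_end", "name": "get_word_length", "data": {"output": 2}}
    yield {"event": "on_chain_end", "name": "Agent", "data": {"output": {"output": "2 letters"}}}

async def main():
    lines = []
    async for event in fake_events():
        kind = event["event"]
        if kind == "on_chain_start" and event["name"] == "Agent":
            lines.append(f"Starting agent: {event['name']}")
        elif kind == "on_tool_start":
            lines.append(f"Starting tool: {event['name']}")
        elif kind == "on_tool_end":
            lines.append(f"Done tool: {event['name']} -> {event['data']['output']}")
        elif kind == "on_chain_end" and event["name"] == "Agent":
            lines.append(f"Done agent: {event['data']['output']['output']}")
    return lines

print(asyncio.run(main()))
```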

That is how LangChain invokes tools.
