
LangGraph: Introduction and Hands-On Practice


Original article: "LangGraph 入门与实战" on Zhihu

Reference: langgraph/examples at main · langchain-ai/langgraph · GitHub

Hi everyone, I'm Yufei. LangGraph is a library built on top of LangChain that extends the LangChain Expression Language (LCEL). It lets you coordinate multiple LLM calls and pieces of state by wiring them into a graph (which, unlike a plain LCEL chain, may contain cycles, as in the agent loop below). It takes more work to set up than LCEL alone, but the resulting logic is much clearer.

It is essentially a higher-level layer on top of LCEL and is well worth trying.

Installation is very simple. Note that you need to install this library yourself; it is not installed with LangChain by default.

pip install langgraph

Since the OpenAI API is inconvenient to access here, we use Zhipu AI's large models for all of the examples below.

Zhipu AI's API is quite similar to OpenAI's, so it can also be used through OpenAI's tools interface; I have not yet found another vendor whose API is this convenient. In practice it works fairly smoothly, apart from a few minor issues.

Below, following the ToolAgent idea, we use LangGraph to implement an agent that can call tools.

Defining the Tools and the LLM

For how to define tools, see the article linked below, which covers this in detail; the most convenient approach is the `@tool` decorator. A minimal sketch follows the link.

Yufei: Implementing a ToolAgent with Zhipu Qingyan's tools feature
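
As a quick illustration (a minimal sketch, not taken from the linked article; `get_word_length` is a made-up tool used only for demonstration), the `@tool` decorator turns an ordinary function into a LangChain tool whose name, description, and argument schema are inferred from the signature and docstring:

from langchain.tools import tool

@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)

# The decorator exposes the metadata the agent uses when deciding which tool to call.
print(get_word_length.name, get_word_length.args)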

Defining the Agent State

The most basic graph type in LangGraph is the stateful graph (`StateGraph` in the code below): a state object is passed between the nodes of the graph, and each node updates that state according to its own logic. Concretely, you define the state by subclassing `TypedDict`; below we define a state with four fields.

input: the input string, i.e. the user's main request.

chat_history: the previous conversation messages, also passed in as input.

agent_outcome: the response from the agent, either an AgentAction or an AgentFinish. If it is an AgentFinish, the AgentExecutor should stop; otherwise the requested tool should be called.

intermediate_steps: the list of actions the agent has taken so far together with the corresponding observations. It is appended to on every iteration.

import operator
from typing import Annotated, TypedDict, Union

from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import BaseMessage


class AgentState(TypedDict):
    # The input string
    input: str
    # The list of previous messages in the conversation
    chat_history: list[BaseMessage]
    # The outcome of a given call to the agent
    # Needs `None` as a valid type, since this is what this will start as
    agent_outcome: Union[AgentAction, AgentFinish, None]
    # List of actions and corresponding observations
    # Here we annotate this with `operator.add` to indicate that operations to
    # this state should be ADDED to the existing values (not overwrite it)
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]
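
For illustration (this dict is the same one used in the complete code at the end of the article), the state you pass in when invoking the graph only needs `input` and `chat_history`; the other two fields are filled in by the nodes as the graph runs, with each tool call appending an (AgentAction, observation) tuple thanks to the `operator.add` annotation:

initial_state = {
    "input": "what is the weather in NewYork",
    "chat_history": [],
}
# `agent_outcome` starts as None and `intermediate_steps` starts empty;
# both are produced by the nodes defined below, so they need not be provided up front.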

Defining the Graph Nodes

In LangGraph, a node is usually a function or a LangChain runnable.

Here we define two nodes, an agent node and a tool node: the agent node decides what action to take, and the tool node calls the corresponding tool whenever the agent node chooses an action.

In addition, we need to define the connections between the nodes, i.e. the edges.

Conditional edges: these determine where the graph goes next. For example, when the agent wants to take an action, the tools must be called next; when the agent says the current task is finished, the whole flow ends.

Normal edges: after a tool is called, control always returns to the agent so that it can decide the next step.

from langchain_core.agents import AgentFinish
from langgraph.prebuilt.tool_executor import ToolExecutor

# This is a helper class that is useful for running tools
# It takes in an agent action, calls that tool and returns the result
tool_executor = ToolExecutor(tools)


# Define the agent node
def run_agent(data):
    agent_outcome = agent_runnable.invoke(data)
    return {"agent_outcome": agent_outcome}


# Define the function to execute tools
def execute_tools(data):
    # Get the most recent agent_outcome - this is the key added in the `agent` node above
    agent_action = data["agent_outcome"]
    print("agent action:{}".format(agent_action))
    output = tool_executor.invoke(agent_action[-1])
    return {"intermediate_steps": [(agent_action[-1], str(output))]}


# Define the logic that decides which conditional edge to go down
def should_continue(data):
    # If the agent outcome is an AgentFinish, return the "end" string
    # This will be used when setting up the graph to define the flow
    if isinstance(data["agent_outcome"], AgentFinish):
        return "end"
    # Otherwise an AgentAction was returned, so return the "continue" string
    # This will be used when setting up the graph to define the flow
    else:
        return "continue"

Defining the Graph

Now we can define the whole graph. Note that conditional edges and normal edges are added in different ways.

Finally, the graph must be compiled before it can run.

from langgraph.graph import END, StateGraph

# Define a new graph
workflow = StateGraph(AgentState)
# Define the two nodes we will cycle between
workflow.add_node("agent", run_agent)
workflow.add_node("action", execute_tools)
# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.set_entry_point("agent")
# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Finally we pass in a mapping.
    # The keys are strings, and the values are other nodes.
    # END is a special node marking that the graph should finish.
    # What will happen is we will call `should_continue`, and then the output of that
    # will be matched against the keys in this mapping.
    # Based on which one it matches, that node will then be called.
    {
        # If "continue", then we call the tool node.
        "continue": "action",
        # Otherwise we finish.
        "end": END,
    },
)
# We now add a normal edge from the tool node back to `agent`.
# This means that after `action` is called, the `agent` node is called next.
workflow.add_edge("action", "agent")
# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile()
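
The compiled `app` behaves like any other LangChain runnable. Besides `invoke` (used in the complete code below), you can also stream the output of each node as the graph executes; this is a minimal sketch assuming the same `inputs` dict as the complete code:

inputs = {"input": "what is the weather in NewYork", "chat_history": []}
for state_update in app.stream(inputs):
    # Each yielded item maps the node that just ran ("agent" or "action")
    # to the partial state update it returned for that step.
    print(state_update)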

Complete Code

Below is the complete runnable code. Note that you need to replace api_key with your own key.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# author: yangyunlong time:2024/2/28
import datetime
import operator
from typing import TypedDict, Annotated, Union, Optional, Type, List

import requests
from langchain import hub
from langchain.agents import create_openai_tools_agent
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import BaseTool, tool
from langchain_core.agents import AgentAction
from langchain_core.agents import AgentFinish
from langchain_core.messages import BaseMessage
from langgraph.graph import END, StateGraph
from langgraph.prebuilt.tool_executor import ToolExecutor

# ChatZhipuAI is the author's local wrapper around the Zhipu AI chat API
from zhipu_llm import ChatZhipuAI

zhipuai_api_key = ""
glm3 = "glm-3-turbo"
glm4 = "glm-4"

chat_zhipu = ChatZhipuAI(
    temperature=0.8,
    api_key=zhipuai_api_key,
    model=glm3
)


class Tagging(BaseModel):
    """Analyze the sentiment polarity of a sentence and report its language."""
    sentiment: str = Field(description="sentiment of text, should be `pos`, `neg`, or `neutral`")
    language: str = Field(description="language of text (should be ISO 639-1 code)")


class Overview(BaseModel):
    """Overview of a section of text."""
    summary: str = Field(description="Provide a concise summary of the content.")
    language: str = Field(description="Provide the language that the content is written in.")
    keywords: str = Field(description="Provide keywords related to the content.")


# Note: the tool's parameter names must match the field names of its args_schema,
# otherwise the call fails validation when the agent invokes the tool.
@tool("tagging", args_schema=Tagging)
def tagging(sentiment: str, language: str):
    """Analyze the sentiment polarity of a sentence and report its language."""
    return "The sentiment is {a}, the language is {b}".format(a=sentiment, b=language)


@tool("overview", args_schema=Overview)
def overview(summary: str, language: str, keywords: str):
    """Overview of a section of text."""
    return "Summary: {a}\nLanguage: {b}\nKeywords: {c}".format(a=summary, b=language, c=keywords)


@tool
def get_current_temperature(latitude: float, longitude: float):
    """Fetch current temperature for given coordinates."""
    BASE_URL = "https://api.open-meteo.com/v1/forecast"
    # Parameters for the request
    params = {
        'latitude': latitude,
        'longitude': longitude,
        'hourly': 'temperature_2m',
        'forecast_days': 1,
    }
    # Make the request
    response = requests.get(BASE_URL, params=params)
    if response.status_code == 200:
        results = response.json()
    else:
        raise Exception(f"API Request failed with status code: {response.status_code}")
    current_utc_time = datetime.datetime.utcnow()
    time_list = [datetime.datetime.fromisoformat(time_str.replace('Z', '+00:00'))
                 for time_str in results['hourly']['time']]
    temperature_list = results['hourly']['temperature_2m']
    closest_time_index = min(range(len(time_list)), key=lambda i: abs(time_list[i] - current_utc_time))
    current_temperature = temperature_list[closest_time_index]
    return f'The current temperature is {current_temperature}°C'


tools = [tagging, overview, get_current_temperature]

# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-tools-agent")
# Construct the OpenAI tools agent
agent_runnable = create_openai_tools_agent(chat_zhipu, tools, prompt)


class AgentState(TypedDict):
    # The input string
    input: str
    # The list of previous messages in the conversation
    chat_history: list[BaseMessage]
    # The outcome of a given call to the agent
    # Needs `None` as a valid type, since this is what this will start as
    agent_outcome: Union[AgentAction, AgentFinish, None]
    # List of actions and corresponding observations
    # Here we annotate this with `operator.add` to indicate that operations to
    # this state should be ADDED to the existing values (not overwrite it)
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]


# This is a helper class that is useful for running tools
# It takes in an agent action, calls that tool and returns the result
tool_executor = ToolExecutor(tools)


# Define the agent node
def run_agent(data):
    agent_outcome = agent_runnable.invoke(data)
    return {"agent_outcome": agent_outcome}


# Define the function to execute tools
def execute_tools(data):
    # Get the most recent agent_outcome - this is the key added in the `agent` node above
    agent_action = data["agent_outcome"]
    print("agent action:{}".format(agent_action))
    output = tool_executor.invoke(agent_action[-1])
    return {"intermediate_steps": [(agent_action[-1], str(output))]}


# Define the logic that decides which conditional edge to go down
def should_continue(data):
    # If the agent outcome is an AgentFinish, return the "end" string
    # This will be used when setting up the graph to define the flow
    if isinstance(data["agent_outcome"], AgentFinish):
        return "end"
    # Otherwise an AgentAction was returned, so return the "continue" string
    # This will be used when setting up the graph to define the flow
    else:
        return "continue"


# Define a new graph
workflow = StateGraph(AgentState)
# Define the two nodes we will cycle between
workflow.add_node("agent", run_agent)
workflow.add_node("action", execute_tools)
# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.set_entry_point("agent")
# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Finally we pass in a mapping.
    # The keys are strings, and the values are other nodes.
    # END is a special node marking that the graph should finish.
    # What will happen is we will call `should_continue`, and then the output of that
    # will be matched against the keys in this mapping.
    # Based on which one it matches, that node will then be called.
    {
        # If "continue", then we call the tool node.
        "continue": "action",
        # Otherwise we finish.
        "end": END,
    },
)
# We now add a normal edge from the tool node back to `agent`.
# This means that after `action` is called, the `agent` node is called next.
workflow.add_edge("action", "agent")
# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile()

inputs = {"input": "what is the weather in NewYork", "chat_history": []}
result = app.invoke(inputs)
print(result["agent_outcome"].messages[0].content)
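
One note on reading the result: once the graph reaches END, `result["agent_outcome"]` is an `AgentFinish`. If the `.messages` accessor used above is not available in your version of langchain_core, the final answer can also be read from the standard `return_values` field of `AgentFinish` (a sketch, assuming the agent puts its answer under the usual "output" key):

final = result["agent_outcome"]
print(final.return_values.get("output", final.log))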