
Getting Started with langgraph Development (Draft)


Preface: langgraph is a fairly new multi-agent framework, so material on it is scarce and the official docs are hard to follow. Since I only have a little langchain experience, I decided to read through the langgraph framework carefully and record my notes here for future reference.

Chapter 1: A First Look at langgraph

What it does: langgraph is a library for carrying out multiple actions, coordinating multiple langchain chains in a cyclic fashion. It can call the LLM repeatedly in a loop rather than being a DAG framework (like metagpt), which helps the LLM decide what to do next.

Installation

pip install langgraph

Quick-start example

1. Install a few libraries

pip install -U langchain langchain_openai tavily-python

2. Set the API keys

export OPENAI_API_KEY=sk-…
export TAVILY_API_KEY=tvly-…

OPENAI_API_KEY is the key used when calling the OpenAI ChatGPT API.

TAVILY_API_KEY is for Tavily, a search-engine-like API: you send it a request and get back relevant results, and it works particularly well for improving LLM agents.

Optionally, you can set up LangSmith to trace and inspect agent runs.
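LangSmith tracing is configured through environment variables; a minimal sketch, assuming you have a LangSmith account (the project name below is an arbitrary example):

export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=ls__…
export LANGCHAIN_PROJECT=langgraph-demo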

3. Set up the tools

The tool list works the same way as in langchain: just put the tools in a list.

from langchain_community.tools.tavily_search import TavilySearchResults

tools = [TavilySearchResults(max_results=1)]
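To sanity-check the tool before wiring it into the graph, you can invoke it directly; a quick sketch (the query string is just an example, and single-input tools accept a plain string):

# Ad-hoc test: returns a list of search-result dicts from Tavily
results = tools[0].invoke("what is langgraph")
print(results)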

Wrap the tools with ToolExecutor:

from langgraph.prebuilt import ToolExecutor

tool_executor = ToolExecutor(tools)

4. Load the model

Model requirements: 1. it must handle multi-turn conversation; 2. it must accept OpenAI-format input.

Using ChatGPT as an example:

from langchain_openai import ChatOpenAI

# We will set streaming=True so that we can stream tokens
# See the streaming section for more information on this.
model = ChatOpenAI(temperature=0, streaming=True)
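The model does not have to be OpenAI itself: since the requirement is only OpenAI-format input, any OpenAI-compatible endpoint should also work. A hedged sketch, assuming your langchain_openai version accepts base_url (the URL and key below are placeholders):

# Point ChatOpenAI at any OpenAI-compatible server (placeholder values)
model = ChatOpenAI(
    temperature=0,
    streaming=True,
    base_url="http://localhost:8000/v1",  # placeholder endpoint
    api_key="placeholder-key",
)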

Each model's function-calling feature generally expects a specific format, so we bind our tools to the model:

from langchain.tools.render import format_tool_to_openai_function

functions = [format_tool_to_openai_function(t) for t in tools]
model = model.bind_functions(functions)

5. The graph state (used to track the run's status)

The state is recorded as a list, and the state each node returns is appended back into it.

from typing import TypedDict, Annotated, Sequence
import operator
from langchain_core.messages import BaseMessage


class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]
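The Annotated[..., operator.add] part is what makes each node's return value append to the existing list instead of overwriting it. A toy illustration of the reducer:

import operator

existing = ["message 1"]   # messages already in the state
update = ["message 2"]     # what a node returns under "messages"
# The graph merges the two exactly like this:
print(operator.add(existing, update))  # ['message 1', 'message 2']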

6. Define the nodes

Each node has its own role; a langgraph application is built up node by node.

As in langchain, there are two main nodes: one decides which tool to use (the agent), and the other actually invokes the tool.

There is also a conditional edge: it routes to the tool node only when the agent decides to take an action; otherwise the run ends.

from langgraph.prebuilt import ToolInvocation
import json
from langchain_core.messages import FunctionMessage

# Define the function that determines whether to continue or not
def should_continue(state):
    messages = state['messages']
    # Grab the last message in the state
    last_message = messages[-1]
    # If there is no function call, then we finish
    if "function_call" not in last_message.additional_kwargs:
        return "end"
    # Otherwise if there is, we continue
    else:
        return "continue"

# Define the function that calls the model
def call_model(state):
    messages = state['messages']
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}

# Define the function to execute tools
def call_tool(state):
    messages = state['messages']
    # Based on the continue condition
    # we know the last message involves a function call
    last_message = messages[-1]
    # We construct a ToolInvocation from the function_call
    action = ToolInvocation(
        tool=last_message.additional_kwargs["function_call"]["name"],
        tool_input=json.loads(last_message.additional_kwargs["function_call"]["arguments"]),
    )
    # We call the tool_executor and get back a response
    response = tool_executor.invoke(action)
    # We use the response to create a FunctionMessage
    function_message = FunctionMessage(content=str(response), name=action.tool)
    # We return a list, because this will get added to the existing list
    return {"messages": [function_message]}

7. Create the graph

With the nodes and state defined, we can now build the graph:

from langgraph.graph import StateGraph, END
# Define a new graph
workflow = StateGraph(AgentState)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", call_tool)

# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.set_entry_point("agent")

# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Finally we pass in a mapping.
    # The keys are strings, and the values are other nodes.
    # END is a special node marking that the graph should finish.
    # What will happen is we will call `should_continue`, and then the output of that
    # will be matched against the keys in this mapping.
    # Based on which one it matches, that node will then be called.
    {
        # If "continue", then we call the action (tool) node.
        "continue": "action",
        # Otherwise we finish.
        "end": END
    }
)

# We now add a normal edge from `action` to `agent`.
# This means that after the tool runs, the `agent` node is called next.
workflow.add_edge('action', 'agent')

# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile()

Code explanation:

  • Create a StateGraph from the AgentState schema

  • Add the agent node, which exchanges messages with the model

  • Add the action node, which executes the chosen action

  • Set the agent node as the entry point, i.e. the first node called when the flow starts

  • Then lay out the rest of the flow following the diagram

  • Add a conditional edge that routes on the agent's output: "continue" goes to action, "end" finishes

  • After action runs, its result is routed back to agent

  • Compile and run
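Before running the app, you can optionally print the compiled graph's topology to confirm the agent/action cycle; a sketch, assuming the get_graph()/print_ascii() helpers exposed on LangChain runnables (print_ascii requires the grandalf package):

# pip install grandalf
app.get_graph().print_ascii()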

8. Ready to use

from langchain_core.messages import HumanMessage

inputs = {"messages": [HumanMessage(content="what is the weather in sf")]}
app.invoke(inputs)
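invoke() returns the final state, which has the same shape as AgentState, so the agent's final answer is the last message in the list:

result = app.invoke(inputs)
# The last message is the model's final response
print(result["messages"][-1].content)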

Streaming output

A few different approaches are shown below.

Streaming node output

inputs = {"messages": [HumanMessage(content="what is the weather in sf")]}
for output in app.stream(inputs):
    # stream() yields dictionaries with output keyed by node name
    for key, value in output.items():
        print(f"Output from node '{key}':")
        print("---")
        print(value)
    print("\n---\n")

Code explanation: stream() is a generator; for every output it yields, print each key/value pair (the node name and that node's output).

Streaming LLM tokens

inputs = {"messages": [HumanMessage(content="what is the weather in sf")]}
async for output in app.astream_log(inputs, include_types=["llm"]):
    # astream_log() yields the requested logs (here LLMs) in JSONPatch format
    for op in output.ops:
        if op["path"] == "/streamed_output/-":
            # this is the output from .stream()
            ...
        elif op["path"].startswith("/logs/") and op["path"].endswith(
            "/streamed_output/-"
        ):
            # because we chose to only include LLMs, these are LLM tokens
            print(op["value"])

Code explanation: we select only the LLM entries from the run log; each iteration yields the newest generated token, streamed continuously as it arrives.
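Note that an async for loop must run inside a coroutine; in a plain script you would wrap the snippet above, reusing the app and inputs defined earlier:

import asyncio

async def main():
    async for output in app.astream_log(inputs, include_types=["llm"]):
        for op in output.ops:
            # keep only the per-token log entries, as in the snippet above
            if op["path"].startswith("/logs/") and op["path"].endswith("/streamed_output/-"):
                print(op["value"])

asyncio.run(main())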

When to use

When you need to call the model in a loop; if the flow is a plain chain, langchain alone (e.g. LCEL) can already do it.

Example use cases

  • ChatAgentExecutor

Getting Started Notebook: Walks through creating this type of executor from scratch

High Level Entrypoint: Walks through how to use the high level entrypoint for the chat agent executor.

Human-in-the-loop: How to add a human-in-the-loop component

Force calling a tool first: How to always call a specific tool first

Respond in a specific format: How to force the agent to respond in a specific format

Dynamically returning tool output directly: How to dynamically let the agent choose whether to return the result of a tool directly to the user

Managing agent steps: How to more explicitly manage intermediate steps that an agent takes

Chat bot evaluation as multi-agent simulation: How to simulate a dialogue between a “virtual user” and your chat bot

  • Async

When running LangGraph in async workflows, you may want to create the nodes to be async by default. For a walkthrough on how to do that, see this documentation; a minimal async-node sketch also follows after this list.

  • Streaming Tokens: Sometimes language models take a while to respond, and you may want to stream tokens to end users. For a guide on how to do this, see this documentation

  • Persistence: LangGraph comes with built-in persistence, allowing you to save the state of the graph at any point and resume from there. For a walkthrough on how to do that, see this documentation
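As referenced in the Async item above, a minimal async-node sketch, assuming the model object from earlier (chat models expose ainvoke as the async counterpart of invoke):

# An async version of call_model; register it with workflow.add_node as usual
async def call_model(state):
    messages = state["messages"]
    response = await model.ainvoke(messages)
    return {"messages": [response]}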

Some classes and their usage

StateGraph

graph

END

Prebuilt Examples

ToolExecutor

chat_agent_executor.create_function_calling_executor

create_agent_executor
