
Implementing LangChain's bind_tools functionality when the model cannot bind tools

An OpenAI-style LLM service exposed by the vLLM framework does not directly support LangChain's bind_tools; the following approach works around that limitation.

import os
from langchain_openai import ChatOpenAI
from langchain.tools import tool
from langchain import hub
from langchain_core.runnables import RunnablePassthrough
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import JSONAgentOutputParser  # parses the JSON action block the model emits
from langchain.tools.render import render_text_description_and_args

os.environ["OPENAI_API_KEY"] = "..."
base_url = "xxx"
llm_origin = ChatOpenAI(
    base_url=base_url,
    model="qwen14b",
    max_tokens=2048,
    temperature=0.8,
    streaming=True,
)

@tool
def multiply(first_number: int, second_number: int):
    """Multiplies two numbers together."""
    return first_number * second_number

tools = [multiply]

prompt = hub.pull("hwchase17/structured-chat-agent")  # note the MessagesPlaceholder in this prompt
prompt = prompt.partial(
    tools=render_text_description_and_args(list(tools)),
    tool_names=", ".join([t.name for t in tools]),
)
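The `render_text_description_and_args` call turns each tool into a line of prompt text describing its name, purpose, and argument schema. A stdlib-only sketch of the idea (the exact LangChain output format may differ slightly):

```python
# Stdlib-only sketch of rendering tools into prompt text, in the spirit of
# render_text_description_and_args. The exact LangChain format may differ.
def render_tools(tools: list[dict]) -> str:
    lines = []
    for t in tools:
        args = ", ".join(f"{k}: {v}" for k, v in t["args"].items())
        lines.append(f'{t["name"]}: {t["description"]}, args: {{{args}}}')
    return "\n".join(lines)

tools_spec = [{
    "name": "multiply",
    "description": "Multiplies two numbers together.",
    "args": {"first_number": "int", "second_number": "int"},
}]
print(render_tools(tools_spec))
```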
stop = ["\nObservation"]
llm_with_stop = llm_origin.bind(stop=stop)
agent = (
    RunnablePassthrough.assign(
        agent_scratchpad=lambda x: format_log_to_str(x["intermediate_steps"]),
    )
    | prompt
    | llm_with_stop
    | JSONAgentOutputParser()
)
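The heavy lifting at the end of the chain is done by `JSONAgentOutputParser`, which extracts the fenced JSON `Action` block from the model's text and decides between a tool call and a final answer. A simplified stdlib-only sketch of that parsing logic (the real parser handles more edge cases):

```python
import json
import re

# Simplified sketch of what JSONAgentOutputParser does: pull the JSON block
# out of the model's "Action:" section and decide between a tool call and a
# final answer. The real LangChain parser handles more edge cases.
def parse_action(text: str):
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    if match is None:  # no JSON block: treat the whole text as the answer
        return ("finish", text)
    blob = json.loads(match.group(1))
    if blob["action"] == "Final Answer":
        return ("finish", blob["action_input"])
    return ("action", blob["action"], blob["action_input"])

reply = (
    'Thought: multiply the numbers.\nAction:\n```\n'
    '{"action": "multiply", "action_input": {"first_number": 1, "second_number": 1}}\n```'
)
print(parse_action(reply))
```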

The result:

agent.invoke({'input':'1*1','intermediate_steps':[]})

# AgentAction(tool='multiply', tool_input={'first_number': 1, 'second_number': 1}, log='Thought: The user wants to multiply two numbers, a simple arithmetic operation.\nAction:\n```\n{\n  "action": "multiply",\n  "action_input": {"first_number": 1, "second_number": 1}\n}\n```\nObserv')

agent.invoke({'input':'hello','intermediate_steps':[]})

# AgentFinish(return_values={'output': 'Hello! How can I assist you today?'}, log='Thought: The user has simply greeted me, so there\'s no need for a tool at this moment. A friendly response will be provided directly.\n\nAction:\n```\n{\n  "action": "Final Answer",\n  "action_input": "Hello! How can I assist you today?"\n}')
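In a full executor, the returned `AgentAction` is executed and its observation is fed back through `intermediate_steps` until the chain returns `AgentFinish`. A stdlib-only sketch of that loop, with a hypothetical `fake_agent` standing in for `agent.invoke` (a real run would call the LLM):

```python
# Stdlib-only sketch of the executor loop that normally wraps this agent:
# run the chain, execute any requested tool, append the (action, observation)
# pair to intermediate_steps, and repeat until a final answer comes back.
def multiply(first_number: int, second_number: int) -> int:
    return first_number * second_number

TOOLS = {"multiply": multiply}

def fake_agent(state):
    # Hypothetical stand-in for agent.invoke: first call requests the tool,
    # second call sees the observation and finishes.
    if state["intermediate_steps"]:
        obs = state["intermediate_steps"][-1][1]
        return {"type": "finish", "output": f"The answer is {obs}."}
    return {"type": "action", "tool": "multiply",
            "tool_input": {"first_number": 1, "second_number": 1}}

def run(question: str) -> str:
    steps = []
    while True:
        result = fake_agent({"input": question, "intermediate_steps": steps})
        if result["type"] == "finish":
            return result["output"]
        observation = TOOLS[result["tool"]](**result["tool_input"])
        steps.append((result, observation))

print(run("1*1"))  # the stubbed loop resolves to "The answer is 1."
```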

This loads the tools into the LLM while still letting it hold a normal conversation. The core of the approach lies in two parts: the agent prompt and the JSONAgentOutputParser.

With this in place, subsequent LangGraph operations become straightforward.

This example also shows that the most important LangChain skill to master is the LCEL expression language.
