An LLM served through vLLM's OpenAI-compatible API does not directly support LangChain's bind_tools. Tool calling can still be achieved as follows.
import os
import json
import operator
import requests
from typing import TypedDict, Annotated, Sequence

from langchain import hub
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import JSONAgentOutputParser  # parses the agent's JSON-formatted output
from langchain.tools import BaseTool, StructuredTool, tool
from langchain.tools.render import ToolsRenderer, render_text_description_and_args
from langchain_core.messages import BaseMessage, ToolMessage
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import Runnable, RunnablePassthrough
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END               # used in the later LangGraph steps
from langgraph.prebuilt import ToolExecutor, ToolInvocation

os.environ["OPENAI_API_KEY"] = "..."
base_url = "xxx"

llm_origin = ChatOpenAI(
    base_url=base_url,
    model="qwen14b",
    max_tokens=2048,
    temperature=0.8,
    streaming=True,
)

@tool
def multiply(first_number: int, second_number: int):
    """Multiplies two numbers together."""
    return first_number * second_number

tools = [multiply]

# Pull the structured-chat agent prompt (note its use of MessagesPlaceholder)
prompt = hub.pull("hwchase17/structured-chat-agent")
prompt = prompt.partial(
    tools=render_text_description_and_args(list(tools)),
    tool_names=", ".join([t.name for t in tools]),
)

# Stop generation before the model starts inventing an Observation itself
stop = ["\nObservation"]
llm_with_stop = llm_origin.bind(stop=stop)

agent = (
    RunnablePassthrough.assign(
        agent_scratchpad=lambda x: format_log_to_str(x["intermediate_steps"]),
    )
    | prompt
    | llm_with_stop
    | JSONAgentOutputParser()
)
The result looks like this:
agent.invoke({'input':'1*1','intermediate_steps':[]})
# AgentAction(tool='multiply', tool_input={'first_number': 1, 'second_number': 1}, log='Thought: The user wants to multiply two numbers, a simple arithmetic operation.\nAction:\n```\n{\n "action": "multiply",\n "action_input": {"first_number": 1, "second_number": 1}\n}\n```\nObserv')
agent.invoke({'input':'hello','intermediate_steps':[]})
# AgentFinish(return_values={'output': 'Hello! How can I assist you today?'}, log='Thought: The user has simply greeted me, so there\'s no need for a tool at this moment. A friendly response will be provided directly.\n\nAction:\n```\n{\n "action": "Final Answer",\n "action_input": "Hello! How can I assist you today?"\n}')
This loads the tools into the LLM while keeping normal conversation intact. The two key pieces are the agent's prompt and the JSONAgentOutputParser.
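To make the parser's role concrete, here is a simplified, stdlib-only sketch of what JSONAgentOutputParser does: it extracts the fenced JSON block from the model's text and decides between a tool action and a final answer. This is an illustrative approximation, not the library's actual implementation, which handles more edge cases.

```python
import json
import re

def parse_agent_output(text: str):
    """Parse structured-chat model output into an action or a final answer.

    Looks for a fenced block containing {"action": ..., "action_input": ...},
    mirroring (in simplified form) JSONAgentOutputParser's behavior.
    """
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    if not match:
        # No fenced JSON block: treat the whole text as the final answer.
        return ("finish", text.strip())
    payload = json.loads(match.group(1))
    if payload.get("action") == "Final Answer":
        return ("finish", payload["action_input"])
    return ("action", payload["action"], payload["action_input"])
```

Feeding it the multiply log from above yields an action tuple, while a "Final Answer" block or plain text yields a finish tuple.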
With this in place, the subsequent LangGraph operations become straightforward.
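The loop LangGraph (or a hand-rolled executor) runs on top of this agent is simple: invoke the agent, execute any requested tool, append the (action, observation) pair to intermediate_steps, and repeat until a final answer arrives. The sketch below uses a stand-in `fake_agent` function in place of the real chain so it runs without an LLM; the control flow is the point.

```python
def multiply(first_number: int, second_number: int) -> int:
    return first_number * second_number

TOOLS = {"multiply": multiply}  # tool registry, dispatched by name

def fake_agent(state):
    # Stand-in for the real agent chain: first call requests the tool,
    # the second call turns the observation into a final answer.
    if not state["intermediate_steps"]:
        return ("action", "multiply", {"first_number": 3, "second_number": 4})
    _, observation = state["intermediate_steps"][-1]
    return ("finish", f"The result is {observation}.")

def run(agent, user_input):
    state = {"input": user_input, "intermediate_steps": []}
    while True:
        step = agent(state)
        if step[0] == "finish":
            return step[1]
        _, name, args = step
        observation = TOOLS[name](**args)  # execute the requested tool
        state["intermediate_steps"].append((step, observation))
```

Swapping `fake_agent` for the real `agent.invoke` (and the tuple checks for AgentAction/AgentFinish isinstance checks) gives the actual executor loop.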
This example also shows that the most important LangChain skill to master is the LCEL (LangChain Expression Language) syntax.