Preface: the ChatGLM3-6B model supports tool calling (ChatGLM3-6B-Base and ChatGLM3-6B-32K do not). This post walks through a simple demo.

How to define tools (the code below defines two tools, "track" and "text-to-speech"):
```python
tools = [
    {
        "name": "track",
        "description": "Track the real-time price of a given stock",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {
                    "description": "The stock symbol to track"
                }
            },
            "required": ["symbol"]
        }
    },
    {
        "name": "text-to-speech",
        "description": "Convert text to speech",
        "parameters": {
            "type": "object",
            "properties": {
                "text": {
                    "description": "The text to convert to speech"
                },
                "voice": {
                    "description": "The voice to use (male, female, etc.)"
                },
                "speed": {
                    "description": "The speed of the speech (fast, medium, slow, etc.)"
                }
            },
            "required": ["text"]
        }
    }
]
system_info = {"role": "system", "content": "Answer the following questions as best as you can. You have access to the following tools:", "tools": tools}
```
Explanation of the fields in `tools`:
- "name": the tool's name;
- "description": what the tool does;
- "parameters":
  - "type": always "object";
  - "properties": the tool's parameters, each with a description of its value;
  - "required": the parameters the model must supply when calling the tool.
Load the model:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("path_to_model", trust_remote_code=True)
model = AutoModel.from_pretrained("path_to_model", trust_remote_code=True, device='cuda')
model = model.eval()
```
When we call the model with a query that matches a tool, the model automatically emits a tool call:
```python
history = [system_info]
query = "Look up the price of stock 10111 for me"
response, history = model.chat(tokenizer, query, history=history)
print(response)
```
The response is `{'name': 'track', 'parameters': {'symbol': '10111'}}` — the model's tool call.
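One way to act on this structured reply is to route it to local Python functions. The registry and the stub `track` implementation below are illustrative assumptions (the price is hard-coded, standing in for a real market-data API), not part of ChatGLM3:

```python
# Illustrative local implementation of the "track" tool; the price is fake.
def track(symbol):
    return {"price": 12412}  # stand-in for a real market-data API call

TOOL_REGISTRY = {"track": track}

def dispatch(tool_call):
    """Route a ChatGLM3 tool call, e.g. {'name': 'track', 'parameters': {...}},
    to the matching local function."""
    func = TOOL_REGISTRY[tool_call["name"]]
    return func(**tool_call["parameters"])

tool_result = dispatch({"name": "track", "parameters": {"symbol": "10111"}})
```

The returned `tool_result` is exactly the kind of data you would serialize and feed back to the model as the observation in the next step.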
Here `result` simulates data returned by the external tool; it is passed back to the model together with the history.
(`role="observation"` marks the input as a tool's return value rather than user input, and must not be omitted.)
```python
import json

result = json.dumps({"price": 12412}, ensure_ascii=False)
response, history = model.chat(tokenizer, result, history=history, role="observation")
print(response)
```
The value of `response` is:
"Based on your query, I successfully called the stock-price tracking API, which returned the real-time price for stock 10111. According to the API result, the current price is 12412 yuan. I hope this helps!"
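Putting the two calls together: `model.chat` returns a dict when the model requests a tool and a plain string otherwise, so a driver loop can branch on the type. The `step` function below is a sketch under that assumption; `call_tool` is any callable that executes the requested tool and returns its result:

```python
import json

def step(model, tokenizer, query, history, call_tool):
    """One turn: if the model asks for a tool, execute it and feed the
    result back with role='observation'; otherwise return the text reply."""
    response, history = model.chat(tokenizer, query, history=history)
    if isinstance(response, dict):  # the model requested a tool call
        observation = json.dumps(call_tool(response), ensure_ascii=False)
        response, history = model.chat(
            tokenizer, observation, history=history, role="observation"
        )
    return response, history
```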