pip install openai
from openai import OpenAI

client = OpenAI(
    api_key="sk-################################################",
)

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."},
        {"role": "user", "content": "Compose a poem that explains the concept of recursion in programming."}
    ]
)

print(completion.choices[0].message)
Output:
ChatCompletionMessage(content='In the realm of code and lines that dance,\nResides a concept, both eerie and entrancing.\nIt\'s called recursion, a mesmerizing art,\nWhere functions loop within their own heart.\n\nImagine a story, a tale to unfold,\nWith chapters repeated, in a loop, behold!\nA story within a story, the plot intertwine,\nEach telling itself, a pattern so fine.\n\nA function, like a master storyteller,\nCalls itself again, a loop\'s propeller.\nWith each iteration, a new world unfurls,\nLike a tale within a tale, where magic twirls.\n\nJust like a mirror reflecting its face,\nRecursion reflects code\'s looping grace.\nA problem divides, it breaks apart,\nAnd recursive functions play their part.\n\nTo solve a challenge, we dive deep within,\nA function called, like a foggy, mysterious din.\nBreaking it down, bit by bit, we explore,\nSolving the puzzle, uncovering more.\n\nA base case, the epic\'s final stanza,\nStops the tale\'s spinning, like a joyful bonanza.\nIt stops the recursion, brings it to rest,\nClosing the book, writing "THE END" with zest.\n\nRecursion, a tale of wonder and awe,\nEnchanting programmers, leaving them in awe.\nThrough code\'s endless loop, it finds its way,\nCreating elegant solutions, day by day.\n\nSo embrace recursion\'s poetic might,\nIn programming\'s realm, it shines so bright.\nA looping symphony, dancing to and fro,\nInfinite depths of marvel, it will bestow.', role='assistant', function_call=None, tool_calls=None)
Chat models take a list of messages as input and return a model-generated message as output.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)

print(response)
ChatCompletion(id='chatcmpl-8SFPWnJvm2xiqtou76j2YEZJ6PIJP', choices=[Choice(finish_reason='stop', index=0, message=ChatCompletionMessage(content='Due to the COVID-19 pandemic, the entire 2020 World Series was played at Globe Life Field in Arlington, Texas.', role='assistant', function_call=None, tool_calls=None))], created=1701743114, model='gpt-3.5-turbo-0613', object='chat.completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=26, prompt_tokens=53, total_tokens=79))
Response fields (per the API reference):
id (string): The unique identifier for the chat completion.
choices (array): A list of chat completion choices. There can be more than one if n is greater than 1.
    finish_reason (string): The reason the model stopped generating tokens: "stop" if it hit a natural stop point or a provided stop sequence, "length" if the maximum number of tokens specified in the request was reached, "content_filter" if content was omitted by a content filter, "tool_calls" if the model called a tool, or "function_call" (deprecated) if the model called a function.
    index (integer): The index of this choice in the list of choices.
    message (object): The chat completion message generated by the model.
        content (string or null): The contents of the message.
        tool_calls (array): The tool calls generated by the model, such as function calls.
            id (string): The ID of the tool call.
            type (string): The type of the tool; currently only "function" is supported.
            function (object): The function that the model called.
                name (string): The name of the function to call.
                arguments (string): The arguments for the call, generated by the model in JSON format. Note that the model does not always produce valid JSON and may generate parameters not defined in your function schema; validate the arguments in your code before calling the function.
        role (string): The role of the author of this message.
        function_call (object): Deprecated and replaced by tool_calls. The name and arguments of the function the model decided to call.
            arguments (string): The arguments for the call, generated by the model in JSON format; as with tool calls, validate the arguments in your code before calling the function.
            name (string): The name of the function to call.
created (integer): The Unix timestamp (in seconds) of when the chat completion was created.
model (string): The model used for the chat completion.
system_fingerprint (string): A fingerprint of the backend configuration the model ran with; it can be used with the seed request parameter to understand when backend changes that might impact determinism have been made.
object (string): Always "chat.completion".
usage (object): Usage statistics for the completion request.
    completion_tokens (integer): Number of tokens in the generated completion.
    prompt_tokens (integer): Number of tokens in the prompt.
    total_tokens (integer): Total number of tokens used in the request (prompt + completion).
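These fields can be read straight off the response object returned by the SDK. A minimal sketch, reusing the response from the example above:

print(response.id)                  # unique completion ID
print(response.model)               # model that served the request
print(response.created)             # Unix timestamp in seconds
print(response.system_fingerprint)  # backend configuration fingerprint

choice = response.choices[0]
print(choice.finish_reason)         # e.g. "stop", "length", "tool_calls"
print(choice.message.role)          # "assistant"
print(choice.message.content)       # the generated text (or None)

print(response.usage.prompt_tokens)
print(response.usage.completion_tokens)
print(response.usage.total_tokens)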
Additional notes

A conversation is typically formatted with a system message first, followed by alternating user and assistant messages. Prior messages supply context: the final question "Where was it played?" only makes sense in light of the earlier assistant message "The Los Angeles Dodgers won the World Series in 2020."

To mimic the way ChatGPT returns text incrementally, set the stream parameter to true.
completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ],
    stream=True
)

for chunk in completion:
    print(chunk.choices[0].delta)
ChoiceDelta(content='', function_call=None, role='assistant', tool_calls=None)
ChoiceDelta(content='Hello', function_call=None, role=None, tool_calls=None)
ChoiceDelta(content='!', function_call=None, role=None, tool_calls=None)
ChoiceDelta(content=' How', function_call=None, role=None, tool_calls=None)
ChoiceDelta(content=' can', function_call=None, role=None, tool_calls=None)
ChoiceDelta(content=' I', function_call=None, role=None, tool_calls=None)
ChoiceDelta(content=' assist', function_call=None, role=None, tool_calls=None)
ChoiceDelta(content=' you', function_call=None, role=None, tool_calls=None)
ChoiceDelta(content=' today', function_call=None, role=None, tool_calls=None)
ChoiceDelta(content='?', function_call=None, role=None, tool_calls=None)
ChoiceDelta(content=None, function_call=None, role=None, tool_calls=None)
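Each ChoiceDelta carries only a fragment of text, so to rebuild the full reply you accumulate the content of every chunk. A minimal variant of the loop above (the role-only first chunk and the final chunk carry content of '' or None, hence the check):

collected = []
for chunk in completion:
    delta = chunk.choices[0].delta
    if delta.content:  # skip chunks that carry no text
        collected.append(delta.content)
        print(delta.content, end="", flush=True)
full_reply = "".join(collected)  # "Hello! How can I assist you today?"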
A common way to use chat completions is to instruct the model, in the system message, to always return a JSON object. When calling gpt-4-1106-preview or gpt-3.5-turbo-1106, set response_format to { "type": "json_object" } to enable JSON mode. With JSON mode enabled, the model is constrained to generate only strings that parse as valid JSON objects.
response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You are a helpful assistant designed to output JSON."},
        {"role": "user", "content": "Who won the world series in 2020?"}
    ]
)

print(response.choices[0].message.content)
"content": "{\"winner\": \"Los Angeles Dodgers\"}"`
Note
JSON mode is always enabled when the model is generating arguments as part of a function call.
import json

# A conversation with the model that includes calling a specific function
# and handling the function's response (assumes `client` from the setup above):
def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    if "tokyo" in location.lower():
        return json.dumps({"location": "Tokyo", "temperature": "10", "unit": unit})
    elif "san francisco" in location.lower():
        return json.dumps({"location": "San Francisco", "temperature": "72", "unit": unit})
    elif "paris" in location.lower():
        return json.dumps({"location": "Paris", "temperature": "22", "unit": unit})
    else:
        return json.dumps({"location": location, "temperature": "unknown"})

def run_conversation():
    # Step 1: send the conversation and the available functions to the model
    messages = [{"role": "user", "content": "What's the weather like in San Francisco, Tokyo, and Paris?"}]
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            },
        }
    ]
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=messages,
        tools=tools,
        tool_choice="auto",  # "auto" is the default, but we set it explicitly
    )
    response_message = response.choices[0].message
    tool_calls = response_message.tool_calls
    # Step 2: check whether the model wants to call a function
    if tool_calls:
        # Step 3: call the function
        available_functions = {
            "get_current_weather": get_current_weather,
        }
        messages.append(response_message)
        # Step 4: send each function call and its response back to the model
        for tool_call in tool_calls:
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            function_args = json.loads(tool_call.function.arguments)
            function_response = function_to_call(
                location=function_args.get("location"),
                unit=function_args.get("unit"),
            )
            # extend the conversation with the function response
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response,
                }
            )
        # get a new response from the model, which can now see the function responses
        second_response = client.chat.completions.create(
            model="gpt-3.5-turbo-1106",
            messages=messages,
        )
        return second_response

print(run_conversation())
ChatCompletion(id='chatcmpl-8SGNHVjgoyQzvUgEEQ2LrITDki7HC', choices=[Choice(finish_reason='stop', index=0, message=ChatCompletionMessage(content="The current weather in San Francisco is 72°C, in Tokyo it's 10°C, and in Paris it's 22°C.", role='assistant', function_call=None, tool_calls=None))], created=1701746819, model='gpt-3.5-turbo-1106', object='chat.completion', system_fingerprint='fp_eeff13170a', usage=CompletionUsage(completion_tokens=28, prompt_tokens=175, total_tokens=203))
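As the field reference above warns, the model does not always produce valid JSON arguments and may invent parameters outside the schema, so it is worth validating arguments before dispatching a tool call. A sketch of such a guard (safe_call_weather is a hypothetical helper, not part of the OpenAI SDK; it wraps the get_current_weather function from the example):

import json

ALLOWED_UNITS = {"celsius", "fahrenheit"}

def safe_call_weather(tool_call):
    # Hypothetical defensive wrapper around get_current_weather.
    try:
        args = json.loads(tool_call.function.arguments)
    except json.JSONDecodeError:
        return json.dumps({"error": "model produced invalid JSON arguments"})
    location = args.get("location")
    if not isinstance(location, str) or not location:
        return json.dumps({"error": "missing required 'location' argument"})
    unit = args.get("unit", "fahrenheit")
    if unit not in ALLOWED_UNITS:
        unit = "fahrenheit"  # fall back instead of passing an out-of-schema value
    return get_current_weather(location=location, unit=unit)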
Reproducible outputs

Overview: when requesting a chat completion, you can specify a seed parameter to get (mostly) identical completions across repeated requests. The response also carries an identifier called system_fingerprint, which helps developers understand how system changes affect determinism.

Model-level feature: the Chat Completions and Completions APIs are non-deterministic by default, but OpenAI offers some controls to make the output more deterministic. For applications built on the API this is valuable: it gives finer control over model behavior and makes it easier to reproduce results and run tests.
How to get deterministic output:
1. Set the seed parameter to an integer of your choice, e.g. 12345, and use the same value on every request.
2. Keep all other parameters (prompt, temperature, top_p, etc.) identical across requests.
3. Check the system_fingerprint field in the response. It identifies the current combination of model weights, infrastructure, and other configuration options, and it changes when you change request parameters or when OpenAI updates the numeric configuration of the serving backend.
When the seed, the request parameters, and the system_fingerprint all match across requests, you will essentially get the same model output. Note that, because computers are not perfectly deterministic, responses may still differ slightly even when the request parameters and system_fingerprint match.
Model-level controls:
seed: if specified, sampling is performed deterministically, so repeated requests with the same seed and parameters should return the same result. Determinism is not absolutely guaranteed; consult the system_fingerprint response field to monitor backend changes.
system_fingerprint: a fingerprint of the backend configuration the model runs with. Used together with the seed request parameter, it indicates whether backend changes affect determinism, i.e. whether you should expect "almost always the same result".

Example: generating a consistent short story
from openai import OpenAI
import json
import difflib
from IPython.display import display, HTML

client = OpenAI(
    api_key="sk-################################################",
)

GPT_MODEL = "gpt-3.5-turbo-1106"

def get_chat_response(system_message: str, user_request: str, seed: int = None):
    try:
        response = client.chat.completions.create(
            model=GPT_MODEL,
            messages=[
                {"role": "system", "content": system_message},
                {"role": "user", "content": user_request},
            ],
            seed=seed,
            max_tokens=100,
            temperature=0.7,
        )
        response_content = response.choices[0].message.content
        system_fingerprint = response.system_fingerprint
        prompt_tokens = response.usage.prompt_tokens
        completion_tokens = response.usage.total_tokens - prompt_tokens
        table = f"""
        <table>
        <tr><th>Response</th><td>{response_content}</td></tr>
        <tr><th>System Fingerprint</th><td>{system_fingerprint}</td></tr>
        <tr><th>Number of prompt tokens</th><td>{prompt_tokens}</td></tr>
        <tr><th>Number of completion tokens</th><td>{completion_tokens}</td></tr>
        </table>
        """
        display(HTML(table))
        return response_content
    except Exception as e:
        print(f"An error occurred: {e}")
        return None

# Compares two responses and displays the differences in a table:
# deletions are highlighted in red, additions in green.
# If no differences are found, it prints "No differences found."
def compare_responses(previous_response: str, response: str):
    d = difflib.Differ()
    diff = d.compare(previous_response.splitlines(), response.splitlines())
    diff_table = "<table>"
    diff_exists = False
    for line in diff:
        if line.startswith("- "):
            diff_table += f"<tr style='color: red;'><td>{line}</td></tr>"
            diff_exists = True
        elif line.startswith("+ "):
            diff_table += f"<tr style='color: green;'><td>{line}</td></tr>"
            diff_exists = True
        else:
            diff_table += f"<tr><td>{line}</td></tr>"
    diff_table += "</table>"
    if diff_exists:
        display(HTML(diff_table))
    else:
        print("No differences found.")
# Case 1
system_message = "You are a helpful assistant that generates short stories."
user_request = "Generate a short story about a journey to Mars."

# First pair of calls: no seed
previous_response = get_chat_response(system_message, user_request)
response = get_chat_response(system_message, user_request)
Output (rendered as HTML tables in the notebook; not reproduced here)
# Case 2
SEED = 123
system_message = "You are a helpful assistant that generates short stories."
user_request = "Generate a short story about a journey to Mars."

# Second pair of calls: with a fixed seed
previous_response = get_chat_response(system_message, user_request, SEED)
response = get_chat_response(system_message, user_request, SEED)
compare_responses(previous_response, response)
This demonstrates how a fixed integer seed can produce consistent output from the model, which is useful wherever reproducibility matters. Note that while the seed ensures consistency, it does not guarantee the quality of the output. In this example, the same seed was used to generate a short story about a journey to Mars; across repeated queries the output stayed identical, showing that this model-level control is effective for reproducibility. One good application of this approach is to use a consistent seed when benchmarking or evaluating different prompts or models: every version is then evaluated under the same conditions, making the comparison fair and the results reliable.
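As a sketch of that benchmarking idea (the two candidate prompts below are made up for illustration; get_chat_response is the helper defined above):

candidate_prompts = [
    "You are a helpful assistant that generates short stories.",
    "You are a concise assistant that generates three-sentence stories.",
]
EVAL_SEED = 123
user_request = "Generate a short story about a journey to Mars."

# With the seed fixed, output differences come from the prompts, not sampling.
for prompt in candidate_prompts:
    get_chat_response(prompt, user_request, EVAL_SEED)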