Hi everyone, I'm 小困难. A project I have been working on recently involves calling the ChatGPT API, so I would like to share my experience here and hopefully it will be useful to you.
Calling the ChatGPT API from Python relies on the `openai` library. If you do not have it installed yet, you can install it with the following command:
pip install openai
With the required library installed, we can start making calls using the functions in the `openai` library.
import openai
openai.api_key = 'your_api_key'
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=prompt_text,
max_tokens=2000,
n=1,
stop=None,
temperature=0.5,
)
response_content = response.choices[0].message.content
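A quick note before we go through the parameters: the snippet above uses the pre-1.0 interface of the `openai` library. If you already have a newer release installed and still want to run this old-style code as written, one option is to pin the library to a pre-1.0 version, for example:

pip install "openai<1.0"

Otherwise, use the new-style call shown later in this article.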
Let's go through this code step by step:

- `openai.api_key = 'your_api_key'`: set the API key you obtained from the OpenAI platform. The key is used for authentication and confirms that you are allowed to access OpenAI's services.
- `response = openai.ChatCompletion.create(...)`: this is where the request to the ChatGPT API is made. With the `openai.ChatCompletion.create` method we submit a chat-completion request to the ChatGPT model.
- `model="gpt-3.5-turbo"`: specifies which model to use.
- `messages=prompt_text`: supplies the conversation, where `prompt_text` is a list of roles and their content. The system prompt, the user input, and the assistant's (ChatGPT's) replies all go into this list; we will look at it in detail shortly.
- `max_tokens=2000`: limits the length of the generated text.
- `n=1`: the number of completions to request; setting it to 1 means we only need a single reply from ChatGPT.
- `stop=None`: stop sequences; if you want generation to stop at specific strings, set them here.
- `temperature=0.5`: controls how varied the generated text is. Lower values produce more deterministic output, higher values produce more diverse output.
- `response_content = response.choices[0].message.content`: extracts ChatGPT's reply from the API response. The response is a JSON-style object in which `response.choices` holds the generated completions, and `[0].message.content` gives us the text of the first one.

The code shown above uses the old version of the `openai` library; the new version (openai >= 1.0) changes a few details, as follows:
response = openai.chat.completions.create(
model="gpt-3.5-turbo",
messages=prompt_text,
max_tokens=2000,
n=1,
stop=None,
temperature=0.5,
)
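With openai >= 1.0 it is also common to create an explicit client object instead of setting the module-level `openai.api_key`. The following is just a sketch of that style with the same parameters as above (the key string is a placeholder; by default the library can also read it from the OPENAI_API_KEY environment variable):

from openai import OpenAI

client = OpenAI(api_key="your_api_key")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=prompt_text,
    max_tokens=2000,
    n=1,
    stop=None,
    temperature=0.5,
)
response_content = response.choices[0].message.content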
Next, let me focus on how to set up `messages`. `messages` is a list containing the conversation roles and their content; `openai` allows three roles:

- `system` (system message): you can think of this as the role assignment for GPT, i.e. what role you want it to play in this conversation.
- `user` (user message): the dialogue itself (what you say to GPT), equivalent to the message you type and send in the input box on the web version.
- `assistant` (assistant message): the content you want GPT to reply with, often used to constrain the format of its replies.

For example:

prompt_text = [{"role": "system", "content": "You are an expert in Python."},
{"role": "user", "content": "Please tell me the basics of Python."},
{"role": "assistant", "content": "1. Installation:...2. Hello, World!:...3. ..."}
]
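One thing to keep in mind is that the API is stateless: each request only sees the messages you send with it. To carry on a multi-turn conversation, you append the assistant's previous reply and your next question to the list yourself before calling the API again. A minimal sketch, using the old-style call and the `response_content` variable from earlier:

# Append the assistant's last reply and the user's next question
prompt_text.append({"role": "assistant", "content": response_content})
prompt_text.append({"role": "user", "content": "Can you show a Hello, World! example?"})

# Ask the model again with the full conversation history
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=prompt_text,
)
print(response.choices[0].message.content)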
The complete code is as follows:
import openai

def get_chat_messages(prompt_text):
    openai.api_key = 'your_api_key'
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=prompt_text,
        max_tokens=2000,
        n=1,
        stop=None,
        temperature=0.5,
    )
    return response.choices[0].message.content

# Define the conversation messages
prompt_text = [{"role": "system", "content": "You are an expert in Python."},
               {"role": "user", "content": "Please tell me the basics of Python."},
               {"role": "assistant", "content": "1. Installation:...2. Hello, World!:...3. ..."}
               ]

# Call the ChatGPT API to generate a reply
response_content = get_chat_messages(prompt_text)

# Print the generated reply
print("ChatGPT reply:", response_content)
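In practice, API calls can fail for transient reasons (rate limits, timeouts, server errors), so you may want a simple retry wrapper around `get_chat_messages`. Below is a minimal sketch, assuming the pre-1.0 `openai` library used above, whose exceptions live in `openai.error`:

import time
import openai

def get_chat_messages_with_retry(prompt_text, retries=3):
    for attempt in range(retries):
        try:
            return get_chat_messages(prompt_text)
        except openai.error.RateLimitError:
            # Rate limited: back off briefly and try again
            time.sleep(2 ** attempt)
        except openai.error.OpenAIError as e:
            # Any other API error: report it and stop retrying
            print("ChatGPT API call failed:", e)
            break
    return None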
At the time of writing, the GPT-3.5 series models you can pass as the `model` parameter are:

MODEL | DESCRIPTION | CONTEXT WINDOW | TRAINING DATA |
---|---|---|---|
gpt-3.5-turbo-1106 | New: updated GPT-3.5 Turbo. The latest GPT-3.5 Turbo model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens. | 16,385 tokens | Up to Sep 2021 |
gpt-3.5-turbo | Currently points to gpt-3.5-turbo-0613. | 4,096 tokens | Up to Sep 2021 |
gpt-3.5-turbo-16k | Currently points to gpt-3.5-turbo-16k-0613. | 16,385 tokens | Up to Sep 2021 |
gpt-3.5-turbo-instruct | Similar capabilities as GPT-3 era models. Compatible with the legacy Completions endpoint and not Chat Completions. | 4,096 tokens | Up to Sep 2021 |
gpt-3.5-turbo-0613 | Legacy. Snapshot of gpt-3.5-turbo from June 13th 2023. Will be deprecated on June 13, 2024. | 4,096 tokens | Up to Sep 2021 |
gpt-3.5-turbo-16k-0613 | Legacy. Snapshot of gpt-3.5-turbo-16k from June 13th 2023. Will be deprecated on June 13, 2024. | 16,385 tokens | Up to Sep 2021 |
gpt-3.5-turbo-0301 | Legacy. Snapshot of gpt-3.5-turbo from March 1st 2023. Will be deprecated on June 13th 2024. | 4,096 tokens | Up to Sep 2021 |
And the GPT-4 series models:

MODEL | DESCRIPTION | CONTEXT WINDOW | TRAINING DATA |
---|---|---|---|
gpt-4-0125-preview | New: GPT-4 Turbo. The latest GPT-4 model, intended to reduce cases of “laziness” where the model doesn’t complete a task. | 128,000 tokens | Up to Apr 2023 |
gpt-4-turbo-preview | Currently points to gpt-4-0125-preview. | 128,000 tokens | Up to Apr 2023 |
gpt-4-1106-preview | GPT-4 Turbo model featuring improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens. This preview model is not yet suited for production traffic. | 128,000 tokens | Up to Apr 2023 |
gpt-4-vision-preview | GPT-4 with the ability to understand images, in addition to all other GPT-4 Turbo capabilities. Returns a maximum of 4,096 output tokens. This is a preview model version and not yet suited for production traffic. | 128,000 tokens | Up to Apr 2023 |
gpt-4 | Currently points to gpt-4-0613. See continuous model upgrades. | 8,192 tokens | Up to Sep 2021 |
gpt-4-0613 | Snapshot of gpt-4 from June 13th 2023 with improved function calling support. | 8,192 tokens | Up to Sep 2021 |
gpt-4-32k | Currently points to gpt-4-32k-0613. See continuous model upgrades. This model was never rolled out widely in favor of GPT-4 Turbo. | 32,768 tokens | Up to Sep 2021 |
gpt-4-32k-0613 | Snapshot of gpt-4-32k from June 13th 2023 with improved function calling support. This model was never rolled out widely in favor of GPT-4 Turbo. | 32,768 tokens | Up to Sep 2021 |
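Any model name from the two tables above can be passed as the `model` argument; just keep its context window in mind when choosing `max_tokens` and the length of your messages. For example (gpt-4-1106-preview here is only an illustration, use whichever model your account has access to):

response = openai.ChatCompletion.create(
    model="gpt-4-1106-preview",  # any model name from the tables above
    messages=prompt_text,
    max_tokens=2000,
)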
If you would like to learn more, you can also visit the official ChatGPT API site: OpenAI API.