First, verify that the proxy server is working by accessing an HTTPS site through it:
curl -x socks5h://127.0.0.1:1080 https://www.google.com
curl -x socks5h://127.0.0.1:1080 -s https://api.openai.com/v1/models/gpt-3.5-turbo \
  -H "Authorization: Bearer $OPENAI_API_KEY"
Result:
{
"id": "gpt-3.5-turbo",
"object": "model",
"created": 1677610602,
"owned_by": "openai"
}
Next, call the chat completions endpoint:
curl -k -x socks5h://127.0.0.1:1080 -X POST https://api.openai.com/v1/chat/completions ^
-H "Content-Type: application/json" ^
-H "Authorization: Bearer $OPENAI_API_KEY" ^
-d "{\"model\":\"gpt-3.5-turbo\",\"messages\":[{\"role\":\"user\",\"content\":\"Translate the following English text to French: 'Hello, world!'\"}],\"max_tokens\":100}"
Note: this is a Windows command-prompt invocation, so ^ continues the command across lines, and double quotes inside the double-quoted -d payload must be escaped with backslashes.
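Escaping the payload by hand is error-prone. As a minimal sketch, the same JSON body can be generated with Python's json module and then pasted into curl's -d argument (the payload fields mirror the command above):

```python
import json

# Build the request body as a Python dict instead of escaping quotes by hand
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user",
         "content": "Translate the following English text to French: 'Hello, world!'"}
    ],
    "max_tokens": 100,
}

# json.dumps produces a correctly quoted JSON string
body = json.dumps(payload)
print(body)
```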
Python client code:
import openai
import os
import httpx
import time

# Set your OpenAI API key
openai.api_key = os.getenv("OPENAI_API_KEY")

# Configure the proxy server
proxy_url = 'socks5://127.0.0.1:1080'

# Create an HTTPX client with the proxy configured
# (SOCKS support requires the httpx[socks] extra)
client = httpx.Client(proxies={
    "http://": proxy_url,
    "https://": proxy_url,
})

# Custom request function
def list_models(max_retries=10, retry_delay=5):
    url = "https://api.openai.com/v1/models"
    headers = {
        "Authorization": f"Bearer {openai.api_key}"
    }
    for attempt in range(max_retries):
        try:
            response = client.get(url, headers=headers)
            response.raise_for_status()  # Raise an exception if the request failed
            return response.json()
        except httpx.HTTPStatusError as e:
            if e.response.status_code == 429:
                print(f"Received 429 Too Many Requests. Retrying in {retry_delay} seconds...")
                time.sleep(retry_delay)
                retry_delay *= 2  # Exponential backoff: double the wait each time
            else:
                raise
    raise Exception("Max retries exceeded")

def print_models(models_response):
    print("Available Models:")
    print("=================")
    for model in models_response['data']:
        model_id = model.get('id', 'N/A')
        created_at = model.get('created', 'N/A')
        owned_by = model.get('owned_by', 'N/A')
        print(f"Model ID: {model_id}")
        print(f"Created At: {created_at}")
        print(f"Owned By: {owned_by}")
        print("-----------------")

# List the models and print the result
models_response = list_models()
print_models(models_response)
Output:
Available Models:
=================
Model ID: whisper-1
Created At: 1677532384
Owned By: openai-internal
-----------------
Model ID: babbage-002
Created At: 1692634615
Owned By: system
-----------------
Model ID: dall-e-2
Created At: 1698798177
Owned By: system
-----------------
Model ID: gpt-3.5-turbo-16k
Created At: 1683758102
Owned By: openai-internal
-----------------
Model ID: tts-1-hd-1106
Created At: 1699053533
Owned By: system
-----------------
Model ID: tts-1-hd
Created At: 1699046015
Owned By: system
-----------------
Model ID: gpt-3.5-turbo-instruct-0914
Created At: 1694122472
Owned By: system
-----------------
Model ID: gpt-3.5-turbo-instruct
Created At: 1692901427
Owned By: system
-----------------
Model ID: dall-e-3
Created At: 1698785189
Owned By: system
-----------------
Model ID: text-embedding-3-small
Created At: 1705948997
Owned By: system
-----------------
Model ID: tts-1
Created At: 1681940951
Owned By: openai-internal
-----------------
Model ID: text-embedding-3-large
Created At: 1705953180
Owned By: system
-----------------
Model ID: gpt-3.5-turbo-0125
Created At: 1706048358
Owned By: system
-----------------
Model ID: gpt-3.5-turbo
Created At: 1677610602
Owned By: openai
-----------------
Model ID: gpt-3.5-turbo-0301
Created At: 1677649963
Owned By: openai
-----------------
Model ID: tts-1-1106
Created At: 1699053241
Owned By: system
-----------------
Model ID: gpt-3.5-turbo-16k-0613
Created At: 1685474247
Owned By: openai
-----------------
Model ID: gpt-3.5-turbo-0613
Created At: 1686587434
Owned By: openai
-----------------
Model ID: text-embedding-ada-002
Created At: 1671217299
Owned By: openai-internal
-----------------
Model ID: davinci-002
Created At: 1692634301
Owned By: system
-----------------
Model ID: gpt-3.5-turbo-1106
Created At: 1698959748
Owned By: system
-----------------
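The list can also be post-processed locally, for example to pick the most recently created model. A minimal sketch, using a few entries copied from the output above in place of a live list_models()["data"] result (the newest_model helper is hypothetical, not part of any OpenAI SDK):

```python
# Sample entries in the shape returned by /v1/models
# (values taken from the output above)
models = [
    {"id": "whisper-1", "created": 1677532384, "owned_by": "openai-internal"},
    {"id": "gpt-3.5-turbo-0125", "created": 1706048358, "owned_by": "system"},
    {"id": "gpt-3.5-turbo-1106", "created": 1698959748, "owned_by": "system"},
]

def newest_model(models, prefix=""):
    """Return the most recently created model id, optionally filtered by id prefix."""
    candidates = [m for m in models if m["id"].startswith(prefix)]
    return max(candidates, key=lambda m: m["created"])["id"]

print(newest_model(models, prefix="gpt-3.5-turbo"))  # gpt-3.5-turbo-0125
```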
Verify the last model in the list with curl:
$ curl -x socks5h://127.0.0.1:1080 -s https://api.openai.com/v1/models/gpt-3.5-turbo-1106 -H "Authorization: Bearer $OPENAI_API_KEY"
{
"id": "gpt-3.5-turbo-1106",
"object": "model",
"created": 1698959748,
"owned_by": "system"
}
Retrieving the specific model gpt-3.5-turbo-1106:
import openai
import os
import httpx
import time

# Set your OpenAI API key
openai.api_key = os.getenv("OPENAI_API_KEY")

# Configure the proxy server
proxy_url = 'socks5://127.0.0.1:1080'

# Create an HTTPX client with the proxy configured
client = httpx.Client(proxies={
    "http://": proxy_url,
    "https://": proxy_url,
})

# Custom request function to retrieve model information
def retrieve_model(model_id, max_retries=10, retry_delay=5):
    url = f"https://api.openai.com/v1/models/{model_id}"
    headers = {
        "Authorization": f"Bearer {openai.api_key}"
    }
    for attempt in range(max_retries):
        try:
            response = client.get(url, headers=headers)
            response.raise_for_status()  # Raise an exception if the request failed
            return response.json()
        except httpx.HTTPStatusError as e:
            if e.response.status_code == 429:
                print(f"Received 429 Too Many Requests. Retrying in {retry_delay} seconds...")
                time.sleep(retry_delay)
                retry_delay *= 2  # Exponential backoff: double the wait each time
            else:
                raise
    raise Exception("Max retries exceeded")

# Retrieve information about the gpt-3.5-turbo-1106 model
model_id = "gpt-3.5-turbo-1106"
model_info = retrieve_model(model_id)

# Print the model information
print(f"Model ID: {model_info.get('id', 'N/A')}")
print(f"Object: {model_info.get('object', 'N/A')}")
print(f"Created At: {model_info.get('created', 'N/A')}")
print(f"Owned By: {model_info.get('owned_by', 'N/A')}")
Output:
Model ID: gpt-3.5-turbo-1106
Object: model
Created At: 1698959748
Owned By: system
Finally, code that sends a chat completion request through the proxy:
import openai
import os
import httpx
import time

# Set your OpenAI API key
openai.api_key = os.getenv("OPENAI_API_KEY")

# Configure the proxy server
proxy_url = 'socks5://127.0.0.1:1080'

# Create an HTTPX client with the proxy configured
client = httpx.Client(proxies={
    "http://": proxy_url,
    "https://": proxy_url,
})

# Custom request function
def create_chat_completion(messages, max_retries=10, retry_delay=10):
    url = "https://api.openai.com/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {openai.api_key}"
    }
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": messages
    }
    for attempt in range(max_retries):
        try:
            response = client.post(url, headers=headers, json=payload)
            response.raise_for_status()  # Raise an exception if the request failed
            return response.json()
        except httpx.HTTPStatusError as e:
            if e.response.status_code == 429:
                print(f"Received 429 Too Many Requests. Retrying in {retry_delay} seconds...")
                time.sleep(retry_delay)
            else:
                raise
    raise Exception("Max retries exceeded")

# Test request
messages = [
    {"role": "user", "content": "Translate the following English text to French: 'Hello, world!'"}
]
response = create_chat_completion(messages)

# Print the response
print(response['choices'][0]['message']['content'])
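The last line above indexes straight into the response. As a minimal sketch, the extraction can be wrapped in a small helper that also reports token usage; the sample_response dict below only mimics the shape of a chat completion response, and its French text and token counts are illustrative, not real API output:

```python
# A dict in the shape returned by /v1/chat/completions
# (illustrative values, not an actual API result)
sample_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Bonjour, le monde !"},
         "finish_reason": "stop"}
    ],
    "usage": {"prompt_tokens": 20, "completion_tokens": 7, "total_tokens": 27},
}

def extract_reply(response):
    """Pull the assistant text and total token usage out of a completion response."""
    choice = response["choices"][0]
    text = choice["message"]["content"]
    total_tokens = response.get("usage", {}).get("total_tokens")
    return text, total_tokens

text, tokens = extract_reply(sample_response)
print(text, tokens)  # Bonjour, le monde ! 27
```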