openai.error.APIConnectionError: Error communicating with OpenAI: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/completions (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))
Solution
Downgrade urllib3 to version <= 1.25.11:
pip install urllib3==1.25.11
(More error messages to be added.)
Calls sometimes fail. Besides network issues, there is also the matter of call frequency: the free OpenAI tier caps the number of calls per minute, and paid plans have limits too. If necessary, you can email OpenAI to request a higher limit.
When OpenAI calls are rate-limited, consider the following approaches:
- Optimize how you call the API: reduce the call frequency or the number of requests, and use caching to process and store data, lowering your dependence on OpenAI.
- Upgrade the API version: if your OpenAI API version is old, consider upgrading to a newer one, which may have higher rate limits.
- Purchase more resources: buy additional quota to raise your call frequency; contact OpenAI for details on the relevant services.
- Try other solutions: if none of the above is feasible, try other open-source NLP or AI tools, such as TensorFlow, PyTorch, or NLTK.
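The first suggestion above, reducing effective call frequency, is usually implemented as retry with exponential backoff. A minimal sketch (the helper name `with_retries` and the delay parameters are illustrative, not part of any library):

```python
import random
import time

def with_retries(fn, max_attempts=5, base_delay=1.0):
    """Call fn(); on failure, wait base_delay * 2**attempt (plus jitter), then retry.

    Raises the last exception if all attempts fail.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff with a little jitter to avoid synchronized retries.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Hypothetical usage with an OpenAI call:
# result = with_retries(lambda: llm("What would be a good company name?"))
```

In production you would likely catch only rate-limit errors (e.g. `openai.error.RateLimitError`) rather than bare `Exception`.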
# Source: LangChain 中文网
# -*- coding: utf-8 -*-
# Example 1: build a service that generates a company name from the company's product
import os
os.environ["OPENAI_API_KEY"] = "……"
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.9)
text = "What would be a good company name for a company that makes colorful socks?"
print(llm(text))
# Output: Rainbow Rascals Socks.
My first call to the API, and it still feels quite magical.
"temperature": in the OpenAI API payload, the "temperature" option controls the randomness, or creativity, of the model's output. When generating text, a language model normally outputs the words or word sequences it judges most likely given the input and its training data; increasing the randomness can help it produce more creative and interesting output. The "temperature" option controls exactly that degree of randomness: a lower value makes the output more predictable and repetitive, while a higher value makes it more varied and unpredictable. For example, a temperature of 0.5 yields fairly conservative output, while a temperature of 1 produces more creative, spontaneous output. Note that the ideal value depends on the task and context, so some experimentation may be needed to find the right setting.
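The mechanics behind this can be illustrated with a toy softmax: dividing the logits by the temperature before normalizing sharpens the distribution when the temperature is low and flattens it when it is high. This is a self-contained sketch of the general technique, not OpenAI's actual sampler:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax.

    Low temperature -> peaked distribution (predictable sampling);
    high temperature -> flat distribution (varied sampling).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                       # toy scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.5)   # top token dominates
hot = softmax_with_temperature(logits, 2.0)    # probability mass spreads out
```

Here `cold[0]` is noticeably larger than `hot[0]`: at low temperature the most likely token is chosen almost every time.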
# Example 2: using a prompt template
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
print(prompt.format(product="colorful socks"))
# Output: What is a good name for a company that makes colorful socks?
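Conceptually, a prompt template is little more than named string substitution. A toy reimplementation (the class `MiniPromptTemplate` is hypothetical, written only to show the idea, and ignores the real class's extra features such as validation and partials):

```python
class MiniPromptTemplate:
    """Toy stand-in for a prompt template: fill {variables} into a template string."""

    def __init__(self, input_variables, template):
        self.input_variables = input_variables
        self.template = template

    def format(self, **kwargs):
        # Fail early if a declared variable was not supplied.
        missing = [v for v in self.input_variables if v not in kwargs]
        if missing:
            raise KeyError(f"missing variables: {missing}")
        return self.template.format(**kwargs)

prompt = MiniPromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
```

`prompt.format(product="colorful socks")` then produces the same string as the example above.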
# Example 3: combining an LLM and a prompt in a multi-step workflow
import os
os.environ["OPENAI_API_KEY"] = "……"
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))
# Output: Sock Pop!
# Example 4: a multi-turn conversation with memory
import os
os.environ["OPENAI_API_KEY"] = "……"
from langchain import OpenAI, ConversationChain

llm = OpenAI(temperature=0)
conversation = ConversationChain(llm=llm, verbose=True)  # verbose=True so we can see the prompts
while True:
    HumanSent = input("input:")
    if HumanSent == "q":
        break
    output = conversation.predict(input=HumanSent)
    print(output)
By default, ConversationChain uses a simple memory type that remembers all previous inputs/outputs and adds them to the context it passes along.
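That default behavior can be sketched in a few lines: keep the full transcript and prepend it to every new prompt. `MiniBufferMemory` is a hypothetical toy, not LangChain's actual memory class:

```python
class MiniBufferMemory:
    """Toy version of a conversation buffer memory: store every (human, ai) turn
    and prepend the whole transcript to each new prompt."""

    def __init__(self):
        self.turns = []  # list of (human_text, ai_text) pairs

    def add_turn(self, human, ai):
        self.turns.append((human, ai))

    def build_prompt(self, new_input):
        history = "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)
        prefix = history + "\n" if history else ""
        return f"{prefix}Human: {new_input}\nAI:"

memory = MiniBufferMemory()
memory.add_turn("Hi there", "Hello! How can I help?")
```

Because the transcript grows with every turn, this style of memory eventually hits the model's context limit, which is why LangChain also offers windowed and summarizing memory variants.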
# Example 5: sending batches of messages to a chat model
import os
os.environ["OPENAI_API_KEY"] = "……"
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

chat = ChatOpenAI(temperature=0)
batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to Chinese."),
        HumanMessage(content="Translate this sentence from English to Chinese. I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to Chinese."),
        HumanMessage(content="Translate this sentence from English to Chinese. I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
print(result)
result.llm_output['token_usage']
'''
generations=[
    [ChatGeneration(text='我喜欢编程。', generation_info={'finish_reason': 'stop'},
                    message=AIMessage(content='我喜欢编程。', additional_kwargs={}, example=False))],
    [ChatGeneration(text='我喜欢人工智能。', generation_info={'finish_reason': 'stop'},
                    message=AIMessage(content='我喜欢人工智能。', additional_kwargs={}, example=False))]
]
llm_output={
    'token_usage': {'prompt_tokens': 69, 'completion_tokens': 19, 'total_tokens': 88},
    'model_name': 'gpt-3.5-turbo'
}
run=[RunInfo(run_id=UUID('5a2161dd-d623-4d0c-8c0a-0c95df966ca1')),
     RunInfo(run_id=UUID('e42fad23-306a-437f-868c-7e5783bedc4b'))]
'''
# Example 6: an agent that uses tools (search + calculator)
import os
os.environ["OPENAI_API_KEY"] = "……"
os.environ["SERPAPI_API_KEY"] = "……"
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI

# First, let's load the language model we're going to use to control the agent.
llm = OpenAI(temperature=0)

# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

# Now let's test it out!
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")
'''
> Entering new AgentExecutor chain...
 I need to find the temperature first, then use the calculator to raise it to the .023 power.
Action: Search
Action Input: "High temperature in SF yesterday"
Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 77 °F (at 12:56 pm) Minimum temperature yesterday: 58 °F (at 3:56 am)
Thought: I now need to use the calculator to raise 77 to the .023 power
Action: Calculator
Action Input: 77^.023
Observation: Answer: 1.1050687217917479
Thought: I now know the final answer
Final Answer: 1.1050687217917479

> Finished chain.
'''
The only extra step here is registering for SerpAPI, which is very simple: go to the official site, finish sign-up in a few minutes, then copy the API key.
SerpAPI is a Google search results API. It returns Google search results over an API, including organic results, ads, news, images, and more. Developers, marketers, and data analysts can use it to obtain search engine results for analysis and decision-making.

To register, visit the official site https://serpapi.com/ and click the "TRY SERP API FOR FREE" button to start the sign-up flow. You fill in an email address, username, and password, and accept the terms of service and privacy policy. After registering, verify your email and choose an API plan. SerpAPI offers several plans, including free, standard, advanced, and enterprise, each with different pricing and API quotas. Pick the plan that fits your needs and, if required, complete payment; you can then find your API key in the personal dashboard and start using SerpAPI.
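LangChain's `serpapi` tool wraps this API for you, but it helps to see the raw shape. The sketch below only builds a request URL and parses a response dict, so it runs without a key or network access; the endpoint and the `organic_results` field match SerpAPI's documented JSON format, but treat the details as assumptions to verify against their docs:

```python
import urllib.parse

SERPAPI_ENDPOINT = "https://serpapi.com/search.json"  # JSON search endpoint

def build_search_url(query, api_key, engine="google"):
    """Assemble a SerpAPI request URL (nothing is sent here)."""
    params = {"q": query, "api_key": api_key, "engine": engine}
    return SERPAPI_ENDPOINT + "?" + urllib.parse.urlencode(params)

def first_organic_title(response_json):
    """Pull the title of the first organic result from a SerpAPI response dict."""
    results = response_json.get("organic_results", [])
    return results[0]["title"] if results else None

# Hypothetical usage once you have a real key:
# import requests
# data = requests.get(build_search_url("LangChain", "YOUR_KEY")).json()
# print(first_organic_title(data))
```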
# Example 7: the same agent driven by a chat model, with Chinese queries
import os
os.environ["OPENAI_API_KEY"] = "……"
os.environ["SERPAPI_API_KEY"] = "……"
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI

# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)

# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

# Now let's test it out!
agent.run("钱学森的年龄乘以3是多少?")  # "What is Qian Xuesen's age multiplied by 3?"
agent.run("钱学森的夫人是谁?她的年龄乘以3是多少?")  # "Who is Qian Xuesen's wife? What is her age multiplied by 3?"
'''
> Entering new AgentExecutor chain...
Question: What is the result of multiplying Qian Xuesen's age by 3?
Thought: I can use a calculator to find the answer.
Action:
{
  "action": "Calculator",
  "action_input": "Qian Xuesen's age * 3"
}
Observation: Answer: 348
Thought: I now know the final answer.
Final Answer: The result of multiplying Qian Xuesen's age by 3 is 348.

> Finished chain.

> Entering new AgentExecutor chain...
Thought: I need to find out who is the wife of Qian Xuesen and then calculate her age multiplied by 3.
Action:
{
  "action": "Search",
  "action_input": "Qian Xuesen wife"
}
Observation: Jiang Ying
Thought: Jiang Ying is the wife of Qian Xuesen. Now I need to find her age and multiply it by 3.
Action:
{
  "action": "Search",
  "action_input": "Jiang Ying age"
}
Observation: 36 years
Thought: Jiang Ying is 36 years old. Now I can calculate her age multiplied by 3.
Action:
{
  "action": "Calculator",
  "action_input": "36 * 3"
}
Observation: Answer: 108
Thought: Jiang Ying's age multiplied by 3 is 108.
Final Answer: 108

> Finished chain.
'''
References
1. LangChain 中文网
2. Python: fixing "Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1131)'))"