Traceback (most recent call last):
File "/home/skywalk/github/OpenDevin/opendevin/server/agent/manager.py", line 125, in create_controller
self.controller = AgentController(
^^^^^^^^^^^^^^^^
File "/home/skywalk/github/OpenDevin/opendevin/controller/agent_controller.py", line 86, in __init__
self.command_manager = CommandManager(
^^^^^^^^^^^^^^^
File "/home/skywalk/github/OpenDevin/opendevin/controller/command_manager.py", line 14, in __init__
self.shell = DockerInteractive(
^^^^^^^^^^^^^^^^^^
File "/home/skywalk/github/OpenDevin/opendevin/sandbox/sandbox.py", line 135, in __init__
self.restart_docker_container()
File "/home/skywalk/github/OpenDevin/opendevin/sandbox/sandbox.py", line 388, in restart_docker_container
raise Exception('Failed to start container')
Exception: Failed to start container
Check that system resources (CPU, memory, and disk space) are sufficient to start a new Docker container. Try stopping the previous Docker container first:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
aae10082acd0 ghcr.io/opendevin/sandbox "/usr/sbin/sshd -D -…" 2 hours ago Up 2 hours sandbox-1da44fb5-d73b-4086-9a69-2dd80538e7fc
(py311) skywalk@ub:/tmp$ docker stop aaae10082acd0
Error response from daemon: No such container: aaae10082acd0
(The container ID was mistyped with an extra 'a'; stopping it by name works:)
(py311) skywalk@ub:/tmp$ docker stop sandbox-1da44fb5-d73b-4086-9a69-2dd80538e7fc
sandbox-1da44fb5-d73b-4086-9a69-2dd80538e7fc
(py311) skywalk@ub:/tmp$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Wow, it works now!
The root cause was probably that the earlier backend process had not been fully killed.
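If this happens repeatedly, the cleanup can be scripted. A minimal sketch, assuming the Docker Python SDK is installed (pip install docker) and that the leftover OpenDevin sandboxes are the only containers named sandbox-*:

import docker  # assumption: pip install docker

client = docker.from_env()
for container in client.containers.list():
    # OpenDevin names its sandbox containers "sandbox-<uuid>"
    if container.name.startswith("sandbox-"):
        print(f"stopping {container.name}")
        container.stop()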
lsof -i:3000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
uvicorn 959326 skywalk 16u IPv4 13541698 0t0 TCP localhost:3000 (LISTEN)
(py311) skywalk@ub:~/github/OpenDevin$ kill -9 959326
Use lsof -i:3000 to find the process, then kill it. That fixed it; again, the cause was likely that the earlier backend had not been fully killed.
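The lsof-and-kill step can be scripted as well; a sketch, assuming psutil is installed (pip install psutil):

import os
import signal
import psutil  # assumption: pip install psutil

PORT = 3000
# find every process listening on PORT and kill it,
# mirroring `lsof -i:3000` followed by `kill -9`
for conn in psutil.net_connections(kind="inet"):
    if conn.status == psutil.CONN_LISTEN and conn.laddr.port == PORT and conn.pid:
        print(f"killing pid {conn.pid}")
        os.kill(conn.pid, signal.SIGKILL)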
04:45:07 - opendevin:ERROR: agent_controller.py:113 - Error in loop
Traceback (most recent call last):
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/litellm/llms/openai.py", line 414, in completion
    raise e
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/litellm/llms/openai.py", line 373, in completion
    response = openai_client.chat.completions.create(**data, timeout=timeout)  # type: ignore
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 667, in create
    return self._post(
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/openai/_base_client.py", line 1213, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/openai/_base_client.py", line 902, in request
    return self._request(
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/openai/_base_client.py", line 993, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'detail': 'Not Found'}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/litellm/main.py", line 999, in completion
    raise e
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/litellm/main.py", line 972, in completion
    response = openai_chat
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/litellm/utils.py", line 7344, in exception_type
    raise NotFoundError(
litellm.exceptions.NotFoundError: OpenAIException - Error code: 404 - {'detail': 'Not Found'}
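The 404 pointed at the endpoint URL rather than at OpenDevin itself. To check that in isolation, a minimal litellm call can reproduce OpenDevin's request outside the agent loop (a sketch; the base URL and key are assumptions matching the local g4f server):

import litellm  # assumption: pip install litellm

response = litellm.completion(
    model="openai/gpt-3.5-turbo",          # openai/ prefix routes to the OpenAI-compatible provider
    api_base="http://127.0.0.1:1337/v1",   # assumed local g4f endpoint
    api_key="anything",                    # g4f ignores the key, but one must be set
    messages=[{"role": "user", "content": "what llm are you"}],
)
print(response.choices[0].message.content)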
curl --location 'http://127.0.0.1:1337/v1/models/' \
--header 'Content-Type: application/json' \
--data ' {
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "user",
"content": "what llm are you"
}
]
}
'
The response: {"detail":"Method Not Allowed"} — a 405, because /v1/models only accepts GET requests, not POST.
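A quick way to verify, as a sketch with requests (assuming the same local server):

import requests  # assumption: pip install requests

# /v1/models is a listing endpoint: GET should work where the POST above returned 405
resp = requests.get("http://127.0.0.1:1337/v1/models")
print(resp.status_code)
print(resp.json())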
So this endpoint cannot be used for conversation; for interaction you still have to use the chat completions endpoint:
curl --location 'http://127.0.0.1:1337/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data ' {
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "user",
"content": "what llm are you"
}
]
}
'
{"error": {"message": "ValueError: Can't patch loop of type <class 'uvloop.Loop'>"}, "model": "gpt-3.5-turbo", "provider": "OpenaiChat"}
This ValueError appears to come from nest_asyncio, which cannot patch a uvloop event loop on the g4f server side. Keep trying:
curl --location 'http://127.0.0.1:1337/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data ' {
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "user",
"content": "what llm are you"
}
]
}
'
import openai

client = openai.OpenAI(
    api_key="anything",
    base_url="http://127.0.0.1:1337/v1/models"  # note: this suffix turns out to be the bug, see below
)

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[
    {
        "role": "user",
        "content": "this is a test request, write a short poem"
    }
])

print(response)
Using g4f as a ChatGPT replacement this way still did not work. I'll go back and read this document carefully, OpenAI | liteLLM, to see how to make it work with g4f.
from openai import OpenAI

client = OpenAI(
    api_key="",
    # Change the API base URL to the local interference API
    base_url="http://localhost:1337/v1/models"  # same suffix mistake as above
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a poem about a tree"}],
    stream=True,
)
if isinstance(response, dict):
    # Not streaming
    print(response.choices[0].message.content)
else:
    # Streaming
    for token in response:
        content = token.choices[0].delta.content
        if content is not None:
            print(content, end="", flush=True)
curl --location 'http://airoot.org:1337/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data ' {
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "user",
"content": "what llm are you" }]}'
curl --location 'http://airoot.org:1337/v1/models' \
--header 'Content-Type: application/json' \
--data ' {
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "user",
"content": "what llm are you" }]}'
Final verdict after testing: the domestic g4f API service was unstable (sometimes simply unreachable), and switching to the Moscow instance fixed that.
Also, the v1/chat/completions test passed while the v1/models test failed (that test POSTed to /v1/models again, which only accepts GET).
But even so, OpenDevin still didn't pass. To be continued.
Got it now: the base URL should end with just v1. Because of the earlier failures I assumed plain v1 was wrong and kept writing v1/models or v1/chat/completions, which is exactly what caused the persistent errors. Changing it to plain v1 fixed it.
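For reference, the working client setup, as a sketch (assuming the g4f server is on 127.0.0.1:1337):

import openai

client = openai.OpenAI(
    api_key="anything",                    # g4f does not check the key
    base_url="http://127.0.0.1:1337/v1",   # just /v1 -- no /models or /chat/completions suffix
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "what llm are you"}],
)
print(response.choices[0].message.content)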
py311/lib/python3.11/site-packages/litellm/utils.py", line 7390, in exception_type
raise APIError(
litellm.exceptions.APIError: OpenAIException - Error code: 500 - {'error': {'message': 'ResponseStatusError: Response 401: {"detail":"Could not parse your authentication token. Please try signing in again."}'}, 'model': 'gpt-3.5-turbo', 'provider': 'OpenaiChat'}
^CReceived signal 2, exiting...
make: *** [Makefile:128: run] Interrupt
I found this issue reporting the same error, but I'm not sure how to resolve it: https://github.com/xtekky/gpt4free/issues/1619 . The 401 apparently comes from the OpenaiChat provider, which proxies the ChatGPT web interface and needs a valid session token.