
Debugging OpenDevin: litellm.exceptions.APIError: OpenAIException - Error code: 502 (and friends)

While debugging OpenDevin, it fails with: Failed to start container

Traceback (most recent call last):
  File "/home/skywalk/github/OpenDevin/opendevin/server/agent/manager.py", line 125, in create_controller
    self.controller = AgentController(
                      ^^^^^^^^^^^^^^^^
  File "/home/skywalk/github/OpenDevin/opendevin/controller/agent_controller.py", line 86, in __init__
    self.command_manager = CommandManager(
                           ^^^^^^^^^^^^^^^
  File "/home/skywalk/github/OpenDevin/opendevin/controller/command_manager.py", line 14, in __init__
    self.shell = DockerInteractive(
                 ^^^^^^^^^^^^^^^^^^
  File "/home/skywalk/github/OpenDevin/opendevin/sandbox/sandbox.py", line 135, in __init__
    self.restart_docker_container()
  File "/home/skywalk/github/OpenDevin/opendevin/sandbox/sandbox.py", line 388, in restart_docker_container
    raise Exception('Failed to start container')
Exception: Failed to start container

Check that system resources (CPU, memory, and disk space) are sufficient to start a new Docker container. Then try stopping the leftover Docker container:

docker  ps
CONTAINER ID   IMAGE                       COMMAND                  CREATED       STATUS       PORTS     NAMES
aae10082acd0   ghcr.io/opendevin/sandbox   "/usr/sbin/sshd -D -…"   2 hours ago   Up 2 hours             sandbox-1da44fb5-d73b-4086-9a69-2dd80538e7fc
(py311) skywalk@ub:/tmp$ docker stop aaae10082acd0
Error response from daemon: No such container: aaae10082acd0
(py311) skywalk@ub:/tmp$ docker stop sandbox-1da44fb5-d73b-4086-9a69-2dd80538e7fc
sandbox-1da44fb5-d73b-4086-9a69-2dd80538e7fc
(py311) skywalk@ub:/tmp$ docker  ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Yay, it works now! (The first docker stop failed only because I mistyped the container ID with an extra leading 'a'; stopping the container by name worked.)

The likely root cause: a previous backend process had not been fully killed.
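For future cleanups, every leftover sandbox container can be stopped in one go. A minimal sketch, assuming OpenDevin names its sandbox containers with the sandbox- prefix as shown above:

docker ps -q --filter "name=sandbox-" | xargs -r docker stop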

A service kept occupying port 3000:

lsof -i:3000
COMMAND    PID    USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
uvicorn 959326 skywalk   16u  IPv4 13541698      0t0  TCP localhost:3000 (LISTEN)
(py311) skywalk@ub:~/github/OpenDevin$ kill -9 959326

Use lsof -i:3000 to find the offending process, then kill it.
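Both steps can be combined into one line (a sketch; the -t flag makes lsof print only the PIDs):

kill -9 $(lsof -t -i:3000)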

That fixed it.


OpenAI error: OpenAIException - Error code: 404 - {'detail': 'Not Found'}

04:45:07 - opendevin:ERROR: agent_controller.py:113 - Error in loop
Traceback (most recent call last):
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/litellm/llms/openai.py", line 414, in completion
    raise e
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/litellm/llms/openai.py", line 373, in completion
    response = openai_client.chat.completions.create(**data, timeout=timeout)  # type: ignore
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 667, in create
    return self._post(
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/openai/_base_client.py", line 1213, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/openai/_base_client.py", line 902, in request
    return self._request(
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/openai/_base_client.py", line 993, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'detail': 'Not Found'}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/litellm/main.py", line 999, in completion
    raise e
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/litellm/main.py", line 972, in completion
    response = openai_chat
  File "/media/skywalk/EXTERNAL_USB/py311/lib/python3.11/site-packages/litellm/utils.py", line 7344, in exception_type
    raise NotFoundError(
litellm.exceptions.NotFoundError: OpenAIException - Error code: 404 - {'detail': 'Not Found'}

First, poke the /v1/models endpoint with a chat-style POST:

curl --location 'http://127.0.0.1:1337/v1/models/' \
--header 'Content-Type: application/json' \
--data ' {
      "model": "gpt-3.5-turbo",
      "messages": [
        {
          "role": "user",
          "content": "what llm are you"
        }
      ]
    }
'

The response: {"detail":"Method Not Allowed"}

So this endpoint can't be used for chat; to interact you have to use the chat endpoint.
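That Method Not Allowed makes sense: on OpenAI-compatible servers, /v1/models is a GET endpoint that lists available models, not a POST endpoint. Listing them would look like this (a sketch against the same local server):

curl 'http://127.0.0.1:1337/v1/models'

The chat payload itself belongs on /v1/chat/completions: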

curl --location 'http://127.0.0.1:1337/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data ' {
      "model": "gpt-3.5-turbo",
      "messages": [
        {
          "role": "user",
          "content": "what llm are you"
        }
      ]
    }
'

{"error": {"message": "ValueError: Can't patch loop of type <class 'uvloop.Loop'>"}, "model": "gpt-3.5-turbo", "provider": "OpenaiChat"}

Keep trying, this time with the OpenAI Python client (snippet adapted from the liteLLM docs):


import openai

client = openai.OpenAI(
    api_key="anything",
    # NOTE: this base_url turns out to be wrong; it should end at /v1 (see the fix near the end of this post)
    base_url="http://127.0.0.1:1337/v1/models"
)
# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[
    {
        "role": "user",
        "content": "this is a test request, write a short poem"
    }
])
print(response)

Using g4f as a drop-in ChatGPT replacement like this still didn't succeed. I need to go back and read this document carefully: OpenAI | liteLLM
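The liteLLM docs suggest putting their proxy in front of the backend and pointing clients at the proxy. A sketch of what that might look like for the local g4f endpoint (the --api_base flag is my reading of the liteLLM CLI docs, untested here):

# hypothetical: start a liteLLM proxy that forwards to the local g4f server
litellm --model gpt-3.5-turbo --api_base http://127.0.0.1:1337/v1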

Let's see how to get it working with g4f.

Testing g4f:

from openai import OpenAI

client = OpenAI(
    api_key="",
    # Change the API base URL to the local interference API
    base_url="http://localhost:1337/v1/models"
)
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a poem about a tree"}],
    stream=True,
)
if isinstance(response, dict):
    # Not streaming
    print(response.choices[0].message.content)
else:
    # Streaming
    for token in response:
        content = token.choices[0].delta.content
        if content is not None:
            print(content, end="", flush=True)

Testing the 1337 service on a server outside China:

curl --location 'http://airoot.org:1337/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data ' {
      "model": "gpt-3.5-turbo",
      "messages": [
        {
          "role": "user",
          "content": "what llm are you" }]}'

And the same payload against /v1/models:

curl --location 'http://airoot.org:1337/v1/models' \
--header 'Content-Type: application/json' \
--data ' {
      "model": "gpt-3.5-turbo",
      "messages": [
        {
          "role": "user",
          "content": "what llm are you" }]}' 

Testing finally done: the culprit was the g4f API service hosted inside China being unstable (or at times simply unreachable); switching to the Moscow server fixed it.

Also, the /v1/chat/completions test passed while the /v1/models test failed (expected, since /v1/models is a GET listing endpoint and we were POSTing chat payloads at it).

But OpenDevin still didn't pass even so. To be continued.

Figured it out: the base URL should carry just /v1 and nothing more. Because of the earlier failures I kept assuming plain /v1 was wrong and wrote the base URL as /v1/models or /v1/chat/completions instead, which is exactly what kept causing the errors. Changing it back to /v1 solved it.
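For reference, here is the earlier client snippet with that one-line fix applied; the only change is trimming base_url back to /v1, since the client library appends /chat/completions (and /models) by itself:

import openai

client = openai.OpenAI(
    api_key="anything",                   # placeholder, as in the earlier snippets
    base_url="http://127.0.0.1:1337/v1",  # just /v1: no /models, no /chat/completions
)
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "what llm are you"}],
)
print(response.choices[0].message.content)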

Running it produced a new error: Could not parse your authentication token. Please try signing in again.

py311/lib/python3.11/site-packages/litellm/utils.py", line 7390, in exception_type
    raise APIError(
litellm.exceptions.APIError: OpenAIException - Error code: 500 - {'error': {'message': 'ResponseStatusError: Response 401: {"detail":"Could not parse your authentication token. Please try signing in again."}'}, 'model': 'gpt-3.5-turbo', 'provider': 'OpenaiChat'}
^CReceived signal 2, exiting...
make: *** [Makefile:128: run] Interrupt

There is an existing issue for this error, but I don't yet understand how to fix it: https://github.com/xtekky/gpt4free/issues/1619
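The traceback shows provider 'OpenaiChat', which proxies the real ChatGPT web interface and so appears to need a valid OpenAI session behind it. Since the earlier error responses echo a provider field, one thing worth trying (an assumption based on that field, not something I have verified) is pinning a provider that needs no login in the request body:

curl --location 'http://airoot.org:1337/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data '{"model": "gpt-3.5-turbo", "provider": "Bing", "messages": [{"role": "user", "content": "what llm are you"}]}'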
