
Litestar GET function blocks OpenAI


Background:

When I move the function into Litestar, it suddenly stops OpenAI from returning a completion. I can print every declared variable to the console except answer:

from dotenv import load_dotenv
from litestar import Controller, Litestar, get
from litestar.types import ControllerRouterHandler
import os
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from dotenv import load_dotenv
import openai

__all__ = (
    "index",
    "support",
)

load_dotenv()

embeddings = OpenAIEmbeddings()


@get("/")
async def index() -> str:
    return "Тестовый запрос выполнен. Чтобы получить ответ, воспользуйтесь командой /support/{вопрос%20вопрос}."


@get("/support/{question:str}")
async def get_answer(question: str) -> str:
    pinecone.init(
        api_key=os.getenv("PINECONE_API_KEY"),
        environment=os.environ.get('PINECONE_ENVIRONMENT'),
    )
    index_name = os.environ.get('PINECONE_INDEX_NAME')
    k = 2
    docsearch = Pinecone.from_existing_index(index_name, embeddings)
    res = docsearch.similarity_search_with_score(question, k=k)
    prompt = f'''
    Use text below to compile an answer:
    {[x for x in res]}
    '''
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=1000
    )
    answer = completion.choices[0].text
    return {"answer": answer}


routes: list[ControllerRouterHandler] = [
    get_answer
]

app = Litestar([index, get_answer])

Though a bare OpenAI script works fine:

import os
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from dotenv import load_dotenv
import openai

load_dotenv()

# Prepare the embeddings
embeddings = OpenAIEmbeddings()

pinecone.init(
    api_key=os.getenv("PINECONE_API_KEY"),
    environment=os.environ.get('PINECONE_ENVIRONMENT'),
)
index_name = os.environ.get('PINECONE_INDEX_NAME')

query = input("Enter your question: ")
k = 2
docsearch = Pinecone.from_existing_index(index_name, embeddings)
res = docsearch.similarity_search_with_score(query, k=k)

prompt = f'''
Use text below to compile an answer:
{[x for x in res]}
'''

completion = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=1000
)

print(completion.choices[0].text)

pip freeze output:

litestar==2.2.1
openai==0.27.8
pinecone-client==2.2.2

Litestar keeps showing 500 Internal Server Error without details. index() works fine. What can I do to resolve this issue?


Solution:

Here is a minimal example using Litestar that runs into the same problem you are facing.

from litestar import Litestar, get


@get()
async def get_answer() -> str:
    return {'hello': 'world'}


app = Litestar([get_answer])

Making a GET request to localhost:8000 returns


{"status_code":500,"detail":"Internal Server Error"}

If you turn on debug mode, like so: app = Litestar([get_answer], debug=True), the following error is shown when you make the same request.

500: Unable to serialize response content

This is because you have declared the return type as str in async def get_answer(question: str) -> str:, but in your actual code you are returning a dict. Litestar uses the function's return type annotation to serialize the response data, and serializing the returned dict as a str fails.

In your example, index works fine because the declared return type and the actual return value are both str.

The fix is to use the correct return type for get_answer; dict[str, str] or even plain dict is enough.

from litestar import Litestar, get


@get()
async def get_answer() -> dict[str, str]:
    return {'hello': 'world'}


app = Litestar([get_answer])

If you are on Python 3.8, you can use typing.Dict instead of dict:

from typing import Dict

from litestar import Litestar, get


@get()
async def get_answer() -> Dict:
    return {'hello': 'world'}


app = Litestar([get_answer])
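To confirm the fix without starting a server, here is a minimal sketch that exercises the route in-process. It assumes Litestar's bundled test client (litestar.testing.TestClient); adapt it to however you normally run the app:

from litestar import Litestar, get
from litestar.testing import TestClient


@get()
async def get_answer() -> dict[str, str]:
    return {'hello': 'world'}


app = Litestar([get_answer])

# Issue the same GET request in-process instead of via a browser or curl.
with TestClient(app=app) as client:
    response = client.get("/")
    print(response.status_code)  # expected: 200
    print(response.json())       # expected: {'hello': 'world'}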

PS:

"Though bare OpenAI script works fine"

This is why I removed the OpenAI parts and focused only on the Litestar ones. If there is an error in those parts you would still get a 500 error; you will have to fix that separately.
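For completeness, here is a sketch of the same annotation fix applied to the handler from the question. The Pinecone and OpenAI calls are copied from the original code and simply assumed to work, since the bare script does; only the return annotation changes:

import os

import openai
import pinecone
from dotenv import load_dotenv
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from litestar import Litestar, get

load_dotenv()
embeddings = OpenAIEmbeddings()


@get("/support/{question:str}")
async def get_answer(question: str) -> dict[str, str]:  # annotation now matches the returned dict
    pinecone.init(
        api_key=os.getenv("PINECONE_API_KEY"),
        environment=os.environ.get("PINECONE_ENVIRONMENT"),
    )
    docsearch = Pinecone.from_existing_index(os.environ.get("PINECONE_INDEX_NAME"), embeddings)
    res = docsearch.similarity_search_with_score(question, k=2)
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Use text below to compile an answer:\n{res}",
        max_tokens=1000,
    )
    # Litestar can now serialize the response, because the declared type and the value agree.
    return {"answer": completion.choices[0].text}


app = Litestar([get_answer])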
