
4 Methods for Controlling JSON Output from LLMs

JSON is one of the most widely used data interchange formats in the world and covers virtually every need. When building AI-powered applications, engineers inevitably need to integrate the output of large language models (LLMs) into their codebases.

By instructing the LLM to follow a specific syntax or schema and then emit results in the shape the application expects, we can make the application's behavior far more predictable. In short, JSON's interoperability makes it the format of choice for data exchange.


1. Why is it so hard to get an LLM to output JSON?

Language models excel at predicting the next token and generating text, but producing precisely structured output beyond free-form text can be challenging for them, because they do not always follow instructions exactly.

For example, with OpenAI I want gpt-3.5-turbo to always respond in the following form:

(message_type) {message_content}

However, it may respond in slightly different ways, for example (a short parsing sketch after this list shows why that matters):

  • message_type:message_content
  • message_type:"message_content"
  • (message_type): "message_content"
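
To see why these small drifts matter, here is a minimal sketch of a strict parser for the requested shape; the `parse_message` helper and its regex are illustrative assumptions, not part of any library:

```
import re

# Accept only the exact "(message_type) {message_content}" shape.
PATTERN = re.compile(r"^\((\w+)\) \{(.*)\}$")

def parse_message(raw: str):
    match = PATTERN.match(raw.strip())
    if match is None:
        raise ValueError(f"Unexpected format: {raw!r}")
    return match.group(1), match.group(2)

for raw in ["(greeting) {hello there}", 'greeting: "hello there"']:
    try:
        print(parse_message(raw))
    except ValueError as err:
        print(err)  # the second variant fails: the model drifted from the requested shape
```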

2. Using prompt engineering

Please provide the response in the form of a Python list. It should begin with "[" and end with "]".

ChatGPT (GPT-4) can be prompted via system/user messages (through the GPT-4 API) to format data as CSV, and this usually works perfectly. While GPT-4 is great for prototyping demos, it is fairly expensive, so a local solution would be ideal.

There are many prompt-engineering frameworks that constrain output to JSON format; see here for one strict-JSON framework for LLM output.

```
## simple example provided by the author
res = strict_output(system_prompt = 'You are a classifier',
                    user_prompt = 'It is a beautiful day',
                    output_format = {"Sentiment": "Type of Sentiment",
                                     "Tense": "Type of Tense"})
print(res)
## output
{'Sentiment': 'Positive', 'Tense': 'Present'}
```

While prompt engineering can be effective for some use cases, it has a limitation: any internal change the LLM provider makes can lead to unexpected output. This is known to cause problems in production, as seen in stories online of AI applications relying on the ChatGPT API breaking because of constant background updates.
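
In practice, prompt-only approaches are usually wrapped in a validate-and-retry loop. Here is a minimal sketch; `call_llm` is a hypothetical placeholder for whichever client you use, not a real API:

```
import json

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for whichever LLM client is in use (OpenAI, llama.cpp, ...)."""
    raise NotImplementedError

def get_json(prompt: str, max_retries: int = 3) -> dict:
    # Ask for JSON, validate with json.loads, and retry when the model drifts from the format.
    for _ in range(max_retries):
        raw = call_llm(prompt + "\nRespond with a single JSON object and nothing else.")
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            continue  # the model ignored the instruction; ask again
    raise ValueError("LLM did not return valid JSON after retries")
```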

3. Constraining LLM output

There has already been a lot of innovative work in this area, and I had the chance to explore three frameworks that each attack the problem from a different angle. Even though they take different approaches, I was impressed by how each framework arrives at similar results.

  • GRAMMAR — a grammar that constrains the model's output. For example, you can force the model to output only JSON.
  • KOR — a half-baked prototype that can "help" you extract structured data from text using an LLM.
  • LM-Format-Enforcer — enforces the output format of a language model (JSON Schema, regex, etc.).
  • Fine-tuning an LLM — teaching the model to output JSON based on the input data.

3.1 Using grammar rules to force the model to output only JSON

With this approach, you run the model with llama.cpp and create a grammar file. GBNF (GGML BNF) is a format for defining formal grammars that constrain model output in llama.cpp.

Here is a simple grammar file I created for a basic test:

```
root ::= answer
answer ::= "{" ws "\"id\":" ws number "," ws "\"name\":" ws string "}"
answerlist ::= "[]" | "[" ws answer ("," ws answer)* "]"
string ::= "\"" ([^"]*) "\""
boolean ::= "true" | "false"
ws ::= [ \t\n]*
number ::= [0-9]+ "."? [0-9]*
stringlist ::= "[" ws "]" | "[" ws string ("," ws string)* ws "]"
numberlist ::= "[" ws "]" | "[" ws string ("," ws number)* ws "]"
```

The grammar itself is harder to read, but you can start from a schema definition that is easier to understand, like this:

```
interface answer {
  id: number;
  name: string;
}
```

Next, paste the schema into this online tool to generate the grammar file automatically; it saves a lot of hassle.

Now we have a grammar file and are ready to plug it into llama.cpp. See the repository for details on setting it up to run locally on your machine.

```
## start with a prompt
./main -m ./models/Mistral-7B-Instruct-v0.1-Q8.gguf -n 256 --grammar-file grammars/answer.gbnf -p 'Q: Name the planets in the solar system? A:'
...................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size = 64.00 MB
llama_new_context_with_model: compute buffer total size = 79.13 MB
llama_new_context_with_model: VRAM scratch buffer: 73.00 MB
llama_new_context_with_model: total VRAM used: 73.00 MB (model: 0.00 MB, context: 73.00 MB)
system_info: n_threads = 8 / 16 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
sampling:
repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
top_k = 40, tfs_z = 1.000, top_p = 0.950, typical_p = 1.000, temp = 0.800
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
generate: n_ctx = 512, n_batch = 512, n_predict = 256, n_keep = 0
## response
Q: Name the planets in the solar system? A:{ "id": 1, "name": "Mercury"} [end of text]
llama_print_timings: load time = 845.86 ms
llama_print_timings: sample time = 157.01 ms / 16 runs ( 9.81 ms per token, 101.91 tokens per second)
llama_print_timings: prompt eval time = 649.35 ms / 13 tokens ( 49.95 ms per token, 20.02 tokens per second)
llama_print_timings: eval time = 3280.48 ms / 15 runs ( 218.70 ms per token, 4.57 tokens per second)
llama_print_timings: total time = 4104.05 ms
Log end
```

Done! The result is a valid JSON object: {"id": 1, "name": "Mercury"}.
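
If you would rather call this from Python than from the CLI, llama-cpp-python exposes the same grammar support. A minimal sketch, assuming LlamaGrammar.from_file and the grammar argument from llama-cpp-python, with the model and grammar file paths used above:

```
from llama_cpp import Llama, LlamaGrammar

# Load the grammar generated from the schema (same file as the CLI example above).
grammar = LlamaGrammar.from_file("grammars/answer.gbnf")

llm = Llama(model_path="./models/Mistral-7B-Instruct-v0.1-Q8.gguf")

# The grammar constrains sampling so only tokens matching the GBNF rules can be emitted.
output = llm(
    "Q: Name the planets in the solar system? A:",
    max_tokens=256,
    grammar=grammar,
)
print(output["choices"][0]["text"])  # e.g. { "id": 1, "name": "Mercury"}
```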

Grammars are therefore flexible enough to describe complex objects. Here is my second attempt, creating a receipt schema and its grammar file.

```
// Receipt Type Definitions using Typescript.
interface RestaurantReceipt {
  restaurant: Restaurant;
  customer: Customer;
  order_date: string;
  total_price: number;
  tax_rate: number;
  tax_amount: number;
  discount_code: string;
  payment_method: string;
  card_type: string;
  card_number: string;
  expiration_month: number;
  expiration_year: number;
  cvv: string;
  shipping_address: string;
  items: Item[];
}

interface Restaurant {
  name: string;
  location: Location;
  year: number;
  phone_number: string;
  email: string;
}

interface Customer {
  first_name: string;
  last_name: string;
  email: string;
  phone_number: string;
}

interface Location {
  address: string;
  city: string;
  state: string;
  country: string;
}

interface Item {
  item_name: string;
  quantity: number;
  unit_price: number;
  description: string;
  item_total: number;
}
```

The grammar file generated for this receipt:

```
## Generated Grammar used during LLMs generation.
root ::= RestaurantReceipt
Item ::= "{" ws "\"item_name\":" ws string "," ws "\"quantity\":" ws number "," ws "\"unit_price\":" ws number "," ws "\"description\":" ws string "," ws "\"item_total\":" ws number "}"
Itemlist ::= "[]" | "[" ws Item ("," ws Item)* "]"
Location ::= "{" ws "\"address\":" ws string "," ws "\"city\":" ws string "," ws "\"state\":" ws string "," ws "\"country\":" ws string "}"
Locationlist ::= "[]" | "[" ws Location ("," ws Location)* "]"
Customer ::= "{" ws "\"first_name\":" ws string "," ws "\"last_name\":" ws string "," ws "\"email\":" ws string "," ws "\"phone_number\":" ws string "}"
Customerlist ::= "[]" | "[" ws Customer ("," ws Customer)* "]"
Restaurant ::= "{" ws "\"name\":" ws string "," ws "\"location\":" ws Location "," ws "\"year\":" ws number "," ws "\"phone_number\":" ws string "," ws "\"email\":" ws string "}"
Restaurantlist ::= "[]" | "[" ws Restaurant ("," ws Restaurant)* "]"
RestaurantReceipt ::= "{" ws "\"restaurant\":" ws Restaurant "," ws "\"customer\":" ws Customer "," ws "\"order_date\":" ws string "," ws "\"total_price\":" ws number "," ws "\"tax_rate\":" ws number "," ws "\"tax_amount\":" ws number "," ws "\"discount_code\":" ws string "," ws "\"payment_method\":" ws string "," ws "\"card_type\":" ws string "," ws "\"card_number\":" ws string "," ws "\"expiration_month\":" ws number "," ws "\"expiration_year\":" ws number "," ws "\"cvv\":" ws string "," ws "\"shipping_address\":" ws string "," ws "\"items\":" ws Itemlist "}"
RestaurantReceiptlist ::= "[]" | "[" ws RestaurantReceipt ("," ws RestaurantReceipt)* "]"
string ::= "\"" ([^"]*) "\""
boolean ::= "true" | "false"
ws ::= [ \t\n]*
number ::= [0-9]+ "."? [0-9]*
stringlist ::= "[" ws "]" | "[" ws string ("," ws string)* ws "]"
numberlist ::= "[" ws "]" | "[" ws string ("," ws number)* ws "]"
```

Then run llama.cpp:

```
## Constrained output with grammars
## llama.cpp supports grammars to constrain model output. For example, you can force the model to output JSON only:
./main -m ./models/Mistral-7B-Instruct-v0.1-Q8.gguf -n 256 --grammar-file grammars/json.gbnf -p 'give me a sample receipt:'
```

The output:

```
...................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size = 64.00 MB
llama_new_context_with_model: compute buffer total size = 79.13 MB
llama_new_context_with_model: VRAM scratch buffer: 73.00 MB
llama_new_context_with_model: total VRAM used: 73.00 MB (model: 0.00 MB, context: 73.00 MB)
system_info: n_threads = 8 / 16 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
sampling:
repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
top_k = 40, tfs_z = 1.000, top_p = 0.950, typical_p = 1.000, temp = 0.800
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
generate: n_ctx = 512, n_batch = 512, n_predict = 256, n_keep = 0
give me a sample receipt:{"receiptNumber":"12345","customerName":"John Smith","date":
"2021-01-01 10:30:00.000000",
"items": [
  {
    "itemId": "1",
    "productId": "ABC123",
    "quantity": 1,
    "unitPrice": 19.99
  },
  {
    "itemId": "2",
    "productId": "DEF456",
    "quantity": 2,
    "unitPrice": 29.99
  }
],
"subTotal": 59.98,
"taxAmount": 2.37,
"total": 62.35
} [end of text]
llama_print_timings: load time = 842.78 ms
llama_print_timings: sample time = 2477.51 ms / 177 runs ( 14.00 ms per token, 71.44 tokens per second)
llama_print_timings: prompt eval time = 509.36 ms / 9 tokens ( 56.60 ms per token, 17.67 tokens per second)
llama_print_timings: eval time = 38122.00 ms / 176 runs ( 216.60 ms per token, 4.62 tokens per second)
llama_print_timings: total time = 41331.49 ms
Log end
```

So far, grammars can constrain the model so that the output is always JSON, which looks like a promising solution. See my repository for the schema and grammar files I created for this test.

3.2 KOR — extracting structured data from text with an LLM

Some ideas for what can be done with Kor:

  • Extract data from text that matches an extraction schema.
  • Power AI assistants with skills by understanding user requests precisely.
  • Provide natural-language access to existing APIs.

See the repository link here for the test notebook I created for this test.

For this test I will use the open-source Llama-2 model, since we all like saving the cost of the ChatGPT API.

```
## download LLM model
from huggingface_hub import hf_hub_download
downloaded_model_path = hf_hub_download(repo_id="TheBloke/Llama-2-7b-Chat-GGUF", filename="llama-2-7b-chat.Q5_K_M.gguf")
```

```
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from kor.extraction import create_extraction_chain

# get model chain
llm = LlamaCpp(model_path=downloaded_model_path, temperature=0.8, verbose=True, echo=True, n_ctx=512)

DEFAULT_SYSTEM_PROMPT = """\
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\
"""

def get_prompt(message: str, system_prompt: str = DEFAULT_SYSTEM_PROMPT) -> str:
    return f'<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{message} [/INST]'
```

Example 1: schema and chain — output a single JSON object

```
#from langchain.chat_models import ChatOpenAI
from kor import create_extraction_chain, Object, Text
from kor.nodes import Object, Text, Number

schema = Object(
    id="player",
    description=(
        "User is controlling a music player to select songs, pause or start them or play"
        " music by a particular artist."
    ),
    attributes=[
        Text(
            id="song",
            description="User wants to play this song",
            examples=[],
            many=True,
        ),
        Text(
            id="album",
            description="User wants to play this album",
            examples=[],
            many=True,
        ),
        Text(
            id="artist",
            description="Music by the given artist",
            examples=[("Songs by paul simon", "paul simon")],
            many=True,
        ),
        Text(
            id="action",
            description="Action to take one of: `play`, `stop`, `next`, `previous`.",
            examples=[
                ("Please stop the music", "stop"),
                ("play something", "play"),
                ("play a song", "play"),
                ("next song", "next"),
            ],
        ),
    ],
    many=False,
)

## chain
chain = create_extraction_chain(llm, schema, encoder_or_encoder_class='json')
```

```
chain.run("play songs by paul simon and led zeppelin and the doors")['data']
## result
{'player': {'artist': ['paul simon', 'led zeppelin', 'the doors']}}
```

The result looks good and matches the schema definition for a single object. KOR also supports the more popular pydantic schema definitions. Here is a second example that creates a list of JSON objects.

Example 2: pydantic schema — output a list of JSON objects

```
from kor import from_pydantic
from typing import List, Optional
from pydantic import BaseModel, Field

## schema
class PlanetSchema(BaseModel):
    planet_name: str = Field(description="The name of the planet")

class PlanetList(BaseModel):
    planets: List[PlanetSchema]

schema, validator = from_pydantic(
    PlanetSchema,
    description="Planet Information",
    many=True,  # <-- Note Many = True
)
chain = create_extraction_chain(llm, schema, validator=validator)
result = chain.run(("list planets in our solar system."))
result
## output
{'data': {'planetschema': []},
 'raw': '\n"planetname|name|\nMercury|4|244|0.387|\nVenus|10|210|0.936|\nEarth|5|127|1.000|\nMars|2|210|0.181|\nJupiter|15|890|4.35|\nSaturn|6|720|0.550|\nUranus|7|510|0.750|\nNeptune|8|490|1.778|"',
 'errors': [],
 'validated_data': []}
```

Hmm, the result does not match what I expected for a list of JSON objects and needs more investigation, given that the raw output does contain the correct values.

3.3 LM-Format-Enforcer — enforcing the LLM's output format

LM-Format-Enforcer can enforce the output format of an LLM (JSON Schema, regex, etc.) and looks like the most promising framework. According to the documentation, it generates JSON by manipulating the model's token output according to the schema definition.

See the notebook I created for this test. As with the KOR test, I will keep using the open-source Llama-2 model, since it is supported by the framework.

```
## setup LLM model
from llama_cpp import Llama
from huggingface_hub import hf_hub_download

downloaded_model_path = hf_hub_download(repo_id="TheBloke/Llama-2-7b-Chat-GGUF", filename="llama-2-7b-chat.Q5_K_M.gguf")
llm = Llama(model_path=downloaded_model_path)

DEFAULT_SYSTEM_PROMPT = """\
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\
"""

def get_prompt(message: str, system_prompt: str = DEFAULT_SYSTEM_PROMPT) -> str:
    return f'<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{message} [/INST]'
```

Because it manipulates token output, the enforcer is tightly coupled to the LLM inference framework. For llama.cpp, this means creating a logits processor; see the code below:

```
## LM Format Enforcer Logits Processor
from typing import Optional, List
from llama_cpp import LogitsProcessorList
from lmformatenforcer import CharacterLevelParser
from lmformatenforcer.integrations.llamacpp import build_llamacpp_logits_processor
from lmformatenforcer import JsonSchemaParser
from pydantic import BaseModel
from IPython.display import display, Markdown

def display_header(text):
    display(Markdown(f'**{text}**'))

def display_content(text):
    display(Markdown(f'```\n{text}\n```'))

def llamacpp_with_character_level_parser(llm: Llama, prompt: str, character_level_parser: Optional[CharacterLevelParser]) -> str:
    logits_processors: Optional[LogitsProcessorList] = None
    if character_level_parser:
        logits_processors = LogitsProcessorList([build_llamacpp_logits_processor(llm, character_level_parser)])
    output = llm(prompt, logits_processor=logits_processors)
    text: str = output['choices'][0]['text']
    return text
```

Now let's run a simple test that returns a single JSON object:

```
class PlayerSchema(BaseModel):
    first_name: str
    last_name: str
    year_of_birth: int
    num_seasons_in_nba: int

question = 'Please give me information about Michael Jordan. You MUST answer using the following json schema: '
question_with_schema = f'{question}{PlayerSchema.schema_json()}'
prompt = get_prompt(question_with_schema)

display_header("Standard LLM Output:")
result = llamacpp_with_character_level_parser(llm, prompt, None)
display_content(result)
```

```
## result
Of course! I'd be happy to provide information about Michael Jordan using the provided JSON schema.
{
  "first_name": "Michael",
  "last_name": "Jordan",
  "year_of_birth": 1963,
  "num_seasons_in_nba": 15
}
I hope this helps! Let me know if you have any other questions.
```

So the result is not bad; it contains a JSON object. However, an application that wants to use this output would still need extra parsing work to strip the surrounding text, and that is exactly the problem this framework solves: it keeps the unwanted text out of the output and returns only the JSON object.
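
Without enforcement, you would need cleanup along these lines; this is a minimal sketch and the `extract_json` helper is illustrative, not from any library:

```
import json

def extract_json(raw: str) -> dict:
    # Naive cleanup: find the outermost braces and parse what sits between them.
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1 or end <= start:
        raise ValueError("No JSON object found in LLM output")
    return json.loads(raw[start:end + 1])

text = 'Of course! Here you go: {"first_name": "Michael", "last_name": "Jordan"} Hope this helps!'
print(extract_json(text))  # {'first_name': 'Michael', 'last_name': 'Jordan'}
```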

```
display_header("LLM Output with json schema enforcing:")
result = llamacpp_with_character_level_parser(llm, prompt, JsonSchemaParser(PlayerSchema.schema()))
display_content(result)
```

```
{ "first_name": "Michael", "last_name": "Jordan", "year_of_birth": 1963, "num_seasons_in_nba": 15 }
```

Nice, well done!

Next, let's test generating a list of JSON objects, starting with the standard LLM output:

```
message = "Q:please give me a list of planets in the solar system? A: "
prompt = get_prompt(message, DEFAULT_SYSTEM_PROMPT)
output = llm(prompt, max_tokens=512, stop=["Q:"])
text: str = output['choices'][0]['text']
display_header("LLM standard output")
print(text)

## LLM standard output
Of course! I'd be happy to help you with that. The eight planets in our solar system are:
1. Mercury
2. Venus
3. Earth
4. Mars
5. Jupiter
6. Saturn
7. Uranus
8. Neptune
```

Now let's add LLM output enforcement together with a simple schema:

```
## llm
llm = Llama(model_path=downloaded_model_path, n_ctx=4096, n_threads=16, verbose=False)

from typing import List
from pydantic import BaseModel

## schema
class PlanetSchema(BaseModel):
    planet_name: str

class PlanetList(BaseModel):
    planets: List[PlanetSchema]

## question
question = 'please give me a list of planets in the solar system?. You MUST answer using the following json schema: '
question_with_schema = f'{question}{PlanetList.schema_json()}'
prompt = get_prompt(question_with_schema)
#display_content(prompt)

## response
display_header("LLM Output with json schema enforcing:")
result = llamacpp_with_character_level_parser(llm, prompt, JsonSchemaParser(PlanetList.schema()))
display_content(result)
```

```
## LLM Output with json schema enforcing:
{ "planets": [
  { "planet_name": "Mercury" },
  { "planet_name": "Venus" }, { "planet_name": "Earth" },
  { "planet_name": "Mars" }, { "planet_name": "Jupiter" },
  { "planet_name": "Saturn" }, { "planet_name": "Uranus" },
  { "planet_name": "Neptune" }
] }
```

A great result: a list of JSON objects exactly as defined in the schema.
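
Because the enforced output is guaranteed to match the schema, it can be loaded straight back into the pydantic model for type-safe access. A minimal sketch, assuming the pydantic v1-style API already used above (`schema_json()` / `schema()`):

```
planets = PlanetList.parse_raw(result)  # pydantic v1 API, consistent with schema_json()/schema() above
for p in planets.planets:
    print(p.planet_name)                # Mercury, Venus, ...
```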

3.4 Fine-tuning an LLM

See my previous articles, where I tried fine-tuning an LLM to output JSON, once with OCR data as input and once with images as input; in both cases the results were good.

4. Closing thoughts

While there is no one-size-fits-all solution, the search for the perfect approach continues. These impressive frameworks are tailored to specific use cases, and I found that constraining the output produces better results than prompt engineering.

Training my own local model gives me more control over the output. It is also important to test a model before relying on it, since outputs can vary from model to model, and generating a list of JSON objects can be challenging for an LLM.


Original article: Constraining LLM output to JSON - BimAnt
