
LangChain Tutorial | LCEL in Depth, Part 1 | LangChain Expression Language (LCEL)


I. Features of LCEL

LangChain Expression Language (LCEL) is a declarative way to easily compose chains together. LCEL was designed from day one to support putting prototypes into production with no code changes, from the simplest "prompt + LLM" chain to the most complex chains (people have successfully run LCEL chains with hundreds of steps in production). A few of the reasons you might want to use LCEL:

Streaming support: when you build chains with LCEL you get the best possible time-to-first-token (the time elapsed until the first chunk of output appears). For some chains this means we stream tokens straight from the LLM to a streaming output parser, so you get back parsed, incremental chunks of output at the same rate the LLM provider emits the raw tokens.

Async support: any chain built with LCEL can be called both with the synchronous API (e.g. in your Jupyter notebook while prototyping) and with the asynchronous API (e.g. on a LangServe server). This makes it possible to use the same code for prototypes and in production, with great performance and the ability to handle many concurrent requests on the same server.

Optimized parallel execution: whenever your LCEL chain has steps that can be executed in parallel (e.g. if you fetch documents from multiple retrievers), we do so automatically, in both the sync and async interfaces, for the smallest possible latency.

Retries and fallbacks: configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We are currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
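
As a rough sketch of what that can look like (illustrative, not from the original article; the retry settings and fallback model are placeholders), .with_retry() and .with_fallbacks() can be attached to any Runnable in a chain:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
# Retry the primary model on transient errors, then fall back to a second model.
primary = ChatOpenAI(model="gpt-3.5-turbo").with_retry(stop_after_attempt=3)
backup = ChatOpenAI(model="gpt-4")
robust_chain = prompt | primary.with_fallbacks([backup]) | StrOutputParser()

print(robust_chain.invoke({"topic": "ice cream"}))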

Access to intermediate results: for more complex chains it is often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end users know something is happening, or simply to debug your chain. You can stream intermediate results, and this is available on every LangServe server.

Input and output schemas: input and output schemas give every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of the chain. These can be used for validating inputs and outputs, and are an integral part of LangServe.

Seamless LangSmith tracing integration: as your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step. With LCEL, all steps are automatically logged to LangSmith for maximum observability and debuggability.

Seamless LangServe deployment integration: any chain created with LCEL can be easily deployed with LangServe.
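
A minimal sketch of such a deployment (illustrative, not from the original article), assuming the langserve, fastapi and uvicorn packages are installed; the app title and route path are made up:

from fastapi import FastAPI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langserve import add_routes

# A small LCEL chain like the one built in the next section.
chain = (
    ChatPromptTemplate.from_template("tell me a short joke about {topic}")
    | ChatOpenAI()
    | StrOutputParser()
)

app = FastAPI(title="Joke chain")
# LangServe exposes /joke/invoke, /joke/batch and /joke/stream endpoints for the chain.
add_routes(app, chain, path="/joke")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)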

II. Getting Started

LCEL makes it easy to build complex chains from basic components, and supports out-of-the-box functionality such as streaming, parallel execution, and logging.

Basic example: Prompt + Model + Output parser

The most basic and common use case is chaining a prompt template and a model together. To see how this works, let's create a chain that takes a topic and generates a joke:

Install the base packages:

pip install --upgrade --quiet  langchain-core langchain-community langchain-openai

Example code:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Prompt
prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
# Model (the chat LLM)
model = ChatOpenAI(model="gpt-4")
# Output parser
output_parser = StrOutputParser()
# Compose the chain
chain = prompt | model | output_parser
# Run it
response = chain.invoke({"topic": "ice cream"})
# Print the result
print(response)

(Note: the ChatOpenAI model here needs openai_api_key to be configured; if you access OpenAI through a proxy you also need to set openai_api_base. If this is unfamiliar, please read the introductory article first.)

Output:

"Why don't ice creams ever get invited to parties?\n\nBecause they always drip when things heat up!"

Note this line of the code, where we use LCEL to piece the different components together into a single chain:

chain = prompt | model | output_parser

The | symbol is similar to the Unix pipe operator: it chains the different components together, feeding the output of one component as the input to the next.

In this chain the user input is passed to the prompt template, the prompt template output is passed to the model, and the model output is passed to the output parser. Let's take a look at each component individually to really understand what is going on.

1. Prompt

prompt is a BasePromptTemplate, which means it takes in a dictionary of template variables and produces a PromptValue. A PromptValue is a wrapper around a completed prompt that can be passed to either an LLM (which takes a string as input) or a ChatModel (which takes a sequence of messages as input). It can work with either language model type because it defines logic both for producing BaseMessages and for producing a string.

prompt_value = prompt.invoke({"topic": "ice cream"})
prompt_value
# > ChatPromptValue(messages=[HumanMessage(content='tell me a short joke about ice cream')])
prompt_value.to_messages()
# > [HumanMessage(content='tell me a short joke about ice cream')]
prompt_value.to_string()
# > 'Human: tell me a short joke about ice cream'

2. Model

The PromptValue is then passed to model. In this case our model is a ChatModel, meaning it will output a BaseMessage.

message = model.invoke(prompt_value)
message
# > AIMessage(content="Why don't ice creams ever get invited to parties?\n\nBecause they always bring a melt down!")

If our model were an LLM, it would output a string:

from langchain_openai.llms import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct")
llm.invoke(prompt_value)
# > '\n\nRobot: Why did the ice cream truck break down? Because it had a meltdown!'

3. Output parser

Finally, we pass the model output to output_parser, which is a BaseOutputParser, meaning it takes either a string or a BaseMessage as input. StrOutputParser specifically simply converts any input into a string.

output_parser.invoke(message)
"Why did the ice cream go to therapy? \n\nBecause it had too many toppings and couldn't find its cone-fidence!"

4. Entire pipeline

Step by step:

  1. We pass in user input for the desired topic as {"topic": "ice cream"}.
  2. The prompt component takes the user input and uses the topic to construct the prompt.
  3. The model component takes the generated prompt and passes it to the OpenAI LLM model for evaluation. The output generated by the model is a ChatMessage object.
  4. Finally, the output_parser component takes in a ChatMessage and transforms it into a Python string, which is returned from the invoke method.

input = {"topic": "ice cream"}

prompt.invoke(input)
# > ChatPromptValue(messages=[HumanMessage(content='tell me a short joke about ice cream')])

(prompt | model).invoke(input)
# > AIMessage(content="Why did the ice cream go to therapy?\nBecause it had too many toppings and couldn't cone-trol itself!")

III. RAG Retrieval Example

(retrieval-augmented generation chain)

For our next example, we want to run a retrieval-augmented generation chain to add some context when answering questions.

# Requires:
# pip install langchain docarray tiktoken

from langchain_community.vectorstores import DocArrayInMemorySearch
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_openai.chat_models import ChatOpenAI
from langchain_openai.embeddings import OpenAIEmbeddings

# Initialize an in-memory vector store
vectorstore = DocArrayInMemorySearch.from_texts(
    ["harrison worked at kensho", "bears like to eat honey"],
    embedding=OpenAIEmbeddings(),
)
# Create a retriever from the vector store
retriever = vectorstore.as_retriever()

# A chat template with two variables: context and question
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
# Build the prompt from the template
prompt = ChatPromptTemplate.from_template(template)
# The OpenAI chat model
model = ChatOpenAI()
# The output parser
output_parser = StrOutputParser()

# Combine the retriever output and the user question into the prompt inputs
setup_and_retrieval = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}
)
# Assemble the retrieval-augmented chain
chain = setup_and_retrieval | prompt | model | output_parser

chain.invoke("where did harrison work?")

In this case, the composed chain is:

chain = setup_and_retrieval | prompt | model | output_parser

To explain this, first notice that the prompt template above takes context and question as values to be substituted into the prompt. Before building the prompt template, we want to retrieve documents relevant to the search and include them as part of the context.

As a first step, we have set up the retriever using an in-memory store, which can retrieve documents based on a query. This is also a runnable component that can be chained together with other components, but you can also try running it on its own:

retriever.invoke("where did harrison work?")

We then use RunnableParallel to prepare the expected inputs for the prompt: one entry for the retrieved documents and one for the original user question. The retriever performs the document search, and RunnablePassthrough passes the user's question through unchanged:

setup_and_retrieval = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}
)

To review, the complete chain is:

setup_and_retrieval = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}
)
chain = setup_and_retrieval | prompt | model | output_parser

The flow is as follows:

  1. The first step creates a RunnableParallel object with two entries. The first entry, context, will contain the document results fetched by the retriever. The second entry, question, will contain the user's original question. To pass the question along, we use RunnablePassthrough to copy this entry.
  2. The dictionary from the step above is fed to the prompt component. It then takes the user question as well as the retrieved documents context to construct a prompt, and outputs a PromptValue.
  3. The model component takes the generated prompt and passes it to the OpenAI LLM model for evaluation. The output generated by the model is a ChatMessage object.
  4. Finally, the output_parser component takes in a ChatMessage and transforms it into a Python string, which is returned from the invoke method.

IV. Why Use LCEL

LCEL makes it easy to build complex chains from basic components. It does this by providing: a unified interface and composition primitives.

        1. A unified interface: every LCEL object implements the Runnable interface, which defines a common set of invocation methods (invoke, batch, stream, ainvoke, ...). This makes it possible for chains of LCEL objects to automatically support these invocations as well; that is, every chain of LCEL objects is itself an LCEL object.

        2. Composition primitives: LCEL provides a number of primitives that make it easy to compose chains, parallelize components, add fallbacks, dynamically configure chain internals, and more.
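
One such primitive is RunnableLambda, which wraps an ordinary Python function so it composes with | like any other LCEL object, and the resulting chain again exposes invoke/batch/stream. A small illustrative sketch (the functions here are made up for the example):

from langchain_core.runnables import RunnableLambda

# Wrap plain Python functions as Runnables so they compose with other LCEL objects.
to_upper = RunnableLambda(lambda text: text.upper())
add_exclamation = RunnableLambda(lambda text: text + "!")

shout = to_upper | add_exclamation    # the composition is itself a Runnable
print(shout.invoke("hello"))          # HELLO!
print(shout.batch(["hi", "hey"]))     # ['HI!', 'HEY!']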

        To better understand the value of LCEL, it helps to see it in action and to think about how we might recreate similar functionality without it. In this walkthrough we will do just that with the basic example from the getting-started section. We will take our simple prompt + model chain, which under the hood already defines a lot of functionality, and see what it would take to recreate all of it.

%pip install --upgrade --quiet  langchain-core langchain-openai langchain-anthropic
  • Invoke

In the simplest case, we just want to pass in a topic string and get back a joke string:

  Without LCEL

from typing import List

import openai

prompt_template = "Tell me a short joke about {topic}"
client = openai.OpenAI()

def call_chat_model(messages: List[dict]) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response.choices[0].message.content

def invoke_chain(topic: str) -> str:
    prompt_value = prompt_template.format(topic=topic)
    messages = [{"role": "user", "content": prompt_value}]
    return call_chat_model(messages)

invoke_chain("ice cream")

  LCEL  

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

prompt = ChatPromptTemplate.from_template(
    "Tell me a short joke about {topic}"
)
output_parser = StrOutputParser()
model = ChatOpenAI(model="gpt-3.5-turbo")
chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | model
    | output_parser
)
chain.invoke("ice cream")
  • Stream

If we want to stream results, we need to change our function:

  Without LCEL

from typing import Iterator

def stream_chat_model(messages: List[dict]) -> Iterator[str]:
    stream = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        stream=True,
    )
    for response in stream:
        content = response.choices[0].delta.content
        if content is not None:
            yield content

def stream_chain(topic: str) -> Iterator[str]:
    prompt_value = prompt_template.format(topic=topic)
    return stream_chat_model([{"role": "user", "content": prompt_value}])

for chunk in stream_chain("ice cream"):
    print(chunk, end="", flush=True)

  LCEL  

for chunk in chain.stream("ice cream"):
    print(chunk, end="", flush=True)
  • Batch

If we want to run a batch of inputs in parallel, we again need a new function:

  Without LCEL

from concurrent.futures import ThreadPoolExecutor

def batch_chain(topics: list) -> list:
    with ThreadPoolExecutor(max_workers=5) as executor:
        return list(executor.map(invoke_chain, topics))

batch_chain(["ice cream", "spaghetti", "dumplings"])

  LCEL  

chain.batch(["ice cream", "spaghetti", "dumplings"])
  • Async

If we need an asynchronous version:

  Without LCEL

async_client = openai.AsyncOpenAI()

async def acall_chat_model(messages: List[dict]) -> str:
    response = await async_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response.choices[0].message.content

async def ainvoke_chain(topic: str) -> str:
    prompt_value = prompt_template.format(topic=topic)
    messages = [{"role": "user", "content": prompt_value}]
    return await acall_chat_model(messages)

await ainvoke_chain("ice cream")

  LCEL  

await chain.ainvoke("ice cream")
  • LLM instead of chat model

If we want to use a completions endpoint instead of a chat endpoint:

  Without LCEL

def call_llm(prompt_value: str) -> str:
    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt=prompt_value,
    )
    return response.choices[0].text

def invoke_llm_chain(topic: str) -> str:
    prompt_value = prompt_template.format(topic=topic)
    return call_llm(prompt_value)

invoke_llm_chain("ice cream")

  LCEL  

from langchain_openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct")
llm_chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | llm
    | output_parser
)
llm_chain.invoke("ice cream")
  • Different model providers

If we want to use Anthropic instead of OpenAI:

  Without LCEL

import anthropic

anthropic_template = f"Human:\n\n{prompt_template}\n\nAssistant:"
anthropic_client = anthropic.Anthropic()

def call_anthropic(prompt_value: str) -> str:
    response = anthropic_client.completions.create(
        model="claude-2",
        prompt=prompt_value,
        max_tokens_to_sample=256,
    )
    return response.completion

def invoke_anthropic_chain(topic: str) -> str:
    prompt_value = anthropic_template.format(topic=topic)
    return call_anthropic(prompt_value)

invoke_anthropic_chain("ice cream")

  LCEL  

from langchain_anthropic import ChatAnthropic

anthropic = ChatAnthropic(model="claude-2")
anthropic_chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | anthropic
    | output_parser
)
anthropic_chain.invoke("ice cream")
  • Runtime configurability

If we want to make the choice of chat model or LLM configurable at runtime:

  Without LCEL

def invoke_configurable_chain(
    topic: str,
    *,
    model: str = "chat_openai"
) -> str:
    if model == "chat_openai":
        return invoke_chain(topic)
    elif model == "openai":
        return invoke_llm_chain(topic)
    elif model == "anthropic":
        return invoke_anthropic_chain(topic)
    else:
        raise ValueError(
            f"Received invalid model '{model}'."
            " Expected one of chat_openai, openai, anthropic"
        )

def stream_configurable_chain(
    topic: str,
    *,
    model: str = "chat_openai"
) -> Iterator[str]:
    if model == "chat_openai":
        return stream_chain(topic)
    elif model == "openai":
        # Note we haven't implemented this yet.
        return stream_llm_chain(topic)
    elif model == "anthropic":
        # Note we haven't implemented this yet
        return stream_anthropic_chain(topic)
    else:
        raise ValueError(
            f"Received invalid model '{model}'."
            " Expected one of chat_openai, openai, anthropic"
        )

def batch_configurable_chain(
    topics: List[str],
    *,
    model: str = "chat_openai"
) -> List[str]:
    # You get the idea
    ...

async def abatch_configurable_chain(
    topics: List[str],
    *,
    model: str = "chat_openai"
) -> List[str]:
    ...

invoke_configurable_chain("ice cream", model="openai")
stream = stream_configurable_chain(
    "ice_cream",
    model="anthropic"
)
for chunk in stream:
    print(chunk, end="", flush=True)

# batch_configurable_chain(["ice cream", "spaghetti", "dumplings"])
# await ainvoke_configurable_chain("ice cream")

  LCEL  

from langchain_core.runnables import ConfigurableField

configurable_model = model.configurable_alternatives(
    ConfigurableField(id="model"),
    default_key="chat_openai",
    openai=llm,
    anthropic=anthropic,
)
configurable_chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | configurable_model
    | output_parser
)

configurable_chain.invoke(
    "ice cream",
    config={"model": "openai"}
)
stream = configurable_chain.stream(
    "ice cream",
    config={"model": "anthropic"}
)
for chunk in stream:
    print(chunk, end="", flush=True)

configurable_chain.batch(["ice cream", "spaghetti", "dumplings"])
# await configurable_chain.ainvoke("ice cream")
  • Logging

If we want to log our intermediate results:

  Without LCEL

We print the intermediate steps for illustrative purposes:

def invoke_anthropic_chain_with_logging(topic: str) -> str:
    print(f"Input: {topic}")
    prompt_value = anthropic_template.format(topic=topic)
    print(f"Formatted prompt: {prompt_value}")
    output = call_anthropic(prompt_value)
    print(f"Output: {output}")
    return output

invoke_anthropic_chain_with_logging("ice cream")

  LCEL  

Every component has built-in integration with LangSmith. If we set the following two environment variables, all chain traces are logged to LangSmith.

import os

os.environ["LANGCHAIN_API_KEY"] = "..."
os.environ["LANGCHAIN_TRACING_V2"] = "true"

anthropic_chain.invoke("ice cream")

Here is what our LangSmith trace looks like: https://smith.langchain.com/public/e4de52f8-bcd9-4732-b950-deee4b04e313/r

  • Fallbacks

  Without LCEL

def invoke_chain_with_fallback(topic: str) -> str:
    try:
        return invoke_chain(topic)
    except Exception:
        return invoke_anthropic_chain(topic)

async def ainvoke_chain_with_fallback(topic: str) -> str:
    try:
        return await ainvoke_chain(topic)
    except Exception:
        # Note: we haven't actually implemented this.
        return await ainvoke_anthropic_chain(topic)

async def batch_chain_with_fallback(topics: List[str]) -> str:
    try:
        return batch_chain(topics)
    except Exception:
        # Note: we haven't actually implemented this.
        return batch_anthropic_chain(topics)

invoke_chain_with_fallback("ice cream")
# await ainvoke_chain_with_fallback("ice cream")
batch_chain_with_fallback(["ice cream", "spaghetti", "dumplings"])

  LCEL  

fallback_chain = chain.with_fallbacks([anthropic_chain])

fallback_chain.invoke("ice cream")
# await fallback_chain.ainvoke("ice cream")
fallback_chain.batch(["ice cream", "spaghetti", "dumplings"])
  • Full code comparison

Even in this simple case, our LCEL chain succinctly packs in a lot of functionality. As chains become more complex, this becomes especially valuable.

  Without LCEL

from concurrent.futures import ThreadPoolExecutor
from typing import Iterator, List, Tuple

import anthropic
import openai

prompt_template = "Tell me a short joke about {topic}"
anthropic_template = f"Human:\n\n{prompt_template}\n\nAssistant:"
client = openai.OpenAI()
async_client = openai.AsyncOpenAI()
anthropic_client = anthropic.Anthropic()

def call_chat_model(messages: List[dict]) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response.choices[0].message.content

def invoke_chain(topic: str) -> str:
    print(f"Input: {topic}")
    prompt_value = prompt_template.format(topic=topic)
    print(f"Formatted prompt: {prompt_value}")
    messages = [{"role": "user", "content": prompt_value}]
    output = call_chat_model(messages)
    print(f"Output: {output}")
    return output

def stream_chat_model(messages: List[dict]) -> Iterator[str]:
    stream = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        stream=True,
    )
    for response in stream:
        content = response.choices[0].delta.content
        if content is not None:
            yield content

def stream_chain(topic: str) -> Iterator[str]:
    print(f"Input: {topic}")
    prompt_value = prompt_template.format(topic=topic)
    print(f"Formatted prompt: {prompt_value}")
    stream = stream_chat_model([{"role": "user", "content": prompt_value}])
    for chunk in stream:
        print(f"Token: {chunk}", end="")
        yield chunk

def batch_chain(topics: list) -> list:
    with ThreadPoolExecutor(max_workers=5) as executor:
        return list(executor.map(invoke_chain, topics))

def call_llm(prompt_value: str) -> str:
    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt=prompt_value,
    )
    return response.choices[0].text

def invoke_llm_chain(topic: str) -> str:
    print(f"Input: {topic}")
    prompt_value = prompt_template.format(topic=topic)
    print(f"Formatted prompt: {prompt_value}")
    output = call_llm(prompt_value)
    print(f"Output: {output}")
    return output

def call_anthropic(prompt_value: str) -> str:
    response = anthropic_client.completions.create(
        model="claude-2",
        prompt=prompt_value,
        max_tokens_to_sample=256,
    )
    return response.completion

def invoke_anthropic_chain(topic: str) -> str:
    print(f"Input: {topic}")
    prompt_value = anthropic_template.format(topic=topic)
    print(f"Formatted prompt: {prompt_value}")
    output = call_anthropic(prompt_value)
    print(f"Output: {output}")
    return output

async def ainvoke_anthropic_chain(topic: str) -> str:
    ...

def stream_anthropic_chain(topic: str) -> Iterator[str]:
    ...

def batch_anthropic_chain(topics: List[str]) -> List[str]:
    ...

def invoke_configurable_chain(
    topic: str,
    *,
    model: str = "chat_openai"
) -> str:
    if model == "chat_openai":
        return invoke_chain(topic)
    elif model == "openai":
        return invoke_llm_chain(topic)
    elif model == "anthropic":
        return invoke_anthropic_chain(topic)
    else:
        raise ValueError(
            f"Received invalid model '{model}'."
            " Expected one of chat_openai, openai, anthropic"
        )

def stream_configurable_chain(
    topic: str,
    *,
    model: str = "chat_openai"
) -> Iterator[str]:
    if model == "chat_openai":
        return stream_chain(topic)
    elif model == "openai":
        # Note we haven't implemented this yet.
        return stream_llm_chain(topic)
    elif model == "anthropic":
        # Note we haven't implemented this yet
        return stream_anthropic_chain(topic)
    else:
        raise ValueError(
            f"Received invalid model '{model}'."
            " Expected one of chat_openai, openai, anthropic"
        )

def batch_configurable_chain(
    topics: List[str],
    *,
    model: str = "chat_openai"
) -> List[str]:
    ...

async def abatch_configurable_chain(
    topics: List[str],
    *,
    model: str = "chat_openai"
) -> List[str]:
    ...

def invoke_chain_with_fallback(topic: str) -> str:
    try:
        return invoke_chain(topic)
    except Exception:
        return invoke_anthropic_chain(topic)

async def ainvoke_chain_with_fallback(topic: str) -> str:
    try:
        return await ainvoke_chain(topic)
    except Exception:
        return await ainvoke_anthropic_chain(topic)

async def batch_chain_with_fallback(topics: List[str]) -> str:
    try:
        return batch_chain(topics)
    except Exception:
        return batch_anthropic_chain(topics)

  LCEL  

import os

from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI
from langchain_openai import OpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough, ConfigurableField

os.environ["LANGCHAIN_API_KEY"] = "..."
os.environ["LANGCHAIN_TRACING_V2"] = "true"

prompt = ChatPromptTemplate.from_template(
    "Tell me a short joke about {topic}"
)
chat_openai = ChatOpenAI(model="gpt-3.5-turbo")
openai = OpenAI(model="gpt-3.5-turbo-instruct")
anthropic = ChatAnthropic(model="claude-2")
model = (
    chat_openai
    .with_fallbacks([anthropic])
    .configurable_alternatives(
        ConfigurableField(id="model"),
        default_key="chat_openai",
        openai=openai,
        anthropic=anthropic,
    )
)
chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

V. Interface

        To make it as easy as possible to create custom chains, we have implemented a "Runnable" protocol. The Runnable protocol is implemented by most components. It is a standard interface that makes it easy to define custom chains as well as to invoke them in a standard way. The standard interface includes:

  • stream: stream back chunks of the response
  • invoke: call the chain on an input
  • batch: call the chain on a list of inputs

        These also have corresponding async methods:

  • astream: stream back chunks of the response asynchronously
  • ainvoke: call the chain on an input asynchronously
  • abatch: call the chain on a list of inputs asynchronously
  • astream_log: stream back intermediate steps as they happen, in addition to the final response
  • astream_events: beta, stream events as they happen in the chain (introduced in langchain-core 0.1.14)

The input type and output type vary by component:

Component     | Input Type                                             | Output Type
Prompt        | Dictionary                                             | PromptValue
ChatModel     | Single string, list of chat messages or a PromptValue  | ChatMessage
LLM           | Single string, list of chat messages or a PromptValue  | String
OutputParser  | The output of an LLM or ChatModel                      | Depends on the parser
Retriever     | Single string                                          | List of Documents
Tool          | Single string or dictionary, depending on the tool     | Depends on the tool

All runnables expose input and output schemas so you can inspect the inputs and outputs:

        - input_schema: an input Pydantic model auto-generated from the structure of the Runnable

        - output_schema: an output Pydantic model auto-generated from the structure of the Runnable

Let's take a look at these methods. To do that, we'll create a super simple PromptTemplate + ChatModel chain.

pip install --upgrade --quiet langchain-core langchain-community langchain-openai
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

model = ChatOpenAI()
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chain = prompt | model

Input Schema

A description of the inputs accepted by a Runnable. It is a Pydantic model dynamically generated from the structure of any Runnable. You can call .schema() on it to obtain a JSONSchema representation.

# The input schema of the chain is the input schema of its first part, the prompt.
chain.input_schema.schema()

{'title': 'PromptInput',
 'type': 'object',
 'properties': {'topic': {'title': 'Topic', 'type': 'string'}}}

prompt.input_schema.schema()

{'title': 'PromptInput',
 'type': 'object',
 'properties': {'topic': {'title': 'Topic', 'type': 'string'}}}

model.input_schema.schema()

{'title': 'ChatOpenAIInput',
 'anyOf': [{'type': 'string'},
 {'$ref': '#/definitions/StringPromptValue'},
 {'$ref': '#/definitions/ChatPromptValueConcrete'},
 {'type': 'array',
 'items': {'anyOf': [{'$ref': '#/definitions/AIMessage'},
 {'$ref': '#/definitions/HumanMessage'},
 {'$ref': '#/definitions/ChatMessage'},
 {'$ref': '#/definitions/SystemMessage'},
 {'$ref': '#/definitions/FunctionMessage'},
 {'$ref': '#/definitions/ToolMessage'}]}}],
 'definitions': {'StringPromptValue': {'title': 'StringPromptValue',
 'description': 'String prompt value.',
 'type': 'object',
 'properties': {'text': {'title': 'Text', 'type': 'string'},
 'type': {'title': 'Type',
 'default': 'StringPromptValue',
 'enum': ['StringPromptValue'],
 'type': 'string'}},
 'required': ['text']},
 'AIMessage': {'title': 'AIMessage',
 'description': 'A Message from an AI.',
 'type': 'object',
 'properties': {'content': {'title': 'Content',
 'anyOf': [{'type': 'string'},
 {'type': 'array',
 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},
 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
 'type': {'title': 'Type',
 'default': 'ai',
 'enum': ['ai'],
 'type': 'string'},
 'example': {'title': 'Example', 'default': False, 'type': 'boolean'}},
 'required': ['content']},
 'HumanMessage': {'title': 'HumanMessage',
 'description': 'A Message from a human.',
 'type': 'object',
 'properties': {'content': {'title': 'Content',
 'anyOf': [{'type': 'string'},
 {'type': 'array',
 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},
 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
 'type': {'title': 'Type',
 'default': 'human',
 'enum': ['human'],
 'type': 'string'},
 'example': {'title': 'Example', 'default': False, 'type': 'boolean'}},
 'required': ['content']},
 'ChatMessage': {'title': 'ChatMessage',
 'description': 'A Message that can be assigned an arbitrary speaker (i.e. role).',
 'type': 'object',
 'properties': {'content': {'title': 'Content',
 'anyOf': [{'type': 'string'},
 {'type': 'array',
 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},
 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
 'type': {'title': 'Type',
 'default': 'chat',
 'enum': ['chat'],
 'type': 'string'},
 'role': {'title': 'Role', 'type': 'string'}},
 'required': ['content', 'role']},
 'SystemMessage': {'title': 'SystemMessage',
 'description': 'A Message for priming AI behavior, usually passed in as the first of a sequence\nof input messages.',
 'type': 'object',
 'properties': {'content': {'title': 'Content',
 'anyOf': [{'type': 'string'},
 {'type': 'array',
 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},
 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
 'type': {'title': 'Type',
 'default': 'system',
 'enum': ['system'],
 'type': 'string'}},
 'required': ['content']},
 'FunctionMessage': {'title': 'FunctionMessage',
 'description': 'A Message for passing the result of executing a function back to a model.',
 'type': 'object',
 'properties': {'content': {'title': 'Content',
 'anyOf': [{'type': 'string'},
 {'type': 'array',
 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},
 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
 'type': {'title': 'Type',
 'default': 'function',
 'enum': ['function'],
 'type': 'string'},
 'name': {'title': 'Name', 'type': 'string'}},
 'required': ['content', 'name']},
 'ToolMessage': {'title': 'ToolMessage',
 'description': 'A Message for passing the result of executing a tool back to a model.',
 'type': 'object',
 'properties': {'content': {'title': 'Content',
 'anyOf': [{'type': 'string'},
 {'type': 'array',
 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},
 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
 'type': {'title': 'Type',
 'default': 'tool',
 'enum': ['tool'],
 'type': 'string'},
 'tool_call_id': {'title': 'Tool Call Id', 'type': 'string'}},
 'required': ['content', 'tool_call_id']},
 'ChatPromptValueConcrete': {'title': 'ChatPromptValueConcrete',
 'description': 'Chat prompt value which explicitly lists out the message types it accepts.\nFor use in external schemas.',
 'type': 'object',
 'properties': {'messages': {'title': 'Messages',
 'type': 'array',
 'items': {'anyOf': [{'$ref': '#/definitions/AIMessage'},
 {'$ref': '#/definitions/HumanMessage'},
 {'$ref': '#/definitions/ChatMessage'},
 {'$ref': '#/definitions/SystemMessage'},
 {'$ref': '#/definitions/FunctionMessage'},
 {'$ref': '#/definitions/ToolMessage'}]}},
 'type': {'title': 'Type',
 'default': 'ChatPromptValueConcrete',
 'enum': ['ChatPromptValueConcrete'],
 'type': 'string'}},
 'required': ['messages']}}}

Output Schema

A description of the outputs produced by a Runnable. It is a Pydantic model dynamically generated from the structure of any Runnable. You can call .schema() on it to obtain a JSONSchema representation.

# The output schema of the chain is the output schema of its last part, in this case a ChatModel, which outputs a ChatMessage.
chain.output_schema.schema()

{'title': 'ChatOpenAIOutput',
 'anyOf': [{'$ref': '#/definitions/AIMessage'},
 {'$ref': '#/definitions/HumanMessage'},
 {'$ref': '#/definitions/ChatMessage'},
 {'$ref': '#/definitions/SystemMessage'},
 {'$ref': '#/definitions/FunctionMessage'},
 {'$ref': '#/definitions/ToolMessage'}],
 'definitions': {'AIMessage': {'title': 'AIMessage',
 'description': 'A Message from an AI.',
 'type': 'object',
 'properties': {'content': {'title': 'Content',
 'anyOf': [{'type': 'string'},
 {'type': 'array',
 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},
 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
 'type': {'title': 'Type',
 'default': 'ai',
 'enum': ['ai'],
 'type': 'string'},
 'example': {'title': 'Example', 'default': False, 'type': 'boolean'}},
 'required': ['content']},
 'HumanMessage': {'title': 'HumanMessage',
 'description': 'A Message from a human.',
 'type': 'object',
 'properties': {'content': {'title': 'Content',
 'anyOf': [{'type': 'string'},
 {'type': 'array',
 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},
 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
 'type': {'title': 'Type',
 'default': 'human',
 'enum': ['human'],
 'type': 'string'},
 'example': {'title': 'Example', 'default': False, 'type': 'boolean'}},
 'required': ['content']},
 'ChatMessage': {'title': 'ChatMessage',
 'description': 'A Message that can be assigned an arbitrary speaker (i.e. role).',
 'type': 'object',
 'properties': {'content': {'title': 'Content',
 'anyOf': [{'type': 'string'},
 {'type': 'array',
 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},
 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
 'type': {'title': 'Type',
 'default': 'chat',
 'enum': ['chat'],
 'type': 'string'},
 'role': {'title': 'Role', 'type': 'string'}},
 'required': ['content', 'role']},
 'SystemMessage': {'title': 'SystemMessage',
 'description': 'A Message for priming AI behavior, usually passed in as the first of a sequence\nof input messages.',
 'type': 'object',
 'properties': {'content': {'title': 'Content',
 'anyOf': [{'type': 'string'},
 {'type': 'array',
 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},
 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
 'type': {'title': 'Type',
 'default': 'system',
 'enum': ['system'],
 'type': 'string'}},
 'required': ['content']},
 'FunctionMessage': {'title': 'FunctionMessage',
 'description': 'A Message for passing the result of executing a function back to a model.',
 'type': 'object',
 'properties': {'content': {'title': 'Content',
 'anyOf': [{'type': 'string'},
 {'type': 'array',
 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},
 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
 'type': {'title': 'Type',
 'default': 'function',
 'enum': ['function'],
 'type': 'string'},
 'name': {'title': 'Name', 'type': 'string'}},
 'required': ['content', 'name']},
 'ToolMessage': {'title': 'ToolMessage',
 'description': 'A Message for passing the result of executing a tool back to a model.',
 'type': 'object',
 'properties': {'content': {'title': 'Content',
 'anyOf': [{'type': 'string'},
 {'type': 'array',
 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},
 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
 'type': {'title': 'Type',
 'default': 'tool',
 'enum': ['tool'],
 'type': 'string'},
 'tool_call_id': {'title': 'Tool Call Id', 'type': 'string'}},
 'required': ['content', 'tool_call_id']}}}

Stream

for s in chain.stream({"topic": "bears"}):
    print(s.content, end="", flush=True)

Sure, here's a bear-themed joke for you:
Why don't bears wear shoes?
Because they already have bear feet!

Invoke

chain.invoke({"topic": "bears"})
AIMessage(content="Why don't bears wear shoes? \n\nBecause they have bear feet!")

Batch

chain.batch([{"topic": "bears"}, {"topic": "cats"}])

[AIMessage(content="Sure, here's a bear joke for you:\n\nWhy don't bears wear shoes?\n\nBecause they already have bear feet!"),
 AIMessage(content="Why don't cats play poker in the wild?\n\nToo many cheetahs!")]

You can set the number of concurrent requests with the max_concurrency parameter in the config:

chain.batch([{"topic": "bears"}, {"topic": "cats"}], config={"max_concurrency": 5})
[AIMessage(content="Why don't bears wear shoes?\n\nBecause they have bear feet!"),
 AIMessage(content="Why don't cats play poker in the wild? Too many cheetahs!")]

Async Stream

async for s in chain.astream({"topic": "bears"}):
    print(s.content, end="", flush=True)

Why don't bears wear shoes?
Because they have bear feet!

Async Invoke

await chain.ainvoke({"topic": "bears"})
AIMessage(content="Why don't bears ever wear shoes?\n\nBecause they already have bear feet!")

Async Stream Events (beta)

Event streaming is a beta API and may change a little based on feedback.

Note: introduced in langchain-core 0.2.0.

For now, when using the astream_events API, make sure of the following so that everything works properly:

  • Use async throughout the code (including async tools, etc.).
  • Propagate callbacks if you define custom functions/runnables.
  • Whenever you use runnables without LCEL, make sure to call .astream() on LLMs rather than .ainvoke, to force the LLM to stream its tokens (see the sketch after this list).
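
As a hedged sketch of the last point (illustrative, not from the original article; the helper function and prompt are made up), a custom async function wrapped with RunnableLambda can call .astream() on the model and forward the config, so callbacks propagate and its tokens show up in astream_events:

from langchain_core.runnables import RunnableConfig, RunnableLambda

async def summarize(text: str, config: RunnableConfig) -> str:
    # Call .astream() (not .ainvoke()) and pass the config through so that
    # callbacks propagate and token-level events are emitted by astream_events.
    parts = []
    async for chunk in model.astream(f"Summarize in one line: {text}", config=config):
        parts.append(chunk.content)
    return "".join(parts)

summarize_runnable = RunnableLambda(summarize)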

Event Reference

        Below is a reference table showing some of the events that might be emitted by various Runnable objects. Definitions for some of those Runnables are included after the table.

        ⚠️ When streaming the inputs of a Runnable, the inputs will not be available until the input stream has been fully consumed. This means the inputs will be available on the corresponding end hook rather than on the start event.

event                | name             | chunk                           | input                                         | output
on_chat_model_start  | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} |
on_chat_model_stream | [model name]     | AIMessageChunk(content="hello") |                                               |
on_chat_model_end    | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} | {"generations": [...], "llm_output": None, ...}
on_llm_start         | [model name]     |                                 | {'input': 'hello'}                            |
on_llm_stream        | [model name]     | 'Hello'                         |                                               |
on_llm_end           | [model name]     |                                 | 'Hello human!'                                |
on_chain_start       | format_docs      |                                 |                                               |
on_chain_stream      | format_docs      | "hello world!, goodbye world!"  |                                               |
on_chain_end         | format_docs      |                                 | [Document(...)]                               | "hello world!, goodbye world!"
on_tool_start        | some_tool        |                                 | {"x": 1, "y": "2"}                            |
on_tool_stream       | some_tool        | {"x": 1, "y": "2"}              |                                               |
on_tool_end          | some_tool        |                                 |                                               | {"x": 1, "y": "2"}
on_retriever_start   | [retriever name] |                                 | {"query": "hello"}                            |
on_retriever_chunk   | [retriever name] | {documents: [...]}              |                                               |
on_retriever_end     | [retriever name] |                                 | {"query": "hello"}                            | {documents: [...]}
on_prompt_start      | [template_name]  |                                 | {"question": "hello"}                         |
on_prompt_end        | [template_name]  |                                 | {"question": "hello"}                         | ChatPromptValue(messages: [SystemMessage, ...])

Here are the definitions associated with the events shown above:

format_docs:

def format_docs(docs: List[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)

some_tool:

@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}

prompt:

template = ChatPromptTemplate.from_messages(
    [("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})

Let's define a new chain to show off the astream_events interface (and, later, the astream_log interface).

from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import OpenAIEmbeddings

template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

vectorstore = FAISS.from_texts(
    ["harrison worked at kensho"], embedding=OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()

retrieval_chain = (
    {
        "context": retriever.with_config(run_name="Docs"),
        "question": RunnablePassthrough(),
    }
    | prompt
    | model.with_config(run_name="my_llm")
    | StrOutputParser()
)

Now let's use astream_events to get events from the retriever and the LLM.

async for event in retrieval_chain.astream_events(
    "where did harrison work?", version="v1", include_names=["Docs", "my_llm"]
):
    kind = event["event"]
    if kind == "on_chat_model_stream":
        print(event["data"]["chunk"].content, end="|")
    elif kind in {"on_chat_model_start"}:
        print()
        print("Streaming LLM:")
    elif kind in {"on_chat_model_end"}:
        print()
        print("Done streaming LLM.")
    elif kind == "on_retriever_end":
        print("--")
        print("Retrieved the following documents:")
        print(event["data"]["output"]["documents"])
    elif kind == "on_tool_end":
        print(f"Ended tool: {event['name']}")
    else:
        pass

/home/eugene/src/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: This API is in beta and may change in the future.
  warn_beta(

--
Retrieved the following documents:
[Document(page_content='harrison worked at kensho')]
Streaming LLM:
|H|arrison| worked| at| Kens|ho|.||
Done streaming LLM.

Parallelism

        Let's take a look at how LangChain Expression Language supports parallel requests. For example, when using RunnableParallel (often written as a dictionary), each element is executed in parallel.

from langchain_core.runnables import RunnableParallel

chain1 = ChatPromptTemplate.from_template("tell me a joke about {topic}") | model
chain2 = (
    ChatPromptTemplate.from_template("write a short (2 line) poem about {topic}")
    | model
)
combined = RunnableParallel(joke=chain1, poem=chain2)

%%time
chain1.invoke({"topic": "bears"})

CPU times: user 18 ms, sys: 1.27 ms, total: 19.3 ms
Wall time: 692 ms

AIMessage(content="Why don't bears wear shoes?\n\nBecause they already have bear feet!") 

%%time
chain2.invoke({"topic": "bears"})

CPU times: user 10.5 ms, sys: 166 µs, total: 10.7 ms
Wall time: 579 ms

 AIMessage(content="In forest's embrace,\nMajestic bears pace.")

%%time
combined.invoke({"topic": "bears"})

 CPU times: user 32 ms, sys: 2.59 ms, total: 34.6 ms
Wall time: 816 ms

{'joke': AIMessage(content="Sure, here's a bear-related joke for you:\n\nWhy did the bear bring a ladder to the bar?\n\nBecause he heard the drinks were on the house!"),
 'poem': AIMessage(content="In wilderness they roam,\nMajestic strength, nature's throne.")} 

Parallelism on batches

Parallelism can be combined with other runnables. Let's try using parallelism together with batches.

%%time
chain1.batch([{"topic": "bears"}, {"topic": "cats"}])

CPU times: user 17.3 ms, sys: 4.84 ms, total: 22.2 ms
Wall time: 628 ms

[AIMessage(content="Why don't bears wear shoes?\n\nBecause they have bear feet!"),
AIMessage(content="Why don't cats play poker in the wild?\n\nToo many cheetahs!")] 

%%time
chain2.batch([{"topic": "bears"}, {"topic": "cats"}])

 CPU times: user 15.8 ms, sys: 3.83 ms, total: 19.7 ms
Wall time: 718 ms

[AIMessage(content='In the wild, bears roam,\nMajestic guardians of ancient home.'),
AIMessage(content='Whiskers grace, eyes gleam,\nCats dance through the moonbeam.')] 

%%time
combined.batch([{"topic": "bears"}, {"topic": "cats"}])

 CPU times: user 44.8 ms, sys: 3.17 ms, total: 48 ms
Wall time: 721 ms

[{'joke': AIMessage(content="Sure, here's a bear joke for you:\n\nWhy don't bears wear shoes?\n\nBecause they have bear feet!"),
  'poem': AIMessage(content="Majestic bears roam,\nNature's strength, beauty shown.")},
 {'joke': AIMessage(content="Why don't cats play poker in the wild?\n\nToo many cheetahs!"),
  'poem': AIMessage(content="Whiskers dance, eyes aglow,\nCats embrace the night's gentle flow.")}] 
