
Four Ways to Use LangChain's Memory (Part 1)

What is LangChain

LangChain is an open-source framework for developing applications on top of language models, used to build end-to-end applications powered by large language models (LLMs). Its core idea is to provide a common interface for all kinds of LLM applications, simplifying development and offering a set of tools, components, and interfaces for creating applications backed by LLMs and chat models. LangChain ships many components that make it easier to develop LLM applications; this article introduces how to use one of them: Memory.

What Memory Does

Most LLMs expose a conversational API, but every call to that API is treated as a brand-new session. If we want to hold a multi-turn conversation with the model without repeating the earlier context on every call, we need a Memory to remember what has already been said.

Memory is exactly that module: it helps developers quickly give their applications a conversational "memory".
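To make the problem concrete, here is a minimal sketch (not from the original article; it reuses the same ChatOpenAI setup as the examples below): without a Memory component, each call to the model is an independent request, so the caller has to stitch the conversation history into every prompt by hand.

from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0.0)

# Each call is a brand-new session: the model keeps no state between calls.
llm.predict("Hi, my name is Andrew")
llm.predict("What is my name?")  # the model has no way to know

# Without Memory we would have to resend the accumulated history ourselves:
history = "Human: Hi, my name is Andrew\nAI: Hello Andrew!"
llm.predict(history + "\nHuman: What is my name?\nAI:")

This manual bookkeeping is exactly what the Memory classes below automate.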

Usage 1: Conversation buffer memory

Conversation buffer memory is the simplest kind of memory: it records the entire previous conversation.

import os
from dotenv import load_dotenv, find_dotenv
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# Load the OPENAI_API_KEY from a .env file
_ = load_dotenv(find_dotenv())

llm = ChatOpenAI(temperature=0.0)

# Create a ConversationBufferMemory buffer
memory = ConversationBufferMemory()

# Create a conversation chain
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True
)
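Under the hood, ConversationChain uses a stock prompt template with a {history} placeholder that the memory fills in on every call; this matches the verbose output shown below. A quick peek (not part of the original article):

print(conversation.prompt.template)
# The following is a friendly conversation between a human and an AI. ...
# Current conversation:
# {history}
# Human: {input}
# AI: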

Introduce yourself to the LLM through the ConversationChain by asking the first question:

conversation.predict(input="Hi, my name is Andrew")

The LLM answers the first question:

> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, my name is Andrew
AI:

> Finished chain.
"Hello Andrew! It's nice to meet you. How can I assist you today?"

Ask the LLM a second question:

conversation.predict(input="What is 1+1?")

The LLM answers the second question:

> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, my name is Andrew
AI: Hello Andrew! It's nice to meet you. How can I assist you today?
Human: What is 1+1?
AI:

> Finished chain.
'1+1 is equal to 2.'

Ask the LLM again about something from the first question; it answers correctly:

# The memory lets the model recall the human's name from the earlier turn
conversation.predict(input="What is my name?")

> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, my name is Andrew
AI: Hello Andrew! It's nice to meet you. How can I assist you today?
Human: What is 1+1?
AI: 1+1 is equal to 2.
Human: What is my name?
AI:

> Finished chain.
'Your name is Andrew.'

Inspect the memory's variables:

# Load the cached variables
memory.load_memory_variables({})

{'history': "Human: Hi, my name is Andrew\nAI: Hello Andrew! It's nice to meet you. How can I assist you today?\nHuman: What is 1+1?\nAI: 1+1 is equal to 2.\nHuman: What is my name?\nAI: Your name is Andrew."}

We can also add content to the memory manually:

# Manually add a turn to the buffer
memory.save_context({"input": "Hi"},
                    {"output": "What's up"})

Usage 2: Conversation buffer window memory

The Conversation buffer memory above keeps growing as the number of turns with the LLM grows; the memory can become very large and may eventually exceed the LLM's token limit. Conversation buffer window memory provides a sliding-window memory that keeps only the last k turns of the conversation (the intuition being that the further away a turn is, the less relevant it usually is to the current topic).

from langchain.memory import ConversationBufferWindowMemory

# Create a conversation buffer window (k=1)
memory = ConversationBufferWindowMemory(k=1)

# Add two turns of conversation in a row
memory.save_context({"input": "Hi"},
                    {"output": "What's up"})
memory.save_context({"input": "Not much, just hanging"},
                    {"output": "Cool"})

After manually adding two turns, inspecting the memory's variables shows that only the last turn is kept (we set k=1 above).

# Only the last turn of the conversation is remembered
memory.load_memory_variables({})

{'history': 'Human: Not much, just hanging\nAI: Cool'}
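Adding one more turn pushes the previous one out of the window: with k=1 the buffer always holds only the newest exchange (a small continuation sketch; the example dialogue below is made up for illustration):

memory.save_context({"input": "What are you up to later?"},
                    {"output": "Nothing in particular"})
memory.load_memory_variables({})
# {'history': 'Human: What are you up to later?\nAI: Nothing in particular'}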

Test the conversation ability through a ConversationChain:

# Create a conversation chain and attach the window memory
llm = ChatOpenAI(temperature=0.0)
memory = ConversationBufferWindowMemory(k=1)
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=False
)

Ask the LLM the first question:

conversation.predict(input="Hi, my name is Andrew")
"Hello Andrew! It's nice to meet you. How can I assist you today?"

Ask the LLM the second question:

conversation.predict(input="What is 1+1?")
'1+1 is equal to 2.'

Ask the LLM about something from the first question; this time it cannot answer:

# The memory only keeps the last turn of the conversation
conversation.predict(input="What is my name?")

"I'm sorry, but I don't have access to personal information."
