LangChain is an open-source framework for developing applications on top of language models, used to build end-to-end applications powered by large language models (LLMs). Its core idea is to provide a common interface to all kinds of LLM applications, simplifying development, along with a set of tools, components, and interfaces for building applications backed by LLMs and chat models. LangChain offers many components that reduce the difficulty of building LLM applications; this article introduces one of them: Memory.
Most LLMs expose a chat interface, but every API call to the model is a brand-new session. If we want to hold a multi-turn conversation with the model without resending the full prior context every time, we need a Memory to remember what was said earlier.
Memory is exactly that module: it helps developers quickly give their own applications a conversational "memory".
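The statelessness that Memory works around can be illustrated with a minimal sketch in plain Python (no LangChain; the `history` list and `build_prompt` helper are hypothetical names used only for illustration):

```python
# Each API call sees only the prompt you send it, so to get multi-turn
# behaviour you must resend the prior turns yourself on every call.
history = []  # list of (speaker, text) pairs maintained by hand

def build_prompt(history, new_input):
    """Concatenate all prior turns plus the new question into one prompt."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"Human: {new_input}")
    lines.append("AI:")
    return "\n".join(lines)

history.append(("Human", "Hi, my name is Andrew"))
history.append(("AI", "Hello Andrew!"))
prompt = build_prompt(history, "What is my name?")
# The prompt now carries the earlier turns -- exactly the bookkeeping
# that LangChain's Memory classes automate for us.
```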
ConversationBufferMemory is the simplest kind of memory: it records the entire conversation so far.
import os

from dotenv import load_dotenv, find_dotenv
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# read the OPENAI_API_KEY from a .env file
_ = load_dotenv(find_dotenv())

llm = ChatOpenAI(temperature=0.0)

# create a ConversationBufferMemory to cache the conversation
memory = ConversationBufferMemory()

# create a conversation chain
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True
)
Introduce yourself to the LLM through the ConversationChain by asking a first question:
conversation.predict(input="Hi, my name is Andrew")
The LLM answers the first question:
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi, my name is Andrew
AI:

> Finished chain.

"Hello Andrew! It's nice to meet you. How can I assist you today?"
Ask the LLM a second question:
conversation.predict(input="What is 1+1?")
The LLM answers the second question:
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, my name is Andrew
AI: Hello Andrew! It's nice to meet you. How can I assist you today?
Human: What is 1+1?
AI:
> Finished chain.
'1+1 is equal to 2.'
Ask the LLM about the information from the first question again; this time it answers correctly:
# the memory lets it recall the human's name from the earlier turn
conversation.predict(input="What is my name?")
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, my name is Andrew
AI: Hello Andrew! It's nice to meet you. How can I assist you today?
Human: What is 1+1?
AI: 1+1 is equal to 2.
Human: What is my name?
AI:
> Finished chain.
'Your name is Andrew.'
Inspect the memory's variables:
# load the cached variables
memory.load_memory_variables({})
{'history': "Human: Hi, my name is Andrew\nAI: Hello Andrew! It's nice to meet you. How can I assist you today?\nHuman: What is 1+1?\nAI: 1+1 is equal to 2.\nHuman: What is my name?\nAI: Your name is Andrew."}
We can also add content to the memory by hand:
# manually add content to the cache
memory.save_context({"input": "Hi"},
                    {"output": "What's up"})
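How `save_context` and `load_memory_variables` cooperate can be sketched in a few lines of plain Python. This is a simplified stand-in, not LangChain's actual implementation; the `BufferMemorySketch` class is a hypothetical name:

```python
class BufferMemorySketch:
    """Minimal stand-in mimicking ConversationBufferMemory's history string."""

    def __init__(self):
        self.turns = []  # list of (human_input, ai_output) pairs

    def save_context(self, inputs, outputs):
        # store one round of conversation
        self.turns.append((inputs["input"], outputs["output"]))

    def load_memory_variables(self, _):
        # render every stored round into the "Human: ...\nAI: ..." format
        history = "\n".join(f"Human: {i}\nAI: {o}" for i, o in self.turns)
        return {"history": history}

memory = BufferMemorySketch()
memory.save_context({"input": "Hi"}, {"output": "What's up"})
print(memory.load_memory_variables({}))
# {'history': "Human: Hi\nAI: What's up"}
```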
The ConversationBufferMemory above keeps growing as the conversation with the LLM continues, so the memory can become very large and may eventually exceed the LLM's token limit. ConversationBufferWindowMemory provides a sliding-window memory that keeps only the last k rounds of conversation (the intuition being that the further away an exchange is, the less relevant it probably is to the current topic).
from langchain.memory import ConversationBufferWindowMemory

# create a sliding-window conversation cache (k=1)
memory = ConversationBufferWindowMemory(k=1)

# add two rounds of conversation in a row
memory.save_context({"input": "Hi"},
                    {"output": "What's up"})
memory.save_context({"input": "Not much, just hanging"},
                    {"output": "Cool"})
After manually adding two rounds of conversation, inspecting the memory's variables shows that only the last round is kept (since we set k=1 above):
# only the most recent round of conversation is remembered
memory.load_memory_variables({})
{'history': 'Human: Not much, just hanging\nAI: Cool'}
Test the conversational ability through a ConversationChain:
# create a conversation chain with the window memory
llm = ChatOpenAI(temperature=0.0)
memory = ConversationBufferWindowMemory(k=1)
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=False
)
Ask the LLM the first question:
conversation.predict(input="Hi, my name is Andrew")
"Hello Andrew! It's nice to meet you. How can I assist you today?"
Ask the LLM a second question:
conversation.predict(input="What is 1+1?")
'1+1 is equal to 2.'
Ask the LLM about the first question again; this time it cannot answer:
# the memory only remembers the last round of conversation
conversation.predict(input="What is my name?")
"I'm sorry, but I don't have access to personal information."
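The forgetting above is just the window trim at work. A simplified sketch (again plain Python, not LangChain's code; `window` is a hypothetical helper) shows why the name is gone by the third question:

```python
def window(turns, k):
    """Keep only the last k (human, ai) exchanges, dropping older ones."""
    return turns[-k:] if k > 0 else []

turns = [
    ("Hi, my name is Andrew", "Hello Andrew! How can I assist you today?"),
    ("What is 1+1?", "1+1 is equal to 2."),
]
kept = window(turns, 1)
# Only the 1+1 exchange survives, so "What is my name?" arrives with a
# prompt that never mentions Andrew -- hence the model cannot answer.
print(kept)  # [('What is 1+1?', '1+1 is equal to 2.')]
```

Picking k is therefore a trade-off: a larger window remembers more context but spends more tokens on every call.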