
Large Models from Beginner to Application: LangChain Memory Types - Conversation Knowledge Graph Memory, Conversation Summary Memory, and Conversation Summary Buffer Memory


Category: master index of the Large Models from Beginner to Application series



Conversation Knowledge Graph Memory (ConversationKGMemory)

This type of memory uses a knowledge graph to recreate its memory of the conversation:

from langchain.memory import ConversationKGMemory
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
memory = ConversationKGMemory(llm=llm)
memory.save_context({"input": "say hi to sam"}, {"output": "who is sam"})
memory.save_context({"input": "sam is a friend"}, {"output": "okay"})
memory.load_memory_variables({"input": 'who is sam'})

Output:

{'history': 'On Sam: Sam is friend.'}

We can also get the history as a list of messages, which is useful if we are using this memory with a chat model:

memory = ConversationKGMemory(llm=llm, return_messages=True)
memory.save_context({"input": "say hi to sam"}, {"output": "who is sam"})
memory.save_context({"input": "sam is a friend"}, {"output": "okay"})
memory.load_memory_variables({"input": 'who is sam'})

Output:

{'history': [SystemMessage(content='On Sam: Sam is friend.', additional_kwargs={})]}

More modularly, we can also extract the current entities from a new message, using the previous messages as context:

memory.get_current_entities("what's Sams favorite color?")

Output:

['Sam']

We can likewise extract knowledge triplets from a new message in a modular way, again using the previous messages as context:

memory.get_knowledge_triplets("her favorite color is red")

Output:

[KnowledgeTriple(subject='Sam', predicate='favorite color', object_='red')]
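If we want to work with these triples beyond just reading them, the snippet below is a minimal sketch of feeding an extracted triple back into the memory's graph. It assumes the memory exposes its graph through a kg attribute (a NetworkxEntityGraph) with add_triple and get_triples methods, as recent LangChain releases do; none of these names appear in the output above, so verify them against your installed version.

# Minimal sketch: `kg`, `add_triple`, and `get_triples` are assumptions about the
# memory's internals (NetworkxEntityGraph in recent LangChain versions).
triples = memory.get_knowledge_triplets("her favorite color is red")
for triple in triples:
    memory.kg.add_triple(triple)   # store the extracted fact in the graph
print(memory.kg.get_triples())     # inspect everything the graph holds so far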
Using It in a Chain

Now let's use this memory in a chain:

llm = OpenAI(temperature=0)
from langchain.prompts.prompt import PromptTemplate
from langchain.chains import ConversationChain

template = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:

{history}

Conversation:
Human: {input}
AI:"""

prompt = PromptTemplate(
    input_variables=["history", "input"], template=template
)
conversation_with_kg = ConversationChain(
    llm=llm, 
    verbose=True, 
    prompt=prompt,
    memory=ConversationKGMemory(llm=llm)
)
conversation_with_kg.predict(input="Hi, what's up?")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. 
If the AI does not know the answer to a question, it truthfully says it does not know. The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:



Conversation:
Human: Hi, what's up?
AI:

> Finished chain.

Output:

" Hi there! I'm doing great. I'm currently in the process of learning about the world around me. I'm learning about different cultures, languages, and customs. It's really fascinating! How about you?"

Input:

conversation_with_kg.predict(input="My name is James and I'm helping Will. He's an engineer.")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. 
If the AI does not know the answer to a question, it truthfully says it does not know. The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:



Conversation:
Human: My name is James and I'm helping Will. He's an engineer.
AI:

> Finished chain.

Output:

" Hi James, it's nice to meet you. I'm an AI and I understand you're helping Will, the engineer. What kind of engineering does he do?"

Input:

conversation_with_kg.predict(input="What do you know about Will?")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. 
If the AI does not know the answer to a question, it truthfully says it does not know. The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:

On Will: Will is an engineer.

Conversation:
Human: What do you know about Will?
AI:

> Finished chain.

Output:

' Will is an engineer.'
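To see what the chain has actually stored, we can peek at the graph held by its memory. This is only a sketch that relies on the same assumed kg attribute mentioned earlier, and the exact triples returned depend on what the LLM extracted:

# Sketch: inspect the knowledge graph built up by the chain's memory.
# The `kg` attribute and `get_triples()` are assumptions about the memory's internals.
print(conversation_with_kg.memory.kg.get_triples())
# Expect something along the lines of a (Will, engineer) relation, though the
# exact subject/predicate split depends on the LLM's extraction.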

Conversation Summary Memory (ConversationSummaryMemory)

Now let's look at a slightly more complex memory type, ConversationSummaryMemory. This type of memory creates a summary of the conversation over time, which is useful for condensing information from the conversation. Let's first explore its basic functionality:

from langchain.memory import ConversationSummaryMemory, ChatMessageHistory
from langchain.llms import OpenAI

memory = ConversationSummaryMemory(llm=OpenAI(temperature=0))
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.load_memory_variables({})

Output:

{'history': '\nThe human greets the AI, to which the AI responds.'}

We can also get the history as a list of messages, which is useful if we are using this memory with a chat model:

memory = ConversationSummaryMemory(llm=OpenAI(temperature=0), return_messages=True)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.load_memory_variables({})

Output:

{'history': [SystemMessage(content='\nThe human greets the AI, to which the AI responds.', additional_kwargs={})]}

We can also use the predict_new_summary method directly:

messages = memory.chat_memory.messages
previous_summary = ""
memory.predict_new_summary(messages, previous_summary)

Output:

'\nThe human greets the AI, to which the AI responds.'
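The summarization prompt itself can also be swapped out. The sketch below passes a custom prompt to ConversationSummaryMemory; it assumes the prompt parameter is accepted and that the default summary and new_lines input variables are used, so double-check both against your installed version:

from langchain.prompts import PromptTemplate

# Sketch of a custom summarization prompt. The "summary" and "new_lines" variable
# names mirror LangChain's default SUMMARY_PROMPT (an assumption to verify).
custom_summary_prompt = PromptTemplate(
    input_variables=["summary", "new_lines"],
    template=(
        "Progressively summarize the conversation below in one short sentence.\n\n"
        "Current summary:\n{summary}\n\n"
        "New lines of conversation:\n{new_lines}\n\n"
        "New summary:"
    ),
)
memory = ConversationSummaryMemory(llm=OpenAI(temperature=0), prompt=custom_summary_prompt)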
Initializing with Messages

If we already have messages like these outside this class, we can easily initialize it with a ChatMessageHistory; a summary will be computed during loading.

history = ChatMessageHistory()
history.add_user_message("hi")
history.add_ai_message("hi there!")
memory = ConversationSummaryMemory.from_messages(llm=OpenAI(temperature=0), chat_memory=history, return_messages=True)
memory.buffer

Output:

'\nThe human greets the AI, to which the AI responds with a friendly greeting.'
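If a summary has already been generated earlier, we can also seed the memory with it directly and avoid the extra LLM call during loading. This is a sketch based on a pattern in the LangChain documentation; the buffer keyword is worth verifying in your installed version:

# Sketch: reuse a previously generated summary so loading does not trigger
# another summarization call (the `buffer` kwarg is assumed to be supported).
memory = ConversationSummaryMemory(
    llm=OpenAI(temperature=0),
    buffer="The human greets the AI, to which the AI responds with a friendly greeting.",
    chat_memory=history,
    return_messages=True,
)
memory.buffer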
Using It in a Conversation Chain

Let's walk through an example of using this memory in a conversation chain, again setting verbose=True so we can see the prompt.

from langchain.llms import OpenAI
from langchain.chains import ConversationChain
llm = OpenAI(temperature=0)
conversation_with_summary = ConversationChain(
    llm=llm, 
    memory=ConversationSummaryMemory(llm=OpenAI()),
    verbose=True
)
conversation_with_summary.predict(input="Hi, what's up?")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi, what's up?
AI:

> Finished chain.

Output:

" Hi there! I'm doing great. I'm currently helping a customer with a technical issue. How about you?"

Input:

conversation_with_summary.predict(input="Tell me more about it!")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

The human greeted the AI and asked how it was doing. The AI replied that it was doing great and was currently helping a customer with a technical issue.
Human: Tell me more about it!
AI:

> Finished chain.

Output:

" Sure! The customer is having trouble with their computer not connecting to the internet. I'm helping them troubleshoot the issue and figure out what the problem is. So far, we've tried resetting the router and checking the network settings, but the issue still persists. We're currently looking into other possible solutions."

Input:

conversation_with_summary.predict(input="Very cool -- what is the scope of the project?")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

The human greeted the AI and asked how it was doing. The AI replied that it was doing great and was currently helping a customer with a technical issue where their computer was not connecting to the internet. The AI was troubleshooting the issue and had already tried resetting the router and checking the network settings, but the issue still persisted and they were looking into other possible solutions.
Human: Very cool -- what is the scope of the project?
AI:

> Finished chain.

Output:

" The scope of the project is to troubleshoot the customer's computer issue and find a solution that will allow them to connect to the internet. We are currently exploring different possibilities and have already tried resetting the router and checking the network settings, but the issue still persists."
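At any point we can also look at the running summary the chain's memory is maintaining, using the same buffer attribute we read earlier:

# The running summary kept by the chain's memory (same `buffer` attribute as above).
print(conversation_with_summary.memory.buffer)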

Conversation Summary Buffer Memory (ConversationSummaryBufferMemory)

ConversationSummaryBufferMemory combines the ideas of ConversationBufferMemory and ConversationSummaryMemory. It keeps a buffer of the most recent interactions in memory and, rather than discarding older interactions entirely, compiles them into a summary and uses both. Unlike the previous implementation, it uses token length rather than the number of interactions to determine when to flush interactions.

from langchain.memory import ConversationSummaryBufferMemory
from langchain.llms import OpenAI
llm = OpenAI()
memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
memory.load_memory_variables({})

Output:

{'history': 'System: \nThe human says "hi", and the AI responds with "whats up".\nHuman: not much you\nAI: not much'}
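The max_token_limit behaviour can be stated roughly as: once the buffered messages exceed the limit (measured in tokens), the oldest ones are popped off and folded into the summary. The helper below only illustrates that rule; it is an approximation rather than the library's internal code, and it assumes llm.get_num_tokens_from_messages is available:

# Rough illustration of the flush condition (an approximation, not library internals).
def buffer_over_limit(memory: ConversationSummaryBufferMemory) -> bool:
    buffered = memory.chat_memory.messages
    return memory.llm.get_num_tokens_from_messages(buffered) > memory.max_token_limit

# After save_context the memory has already pruned itself, so this is normally False.
buffer_over_limit(memory)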

We can also get the history as a list of messages, which is useful if we are using this memory with a chat model:

memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10, return_messages=True)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})

We can also use the predict_new_summary method directly:

messages = memory.chat_memory.messages
previous_summary = ""
memory.predict_new_summary(messages, previous_summary)

Output:

'\nThe human and AI state that they are not doing much.'
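We can also see how the memory has split the conversation between the compressed summary and the raw recent turns. The moving_summary_buffer attribute name below comes from older LangChain releases and may differ in yours, so treat this as a sketch:

# Sketch: the summarized portion vs. the raw messages still held in the buffer.
print(memory.moving_summary_buffer)           # the part that has been compressed
for message in memory.chat_memory.messages:   # the recent turns kept verbatim
    print(type(message).__name__, ":", message.content)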
Using It in a Chain

Let's walk through an example of using ConversationSummaryBufferMemory in a chain, again setting verbose=True so we can see the prompts:

from langchain.chains import ConversationChain
conversation_with_summary = ConversationChain(
    llm=llm, 
    # We set a very low max_token_limit for the purposes of testing.
    memory=ConversationSummaryBufferMemory(llm=OpenAI(), max_token_limit=40),
    verbose=True
)
conversation_with_summary.predict(input="Hi, what's up?")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi, what's up?
AI:

> Finished chain.

Output:

" Hi there! I'm doing great. I'm learning about the latest advances in artificial intelligence. What about you?"

Input:

conversation_with_summary.predict(input="Just working on writing some documentation!")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, what's up?
AI:  Hi there! I'm doing great. I'm spending some time learning about the latest developments in AI technology. How about you?
Human: Just working on writing some documentation!
AI:

> Finished chain.

Output:

' That sounds like a great use of your time. Do you have experience with writing documentation?'

Input:

# We can see here that there is a summary of the conversation and then some previous interactions
conversation_with_summary.predict(input="For LangChain! Have you heard of it?")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
System: 
The human asked the AI what it was up to and the AI responded that it was learning about the latest developments in AI technology.
Human: Just working on writing some documentation!
AI:  That sounds like a great use of your time. Do you have experience with writing documentation?
Human: For LangChain! Have you heard of it?
AI:

> Finished chain.

Output:

" No, I haven't heard of LangChain. Can you tell me more about it?"

Input:

# We can see here that the summary and the buffer are updated
conversation_with_summary.predict(input="Haha nope, although a lot of people confuse it for that")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
System: 
The human asked the AI what it was up to and the AI responded that it was learning about the latest developments in AI technology. The human then mentioned they were writing documentation, to which the AI responded that it sounded like a great use of their time and asked if they had experience with writing documentation.
Human: For LangChain! Have you heard of it?
AI:  No, I haven't heard of LangChain. Can you tell me more about it?
Human: Haha nope, although a lot of people confuse it for that
AI:

> Finished chain.

Output:

' Oh, okay. What is LangChain?'
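Finally, just as with the standalone memory earlier, we can call load_memory_variables({}) on the chain's memory to see exactly what would be injected into the next prompt:

# Check what the chain's memory would hand to the next prompt.
conversation_with_summary.memory.load_memory_variables({})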

