In the previous post, we learned how to use LangChain's document loaders to load documents into a standard format. Once the documents are loaded, the next step is to split them into smaller chunks. This process may look trivial at first, but there are subtleties and important considerations that can significantly affect the performance and accuracy of downstream tasks.
Document splitting matters because it ensures that semantically related content ends up in the same chunk. This is especially important when answering questions or performing other tasks that rely on the contextual information present in the documents.
Consider the following example: suppose we have a sentence about the Toyota Camry and its specifications. If we split that sentence naively, without regard for context, we may end up with one chunk containing part of the sentence and another chunk containing the rest. When we later try to answer a question about the Camry's specifications, neither chunk holds the complete information, leading to an incorrect or incomplete answer.
At the core of every text splitter in LangChain is the idea of splitting text into chunks of a specified size, with optional overlap between adjacent chunks. The figure below illustrates this:
The chunk_size corresponds to the size of each chunk and can be measured in characters or in tokens. The chunk_overlap is the portion of text shared between consecutive chunks, which allows context to be maintained across chunk boundaries.
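A minimal sketch of these two parameters in action (the values below are chosen purely for illustration):
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Chunks of at most 10 characters, with 3 characters shared between
# neighbouring chunks so that context carries across the boundary.
splitter = RecursiveCharacterTextSplitter(chunk_size=10, chunk_overlap=3)
print(splitter.split_text("abcdefghijklmnopqrstuvwxyz"))
# Each printed chunk is at most 10 characters long and starts with the
# last few characters of the previous chunk.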
LangChain offers several types of text splitters, each with its own strengths and use cases. Here are some of the most commonly used ones:
CharacterTextSplitter: a basic splitter that splits text on a single character separator, such as a space or a newline. It is useful when working with text that has no clear structure, or when you want to split at specific points.
RecursiveCharacterTextSplitter: intended for general-purpose text splitting, it splits text using a hierarchy of separators, starting with double newlines ("\n\n"), then single newlines ("\n"), then spaces, and finally individual characters. This approach tries to preserve the structure and coherence of the text by preferring splits at natural boundaries such as paragraphs and sentences.
TokenTextSplitter: splits text by token count rather than character count, which matters because many language models have context windows specified in tokens. A token is roughly four characters long, so splitting by tokens better reflects how a language model will actually process the text.
MarkdownHeaderTextSplitter: designed to split Markdown documents based on their header structure. It preserves the header metadata in the resulting chunks, enabling context-aware splitting and downstream tasks that exploit the document structure.
Let's walk through some examples to better understand how these text splitters work and how to use them effectively.
First, set up the environment by importing the necessary libraries and loading the OpenAI API key:
import os
from langchain_openai import OpenAI
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())
client = OpenAI(
api_key=os.getenv("OPENAI_API_KEY")
)
Next, import the two most commonly used text splitters:
from langchain_text_splitters import (
CharacterTextSplitter,
RecursiveCharacterTextSplitter,
)
Let's start with a few toy examples to see how these splitters behave:
chunk_size = 26
chunk_overlap = 4

r_splitter = RecursiveCharacterTextSplitter(
    chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
c_splitter = CharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)

text1 = "abcdefghijklmnopqrstuvwxyz"
print(r_splitter.split_text(text1))
# Output: ['abcdefghijklmnopqrstuvwxyz']

text2 = "abcdefghijklmnopqrstuvwxyzabcdefg"
print(r_splitter.split_text(text2))
# Output: ['abcdefghijklmnopqrstuvwxyz', 'wxyzabcdefg']

text3 = "a b c d e f g h i j k l m n o p q r s t u v w x y z"
print(r_splitter.split_text(text3))
# Output: ['a b c d e f g h i j k l m', 'l m n o p q r s t u v w x', 'w x y z']

print(c_splitter.split_text(text3))
# Output: ['a b c d e f g h i j k l m n o p q r s t u v w x y z']

# Set the separator for CharacterTextSplitter
c_splitter = CharacterTextSplitter(
    chunk_size=chunk_size, chunk_overlap=chunk_overlap, separator=" "
)
print(c_splitter.split_text(text3))
# Output: ['a b c d e f g h i j k l m', 'l m n o p q r s t u v w x', 'w x y z']
These examples show how the RecursiveCharacterTextSplitter splits text according to the specified chunk_size and chunk_overlap, while the CharacterTextSplitter splits on a single character separator (a space in this case).
Now let's try splitting a more realistic piece of text:
some_text = """When writing documents, writers will use document structure to group content. \ This can convey to the reader, which idea's are related. For example, closely related ideas \ are in sentances. Similar ideas are in paragraphs. Paragraphs form a document. \n\n \ Paragraphs are often delimited with a carriage return or two carriage returns. \ Carriage returns are the "backslash n" you see embedded in this string. \ Sentences have a period at the end, but also, have a space.\ and words are separated by space.""" c_splitter = CharacterTextSplitter(chunk_size=450, chunk_overlap=0, separator=" ") r_splitter = RecursiveCharacterTextSplitter( chunk_size=450, chunk_overlap=0, separators=["\n\n", "\n", " ", ""] ) chunks = c_splitter.split_text(some_text) print("Chunks: ", chunks) print("Length of chunks: ", len(chunks)) # Chunks: ['When writing documents, writers will use document structure to group content. This can convey to the reader, which idea's are related. For example, closely related ideas are in sentances. Similar ideas are in paragraphs. Paragraphs form a document. \n\n Paragraphs are often delimited with a carriage return or two carriage returns. Carriage returns are the "backslash n" you see embedded in this string. Sentences have a period at the end, but also,', 'have a space.and words are separated by space.'] # Length of chunks: 2 chunks = r_splitter.split_text(some_text) print("Chunks: ", chunks) print("Length of chunks: ", len(chunks)) # Chunks: ["When writing documents, writers will use document structure to group content. This can convey to the reader, which idea's are related. For example, closely related ideas are in sentances. Similar ideas are in paragraphs. Paragraphs form a document.", 'Paragraphs are often delimited with a carriage return or two carriage returns. Carriage returns are the "backslash n" you see embedded in this string. Sentences have a period at the end, but also, have a space.and words are separated by space.'] # Length of chunks: 2
In this example, the CharacterTextSplitter splits the text on spaces, while the RecursiveCharacterTextSplitter first tries to split on double newlines, then single newlines, then spaces, and finally individual characters.
We can also split real-world documents, such as PDFs and Notion databases:
from langchain.document_loaders import PyPDFLoader, NotionDirectoryLoader
# Load a PDF document
loader = PyPDFLoader("docs/cs229_lectures/MachineLearning-Lecture01.pdf")
pages = loader.load()
text_splitter = CharacterTextSplitter(
separator="\n", chunk_size=1000, chunk_overlap=150, length_function=len
)
docs = text_splitter.split_documents(pages)
print("Pages in the original document: ", len(pages))
print("Length of chunks after splitting pages: ", len(docs))
# Pages in the original document: 22
# Length of chunks after splitting pages: 353
This code loads a PDF document with the PyPDFLoader, splits its pages into smaller chunks using the CharacterTextSplitter, and prints the number of original pages and the number of resulting chunks.
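Because split_documents works on Document objects, each chunk also keeps the metadata of the page it came from. A quick check (a sketch that assumes the docs variable produced above):
# Each chunk carries over the source file and page number of the page it
# was cut from, which is handy for citing sources later on.
print(docs[0].page_content[:100])
print(docs[0].metadata)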
# Load a Notion database
loader = NotionDirectoryLoader("docs/Notion_DB")
notion_db = loader.load()
docs = text_splitter.split_documents(notion_db)
print("Pages in the original notion document: ", len(notion_db))
print("Length of chunks after splitting pages: ", len(docs))
# Pages in the original notion document: 52
# Length of chunks after splitting pages: 353
Similarly, we can load a Notion database with the NotionDirectoryLoader, split its documents into chunks, and print the number of original documents and the number of resulting chunks.
In addition to character-based splitting, LangChain also supports token-based splitting, which is useful when working with language models whose context windows are specified in tokens:
from langchain.text_splitter import TokenTextSplitter
text_splitter = TokenTextSplitter(chunk_size=1, chunk_overlap=0)
text1 = "foo bar bazzyfoo"
print(text_splitter.split_text(text1))
# Output: ['foo', ' bar', ' b', 'az', 'zy', 'foo']
text_splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0)
docs = text_splitter.split_documents(pages)
print(docs[0])
# Output: Document(page_content='MachineLearning-Lecture01 \n', metadata={'source': 'docs/cs229_lectures/MachineLearning-Lecture01.pdf', 'page': 0})
print(pages[0].metadata)
# Output: {'source': 'docs/cs229_lectures/MachineLearning-Lecture01.pdf', 'page': 0}
In this example, we use the TokenTextSplitter to split text by token count. We can adjust the chunk_size and chunk_overlap parameters to control the splitting behavior.
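If you prefer the recursive splitting strategy but want sizes measured in tokens, the splitters also provide a from_tiktoken_encoder constructor. A sketch, assuming the tiktoken package is installed (the encoding name and sizes below are illustrative):
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Recursive splitting, but with chunk_size and chunk_overlap counted in
# tokens of the chosen encoding rather than in characters.
token_r_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base", chunk_size=100, chunk_overlap=10
)
token_docs = token_r_splitter.split_documents(pages)
print(len(token_docs))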
LangChain also provides tools for context-aware splitting, which aim to preserve document structure and semantic context during the split. The MarkdownHeaderTextSplitter splits a Markdown document according to its header structure and keeps the header metadata in the resulting chunks:
from langchain.document_loaders import NotionDirectoryLoader
from langchain.text_splitter import MarkdownHeaderTextSplitter

markdown_document = """# Title\n\n \
## Chapter 1\n\n \
Hi this is Jim\n\n Hi this is Joe\n\n \
### Section \n\n \
Hi this is Lance \n\n ## Chapter 2\n\n \
Hi this is Molly"""

headers_to_split_on = [
    ("#", "Header 1"),
    ("##", "Header 2"),
    ("###", "Header 3"),
]

markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
md_header_splits = markdown_splitter.split_text(markdown_document)

print(md_header_splits[0])
# Output: Document(page_content='Hi this is Jim \nHi this is Joe', metadata={'Header 1': 'Title', 'Header 2': 'Chapter 1'})
print(md_header_splits[1])
# Output: Document(page_content='Hi this is Lance', metadata={'Header 1': 'Title', 'Header 2': 'Chapter 1', 'Header 3': 'Section'})
In this example, we define a Markdown document with headers and split it with the MarkdownHeaderTextSplitter according to its header structure. The resulting chunks retain the header metadata, which is useful for downstream tasks that take advantage of the document structure.
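Header-level sections can still be long, so a common follow-up is to run a character-level splitter over them; the header metadata is carried into every smaller chunk. A sketch, assuming the md_header_splits produced above:
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Further split each header-level section into smaller chunks while keeping
# the header metadata attached to every resulting chunk.
child_splitter = RecursiveCharacterTextSplitter(chunk_size=250, chunk_overlap=30)
final_splits = child_splitter.split_documents(md_header_splits)
print(len(final_splits))
print(final_splits[0].metadata)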
We can also apply this splitter to a Notion database:
loader = NotionDirectoryLoader("docs/Notion_DB")
docs = loader.load()
txt = " ".join([d.page_content for d in docs])
headers_to_split_on = [
("#", "Header 1"),
("##", "Header 2"),
]
markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
md_header_splits = markdown_splitter.split_text(txt)
print(md_header_splits[0])
This code loads a Notion database, joins the document contents into a single string, splits that string with the MarkdownHeaderTextSplitter, and prints the first resulting chunk.
Document splitting is a key step in a LangChain pipeline, because it ensures that semantically related content stays together in the same chunk. LangChain offers a variety of text splitters, each with its own strengths and use cases, so you can choose the one that best fits your needs.
Whether you are working with generic text, Markdown documents, code snippets, or other kinds of content, LangChain's text splitters give you the flexibility and customization options to split your documents effectively. By understanding the nuances and considerations involved in document splitting, you can optimize the performance and accuracy of your language models and downstream tasks.
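For code in particular, the RecursiveCharacterTextSplitter can be created with language-aware separators through its from_language constructor. A minimal sketch (the chunk size and the sample snippet are illustrative):
from langchain_text_splitters import Language, RecursiveCharacterTextSplitter

python_code = """
def hello_world():
    print("Hello, World!")

# Call the function
hello_world()
"""

# Separators tuned for Python: class and function definitions are tried
# before generic newline and whitespace boundaries.
python_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=50, chunk_overlap=0
)
python_docs = python_splitter.create_documents([python_code])
print(python_docs)
The resulting chunks tend to respect Python-specific boundaries such as function definitions, rather than cutting in the middle of a statement.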