This is the simplest method. It splits on a given character sequence (default "\n\n") and measures chunk length by the number of characters.
%pip install -qU langchain-text-splitters
# This is a long document we can split up.
with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
from langchain_text_splitters import CharacterTextSplitter
text_splitter = CharacterTextSplitter(
    separator="\n\n",
    chunk_size=1000,
    chunk_overlap=200,
    length_function=len,
    is_separator_regex=False,
)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.'
Here's an example of passing metadata along with the documents; notice that it is split along with the documents.
metadatas = [{"document": 1}, {"document": 2}]
documents = text_splitter.create_documents(
    [state_of_the_union, state_of_the_union], metadatas=metadatas
)
print(documents[0])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' metadata={'document': 1}
text_splitter.split_text(state_of_the_union)[0]
'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.'
This text splitter is the recommended one for generic text. It is parameterized by a list of characters and tries to split on them, in order, until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those are generally the most semantically related pieces of text.
%pip install -qU langchain-text-splitters
# This is a long document we can split up.
with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
from langchain_text_splitters import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
    # Set a really small chunk size, just to show.
    chunk_size=100,
    chunk_overlap=20,
    length_function=len,
    is_separator_regex=False,
)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and'
page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.'
text_splitter.split_text(state_of_the_union)[:2]
['Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and',
'of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.']
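The separator list itself can also be customized when the defaults don't suit your text. A minimal sketch, assuming the separators keyword argument of RecursiveCharacterTextSplitter (the separator values below are just an illustration):
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Fall back from paragraph breaks to line breaks to sentence ends to spaces.
custom_splitter = RecursiveCharacterTextSplitter(
    separators=["\n\n", "\n", ". ", " ", ""],  # illustrative; tried left to right
    chunk_size=100,
    chunk_overlap=20,
)
custom_texts = custom_splitter.split_text(state_of_the_union)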
Language models have a token limit, which you should not exceed. When you split your text into chunks, it is therefore a good idea to count the number of tokens. There are many tokenizers; when counting tokens in your text, use the same tokenizer as the language model does.
tiktoken is a fast BPE tokenizer created by OpenAI. We can use it to estimate the number of tokens used; the count is likely to be more accurate for OpenAI models. Here the text is split on a character, and the chunk size is measured by the tiktoken tokenizer.
%pip install --upgrade --quiet langchain-text-splitters tiktoken
# This is a long document we can split up.
with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
from langchain_text_splitters import CharacterTextSplitter
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=100, chunk_overlap=0
)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Last year COVID-19 kept us apart. This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
Note that if we use CharacterTextSplitter.from_tiktoken_encoder, the text is only split by the CharacterTextSplitter and the tiktoken tokenizer is used to merge the splits. This means a split can be larger than the chunk size as measured by the tiktoken tokenizer. We can use RecursiveCharacterTextSplitter.from_tiktoken_encoder to make sure splits are not larger than the token chunk size allowed by the language model; any split that exceeds the limit is split again recursively.
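A minimal sketch of that recursive, token-aware variant (the model_name value below is only illustrative; use the model whose tokenizer you are targeting):
from langchain_text_splitters import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    model_name="gpt-4",  # illustrative choice; tiktoken picks the matching encoding
    chunk_size=100,
    chunk_overlap=0,
)
texts = text_splitter.split_text(state_of_the_union)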
We can also load a tiktoken splitter directly, which ensures each split is smaller than the chunk size.
from langchain_text_splitters import TokenTextSplitter
text_splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
spaCy is an open-source software library for advanced natural language processing, written in Python and Cython. As an alternative to NLTK, the text can be split using the spaCy tokenizer.
%pip install --upgrade --quiet spacy
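SpacyTextSplitter loads a spaCy pipeline under the hood (assumed here to be the default en_core_web_sm), which has to be downloaded once before the splitter can use it:
# One-time download of the assumed default pipeline
!python -m spacy download en_core_web_sm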
# This is a long document we can split up.
with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
from langchain_text_splitters import SpacyTextSplitter
text_splitter = SpacyTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.
Members of Congress and the Cabinet.
Justices of the Supreme Court.
My fellow Americans.
...
The SentenceTransformersTokenTextSplitter is a specialized text splitter for use with sentence-transformer models. Its default behaviour is to split the text into chunks that fit the token window of the sentence-transformer model you want to use.
from langchain_text_splitters import SentenceTransformersTokenTextSplitter
splitter = SentenceTransformersTokenTextSplitter(chunk_overlap=0)
text = "Lorem "
count_start_and_stop_tokens = 2
text_token_count = splitter.count_tokens(text=text) - count_start_and_stop_tokens
print(text_token_count)
2
token_multiplier = splitter.maximum_tokens_per_chunk // text_token_count + 1
# `text_to_split` does not fit in a single chunk
text_to_split = text * token_multiplier
print(f"tokens in text to split: {splitter.count_tokens(text=text_to_split)}")
tokens in text to split: 514
text_chunks = splitter.split_text(text=text_to_split)
print(text_chunks[1])
lorem
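By default the splitter is built around the sentence-transformers/all-mpnet-base-v2 model. A sketch of pointing it at an explicit model and token window, assuming the model_name and tokens_per_chunk parameters:
from langchain_text_splitters import SentenceTransformersTokenTextSplitter

# Hypothetical configuration; keep tokens_per_chunk within the chosen model's window
splitter = SentenceTransformersTokenTextSplitter(
    model_name="sentence-transformers/all-mpnet-base-v2",
    tokens_per_chunk=256,
    chunk_overlap=0,
)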
The Natural Language Toolkit, more commonly known as NLTK, is a suite of libraries and programs for symbolic and statistical natural language processing (NLP) of English, written in Python. Rather than just splitting on "\n\n", we can use NLTK to split based on NLTK tokenizers.
# pip install nltk
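NLTKTextSplitter relies on NLTK's sentence tokenizer, which needs the punkt data package; if it isn't already present, something like the following should fetch it:
import nltk

# One-time download of the sentence tokenizer data used by NLTKTextSplitter
nltk.download("punkt")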
# This is a long document we can split up.
with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
from langchain_text_splitters import NLTKTextSplitter
text_splitter = NLTKTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.
Members of Congress and the Cabinet.
Justices of the Supreme Court.
My fellow Americans.
...
KoNLPy: Korean NLP in Python is a Python package for natural language processing (NLP) of the Korean language.
Token splitting involves segmenting text into smaller, more manageable units called tokens. These tokens are often words, phrases, symbols, or other meaningful elements crucial for further processing and analysis. In languages like English, token splitting typically means separating words by spaces and punctuation marks. Its effectiveness largely depends on the tokenizer's understanding of the language's structure, which is what ensures meaningful tokens. Since tokenizers designed for English cannot understand the distinct semantic structures of other languages, such as Korean, they cannot be used effectively for Korean language processing.
For Korean text, KoNLPy includes a morphological analyzer called Kkma (Korean Knowledge Morpheme Analyzer). Kkma provides detailed morphological analysis of Korean text: it breaks sentences down into words and words into their respective morphemes, identifying the part of speech for each token. It can also segment a block of text into individual sentences, which is particularly useful for processing long texts.
While Kkma is renowned for its detailed analysis, this precision can come at the cost of processing speed, so Kkma is best suited to applications that prioritize analytical depth over fast text processing.
# pip install konlpy
# This is a long Korean document that we want to split into its component sentences.
with open("./your_korean_doc.txt") as f:
    korean_document = f.read()
from langchain_text_splitters import KonlpyTextSplitter
text_splitter = KonlpyTextSplitter()
texts = text_splitter.split_text(korean_document)
# The sentences are split with "\n\n" characters.
print(texts[0])
The story tells of a young official who, after rising to high office, finally finds Chunhyang.
Having endured many trials, the two meet again, and their love is told throughout the land and handed down to later generations.
- The Tale of Chunhyang (Chunhyangjeon)
Hugging Face has many tokenizers. Here we use the Hugging Face tokenizer GPT2TokenizerFast to count the text length in tokens, so the chunk size is measured by the number of tokens this tokenizer produces.
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
# This is a long document we can split up.
with open("../../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
from langchain_text_splitters import CharacterTextSplitter
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer, chunk_size=100, chunk_overlap=0
)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Last year COVID-19 kept us apart. This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
Many chat or Q+A applications involve chunking input documents prior to embedding and vector storage. These notes from Pinecone provide some useful tips: when a full paragraph or document is embedded, the embedding process considers both the overall context and the relationships between the sentences and phrases within the text. This can result in a more comprehensive vector representation that captures the broader meaning and themes of the text.
As mentioned, chunking often aims to keep text with a common context together. With that in mind, we might want to specifically honor the structure of the document itself. For example, a markdown file is organized by headers, and creating chunks within specific header groups is an intuitive idea. To address this, we can use the MarkdownHeaderTextSplitter, which splits a markdown file by a specified set of headers.
For example, if we want to split this markdown:
md = '# Foo\n\n ## Bar\n\nHi this is Jim \nHi this is Joe\n\n ## Baz\n\n Hi this is Molly'
We can specify the headers to split on:
[("#", "Header 1"),("##", "Header 2")]
Then the content is grouped or split by common headers:
{'content': 'Hi this is Jim \nHi this is Joe', 'metadata': {'Header 1': 'Foo', 'Header 2': 'Bar'}}
{'content': 'Hi this is Molly', 'metadata': {'Header 1': 'Foo', 'Header 2': 'Baz'}}
Let's have a look at some examples below.
%pip install -qU langchain-text-splitters
from langchain_text_splitters import MarkdownHeaderTextSplitter
markdown_document = "# Foo\n\n ## Bar\n\nHi this is Jim\n\nHi this is Joe\n\n ### Boo \n\n Hi this is Lance \n\n ## Baz\n\n Hi this is Molly"
headers_to_split_on = [
    ("#", "Header 1"),
    ("##", "Header 2"),
    ("###", "Header 3"),
]
markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
md_header_splits = markdown_splitter.split_text(markdown_document)
md_header_splits
type(md_header_splits[0])
By default, MarkdownHeaderTextSplitter strips the headers being split on from the content of the output chunks. This can be disabled by setting strip_headers = False.
markdown_splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=headers_to_split_on, strip_headers=False
)
md_header_splits = markdown_splitter.split_text(markdown_document)
md_header_splits
Within each markdown group we can then apply any text splitter we want.
markdown_document = "# Intro \n\n ## History \n\n Markdown[9] is a lightweight markup language for creating formatted text using a plain-text editor. John Gruber created Markdown in 2004 as a markup language that is appealing to human readers in its source code form.[9] \n\n Markdown is widely used in blogging, instant messaging, online forums, collaborative software, documentation pages, and readme files. \n\n ## Rise and divergence \n\n As Markdown popularity grew rapidly, many Markdown implementations appeared, driven mostly by the need for \n\n additional features such as tables, footnotes, definition lists,[note 1] and Markdown inside HTML blocks. \n\n #### Standardization \n\n From 2012, a group of people, including Jeff Atwood and John MacFarlane, launched what Atwood characterised as a standardisation effort. \n\n ## Implementations \n\n Implementations of Markdown are available for over a dozen programming languages." headers_to_split_on = [ ("#", "Header 1"), ("##", "Header 2"), ] # MD splits markdown_splitter = MarkdownHeaderTextSplitter( headers_to_split_on=headers_to_split_on, strip_headers=False ) md_header_splits = markdown_splitter.split_text(markdown_document) # Char-level splits from langchain_text_splitters import RecursiveCharacterTextSplitter chunk_size = 250 chunk_overlap = 30 text_splitter = RecursiveCharacterTextSplitter( chunk_size=chunk_size, chunk_overlap=chunk_overlap ) # Split splits = text_splitter.split_documents(md_header_splits) splits
Similar in concept to the MarkdownHeaderTextSplitter, the HTMLHeaderTextSplitter is a "structure-aware" chunker that splits text at the element level and adds metadata for each header "relevant" to a given chunk. It can return chunks element by element or combine elements that share the same metadata, with the goals of (a) keeping related text grouped more or less semantically and (b) preserving the context-rich information encoded in the document structure. It can be used with other text splitters as part of a chunking pipeline.
%pip install -qU langchain-text-splitters
from langchain_text_splitters import HTMLHeaderTextSplitter

html_string = """
<!DOCTYPE html>
<html>
<body>
    <div>
        <h1>Foo</h1>
        <p>Some intro text about Foo.</p>
        <div>
            <h2>Bar main section</h2>
            <p>Some intro text about Bar.</p>
            <h3>Bar subsection 1</h3>
            <p>Some text about the first subtopic of Bar.</p>
            <h3>Bar subsection 2</h3>
            <p>Some text about the second subtopic of Bar.</p>
        </div>
        <div>
            <h2>Baz</h2>
            <p>Some text about Baz</p>
        </div>
        <br>
        <p>Some concluding text about Foo</p>
    </div>
</body>
</html>
"""

headers_to_split_on = [
    ("h1", "Header 1"),
    ("h2", "Header 2"),
    ("h3", "Header 3"),
]

html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
html_header_splits = html_splitter.split_text(html_string)
html_header_splits
from langchain_text_splitters import RecursiveCharacterTextSplitter

url = "https://plato.stanford.edu/entries/goedel/"

headers_to_split_on = [
    ("h1", "Header 1"),
    ("h2", "Header 2"),
    ("h3", "Header 3"),
    ("h4", "Header 4"),
]

html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)

# For local files, use html_splitter.split_text_from_file(<path_to_file>)
html_header_splits = html_splitter.split_text_from_url(url)

chunk_size = 500
chunk_overlap = 30
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=chunk_size, chunk_overlap=chunk_overlap
)

# Split
splits = text_splitter.split_documents(html_header_splits)
splits[80:85]
There can be considerable structural variation from one HTML document to another, and while HTMLHeaderTextSplitter attempts to attach all "relevant" headers to any given chunk, it can sometimes miss certain headers. For example, in the following news article (as of the writing of this document), the document is structured such that the text of the top-level headline, although tagged "h1", sits in a different subtree from the text elements we would expect it to be "above". As a result, the "h1" element and its associated text do not show up in the chunk metadata (but, where applicable, we do see "h2" and its associated text):
url = "https://www.cnn.com/2023/09/25/weather/el-nino-winter-us-climate/index.html"
headers_to_split_on = [
    ("h1", "Header 1"),
    ("h2", "Header 2"),
]
html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
html_header_splits = html_splitter.split_text_from_url(url)
print(html_header_splits[1].page_content[:500])
CodeTextSplitter allows you to split your code in a number of supported languages. Import the Language enum and specify the language.
%pip install -qU langchain-text-splitters
from langchain_text_splitters import (
    Language,
    RecursiveCharacterTextSplitter,
)
# Full list of supported languages
[e.value for e in Language]
['cpp', 'go', 'java', 'kotlin', 'js', 'ts', 'php', 'proto', 'python', 'rst', 'ruby', 'rust', 'scala', 'swift', 'markdown', 'latex', 'html', 'sol', 'csharp', 'cobol']
# You can also see the separators used for a given language
RecursiveCharacterTextSplitter.get_separators_for_language(Language.PYTHON)
['\nclass ', '\ndef ', '\n\tdef ', '\n\n', '\n', ' ', '']
Here's an example using the PythonTextSplitter:
PYTHON_CODE = """
def hello_world():
print("Hello, World!")
# 调用函数
hello_world()
"""
python_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=50, chunk_overlap=0
)
python_docs = python_splitter.create_documents([PYTHON_CODE])
python_docs
[Document(page_content='def hello_world():\n    print("Hello, World!")'),
 Document(page_content='# Call the function\nhello_world()')]
Here's an example using the JS text splitter:
JS_CODE = """
function helloWorld() {
console.log("Hello, World!");
}
// 调用函数
helloWorld();
"""
js_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.JS, chunk_size=60, chunk_overlap=0
)
js_docs = js_splitter.create_documents([JS_CODE])
js_docs
[Document(page_content='function helloWorld() {\n  console.log("Hello, World!");\n}'),
 Document(page_content='// Call the function\nhelloWorld();')]
Here's an example using the TS text splitter:
TS_CODE = """
function helloWorld(): void {
console.log("Hello, World!");
}
// 调用函数
helloWorld();
"""
ts_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.TS, chunk_size=60, chunk_overlap=0
)
ts_docs = ts_splitter.create_documents([TS_CODE])
ts_docs
[Document(page_content='function helloWorld(): void {'),
 Document(page_content='console.log("Hello, World!");\n}'),
 Document(page_content='// Call the function\nhelloWorld();')]
Here's an example using the Markdown text splitter:
markdown_text = """
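With markdown_text defined, splitting follows the same pattern as the Python, JS, and TS examples above (a sketch; the chunk_size here is arbitrary):
md_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0
)
md_docs = md_splitter.create_documents([markdown_text])
md_docs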