BPE stands for Byte Pair Encoding. Originally a data compression algorithm, it is now widely used for tokenization in natural language processing. It builds a vocabulary by counting high-frequency character sequences and splits words into smaller, reusable subword units, e.g. highest -> [high, est].
Basic idea:
Building the vocabulary is essentially a learning process over the corpus, except that what is learned here is not semantics but the compositional regularities of words.
Core idea: repeatedly merge the most frequent pair of adjacent characters or subwords, representing the text as a sequence of subwords, and add each merged pair to the vocabulary and the merge rules. This yields a subword vocabulary and a table of merge rules.
Suppose we have a corpus (kept short for the demonstration):
corpus = [
    "This is the Hugging Face Course.",
    "This chapter is about tokenization.",
    "This section shows several tokenizer algorithms.",
    "Hopefully, you will be able to understand how they are trained and generate tokens.",
]
First, split the text on punctuation and whitespace. In GPT, this pre-tokenization step is implemented with a regular expression.
import regex as re
PRETOKENIZE_REGEX = r"""(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"""
pat = re.compile(PRETOKENIZE_REGEX)
text = "\n".join(corpus) + "\n"  # join the sentences into one string, one sentence per line
tokens = re.findall(pat, text)
>>> ['This', ' is', ' the', ' Hugging', ' Face', ' Course', '.\n', 'This', ' chapter', ' is', ' about', ' tokenization', '.\n', 'This', ' section', ' shows', ' several', ' tokenizer', ' algorithms', '.\n', 'Hopefully', ',', ' you', ' will', ' be', ' able', ' to', ' understand', ' how', ' they', ' are', ' trained', ' and', ' generate', ' tokens', '.\n']
How to read the regular expression (a short demo follows this list):
- (?i:'s|'t|'re|'ve|'m|'ll|'d): matches common English contractions, e.g. 's (is or has), 't (not), 're (are), 've (have), 'm (am), 'll (will), 'd (would or had).
- [^\r\n\p{L}\p{N}]?\p{L}+: matches a word; \p{L}+ matches the letters, and the optional leading part matches one character that is not a letter, digit, or newline (for example a space).
- \p{N}: matches a single digit character.
- ` ?[^\s\p{L}\p{N}]+[\r\n]*`: matches a run of punctuation or other symbols, optionally preceded by a space and followed by any trailing newlines.
- The remaining alternatives (\s*[\r\n]+, \s+(?!\S), \s+) catch runs of newlines and other whitespace that nothing else matched.
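To see how the alternatives interact, here is a small illustrative run of the same pattern on two strings that are not part of the demo corpus:
print(re.findall(pat, "Hello, world!"))  # ['Hello', ',', ' world', '!']
print(re.findall(pat, "don't"))          # ['don', "'t"]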
The splits above contain spaces, newlines, and other control characters. If these were simply dropped, BPE would lose word and sentence boundary information, so they need to be encoded specially. The encoding works as follows:
def bytes_to_unicode():
    # bytes that already map to printable characters: '!'..'~', '¡'..'¬', '®'..'ÿ'
    bs = (
        list(range(ord("!"), ord("~") + 1))
        + list(range(ord("¡"), ord("¬") + 1))
        + list(range(ord("®"), ord("ÿ") + 1))
    )
    cs = bs[:]
    n = 0
    # map every remaining (invisible/control) byte to an unused code point above 255
    for b in range(2**8):
        if b not in bs:
            bs.append(b)
            cs.append(2**8 + n)
            n += 1
    cs = [chr(n) for n in cs]
    return dict(zip(bs, cs))
The logic:
- First, build an initial byte list bs containing the code points of all printable characters in the 0-255 range. It consists of three segments: '!' to '~', '¡' to '¬', and '®' to 'ÿ'; whitespace and control characters are excluded.
- Next, loop over 0-255 to extend bs with the remaining invisible bytes, mapping each of them to a printable Unicode character above 255; this yields the character list cs.
- Finally, zip the byte list bs and the character list cs into a byte-to-character mapping table.
Calling this function produces a mapping from every byte value in the 0-255 range to a printable Unicode character.
byte_encoder = bytes_to_unicode()
{33: '!', 34: '"', 35: '#', 36: '$', 37: '%', 38: '&', 39: "'", 40: '(', 41: ')', 42: '*', 43: '+', 44: ',', 45: '-', 46: '.', 47: '/', 48: '0', 49: '1', 50: '2', 51: '3', 52: '4', 53: '5', 54: '6', 55: '7', 56: '8', 57: '9', 58: ':', 59: ';', 60: '<', 61: '=', 62: '>', 63: '?', 64: '@', 65: 'A', 66: 'B', 67: 'C', 68: 'D', 69: 'E', 70: 'F', 71: 'G', 72: 'H', 73: 'I', 74: 'J', 75: 'K', 76: 'L', 77: 'M', 78: 'N', 79: 'O', 80: 'P', 81: 'Q', 82: 'R', 83: 'S', 84: 'T', 85: 'U', 86: 'V', 87: 'W', 88: 'X', 89: 'Y', 90: 'Z', 91: '[', 92: '\\', 93: ']', 94: '^', 95: '_', 96: '`', 97: 'a', 98: 'b', 99: 'c', 100: 'd', 101: 'e', 102: 'f', 103: 'g', 104: 'h', 105: 'i', 106: 'j', 107: 'k', 108: 'l', 109: 'm', 110: 'n', 111: 'o', 112: 'p', 113: 'q', 114: 'r', 115: 's', 116: 't', 117: 'u', 118: 'v', 119: 'w', 120: 'x', 121: 'y', 122: 'z', 123: '{', 124: '|', 125: '}', 126: '~', 161: '¡', 162: '¢', 163: '£', 164: '¤', 165: '¥', 166: '¦', 167: '§', 168: '¨', 169: '©', 170: 'ª', 171: '«', 172: '¬', 174: '®', 175: '¯', 176: '°', 177: '±', 178: '²', 179: '³', 180: '´', 181: 'µ', 182: '¶', 183: '·', 184: '¸', 185: '¹', 186: 'º', 187: '»', 188: '¼', 189: '½', 190: '¾', 191: '¿', 192: 'À', 193: 'Á', 194: 'Â', 195: 'Ã', 196: 'Ä', 197: 'Å', 198: 'Æ', 199: 'Ç', 200: 'È', 201: 'É', 202: 'Ê', 203: 'Ë', 204: 'Ì', 205: 'Í', 206: 'Î', 207: 'Ï', 208: 'Ð', 209: 'Ñ', 210: 'Ò', 211: 'Ó', 212: 'Ô', 213: 'Õ', 214: 'Ö', 215: '×', 216: 'Ø', 217: 'Ù', 218: 'Ú', 219: 'Û', 220: 'Ü', 221: 'Ý', 222: 'Þ', 223: 'ß', 224: 'à', 225: 'á', 226: 'â', 227: 'ã', 228: 'ä', 229: 'å', 230: 'æ', 231: 'ç', 232: 'è', 233: 'é', 234: 'ê', 235: 'ë', 236: 'ì', 237: 'í', 238: 'î', 239: 'ï', 240: 'ð', 241: 'ñ', 242: 'ò', 243: 'ó', 244: 'ô', 245: 'õ', 246: 'ö', 247: '÷', 248: 'ø', 249: 'ù', 250: 'ú', 251: 'û', 252: 'ü', 253: 'ý', 254: 'þ', 255: 'ÿ', 0: 'Ā', 1: 'ā', 2: 'Ă', 3: 'ă', 4: 'Ą', 5: 'ą', 6: 'Ć', 7: 'ć', 8: 'Ĉ', 9: 'ĉ', 10: 'Ċ', 11: 'ċ', 12: 'Č', 13: 'č', 14: 'Ď', 15: 'ď', 16: 'Đ', 17: 'đ', 18: 'Ē', 19: 'ē', 20: 'Ĕ', 21: 'ĕ', 22: 'Ė', 23: 'ė', 24: 'Ę', 25: 'ę', 26: 'Ě', 27: 'ě', 28: 'Ĝ', 29: 'ĝ', 30: 'Ğ', 31: 'ğ', 32: 'Ġ', 127: 'ġ', 128: 'Ģ', 129: 'ģ', 130: 'Ĥ', 131: 'ĥ', 132: 'Ħ', 133: 'ħ', 134: 'Ĩ', 135: 'ĩ', 136: 'Ī', 137: 'ī', 138: 'Ĭ', 139: 'ĭ', 140: 'Į', 141: 'į', 142: 'İ', 143: 'ı', 144: 'IJ', 145: 'ij', 146: 'Ĵ', 147: 'ĵ', 148: 'Ķ', 149: 'ķ', 150: 'ĸ', 151: 'Ĺ', 152: 'ĺ', 153: 'Ļ', 154: 'ļ', 155: 'Ľ', 156: 'ľ', 157: 'Ŀ', 158: 'ŀ', 159: 'Ł', 160: 'ł', 173: 'Ń'}
Because this BPE implementation works on UTF-8 bytes, the mapping guarantees that every possible byte (0-255) has a printable character representation.
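A quick sanity check against the table printed above: the mapping covers all 256 byte values, and the two bytes that matter most here, space (32) and newline (10), map to 'Ġ' and 'Ċ'.
assert len(byte_encoder) == 256  # every byte has a printable stand-in
print(byte_encoder[32], byte_encoder[10])  # Ġ Ċ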
The regex-based splitting and the special-character encoding can be wrapped into a single pre-tokenization function:
def pre_tokenize(text):
    tokens = re.findall(pat, text)
    for i, token in enumerate(tokens):
        # Maps all our bytes to unicode strings, avoiding control tokens of the BPE (spaces in our case)
        tokens[i] = "".join(byte_encoder[b] for b in token.encode("utf-8"))
    return tokens
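Applying this function to the demo corpus gives the pre-tokenized token list used in the rest of the article. How the four sentences are joined is an assumption here; joining them with newlines reproduces the output shown below.
pre_tokenized_corpus = pre_tokenize("\n".join(corpus) + "\n")
print(pre_tokenized_corpus)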
After the byte-to-character encoding is applied to the pre-split word list, we obtain the actual pre-tokenized token list, shown below:
['This', 'Ġis', 'Ġthe', 'ĠHugging', 'ĠFace', 'ĠCourse', '.Ċ', 'This', 'Ġchapter', 'Ġis', 'Ġabout', 'Ġtokenization', '.Ċ', 'This', 'Ġsection', 'Ġshows', 'Ġseveral', 'Ġtokenizer', 'Ġalgorithms', '.Ċ', 'Hopefully', ',', 'Ġyou', 'Ġwill', 'Ġbe', 'Ġable', 'Ġto', 'Ġunderstand', 'Ġhow', 'Ġthey', 'Ġare', 'Ġtrained', 'Ġand', 'Ġgenerate', 'Ġtokens', '.Ċ']
In this encoding, a space is preserved as the special character 'Ġ' and a newline as 'Ċ', exactly as in GPT.
Since BPE starts from a small character-level vocabulary and gradually merges it into a larger one, we first need that character-level base vocabulary.
A vocabulary is simply a mapping between integer IDs and subwords: going from subword to ID is called encoding, and going from ID to subword is called decoding.
To avoid UNK (unknown token) problems caused by characters missing from the corpus, the base vocabulary's character set should be as complete as possible. Here we build it directly from the byte_encoder table constructed above, with each character as an independent subword.
vocabs = {idx: char for idx, (byte, char) in enumerate(byte_encoder.items())}
print(vocabs)
{0: '!', 1: '"', 2: '#', 3: '$', 4: '%', 5: '&', 6: "'", 7: '(', 8: ')', 9: '*', 10: '+', 11: ',', 12: '-', 13: '.', 14: '/', 15: '0', 16: '1', 17: '2', 18: '3', 19: '4', 20: '5', 21: '6', 22: '7', 23: '8', 24: '9', 25: ':', 26: ';', 27: '<', 28: '=', 29: '>', 30: '?', 31: '@', 32: 'A', 33: 'B', 34: 'C', 35: 'D', 36: 'E', 37: 'F', 38: 'G', 39: 'H', 40: 'I', 41: 'J', 42: 'K', 43: 'L', 44: 'M', 45: 'N', 46: 'O', 47: 'P', 48: 'Q', 49: 'R', 50: 'S', 51: 'T', 52: 'U', 53: 'V', 54: 'W', 55: 'X', 56: 'Y', 57: 'Z', 58: '[', 59: '\\', 60: ']', 61: '^', 62: '_', 63: '`', 64: 'a', 65: 'b', 66: 'c', 67: 'd', 68: 'e', 69: 'f', 70: 'g', 71: 'h', 72: 'i', 73: 'j', 74: 'k', 75: 'l', 76: 'm', 77: 'n', 78: 'o', 79: 'p', 80: 'q', 81: 'r', 82: 's', 83: 't', 84: 'u', 85: 'v', 86: 'w', 87: 'x', 88: 'y', 89: 'z', 90: '{', 91: '|', 92: '}', 93: '~', 94: '¡', 95: '¢', 96: '£', 97: '¤', 98: '¥', 99: '¦', 100: '§', 101: '¨', 102: '©', 103: 'ª', 104: '«', 105: '¬', 106: '®', 107: '¯', 108: '°', 109: '±', 110: '²', 111: '³', 112: '´', 113: 'µ', 114: '¶', 115: '·', 116: '¸', 117: '¹', 118: 'º', 119: '»', 120: '¼', 121: '½', 122: '¾', 123: '¿', 124: 'À', 125: 'Á', 126: 'Â', 127: 'Ã', 128: 'Ä', 129: 'Å', 130: 'Æ', 131: 'Ç', 132: 'È', 133: 'É', 134: 'Ê', 135: 'Ë', 136: 'Ì', 137: 'Í', 138: 'Î', 139: 'Ï', 140: 'Ð', 141: 'Ñ', 142: 'Ò', 143: 'Ó', 144: 'Ô', 145: 'Õ', 146: 'Ö', 147: '×', 148: 'Ø', 149: 'Ù', 150: 'Ú', 151: 'Û', 152: 'Ü', 153: 'Ý', 154: 'Þ', 155: 'ß', 156: 'à', 157: 'á', 158: 'â', 159: 'ã', 160: 'ä', 161: 'å', 162: 'æ', 163: 'ç', 164: 'è', 165: 'é', 166: 'ê', 167: 'ë', 168: 'ì', 169: 'í', 170: 'î', 171: 'ï', 172: 'ð', 173: 'ñ', 174: 'ò', 175: 'ó', 176: 'ô', 177: 'õ', 178: 'ö', 179: '÷', 180: 'ø', 181: 'ù', 182: 'ú', 183: 'û', 184: 'ü', 185: 'ý', 186: 'þ', 187: 'ÿ', 188: 'Ā', 189: 'ā', 190: 'Ă', 191: 'ă', 192: 'Ą', 193: 'ą', 194: 'Ć', 195: 'ć', 196: 'Ĉ', 197: 'ĉ', 198: 'Ċ', 199: 'ċ', 200: 'Č', 201: 'č', 202: 'Ď', 203: 'ď', 204: 'Đ', 205: 'đ', 206: 'Ē', 207: 'ē', 208: 'Ĕ', 209: 'ĕ', 210: 'Ė', 211: 'ė', 212: 'Ę', 213: 'ę', 214: 'Ě', 215: 'ě', 216: 'Ĝ', 217: 'ĝ', 218: 'Ğ', 219: 'ğ', 220: 'Ġ', 221: 'ġ', 222: 'Ģ', 223: 'ģ', 224: 'Ĥ', 225: 'ĥ', 226: 'Ħ', 227: 'ħ', 228: 'Ĩ', 229: 'ĩ', 230: 'Ī', 231: 'ī', 232: 'Ĭ', 233: 'ĭ', 234: 'Į', 235: 'į', 236: 'İ', 237: 'ı', 238: 'IJ', 239: 'ij', 240: 'Ĵ', 241: 'ĵ', 242: 'Ķ', 243: 'ķ', 244: 'ĸ', 245: 'Ĺ', 246: 'ĺ', 247: 'Ļ', 248: 'ļ', 249: 'Ľ', 250: 'ľ', 251: 'Ŀ', 252: 'ŀ', 253: 'Ł', 254: 'ł', 255: 'Ń'}
First, count the frequency of each pre-tokenized word; this is the basis for the pair-frequency statistics below.
from collections import defaultdict

word2count = defaultdict(int)
for word in pre_tokenized_corpus:
    word2count[word] += 1
print(word2count)
{'This': 3, 'Ġis': 2, 'Ġthe': 1, 'ĠHugging': 1, 'ĠFace': 1, 'ĠCourse': 1, '.Ċ': 4, 'Ġchapter': 1, 'Ġabout': 1, 'Ġtokenization': 1, 'Ġsection': 1, 'Ġshows': 1, 'Ġseveral': 1, 'Ġtokenizer': 1, 'Ġalgorithms': 1, 'Hopefully': 1, ',': 1, 'Ġyou': 1, 'Ġwill': 1, 'Ġbe': 1, 'Ġable': 1, 'Ġto': 1, 'Ġunderstand': 1, 'Ġhow': 1, 'Ġthey': 1, 'Ġare': 1, 'Ġtrained': 1, 'Ġand': 1, 'Ġgenerate': 1, 'Ġtokens': 1}
Then split each word into individual characters.
word2splits = {word: [c for c in word] for word in word2count}
{'This': ['T', 'h', 'i', 's'],
 'Ġis': ['Ġ', 'i', 's'],
 'Ġthe': ['Ġ', 't', 'h', 'e'],
 ...
 'Ġand': ['Ġ', 'a', 'n', 'd'],
 'Ġgenerate': ['Ġ', 'g', 'e', 'n', 'e', 'r', 'a', 't', 'e'],
 'Ġtokens': ['Ġ', 't', 'o', 'k', 'e', 'n', 's']}
Based on word2splits, count the frequency of every adjacent pair, pair2count:
def _compute_pair2score(word2splits, word2count):
    pair2count = defaultdict(int)
    for word, word_count in word2count.items():
        split = word2splits[word]
        if len(split) == 1:
            continue
        for i in range(len(split) - 1):
            pair = (split[i], split[i + 1])
            # each occurrence of the pair is weighted by the word's frequency
            pair2count[pair] += word_count
    return pair2count
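Calling it on the word2splits and word2count built above yields the pair statistics shown below:
pair2count = _compute_pair2score(word2splits, word2count)
print(pair2count)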
Pairing every two adjacent characters yields pair2count, the frequency statistics for all adjacent character pairs:
{('T', 'h'): 3, ('h', 'i'): 3, ('i', 's'): 5, ('Ġ', 'i'): 2, ('Ġ', 't'): 7, ('t', 'h'): 3, ('h', 'e'): 2, ('Ġ', 'H'): 1, ('H', 'u'): 1, ('u', 'g'): 1, ('g', 'g'): 1, ('g', 'i'): 1, ('i', 'n'): 2, ('n', 'g'): 1, ('Ġ', 'F'): 1, ('F', 'a'): 1, ('a', 'c'): 1, ('c', 'e'): 1, ('Ġ', 'C'): 1, ('C', 'o'): 1, ('o', 'u'): 3, ('u', 'r'): 1, ('r', 's'): 2, ('s', 'e'): 3, ('.', 'Ċ'): 4, ('Ġ', 'c'): 1, ('c', 'h'): 1, ('h', 'a'): 1, ('a', 'p'): 1, ('p', 't'): 1, ('t', 'e'): 2, ('e', 'r'): 5, ('Ġ', 'a'): 5, ('a', 'b'): 2, ('b', 'o'): 1, ('u', 't'): 1, ('t', 'o'): 4, ('o', 'k'): 3, ('k', 'e'): 3, ('e', 'n'): 4, ('n', 'i'): 2, ('i', 'z'): 2, ('z', 'a'): 1, ('a', 't'): 2, ('t', 'i'): 2, ('i', 'o'): 2, ('o', 'n'): 2, ('Ġ', 's'): 3, ('e', 'c'): 1, ('c', 't'): 1, ('s', 'h'): 1, ('h', 'o'): 2, ('o', 'w'): 2, ('w', 's'): 1, ('e', 'v'): 1, ('v', 'e'): 1, ('r', 'a'): 3, ('a', 'l'): 2, ('z', 'e'): 1, ('l', 'g'): 1, ('g', 'o'): 1, ('o', 'r'): 1, ('r', 'i'): 1, ('i', 't'): 1, ('h', 'm'): 1, ('m', 's'): 1, ('H', 'o'): 1, ('o', 'p'): 1, ('p', 'e'): 1, ('e', 'f'): 1, ('f', 'u'): 1, ('u', 'l'): 1, ('l', 'l'): 2, ('l', 'y'): 1, ('Ġ', 'y'): 1, ('y', 'o'): 1, ('Ġ', 'w'): 1, ('w', 'i'): 1, ('i', 'l'): 1, ('Ġ', 'b'): 1, ('b', 'e'): 1, ('b', 'l'): 1, ('l', 'e'): 1, ('Ġ', 'u'): 1, ('u', 'n'): 1, ('n', 'd'): 3, ('d', 'e'): 1, ('s', 't'): 1, ('t', 'a'): 1, ('a', 'n'): 2, ('Ġ', 'h'): 1, ('e', 'y'): 1, ('a', 'r'): 1, ('r', 'e'): 1, ('t', 'r'): 1, ('a', 'i'): 1, ('n', 'e'): 2, ('e', 'd'): 1, ('Ġ', 'g'): 1, ('g', 'e'): 1, ('n', 's'): 1}
Find the most frequent pair:
def _compute_most_score_pair(pair2count):
    best_pair = None
    max_freq = None
    for pair, freq in pair2count.items():
        if max_freq is None or max_freq < freq:
            best_pair = pair
            max_freq = freq
    return best_pair
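Calling it on the pair counts returns the winning pair; Python's built-in max gives the same result and is shown here only as an equivalent one-liner:
best_pair = _compute_most_score_pair(pair2count)
print(best_pair)  # ('Ġ', 't')
print(max(pair2count, key=pair2count.get))  # equivalent shortcut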
The most frequent pair is ('Ġ', 't'), with a frequency of 7. We merge ('Ġ', 't') into a single token and add it to the vocabulary, which now looks like this (note the newly added 256: 'Ġt'):
{0: '!', 1: '"', 2: '#', 3: '$', 4: '%', 5: '&', 6: "'", 7: '(', 8: ')', 9: '*', 10: '+', 11: ',', ..., 254: 'ł', 255: 'Ń', 256: 'Ġt'}
At the same time, we add ('Ġ', 't') to the merge rules:
merge_rules = [('Ġ', 't')]
We then re-split the words in word2count according to the updated vocabulary and obtain a new word2splits.
def _merge_pair(a, b, word2splits):
    new_word2splits = dict()
    for word, split in word2splits.items():
        if len(split) == 1:
            new_word2splits[word] = split
            continue
        i = 0
        while i < len(split) - 1:
            # replace every adjacent occurrence of (a, b) with the merged token a+b
            if split[i] == a and split[i + 1] == b:
                split = split[:i] + [a + b] + split[i + 2:]
            else:
                i += 1
        new_word2splits[word] = split
    return new_word2splits
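For example, applying the ('Ġ', 't') merge to the current splits produces the result below (the exact call is an illustration; the training loop further down performs the same operation):
new_word2splits = _merge_pair("Ġ", "t", word2splits)
print(new_word2splits)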
{'This': ['T', 'h', 'i', 's'],
'Ġis': ['Ġ', 'i', 's'],
'Ġthe': ['Ġt', 'h', 'e'],
……
'Ġand': ['Ġ', 'a', 'n', 'd'],
'Ġgenerate': ['Ġ', 'g', 'e', 'n', 'e', 'r', 'a', 't', 'e'],
'Ġtokens': ['Ġt', 'o', 'k', 'e', 'n', 's']}
The new word2splits now contains the new token "Ġt".
Repeat the loop above until the vocabulary reaches the preset target size.
# 64 merge iterations; merge_rules was initialised above with the first rule
for i in range(64):
    pair2score = _compute_pair2score(word2splits, word2count)
    best_pair = _compute_most_score_pair(pair2score)
    vocabs[256 + i] = best_pair[0] + best_pair[1]
    merge_rules.append(best_pair)
    word2splits = _merge_pair(best_pair[0], best_pair[1], word2splits)
With 64 merges, the final vocabulary has 320 entries (the 256 base characters plus 64 learned subwords). After the iterations above, the vocabulary and merge rules are:
vocabs = {0: '!', 1: '"', 2: '#', 3: '$', 4: '%', 5: '&', 6: "'", 7: '(', 8: ')', 9: '*', 10: '+', 11: ',', 12: '-', 13: '.', 14: '/', 15: '0', 16: '1', 17: '2', 18: '3', 19: '4', 20: '5', 21: '6', 22: '7', 23: '8', 24: '9', 25: ':', 26: ';', 27: '<', 28: '=', 29: '>', 30: '?', 31: '@', 32: 'A', 33: 'B', 34: 'C', 35: 'D', 36: 'E', 37: 'F', 38: 'G', 39: 'H', 40: 'I', 41: 'J', 42: 'K', 43: 'L', 44: 'M', 45: 'N', 46: 'O', 47: 'P', 48: 'Q', 49: 'R', 50: 'S', 51: 'T', 52: 'U', 53: 'V', 54: 'W', 55: 'X', 56: 'Y', 57: 'Z', 58: '[', 59: '\\', 60: ']', 61: '^', 62: '_', 63: '`', 64: 'a', 65: 'b', 66: 'c', 67: 'd', 68: 'e', 69: 'f', 70: 'g', 71: 'h', 72: 'i', 73: 'j', 74: 'k', 75: 'l', 76: 'm', 77: 'n', 78: 'o', 79: 'p', 80: 'q', 81: 'r', 82: 's', 83: 't', 84: 'u', 85: 'v', 86: 'w', 87: 'x', 88: 'y', 89: 'z', 90: '{', 91: '|', 92: '}', 93: '~', 94: '¡', 95: '¢', 96: '£', 97: '¤', 98: '¥', 99: '¦', 100: '§', 101: '¨', 102: '©', 103: 'ª', 104: '«', 105: '¬', 106: '®', 107: '¯', 108: '°', 109: '±', 110: '²', 111: '³', 112: '´', 113: 'µ', 114: '¶', 115: '·', 116: '¸', 117: '¹', 118: 'º', 119: '»', 120: '¼', 121: '½', 122: '¾', 123: '¿', 124: 'À', 125: 'Á', 126: 'Â', 127: 'Ã', 128: 'Ä', 129: 'Å', 130: 'Æ', 131: 'Ç', 132: 'È', 133: 'É', 134: 'Ê', 135: 'Ë', 136: 'Ì', 137: 'Í', 138: 'Î', 139: 'Ï', 140: 'Ð', 141: 'Ñ', 142: 'Ò', 143: 'Ó', 144: 'Ô', 145: 'Õ', 146: 'Ö', 147: '×', 148: 'Ø', 149: 'Ù', 150: 'Ú', 151: 'Û', 152: 'Ü', 153: 'Ý', 154: 'Þ', 155: 'ß', 156: 'à', 157: 'á', 158: 'â', 159: 'ã', 160: 'ä', 161: 'å', 162: 'æ', 163: 'ç', 164: 'è', 165: 'é', 166: 'ê', 167: 'ë', 168: 'ì', 169: 'í', 170: 'î', 171: 'ï', 172: 'ð', 173: 'ñ', 174: 'ò', 175: 'ó', 176: 'ô', 177: 'õ', 178: 'ö', 179: '÷', 180: 'ø', 181: 'ù', 182: 'ú', 183: 'û', 184: 'ü', 185: 'ý', 186: 'þ', 187: 'ÿ', 188: 'Ā', 189: 'ā', 190: 'Ă', 191: 'ă', 192: 'Ą', 193: 'ą', 194: 'Ć', 195: 'ć', 196: 'Ĉ', 197: 'ĉ', 198: 'Ċ', 199: 'ċ', 200: 'Č', 201: 'č', 202: 'Ď', 203: 'ď', 204: 'Đ', 205: 'đ', 206: 'Ē', 207: 'ē', 208: 'Ĕ', 209: 'ĕ', 210: 'Ė', 211: 'ė', 212: 'Ę', 213: 'ę', 214: 'Ě', 215: 'ě', 216: 'Ĝ', 217: 'ĝ', 218: 'Ğ', 219: 'ğ', 220: 'Ġ', 221: 'ġ', 222: 'Ģ', 223: 'ģ', 224: 'Ĥ', 225: 'ĥ', 226: 'Ħ', 227: 'ħ', 228: 'Ĩ', 229: 'ĩ', 230: 'Ī', 231: 'ī', 232: 'Ĭ', 233: 'ĭ', 234: 'Į', 235: 'į', 236: 'İ', 237: 'ı', 238: 'IJ', 239: 'ij', 240: 'Ĵ', 241: 'ĵ', 242: 'Ķ', 243: 'ķ', 244: 'ĸ', 245: 'Ĺ', 246: 'ĺ', 247: 'Ļ', 248: 'ļ', 249: 'Ľ', 250: 'ľ', 251: 'Ŀ', 252: 'ŀ', 253: 'Ł', 254: 'ł', 255: 'Ń', 256: 'Ġt', 257: 'is', 258: 'er', 259: 'Ġa', 260: '.Ċ', 261: 'Ġto', 262: 'en', 263: 'Th', 264: 'This', 265: 'ou', 266: 'se', 267: 'Ġtok', 268: 'Ġtoken', 269: 'nd', 270: 'Ġis', 271: 'Ġth', 272: 'Ġthe', 273: 'in', 274: 'Ġab', 275: 'Ġtokeni', 276: 'Ġtokeniz', 277: 'at', 278: 'io', 279: 'ion', 280: 'Ġse', 281: 'ho', 282: 'how', 283: 'll', 284: 'ĠH', 285: 'ĠHu', 286: 'ĠHug', 287: 'ĠHugg', 288: 'ĠHuggin', 289: 'ĠHugging', 290: 'ĠF', 291: 'ĠFa', 292: 'ĠFac', 293: 'ĠFace', 294: 'ĠC', 295: 'ĠCou', 296: 'ĠCour', 297: 'ĠCourse', 298: 'Ġc', 299: 'Ġch', 300: 'Ġcha', 301: 'Ġchap', 302: 'Ġchapt', 303: 'Ġchapter', 304: 'Ġabou', 305: 'Ġabout', 306: 'Ġtokenizat', 307: 'Ġtokenization', 308: 'Ġsec', 309: 'Ġsect', 310: 'Ġsection', 311: 'Ġs', 312: 'Ġshow', 313: 'Ġshows', 314: 'Ġsev', 315: 'Ġsever', 316: 'Ġsevera', 317: 'Ġseveral', 318: 'Ġtokenizer', 319: 'Ġal'}
merge_rules = [('Ġ', 't'), ('Ġ', 't'), ('i', 's'), ('e', 'r'), ('Ġ', 'a'), ('.', 'Ċ'), ('Ġt', 'o'), ('e', 'n'), ('T', 'h'), ('Th', 'is'), ('o', 'u'), ('s', 'e'), ('Ġto', 'k'), ('Ġtok', 'en'), ('n', 'd'), ('Ġ', 'is'), ('Ġt', 'h'), ('Ġth', 'e'), ('i', 'n'), ('Ġa', 'b'), ('Ġtoken', 'i'), ('Ġtokeni', 'z'), ('a', 't'), ('i', 'o'), ('io', 'n'), ('Ġ', 'se'), ('h', 'o'), ('ho', 'w'), ('l', 'l'), ('Ġ', 'H'), ('ĠH', 'u'), ('ĠHu', 'g'), ('ĠHug', 'g'), ('ĠHugg', 'in'), ('ĠHuggin', 'g'), ('Ġ', 'F'), ('ĠF', 'a'), ('ĠFa', 'c'), ('ĠFac', 'e'), ('Ġ', 'C'), ('ĠC', 'ou'), ('ĠCou', 'r'), ('ĠCour', 'se'), ('Ġ', 'c'), ('Ġc', 'h'), ('Ġch', 'a'), ('Ġcha', 'p'), ('Ġchap', 't'), ('Ġchapt', 'er'), ('Ġab', 'ou'), ('Ġabou', 't'), ('Ġtokeniz', 'at'), ('Ġtokenizat', 'ion'), ('Ġse', 'c'), ('Ġsec', 't'), ('Ġsect', 'ion'), ('Ġ', 's'), ('Ġs', 'how'), ('Ġshow', 's'), ('Ġse', 'v'), ('Ġsev', 'er'), ('Ġsever', 'a'), ('Ġsevera', 'l'), ('Ġtokeniz', 'er'), ('Ġa', 'l')]
At this point we have trained a BPE tokenizer on the given corpus with the desired vocabulary size.
BPE training terminates once the target number of tokens has been collected; in this example, 64 new subwords with IDs 256 through 319 were learned.
The training result consists of a vocabulary (vocabs) and a list of merge rules (merge_rules).
With the trained vocabulary vocabs and merge rules merge_rules, we can now tokenize text and decode integer IDs.
First, build an encoder and a decoder from the vocabulary above; they handle the token -> ID and ID -> token conversions, respectively.
decoder: the trained vocabulary vocabs is already a decoding table from integer IDs to tokens.
encoder: swapping the keys and values of vocabs gives the encoding table.
decoder = vocabs
encoder = {char: idx for idx, char in vocabs.items()}
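A quick check against the vocabulary printed above:
print(encoder['This'])  # 264
print(decoder[307])     # 'Ġtokenization'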
Given a sentence, we need to split it into a sequence of tokens. Concretely: pre-tokenize the text, split each word into characters, then apply the merge rules in order, e.g. 'T' and 'h' -> 'Th', then 'Th' and 'is' -> 'This'.
from typing import List

def tokenize(text: str) -> List[int]:
    # pre tokenize
    words = [word for word in pre_tokenize(text)]
    # split into char level
    splits = [[c for c in word] for word in words]
    # apply merge rules
    for merge_rule in merge_rules:
        for index, split in enumerate(splits):
            i = 0
            while i < len(split) - 1:
                if split[i] == merge_rule[0] and split[i + 1] == merge_rule[1]:
                    split = split[:i] + ["".join(merge_rule)] + split[i + 2:]
                else:
                    i += 1
            splits[index] = split
    tokens = sum(splits, [])
    return [encoder[token] for token in tokens]
- The merge rules are ordered, applied top-down, shorter merges before longer ones; the outer loop over merge_rules preserves this order.
- The merge above uses three nested loops and is not efficient. Real tokenizer implementations usually pre-build a ranked dict of merges; a sketch of that idea follows this list. The direct implementation here is only meant to make the merge process easy to follow.
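Here is a minimal sketch of that ranked approach, in the spirit of GPT-2's bpe(): build a rank table from merge_rules and, for each word, repeatedly merge whichever adjacent pair has the lowest (earliest-learned) rank. The helper name bpe_merge and its details are illustrative assumptions, not the reference implementation.
merge_ranks = {pair: rank for rank, pair in enumerate(merge_rules)}

def bpe_merge(chars):
    # merge a list of single characters into subwords using merge_ranks
    parts = list(chars)
    while len(parts) > 1:
        pairs = [(parts[i], parts[i + 1]) for i in range(len(parts) - 1)]
        # pick the adjacent pair that was learned earliest
        best = min(pairs, key=lambda p: merge_ranks.get(p, float("inf")))
        if best not in merge_ranks:
            break  # no applicable rule left
        merged, i = [], 0
        while i < len(parts):
            if i < len(parts) - 1 and (parts[i], parts[i + 1]) == best:
                merged.append(parts[i] + parts[i + 1])
                i += 2
            else:
                merged.append(parts[i])
                i += 1
        parts = merged
    return parts

print(bpe_merge(list("This")))  # ['This']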
运行示例:
>>> tokenize("This is about tokenization.")
>>> [264, 270, 305, 307, 13]
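Mapping these IDs back through the decoder shows which subwords they stand for:
print([decoder[i] for i in [264, 270, 305, 307, 13]])
# ['This', 'Ġis', 'Ġabout', 'Ġtokenization', '.']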
Decoding maps integer IDs back to readable text. It has two layers: first map each ID back to its token string via the vocabulary, then map the byte-level characters back to raw bytes and decode them as UTF-8.
# inverse of byte_encoder: printable character -> original byte value
byte_decoder = {char: byte for byte, char in byte_encoder.items()}

def decode(ids):
    # given ids (a list of integers), return a Python string
    tokens = [decoder[idx] for idx in ids]
    text = "".join(tokens)
    return bytearray([byte_decoder[c] for c in text]).decode("utf-8")
运行示例:
>>> print(decode([264, 270, 305, 307, 13]))
>>> This is about tokenization.
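As a final sanity check, encoding and then decoding should reproduce the original text for any input covered by the byte-level base vocabulary (any UTF-8 text, in fact); the sentence below is just an illustrative example:
text = "Hopefully you will understand how tokenizers are trained."
assert decode(tokenize(text)) == text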
Summary: this article uses a very short corpus to walk through the core implementation logic and training process of the BPE tokenization model. The 256 base byte-level entries guarantee that unknown tokens can always be handled, while the subword merges preserve much of the words' original information, which is why BPE is so widely used today.