
The tokenizers processors module: tokenizer post-processing

Module overview

The processors module applies extra transformations to the encoded text, adding special tokens. For example, the sentence "this is a text, this is another text." becomes "[CLS] this is a text [SEP] this is another text [SEP]" after post-processing.

Every class implemented in the processors module is a subclass of PostProcessor. The official documentation describes PostProcessor as follows: roughly, it provides advanced construction features compatible with some of the Transformer-based SoTA models; for example, for BERT it wraps the tokenized sentence in [CLS] and [SEP] tokens.

Post-Processing: Provides advanced construction features to be compatible with some of the Transformers-based SoTA models. For instance, for BERT it would wrap the tokenized sentence around [CLS] and [SEP] tokens.

The processors module implements four post-processors: BertProcessing, ByteLevel, RobertaProcessing, and TemplateProcessing.
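
As a minimal, self-contained sketch of the idea (the toy word-level vocabulary and the Whitespace pre-tokenizer below are only illustrative, not taken from the examples that follow), a post-processor can also be attached to a tokenizer via tokenizer.post_processor, in which case encode applies it automatically:

>>> from tokenizers import Tokenizer, models, pre_tokenizers, processors

>>> vocab = {"[UNK]": 0, "[CLS]": 1, "[SEP]": 2, "hello": 3, "world": 4}  # toy vocabulary
>>> tok = Tokenizer(models.WordLevel(vocab, unk_token="[UNK]"))
>>> tok.pre_tokenizer = pre_tokenizers.Whitespace()
>>> tok.post_processor = processors.TemplateProcessing(
        single="[CLS] $A [SEP]",
        special_tokens=[("[CLS]", 1), ("[SEP]", 2)],
    )
>>> tok.encode("hello world").tokens
['[CLS]', 'hello', 'world', '[SEP]']
>>> tok.encode("hello world").ids
[1, 3, 4, 2]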

Module usage

1. BertProcessing
tokenizers.processors.BertProcessing(sep, cls)

The BertProcessing post-processor adds the BERT special tokens to the encoded text. Both sep and cls are (str, int) tuples: the first item is the special token string and the second is its id.

>>> from datasets import load_dataset
>>> from tokenizers import Tokenizer, models, normalizers, pre_tokenizers, trainers, processors

>>> dataset = load_dataset("wikitext", "wikitext-103-raw-v1", split="validation")

>>> def batch_iterator():
        for i in range(0, len(dataset), 1000):
            yield dataset[i: i + 1000]["text"]

>>> tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))
>>> tokenizer.pre_tokenizer = pre_tokenizers.BertPreTokenizer()

>>> special_tokens = ["[UNK]", "[PAD]", "[CLS]", "[SEP]", "[MASK]"]
>>> trainer = trainers.WordPieceTrainer(special_tokens=special_tokens)  # do not shadow the trainers module
>>> tokenizer.train_from_iterator(batch_iterator(), trainer)

>>> tokenizer.encode("this is a text!!!").ids
[758, 560, 64, 4413, 5, 5, 5]

>>> processor = processors.BertProcessing(sep=("[SEP]", tokenizer.token_to_id("[SEP]")),
                                          cls=("[CLS]", tokenizer.token_to_id("[CLS]")))
>>> processor.process(tokenizer.encode("this is a text!!!")).ids
[2, 758, 560, 64, 4413, 5, 5, 5, 3]
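
Instead of calling process by hand, the processor can also be attached to the tokenizer itself, so that encode adds the special tokens automatically (a usage sketch; the ids assume the tokenizer trained above):

>>> tokenizer.post_processor = processor
>>> tokenizer.encode("this is a text!!!").ids
[2, 758, 560, 64, 4413, 5, 5, 5, 3]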

2. ByteLevel
tokenizers.processors.ByteLevel(trim_offsets=True)

The ByteLevel post-processor is used after a byte-level BPE model. Tokens produced by byte-level BPE carry the spaces in front of them, and those spaces are included in the offsets. If you do not want the spaces to be part of the offsets, set trim_offsets=True, which is also the default behaviour.

>>> dataset = load_dataset("wikitext", "wikitext-103-raw-v1", split="validation")

>>> def batch_iterator():
        for i in range(0, len(dataset), 1000):
            yield dataset[i: i + 1000]["text"]

>>> tokenizer = Tokenizer(models.BPE())
>>> tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
>>> trainer = trainers.BpeTrainer(special_tokens=["<|endoftext|>"])
>>> tokenizer.train_from_iterator(batch_iterator(), trainer)

>>> processor = processors.ByteLevel(trim_offsets=False)
>>> processor.process(tokenizer.encode("this is a text!!!")).offsets
[(0, 4), (4, 7), (7, 9), (9, 14), (14, 15), (15, 16), (16, 17)]
>>> processor = processors.ByteLevel()
>>> processor.process(tokenizer.encode("this is a text!!!")).offsets
[(0, 4), (5, 7), (8, 9), (10, 14), (14, 15), (15, 16), (16, 17)]
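
Slicing the original string with these offsets makes the difference concrete (plain Python, reusing the offsets printed above):

>>> text = "this is a text!!!"
>>> text[4:7]   # trim_offsets=False keeps the leading space in the offset
' is'
>>> text[5:7]   # trim_offsets=True (the default) trims it away
'is'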

3. RobertaProcessing
tokenizers.processors.RobertaProcessing(sep, cls, trim_offsets=True, add_prefix_space=True)

The output of the RobertaProcessing post-processor is meant for RoBERTa models. RoBERTa also uses a byte-level BPE model, so its tokens likewise carry leading spaces. The add_prefix_space argument should match the add_prefix_space value used in the pre_tokenizer, because prepending a space to the text also affects the offsets.

>>> dataset = load_dataset("wikitext", "wikitext-103-raw-v1", split="validation")

>>> def batch_iterator():
        for i in range(0, len(dataset), 1000):
            yield dataset[i: i + 1000]["text"]

>>> tokenizer = Tokenizer(models.BPE(unk_token="<unk>"))
>>> tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=True)
>>> special_tokens = ["<unk>", "<pad>", "<s>", "</s>", "<mask>"]
>>> trainer = trainers.BpeTrainer(special_tokens=special_tokens)
>>> tokenizer.train_from_iterator(batch_iterator(), trainer)

>>> processor = processors.RobertaProcessing(sep=("</s>", tokenizer.token_to_id("</s>")),
                                             cls=("<s>", tokenizer.token_to_id("<s>")),
                                             trim_offsets=True,
                                             add_prefix_space=True)
>>> processor.process(tokenizer.encode("this is a text!!!")).ids
[2, 522, 305, 176, 4452, 5, 5, 5, 3]
>>> processor.process(tokenizer.encode("this is a text!!!")).offsets
[(0, 0), (0, 4), (5, 7), (8, 9), (10, 14), (14, 15), (15, 16), (16, 17), (0, 0)]
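
Once the post-processor is attached, the whole pipeline, post-processing included, can be serialized and reloaded (a sketch; the file name is only an example):

>>> tokenizer.post_processor = processor
>>> tokenizer.save("roberta-tokenizer.json")
>>> tokenizer = Tokenizer.from_file("roberta-tokenizer.json")
>>> tokenizer.encode("this is a text!!!").ids
[2, 522, 305, 176, 4452, 5, 5, 5, 3]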

4. TemplateProcessing
tokenizers.processors.TemplateProcessing(single, pair, special_tokens)

The TemplateProcessing post-processor provides template-based post-processing for adding special tokens to the input sequences. The single argument is the template used for a single sequence, pair is the template used for a pair of sequences, and special_tokens lists the special tokens referenced by the templates, each as a (str, int) tuple.

The pieces of single and pair can be written in three ways:

# specify only the sequence; the type_id defaults to 0
$A or $B
# specify only the type_id; the sequence defaults to A
$0, $1, ...
# specify both the sequence and the type_id
$A:0, $B:1
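
When the type_id is omitted it defaults to 0, so the BERT-style templates used in the example below can also be written more compactly (a sketch of the equivalent short form, matching the one used in the Hugging Face quicktour):

# equivalent templates, relying on the default type_id of 0
single = "[CLS] $A [SEP]"
pair = "[CLS] $A [SEP] $B:1 [SEP]:1"
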
>>> dataset = load_dataset("wikitext", "wikitext-103-raw-v1", split="validation")

>>> def batch_iterator():
        for i in range(0, len(dataset), 1000):
            yield dataset[i: i + 1000]["text"]

>>> tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))
>>> tokenizer.normalizer = normalizers.BertNormalizer()
>>> tokenizer.pre_tokenizer = pre_tokenizers.BertPreTokenizer()
>>> special_tokens = ["[UNK]", "[PAD]", "[CLS]", "[SEP]", "[MASK]"]
>>> trainer = trainers.WordPieceTrainer(special_tokens=special_tokens)
>>> tokenizer.train_from_iterator(batch_iterator(), trainer)
>>> tokenizer.encode("this is a text!!!").ids
[482, 359, 38, 3350, 5, 5, 5]

>>> cls_token_id = tokenizer.token_to_id("[CLS]")
>>> sep_token_id = tokenizer.token_to_id("[SEP]")
>>> post_processor = processors.TemplateProcessing(
        single="[CLS]:0 $A:0 [SEP]:0",
        pair="[CLS]:0 $A:0 [SEP]:0 $B:1 [SEP]:1",
        special_tokens=[
            ("[CLS]", cls_token_id),
            ("[SEP]", sep_token_id),
        ],
    )
>>> post_processor.process(tokenizer.encode("this is a text!!!")).ids
[2, 482, 359, 38, 3350, 5, 5, 5, 3]
>>> post_processor.process(tokenizer.encode("this is a text!!!")).type_ids
[0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> post_processor.process(tokenizer.encode("this is a text!!!"), tokenizer.encode("this is another text!!!")).ids
[2, 482, 359, 38, 3350, 5, 5, 5, 3, 482, 359, 1061, 3350, 5, 5, 5, 3]
>>> post_processor.process(tokenizer.encode("this is a text!!!"), tokenizer.encode("this is another text!!!")).type_ids
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1]
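
As with the other post-processors, the template can be attached to the tokenizer, after which sentence pairs are encoded in a single call (a usage sketch; the expected ids and type_ids match the output above):

>>> tokenizer.post_processor = post_processor
>>> tokenizer.encode("this is a text!!!", "this is another text!!!").ids
[2, 482, 359, 38, 3350, 5, 5, 5, 3, 482, 359, 1061, 3350, 5, 5, 5, 3]
>>> tokenizer.encode("this is a text!!!", "this is another text!!!").type_ids
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1]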