
Python/PyTorch basics: two ways to pad with the BERT model tokenizer

How do you pad with a BertTokenizer? Two approaches are shown below.

Method 1

import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./distilbert-base-uncased-finetuned-sst-2-english")

# encode each sentence in the column, adding the [CLS]/[SEP] special tokens
x_train_tokenized = x_train[0].apply(lambda ii: tokenizer.encode(ii, add_special_tokens=True))

# padding: find the longest sequence, then right-pad every row with 0 (the [PAD] id)
max_len = 0
for i in x_train_tokenized.values:
    if len(i) > max_len:
        max_len = len(i)
x_train_tokenized = np.array([i + [0] * (max_len - len(i)) for i in x_train_tokenized.values])
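With manual zero-padding like this, downstream models also expect an attention mask so the padded positions are ignored. A minimal sketch of building one, assuming x_train_tokenized is the padded NumPy array produced above (this mask step is not part of the original method):

import numpy as np
import torch

# 1 marks real tokens, 0 marks padding (this vocabulary's [PAD] id is 0)
attention_mask = np.where(x_train_tokenized != 0, 1, 0)

input_ids = torch.tensor(x_train_tokenized)
attention_mask = torch.tensor(attention_mask)
# usage: model(input_ids, attention_mask=attention_mask)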

Method 2

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./distilbert-base-uncased-finetuned-sst-2-english")

# pad every sentence to a fixed length of 66 tokens, truncating longer ones,
# and return PyTorch tensors
x_train_tokenized = x_train[0].apply(lambda ii: tokenizer(ii,
                                                          padding="max_length",
                                                          truncation=True,
                                                          return_tensors="pt",
                                                          max_length=66))
The input_ids tensor for one row looks like this:
tensor([[  101,  5342,  2047,  3595,  8496,  2013,  1996, 18643,  3197,   102,
             0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
             0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
             0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
             0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
             0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
             0,     0,     0,     0,     0,     0]])
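Note that calling the tokenizer row by row with .apply leaves one 1×66 tensor per row. The tokenizer can instead be called on the whole column at once, which pads and batches everything in a single step. A minimal sketch, assuming x_train[0] holds the raw sentences:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./distilbert-base-uncased-finetuned-sst-2-english")

# tokenize the whole column in one call; padding=True pads to the
# longest sequence in the batch rather than a fixed max_length
batch = tokenizer(list(x_train[0]),
                  padding=True,
                  truncation=True,
                  return_tensors="pt")

input_ids = batch["input_ids"]            # shape: (num_rows, seq_len)
attention_mask = batch["attention_mask"]  # built automatically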