
Huggingface Transformers Deberta-v3-base: Installation Pitfall Notes

When downloading pretrained models with transformers, models such as bert-base-cased load through AutoTokenizer and AutoModel without much trouble. Downloading deberta-v3-base, however, can trigger a series of errors.

First, load the tokenizer the usual way:

  from transformers import AutoTokenizer, AutoModel, AutoConfig
  checkpoint = 'microsoft/deberta-v3-base'
  tokenizer = AutoTokenizer.from_pretrained(checkpoint)

This raises the following error:

  ValueError: Couldn't instantiate the backend tokenizer from one of:
  (1) a `tokenizers` library serialization file,
  (2) a slow tokenizer instance to convert or
  (3) an equivalent slow tokenizer class to instantiate and convert.
  You need to have sentencepiece installed to convert a slow tokenizer to a fast one.

The fix is:

pip install transformers sentencepiece
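
If pip reports success but the error persists, make sure the install landed in the interpreter you are actually running (and restart the kernel in a notebook environment). As a quick sanity check, a hypothetical session:

  import sentencepiece as spm
  # Any ImportError here means the package went into a different environment.
  print(spm.__version__)  # e.g. 0.1.99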

Importing the tokenizer again then raises another error:

  ImportError:
  DebertaV2Converter requires the protobuf library but it was not found in your environment.
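
The fix follows the same pattern, install the missing dependency:

pip install protobuf

(If loading afterwards fails with a protobuf descriptor TypeError, pinning an older release such as protobuf==3.20.3 is a known workaround.)

With both packages in place, the full loading sequence should run. A minimal end-to-end sketch, with a smoke test added for illustration:

  from transformers import AutoTokenizer, AutoModel, AutoConfig

  checkpoint = 'microsoft/deberta-v3-base'

  # Loading the SentencePiece-based tokenizer requires sentencepiece and protobuf.
  tokenizer = AutoTokenizer.from_pretrained(checkpoint)
  config = AutoConfig.from_pretrained(checkpoint)
  model = AutoModel.from_pretrained(checkpoint)

  # Smoke test: tokenize a sentence and run one forward pass.
  inputs = tokenizer('Hello, DeBERTa!', return_tensors='pt')
  outputs = model(**inputs)
  print(outputs.last_hidden_state.shape)  # (1, seq_len, 768) for deberta-v3-base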