
Replacing a Pre-trained Model's Vocabulary with a SentencePiece Model

sentencepiece

I have recently been running some downstream tasks with the DeBERTa model, and along the way learned about SentencePiece models, which can be used to replace the vocabulary of a pre-trained model.

sentencepiece is Google's open-source text tokenizer tool. It provides four segmentation algorithms: char, word, byte-pair-encoding (bpe), and unigram language model (unigram, the default).

In my comparison experiments, bpe works reasonably well for Chinese text, but it still has some problems, so it is best to adjust the resulting vocabulary manually afterwards.

By training on your own corpus, you end up with a .model file and a .vocab file ready for use.
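
As a preview of the end goal, here is a minimal sketch of dropping such a .model file into a DeBERTa tokenizer. It assumes the Hugging Face transformers library (not otherwise used in this post); DeBERTa-v2/v3 tokenizers wrap a SentencePiece model, so the trained file can be passed as vocab_file. The file name mypiece.model matches the model_prefix used in the training command later in this post.

# Minimal sketch, assuming the Hugging Face transformers library is installed.
# "mypiece.model" is the SentencePiece model produced by the training step below.
from transformers import DebertaV2Tokenizer

tokenizer = DebertaV2Tokenizer(vocab_file="mypiece.model")  # DeBERTa-v2/v3 tokenizers wrap a SentencePiece model
print(tokenizer.tokenize("使用sentencepiece模型替换词表"))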

The overall workflow is as follows:

  • Install sentencepiece

         The process is fairly simple and the GitHub README documents it in detail; I used vcpkg for the installation.

Installation

Python module

SentencePiece provides a Python wrapper that supports both SentencePiece training and segmentation. You can install the Python binary package of SentencePiece with pip:

% pip install sentencepiece

For more detail, see the Python module documentation.

Build and install SentencePiece command line tools from C++ source

The following tools and libraries are required to build SentencePiece:

  • cmake
  • C++11 compiler
  • gperftools library (optional, 10-40% performance improvement can be obtained.)

On Ubuntu, the build tools can be installed with apt-get:

% sudo apt-get install cmake build-essential pkg-config libgoogle-perftools-dev

Then, you can build and install the command line tools as follows.

% git clone https://github.com/google/sentencepiece.git
% cd sentencepiece
% mkdir build
% cd build
% cmake ..
% make -j $(nproc)
% sudo make install
% sudo ldconfig -v

On OSX/macOS, replace the last command with sudo update_dyld_shared_cache

Build and install using vcpkg

You can download and install sentencepiece using the vcpkg dependency manager:

git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install sentencepiece

The sentencepiece port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please create an issue or pull request on the vcpkg repository.

  • Train a sentencepiece model

        This is also essentially a one-liner, but if you train with bpe, training time grows sharply as vocab_size increases. Training on a 20 GB+ Chinese corpus with vocab_size set to 320000 ran in the background for a full day.

Train SentencePiece Model

% spm_train --input=<input> --model_prefix=<model_name> --vocab_size=8000 --character_coverage=1.0 --model_type=<type>
  • --input: one-sentence-per-line raw corpus file. No need to run tokenizer, normalizer or preprocessor. By default, SentencePiece normalizes the input with Unicode NFKC. You can pass a comma-separated list of files.
  • --model_prefix: output model name prefix. <model_name>.model and <model_name>.vocab are generated.
  • --vocab_size: vocabulary size, e.g., 8000, 16000, or 32000
  • --character_coverage: amount of characters covered by the model, good defaults are: 0.9995 for languages with rich character set like Japanese or Chinese and 1.0 for other languages with small character set.
  • --model_type: model type. Choose from unigram (default), bpe, char, or word. The input sentence must be pretokenized when using word type.

        The Python code is as follows:

import sentencepiece as spm

spm.SentencePieceTrainer.Train(input='/data/data.txt', model_prefix='/data/sentencepiece/mypiece', vocab_size=320000, character_coverage=1, model_type='bpe', num_threads=96)

 After that, you can load the trained model and start using it.
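
For example, a minimal usage sketch with the SentencePiece Python API (the model path matches the model_prefix used above):

import sentencepiece as spm

# Load the model produced by the training step above.
sp = spm.SentencePieceProcessor(model_file='/data/sentencepiece/mypiece.model')

text = '使用sentencepiece模型替换词表'
pieces = sp.encode(text, out_type=str)   # subword pieces
ids = sp.encode(text, out_type=int)      # corresponding vocabulary ids

print(pieces)
print(ids)
print(sp.decode(ids))                    # round-trip back to the original text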

On the results:

  • First, a question: is a larger vocabulary, i.e. a larger vocab_size, always better?

        Searching around, I found that a Zhihu author, 烛之文, ran related experiments on a Chinese news text dataset, comparing an spm+CNN model with vocabulary sizes of 8000, 20000, and 320000 (the largest vocabulary SentencePiece could train there): https://zhuanlan.zhihu.com/p/307485012

        His results show that as the vocabulary grows, the model reaches its "potential best" on the training set earlier, while performance on the validation set keeps getting worse. In theory, shouldn't a larger vocabulary be better, since it lowers the chance of out-of-vocabulary words? His explanation is that in this news dataset every label has very distinctive features, and those features can be composed from high-frequency words. Enlarging the vocabulary effectively lets the model pick up many noisy, label-specific features during training, which hurts performance on the validation set.
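
To see concretely what a larger vocab_size changes, here is a small sketch. The two model paths are hypothetical, standing in for models trained on the same corpus with different vocab_size values; it encodes the same sentence with both and compares the number of pieces. The larger vocabulary tends to produce fewer, longer, more word-like pieces.

import sentencepiece as spm

# Hypothetical models trained on the same corpus with different vocab_size values.
sp_small = spm.SentencePieceProcessor(model_file='/data/sentencepiece/piece_8k.model')
sp_large = spm.SentencePieceProcessor(model_file='/data/sentencepiece/piece_320k.model')

text = '最近在用DeBERTa模型跑一些下游任务'
for name, sp in [('vocab 8000', sp_small), ('vocab 320000', sp_large)]:
    pieces = sp.encode(text, out_type=str)
    print(name, len(pieces), pieces)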
