How to upload transformer weights and tokenizers from AllenNLP to HuggingFace
This is the first of a series of mini-tutorials to help you with various aspects of the AllenNLP library.
If you’re new to AllenNLP, consider first going through the official guide, as these tutorials will be focused on more advanced use cases.
Please keep in mind these tutorials are written for version 1.0 and greater of AllenNLP and may not be relevant for older versions.
One way AllenNLP is commonly used is for fine-tuning transformer models to specific tasks. We host several of these models on our demo site, such as a BERT model applied to the SQuAD v1.1 question answering task, and a RoBERTa model applied to the SNLI textual entailment task.
You can find the code and configuration files used to train these models in the AllenNLP Models repository.
This tutorial will show you how to take a fine-tuned transformer model, like one of these, and upload the weights and/or the tokenizer to HuggingFace’s model hub.
Note that we are talking about uploading only the transformer part of your model, not including any task-specific heads that you’re using.
First of all, you’ll need to know how a transformer model and tokenizer are actually integrated into an AllenNLP model.
This is usually done by providing your dataset reader with a PretrainedTransformerTokenizer and a matching PretrainedTransformerIndexer, and then providing your model with the corresponding PretrainedTransformerEmbedder.
If your dataset reader and model are already general enough that they can accept any type of tokenizer / token indexer and token embedder, respectively, then the only thing you need to do in order to utilize a pretrained transformer in your model is tweak your training configuration file.
With the RoBERTa SNLI model, for example, the “dataset_reader” part of the config would look like this:
```jsonnet
"dataset_reader": {
    "type": "snli",
    "tokenizer": {
        "type": "pretrained_transformer",
        "model_name": "roberta-large",
        "add_special_tokens": false
    },
    "token_indexers": {
        "tokens": {
            "type": "pretrained_transformer",
            "model_name": "roberta-large",
            "max_length": 512
        }
    }
}
```
While the “model” part of the config would look like this:
```jsonnet
"model": {
    "type": "basic_classifier",
    "text_field_embedder": {
        "token_embedders": {
            "tokens": {
                "type": "pretrained_transformer",
                "model_name": "roberta-large",
                "max_length": 512
            }
        }
    },
    ...
}
```
Once you’ve trained your model, just follow these 3 steps to upload the transformer part of your model to HuggingFace.
Step 1: Load your tokenizer and your trained model.
If you get a ConfigurationError during this step that says something like “foo is not a registered name for bar”, that just means you need to import any other classes that your model or dataset reader use so they get registered.
Step 2: Serialize your tokenizer and just the transformer part of your model using the HuggingFace transformers API.
Step 3: Upload the serialized tokenizer and transformer to the HuggingFace model hub.
Finally, just follow the steps from HuggingFace’s documentation to upload your new cool transformer with their CLI.
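At the time this tutorial was written (transformers 3.x), the upload flow went through transformers-cli; the commands below are a sketch of that flow with a placeholder directory name, and newer versions of the library use a different (huggingface-cli / git-based) workflow, so check the current HuggingFace documentation first.

```shell
# Log in with your HuggingFace account credentials, then upload the
# serialized directory produced in Step 2 ("my-fine-tuned-model" is a
# placeholder).
transformers-cli login
transformers-cli upload my-fine-tuned-model
```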
That’s it! Happy NLP-ing!
If you find any issues with this tutorial please leave a comment or open a new issue in the AllenNLP repo and give it the “Tutorials” tag:
Follow @allen_ai and @ai2_allennlp on Twitter, and subscribe to the AI2 Newsletter to stay current on news and research coming out of AI2.