This post is part of my series of study notes on the full huggingface.transformers documentation.
Series index: huggingface transformers documentation study notes (continuously updated…)
This part covers: https://huggingface.co/docs/transformers/main/en/training
Using text classification as the example task, this part shows how to fine-tune a pretrained model with transformers.
Since I mainly use PyTorch, this post only covers fine-tuning with transformers.Trainer (docs: https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer) and with native PyTorch.
Because the code in the tutorial is scattered, I present the complete script at the end of each of these two parts.
In addition, ① I need to use my own datasets, and ② my server cannot easily go through a proxy, which makes the datasets package inconvenient. So this post also spends some space on how to achieve the same functionality without datasets (while still covering the datasets usage that appears in this part of the documentation).
Also note: some of the code was run in a Jupyter notebook and some as scripts, and the environment changed along the way, so the outputs may not be fully consistent.
One Python environment in which the code in this post works: Python 3.8, PyTorch 1.8.1, cudatoolkit 10.2, transformers 4.18.0, datasets 2, scikit-learn 1.0.2
(From what I have seen, other versions should also work; the differences are minor.)
Training a pretrained model on a dataset for a specific task is called fine-tuning.
I wrote a separate post on how to use the datasets package; see that post for details.
In the end, a small subset of the Yelp Reviews dataset (dataset = load_dataset("yelp_review_full")) is used as the data for this experiment.
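A minimal sketch of that loading step, in case you can access the Hub directly (I load a local copy with load_from_disk instead, as the full scripts later in this post show):

from datasets import load_dataset

# downloads the full Yelp Review Full dataset from the HuggingFace Hub
dataset = load_dataset("yelp_review_full")
print(dataset["train"][100])  # each example is a dict with a 'label' and a 'text' field

Tokenization and sampling a small subset then look like this: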
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mypath/bert-base-cased")

def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True, max_length=512)

tokenized_datasets = dataset.map(tokenize_function, batched=True)

small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
This dataset has 5 label classes.
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("mypath/bert-base-cased", num_labels=5)
Output:
Some weights of the model checkpoint at mypath/bert-base-cased were not used when initializing BertForSequenceClassification: ['cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at mypath/bert-base-cased and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
(For an explanation of this output, see my earlier post about the "Some weights of the model checkpoint at ... were not used when initializing Bert..." warning.)
The same thing can also be written like this:
from transformers import AutoConfig, AutoModelForSequenceClassification

model_path = "mypath/bert-base-cased"
config = AutoConfig.from_pretrained(model_path, num_labels=5)
model = AutoModelForSequenceClassification.from_pretrained(model_path, config=config)
The TrainingArguments class (docs: https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments) holds all the tunable hyperparameters and training settings. This tutorial keeps the default hyperparameters.
Define where checkpoints are stored:
from transformers import TrainingArguments
training_args = TrainingArguments(output_dir="test_trainer")
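The tutorial keeps the defaults, but this is also where hyperparameters would go if you wanted to change them. A sketch with a few commonly adjusted arguments (the values are only illustrative, not what the tutorial uses):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="test_trainer",
    num_train_epochs=3,               # default is 3
    per_device_train_batch_size=8,    # batch size per GPU/CPU; default is 8
    learning_rate=5e-5,               # default initial learning rate for the optimizer
    weight_decay=0.01,                # default is 0.0
    logging_steps=100,                # how often to log the training loss; default is 500
)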
The Trainer does not evaluate the model automatically, so you need to pass it a function that computes and reports metrics.
For more on metrics, see: https://huggingface.co/docs/datasets/metrics.html
The accuracy metric also has its own official page on the Hugging Face website.
Load the accuracy metric:
import datasets
import numpy as np

metric = datasets.load_metric("accuracy")
Calling the compute() method on metric then computes the accuracy of the predictions (obtained from the logits in the model output):
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)
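To make the shapes concrete, a tiny made-up example of what compute_metrics receives (not numbers from the actual run):

import numpy as np

logits = np.array([[0.1, 2.0, 0.3], [1.5, 0.2, 0.1]])  # shape (batch_size, num_labels)
labels = np.array([1, 2])
predictions = np.argmax(logits, axis=-1)  # -> array([1, 0])
# accuracy would be 0.5 here: the first prediction matches its label, the second does not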
If you want to monitor how the metrics change during fine-tuning, set the evaluation_strategy argument in TrainingArguments so that the metric on the evaluation set is reported at the end of every epoch:
training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
Define the Trainer object:
from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,
)
Start training:
trainer.train()
Output when run as a script (note that the text column is not passed to the model):
The following columns in the training set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
myenv/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  warnings.warn(
***** Running training *****
  Num examples = 1000
  Num Epochs = 3
  Instantaneous batch size per device = 8
  Total train batch size (w. parallel, distributed & accumulation) = 32
  Gradient Accumulation steps = 1
  Total optimization steps = 96
myenv/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:65: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
  warnings.warn('Was asked to gather along dimension 0, but all '
The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
***** Running Evaluation *****
  Num examples = 1000
  Batch size = 8
{'eval_loss': 1.219325304031372, 'eval_accuracy': 0.487, 'eval_runtime': 5.219, 'eval_samples_per_second': 191.609, 'eval_steps_per_second': 6.131, 'epoch': 1.0}
myenv/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:65: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
  warnings.warn('Was asked to gather along dimension 0, but all '
The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
***** Running Evaluation *****
  Num examples = 1000
  Batch size = 8
{'eval_loss': 1.0443027019500732, 'eval_accuracy': 0.57, 'eval_runtime': 5.1937, 'eval_samples_per_second': 192.539, 'eval_steps_per_second': 6.161, 'epoch': 2.0}
myenv/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:65: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
  warnings.warn('Was asked to gather along dimension 0, but all '
The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
***** Running Evaluation *****
  Num examples = 1000
  Batch size = 8
{'eval_loss': 0.9776290655136108, 'eval_accuracy': 0.598, 'eval_runtime': 5.2137, 'eval_samples_per_second': 191.803, 'eval_steps_per_second': 6.138, 'epoch': 3.0}
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 60.8009, 'train_samples_per_second': 49.341, 'train_steps_per_second': 1.579, 'train_loss': 1.0931960741678874, 'epoch': 3.0}
The output in a Jupyter notebook looks a bit cleaner than the script output:
The following columns in the training set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
myenv/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
***** Running training *****
Num examples = 1000
Num Epochs = 3
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 32
Gradient Accumulation steps = 1
Total optimization steps = 96
myenv/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:65: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
***** Running Evaluation *****
  Num examples = 1000
  Batch size = 8
myenv/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:65: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
  warnings.warn('Was asked to gather along dimension 0, but all '
The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
***** Running Evaluation *****
  Num examples = 1000
  Batch size = 8
myenv/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:65: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
  warnings.warn('Was asked to gather along dimension 0, but all '
The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
***** Running Evaluation *****
  Num examples = 1000
  Batch size = 8
Training completed. Do not forget to share your model on huggingface.co/models =)
TrainOutput(global_step=96, training_loss=1.1009167830149333, metrics={'train_runtime': 60.9212, 'train_samples_per_second': 49.244, 'train_steps_per_second': 1.576, 'total_flos': 789354427392000.0, 'train_loss': 1.1009167830149333, 'epoch': 3.0})
Because I also ran the code on Colab for debugging, here is the Colab output as well (I used a GPU there too, but it was still much slower than running locally, and I'm not sure why. My local machine has 4 GPUs, but the slowdown is clearly more than 4x):
The following columns in the training set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
/usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
FutureWarning,
***** Running training *****
Num examples = 1000
Num Epochs = 3
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 375
The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
***** Running Evaluation *****
  Num examples = 1000
  Batch size = 8
The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
***** Running Evaluation *****
  Num examples = 1000
  Batch size = 8
The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
***** Running Evaluation *****
  Num examples = 1000
  Batch size = 8
Training completed. Do not forget to share your model on huggingface.co/models =)
TrainOutput(global_step=375, training_loss=1.2140440266927084, metrics={'train_runtime': 780.671, 'train_samples_per_second': 3.843, 'train_steps_per_second': 0.48, 'total_flos': 789354427392000.0, 'train_loss': 1.2140440266927084, 'epoch': 3.0})
(Also note the torch.nn.parallel warning: it does not appear on Colab. I suspect this is either because Colab has only one GPU, or because of the torch version (locally I use PyTorch 1.8.1, Colab has PyTorch 1.10). That is hard to verify, so it is just a guess.)
The complete Trainer-based script:

import datasets
import numpy as np
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments, Trainer

dataset = datasets.load_from_disk("datasets/yelp_full_review_disk")

tokenizer = AutoTokenizer.from_pretrained("pretrained_models/bert-base-cased")

def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True, max_length=512)

tokenized_datasets = dataset.map(tokenize_function, batched=True)

small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))

model = AutoModelForSequenceClassification.from_pretrained("pretrained_models/bert-base-cased", num_labels=5)

training_args = TrainingArguments(output_dir="pt_save_pretrained", evaluation_strategy="epoch")

metric = datasets.load_metric('datasets/accuracy.py')

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,
)

trainer.train()
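If you also want a final evaluation after training with the Trainer, you can call trainer.evaluate(); this is not in the tutorial script above, just a small usage note:

# runs evaluation on the eval_dataset passed to the Trainer and returns the metrics from compute_metrics
eval_results = trainer.evaluate()
print(eval_results)  # a dict containing e.g. eval_loss and eval_accuracy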
The Trainer is convenient, but it does a lot behind your back and is hard to debug; sometimes it is easier to just write the training loop in native PyTorch.
For background on this part, see my earlier post of study notes on Deep Learning with PyTorch: A 60 Minute Blitz.
One training loop:
feed the training data into the model to get predictions → compute the loss → compute the gradients → update the parameters → feed the training data in again, and repeat.
If you are continuing in the same notebook right after the previous code, it is best to delete the earlier model and Trainer and clear the CUDA cache first to free up memory, or simply restart the notebook:
import torch

del model
del trainer
torch.cuda.empty_cache()
Preprocess the dataset (later I show how to build the required dataset from native Python objects):
from torch.utils.data import DataLoader

tokenized_datasets = tokenized_datasets.remove_columns(["text"])
# drop the text column, which the model does not use
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
# rename the label column to labels, the keyword that the model's forward() expects
# (I don't know why the dataset gets away with calling it just label...)
tokenized_datasets.set_format("torch")
# convert the values to torch.Tensor objects

small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
# sample a small subset so the tutorial runs quickly

train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8)
eval_dataloader = DataLoader(small_eval_dataset, batch_size=8)
# wrap the datasets in DataLoaders; each batch is a dict of key-value pairs,
# which is later passed to the model via **batch
Using your own dataset:
The example data used here was built like this:
example_dict={'labels':dataset['train']['label'],'text':dataset['train']['text']}
Take a look at the data:
print(type(example_dict['labels']))
print(example_dict['labels'][12345])
print(type(example_dict['text']))
print(example_dict['text'][12345])
Output:
<class 'list'>
2
<class 'list'>
I went here in search of a crepe with Nutella and I got a really good crepe. I wouldn't exactly say this place is authentic French because you've got Americans cooking the food, but my crepe was still good. \n\nIt doesn't taste like the ones I had in France, Carmon's puts a twist on (or maybe it was just overcooked) theirs by making the crepe more firm. \n\nThe whipped cream was also made fresh and delightful. The prices were horrid though.\n\nCrepes don't cost that much to make, so they're clearly overpricing here. Price is the only reason I won't come back so often.
① Use torch's Dataset and DataLoader classes (what you end up with is essentially the same as what we got from datasets.Dataset above):
import torch
from torch.utils.data import Dataset, DataLoader

# define the Dataset
class YelpDataset(Dataset):
    def __init__(self, dict_data) -> None:
        """
        dict_data: data in dict form; the key labels maps to a list of (integer) labels,
        and the key text maps to a list of texts
        """
        super(YelpDataset, self).__init__()
        self.data = dict_data

    def __getitem__(self, index):
        # returns a list: the first element is the text, the second is the label
        return [self.data['text'][index], self.data['labels'][index]]

    def __len__(self):
        return len(self.data['text'])

# define the collate function
def collate_fn(batch):
    pt_batch = tokenizer([b[0] for b in batch], padding=True, truncation=True, max_length=512,
                         return_tensors='pt')
    labels = torch.tensor([b[1] for b in batch])
    return {'labels': labels, 'input_ids': pt_batch['input_ids'], 'token_type_ids': pt_batch['token_type_ids'],
            'attention_mask': pt_batch['attention_mask']}

train_data = YelpDataset(example_dict)
train_dataloader = DataLoader(train_data, batch_size=8, shuffle=True, collate_fn=collate_fn)
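A quick sanity check of this custom DataLoader (just a sketch; the token_type_ids key is there because this is a BERT tokenizer):

# pull out one batch and inspect it
batch = next(iter(train_dataloader))
print(batch.keys())              # dict_keys(['labels', 'input_ids', 'token_type_ids', 'attention_mask'])
print(batch['input_ids'].shape)  # (8, <longest sequence length in this batch, at most 512>)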
② Hand-write the batching yourself (no DataLoader):
In each training loop, iterate like this (most variable names should be self-explanatory, so I won't describe them in detail; a sketch of the omitted per-batch step follows the code):
# training part (the evaluation part is similar)
train_data_length = len(example_dict['labels'])
if train_data_length % batch_size == 0:
    batch_num = train_data_length // batch_size
else:
    batch_num = train_data_length // batch_size + 1

for b in range(batch_num):
    index_begin = b * batch_size
    index_end = min(train_data_length, index_begin + batch_size)
    this_batch_text = example_dict['text'][index_begin:index_end]
    this_batch_labels = example_dict['labels'][index_begin:index_end]
    pt_batch = tokenizer(this_batch_text, padding=True, truncation=True, max_length=512, return_tensors='pt')
    # I was too lazy to split pt_batch up by key; the training code that runs here is
    # essentially the same as with the DataLoader, so it is omitted
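For completeness, a sketch of what the omitted training step inside that loop could look like (mirroring the DataLoader-based loop shown later; model, device, and optimizer are assumed to be defined as in the following sections):

    # continue inside the for b in range(batch_num) loop:
    pt_batch = {k: v.to(device) for k, v in pt_batch.items()}
    labels = torch.tensor(this_batch_labels).to(device)
    outputs = model(**pt_batch, labels=labels)  # the model computes the loss when labels are passed
    loss = outputs.loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()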
Define the classification model:
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("mypath/bert-base-cased", num_labels=5)
As seen above, the transformers Trainer by default uses the AdamW optimizer implemented in transformers, which triggers this warning:
FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set no_deprecation_warning=True to disable this warning
So don't use the old AdamW any more; use PyTorch's official AdamW optimizer instead:
from torch.optim import AdamW
optimizer = AdamW(model.parameters(), lr=5e-5)
Re-create the default learning rate scheduler that the Trainer would use:
from transformers import get_scheduler
num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
    name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
)
Specify the device (single-GPU case) and move the model onto it:
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model.to(device)
The training loop, with a progress bar from the tqdm package (official site: https://tqdm.github.io/):
from tqdm.auto import tqdm

progress_bar = tqdm(range(num_training_steps))

model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)
        loss = outputs.loss
        loss.backward()

        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)
In real code you would typically also add things like early stopping and saving the checkpoint with the best validation metric (a sketch follows the evaluation code below).
Just as with the Trainer, a Metric from the datasets package is used to compute the metric.
Here the evaluation runs after training has finished; all batches are accumulated with the Metric's add_batch() method (docs: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=add_batch#datasets.Metric.add_batch), and compute() is called once at the end.
from datasets import load_metric

metric = load_metric("accuracy")
model.eval()
for batch in eval_dataloader:
    batch = {k: v.to(device) for k, v in batch.items()}
    with torch.no_grad():
        outputs = model(**batch)

    logits = outputs.logits
    predictions = torch.argmax(logits, dim=-1)
    metric.add_batch(predictions=predictions, references=batch["labels"])

metric.compute()
Output: {'accuracy': 0.588}
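As mentioned above, in real code you would usually also save the best checkpoint and stop early. A minimal sketch of one way to do that, reusing the names defined in this section (the best_model path and the patience value are just illustrative):

best_accuracy = 0.0
patience, bad_epochs = 2, 0  # stop after 2 epochs without improvement on the validation set

for epoch in range(num_epochs):
    model.train()
    for batch in train_dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()

    # evaluate on the validation set at the end of every epoch
    metric = load_metric("accuracy")
    model.eval()
    for batch in eval_dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        with torch.no_grad():
            logits = model(**batch).logits
        metric.add_batch(predictions=torch.argmax(logits, dim=-1), references=batch["labels"])
    accuracy = metric.compute()["accuracy"]

    if accuracy > best_accuracy:
        best_accuracy, bad_epochs = accuracy, 0
        model.save_pretrained("best_model")      # keep the checkpoint that scores best so far
        tokenizer.save_pretrained("best_model")
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break  # early stopping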
The complete native-PyTorch script:

from tqdm.auto import tqdm
import torch
from torch.utils.data import DataLoader
from torch.optim import AdamW
import datasets
from transformers import AutoTokenizer, AutoModelForSequenceClassification, get_scheduler

dataset = datasets.load_from_disk("datasets/yelp_full_review_disk")

tokenizer = AutoTokenizer.from_pretrained("pretrained_models/bert-base-cased")

def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True, max_length=512)

tokenized_datasets = dataset.map(tokenize_function, batched=True)

# Postprocess dataset
tokenized_datasets = tokenized_datasets.remove_columns(["text"])
# drop the text column, which the model does not use
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
# rename the label column to labels, the keyword that the model's forward() expects
tokenized_datasets.set_format("torch")
# convert the values to torch.Tensor objects

small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))

train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8)
eval_dataloader = DataLoader(small_eval_dataset, batch_size=8)

model = AutoModelForSequenceClassification.from_pretrained("pretrained_models/bert-base-cased", num_labels=5)

optimizer = AdamW(model.parameters(), lr=5e-5)

num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
    name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
)

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model.to(device)

progress_bar = tqdm(range(num_training_steps))

model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)
        loss = outputs.loss
        loss.backward()

        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)

metric = datasets.load_metric('datasets/accuracy.py')
model.eval()
for batch in eval_dataloader:
    batch = {k: v.to(device) for k, v in batch.items()}
    with torch.no_grad():
        outputs = model(**batch)

    logits = outputs.logits
    predictions = torch.argmax(logits, dim=-1)
    metric.add_batch(predictions=predictions, references=batch["labels"])

print(metric.compute())