
Heads in transformers: what is an MLM head?

BertForPreTraining

Bert Model with two heads on top as done during the pretraining: a masked language modeling head and a next sentence prediction (classification) head.

It has two heads:

MLM head: loosely speaking a fully connected layer, though in fact it is Linear(hidden_size → hidden_size) → activation → LayerNorm → Linear(hidden_size → vocab_size); it predicts the masked tokens (see the sketch after this list).

NSP head: next-sentence prediction, a single linear layer mapping hidden_size → 2.
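To make the structure concrete, here is a simplified sketch of that MLM prediction head (the weight tying with the input embeddings and the configurable activation are omitted; this is an illustration, not the exact transformers code):

from torch import nn

class LMPredictionHeadSketch(nn.Module):
    """Simplified sketch of BERT's MLM prediction head:
    Linear(hidden -> hidden) -> activation -> LayerNorm -> Linear(hidden -> vocab)."""
    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.act = nn.GELU()                                # BERT uses GELU by default
        self.layer_norm = nn.LayerNorm(hidden_size)
        self.decoder = nn.Linear(hidden_size, vocab_size)   # project to vocabulary logits

    def forward(self, hidden_states):
        h = self.layer_norm(self.act(self.dense(hidden_states)))
        return self.decoder(h)                              # (batch, seq_len, vocab_size)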

class BertPreTrainingHeads(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.predictions = BertLMPredictionHead(config)            # MLM head
        self.seq_relationship = nn.Linear(config.hidden_size, 2)   # NSP head

    def forward(self, sequence_output, pooled_output):
        prediction_scores = self.predictions(sequence_output)
        seq_relationship_score = self.seq_relationship(pooled_output)
        return prediction_scores, seq_relationship_score
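A minimal usage sketch of BertForPreTraining that exposes both heads' outputs (following the style of the transformers documentation examples; the input sentence is arbitrary):

import torch
from transformers import AutoTokenizer, BertForPreTraining

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

prediction_logits = outputs.prediction_logits               # MLM head: (batch, seq_len, vocab_size)
seq_relationship_logits = outputs.seq_relationship_logits   # NSP head: (batch, 2)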

BertLMHeadModel

Bert Model with a language modeling head on top for CLM fine-tuning.

It has a single language-modeling head (the same prediction head as above). The training objective is to predict each token from the tokens before it, i.e. causal language modeling (CLM).

import torch
from transformers import AutoTokenizer, BertLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# is_decoder=True configures the model for causal (left-to-right) attention
model = BertLMHeadModel.from_pretrained("bert-base-uncased", is_decoder=True)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits

BertForMaskedLM

Bert Model with a language modeling head on top.

It has a single MLM head; the training objective is simply to predict the masked tokens.

from transformers import AutoTokenizer, BertForMaskedLM
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
tokenizer.decode(predicted_token_id)

labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(outputs.loss.item(), 2)

BertForNextSentencePrediction

Bert Model with a next sentence prediction (classification) head on top.

It has only an NSP head (the linear layer mapping hidden_size → 2).
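A minimal sketch of how this head is used: a sentence pair goes in, a 2-way logit comes out, where index 0 means "sentence B follows sentence A" and index 1 means "sentence B is random" (the example sentences are made up):

import torch
from transformers import AutoTokenizer, BertForNextSentencePrediction

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

sentence_a = "The weather was nice, so we went for a walk."
sentence_b = "Quantum field theory describes elementary particles."  # unrelated on purpose

encoding = tokenizer(sentence_a, sentence_b, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits  # shape (1, 2)

is_random = logits.argmax(dim=-1).item() == 1  # 1 = sentence B judged to be random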

BertForSequenceClassification

Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.

A single fully connected head on top of the pooled output; its output dimension equals the number of classes.
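A minimal sketch, assuming a 3-class classification task; on the plain pretrained checkpoint the classification head is randomly initialized and would still need fine-tuning:

import torch
from transformers import AutoTokenizer, BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# num_labels sets the output dimension of the classification head
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 3)
predicted_class_id = logits.argmax(dim=-1).item()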

BertForMultipleChoice

Bert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.

A fully connected head with output dimension 1 (which can be thought of as a score for each choice).

It is mainly used for answer-selection tasks. For example, suppose there is a question with four options A, B, C and D.

The input data is constructed as follows:

[CLS] question [SEP] A [SEP]

[CLS] question [SEP] B [SEP]

[CLS] question [SEP] C [SEP]

[CLS] question [SEP] D [SEP]

These four sequences are each run through the model, and the correct answer should receive the highest score. A softmax is applied over the four scores with a cross-entropy loss (somewhat like a listwise loss in text matching), as shown in the sketch below.
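A minimal sketch of this construction with a made-up question and four options; the inputs are reshaped to (batch_size, num_choices, seq_len), and the model returns one score per choice:

import torch
from transformers import AutoTokenizer, BertForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForMultipleChoice.from_pretrained("bert-base-uncased")

question = "Where is the Eiffel Tower located?"   # hypothetical question
choices = ["Paris", "London", "Berlin", "Rome"]   # hypothetical options A-D

# build the four "[CLS] question [SEP] choice [SEP]" pairs as one batch
encoding = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
# the model expects inputs of shape (batch_size, num_choices, seq_len)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

labels = torch.tensor([0])  # index of the correct choice (A)
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits     # shape (1, 4): one score per choice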

BertForQuestionAnswering

Bert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).

A fully connected head for answer extraction; it predicts the start and end positions of the answer span.
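A minimal sketch, assuming a BERT checkpoint already fine-tuned for extractive QA (the checkpoint name below is an example from the Hub; with a plain pretrained BERT the span head would be randomly initialized):

import torch
from transformers import AutoTokenizer, BertForQuestionAnswering

checkpoint = "deepset/bert-base-cased-squad2"  # assumed SQuAD-style QA checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = BertForQuestionAnswering.from_pretrained(checkpoint)

question, context = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# take the most likely start and end positions and decode that span
answer_start = outputs.start_logits.argmax()
answer_end = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs.input_ids[0, answer_start : answer_end + 1])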

Reference:

https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertForQuestionAnswering
