
MindSpore 25-Day Learning Camp, Day 23 | Dialogue Emotion Recognition with BERT on MindSpore

Interesting thing!

About BERT, you just need to know that it is like GPT, but it focuses on pre-training the Encoder instead of the Decoder. It also uses a mask method (masked language modeling) that improves its accuracy remarkably: it judges a blank not only from the words before it but also from the words after it, as the sketch below illustrates.
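
To make the mask method concrete, here is a minimal sketch of masked language modeling. It assumes mindnlp mirrors the HuggingFace transformers API; BertForMaskedLM, mask_token_id, and the .logits output field are assumptions, not code from this tutorial. One token is hidden, and the model predicts it from the words on both sides:

  # A toy illustration of the mask method (not from the original tutorial).
  # Assumes mindnlp mirrors the HuggingFace transformers API, including
  # BertForMaskedLM, tokenizer.mask_token_id, and a .logits output field.
  from mindspore import Tensor
  from mindnlp.transformers import BertTokenizer, BertForMaskedLM

  tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
  mlm = BertForMaskedLM.from_pretrained('bert-base-chinese')

  text = '今天天气真[MASK]。'                       # "The weather today is really [MASK]."
  input_ids = tokenizer(text).input_ids
  logits = mlm(Tensor([input_ids])).logits          # shape: (1, seq_len, vocab_size)
  mask_pos = input_ids.index(tokenizer.mask_token_id)
  best_id = logits[0, mask_pos].asnumpy().argmax()  # uses context on BOTH sides of the blank
  print(tokenizer.decode([best_id]))                # BERT's guess for the hidden word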

model: BertForSequenceClassification constructs the model, loads the pre-trained configuration, and sets the number of sentiment classes to 3:

  import mindspore.nn as nn
  from mindnlp.transformers import BertForSequenceClassification
  from mindnlp._legacy.amp import auto_mixed_precision
  from mindnlp.engine import Trainer
  from mindnlp.engine.callbacks import CheckpointCallback, BestModelCallback
  from mindnlp.metrics import Accuracy

  # pre-trained Chinese BERT with a 3-class classification head
  model = BertForSequenceClassification.from_pretrained('bert-base-chinese', num_labels=3)
  model = auto_mixed_precision(model, 'O1')  # mixed precision; the level is 'O1', not '01'
  optimizer = nn.Adam(model.trainable_params(), learning_rate=2e-5)
  metric = Accuracy()
  # checkpoint every epoch (keep at most 2) and auto-load the best validation model when done
  ckpoint_cb = CheckpointCallback(save_path='checkpoint', ckpt_name='bert_emotect',
                                  epochs=1, keep_checkpoint_max=2)
  best_model_cb = BestModelCallback(save_path='checkpoint', ckpt_name='bert_emotect_best',
                                    auto_load=True)
  # dataset_train / dataset_val are the tokenized datasets prepared earlier in the tutorial
  trainer = Trainer(network=model, train_dataset=dataset_train, eval_dataset=dataset_val,
                    metrics=metric, epochs=5, optimizer=optimizer,
                    callbacks=[ckpoint_cb, best_model_cb])
  trainer.run(tgt_columns='labels')

Model validation and prediction follow the same pattern as sentiment classification with almost any model:

  from mindspore import Tensor
  from mindnlp.engine import Evaluator

  # evaluate the best model on the held-out test set
  evaluator = Evaluator(network=model, eval_dataset=dataset_test, metrics=metric)
  evaluator.run(tgt_columns='labels')

  dataset_infer = SentimentDataset('data/infer.tsv')

  def predict(text, label=None):
      label_map = {0: '消极', 1: '中性', 2: '积极'}  # negative / neutral / positive
      text_tokenized = Tensor([tokenizer(text).input_ids])
      logits = model(text_tokenized)
      predict_label = logits[0].asnumpy().argmax()
      info = f"inputs: '{text}', predict: '{label_map[predict_label]}'"
      if label is not None:
          info += f", label: '{label_map[label]}'"
      print(info)
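
For example, you can run predict over the inference split, assuming (as in the tutorial this post follows) that SentimentDataset yields (label, text) pairs:

  # run predictions over the inference set, comparing against the gold labels
  for label, text in dataset_infer:
      predict(text, label)

  predict('今天天气很好，心情非常棒')  # an ad-hoc sentence with no gold label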
