
A Quick Sentiment Analysis Baseline (NLPCC14-SC)


Project page: PaddlePaddle AI Studio (飞桨AI Studio星河社区) (baidu.com)

This project uses PaddlePaddle's high-level API to quickly build models and produce submissions for the sentiment analysis competitions. For the underlying principles and analysis, see 『NLP打卡营』Practice Lesson 5: Text Sentiment Analysis. The work is split into three parts: sentence-level sentiment analysis (NLPCC14-SC, ChnSentiCorp); target-level sentiment analysis (SE-ABSA16_PHNS, SE-ABSA16_CAME); and opinion target extraction (COTE-BD, COTE-DP, COTE-MFW).

Using the project is straightforward: change data_name in the relevant section, tune batch_size, epochs, and so on for the best training results, then run all the code in that section to get predictions for the corresponding dataset. Once every dataset has been predicted, download the submission folder and submit it.

2021/6/18 update: added the shuffle option to the data_loader returned by get_data_loader (bug fix); renamed the files saved in the opinion target extraction part (bug fix).

2021/6/20: fixed the misspelling of shuffle in sections 2 and 3.

2021/6/21: in opinion target extraction, the special "##" marker introduced when encoding English letters is now removed, and "[UNK]" is replaced with an empty string (improves the score by roughly 0.003).

!pip install --upgrade paddlenlp -i https://pypi.org/simple

1. Sentence-Level Sentiment Analysis

Sentence-level sentiment analysis takes a piece of text as input and classifies its overall sentiment polarity, usually positive (1) or negative (0).

1.0 Load the Model and Tokenizer

import paddlenlp
from paddlenlp.transformers import SkepForSequenceClassification, SkepTokenizer

1.1 Data Processing

Although some of these datasets already ship with PaddleNLP, for consistency in preprocessing they are all read here from the uploaded datasets folder. For the datasets PaddleNLP already provides, calling the built-in API directly is strongly recommended; it is very convenient.
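A minimal sketch of that built-in API is shown below; it is not used in the rest of this project, and the exact dataset name and available splits depend on the installed PaddleNLP version.

    # Hedged sketch of PaddleNLP's built-in dataset API (not used elsewhere in this project).
    from paddlenlp.datasets import load_dataset

    train_ds, dev_ds, test_ds = load_dataset('chnsenticorp', splits=['train', 'dev', 'test'])
    print(train_ds[0])  # a dict containing the text and its label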

# Unzip the data
!unzip -o datasets/ChnSentiCorp
!unzip -o datasets/NLPCC14-SC

Dataset structure:

ChnSentiCorp:

train:
label text_a
0 房间太小。其他的都一般。。。。。。。。。
1 轻便,方便携带,性能也不错,能满足平时的工作需要,对出差人员来说非常不错

dev:
qid label text_a
0 1 這間酒店環境和服務態度亦算不錯,但房間空間太小~…

test:
qid text_a
0 这个宾馆比较陈旧了,特价的房间也很一般。总体来说一般
… …

NLPCC14-SC:

train:
label text_a
1 请问这机不是有个遥控器的吗?
0 全是大道理啊

test:
qid text_a
0 我终于找到同道中人啦~~~~从初中开始,我就…
… …

As shown above, the two datasets can share the same reading logic, but NLPCC14-SC has no dev split, so no dev set is defined for it.

# Build a dictionary of the datasets
def open_func(file_path):
    return [line.strip() for line in open(file_path, 'r', encoding='utf8').readlines()[1:]
            if len(line.strip().split('\t')) >= 2]

data_dict = {'chnsenticorp': {'test': open_func('ChnSentiCorp/test.tsv'),
                              'dev': open_func('ChnSentiCorp/dev.tsv'),
                              'train': open_func('ChnSentiCorp/train.tsv')},
             'nlpcc14sc': {'test': open_func('NLPCC14-SC/test.tsv'),
                           'train': open_func('NLPCC14-SC/train.tsv')}}
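As an optional sanity check (not part of the original notebook), the split sizes and a raw line can be printed to confirm that the TSV parsing worked:

    # Optional: confirm each split was read and inspect one raw "label\ttext_a" line.
    for name, splits in data_dict.items():
        print(name, {split: len(lines) for split, lines in splits.items()})
        print(repr(splits['train'][0]))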

1.2 Define the Data Loader

# Define the dataset
from paddle.io import Dataset, DataLoader
from paddlenlp.data import Pad, Stack, Tuple
import numpy as np

label_list = [0, 1]

# Note: token type ids play no role in this task, so they are not handled here;
# the model fills them in on its own.
class MyDataset(Dataset):
    def __init__(self, data, tokenizer, max_len=512, for_test=False):
        super().__init__()
        self._data = data
        self._tokenizer = tokenizer
        self._max_len = max_len
        self._for_test = for_test

    def __len__(self):
        return len(self._data)

    def __getitem__(self, idx):
        samples = self._data[idx].split('\t')
        label = samples[-2]
        text = samples[-1]
        label = int(label)
        text = self._tokenizer.encode(text, max_seq_len=self._max_len)['input_ids']
        if self._for_test:
            return np.array(text, dtype='int64')
        else:
            return np.array(text, dtype='int64'), np.array(label, dtype='int64')

def batchify_fn(for_test=False):
    if for_test:
        return lambda samples, fn=Pad(axis=0, pad_val=tokenizer.pad_token_id): np.row_stack([data for data in fn(samples)])
    else:
        return lambda samples, fn=Tuple(Pad(axis=0, pad_val=tokenizer.pad_token_id),
                                        Stack()): [data for data in fn(samples)]

def get_data_loader(data, tokenizer, batch_size=32, max_len=512, for_test=False):
    dataset = MyDataset(data, tokenizer, max_len, for_test)
    shuffle = True if not for_test else False
    data_loader = DataLoader(dataset=dataset, batch_size=batch_size, collate_fn=batchify_fn(for_test), shuffle=shuffle)
    return data_loader
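To see what a batch looks like, the loader can be exercised on a handful of samples. This is a hedged sketch, not part of the original notebook: it assumes the SkepTokenizer created in section 1.3 already exists, since batchify_fn reads the global tokenizer.

    # Hedged sketch: build a tiny loader and check one batch's shapes.
    tmp_loader = get_data_loader(data_dict['chnsenticorp']['train'][:8], tokenizer, batch_size=4, max_len=128, for_test=False)
    ids, labels = next(iter(tmp_loader))
    print(ids.shape, labels.shape)  # e.g. [4, padded_seq_len] and [4]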

1.3 Build and Train the Model

The model itself is very simple: we only need to call the corresponding sequence classification model. For convenience, training is done directly with the high-level API paddle.Model.

import paddle
from paddle.static import InputSpec

# Model and tokenizer
model = SkepForSequenceClassification.from_pretrained('skep_ernie_1.0_large_ch', num_classes=2)
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_1.0_large_ch')

# Settings: chnsenticorp / nlpcc14sc
data_name = 'nlpcc14sc'  # change this to switch datasets

## Training settings
epochs = 8
learning_rate = 2e-5
batch_size = 8
max_len = 512

## Data
train_dataloader = get_data_loader(data_dict[data_name]['train'], tokenizer, batch_size, max_len, for_test=False)
if data_name == 'chnsenticorp':
    dev_dataloader = get_data_loader(data_dict[data_name]['dev'], tokenizer, batch_size, max_len, for_test=False)
else:
    dev_dataloader = None

input = InputSpec((-1, -1), dtype='int64', name='input')
label = InputSpec((-1, 2), dtype='int64', name='label')
model = paddle.Model(model, [input], [label])

# Prepare the model
optimizer = paddle.optimizer.Adam(learning_rate=learning_rate, parameters=model.parameters())
model.prepare(optimizer, loss=paddle.nn.CrossEntropyLoss(), metrics=[paddle.metric.Accuracy()])

# Start training
model.fit(train_dataloader, dev_dataloader, batch_size, epochs, eval_freq=5, save_freq=5, save_dir='./checkpoints', log_freq=200)

step  200/1250 - loss: 0.3085 - acc: 0.7544 - 308ms/step
step  400/1250 - loss: 0.3240 - acc: 0.7750 - 305ms/step
step  600/1250 - loss: 0.5761 - acc: 0.7800 - 311ms/step
step  800/1250 - loss: 0.1471 - acc: 0.7880 - 307ms/step
step 1000/1250 - loss: 0.3115 - acc: 0.7925 - 303ms/step
step 1200/1250 - loss: 0.8489 - acc: 0.7985 - 302ms/step
step 1250/1250 - loss: 0.4910 - acc: 0.8007 - 302ms/step
save checkpoint at /home/aistudio/checkpoints/0
Epoch 2/8
step  200/1250 - loss: 0.9686 - acc: 0.8825 - 293ms/step
step  400/1250 - loss: 0.1028 - acc: 0.8772 - 304ms/step
step  600/1250 - loss: 0.4760 - acc: 0.8773 - 310ms/step
step  800/1250 - loss: 0.1483 - acc: 0.8755 - 306ms/step
step 1000/1250 - loss: 0.6983 - acc: 0.8740 - 310ms/step
step 1200/1250 - loss: 0.6258 - acc: 0.8735 - 309ms/step
step 1250/1250 - loss: 0.1694 - acc: 0.8729 - 310ms/step
Epoch 3/8
step  200/1250 - loss: 0.4893 - acc: 0.9375 - 323ms/step
step  400/1250 - loss: 0.1291 - acc: 0.9344 - 322ms/step
step  600/1250 - loss: 0.1544 - acc: 0.9302 - 310ms/step
step  800/1250 - loss: 0.0948 - acc: 0.9284 - 305ms/step
step 1000/1250 - loss: 0.2340 - acc: 0.9247 - 305ms/step
step 1200/1250 - loss: 0.0474 - acc: 0.9266 - 306ms/step
step 1250/1250 - loss: 0.4171 - acc: 0.9262 - 306ms/step
Epoch 4/8
step  200/1250 - loss: 0.1890 - acc: 0.9700 - 312ms/step
step  400/1250 - loss: 0.0281 - acc: 0.9688 - 309ms/step
step  600/1250 - loss: 0.0076 - acc: 0.9642 - 315ms/step
step  800/1250 - loss: 0.0038 - acc: 0.9642 - 307ms/step
step 1000/1250 - loss: 0.0757 - acc: 0.9639 - 306ms/step
step 1200/1250 - loss: 0.0072 - acc: 0.9621 - 304ms/step
step 1250/1250 - loss: 0.0525 - acc: 0.9615 - 305ms/step
Epoch 5/8
step  200/1250 - loss: 0.0035 - acc: 0.9806 - 296ms/step
step  400/1250 - loss: 0.0149 - acc: 0.9816 - 293ms/step
step  600/1250 - loss: 0.0142 - acc: 0.9827 - 298ms/step
step  800/1250 - loss: 0.0300 - acc: 0.9789 - 303ms/step
step 1000/1250 - loss: 0.0716 - acc: 0.9734 - 304ms/step
step 1200/1250 - loss: 0.0867 - acc: 0.9723 - 303ms/step
step 1250/1250 - loss: 0.0047 - acc: 0.9712 - 305ms/step
Epoch 6/8
step  200/1250 - loss: 0.0346 - acc: 0.9800 - 293ms/step
step  400/1250 - loss: 3.5588e-04 - acc: 0.9838 - 301ms/step
step  600/1250 - loss: 0.1159 - acc: 0.9823 - 308ms/step
step  800/1250 - loss: 0.0036 - acc: 0.9817 - 312ms/step
step 1000/1250 - loss: 0.0824 - acc: 0.9826 - 311ms/step
step 1200/1250 - loss: 0.0160 - acc: 0.9808 - 310ms/step
step 1250/1250 - loss: 0.1026 - acc: 0.9811 - 310ms/step
save checkpoint at /home/aistudio/checkpoints/5
Epoch 7/8
step  200/1250 - loss: 0.0131 - acc: 0.9888 - 308ms/step
step  400/1250 - loss: 0.0019 - acc: 0.9884 - 316ms/step
step  600/1250 - loss: 0.0044 - acc: 0.9879 - 313ms/step
step  800/1250 - loss: 0.0038 - acc: 0.9864 - 311ms/step
step 1000/1250 - loss: 0.0356 - acc: 0.9864 - 308ms/step
step 1200/1250 - loss: 0.0082 - acc: 0.9868 - 306ms/step
step 1250/1250 - loss: 0.0248 - acc: 0.9869 - 308ms/step
Epoch 8/8
step  200/1250 - loss: 0.0026 - acc: 0.9912 - 294ms/step
step  400/1250 - loss: 0.0023 - acc: 0.9891 - 293ms/step
step  600/1250 - loss: 0.0253 - acc: 0.9879 - 311ms/step
step  800/1250 - loss: 2.8810e-04 - acc: 0.9864 - 315ms/step
step 1000/1250 - loss: 3.1722e-04 - acc: 0.9859 - 313ms/step
step 1200/1250 - loss: 9.1276e-04 - acc: 0.9852 - 315ms/step
step 1250/1250 - loss: 4.1999e-04 - acc: 0.9850 - 315ms/step
save checkpoint at /home/aistudio/checkpoints/final 

1.4 Predict and Save

# Load the trained model
checkpoint_path = './checkpoints/final'  # path to the saved checkpoint
model = SkepForSequenceClassification.from_pretrained('skep_ernie_1.0_large_ch', num_classes=2)
input = InputSpec((-1, -1), dtype='int64', name='input')
model = paddle.Model(model, input)
model.load(checkpoint_path)

# Load the test set
test_dataloader = get_data_loader(data_dict[data_name]['test'], tokenizer, batch_size, max_len, for_test=True)

# Predict and save
save_file = {'chnsenticorp': './submission/ChnSentiCorp.tsv', 'nlpcc14sc': './submission/NLPCC14-SC.tsv'}
predicts = []
for batch in test_dataloader:
    predict = model.predict_batch(batch)
    predicts += predict[0].argmax(axis=-1).tolist()

with open(save_file[data_name], 'w', encoding='utf8') as f:
    f.write("index\tprediction\n")
    for idx, sample in enumerate(data_dict[data_name]['test']):
        qid = sample.split('\t')[0]
        f.write(qid + '\t' + str(predicts[idx]) + '\n')
    f.close()
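The resulting file has a header line followed by one prediction per test sample. For NLPCC14-SC it looks roughly like this (the qid values come from the test file; the prediction values shown here are only illustrative):

    index	prediction
    0	1
    1	0
    ...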

2. Target-Level Sentiment Analysis

Target-level sentiment analysis extends whole-sentence polarity to the polarity towards several specific aspects. It is still sequence classification at heart, but the same sequence has to be classified several times, once per aspect. The approach here is to pass the aspect into the model as part of the input and predict the sentiment towards it.
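Concretely, the aspect string (text_a) and the review text (text_b) are encoded as a sentence pair, so the token_type_ids mark which segment each token belongs to. A rough sketch, assuming the SkepTokenizer created in section 2.3 already exists (the aspect and review strings are taken from the sample data shown in 2.1):

    # Hedged sketch: encode an (aspect, text) pair and inspect the segment ids.
    aspect = 'phone#design_features'   # text_a: the aspect being judged
    review = '今天有幸拿到了港版白色iPhone 5真机,试玩了一下...'  # text_b: the review text
    encoded = tokenizer.encode(aspect, review, max_seq_len=512)
    print(encoded['input_ids'][:10])       # [CLS] + aspect tokens + [SEP] + review tokens ...
    print(encoded['token_type_ids'][:10])  # 0 for the aspect segment, 1 for the review segment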

2.0 Load the Model and Tokenizer

import paddlenlp
from paddlenlp.transformers import SkepForSequenceClassification, SkepTokenizer

2.1 Data Processing

# Unzip the data
!unzip -o datasets/SE-ABSA16_CAME
!unzip -o datasets/SE-ABSA16_PHNS

Archive:  datasets/SE-ABSA16_CAME.zip
  inflating: SE-ABSA16_CAME/train.tsv  
  inflating: __MACOSX/SE-ABSA16_CAME/._train.tsv  
  inflating: SE-ABSA16_CAME/License.pdf  
  inflating: __MACOSX/SE-ABSA16_CAME/._License.pdf  
  inflating: SE-ABSA16_CAME/test.tsv  
  inflating: __MACOSX/SE-ABSA16_CAME/._test.tsv  
  inflating: __MACOSX/._SE-ABSA16_CAME  
Archive:  datasets/SE-ABSA16_PHNS.zip
  inflating: SE-ABSA16_PHNS/train.tsv  
  inflating: __MACOSX/SE-ABSA16_PHNS/._train.tsv  
  inflating: SE-ABSA16_PHNS/License.pdf  
  inflating: __MACOSX/SE-ABSA16_PHNS/._License.pdf  
  inflating: SE-ABSA16_PHNS/test.tsv  
  inflating: __MACOSX/SE-ABSA16_PHNS/._test.tsv  
  inflating: __MACOSX/._SE-ABSA16_PHNS

Dataset structure (both datasets share the same format):

train:
label	text_a	text_b
1	phone#design_features	今天有幸拿到了港版白色iPhone 5真机,试玩了一下,说说感受吧:1. 真机尺寸宽度与4/4s保持一致没有变化...
0	software#operation_performance	苹果iPhone5新机到手 对比4S使用感受1,外观。一开始看发布会和网上照片,我和大多数人观点一样:变化不大,有点小失望。...

test:
qid	text_a	text_b
0	software#usability	刚刚入手8600,体会。刚刚从淘宝购买,1635元(包邮)。1、全新,...
…	…	…
# Build a dictionary of the datasets
def open_func(file_path):
    return [line.strip() for line in open(file_path, 'r', encoding='utf8').readlines()[1:]
            if len(line.strip().split('\t')) >= 2]

data_dict = {'seabsa16phns': {'test': open_func('SE-ABSA16_PHNS/test.tsv'),
                              'train': open_func('SE-ABSA16_PHNS/train.tsv')},
             'seabsa16came': {'test': open_func('SE-ABSA16_CAME/test.tsv'),
                              'train': open_func('SE-ABSA16_CAME/train.tsv')}}

2.2 Define the Data Loader

The approach is much the same as in 1.2 and can essentially be copied over. Note that there are now two texts, and token_type_ids need to be taken into account.

# Define the dataset
from paddle.io import Dataset, DataLoader
from paddlenlp.data import Pad, Stack, Tuple
import numpy as np

label_list = [0, 1]

# token_type_ids are taken into account here
class MyDataset(Dataset):
    def __init__(self, data, tokenizer, max_len=512, for_test=False):
        super().__init__()
        self._data = data
        self._tokenizer = tokenizer
        self._max_len = max_len
        self._for_test = for_test

    def __len__(self):
        return len(self._data)

    def __getitem__(self, idx):
        samples = self._data[idx].split('\t')
        label = samples[-3]
        text_b = samples[-1]
        text_a = samples[-2]
        label = int(label)
        encoder_out = self._tokenizer.encode(text_a, text_b, max_seq_len=self._max_len)
        text = encoder_out['input_ids']
        token_type = encoder_out['token_type_ids']
        if self._for_test:
            return np.array(text, dtype='int64'), np.array(token_type, dtype='int64')
        else:
            return np.array(text, dtype='int64'), np.array(token_type, dtype='int64'), np.array(label, dtype='int64')

def batchify_fn(for_test=False):
    if for_test:
        return lambda samples, fn=Tuple(Pad(axis=0, pad_val=tokenizer.pad_token_id),
                                        Pad(axis=0, pad_val=tokenizer.pad_token_type_id)): [data for data in fn(samples)]
    else:
        return lambda samples, fn=Tuple(Pad(axis=0, pad_val=tokenizer.pad_token_id),
                                        Pad(axis=0, pad_val=tokenizer.pad_token_type_id),
                                        Stack()): [data for data in fn(samples)]

def get_data_loader(data, tokenizer, batch_size=32, max_len=512, for_test=False):
    dataset = MyDataset(data, tokenizer, max_len, for_test)
    shuffle = True if not for_test else False
    data_loader = DataLoader(dataset=dataset, batch_size=batch_size, collate_fn=batchify_fn(for_test), shuffle=shuffle)
    return data_loader

2.3 Build and Train the Model

Copy the code from 1.3, update the dataset names, and add a token_type input.

import paddle
from paddle.static import InputSpec

# Model and tokenizer
model = SkepForSequenceClassification.from_pretrained('skep_ernie_1.0_large_ch', num_classes=2)
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_1.0_large_ch')

# Settings: seabsa16phns / seabsa16came
data_name = 'seabsa16phns'  # change this to switch datasets

## Training settings
epochs = 1
learning_rate = 2e-5
batch_size = 8
max_len = 512

## Data
train_dataloader = get_data_loader(data_dict[data_name]['train'], tokenizer, batch_size, max_len, for_test=False)

input = InputSpec((-1, -1), dtype='int64', name='input')
token_type = InputSpec((-1, -1), dtype='int64', name='token_type')
label = InputSpec((-1, 2), dtype='int64', name='label')
model = paddle.Model(model, [input, token_type], [label])

# Prepare the model
optimizer = paddle.optimizer.Adam(learning_rate=learning_rate, parameters=model.parameters())
model.prepare(optimizer, loss=paddle.nn.CrossEntropyLoss(), metrics=[paddle.metric.Accuracy()])

# Start training
model.fit(train_dataloader, batch_size=batch_size, epochs=epochs, save_freq=5, save_dir='./checkpoints', log_freq=200)

2.4 Predict and Save

# Load the trained model
checkpoint_path = './checkpoints/final'  # path to the saved checkpoint
model = SkepForSequenceClassification.from_pretrained('skep_ernie_1.0_large_ch', num_classes=2)
input = InputSpec((-1, -1), dtype='int64', name='input')
token_type = InputSpec((-1, -1), dtype='int64', name='token_type')
model = paddle.Model(model, [input, token_type])
model.load(checkpoint_path)

# Load the test set
test_dataloader = get_data_loader(data_dict[data_name]['test'], tokenizer, batch_size, max_len, for_test=True)

# Predict and save
save_file = {'seabsa16phns': './submission/SE-ABSA16_PHNS.tsv', 'seabsa16came': './submission/SE-ABSA16_CAME.tsv'}
predicts = []
for batch in test_dataloader:
    predict = model.predict_batch(batch)
    predicts += predict[0].argmax(axis=-1).tolist()

with open(save_file[data_name], 'w', encoding='utf8') as f:
    f.write("index\tprediction\n")
    for idx, sample in enumerate(data_dict[data_name]['test']):
        qid = sample.split('\t')[0]
        f.write(qid + '\t' + str(predicts[idx]) + '\n')
    f.close()

3. Opinion Target Extraction

3.0 Load the Model and Tokenizer

import paddlenlp
from paddlenlp.transformers import SkepForTokenClassification, SkepTokenizer

3.1 Data Processing

# Unzip the data
!unzip -o datasets/COTE-BD
!unzip -o datasets/COTE-DP
!unzip -o datasets/COTE-MFW

Archive:  datasets/COTE-BD.zip
   creating: COTE-BD/
  inflating: COTE-BD/train.tsv       
   creating: __MACOSX/COTE-BD/
  inflating: __MACOSX/COTE-BD/._train.tsv  
  inflating: COTE-BD/License.pdf     
  inflating: __MACOSX/COTE-BD/._License.pdf  
  inflating: COTE-BD/test.tsv        
  inflating: __MACOSX/COTE-BD/._test.tsv  
  inflating: __MACOSX/._COTE-BD      
Archive:  datasets/COTE-DP.zip
   creating: COTE-DP/
  inflating: COTE-DP/train.tsv       
   creating: __MACOSX/COTE-DP/
  inflating: __MACOSX/COTE-DP/._train.tsv  
  inflating: COTE-DP/License.pdf     
  inflating: __MACOSX/COTE-DP/._License.pdf  
  inflating: COTE-DP/test.tsv        
  inflating: __MACOSX/COTE-DP/._test.tsv  
  inflating: __MACOSX/._COTE-DP      
Archive:  datasets/COTE-MFW.zip
   creating: COTE-MFW/
  inflating: COTE-MFW/train.tsv      
   creating: __MACOSX/COTE-MFW/
  inflating: __MACOSX/COTE-MFW/._train.tsv  
  inflating: COTE-MFW/License.pdf    
  inflating: __MACOSX/COTE-MFW/._License.pdf  
  inflating: COTE-MFW/test.tsv       
  inflating: __MACOSX/COTE-MFW/._test.tsv  
  inflating: __MACOSX/._COTE-MFW

Dataset structure (all three datasets share the same format, matching how the reader below parses them):

train:
label	text_a
(label is the opinion target string to be extracted; text_a is the sentence that contains it)

test:
qid	text_a
…	…
# Build a dictionary of the datasets
def open_func(file_path):
    return [line.strip() for line in open(file_path, 'r', encoding='utf8').readlines()[1:]
            if len(line.strip().split('\t')) >= 2]

data_dict = {'cotebd': {'test': open_func('COTE-BD/test.tsv'),
                        'train': open_func('COTE-BD/train.tsv')},
             'cotedp': {'test': open_func('COTE-DP/test.tsv'),
                        'train': open_func('COTE-DP/train.tsv')},
             'cotemfw': {'test': open_func('COTE-MFW/test.tsv'),
                         'train': open_func('COTE-MFW/train.tsv')}}

3.2 Define the Data Loader

The idea is similar, except that this time the classification is at the token level. In the data reader, the label is converted into BIO tags, with one tag per token.
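As a schematic illustration (the sentence and target below are made up, and real token boundaries depend on the tokenizer):

    # target (label) = "小米", text = "小米手机很好用"
    # tokens: [CLS]  小  米  手  机  很  好  用  [SEP]
    # tags:    O     B   I   O   O   O   O   O   O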

# Define the dataset
from paddle.io import Dataset, DataLoader
from paddlenlp.data import Pad, Stack, Tuple
import numpy as np

label_list = {'B': 0, 'I': 1, 'O': 2}
index2label = {0: 'B', 1: 'I', 2: 'O'}

# Token-level (BIO) labels
class MyDataset(Dataset):
    def __init__(self, data, tokenizer, max_len=512, for_test=False):
        super().__init__()
        self._data = data
        self._tokenizer = tokenizer
        self._max_len = max_len
        self._for_test = for_test

    def __len__(self):
        return len(self._data)

    def __getitem__(self, idx):
        samples = self._data[idx].split('\t')
        label = samples[-2]
        text = samples[-1]
        if self._for_test:
            origin_enc = self._tokenizer.encode(text, max_seq_len=self._max_len)['input_ids']
            return np.array(origin_enc, dtype='int64')
        else:
            # Since not every character maps to exactly one token, use a simple trick:
            # encode the label (the target) first, then encode the pieces of text
            # around it, and finally splice everything back together.
            texts = text.split(label)
            label_enc = self._tokenizer.encode(label)['input_ids']
            cls_enc = label_enc[0]
            sep_enc = label_enc[-1]
            label_enc = label_enc[1:-1]
            # Splice together
            origin_enc = []
            label_ids = []
            for index, text in enumerate(texts):
                text_enc = self._tokenizer.encode(text)['input_ids']
                text_enc = text_enc[1:-1]
                origin_enc += text_enc
                label_ids += [label_list['O']] * len(text_enc)
                if index != len(texts) - 1:
                    origin_enc += label_enc
                    label_ids += [label_list['B']] + [label_list['I']] * (len(label_enc) - 1)
            origin_enc = [cls_enc] + origin_enc + [sep_enc]
            label_ids = [label_list['O']] + label_ids + [label_list['O']]
            # Truncate
            if len(origin_enc) > self._max_len:
                origin_enc = origin_enc[:self._max_len - 1] + origin_enc[-1:]
                label_ids = label_ids[:self._max_len - 1] + label_ids[-1:]
            return np.array(origin_enc, dtype='int64'), np.array(label_ids, dtype='int64')

def batchify_fn(for_test=False):
    if for_test:
        return lambda samples, fn=Pad(axis=0, pad_val=tokenizer.pad_token_id): np.row_stack([data for data in fn(samples)])
    else:
        return lambda samples, fn=Tuple(Pad(axis=0, pad_val=tokenizer.pad_token_id),
                                        Pad(axis=0, pad_val=label_list['O'])): [data for data in fn(samples)]

def get_data_loader(data, tokenizer, batch_size=32, max_len=512, for_test=False):
    dataset = MyDataset(data, tokenizer, max_len, for_test)
    shuffle = True if not for_test else False
    data_loader = DataLoader(dataset=dataset, batch_size=batch_size, collate_fn=batchify_fn(for_test), shuffle=shuffle)
    return data_loader
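To verify the BIO conversion, one training sample can be converted and printed token by token. This is a hedged check, not part of the original notebook; it assumes the SkepTokenizer created in section 3.3 already exists.

    # Hedged check: convert one training sample and print token/tag pairs.
    check_ds = MyDataset(data_dict['cotedp']['train'][:1], tokenizer, max_len=512, for_test=False)
    token_ids, bio_ids = check_ds[0]
    tokens = tokenizer.convert_ids_to_tokens(token_ids.tolist())
    tags = [index2label[i] for i in bio_ids.tolist()]
    print(list(zip(tokens, tags)))  # the opinion target's tokens should read B, I, I, ...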

3.3 Build and Train the Model

The difference from before is that the model is now a token classifier. Since Accuracy is no longer suitable for token classification, Perplexity is used to roughly gauge prediction quality (the closer to 1, the better).
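As a quick intuition for the "close to 1 is best" rule of thumb: perplexity is the exponential of the average per-token cross-entropy, so a perfect classifier (loss approaching 0) gives exp(0) = 1. A tiny sketch of that relationship (not part of the original notebook):

    import numpy as np

    def perplexity_from_loss(mean_token_loss):
        # perplexity = exp(average cross-entropy per token)
        return float(np.exp(mean_token_loss))

    print(perplexity_from_loss(0.0))     # 1.0  -> perfect predictions
    print(perplexity_from_loss(0.0223))  # ~1.023, the same scale as the training log below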

import paddle
from paddle.static import InputSpec
from paddlenlp.metrics import Perplexity

# Model and tokenizer
model = SkepForTokenClassification.from_pretrained('skep_ernie_1.0_large_ch', num_classes=3)
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_1.0_large_ch')

# Settings: cotebd / cotedp / cotemfw
data_name = 'cotedp'  # change this to switch datasets

## Training settings
epochs = 1
learning_rate = 2e-5
batch_size = 8
max_len = 512

## Data
train_dataloader = get_data_loader(data_dict[data_name]['train'], tokenizer, batch_size, max_len, for_test=False)

input = InputSpec((-1, -1), dtype='int64', name='input')
label = InputSpec((-1, -1, 3), dtype='int64', name='label')
model = paddle.Model(model, [input], [label])

# Prepare the model
optimizer = paddle.optimizer.Adam(learning_rate=learning_rate, parameters=model.parameters())
model.prepare(optimizer, loss=paddle.nn.CrossEntropyLoss(), metrics=[Perplexity()])

# Start training
model.fit(train_dataloader, batch_size=batch_size, epochs=epochs, save_freq=5, save_dir='./checkpoints', log_freq=200)

step  200/9381 - loss: 0.0223 - Perplexity: 1.0659 - 248ms/step
step  400/9381 - loss: 0.0967 - Perplexity: 1.0434 - 247ms/step
step  600/9381 - loss: 0.0789 - Perplexity: 1.0363 - 244ms/step
step  800/9381 - loss: 0.0090 - Perplexity: 1.0323 - 244ms/step
step 1000/9381 - loss: 0.0143 - Perplexity: 1.0299 - 243ms/step
step 1200/9381 - loss: 0.0210 - Perplexity: 1.0279 - 244ms/step
step 1400/9381 - loss: 0.0419 - Perplexity: 1.0264 - 243ms/step
step 1600/9381 - loss: 0.0113 - Perplexity: 1.0252 - 244ms/step
step 1800/9381 - loss: 0.0369 - Perplexity: 1.0243 - 243ms/step
step 2000/9381 - loss: 0.0090 - Perplexity: 1.0236 - 242ms/step
step 2200/9381 - loss: 0.0095 - Perplexity: 1.0232 - 241ms/step
step 2400/9381 - loss: 0.0087 - Perplexity: 1.0226 - 241ms/step
step 2600/9381 - loss: 0.0055 - Perplexity: 1.0221 - 241ms/step
step 2800/9381 - loss: 0.0229 - Perplexity: 1.0216 - 241ms/step
step 3000/9381 - loss: 0.0281 - Perplexity: 1.0213 - 241ms/step
step 3200/9381 - loss: 0.0069 - Perplexity: 1.0209 - 241ms/step
step 3400/9381 - loss: 0.0036 - Perplexity: 1.0205 - 242ms/step
step 3600/9381 - loss: 0.0242 - Perplexity: 1.0203 - 242ms/step
step 3800/9381 - loss: 0.0070 - Perplexity: 1.0200 - 242ms/step
step 4000/9381 - loss: 0.0610 - Perplexity: 1.0197 - 242ms/step
step 4200/9381 - loss: 0.0042 - Perplexity: 1.0195 - 242ms/step
step 4400/9381 - loss: 0.0139 - Perplexity: 1.0193 - 242ms/step
step 4600/9381 - loss: 0.0333 - Perplexity: 1.0192 - 242ms/step
step 4800/9381 - loss: 0.0978 - Perplexity: 1.0189 - 242ms/step
step 5000/9381 - loss: 0.0255 - Perplexity: 1.0188 - 242ms/step
step 5200/9381 - loss: 0.0517 - Perplexity: 1.0187 - 242ms/step
step 5400/9381 - loss: 0.0153 - Perplexity: 1.0186 - 242ms/step
step 5600/9381 - loss: 0.0032 - Perplexity: 1.0185 - 242ms/step
step 5800/9381 - loss: 0.0134 - Perplexity: 1.0184 - 242ms/step
step 6000/9381 - loss: 0.0047 - Perplexity: 1.0183 - 242ms/step
step 6200/9381 - loss: 0.0198 - Perplexity: 1.0181 - 243ms/step
step 6400/9381 - loss: 0.0026 - Perplexity: 1.0180 - 243ms/step
step 6600/9381 - loss: 0.0200 - Perplexity: 1.0179 - 243ms/step
step 6800/9381 - loss: 0.0050 - Perplexity: 1.0179 - 243ms/step
step 7000/9381 - loss: 0.0191 - Perplexity: 1.0177 - 243ms/step
step 7200/9381 - loss: 0.0034 - Perplexity: 1.0176 - 244ms/step
step 7400/9381 - loss: 0.0127 - Perplexity: 1.0175 - 244ms/step
step 7600/9381 - loss: 0.0123 - Perplexity: 1.0174 - 244ms/step
step 7800/9381 - loss: 0.0934 - Perplexity: 1.0173 - 243ms/step
step 8000/9381 - loss: 0.0090 - Perplexity: 1.0173 - 243ms/step
step 8200/9381 - loss: 0.0314 - Perplexity: 1.0172 - 244ms/step
step 8400/9381 - loss: 0.0063 - Perplexity: 1.0172 - 244ms/step
step 8600/9381 - loss: 0.0073 - Perplexity: 1.0171 - 243ms/step
step 8800/9381 - loss: 0.0025 - Perplexity: 1.0170 - 244ms/step
step 9000/9381 - loss: 0.0121 - Perplexity: 1.0169 - 243ms/step
step 9200/9381 - loss: 0.0157 - Perplexity: 1.0169 - 244ms/step
step 9381/9381 - loss: 0.0330 - Perplexity: 1.0168 - 244ms/step
save checkpoint at /home/aistudio/checkpoints/0
save checkpoint at /home/aistudio/checkpoints/final

3.4 Predict and Save

import re

# Load the trained model
checkpoint_path = './checkpoints/final'  # path to the saved checkpoint
model = SkepForTokenClassification.from_pretrained('skep_ernie_1.0_large_ch', num_classes=3)
input = InputSpec((-1, -1), dtype='int64', name='input')
model = paddle.Model(model, [input])
model.load(checkpoint_path)

# Load the test set
test_dataloader = get_data_loader(data_dict[data_name]['test'], tokenizer, batch_size, max_len, for_test=True)

# Predict and save
save_file = {'cotebd': './submission/COTE_BD.tsv', 'cotedp': './submission/COTE_DP.tsv', 'cotemfw': './submission/COTE_MFW.tsv'}
predicts = []
input_ids = []
for batch in test_dataloader:
    predict = model.predict_batch(batch)
    predicts += predict[0].argmax(axis=-1).tolist()
    input_ids += batch.numpy().tolist()

# Find each position tagged B (id 0), then collect the consecutive I tags (id 1)
# that follow it; the tokens covered by one B...I run form one extracted entity.
def find_entity(prediction, input_ids):
    entity = []
    entity_ids = []
    for index, idx in enumerate(prediction):
        if idx == label_list['B']:
            entity_ids = [input_ids[index]]
        elif idx == label_list['I']:
            if entity_ids:
                entity_ids.append(input_ids[index])
        elif idx == label_list['O']:
            if entity_ids:
                entity.append(''.join(tokenizer.convert_ids_to_tokens(entity_ids)))
                entity_ids = []
    return entity

with open(save_file[data_name], 'w', encoding='utf8') as f:
    f.write("index\tprediction\n")
    for idx, sample in enumerate(data_dict[data_name]['test']):
        qid = sample.split('\t')[0]
        entity = find_entity(predicts[idx], input_ids[idx])
        entity = list(set(entity))  # deduplicate
        entity = [re.sub('##', '', e) for e in entity]        # strip the '##' marker introduced when encoding English letters
        entity = [re.sub(r'\[UNK\]', '', e) for e in entity]  # drop unknown tokens ('[UNK]' must be escaped in the regex)
        f.write(qid + '\t' + '\x01'.join(entity) + '\n')
    f.close()
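The last two substitutions are the clean-up mentioned in the 2021/6/21 update note: subword pieces of English words carry a leading "##", and unrecognised characters decode to "[UNK]". A tiny illustration (the raw string is made up for the example):

    import re

    raw_entity = 'ip##hone[UNK]5'             # illustrative decoder output, not real data
    clean = re.sub('##', '', raw_entity)      # -> 'iphone[UNK]5'
    clean = re.sub(r'\[UNK\]', '', clean)     # -> 'iphone5'
    print(clean)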
!zip -r submission.zip submission

updating: submission/ (stored 0%)
updating: submission/COTE_BD.tsv (deflated 44%)
updating: submission/SE-ABSA16_CAME.tsv (deflated 65%)
updating: submission/ChnSentiCorp.tsv (deflated 63%)
updating: submission/COTE_DP.tsv (deflated 54%)
updating: submission/SE-ABSA16_PHNS.tsv (deflated 65%)
updating: submission/NLPCC14-SC.tsv (deflated 64%)
updating: submission/COTE_MFW.tsv (deflated 54%) 
