
[NLP] Text Classification: Sentiment Classification

Text Sentiment Classification

1 Common NLP Text Classification Models

1.1 TextCNN

        Paper: "Convolutional Neural Networks for Sentence Classification"

        Paper link: https://arxiv.org/pdf/1408.5882.pdf

        The architecture is shown below:

          It is worth mentioning that in the 2016 paper "A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification", the authors ran extensive experiments on TextCNN's hyperparameter choices and gave practical recommendations. Paper link: https://arxiv.org/pdf/1510.03820.pdf. That paper's classic structure diagram is shown below:
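
        To make the structure concrete, the following minimal sketch (illustrative only: the vocabulary size, embedding dimension and batch shape are hypothetical, and this is not the full model given in Section 2) builds the parallel convolution branches with filter widths 3/4/5 and 100 feature maps each, the setting used in the original TextCNN paper, applies 1-max pooling to each branch, and concatenates the results into a single sentence vector that a linear classifier would consume.

import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Embedding(5000, 128)                                      # hypothetical vocab / embedding sizes
convs = nn.ModuleList([nn.Conv2d(1, 100, (k, 128)) for k in (3, 4, 5)])
x = embed(torch.randint(0, 5000, (8, 64))).unsqueeze(1)              # (batch=8, 1, seq_len=64, embed=128)
feats = [F.relu(conv(x)).squeeze(3).max(dim=2).values for conv in convs]  # 1-max pooling -> (8, 100) each
sentence_vec = torch.cat(feats, dim=1)                               # (8, 300), fed to a linear classifier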

 

 1.2 TextRNN

        TextRNN refers to using a recurrent neural network (RNN) to solve the text classification problem.

        Paper: "Recurrent Neural Network for Text Classification with Multi-Task Learning"

        Paper link: https://www.ijcai.org/Proceedings/16/Papers/408.pdf

        The architecture is shown below:

1.3 TextRCNN

        Paper: "Recurrent Convolutional Neural Networks for Text Classification"

        Paper link: TextRCNN paper

        The architecture is shown below:

1.4 FastText

        Paper: "Bag of Tricks for Efficient Text Classification"

        Paper link: https://arxiv.org/pdf/1607.01759v2.pdf

        The architecture is shown below:

1.5 HAN

        Paper: "Hierarchical Attention Networks for Document Classification"

        Paper link: https://aclanthology.org/N16-1174.pdf

        The architecture is shown below:

1.6 CharCNN

        Paper: "Character-level Convolutional Networks for Text Classification"

        Paper link: CharCNN paper

        The architecture is shown below:

1.7 Transformer

        Paper: "Attention Is All You Need"

        Paper link: https://arxiv.org/pdf/1706.03762.pdf

        The architecture is shown below:

2 Code Implementation

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import math
import copy


# TextCNN
class TextCNN(nn.Module):
    def __init__(self, args):
        super(TextCNN, self).__init__()
        self.args = args
        class_num = args.class_num
        chanel_num = 1
        filter_num = args.filter_num
        filter_sizes = args.filter_sizes
        vocabulary_size = args.vocabulary_size
        embedding_dimension = args.embedding_dim
        self.embedding = nn.Embedding(vocabulary_size, embedding_dimension)
        if args.static:
            self.embedding = self.embedding.from_pretrained(args.vectors, freeze=not args.non_static)
        if args.multichannel:
            self.embedding2 = nn.Embedding(vocabulary_size, embedding_dimension).from_pretrained(args.vectors)
            chanel_num += 1
        else:
            self.embedding2 = None
        self.convs = nn.ModuleList(
            [nn.Conv2d(chanel_num, filter_num, (size, embedding_dimension)) for size in filter_sizes])
        self.dropout = nn.Dropout(args.dropout)
        self.fc = nn.Linear(len(filter_sizes) * filter_num, class_num)

    def forward(self, x):
        if self.embedding2:
            x = torch.stack([self.embedding(x), self.embedding2(x)], dim=1)
        else:
            x = self.embedding(x)
            x = x.unsqueeze(1)
        x = [F.relu(conv(x)).squeeze(3) for conv in self.convs]
        x = [F.max_pool1d(item, int(item.size(2))).squeeze(2) for item in x]
        x = torch.cat(x, 1)
        x = self.dropout(x)
        logits = self.fc(x)
        return logits
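
# Minimal usage sketch for TextCNN (illustrative only): the Namespace below is a
# hypothetical stand-in for the command-line args the class reads; the values are
# not the settings behind the results in Section 3.
def _demo_textcnn():
    from argparse import Namespace
    args = Namespace(class_num=2, filter_num=100, filter_sizes=[3, 4, 5],
                     vocabulary_size=5000, embedding_dim=128,
                     static=False, non_static=False, multichannel=False, dropout=0.5)
    model = TextCNN(args)
    tokens = torch.randint(0, args.vocabulary_size, (8, 64))  # (batch, seq_len) of token ids
    return model(tokens)  # -> logits of shape (8, 2)
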
# TextRNN
class LSTM(torch.nn.Module):
    def __init__(self, args):
        super(LSTM, self).__init__()
        self.embed_size = args.embedding_dim
        self.label_num = args.class_num
        self.embed_dropout = 0.1
        self.fc_dropout = 0.1
        self.hidden_num = 1
        self.hidden_size = 50
        self.hidden_dropout = 0
        self.bidirectional = True
        vocabulary_size = args.vocabulary_size
        embedding_dimension = args.embedding_dim
        self.embeddings = nn.Embedding(vocabulary_size, embedding_dimension)
        # self.embeddings.weight.data.copy_(torch.from_numpy(vocabulary_size))
        self.embeddings.weight.requires_grad = False
        self.lstm = nn.LSTM(
            self.embed_size,
            self.hidden_size,
            dropout=self.hidden_dropout,
            num_layers=self.hidden_num,
            batch_first=True,
            bidirectional=True
        )
        self.embed_dropout = nn.Dropout(self.embed_dropout)
        self.fc_dropout = nn.Dropout(self.fc_dropout)
        self.linear1 = nn.Linear(self.hidden_size * 2, self.label_num)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, input):
        x = self.embeddings(input)
        x = self.embed_dropout(x)
        batch_size = len(input)
        _, (lstm_out, _) = self.lstm(x)  # lstm_out: final hidden state, (num_directions, batch, hidden_size)
        lstm_out = lstm_out.permute(1, 0, 2)
        lstm_out = lstm_out.contiguous().view(batch_size, -1)
        out = self.linear1(lstm_out)
        out = self.fc_dropout(out)
        out = self.softmax(out)
        return out
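
# Illustrative forward-pass check for the TextRNN-style LSTM classifier; the Namespace
# is a hypothetical stand-in for the expected command-line args.
def _demo_textrnn():
    from argparse import Namespace
    args = Namespace(class_num=2, vocabulary_size=5000, embedding_dim=128)
    model = LSTM(args)
    tokens = torch.randint(0, args.vocabulary_size, (8, 64))  # (batch, seq_len)
    return model(tokens)  # -> class probabilities of shape (8, 2)
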
# TextRCNN
class BiLSTM(nn.Module):
    def __init__(self, args):
        super(BiLSTM, self).__init__()
        self.embed_size = args.embedding_dim
        self.label_num = args.class_num
        self.embed_dropout = 0.1
        self.fc_dropout = 0.1
        self.hidden_num = 2
        self.hidden_size = 50
        self.hidden_dropout = 0
        self.bidirectional = True
        vocabulary_size = args.vocabulary_size
        embedding_dimension = args.embedding_dim
        self.embeddings = nn.Embedding(vocabulary_size, embedding_dimension)
        # self.embeddings.weight.data.copy_(torch.from_numpy(word_embeddings))
        self.embeddings.weight.requires_grad = False
        self.lstm = nn.LSTM(
            self.embed_size,
            self.hidden_size,
            dropout=self.hidden_dropout,
            num_layers=self.hidden_num,
            batch_first=True,
            bidirectional=self.bidirectional
        )
        self.embed_dropout = nn.Dropout(self.embed_dropout)
        self.fc_dropout = nn.Dropout(self.fc_dropout)
        self.linear1 = nn.Linear(self.hidden_size * 2, self.hidden_size // 2)
        self.linear2 = nn.Linear(self.hidden_size // 2, self.label_num)

    def forward(self, input):
        out = self.embeddings(input)
        out = self.embed_dropout(out)
        out, _ = self.lstm(out)
        out = torch.transpose(out, 1, 2)
        out = torch.tanh(out)
        out = F.max_pool1d(out, out.size(2))
        out = out.squeeze(2)
        out = self.fc_dropout(out)
        out = self.linear1(F.relu(out))
        output = self.linear2(F.relu(out))
        return output
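
# Illustrative forward-pass check for the TextRCNN-style BiLSTM; the Namespace is a
# hypothetical stand-in for the expected command-line args.
def _demo_textrcnn():
    from argparse import Namespace
    args = Namespace(class_num=2, vocabulary_size=5000, embedding_dim=128)
    model = BiLSTM(args)
    tokens = torch.randint(0, args.vocabulary_size, (8, 64))  # (batch, seq_len)
    return model(tokens)  # -> logits of shape (8, 2)
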
# FastText
class FastText(nn.Module):
    def __init__(self, args):
        super().__init__()
        self.output_dim = args.class_num
        vocabulary_size = args.vocabulary_size
        embedding_dimension = args.embedding_dim
        self.embeddings = nn.Embedding(vocabulary_size, embedding_dimension)
        self.fc = nn.Linear(embedding_dimension, self.output_dim)

    def forward(self, text):
        # text = [batch size, sent len]
        text = text.permute(1, 0)
        # text = [sent len, batch size]
        embedded = self.embeddings(text)
        # embedded = [sent len, batch size, emb dim]
        embedded = embedded.permute(1, 0, 2)
        # embedded = [batch size, sent len, emb dim]
        pooled = F.avg_pool2d(embedded, (embedded.shape[1], 1)).squeeze(1)
        # pooled = [batch size, embedding_dim]
        return self.fc(pooled)
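
# Illustrative forward-pass check for FastText; as with the other models, the input is
# assumed to be a (batch, seq_len) tensor of token ids and the Namespace is a
# hypothetical stand-in for the expected command-line args.
def _demo_fasttext():
    from argparse import Namespace
    args = Namespace(class_num=2, vocabulary_size=5000, embedding_dim=128)
    model = FastText(args)
    tokens = torch.randint(0, args.vocabulary_size, (8, 64))  # (batch, seq_len)
    return model(tokens)  # -> logits of shape (8, 2)
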
# HAN
class SelfAttention(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(SelfAttention, self).__init__()
        self.W = nn.Linear(input_size, hidden_size, True)
        self.u = nn.Linear(hidden_size, 1)

    def forward(self, x):
        u = torch.tanh(self.W(x))
        a = F.softmax(self.u(u), dim=1)
        x = a.mul(x).sum(1)
        return x


class HAN(nn.Module):
    def __init__(self, args):
        super(HAN, self).__init__()
        hidden_size_gru = 50
        hidden_size_att = 100
        num_classes = args.class_num
        vocabulary_size = args.vocabulary_size
        embedding_dimension = args.embedding_dim
        self.num_words = 64  # word padding size (used here as the number of chunks each document is split into)
        self.embed = nn.Embedding(vocabulary_size, embedding_dimension)
        self.gru1 = nn.GRU(embedding_dimension, hidden_size_gru, bidirectional=True, batch_first=True)
        self.att1 = SelfAttention(hidden_size_gru * 2, hidden_size_att)
        self.gru2 = nn.GRU(hidden_size_att, hidden_size_gru, bidirectional=True, batch_first=True)
        self.att2 = SelfAttention(hidden_size_gru * 2, hidden_size_att)
        # The fc layer has very few parameters, so no dropout is applied here.
        self.fc = nn.Linear(hidden_size_att, num_classes, True)

    def forward(self, x):
        # split each document: (batch, seq_len) -> (batch * num_words, seq_len / num_words)
        x = x.view(x.size(0) * self.num_words, -1).contiguous()
        x = self.embed(x)
        x, _ = self.gru1(x)
        x = self.att1(x)
        x = x.view(x.size(0) // self.num_words, self.num_words, -1).contiguous()
        x, _ = self.gru2(x)
        x = self.att2(x)
        x = self.fc(x)
        x = F.log_softmax(x, dim=1)  # log-probabilities over classes
        return x
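
# Illustrative forward-pass check for HAN: because the forward pass reshapes the input
# into self.num_words chunks, seq_len must be a multiple of num_words (64 here);
# a 512-token document becomes 64 chunks of 8 tokens. The Namespace is a hypothetical
# stand-in for the expected command-line args.
def _demo_han():
    from argparse import Namespace
    args = Namespace(class_num=2, vocabulary_size=5000, embedding_dim=128)
    model = HAN(args)
    tokens = torch.randint(0, args.vocabulary_size, (8, 512))  # (batch, seq_len), seq_len % 64 == 0
    return model(tokens)  # -> log-probabilities of shape (8, 2)
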
# CharCNN
class CharCNN(nn.Module):
    def __init__(self, args):
        super(CharCNN, self).__init__()
        self.num_chars = 64
        self.features = [128, 128, 128, 128, 128, 128]
        self.kernel_sizes = [7, 7, 3, 3, 3, 3]
        self.dropout = args.dropout
        self.num_labels = args.class_num
        vocabulary_size = args.vocabulary_size
        embedding_dimension = args.embedding_dim
        # Embedding layer
        self.embeddings = nn.Embedding(vocabulary_size, embedding_dimension)
        self.embeddings.weight.requires_grad = False
        self.in_features = [self.num_chars] + self.features[:-1]
        self.out_features = self.features
        self.conv1d_1 = nn.Sequential(
            nn.Conv1d(self.in_features[0], self.out_features[0], self.kernel_sizes[0], stride=1),
            nn.BatchNorm1d(self.out_features[0]),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=3, stride=3)
        )
        self.conv1d_2 = nn.Sequential(
            nn.Conv1d(self.in_features[1], self.out_features[1], self.kernel_sizes[1], stride=1),
            nn.BatchNorm1d(self.out_features[1]),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=3, stride=3)
        )
        self.conv1d_3 = nn.Sequential(
            nn.Conv1d(self.in_features[2], self.out_features[2], self.kernel_sizes[2], stride=1),
            nn.BatchNorm1d(self.out_features[2]),
            nn.ReLU()
        )
        self.conv1d_4 = nn.Sequential(
            nn.Conv1d(self.in_features[3], self.out_features[3], self.kernel_sizes[3], stride=1),
            nn.BatchNorm1d(self.out_features[3]),
            nn.ReLU()
        )
        self.conv1d_5 = nn.Sequential(
            nn.Conv1d(self.in_features[4], self.out_features[4], self.kernel_sizes[4], stride=1),
            nn.BatchNorm1d(self.out_features[4]),
            nn.ReLU()
        )
        self.conv1d_6 = nn.Sequential(
            nn.Conv1d(self.in_features[5], self.out_features[5], self.kernel_sizes[5], stride=1),
            nn.BatchNorm1d(self.out_features[5]),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=3, stride=3)
        )
        self.fc1 = nn.Sequential(
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Dropout(self.dropout)
        )
        self.fc2 = nn.Sequential(
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Dropout(self.dropout)
        )
        self.fc3 = nn.Linear(128, self.num_labels)

    def forward(self, x):
        # x: (batch, num_chars) token ids; the embedding gives (batch, num_chars, embed_dim),
        # which Conv1d treats as num_chars input channels over a temporal length of embed_dim.
        x = self.embeddings(x)
        x = self.conv1d_1(x)  # conv (k=7) + max-pool /3
        x = self.conv1d_2(x)  # conv (k=7) + max-pool /3
        x = self.conv1d_3(x)  # conv (k=3)
        x = self.conv1d_4(x)  # conv (k=3)
        x = self.conv1d_5(x)  # conv (k=3)
        x = self.conv1d_6(x)  # conv (k=3) + max-pool /3
        x = x.view(x.size(0), -1)  # flatten to (batch, features)
        out = self.fc1(x)
        out = self.fc2(out)
        out = self.fc3(out)  # (batch, num_labels)
        return out
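
# Illustrative forward-pass check for CharCNN as written above: the sequence length must
# equal self.num_chars (64), since it is used as the Conv1d input-channel count, and
# embedding_dim=128 keeps the temporal dimension long enough to survive all six conv
# blocks (final flattened size 128, matching fc1). The Namespace is a hypothetical
# stand-in for the expected command-line args.
def _demo_charcnn():
    from argparse import Namespace
    args = Namespace(class_num=2, vocabulary_size=5000, embedding_dim=128, dropout=0.5)
    model = CharCNN(args)
    tokens = torch.randint(0, args.vocabulary_size, (8, 64))  # (batch, num_chars)
    return model(tokens)  # -> logits of shape (8, 2)
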
# Transformer
class Transformer_Config(object):
    """Configuration parameters"""
    def __init__(self, args):
        # self.model_name = 'Transformer'
        # self.train_path = dataset + '/data/train.txt'                    # training set
        # self.dev_path = dataset + '/data/dev.txt'                        # validation set
        # self.test_path = dataset + '/data/test.txt'                      # test set
        # self.class_list = [x.strip() for x in open(
        #     dataset + '/data/class.txt', encoding='utf-8').readlines()]  # class names
        # self.vocab_path = dataset + '/data/vocab.pkl'                    # vocabulary
        # self.save_path = dataset + '/saved_dict/' + self.model_name + '.ckpt'  # saved checkpoint
        # self.log_path = dataset + '/log/' + self.model_name
        # self.embedding_pretrained = torch.tensor(
        #     np.load(dataset + '/data/' + embedding)["embeddings"].astype('float32'))\
        #     if embedding != 'random' else None                           # pretrained word vectors
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')  # device
        self.dropout = 0.5                   # dropout rate
        self.require_improvement = 2000      # stop training early if there is no improvement for this many batches
        self.num_classes = args.class_num    # number of classes
        self.n_vocab = args.vocabulary_size  # vocabulary size, assigned at runtime
        self.num_epochs = args.epochs        # number of epochs
        self.batch_size = args.batch_size    # mini-batch size
        self.pad_size = 64                   # every sentence is padded/truncated to this length
        self.learning_rate = 5e-4            # learning rate
        self.embedding_pretrained = None
        # self.embed = self.embedding_pretrained.size(1)\
        #     if self.embedding_pretrained is not None else 300            # embedding dimension
        self.embed = 128
        self.dim_model = args.embedding_dim
        self.hidden = 1024
        self.last_hidden = 512
        self.num_head = 2
        self.num_encoder = 2


'''Attention Is All You Need'''
class Transformer(nn.Module):
    def __init__(self, config):
        super(Transformer, self).__init__()
        if config.embedding_pretrained is not None:
            self.embedding = nn.Embedding.from_pretrained(config.embedding_pretrained, freeze=False)
        else:
            self.embedding = nn.Embedding(config.n_vocab, config.embed)
        self.postion_embedding = Positional_Encoding(config.embed, config.pad_size, config.dropout, config.device)
        self.encoder = Encoder(config.dim_model, config.num_head, config.hidden, config.dropout)
        self.encoders = nn.ModuleList([
            copy.deepcopy(self.encoder)
            # Encoder(config.dim_model, config.num_head, config.hidden, config.dropout)
            for _ in range(config.num_encoder)])
        self.fc1 = nn.Linear(config.pad_size * config.dim_model, config.num_classes)
        # self.fc2 = nn.Linear(config.last_hidden, config.num_classes)
        # self.fc1 = nn.Linear(config.dim_model, config.num_classes)

    def forward(self, x):
        out = self.embedding(x)
        out = self.postion_embedding(out)
        for encoder in self.encoders:
            out = encoder(out)
        out = out.view(out.size(0), -1)
        # out = torch.mean(out, 1)
        out = self.fc1(out)
        return out


class Encoder(nn.Module):
    def __init__(self, dim_model, num_head, hidden, dropout):
        super(Encoder, self).__init__()
        self.attention = Multi_Head_Attention(dim_model, num_head, dropout)
        self.feed_forward = Position_wise_Feed_Forward(dim_model, hidden, dropout)

    def forward(self, x):
        out = self.attention(x)
        out = self.feed_forward(out)
        return out


class Positional_Encoding(nn.Module):
    def __init__(self, embed, pad_size, dropout, device):
        super(Positional_Encoding, self).__init__()
        self.device = device
        self.pe = torch.tensor([[pos / (10000.0 ** (i // 2 * 2.0 / embed)) for i in range(embed)] for pos in range(pad_size)])
        self.pe[:, 0::2] = torch.sin(self.pe[:, 0::2])
        self.pe[:, 1::2] = torch.cos(self.pe[:, 1::2])
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        out = x + nn.Parameter(self.pe, requires_grad=False).to(self.device)
        out = self.dropout(out)
        return out


class Scaled_Dot_Product_Attention(nn.Module):
    '''Scaled Dot-Product Attention'''
    def __init__(self):
        super(Scaled_Dot_Product_Attention, self).__init__()

    def forward(self, Q, K, V, scale=None):
        '''
        Args:
            Q: [batch_size, len_Q, dim_Q]
            K: [batch_size, len_K, dim_K]
            V: [batch_size, len_V, dim_V]
            scale: scaling factor, sqrt(dim_K) in the paper
        Return:
            the context tensor after self-attention
        '''
        attention = torch.matmul(Q, K.permute(0, 2, 1))
        if scale:
            attention = attention * scale
        # if mask:  # TODO change this
        #     attention = attention.masked_fill_(mask == 0, -1e9)
        attention = F.softmax(attention, dim=-1)
        context = torch.matmul(attention, V)
        return context


class Multi_Head_Attention(nn.Module):
    def __init__(self, dim_model, num_head, dropout=0.0):
        super(Multi_Head_Attention, self).__init__()
        self.num_head = num_head
        assert dim_model % num_head == 0
        self.dim_head = dim_model // self.num_head
        self.fc_Q = nn.Linear(dim_model, num_head * self.dim_head)
        self.fc_K = nn.Linear(dim_model, num_head * self.dim_head)
        self.fc_V = nn.Linear(dim_model, num_head * self.dim_head)
        self.attention = Scaled_Dot_Product_Attention()
        self.fc = nn.Linear(num_head * self.dim_head, dim_model)
        self.dropout = nn.Dropout(dropout)
        self.layer_norm = nn.LayerNorm(dim_model)

    def forward(self, x):
        batch_size = x.size(0)
        Q = self.fc_Q(x)
        K = self.fc_K(x)
        V = self.fc_V(x)
        Q = Q.view(batch_size * self.num_head, -1, self.dim_head)
        K = K.view(batch_size * self.num_head, -1, self.dim_head)
        V = V.view(batch_size * self.num_head, -1, self.dim_head)
        # if mask:  # TODO
        #     mask = mask.repeat(self.num_head, 1, 1)  # TODO change this
        scale = K.size(-1) ** -0.5  # scaling factor
        context = self.attention(Q, K, V, scale)
        context = context.view(batch_size, -1, self.dim_head * self.num_head)
        out = self.fc(context)
        out = self.dropout(out)
        out = out + x  # residual connection
        out = self.layer_norm(out)
        return out


class Position_wise_Feed_Forward(nn.Module):
    def __init__(self, dim_model, hidden, dropout=0.0):
        super(Position_wise_Feed_Forward, self).__init__()
        self.fc1 = nn.Linear(dim_model, hidden)
        self.fc2 = nn.Linear(hidden, dim_model)
        self.dropout = nn.Dropout(dropout)
        self.layer_norm = nn.LayerNorm(dim_model)

    def forward(self, x):
        out = self.fc1(x)
        out = F.relu(out)
        out = self.fc2(out)
        out = self.dropout(out)
        out = out + x  # residual connection
        out = self.layer_norm(out)
        return out
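
# Illustrative end-to-end check for the Transformer classifier above. Note that, as
# written, the encoder's dim_model comes from args.embedding_dim while the embedding
# layer uses config.embed (128), so the two must agree, and the input length must equal
# config.pad_size (64). The Namespace is a hypothetical stand-in for the expected args.
def _demo_transformer():
    from argparse import Namespace
    args = Namespace(class_num=2, vocabulary_size=5000, embedding_dim=128,
                     epochs=10, batch_size=8)
    config = Transformer_Config(args)
    model = Transformer(config).to(config.device)
    tokens = torch.randint(0, args.vocabulary_size, (8, config.pad_size)).to(config.device)
    return model(tokens)  # -> logits of shape (8, 2)
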

3 Results and Discussion

        Training was carried out on a binary text sentiment classification task. The dataset contains 56,700 training samples and 7,000 evaluation samples. The test results are shown in the table below.

        Because the dataset is fairly small, the smaller models actually achieve better results here; the Transformer is somewhat overkill for this task and has little room to play to its strengths. Everyone is welcome to study and discuss.
