
NLP Notes (9): A Beginner's Implementation of a GPT Chinese Dialogue System


Previous posts covered how to build seq2seq models, convolutional neural networks, the Transformer, and more. Today we present a small GPT-based Chinese dialogue system implemented in PyTorch. The complete code and the trained weight file can be obtained from the link at the end of the article.


I. Introduction to the GPT Model

GPT is also a pre-trained model. Unlike BERT, it uses the Transformer decoder stack. Its training procedure is similar to BERT's: first pre-train on large amounts of unlabeled data, then fine-tune on a small amount of labeled data for the downstream task. During fine-tuning, only task-specific inputs need to be constructed; the model architecture itself barely changes. This post uses a GPT-2-style model. Compared with GPT-1, GPT-2 moves layer normalization to before each sub-layer (multi-head attention and the feed-forward layer) and adds one extra layer norm after the output of the last Transformer block. In the figure below, the left side is GPT-1 and the right side is GPT-2.

[Figure: GPT-1 (left) vs. GPT-2 (right) block structure]
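The difference between the two is easiest to see in code. Below is a minimal sketch (not part of this post's implementation; `attn` and `ffn` stand in for the multi-head attention and feed-forward sub-layers) contrasting the post-LN block of GPT-1 with the pre-LN block of GPT-2. Note that the implementation later in this post applies LayerNorm after the residual addition, i.e. the post-LN pattern.

import torch.nn as nn

class PostLNBlock(nn.Module):
    # GPT-1 style: LayerNorm is applied after the residual addition
    def __init__(self, d_model, attn, ffn):
        super().__init__()
        self.attn, self.ffn = attn, ffn
        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x, mask):
        x = self.ln1(x + self.attn(x, mask))  # sub-layer, then add & norm
        x = self.ln2(x + self.ffn(x))
        return x

class PreLNBlock(nn.Module):
    # GPT-2 style: LayerNorm is applied before each sub-layer;
    # GPT-2 additionally applies one final LayerNorm after the last block.
    def __init__(self, d_model, attn, ffn):
        super().__init__()
        self.attn, self.ffn = attn, ffn
        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x, mask):
        x = x + self.attn(self.ln1(x), mask)  # norm first, then sub-layer and residual
        x = x + self.ffn(self.ln2(x))
        return x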

The model used here is the Transformer decoder. Unlike the encoder, the decoder adds a mask when computing attention: when computing self-attention for a given token, the model cannot see anything that comes after it and must rely only on the preceding tokens. The encoder has no such mask; it sees the whole sequence, so self-attention for any token can draw on the entire sequence. Because BERT uses the encoder, it can be trained on cloze-style tasks, predicting a middle token from the words on both sides, whereas GPT can only predict the tokens that come next. Predicting the future is much harder than filling in the middle, which is a large part of why GPT underperforms BERT on such tasks. The model's computation is as follows:

h_0 = U W_e + W_p
h_l = transformer_block(h_{l-1}),  l = 1, ..., n
P(u) = softmax(h_n W_e^T)

where U = (u_{-k}, ..., u_{-1}) is the window of context tokens, W_e is the token embedding matrix, and W_p is the position embedding matrix.
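To make the decoder mask concrete, here is a toy example (not taken from this post's code): a length-5 sequence whose last two positions are <pad> (token id 0). Combining the upper-triangular causal mask with the padding mask gives the positions each query is forbidden to attend to, which is exactly what the get_attn_pad_mask and get_attn_subsequence_mask functions below compute.

import torch

seq = torch.tensor([[5, 7, 9, 0, 0]])                             # one sequence of length 5, two <pad> tokens
pad_mask = seq.eq(0).unsqueeze(1).expand(1, 5, 5)                  # True where the key position is padding
causal_mask = torch.triu(torch.ones(1, 5, 5), diagonal=1).bool()   # True above the diagonal (future positions)
mask = pad_mask | causal_mask                                      # True = not allowed to attend
print(mask[0].int())
# tensor([[0, 1, 1, 1, 1],
#         [0, 0, 1, 1, 1],
#         [0, 0, 0, 1, 1],
#         [0, 0, 0, 1, 1],    <- padded key columns stay masked even for later queries
#         [0, 0, 0, 1, 1]])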

II. Implementing a Chinese Dialogue System with GPT

1. Dataset and Source Code

Qingyun dataset: open-domain, about 120,000 dialogue pairs: https://github.com/skyerhxx/Chatbot

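Each line of qingyun.txt holds one question-answer pair separated by a tab, which the code later replaces with the <sep> token. A tiny illustrative example (not an actual line from the dataset) of how make_data() in the model code below tokenizes a line:

line = "你好\t很高兴认识你"                                   # question <tab> answer
tokens = [c if c != '\t' else "<sep>" for c in line] + ['<sep>']
print(tokens)
# ['你', '好', '<sep>', '很', '高', '兴', '认', '识', '你', '<sep>']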

2. Complete Steps

(1) Generate the vocabulary

  1. """
  2. get_vocab.py 生成字典信息
  3. """
  4. import json
  5. def get_dict(datas):
  6. word_count ={}
  7. for data in datas:
  8. data = data.strip().replace('\t','')
  9. for word in data:
  10. word_count.setdefault(word,0)
  11. word_count[word]+=1
  12. word2id = {"<pad>":0,"<unk>":1,"<sep>":2}
  13. temp = {word: i + len(word2id) for i, word in enumerate(word_count.keys())}
  14. word2id.update(temp)
  15. id2word=list(word2id.keys())
  16. return word2id,id2word
  17. if __name__ == '__main__':
  18. with open('data/qingyun.txt', 'r', encoding='utf-8') as f:
  19. datas = f.readlines()
  20. word2id, id2word = get_dict(datas)
  21. dict_datas = {"word2id":word2id,"id2word":id2word}
  22. json.dump(dict_datas, open('data/dict_qingyun.json', 'w', encoding='utf-8'))
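As a quick sanity check, the saved dictionary can be loaded back and inspected (a small illustrative snippet; the exact vocabulary size depends on the corpus):

import json

dict_datas = json.load(open('data/dict_qingyun.json', 'r', encoding='utf-8'))
word2id, id2word = dict_datas["word2id"], dict_datas["id2word"]
print(len(word2id), word2id["<pad>"], word2id["<sep>"])  # vocabulary size, 0, 2
print(id2word[:3])                                       # ['<pad>', '<unk>', '<sep>']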

(2) Build the model

  1. """
  2. 该文件是GPT模型的实现,如果看不懂建议先去了解一下Transformer代码
  3. """
  4. import json
  5. import torch
  6. import torch.utils.data as Data
  7. from torch import nn, optim
  8. import numpy as np
  9. import time
  10. from tqdm import tqdm
  11. device = torch.device("cuda")
  12. dict_datas = json.load(open('data/dict_qingyun.json', 'r'))
  13. word2id, id2word = dict_datas['word2id'], dict_datas['id2word']
  14. vocab_size = len(word2id)
  15. max_pos = 1800
  16. d_model = 768 # Embedding Size
  17. d_ff = 2048 # FeedForward dimension
  18. d_k = d_v = 64 # dimension of K(=Q), V
  19. n_layers = 6 # number of Encoder of Decoder Layer
  20. n_heads = 8 # number of heads in Multi-Head Attention
  21. CLIP = 1
  22. def make_data(datas):
  23. train_datas = []
  24. for data in datas:
  25. data = data.strip()
  26. train_data = [i if i != '\t' else "<sep>" for i in data] + ['<sep>']
  27. train_datas.append(train_data)
  28. return train_datas
  29. class MyDataSet(Data.Dataset):
  30. def __init__(self, datas):
  31. self.datas = datas
  32. def __getitem__(self, item):
  33. data = self.datas[item]
  34. decoder_input = data[:-1]
  35. decoder_output = data[1:]
  36. decoder_input_len = len(decoder_input)
  37. decoder_output_len = len(decoder_output)
  38. return {"decoder_input": decoder_input, "decoder_input_len": decoder_input_len,
  39. "decoder_output": decoder_output, "decoder_output_len": decoder_output_len}
  40. def __len__(self):
  41. return len(self.datas)
  42. def padding_batch(self, batch):
  43. decoder_input_lens = [d["decoder_input_len"] for d in batch]
  44. decoder_output_lens = [d["decoder_output_len"] for d in batch]
  45. decoder_input_maxlen = max(decoder_input_lens)
  46. decoder_output_maxlen = max(decoder_output_lens)
  47. for d in batch:
  48. d["decoder_input"].extend([word2id["<pad>"]] * (decoder_input_maxlen - d["decoder_input_len"]))
  49. d["decoder_output"].extend([word2id["<pad>"]] * (decoder_output_maxlen - d["decoder_output_len"]))
  50. decoder_inputs = torch.tensor([d["decoder_input"] for d in batch], dtype=torch.long)
  51. decoder_outputs = torch.tensor([d["decoder_output"] for d in batch], dtype=torch.long)
  52. return decoder_inputs, decoder_outputs
  53. def get_attn_pad_mask(seq_q, seq_k):
  54. '''
  55. seq_q: [batch_size, seq_len]
  56. seq_k: [batch_size, seq_len]
  57. seq_len could be src_len or it could be tgt_len
  58. seq_len in seq_q and seq_len in seq_k maybe not equal
  59. '''
  60. batch_size, len_q = seq_q.size()
  61. batch_size, len_k = seq_k.size()
  62. # eq(zero) is PAD token
  63. pad_attn_mask = seq_k.data.eq(0).unsqueeze(1) # [batch_size, 1, len_k], True is masked
  64. return pad_attn_mask.expand(batch_size, len_q, len_k) # [batch_size, len_q, len_k]
  65. def get_attn_subsequence_mask(seq):
  66. '''
  67. seq: [batch_size, tgt_len]
  68. '''
  69. attn_shape = [seq.size(0), seq.size(1), seq.size(1)]
  70. subsequence_mask = np.triu(np.ones(attn_shape), k=1) # Upper triangular matrix
  71. subsequence_mask = torch.from_numpy(subsequence_mask).byte()
  72. subsequence_mask = subsequence_mask.to(device)
  73. return subsequence_mask # [batch_size, tgt_len, tgt_len]
  74. class ScaledDotProductAttention(nn.Module):
  75. def __init__(self):
  76. super(ScaledDotProductAttention, self).__init__()
  77. def forward(self, Q, K, V, attn_mask):
  78. '''
  79. Q: [batch_size, n_heads, len_q, d_k]
  80. K: [batch_size, n_heads, len_k, d_k]
  81. V: [batch_size, n_heads, len_v(=len_k), d_v]
  82. attn_mask: [batch_size, n_heads, seq_len, seq_len]
  83. '''
  84. scores = torch.matmul(Q, K.transpose(-1, -2)) / np.sqrt(
  85. d_k) # scores : [batch_size, n_heads, len_q, len_k]
  86. scores.masked_fill_(attn_mask, -1e9) # Fills elements of self tensor with value where mask is True.
  87. attn = nn.Softmax(dim=-1)(scores)
  88. context = torch.matmul(attn, V) # [batch_size, n_heads, len_q, d_v]
  89. return context, attn
  90. class MultiHeadAttention(nn.Module):
  91. def __init__(self):
  92. super(MultiHeadAttention, self).__init__()
  93. self.W_Q = nn.Linear(d_model, d_k * n_heads, bias=False)
  94. self.W_K = nn.Linear(d_model, d_k * n_heads, bias=False)
  95. self.W_V = nn.Linear(d_model, d_v * n_heads, bias=False)
  96. self.fc = nn.Linear(n_heads * d_v, d_model, bias=False)
  97. self.layernorm = nn.LayerNorm(d_model)
  98. def forward(self, input_Q, input_K, input_V, attn_mask):
  99. '''
  100. input_Q: [batch_size, len_q, d_model]
  101. input_K: [batch_size, len_k, d_model]
  102. input_V: [batch_size, len_v(=len_k), d_model]
  103. attn_mask: [batch_size, seq_len, seq_len]
  104. '''
  105. residual, batch_size = input_Q, input_Q.size(0)
  106. # (B, S, D) -proj-> (B, S, D_new) -split-> (B, S, H, W) -trans-> (B, H, S, W)
  107. Q = self.W_Q(input_Q).view(batch_size, -1, n_heads, d_k).transpose(1, 2) # Q: [batch_size, n_heads, len_q, d_k]
  108. K = self.W_K(input_K).view(batch_size, -1, n_heads, d_k).transpose(1, 2) # K: [batch_size, n_heads, len_k, d_k]
  109. V = self.W_V(input_V).view(batch_size, -1, n_heads, d_v).transpose(1,
  110. 2) # V: [batch_size, n_heads, len_v(=len_k), d_v]
  111. attn_mask = attn_mask.unsqueeze(1).repeat(1, n_heads, 1,
  112. 1) # attn_mask : [batch_size, n_heads, seq_len, seq_len]
  113. # context: [batch_size, n_heads, len_q, d_v], attn: [batch_size, n_heads, len_q, len_k]
  114. context, attn = ScaledDotProductAttention()(Q, K, V, attn_mask)
  115. context = context.transpose(1, 2).reshape(batch_size, -1,
  116. n_heads * d_v) # context: [batch_size, len_q, n_heads * d_v]
  117. output = self.fc(context) # [batch_size, len_q, d_model]
  118. return self.layernorm(output + residual), attn
  119. class PoswiseFeedForwardNet(nn.Module):
  120. def __init__(self):
  121. super(PoswiseFeedForwardNet, self).__init__()
  122. self.fc = nn.Sequential(
  123. nn.Linear(d_model, d_ff, bias=False),
  124. nn.ReLU(),
  125. nn.Linear(d_ff, d_model, bias=False)
  126. )
  127. self.layernorm = nn.LayerNorm(d_model)
  128. def forward(self, inputs):
  129. '''
  130. inputs: [batch_size, seq_len, d_model]
  131. '''
  132. residual = inputs
  133. output = self.fc(inputs)
  134. return self.layernorm(output + residual) # [batch_size, seq_len, d_model]
  135. class DecoderLayer(nn.Module):
  136. def __init__(self):
  137. super(DecoderLayer, self).__init__()
  138. self.dec_self_attn = MultiHeadAttention()
  139. self.dec_enc_attn = MultiHeadAttention()
  140. self.pos_ffn = PoswiseFeedForwardNet()
  141. def forward(self, dec_inputs, dec_self_attn_mask):
  142. '''
  143. dec_inputs: [batch_size, tgt_len, d_model]
  144. dec_self_attn_mask: [batch_size, tgt_len, tgt_len]
  145. '''
  146. # dec_outputs: [batch_size, tgt_len, d_model], dec_self_attn: [batch_size, n_heads, tgt_len, tgt_len]
  147. dec_outputs, dec_self_attn = self.dec_self_attn(dec_inputs, dec_inputs, dec_inputs, dec_self_attn_mask)
  148. dec_outputs = self.pos_ffn(dec_outputs) # [batch_size, tgt_len, d_model]
  149. return dec_outputs, dec_self_attn
  150. class Decoder(nn.Module):
  151. def __init__(self):
  152. super(Decoder, self).__init__()
  153. self.tgt_emb = nn.Embedding(vocab_size, d_model)
  154. self.pos_emb = nn.Embedding(max_pos, d_model)
  155. self.layers = nn.ModuleList([DecoderLayer() for _ in range(n_layers)])
  156. def forward(self, dec_inputs):
  157. '''
  158. dec_inputs: [batch_size, tgt_len]
  159. '''
  160. seq_len = dec_inputs.size(1)
  161. pos = torch.arange(seq_len, dtype=torch.long, device=device)
  162. pos = pos.unsqueeze(0).expand_as(dec_inputs) # [seq_len] -> [batch_size, seq_len]
  163. dec_outputs = self.tgt_emb(dec_inputs) + self.pos_emb(pos) # [batch_size, tgt_len, d_model]
  164. dec_self_attn_pad_mask = get_attn_pad_mask(dec_inputs, dec_inputs) # [batch_size, tgt_len, tgt_len]
  165. dec_self_attn_subsequence_mask = get_attn_subsequence_mask(dec_inputs) # [batch_size, tgt_len, tgt_len]
  166. dec_self_attn_mask = torch.gt((dec_self_attn_pad_mask + dec_self_attn_subsequence_mask),
  167. 0) # [batch_size, tgt_len, tgt_len]
  168. dec_self_attns = []
  169. for layer in self.layers:
  170. # dec_outputs: [batch_size, tgt_len, d_model], dec_self_attn: [batch_size, n_heads, tgt_len, tgt_len], dec_enc_attn: [batch_size, h_heads, tgt_len, src_len]
  171. dec_outputs, dec_self_attn = layer(dec_outputs, dec_self_attn_mask)
  172. dec_self_attns.append(dec_self_attn)
  173. return dec_outputs, dec_self_attns
  174. class GPT(nn.Module):
  175. def __init__(self):
  176. super(GPT, self).__init__()
  177. self.decoder = Decoder()
  178. self.projection = nn.Linear(d_model, vocab_size)
  179. def forward(self, dec_inputs):
  180. """
  181. dec_inputs: [batch_size, tgt_len]
  182. """
  183. # dec_outpus: [batch_size, tgt_len, d_model], dec_self_attns: [n_layers, batch_size, n_heads, tgt_len, tgt_len]
  184. dec_outputs, dec_self_attns = self.decoder(dec_inputs)
  185. # dec_logits: [batch_size, tgt_len, tgt_vocab_size]
  186. dec_logits = self.projection(dec_outputs)
  187. return dec_logits.view(-1, dec_logits.size(-1)), dec_self_attns
  188. def greedy_decoder(self, dec_input):
  189. terminal = False
  190. start_dec_len = len(dec_input[0])
  191. # 一直预测下一个单词,直到预测到"<sep>"结束,如果一直不到"<sep>",则根据长度退出循环,并在最后加上”<sep>“字符
  192. while not terminal:
  193. if len(dec_input[0]) - start_dec_len > 100:
  194. next_symbol = word2id['<sep>']
  195. dec_input = torch.cat(
  196. [dec_input.detach(), torch.tensor([[next_symbol]], dtype=dec_input.dtype, device=device)], -1)
  197. break
  198. dec_outputs, _ = self.decoder(dec_input)
  199. projected = self.projection(dec_outputs)
  200. prob = projected.squeeze(0).max(dim=-1, keepdim=False)[1]
  201. next_word = prob.data[-1]
  202. next_symbol = next_word
  203. if next_symbol == word2id["<sep>"]:
  204. terminal = True
  205. dec_input = torch.cat(
  206. [dec_input.detach(), torch.tensor([[next_symbol]], dtype=dec_input.dtype, device=device)], -1)
  207. return dec_input
  208. def answer(self, sentence):
  209. # 把原始句子的\t替换成”<sep>
  210. dec_input = [word2id.get(word, 1) if word != '\t' else word2id['<sep>'] for word in sentence]
  211. dec_input = torch.tensor(dec_input, dtype=torch.long, device=device).unsqueeze(0)
  212. output = self.greedy_decoder(dec_input).squeeze(0)
  213. out = [id2word[int(id)] for id in output]
  214. # 统计"<sep>"字符在结果中的索引
  215. sep_indexs = []
  216. for i in range(len(out)):
  217. if out[i] == "<sep>":
  218. sep_indexs.append(i)
  219. # 取最后两个sep中间的内容作为回答
  220. answer = out[sep_indexs[-2] + 1:-1]
  221. answer = "".join(answer)
  222. return answer
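Before training, a quick shape check of the model can be useful. The snippet below is a minimal sketch that assumes the definitions above are in scope and a CUDA device is available:

model = GPT().to(device)
dummy = torch.randint(3, vocab_size, (2, 10), device=device)  # batch of 2 sequences, length 10, no special tokens
logits, attns = model(dummy)
print(logits.shape)                 # [batch_size * tgt_len, vocab_size], i.e. [20, len(word2id)]
print(len(attns), attns[0].shape)   # n_layers, [2, n_heads, 10, 10]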

(3) Train the model

  1. """
  2. train.py 生成字典信息完整版文末链接获取
  3. """
  4. if __name__ == '__main__':
  5. with open('data/qingyun.txt', 'r', encoding='utf-8') as f:
  6. datas = f.readlines()
  7. print(len((datas)))
  8. train_data = make_data(datas)
  9. train_num_data = [[word2id[word] for word in line] for line in train_data]
  10. batch_size = 32
  11. epochs = 10
  12. dataset = MyDataSet(train_num_data)
  13. data_loader = Data.DataLoader(dataset, batch_size=batch_size, collate_fn=dataset.padding_batch)
  14. model = GPT().to(device)
  15. train(model,data_loader)
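The train() function itself is only included in the full code from the link at the end. As a rough reference, a minimal training loop consistent with the definitions above (cross-entropy that ignores <pad>, Adam, and gradient clipping with the CLIP constant) could look like the sketch below; the learning rate, the epochs default argument, and saving to GPT2.pt (the filename loaded by demo.py) are assumptions.

def train(model, data_loader, epochs=10):
    # Minimal sketch of a language-modelling training loop; assumes the names from gpt_model.py are in scope
    criterion = nn.CrossEntropyLoss(ignore_index=word2id["<pad>"])
    optimizer = optim.Adam(model.parameters(), lr=1e-4)
    model.train()
    for epoch in range(epochs):
        epoch_loss = 0
        for dec_inputs, dec_outputs in tqdm(data_loader):
            dec_inputs, dec_outputs = dec_inputs.to(device), dec_outputs.to(device)
            optimizer.zero_grad()
            logits, _ = model(dec_inputs)                  # [batch_size * tgt_len, vocab_size]
            loss = criterion(logits, dec_outputs.view(-1))
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), CLIP)  # clip gradients at CLIP = 1
            optimizer.step()
            epoch_loss += loss.item()
        print(f"epoch {epoch + 1}: loss = {epoch_loss / len(data_loader):.4f}")
        torch.save(model.state_dict(), 'GPT2.pt')  # assumed checkpoint name, matching demo.py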


I trained on an RTX 3090 (24 GB) with batch_size and epochs set as above. Depending on your own hardware, you can increase or decrease batch_size and epochs to get the best results.

(4) Run the dialogue

  1. """
  2. demo.py 进行对话
  3. """
  4. import torch
  5. from gpt_model import GPT
  6. if __name__ == '__main__':
  7. device = torch.device('cuda')
  8. model = GPT().to(device)
  9. model.load_state_dict(torch.load('GPT2.pt'))
  10. model.eval()
  11. #初始输入是空,每次加上后面的对话信息
  12. sentence = ''
  13. while True:
  14. temp_sentence = input("question:")
  15. sentence += (temp_sentence + '\t')
  16. if len(sentence) > 200:
  17. #由于该模型输入最大长度为300,避免长度超出限制长度过长需要进行裁剪
  18. t_index = sentence.find('\t')
  19. sentence = sentence[t_index + 1:]
  20. print("answer:", model.answer(sentence))


Because the model was only trained for a few epochs, the responses are not great; to improve them, enlarge the dataset and increase the number of training epochs. Give it a try! Full code and weights:

https://pan.baidu.com/s/12DFQIomctt8VW36tovcIkA?pwd=k0cc (extraction code: k0cc)
 

