
Transformer Self-Attention Mechanism and a Complete Code Implementation


Word Embedding

One way to encode the input words is one-hot: each word becomes a vector whose length equals the size of the predefined vocabulary. One-hot encoding looks simple, but it is sparse; for a large vocabulary the vectors become very long and wasteful, and, more importantly, it cannot express any relationship between two related words. Word embedding is an alternative representation in which words with similar meanings get similar vectors: design a learnable weight matrix W and multiply the one-hot word vector by this matrix to obtain the word's representation.

Suppose like = [1, 0, 0, 0] and love = [0, 0, 0, 1],

and the weight matrix is W = [[w00, w01, w02], [w10, w11, w12], [w20, w21, w22], [w30, w31, w32]].

Multiplying the two one-hot vectors by W picks out the rows [w00, w01, w02] and [w30, w31, w32]. During training the parameters of W are updated continuously, so related words end up with increasingly similar representations. The multiplication also reduces the dimensionality (here from 4 to 3), as in the sketch below.
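A minimal sketch (the names and sizes are illustrative, not from the original article) showing that the one-hot-times-W product is exactly the table lookup that nn.Embedding implements:

import torch
import torch.nn as nn

vocab_size, d_emb = 4, 3                      # 4 words, 3-dimensional embedding (illustrative sizes)
W = torch.randn(vocab_size, d_emb)            # the learnable weight matrix W

like_onehot = torch.tensor([1., 0., 0., 0.])  # like = [1, 0, 0, 0]
print(like_onehot @ W)                        # picks out row 0 of W

emb = nn.Embedding(vocab_size, d_emb)         # nn.Embedding stores W and performs the lookup directly
emb.weight.data = W
print(emb(torch.tensor(0)))                   # the same vector as above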

Each sentence fed to the encoder is a sequence of length seq_len, and each word (or character) is mapped to a vector of size d_model = 512, so the input is of shape [batch_size, seq_len, 512] (x1, x2, ..., xn). The encoder sees the whole input sentence and aggregates the sequence information (through multi-head self-attention) to extract features; the output (z1, z2, ..., zn) has the same shape [batch_size, seq_len, 512].

The decoder starts decoding once it receives the start-of-sentence symbol. It first applies a subsequence mask so that it cannot see anything after the current position (it translates one word at a time), then runs self-attention, and then takes its Q and computes similarities (attention weights) against the K produced from the encoder output. This second attention layer is not self-attention, because its K and V come from the encoder output.

(1) Data construction: build separate source and target vocabularies and set the model parameters: word-vector size d_model = 512; feed-forward hidden size d_ff = 2048; per-head dimensions d_q = d_k = d_v = 64; number of encoder/decoder layers n_layers = 6; number of heads n_heads = 8.

(2) Build a custom dataset MyDataSet and load it with Data.DataLoader.

(3) PositionalEncoding: add the positional information to the word vectors element-wise (the sinusoidal formula is given below the code line).

x = x + self.pe[:x.size(0), :]
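For reference, the sinusoidal encoding built by the PositionalEncoding class further below is the one from the original paper: PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)). It is fixed, so it requires no training.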

(4) Pad mask: sequences (sentences) that are too short are padded with the pad token. So that the pad positions do not take part in the attention weights, the positions where the token id is 0 (pad) are set to True (True means masked); see the small demo after the code.

pad_attn_mask = seq_k.data.eq(0).unsqueeze(1)  # True where the id is 0 (pad); True means masked; unsqueeze adds a dimension. E.g. seq_k = [[1,2,3,4,0], [1,2,3,5,0]] --> [[[F,F,F,F,T]], [[F,F,F,F,T]]]

return pad_attn_mask.expand(batch_size, len_q, len_k) # [batch_size, len_q, len_k]
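A minimal standalone sketch (using the toy sentences from the code further below) of what the expanded mask looks like:

import torch

seq_k = torch.tensor([[1, 2, 3, 4, 0],
                      [1, 2, 3, 5, 0]])   # 0 is the pad index
pad_attn_mask = seq_k.eq(0).unsqueeze(1)  # [2, 1, 5], True where the token is pad
mask = pad_attn_mask.expand(2, 5, 5)      # [batch_size, len_q, len_k]
print(mask[0])                            # every row is [False, False, False, False, True],
                                          # so the pad column never receives any attention weight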

(5) Subsequence mask: the decoder translates one word at a time, so it must not see future positions (a small demo follows the code line).

subsequence_mask = np.triu(np.ones(attn_shape), k=1)  # upper-triangular matrix (ones above the diagonal)
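A quick sketch (illustrative length tgt_len = 4) of what np.triu produces:

import numpy as np

mask = np.triu(np.ones((4, 4)), k=1)
print(mask)
# [[0. 1. 1. 1.]
#  [0. 0. 1. 1.]
#  [0. 0. 0. 1.]
#  [0. 0. 0. 0.]]
# the 1s mark the future positions that get masked, so position t only attends to positions <= t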

(6) ScaledDotProductAttention (a note on the underlying formula follows the snippet):

scores = torch.matmul(Q, K.transpose(-1, -2)) / np.sqrt(d_k)  # [batch_size, n_heads, len_q, len_k]
scores.masked_fill_(attn_mask, -1e9)  # where the mask is True, fill in a large negative number so the softmax gives ~0
attn = nn.Softmax(dim=-1)(scores)     # softmax over the last dimension (len_k) gives the attention weights
context = torch.matmul(attn, V)       # context: [batch_size, n_heads, len_q, d_v]
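These four lines are the standard scaled dot-product attention from the paper, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, with the mask applied to the logits before the softmax so that masked positions end up with (almost) zero weight.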

(7) MultiHeadAttention: multi-head attention with linear projections (a small shape demo follows the snippet)

# input_Q (n x 512) times W_Q (512 x 512) gives Q (n x 512, i.e. n x 64 x 8), which is split into eight heads
# linear projection layers for Q, K and V; they do not change the shape
self.W_Q = nn.Linear(d_model, d_k * n_heads, bias=False)
# Q: [batch_size, n_heads, len_q, d_k]
Q = self.W_Q(input_Q).view(batch_size, -1, n_heads, d_k).transpose(1, 2)
# because of the multiple heads, the mask matrix has to be expanded to 4 dimensions
# attn_mask: [batch_size, seq_len, seq_len] -> [batch_size, n_heads, seq_len, seq_len]
attn_mask = attn_mask.unsqueeze(1).repeat(1, n_heads, 1, 1)
context, attn = ScaledDotProductAttention()(Q, K, V, attn_mask)  # scaled dot-product similarity
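A minimal sketch (dummy tensor, sizes taken from the parameters above) of how view plus transpose splits the projected tensor into heads:

import torch
import torch.nn as nn

batch_size, seq_len, d_model, n_heads, d_k = 2, 5, 512, 8, 64
W_Q = nn.Linear(d_model, d_k * n_heads, bias=False)

x = torch.randn(batch_size, seq_len, d_model)                  # [2, 5, 512]
Q = W_Q(x).view(batch_size, -1, n_heads, d_k).transpose(1, 2)  # split into heads
print(Q.shape)                                                 # torch.Size([2, 8, 5, 64])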

(8) PoswiseFeedForwardNet: two linear layers, 512 -> 2048 -> 512 (a minimal sketch follows)
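A minimal sketch of the position-wise feed-forward block, condensed from the full implementation further below (which also adds a residual connection and LayerNorm):

import torch
import torch.nn as nn

ffn = nn.Sequential(
    nn.Linear(512, 2048, bias=False),  # d_model -> d_ff
    nn.ReLU(),
    nn.Linear(2048, 512, bias=False),  # d_ff -> d_model
)
x = torch.randn(2, 5, 512)             # [batch_size, seq_len, d_model]
print(ffn(x).shape)                    # torch.Size([2, 5, 512]); applied independently at every position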

(9) EncoderLayer (one self-attention layer plus one feed-forward network)

# the first enc_inputs * W_Q = Q
# the second enc_inputs * W_K = K
# the third enc_inputs * W_V = V
enc_outputs, attn = self.enc_self_attn(enc_inputs, enc_inputs, enc_inputs,
                                        enc_self_attn_mask)
enc_outputs = self.pos_ffn(enc_outputs)  # feed-forward network (a position-wise MLP)

(10) DecoderLayer (two attention layers plus one feed-forward network)

dec_outputs, dec_self_attn = self.dec_self_attn(dec_inputs, dec_inputs, dec_inputs,
                                                dec_self_attn_mask)  # Q, K and V all come from the decoder's own input
dec_outputs, dec_enc_attn = self.dec_enc_attn(dec_outputs, enc_outputs, enc_outputs,
                                              dec_enc_attn_mask)  # Q comes from the decoder, K and V from the encoder
dec_outputs = self.pos_ffn(dec_outputs)  # [batch_size, tgt_len, d_model]

(11) Encoder (takes the encoder inputs and returns the encoder outputs)

enc_outputs = self.src_emb(enc_inputs)  # token embeddings
enc_outputs = self.pos_emb(enc_outputs.transpose(0, 1)).transpose(0, 1)  # add positional information
# the output enc_outputs of the previous block is the input of the current block
for layer in self.layers:
    enc_outputs, enc_self_attn = layer(enc_outputs, enc_self_attn_mask)

(12) Decoder (takes three arguments: the decoder inputs, the encoder inputs and the encoder outputs, and returns the decoder outputs). The complete implementation follows:

  1. """
  2. code by Tae Hwan Jung(Jeff Jung) @graykode, Derek Miller @dmmiller612, modify by shwei
  3. Reference: https://github.com/jadore801120/attention-is-all-you-need-pytorch
  4. https://github.com/JayParks/transformer
  5. """
  6. # ====================================================================================================
  7. # 数据构建
  8. import math
  9. import torch
  10. import numpy as np
  11. import torch.nn as nn
  12. import torch.optim as optim
  13. import torch.utils.data as Data
  14. device = 'cpu'
  15. # device = 'cuda'
  16. # transformer epochs
  17. epochs = 100
  18. # epochs = 1000
  19. # 这里我没有用什么大型的数据集,而是手动输入了两对德语→英语的句子
  20. # 还有每个字的索引也是我手动硬编码上去的,主要是为了降低代码阅读难度
  21. # S: Symbol that shows starting of decoding input
  22. # E: Symbol that shows starting of decoding output
  23. # P: Symbol that will fill in blank sequence if current batch data size is short than time steps
  24. sentences = [
  25. # 德语和英语的单词个数不要求相同
  26. # enc_input dec_input dec_output
  27. ['ich mochte ein bier P', 'S i want a beer .', 'i want a beer . E'],
  28. ['ich mochte ein cola P', 'S i want a coke .', 'i want a coke . E']
  29. ]
  30. # 德语和英语的单词要分开建立词库
  31. # Padding Should be Zero
  32. src_vocab = {'P': 0, 'ich': 1, 'mochte': 2, 'ein': 3, 'bier': 4, 'cola': 5}
  33. src_idx2word = {i: w for i, w in enumerate(src_vocab)}
  34. src_vocab_size = len(src_vocab)
  35. tgt_vocab = {'P': 0, 'i': 1, 'want': 2, 'a': 3, 'beer': 4, 'coke': 5, 'S': 6, 'E': 7, '.': 8}
  36. idx2word = {i: w for i, w in enumerate(tgt_vocab)}
  37. tgt_vocab_size = len(tgt_vocab)
  38. src_len = 5 # (源句子的长度)enc_input max sequence length
  39. tgt_len = 6 # dec_input(=dec_output) max sequence length
  40. # Transformer Parameters
  41. d_model = 512 # Embedding Size(token embedding和position编码的维度)
  42. d_ff = 2048 # FeedForward dimension (两次线性层中的隐藏层 512->2048->512,线性层是用来做特征提取的),当然最后会再接一个projection层
  43. d_k = d_v = 64 # dimension of K(=Q), V(Q和K的维度需要相同,这里为了方便让K=V)
  44. n_layers = 6 # number of Encoder of Decoder Layer(Block的个数)
  45. n_heads = 8 # number of heads in Multi-Head Attention(有几套头)
# ==============================================================================================
# Data construction
def make_data(sentences):
    """Convert the word sequences into index sequences."""
    enc_inputs, dec_inputs, dec_outputs = [], [], []
    for i in range(len(sentences)):
        enc_input = [[src_vocab[n] for n in sentences[i][0].split()]]   # [[1, 2, 3, 4, 0], [1, 2, 3, 5, 0]]
        dec_input = [[tgt_vocab[n] for n in sentences[i][1].split()]]   # [[6, 1, 2, 3, 4, 8], [6, 1, 2, 3, 5, 8]]
        dec_output = [[tgt_vocab[n] for n in sentences[i][2].split()]]  # [[1, 2, 3, 4, 8, 7], [1, 2, 3, 5, 8, 7]]
        print(enc_input)
        enc_inputs.extend(enc_input)
        dec_inputs.extend(dec_input)
        dec_outputs.extend(dec_output)
    print(enc_inputs)
    return torch.LongTensor(enc_inputs), torch.LongTensor(dec_inputs), torch.LongTensor(dec_outputs)

enc_inputs, dec_inputs, dec_outputs = make_data(sentences)

class MyDataSet(Data.Dataset):
    """Custom dataset used by the DataLoader."""
    def __init__(self, enc_inputs, dec_inputs, dec_outputs):
        super(MyDataSet, self).__init__()
        self.enc_inputs = enc_inputs
        self.dec_inputs = dec_inputs
        self.dec_outputs = dec_outputs

    def __len__(self):
        return self.enc_inputs.shape[0]

    def __getitem__(self, idx):
        return self.enc_inputs[idx], self.dec_inputs[idx], self.dec_outputs[idx]

loader = Data.DataLoader(MyDataSet(enc_inputs, dec_inputs, dec_outputs), 2, True)
# ====================================================================================================
# Transformer model
class PositionalEncoding(nn.Module):
    def __init__(self, d_model, dropout=0.1, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0).transpose(0, 1)
        self.register_buffer('pe', pe)

    def forward(self, x):
        """
        x: [seq_len, batch_size, d_model]
        """
        x = x + self.pe[:x.size(0), :]
        return self.dropout(x)

def get_attn_pad_mask(seq_q, seq_k):
    # The pad mask makes alpha_ij = 0 for the pad positions when the value vectors are averaged,
    # so the attention never looks at the pad vectors.
    """Here q and k denote two sequences (unrelated to the q and k of the attention mechanism),
    e.g. encoder_inputs (x1, x2, ..., xm) and encoder_inputs (x1, x2, ..., xm).
    Both the encoder and the decoder may call this function, so seq_len depends on the caller.
    seq_q: [batch_size, seq_len]
    seq_k: [batch_size, seq_len]
    seq_len could be src_len or tgt_len,
    and seq_len in seq_q and seq_len in seq_k may differ.
    """
    batch_size, len_q = seq_q.size()  # seq_q is only used to expand the mask
    batch_size, len_k = seq_k.size()
    # eq(zero) is the PAD token
    # e.g. seq_k = [[1,2,3,4,0], [1,2,3,5,0]]
    pad_attn_mask = seq_k.data.eq(0).unsqueeze(1)  # [batch_size, 1, len_k], True is masked
    return pad_attn_mask.expand(batch_size, len_q, len_k)  # [batch_size, len_q, len_k], one such matrix per batch element

def get_attn_subsequence_mask(seq):
    """Print the output to see what it looks like (it is self-explanatory).
    seq: [batch_size, tgt_len]
    """
    attn_shape = [seq.size(0), seq.size(1), seq.size(1)]
    # attn_shape: [batch_size, tgt_len, tgt_len]
    subsequence_mask = np.triu(np.ones(attn_shape), k=1)  # upper-triangular matrix
    subsequence_mask = torch.from_numpy(subsequence_mask).byte()
    return subsequence_mask  # [batch_size, tgt_len, tgt_len]
# ==========================================================================================
class ScaledDotProductAttention(nn.Module):
    def __init__(self):
        super(ScaledDotProductAttention, self).__init__()

    def forward(self, Q, K, V, attn_mask):
        """
        Q: [batch_size, n_heads, len_q, d_k]
        K: [batch_size, n_heads, len_k, d_k]
        V: [batch_size, n_heads, len_v(=len_k), d_v]
        attn_mask: [batch_size, n_heads, seq_len, seq_len]
        Note: in the encoder-decoder attention layer len_q (q1, ..., qt) and len_k (k1, ..., km) may differ.
        """
        scores = torch.matmul(Q, K.transpose(-1, -2)) / np.sqrt(d_k)  # scores: [batch_size, n_heads, len_q, len_k]
        # fill scores with -1e9 wherever attn_mask is True
        scores.masked_fill_(attn_mask, -1e9)  # fills elements of the tensor where the mask is True
        attn = nn.Softmax(dim=-1)(scores)  # softmax over the last dimension (len_k)
        # attn: [batch_size, n_heads, len_q, len_k] * V: [batch_size, n_heads, len_v(=len_k), d_v]
        context = torch.matmul(attn, V)  # context: [batch_size, n_heads, len_q, d_v]
        # context holds the output vectors [[z1, z2, ...], [...]]; attn is the attention matrix (useful for visualisation)
        return context, attn
class MultiHeadAttention(nn.Module):
    """This attention class implements:
    the encoder self-attention,
    the decoder masked self-attention,
    and the encoder-decoder attention.
    """
    def __init__(self):
        super(MultiHeadAttention, self).__init__()
        self.W_Q = nn.Linear(d_model, d_k * n_heads, bias=False)  # q and k must have the same dimension, otherwise the dot product is impossible
        self.W_K = nn.Linear(d_model, d_k * n_heads, bias=False)
        self.W_V = nn.Linear(d_model, d_v * n_heads, bias=False)
        self.fc = nn.Linear(n_heads * d_v, d_model, bias=False)

    def forward(self, input_Q, input_K, input_V, attn_mask):
        """
        input_Q: [batch_size, len_q, d_model]
        input_K: [batch_size, len_k, d_model]
        input_V: [batch_size, len_v(=len_k), d_model]
        attn_mask: [batch_size, seq_len, seq_len]
        """
        residual, batch_size = input_Q, input_Q.size(0)
        # The parameter matrices of all heads are applied as a single linear transformation and then
        # split into heads; this is an implementation trick.
        # B: batch_size, S: seq_len, D: dim
        # (B, S, D) -proj-> (B, S, D_new) -split-> (B, S, Head, W) -trans-> (B, Head, S, W)
        # linear projection, then split into heads
        # Q: [batch_size, n_heads, len_q, d_k]
        Q = self.W_Q(input_Q).view(batch_size, -1, n_heads, d_k).transpose(1, 2)
        # K: [batch_size, n_heads, len_k, d_k]  # K and V always have the same length; their dimensions may differ
        K = self.W_K(input_K).view(batch_size, -1, n_heads, d_k).transpose(1, 2)
        # V: [batch_size, n_heads, len_v(=len_k), d_v]
        V = self.W_V(input_V).view(batch_size, -1, n_heads, d_v).transpose(1, 2)
        # because of the multiple heads the mask matrix has to be expanded to 4 dimensions
        # attn_mask: [batch_size, seq_len, seq_len] -> [batch_size, n_heads, seq_len, seq_len]
        attn_mask = attn_mask.unsqueeze(1).repeat(1, n_heads, 1, 1)
        # context: [batch_size, n_heads, len_q, d_v], attn: [batch_size, n_heads, len_q, len_k]
        context, attn = ScaledDotProductAttention()(Q, K, V, attn_mask)
        # concatenate the outputs of the different heads
        # context: [batch_size, n_heads, len_q, d_v] -> [batch_size, len_q, n_heads * d_v]
        context = context.transpose(1, 2).reshape(batch_size, -1, n_heads * d_v)
        # final projection
        output = self.fc(context)  # [batch_size, len_q, d_model]
        return nn.LayerNorm(d_model).to(device)(output + residual), attn
# nn.Linear in PyTorch only operates on the last dimension, which is exactly what we want:
# the same fully-connected network is applied at every position.
class PoswiseFeedForwardNet(nn.Module):
    def __init__(self):
        super(PoswiseFeedForwardNet, self).__init__()
        self.fc = nn.Sequential(
            nn.Linear(d_model, d_ff, bias=False),
            nn.ReLU(),
            nn.Linear(d_ff, d_model, bias=False)
        )

    def forward(self, inputs):
        """
        inputs: [batch_size, seq_len, d_model]
        """
        residual = inputs
        output = self.fc(inputs)
        return nn.LayerNorm(d_model).to(device)(output + residual)  # [batch_size, seq_len, d_model]
class EncoderLayer(nn.Module):
    def __init__(self):
        super(EncoderLayer, self).__init__()
        self.enc_self_attn = MultiHeadAttention()
        self.pos_ffn = PoswiseFeedForwardNet()

    def forward(self, enc_inputs, enc_self_attn_mask):
        """
        enc_inputs: [batch_size, src_len, d_model]
        enc_self_attn_mask: [batch_size, src_len, src_len]  mask matrix (pad mask or sequence mask)
        """
        # enc_outputs: [batch_size, src_len, d_model], attn: [batch_size, n_heads, src_len, src_len]
        # the first enc_inputs * W_Q = Q
        # the second enc_inputs * W_K = K
        # the third enc_inputs * W_V = V
        enc_outputs, attn = self.enc_self_attn(enc_inputs, enc_inputs, enc_inputs,
                                               enc_self_attn_mask)  # the same enc_inputs provides Q, K and V (before the linear projections)
        enc_outputs = self.pos_ffn(enc_outputs)
        # enc_outputs: [batch_size, src_len, d_model]
        return enc_outputs, attn
class DecoderLayer(nn.Module):
    def __init__(self):
        super(DecoderLayer, self).__init__()
        self.dec_self_attn = MultiHeadAttention()
        self.dec_enc_attn = MultiHeadAttention()
        self.pos_ffn = PoswiseFeedForwardNet()

    def forward(self, dec_inputs, enc_outputs, dec_self_attn_mask, dec_enc_attn_mask):
        """
        dec_inputs: [batch_size, tgt_len, d_model]
        enc_outputs: [batch_size, src_len, d_model]
        dec_self_attn_mask: [batch_size, tgt_len, tgt_len]
        dec_enc_attn_mask: [batch_size, tgt_len, src_len]
        """
        # dec_outputs: [batch_size, tgt_len, d_model], dec_self_attn: [batch_size, n_heads, tgt_len, tgt_len]
        dec_outputs, dec_self_attn = self.dec_self_attn(dec_inputs, dec_inputs, dec_inputs,
                                                        dec_self_attn_mask)  # Q, K and V are all the decoder's own input
        # dec_outputs: [batch_size, tgt_len, d_model], dec_enc_attn: [batch_size, n_heads, tgt_len, src_len]
        dec_outputs, dec_enc_attn = self.dec_enc_attn(dec_outputs, enc_outputs, enc_outputs,
                                                      dec_enc_attn_mask)  # Q comes from the decoder, K and V from the encoder
        dec_outputs = self.pos_ffn(dec_outputs)  # [batch_size, tgt_len, d_model]
        return dec_outputs, dec_self_attn, dec_enc_attn  # dec_self_attn and dec_enc_attn are returned for visualisation
class Encoder(nn.Module):
    def __init__(self):
        super(Encoder, self).__init__()
        self.src_emb = nn.Embedding(src_vocab_size, d_model)  # token embedding
        self.pos_emb = PositionalEncoding(d_model)  # the positional encoding of the Transformer is fixed, not learned
        self.layers = nn.ModuleList([EncoderLayer() for _ in range(n_layers)])

    def forward(self, enc_inputs):
        """
        enc_inputs: [batch_size, src_len]
        """
        enc_outputs = self.src_emb(enc_inputs)  # [batch_size, src_len, d_model]
        enc_outputs = self.pos_emb(enc_outputs.transpose(0, 1)).transpose(0, 1)  # [batch_size, src_len, d_model]
        # pad mask matrix of the encoder input sequence
        enc_self_attn_mask = get_attn_pad_mask(enc_inputs, enc_inputs)  # [batch_size, src_len, src_len]
        enc_self_attns = []  # not needed for the computation; it only collects the attention weights returned below (e.g. to draw heat maps of the word-to-word relations)
        for layer in self.layers:  # iterate over the nn.ModuleList
            # the output enc_outputs of the previous block is the input of the current block
            # enc_outputs: [batch_size, src_len, d_model], enc_self_attn: [batch_size, n_heads, src_len, src_len]
            enc_outputs, enc_self_attn = layer(enc_outputs,
                                               enc_self_attn_mask)  # enc_outputs here is really the layer input; the mask is passed in because this is self-attention
            enc_self_attns.append(enc_self_attn)  # only for visualisation
        return enc_outputs, enc_self_attns
class Decoder(nn.Module):
    def __init__(self):
        super(Decoder, self).__init__()
        self.tgt_emb = nn.Embedding(tgt_vocab_size, d_model)  # embedding table for the decoder input
        self.pos_emb = PositionalEncoding(d_model)
        self.layers = nn.ModuleList([DecoderLayer() for _ in range(n_layers)])  # the decoder blocks

    def forward(self, dec_inputs, enc_inputs, enc_outputs):
        """
        dec_inputs: [batch_size, tgt_len]
        enc_inputs: [batch_size, src_len]
        enc_outputs: [batch_size, src_len, d_model]  # used in the encoder-decoder attention layer
        """
        dec_outputs = self.tgt_emb(dec_inputs)  # [batch_size, tgt_len, d_model]
        dec_outputs = self.pos_emb(dec_outputs.transpose(0, 1)).transpose(0, 1).to(device)  # [batch_size, tgt_len, d_model]
        # pad mask of the decoder input sequence (this toy example has no pad in the decoder input, but real applications do)
        dec_self_attn_pad_mask = get_attn_pad_mask(dec_inputs, dec_inputs).to(device)  # [batch_size, tgt_len, tgt_len]
        # masked self-attention: the current position must not see future positions
        dec_self_attn_subsequence_mask = get_attn_subsequence_mask(dec_inputs).to(device)  # [batch_size, tgt_len, tgt_len]
        # the decoder adds the two mask matrices together (masking both the pad positions and the future positions);
        # torch.gt compares element-wise and returns True where the value is greater than 0
        dec_self_attn_mask = torch.gt((dec_self_attn_pad_mask + dec_self_attn_subsequence_mask), 0).to(device)  # [batch_size, tgt_len, tgt_len]
        # this mask is used in the encoder-decoder attention layer:
        # get_attn_pad_mask builds the pad mask of enc_inputs (the encoder provides K and V; the attention output is a weighted sum of v1, v2, ..., vm,
        # so the weights of the v_i that correspond to pad must be 0, and the attention never looks at the pad vectors)
        # dec_inputs only provides the size for the expand
        dec_enc_attn_mask = get_attn_pad_mask(dec_inputs, enc_inputs)  # [batch_size, tgt_len, src_len]
        dec_self_attns, dec_enc_attns = [], []
        for layer in self.layers:
            # dec_outputs: [batch_size, tgt_len, d_model], dec_self_attn: [batch_size, n_heads, tgt_len, tgt_len], dec_enc_attn: [batch_size, n_heads, tgt_len, src_len]
            # each block takes the output dec_outputs of the previous block (changing) and the encoder output enc_outputs (fixed)
            dec_outputs, dec_self_attn, dec_enc_attn = layer(dec_outputs, enc_outputs, dec_self_attn_mask, dec_enc_attn_mask)
            dec_self_attns.append(dec_self_attn)
            dec_enc_attns.append(dec_enc_attn)
        # dec_outputs: [batch_size, tgt_len, d_model]
        return dec_outputs, dec_self_attns, dec_enc_attns
class Transformer(nn.Module):
    def __init__(self):
        super(Transformer, self).__init__()
        self.encoder = Encoder().to(device)
        self.decoder = Decoder().to(device)
        self.projection = nn.Linear(d_model, tgt_vocab_size, bias=False).to(device)

    def forward(self, enc_inputs, dec_inputs):
        """The Transformer takes two sequences as input:
        enc_inputs: [batch_size, src_len]
        dec_inputs: [batch_size, tgt_len]
        """
        # tensor to store decoder outputs
        # outputs = torch.zeros(batch_size, tgt_len, tgt_vocab_size).to(self.device)
        # enc_outputs: [batch_size, src_len, d_model], enc_self_attns: [n_layers, batch_size, n_heads, src_len, src_len]
        # after the encoder the output still has shape [batch_size, src_len, d_model]
        enc_outputs, enc_self_attns = self.encoder(enc_inputs)
        # dec_outputs: [batch_size, tgt_len, d_model], dec_self_attns: [n_layers, batch_size, n_heads, tgt_len, tgt_len], dec_enc_attns: [n_layers, batch_size, n_heads, tgt_len, src_len]
        dec_outputs, dec_self_attns, dec_enc_attns = self.decoder(dec_inputs, enc_inputs, enc_outputs)
        # dec_outputs: [batch_size, tgt_len, d_model] -> dec_logits: [batch_size, tgt_len, tgt_vocab_size]
        dec_logits = self.projection(dec_outputs)
        return dec_logits.view(-1, dec_logits.size(-1)), enc_self_attns, dec_self_attns, dec_enc_attns

model = Transformer().to(device)
# The loss function is given ignore_index=0 because the pad token 'P' has index 0;
# with this setting no loss is computed for pad positions (pad carries no meaning, so it should not contribute to the loss).
criterion = nn.CrossEntropyLoss(ignore_index=0)
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.99)  # SGD with momentum; Adam did not work well here
# ====================================================================================================
losses = []
for epoch in range(epochs):
    for enc_inputs, dec_inputs, dec_outputs in loader:
        """
        enc_inputs: [batch_size, src_len]
        dec_inputs: [batch_size, tgt_len]
        dec_outputs: [batch_size, tgt_len]
        """
        enc_inputs, dec_inputs, dec_outputs = enc_inputs.to(device), dec_inputs.to(device), dec_outputs.to(device)
        # outputs: [batch_size * tgt_len, tgt_vocab_size]
        outputs, enc_self_attns, dec_self_attns, dec_enc_attns = model(enc_inputs, dec_inputs)
        loss = criterion(outputs, dec_outputs.view(-1))  # dec_outputs.view(-1): [batch_size * tgt_len]
        print('Epoch:', '%04d' % (epoch + 1), 'loss =', '{:.6f}'.format(loss))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        losses.append(loss.data.item())

# loss curve
import matplotlib; matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
plt.xkcd()
plt.xlabel('Epoch')
plt.ylabel('loss')
plt.plot(losses)
plt.show()
def greedy_decoder(model, enc_input, start_symbol):
    """Greedy decoding.
    For simplicity, a greedy decoder is beam search with K = 1. This is necessary for inference because we do not know
    the target sequence; we therefore generate the target word by word and feed it back into the Transformer.
    Starting reference: http://nlp.seas.harvard.edu/2018/04/03/attention.html#greedy-decoding
    :param model: Transformer model
    :param enc_input: the encoder input
    :param start_symbol: the start symbol, 'S' in this example (index 6 in tgt_vocab)
    :return: the predicted target sequence
    """
    enc_outputs, enc_self_attns = model.encoder(enc_input)
    dec_input = torch.zeros(1, 0).type_as(enc_input.data)
    terminal = False
    next_symbol = start_symbol
    while not terminal:
        # during prediction dec_input grows step by step (one newly predicted word is appended per iteration)
        dec_input = torch.cat([dec_input.to(device), torch.tensor([[next_symbol]], dtype=enc_input.dtype).to(device)], -1)
        dec_outputs, _, _ = model.decoder(dec_input, enc_input, enc_outputs)
        projected = model.projection(dec_outputs)
        prob = projected.squeeze(0).max(dim=-1, keepdim=False)[1]
        # incremental update (the predictions for the already generated words are expected to stay the same);
        # the predictions for earlier positions are ignored, only the newest predicted word is appended to the input sequence
        next_word = prob.data[-1]  # take the word (index) predicted at the current position: the output z_t of position t predicts the next word, not z_1, ..., z_{t-1}
        next_symbol = next_word
        if next_symbol == tgt_vocab["E"]:
            terminal = True
        # print(next_word)
    # greedy_dec_predict = torch.cat(
    #     [dec_input.to(device), torch.tensor([[next_symbol]], dtype=enc_input.dtype).to(device)],
    #     -1)
    greedy_dec_predict = dec_input[:, 1:]
    return greedy_dec_predict
# ==========================================================================================
# Prediction
enc_inputs, _, _ = next(iter(loader))
for i in range(len(enc_inputs)):
    greedy_dec_predict = greedy_decoder(model, enc_inputs[i].view(1, -1).to(device), start_symbol=tgt_vocab["S"])
    print(enc_inputs[i], '->', greedy_dec_predict.squeeze())
    print([src_idx2word[t.item()] for t in enc_inputs[i]], '->', [idx2word[n.item()] for n in greedy_dec_predict.squeeze()])


References:

https://github.com/jadore801120/attention-is-all-you-need-pytorch
https://github.com/JayParks/transformer
Transformer论文逐段精读【论文精读】 (bilibili): https://www.bilibili.com/video/BV1pu411o7BE?spm_id_from=333.999.0.0
