
[Original] Implementing the Transformer Model Behind ChatGPT: Encoder-Decoder


Author: Heiyeluren (黑夜路人)

Date: July 2023

Transformer Block (Generic Block) Implementation

Looking at the overall architecture diagram above, we can clearly see that the Encoder is built from a few major sub-layers. The core role of each one is as follows (a minimal wiring sketch follows the list):

  • Multi-headed self-attention: runs several attention functions in parallel and concatenates their results to increase the model's expressive power; it mainly computes how relevant each word is to the others, including long-distance word dependencies.
  • Normalization layer: normalizes each hidden unit so its features have zero mean and unit variance, which helps against exploding and vanishing gradients. Squashing values into a reasonable range avoids extreme magnitudes, makes training easier, improves generalization, speeds up convergence, and reduces sensitivity to parameter scale.
  • Feed forward network: applies a further transformation to the attention output.
  • Another normalization layer: a second layer-normalization step, applied around the feed-forward sub-layer, again mainly to keep activations well-scaled and mitigate vanishing gradients.
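Putting these pieces together, one block chains the sub-layers with residual connections in the pre-norm style used by the EncoderLayer code later in this article:

x = x + MultiHeadedAttention(NormLayer(x))
x = x + FeedForward(NormLayer(x))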

Attention Implementation

Drawing on what we already know, the core of this block is the self-attention layer; let's peel the layers apart and see how this core layer should be implemented.

Let's briefly walk through how attention is computed.

The rough computation illustrated in the figure is:

For each token, we first produce three vectors: query, key, and value.

The query vector is like a question. A token asks: "How relevant is each of the other tokens to me?"

The key vector is like an index. A token says: "I've compressed the answer to every possible question into my key."

The value vector is like an answer. A token says: "I've distilled the information I carry into my value."
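In formula form, this is the standard scaled dot-product attention that the code below implements:

Attention(Q, K, V) = softmax(Q·K^T / √d_k) · V

where d_k is the dimensionality of the key vectors; dividing by √d_k keeps the dot products from growing too large before the softmax.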

The attention computation in code:

import math
from typing import Optional

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import Tensor


def attention(query: Tensor,
              key: Tensor,
              value: Tensor,
              mask: Optional[Tensor] = None,
              dropout: Optional[nn.Dropout] = None):
    """
    Compute the attention scores.
    Args:
        query: shape (batch_size, num_heads, seq_len, k_dim)
        key:   shape (batch_size, num_heads, seq_len, k_dim)
        value: shape (batch_size, num_heads, seq_len, v_dim)
        mask:  shape (batch_size, num_heads, seq_len, seq_len); under our
               assumptions it is broadcast from (1, 1, seq_len, seq_len)
        dropout: an nn.Dropout module applied to the attention weights, or None
    Returns:
        out: shape (batch_size, num_heads, seq_len, v_dim), the output of the attention heads.
        attention_score: shape (batch_size, num_heads, seq_len, seq_len), the attention weights.
    """
    k_dim = query.size(-1)
    # scores has shape (..., seq_len, seq_len): row = querying token, col = attended token
    scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(k_dim)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, -1e10)
    attention_score = F.softmax(scores, dim=-1)
    if dropout is not None:
        attention_score = dropout(attention_score)
    out = torch.matmul(attention_score, value)
    return out, attention_score
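A quick shape check of the function above (illustrative only; the sizes are arbitrary and the mask shown is a causal lower-triangular mask):

import torch

batch_size, num_heads, seq_len, k_dim = 2, 4, 8, 16
q = torch.randn(batch_size, num_heads, seq_len, k_dim)
k = torch.randn(batch_size, num_heads, seq_len, k_dim)
v = torch.randn(batch_size, num_heads, seq_len, k_dim)
mask = torch.tril(torch.ones(1, 1, seq_len, seq_len))  # causal mask, broadcast over batch and heads

out, score = attention(q, k, v, mask=mask)
print(out.shape)    # torch.Size([2, 4, 8, 16])
print(score.shape)  # torch.Size([2, 4, 8, 8])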

Take token a2 in the figure as an example:

It produces a query, and this query is combined (in "some way") with the key of every token; the result is called an attention score (the α in the figure). This gives four attention scores in total. (An attention score is also called an attention weight.)

Multiplying each of these four scores by the corresponding token's value gives four vectors whose information has been extracted.

Summing these four vectors yields b2, the final output of passing a2 through the attention model.
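As a made-up numeric illustration: if a2's scaled scores against the four tokens are [2.0, 1.0, 0.5, 0.1], the softmax turns them into weights of roughly [0.57, 0.21, 0.13, 0.09], and b2 ≈ 0.57·v1 + 0.21·v2 + 0.13·v3 + 0.09·v4.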

Here is a simple implementation of this entire layer in code:

class MultiHeadedAttention(nn.Module):
    def __init__(self,
                 num_heads: int,
                 d_model: int,
                 dropout: float = 0.1):
        super(MultiHeadedAttention, self).__init__()
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        # Assume v_dim always equals k_dim
        self.k_dim = d_model // num_heads
        self.num_heads = num_heads
        self.proj_weights = clones(
            nn.Linear(d_model, d_model), 4)  # W^Q, W^K, W^V, W^O
        self.attention_score = None
        self.dropout = nn.Dropout(p=dropout)

    def forward(self,
                query: Tensor,
                key: Tensor,
                value: Tensor,
                mask: Optional[Tensor] = None):
        """
        Args:
            query: shape (batch_size, seq_len, d_model)
            key:   shape (batch_size, seq_len, d_model)
            value: shape (batch_size, seq_len, d_model)
            mask:  shape (batch_size, seq_len, seq_len). Since we assume all
                   samples share the same mask, the shape here is also
                   (1, seq_len, seq_len).
        Returns:
            out: shape (batch_size, seq_len, d_model), the multi-head attention output.
        """
        if mask is not None:
            mask = mask.unsqueeze(1)
        batch_size = query.size(0)
        # 1) Apply W^Q, W^K, W^V to produce the new query, key and value
        query, key, value \
            = [proj_weight(x).view(batch_size, -1, self.num_heads, self.k_dim).transpose(1, 2)
               for proj_weight, x in zip(self.proj_weights, [query, key, value])]  # -1 equals seq_len
        # 2) Compute the attention scores and out
        out, self.attention_score = attention(query, key, value, mask=mask,
                                              dropout=self.dropout)
        # 3) "Concat" the per-head outputs
        out = out.transpose(1, 2).contiguous() \
            .view(batch_size, -1, self.num_heads * self.k_dim)
        # 4) Apply W^O to get the final output
        out = self.proj_weights[-1](out)
        return out
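The class above depends on a clones helper that is not shown in this article; a minimal sketch of what it presumably looks like (the project's actual helper may differ):

import copy
import torch.nn as nn

def clones(module: nn.Module, n: int) -> nn.ModuleList:
    # Return a ModuleList holding n independent deep copies of module
    return nn.ModuleList([copy.deepcopy(module) for _ in range(n)])

# The Encoder and Decoder further below call the same idea get_clones
get_clones = clones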

Norm (Normalization Layer) Implementation

# Normalization layer: the standard layer-normalization computation
class NormLayer(nn.Module):
    def __init__(self, features, eps=1e-6):
        super(NormLayer, self).__init__()
        self.a_2 = nn.Parameter(torch.ones(features))
        self.b_2 = nn.Parameter(torch.zeros(features))
        self.eps = eps

    def forward(self, x):
        mean = x.mean(-1, keepdim=True)
        std = x.std(-1, keepdim=True)
        return self.a_2 * (x - mean) / (std + self.eps) + self.b_2
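In formula form, this layer computes, per token over the feature dimension:

NormLayer(x) = a_2 · (x − mean(x)) / (std(x) + eps) + b_2

where a_2 and b_2 are learned scale and shift parameters and eps avoids division by zero.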

Feed-Forward Network Implementation

class FeedForward(nn.Module):
    def __init__(self, d_model, d_ff=2048, dropout=0.1):
        super().__init__()
        # d_ff defaults to 2048
        self.linear_1 = nn.Linear(d_model, d_ff)
        self.dropout = nn.Dropout(dropout)
        self.linear_2 = nn.Linear(d_ff, d_model)

    def forward(self, x):
        x = self.dropout(F.relu(self.linear_1(x)))
        x = self.linear_2(x)
        return x
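In formula form, the position-wise feed-forward network computes:

FFN(x) = W_2 · Dropout(ReLU(W_1·x + b_1)) + b_2

i.e. an expansion to d_ff dimensions, a ReLU, dropout, and a projection back to d_model.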

Encoder Implementation

The Encoder assembles and stacks everything introduced above, converting the source sequence into an intermediate encoding.

class EncoderLayer(nn.Module):
    def __init__(self, d_model, heads, dropout=0.1):
        super().__init__()
        self.norm_1 = NormLayer(d_model)
        self.norm_2 = NormLayer(d_model)
        self.attn = MultiHeadedAttention(heads, d_model, dropout=dropout)
        self.ff = FeedForward(d_model, dropout=dropout)
        self.dropout_1 = nn.Dropout(dropout)
        self.dropout_2 = nn.Dropout(dropout)

    def forward(self, x, mask):
        # Pre-norm residual connections around self-attention and feed-forward
        x2 = self.norm_1(x)
        x = x + self.dropout_1(self.attn(x2, x2, x2, mask))
        x2 = self.norm_2(x)
        x = x + self.dropout_2(self.ff(x2))
        return x

class Encoder(nn.Module):
    def __init__(self, vocab_size, d_model, N, heads, dropout):
        super().__init__()
        self.N = N
        self.embed = Embedder(vocab_size, d_model)
        self.pe = PositionalEncoder(d_model, dropout=dropout)
        self.layers = get_clones(EncoderLayer(d_model, heads, dropout), N)
        self.norm = NormLayer(d_model)

    def forward(self, src, mask):
        x = self.embed(src)
        x = self.pe(x)
        for i in range(self.N):
            x = self.layers[i](x, mask)
        return self.norm(x)
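The Encoder references an Embedder and a PositionalEncoder that are not listed in this article. A minimal sketch of what they typically look like, assuming the standard scaled token embedding plus fixed sinusoidal positional encoding (the real project's versions may differ):

import math
import torch
import torch.nn as nn

class Embedder(nn.Module):
    def __init__(self, vocab_size, d_model):
        super().__init__()
        self.d_model = d_model
        self.embed = nn.Embedding(vocab_size, d_model)

    def forward(self, x):
        # Scale embeddings by sqrt(d_model), as in the original Transformer paper
        return self.embed(x) * math.sqrt(self.d_model)

class PositionalEncoder(nn.Module):
    def __init__(self, d_model, dropout=0.1, max_len=5000):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        # Precompute the sinusoidal position table once
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float()
                             * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer('pe', pe.unsqueeze(0))  # (1, max_len, d_model)

    def forward(self, x):
        # Add the positional encoding for the first seq_len positions
        x = x + self.pe[:, :x.size(1)]
        return self.dropout(x)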

Decoder Implementation

The Decoder is very similar to the Encoder; its main job is to convert the intermediate encoding produced by the Encoder into the target sequence.

class DecoderLayer(nn.Module):
    def __init__(self, d_model, heads, dropout=0.1):
        super().__init__()
        self.norm_1 = NormLayer(d_model)
        self.norm_2 = NormLayer(d_model)
        self.norm_3 = NormLayer(d_model)
        self.dropout_1 = nn.Dropout(dropout)
        self.dropout_2 = nn.Dropout(dropout)
        self.dropout_3 = nn.Dropout(dropout)
        self.attn_1 = MultiHeadedAttention(heads, d_model, dropout=dropout)
        self.attn_2 = MultiHeadedAttention(heads, d_model, dropout=dropout)
        self.ff = FeedForward(d_model, dropout=dropout)

    def forward(self, x, e_outputs, src_mask, trg_mask):
        # Masked self-attention over the target, then cross-attention over the encoder output
        x2 = self.norm_1(x)
        x = x + self.dropout_1(self.attn_1(x2, x2, x2, trg_mask))
        x2 = self.norm_2(x)
        x = x + self.dropout_2(self.attn_2(x2, e_outputs, e_outputs, src_mask))
        x2 = self.norm_3(x)
        x = x + self.dropout_3(self.ff(x2))
        return x

class Decoder(nn.Module):
    def __init__(self, vocab_size, d_model, N, heads, dropout):
        super().__init__()
        self.N = N
        self.embed = Embedder(vocab_size, d_model)
        self.pe = PositionalEncoder(d_model, dropout=dropout)
        self.layers = get_clones(DecoderLayer(d_model, heads, dropout), N)
        self.norm = NormLayer(d_model)

    def forward(self, trg, e_outputs, src_mask, trg_mask):
        x = self.embed(trg)
        x = self.pe(x)
        for i in range(self.N):
            x = self.layers[i](x, e_outputs, src_mask, trg_mask)
        return self.norm(x)
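How src_mask and trg_mask are built is not shown in this article. A typical construction looks like the sketch below (the function name create_masks and the pad_token id are illustrative assumptions): src_mask hides padding in the source, while trg_mask additionally hides future positions so the decoder cannot peek ahead.

import torch

def create_masks(src, trg, pad_token=0):
    # Source mask: 1 where the token is not padding, shape (batch, 1, src_len)
    src_mask = (src != pad_token).unsqueeze(-2)

    # Target mask: padding mask combined with a lower-triangular "no-peek" mask
    trg_mask = (trg != pad_token).unsqueeze(-2)               # (batch, 1, trg_len)
    size = trg.size(1)
    nopeak = torch.tril(torch.ones(1, size, size)).bool()     # (1, trg_len, trg_len)
    trg_mask = trg_mask & nopeak                               # (batch, trg_len, trg_len)
    return src_mask, trg_mask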

Transformer Implementation

Combining the whole chain, including the Encoder and Decoder, gives us a basic MVP implementation of the Transformer architecture.

class Transformer(nn.Module):
    def __init__(self, src_vocab, trg_vocab, d_model, N, heads, dropout):
        super().__init__()
        self.encoder = Encoder(src_vocab, d_model, N, heads, dropout)
        self.decoder = Decoder(trg_vocab, d_model, N, heads, dropout)
        self.out = nn.Linear(d_model, trg_vocab)

    def forward(self, src, trg, src_mask, trg_mask):
        e_outputs = self.encoder(src, src_mask)
        d_output = self.decoder(trg, e_outputs, src_mask, trg_mask)
        output = self.out(d_output)
        return output
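A small smoke test to see how the pieces fit together (the sizes are arbitrary, and create_masks refers to the sketch in the Decoder section above):

import torch

src_vocab, trg_vocab = 1000, 1000
model = Transformer(src_vocab, trg_vocab, d_model=512, N=6, heads=8, dropout=0.1)

src = torch.randint(1, src_vocab, (2, 10))   # (batch_size, src_len)
trg = torch.randint(1, trg_vocab, (2, 12))   # (batch_size, trg_len)
src_mask, trg_mask = create_masks(src, trg)

logits = model(src, trg, src_mask, trg_mask)
print(logits.shape)  # torch.Size([2, 12, 1000]): per-position scores over the target vocabulary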

Notes on the Code

If you want to study the full code, visit the black-transformer project on GitHub:

GitHub - heiyeluren/black-transformer: black-transformer is a lightweight, skeletal implementation of the Transformer model, intended for understanding how the whole Transformer works.

It is not AI that will replace you, but people who understand AI better and use it better than you do!

##End##
