
Deep Reinforcement Learning (DRL) Algorithms, Appendix 6 — NLP Review: Pre-trained Models

Self-Attention

Model structure

The architecture in the figure above takes batch_size = 1 and an input X with two time steps as an example; the computation proceeds as follows:
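The original figure steps through this computation; below is a minimal PyTorch sketch of the same steps with made-up projection weights (the embedding size of 4 is arbitrary, chosen only for illustration).

import math
import torch

torch.manual_seed(0)
X = torch.randn(1, 2, 4)                        # (batch_size=1, num_steps=2, d=4)
W_q = torch.randn(4, 4); W_k = torch.randn(4, 4); W_v = torch.randn(4, 4)
Q, K, V = X @ W_q, X @ W_k, X @ W_v             # project X to queries, keys, values
scores = Q @ K.transpose(1, 2) / math.sqrt(4)   # (1, 2, 2) scaled dot-product scores
weights = torch.softmax(scores, dim=-1)         # each row sums to 1
output = weights @ V                            # (1, 2, 4) attended representation
print(weights.shape, output.shape)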

Positional encoding

Because of the structure of self-attention, permuting the input order of X does not change the attention result, so additional position information has to be injected: the positional encoding.

In the figure, the low-order bits of a position's binary representation correspond to the first few columns of the positional-encoding matrix.
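A tiny sketch of the analogy: the low-order binary bit flips at every step, just as the first (highest-frequency) sine column of P changes fastest. The hidden size of 6 and the range of 8 positions are illustrative only.

import torch

num_hiddens = 6
positions = torch.arange(8, dtype=torch.float32).reshape(-1, 1)
freqs = torch.pow(10000, torch.arange(0, num_hiddens, 2, dtype=torch.float32) / num_hiddens)
P_even = torch.sin(positions / freqs)    # the sine columns of P
for pos in range(8):
    # binary code of the position next to the sine columns at that position
    print(format(pos, '03b'), [round(v, 2) for v in P_even[pos].tolist()])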

Besides capturing absolute position information, this positional encoding also lets the model learn relative positions within the input sequence, because for any fixed offset δ, the encoding at position i + δ can be represented as a linear projection of the encoding at position i:

$$
\begin{aligned}
&\begin{bmatrix}
\cos(\delta \omega_j) & \sin(\delta \omega_j) \\
-\sin(\delta \omega_j) & \cos(\delta \omega_j)
\end{bmatrix}
\begin{bmatrix}
p_{i,\,2j} \\
p_{i,\,2j+1}
\end{bmatrix} \\
=\;& \begin{bmatrix}
\cos(\delta \omega_j)\sin(i \omega_j) + \sin(\delta \omega_j)\cos(i \omega_j) \\
-\sin(\delta \omega_j)\sin(i \omega_j) + \cos(\delta \omega_j)\cos(i \omega_j)
\end{bmatrix} \\
=\;& \begin{bmatrix}
\sin\big((i+\delta)\omega_j\big) \\
\cos\big((i+\delta)\omega_j\big)
\end{bmatrix} \\
=\;& \begin{bmatrix}
p_{i+\delta,\,2j} \\
p_{i+\delta,\,2j+1}
\end{bmatrix}
\end{aligned}
$$
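A quick numeric check of this identity (the frequency ω_j matches the one used in the PositionalEncoding class below; the concrete values of i, δ and j are arbitrary):

import math
import torch

num_hiddens = 8
i, delta, j = 3, 5, 1
omega = 1.0 / (10000 ** (2 * j / num_hiddens))
p_i = torch.tensor([math.sin(i * omega), math.cos(i * omega)])
rotation = torch.tensor([[math.cos(delta * omega), math.sin(delta * omega)],
                         [-math.sin(delta * omega), math.cos(delta * omega)]])
p_i_plus_delta = torch.tensor([math.sin((i + delta) * omega),
                               math.cos((i + delta) * omega)])
# the rotation of p_i reproduces the encoding at position i + delta
print(torch.allclose(rotation @ p_i, p_i_plus_delta, atol=1e-6))  # True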

Code

import torch
from torch import nn


#@save
class PositionalEncoding(nn.Module):
    """Positional encoding"""
    def __init__(self, num_hiddens, dropout, max_len=1000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(dropout)
        # Create a long enough P
        self.P = torch.zeros((1, max_len, num_hiddens))
        X = torch.arange(max_len, dtype=torch.float32).reshape(
            -1, 1) / torch.pow(10000, torch.arange(
            0, num_hiddens, 2, dtype=torch.float32) / num_hiddens)
        self.P[:, :, 0::2] = torch.sin(X)
        self.P[:, :, 1::2] = torch.cos(X)

    def forward(self, X):
        X = X + self.P[:, :X.shape[1], :].to(X.device)
        return self.dropout(X)
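Usage sketch, mirroring the d2l book's example: feeding a zero tensor through the module (dropout disabled) returns the encoding table P itself for the first num_steps positions.

encoding_dim, num_steps = 32, 60
pos_encoding = PositionalEncoding(encoding_dim, dropout=0)
pos_encoding.eval()
X = pos_encoding(torch.zeros((1, num_steps, encoding_dim)))
print(X.shape)   # torch.Size([1, 60, 32])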

Multi-Head Attention

Model structure

  • Two-head attention

  • Seven-head attention


  • The outputs of the seven heads are concatenated to fuse their information

  • Masked multi-head attention

Masked multi-head attention is the same as multi-head attention, except that the self-attention in each head becomes masked self-attention; a minimal sketch of the masking follows.
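In the d2l code below the masking is expressed through valid_lens and masked_softmax; the snippet here is a stand-alone sketch (illustrative shapes only) that builds the same causal pattern with an explicit upper-triangular mask.

import torch

num_steps = 4
scores = torch.randn(1, num_steps, num_steps)   # raw attention scores
# positions after the current time step get a score of -inf ...
mask = torch.triu(torch.ones(num_steps, num_steps), diagonal=1).bool()
scores = scores.masked_fill(mask, float('-inf'))
# ... so their attention weight becomes zero after the softmax
weights = torch.softmax(scores, dim=-1)         # row t attends only to steps up to t
print(weights[0])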

Code

import math
import torch
from torch import nn
from d2l import torch as d2l


#@save
def transpose_qkv(X, num_heads):
    """Reshape for the parallel computation of multiple attention heads"""
    # Input X shape: (batch_size, no. of queries or key-value pairs, num_hiddens)
    # Output X shape: (batch_size, no. of queries or key-value pairs, num_heads,
    # num_hiddens/num_heads)
    X = X.reshape(X.shape[0], X.shape[1], num_heads, -1)
    # Output X shape: (batch_size, num_heads, no. of queries or key-value pairs,
    # num_hiddens/num_heads)
    X = X.permute(0, 2, 1, 3)
    # Final output shape: (batch_size*num_heads, no. of queries or key-value
    # pairs, num_hiddens/num_heads)
    return X.reshape(-1, X.shape[2], X.shape[3])


#@save
def transpose_output(X, num_heads):
    """Reverse the operation of transpose_qkv"""
    X = X.reshape(-1, num_heads, X.shape[1], X.shape[2])
    X = X.permute(0, 2, 1, 3)
    return X.reshape(X.shape[0], X.shape[1], -1)


#@save
class DotProductAttention(nn.Module):
    """Scaled dot-product attention"""
    def __init__(self, dropout, **kwargs):
        super(DotProductAttention, self).__init__(**kwargs)
        self.dropout = nn.Dropout(dropout)

    # Shape of queries: (batch_size, no. of queries, d)
    # Shape of keys: (batch_size, no. of key-value pairs, d)
    # Shape of values: (batch_size, no. of key-value pairs, value dimension)
    # Shape of valid_lens: (batch_size,) or (batch_size, no. of queries)
    def forward(self, queries, keys, values, valid_lens=None):
        d = queries.shape[-1]
        # Swap the last two dimensions of keys with transpose(1, 2)
        scores = torch.bmm(queries, keys.transpose(1, 2)) / math.sqrt(d)
        self.attention_weights = d2l.masked_softmax(scores, valid_lens)
        return torch.bmm(self.dropout(self.attention_weights), values)


#@save
class MultiHeadAttention(nn.Module):
    """Multi-head attention"""
    def __init__(self, key_size, query_size, value_size, num_hiddens,
                 num_heads, dropout, bias=False, **kwargs):
        super(MultiHeadAttention, self).__init__(**kwargs)
        self.num_heads = num_heads
        self.attention = d2l.DotProductAttention(dropout)
        self.W_q = nn.Linear(query_size, num_hiddens, bias=bias)
        self.W_k = nn.Linear(key_size, num_hiddens, bias=bias)
        self.W_v = nn.Linear(value_size, num_hiddens, bias=bias)
        self.W_o = nn.Linear(num_hiddens, num_hiddens, bias=bias)

    def forward(self, queries, keys, values, valid_lens):
        # Shapes of queries, keys, values:
        # (batch_size, no. of queries or key-value pairs, num_hiddens)
        # Shape of valid_lens: (batch_size,) or (batch_size, no. of queries)
        # After transposing, shapes of queries, keys, values:
        # (batch_size*num_heads, no. of queries or key-value pairs,
        # num_hiddens/num_heads)
        queries = transpose_qkv(self.W_q(queries), self.num_heads)
        keys = transpose_qkv(self.W_k(keys), self.num_heads)
        values = transpose_qkv(self.W_v(values), self.num_heads)
        if valid_lens is not None:
            # On axis 0, copy the first item (scalar or vector) num_heads
            # times, then copy the second item, and so on
            valid_lens = torch.repeat_interleave(
                valid_lens, repeats=self.num_heads, dim=0)
        # Shape of output: (batch_size*num_heads, no. of queries,
        # num_hiddens/num_heads)
        output = self.attention(queries, keys, values, valid_lens)
        # Shape of output_concat: (batch_size, no. of queries, num_hiddens)
        output_concat = transpose_output(output, self.num_heads)
        return self.W_o(output_concat)
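A usage sketch of the class just defined (shapes follow the d2l book's example; it relies on the imports and definitions above):

num_hiddens, num_heads = 100, 5
attention = MultiHeadAttention(num_hiddens, num_hiddens, num_hiddens,
                               num_hiddens, num_heads, 0.5)
attention.eval()
batch_size, num_queries, num_kvpairs = 2, 4, 6
valid_lens = torch.tensor([3, 2])
X = torch.ones((batch_size, num_queries, num_hiddens))
Y = torch.ones((batch_size, num_kvpairs, num_hiddens))
print(attention(X, Y, Y, valid_lens).shape)   # torch.Size([2, 4, 100])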

Transformer

Model structure

encoder

decoder

Code

import math
import pandas as pd
import torch
from torch import nn
from d2l import torch as d2l


#@save
class PositionWiseFFN(nn.Module):
    """Position-wise feed-forward network"""
    def __init__(self, ffn_num_input, ffn_num_hiddens, ffn_num_outputs,
                 **kwargs):
        super(PositionWiseFFN, self).__init__(**kwargs)
        self.dense1 = nn.Linear(ffn_num_input, ffn_num_hiddens)
        self.relu = nn.ReLU()
        self.dense2 = nn.Linear(ffn_num_hiddens, ffn_num_outputs)

    def forward(self, X):
        return self.dense2(self.relu(self.dense1(X)))


#@save
class AddNorm(nn.Module):
    """Residual connection followed by layer normalization"""
    def __init__(self, normalized_shape, dropout, **kwargs):
        super(AddNorm, self).__init__(**kwargs)
        self.dropout = nn.Dropout(dropout)
        self.ln = nn.LayerNorm(normalized_shape)

    def forward(self, X, Y):
        return self.ln(self.dropout(Y) + X)


#@save
class EncoderBlock(nn.Module):
    """Transformer encoder block"""
    def __init__(self, key_size, query_size, value_size, num_hiddens,
                 norm_shape, ffn_num_input, ffn_num_hiddens, num_heads,
                 dropout, use_bias=False, **kwargs):
        super(EncoderBlock, self).__init__(**kwargs)
        self.attention = d2l.MultiHeadAttention(
            key_size, query_size, value_size, num_hiddens, num_heads, dropout,
            use_bias)
        self.addnorm1 = AddNorm(norm_shape, dropout)
        self.ffn = PositionWiseFFN(
            ffn_num_input, ffn_num_hiddens, num_hiddens)
        self.addnorm2 = AddNorm(norm_shape, dropout)

    def forward(self, X, valid_lens):
        Y = self.addnorm1(X, self.attention(X, X, X, valid_lens))
        return self.addnorm2(Y, self.ffn(Y))


#@save
class TransformerEncoder(d2l.Encoder):
    """Transformer encoder"""
    def __init__(self, vocab_size, key_size, query_size, value_size,
                 num_hiddens, norm_shape, ffn_num_input, ffn_num_hiddens,
                 num_heads, num_layers, dropout, use_bias=False, **kwargs):
        super(TransformerEncoder, self).__init__(**kwargs)
        self.num_hiddens = num_hiddens
        self.embedding = nn.Embedding(vocab_size, num_hiddens)
        self.pos_encoding = d2l.PositionalEncoding(num_hiddens, dropout)
        self.blks = nn.Sequential()
        for i in range(num_layers):
            self.blks.add_module("block"+str(i),
                EncoderBlock(key_size, query_size, value_size, num_hiddens,
                             norm_shape, ffn_num_input, ffn_num_hiddens,
                             num_heads, dropout, use_bias))

    def forward(self, X, valid_lens, *args):
        # Since the positional encoding values are between -1 and 1, the
        # embedding values are multiplied by the square root of the embedding
        # dimension to rescale them before they are summed with the
        # positional encoding
        X = self.pos_encoding(self.embedding(X) * math.sqrt(self.num_hiddens))
        self.attention_weights = [None] * len(self.blks)
        for i, blk in enumerate(self.blks):
            X = blk(X, valid_lens)
            self.attention_weights[
                i] = blk.attention.attention.attention_weights
        return X


class DecoderBlock(nn.Module):
    """The i-th block in the decoder"""
    def __init__(self, key_size, query_size, value_size, num_hiddens,
                 norm_shape, ffn_num_input, ffn_num_hiddens, num_heads,
                 dropout, i, **kwargs):
        super(DecoderBlock, self).__init__(**kwargs)
        self.i = i
        self.attention1 = d2l.MultiHeadAttention(
            key_size, query_size, value_size, num_hiddens, num_heads, dropout)
        self.addnorm1 = AddNorm(norm_shape, dropout)
        self.attention2 = d2l.MultiHeadAttention(
            key_size, query_size, value_size, num_hiddens, num_heads, dropout)
        self.addnorm2 = AddNorm(norm_shape, dropout)
        self.ffn = PositionWiseFFN(ffn_num_input, ffn_num_hiddens,
                                   num_hiddens)
        self.addnorm3 = AddNorm(norm_shape, dropout)

    def forward(self, X, state):
        enc_outputs, enc_valid_lens = state[0], state[1]
        # During training, all tokens of the output sequence are processed at
        # the same time, so state[2][self.i] is initialized to None.
        # During prediction, the output sequence is decoded token by token, so
        # state[2][self.i] contains the representations decoded by the i-th
        # block up to the current time step
        if state[2][self.i] is None:
            key_values = X
        else:
            key_values = torch.cat((state[2][self.i], X), dim=1)
        state[2][self.i] = key_values
        if self.training:
            batch_size, num_steps, _ = X.shape
            # Shape of dec_valid_lens: (batch_size, num_steps), where every
            # row is [1, 2, ..., num_steps]
            dec_valid_lens = torch.arange(
                1, num_steps + 1, device=X.device).repeat(batch_size, 1)
        else:
            dec_valid_lens = None
        # Self-attention
        X2 = self.attention1(X, key_values, key_values, dec_valid_lens)
        Y = self.addnorm1(X, X2)
        # Encoder-decoder attention.
        # Shape of enc_outputs: (batch_size, num_steps, num_hiddens)
        Y2 = self.attention2(Y, enc_outputs, enc_outputs, enc_valid_lens)
        Z = self.addnorm2(Y, Y2)
        return self.addnorm3(Z, self.ffn(Z)), state


class TransformerDecoder(d2l.AttentionDecoder):
    def __init__(self, vocab_size, key_size, query_size, value_size,
                 num_hiddens, norm_shape, ffn_num_input, ffn_num_hiddens,
                 num_heads, num_layers, dropout, **kwargs):
        super(TransformerDecoder, self).__init__(**kwargs)
        self.num_hiddens = num_hiddens
        self.num_layers = num_layers
        self.embedding = nn.Embedding(vocab_size, num_hiddens)
        self.pos_encoding = d2l.PositionalEncoding(num_hiddens, dropout)
        self.blks = nn.Sequential()
        for i in range(num_layers):
            self.blks.add_module("block"+str(i),
                DecoderBlock(key_size, query_size, value_size, num_hiddens,
                             norm_shape, ffn_num_input, ffn_num_hiddens,
                             num_heads, dropout, i))
        self.dense = nn.Linear(num_hiddens, vocab_size)

    def init_state(self, enc_outputs, enc_valid_lens, *args):
        return [enc_outputs, enc_valid_lens, [None] * self.num_layers]

    def forward(self, X, state):
        X = self.pos_encoding(self.embedding(X) * math.sqrt(self.num_hiddens))
        self._attention_weights = [[None] * len(self.blks) for _ in range(2)]
        for i, blk in enumerate(self.blks):
            X, state = blk(X, state)
            # Decoder self-attention weights
            self._attention_weights[0][
                i] = blk.attention1.attention.attention_weights
            # Encoder-decoder attention weights
            self._attention_weights[1][
                i] = blk.attention2.attention.attention_weights
        return self.dense(X), state

    @property
    def attention_weights(self):
        return self._attention_weights


num_hiddens, num_layers, dropout, batch_size, num_steps = 32, 2, 0.1, 64, 10
lr, num_epochs, device = 0.005, 200, d2l.try_gpu()
ffn_num_input, ffn_num_hiddens, num_heads = 32, 64, 4
key_size, query_size, value_size = 32, 32, 32
norm_shape = [32]

train_iter, src_vocab, tgt_vocab = d2l.load_data_nmt(batch_size, num_steps)

encoder = TransformerEncoder(
    len(src_vocab), key_size, query_size, value_size, num_hiddens,
    norm_shape, ffn_num_input, ffn_num_hiddens, num_heads,
    num_layers, dropout)
decoder = TransformerDecoder(
    len(tgt_vocab), key_size, query_size, value_size, num_hiddens,
    norm_shape, ffn_num_input, ffn_num_hiddens, num_heads,
    num_layers, dropout)
net = d2l.EncoderDecoder(encoder, decoder)
d2l.train_seq2seq(net, train_iter, lr, num_epochs, tgt_vocab, device)
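After training, translations can be spot-checked with the book's helper functions; a usage sketch, assuming d2l.predict_seq2seq and d2l.bleu behave as in the d2l book:

engs = ['go .', "i lost .", "he's calm .", "i'm home ."]
fras = ['va !', "j'ai perdu .", 'il est calme .', 'je suis chez moi .']
for eng, fra in zip(engs, fras):
    translation, dec_attention_weight_seq = d2l.predict_seq2seq(
        net, eng, src_vocab, tgt_vocab, num_steps, device, True)
    print(f'{eng} => {translation}, '
          f'bleu {d2l.bleu(translation, fra, k=2):.3f}')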

BERT

BERT started the wave of pre-trained models. It uses a masked language model: by training on large amounts of text, the model acquires the ability to extract linguistic information, and it can then be applied to a wide range of NLP tasks through fine-tuning.

The vocabulary has roughly 30k tokens and is built with WordPiece. The input is packed as [CLS] A [SEP] B [SEP].
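A small sketch of how this packing and the accompanying segment ids can be built (modeled on d2l's get_tokens_and_segments helper; the token lists in the example are made up):

def tokens_and_segments(tokens_a, tokens_b=None):
    tokens = ['[CLS]'] + tokens_a + ['[SEP]']
    segments = [0] * (len(tokens_a) + 2)          # segment id 0 for sentence A
    if tokens_b is not None:
        tokens += tokens_b + ['[SEP]']
        segments += [1] * (len(tokens_b) + 1)     # segment id 1 for sentence B
    return tokens, segments

tokens, segments = tokens_and_segments(['this', 'movie', 'is', 'great'],
                                       ['i', 'like', 'it'])
print(tokens)
print(segments)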

The positional embedding is replaced with a learned matrix.
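A minimal sketch of what a learned positional embedding means in code (as in d2l's BERTEncoder; max_len and the hidden size here are illustrative values):

import torch
from torch import nn

max_len, num_hiddens = 1000, 768
# A learnable (1, max_len, num_hiddens) table instead of the fixed sinusoidal P
pos_embedding = nn.Parameter(torch.randn(1, max_len, num_hiddens))
# In the forward pass: X = token_emb + segment_emb + pos_embedding[:, :X.shape[1], :]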

Model structure

BERT takes the encoder of the Transformer (the encoder code above is unchanged).

Differences:

  • Input

  • Training (a cloze-style masked-token task, plus next-sentence prediction)

Although masked language modeling can encode bidirectional context to represent words, it does not explicitly model the logical relationship between pairs of texts. To help the model understand the relationship between two text sequences, BERT adds a binary classification task to pre-training: next-sentence prediction. When generating sentence pairs for pre-training, half of the time the second sentence really is the sentence that follows the first and the pair is labeled "true"; the other half of the time, the second sentence is sampled at random from the corpus and the pair is labeled "false".
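A sketch of how such pairs can be generated (modeled on d2l's _get_next_sentence helper; paragraphs is assumed to be a list of paragraphs, each a list of tokenized sentences):

import random

def get_next_sentence(sentence, next_sentence, paragraphs):
    if random.random() < 0.5:
        is_next = True                      # keep the real consecutive pair, label "true"
    else:
        # replace the second sentence with a random one from the corpus, label "false"
        next_sentence = random.choice(random.choice(paragraphs))
        is_next = False
    return sentence, next_sentence, is_next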

Parameter count

BERT-base (hidden size H = 768, L = 12 layers, A = 12 attention heads)

The main parameters come from the embedding layer and from each Transformer encoder block:

  1. Embedding layer: H × 30000 (vocab_size ≈ 30000).

  2. Feed-forward layers: H × 4H + 4H × H (each block contains two fully connected layers).

  3. Multi-head attention: H × (H / head_num) × 3 per head (the factor of 3 covers the separate linear projections of Q, K and V); summed over all heads this is H × H × 3, plus the attention layer's output projection of H × H (these correspond to W_q, W_k, W_v and W_o in the MultiHeadAttention code above).


Adding up 1, 2 and 3 gives the parameter count of BERT-base.

Formula: $$12LH^{2} + 30000H \approx 110\text{M} \quad (H = 768,\ L = 12)$$

The same calculation for BERT-large (H = 1024, L = 24) gives roughly 340M parameters.
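The arithmetic can be checked with a few lines of Python (a sketch; the vocabulary size is approximated as 30000):

def bert_params(H, L, vocab_size=30000):
    ffn = H * 4 * H + 4 * H * H        # two fully connected layers per block
    attn = 3 * H * H + H * H           # Q/K/V projections plus output projection
    return L * (ffn + attn) + vocab_size * H

print(bert_params(H=768, L=12) / 1e6)    # ~108M, i.e. about 110M
print(bert_params(H=1024, L=24) / 1e6)   # ~333M, i.e. about 340M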

GPT-3

GPT-3 takes the decoder of the Transformer (the code above is essentially unchanged; since there is no encoder, the encoder-decoder attention sub-layer of DecoderBlock is not needed).
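Under that assumption, a decoder-only block is just the encoder block above with causal masking and without cross-attention; the sketch below reuses the pieces defined earlier (GPTBlock is a made-up name for illustration, not GPT-3's actual implementation):

class GPTBlock(nn.Module):
    """Decoder-only block: masked self-attention followed by a position-wise FFN."""
    def __init__(self, key_size, query_size, value_size, num_hiddens,
                 norm_shape, ffn_num_input, ffn_num_hiddens, num_heads, dropout):
        super().__init__()
        self.attention = d2l.MultiHeadAttention(
            key_size, query_size, value_size, num_hiddens, num_heads, dropout)
        self.addnorm1 = AddNorm(norm_shape, dropout)
        self.ffn = PositionWiseFFN(ffn_num_input, ffn_num_hiddens, num_hiddens)
        self.addnorm2 = AddNorm(norm_shape, dropout)

    def forward(self, X):
        batch_size, num_steps, _ = X.shape
        # Causal mask expressed as valid lengths [1, 2, ..., num_steps]
        valid_lens = torch.arange(
            1, num_steps + 1, device=X.device).repeat(batch_size, 1)
        Y = self.addnorm1(X, self.attention(X, X, X, valid_lens))
        return self.addnorm2(Y, self.ffn(Y))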

References

"51 序列模型" (Sequence Models), 跟李沐学AI: 动手学深度学习 v2 (Dive into Deep Learning, PyTorch edition) video course by Mu Li, Bilibili

8.1. 序列模型 (Sequence Models), 动手学深度学习 (Dive into Deep Learning) 2.0.0 documentation

Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention), Jay Alammar

The Illustrated Transformer, Jay Alammar
