
Transformer: Building and Testing the Transformer Model

In the previous sections we studied and implemented the individual components. In this section we assemble those components into a complete Transformer and test it.

Building the Encoder-Decoder (EncoderDecoder)

Use the EncoderDecoder class to implement the encoder-decoder structure.

# Implement the encoder-decoder structure with the EncoderDecoder class
class EncoderDecoder(nn.Module):
    def __init__(self, encoder, decoder, source_embed, target_embed, generator) -> None:
        """
        encoder: encoder object
        decoder: decoder object
        source_embed: source-data embedding function
        target_embed: target-data embedding function
        generator: class generator for the output part
        """
        super(EncoderDecoder, self).__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.src_embed = source_embed
        self.tgt_embed = target_embed
        self.generator = generator

    def encode(self, source, source_mask):
        """
        source: source data
        source_mask: mask for the source data
        """
        return self.encoder(self.src_embed(source), source_mask)

    def decode(self, memory, source_mask, target, target_mask):
        return self.decoder(self.tgt_embed(target), memory, source_mask, target_mask)

    def forward(self, source, target, source_mask, target_mask):
        return self.decode(self.encode(source, source_mask), source_mask, target, target_mask)

The full test code is given at the end of this section; the test output is as follows:

ed_result.shape: torch.Size([2, 4, 512])
ed_result: tensor([[[ 2.2391, -0.1173, -1.0894, ..., 0.9693, -0.9286, -0.4191],
         [ 1.4016, 0.0187, -0.0564, ..., 0.9323, 0.0403, -0.5115],
         [ 1.3623, 0.0854, -0.7648, ..., 0.9763, 0.6179, -0.1512],
         [ 1.6840, -0.3144, -0.6535, ..., 0.7420, 0.0729, -0.2303]],
        [[ 0.8726, -0.1610, -0.0819, ..., -0.6603, 2.1003, -0.4165],
         [ 0.5404, 0.8091, 0.8205, ..., -1.4623, 2.5762, -0.6019],
         [ 0.9892, -0.3134, -0.4118, ..., -1.1656, 1.0373, -0.3784],
         [ 1.3170, 0.3997, -0.3412, ..., -0.6014, 0.7564, -1.0851]]],
       grad_fn=<AddBackward0>)

Building the Transformer Model

# Code for building the Transformer model
def make_model(source_vocab, target_vocab, N=6, d_model=512, d_ff=2048, head=8, dropout=0.1):
    """
    Builds the model. The 7 parameters are: the source vocabulary size, the target vocabulary
    size, the number of stacked encoder/decoder layers, the word-embedding dimension, the hidden
    dimension of the feed-forward network, the number of attention heads, and the dropout rate.
    """
    c = copy.deepcopy
    # Instantiate the multi-head attention
    attn = MultiHeadedAttention(head, d_model)
    # Instantiate the position-wise feed-forward layer, giving the object ff
    ff = PositionwiseFeedForward(d_model, d_ff, dropout)
    # Instantiate the positional-encoding class, giving the object position
    position = PositionalEncoding(d_model, dropout)
    # Following the architecture diagram, the outermost structure is EncoderDecoder. Inside it
    # are the encoder, the decoder, the source embedding followed by positional encoding,
    # the target embedding followed by positional encoding, and the class generator.
    # Each encoder layer contains an attention sublayer and a feed-forward sublayer;
    # each decoder layer contains two attention sublayers and a feed-forward sublayer.
    model = EncoderDecoder(
        Encoder(EncoderLayer(d_model, c(attn), c(ff), dropout), N),
        Decoder(DecoderLayer(d_model, c(attn), c(attn), c(ff), dropout), N),
        nn.Sequential(Embeddings(d_model, source_vocab), c(position)),
        nn.Sequential(Embeddings(d_model, target_vocab), c(position)),
        Generator(d_model, target_vocab)
    )
    # With the structure assembled, initialize the model parameters, e.g. the weight matrices of
    # the linear layers: whenever a parameter has more than one dimension, initialize it from the
    # Xavier uniform distribution.
    for p in model.parameters():
        if p.dim() > 1:
            nn.init.xavier_uniform_(p)
    return model

Test code

import numpy as np
import torch
import torch.nn.functional as F
import torch.nn as nn
import matplotlib.pyplot as plt
import math
import copy
from inputs import Embeddings, PositionalEncoding
from encoder import subsequent_mask, attention, clones, MultiHeadedAttention, PositionwiseFeedForward, LayerNorm, SublayerConnection, Encoder, EncoderLayer
# The encoder code is covered in the previous sections

# Decoder layer
class DecoderLayer(nn.Module):
    def __init__(self, size, self_attn, src_attn, feed_forward, dropout) -> None:
        """
        size: word-embedding dimension
        self_attn: multi-head self-attention object, requires Q = K = V
        src_attn: multi-head attention object, here Q != K = V
        feed_forward: position-wise feed-forward object
        """
        super(DecoderLayer, self).__init__()
        self.size = size
        self.self_attn = self_attn
        self.src_attn = src_attn
        self.feed_forward = feed_forward
        # Following the paper's diagram, clone three sublayer-connection objects
        self.sublayer = clones(SublayerConnection(size, dropout), 3)

    def forward(self, x, memory, source_mask, target_mask):
        """
        x: output of the previous layer
        memory: semantic memory from the encoder
        source_mask: mask tensor for the source data fed to the decoder
        target_mask: mask tensor for the target data, i.e. the tokens the decoder generates one by one
        """
        m = memory
        # Pass x through the first sublayer, whose inputs are x and the self_attn function.
        # Because this is self-attention, Q = K = V = x.
        # The last argument is the target mask: the target data must be masked because at this
        # point the model may not have generated any target tokens yet. For example, when the
        # decoder is about to produce the first token, that token has already been fed in so the
        # loss can be computed, but we do not want the model to use this information while
        # generating it, so it is masked out. Likewise, when generating the second token, the
        # model may only use the first token; the second token and everything after it must not
        # be visible to the model.
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, target_mask))
        # The output of the first sublayer enters the second sublayer, which is regular attention:
        # q is the input x, while k and v are the encoder output memory.
        # source_mask is also passed in, but masking the source here is not about preventing
        # information leakage; it blocks attention to tokens that contribute nothing to the
        # result, which improves both model quality and training speed.
        x = self.sublayer[1](x, lambda x: self.src_attn(x, m, m, source_mask))
        # The last sublayer is the position-wise feed-forward sublayer; its output is the result
        # of the decoder layer.
        return self.sublayer[2](x, self.feed_forward)

# Decoder
class Decoder(nn.Module):
    def __init__(self, layer, N) -> None:
        """layer: decoder layer, N: number of decoder layers"""
        super(Decoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)

    def forward(self, x, memory, source_mask, target_mask):
        # x: embedded representation of the target data
        # memory: encoder output
        # source_mask: mask tensor for the source data
        # target_mask: mask tensor for the target data
        for layer in self.layers:
            x = layer(x, memory, source_mask, target_mask)
        return self.norm(x)

# Output
class Generator(nn.Module):
    def __init__(self, d_model, vocab_size) -> None:
        """
        d_model: word-embedding dimension
        vocab_size: vocabulary size
        """
        super(Generator, self).__init__()
        self.project = nn.Linear(d_model, vocab_size)

    def forward(self, x):
        return F.log_softmax(self.project(x), dim=-1)

# Implement the encoder-decoder structure with the EncoderDecoder class
class EncoderDecoder(nn.Module):
    def __init__(self, encoder, decoder, source_embed, target_embed, generator) -> None:
        """
        encoder: encoder object
        decoder: decoder object
        source_embed: source-data embedding function
        target_embed: target-data embedding function
        generator: class generator for the output part
        """
        super(EncoderDecoder, self).__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.src_embed = source_embed
        self.tgt_embed = target_embed
        self.generator = generator

    def encode(self, source, source_mask):
        """
        source: source data
        source_mask: mask for the source data
        """
        return self.encoder(self.src_embed(source), source_mask)

    def decode(self, memory, source_mask, target, target_mask):
        return self.decoder(self.tgt_embed(target), memory, source_mask, target_mask)

    def forward(self, source, target, source_mask, target_mask):
        return self.decode(self.encode(source, source_mask), source_mask, target, target_mask)

# Code for building the Transformer model
def make_model(source_vocab, target_vocab, N=6, d_model=512, d_ff=2048, head=8, dropout=0.1):
    """
    Builds the model. The 7 parameters are: the source vocabulary size, the target vocabulary
    size, the number of stacked encoder/decoder layers, the word-embedding dimension, the hidden
    dimension of the feed-forward network, the number of attention heads, and the dropout rate.
    """
    c = copy.deepcopy
    # Instantiate the multi-head attention
    attn = MultiHeadedAttention(head, d_model)
    # Instantiate the position-wise feed-forward layer, giving the object ff
    ff = PositionwiseFeedForward(d_model, d_ff, dropout)
    # Instantiate the positional-encoding class, giving the object position
    position = PositionalEncoding(d_model, dropout)
    # Following the architecture diagram, the outermost structure is EncoderDecoder. Inside it
    # are the encoder, the decoder, the source embedding followed by positional encoding,
    # the target embedding followed by positional encoding, and the class generator.
    # Each encoder layer contains an attention sublayer and a feed-forward sublayer;
    # each decoder layer contains two attention sublayers and a feed-forward sublayer.
    model = EncoderDecoder(
        Encoder(EncoderLayer(d_model, c(attn), c(ff), dropout), N),
        Decoder(DecoderLayer(d_model, c(attn), c(attn), c(ff), dropout), N),
        nn.Sequential(Embeddings(d_model, source_vocab), c(position)),
        nn.Sequential(Embeddings(d_model, target_vocab), c(position)),
        Generator(d_model, target_vocab)
    )
    # With the structure assembled, initialize the model parameters, e.g. the weight matrices of
    # the linear layers: whenever a parameter has more than one dimension, initialize it from the
    # Xavier uniform distribution.
    for p in model.parameters():
        if p.dim() > 1:
            nn.init.xavier_uniform_(p)
    return model

if __name__ == "__main__":
    # Word embedding
    dim = 512
    vocab = 1000
    emb = Embeddings(dim, vocab)
    x = torch.LongTensor([[100, 2, 321, 508], [321, 234, 456, 324]])
    embr = emb(x)
    print("embr.shape = ", embr.shape)
    # Positional encoding
    pe = PositionalEncoding(dim, 0.1)  # embedding dimension 512, dropout 0.1
    pe_result = pe(embr)
    print("pe_result.shape = ", pe_result.shape)
    # Encoder test
    size = 512
    dropout = 0.2
    head = 8
    d_model = 512
    d_ff = 64
    c = copy.deepcopy
    x = pe_result
    self_attn = MultiHeadedAttention(head, d_model, dropout)
    ff = PositionwiseFeedForward(d_model, d_ff, dropout)
    # Encoder layers are not shared, so they must be deep-copied
    layer = EncoderLayer(size, c(self_attn), c(ff), dropout)
    N = 8
    mask = torch.zeros(8, 4, 4)
    en = Encoder(layer, N)
    en_result = en(x, mask)
    print("en_result.shape : ", en_result.shape)
    print("en_result : ", en_result)
    # Decoder layer test
    size = 512
    dropout = 0.2
    head = 8
    d_model = 512
    d_ff = 64
    self_attn = src_attn = MultiHeadedAttention(head, d_model, dropout)
    ff = PositionwiseFeedForward(d_model, d_ff, dropout)
    x = pe_result
    mask = torch.zeros(8, 4, 4)
    source_mask = target_mask = mask
    memory = en_result
    dl = DecoderLayer(size, self_attn, src_attn, ff, dropout)
    dl_result = dl(x, memory, source_mask, target_mask)
    print("dl_result.shape = ", dl_result.shape)
    print("dl_result = ", dl_result)
    # Decoder test
    size = 512
    dropout = 0.2
    head = 8
    d_model = 512
    d_ff = 64
    memory = en_result
    c = copy.deepcopy
    x = pe_result
    self_attn = MultiHeadedAttention(head, d_model, dropout)
    ff = PositionwiseFeedForward(d_model, d_ff, dropout)
    # Decoder layers are not shared, so they must be deep-copied
    layer = DecoderLayer(size, c(self_attn), c(self_attn), c(ff), dropout)
    N = 8
    mask = torch.zeros(8, 4, 4)
    source_mask = target_mask = mask
    de = Decoder(layer, N)
    de_result = de(x, memory, source_mask, target_mask)
    print("de_result.shape : ", de_result.shape)
    print("de_result : ", de_result)
    # Generator (output) test
    d_model = 512
    vocab = 1000
    x = de_result
    gen = Generator(d_model=d_model, vocab_size=vocab)
    gen_result = gen(x)
    print("gen_result.shape :", gen_result.shape)
    print("gen_result: ", gen_result)
    # EncoderDecoder test
    vocab_size = 1000
    d_model = 512
    encoder = en
    decoder = de
    source_embed = nn.Embedding(vocab_size, d_model)
    target_embed = nn.Embedding(vocab_size, d_model)
    generator = gen
    source = target = torch.LongTensor([[100, 2, 321, 508], [321, 234, 456, 324]])
    source_mask = target_mask = torch.zeros(8, 4, 4)
    ed = EncoderDecoder(encoder, decoder, source_embed, target_embed, generator)
    ed_result = ed(source, target, source_mask, target_mask)
    print("ed_result.shape: ", ed_result.shape)
    print("ed_result: ", ed_result)
    # Transformer test
    source_vocab = 11
    target_vocab = 11
    N = 6
    # Other parameters use their default values
    res = make_model(source_vocab, target_vocab, 6)
    print(res)

The printed model structure:

EncoderDecoder(
  (encoder): Encoder(
    (layers): ModuleList(
      (0): EncoderLayer(
        (self_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (feed_forward): PositionalEncoding(
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (sublayer): ModuleList(
          (0): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
      (1): EncoderLayer(
        (self_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (feed_forward): PositionalEncoding(
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (sublayer): ModuleList(
          (0): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
      (2): EncoderLayer(
        (self_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (feed_forward): PositionalEncoding(
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (sublayer): ModuleList(
          (0): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
      (3): EncoderLayer(
        (self_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (feed_forward): PositionalEncoding(
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (sublayer): ModuleList(
          (0): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
      (4): EncoderLayer(
        (self_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (feed_forward): PositionalEncoding(
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (sublayer): ModuleList(
          (0): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
      (5): EncoderLayer(
        (self_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (feed_forward): PositionalEncoding(
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (sublayer): ModuleList(
          (0): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
    )
    (norm): LayerNorm()
  )
  (decoder): Decoder(
    (layers): ModuleList(
      (0): DecoderLayer(
        (self_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (src_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (feed_forward): PositionalEncoding(
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (sublayer): ModuleList(
          (0): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (2): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
      (1): DecoderLayer(
        (self_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (src_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (feed_forward): PositionalEncoding(
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (sublayer): ModuleList(
          (0): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (2): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
      (2): DecoderLayer(
        (self_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (src_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (feed_forward): PositionalEncoding(
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (sublayer): ModuleList(
          (0): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (2): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
      (3): DecoderLayer(
        (self_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (src_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (feed_forward): PositionalEncoding(
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (sublayer): ModuleList(
          (0): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (2): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
      (4): DecoderLayer(
        (self_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (src_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (feed_forward): PositionalEncoding(
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (sublayer): ModuleList(
          (0): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (2): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
      (5): DecoderLayer(
        (self_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (src_attn): MultiHeadedAttention(
          (linears): ModuleList(
            (0): Linear(in_features=512, out_features=512, bias=True)
            (1): Linear(in_features=512, out_features=512, bias=True)
            (2): Linear(in_features=512, out_features=512, bias=True)
            (3): Linear(in_features=512, out_features=512, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (feed_forward): PositionalEncoding(
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (sublayer): ModuleList(
          (0): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (2): SublayerConnection(
            (norm): LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
    )
    (norm): LayerNorm()
  )
  (src_embed): Sequential(
    (0): Embeddings(
      (lut): Embedding(11, 512)
    )
    (1): PositionalEncoding(
      (dropout): Dropout(p=0.1, inplace=False)
    )
  )
  (tgt_embed): Sequential(
    (0): Embeddings(
      (lut): Embedding(11, 512)
    )
    (1): PositionalEncoding(
      (dropout): Dropout(p=0.1, inplace=False)
    )
  )
  (generator): Generator(
    (project): Linear(in_features=512, out_features=11, bias=True)
  )
)

Testing the Transformer

We will complete a basic sanity test of the model with a small copy task.

About the copy task:

Task description:

        The model learns from sequences of digits; the goal is for the output to be identical to the input. For example, given the input [1,5,8,9,3], the output should also be [1,5,8,9,3].

Why the task matters:

The copy task is a meaningful basic test because copying is an obvious pattern for the model to pick up. Whether the model can learn it quickly on a small dataset tells us whether the whole pipeline behaves correctly and whether the model has basic learning ability.

The four steps of a basic model test with the copy task:

Step 1: build the data generator
Step 2: obtain the Transformer model, its optimizer, and its loss function
Step 3: run the model for training and evaluation
Step 4: use the model for greedy decoding

Code

from transformer import make_model
import torch
import numpy as np
from pyitcast.transformer_utils import Batch

# Step 1: build the data generator
def data_generator(V, batch, num_batch):
    # Randomly generates data for the copy task. The three arguments are
    # V: the maximum random digit + 1,
    # batch: how many samples are fed to the model per parameter update,
    # num_batch: how many batches are delivered to complete one pass.
    for i in range(num_batch):
        data = torch.from_numpy(np.random.randint(1, V, size=(batch, 10), dtype="int64"))
        data[:, 0] = 1
        source = torch.tensor(data, requires_grad=False)
        target = torch.tensor(data, requires_grad=False)
        yield Batch(source, target)

# Step 2: obtain the Transformer model, its optimizer and loss function
# Import get_std_opt, which builds the standard optimizer for the Transformer model.
# It is based on Adam and adapted to work well for sequence-to-sequence tasks.
from pyitcast.transformer_utils import get_std_opt
# Import the label-smoothing tool. Label smoothing slightly widens the value range of the
# original labels: even human-annotated data is never perfectly correct and carries small
# deviations caused by external factors, so label smoothing compensates for them and keeps the
# model from becoming absolutely certain about any single pattern, which helps prevent
# overfitting. The example below makes this clearer.
from pyitcast.transformer_utils import LabelSmoothing
# Import the loss-computation tool, which computes the loss on the label-smoothed result;
# the loss can be thought of as a cross-entropy loss.
from pyitcast.transformer_utils import SimpleLossCompute

# Integers 0-10 will be generated
V = 11
# Feed the model 20 samples per parameter update
batch = 20
# Delivering 30 batches completes one pass over the data, i.e. one epoch
num_batch = 30
# Build the model with make_model
model = make_model(V, V, N=2)
print(model.src_embed[0])
# Obtain the model optimizer with get_std_opt
model_optimizer = get_std_opt(model)
# Instantiate a criterion object with LabelSmoothing.
# The first parameter, size, is the target vocabulary size, which is also the size of the last
# dimension of the tensor produced by the model's final layer (V = 11 here). The second
# parameter, padding_idx, indicates which numbers in the tensor should be replaced with 0;
# padding_idx=0 means no replacement is done. The third parameter, smoothing, is the degree of
# smoothing: a label whose original value was 1 gets the value range [1-smoothing, 1+smoothing].
criterion = LabelSmoothing(size=V, padding_idx=0, smoothing=0.0)
# Use SimpleLossCompute to obtain the loss computation that works on the smoothed labels
loss = SimpleLossCompute(model.generator, criterion, model_optimizer)

# Step 3: run the model for training and evaluation
from pyitcast.transformer_utils import run_epoch
def run(model, loss, epochs=10):
    for epoch in range(epochs):
        # Training mode: all parameters are updated
        model.train()
        # During training: batch size 8, 20 batches per epoch
        run_epoch(data_generator(V, 8, 20), model, loss)
        model.eval()
        run_epoch(data_generator(V, 8, 5), model, loss)

if __name__ == "__main__":
    # Integers 0-10 will be generated
    V = 11
    # Feed the model 20 samples per parameter update
    batch = 20
    # Delivering 30 batches completes one pass over the data, i.e. one epoch
    num_batch = 30
    res = data_generator(V, batch, num_batch)
    run(model, loss)

If you run the code above as-is it will probably fail. The errors mainly come from pyitcast, which targets a very old PyTorch version and no longer seems to be maintained, so to get it running you need to change the following two things:

First error: 'Embeddings' object has no attribute 'd_model'

As the error shows, get_std_opt needs the embedding dimension, but our Embeddings class does not expose such a value. There are two places where this can be fixed. The first is to add the attribute to our Embeddings class, as sketched below:
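A minimal sketch of this fix, assuming the Embeddings class in inputs.py follows the usual tutorial implementation (an nn.Embedding lookup table scaled by sqrt(d_model)); the only real change is storing d_model as an attribute so that get_std_opt can read it via model.src_embed[0].d_model:

import math
import torch.nn as nn

class Embeddings(nn.Module):
    def __init__(self, d_model, vocab):
        super(Embeddings, self).__init__()
        self.lut = nn.Embedding(vocab, d_model)
        # the added line: expose the embedding dimension so that
        # pyitcast's get_std_opt can read model.src_embed[0].d_model
        self.d_model = d_model

    def forward(self, x):
        # scale the embeddings by sqrt(d_model), as in the earlier inputs section
        return self.lut(x) * math.sqrt(self.d_model)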

The second approach is to go into the get_std_opt function itself and change how this parameter is obtained, as sketched below:
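A sketch of what that edit might look like, assuming pyitcast's get_std_opt mirrors the Annotated Transformer and wraps a NoamOpt schedule around Adam (the NoamOpt name and the 2/4000 factors are assumptions about the library's internals); the attribute lookup model.src_embed[0].d_model is simply replaced with the known embedding dimension, 512 in our case:

# inside pyitcast/transformer_utils.py (assumed to follow the Annotated Transformer;
# torch and NoamOpt are already available in that module)
def get_std_opt(model):
    # original: NoamOpt(model.src_embed[0].d_model, 2, 4000, ...)
    # replace the attribute lookup with the known embedding dimension
    return NoamOpt(512, 2, 4000,
                   torch.optim.Adam(model.parameters(), lr=0,
                                    betas=(0.9, 0.98), eps=1e-9))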

Either of the two changes above solves the problem.

Second error: RuntimeError: scatter(): Expected dtype int64 for index

This is a data-type problem that originates in how the training data is generated; adjust the generator as follows:
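The data_generator listed above already incorporates this change; the key point is to create the random digits explicitly as int64 (i.e. a LongTensor), since the scatter() call used for label smoothing expects integer indices. A minimal sketch of the adjusted generator (clone().detach() is used here as the non-deprecated equivalent of torch.tensor(data, requires_grad=False)):

import numpy as np
import torch
from pyitcast.transformer_utils import Batch

def data_generator(V, batch, num_batch):
    for i in range(num_batch):
        # generate the digits explicitly as int64 so that downstream
        # scatter()/embedding calls receive the index dtype they expect
        data = torch.from_numpy(np.random.randint(1, V, size=(batch, 10), dtype="int64"))
        # the first column is fixed to 1 and acts as the start symbol
        data[:, 0] = 1
        source = data.clone().detach()
        target = data.clone().detach()
        yield Batch(source, target)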

With these changes, training runs normally.

Output:

Epoch Step: 1 Loss: 3.169641 Tokens per Sec: 285.952789
Epoch Step: 1 Loss: 2.517479 Tokens per Sec: 351.509888
Epoch Step: 1 Loss: 2.595001 Tokens per Sec: 294.475616
Epoch Step: 1 Loss: 2.108872 Tokens per Sec: 476.050293
Epoch Step: 1 Loss: 2.229053 Tokens per Sec: 387.324188
Epoch Step: 1 Loss: 1.810681 Tokens per Sec: 283.639557
Epoch Step: 1 Loss: 2.047313 Tokens per Sec: 394.773773
Epoch Step: 1 Loss: 1.724596 Tokens per Sec: 415.394714
Epoch Step: 1 Loss: 1.850358 Tokens per Sec: 421.050873
Epoch Step: 1 Loss: 1.668582 Tokens per Sec: 368.275421
Epoch Step: 1 Loss: 2.005047 Tokens per Sec: 424.458466
Epoch Step: 1 Loss: 1.632835 Tokens per Sec: 408.158966
Epoch Step: 1 Loss: 1.698805 Tokens per Sec: 441.689392
Epoch Step: 1 Loss: 1.567691 Tokens per Sec: 392.488251
Epoch Step: 1 Loss: 1.765411 Tokens per Sec: 428.815796
Epoch Step: 1 Loss: 1.492155 Tokens per Sec: 426.288910
Epoch Step: 1 Loss: 1.541114 Tokens per Sec: 411.078918
Epoch Step: 1 Loss: 1.469818 Tokens per Sec: 454.231476
Epoch Step: 1 Loss: 1.677189 Tokens per Sec: 431.382690
Epoch Step: 1 Loss: 1.377327 Tokens per Sec: 433.725250

 

Bringing in greedy decoding and running the training test

from transformer import make_model
import torch
import numpy as np
from pyitcast.transformer_utils import Batch

# Step 1: build the data generator
def data_generator(V, batch, num_batch):
    # Randomly generates data for the copy task. The three arguments are
    # V: the maximum random digit + 1,
    # batch: how many samples are fed to the model per parameter update,
    # num_batch: how many batches are delivered to complete one pass.
    for i in range(num_batch):
        data = torch.from_numpy(np.random.randint(1, V, size=(batch, 10), dtype="int64"))
        data[:, 0] = 1
        source = torch.tensor(data, requires_grad=False)
        target = torch.tensor(data, requires_grad=False)
        yield Batch(source, target)

# Step 2: obtain the Transformer model, its optimizer and loss function
# Import get_std_opt, which builds the standard optimizer for the Transformer model.
# It is based on Adam and adapted to work well for sequence-to-sequence tasks.
from pyitcast.transformer_utils import get_std_opt
# Import the label-smoothing tool. Label smoothing slightly widens the value range of the
# original labels: even human-annotated data is never perfectly correct and carries small
# deviations caused by external factors, so label smoothing compensates for them and keeps the
# model from becoming absolutely certain about any single pattern, which helps prevent
# overfitting. The example below makes this clearer.
from pyitcast.transformer_utils import LabelSmoothing
# Import the loss-computation tool, which computes the loss on the label-smoothed result;
# the loss can be thought of as a cross-entropy loss.
from pyitcast.transformer_utils import SimpleLossCompute

# Integers 0-10 will be generated
V = 11
# Feed the model 20 samples per parameter update
batch = 20
# Delivering 30 batches completes one pass over the data, i.e. one epoch
num_batch = 30
# Build the model with make_model
model = make_model(V, V, N=2)
# Obtain the model optimizer with get_std_opt
model_optimizer = get_std_opt(model)
# Instantiate a criterion object with LabelSmoothing.
# The first parameter, size, is the target vocabulary size, which is also the size of the last
# dimension of the tensor produced by the model's final layer (V = 11 here). The second
# parameter, padding_idx, indicates which numbers in the tensor should be replaced with 0;
# padding_idx=0 means no replacement is done. The third parameter, smoothing, is the degree of
# smoothing: a label whose original value was 1 gets the value range [1-smoothing, 1+smoothing].
criterion = LabelSmoothing(size=V, padding_idx=0, smoothing=0.0)
# Use SimpleLossCompute to obtain the loss computation that works on the smoothed labels
loss = SimpleLossCompute(model.generator, criterion, model_optimizer)

# Step 3: run the model for training and evaluation
from pyitcast.transformer_utils import run_epoch
def run(model, loss, epochs=10):
    for epoch in range(epochs):
        # Training mode: all parameters are updated
        model.train()
        # During training: batch size 8, 20 batches per epoch
        run_epoch(data_generator(V, 8, 20), model, loss)
        model.eval()
        run_epoch(data_generator(V, 8, 5), model, loss)

# Step 4: bring in greedy decoding
# Import greedy_decode, which greedily decodes the final result: at every step it picks the most
# probable token as the output. This does not guarantee a globally optimal result, but it is the
# most efficient strategy.
from pyitcast.transformer_utils import greedy_decode
def run_greedy(model, loss, epochs=10):
    for epoch in range(epochs):
        # Training mode: all parameters are updated
        model.train()
        # During training: batch size 8, 20 batches per epoch
        run_epoch(data_generator(V, 8, 20), model, loss)
        model.eval()
        run_epoch(data_generator(V, 8, 5), model, loss)
    model.eval()
    # A fixed input tensor
    source = torch.LongTensor([[1, 8, 3, 4, 10, 6, 7, 2, 9, 5]])
    # Source mask: all ones; here 1 means "do not mask", so the source is not masked at all
    source_mask = torch.ones(1, 1, 10)
    # Pass in model, src, src_mask, the maximum decoding length max_len (default 10),
    # and the start symbol (default 1, which is also what we use here)
    result = greedy_decode(model, source, source_mask, max_len=10, start_symbol=1)
    print(result)

if __name__ == "__main__":
    # # Integers 0-10 will be generated
    # V = 11
    # # Feed the model 20 samples per parameter update
    # batch = 20
    # # Delivering 30 batches completes one pass over the data, i.e. one epoch
    # num_batch = 30
    # res = data_generator(V, batch, num_batch)
    # run(model, loss)
    run_greedy(model, loss, 50)

Partial output:

Epoch Step: 1 Loss: 0.428033 Tokens per Sec: 389.530670
Epoch Step: 1 Loss: 0.317753 Tokens per Sec: 399.060852
Epoch Step: 1 Loss: 0.192723 Tokens per Sec: 387.384308
Epoch Step: 1 Loss: 0.257650 Tokens per Sec: 379.354736
Epoch Step: 1 Loss: 0.487521 Tokens per Sec: 410.506714
Epoch Step: 1 Loss: 0.136969 Tokens per Sec: 388.222687
Epoch Step: 1 Loss: 0.119838 Tokens per Sec: 375.405731
Epoch Step: 1 Loss: 0.250391 Tokens per Sec: 408.776367
Epoch Step: 1 Loss: 0.376862 Tokens per Sec: 419.787231
Epoch Step: 1 Loss: 0.163561 Tokens per Sec: 393.896088
Epoch Step: 1 Loss: 0.303041 Tokens per Sec: 395.884857
Epoch Step: 1 Loss: 0.126261 Tokens per Sec: 386.709167
Epoch Step: 1 Loss: 0.237891 Tokens per Sec: 376.114075
Epoch Step: 1 Loss: 0.139017 Tokens per Sec: 405.207336
Epoch Step: 1 Loss: 0.414842 Tokens per Sec: 389.219666
Epoch Step: 1 Loss: 0.207141 Tokens per Sec: 392.840820
tensor([[ 1, 8, 3, 4, 10, 6, 7, 2, 9, 5]])

As the code shows, the test input is source = torch.LongTensor([[1,8,3,4,10,6,7,2,9,5]]).

The decoded result is exactly right, because I set the number of epochs to 50; with 10 epochs there would still be mistakes, which you can try for yourself.
