The Transformer has remained extremely popular over the past few years. I had previously picked up a rough understanding of the architecture, but I had never dug into its source code, so my grasp of it always stayed superficial; after all, talk is cheap, show me the code. Having some spare time these days, I went through my bookmarks (the collecting never stops, the learning never starts) and found a solid English blog post I had saved. I decided to translate it, partly to force myself into a deeper understanding, and partly in the hope that this article and the original post can help more people.
Original: The Annotated Transformer (harvard.edu)
Code: GitHub - harvardnlp/annotated-transformer: An annotated implementation of the Transformer paper.
This article reproduces the Transformer with the PyTorch deep learning framework, so you need to install the related libraries first; it is also recommended to run the subsequent code module by module in a Jupyter notebook.
- # requirements.txt
-
- pandas==1.3.5
- torch==1.11.0+cu113
- torchdata==0.3.0
- torchtext==0.12
- spacy==3.2
- altair==4.1
- jupytext==1.13
- flake8
- black
- GPUtil
- wandb
All of the modules listed above can be installed with pip install. Next we import the required packages and define some test helpers. These parts are not part of the Transformer algorithm itself, so to keep the later discussion uncluttered they are all collected here.
- import os
- from os.path import exists
- import torch
- import torch.nn as nn
- from torch.nn.functional import log_softmax, pad
- import math
- import copy
- import time
- from torch.optim.lr_scheduler import LambdaLR
- import pandas as pd
- import altair as alt
- from torchtext.data.functional import to_map_style_dataset
- from torch.utils.data import DataLoader
- from torchtext.vocab import build_vocab_from_iterator
- import torchtext.datasets as datasets
- import spacy
- import GPUtil
- import warnings
- from torch.utils.data.distributed import DistributedSampler
- import torch.distributed as dist
- import torch.multiprocessing as mp
- from torch.nn.parallel import DistributedDataParallel as DDP
-
-
- # Set to False to skip notebook execution (e.g. for debugging)
- warnings.filterwarnings("ignore")
- RUN_EXAMPLES = True
- def is_interactive_notebook():
- return __name__ == "__main__"
-
-
- def show_example(fn, args=[]):
- if __name__ == "__main__" and RUN_EXAMPLES:
- return fn(*args)
-
-
- def execute_example(fn, args=[]):
- if __name__ == "__main__" and RUN_EXAMPLES:
- fn(*args)
-
-
- # Placeholder optimizer and scheduler, used by some of the examples later on
- class DummyOptimizer(torch.optim.Optimizer):
- def __init__(self):
- self.param_groups = [{"lr": 0}]
- None
-
- def step(self):
- None
-
- def zero_grad(self, set_to_none=False):
- None
-
-
- class DummyScheduler:
- def step(self):
- None
Many strong deep neural networks today use an encoder-decoder framework for sequence-processing tasks such as machine translation, text summarization, and sentiment analysis. The Transformer architecture shown above (Figure 1 of the original paper) follows this layout: the left half is the encoder and the right half is the decoder. The encoder maps a sequence of input symbols into a sequence of continuous representations, and the decoder then maps that representation back to an output sequence of symbols. The whole process is auto-regressive: at each step, the output generated so far is fed back as input for the next step.
Accordingly, we can define an EncoderDecoder class that handles both the encoding and the decoding of the data. On the encoder side (the encode function), the inputs are the source sequence and its mask; on the decoder side (the decode function), the inputs are the target sequence and its mask, together with the encoder output (memory) and the source mask.
- class EncoderDecoder(nn.Module):
- """
- A standard Encoder-Decoder architecture. Base for this and many
- other models.
- """
-
- def __init__(self, encoder, decoder, src_embed, tgt_embed, generator):
- super(EncoderDecoder, self).__init__()
- self.encoder = encoder
- self.decoder = decoder
- self.src_embed = src_embed
- self.tgt_embed = tgt_embed
- self.generator = generator
-
- def forward(self, src, tgt, src_mask, tgt_mask):
- "Take in and process masked src and target sequences."
- return self.decode(self.encode(src, src_mask), src_mask, tgt, tgt_mask)
-
- def encode(self, src, src_mask):
- return self.encoder(self.src_embed(src), src_mask)
-
- def decode(self, memory, src_mask, tgt, tgt_mask):
- return self.decoder(self.tgt_embed(tgt), memory, src_mask, tgt_mask)
-
-
- class Generator(nn.Module):
- "Define standard linear + softmax generation step."
-
- def __init__(self, d_model, vocab):
- super(Generator, self).__init__()
- self.proj = nn.Linear(d_model, vocab)
-
- def forward(self, x):
- return log_softmax(self.proj(x), dim=-1)
It is worth saying a bit more about src_embed and src_mask. Deep learning models operate on numbers, whereas natural language processing deals with text, so the text must first be converted into numerical vectors the model can process; this is exactly what src_embed is responsible for. Within a sentence, words are related to one another, and the attention mechanism computes how strongly each position should attend to every other position; src_mask specifies which positions may be attended to at all, for instance so that padding positions in the source are ignored (and, on the decoder side, so that future positions stay hidden), as illustrated in the sketch below.
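As a small illustrative sketch (my own, not part of the original notebook), this is how a source padding mask could be built from a batch of token ids; the pad index 2 used here is only an assumption for the example:
- # Hypothetical sketch: build a padding mask from a batch of token ids.
- # Positions equal to the assumed pad index are masked out (False).
- pad_idx = 2  # assumed padding token id, for illustration only
- src_tokens = torch.tensor([[5, 7, 9, 2, 2]])      # shape (batch, seq_len)
- src_mask = (src_tokens != pad_idx).unsqueeze(-2)  # shape (batch, 1, seq_len)
- print(src_mask)
- # tensor([[[ True,  True,  True, False, False]]])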
The encoder is built from a stack of N = 6 identical layers that process the data one after another.
- def clones(module, N):
- "Produce N identical layers."
- return nn.ModuleList([copy.deepcopy(module) for _ in range(N)])
- # Encoder: a stack of N identical layers
- class Encoder(nn.Module):
- "Core encoder is a stack of N layers"
-
- def __init__(self, layer, N):
- super(Encoder, self).__init__()
- self.layers = clones(layer, N)
- self.norm = LayerNorm(layer.size)
-
- def forward(self, x, mask):
- "Pass the input (and mask) through each layer in turn."
- for layer in self.layers:
- x = layer(x, mask)
- # apply a final LayerNorm to the output x
- return self.norm(x)
Each basic unit of the encoder (the EncoderLayer class) is made up of two parts: a self-attention module and a feed-forward network. Inside EncoderLayer, each of these two sub-layers is wrapped in a residual connection and combined with layer normalization (LayerNorm).
- class EncoderLayer(nn.Module):
- "Encoder is made up of self-attn and feed forward (defined below)"
-
- def __init__(self, size, self_attn, feed_forward, dropout):
- super(EncoderLayer, self).__init__()
- self.self_attn = self_attn
- self.feed_forward = feed_forward
- self.sublayer = clones(SublayerConnection(size, dropout), 2)
- self.size = size
-
- def forward(self, x, mask):
- "Follow Figure 1 (left) for connections."
- x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask))  # self-attention sub-layer
- return self.sublayer[1](x, self.feed_forward)  # feed-forward sub-layer
In the encoder, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself; the function passed into each sublayer determines whether that layer performs the self-attention computation or the feed-forward computation.
- # A small connection wrapper that adds the sub-layer's input and output element-wise (the residual connection).
- # Couldn't this functionality simply live inside EncoderLayer?
- # -->>> An EncoderLayer consists of two different sub-layers, attention and FFN; their functions differ,
- # but both need the same residual addition around their output. Starting from that common pattern, it is
- # simpler to pass each sub-layer's logic in as a function than to create separate wrapper classes for attention and FFN.
- class SublayerConnection(nn.Module):
- """
- A residual connection followed by a layer norm.
- Note for code simplicity the norm is first as opposed to last.
- """
-
- def __init__(self, size, dropout):
- super(SublayerConnection, self).__init__()
- self.norm = LayerNorm(size)
- self.dropout = nn.Dropout(dropout)
-
- def forward(self, x, sublayer):
- "Apply residual connection to any sublayer with the same size."
- return x + self.dropout(sublayer(self.norm(x)))
Layer normalization (LayerNorm) is a normalization technique commonly used in natural language processing tasks. Rather than normalizing across the batch, it normalizes each token's feature vector across the embedding dimensions.
- class LayerNorm(nn.Module):
- "Construct a layernorm module (See citation for details)."
-
- def __init__(self, features, eps=1e-6):
- super(LayerNorm, self).__init__()
- self.a_2 = nn.Parameter(torch.ones(features))
- self.b_2 = nn.Parameter(torch.zeros(features))
- self.eps = eps
-
- def forward(self, x):
- # LayerNorm normalizes over the word-vector (feature) dimension, so mean and std are taken over dim -1
- mean = x.mean(-1, keepdim=True)
- std = x.std(-1, keepdim=True)
- return self.a_2 * (x - mean) / (std + self.eps) + self.b_2
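A quick sanity check (a minimal sketch of my own, not from the original notebook) confirms that the normalization acts on the last dimension, i.e. on each token's feature vector:
- # Sketch: verify that LayerNorm normalizes each token's feature vector.
- x = torch.randn(2, 10, 512)           # (batch, seq_len, d_model)
- out = LayerNorm(512)(x)
- print(out.shape)                       # torch.Size([2, 10, 512])
- print(out.mean(-1).abs().max())        # close to 0 for every token
- print(out.std(-1).mean())              # close to 1 for every token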
Like the encoder, the decoder is also composed of a stack of N = 6 identical layers (DecoderLayer).
- class Decoder(nn.Module):
- "Generic N layer decoder with masking."
-
- def __init__(self, layer, N):
- super(Decoder, self).__init__()
- self.layers = clones(layer, N)
- self.norm = LayerNorm(layer.size)
-
- def forward(self, x, memory, src_mask, tgt_mask):
- for layer in self.layers:
- # only the decoder input x is updated from layer to layer; memory and the two masks stay unchanged
- x = layer(x, memory, src_mask, tgt_mask)
- return self.norm(x)
Whereas an EncoderLayer consists of two parts, self-attention and a feed-forward network, a DecoderLayer consists of three: two attention blocks plus a feed-forward network. In the first attention block, Q, K, and V all come from the decoder itself; in the second, K and V come from the encoder output while Q comes from the decoder. This is not hard to understand: in an English-to-Chinese translation task, for example, generating the next token requires the information from what has already been generated as well as from the source sentence.
- class DecoderLayer(nn.Module):
- "Decoder is made of self-attn, src-attn, and feed forward (defined below)"
-
- def __init__(self, size, self_attn, src_attn, feed_forward, dropout):
- super(DecoderLayer, self).__init__()
- self.size = size
- self.self_attn = self_attn
- self.src_attn = src_attn
- self.feed_forward = feed_forward
- self.sublayer = clones(SublayerConnection(size, dropout), 3)
-
- # A decoder layer has three parts: decoder self-attention, encoder-decoder attention, and the FFN
- def forward(self, x, memory, src_mask, tgt_mask):
- "Follow Figure 1 (right) for connections."
- m = memory
- x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, tgt_mask))
- x = self.sublayer[1](x, lambda x: self.src_attn(x, m, m, src_mask))
- return self.sublayer[2](x, self.feed_forward)
The decoder also needs a mask, but it differs slightly from the one used in the encoder. A sentence is generated token by token, so when predicting a token the model must not be allowed to see the ground truth of the tokens that have not been generated yet. The decoder's attention mask therefore blanks out the upper-triangular (future) positions.
- # The decoder mask: the strict upper triangle, i.e. the future positions, is masked out
- def subsequent_mask(size):
- "Mask out subsequent positions."
- attn_shape = (1, size, size)
- subsequent_mask = torch.triu(torch.ones(attn_shape), diagonal=1).type(
- torch.uint8
- )
- return subsequent_mask == 0
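As a small check (the original notebook visualizes this mask with altair; here we simply print it), the mask for a target sequence of length 5 lets each position attend only to itself and to earlier positions:
- # Sketch: inspect the mask for a sequence of length 5.
- print(subsequent_mask(5))
- # tensor([[[ True, False, False, False, False],
- #          [ True,  True, False, False, False],
- #          [ True,  True,  True, False, False],
- #          [ True,  True,  True,  True, False],
- #          [ True,  True,  True,  True,  True]]])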
The attention mechanism can be described as mapping a query and a set of key-value pairs to an output vector. The output is essentially a weighted sum of the value vectors V, where the weights are obtained from the dot products of the query Q with the keys K, scaled by $\sqrt{d_k}$ and normalized with a softmax: $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(QK^T / \sqrt{d_k}\right)V$.
- def attention(query, key, value, mask=None, dropout=None):
- "Compute 'Scaled Dot Product Attention'"
- d_k = query.size(-1)
- # query.shape = (1, 8, 10, 64), rst.shape = (1, 8, 10, 10)
- # where 1 is the batch size, 8 is the number of heads, and 64 = d_model / h = 512 / 8
- # the matrix multiplication only acts on the last two dimensions
- scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(d_k)
- # softmax is effectively applied only to the unmasked attention scores (masked ones are set to -1e9)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e9)
- p_attn = scores.softmax(dim=-1)
- # dropout is also applied to the attention weights
- if dropout is not None:
- p_attn = dropout(p_attn)
- # attention output = dropout(softmax(Q K^T / sqrt(d_k))) @ V
- return torch.matmul(p_attn, value), p_attn
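Here is a small shape check of the attention function (a sketch of my own), using random tensors with the dimensions mentioned in the comments above: batch size 1, 8 heads, sequence length 10, d_k = 64.
- # Sketch: check the shapes produced by scaled dot-product attention.
- q = k = v = torch.randn(1, 8, 10, 64)   # (batch, heads, seq_len, d_k)
- out, p_attn = attention(q, k, v)
- print(out.shape)                # torch.Size([1, 8, 10, 64])
- print(p_attn.shape)             # torch.Size([1, 8, 10, 10])
- print(p_attn.sum(-1)[0, 0, 0])  # rows of the attention matrix sum to 1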
Multi-head attention is what the Transformer actually uses, because it allows the model to jointly attend to information from different representation subspaces at different positions, enriching the features the network can learn.
- class MultiHeadedAttention(nn.Module):
- # h is the number of attention heads
- def __init__(self, h, d_model, dropout=0.1):
- "Take in model size and number of heads."
- super(MultiHeadedAttention, self).__init__()
- assert d_model % h == 0
- # We assume d_v always equals d_k
- self.d_k = d_model // h
- self.h = h
- self.linears = clones(nn.Linear(d_model, d_model), 4)
- self.attn = None
- self.dropout = nn.Dropout(p=dropout)
-
- def forward(self, query, key, value, mask=None):
- "Implements Figure 2"
- if mask is not None:
- # Same mask applied to all h heads.
- mask = mask.unsqueeze(1)
- nbatches = query.size(0)
- # before query.shape = (1, 10, 512)
- # after query.shape = (1, 8, 10, 64)
- # the 512-dim vectors are split into 8 heads of size 64 each, hence the requirement d_model % h == 0
- # 10 is the number of tokens in the sentence
- # 1) Do all the linear projections in batch from d_model => h x d_k
- query, key, value = [
- lin(x).view(nbatches, -1, self.h, self.d_k).transpose(1, 2)
- for lin, x in zip(self.linears, (query, key, value))
- ]
-
- # 2) Apply attention on all the projected vectors in batch.
- x, self.attn = attention(
- query, key, value, mask=mask, dropout=self.dropout
- )
-
- # 3) "Concat" using a view and apply a final linear.
- x = (
- x.transpose(1, 2)
- .contiguous()
- .view(nbatches, -1, self.h * self.d_k)
- )
- del query
- del key
- del value
- return self.linears[-1](x)
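A minimal usage sketch (my own, not from the original notebook) of the multi-head attention module with h = 8 and d_model = 512:
- # Sketch: run a forward pass through multi-head attention.
- mha = MultiHeadedAttention(h=8, d_model=512)
- x = torch.randn(1, 10, 512)     # (batch, seq_len, d_model)
- out = mha(x, x, x, mask=None)   # self-attention: query = key = value
- print(out.shape)                # torch.Size([1, 10, 512])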
Besides self-attention, each sub-layer block also contains a position-wise feed-forward network. It consists of two linear layers with a ReLU activation in between: $\mathrm{FFN}(x) = \max(0, xW_1 + b_1)W_2 + b_2$.
In this two-layer network, the inner (hidden) layer has dimensionality $d_{ff} = 2048$, while the input and output dimensionality is $d_{model} = 512$.
- class PositionwiseFeedForward(nn.Module):
- "Implements FFN equation."
-
- def __init__(self, d_model, d_ff, dropout=0.1):
- super(PositionwiseFeedForward, self).__init__()
- self.w_1 = nn.Linear(d_model, d_ff)
- self.w_2 = nn.Linear(d_ff, d_model)
- self.dropout = nn.Dropout(dropout)
-
- def forward(self, x):
- return self.w_2(self.dropout(self.w_1(x).relu()))
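A one-line sanity check (a sketch of my own) showing that the feed-forward block preserves the model dimension:
- # Sketch: the feed-forward block keeps the model dimension unchanged.
- ffn = PositionwiseFeedForward(d_model=512, d_ff=2048)
- x = torch.randn(1, 10, 512)
- print(ffn(x).shape)   # torch.Size([1, 10, 512])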
Before feeding data into the Transformer, the input and output tokens have to be converted into vectors (embeddings). Similarly, a learned linear transformation followed by a softmax is used to map the decoder output to next-token probabilities.
- # token embedding layer
- class Embeddings(nn.Module):
- def __init__(self, d_model, vocab):
- super(Embeddings, self).__init__()
- # vocab is the vocabulary size
- # d_model is the embedding size, i.e. each token is represented by a vector of length d_model
- self.lut = nn.Embedding(vocab, d_model)
- self.d_model = d_model
-
- def forward(self, x):
- # x.shape = (1, 10)
- # rst.shape = (1, 10, 512(d_model))
- return self.lut(x) * math.sqrt(self.d_model)
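A small usage sketch (assuming a toy vocabulary of size 11, as in the inference test further below):
- # Sketch: token ids of shape (1, 10) become vectors of shape (1, 10, 512).
- emb = Embeddings(d_model=512, vocab=11)
- tokens = torch.LongTensor([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]])
- print(emb(tokens).shape)   # torch.Size([1, 10, 512])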
Since the Transformer contains neither recurrence nor convolution, positional information has to be injected into the input; the reason was already discussed above, so it will not be repeated here. The positional encoding uses sine and cosine functions of different frequencies: $PE_{(pos, 2i)} = \sin\!\left(pos / 10000^{2i/d_{model}}\right)$ and $PE_{(pos, 2i+1)} = \cos\!\left(pos / 10000^{2i/d_{model}}\right)$.
- class PositionalEncoding(nn.Module):
- "Implement the PE function."
-
- def __init__(self, d_model, dropout, max_len=5000):
- super(PositionalEncoding, self).__init__()
- self.dropout = nn.Dropout(p=dropout)
-
- # Compute the positional encodings once in log space.
- pe = torch.zeros(max_len, d_model)
- position = torch.arange(0, max_len).unsqueeze(1)
- div_term = torch.exp(
- torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model)
- )
- pe[:, 0::2] = torch.sin(position * div_term)
- pe[:, 1::2] = torch.cos(position * div_term)
- pe = pe.unsqueeze(0)
- self.register_buffer("pe", pe)
-
- def forward(self, x):
- x = x + self.pe[:, : x.size(1)].requires_grad_(False)
- return self.dropout(x)
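A minimal inspection sketch of my own (the original notebook plots the encodings with altair; here we only look at a few values, with dropout set to 0 so the encoding itself is visible):
- # Sketch: positional encodings are added to the embeddings.
- pe = PositionalEncoding(d_model=512, dropout=0.0)
- x = torch.zeros(1, 100, 512)    # zero input isolates the encoding itself
- y = pe(x)
- print(y.shape)                   # torch.Size([1, 100, 512])
- print(y[0, 1, 0::2][:4])         # sin components at position 1
- print(y[0, 1, 1::2][:4])         # cos components at position 1
The make_model helper below then assembles all of these components into the full model.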
- def make_model(
- src_vocab, tgt_vocab, N=6, d_model=512, d_ff=2048, h=8, dropout=0.1
- ):
- "Helper: Construct a model from hyperparameters."
- c = copy.deepcopy
- attn = MultiHeadedAttention(h, d_model)
- ff = PositionwiseFeedForward(d_model, d_ff, dropout)
- position = PositionalEncoding(d_model, dropout)
- model = EncoderDecoder(
- Encoder(EncoderLayer(d_model, c(attn), c(ff), dropout), N),
- Decoder(DecoderLayer(d_model, c(attn), c(attn), c(ff), dropout), N),
- nn.Sequential(Embeddings(d_model, src_vocab), c(position)),  # chain the embedding and the positional encoding
- nn.Sequential(Embeddings(d_model, tgt_vocab), c(position)),  # chain the embedding and the positional encoding
- Generator(d_model, tgt_vocab),
- )
-
- # This was important from their code.
- # Initialize parameters with Glorot / fan_avg.
- for p in model.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
- return model
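As a quick sketch (not part of the original notebook), we can instantiate a small model and count its parameters:
- # Sketch: build a small model (2 layers, vocab size 11) and count its parameters.
- tmp_model = make_model(11, 11, N=2)
- n_params = sum(p.numel() for p in tmp_model.parameters())
- print(f"{n_params:,} parameters")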
1.6 Model Inference
- def inference_test():
- test_model = make_model(11, 11, 2)
- test_model.eval()
- src = torch.LongTensor([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]])
- src_mask = torch.ones(1, 1, 10)
-
- memory = test_model.encode(src, src_mask)
- ys = torch.zeros(1, 1).type_as(src)
-
- for i in range(9):
- out = test_model.decode(
- memory, src_mask, ys, subsequent_mask(ys.size(1)).type_as(src.data)
- )
- prob = test_model.generator(out[:, -1])
- _, next_word = torch.max(prob, dim=1)
- next_word = next_word.data[0]
- ys = torch.cat(
- [ys, torch.empty(1, 1).type_as(src.data).fill_(next_word)], dim=1
- )
-
- print("Example Untrained Model Prediction:", ys)
-
-
- def run_tests():
- for _ in range(10):
- inference_test()
-
-
- show_example(run_tests)
Running run_tests above prints the (essentially random) predictions of the untrained model.