
Machine Translation Challenge Based on Terminology Dictionary Intervention

Competition link: 2024 iFLYTEK AI Developer Competition - iFLYTEK Open Platform


1. About NLP and Large Language Models

Natural Language Processing (NLP) is a branch of linguistics and artificial intelligence that aims to enable computers to process, understand, and generate language. NLP tasks can be roughly divided into four categories: sequence labeling, classification, sentence-pair relation judgment, and generation tasks.

Tips for parameter tuning:

  • N: train on only the first N samples of the dataset.
  • N_EPOCHS: the number of training epochs; one epoch is one complete pass over the training data. (A minimal sketch of how both are used follows this list.)
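As a rough illustration of how these two knobs fit together (the dataset and loop below are placeholders, not the competition baseline code):

# Minimal sketch of how the two hyperparameters above enter training
# (all names and values here are illustrative).
N = 1000                    # train on only the first N sentence pairs
N_EPOCHS = 10               # number of full passes over those N pairs

dataset = [("source sentence", "target sentence")] * 5000   # stand-in for the real parallel corpus
train_subset = dataset[:N]

for epoch in range(N_EPOCHS):
    for src_sentence, tgt_sentence in train_subset:
        pass                # one optimisation step on this pair would go here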

2. Seq2Seq-Based Modeling

The mainstream approach to machine translation today is neural modeling. Seq2Seq made it possible to apply classic deep neural networks (DNNs) to translation, automatic text summarization, question answering, and some regression/prediction tasks. Its core idea is to map an input sequence to an output sequence through two stages, an encoder and a decoder: the encoder compresses the input sequence into a fixed-length vector, which is passed to the decoder, and the decoder generates an output sequence of variable length.

Building the Encoder

import math
import random

import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
        super().__init__()
        self.hid_dim = hid_dim
        self.n_layers = n_layers
        self.embedding = nn.Embedding(input_dim, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, n_layers, dropout=dropout, batch_first=True)
        self.dropout = nn.Dropout(dropout)

    def forward(self, src):
        # src = [batch size, src len]
        embedded = self.dropout(self.embedding(src))
        # embedded = [batch size, src len, emb dim]
        outputs, hidden = self.gru(embedded)
        # outputs = [batch size, src len, hid dim * n directions]
        # hidden = [n layers * n directions, batch size, hid dim]
        return outputs, hidden
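As a quick sanity check (the sizes below are arbitrary, not the competition settings), the encoder can be run on a batch of random token indices:

# Shape check for the Encoder above.
enc = Encoder(input_dim=1000, emb_dim=64, hid_dim=128, n_layers=2, dropout=0.5)
src = torch.randint(0, 1000, (4, 12))   # [batch size = 4, src len = 12]
outputs, hidden = enc(src)
print(outputs.shape)                     # torch.Size([4, 12, 128])
print(hidden.shape)                      # torch.Size([2, 4, 128])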

Attention Mechanism

A traditional Seq2Seq model relies only on the encoder's final hidden state during decoding, which works poorly on long sequences. The attention mechanism lets the decoder attend to all of the encoder's intermediate states when generating each output word, so it makes better use of the information in the source sequence. Concretely, given the vector sequence h_1, h_2, ..., h_m produced by the encoder for the source sentence, the attention mechanism adaptively looks up the relevant parts of this sequence according to what the decoder needs at each translation step.
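Written out in the standard additive-attention notation (here s_{t-1} is the decoder's previous hidden state, c_t the resulting context vector, and W_a, v correspond to self.attn and self.v in the code below):

e_{t,i} = v^{\top} \tanh\!\left(W_a\,[s_{t-1};\, h_i]\right), \qquad
\alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_{k=1}^{m} \exp(e_{t,k})}, \qquad
c_t = \sum_{i=1}^{m} \alpha_{t,i}\, h_i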

class Attention(nn.Module):
    def __init__(self, hid_dim):
        super().__init__()
        self.attn = nn.Linear(hid_dim * 2, hid_dim)
        self.v = nn.Linear(hid_dim, 1, bias=False)

    def forward(self, hidden, encoder_outputs):
        # hidden = [1, batch size, hid dim]
        # encoder_outputs = [batch size, src len, hid dim]
        batch_size = encoder_outputs.shape[0]
        src_len = encoder_outputs.shape[1]
        hidden = hidden.repeat(src_len, 1, 1).transpose(0, 1)
        # hidden = [batch size, src len, hid dim]
        energy = torch.tanh(self.attn(torch.cat((hidden, encoder_outputs), dim=2)))
        # energy = [batch size, src len, hid dim]
        attention = self.v(energy).squeeze(2)
        # attention = [batch size, src len]
        return F.softmax(attention, dim=1)

Building the Decoder

class Decoder(nn.Module):
    def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout, attention):
        super().__init__()
        self.output_dim = output_dim
        self.hid_dim = hid_dim
        self.n_layers = n_layers
        self.attention = attention
        self.embedding = nn.Embedding(output_dim, emb_dim)
        self.gru = nn.GRU(hid_dim + emb_dim, hid_dim, n_layers, dropout=dropout, batch_first=True)
        self.fc_out = nn.Linear(hid_dim * 2 + emb_dim, output_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, input, hidden, encoder_outputs):
        # input = [batch size]
        # hidden = [n layers, batch size, hid dim]
        # encoder_outputs = [batch size, src len, hid dim]
        input = input.unsqueeze(1)
        # input = [batch size, 1]
        embedded = self.dropout(self.embedding(input))
        # embedded = [batch size, 1, emb dim]
        a = self.attention(hidden[-1:], encoder_outputs)
        # a = [batch size, src len]
        a = a.unsqueeze(1)
        # a = [batch size, 1, src len]
        weighted = torch.bmm(a, encoder_outputs)
        # weighted = [batch size, 1, hid dim]
        rnn_input = torch.cat((embedded, weighted), dim=2)
        # rnn_input = [batch size, 1, emb dim + hid dim]
        output, hidden = self.gru(rnn_input, hidden)
        # output = [batch size, 1, hid dim]
        # hidden = [n layers, batch size, hid dim]
        embedded = embedded.squeeze(1)
        output = output.squeeze(1)
        weighted = weighted.squeeze(1)
        prediction = self.fc_out(torch.cat((output, weighted, embedded), dim=1))
        # prediction = [batch size, output dim]
        return prediction, hidden

Building the Seq2Seq Model

class Seq2Seq(nn.Module):
    def __init__(self, encoder, decoder, device):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.device = device

    def forward(self, src, trg, teacher_forcing_ratio=0.5):
        # src = [batch size, src len]
        # trg = [batch size, trg len]
        batch_size = src.shape[0]
        trg_len = trg.shape[1]
        trg_vocab_size = self.decoder.output_dim
        outputs = torch.zeros(batch_size, trg_len, trg_vocab_size).to(self.device)
        encoder_outputs, hidden = self.encoder(src)
        input = trg[:, 0]
        for t in range(1, trg_len):
            output, hidden = self.decoder(input, hidden, encoder_outputs)
            outputs[:, t] = output
            teacher_force = random.random() < teacher_forcing_ratio
            top1 = output.argmax(1)
            input = trg[:, t] if teacher_force else top1
        return outputs
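To see the pieces fit together, here is a minimal end-to-end check of the modules above (all hyperparameter values are arbitrary; the real training script would use a proper data pipeline, loss, and optimizer):

# Wire Encoder, Attention, Decoder and Seq2Seq together and run one dummy forward pass.
device = torch.device('cpu')
INPUT_DIM, OUTPUT_DIM = 1000, 1200
attn = Attention(hid_dim=128)
enc = Encoder(INPUT_DIM, emb_dim=64, hid_dim=128, n_layers=2, dropout=0.5)
dec = Decoder(OUTPUT_DIM, emb_dim=64, hid_dim=128, n_layers=2, dropout=0.5, attention=attn)
model = Seq2Seq(enc, dec, device).to(device)

src = torch.randint(0, INPUT_DIM, (4, 12))    # [batch size, src len]
trg = torch.randint(0, OUTPUT_DIM, (4, 10))   # [batch size, trg len]
outputs = model(src, trg)
print(outputs.shape)                           # torch.Size([4, 10, 1200])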

3. Applying the Transformer Model to Machine Translation

The Transformer was first proposed, in the original paper, for machine translation, and its arrival pushed both the quality and the efficiency of machine translation to a new level. It abandons recurrence entirely and models the global dependencies within the source and target sequences purely through attention. When extracting the contextual features of each word, the Transformer uses self-attention to measure how important every other word in the context is to the current word.

The Transformer's main components are the encoder, the decoder, and the attention layers. Its core is multi-head self-attention (Multi-Head Self-Attention), which lets the representation at each position depend not only on that position but also directly on every other position. Since its introduction, the Transformer has achieved breakthrough results in machine translation, text generation, and other NLP tasks, and has become the new mainstream model in NLP.
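The computation at the heart of multi-head self-attention is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ / √d_k) · V. A minimal sketch of that formula (reusing the torch/F/math imports from the Seq2Seq section; this is illustrative, not the nn.Transformer internals used below):

# Scaled dot-product self-attention on a single head.
def scaled_dot_product_attention(q, k, v):
    # q, k, v: [batch size, seq len, d_k]
    d_k = q.size(-1)
    scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d_k)   # [batch, seq len, seq len]
    weights = F.softmax(scores, dim=-1)                              # attention over all positions
    return torch.matmul(weights, v)                                  # [batch, seq len, d_k]

x = torch.randn(2, 5, 16)                  # in self-attention, q, k and v all come from x
out = scaled_dot_product_attention(x, x, x)
print(out.shape)                           # torch.Size([2, 5, 16])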

Positional Encoding

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, dropout=0.1, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0).transpose(0, 1)
        self.register_buffer('pe', pe)

    def forward(self, x):
        # x = [seq len, batch size, d_model]
        x = x + self.pe[:x.size(0), :]
        return self.dropout(x)
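Because of the unsqueeze/transpose above, the buffer pe has shape (max_len, 1, d_model), so this module expects sequence-first input of shape (seq_len, batch_size, d_model); that is why TransformerModel below transposes src and tgt before embedding. A quick shape check (sizes arbitrary):

# PositionalEncoding expects sequence-first input [seq len, batch size, d_model].
pos_enc = PositionalEncoding(d_model=512, dropout=0.1)
x = torch.zeros(20, 4, 512)     # [seq len = 20, batch size = 4, d_model = 512]
print(pos_enc(x).shape)         # torch.Size([20, 4, 512])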

Transformer

# Transformer
class TransformerModel(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, d_model, nhead, num_encoder_layers, num_decoder_layers, dim_feedforward, dropout):
        super(TransformerModel, self).__init__()
        self.transformer = nn.Transformer(d_model, nhead, num_encoder_layers, num_decoder_layers, dim_feedforward, dropout)
        self.src_embedding = nn.Embedding(len(src_vocab), d_model)
        self.tgt_embedding = nn.Embedding(len(tgt_vocab), d_model)
        self.positional_encoding = PositionalEncoding(d_model, dropout)
        self.fc_out = nn.Linear(d_model, len(tgt_vocab))
        self.src_vocab = src_vocab
        self.tgt_vocab = tgt_vocab
        self.d_model = d_model

    def forward(self, src, tgt):
        # Switch src and tgt to sequence-first layout, since nn.Transformer defaults to batch_first=False
        src = src.transpose(0, 1)  # (seq_len, batch_size)
        tgt = tgt.transpose(0, 1)  # (seq_len, batch_size)
        src_mask = self.transformer.generate_square_subsequent_mask(src.size(0)).to(src.device)
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt.size(0)).to(tgt.device)
        src_padding_mask = (src == self.src_vocab['<pad>']).transpose(0, 1)
        tgt_padding_mask = (tgt == self.tgt_vocab['<pad>']).transpose(0, 1)
        src_embedded = self.positional_encoding(self.src_embedding(src) * math.sqrt(self.d_model))
        tgt_embedded = self.positional_encoding(self.tgt_embedding(tgt) * math.sqrt(self.d_model))
        output = self.transformer(src_embedded, tgt_embedded,
                                  src_mask, tgt_mask, None, src_padding_mask, tgt_padding_mask, src_padding_mask)
        return self.fc_out(output).transpose(0, 1)

Then, in the main function, define a helper that instantiates the Transformer model:

def initialize_model(src_vocab, tgt_vocab, d_model=512, nhead=8, num_encoder_layers=6, num_decoder_layers=6, dim_feedforward=2048, dropout=0.1):
    model = TransformerModel(src_vocab, tgt_vocab, d_model, nhead, num_encoder_layers, num_decoder_layers, dim_feedforward, dropout)
    return model
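As a sketch of calling it, any mapping that supports len() and a '<pad>' lookup can stand in for the real vocabularies built during preprocessing (the toy vocabularies and sizes below are assumptions for illustration only):

# Instantiate the model with stand-in vocabularies and run one dummy forward pass.
src_vocab = {'<pad>': 0, '<sos>': 1, '<eos>': 2, 'hello': 3, 'world': 4}
tgt_vocab = {'<pad>': 0, '<sos>': 1, '<eos>': 2, '你好': 3, '世界': 4}
model = initialize_model(src_vocab, tgt_vocab)

# Random token indices starting at 1 so that no position is treated as padding.
src = torch.randint(1, len(src_vocab), (4, 12))   # [batch size, src len]
tgt = torch.randint(1, len(tgt_vocab), (4, 10))   # [batch size, tgt len]
print(model(src, tgt).shape)                       # torch.Size([4, 10, 5])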
