I. Background
Competition link: 2024 iFLYTEK AI Developer Contest (iFLYTEK Open Platform)
After running the Task02 baseline from last time, our score was 3.9088 (much better than the original, but still not good enough). So we need to optimize the code further.
II. Optimization Approaches
1. Model optimization:
Deep-learning methods are well suited to building NLP models, so we decided to move to the classic Transformer model.
2. Data optimization:
(1) Data cleaning:
① A sample of the dataset is shown below. It is parallel Chinese-English text taken from talk scripts, and many stage-direction style sentences have been inserted. This noise has little to do with the actual content and easily interferes with machine translation (a cleaning sketch is given after this list).
For example: Oops. Sorry. (感叹声) 哦,意外。
“The world needs you, badly,” begins celebrated biologist E.O. Wilson in his letter to a young scientist. Previewing his upcoming book, he gives advice collected from a lifetime of experience -- reminding us that wonder and creativity are the center of the scientific life. <em></em> ”这个世界非常需要你“, 著名生物学家 E. O. Wilson 在他给一个年轻科学家的信中这样开头。作为对他即将出版的著作的一个预览,他给出了从自己一身经历总结出的几个忠告 -- 告诫我们奇迹和创造力是科学生活的中心。(摄于 TEDMED)
② In addition, the raw data contains subtler dirty data: parenthesized glosses added after terms. These also interfere with machine translation; the most direct fix is to keep the original technical term as-is.
For example:
At that time, it was the head of Caritas Germany. 当时,他是慈善德国(Caritas Germany)的负责人。
The structure of El Sistema is based on a new and flexible managing style adapted to the features of each community and region, and today attends to 300,000 children of the lower and middle class all over Venezuela. 我们的项目名为El Sistema(即西班牙语中的“体系”) 它采用的是一种新型而灵活的管理方式 能灵活地适应每个社区和地区的自身特点 现在有超过30万来自中低下阶层的孩子参与到这个项目来 遍及委内瑞拉全国。
③ Moreover, some reference translations are not what we would want (for example, "The world is sound" rendered as 世界是充满声音的).
3. Dictionary-guided translation.
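For the cleaning described in (1) above, a minimal regex-based sketch is shown below. The patterns are assumptions about what the noise looks like (stage directions in fullwidth or halfwidth parentheses, leftover HTML tags such as <em></em>) and may need adjusting; in particular, a blanket removal of parenthesized text would also drop the glosses discussed in ②.
- import re
-
- STAGE_DIRECTION = re.compile(r'[((][^))]*[))]')  # e.g. (感叹声), (掌声), (摄于 TEDMED)
- HTML_TAG = re.compile(r'<[^>]+>')                  # e.g. <em></em>
-
- def clean_zh(text: str) -> str:
-     """Strip bracketed stage directions and HTML remnants from a Chinese sentence."""
-     text = HTML_TAG.sub('', text)
-     text = STAGE_DIRECTION.sub('', text)
-     return text.strip()
-
- print(clean_zh('(感叹声) 哦,意外。'))  # -> 哦,意外。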
III. Transformer Background
Model architecture:
1. Basic architecture: the Transformer is adapted from the encoder-decoder design, and its key component is the self-attention mechanism.
2. Details
① The encoder input (the source sentence to be translated) and the decoder input (the target sentence) are fed in through separate input ends.
② At the input stage, a positional encoding is added to the token embeddings. Specifically:
Sinusoidal functions are used:
PE(pos, 2i) = sin(pos / 10000^(2i/d)), PE(pos, 2i+1) = cos(pos / 10000^(2i/d)),
where pos is the position of the token in the sequence, 2i and 2i+1 are the even and odd dimension indices, and d is the total embedding dimension.
③ The encoder input goes through self-attention (the query, key and value matrices Q, K, V are multiplied, and the scores are divided by the square root of the dimension d to avoid gradient problems), followed by residual addition and normalization (Add & Norm) and a position-wise feed-forward network; a minimal sketch of this scaled dot-product attention follows this list.
④ Between the encoder and the decoder there is also cross-attention, in which the decoder attends to the encoder outputs.
⑤ The structure described above is only a single layer; stacking multiple layers improves accuracy step by step.
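To make ③ concrete, here is a minimal PyTorch sketch of scaled dot-product attention. It is for illustration only, not part of the baseline; the tensor shapes are assumptions.
- import math
- import torch
- import torch.nn.functional as F
-
- def scaled_dot_product_attention(Q, K, V, mask=None):
-     """Q, K, V have shape (batch, seq_len, d_k); returns the attended values."""
-     d_k = Q.size(-1)
-     scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)  # scale by sqrt(d_k) to keep gradients stable
-     if mask is not None:
-         scores = scores.masked_fill(mask, float('-inf'))
-     weights = F.softmax(scores, dim=-1)
-     return weights @ V
-
- # Example: a batch of 2 sequences, 5 tokens each, 64-dimensional heads
- Q = K = V = torch.randn(2, 5, 64)
- print(scaled_dot_product_attention(Q, K, V).shape)  # torch.Size([2, 5, 64])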
Extensions of the Transformer:
1. Restricting the Transformer to unidirectional (left-to-right) attention, as in a decoder-only stack, gives the GPT family of models.
2. Using bidirectional attention, as in an encoder-only stack, gives BERT.
Applications: in this post, the Transformer is applied to machine translation.
IV. Baseline Code Walkthrough
Import the required libraries:
- import torch
- import torch.nn as nn
- import torch.nn.functional as F
- import torch.optim as optim
- from torch.nn.utils import clip_grad_norm_
- from torchtext.data.metrics import bleu_score
- from torch.utils.data import Dataset, DataLoader
- from torchtext.data.utils import get_tokenizer
- from torchtext.vocab import build_vocab_from_iterator
- from typing import List, Tuple
- import jieba
- import random
- from torch.nn.utils.rnn import pad_sequence
- import sacrebleu
- import time
- import math
Data loading and preprocessing:
read_data loads a text file into a list of lines.
preprocess_data tokenizes each sentence pair from the training set and returns the pairs as tuples.
build_vocab builds the English and Chinese vocabularies with the special tokens.
- # Define the tokenizers
- en_tokenizer = get_tokenizer('spacy', language='en_core_web_trf')  # requires the spaCy model en_core_web_trf
- zh_tokenizer = lambda x: list(jieba.cut(x))  # jieba word segmentation for Chinese
-
- # Read a file into a list of stripped lines
- def read_data(file_path: str) -> List[str]:
-     with open(file_path, 'r', encoding='utf-8') as f:
-         return [line.strip() for line in f]
-
- # Preprocess: tokenize each sentence pair and truncate to MAX_LENGTH
- def preprocess_data(en_data: List[str], zh_data: List[str]) -> List[Tuple[List[str], List[str]]]:
-     processed_data = []
-     for en, zh in zip(en_data, zh_data):
-         en_tokens = en_tokenizer(en.lower())[:MAX_LENGTH]
-         zh_tokens = zh_tokenizer(zh)[:MAX_LENGTH]
-         if en_tokens and zh_tokens:  # make sure neither sequence is empty
-             processed_data.append((en_tokens, zh_tokens))
-     return processed_data
-
- # Build the source and target vocabularies
- def build_vocab(data: List[Tuple[List[str], List[str]]]):
-     en_vocab = build_vocab_from_iterator(
-         (en for en, _ in data),
-         specials=['<unk>', '<pad>', '<bos>', '<eos>']
-     )
-     zh_vocab = build_vocab_from_iterator(
-         (zh for _, zh in data),
-         specials=['<unk>', '<pad>', '<bos>', '<eos>']
-     )
-     en_vocab.set_default_index(en_vocab['<unk>'])
-     zh_vocab.set_default_index(zh_vocab['<unk>'])
-     return en_vocab, zh_vocab
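As a quick sanity check of what build_vocab produces, here is an illustrative snippet (it assumes en_vocab has already been built with the function above; the non-special indices in the comment are made up):
- # Out-of-vocabulary words map to <unk>; the specials occupy the first indices
- print(en_vocab(['the', 'world', 'zzz-unseen']))                 # e.g. [5, 87, 0] -- 0 is <unk>
- print(en_vocab['<pad>'], en_vocab['<bos>'], en_vocab['<eos>'])  # 1 2 3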
Model definition and initialization:
This is the positional encoding plus the Transformer architecture with its forward pass.
- class PositionalEncoding(nn.Module):
-     def __init__(self, d_model, dropout=0.1, max_len=5000):
-         super(PositionalEncoding, self).__init__()
-         self.dropout = nn.Dropout(p=dropout)
-
-         pe = torch.zeros(max_len, d_model)
-         position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
-         div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
-         pe[:, 0::2] = torch.sin(position * div_term)
-         pe[:, 1::2] = torch.cos(position * div_term)
-         pe = pe.unsqueeze(0).transpose(0, 1)
-         self.register_buffer('pe', pe)
-
-     def forward(self, x):
-         x = x + self.pe[:x.size(0), :]
-         return self.dropout(x)
-
- class TransformerModel(nn.Module):
-     def __init__(self, src_vocab, tgt_vocab, d_model, nhead, num_encoder_layers, num_decoder_layers, dim_feedforward, dropout):
-         super(TransformerModel, self).__init__()
-         self.transformer = nn.Transformer(d_model, nhead, num_encoder_layers, num_decoder_layers, dim_feedforward, dropout)
-         self.src_embedding = nn.Embedding(len(src_vocab), d_model)
-         self.tgt_embedding = nn.Embedding(len(tgt_vocab), d_model)
-         self.positional_encoding = PositionalEncoding(d_model, dropout)
-         self.fc_out = nn.Linear(d_model, len(tgt_vocab))
-         self.src_vocab = src_vocab
-         self.tgt_vocab = tgt_vocab
-         self.d_model = d_model
-
-     def forward(self, src, tgt):
-         # reshape src and tgt into the (seq_len, batch_size) layout expected by nn.Transformer
-         src = src.transpose(0, 1)  # (seq_len, batch_size)
-         tgt = tgt.transpose(0, 1)  # (seq_len, batch_size)
-
-         # note: a causal mask is applied to the source as well; a standard encoder would pass src_mask=None
-         src_mask = self.transformer.generate_square_subsequent_mask(src.size(0)).to(src.device)
-         tgt_mask = self.transformer.generate_square_subsequent_mask(tgt.size(0)).to(tgt.device)
-
-         src_padding_mask = (src == self.src_vocab['<pad>']).transpose(0, 1)
-         tgt_padding_mask = (tgt == self.tgt_vocab['<pad>']).transpose(0, 1)
-
-         src_embedded = self.positional_encoding(self.src_embedding(src) * math.sqrt(self.d_model))
-         tgt_embedded = self.positional_encoding(self.tgt_embedding(tgt) * math.sqrt(self.d_model))
-
-         output = self.transformer(src_embedded, tgt_embedded,
-                                   src_mask, tgt_mask, None, src_padding_mask, tgt_padding_mask, src_padding_mask)
-         return self.fc_out(output).transpose(0, 1)
-
- def initialize_model(src_vocab, tgt_vocab, d_model=512, nhead=8, num_encoder_layers=6, num_decoder_layers=6, dim_feedforward=2048, dropout=0.1):
-     model = TransformerModel(src_vocab, tgt_vocab, d_model, nhead, num_encoder_layers, num_decoder_layers, dim_feedforward, dropout)  # build and return the Transformer model
-     return model
The training and evaluation loops:
- def train(model, iterator, optimizer, criterion, clip):
-     model.train()
-     epoch_loss = 0
-
-     for i, batch in enumerate(iterator):
-         src, tgt = batch
-         if src.numel() == 0 or tgt.numel() == 0:
-             continue
-
-         src, tgt = src.to(DEVICE), tgt.to(DEVICE)
-
-         optimizer.zero_grad()
-         output = model(src, tgt[:, :-1])  # teacher forcing: feed the target without its last token
-
-         output_dim = output.shape[-1]
-         output = output.contiguous().view(-1, output_dim)
-         tgt = tgt[:, 1:].contiguous().view(-1)  # compare against the target shifted by one position
-
-         loss = criterion(output, tgt)
-         loss.backward()
-
-         clip_grad_norm_(model.parameters(), clip)
-         optimizer.step()
-
-         epoch_loss += loss.item()
-
-     return epoch_loss / len(iterator)
-
- def evaluate(model, iterator, criterion):
-     model.eval()
-     epoch_loss = 0
-     with torch.no_grad():
-         for i, batch in enumerate(iterator):
-             src, tgt = batch
-             if src.numel() == 0 or tgt.numel() == 0:
-                 continue
-
-             src, tgt = src.to(DEVICE), tgt.to(DEVICE)
-
-             output = model(src, tgt[:, :-1])
-
-             output_dim = output.shape[-1]
-             output = output.contiguous().view(-1, output_dim)
-             tgt = tgt[:, 1:].contiguous().view(-1)
-
-             loss = criterion(output, tgt)
-             epoch_loss += loss.item()
-
-     return epoch_loss / len(iterator)
The translation (inference) function:
- def translate_sentence(src_indexes, src_vocab, tgt_vocab, model, device, max_length=50):
-     model.eval()
-
-     src_tensor = src_indexes.unsqueeze(0).to(device)  # add a batch dimension
-
-     with torch.no_grad():
-         # note: this encoder output is not reused below; the loop re-runs the full model at every step, which is simple but redundant
-         encoder_outputs = model.transformer.encoder(model.positional_encoding(model.src_embedding(src_tensor) * math.sqrt(model.d_model)))
-
-     trg_indexes = [tgt_vocab['<bos>']]
-     for i in range(max_length):
-         trg_tensor = torch.LongTensor(trg_indexes).unsqueeze(0).to(device)
-
-         with torch.no_grad():
-             output = model(src_tensor, trg_tensor)
-
-         pred_token = output.argmax(2)[:, -1].item()
-         trg_indexes.append(pred_token)
-
-         if pred_token == tgt_vocab['<eos>']:
-             break
-
-     trg_tokens = [tgt_vocab.get_itos()[i] for i in trg_indexes]
-     return trg_tokens[1:-1]  # drop the <bos> and <eos> markers
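For a single raw English sentence, inference looks roughly like the snippet below. It is illustrative only: it assumes the vocabularies, constants, and trained model are in scope, and whether <bos>/<eos> are appended to the source here must match how the data pipeline builds its tensors.
- sentence = "The world needs you."
- tokens = en_tokenizer(sentence.lower())[:MAX_LENGTH]
- indices = [en_vocab['<bos>']] + en_vocab(tokens) + [en_vocab['<eos>']]  # assumption: the source also gets <bos>/<eos>
- src_tensor = torch.tensor(indices, dtype=torch.long)
- zh_tokens = translate_sentence(src_tensor, en_vocab, zh_vocab, model, DEVICE)
- print(''.join(zh_tokens))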
The competition is scored with BLEU, so we first build an evaluation function to estimate the score locally and judge how good the model is:
- def calculate_bleu(dev_loader, src_vocab, tgt_vocab, model, device):
-     model.eval()
-     translations = []
-     references = []
-
-     with torch.no_grad():
-         for src, tgt in dev_loader:
-             src = src.to(device)
-             for sentence in src:
-                 translated = translate_sentence(sentence, src_vocab, tgt_vocab, model, device)
-                 translations.append(' '.join(translated))
-
-             for reference in tgt:
-                 ref_tokens = [tgt_vocab.get_itos()[idx] for idx in reference if idx not in [tgt_vocab['<bos>'], tgt_vocab['<eos>'], tgt_vocab['<pad>']]]
-                 references.append(' '.join(ref_tokens))
-
-     # sacrebleu expects a list of reference streams, each aligned with the hypotheses
-     bleu = sacrebleu.corpus_bleu(translations, [references])
-     return bleu.score
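Usage is a single call once the model and the dev loader exist (illustrative; greedy token-by-token decoding makes this slow on the full dev set):
- bleu = calculate_bleu(dev_loader, en_vocab, zh_vocab, model, DEVICE)
- print(f'Dev BLEU: {bleu:.2f}')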
The main training function, which drives the training across epochs:
- def train_model(model, train_iterator, valid_iterator, optimizer, criterion, N_EPOCHS=10, CLIP=1, save_path='../model/best-model_transformer.pt'):
-     best_valid_loss = float('inf')
-
-     for epoch in range(N_EPOCHS):
-         start_time = time.time()
-
-         # print(f"Starting Epoch {epoch + 1}")
-         train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
-         valid_loss = evaluate(model, valid_iterator, criterion)
-
-         end_time = time.time()
-         epoch_mins, epoch_secs = epoch_time(start_time, end_time)
-
-         if valid_loss < best_valid_loss:
-             best_valid_loss = valid_loss
-             torch.save(model.state_dict(), save_path)  # keep the checkpoint with the lowest validation loss
-
-         print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
-         print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
-         print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
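train_model calls an epoch_time helper that is not shown in this excerpt; a minimal version consistent with how it is used could be:
- def epoch_time(start_time, end_time):
-     """Split an elapsed wall-clock interval into whole minutes and seconds."""
-     elapsed = end_time - start_time
-     mins = int(elapsed / 60)
-     secs = int(elapsed - mins * 60)
-     return mins, secs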
Now set the constants and load everything from the dataset folder:
- # Constants
- MAX_LENGTH = 100  # maximum sentence length
- BATCH_SIZE = 32
- DEVICE = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
- N = 148363  # number of training samples to use (148363 at most)
-
- train_path = '../dataset/train.txt'
- dev_en_path = '../dataset/dev_en.txt'
- dev_zh_path = '../dataset/dev_zh.txt'
- test_en_path = '../dataset/test_en.txt'
-
- train_loader, dev_loader, test_loader, en_vocab, zh_vocab = load_data(
-     train_path, dev_en_path, dev_zh_path, test_en_path
- )
-
- print(f"English vocab size: {len(en_vocab)}")
- print(f"Chinese vocab size: {len(zh_vocab)}")
- print(f"Training set size: {len(train_loader.dataset)}")
- print(f"Dev set size: {len(dev_loader.dataset)}")
- print(f"Test set size: {len(test_loader.dataset)}")
Start training. The values below configure the network; N_EPOCHS is the number of training epochs.
- if __name__ == '__main__':
-
-     # Model hyperparameters
-     D_MODEL = 256
-     NHEAD = 8
-     NUM_ENCODER_LAYERS = 3
-     NUM_DECODER_LAYERS = 3
-     DIM_FEEDFORWARD = 512
-     DROPOUT = 0.1
-
-     N_EPOCHS = 10
-     CLIP = 1
-
-     # Initialize the model
-     model = initialize_model(en_vocab, zh_vocab, D_MODEL, NHEAD, NUM_ENCODER_LAYERS, NUM_DECODER_LAYERS, DIM_FEEDFORWARD, DROPOUT).to(DEVICE)
-     print(f'The model has {sum(p.numel() for p in model.parameters() if p.requires_grad):,} trainable parameters')
-
-     # Loss function (padding positions are ignored)
-     criterion = nn.CrossEntropyLoss(ignore_index=zh_vocab['<pad>'])
-     # Optimizer
-     optimizer = optim.Adam(model.parameters(), lr=0.0001, betas=(0.9, 0.98), eps=1e-9)
-
-     # Train the model
-     save_path = '../model/best-model_transformer.pt'
-     train_model(model, train_loader, dev_loader, optimizer, criterion, N_EPOCHS, CLIP, save_path=save_path)
-
-     print(f"Training finished! Model saved to: {save_path}")
Scoring tip #1: use a terminology dictionary to assist translation; known technical terms can be substituted in directly.
- # Load the terminology dictionary into a dict
- def load_dictionary(dict_path):
-     term_dict = {}
-     with open(dict_path, 'r', encoding='utf-8') as f:
-         data = f.read()
-     data = data.strip().split('\n')
-     source_term = [line.split('\t')[0] for line in data]
-     target_term = [line.split('\t')[1] for line in data]
-     for i in range(len(source_term)):
-         term_dict[source_term[i]] = target_term[i]
-     return term_dict
-
- def post_process_translation(translation, term_dict):
-     """Post-process with the terminology dictionary: replace any token that matches a source term."""
-     translated_words = [term_dict.get(word, word) for word in translation]
-     return "".join(translated_words)
-
- # Load your terminology dictionary (tab-separated source/target pairs)
- dict_path = '../dataset/en-zh.dic'  # path to the terminology dictionary file
- term_dict = load_dictionary(dict_path)
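A tiny illustration of how the post-processing behaves; the dictionary entry below is hypothetical and only mirrors the Caritas Germany example from the data section:
- demo_dict = {'Caritas Germany': '慈善德国(Caritas Germany)'}  # hypothetical entry
- print(post_process_translation(['当时', ',', '他', '是', 'Caritas Germany', '的', '负责人', '。'], demo_dict))
- # -> 当时,他是慈善德国(Caritas Germany)的负责人。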
The final step writes the results back to a file; the with-statement is used here mainly to guard against I/O errors:
- save_dir = '../results/submit_add_dict.txt'
- with open(save_dir, 'w', encoding='utf-8') as f:
-     for batch in test_loader:  # iterate over the whole test set
-         src, _ = batch
-         src = src.to(DEVICE)
-         for sentence in src:  # translate every sentence in the batch
-             translated = translate_sentence(sentence, en_vocab, zh_vocab, model, DEVICE)  # translation result (list of tokens)
-             results = post_process_translation(translated, term_dict)  # apply the terminology dictionary and join
-             f.write(results + '\n')  # one translated line per source sentence
- print(f"Translation finished; results saved to {save_dir}")
After submission, the score rose to 9.4297.
V. Some Thoughts After Task03
1. A semantic discrimination network could further refine the translations, filtering out content that does not fit the intended translation context.
2. On inspection, the texts to be translated come from whole articles, so there are cross-sentence dependencies; the Transformer module could try adding backward (bidirectional) context.
3. How can the data cleaning be made faster?
VI. Summary of Model Improvements from Task01 to Task03
1. Task01, initial version: a plain Encoder-Decoder model; results were mediocre.
2. Task01, revised version: added forward propagation on top of the initial version.
3. Task02: added an attention mechanism on top of Task01, which improved the translation score.
4. Task03: switched to the Transformer deep-learning model, plus dictionary assistance and data cleaning (though the cleaning is not finished yet).
#Datawhale #AI Summer Camp #NLP