
NLP: Surname Classification with MLP and CNN


1. The Multilayer Perceptron (MLP)

1.1 What is an MLP

    The multilayer perceptron (MLP) is the most basic feed-forward neural network and a common building block of deep learning models. It consists of an input layer, one or more hidden layers, and an output layer. Each layer is made up of several neurons, and every neuron is connected to every neuron in the next layer.

    In an MLP, each neuron applies an activation function to introduce non-linearity; common choices include Sigmoid, Tanh, and ReLU. By stacking layers of neurons and applying non-linear activations, an MLP can learn complex non-linear relationships, which makes it suitable for a wide range of machine learning tasks.

    MLPs are widely used in machine learning and deep learning and have performed well in tasks such as image recognition, natural language processing, and recommender systems. Their flexibility and capacity make them one of the basic building blocks of artificial neural networks.

    The basic network architecture is shown below:

Figure 1: MLP architecture

1.2 Comparison with the single-layer perceptron

    In the figure below, each data point's true class is given by its shape, star or circle. Misclassified points are filled in black; correctly classified points are left unfilled. The dashed lines are the decision boundaries of each model.

Figure 2: Perceptron vs. MLP on the XOR problem

    On the XOR problem, the left panel shows the result of a single perceptron: it produces only one linear decision boundary and classifies poorly, with almost half of the points filled in black. The right panel shows the MLP, which cleanly separates the circles from the stars. The plot appears to give the MLP two decision boundaries, and that is indeed its advantage, but in fact there is only one: the intermediate (hidden) representation warps the space so that a single hyperplane shows up in both of those positions.
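    To make this concrete, here is a minimal PyTorch sketch (not part of the original experiment; the layer sizes and training settings are illustrative assumptions) showing that one hidden layer is usually enough to fit the four XOR points that a single linear perceptron cannot separate:

import torch
import torch.nn as nn

# the four XOR points and their labels (toy data for illustration)
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

# one hidden layer bends the space so that a single output hyperplane separates the classes
model = nn.Sequential(
    nn.Linear(2, 4),
    nn.ReLU(),
    nn.Linear(4, 1),
    nn.Sigmoid(),
)

optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for _ in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(model(X).round())   # typically [[0.], [1.], [1.], [0.]]

    With the hidden layer removed (a single nn.Linear(2, 1) plus a sigmoid), the same loop cannot drive the loss to zero, which is exactly the limitation illustrated in the left panel of Figure 2.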

2. Surname Classification with an MLP

2.1 Importing libraries

from argparse import Namespace
from collections import Counter
import json
import os
import string

import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from tqdm import tqdm_notebook

2.2 Classes for vectorizing the data

2.2.1 The Vocabulary class

class Vocabulary(object):
    def __init__(self, token_to_idx=None, add_unk=True, unk_token="<UNK>"):
        if token_to_idx is None:
            token_to_idx = {}
        self._token_to_idx = token_to_idx
        # build the reverse mapping from index to token
        self._idx_to_token = {idx: token
                              for token, idx in self._token_to_idx.items()}
        self._add_unk = add_unk
        self._unk_token = unk_token
        self.unk_index = -1
        if add_unk:
            # add the unknown token and remember its index as unk_index
            self.unk_index = self.add_token(unk_token)

    def to_serializable(self):
        return {'token_to_idx': self._token_to_idx,
                'add_unk': self._add_unk,
                'unk_token': self._unk_token}

    @classmethod
    def from_serializable(cls, contents):
        return cls(**contents)

    def add_token(self, token):
        try:
            index = self._token_to_idx[token]
        except KeyError:
            index = len(self._token_to_idx)
            self._token_to_idx[token] = index
            self._idx_to_token[index] = token
        return index

    def add_many(self, tokens):
        return [self.add_token(token) for token in tokens]

    def lookup_token(self, token):
        if self.unk_index >= 0:
            return self._token_to_idx.get(token, self.unk_index)
        else:
            return self._token_to_idx[token]

    def lookup_index(self, index):
        if index not in self._idx_to_token:
            raise KeyError("the index (%d) is not in the Vocabulary" % index)
        return self._idx_to_token[index]

    def __str__(self):
        return "<Vocabulary(size=%d)>" % len(self)

    def __len__(self):
        return len(self._token_to_idx)

    The Vocabulary class is initialized with three parameters: token_to_idx (dict), a pre-existing mapping from tokens to indices (default None); add_unk (bool), whether to add an unknown token (default True); and unk_token (str), the string used for the unknown token (default "<UNK>").

    这个"Vocabulary"的类,用于构建一个词汇表。词汇表是一个将标记映射到唯一索引的数据结构。这个类提供了初始化一个空的词汇表或者从现有的标记到索引的字典初始化词汇表。再将标记添加到词汇表中,并返回标记在词汇表中对应的索引。并且可以批量添加多个标记到词汇表中,并返回每个标记在词汇表中对应的索引列表。同时可以根据标记查找对应的索引。也可以根据索引查找对应的标记。然后将词汇表对象转化为可序列化的字典,以便保存到文件或进行网络传输。最后从可序列化的字典中重新创建词汇表对象。
 

2.2.2 The SurnameVectorizer class

class SurnameVectorizer(object):
    def __init__(self, surname_vocab, nationality_vocab):
        self.surname_vocab = surname_vocab
        self.nationality_vocab = nationality_vocab

    def vectorize(self, surname):
        vocab = self.surname_vocab
        one_hot = np.zeros(len(vocab), dtype=np.float32)
        for token in surname:
            one_hot[vocab.lookup_token(token)] = 1
        return one_hot

    @classmethod
    def from_dataframe(cls, surname_df):
        surname_vocab = Vocabulary(unk_token="@")
        nationality_vocab = Vocabulary(add_unk=False)
        for index, row in surname_df.iterrows():
            for letter in row.surname:
                surname_vocab.add_token(letter)
            nationality_vocab.add_token(row.nationality)
        return cls(surname_vocab, nationality_vocab)

    @classmethod
    def from_serializable(cls, contents):
        surname_vocab = Vocabulary.from_serializable(contents['surname_vocab'])
        nationality_vocab = Vocabulary.from_serializable(contents['nationality_vocab'])
        return cls(surname_vocab=surname_vocab, nationality_vocab=nationality_vocab)

    def to_serializable(self):
        return {'surname_vocab': self.surname_vocab.to_serializable(),
                'nationality_vocab': self.nationality_vocab.to_serializable()}

    The SurnameVectorizer class turns a surname into a vector representation. It uses two Vocabulary objects to manage the token-to-index mappings for surnames and nationalities. A SurnameVectorizer is constructed from a surname vocabulary and a nationality vocabulary. The vectorize method produces a collapsed one-hot vector: it creates an all-zero vector of length len(surname_vocab) and sets the position of each character that appears in the surname to 1.

    The class method from_dataframe builds a SurnameVectorizer from a DataFrame of surnames: it walks over every row, adds each character of the surname to the surname vocabulary and the nationality to the nationality vocabulary, and returns a SurnameVectorizer initialized with those vocabularies.

    The class method from_serializable recreates a SurnameVectorizer from a serializable dictionary, and to_serializable converts the object into such a dictionary so it can be saved to a file or transmitted.
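    A small usage sketch may help; the two-row DataFrame below is made up for illustration and stands in for the real surname CSV:

import pandas as pd

toy_df = pd.DataFrame({"surname": ["Smith", "Zhang"],
                       "nationality": ["English", "Chinese"]})
vectorizer = SurnameVectorizer.from_dataframe(toy_df)

one_hot = vectorizer.vectorize("Smith")
print(one_hot.shape)   # (len(surname_vocab),) -- one slot per known character, here (10,)
print(one_hot.sum())   # 5.0: the number of distinct characters of "Smith" found in the vocabulary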

2.3 Defining the dataset

class SurnameDataset(Dataset):
    def __init__(self, surname_df, vectorizer):
        self.surname_df = surname_df
        self._vectorizer = vectorizer

        self.train_df = self.surname_df[self.surname_df.split=='train']
        self.train_size = len(self.train_df)
        self.val_df = self.surname_df[self.surname_df.split=='val']
        self.validation_size = len(self.val_df)
        self.test_df = self.surname_df[self.surname_df.split=='test']
        self.test_size = len(self.test_df)

        self._lookup_dict = {'train': (self.train_df, self.train_size),
                             'val': (self.val_df, self.validation_size),
                             'test': (self.test_df, self.test_size)}
        self.set_split('train')

        class_counts = surname_df.nationality.value_counts().to_dict()
        def sort_key(item):
            return self._vectorizer.nationality_vocab.lookup_token(item[0])
        sorted_counts = sorted(class_counts.items(), key=sort_key)
        frequencies = [count for _, count in sorted_counts]
        self.class_weights = 1.0 / torch.tensor(frequencies, dtype=torch.float32)

    @classmethod
    def load_dataset_and_make_vectorizer(cls, surname_csv):
        surname_df = pd.read_csv(surname_csv)
        train_surname_df = surname_df[surname_df.split=='train']
        return cls(surname_df, SurnameVectorizer.from_dataframe(train_surname_df))

    @classmethod
    def load_dataset_and_load_vectorizer(cls, surname_csv, vectorizer_filepath):
        surname_df = pd.read_csv(surname_csv)
        vectorizer = cls.load_vectorizer_only(vectorizer_filepath)
        return cls(surname_df, vectorizer)

    @staticmethod
    def load_vectorizer_only(vectorizer_filepath):
        with open(vectorizer_filepath) as fp:
            return SurnameVectorizer.from_serializable(json.load(fp))

    def save_vectorizer(self, vectorizer_filepath):
        with open(vectorizer_filepath, "w") as fp:
            json.dump(self._vectorizer.to_serializable(), fp)

    def get_vectorizer(self):
        return self._vectorizer

    def set_split(self, split="train"):
        self._target_split = split
        self._target_df, self._target_size = self._lookup_dict[split]

    def __len__(self):
        return self._target_size

    def __getitem__(self, index):
        row = self._target_df.iloc[index]
        surname_vector = self._vectorizer.vectorize(row.surname)
        nationality_index = self._vectorizer.nationality_vocab.lookup_token(row.nationality)
        return {'x_surname': surname_vector,
                'y_nationality': nationality_index}

    def get_num_batches(self, batch_size):
        return len(self) // batch_size


def generate_batches(dataset, batch_size, shuffle=True,
                     drop_last=True, device="cpu"):
    dataloader = DataLoader(dataset=dataset, batch_size=batch_size,
                            shuffle=shuffle, drop_last=drop_last)
    for data_dict in dataloader:
        out_data_dict = {}
        for name, tensor in data_dict.items():
            out_data_dict[name] = data_dict[name].to(device)
        yield out_data_dict

    The SurnameDataset class loads and manages the surname dataset and provides helpers for batching.

    A SurnameDataset is constructed from the surname DataFrame and a SurnameVectorizer. Based on the DataFrame's split column, it partitions the data into training, validation, and test sets and records the size of each split.
    An internal dictionary, _lookup_dict, maps a split name to the corresponding DataFrame and its size, and set_split switches which split the dataset currently serves. get_num_batches computes the number of batches for a given batch size. The class method load_dataset_and_make_vectorizer loads the dataset from a CSV file and builds a vectorizer from the training split, while load_dataset_and_load_vectorizer loads the dataset from the CSV file together with a previously saved vectorizer.
    The module-level helper generate_batches is a generator that wraps PyTorch's DataLoader to produce batches and move every tensor in each batch onto the specified device.
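    A brief sketch of how these pieces fit together (the CSV path comes from the configuration below; the batch size is illustrative):

dataset = SurnameDataset.load_dataset_and_make_vectorizer("data/surnames_with_splits.csv")
dataset.set_split('train')

for batch_dict in generate_batches(dataset, batch_size=64, device="cpu"):
    print(batch_dict['x_surname'].shape)      # torch.Size([64, len(surname_vocab)])
    print(batch_dict['y_nationality'].shape)  # torch.Size([64])
    break                                     # look at a single batch only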

2.4 The MLP model class

class SurnameClassifier(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(SurnameClassifier, self).__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, output_dim)

    def forward(self, x_in, apply_softmax=False):
        intermediate_vector = F.relu(self.fc1(x_in))
        prediction_vector = self.fc2(intermediate_vector)
        if apply_softmax:
            prediction_vector = F.softmax(prediction_vector, dim=1)
        return prediction_vector

    Here we define the MLP classifier for the surname classification task.

    A SurnameClassifier is constructed from an input dimension, a hidden dimension, and an output dimension. The constructor creates two fully connected layers: self.fc1, the linear map from the input to the hidden layer, and self.fc2, the linear map from the hidden layer to the output.
    The forward method takes an input tensor x_in, passes it through self.fc1 followed by a ReLU activation to obtain the intermediate vector, and then through self.fc2 to obtain the prediction vector. When apply_softmax is True, a softmax turns the scores into probabilities.
    In this simple two-layer fully connected network, the input dimension is the size of the input vector (the surname vocabulary), the hidden dimension controls model capacity, and the output dimension is the number of classes (nationalities).
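    For example (the dimensions below are placeholders; in the experiment they come from the vocabulary sizes and args.hidden_dim):

classifier = SurnameClassifier(input_dim=85, hidden_dim=100, output_dim=18)

x = torch.rand(32, 85)                      # a batch of 32 collapsed one-hot surname vectors
logits = classifier(x)                      # shape (32, 18): raw scores, suitable for CrossEntropyLoss
probs = classifier(x, apply_softmax=True)   # shape (32, 18): each row sums to 1
print(logits.shape, probs.sum(dim=1))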

2.5 Training and evaluation functions

2.5.1 Computing accuracy

def compute_accuracy(y_pred, y_target):
    _, y_pred_indices = y_pred.max(dim=1)
    n_correct = torch.eq(y_pred_indices, y_target).sum().item()
    return n_correct / len(y_pred_indices) * 100

    Here we define a helper function that computes the model's accuracy (as a percentage) from the predicted scores and the target labels.
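    A tiny check of the helper on hand-made predictions (hypothetical values):

y_pred = torch.tensor([[0.1, 0.9],    # predicted class 1
                       [0.8, 0.2],    # predicted class 0
                       [0.3, 0.7]])   # predicted class 1
y_target = torch.tensor([1, 0, 0])    # the third prediction is wrong

print(compute_accuracy(y_pred, y_target))   # 66.66... (2 out of 3 correct)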

2.5.2 Training the model

def train_model(model, dataset, vectorizer, optimizer, loss_func, num_epochs, batch_size, device):
    model = model.to(device)
    for epoch in range(num_epochs):
        dataset.set_split('train')
        batch_generator = generate_batches(dataset, batch_size=batch_size, device=device)
        running_loss = 0.0
        running_acc = 0.0
        model.train()
        for batch_index, batch_dict in enumerate(batch_generator):
            optimizer.zero_grad()
            y_pred = model(batch_dict['x_surname'])
            loss = loss_func(y_pred, batch_dict['y_nationality'])
            loss_t = loss.item()
            running_loss += (loss_t - running_loss) / (batch_index + 1)
            loss.backward()
            optimizer.step()
            acc_t = compute_accuracy(y_pred, batch_dict['y_nationality'])
            running_acc += (acc_t - running_acc) / (batch_index + 1)
        print(f"Epoch {epoch+1}/{num_epochs} - Loss: {running_loss:.4f}, Accuracy: {running_acc:.4f}")

    This is the core training function: it runs the forward pass, computes the loss, backpropagates, and updates the parameters, reporting a running loss and accuracy for each epoch.

2.5.3 Validating the model

def validate_model(model, dataset, vectorizer, loss_func, batch_size, device):
    dataset.set_split('val')
    batch_generator = generate_batches(dataset, batch_size=batch_size, device=device)
    running_loss = 0.0
    running_acc = 0.0
    model.eval()
    with torch.no_grad():
        for batch_index, batch_dict in enumerate(batch_generator):
            y_pred = model(batch_dict['x_surname'])
            loss = loss_func(y_pred, batch_dict['y_nationality'])
            loss_t = loss.item()
            running_loss += (loss_t - running_loss) / (batch_index + 1)
            acc_t = compute_accuracy(y_pred, batch_dict['y_nationality'])
            running_acc += (acc_t - running_acc) / (batch_index + 1)
    print(f"Validation - Loss: {running_loss:.4f}, Accuracy: {running_acc:.4f}")

    The parameters are (for reference):

model: the model to validate
dataset: the dataset (validation split)
vectorizer: the dataset's vectorizer
loss_func: the loss function
batch_size: the batch size
device: the device to run on

    The function generates batches for the validation split with the given batch size and device and initializes the running loss and accuracy to 0.0.
The model is put in evaluation mode with model.eval(), and the torch.no_grad() context disables gradient tracking to save memory and computation. For each batch, the function runs a forward pass, computes the loss from the predictions and the true labels and folds it into running_loss, computes the accuracy and folds it into running_acc, and finally prints the average validation loss and accuracy.

2.6 Running the experiment

2.6.1 Configuration

args = Namespace(
    surname_csv="data/surnames_with_splits.csv",
    vectorizer_file="vectorizer.json",
    model_state_file="model.pth",
    save_dir="model_storage/ch4/surname_mlp",
    reload_from_files=False,
    expand_filepaths_to_save_dir=True,
    cuda=True,
    seed=1337,
    learning_rate=0.001,
    batch_size=64,
    num_epochs=100,
    early_stopping_criteria=5,
    hidden_dim=100,
)

    Here we set the experiment's hyperparameters: learning rate, batch size, hidden dimension, number of epochs, and so on.

2.6.2 Training and validating the model

if not torch.cuda.is_available():
    args.cuda = False
args.device = torch.device("cuda" if args.cuda else "cpu")

np.random.seed(args.seed)
torch.manual_seed(args.seed)

dataset = SurnameDataset.load_dataset_and_make_vectorizer(args.surname_csv)
vectorizer = dataset.get_vectorizer()
model = SurnameClassifier(input_dim=len(vectorizer.surname_vocab),
                          hidden_dim=args.hidden_dim,
                          output_dim=len(vectorizer.nationality_vocab))
optimizer = optim.Adam(model.parameters(), lr=args.learning_rate)
loss_func = nn.CrossEntropyLoss()

train_model(model, dataset, vectorizer, optimizer, loss_func,
            args.num_epochs, args.batch_size, args.device)
validate_model(model, dataset, vectorizer, loss_func, args.batch_size, args.device)

    We load the dataset and vectorizer, initialize the model and the optimizer, and run the training and validation loops described above.

2.7 Saving the model and the vectorizer

torch.save(model.state_dict(), args.model_state_file)
dataset.save_vectorizer(args.vectorizer_file)

    After training, we save the model state and the vectorizer for later use.

2.8 Testing the model

# reload the trained model state
model.load_state_dict(torch.load(args.model_state_file))
model = model.to(args.device)

# evaluate on the test split
dataset.set_split('test')
batch_generator = generate_batches(dataset, batch_size=args.batch_size, device=args.device)
running_acc = 0.0
model.eval()
with torch.no_grad():
    for batch_index, batch_dict in enumerate(batch_generator):
        y_pred = model(batch_dict['x_surname'])
        acc_t = compute_accuracy(y_pred, batch_dict['y_nationality'])
        running_acc += (acc_t - running_acc) / (batch_index + 1)
print(f"Test Accuracy: {running_acc:.4f}")

    We load the saved model state and evaluate it on the test split.

Figure 3: Resulting loss and accuracy

2.9 Making predictions

def predict_nationality(model, surname, vectorizer, max_length):
    model.eval()
    vectorized_surname = torch.tensor(vectorizer.vectorize(surname)).unsqueeze(0)
    result = model(vectorized_surname, apply_softmax=True)
    probability_values, indices = result.max(dim=1)
    predicted_nationality = vectorizer.nationality_vocab.lookup_index(indices.item())
    return {'nationality': predicted_nationality, 'probability': probability_values.item()}

# example prediction (move the model to the CPU so it matches the CPU input tensor)
model = model.cpu()
new_surname = "Smith"
prediction = predict_nationality(model, new_surname, vectorizer, max_length=20)
print(f"Surname: {new_surname} -> Nationality: {prediction['nationality']} "
      f"(Probability: {prediction['probability']:.4f})")

    We use this function to run the trained model on a new surname and observe its prediction.

Figure 4: Prediction example

Processing the results

def get_top_k_predictions(model, surname, vectorizer, k=5):
    model.eval()
    vectorized_surname = torch.tensor(vectorizer.vectorize(surname)).unsqueeze(0)
    result = model(vectorized_surname, apply_softmax=True)
    probability_values, indices = torch.topk(result, k)
    predicted_nationalities = [vectorizer.nationality_vocab.lookup_index(idx)
                               for idx in indices[0].tolist()]
    probabilities = probability_values[0].tolist()
    return list(zip(predicted_nationalities, probabilities))

# example: top-k predictions
top_k_predictions = get_top_k_predictions(model, "Smith", vectorizer, k=5)
for nationality, probability in top_k_predictions:
    print(f"Nationality: {nationality} (Probability: {probability:.4f})")

    The input surname is vectorized into a tensor with an extra batch dimension and passed through the model with apply_softmax=True so that the output is a probability distribution. torch.topk extracts the K highest probabilities and their indices, and the indices are mapped back to nationality labels through the vectorizer's nationality vocabulary. The labels and probabilities are zipped into a list of tuples and returned.
    In the example we call get_top_k_predictions on the surname "Smith" to get the model's top 5 predictions, then loop over the results and print each nationality with its probability. The function can be used to obtain the top K predictions, with labels and probabilities, for any given surname.

3. Convolutional Neural Networks (CNN)

3.1 What is a CNN

    Although MLPs perform well on many tasks, they can struggle with large, high-dimensional inputs. Other network architectures, such as convolutional neural networks (CNNs), are often better suited to such data and tasks.

    

Figure 5: A two-dimensional convolution

    A CNN is built from an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer. Convolutional and pooling layers usually alternate: a convolutional layer is followed by a pooling layer, which is followed by another convolutional layer, and so on. Each neuron in a convolutional layer's output feature map is connected only to a local region of its input; its value is the weighted sum of that local region plus a bias. This operation is exactly a convolution, which is where the network gets its name [6].

Figure 6: Example stack: convolution – pooling – convolution – pooling – convolution – fully connected
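    The "weighted sum of a local window plus a bias" described above is easy to verify with a tiny 1-D example (the kernel weights here are set by hand purely for illustration):

import torch
import torch.nn as nn

conv = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=3)
with torch.no_grad():
    conv.weight.fill_(1.0)   # kernel [1, 1, 1]
    conv.bias.fill_(0.5)

x = torch.tensor([[[1., 2., 3., 4., 5.]]])   # (batch=1, channels=1, length=5)
print(conv(x))
# tensor([[[ 6.5000,  9.5000, 12.5000]]]) -- each output is the sum of a 3-wide window plus the bias 0.5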

3.2 Advantages of CNNs

1. A CNN learns features from the input automatically; no hand-designed features are required.

2. Convolutions share parameters, which reduces the number of model parameters and lowers the risk of overfitting.

3. The deep, layered structure of a CNN gradually learns increasingly abstract, higher-level features.

4. CNNs scale to large datasets and can be trained on large amounts of data.

4. Surname Classification with a CNN

4.1 Importing libraries

from argparse import Namespace
from collections import Counter
import json
import os
import string

import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from tqdm import tqdm_notebook

    We first import the libraries we need: argparse, collections, json, os, numpy, pandas, torch, and so on.

4.2 Vectorizing the data

4.2.1 The Vocabulary class

class Vocabulary(object):
    def __init__(self, token_to_idx=None, add_unk=True, unk_token="<UNK>"):
        if token_to_idx is None:
            token_to_idx = {}
        self._token_to_idx = token_to_idx
        self._idx_to_token = {idx: token
                              for token, idx in self._token_to_idx.items()}
        self._add_unk = add_unk
        self._unk_token = unk_token
        self.unk_index = -1
        if add_unk:
            self.unk_index = self.add_token(unk_token)

    def to_serializable(self):
        return {'token_to_idx': self._token_to_idx,
                'add_unk': self._add_unk,
                'unk_token': self._unk_token}

    @classmethod
    def from_serializable(cls, contents):
        return cls(**contents)

    def add_token(self, token):
        try:
            index = self._token_to_idx[token]
        except KeyError:
            index = len(self._token_to_idx)
            self._token_to_idx[token] = index
            self._idx_to_token[index] = token
        return index

    def add_many(self, tokens):
        return [self.add_token(token) for token in tokens]

    def lookup_token(self, token):
        if self.unk_index >= 0:
            return self._token_to_idx.get(token, self.unk_index)
        else:
            return self._token_to_idx[token]

    def lookup_index(self, index):
        if index not in self._idx_to_token:
            raise KeyError("the index (%d) is not in the Vocabulary" % index)
        return self._idx_to_token[index]

    def __str__(self):
        return "<Vocabulary(size=%d)>" % len(self)

    def __len__(self):
        return len(self._token_to_idx)

    We again define a Vocabulary class that maps tokens to indices. __init__ can take a pre-existing token_to_idx mapping and optionally adds an UNK token. to_serializable converts the Vocabulary to a serializable dictionary, and from_serializable instantiates one from such a dictionary. add_token updates the mapping for a token and returns its index; add_many adds a list of tokens and returns their indices. lookup_token returns the index of a token, falling back to the UNK index when the token is unknown; lookup_index returns the token for an index and raises a KeyError if the index does not exist. Finally, __str__ returns a string representation of the Vocabulary and __len__ returns its size.

4.2.2 The SurnameVectorizer class

class SurnameVectorizer(object):
    def __init__(self, surname_vocab, nationality_vocab, max_surname_length):
        self.surname_vocab = surname_vocab
        self.nationality_vocab = nationality_vocab
        self._max_surname_length = max_surname_length

    def vectorize(self, surname):
        one_hot_matrix_size = (len(self.surname_vocab), self._max_surname_length)
        one_hot_matrix = np.zeros(one_hot_matrix_size, dtype=np.float32)
        for position_index, character in enumerate(surname):
            character_index = self.surname_vocab.lookup_token(character)
            one_hot_matrix[character_index][position_index] = 1
        return one_hot_matrix

    @classmethod
    def from_dataframe(cls, surname_df):
        surname_vocab = Vocabulary(unk_token="@")
        nationality_vocab = Vocabulary(add_unk=False)
        max_surname_length = 0
        for index, row in surname_df.iterrows():
            max_surname_length = max(max_surname_length, len(row.surname))
            for letter in row.surname:
                surname_vocab.add_token(letter)
            nationality_vocab.add_token(row.nationality)
        return cls(surname_vocab, nationality_vocab, max_surname_length)

    @classmethod
    def from_serializable(cls, contents):
        surname_vocab = Vocabulary.from_serializable(contents['surname_vocab'])
        nationality_vocab = Vocabulary.from_serializable(contents['nationality_vocab'])
        return cls(surname_vocab=surname_vocab, nationality_vocab=nationality_vocab,
                   max_surname_length=contents['max_surname_length'])

    def to_serializable(self):
        return {'surname_vocab': self.surname_vocab.to_serializable(),
                'nationality_vocab': self.nationality_vocab.to_serializable(),
                'max_surname_length': self._max_surname_length}

    The SurnameVectorizer coordinates the two Vocabularies and performs the vectorization. For the CNN, it maps a surname to a one-hot matrix rather than a single collapsed vector. It is constructed from a surname Vocabulary, a nationality Vocabulary, and the length of the longest surname. vectorize turns an input surname string into a one-hot matrix with one column per character position. from_dataframe instantiates the vectorizer from the surname DataFrame, also computing max_surname_length; from_serializable instantiates it from a serializable dictionary, and to_serializable converts it back into such a dictionary.
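    The shape of the resulting matrix is easiest to see on a toy example (the DataFrame below is hypothetical):

import pandas as pd

toy_df = pd.DataFrame({"surname": ["Smith", "Zhang"],
                       "nationality": ["English", "Chinese"]})
vec = SurnameVectorizer.from_dataframe(toy_df)

matrix = vec.vectorize("Zhang")
print(matrix.shape)        # (10, 5): (len(surname_vocab), max_surname_length)
print(matrix.sum(axis=0))  # [1. 1. 1. 1. 1.] -- exactly one 1 per character position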

4.3 Defining the dataset

class SurnameDataset(Dataset):
    def __init__(self, surname_df, vectorizer):
        self.surname_df = surname_df
        self._vectorizer = vectorizer

        self.train_df = self.surname_df[self.surname_df.split=='train']
        self.train_size = len(self.train_df)
        self.val_df = self.surname_df[self.surname_df.split=='val']
        self.validation_size = len(self.val_df)
        self.test_df = self.surname_df[self.surname_df.split=='test']
        self.test_size = len(self.test_df)

        self._lookup_dict = {'train': (self.train_df, self.train_size),
                             'val': (self.val_df, self.validation_size),
                             'test': (self.test_df, self.test_size)}
        self.set_split('train')

        # Class weights
        class_counts = surname_df.nationality.value_counts().to_dict()
        def sort_key(item):
            return self._vectorizer.nationality_vocab.lookup_token(item[0])
        sorted_counts = sorted(class_counts.items(), key=sort_key)
        frequencies = [count for _, count in sorted_counts]
        self.class_weights = 1.0 / torch.tensor(frequencies, dtype=torch.float32)

    @classmethod
    def load_dataset_and_make_vectorizer(cls, surname_csv):
        surname_df = pd.read_csv(surname_csv)
        train_surname_df = surname_df[surname_df.split=='train']
        return cls(surname_df, SurnameVectorizer.from_dataframe(train_surname_df))

    @classmethod
    def load_dataset_and_load_vectorizer(cls, surname_csv, vectorizer_filepath):
        surname_df = pd.read_csv(surname_csv)
        vectorizer = cls.load_vectorizer_only(vectorizer_filepath)
        return cls(surname_df, vectorizer)

    @staticmethod
    def load_vectorizer_only(vectorizer_filepath):
        with open(vectorizer_filepath) as fp:
            return SurnameVectorizer.from_serializable(json.load(fp))

    def save_vectorizer(self, vectorizer_filepath):
        with open(vectorizer_filepath, "w") as fp:
            json.dump(self._vectorizer.to_serializable(), fp)

    def get_vectorizer(self):
        return self._vectorizer

    def set_split(self, split="train"):
        self._target_split = split
        self._target_df, self._target_size = self._lookup_dict[split]

    def __len__(self):
        return self._target_size

    def __getitem__(self, index):
        row = self._target_df.iloc[index]
        surname_matrix = self._vectorizer.vectorize(row.surname)
        nationality_index = self._vectorizer.nationality_vocab.lookup_token(row.nationality)
        return {'x_surname': surname_matrix,
                'y_nationality': nationality_index}

    def get_num_batches(self, batch_size):
        return len(self) // batch_size


def generate_batches(dataset, batch_size, shuffle=True,
                     drop_last=True, device="cpu"):
    dataloader = DataLoader(dataset=dataset, batch_size=batch_size,
                            shuffle=shuffle, drop_last=drop_last)
    for data_dict in dataloader:
        out_data_dict = {}
        for name, tensor in data_dict.items():
            out_data_dict[name] = data_dict[name].to(device)
        yield out_data_dict

    The SurnameDataset class loads and manages the dataset. The constructor takes the surname DataFrame and a Vectorizer, splits the data into training, validation, and test sets according to the split column, and records each split's size. A _lookup_dict maps a split name to the corresponding DataFrame and size, and set_split selects the split currently in use.

    load_dataset_and_make_vectorizer builds the dataset and a fresh vectorizer from the CSV file, while load_dataset_and_load_vectorizer also loads a previously saved vectorizer; load_vectorizer_only and save_vectorizer load and save the vectorizer on its own, and get_vectorizer returns the vectorizer in use. __len__ returns the size of the current split. __getitem__ vectorizes the surname at the given index into a matrix, looks up the index of its nationality, and returns both in a dictionary. get_num_batches returns the number of batches for a given batch size, and the module-level generate_batches function wraps DataLoader to yield batches whose tensors have been moved to the requested device.

4.4 The CNN classifier architecture

class SurnameClassifier(nn.Module):
    def __init__(self, initial_num_channels, num_classes, num_channels):
        super(SurnameClassifier, self).__init__()
        self.convnet = nn.Sequential(
            nn.Conv1d(in_channels=initial_num_channels,
                      out_channels=num_channels, kernel_size=3),
            nn.ELU(),
            nn.Conv1d(in_channels=num_channels, out_channels=num_channels,
                      kernel_size=3, stride=2),
            nn.ELU(),
            nn.Conv1d(in_channels=num_channels, out_channels=num_channels,
                      kernel_size=3, stride=2),
            nn.ELU(),
            nn.Conv1d(in_channels=num_channels, out_channels=num_channels,
                      kernel_size=3),
            nn.ELU()
        )
        self.fc = nn.Linear(num_channels, num_classes)

    def forward(self, x_surname, apply_softmax=False):
        features = self.convnet(x_surname).squeeze(dim=2)
        prediction_vector = self.fc(features)
        if apply_softmax:
            prediction_vector = F.softmax(prediction_vector, dim=1)
        return prediction_vector

    The SurnameClassifier class implements the convolutional network for surname classification. It is constructed from the number of input channels (the size of the character vocabulary), the number of classes, and the number of channels used in the convolutional layers, and it defines a stack of Conv1d layers with ELU activations followed by a fully connected output layer. The forward method runs the input surname matrices through the convnet to obtain a feature vector (the final sequence dimension is squeezed out), then through the fully connected layer to obtain the prediction vector; if apply_softmax is True, the predictions are normalized with a softmax.
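    To see how the surname matrix collapses into a single feature vector, here is a shape trace with illustrative sizes (say a character vocabulary of 77 symbols, a maximum surname length of 17, and num_channels=256; the real values come from the vectorizer and args):

clf = SurnameClassifier(initial_num_channels=77, num_classes=18, num_channels=256)

x = torch.rand(4, 77, 17)      # (batch, vocab_size, max_surname_length)
# sequence length after each conv (kernel_size=3, no padding):
#   17 -> 15 (stride 1) -> 7 (stride 2) -> 3 (stride 2) -> 1 (stride 1)
print(clf.convnet(x).shape)    # torch.Size([4, 256, 1])
print(clf(x).shape)            # torch.Size([4, 18]) after squeeze(dim=2) and the linear layer

    Note that the squeeze(dim=2) in forward assumes the convolution stack reduces the sequence dimension to exactly 1, so the stack and max_surname_length have to be compatible.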

4.5 Training and evaluating the model

4.5.1 The make_train_state function

def make_train_state(args):
    return {'stop_early': False,
            'early_stopping_step': 0,
            'early_stopping_best_val': 1e8,
            'learning_rate': args.learning_rate,
            'epoch_index': 0,
            'train_loss': [],
            'train_acc': [],
            'val_loss': [],
            'val_acc': [],
            'test_loss': -1,
            'test_acc': -1,
            'model_filename': args.model_state_file}

    This function creates the dictionary that holds the training state.

4.5.2 The update_train_state function

def update_train_state(args, model, train_state):
    # Save one model at least
    if train_state['epoch_index'] == 0:
        torch.save(model.state_dict(), train_state['model_filename'])
        train_state['stop_early'] = False
    # Save model if performance improved
    elif train_state['epoch_index'] >= 1:
        loss_tm1, loss_t = train_state['val_loss'][-2:]
        # If loss worsened
        if loss_t >= train_state['early_stopping_best_val']:
            # Update step
            train_state['early_stopping_step'] += 1
        # Loss decreased
        else:
            # Save the best model
            if loss_t < train_state['early_stopping_best_val']:
                torch.save(model.state_dict(), train_state['model_filename'])
            # Reset early stopping step
            train_state['early_stopping_step'] = 0
        # Stop early ?
        train_state['stop_early'] = \
            train_state['early_stopping_step'] >= args.early_stopping_criteria
    return train_state

    This function updates the training state. It implements early stopping: to guard against overfitting, training is stopped when the validation loss fails to improve for a set number of consecutive epochs. It also acts as a checkpoint mechanism, saving the model whenever its performance improves.

4.5.3 The compute_accuracy function

def compute_accuracy(y_pred, y_target):
    y_pred_indices = y_pred.max(dim=1)[1]
    n_correct = torch.eq(y_pred_indices, y_target).sum().item()
    return n_correct / len(y_pred_indices) * 100

    compute_accuracy computes the model's accuracy from the predictions y_pred and the targets y_target.

4.5.4 The args object

args = Namespace(
    # Data and Path information
    surname_csv="data/surnames/surnames_with_splits.csv",
    vectorizer_file="vectorizer.json",
    model_state_file="model.pth",
    save_dir="model_storage/ch4/cnn",
    # Model hyper parameters
    hidden_dim=100,
    num_channels=256,
    # Training hyper parameters
    seed=1337,
    learning_rate=0.001,
    batch_size=128,
    num_epochs=100,
    early_stopping_criteria=5,
    dropout_p=0.1,
    # Runtime options
    cuda=False,
    reload_from_files=False,
    expand_filepaths_to_save_dir=True,
    catch_keyboard_interrupt=True
)

if args.expand_filepaths_to_save_dir:
    args.vectorizer_file = os.path.join(args.save_dir,
                                        args.vectorizer_file)
    args.model_state_file = os.path.join(args.save_dir,
                                         args.model_state_file)
    print("Expanded filepaths: ")
    print("\t{}".format(args.vectorizer_file))
    print("\t{}".format(args.model_state_file))

# Check CUDA
if not torch.cuda.is_available():
    args.cuda = False
args.device = torch.device("cuda" if args.cuda else "cpu")
print("Using CUDA: {}".format(args.cuda))

def set_seed_everywhere(seed, cuda):
    np.random.seed(seed)
    torch.manual_seed(seed)
    if cuda:
        torch.cuda.manual_seed_all(seed)

def handle_dirs(dirpath):
    if not os.path.exists(dirpath):
        os.makedirs(dirpath)

# Set seed for reproducibility
set_seed_everywhere(args.seed, args.cuda)

# handle dirs
handle_dirs(args.save_dir)

    The args object holds the data and path information, the model hyperparameters, the training hyperparameters, and the runtime options: hidden_dim is the hidden layer dimension, num_channels the number of output channels of the convolutional layers, learning_rate the learning rate, batch_size the batch size, num_epochs the total number of training epochs, and dropout_p the dropout probability.
 

4.5.5 The main training loop

if args.reload_from_files:
    # training from a checkpoint
    dataset = SurnameDataset.load_dataset_and_load_vectorizer(args.surname_csv,
                                                              args.vectorizer_file)
else:
    # create dataset and vectorizer
    dataset = SurnameDataset.load_dataset_and_make_vectorizer(args.surname_csv)
    dataset.save_vectorizer(args.vectorizer_file)

vectorizer = dataset.get_vectorizer()

classifier = SurnameClassifier(initial_num_channels=len(vectorizer.surname_vocab),
                               num_classes=len(vectorizer.nationality_vocab),
                               num_channels=args.num_channels)

classifier = classifier.to(args.device)
dataset.class_weights = dataset.class_weights.to(args.device)

loss_func = nn.CrossEntropyLoss(weight=dataset.class_weights)
optimizer = optim.Adam(classifier.parameters(), lr=args.learning_rate)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer=optimizer,
                                                 mode='min', factor=0.5,
                                                 patience=1)

train_state = make_train_state(args)

epoch_bar = tqdm_notebook(desc='training routine',
                          total=args.num_epochs,
                          position=0)

dataset.set_split('train')
train_bar = tqdm_notebook(desc='split=train',
                          total=dataset.get_num_batches(args.batch_size),
                          position=1,
                          leave=True)
dataset.set_split('val')
val_bar = tqdm_notebook(desc='split=val',
                        total=dataset.get_num_batches(args.batch_size),
                        position=1,
                        leave=True)

try:
    for epoch_index in range(args.num_epochs):
        train_state['epoch_index'] = epoch_index

        # Iterate over training dataset
        # setup: batch generator, set loss and acc to 0, set train mode on
        dataset.set_split('train')
        batch_generator = generate_batches(dataset,
                                           batch_size=args.batch_size,
                                           device=args.device)
        running_loss = 0.0
        running_acc = 0.0
        classifier.train()

        for batch_index, batch_dict in enumerate(batch_generator):
            # the training routine is these 5 steps:
            # --------------------------------------
            # step 1. zero the gradients
            optimizer.zero_grad()
            # step 2. compute the output
            y_pred = classifier(batch_dict['x_surname'])
            # step 3. compute the loss
            loss = loss_func(y_pred, batch_dict['y_nationality'])
            loss_t = loss.item()
            running_loss += (loss_t - running_loss) / (batch_index + 1)
            # step 4. use loss to produce gradients
            loss.backward()
            # step 5. use optimizer to take gradient step
            optimizer.step()
            # -----------------------------------------
            # compute the accuracy
            acc_t = compute_accuracy(y_pred, batch_dict['y_nationality'])
            running_acc += (acc_t - running_acc) / (batch_index + 1)

            # update bar
            train_bar.set_postfix(loss=running_loss, acc=running_acc,
                                  epoch=epoch_index)
            train_bar.update()

        train_state['train_loss'].append(running_loss)
        train_state['train_acc'].append(running_acc)

        # Iterate over val dataset
        # setup: batch generator, set loss and acc to 0; set eval mode on
        dataset.set_split('val')
        batch_generator = generate_batches(dataset,
                                           batch_size=args.batch_size,
                                           device=args.device)
        running_loss = 0.
        running_acc = 0.
        classifier.eval()

        for batch_index, batch_dict in enumerate(batch_generator):
            # compute the output
            y_pred = classifier(batch_dict['x_surname'])
            # compute the loss
            loss = loss_func(y_pred, batch_dict['y_nationality'])
            loss_t = loss.item()
            running_loss += (loss_t - running_loss) / (batch_index + 1)
            # compute the accuracy
            acc_t = compute_accuracy(y_pred, batch_dict['y_nationality'])
            running_acc += (acc_t - running_acc) / (batch_index + 1)

            val_bar.set_postfix(loss=running_loss, acc=running_acc,
                                epoch=epoch_index)
            val_bar.update()

        train_state['val_loss'].append(running_loss)
        train_state['val_acc'].append(running_acc)

        train_state = update_train_state(args=args, model=classifier,
                                         train_state=train_state)
        scheduler.step(train_state['val_loss'][-1])

        if train_state['stop_early']:
            break

        train_bar.n = 0
        val_bar.n = 0
        epoch_bar.update()
except KeyboardInterrupt:
    print("Exiting loop")

classifier.load_state_dict(torch.load(train_state['model_filename']))
classifier = classifier.to(args.device)
dataset.class_weights = dataset.class_weights.to(args.device)
loss_func = nn.CrossEntropyLoss(dataset.class_weights)

dataset.set_split('test')
batch_generator = generate_batches(dataset,
                                   batch_size=args.batch_size,
                                   device=args.device)
running_loss = 0.
running_acc = 0.
classifier.eval()

for batch_index, batch_dict in enumerate(batch_generator):
    # compute the output
    y_pred = classifier(batch_dict['x_surname'])
    # compute the loss
    loss = loss_func(y_pred, batch_dict['y_nationality'])
    loss_t = loss.item()
    running_loss += (loss_t - running_loss) / (batch_index + 1)
    # compute the accuracy
    acc_t = compute_accuracy(y_pred, batch_dict['y_nationality'])
    running_acc += (acc_t - running_acc) / (batch_index + 1)

train_state['test_loss'] = running_loss
train_state['test_acc'] = running_acc

print("Test loss: {};".format(train_state['test_loss']))
print("Test Accuracy: {}".format(train_state['test_acc']))

    Depending on args.reload_from_files, the script either reloads the dataset and vectorizer from files or creates them from scratch. dataset.get_vectorizer() returns the vectorizer, whose vocabulary sizes are used to initialize the SurnameClassifier. The loss function loss_func is a cross-entropy loss weighted by the dataset's class weights, the optimizer is Adam over the classifier's parameters, and the learning-rate scheduler is ReduceLROnPlateau, which monitors the validation loss and lowers the learning rate when it stops improving.

    During training, tqdm_notebook provides three progress bars: one for the overall epoch count, one for the training split, and one for the validation split. A for loop runs for args.num_epochs epochs, training and then validating in each. update_train_state saves checkpoints and decides whether to stop early, and scheduler.step adjusts the learning rate based on the latest validation loss. If train_state['stop_early'] becomes True, training ends early.

    Finally, the model is evaluated on the test set and the test loss and accuracy are printed. torch.load restores the best saved model state into the classifier, which is moved to the target device along with the class weights. The loss function is redefined, the test split is selected, and batches are generated; for each batch the script computes the output, the loss, and the accuracy, folding them into running averages. The final test loss and accuracy are stored in the corresponding keys of the train_state dictionary.

Figure 7: Model performance metrics

4.6 Making predictions

4.6.1 The predict_nationality function

def predict_nationality(surname, classifier, vectorizer):
    vectorized_surname = vectorizer.vectorize(surname)
    vectorized_surname = torch.tensor(vectorized_surname).unsqueeze(0)
    result = classifier(vectorized_surname, apply_softmax=True)

    probability_values, indices = result.max(dim=1)
    index = indices.item()

    predicted_nationality = vectorizer.nationality_vocab.lookup_index(index)
    probability_value = probability_values.item()

    return {'nationality': predicted_nationality, 'probability': probability_value}

    predict_nationality predicts the nationality of a given surname. The surname is vectorized and converted into a PyTorch tensor with an added batch dimension, then fed through the classifier with apply_softmax=True. max finds the highest probability and its index, the index is mapped back to a nationality label through the vocabulary, and the probability is extracted as a scalar. The predicted nationality and its probability are returned in a dictionary.

4.6.2 The predict_topk_nationality function

new_surname = input("Enter a surname to classify: ")
classifier = classifier.cpu()
prediction = predict_nationality(new_surname, classifier, vectorizer)
print("{} -> {} (p={:0.2f})".format(new_surname,
                                    prediction['nationality'],
                                    prediction['probability']))

def predict_topk_nationality(surname, classifier, vectorizer, k=5):
    vectorized_surname = vectorizer.vectorize(surname)
    vectorized_surname = torch.tensor(vectorized_surname).unsqueeze(dim=0)
    prediction_vector = classifier(vectorized_surname, apply_softmax=True)
    probability_values, indices = torch.topk(prediction_vector, k=k)

    # returned size is 1,k
    probability_values = probability_values[0].detach().numpy()
    indices = indices[0].detach().numpy()

    results = []
    for kth_index in range(k):
        nationality = vectorizer.nationality_vocab.lookup_index(indices[kth_index])
        probability_value = probability_values[kth_index]
        results.append({'nationality': nationality,
                        'probability': probability_value})
    return results

new_surname = input("Enter a surname to classify: ")
k = int(input("How many of the top predictions to see? "))
if k > len(vectorizer.nationality_vocab):
    print("Sorry! That's more than the # of nationalities we have.. defaulting you to max size :)")
    k = len(vectorizer.nationality_vocab)

predictions = predict_topk_nationality(new_surname, classifier, vectorizer, k=k)

print("Top {} predictions:".format(k))
print("===================")
for prediction in predictions:
    print("{} -> {} (p={:0.2f})".format(new_surname,
                                        prediction['nationality'],
                                        prediction['probability']))

    predict_topk_nationality returns the top K most likely nationalities for a given surname. The surname is vectorized and passed through the classifier with apply_softmax=True, torch.topk extracts the K highest probabilities and their indices, and a loop converts each index into a nationality label and collects label/probability dictionaries in a results list. The surrounding script reads a surname from the user, reads how many predictions to show, clamps k to the size of the nationality vocabulary if necessary, calls predict_topk_nationality, and prints the top K predictions.

    Before that, the script also demonstrates a single prediction: it reads a surname with input, moves the classifier to the CPU with classifier.cpu(), calls predict_nationality with the surname, classifier, and vectorizer, and prints the surname, the predicted nationality, and its probability.

Figure 8: Prediction results

5. Comparing the Two Approaches

    When an MLP is used for surname classification, the surname is fed as input through a stack of fully connected layers and non-linear activations that learn a feature representation and the classification decision. The MLP is a classic feed-forward network: the hidden units capture different features of the input, and the output layer's softmax maps the input to the classes. The structure of the feature extractor and classifier has to be designed by hand, and the parameters are optimized with backpropagation.

    When a CNN is used, we borrow the strengths it has shown in image processing: the surname is represented as a character-level one-hot matrix, and convolutional layers, pooling layers, and fully connected layers learn the representation and the decision. A CNN automatically learns local patterns in the input and composes them hierarchically through its layers; for surname classification, it can effectively capture the spatial relationships between neighboring characters and the important local features.

    The main differences between MLPs and CNNs for surname classification lie in their architectures and how they work. An MLP is better suited to simpler structured data, while a CNN is better suited to images or data with spatial structure. An MLP requires hand-designed feature extraction and classifier structure, whereas a CNN learns features automatically; in addition, the CNN's parameter sharing and translation invariance give it an advantage on image-like inputs.
