
Natural Language Processing: Feedforward Networks

Introduction

1. Understanding Feedforward Neural Networks

1.1 Overview of Feedforward Neural Networks

A feedforward neural network (FNN) is the simplest kind of neural network architecture and is widely used across many fields. A detailed introduction follows:

> Structural characteristics: A feedforward neural network has a unidirectional, multi-layer structure. Neurons are arranged in layers, and each neuron connects only to neurons in the previous layer: it receives the previous layer's output and passes its own output to the next layer, with no feedback between layers. Information therefore flows through the network in one direction only, from the input layer through to the output layer.

> How it works: The input layer receives external data and passes it on to the next layer. The hidden layer (or layers) sits between the input and output layers and performs nonlinear transformations and feature extraction on the input. The output layer receives the hidden layer's output and produces the final result. Within each neuron, the inputs are multiplied by their corresponding weights and summed, and the result is passed through an activation function for a nonlinear transformation. Through this layer-by-layer propagation, the network can learn complex features of the input data and produce the corresponding output (see the sketch below).
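
As a concrete illustration of this weighted-sum-plus-activation computation, here is a minimal single-hidden-layer forward pass in PyTorch; the dimensions are arbitrary values chosen only for the example:

    import torch
    import torch.nn.functional as F

    x = torch.rand(1, 3)           # one sample with 3 input features
    W1 = torch.rand(3, 4)          # input-to-hidden weights
    b1 = torch.rand(4)             # hidden-layer bias
    W2 = torch.rand(4, 2)          # hidden-to-output weights
    b2 = torch.rand(2)             # output-layer bias

    hidden = F.relu(x @ W1 + b1)   # weighted sum, then nonlinear activation
    output = hidden @ W2 + b2      # information flows strictly forward
    print(output.shape)            # torch.Size([1, 2])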

1.2 Types of Feedforward Neural Networks

Feedforward neural networks can be further divided into several types according to their structure and properties. Some common types:

> Single-layer feedforward network: the simplest kind of artificial neural network, containing only an output layer. The output values are obtained directly by multiplying the input values by their corresponding weights.

> Multi-layer feedforward network: has an input layer, one or more hidden layers, and an output layer. Each hidden layer on its own performs a linear classification of the input pattern, but composing multiple layers allows considerably more complex classifications of the input.

> Perceptron network: the perceptron is the simplest feedforward network, used mainly for pattern classification. It computes a weighted sum of its inputs and applies an activation function (such as a threshold function) to produce the output.

> BP neural network: a BP (Backpropagation) network is a multi-layer feedforward network whose defining feature is the use of the backpropagation algorithm during training to adjust the weights so as to minimize the output error.

> RBF neural network: an RBF (Radial Basis Function) network uses radial basis functions as the activation functions of its hidden layer. Such networks excel at nonlinear problems and function approximation.

> Convolutional neural network: convolutional neural networks have representation-learning ability and, by virtue of their hierarchical structure, can perform shift-invariant classification of the input. Through the convolution operation, they substantially reduce the number of model parameters, extract stable features, and show a degree of invariance to transformations such as translation, rotation, and scaling.


The feedforward network studied in this experiment, the convolutional neural network, is deeply inspired by windowed filters in digital signal processing. Through this windowing property, a CNN can learn localized patterns in its input, which not only makes it a mainstay of computer vision but also an ideal candidate for detecting substructure in sequential data such as words and sentences. In this experiment, multilayer perceptrons and convolutional neural networks are grouped together because both are feedforward networks, in contrast with another family of networks, recurrent neural networks (RNNs), which allow feedback (cycles) so that each computation can draw on information from previous computations.
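
As a small illustration of this window intuition, a one-dimensional convolution slides a learned filter across a sequence and emits one feature per window; all sizes below are arbitrary example values:

    import torch
    import torch.nn as nn

    vocab_size, seq_len = 10, 7
    x = torch.zeros(1, vocab_size, seq_len)   # (batch, channels, length)
    x[0, 3, 2] = 1.0                          # a "character" at position 2

    conv = nn.Conv1d(in_channels=vocab_size, out_channels=4, kernel_size=3)
    features = conv(x)
    print(features.shape)   # torch.Size([1, 4, 5]): one output per 3-wide window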

2. Surname Classification with an MLP and a CNN in PyTorch

2.1 Experiment Objectives

> Through "Example: Surname Classification with a Multilayer Perceptron", master the application of the multilayer perceptron to multiclass classification

> Understand the effect of each type of neural network layer on the size and shape of the data tensors it computes

2.2 Development Environment

Python 3.6.7 (PyTorch)

2.3 Data Source

https://course.educg.net/3988f8e79b250f1a05f89db3711515df/files/surnames.csv

Preprocessing the raw data:

    import collections
    import numpy as np
    import pandas as pd
    import re
    from argparse import Namespace

    args = Namespace(
        raw_dataset_csv="/home/jovyan/surnames.csv",
        train_proportion=0.7,
        val_proportion=0.15,
        test_proportion=0.15,
        output_munged_csv="/home/jovyan/surnames_with_splits.csv",
        seed=1337
    )

    surnames = pd.read_csv(args.raw_dataset_csv, header=0)
    surnames.head()
    set(surnames.nationality)

    # Group rows by nationality
    by_nationality = collections.defaultdict(list)
    for _, row in surnames.iterrows():
        by_nationality[row.nationality].append(row.to_dict())

    # Create split data
    final_list = []
    np.random.seed(args.seed)
    for _, item_list in sorted(by_nationality.items()):
        np.random.shuffle(item_list)
        n = len(item_list)
        n_train = int(args.train_proportion * n)
        n_val = int(args.val_proportion * n)
        n_test = int(args.test_proportion * n)

        # Give each data point a split attribute
        for item in item_list[:n_train]:
            item['split'] = 'train'
        for item in item_list[n_train:n_train + n_val]:
            item['split'] = 'val'
        for item in item_list[n_train + n_val:]:
            item['split'] = 'test'

        # Add to final list
        final_list.extend(item_list)

    final_surnames = pd.DataFrame(final_list)
    final_surnames.split.value_counts()

This yields the number of examples in each split.

The final processing step:

    final_surnames.head()
    final_surnames.to_csv(args.output_munged_csv, index=False)

This shows the first rows of the result and writes the new experiment dataset, surnames_with_splits.csv.

3. Code Implementation (MLP)

3.1 Data Preprocessing

3.1.1 Initialization

Import the required libraries:

    from argparse import Namespace
    from collections import Counter
    import json
    import os
    import string

    import numpy as np
    import pandas as pd
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    from torch.utils.data import Dataset, DataLoader
    from tqdm import tqdm_notebook

3.1.2 Dataset (partial code)

    class SurnameDataset(Dataset):
        def __init__(self, surname_df, vectorizer):
            self.surname_df = surname_df
            self._vectorizer = vectorizer

            self.train_df = self.surname_df[self.surname_df.split=='train']
            self.train_size = len(self.train_df)
            self.val_df = self.surname_df[self.surname_df.split=='val']
            self.validation_size = len(self.val_df)
            self.test_df = self.surname_df[self.surname_df.split=='test']
            self.test_size = len(self.test_df)

            self._lookup_dict = {'train': (self.train_df, self.train_size),
                                 'val': (self.val_df, self.validation_size),
                                 'test': (self.test_df, self.test_size)}
            self.set_split('train')

            # Class weights
            class_counts = surname_df.nationality.value_counts().to_dict()
            def sort_key(item):
                return self._vectorizer.nationality_vocab.lookup_token(item[0])
            sorted_counts = sorted(class_counts.items(), key=sort_key)
            frequencies = [count for _, count in sorted_counts]
            self.class_weights = 1.0 / torch.tensor(frequencies, dtype=torch.float32)
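
The listing above omits several methods that the rest of the code relies on, in particular set_split, __len__, and __getitem__. A minimal sketch of those methods, consistent with the _lookup_dict built in __init__ (the exact bodies here are assumptions, not the original code):

    # inside SurnameDataset
    def set_split(self, split="train"):
        # choose which DataFrame (train/val/test) __getitem__ reads from
        self._target_split = split
        self._target_df, self._target_size = self._lookup_dict[split]

    def __len__(self):
        return self._target_size

    def __getitem__(self, index):
        # vectorize one surname and look up its nationality index
        row = self._target_df.iloc[index]
        surname_vector = self._vectorizer.vectorize(row.surname)
        nationality_index = \
            self._vectorizer.nationality_vocab.lookup_token(row.nationality)
        return {'x_surname': surname_vector, 'y_nationality': nationality_index}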

A DataLoader wraps the dataset into batches for subsequent training:

    dataloader = DataLoader(dataset=dataset, batch_size=batch_size,
                            shuffle=shuffle, drop_last=drop_last)
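
The training loops below call a generate_batches helper instead of using the DataLoader directly. A plausible minimal implementation, which wraps DataLoader and moves each tensor to the target device (the helper body is an assumption based on how it is called):

    def generate_batches(dataset, batch_size, shuffle=True,
                         drop_last=True, device="cpu"):
        # wrap the dataset in a DataLoader and yield device-resident dicts
        dataloader = DataLoader(dataset=dataset, batch_size=batch_size,
                                shuffle=shuffle, drop_last=drop_last)
        for data_dict in dataloader:
            out_data_dict = {}
            for name, tensor in data_dict.items():
                out_data_dict[name] = tensor.to(device)
            yield out_data_dict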

3.1.3 Vocabulary, Vectorizer, and DataLoader

To classify surnames using their characters, we use a Vocabulary, a Vectorizer, and a DataLoader to convert surname strings into vectorized minibatches.

    class Vocabulary(object):
        def __init__(self, token_to_idx=None, add_unk=True, unk_token="<UNK>"):
            if token_to_idx is None:
                token_to_idx = {}
            self._token_to_idx = token_to_idx
            self._idx_to_token = {idx: token
                                  for token, idx in self._token_to_idx.items()}

            self._add_unk = add_unk
            self._unk_token = unk_token

            self.unk_index = -1
            if add_unk:
                self.unk_index = self.add_token(unk_token)
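
The Vocabulary listing is likewise partial; the lookup methods used throughout this post might look as follows (a sketch built on the two dictionaries defined in __init__):

    # inside Vocabulary
    def add_token(self, token):
        # return the token's index, registering it in both maps if new
        if token in self._token_to_idx:
            return self._token_to_idx[token]
        index = len(self._token_to_idx)
        self._token_to_idx[token] = index
        self._idx_to_token[index] = token
        return index

    def lookup_token(self, token):
        # fall back to the UNK index for unseen tokens when add_unk is set
        if self._add_unk:
            return self._token_to_idx.get(token, self.unk_index)
        return self._token_to_idx[token]

    def lookup_index(self, index):
        return self._idx_to_token[index]

    def __len__(self):
        return len(self._token_to_idx)

The SurnameVectorizer below builds on these lookups: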
    class SurnameVectorizer(object):
        def __init__(self, surname_vocab, nationality_vocab):
            self.surname_vocab = surname_vocab
            self.nationality_vocab = nationality_vocab

        def vectorize(self, surname):
            # collapsed one-hot: each character sets one position, so the
            # order of characters within the surname is discarded
            vocab = self.surname_vocab
            one_hot = np.zeros(len(vocab), dtype=np.float32)
            for token in surname:
                one_hot[vocab.lookup_token(token)] = 1
            return one_hot

3.2 Training the Model

3.2.1 Model Construction

The first linear layer maps the input vector to an intermediate vector, to which a nonlinearity is applied. The second linear layer maps the intermediate vector to the prediction vector.

    class SurnameClassifier(nn.Module):
        def __init__(self, input_dim, hidden_dim, output_dim):
            super(SurnameClassifier, self).__init__()
            self.fc1 = nn.Linear(input_dim, hidden_dim)
            self.fc2 = nn.Linear(hidden_dim, output_dim)

        def forward(self, x_in, apply_softmax=False):
            intermediate_vector = F.relu(self.fc1(x_in))
            prediction_vector = self.fc2(intermediate_vector)
            if apply_softmax:
                prediction_vector = F.softmax(prediction_vector, dim=1)
            return prediction_vector
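
To see how each layer changes the tensor shapes, one of the stated objectives, we can push a random batch through the classifier; the dimensions here are arbitrary example values:

    batch_size, input_dim, hidden_dim, output_dim = 2, 80, 300, 18
    classifier = SurnameClassifier(input_dim, hidden_dim, output_dim)

    x = torch.rand(batch_size, input_dim)
    y = classifier(x, apply_softmax=True)
    print(y.shape)       # torch.Size([2, 18]): one score per class
    print(y.sum(dim=1))  # each row sums to 1 after the softmax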

Hyperparameters and settings:

    args = Namespace(
        # Data and path information
        surname_csv="/home/jovyan/surnames_with_splits.csv",
        vectorizer_file="vectorizer.json",
        model_state_file="model.pth",
        save_dir="model_storage/ch4/surname_mlp",
        # Model hyper parameters
        hidden_dim=300,
        # Training hyper parameters
        seed=1337,
        num_epochs=100,
        early_stopping_criteria=5,
        learning_rate=0.001,
        batch_size=64,
        # Runtime options
        cuda=False,
        reload_from_files=False,
        expand_filepaths_to_save_dir=True,
    )

3.2.2 Training Loop

Using the training data, we compute the model output, the loss, and the gradients; the gradients are then used to update the model.

Loading the data:

    if args.reload_from_files:
        # training from a checkpoint
        print("Reloading!")
        dataset = SurnameDataset.load_dataset_and_load_vectorizer(args.surname_csv,
                                                                  args.vectorizer_file)
    else:
        # create dataset and vectorizer
        print("Creating fresh!")
        dataset = SurnameDataset.load_dataset_and_make_vectorizer(args.surname_csv)
        dataset.save_vectorizer(args.vectorizer_file)

    vectorizer = dataset.get_vectorizer()
    classifier = SurnameClassifier(input_dim=len(vectorizer.surname_vocab),
                                   hidden_dim=args.hidden_dim,
                                   output_dim=len(vectorizer.nationality_vocab))
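
The loop below also assumes a loss function and an optimizer. A setup consistent with the hyperparameters above would be the following sketch (the class-weighted loss uses the class_weights computed in the dataset; args.device is derived from args.cuda):

    args.device = torch.device("cuda" if args.cuda else "cpu")

    classifier = classifier.to(args.device)
    dataset.class_weights = dataset.class_weights.to(args.device)

    loss_func = nn.CrossEntropyLoss(dataset.class_weights)
    optimizer = optim.Adam(classifier.parameters(), lr=args.learning_rate)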

Start the training loop:

    try:
        # train_state and train_bar (a tqdm progress bar) are initialized
        # earlier in the full notebook; this listing shows only the core loop
        for epoch_index in range(args.num_epochs):
            train_state['epoch_index'] = epoch_index

            # Iterate over training dataset
            # setup: batch generator, set loss and acc to 0, set train mode on
            dataset.set_split('train')
            batch_generator = generate_batches(dataset,
                                               batch_size=args.batch_size,
                                               device=args.device)
            running_loss = 0.0
            running_acc = 0.0
            classifier.train()

            for batch_index, batch_dict in enumerate(batch_generator):
                # the training routine is these 5 steps:
                # --------------------------------------
                # step 1. zero the gradients
                optimizer.zero_grad()

                # step 2. compute the output
                y_pred = classifier(batch_dict['x_surname'])

                # step 3. compute the loss
                loss = loss_func(y_pred, batch_dict['y_nationality'])
                loss_t = loss.item()
                running_loss += (loss_t - running_loss) / (batch_index + 1)

                # step 4. use loss to produce gradients
                loss.backward()

                # step 5. use optimizer to take gradient step
                optimizer.step()
                # -----------------------------------------

                # compute the accuracy
                acc_t = compute_accuracy(y_pred, batch_dict['y_nationality'])
                running_acc += (acc_t - running_acc) / (batch_index + 1)

                # update bar
                train_bar.set_postfix(loss=running_loss, acc=running_acc,
                                      epoch=epoch_index)
                train_bar.update()

            train_state['train_loss'].append(running_loss)
            train_state['train_acc'].append(running_acc)
    except KeyboardInterrupt:
        # closes the try block (missing from the original partial listing)
        # so that interrupting training exits cleanly
        print("Exiting loop")
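
The listing shows only the training pass of each epoch. Inside the same epoch loop (indented under the for-loop above, before the except clause), a validation pass over the 'val' split would normally follow; a sketch mirroring the training steps, without the backward pass:

    dataset.set_split('val')
    batch_generator = generate_batches(dataset,
                                       batch_size=args.batch_size,
                                       device=args.device)
    running_loss = 0.0
    running_acc = 0.0
    classifier.eval()   # disable training-only behavior such as dropout

    for batch_index, batch_dict in enumerate(batch_generator):
        y_pred = classifier(batch_dict['x_surname'])
        loss = loss_func(y_pred, batch_dict['y_nationality'])
        running_loss += (loss.item() - running_loss) / (batch_index + 1)
        acc_t = compute_accuracy(y_pred, batch_dict['y_nationality'])
        running_acc += (acc_t - running_acc) / (batch_index + 1)

    train_state['val_loss'].append(running_loss)
    train_state['val_acc'].append(running_acc)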

Training progress:

Evaluating on the test set:

    train_state['test_loss'] = running_loss
    train_state['test_acc'] = running_acc
    print("Test loss: {};".format(train_state['test_loss']))
    print("Test Accuracy: {}".format(train_state['test_acc']))

3.3 Model Prediction

    def predict_nationality(surname, classifier, vectorizer):
        vectorized_surname = vectorizer.vectorize(surname)
        vectorized_surname = torch.tensor(vectorized_surname).unsqueeze(0)
        result = classifier(vectorized_surname, apply_softmax=True)

        probability_values, indices = result.max(dim=1)
        index = indices.item()

        predicted_nationality = vectorizer.nationality_vocab.lookup_index(index)
        probability_value = probability_values.item()

        return {'nationality': predicted_nationality, 'probability': probability_value}

    new_surname = input("Enter a surname to classify: ")
    classifier = classifier.cpu()
    prediction = predict_nationality(new_surname, classifier, vectorizer)
    print("{} -> {} (p={:0.2f})".format(new_surname,
                                        prediction['nationality'],
                                        prediction['probability']))

Prediction output:

Taking the k-best predictions (these could then be re-ranked by another model):

    def predict_topk_nationality(surname, classifier, vectorizer, k=5):
        vectorized_surname = vectorizer.vectorize(surname)
        vectorized_surname = torch.tensor(vectorized_surname).unsqueeze(dim=0)
        prediction_vector = classifier(vectorized_surname, apply_softmax=True)
        probability_values, indices = torch.topk(prediction_vector, k=k)

        # returned size is 1,k
        probability_values = probability_values[0].detach().numpy()
        indices = indices[0].detach().numpy()

        results = []
        for kth_index in range(k):
            nationality = vectorizer.nationality_vocab.lookup_index(indices[kth_index])
            probability_value = probability_values[kth_index]
            results.append({'nationality': nationality,
                            'probability': probability_value})
        return results

    new_surname = input("Enter a surname to classify: ")
    k = int(input("How many of the top predictions to see? "))
    if k > len(vectorizer.nationality_vocab):
        print("Sorry! That's more than the # of nationalities we have.. defaulting you to max size :)")
        k = len(vectorizer.nationality_vocab)

    predictions = predict_topk_nationality(new_surname, classifier, vectorizer, k=k)

    print("Top {} predictions:".format(k))
    print("===================")
    for prediction in predictions:
        print("{} -> {} (p={:0.2f})".format(new_surname,
                                            prediction['nationality'],
                                            prediction['probability']))

Prediction output:

3.4 Dropout

During training, dropout randomly weakens, with some probability, the connections between units in two adjacent layers.

    class MultilayerPerceptron(nn.Module):
        def __init__(self, input_dim, hidden_dim, output_dim):
            super(MultilayerPerceptron, self).__init__()
            self.fc1 = nn.Linear(input_dim, hidden_dim)
            self.fc2 = nn.Linear(hidden_dim, output_dim)

        def forward(self, x_in, apply_softmax=False):
            intermediate = F.relu(self.fc1(x_in))
            output = self.fc2(F.dropout(intermediate, p=0.5))
            if apply_softmax:
                output = F.softmax(output, dim=1)
            return output

    batch_size = 2  # number of samples input at once
    input_dim = 3
    hidden_dim = 100
    output_dim = 4

    # Initialize model
    mlp = MultilayerPerceptron(input_dim, hidden_dim, output_dim)
    print(mlp)

    # x_input was undefined in the original partial listing; a random
    # batch of the right shape serves for the demonstration
    x_input = torch.rand(batch_size, input_dim)
    y_output = mlp(x_input, apply_softmax=False)
    describe(y_output)  # describe() is a notebook helper that prints the tensor's type, shape, and values

Dropout applies only during training, not during evaluation.
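
Note, however, that F.dropout as written in the forward pass above is always active, regardless of whether the module is in train or eval mode. To make it respond to classifier.train() and classifier.eval(), pass the module's training flag, or use an nn.Dropout module, which tracks that flag automatically; a minimal corrected sketch of the dropout line:

    # option 1: functional dropout tied to the module's mode
    output = self.fc2(F.dropout(intermediate, p=0.5, training=self.training))

    # option 2: declare self.dropout = nn.Dropout(p=0.5) in __init__, then
    output = self.fc2(self.dropout(intermediate))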

4. Code Implementation (CNN)

4.1 Data Preprocessing

4.1.1 Initialization

Import the libraries:

    from argparse import Namespace
    from collections import Counter
    import json
    import os
    import string

    import numpy as np
    import pandas as pd
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    from torch.utils.data import Dataset, DataLoader
    from tqdm import tqdm_notebook

4.1.2 Dataset

    class SurnameDataset(Dataset):
        def __init__(self, surname_df, vectorizer):
            self.surname_df = surname_df
            self._vectorizer = vectorizer

            self.train_df = self.surname_df[self.surname_df.split=='train']
            self.train_size = len(self.train_df)
            self.val_df = self.surname_df[self.surname_df.split=='val']
            self.validation_size = len(self.val_df)
            self.test_df = self.surname_df[self.surname_df.split=='test']
            self.test_size = len(self.test_df)

            self._lookup_dict = {'train': (self.train_df, self.train_size),
                                 'val': (self.val_df, self.validation_size),
                                 'test': (self.test_df, self.test_size)}
            self.set_split('train')

            # Class weights
            class_counts = surname_df.nationality.value_counts().to_dict()
            def sort_key(item):
                return self._vectorizer.nationality_vocab.lookup_token(item[0])
            sorted_counts = sorted(class_counts.items(), key=sort_key)
            frequencies = [count for _, count in sorted_counts]
            self.class_weights = 1.0 / torch.tensor(frequencies, dtype=torch.float32)

4.1.3 Vocabulary, Vectorizer, and DataLoader

As before, we use a Vocabulary, a Vectorizer, and a DataLoader to convert surname strings into vectorized minibatches.

    class Vocabulary(object):
        def __init__(self, token_to_idx=None, add_unk=True, unk_token="<UNK>"):
            if token_to_idx is None:
                token_to_idx = {}
            self._token_to_idx = token_to_idx
            self._idx_to_token = {idx: token
                                  for token, idx in self._token_to_idx.items()}

            self._add_unk = add_unk
            self._unk_token = unk_token

            self.unk_index = -1
            if add_unk:
                self.unk_index = self.add_token(unk_token)
    class SurnameVectorizer(object):
        def __init__(self, surname_vocab, nationality_vocab, max_surname_length):
            self.surname_vocab = surname_vocab
            self.nationality_vocab = nationality_vocab
            self._max_surname_length = max_surname_length

        def vectorize(self, surname):
            # one one-hot column per character position, preserving order
            one_hot_matrix_size = (len(self.surname_vocab), self._max_surname_length)
            one_hot_matrix = np.zeros(one_hot_matrix_size, dtype=np.float32)

            for position_index, character in enumerate(surname):
                character_index = self.surname_vocab.lookup_token(character)
                one_hot_matrix[character_index][position_index] = 1

            return one_hot_matrix
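
Unlike the MLP's collapsed one-hot vector, this vectorizer preserves character order: each surname becomes a matrix with one one-hot column per position, and the rows serve as input channels for Conv1d. A quick shape check with arbitrary example sizes:

    # e.g. a 28-character vocabulary and a maximum surname length of 17
    one_hot_matrix = np.zeros((28, 17), dtype=np.float32)
    one_hot_matrix[3][0] = 1      # first character maps to vocab index 3
    print(one_hot_matrix.shape)   # (28, 17)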

4.2 Training the Model

4.2.1 Model Construction

In the final step, a softmax operation is optionally applied to ensure the outputs sum to 1, i.e., form valid probabilities. The reason it is optional has to do with the mathematical formulation of the loss function we use, the cross-entropy loss.
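
Concretely, PyTorch's CrossEntropyLoss applies log-softmax internally, so the model should pass raw scores (apply_softmax=False) to the loss; the snippet below illustrates this with arbitrary example sizes:

    logits = torch.randn(4, 18)                     # raw prediction vectors
    targets = torch.tensor([1, 5, 0, 17])           # class indices
    loss = nn.CrossEntropyLoss()(logits, targets)   # softmax handled inside

The convolutional classifier itself: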

    class SurnameClassifier(nn.Module):
        def __init__(self, initial_num_channels, num_classes, num_channels):
            super(SurnameClassifier, self).__init__()
            self.convnet = nn.Sequential(
                nn.Conv1d(in_channels=initial_num_channels,
                          out_channels=num_channels, kernel_size=3),
                nn.ELU(),
                nn.Conv1d(in_channels=num_channels, out_channels=num_channels,
                          kernel_size=3, stride=2),
                nn.ELU(),
                nn.Conv1d(in_channels=num_channels, out_channels=num_channels,
                          kernel_size=3, stride=2),
                nn.ELU(),
                nn.Conv1d(in_channels=num_channels, out_channels=num_channels,
                          kernel_size=3),
                nn.ELU()
            )
            self.fc = nn.Linear(num_channels, num_classes)

        def forward(self, x_surname, apply_softmax=False):
            # forward pass, completing the partial listing: the convnet
            # reduces the sequence dimension to 1, which squeeze removes
            # before the final linear layer
            features = self.convnet(x_surname).squeeze(dim=2)
            prediction_vector = self.fc(features)
            if apply_softmax:
                prediction_vector = F.softmax(prediction_vector, dim=1)
            return prediction_vector
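
With a maximum surname length of 17, the four convolutions reduce the sequence dimension step by step (17 -> 15 -> 7 -> 3 -> 1), which is what allows the squeeze before the final linear layer. A quick check with a random batch; the sizes are example values, and the length-17 assumption must match the vectorizer's max_surname_length:

    vocab_size, max_len, num_classes, num_channels = 28, 17, 18, 256
    classifier = SurnameClassifier(initial_num_channels=vocab_size,
                                   num_classes=num_classes,
                                   num_channels=num_channels)

    x = torch.rand(4, vocab_size, max_len)   # (batch, channels, length)
    print(classifier.convnet(x).shape)       # torch.Size([4, 256, 1])
    print(classifier(x).shape)               # torch.Size([4, 18])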

Hyperparameters and settings:

    args = Namespace(
        # Data and Path information
        surname_csv="/home/jovyan/surnames_with_splits.csv",
        vectorizer_file="vectorizer.json",
        model_state_file="model.pth",
        save_dir="model_storage/ch4/cnn",
        # Model hyper parameters
        hidden_dim=100,
        num_channels=256,
        # Training hyper parameters
        seed=1337,
        learning_rate=0.001,
        batch_size=128,
        num_epochs=100,
        early_stopping_criteria=5,
        dropout_p=0.1,
        # Runtime options
        cuda=False,
        reload_from_files=False,
        expand_filepaths_to_save_dir=True,
        catch_keyboard_interrupt=True
    )

4.2.2 Training Loop

Loading the data:

    if args.reload_from_files:
        # training from a checkpoint
        dataset = SurnameDataset.load_dataset_and_load_vectorizer(args.surname_csv,
                                                                  args.vectorizer_file)
    else:
        # create dataset and vectorizer
        dataset = SurnameDataset.load_dataset_and_make_vectorizer(args.surname_csv)
        dataset.save_vectorizer(args.vectorizer_file)
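
The CNN classifier, loss, and optimizer are then instantiated analogously to the MLP case; a sketch consistent with the vectorizer and hyperparameters above:

    vectorizer = dataset.get_vectorizer()

    classifier = SurnameClassifier(initial_num_channels=len(vectorizer.surname_vocab),
                                   num_classes=len(vectorizer.nationality_vocab),
                                   num_channels=args.num_channels)

    args.device = torch.device("cuda" if args.cuda else "cpu")
    classifier = classifier.to(args.device)

    loss_func = nn.CrossEntropyLoss(dataset.class_weights.to(args.device))
    optimizer = optim.Adam(classifier.parameters(), lr=args.learning_rate)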

Start the training loop:

    try:
        # train_state and train_bar (a tqdm progress bar) are initialized
        # earlier in the full notebook; this listing shows only the core loop
        for epoch_index in range(args.num_epochs):
            train_state['epoch_index'] = epoch_index

            # Iterate over training dataset
            # setup: batch generator, set loss and acc to 0, set train mode on
            dataset.set_split('train')
            batch_generator = generate_batches(dataset,
                                               batch_size=args.batch_size,
                                               device=args.device)
            running_loss = 0.0
            running_acc = 0.0
            classifier.train()

            for batch_index, batch_dict in enumerate(batch_generator):
                # the training routine is these 5 steps:
                # --------------------------------------
                # step 1. zero the gradients
                optimizer.zero_grad()

                # step 2. compute the output
                y_pred = classifier(batch_dict['x_surname'])

                # step 3. compute the loss
                loss = loss_func(y_pred, batch_dict['y_nationality'])
                loss_t = loss.item()
                running_loss += (loss_t - running_loss) / (batch_index + 1)

                # step 4. use loss to produce gradients
                loss.backward()

                # step 5. use optimizer to take gradient step
                optimizer.step()
                # -----------------------------------------

                # compute the accuracy
                acc_t = compute_accuracy(y_pred, batch_dict['y_nationality'])
                running_acc += (acc_t - running_acc) / (batch_index + 1)

                # update bar
                train_bar.set_postfix(loss=running_loss, acc=running_acc,
                                      epoch=epoch_index)
                train_bar.update()

            train_state['train_loss'].append(running_loss)
            train_state['train_acc'].append(running_acc)
    except KeyboardInterrupt:
        # closes the try block (missing from the original partial listing),
        # matching catch_keyboard_interrupt=True in the args
        print("Exiting loop")

Training progress:

Evaluating on the test set:

    train_state['test_loss'] = running_loss
    train_state['test_acc'] = running_acc
    print("Test loss: {};".format(train_state['test_loss']))
    print("Test Accuracy: {}".format(train_state['test_acc']))

4.3 Model Prediction

Making a prediction:

    def predict_nationality(surname, classifier, vectorizer):
        vectorized_surname = vectorizer.vectorize(surname)
        vectorized_surname = torch.tensor(vectorized_surname).unsqueeze(0)
        result = classifier(vectorized_surname, apply_softmax=True)

        probability_values, indices = result.max(dim=1)
        index = indices.item()

        predicted_nationality = vectorizer.nationality_vocab.lookup_index(index)
        probability_value = probability_values.item()

        return {'nationality': predicted_nationality, 'probability': probability_value}

    new_surname = input("Enter a surname to classify: ")
    classifier = classifier.cpu()
    prediction = predict_nationality(new_surname, classifier, vectorizer)
    print("{} -> {} (p={:0.2f})".format(new_surname,
                                        prediction['nationality'],
                                        prediction['probability']))

Prediction output:

Because the convolutional and linear layers are instantiated to operate on batches by convention, the vectorized surname is given an extra batch dimension (unsqueeze) before taking the top-k predictions:

    def predict_topk_nationality(surname, classifier, vectorizer, k=5):
        vectorized_surname = vectorizer.vectorize(surname)
        vectorized_surname = torch.tensor(vectorized_surname).unsqueeze(dim=0)
        prediction_vector = classifier(vectorized_surname, apply_softmax=True)
        probability_values, indices = torch.topk(prediction_vector, k=k)

        # returned size is 1,k
        probability_values = probability_values[0].detach().numpy()
        indices = indices[0].detach().numpy()

        results = []
        for kth_index in range(k):
            nationality = vectorizer.nationality_vocab.lookup_index(indices[kth_index])
            probability_value = probability_values[kth_index]
            results.append({'nationality': nationality,
                            'probability': probability_value})
        return results

    new_surname = input("Enter a surname to classify: ")
    k = int(input("How many of the top predictions to see? "))
    if k > len(vectorizer.nationality_vocab):
        print("Sorry! That's more than the # of nationalities we have.. defaulting you to max size :)")
        k = len(vectorizer.nationality_vocab)

    predictions = predict_topk_nationality(new_surname, classifier, vectorizer, k=k)

    print("Top {} predictions:".format(k))
    print("===================")
    for prediction in predictions:
        print("{} -> {} (p={:0.2f})".format(new_surname,
                                            prediction['nationality'],
                                            prediction['probability']))

Prediction output:

Note: the code shown throughout this article is partial, not a complete implementation.
