MindSpore implements efficient data preprocessing through Dataset and Transforms.
Use download to fetch the data, then create the dataset objects:
import mindspore
from mindspore import nn
from mindspore.dataset import vision, transforms
from mindspore.dataset import MnistDataset
from download import download

url = "https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/MNIST_Data.zip"
path = download(url, "./", kind="zip", replace=True)

train_dataset = MnistDataset('MNIST_Data/train')
test_dataset = MnistDataset('MNIST_Data/test')
The data in the dataset can be accessed with create_tuple_iterator or create_dict_iterator:
for image, label in test_dataset.create_tuple_iterator():
    print(f"Shape of image [N, C, H, W]: {image.shape} {image.dtype}")
    print(f"Shape of label: {label.shape} {label.dtype}")
    break
for data in test_dataset.create_dict_iterator():
    print(f"Shape of image [N, C, H, W]: {data['image'].shape} {data['image'].dtype}")
    print(f"Shape of label: {data['label'].shape} {data['label'].dtype}")
    break
The raw data usually cannot be used as-is. It is processed through a data processing pipeline that applies operations such as map, batch, and shuffle, and packs the data into batches of the specified size:
def datapipe(dataset, batch_size):
    # Scale pixel values to [0, 1], normalize them, and convert HWC images to CHW
    image_transforms = [
        vision.Rescale(1.0 / 255.0, 0),
        vision.Normalize(mean=(0.1307,), std=(0.3081,)),
        vision.HWC2CHW()
    ]
    # Cast labels to int32 as expected by the loss function
    label_transform = transforms.TypeCast(mindspore.int32)

    dataset = dataset.map(image_transforms, 'image')
    dataset = dataset.map(label_transform, 'label')
    dataset = dataset.batch(batch_size)
    return dataset
train_dataset = datapipe(train_dataset, 64)
test_dataset = datapipe(test_dataset, 64)
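As a quick sanity check (a minimal sketch, not part of the original tutorial), one batch from the processed dataset can be inspected to confirm that the transforms and batching took effect:

for image, label in train_dataset.create_tuple_iterator():
    # After datapipe: image should be float32 of shape (64, 1, 28, 28), label int32 of shape (64,)
    print(image.shape, image.dtype)
    print(label.shape, label.dtype)
    break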
A custom network structure is defined by subclassing nn.Cell and overriding the __init__ and construct methods.
# Define model
class Network(nn.Cell):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.dense_relu_sequential = nn.SequentialCell(
            nn.Dense(28*28, 512),
            nn.ReLU(),
            nn.Dense(512, 512),
            nn.ReLU(),
            nn.Dense(512, 10)
        )

    def construct(self, x):
        x = self.flatten(x)
        logits = self.dense_relu_sequential(x)
        return logits

model = Network()
print(model)
The dense_relu_sequential defined in __init__ specifies the network's layer structure, built from Dense and ReLU layers. The construct method describes how the input data is transformed.
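To see what construct produces, here is a minimal sketch (not part of the original tutorial, assuming mindspore.ops is available) that passes a dummy MNIST-shaped batch through the untrained model:

from mindspore import ops

# A single all-ones "image" batch of shape (1, 1, 28, 28); nn.Flatten reshapes it to (1, 784)
x = ops.ones((1, 1, 28, 28), mindspore.float32)
logits = model(x)
print(logits.shape)  # (1, 10): one raw score per digit class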
A complete training step of a model involves three stages: forward computation, backward propagation, and parameter optimization.
MindSpore uses a functional automatic differentiation mechanism, so these stages are implemented as follows. First, define the loss function, the optimizer, and the forward computation function:
loss_fn = nn.CrossEntropyLoss()
optimizer = nn.SGD(model.trainable_params(), 1e-2)

def forward_fn(data, label):
    logits = model(data)
    loss = loss_fn(logits, label)
    return loss, logits
Next, use value_and_grad to obtain the gradient computation function:

grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)
Finally, define the training functions: set_train switches the model into training mode, and each step performs forward computation, backward propagation, and parameter optimization:

def train_step(data, label):
    (loss, _), grads = grad_fn(data, label)
    optimizer(grads)
    return loss
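Before wiring train_step into a full training loop, a minimal sketch (not part of the original tutorial, assuming the train_dataset built above) shows what a single optimization step returns:

# Fetch one batch and run a single forward/backward/update step
data, label = next(train_dataset.create_tuple_iterator())
loss = train_step(data, label)
print(loss)  # scalar Tensor: cross-entropy loss on this batch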
def train(model, dataset):
    size = dataset.get_dataset_size()
    model.set_train()
    for batch, (data, label) in enumerate(dataset.create_tuple_iterator()):
        loss = train_step(data, label)

        if batch % 100 == 0:
            loss, current = loss.asnumpy(), batch
            print(f"loss: {loss:>7f} [{current:>3d}/{size:>3d}]")
A test function is used to evaluate the model's performance:
def test(model, dataset, loss_fn):
    num_batches = dataset.get_dataset_size()
    model.set_train(False)
    total, test_loss, correct = 0, 0, 0
    for data, label in dataset.create_tuple_iterator():
        pred = model(data)
        total += len(data)
        test_loss += loss_fn(pred, label).asnumpy()
        correct += (pred.argmax(1) == label).asnumpy().sum()
    test_loss /= num_batches
    correct /= total
    print(f"Test: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
One complete pass over the dataset is called an epoch:
epochs = 3
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(model, train_dataset)
    test(model, test_dataset, loss_fn)
print("Done!")
The model's parameters are saved with save_checkpoint:
mindspore.save_checkpoint(model, "model.ckpt")
print("Saved Model to model.ckpt")
Loading the model takes two steps: re-instantiate the network, then load the checkpoint and load its parameters into the network:
# Step 1: re-instantiate the model
model = Network()
# Step 2: load the checkpoint and load the parameters into the network
param_dict = mindspore.load_checkpoint("model.ckpt")
param_not_load, _ = mindspore.load_param_into_net(model, param_dict)
print(param_not_load)
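If every parameter in the checkpoint matches the network, param_not_load is an empty list; any names printed here indicate parameters that were not loaded.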
The loaded model can be used directly for inference:
model.set_train(False)
for data, label in test_dataset:
    pred = model(data)
    predicted = pred.argmax(1)
    print(f'Predicted: "{predicted[:10]}", Actual: "{label[:10]}"')
    break
This section has given a broad picture of how a network comes to life: processing raw data into a dataset, building a simple network structure, and using the automatic differentiation mechanism during training. It also covered how to save and load a model, laying the foundation for the more in-depth topics that follow.