
Setting up a MindSpore environment and running the MNIST dataset

MindSpore environment setup

1. Install the GPU build

The GPU build requires CUDA and cuDNN. For installing them inside a virtual environment, see my other article:

Installing CUDA and cuDNN in a conda virtual environment on Ubuntu (CSDN Blog)

After installing CUDA and cuDNN, go to the MindSpore official website:

MindSpore official website

```shell
pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.2.12/MindSpore/unified/x86_64/mindspore-2.2.12-cp39-cp39-linux_x86_64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
```

Copy the command and run it directly. I don't recommend installing via conda here; in my experience conda often fails to find the mirror. As long as you have activated the conda-created environment, pip will install only into that environment.

2. MNIST dataset experiment

The official tutorial is here: AI Gallery asset details (Huawei Cloud)

The tutorial's code is written as a Jupyter notebook. I modified it a bit: deleted some parts and added code to pin a specific GPU.

```python
import mindspore
from mindspore import nn, value_and_grad
from mindspore.dataset import vision, transforms
from mindspore.dataset import MnistDataset
from mindspore import context

# Set the execution device to GPU; device_id picks the card (card 1 here)
context.set_context(device_target="GPU", device_id=1)

# Download data from open datasets
from download import download
''' Uncomment on the first run; re-comment after the dataset has been
downloaded, to avoid downloading it again
url = "https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/" \
      "notebook/datasets/MNIST_Data.zip"
path = download(url, "./", kind="zip", replace=True)
'''

train_dataset = MnistDataset('MNIST_Data/train')
test_dataset = MnistDataset('MNIST_Data/test')

# Define the data-processing pipeline
def datapipe(dataset, batch_size):
    image_transforms = [
        vision.Rescale(1.0 / 255.0, 0),
        # Normalize
        vision.Normalize(mean=(0.1307,), std=(0.3081,)),
        # (28, 28, 1) HWC --> (1, 28, 28) CHW
        vision.HWC2CHW()
    ]
    # uint8 --> int32
    label_transform = transforms.TypeCast(mindspore.int32)
    dataset = dataset.map(image_transforms, 'image')
    dataset = dataset.map(label_transform, 'label')
    # (1, 28, 28) --> (64, 1, 28, 28)
    dataset = dataset.batch(batch_size, drop_remainder=True)
    return dataset

train_dataset = datapipe(train_dataset, 64)
test_dataset = datapipe(test_dataset, 64)

# Define model
class Network(nn.Cell):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.dense_relu_sequential = nn.SequentialCell(
            nn.Dense(28*28, 512),
            nn.ReLU(),
            nn.Dense(512, 512),
            nn.ReLU(),
            nn.Dense(512, 10)
        )

    def construct(self, x):
        x = self.flatten(x)
        logits = self.dense_relu_sequential(x)
        return logits

model = Network()

# Instantiate loss function and optimizer
loss_fn = nn.CrossEntropyLoss()
optimizer = nn.SGD(model.trainable_params(), 1e-2)

# 1. Define forward function
def forward_fn(data, label):
    logits = model(data)
    loss = loss_fn(logits, label)
    return loss, logits

# 2. Get gradient function
grad_fn = value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)

def train(model, dataset):
    size = dataset.get_dataset_size()
    model.set_train()
    for batch, (data, label) in enumerate(dataset.create_tuple_iterator()):
        # Earlier attempts, kept for reference:
        #loss = train_step(data, label)
        #logits = model(data)
        #loss = loss_fn(logits, label)
        #grad_fn = value_and_grad(loss_fn(logits, label), None, optimizer.parameters)
        #optimizer(grad_fn)
        (loss, _), grads = grad_fn(data, label)
        optimizer(grads)
        if batch % 100 == 0:
            loss, current = loss.asnumpy(), batch
            print(f"loss: {loss:>7f} [{current:>3d}/{size:>3d}]")

def test(model, dataset, loss_fn):
    num_batches = dataset.get_dataset_size()
    model.set_train(False)
    total, test_loss, correct = 0, 0, 0
    for data, label in dataset.create_tuple_iterator():
        pred = model(data)
        total += len(data)
        test_loss += loss_fn(pred, label).asnumpy()
        correct += (pred.argmax(1) == label).asnumpy().sum()
    test_loss /= num_batches
    correct /= total
    print(f"Test: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")

epochs = 100
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(model, train_dataset)
    test(model, test_dataset, loss_fn)
print("Done!")
```

Running it: the CUDA version I installed is 11.8, slightly higher than recommended, but it doesn't seem to affect normal use.

GPU memory usage before and during the run, checked with both nvidia-smi and gpustat (screenshots omitted).

3. Walkthrough of key code

```python
optimizer = nn.SGD(model.trainable_params(), 1e-2)

# 1. Define forward function
def forward_fn(data, label):
    logits = model(data)
    loss = loss_fn(logits, label)
    return loss, logits

# 2. Get gradient function
grad_fn = value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)
```
```python
for batch, (data, label) in enumerate(dataset.create_tuple_iterator()):
    # Earlier attempts, kept for reference:
    #loss = train_step(data, label)
    #logits = model(data)
    #loss = loss_fn(logits, label)
    #grad_fn = value_and_grad(loss_fn(logits, label), None, optimizer.parameters)
    #optimizer(grad_fn)
    (loss, _), grads = grad_fn(data, label)
    optimizer(grads)
```

My understanding: the main thing worth explaining is MindSpore's value_and_grad function. Its first argument, forward_fn, is a function object (which may itself wrap several nested calls), and the gradients are taken with respect to the parameters listed in optimizer.parameters. This implies that the set of parameters used inside forward_fn must contain (be a superset of) optimizer.parameters.

Backpropagation also needs to know the gap between the output and the target, i.e. the loss. In PyTorch you can call loss.backward() directly; with value_and_grad, the first return value of forward_fn is treated as the loss by default. If forward_fn returns multiple values, set has_aux=True in value_and_grad; then only the first return value participates in backpropagation as the loss.

Leave a comment if you run into any problems.
