
PyTorch Training Data and Testing: Complete Code (2)


p = {'trainBatch': 6, 'nAveGrad': 1, 'lr': 1e-07, 'wd': 0.0005, 'momentum': 0.9, 'epoch_size': 10, 'optimizer': 'SGD()'}. The value of the last key, 'optimizer', is actually a very long string, so it is not written out in full here. This dictionary has 7 entries.
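Written out as code, the hyperparameter dictionary looks roughly like this. The per-key comments are my reading of the names, not taken from the original script, and the 'optimizer' value stands in for the long repr string mentioned above:

```python
p = {
    'trainBatch': 6,       # images per training batch
    'nAveGrad': 1,         # gradient-accumulation steps before an optimizer update
    'lr': 1e-07,           # learning rate
    'wd': 0.0005,          # weight decay
    'momentum': 0.9,       # SGD momentum
    'epoch_size': 10,      # (its exact meaning depends on the training script)
    'optimizer': 'SGD()',  # placeholder for the long optimizer repr string
}

print(len(p))  # 7, matching the length stated above
```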

The net and criterion objects used here will be explained later.

If resume_epoch == 0, training starts from scratch; otherwise the weights are initialized from an already-trained model using net.load_state_dict, a method defined in the torch.nn.Module class.
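That branch can be sketched as a small helper that decides whether a checkpoint should be restored. The `'<name>_epoch-<N>.pth'` naming pattern and the directory layout are assumptions for illustration, not taken from the original script:

```python
import os

def checkpoint_path(save_dir, model_name, resume_epoch):
    """Return None when training from scratch (resume_epoch == 0), otherwise
    the checkpoint file to restore. The file-naming scheme is hypothetical."""
    if resume_epoch == 0:
        return None  # train from scratch
    # Resuming at epoch N means loading the weights saved after epoch N-1.
    return os.path.join(save_dir, f'{model_name}_epoch-{resume_epoch - 1}.pth')

# In the training script this would be used roughly as:
#   path = checkpoint_path('run/', 'deeplab', resume_epoch)
#   if path is not None:
#       net.load_state_dict(torch.load(path, map_location=lambda storage, loc: storage))
```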

```python
def load_state_dict(self, state_dict, strict=True):
    r"""Copies parameters and buffers from :attr:`state_dict` into
    this module and its descendants. If :attr:`strict` is ``True``, then
    the keys of :attr:`state_dict` must exactly match the keys returned
    by this module's :meth:`~torch.nn.Module.state_dict` function.

    Arguments:
        state_dict (dict): a dict containing parameters and
            persistent buffers.
        strict (bool, optional): whether to strictly enforce that the keys
            in :attr:`state_dict` match the keys returned by this module's
            :meth:`~torch.nn.Module.state_dict` function. Default: ``True``
    """
    missing_keys = []
    unexpected_keys = []
    error_msgs = []

    # copy state_dict so _load_from_state_dict can modify it
    metadata = getattr(state_dict, '_metadata', None)
    state_dict = state_dict.copy()
    if metadata is not None:
        state_dict._metadata = metadata

    def load(module, prefix=''):
        module._load_from_state_dict(
            state_dict, prefix, strict, missing_keys, unexpected_keys, error_msgs)
        for name, child in module._modules.items():
            if child is not None:
                load(child, prefix + name + '.')

    load(self)
```
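The `strict=True` behavior can be illustrated without torch at all: loading fails whenever the checkpoint's keys and the model's keys differ. This toy function reproduces only the key bookkeeping (the missing_keys / unexpected_keys lists above); it is a sketch of the semantics, not PyTorch's implementation:

```python
def check_keys(model_keys, state_dict_keys, strict=True):
    """Mimic load_state_dict's key bookkeeping: keys the model expects but the
    checkpoint lacks are 'missing'; keys the checkpoint carries but the model
    does not know are 'unexpected'. With strict=True either kind is an error,
    just as torch.nn.Module.load_state_dict raises a RuntimeError."""
    missing = [k for k in model_keys if k not in state_dict_keys]
    unexpected = [k for k in state_dict_keys if k not in model_keys]
    if strict and (missing or unexpected):
        raise RuntimeError(f'missing keys: {missing}, unexpected keys: {unexpected}')
    return missing, unexpected

# Exact match loads cleanly; a renamed layer trips strict mode.
print(check_keys(['conv1.weight', 'fc.bias'], ['conv1.weight', 'fc.bias']))
```

This is why checkpoints saved from a plain model often fail to load into a DataParallel-wrapped one (or vice versa): the `'module.'` prefix makes every key unexpected, and passing strict=False or renaming the keys is the usual workaround.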

The torch.load function used inside is defined as follows. Its map_location argument takes three forms: a callable, a string, or a dict.

```python
def load(f, map_location=None, pickle_module=pickle):
    """Loads an object saved with :func:`torch.save` from a file.

    :meth:`torch.load` uses Python's unpickling facilities but treats storages,
    which underlie tensors, specially. They are first deserialized on the
    CPU and are then moved to the device they were saved from. If this fails
    (e.g. because the run time system doesn't have certain devices), an exception
    is raised. However, storages can be dynamically remapped to an alternative
    set of devices using the `map_location` argument.

    If `map_location` is a callable, it will be called once for each serialized
    storage with two arguments: storage and location. The storage argument
    will be the initial deserialization of the storage, residing on the CPU.
    Each serialized storage has a location tag associated with it which
    identifies the device it was saved from, and this tag is the second
    argument passed to `map_location`. The builtin location tags are `'cpu'` for
    CPU tensors and `'cuda:device_id'` (e.g. `'cuda:2'`) for CUDA tensors.
    `map_location` should return either None or a storage. If `map_location` returns
    a storage, it will be used as the final deserialized object, already moved to
    the right device. Otherwise, :func:`torch.load` will fall back to the default
    behavior, as if `map_location` wasn't specified.

    If `map_location` is a string, it should be a device tag, where all tensors
    should be loaded.

    Otherwise, if `map_location` is a dict, it will be used to remap location tags
    appearing in the file (keys), to ones that specify where to put the
    storages (values).

    User extensions can register their own location tags and tagging and
    deserialization methods using `register_package`.

    Args:
        f: a file-like object (has to implement read, readline, tell, and seek),
            or a string containing a file name
        map_location: a function, string or a dict specifying how to remap storage
            locations
        pickle_module: module used for unpickling metadata and objects (has to
            match the pickle_module used to serialize file)

    Example:
        >>> torch.load('tensors.pt')

        # Load all tensors onto the CPU
        >>> torch.load('tensors.pt', map_location='cpu')

        # Load all tensors onto the CPU, using a function
        >>> torch.load('tensors.pt', map_location=lambda storage, loc: storage)

        # Load all tensors onto GPU 1
        >>> torch.load('tensors.pt', map_location=lambda storage, loc: storage.cuda(1))

        # Map tensors from GPU 1 to GPU 0
        >>> torch.load('tensors.pt', map_location={'cuda:1': 'cuda:0'})

        # Load tensor from io.BytesIO object
        >>> with open('tensor.pt', 'rb') as f:
        ...     buffer = io.BytesIO(f.read())
        >>> torch.load(buffer)
    """
```

Set up the GPU. Here:

torch.cuda.set_device(device=0) makes GPU 0 the current CUDA device.

net.cuda() moves the model onto GPU 0.

The writer = SummaryWriter(log_dir=log_dir) call will be explained later.

num_img_tr = len(trainloader)  # 1764
num_img_ts = len(testloader)   # 242; these are numbers of batches
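len() on a DataLoader counts batches, not images: with drop_last=False it is the sample count divided by the batch size, rounded up. A small helper shows the arithmetic; the dataset sizes in the comment are back-calculated assumptions, not figures from the original post:

```python
import math

def num_batches(num_samples, batch_size, drop_last=False):
    """Number of batches a DataLoader yields: the last, partial batch is
    dropped when drop_last=True and kept (rounding up) otherwise."""
    if drop_last:
        return num_samples // batch_size
    return math.ceil(num_samples / batch_size)

# With trainBatch = 6, a training set of 10584 images would yield exactly
# the 1764 batches seen above (10584 / 6 = 1764).
print(num_batches(10584, 6))  # 1764
```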