
PyTorch distributed training error: RuntimeError: Socket Timeout


Background: because of the nature of my task, I train on multiple GPUs but test on a single GPU. The test set is large and the metric computation during testing is heavy, so the test phase takes a long time.
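To make the failure mode concrete, here is a minimal sketch of that strategy under a standard DDP launch. The names run, train_one_epoch, and evaluate are hypothetical placeholders, not code from the project above:

import torch.distributed as dist

def run(rank, num_epochs, model, train_one_epoch, evaluate):
    # Sketch of the multi-GPU-train / single-GPU-test loop: every rank
    # trains, but only rank 0 runs the (slow) test pass.
    for epoch in range(num_epochs):
        train_one_epoch(model)      # all ranks train on their data shard
        if rank == 0:
            evaluate(model)         # long-running single-GPU test
        # Ranks 1..N-1 block on this collective while rank 0 is still
        # testing; if the wait exceeds the process group's timeout, the
        # blocked ranks fail with errors like "RuntimeError: Socket Timeout".
        dist.barrier()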

Error message:

  File "/home/anys/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 940, in __init__
    self._reset(loader, first_iter=True)
  File "/home/anys/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 971, in _reset
    self._try_put_index()
  File "/home/anys/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1205, in _try_put_index
    index = self._next_index()
  File "/home/anys/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 508, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
  File "/home/anys/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/utils/data/sampler.py", line 227, in __iter__
    for idx in self.sampler:
  File "/home/anys/GRALF/AlignTransReID/TransReID/datasets/sampler_ddp.py", line 148, in __iter__
    seed = shared_random_seed()
  File "/home/anys/GRALF/AlignTransReID/TransReID/datasets/sampler_ddp.py", line 108, in shared_random_seed
    all_ints = all_gather(ints)
  File "/home/anys/GRALF/AlignTransReID/TransReID/datasets/sampler_ddp.py", line 77, in all_gather
    group = _get_global_gloo_group()
  File "/home/anys/GRALF/AlignTransReID/TransReID/datasets/sampler_ddp.py", line 18, in _get_global_gloo_group
    return dist.new_group(backend="gloo")
  File "/home/anys/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2503, in new_group
    pg = _new_process_group_helper(group_world_size,
  File "/home/anys/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 588, in _new_process_group_helper
    pg = ProcessGroupGloo(
RuntimeError: Socket Timeout

The traceback shows that the timeout is raised while the DataLoader's DDP sampler creates a new process group via dist.new_group(backend="gloo"). Creating a process group is a collective operation: every rank must reach the call before it completes. While one rank is still running the long test pass, the other ranks sit waiting in this call, and once the wait exceeds the process group's default timeout (30 minutes), the gloo socket times out. The fix is to give the process group a much longer timeout:

import datetime
import torch.distributed

torch.distributed.new_group(backend="gloo", timeout=datetime.timedelta(days=1))

When this timeout error appears, check every place a process group is created and raise its timeout. The same argument can also be passed where the default process group is initialized, as sketched below.
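For instance, torch.distributed.init_process_group accepts the same timeout parameter. A minimal sketch, assuming the usual env:// launcher variables (MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE) are set; note that for the NCCL backend this timeout is only enforced when NCCL_BLOCKING_WAIT or NCCL_ASYNC_ERROR_HANDLING is enabled:

import datetime
import torch.distributed as dist

# Raise the collective-operation timeout for the default process group as
# well; the default is 30 minutes, which a long single-GPU test pass can
# easily exceed.
dist.init_process_group(
    backend="nccl",  # assumption: replace with the backend your job uses
    timeout=datetime.timedelta(days=1),
)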
