Add the following code in the training, validation, and test stages:
- # Whether to use multiple GPUs
- if self.args.multi_gpu:
-     model = nn.DataParallel(model)
After the model is wrapped with nn.DataParallel, any inputs that are still on the CPU are moved to the GPUs automatically when they are fed to the model, so there is no need to transfer them manually with .cuda() or .to('cuda') (a short training-loop sketch after the snippet below illustrates this).
- import torch
- import torch.nn as nn
- import config
- import argparse
-
-
- ...
-
-
- model = Mymodel()
-
- ...
-
- # Whether to use multiple GPUs
- if self.args.multi_gpu:
-     model = nn.DataParallel(model)
-
- model = model.cuda()  # the (wrapped) model itself still has to live on the GPU
-
- train()
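To make the automatic data transfer concrete, here is a minimal, self-contained sketch (toy nn.Linear model and random data, purely illustrative; it assumes at least one visible GPU). Inputs fed to the wrapped model can stay on the CPU, while tensors used outside the model's forward, such as the targets passed to the loss, still need to be moved by hand:
- import torch
- import torch.nn as nn
-
- # Wrap a toy model and move it to the default GPU.
- model = nn.DataParallel(nn.Linear(10, 2)).cuda()
- optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
- criterion = nn.CrossEntropyLoss()
-
- for step in range(3):
-     x = torch.randn(32, 10)          # stays on the CPU; DataParallel scatters it to the GPUs
-     y = torch.randint(0, 2, (32,))
-     out = model(x)                   # no manual x.cuda() needed
-     loss = criterion(out, y.cuda())  # targets are used outside forward(), so move them
-     optimizer.zero_grad()
-     loss.backward()
-     optimizer.step()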
To debug this script from VS Code, start from the default launch.json generated by the Python extension:
- {
- // Use IntelliSense to learn about possible attributes.
- // Hover to view descriptions of existing attributes.
- // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
- "version": "0.2.0",
- "configurations": [
- {
- "name": "Python: Current File",
- "type": "python",
- "request": "launch",
- "program": "${file}",
- "console": "integratedTerminal",
- "justMyCode": true
- }
- ]
- }
Change "request" from "launch" to "attach", set "justMyCode" to false, and comment out the following lines:
- // "program": "${file}",
- // "console": "integratedTerminal",
Then add a connect block; the port must match the one debugpy will listen on:
- "connect": {
- "host": "localhost",
- "port": 50678
- }
The complete launch.json after these changes:
- {
- // Use IntelliSense to learn about possible attributes.
- // Hover to view descriptions of existing attributes.
- // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
- "version": "0.2.0",
- "configurations": [
- {
- "name": "Python: Current File",
- "type": "python",
- "request": "attach",
- // "program": "${file}",
- // "console": "integratedTerminal",
- "justMyCode": false,
- "connect": {
- "host": "localhost",
- "port": 50678
- }
- }
- ]
- }
Launch training under debugpy, selecting the GPUs used for debugging via CUDA_VISIBLE_DEVICES (GPUs 2 and 3 in the script below):
- #!/usr/bin/env bash
-
- # Select the GPU indices visible to the process
- export CUDA_VISIBLE_DEVICES=2,3
- python3 -m debugpy --listen 50678 --wait-for-client train.py
sh -x train.sh
After the script starts, it pauses at the entry point of train.py (because of --wait-for-client); press F5 in VS Code with the attach configuration above to connect the debugger and start stepping through the Python code.
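As an alternative (not used in the original script), the same attach workflow can be started from inside train.py instead of the python3 -m debugpy launcher; the port just has to match the "connect" block in launch.json:
- import debugpy
-
- # Listen on the port that launch.json connects to, then block until VS Code attaches.
- debugpy.listen(("localhost", 50678))
- debugpy.wait_for_client()
-
- # ... the rest of train.py (argument parsing, model construction, train()) runs as usual ...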