
Training pyskl models with a single GPU on Windows

pyskl

The official pyskl code is written to run on Linux. In the GitHub issues the author does not provide Windows-specific changes, but does point to the PyTorch distributed-training documentation as a reference for what to modify. This post records the modifications needed to train with a single GPU on Windows.

Download the code

PYSKL Release v0.2:https://github.com/kennymckormick/pyskl/releases/tag/v0.2

Install pyskl following the official instructions:

git clone https://github.com/kennymckormick/pyskl.git
cd pyskl
# This command runs well with conda 22.9.0, if you are running an early conda version and got some errors, try to update your conda first
conda env create -f pyskl.yaml
conda activate pyskl
pip install -e .
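
Before going further, it is worth confirming that the environment has a CUDA-enabled PyTorch build and can actually see the GPU; single-GPU training will not work with a CPU-only wheel. A minimal check (nothing pyskl-specific is assumed):

import torch

# A CUDA build prints a non-None CUDA version and True; False means a CPU-only wheel.
print(torch.__version__, torch.version.cuda)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))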

 

Install the required packages

The package versions that were tested successfully are shown in a screenshot in the original post (the versions referenced later in this article include mmcv-full 1.5.0 and yapf 0.40.1).
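A quick way to print what your own environment actually has (the package list below is an assumption based on the packages discussed in this post; adjust it to your setup):

from importlib.metadata import PackageNotFoundError, version

# Requires Python 3.8+; prints the installed version of each package, if any.
for pkg in ('torch', 'mmcv-full', 'numpy', 'scipy', 'yapf'):
    try:
        print(f'{pkg}=={version(pkg)}')
    except PackageNotFoundError:
        print(f'{pkg} is not installed')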

Modify the script for local runs in PyCharm

Prefix the config argument with '--' and give it a default value, so the script can be launched without command-line arguments:

def parse_args():
    parser = argparse.ArgumentParser(description='Train a recognizer')
    parser.add_argument('--config', default='../configs/posec3d/c3d_light_gym/joint.py', help='train config file path')
    parser.add_argument(
        '--validate',
        action='store_true',
        help='whether to evaluate the checkpoint during training')
    parser.add_argument(
        '--test-last',
        action='store_true',
        help='whether to test the checkpoint after training')
    parser.add_argument(
        '--test-best',
        action='store_true',
        help='whether to test the best checkpoint (if applicable) after training')
    parser.add_argument('--seed', type=int, default=None, help='random seed')
    parser.add_argument(
        '--deterministic',
        action='store_true',
        help='whether to set deterministic options for CUDNN backend.')
    parser.add_argument(
        '--launcher',
        choices=['pytorch', 'slurm'],
        default='pytorch',
        help='job launcher')
    # parser.add_argument('--local_rank', type=int, default=0)
    args = parser.parse_args()
    # if 'LOCAL_RANK' not in os.environ:
    #     os.environ['LOCAL_RANK'] = str(args.local_rank)
    return args
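
For context, the only change relative to the stock script is that the positional config argument becomes an optional --config flag with a default, which is what lets PyCharm run the file with an empty run configuration. A standalone sketch of that pattern (the default path is the one used above and will differ for other configs):

import argparse

parser = argparse.ArgumentParser(description='Train a recognizer')
# Original: parser.add_argument('config', ...)  -> a path must be given on the command line.
# Modified: an optional flag with a default, so running with no arguments still works.
parser.add_argument('--config',
                    default='../configs/posec3d/c3d_light_gym/joint.py',
                    help='train config file path')
args = parser.parse_args([])   # simulate an empty command line
print(args.config)             # -> ../configs/posec3d/c3d_light_gym/joint.py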

Modify the distributed-training code

tools/train.py

Comment out or modify the lines listed below (the init_dist call with backend='nccl' has to go, since NCCL is not available on Windows):

# Line 47: comment out
parser.add_argument('--local_rank', type=int, default=0)
# Lines 49-50: comment out
if 'LOCAL_RANK' not in os.environ:
    os.environ['LOCAL_RANK'] = str(args.local_rank)
# Lines 70-74: comment out
if not hasattr(cfg, 'dist_params'):
    cfg.dist_params = dict(backend='nccl')
init_dist(args.launcher, **cfg.dist_params)
rank, world_size = get_dist_info()
# Line 75: modify
# cfg.gpu_ids = range(world_size)
cfg.gpu_ids = [0]
# Line 134: modify
# if rank == 0 and memcached:
if memcached:
# Line 153: modify
# if rank == 0 and memcached:
if memcached:
# Comment out every dist.barrier() call (lines 148 and 151)

The fully modified file is below; simply replace tools/train.py in the pyskl repo with it.

# Copyright (c) OpenMMLab. All rights reserved.
# flake8: noqa: E722
import argparse
import os
import os.path as osp
import time

import mmcv
import torch
import torch.distributed as dist
from mmcv import Config
from mmcv.runner import get_dist_info, init_dist, set_random_seed
from mmcv.utils import get_git_hash

from pyskl import __version__
from pyskl.apis import init_random_seed, train_model
from pyskl.datasets import build_dataset
from pyskl.models import build_model
from pyskl.utils import collect_env, get_root_logger, mc_off, mc_on, test_port


def parse_args():
    parser = argparse.ArgumentParser(description='Train a recognizer')
    parser.add_argument('--config', default='../configs/posec3d/c3d_light_gym/joint.py', help='train config file path')
    parser.add_argument(
        '--validate',
        action='store_true',
        help='whether to evaluate the checkpoint during training')
    parser.add_argument(
        '--test-last',
        action='store_true',
        help='whether to test the checkpoint after training')
    parser.add_argument(
        '--test-best',
        action='store_true',
        help='whether to test the best checkpoint (if applicable) after training')
    parser.add_argument('--seed', type=int, default=None, help='random seed')
    parser.add_argument(
        '--deterministic',
        action='store_true',
        help='whether to set deterministic options for CUDNN backend.')
    parser.add_argument(
        '--launcher',
        choices=['pytorch', 'slurm'],
        default='pytorch',
        help='job launcher')
    # parser.add_argument('--local_rank', type=int, default=0)
    args = parser.parse_args()
    # if 'LOCAL_RANK' not in os.environ:
    #     os.environ['LOCAL_RANK'] = str(args.local_rank)
    return args


def main():
    args = parse_args()

    cfg = Config.fromfile(args.config)

    # set cudnn_benchmark
    if cfg.get('cudnn_benchmark', False):
        torch.backends.cudnn.benchmark = True

    # work_dir is determined in this priority:
    # config file > default (base filename)
    if cfg.get('work_dir', None) is None:
        # use config filename as default work_dir if cfg.work_dir is None
        cfg.work_dir = osp.join('./work_dirs', osp.splitext(osp.basename(args.config))[0])

    # Distributed initialization is removed for single-GPU training on Windows; GPU 0 is used.
    # if not hasattr(cfg, 'dist_params'):
    #     cfg.dist_params = dict(backend='nccl')
    #
    # init_dist(args.launcher, **cfg.dist_params)
    # rank, world_size = get_dist_info()
    cfg.gpu_ids = [0]

    auto_resume = cfg.get('auto_resume', True)
    if auto_resume and cfg.get('resume_from', None) is None:
        resume_pth = osp.join(cfg.work_dir, 'latest.pth')
        if osp.exists(resume_pth):
            cfg.resume_from = resume_pth

    # create work_dir
    mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
    # dump config
    cfg.dump(osp.join(cfg.work_dir, osp.basename(args.config)))
    # init logger before other steps
    timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())
    log_file = osp.join(cfg.work_dir, f'{timestamp}.log')
    logger = get_root_logger(log_file=log_file, log_level=cfg.log_level)

    # init the meta dict to record some important information such as
    # environment info and seed, which will be logged
    meta = dict()
    # log env info
    env_info_dict = collect_env()
    env_info = '\n'.join([f'{k}: {v}' for k, v in env_info_dict.items()])
    dash_line = '-' * 60 + '\n'
    logger.info('Environment info:\n' + dash_line + env_info + '\n' +
                dash_line)
    meta['env_info'] = env_info

    # log some basic info
    logger.info(f'Config: {cfg.pretty_text}')

    # set random seeds
    seed = init_random_seed(args.seed)
    logger.info(f'Set random seed to {seed}, deterministic: {args.deterministic}')
    set_random_seed(seed, deterministic=args.deterministic)

    cfg.seed = seed
    meta['seed'] = seed
    meta['config_name'] = osp.basename(args.config)
    meta['work_dir'] = osp.basename(cfg.work_dir.rstrip('/\\'))

    model = build_model(cfg.model)

    datasets = [build_dataset(cfg.data.train)]

    cfg.workflow = cfg.get('workflow', [('train', 1)])
    assert len(cfg.workflow) == 1
    if cfg.checkpoint_config is not None:
        # save pyskl version, config file content and class names in
        # checkpoints as meta data
        cfg.checkpoint_config.meta = dict(
            pyskl_version=__version__ + get_git_hash(digits=7),
            config=cfg.pretty_text)

    test_option = dict(test_last=args.test_last, test_best=args.test_best)

    default_mc_cfg = ('localhost', 22077)
    memcached = cfg.get('memcached', False)

    # if rank == 0 and memcached:
    if memcached:
        # mc_list is a list of pickle files you want to cache in memory.
        # Basically, each pickle file is a dictionary.
        mc_cfg = cfg.get('mc_cfg', default_mc_cfg)
        assert isinstance(mc_cfg, tuple) and mc_cfg[0] == 'localhost'
        if not test_port(mc_cfg[0], mc_cfg[1]):
            mc_on(port=mc_cfg[1], launcher=args.launcher)
        retry = 3
        while not test_port(mc_cfg[0], mc_cfg[1]) and retry > 0:
            time.sleep(5)
            retry -= 1
        assert retry >= 0, 'Failed to launch memcached. '

    # dist.barrier()
    train_model(model, datasets, cfg, validate=args.validate, test=test_option, timestamp=timestamp, meta=meta)
    # dist.barrier()

    # if rank == 0 and memcached:
    if memcached:
        mc_off()


if __name__ == '__main__':
    main()

pyskl/apis/train.py

This file contains the core training logic used by tools/train.py. It wraps the model with mmcv's distributed-training classes, which need to be replaced for single-GPU use:

# Line 10: comment out and replace
# from mmcv.parallel import MMDistributedDataParallel
from mmcv.parallel import MMDistributedDataParallel, MMDataParallel
# Lines 94-98: comment out and replace
# model = MMDistributedDataParallel(
#     model.cuda(),
#     device_ids=[torch.cuda.current_device()],
#     broadcast_buffers=False,
#     find_unused_parameters=find_unused_parameters)
model = MMDataParallel(model.cuda())
# Line 147: comment out dist.barrier()
# dist.barrier()
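
MMDataParallel is mmcv's single-process wrapper (a thin subclass of torch.nn.DataParallel), so it needs no process group and no NCCL backend, which is what makes it usable on Windows. A toy sketch of just the wrapping step, where the Linear layer is only a stand-in for the recognizer returned by build_model():

import torch
import torch.nn as nn
from mmcv.parallel import MMDataParallel

model = nn.Linear(8, 2)                                # placeholder for build_model(cfg.model)
model = MMDataParallel(model.cuda(), device_ids=[0])   # single GPU, no init_dist required
print(type(model).__name__, next(model.parameters()).device)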

Complete code of the modified sampling pipeline (the sample.py file referred to in the np.int section further down, with the fixes already applied):

# Copyright (c) OpenMMLab. All rights reserved.
import warnings

import numpy as np

from ..builder import PIPELINES


@PIPELINES.register_module()
class UniformSampleFrames:
    """Uniformly sample frames from the video.

    To sample an n-frame clip from the video. UniformSampleFrames basically
    divide the video into n segments of equal length and randomly sample one
    frame from each segment. To make the testing results reproducible, a
    random seed is set during testing, to make the sampling results
    deterministic.

    Required keys are "total_frames", "start_index" , added or modified keys
    are "frame_inds", "clip_len", "frame_interval" and "num_clips".

    Args:
        clip_len (int): Frames of each sampled output clip.
        num_clips (int): Number of clips to be sampled. Default: 1.
        test_mode (bool): Store True when building test or validation dataset.
            Default: False.
        seed (int): The random seed used during test time. Default: 255.
    """

    def __init__(self,
                 clip_len,
                 num_clips=1,
                 test_mode=False,
                 float_ok=False,
                 p_interval=1,
                 seed=255):
        self.clip_len = clip_len
        self.num_clips = num_clips
        self.test_mode = test_mode
        self.float_ok = float_ok
        self.seed = seed
        self.p_interval = p_interval
        if not isinstance(p_interval, tuple):
            self.p_interval = (p_interval, p_interval)
        if self.float_ok:
            warnings.warn('When float_ok == True, there will be no loop.')

    def _get_train_clips(self, num_frames, clip_len):
        """Uniformly sample indices for training clips.

        Args:
            num_frames (int): The number of frames.
            clip_len (int): The length of the clip.
        """
        allinds = []
        for clip_idx in range(self.num_clips):
            old_num_frames = num_frames
            pi = self.p_interval
            ratio = np.random.rand() * (pi[1] - pi[0]) + pi[0]
            num_frames = int(ratio * num_frames)
            off = np.random.randint(old_num_frames - num_frames + 1)

            if self.float_ok:
                interval = (num_frames - 1) / clip_len
                offsets = np.arange(clip_len) * interval
                inds = np.random.rand(clip_len) * interval + offsets
                inds = inds.astype(np.float32)
            elif num_frames < clip_len:
                start = np.random.randint(0, num_frames)
                inds = np.arange(start, start + clip_len)
            elif clip_len <= num_frames < 2 * clip_len:
                basic = np.arange(clip_len)
                inds = np.random.choice(
                    clip_len + 1, num_frames - clip_len, replace=False)
                offset = np.zeros(clip_len + 1, dtype=np.int64)
                offset[inds] = 1
                offset = np.cumsum(offset)
                inds = basic + offset[:-1]
            else:
                bids = np.array(
                    [i * num_frames // clip_len for i in range(clip_len + 1)])
                bsize = np.diff(bids)
                bst = bids[:clip_len]
                offset = np.random.randint(bsize)
                inds = bst + offset

            inds = inds + off
            num_frames = old_num_frames

            allinds.append(inds)

        return np.concatenate(allinds)

    def _get_test_clips(self, num_frames, clip_len):
        """Uniformly sample indices for testing clips.

        Args:
            num_frames (int): The number of frames.
            clip_len (int): The length of the clip.
        """
        np.random.seed(self.seed)
        if self.float_ok:
            interval = (num_frames - 1) / clip_len
            offsets = np.arange(clip_len) * interval
            inds = np.concatenate([
                np.random.rand(clip_len) * interval + offsets
                for i in range(self.num_clips)
            ]).astype(np.float32)

        all_inds = []

        for i in range(self.num_clips):
            old_num_frames = num_frames
            pi = self.p_interval
            ratio = np.random.rand() * (pi[1] - pi[0]) + pi[0]
            num_frames = int(ratio * num_frames)
            off = np.random.randint(old_num_frames - num_frames + 1)

            if num_frames < clip_len:
                start_ind = i if num_frames < self.num_clips else i * num_frames // self.num_clips
                inds = np.arange(start_ind, start_ind + clip_len)
            elif clip_len <= num_frames < clip_len * 2:
                basic = np.arange(clip_len)
                inds = np.random.choice(clip_len + 1, num_frames - clip_len, replace=False)
                offset = np.zeros(clip_len + 1, dtype=np.int64)
                offset[inds] = 1
                offset = np.cumsum(offset)
                inds = basic + offset[:-1]
            else:
                bids = np.array([i * num_frames // clip_len for i in range(clip_len + 1)])
                bsize = np.diff(bids)
                bst = bids[:clip_len]
                offset = np.random.randint(bsize)
                inds = bst + offset

            all_inds.append(inds + off)
            num_frames = old_num_frames

        return np.concatenate(all_inds)

    def __call__(self, results):
        num_frames = results['total_frames']

        if self.test_mode:
            inds = self._get_test_clips(num_frames, self.clip_len)
        else:
            inds = self._get_train_clips(num_frames, self.clip_len)

        inds = np.mod(inds, num_frames)
        start_index = results['start_index']
        inds = inds + start_index

        if 'keypoint' in results:
            kp = results['keypoint']
            assert num_frames == kp.shape[1]
            num_person = kp.shape[0]
            num_persons = [num_person] * num_frames
            for i in range(num_frames):
                j = num_person - 1
                while j >= 0 and np.all(np.abs(kp[j, i]) < 1e-5):
                    j -= 1
                num_persons[i] = j + 1
            transitional = [False] * num_frames
            for i in range(1, num_frames - 1):
                if num_persons[i] != num_persons[i - 1]:
                    transitional[i] = transitional[i - 1] = True
                if num_persons[i] != num_persons[i + 1]:
                    transitional[i] = transitional[i + 1] = True
            inds_int = inds.astype(int)
            coeff = np.array([transitional[i] for i in inds_int])
            inds = (coeff * inds_int + (1 - coeff) * inds).astype(np.float32)

        results['frame_inds'] = inds if self.float_ok else inds.astype(int)
        results['clip_len'] = self.clip_len
        results['frame_interval'] = None
        results['num_clips'] = self.num_clips
        return results

    def __repr__(self):
        repr_str = (f'{self.__class__.__name__}('
                    f'clip_len={self.clip_len}, '
                    f'num_clips={self.num_clips}, '
                    f'test_mode={self.test_mode}, '
                    f'seed={self.seed})')
        return repr_str


@PIPELINES.register_module()
class UniformSample(UniformSampleFrames):
    pass


@PIPELINES.register_module()
class SampleFrames:
    """Sample frames from the video.

    Required keys are "total_frames", "start_index" , added or modified keys
    are "frame_inds", "frame_interval" and "num_clips".

    Args:
        clip_len (int): Frames of each sampled output clip.
        frame_interval (int): Temporal interval of adjacent sampled frames.
            Default: 1.
        num_clips (int): Number of clips to be sampled. Default: 1.
        temporal_jitter (bool): Whether to apply temporal jittering.
            Default: False.
        twice_sample (bool): Whether to use twice sample when testing.
            If set to True, it will sample frames with and without fixed shift,
            which is commonly used for testing in TSM model. Default: False.
        out_of_bound_opt (str): The way to deal with out of bounds frame
            indexes. Available options are 'loop', 'repeat_last'.
            Default: 'loop'.
        test_mode (bool): Store True when building test or validation dataset.
            Default: False.
        start_index (None): This argument is deprecated and moved to dataset
            class (``BaseDataset``, ``VideoDatset``, ``RawframeDataset``, etc),
            see this: https://github.com/open-mmlab/mmaction2/pull/89.
        keep_tail_frames (bool): Whether to keep tail frames when sampling.
            Default: False.
    """

    def __init__(self,
                 clip_len,
                 frame_interval=1,
                 num_clips=1,
                 temporal_jitter=False,
                 twice_sample=False,
                 out_of_bound_opt='loop',
                 test_mode=False,
                 start_index=None,
                 keep_tail_frames=False):

        self.clip_len = clip_len
        self.frame_interval = frame_interval
        self.num_clips = num_clips
        self.temporal_jitter = temporal_jitter
        self.twice_sample = twice_sample
        self.out_of_bound_opt = out_of_bound_opt
        self.test_mode = test_mode
        self.keep_tail_frames = keep_tail_frames
        assert self.out_of_bound_opt in ['loop', 'repeat_last']

        if start_index is not None:
            warnings.warn('No longer support "start_index" in "SampleFrames", '
                          'it should be set in dataset class, see this pr: '
                          'https://github.com/open-mmlab/mmaction2/pull/89')

    def _get_train_clips(self, num_frames):
        """Get clip offsets in train mode.

        It will calculate the average interval for selected frames,
        and randomly shift them within offsets between [0, avg_interval].
        If the total number of frames is smaller than clips num or origin
        frames length, it will return all zero indices.

        Args:
            num_frames (int): Total number of frame in the video.

        Returns:
            np.ndarray: Sampled frame indices in train mode.
        """
        ori_clip_len = self.clip_len * self.frame_interval

        if self.keep_tail_frames:
            avg_interval = (num_frames - ori_clip_len + 1) / float(
                self.num_clips)
            if num_frames > ori_clip_len - 1:
                base_offsets = np.arange(self.num_clips) * avg_interval
                clip_offsets = (base_offsets + np.random.uniform(
                    0, avg_interval, self.num_clips)).astype(int)
            else:
                clip_offsets = np.zeros((self.num_clips, ), dtype=int)
        else:
            avg_interval = (num_frames - ori_clip_len + 1) // self.num_clips

            if avg_interval > 0:
                base_offsets = np.arange(self.num_clips) * avg_interval
                clip_offsets = base_offsets + np.random.randint(
                    avg_interval, size=self.num_clips)
            elif num_frames > max(self.num_clips, ori_clip_len):
                clip_offsets = np.sort(
                    np.random.randint(
                        num_frames - ori_clip_len + 1, size=self.num_clips))
            elif avg_interval == 0:
                ratio = (num_frames - ori_clip_len + 1.0) / self.num_clips
                clip_offsets = np.around(np.arange(self.num_clips) * ratio)
            else:
                clip_offsets = np.zeros((self.num_clips, ), dtype=int)

        return clip_offsets

    def _get_test_clips(self, num_frames):
        """Get clip offsets in test mode.

        Calculate the average interval for selected frames, and shift them
        fixedly by avg_interval/2. If set twice_sample True, it will sample
        frames together without fixed shift. If the total number of frames is
        not enough, it will return all zero indices.

        Args:
            num_frames (int): Total number of frame in the video.

        Returns:
            np.ndarray: Sampled frame indices in test mode.
        """
        ori_clip_len = self.clip_len * self.frame_interval
        avg_interval = (num_frames - ori_clip_len + 1) / float(self.num_clips)
        if num_frames > ori_clip_len - 1:
            base_offsets = np.arange(self.num_clips) * avg_interval
            clip_offsets = (base_offsets + avg_interval / 2.0).astype(int)
            if self.twice_sample:
                clip_offsets = np.concatenate([clip_offsets, base_offsets])
        else:
            clip_offsets = np.zeros((self.num_clips, ), dtype=int)
        return clip_offsets

    def _sample_clips(self, num_frames):
        """Choose clip offsets for the video in a given mode.

        Args:
            num_frames (int): Total number of frame in the video.

        Returns:
            np.ndarray: Sampled frame indices.
        """
        if self.test_mode:
            clip_offsets = self._get_test_clips(num_frames)
        else:
            clip_offsets = self._get_train_clips(num_frames)

        return clip_offsets

    def __call__(self, results):
        """Perform the SampleFrames loading.

        Args:
            results (dict): The resulting dict to be modified and passed
                to the next transform in pipeline.
        """
        total_frames = results['total_frames']

        clip_offsets = self._sample_clips(total_frames)
        frame_inds = clip_offsets[:, None] + np.arange(
            self.clip_len)[None, :] * self.frame_interval
        frame_inds = np.concatenate(frame_inds)

        if self.temporal_jitter:
            perframe_offsets = np.random.randint(
                self.frame_interval, size=len(frame_inds))
            frame_inds += perframe_offsets

        frame_inds = frame_inds.reshape((-1, self.clip_len))
        if self.out_of_bound_opt == 'loop':
            frame_inds = np.mod(frame_inds, total_frames)
        elif self.out_of_bound_opt == 'repeat_last':
            safe_inds = frame_inds < total_frames
            unsafe_inds = 1 - safe_inds
            last_ind = np.max(safe_inds * frame_inds, axis=1)
            new_inds = (safe_inds * frame_inds + (unsafe_inds.T * last_ind).T)
            frame_inds = new_inds
        else:
            raise ValueError('Illegal out_of_bound option.')

        start_index = results['start_index']
        frame_inds = np.concatenate(frame_inds) + start_index
        results['frame_inds'] = frame_inds.astype(int)
        results['clip_len'] = self.clip_len
        results['frame_interval'] = self.frame_interval
        results['num_clips'] = self.num_clips
        return results

    def __repr__(self):
        repr_str = (f'{self.__class__.__name__}('
                    f'clip_len={self.clip_len}, '
                    f'frame_interval={self.frame_interval}, '
                    f'num_clips={self.num_clips}, '
                    f'temporal_jitter={self.temporal_jitter}, '
                    f'twice_sample={self.twice_sample}, '
                    f'out_of_bound_opt={self.out_of_bound_opt}, '
                    f'test_mode={self.test_mode})')
        return repr_str

Reinstall the yapf package

At this point, running train.py may fail with:

TypeError: FormatCode() got an unexpected keyword argument 'verify'

This happens because the installed yapf is too new: newer yapf releases dropped the verify argument that mmcv still passes to FormatCode. Downgrade yapf to 0.40.1:

pip uninstall yapf
pip install yapf==0.40.1 -i https://pypi.tuna.tsinghua.edu.cn/simple

Modify the mmcv-full 1.5.0 code

With mmcv-full 1.5.0 installed, you may hit the following error at runtime:

 UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd3 in position 0: invalid continuation byte

This is an encoding issue. Edit C:\ProgramData\Anaconda3\envs\pyskl\lib\site-packages\mmcv\utils\env.py (the path depends on where your conda environment lives):

# Line 91: change
# env_info['MSVC'] = cc.decode(encoding).partition('\n')[0].strip()
# to
env_info['MSVC'] = cc.decode(encoding, 'ignore').partition('\n')[0].strip()
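
The 'ignore' error handler simply drops bytes that are not valid UTF-8 instead of raising. A standalone illustration of the difference (the byte string is a made-up GBK sample standing in for the compiler banner, which is typically GBK-encoded on a Chinese-locale Windows):

sample = b'\xd3\xc3\xbb\xa7'   # '用户' encoded as GBK, not valid UTF-8
try:
    sample.decode('utf-8')
except UnicodeDecodeError as err:
    print(err)                           # can't decode byte 0xd3 in position 0 ...
print(sample.decode('utf-8', 'ignore'))  # undecodable bytes are dropped, no exception
print(sample.decode('gbk'))              # the text the bytes actually encode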

Modify sample.py

Because newer NumPy versions removed np.int, there is a version conflict here, and downgrading NumPy would in turn conflict with SciPy. The recommended fix is therefore to replace every np.int in sample.py with the builtin int; np.int64 does not need to change.
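A minimal before/after sketch of the replacement; the exact lines in sample.py differ, but the pattern is the same (np.int was removed in NumPy 1.24, while sized aliases such as np.int64 still exist):

import numpy as np

clip_len = 48
offset = np.zeros(clip_len + 1, dtype=int)   # was: dtype=np.int
inds = np.cumsum(offset).astype(int)         # was: .astype(np.int)
inds64 = inds.astype(np.int64)               # np.int64 is still valid and needs no change
print(inds.dtype, inds64.dtype)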

Training:
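
With all of the changes above in place, training can be started from PyCharm (run train.py directly) or from a terminal inside the pyskl environment. A minimal example, assuming the default config set in parse_args above; since that default path is relative, either run from inside the tools directory or pass --config explicitly from the repository root:

cd tools
python train.py --validate --test-last --test-best

# or, from the repository root:
python tools/train.py --config configs/posec3d/c3d_light_gym/joint.py --validate --test-last --test-best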

 
