
ChatGLM2-6B Windows/LoRA fine-tuning: errors and pitfalls


Tutorial reference: ChatGLM2-6B 微调（初体验） on Zhihu (知乎)

Traceback (most recent call last):
  File "C:\Users\Admin\ChatGLM-Efficient-Tuning\src\train_bash.py", line 21, in <module>
    main()
  File "C:\Users\Admin\ChatGLM-Efficient-Tuning\src\train_bash.py", line 5, in main
    model_args, data_args, training_args, finetuning_args, general_args = get_train_args()
  File "C:\Users\Admin\ChatGLM-Efficient-Tuning\src\glmtuner\tuner\core\parser.py", line 35, in get_train_args
    model_args, data_args, training_args, finetuning_args, general_args = parser.parse_args_into_dataclasses()
  File "E:\Aconda\envs\py310_chat\lib\site-packages\transformers\hf_argparser.py", line 338, in parse_args_into_dataclasses
    obj = dtype(**inputs)
  File "<string>", line 119, in __init__
  File "E:\Aconda\envs\py310_chat\lib\site-packages\transformers\training_args.py", line 1405, in __post_init__
    raise ValueError(
ValueError: FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation (`--fp16_full_eval`) can only be used on CUDA or NPU devices.

The machine was fine and CUDA was fine. It turned out that, to save effort, I had installed PyTorch via plain pip and ended up with the CPU-only build. A low-level mistake.
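A quick way to confirm the culprit (a minimal sketch assuming a standard Python environment; the CUDA wheel has to come from the official PyTorch index rather than the default pip source):

import torch

# A CPU-only wheel typically reports a version suffix like "+cpu" and sees no CUDA device.
print(torch.__version__)
# This must print True before --fp16 training can work; on the CPU-only build it is False,
# which is exactly what triggers the "can only be used on CUDA or NPU devices" error above.
print(torch.cuda.is_available())
# Fix (example command; pick the CUDA version matching your driver):
# pip install torch --index-url https://download.pytorch.org/whl/cu118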

The next error:

08/30/2023 15:59:41 - WARNING - glmtuner.tuner.core.parser - We recommend enable fp16 mixed precision training for ChatGLM-6B.
08/30/2023 15:59:41 - WARNING - glmtuner.tuner.core.parser - `ddp_find_unused_parameters` needs to be set as False in DDP training.
Traceback (most recent call last):
  File "C:\Users\Admin\ChatGLM-Efficient-Tuning\src\train_bash.py", line 21, in <module>
    main()
  File "C:\Users\Admin\ChatGLM-Efficient-Tuning\src\train_bash.py", line 5, in main
    model_args, data_args, training_args, finetuning_args, general_args = get_train_args()
  File "C:\Users\Admin\ChatGLM-Efficient-Tuning\src\glmtuner\tuner\core\parser.py", line 78, in get_train_args
    training_args.ddp_find_unused_parameters = False
  File "E:\Aconda\envs\py310_chat\lib\site-packages\transformers\training_args.py", line 1712, in __setattr__
    raise FrozenInstanceError(f"cannot assign to field {name}")
dataclasses.FrozenInstanceError: cannot assign to field ddp_find_unused_parameters
 
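The traceback boils down to glmtuner's parser assigning to a TrainingArguments field after construction, which newer transformers releases refuse. A minimal reproduction sketch, assuming the transformers 4.32.x behaviour shown in the traceback above:

from transformers import TrainingArguments

args = TrainingArguments(output_dir="out")
# On transformers 4.32.x this assignment raises dataclasses.FrozenInstanceError,
# the same failure that glmtuner/tuner/core/parser.py hits at line 78.
args.ddp_find_unused_parameters = False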

I was stuck on this for many hours, tried everything, and almost gave up out of frustration.

It turned out the transformers version was simply too high... downgrading it fixed everything...

transformers 4.32.1 --> 4.30.1
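To pin the working version and confirm it took effect (the downgrade itself is just pip install transformers==4.30.1; the exact compatible version may vary with the ChatGLM-Efficient-Tuning commit you are on):

# pip install transformers==4.30.1
import transformers

print(transformers.__version__)   # expect 4.30.1 after the downgrade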
