
Converting LLaMA 2 weights to the Hugging Face format: "You are using the default legacy behaviour of the ..."

Running the conversion script bundled with transformers on the original 7B checkpoint prints the tokenizer warning shown below. The warning is informational only; the conversion completes normally.

python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir D:\code\Github_code\llama-main --model_size 7B --output_dir ./output

(llama) D:\code\Github_code\transformers-main>python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir D:\code\Github_code\llama-main --model_size 7B --output_dir ./output
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thouroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Fetching all parameters from the checkpoint at D:\code\Github_code\llama-main\7B.
Loading the checkpoint in a Llama model.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 33/33 [00:06<00:00,  5.00it/s]
Saving in the Transformers format.
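After "Saving in the Transformers format." finishes, the --output_dir from the command above holds a standard Hugging Face checkpoint. A minimal sketch of loading it follows; it assumes transformers and torch are installed, that the conversion above succeeded, and that "./output" is the directory passed as --output_dir. `legacy=False` opts into the corrected tokenizer behaviour referenced by the warning (see the linked PR #24565); leaving it unset keeps the old behaviour.

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

# Directory produced by --output_dir in the conversion command above
model_dir = "./output"

# legacy=False selects the new tokenizer behaviour described in
# https://github.com/huggingface/transformers/pull/24565
tokenizer = LlamaTokenizer.from_pretrained(model_dir, legacy=False)
model = LlamaForCausalLM.from_pretrained(model_dir)

# Quick sanity check: tokenize a prompt and generate a short continuation
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Loading through the `Auto*` classes (`AutoTokenizer`, `AutoModelForCausalLM`) from the same directory should work equally well, since the conversion writes the config files those classes rely on.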
