Run the conversion script to turn the original LLaMA 7B weights into the Hugging Face Transformers format:

python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir D:\code\Github_code\llama-main --model_size 7B --output_dir ./output
(llama) D:\code\Github_code\transformers-main>python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir D:\code\Github_code\llama-main --model_size 7B --output_dir ./output
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thouroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Fetching all parameters from the checkpoint at D:\code\Github_code\llama-main\7B.
Loading the checkpoint in a Llama model.
Loading checkpoint shards: 100%|██████████| 33/33 [00:06<00:00,  5.00it/s]
Saving in the Transformers format.
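
Once the script reports "Saving in the Transformers format.", the directory passed to --output_dir (./output above) holds a standard Hugging Face checkpoint, so it can be loaded with the regular transformers API. A minimal sketch, assuming the conversion above succeeded and ./output is the same path (the prompt string is just an illustrative example):

# Load the converted LLaMA checkpoint from the conversion output directory.
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("./output")
model = LlamaForCausalLM.from_pretrained("./output")

# Quick smoke test: tokenize a prompt and generate a short continuation.
prompt = "Hello, my name is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

If you want the new tokenizer behavior mentioned in the warning above, pass legacy=False to LlamaTokenizer.from_pretrained after reading the linked pull request.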