
Merging a LoRA adapter and its base model into a single model

The LoRA weights are folded directly into the base model's parameters, so the merged checkpoint can be loaded and served on its own, without PEFT.
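Conceptually, merging adds the low-rank update to every adapted weight matrix: W_merged = W + (alpha / r) * B @ A. A minimal sketch of that update for one linear layer (the shapes, names, and scaling here are illustrative assumptions, not values from this post):

import torch

# Illustration only: hypothetical dimensions for a single adapted layer.
d_out, d_in, r, lora_alpha = 64, 64, 8, 16

W = torch.randn(d_out, d_in)        # frozen base weight
A = torch.randn(r, d_in) * 0.01     # LoRA down-projection
B = torch.randn(d_out, r) * 0.01    # LoRA up-projection

# What merging does for each adapted weight matrix:
W_merged = W + (lora_alpha / r) * (B @ A)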

import torch
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.generation.utils import GenerationConfig  # only needed if restoring generation_config below


def apply_lora(model_name_or_path, output_path, lora_path):
    print(f"Loading the base model from {model_name_or_path}")
    base_tokenizer = AutoTokenizer.from_pretrained(
        model_name_or_path, use_fast=False, trust_remote_code=True
    )
    base = AutoModelForCausalLM.from_pretrained(
        model_name_or_path,
        device_map="cuda:0",
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,
    )
    # base.generation_config = GenerationConfig.from_pretrained(model_name_or_path)

    print(f"Loading the LoRA adapter from {lora_path}")
    lora_model = PeftModel.from_pretrained(
        base,
        lora_path,
        torch_dtype=torch.bfloat16,  # keep the adapter in the base model's dtype (the original passed float16 here)
    )

    print("Applying the LoRA")
    # merge_and_unload() folds the adapter weights into the base model and
    # returns a plain transformers model with no PEFT wrappers.
    model = lora_model.merge_and_unload()

    print(f"Saving the target model to {output_path}")
    model.save_pretrained(output_path)
    base_tokenizer.save_pretrained(output_path)


if __name__ == "__main__":
    lora_path = "/data2/xinyuuliu/LLaMA-Factory/saves/qwen/lora/orpo"
    model_path = "/data2/xinyuuliu/Qwen1.5-7B-Chat"
    output = "/data2/xinyuuliu/LLaMA-Factory/saves/qwen/lora/orpo/lora_merge"
    apply_lora(model_path, output, lora_path)
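
Once merged, the output directory is an ordinary Hugging Face checkpoint and loads without peft installed. A quick sanity check along these lines, reusing the output path from the script above (the prompt is an arbitrary example):

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

merged_path = "/data2/xinyuuliu/LLaMA-Factory/saves/qwen/lora/orpo/lora_merge"

tokenizer = AutoTokenizer.from_pretrained(merged_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    merged_path,
    device_map="cuda:0",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

inputs = tokenizer("你好,请介绍一下你自己。", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))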