
LLM Fine-Tuning: PEFT library--get_peft_model()-->injecting LoRA modules on top of an LLM base model (loading flow). Note: the injected LoRA modules' initial parameters are randomly initialized.

get_peft_model

 一、site-packages-->peft-->mapping.py-->get_peft_model()

```python
def get_peft_model(model: PreTrainedModel, peft_config: PeftConfig, adapter_name: str = "default") -> PeftModel:
    """
    Returns a Peft model object from a model and a config.

    Args:
        model ([`transformers.PreTrainedModel`]): Model to be wrapped.
        peft_config ([`PeftConfig`]): Configuration object containing the parameters of the Peft model.
    """
    # Grab the base model's config (fall back to a "custom" model type if absent).
    model_config = getattr(model, "config", {"model_type": "custom"})
    if hasattr(model_config, "to_dict"):
        model_config = model_config.to_dict()

    # Record where the base model came from so the adapter can be reloaded later.
    peft_config.base_model_name_or_path = model.__dict__.get("name_or_path", None)

    # For task types without a dedicated wrapper class, and for non-prompt-learning
    # methods such as LoRA, return the generic PeftModel wrapper directly.
    if peft_config.task_type not in MODEL_TYPE_TO_PEFT_MODEL_MAPPING.keys() and not peft_config.is_prompt_learning:
        return PeftModel(model, peft_config, adapter_name=adapter_name)
```
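The title's point that injected LoRA parameters start out randomly initialized can be illustrated with a minimal, dependency-free sketch. The `LoraLinear` class below is hypothetical, not the peft implementation: it shows the idea that LoRA wraps a frozen weight `W` as `W + scaling * B @ A`, where `A` is randomly initialized and `B` starts at zero, so the wrapped layer initially computes exactly what the base layer did.

```python
import random

class LoraLinear:
    """Hypothetical sketch of LoRA injection (not the peft implementation).

    forward(x) = W @ x + scaling * B @ (A @ x)
    A is randomly initialized; B starts at zero, so at creation the adapter
    contributes nothing and the wrapped layer matches the base layer.
    """

    def __init__(self, weight, r=2, lora_alpha=4):
        self.weight = weight                         # frozen base weight, shape (out, in)
        out_f, in_f = len(weight), len(weight[0])
        rng = random.Random(0)
        # lora_A: shape (r, in) -- random init (peft actually uses Kaiming-uniform)
        self.lora_A = [[rng.gauss(0.0, 0.02) for _ in range(in_f)] for _ in range(r)]
        # lora_B: shape (out, r) -- zero init
        self.lora_B = [[0.0] * r for _ in range(out_f)]
        self.scaling = lora_alpha / r

    @staticmethod
    def _matvec(m, x):
        # Multiply matrix m (list of rows) by vector x.
        return [sum(w * xi for w, xi in zip(row, x)) for row in m]

    def forward(self, x):
        base = self._matvec(self.weight, x)          # frozen path: W @ x
        ax = self._matvec(self.lora_A, x)            # down-projection: (r,)
        bax = self._matvec(self.lora_B, ax)          # up-projection: (out,)
        return [b + self.scaling * d for b, d in zip(base, bax)]

layer = LoraLinear([[1.0, 2.0], [3.0, 4.0]])
print(layer.forward([1.0, -1.0]))  # -> [-1.0, -1.0], identical to the base W @ x since B == 0
```

Because `lora_B` is all zeros at creation, the freshly injected adapter is a no-op; only `lora_A` carries random values, and training then moves both away from this initialization.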