RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation
Learning to Route Among Specialized Experts for Zero-Shot Generalization
AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models
LoTR: Low Tensor Rank Weight Adaptation
BiLoRA: A Bi-level Optimization Framework for Overfitting-Resilient Low-Rank Adaptation of Large Pre-trained Models
Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation
LoDA: Low-Dimensional Adaptation of Large Language Models
SuperLoRA: Parameter-Efficient Unified Adaptation of Multi-Layer Attention Modules
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
Mixture-of-LoRAs: An Efficient Multitask Tuning for Large Language Models
DoRA: Weight-Decomposed Low-Rank Adaptation
LoRA+: Efficient Low Rank Adaptation of Large Models
Orthogonal Subspace Learning for Language Model Continual Learning