Search for kigner/ruozhiba-llama3 on Hugging Face. The dataset is built from posts on the Ruozhiba (弱智吧) forum.
On Colab, selecting the T4 GPU runtime is sufficient. Then install the unsloth fine-tuning framework; the main reason for choosing it is its low hardware requirements.
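To confirm which GPU the runtime actually assigned, a quick check from a notebook cell (a minimal sketch, assuming a Colab-style environment) is:

```python
# Check the GPU Colab assigned (a T4 is expected on the free tier)
!nvidia-smi
```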
Before installing, read point 4 of this article first. Try installing with the command below, or simply run the installs in order:

```python
pip install xformers==0.0.25.post1
```
The installation code is as follows:

```python
%%capture
# Install the fine-tuning library
import torch
major_version, minor_version = torch.cuda.get_device_capability()
# Colab ships torch 2.2.1, which conflicts with the packages, so install unsloth separately
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
if major_version >= 8:
    # Newer GPUs (Ampere and later)
    !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes
else:
    # Older GPUs (V100, T4, etc.)
    !pip install --no-deps xformers trl peft accelerate bitsandbytes
```
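After the installs finish, it is worth verifying that the packages actually coexist. A minimal check (the exact version numbers you see depend on the runtime):

```python
# Verify torch survived the xformers install and that CUDA is visible
import torch, xformers
print(torch.__version__, xformers.__version__)
print(torch.cuda.is_available())
```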
Once unsloth is installed, load the model:
```python
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048  # maximum context length
dtype = None           # None = auto-detect (float16 on T4/V100, bfloat16 on Ampere+)
load_in_4bit = True    # load in 4-bit quantization to reduce memory use

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
```
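Loading in 4-bit keeps the 8B model well within a T4's 16 GB. A rough way to check (assuming CUDA device 0):

```python
import torch
# Approximate GPU memory held by the 4-bit weights
print(f"{torch.cuda.memory_allocated(0) / 1024**3:.2f} GiB allocated")
```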
A test before fine-tuning produced an error: installing xformers earlier had uninstalled the torch that was already in place. Reinstall xformers with the version pinned:

```python
pip install xformers==0.0.25.post1
```
```python
# Test before fine-tuning
# Chinese Alpaca-style prompt template; it must match the training data format
alpaca_prompt = """下面是描述一个任务,以一个输入然后提供一个回复
### Instruction:
{}
### Input:
{}
### Response:
{}
"""

FastLanguageModel.for_inference(model)  # switch the model into fast inference mode
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "请用中文回答",  # instruction
            "海绵宝宝的书法是不是叫做海绵体?",  # input
            "",  # output: left empty so the model generates it
        )
    ], return_tensors = "pt").to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)
```
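TextStreamer prints tokens to the console as they are generated. If you prefer to get the full reply as a string instead, a sketch without streaming:

```python
# Generate without streaming and decode the whole reply at once
outputs = model.generate(**inputs, max_new_tokens = 128)
print(tokenizer.batch_decode(outputs, skip_special_tokens = True)[0])
```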
The training data is the Hugging Face dataset mentioned above.
```python
# Prepare the fine-tuning dataset
EOS_TOKEN = tokenizer.eos_token  # EOS_TOKEN must be appended to every example

def formatting_prompts_func(examples):
    instructions = examples["instruction"]
    inputs = examples["input"]
    outputs = examples["output"]
    texts = []
    for instruction, input, output in zip(instructions, inputs, outputs):
        # EOS_TOKEN is required, otherwise generation never terminates
        text = alpaca_prompt.format(instruction, input, output) + EOS_TOKEN
        texts.append(text)
    return {"text": texts}

from datasets import load_dataset
dataset = load_dataset("kigner/ruozhiba-llama3-tt", split = "train")
dataset = dataset.map(formatting_prompts_func, batched = True)
```
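Before training, it helps to confirm that the mapping produced the expected template (field names follow the dataset loaded above):

```python
# Inspect one formatted sample: the filled-in template, ending with the EOS token
print(dataset[0]["text"])
```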
```python
# Set the training parameters
from trl import SFTTrainer
from transformers import TrainingArguments

model = FastLanguageModel.get_peft_model(
    model,
    r = 16,  # LoRA rank; 8, 16, 32, 64, or 128 are recommended
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],  # which layers get LoRA adapters
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = "unsloth",  # gradient checkpointing for long contexts
    random_state = 3407,
    use_rslora = False,
    loftq_config = None,
)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    dataset_num_proc = 2,
    packing = False,  # packing can make training on short sequences up to 5x faster
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        max_steps = 60,        # number of fine-tuning steps
        learning_rate = 2e-4,  # learning rate
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
    ),
)
```
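Note the arithmetic behind these settings: with per_device_train_batch_size = 2 and gradient_accumulation_steps = 4, each optimizer update sees an effective batch of 2 × 4 = 8 examples, so max_steps = 60 covers roughly 480 training samples, enough for a quick demonstration run.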
With the parameters above set, training can begin:

```python
trainer_stats = trainer.train()
```
During training the loss is printed at every step (logging_steps = 1); if it trends steadily downward, the model is learning the data.
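The returned trainer_stats is a standard transformers TrainOutput, so the run can be inspected afterwards, for example:

```python
# Inspect the run statistics returned by trainer.train()
print(trainer_stats.metrics["train_runtime"], "seconds")
print(trainer_stats.training_loss)
```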
```python
# Test the fine-tuned model
FastLanguageModel.for_inference(model)
input_0 = tokenizer(
    [
        alpaca_prompt.format(
            "请用中文回答",  # instruction
            "海绵宝宝的书法是不是叫做海绵体?",  # input
            "",  # output: left empty so the model generates it
        )
    ], return_tensors = "pt").to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**input_0, streamer = text_streamer, max_new_tokens = 128)
```
```python
# Save the LoRA adapter
model.save_pretrained("lora_model")
```
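The saved adapter can be reloaded later in a fresh session; unsloth's from_pretrained accepts the adapter directory directly (same settings as before):

```python
# Reload the saved LoRA adapter for inference in a new session
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "lora_model",
    max_seq_length = 2048,
    dtype = None,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)
```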
```python
# Merge the adapter into the base model and save it as a 4-bit quantized GGUF
model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")
```
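The exported GGUF can then be served by llama.cpp front ends such as GPT4All or Ollama for CPU inference, which is what the video linked below demonstrates. A hypothetical setup (the GGUF filename under model/ is an assumption, so check what save_pretrained_gguf actually wrote, and the model name ruozhiba-llama3 is arbitrary):

```
# Modelfile (the GGUF filename is an assumption; check the model/ directory)
FROM ./model/unsloth.Q4_K_M.gguf
```

Then register and run it with `ollama create ruozhiba-llama3 -f Modelfile` followed by `ollama run ruozhiba-llama3`.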
For more detail, watch the uploader's video linked below; this article is a written walkthrough of that video, together with the problems I ran into while reproducing it.
## References
[Fine-tuning Llama3 in Chinese on Windows: just 5 minutes on a single 8 GB GPU, pluggable into GPT4All and Ollama for CPU inference chat, with a one-click training script (Bilibili)](https://www.bilibili.com/video/BV1kC411n7hD/?spm_id_from=333.337.search-card.all.click&vd_source=e3b1d6ceec31cba80353bfd01d49ed17)