LLaMA Factory makes it easy to fine-tune large language models. This post walks through setting up the environment locally and running a fine-tuning job. The models used here come from the ModelScope community, whose servers are hosted in China, so downloads are pleasantly fast for domestic users.
## Setting up the official LLaMA Factory
Clone the official repository and enter it:

```bash
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
```
Create a `Dockerfile` in the repository root. It installs dependencies from the Aliyun PyPI mirror and exposes the Web UI on port 7860:

```dockerfile
FROM nvcr.io/nvidia/pytorch:24.01-py3

WORKDIR /app

COPY requirements.txt /app/
RUN pip install -i https://mirrors.aliyun.com/pypi/simple -r requirements.txt

COPY . /app/
RUN pip install -i https://mirrors.aliyun.com/pypi/simple -e .[metrics,bitsandbytes,qwen]

VOLUME [ "/root/.cache/huggingface/", "/app/data", "/app/output" ]
EXPOSE 7860

CMD [ "llamafactory-cli", "webui" ]
```
Build the image:

```bash
docker build -f ./Dockerfile -t llama-factory:latest .
```
Start the container. `USE_MODELSCOPE_HUB=1` tells LLaMA Factory to download models from ModelScope instead of the Hugging Face Hub:

```bash
docker run --runtime=nvidia --gpus all \
    -v ./hf_cache:/root/.cache/huggingface/ \
    -v ./data:/app/data \
    -v ./examples:/app/examples \
    -v ./output:/app/output \
    -e CUDA_VISIBLE_DEVICES=0 \
    -e USE_MODELSCOPE_HUB=1 \
    -p 7860:7860 \
    --shm-size 32G \
    --name llama_factory \
    -d llama-factory:latest
```
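Once the container is up, you can tail the logs to confirm the Gradio server is listening, then open http://localhost:7860 in a browser; the container name matches the `--name` used above:

```bash
# Follow the container logs until the Web UI reports it is running
docker logs -f llama_factory

# Or open a shell inside the container to run the CLI commands below
docker exec -it llama_factory bash
```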
Besides the Web UI, fine-tuning, chatting with the result, and merging the LoRA weights are each a single CLI call driven by a YAML config:

```bash
# Fine-tune with LoRA (supervised fine-tuning)
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml

# Chat with the fine-tuned adapter
CUDA_VISIBLE_DEVICES=0 llamafactory-cli chat examples/inference/llama3_lora_sft.yaml

# Merge the LoRA weights into the base model and export it
CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
```
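For reference, here is a minimal sketch of what `examples/lora_single_gpu/llama3_lora_sft.yaml` contains. The exact file ships with the repository and changes between versions; the model path, dataset, and hyperparameter values below are illustrative assumptions, so check them against your checkout:

```yaml
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj

### dataset
dataset: alpaca_en_demo
template: llama3
cutoff_len: 1024

### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10
save_steps: 500

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
fp16: true
```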
## Download llama.cpp
To export the merged model to GGUF format, clone llama.cpp:

```bash
git clone https://github.com/ggerganov/llama.cpp.git
```
## Install dependencies
Install the Python dependencies for the conversion script (here from the Tsinghua PyPI mirror):

```bash
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -r llama.cpp/requirements.txt
```
## Convert the model
Convert the merged Hugging Face model to GGUF with 8-bit quantization (`q8_0`). The input path should point at the directory produced by the export step above:

```bash
python llama.cpp/convert-hf-to-gguf.py /app/models/llama3_lora_sft/ --outfile test-llama3.gguf --outtype q8_0
```
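Optionally, if you also compile llama.cpp itself, you can smoke-test the GGUF file directly before handing it to Ollama. Note the build step and binary name are assumptions tied to llama.cpp versions contemporary with this guide (`make` and `main`; newer releases use CMake and renamed the binary `llama-cli`):

```bash
# Build llama.cpp (older Makefile-based build)
cd llama.cpp && make -j

# Run a short completion against the converted model
./main -m ../test-llama3.gguf -p "Hello, how are you?" -n 64
```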
## Run with Ollama

Create an Ollama `Modelfile` (saved here as `testmodel`, matching the `-f` argument below). It points at the GGUF file and sets the Llama 3 chat template:

```text
FROM ./test-llama3.gguf
TEMPLATE "{{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|>"
```
Register the model with Ollama and start an interactive chat:

```bash
ollama create testllama3 -f testmodel
ollama run testllama3
```
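Besides the interactive REPL, the model can also be queried over Ollama's local REST API (listening on port 11434 by default):

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "testllama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```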
This post has only briefly walked through the local fine-tuning workflow with LLaMA Factory: after tuning, the model is exported to GGUF format and run with ollama. For the actual tuning parameters you still need to consult the official LLaMA Factory site, and I have to grumble a bit here: the documentation really isn't great, so you end up reading the source code.