
Beginner-Friendly Guide to Deploying Qwen2 Locally

Key point: Qwen2 can be run in two modes, CPU or GPU.

Screenshot of a successful run: (image omitted in this copy)

Prerequisites: if you want to use the GPU, install Ubuntu on a physical machine; otherwise the graphics driver is very hard to install, and I don't recommend that route for beginners. Training and fine-tuning the model requires a GPU. This article demonstrates everything on Ubuntu only.
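Since the choice between the two modes hinges on having a working NVIDIA driver, a quick hedged check (my own, not from the original article) is to look for `nvidia-smi`, which ships with the driver:

```shell
# Sketch: detect whether an NVIDIA driver is installed. nvidia-smi is bundled
# with the driver, so its absence means the GPU path is unavailable and the
# web demo should be launched with --cpu-only (used later in this article).
if command -v nvidia-smi >/dev/null 2>&1; then
  echo "gpu driver detected"
else
  echo "no gpu driver; use --cpu-only"
fi
```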

1. First, install an Ubuntu system. Installation itself is out of scope here; do it yourself. I am running 117~20.04.1-Ubuntu (Ubuntu 20.04.1).

2. Create two directories, one for the Qwen source code and one for the downloaded model.

  mkdir -p /usr/local/project/conda/Qwen  # holds the Qwen2 source code
  mkdir -p /home/zhangwei/llm             # holds the Qwen2 model
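The two `mkdir` commands above can be wrapped in a small script that also verifies the directories were created. `QWEN_SRC` and `QWEN_MODELS` are hypothetical variable names of mine; on a real install, point them at the article's paths (`/usr/local/project/conda/Qwen` and `/home/zhangwei/llm`). The defaults below go under /tmp so the sketch runs without root:

```shell
# Sketch: create both working directories in one go and verify they exist.
# QWEN_SRC / QWEN_MODELS are hypothetical names; override them to match your layout.
QWEN_SRC="${QWEN_SRC:-/tmp/qwen/src}"
QWEN_MODELS="${QWEN_MODELS:-/tmp/qwen/models}"
mkdir -p "$QWEN_SRC" "$QWEN_MODELS"
for d in "$QWEN_SRC" "$QWEN_MODELS"; do
  if [ -d "$d" ]; then echo "ok: $d"; else echo "failed: $d"; fi
done
```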

3. Use git clone to download the source code and the model.

  root@zhangwei-H610M-K-DDR4:/# cd /usr/local/project/conda/Qwen  # enter the source directory
  root@zhangwei-H610M-K-DDR4:/usr/local/project/conda/Qwen# git clone https://github.com/QwenLM/Qwen.git  # download the Qwen source code
  root@zhangwei-H610M-K-DDR4:/usr/local/project/conda/Qwen# ls
  ascend-support  docker  FAQ.md  LICENSE  process_data_law.py  README_ES.md  recipes  tech_memo.md  'Tongyi Qianwen LICENSE AGREEMENT'  tran_data_law1.json
  assets  eval  FAQ_zh.md  NOTICE  qweb_lora_merge.py  README_FR.md  requirements.txt  tokenization_note_ja.md  'Tongyi Qianwen RESEARCH LICENSE AGREEMENT'  utils.py
  cli_demo.py  examples  finetune  openai_api.py  QWEN_TECHNICAL_REPORT.pdf  README_JA.md  requirements_web_demo.txt  tokenization_note.md  train_data_law2.json  web_demo.py
  dcu-support  FAQ_ja.md  finetune.py  output_qwen  README_CN.md  README.md  run_gptq.py  tokenization_note_zh.md  train_data_law.json
  root@zhangwei-H610M-K-DDR4:/usr/local/project/conda/Qwen# cd /home/zhangwei/llm  # enter the model directory
  root@zhangwei-H610M-K-DDR4:/home/zhangwei/llm# git clone https://www.modelscope.cn/qwen/Qwen-1_8B-Chat.git  # download the Qwen-1_8B-Chat model
  root@zhangwei-H610M-K-DDR4:/home/zhangwei/llm# ls
  Qwen-1_8B-Chat  Qwen-1_8B-Chat_law2  Qwen-1_8B-Chat_law3  Qwen-1_8B-Chat_law4  tran_data_law1.json  tran_data_law.json
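A plain `git clone` of a model repository can leave large weight files truncated if the transfer is interrupted, so a hedged sanity check is worth running before moving on. `check_model_dir` is a hypothetical helper of mine (not part of the Qwen repo), and the file list is a minimal guess at a Hugging Face-style layout, not an exhaustive manifest of Qwen-1_8B-Chat:

```shell
# Sketch: verify a cloned model directory has the basic HF-style files.
# check_model_dir is a hypothetical helper; adjust the file list to your model.
check_model_dir() {
  dir="$1"
  for f in config.json tokenizer_config.json; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $f"
      return 1
    fi
  done
  echo "model dir looks complete: $dir"
}
# Example: check_model_dir /home/zhangwei/llm/Qwen-1_8B-Chat
```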

4. Install Miniconda and Python 3.10. (Note: it must be version 3.10, or the demo will not start.)

  root@zhangwei-H610M-K-DDR4:/# wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh --no-check-certificate
  root@zhangwei-H610M-K-DDR4:/# bash ~/miniconda.sh
  root@zhangwei-H610M-K-DDR4:/# conda init
  root@zhangwei-H610M-K-DDR4:/# source ~/.bashrc
  root@zhangwei-H610M-K-DDR4:/# conda --version
  conda 24.5.0
  root@zhangwei-H610M-K-DDR4:/# conda create -n pytorch2 python=3.10
  root@zhangwei-H610M-K-DDR4:/# conda activate pytorch2
  root@zhangwei-H610M-K-DDR4:/# conda install pytorch torchvision torchaudio cpuonly -c pytorch
  root@zhangwei-H610M-K-DDR4:/# python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"
  root@zhangwei-H610M-K-DDR4:/# python --version
  Python 3.10.14
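Since the article stresses that only Python 3.10 works, a small fail-fast check (a sketch of mine, not from the article) can guard the later steps in a provisioning script:

```shell
# Sketch: fail fast unless the active Python is 3.10.x, since web_demo.py
# reportedly will not start on other versions. Uses python3 from PATH.
pyver="$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')"
if [ "$pyver" = "3.10" ]; then
  echo "python $pyver: ok"
else
  echo "python $pyver: expected 3.10 for web_demo.py"
fi
```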

5. Install the required packages.

  root@zhangwei-H610M-K-DDR4:/# cd /usr/local/project/conda/Qwen
  # The source directory contains two requirements files: requirements.txt and requirements_web_demo.txt. Install both.
  root@zhangwei-H610M-K-DDR4:/usr/local/project/conda/Qwen# pip install -r requirements.txt
  root@zhangwei-H610M-K-DDR4:/usr/local/project/conda/Qwen# pip install -r requirements_web_demo.txt
  # Finally, start the web UI
  root@zhangwei-H610M-K-DDR4:/usr/local/project/conda/Qwen# python web_demo.py --server-name 0.0.0.0 -c /home/zhangwei/llm/Qwen-1_8B-Chat --cpu-only
  # Once started, it prints the log below; open http://<ip>:8000 in a browser to see the page shown at the top of this article
  /home/zhangwei/conda/envs/pytorch2/lib/python3.10/site-packages/torch/cuda/__init__.py:619: UserWarning: Can't initialize NVML
    warnings.warn("Can't initialize NVML")
  Warning: import flash_attn rotary fail, please install FlashAttention rotary to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/rotary
  Warning: import flash_attn rms_norm fail, please install FlashAttention layer_norm to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/layer_norm
  Warning: import flash_attn fail, please install FlashAttention to get higher efficiency https://github.com/Dao-AILab/flash-attention
  Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 15.99it/s]
  Running on local URL: http://0.0.0.0:8000
  To create a public link, set `share=True` in `launch()`.
  IMPORTANT: You are using gradio version 3.41.2, however version 4.29.0 is available, please upgrade.
  --------
  # The --cpu-only flag runs the model on CPU only
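If the demo fails to launch, a quick triage step (my own sketch, not from the article) is to confirm that the key packages from the two requirements files actually import in the active environment. `check_import` is a hypothetical helper; transformers and gradio are the packages the web demo leans on:

```shell
# Sketch: check that a Python module imports in the current environment.
# check_import is a hypothetical helper; transformers and gradio are installed
# by requirements.txt / requirements_web_demo.txt above.
check_import() {
  if python3 -c "import $1" 2>/dev/null; then
    echo "$1: ok"
  else
    echo "$1: missing"
  fi
}
check_import transformers
check_import gradio
```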

Feel free to discuss in the comments. A follow-up article on fine-tuning Qwen2 will come later.
