Prerequisites (my earlier posts):
Ubuntu 22.04.4 LTS: installing Anaconda [GPU edition] - CSDN blog
Ubuntu 22.04.4: installing Python 3.9 / PyTorch / torchvision [GPU edition] - CSDN blog
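If the QwenChat conda environment does not exist yet, create it first (Python 3.9 here matches the prerequisite post above; adjust if your setup differs):

conda create -n QwenChat python=3.9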
conda activate QwenChat
git clone https://github.com/QwenLM/Qwen.git
cd Qwen
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install -r requirements_web_demo.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install modelscope transformers -i https://pypi.tuna.tsinghua.edu.cn/simple
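Optionally, confirm the GPU build of PyTorch is working before going further:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

This should print True on a correctly configured GPU machine.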
touch chat-7b.py
vi chat-7b.py
The contents of chat-7b.py are as follows:
from modelscope import AutoModelForCausalLM, AutoTokenizer
from modelscope import GenerationConfig

# Available models include: "qwen/Qwen-7B-Chat", "qwen/Qwen-14B-Chat"
tokenizer = AutoTokenizer.from_pretrained("qwen/Qwen-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True, fp16=True).eval()
model.generation_config = GenerationConfig.from_pretrained("qwen/Qwen-7B-Chat", trust_remote_code=True)  # generation length, top_p, and other hyperparameters can be customized

response, history = model.chat(tokenizer, "你好", history=None)  # "Hello"
print(response)
response, history = model.chat(tokenizer, "上海好玩吗?", history=history)  # "Is Shanghai fun to visit?"
print(response)
response, history = model.chat(tokenizer, "七月份去上海,有推荐的旅游攻略吗?", history=history)  # "Any travel tips for a July trip to Shanghai?"
print(response)
python chat-7b.py
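The first run downloads the Qwen-7B-Chat weights (on the order of 15 GB) into the ModelScope cache, so expect a long initial startup. Sampling behavior can be tuned on the loaded generation config before chatting; a minimal sketch (the values are illustrative, not recommendations):

model.generation_config.top_p = 0.8           # nucleus-sampling threshold
model.generation_config.max_new_tokens = 512  # cap on newly generated tokens
response, _ = model.chat(tokenizer, "你好", history=None)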
vi web_demo.py
Edit it as follows: switch the imports from transformers to modelscope so the model is fetched from ModelScope, and point DEFAULT_CKPT_PATH at the Int4 quantized checkpoint, which needs much less GPU memory:
...
import gradio as gr
import mdtex2html
import torch
#from transformers import AutoModelForCausalLM, AutoTokenizer
#from transformers.generation import GenerationConfig
from modelscope import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

DEFAULT_CKPT_PATH = 'qwen/Qwen-7B-Chat-Int4'
...
The Int4 (GPTQ-quantized) model and the web UI need a few extra packages:

pip install auto-gptq
pip install optimum
pip install --upgrade gradio
python web_demo.py
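By default the Gradio demo serves on localhost. The script defines a few argparse options (from memory of the Qwen repository; confirm with python web_demo.py --help), e.g. to expose it on the LAN:

python web_demo.py --server-name 0.0.0.0 --server-port 8000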
My tutorial on installing a browser is linked below for anyone who needs it:
[Deploying Tongyi Qianwen Qwen-7B on Ubuntu 20.04, tested successfully] - CSDN blog