Write the test Python script chatglm-tst.py
```python
from modelscope import AutoTokenizer, AutoModel, snapshot_download

# Download the model weights from ModelScope (cached locally after the first run)
model_dir = snapshot_download("ZhipuAI/chatglm3-6b", revision="v1.0.0")
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
# The Dockerfile below installs CPU-only PyTorch, so load the model in
# float32 on the CPU instead of .half().cuda()
model = AutoModel.from_pretrained(model_dir, trust_remote_code=True).float()
model = model.eval()
response, history = model.chat(tokenizer, "你好", history=[])  # "Hello"
print(response)
# Pass the returned history back in to continue the conversation
response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)  # "What should I do if I can't sleep at night?"
print(response)
```
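The `history` value returned by `model.chat` carries the running conversation, and passing it back in is what gives the second question its multi-turn context. A minimal pure-Python sketch of how such a history list accumulates — `fake_chat` is a hypothetical stand-in, and the `{"role": ..., "content": ...}` entry shape is an illustrative assumption, since the real entries are produced by the model itself:

```python
# Hypothetical stand-in for model.chat: echoes the prompt and appends
# the exchange to the history list. The dict shape of the entries is an
# assumption for illustration, not ChatGLM3's exact internal format.
def fake_chat(prompt, history):
    response = f"echo: {prompt}"  # a real model would generate text here
    history = history + [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": response},
    ]
    return response, history

response, history = fake_chat("你好", history=[])
response, history = fake_chat("晚上睡不着应该怎么办", history=history)

# After two turns the history holds both exchanges: two user entries
# and two assistant entries.
print(len(history))  # 4
```

Because each call returns a new, longer list, the caller decides how much context to keep; dropping old entries is one simple way to bound memory use in long conversations.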
Write the Dockerfile
```dockerfile
FROM python:slim-bullseye

# git is needed to clone the ChatGLM3 repository
RUN apt-get update && apt-get install -y git
RUN git clone https://github.com/THUDM/ChatGLM3.git

# CPU-only PyTorch wheels keep the image small (no CUDA runtime)
RUN pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
RUN pip install -r /ChatGLM3/requirements.txt
RUN pip install modelscope

COPY chatglm-tst.py /opt/
# Runs at build time: downloads the model weights into the image and
# executes the test chat once, so a successful build verifies the setup
RUN python /opt/chatglm-tst.py
```
Run the docker build command
```shell
docker build -t chatglm3-cpu:0.1 .
```
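Since the Dockerfile already bakes the model weights and the test script into the image, the script can be re-run inside a throwaway container once the build finishes. The command below is an illustrative sketch, assuming the image tag from the build step above:

```shell
# Re-run the test chat inside a container; CPU-only, so no --gpus flag
# is needed. --rm removes the container after it exits.
docker run --rm chatglm3-cpu:0.1 python /opt/chatglm-tst.py
```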