
Deploying ChatGLM3 with Docker on a CPU-only Linux server

Write the test Python script chatglm-tst.py

  from modelscope import AutoTokenizer, AutoModel, snapshot_download

  # Download the model weights from ModelScope
  model_dir = snapshot_download("ZhipuAI/chatglm3-6b", revision="v1.0.0")
  tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
  # CPU-only server: load in float32 with .float() instead of .half().cuda()
  model = AutoModel.from_pretrained(model_dir, trust_remote_code=True).float()
  model = model.eval()
  response, history = model.chat(tokenizer, "你好", history=[])
  print(response)
  response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)
  print(response)

Write the Dockerfile

  FROM python:slim-bullseye
  RUN apt-get update && apt-get install -y git
  RUN git clone https://github.com/THUDM/ChatGLM3.git
  # Install CPU-only PyTorch wheels
  RUN pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
  RUN pip install -r /ChatGLM3/requirements.txt
  RUN pip install modelscope
  COPY chatglm-tst.py /opt/
  # Run the script once at build time: downloads the weights and verifies inference works
  RUN python /opt/chatglm-tst.py
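One thing to note about the last step: because the test script runs during `docker build`, the ChatGLM3-6B weights (on the order of 12 GB) get baked into the image, making it very large. A sketch of an alternative, assuming you would rather keep a ModelScope cache on the host and mount it at runtime, is to replace that final `RUN` line with a `CMD`:

```dockerfile
# Instead of "RUN python /opt/chatglm-tst.py", run the script when the
# container starts; weights are then read from (or downloaded into) the
# ModelScope cache, which can be a host directory mounted into the container
CMD ["python", "/opt/chatglm-tst.py"]
```

This keeps the image small at the cost of a one-time download (or a volume mount) on first run.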

Run the docker build command

docker build -t chatglm3-cpu:0.1 .
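Once the build finishes, the image can be run like any other container. A minimal sketch (the container name here is an arbitrary choice, not from the original article):

```shell
# Start the container with an interactive shell
docker run -it --name chatglm3-test chatglm3-cpu:0.1 /bin/bash

# Inside the container, re-run the test script to chat with the model
python /opt/chatglm-tst.py
```

Since the weights were downloaded during the build, the script should load the model directly from the image without fetching anything.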
