conda create -n lmdeploy -y python=3.10
conda activate lmdeploy
Install version 0.3.0 of lmdeploy:
pip install lmdeploy[all]==0.3.0
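Optionally, confirm that the expected version was installed:

pip show lmdeploy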
One point that often causes confusion: on the InternStudio development machine, the InternLM2-Chat-1.8B weights are already available under the shared directory, so instead of downloading them we simply link them into the home directory:
ln -s /root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b /root/
Create a new file pipeline_transformer.py:
touch /root/pipeline_transformer.py
Copy and paste the following into pipeline_transformer.py:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("/root/internlm2-chat-1_8b", trust_remote_code=True)

# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and cause OOM Error.
model = AutoModelForCausalLM.from_pretrained("/root/internlm2-chat-1_8b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()

inp = "hello"
print("[INPUT]", inp)
response, history = model.chat(tokenizer, inp, history=[])
print("[OUTPUT]", response)

inp = "please provide three suggestions about time management"
print("[INPUT]", inp)
response, history = model.chat(tokenizer, inp, history=history)
print("[OUTPUT]", response)
conda activate lmdeploy
# Run the script:
python /root/pipeline_transformer.py
conda activate lmdeploy
The general command format for chatting with a model via LMDeploy is:
lmdeploy chat [path to HF-format model / path to TurboMind-format model]
# For example:
lmdeploy chat /root/internlm2-chat-1_8b
Further reading: more parameters of LMDeploy's chat subcommand can be listed with -h:
lmdeploy chat -h

(lmdeploy) root@intern-studio-40061597:~# lmdeploy chat -h
usage: lmdeploy chat [-h] [--backend {pytorch,turbomind}] [--trust-remote-code]
                     [--meta-instruction META_INSTRUCTION]
                     [--cap {completion,infilling,chat,python}]
                     [--adapters [ADAPTERS ...]] [--tp TP] [--model-name MODEL_NAME]
                     [--session-len SESSION_LEN] [--max-batch-size MAX_BATCH_SIZE]
                     [--cache-max-entry-count CACHE_MAX_ENTRY_COUNT]
                     [--model-format {hf,llama,awq}] [--quant-policy QUANT_POLICY]
                     [--rope-scaling-factor ROPE_SCALING_FACTOR]
                     model_path

Chat with pytorch or turbomind engine.

positional arguments:
  model_path            The path of a model. it could be one of the following options:
                        - i) a local directory path of a turbomind model which is converted
                          by `lmdeploy convert` command or download from ii) and iii).
                        - ii) the model_id of a lmdeploy-quantized model hosted inside a
                          model repo on huggingface.co, such as
                          "internlm/internlm-chat-20b-4bit", "lmdeploy/llama2-chat-70b-4bit", etc.
                        - iii) the model_id of a model hosted inside a model repo on
                          huggingface.co, such as "internlm/internlm-chat-7b",
                          "qwen/qwen-7b-chat", "baichuan-inc/baichuan2-7b-chat" and so on.
                        Type: str

options:
  -h, --help            show this help message and exit
  --backend {pytorch,turbomind}
                        Set the inference backend. Default: turbomind. Type: str
  --trust-remote-code   Trust remote code for loading hf models. Default: True
  --meta-instruction META_INSTRUCTION
                        System prompt for ChatTemplateConfig. Deprecated. Please use
                        --chat-template instead. Default: None. Type: str
  --cap {completion,infilling,chat,python}
                        The capability of a model. Deprecated. Please use --chat-template
                        instead. Default: chat. Type: str

PyTorch engine arguments:
  --adapters [ADAPTERS ...]
                        Used to set path(s) of lora adapter(s). One can input key-value
                        pairs in xxx=yyy format for multiple lora adapters. If only have one
                        adapter, one can only input the path of the adapter. Default: None.
                        Type: str
  --tp TP               GPU number used in tensor parallelism. Should be 2^n. Default: 1.
                        Type: int
  --model-name MODEL_NAME
                        The name of the to-be-deployed model, such as llama-7b, llama-13b,
                        vicuna-7b and etc. You can run `lmdeploy list` to get the supported
                        model names. Default: None. Type: str
  --session-len SESSION_LEN
                        The max session length of a sequence. Default: None. Type: int
  --max-batch-size MAX_BATCH_SIZE
                        Maximum batch size. Default: 128. Type: int
  --cache-max-entry-count CACHE_MAX_ENTRY_COUNT
                        The percentage of gpu memory occupied by the k/v cache. Default:
                        0.8. Type: float

TurboMind engine arguments:
  --tp TP               GPU number used in tensor parallelism. Should be 2^n. Default: 1.
                        Type: int
  --model-name MODEL_NAME
                        The name of the to-be-deployed model, such as llama-7b, llama-13b,
                        vicuna-7b and etc. You can run `lmdeploy list` to get the supported
                        model names. Default: None. Type: str
  --session-len SESSION_LEN
                        The max session length of a sequence. Default: None. Type: int
  --max-batch-size MAX_BATCH_SIZE
                        Maximum batch size. Default: 128. Type: int
  --cache-max-entry-count CACHE_MAX_ENTRY_COUNT
                        The percentage of gpu memory occupied by the k/v cache. Default:
                        0.8. Type: float
  --model-format {hf,llama,awq}
                        The format of input model. `hf` meaning `hf_llama`, `llama` meaning
                        `meta_llama`, `awq` meaning the quantized model by awq. Default:
                        None. Type: str
  --quant-policy QUANT_POLICY
                        Whether to use kv int8. Default: 0. Type: int
  --rope-scaling-factor ROPE_SCALING_FACTOR
                        Rope scaling factor. Default: 0.0. Type: float
This part covers how to quantize a model, mainly KV8 quantization and W4A16 quantization. In short, quantization is a strategy that trades reduced precision of parameters or intermediate results for memory savings (and the performance gains that come with them).
Before formally introducing LMDeploy's quantization schemes, two concepts are needed:
- Compute-bound: most of the inference time is spent on numerical computation; faster compute units help the most.
- Memory-bound: most of the inference time is spent reading data from memory; reducing the amount or frequency of memory access helps the most.
Because of their decoder-only architecture, common LLMs spend most of their inference time in the token-by-token generation (decoding) stage, a typical memory-bound scenario.
So how do we address the memory-bound bottleneck in LLM inference?
KV Cache is a caching technique that stores key-value pairs so that computed results can be reused, improving performance and reducing memory consumption. In large-scale training and inference, the KV Cache significantly cuts redundant computation and thus speeds up inference. Ideally, the KV Cache lives entirely in GPU memory for fast access. When GPU memory is insufficient, the KV Cache can also be placed in host memory, with a cache manager moving the currently needed data into GPU memory.
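To make the mechanism concrete, here is a toy greedy-decoding loop. It is a sketch, not part of the tutorial and not how LMDeploy implements its cache; it reuses the HF model loaded earlier. With past_key_values, each step feeds only the newest token and reuses the cached K/V of all earlier ones:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("/root/internlm2-chat-1_8b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("/root/internlm2-chat-1_8b", torch_dtype=torch.float16, trust_remote_code=True).cuda().eval()

ids = tokenizer("hello", return_tensors="pt").input_ids.cuda()
past = None
for _ in range(16):
    with torch.no_grad():
        # First step: feed the whole prompt. Later steps: feed only the new
        # token; the K/V tensors of earlier tokens come from the cache.
        out = model(ids if past is None else ids[:, -1:], past_key_values=past, use_cache=True)
    past = out.past_key_values
    next_id = out.logits[:, -1].argmax(dim=-1, keepdim=True)
    ids = torch.cat([ids, next_id], dim=-1)
print(tokenizer.decode(ids[0]))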
At runtime, GPU memory usage falls roughly into three parts: the model weights themselves, the KV Cache, and intermediate computation results. LMDeploy's KV Cache manager controls the maximum fraction of the remaining GPU memory (after the weights are loaded) that the KV cache may occupy via the --cache-max-entry-count parameter. The default ratio is 0.8.
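As a rough back-of-the-envelope sketch of what the ratio controls (the card size below is an illustrative assumption, not a measurement from this tutorial):

# All numbers here are illustrative assumptions, not tutorial measurements.
total_mem_gib = 24.0                         # assumed GPU memory
weights_gib = 1.8e9 * 2 / 2**30              # ~3.4 GiB for 1.8B fp16 parameters
ratio = 0.8                                  # --cache-max-entry-count
kv_budget_gib = ratio * (total_mem_gib - weights_gib)
print(f"KV cache budget: {kv_budget_gib:.1f} GiB")  # ~16.5 GiB under these assumptions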
The following examples show the effect of adjusting --cache-max-entry-count. First, run the 1.8B model without the parameter (default 0.8):
lmdeploy chat /root/internlm2-chat-1_8b
(KV cache ratio: 0.8)
At this point GPU memory usage is 7856MB. Now change --cache-max-entry-count to 0.5:
lmdeploy chat /root/internlm2-chat-1_8b --cache-max-entry-count 0.5
(KV cache ratio: 0.5)
GPU memory usage drops noticeably, to 6608MB.
Now push it to the extreme: set --cache-max-entry-count to 0.01, which effectively forbids the KV Cache from using GPU memory:
lmdeploy chat /root/internlm2-chat-1_8b --cache-max-entry-count 0.01
(KV cache ratio: 0.01)
GPU memory usage is now only 4560MB, at the cost of slower inference.
LMDeploy implements 4-bit weight quantization with the AWQ algorithm. The TurboMind inference engine provides highly efficient 4-bit CUDA kernels, delivering more than 2.4x the performance of FP16. It supports the following NVIDIA GPUs (a toy illustration of 4-bit weight quantization follows the list):
- Turing architecture (sm75): 20 series, T4
- Ampere architecture (sm80, sm86): 30 series, A10, A16, A30, A100
- Ada Lovelace architecture (sm89): 40 series
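Conceptually, here is what 4-bit group-wise weight quantization means, mirroring the --w-bits 4 / --w-group-size 128 options used below. This toy sketch is not LMDeploy's AWQ implementation (AWQ additionally uses activation-aware scaling); it only illustrates the storage format:

import torch

w = torch.randn(256, 512)                                   # an fp16/fp32 weight matrix
groups = w.reshape(-1, 128)                                 # one scale per group of 128 weights
scale = groups.abs().max(dim=1, keepdim=True).values / 7.0  # int4 range is [-8, 7]
q = torch.clamp(torch.round(groups / scale), -8, 7)         # the 4-bit integer weights
w_hat = (q * scale).reshape(w.shape)                        # dequantized on the fly at inference

print("max abs error:", (w - w_hat).abs().max().item())
# Storage cost: 4 bits per weight plus one fp16 scale per 128-weight group,
# roughly a 4x reduction versus fp16 weights.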
To run the actual quantization, first install a dependency:
pip install einops==0.7.0
A single command completes the quantization:
lmdeploy lite auto_awq \
/root/internlm2-chat-1_8b \
--calib-dataset 'ptb' \
--calib-samples 128 \
--calib-seqlen 1024 \
--w-bits 4 \
--w-group-size 128 \
--work-dir /root/internlm2-chat-1_8b-4bit
This takes a while; please be patient. When quantization finishes, the new HF model is saved under the internlm2-chat-1_8b-4bit directory. Chat with the W4A16-quantized model:
lmdeploy chat /root/internlm2-chat-1_8b-4bit --model-format awq
To see the effect of W4A16 more clearly, set the KV Cache ratio to 0.01 again and check GPU memory usage:
lmdeploy chat /root/internlm2-chat-1_8b-4bit --model-format awq --cache-max-entry-count 0.01
GPU memory usage drops markedly, to 2472MB.
Further reading: more parameters of LMDeploy's lite subcommand can be listed with -h:
lmdeploy lite -h

(lmdeploy) root@intern-studio-40061597:~# lmdeploy lite -h
usage: lmdeploy lite [-h] {auto_awq,calibrate,kv_qparams,smooth_quant} ...

Compressing and accelerating LLMs with lmdeploy.lite module

options:
  -h, --help            show this help message and exit

Commands:
  This group has the following commands:

  {auto_awq,calibrate,kv_qparams,smooth_quant}
    auto_awq            Perform weight quantization using AWQ algorithm.
    calibrate           Perform calibration on a given dataset.
    kv_qparams          Export key and value stats.
    smooth_quant        Perform w8a8 quantization using SmoothQuant.
(lmdeploy) root@intern-studio-40061597:~# lmdeploy lite auto_awq -h
usage: lmdeploy lite auto_awq [-h] [--work-dir WORK_DIR] [--calib-dataset CALIB_DATASET]
                              [--calib-samples CALIB_SAMPLES] [--calib-seqlen CALIB_SEQLEN]
                              [--device {cuda,cpu}] [--w-bits W_BITS] [--w-sym]
                              [--w-group-size W_GROUP_SIZE]
                              model

Perform weight quantization using AWQ algorithm.

positional arguments:
  model                 The path of model in hf format. Type: str

options:
  -h, --help            show this help message and exit
  --work-dir WORK_DIR   The working directory to save results. Default: ./work_dir. Type: str
  --calib-dataset CALIB_DATASET
                        The calibration dataset name. Default: ptb. Type: str
  --calib-samples CALIB_SAMPLES
                        The number of samples for calibration. Default: 128. Type: int
  --calib-seqlen CALIB_SEQLEN
                        The sequence length for calibration. Default: 2048. Type: int
  --device {cuda,cpu}   Device type of running. Default: cuda. Type: str
  --w-bits W_BITS       Bit number for weight quantization. Default: 4. Type: int
  --w-sym               Whether to do symmetric quantization. Default: False
  --w-group-size W_GROUP_SIZE
                        Group size for weight quantization statistics. Default: 128. Type: int
In chapters 2 and 3, we ran inference on the model directly on the local machine, an approach called local deployment. In production, we often wrap the model behind an API service for clients to access.
Consider the following architecture diagram:
Architecturally, the service flow splits into the following modules.
Note that this division is a relatively complete model, but it is not absolute in practice. For example, "model inference" and "API Server" can be merged, and sometimes all three stages are packaged together in a single service.
Start the API server to serve the internlm2-chat-1_8b model:
lmdeploy serve api_server \
/root/internlm2-chat-1_8b \
--model-format hf \
--quant-policy 0 \
--server-name 0.0.0.0 \
--server-port 23333 \
--tp 1
Here, the model-format and quant-policy parameters are the same ones used for quantized inference in the previous chapter; server-name and server-port set the API server's bind IP and port; tp sets the degree of tensor parallelism (the number of GPUs).
With this command the API server is up. Keep this window open; we will connect clients to the service shortly.
You can list more parameters and usage with:
lmdeploy serve api_server -h
On your local machine, forward the server port over ssh:
ssh -CNg -L 23333:127.0.0.1:23333 root@ssh.intern-ai.org.cn -p <your ssh port>
You can also open http://{host}:23333 directly to view detailed API usage instructions, as shown below.
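Besides the documentation page, you can exercise the server programmatically. The sketch below assumes api_server exposes OpenAI-compatible routes (/v1/models, /v1/chat/completions); check the page on port 23333 for the exact routes your version serves:

import requests

base = "http://localhost:23333"
# Discover the served model id, then send a chat request (OpenAI-style schema assumed).
model_id = requests.get(f"{base}/v1/models").json()["data"][0]["id"]
resp = requests.post(
    f"{base}/v1/chat/completions",
    json={"model": model_id,
          "messages": [{"role": "user", "content": "hello"}]},
)
print(resp.json()["choices"][0]["message"]["content"])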
In section 4.1 we started an API server in one terminal.
In this section we create a command-line client that connects to that server.
conda activate lmdeploy
lmdeploy serve api_client http://localhost:23333
Once running, you can chat with the model directly in the terminal:
The architecture you are using now looks like this:
Close the VSCode terminal you just used for the client, but keep the server terminal running.
Open a new VSCode terminal and activate the conda environment:
conda activate lmdeploy
Next, start a web client with Gradio as the front end. First, on your local machine, forward port 6006 over ssh:
ssh -CNg -L 6006:127.0.0.1:6006 root@ssh.intern-ai.org.cn -p <your ssh port>
lmdeploy serve gradio http://localhost:23333 \
--server-name 0.0.0.0 \
--server-port 6006
Open a browser and visit http://127.0.0.1:6006.
The architecture you are using now looks like this:
(lmdeploy) root@intern-studio-40061597:~# lmdeploy serve api_server -h
usage: lmdeploy serve api_server [-h] [--server-name SERVER_NAME] [--server-port SERVER_PORT]
                                 [--allow-origins ALLOW_ORIGINS [ALLOW_ORIGINS ...]]
                                 [--allow-credentials]
                                 [--allow-methods ALLOW_METHODS [ALLOW_METHODS ...]]
                                 [--allow-headers ALLOW_HEADERS [ALLOW_HEADERS ...]]
                                 [--qos-config-path QOS_CONFIG_PATH]
                                 [--backend {pytorch,turbomind}]
                                 [--log-level {CRITICAL,FATAL,ERROR,WARN,WARNING,INFO,DEBUG,NOTSET}]
                                 [--api-keys [API_KEYS ...]] [--ssl]
                                 [--meta-instruction META_INSTRUCTION]
                                 [--chat-template CHAT_TEMPLATE]
                                 [--cap {completion,infilling,chat,python}]
                                 [--adapters [ADAPTERS ...]] [--tp TP]
                                 [--model-name MODEL_NAME] [--session-len SESSION_LEN]
                                 [--max-batch-size MAX_BATCH_SIZE]
                                 [--cache-max-entry-count CACHE_MAX_ENTRY_COUNT]
                                 [--cache-block-seq-len CACHE_BLOCK_SEQ_LEN]
                                 [--model-format {hf,llama,awq}]
                                 [--quant-policy QUANT_POLICY]
                                 [--rope-scaling-factor ROPE_SCALING_FACTOR]
                                 model_path

Serve LLMs with restful api using fastapi.

positional arguments:
  model_path            The path of a model. it could be one of the following options:
                        - i) a local directory path of a turbomind model which is converted
                          by `lmdeploy convert` command or download from ii) and iii).
                        - ii) the model_id of a lmdeploy-quantized model hosted inside a
                          model repo on huggingface.co, such as
                          "internlm/internlm-chat-20b-4bit", "lmdeploy/llama2-chat-70b-4bit", etc.
                        - iii) the model_id of a model hosted inside a model repo on
                          huggingface.co, such as "internlm/internlm-chat-7b",
                          "qwen/qwen-7b-chat", "baichuan-inc/baichuan2-7b-chat" and so on.
                        Type: str

options:
  -h, --help            show this help message and exit
  --server-name SERVER_NAME
                        Host ip for serving. Default: 0.0.0.0. Type: str
  --server-port SERVER_PORT
                        Server port. Default: 23333. Type: int
  --allow-origins ALLOW_ORIGINS [ALLOW_ORIGINS ...]
                        A list of allowed origins for cors. Default: ['*']. Type: str
  --allow-credentials   Whether to allow credentials for cors. Default: False
  --allow-methods ALLOW_METHODS [ALLOW_METHODS ...]
                        A list of allowed http methods for cors. Default: ['*']. Type: str
  --allow-headers ALLOW_HEADERS [ALLOW_HEADERS ...]
                        A list of allowed http headers for cors. Default: ['*']. Type: str
  --qos-config-path QOS_CONFIG_PATH
                        Qos policy config path. Default: . Type: str
  --backend {pytorch,turbomind}
                        Set the inference backend. Default: turbomind. Type: str
  --log-level {CRITICAL,FATAL,ERROR,WARN,WARNING,INFO,DEBUG,NOTSET}
                        Set the log level. Default: ERROR. Type: str
  --api-keys [API_KEYS ...]
                        Optional list of space separated API keys. Default: None. Type: str
  --ssl                 Enable SSL. Requires OS Environment variables 'SSL_KEYFILE' and
                        'SSL_CERTFILE'. Default: False
  --meta-instruction META_INSTRUCTION
                        System prompt for ChatTemplateConfig. Deprecated. Please use
                        --chat-template instead. Default: None. Type: str
  --chat-template CHAT_TEMPLATE
                        A JSON file or string that specifies the chat template configuration.
                        Please refer to
                        https://lmdeploy.readthedocs.io/en/latest/advance/chat_template.html
                        for the specification. Default: None. Type: str
  --cap {completion,infilling,chat,python}
                        The capability of a model. Deprecated. Please use --chat-template
                        instead. Default: chat. Type: str

PyTorch engine arguments:
  --adapters [ADAPTERS ...]
                        Used to set path(s) of lora adapter(s). One can input key-value
                        pairs in xxx=yyy format for multiple lora adapters. If only have one
                        adapter, one can only input the path of the adapter. Default: None.
                        Type: str
  --tp TP               GPU number used in tensor parallelism. Should be 2^n. Default: 1.
                        Type: int
  --model-name MODEL_NAME
                        The name of the to-be-deployed model, such as llama-7b, llama-13b,
                        vicuna-7b and etc. You can run `lmdeploy list` to get the supported
                        model names. Default: None. Type: str
  --session-len SESSION_LEN
                        The max session length of a sequence. Default: None. Type: int
  --max-batch-size MAX_BATCH_SIZE
                        Maximum batch size. Default: 128. Type: int
  --cache-max-entry-count CACHE_MAX_ENTRY_COUNT
                        The percentage of gpu memory occupied by the k/v cache. Default:
                        0.8. Type: float
  --cache-block-seq-len CACHE_BLOCK_SEQ_LEN
                        The length of the token sequence in a k/v block. For Turbomind
                        Engine, if the GPU compute capability is >= 8.0, it should be a
                        multiple of 32, otherwise it should be a multiple of 64. For Pytorch
                        Engine, if Lora Adapter is specified, this parameter will be ignored.
                        Default: 64. Type: int

TurboMind engine arguments:
  --tp TP               GPU number used in tensor parallelism. Should be 2^n. Default: 1.
                        Type: int
  --model-name MODEL_NAME
                        The name of the to-be-deployed model, such as llama-7b, llama-13b,
                        vicuna-7b and etc. You can run `lmdeploy list` to get the supported
                        model names. Default: None. Type: str
  --session-len SESSION_LEN
                        The max session length of a sequence. Default: None. Type: int
  --max-batch-size MAX_BATCH_SIZE
                        Maximum batch size. Default: 128. Type: int
  --cache-max-entry-count CACHE_MAX_ENTRY_COUNT
                        The percentage of gpu memory occupied by the k/v cache. Default:
                        0.8. Type: float
  --cache-block-seq-len CACHE_BLOCK_SEQ_LEN
                        The length of the token sequence in a k/v block. For Turbomind
                        Engine, if the GPU compute capability is >= 8.0, it should be a
                        multiple of 32, otherwise it should be a multiple of 64. For Pytorch
                        Engine, if Lora Adapter is specified, this parameter will be ignored.
                        Default: 64. Type: int
  --model-format {hf,llama,awq}
                        The format of input model. `hf` meaning `hf_llama`, `llama` meaning
                        `meta_llama`, `awq` meaning the quantized model by awq. Default:
                        None. Type: str
  --quant-policy QUANT_POLICY
                        Whether to use kv int8. Default: 0. Type: int
  --rope-scaling-factor ROPE_SCALING_FACTOR
                        Rope scaling factor. Default: 0.0. Type: float
(lmdeploy) root@intern-studio-40061597:~# lmdeploy serve api_client -h
usage: lmdeploy serve api_client [-h] [--api-key API_KEY] [--session-id SESSION_ID] api_server_url

Interact with restful api server in terminal.

positional arguments:
  api_server_url        The URL of api server. Type: str

options:
  -h, --help            show this help message and exit
  --api-key API_KEY     api key. Default to None, which means no api key will be used. Type: str
  --session-id SESSION_ID
                        The identical id of a session. Default: 1. Type: int
When developing a project, we sometimes need to integrate LLM inference into Python code.
conda activate lmdeploy
Create a Python source file pipeline.py:
touch /root/pipeline.py
Open pipeline.py and fill in the following:
from lmdeploy import pipeline

pipe = pipeline('/root/internlm2-chat-1_8b')
response = pipe(['Hi, pls intro yourself', '上海是'])
print(response)
Code walkthrough:
- Line 1 imports lmdeploy's pipeline module.
- Line 3 loads the HF model from /root/internlm2-chat-1_8b.
- Line 4 runs the pipeline as a batch: the list holds two prompts, lmdeploy infers both at once, and the two outputs are returned in response.
- Line 5 prints response.
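One detail worth knowing (inferred from the benchmark script later in this tutorial): each element of response is a Response object, and the generated string lives in its .text attribute:

print(response[0].text)  # the generated answer for the first prompt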
Save and run the script:
python /root/pipeline.py
In chapter 3, we enabled quantized inference and set the maximum KV Cache ratio by passing extra command-line arguments to lmdeploy. In Python code, these parameters are passed by creating a TurbomindEngineConfig.
Taking the KV Cache ratio as an example, create a new Python file pipeline_kv.py:
touch /root/pipeline_kv.py
Open pipeline_kv.py and fill in:
from lmdeploy import pipeline, TurbomindEngineConfig
# lower the k/v cache share to 20% of total GPU memory
backend_config = TurbomindEngineConfig(cache_max_entry_count=0.2)
pipe = pipeline('/root/internlm2-chat-1_8b',
backend_config=backend_config)
response = pipe(['Hi, pls intro yourself', '上海是'])
print(response)
Save and run the script:
python /root/pipeline_kv.py
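Other runtime knobs follow the same pattern. As a hedged sketch (assuming the lmdeploy 0.3.x API, where sampling parameters go through GenerationConfig):

from lmdeploy import pipeline, GenerationConfig, TurbomindEngineConfig

backend_config = TurbomindEngineConfig(cache_max_entry_count=0.2)
gen_config = GenerationConfig(top_p=0.8, temperature=0.7, max_new_tokens=256)  # sampling settings

pipe = pipeline('/root/internlm2-chat-1_8b', backend_config=backend_config)
response = pipe(['Hi, pls intro yourself'], gen_config=gen_config)
print(response)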
The latest version of LMDeploy supports the llava multimodal model. Below we use pipeline to run inference with llava-v1.6-7b.
conda activate lmdeploy
Install the llava dependency:
pip install git+https://github.com/haotian-liu/LLaVA.git@4e2277a060da264c4f21b364c867cc622c945874
Create a new Python file, e.g. pipeline_llava.py:
touch /root/pipeline_llava.py
Open pipeline_llava.py and fill in:
from lmdeploy.vl import load_image
from lmdeploy import pipeline, TurbomindEngineConfig
backend_config = TurbomindEngineConfig(session_len=8192)  # increase session_len for high-resolution images
# pipe = pipeline('liuhaotian/llava-v1.6-vicuna-7b', backend_config=backend_config)  # use this line when not on the dev machine
pipe = pipeline('/share/new_models/liuhaotian/llava-v1.6-vicuna-7b', backend_config=backend_config)
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
response = pipe(('describe this image', image))
print(response)
Code walkthrough: load_image fetches the example image from the URL; pipeline loads the llava model with the enlarged session_len from the engine config; passing a (prompt, image) tuple to pipe runs the multimodal inference, and print outputs the result.
Save the file and run the pipeline:
python /root/pipeline_llava.py
Since the official llava model's Chinese support is poor, Chinese prompts may produce unexpected results. For example, changing the prompt to "请描述一下这张图片" ("please describe this image") may get you a reply along the lines of "印度鳄鱼" ("Indian crocodile").
We can also run the llava model through Gradio. Create a new Python file gradio_llava.py:
touch /root/gradio_llava.py
Open the file and fill in the following:
import gradio as gr
from lmdeploy import pipeline, TurbomindEngineConfig

backend_config = TurbomindEngineConfig(session_len=8192)  # increase session_len for high-resolution images
# pipe = pipeline('liuhaotian/llava-v1.6-vicuna-7b', backend_config=backend_config)  # use this line when not on the dev machine
pipe = pipeline('/share/new_models/liuhaotian/llava-v1.6-vicuna-7b', backend_config=backend_config)

def model(image, text):
    if image is None:
        return [(text, "请上传一张图片。")]
    else:
        response = pipe((text, image)).text
        return [(text, response)]

demo = gr.Interface(fn=model,
                    inputs=[gr.Image(type="pil"), gr.Textbox()],
                    outputs=gr.Chatbot())
demo.launch()
Run the program:
python /root/gradio_llava.py
To access the Gradio UI locally, forward port 7860 over ssh:
ssh -CNg -L 7860:127.0.0.1:7860 root@ssh.intern-ai.org.cn -p <your ssh port>
LMDeploy supports not only the InternLM family but also third-party LLMs. The supported models are listed below:
| Model | Size |
|---|---|
| Llama | 7B - 65B |
| Llama2 | 7B - 70B |
| InternLM | 7B - 20B |
| InternLM2 | 7B - 20B |
| InternLM-XComposer | 7B |
| QWen | 7B - 72B |
| QWen-VL | 7B |
| QWen1.5 | 0.5B - 72B |
| QWen1.5-MoE | A2.7B |
| Baichuan | 7B - 13B |
| Baichuan2 | 7B - 13B |
| Code Llama | 7B - 34B |
| ChatGLM2 | 6B |
| Falcon | 7B - 180B |
| YI | 6B - 34B |
| Mistral | 7B |
| DeepSeek-MoE | 16B |
| DeepSeek-VL | 7B |
| Mixtral | 8x7B |
| Gemma | 2B - 7B |
| Dbrx | 132B |
Download the corresponding HF model from ModelScope or OpenXLab; after that, the steps are the same as running InternLM2 with LMDeploy.
To get a direct feel for the speed difference between LMDeploy and the Transformers library, let's write a benchmark script. The test environment is a 30% InternStudio development machine.
First, benchmark the Transformers library on internlm2-chat-1_8b. Create a Python file named benchmark_transformer.py and fill in:
import torch
import datetime
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("/root/internlm2-chat-1_8b", trust_remote_code=True)

# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and cause OOM Error.
model = AutoModelForCausalLM.from_pretrained("/root/internlm2-chat-1_8b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()

# warmup
inp = "hello"
for i in range(5):
    print("Warm up...[{}/5]".format(i+1))
    response, history = model.chat(tokenizer, inp, history=[])

# test speed
inp = "请介绍一下你自己。"
times = 10
total_words = 0
start_time = datetime.datetime.now()
for i in range(times):
    response, history = model.chat(tokenizer, inp, history=history)
    total_words += len(response)
end_time = datetime.datetime.now()

delta_time = end_time - start_time
delta_time = delta_time.seconds + delta_time.microseconds / 1000000.0
speed = total_words / delta_time
print("Speed: {:.3f} words/s".format(speed))
python benchmark_transformer.py
The result:
The Transformers library's inference speed is about 83.026 words/s. Note that the unit is words/s, not tokens/s; word counts and token counts can be treated as roughly linearly related.
Now benchmark LMDeploy. Create a Python file benchmark_lmdeploy.py and fill in:
import datetime
from lmdeploy import pipeline

pipe = pipeline('/root/internlm2-chat-1_8b')

# warmup
inp = "hello"
for i in range(5):
    print("Warm up...[{}/5]".format(i+1))
    response = pipe([inp])

# test speed
inp = "请介绍一下你自己。"
times = 10
total_words = 0
start_time = datetime.datetime.now()
for i in range(times):
    response = pipe([inp])
    total_words += len(response[0].text)
end_time = datetime.datetime.now()

delta_time = end_time - start_time
delta_time = delta_time.seconds + delta_time.microseconds / 1000000.0
speed = total_words / delta_time
print("Speed: {:.3f} words/s".format(speed))
python benchmark_lmdeploy.py
The result:
LMDeploy's inference speed is about 475.658 words/s, roughly 6 times that of the Transformers library.