# Create model repository with placeholder for model and version 1
mkdir -p ./models/densenet_onnx/1
# Download model and place it in model repository
wget -O ./models/densenet_onnx/1/model.onnx https://contentmamluswest001.blob.core.windows.net/content/14b2744cf8d6418c87ffddc3f3127242/9502630827244d60a1214f250e3bbca7/08aed7327d694b8dbaee2c97b8d0fcba/densenet121-1.2.onnx
vim ./models/densenet_onnx/config.pbtxt
name: "densenet_onnx" backend: "onnxruntime" max_batch_size: 0 input: [ { name: "data_0", data_type: TYPE_FP32, dims: [ 1, 3, 224, 224] } ] output: [ { name: "fc6_1", data_type: TYPE_FP32, dims: [ 1, 1000, 1, 1 ] } ]
The input and output tensors defined here can be inspected visually with the netron tool, which shows the model graph along with each tensor's name, type, and shape.
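As a quick sketch (assuming netron is installed from PyPI), the downloaded file can be opened directly; netron serves an interactive graph view on a local URL where the data_0 input and fc6_1 output shapes can be read off:

pip install netron
# Open the downloaded ONNX file in the browser-based viewer
netron ./models/densenet_onnx/1/model.onnx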
docker pull nvcr.io/nvidia/tritonserver:23.02-py3 # triton server
docker pull nvcr.io/nvidia/tritonserver:23.02-py3-sdk # triton client
# Start server container in the background
docker run -itd --gpus=all --network=host -v $PWD:/mnt --name triton-server nvcr.io/nvidia/tritonserver:23.02-py3 bash  # -it keeps bash alive so we can exec in and launch tritonserver
[~]# tritonserver --model-repository=/mnt/models --model-control-mode=poll
I0403 06:07:10.866992 1186 server.cc:522]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+
I0403 06:07:10.867083 1186 server.cc:549]
+-------------+--------------------------------------------------------------------------+--------+
| Backend     | Path                                                                     | Config |
+-------------+--------------------------------------------------------------------------+--------+
| pytorch     | /opt/tritonserver/backends/pytorch/libtriton_pytorch.so                 | {}     |
| tensorflow  | /opt/tritonserver/backends/tensorflow1/libtriton_tensorflow1.so         | {}     |
| onnxruntime | /opt/tritonserver/backends/onnxruntime/libtriton_onnxruntime.so         | {}     |
| openvino    | /opt/tritonserver/backends/openvino_2021_2/libtriton_openvino_2021_2.so | {}     |
+-------------+--------------------------------------------------------------------------+--------+
I0403 06:07:10.867131 1186 server.cc:592]
+---------------+---------+--------+
| Model         | Version | Status |
+---------------+---------+--------+
| densenet_onnx | 2       | READY  |
+---------------+---------+--------+
I0403 06:07:10.947730 1186 metrics.cc:623] Collecting metrics for GPU 0: NVIDIA GeForce RTX 3090
I0403 06:07:10.947760 1186 metrics.cc:623] Collecting metrics for GPU 1: NVIDIA GeForce RTX 3090
I0403 06:07:10.947772 1186 metrics.cc:623] Collecting metrics for GPU 2: NVIDIA GeForce RTX 3090
I0403 06:07:10.947784 1186 metrics.cc:623] Collecting metrics for GPU 3: NVIDIA GeForce RTX 3090
I0403 06:07:10.947800 1186 metrics.cc:623] Collecting metrics for GPU 4: NVIDIA GeForce RTX 3090
I0403 06:07:10.947819 1186 metrics.cc:623] Collecting metrics for GPU 5: NVIDIA GeForce RTX 3090
I0403 06:07:10.947852 1186 metrics.cc:623] Collecting metrics for GPU 6: NVIDIA GeForce RTX 3090
I0403 06:07:10.947886 1186 metrics.cc:623] Collecting metrics for GPU 7: NVIDIA GeForce RTX 3090
I0403 06:07:10.949215 1186 tritonserver.cc:1932]
+----------------------------------+--------------------------------------------------------------------------------+
| Option                           | Value                                                                          |
+----------------------------------+--------------------------------------------------------------------------------+
| server_id                        | triton                                                                         |
| server_version                   | 2.19.0                                                                         |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents)  |
|                                  | schedule_policy model_configuration system_shared_memory cuda_shared_memory   |
|                                  | binary_tensor_data statistics trace                                           |
| model_repository_path[0]         | /mnt/models                                                                    |
| model_control_mode               | MODE_POLL                                                                      |
| strict_model_config              | 1                                                                              |
| rate_limit                       | OFF                                                                            |
| pinned_memory_pool_byte_size     | 268435456                                                                      |
| cuda_memory_pool_byte_size{0}    | 67108864                                                                       |
| cuda_memory_pool_byte_size{1}    | 67108864                                                                       |
| cuda_memory_pool_byte_size{2}    | 67108864                                                                       |
| cuda_memory_pool_byte_size{3}    | 67108864                                                                       |
| cuda_memory_pool_byte_size{4}    | 67108864                                                                       |
| cuda_memory_pool_byte_size{5}    | 67108864                                                                       |
| cuda_memory_pool_byte_size{6}    | 67108864                                                                       |
| cuda_memory_pool_byte_size{7}    | 67108864                                                                       |
| response_cache_byte_size         | 0                                                                              |
| min_supported_compute_capability | 6.0                                                                            |
| strict_readiness                 | 1                                                                              |
| exit_timeout                     | 30                                                                             |
+----------------------------------+--------------------------------------------------------------------------------+
I0403 06:07:10.950873 1186 grpc_server.cc:4375] Started GRPCInferenceService at 0.0.0.0:8001
I0403 06:07:10.951176 1186 http_server.cc:3075] Started HTTPService at 0.0.0.0:8000
I0403 06:07:10.992539 1186 http_server.cc:178] Started Metrics Service at 0.0.0.0:8002
1. The log shows that the densenet_onnx model has finished loading and that the GRPC, HTTP, and Metrics services have started (sanity-checked just below).
2. The --model-control-mode=poll flag enables hot model updates: when a model file changes or a new version directory appears, Triton first brings up the new version's instances and only then unloads the old version or instances.
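Before exercising the hot-update path, the three services from the log can be checked from the host (reachable directly thanks to --network=host) via Triton's standard HTTP/REST endpoints from the KServe v2 protocol; ports as in the log above:

curl -v 127.0.0.1:8000/v2/health/ready          # server readiness, HTTP 200 when ready
curl 127.0.0.1:8000/v2/models/densenet_onnx     # model metadata: inputs, outputs, versions
curl -s 127.0.0.1:8002/metrics | head           # Prometheus-format metrics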
[~]# cp -rf 1 2
[~]# cp -rf 1 3
I0403 06:07:26.109494 1186 onnxruntime.cc:2400] TRITONBACKEND_ModelInitialize: densenet_onnx (version 3)
I0403 06:07:26.119616 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 0)
I0403 06:07:26.319224 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 1)
I0403 06:07:26.495285 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 2)
I0403 06:07:26.669370 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 3)
I0403 06:07:26.829762 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 4)
I0403 06:07:27.007662 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 5)
I0403 06:07:27.182506 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 6)
I0403 06:07:27.367420 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 7)
I0403 06:07:27.532531 1186 model_repository_manager.cc:1149] successfully loaded 'densenet_onnx' version 3
I0403 06:07:27.532561 1186 model_repository_manager.cc:1026] unloading: densenet_onnx:2
I0403 06:07:27.532729 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.548199 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.561028 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.573967 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.585593 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.596050 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.605498 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.614892 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.624120 1186 onnxruntime.cc:2423] TRITONBACKEND_ModelFinalize: delete model state
I0403 06:07:27.624158 1186 model_repository_manager.cc:1132] successfully unloaded 'densenet_onnx' version 2
I0403 06:14:42.551308 1186 model_repository_manager.cc:994] loading: densenet_onnx:3
I0403 06:14:42.651625 1186 onnxruntime.cc:2400] TRITONBACKEND_ModelInitialize: densenet_onnx (version 3)
I0403 06:14:42.659502 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 0)
I0403 06:14:42.851975 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 1)
I0403 06:14:43.027086 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 2)
I0403 06:14:43.203822 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 3)
I0403 06:14:43.378325 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 4)
I0403 06:14:43.552427 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 5)
I0403 06:14:43.732855 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 6)
I0403 06:14:43.903087 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 7)
I0403 06:14:44.071766 1186 model_repository_manager.cc:1149] successfully loaded 'densenet_onnx' version 3
I0403 06:14:44.071795 1186 model_repository_manager.cc:1026] unloading: densenet_onnx:3
I0403 06:14:44.071970 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.081007 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.089658 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.098768 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.107905 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.116819 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.125697 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.134503 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.143469 1186 onnxruntime.cc:2423] TRITONBACKEND_ModelFinalize: delete model state
I0403 06:14:44.143503 1186 model_repository_manager.cc:1132] successfully unloaded 'densenet_onnx' version 3
In the output above, version directories 2 and 3 are added one after another, and the version served by Triton finally switches to 3.
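Per-version readiness can be confirmed over the same HTTP API; a small sketch (expect HTTP 200 only for the version currently being served, and a non-200 status for the unloaded one):

# Version 3 should report ready; the unloaded version 2 should not
curl -s -o /dev/null -w "%{http_code}\n" 127.0.0.1:8000/v2/models/densenet_onnx/versions/3/ready
curl -s -o /dev/null -w "%{http_code}\n" 127.0.0.1:8000/v2/models/densenet_onnx/versions/2/ready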
docker run -itd --gpus=all --network=host -v $PWD:/mnt --name triton-client nvcr.io/nvidia/tritonserver:23.02-py3-sdk bash
[~]# perf_analyzer -m densenet_onnx -u 127.0.0.1:8000 --concurrency-range 1:6
Inferences/Second vs. Client Average Batch Latency
Concurrency: 1, throughput: 96.1522 infer/sec, latency 10396 usec
Concurrency: 2, throughput: 197.181 infer/sec, latency 10138 usec
Concurrency: 3, throughput: 305.046 infer/sec, latency 9832 usec
Concurrency: 4, throughput: 425.759 infer/sec, latency 9392 usec
Concurrency: 5, throughput: 564.87 infer/sec, latency 8850 usec
Concurrency: 6, throughput: 704.574 infer/sec, latency 8514 usec
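Beyond a plain concurrency sweep over HTTP, perf_analyzer can also drive the GRPC endpoint and report percentile latencies; a hedged sketch of a common variation (flag names per the perf_analyzer CLI):

# Benchmark over GRPC on port 8001, report p95 latency,
# and sweep concurrency from 1 to 16 in steps of 4
perf_analyzer -m densenet_onnx -i grpc -u 127.0.0.1:8001 \
    --percentile=95 --concurrency-range 1:16:4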
tritonserver --model-repository=/mnt/models --model-control-mode=explicit # this mode is required; otherwise models cannot be loaded/unloaded on demand
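In explicit mode nothing is loaded at startup; loading and unloading go through the model-repository extension of the HTTP API, which is exactly the interface Model Analyzer drives in remote mode. A minimal sketch, assuming the default HTTP port 8000:

# Load a model by name, list repository state, then unload it
curl -X POST 127.0.0.1:8000/v2/repository/models/densenet_onnx/load
curl -X POST 127.0.0.1:8000/v2/repository/index
curl -X POST 127.0.0.1:8000/v2/repository/models/densenet_onnx/unload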
root@53:/mnt# cat config.yaml
# Why remote mode: it lets us run these tests without an NVIDIA GPU, e.g. to
# measure the throughput and latency of domestic accelerators
model_repository: /mnt/models
#checkpoint_directory: /mnt/checkpoints/
profile_models: densenet_onnx
triton_grpc_endpoint: 127.0.0.1:9001
triton_metrics_url: 127.0.0.1:9002
triton_launch_mode: remote
root@53:/mnt# rm -rf output_model_repository/ checkpoints/ && model-analyzer profile -f config.yaml # using the config-file form here
[Model Analyzer] Initializing GPUDevice handles
[Model Analyzer] Using GPU 0 NVIDIA GeForce RTX 3090 with UUID GPU-b6d3bb44-b607-e9c1-c898-3977340c20a4
[Model Analyzer] Using GPU 1 NVIDIA GeForce RTX 3090 with UUID GPU-f37fdb1b-77c7-ff1f-21c0-e2db53fe0818
[Model Analyzer] Using GPU 2 NVIDIA GeForce RTX 3090 with UUID GPU-1a0e40f7-65eb-9694-f91c-253808416e71
[Model Analyzer] Using GPU 3 NVIDIA GeForce RTX 3090 with UUID GPU-c889529d-734f-8a13-f820-02597663a704
[Model Analyzer] Using GPU 4 NVIDIA GeForce RTX 3090 with UUID GPU-9f08b528-c421-bc60-2fc6-7f906e13404a
[Model Analyzer] Using GPU 5 NVIDIA GeForce RTX 3090 with UUID GPU-9c9fbba1-0558-4f8e-1534-8ff8e8b03a6c
[Model Analyzer] Using GPU 6 NVIDIA GeForce RTX 3090 with UUID GPU-55808174-5a3e-8082-8759-b248794a1e34
[Model Analyzer] Using GPU 7 NVIDIA GeForce RTX 3090 with UUID GPU-2a0fd91b-3ca8-8249-2d0c-70c7853491a6
[Model Analyzer] Using remote Triton Server
[Model Analyzer] WARNING: GPU memory metrics reported in the remote mode are not accurate. Model Analyzer uses Triton explicit model control to load/unload models. Some frameworks do not release the GPU memory even when the memory is not being used. Consider using the "local" or "docker" mode if you want to accurately monitor the GPU memory usage for different models.
[Model Analyzer] WARNING: Config sweep parameters are ignored in the "remote" mode because Model Analyzer does not have access to the model repository of the remote Triton Server.
[Model Analyzer] No checkpoint file found, starting a fresh run.
[Model Analyzer] Profiling server only metrics...
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=1
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=2
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=4
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=8
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=16
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=32
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=64
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=128
[Model Analyzer] No longer increasing concurrency as throughput has plateaued
[Model Analyzer] Saved checkpoint to /mnt/checkpoints/0.ckpt
[Model Analyzer] Profile complete. Profiled 1 configurations for models: ['densenet_onnx']
[Model Analyzer]
[Model Analyzer] WARNING: GPU output field "gpu_used_memory", has no data
[Model Analyzer] WARNING: GPU output field "gpu_utilization", has no data
[Model Analyzer] WARNING: GPU output field "gpu_power_usage", has no data
[Model Analyzer] WARNING: Server output field "gpu_used_memory", has no data
[Model Analyzer] WARNING: Server output field "gpu_utilization", has no data
[Model Analyzer] WARNING: Server output field "gpu_power_usage", has no data
[Model Analyzer] Exporting inference metrics to /mnt/results/metrics-model-inference.csv
[Model Analyzer] WARNING: Requested top 3 configs, but found only 1. Showing all available configs for this model.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
...
[Model Analyzer] Exporting Summary Report to /mnt/reports/summaries/densenet_onnx/result_summary.pdf
[Model Analyzer] To generate detailed reports for the 1 best configurations, run `model-analyzer report --report-model-configs densenet_onnx --export-path /mnt --config-file config.yaml`
root@53:/mnt# ls
checkpoints  config.yaml  models  models23  output_model_repository  plots  reports  results
root@53:/mnt# cat results/metrics-
metrics-model-inference.csv  metrics-server-only.csv
root@53:/mnt# cat results/metrics-model-inference.csv
Model,Batch,Concurrency,Model Config Path,Instance Group,Max Batch Size,Satisfies Constraints,Throughput (infer/sec),p99 Latency (ms)
densenet_onnx,1,16,densenet_onnx,8:GPU,0,Yes,1394.9,13.1
densenet_onnx,1,64,densenet_onnx,8:GPU,0,Yes,1384.2,50.0
densenet_onnx,1,32,densenet_onnx,8:GPU,0,Yes,1384.0,25.3
densenet_onnx,1,128,densenet_onnx,8:GPU,0,Yes,1331.5,104.6
densenet_onnx,1,8,densenet_onnx,8:GPU,0,Yes,1215.6,7.7
densenet_onnx,1,4,densenet_onnx,8:GPU,0,Yes,472.0,12.2
densenet_onnx,1,2,densenet_onnx,8:GPU,0,Yes,172.5,17.3
densenet_onnx,1,1,densenet_onnx,8:GPU,0,Yes,95.6,15.6
root@53:/mnt# model-analyzer report --report-model-configs densenet_onnx --export-path /mnt --config-file config.yaml
[Model Analyzer] Loaded checkpoint from file /mnt/checkpoints/0.ckpt
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
...
[Model Analyzer] Exporting Detailed Report to /mnt/reports/detailed/densenet_onnx/detailed_report.pdf
root@sse-lg-113-53:/mnt# model-analyzer profile --help
usage: model-analyzer profile [-h] [-f CONFIG_FILE] [-s CHECKPOINT_DIRECTORY]
                              [-i MONITORING_INTERVAL] [-d DURATION_SECONDS]
                              [--collect-cpu-metrics] [--gpus GPUS]
                              [--skip-summary-reports] [-m MODEL_REPOSITORY]
                              [--output-model-repository-path OUTPUT_MODEL_REPOSITORY_PATH]
                              [--override-output-model-repository]
                              [-r CLIENT_MAX_RETRIES] [--client-protocol {http,grpc}]
                              [--profile-models PROFILE_MODELS] [-b BATCH_SIZES]
                              [-c CONCURRENCY] [--reload-model-disable]
                              [--perf-analyzer-timeout PERF_ANALYZER_TIMEOUT]
                              [--perf-analyzer-cpu-util PERF_ANALYZER_CPU_UTIL]
                              [--perf-analyzer-path PERF_ANALYZER_PATH]
                              [--perf-output] [--perf-output-path PERF_OUTPUT_PATH]
                              [--perf-analyzer-max-auto-adjusts PERF_ANALYZER_MAX_AUTO_ADJUSTS]
                              [--triton-launch-mode {local,docker,remote,c_api}]
                              [--triton-docker-image TRITON_DOCKER_IMAGE]
                              [--triton-http-endpoint TRITON_HTTP_ENDPOINT]
                              [--triton-grpc-endpoint TRITON_GRPC_ENDPOINT]
                              [--triton-metrics-url TRITON_METRICS_URL]
                              [--triton-server-path TRITON_SERVER_PATH]
                              [--triton-output-path TRITON_OUTPUT_PATH]
                              [--triton-docker-mounts TRITON_DOCKER_MOUNTS]
                              [--triton-docker-shm-size TRITON_DOCKER_SHM_SIZE]
                              [--triton-install-path TRITON_INSTALL_PATH]
                              [--early-exit-enable]
                              [--run-config-search-max-concurrency RUN_CONFIG_SEARCH_MAX_CONCURRENCY]
                              [--run-config-search-min-concurrency RUN_CONFIG_SEARCH_MIN_CONCURRENCY]
                              [--run-config-search-max-instance-count RUN_CONFIG_SEARCH_MAX_INSTANCE_COUNT]
                              [--run-config-search-min-instance-count RUN_CONFIG_SEARCH_MIN_INSTANCE_COUNT]
                              [--run-config-search-max-model-batch-size RUN_CONFIG_SEARCH_MAX_MODEL_BATCH_SIZE]
                              [--run-config-search-min-model-batch-size RUN_CONFIG_SEARCH_MIN_MODEL_BATCH_SIZE]
                              [--run-config-search-mode {brute,quick}]
                              [--run-config-search-disable]
                              [--run-config-profile-models-concurrently-enable]
                              [-e EXPORT_PATH]
                              [--filename-model-inference FILENAME_MODEL_INFERENCE]
                              [--filename-model-gpu FILENAME_MODEL_GPU]
                              [--filename-server-only FILENAME_SERVER_ONLY]
                              [--num-configs-per-model NUM_CONFIGS_PER_MODEL]
                              [--num-top-model-configs NUM_TOP_MODEL_CONFIGS]
                              [--inference-output-fields INFERENCE_OUTPUT_FIELDS]
                              [--gpu-output-fields GPU_OUTPUT_FIELDS]
                              [--server-output-fields SERVER_OUTPUT_FIELDS]
                              [--latency-budget LATENCY_BUDGET]
                              [--min-throughput MIN_THROUGHPUT]

optional arguments:
  -h, --help            show this help message and exit
  -f CONFIG_FILE, --config-file CONFIG_FILE
                        Path to Config File for subcommand 'profile'.
  -s CHECKPOINT_DIRECTORY, --checkpoint-directory CHECKPOINT_DIRECTORY
                        Full path to directory to which to read and write checkpoints and profile data.
  -i MONITORING_INTERVAL, --monitoring-interval MONITORING_INTERVAL
                        Interval of time between metrics measurements in seconds
  -d DURATION_SECONDS, --duration-seconds DURATION_SECONDS
                        Specifies how long (seconds) to gather server-only metrics
  --collect-cpu-metrics
                        Specify whether CPU metrics are collected or not
  --gpus GPUS           List of GPU UUIDs to be used for the profiling. Use 'all' to profile all the GPUs visible by CUDA.
  --skip-summary-reports
                        Skips the generation of analysis summary reports and tables.
  -m MODEL_REPOSITORY, --model-repository MODEL_REPOSITORY
                        Triton Model repository location
  --output-model-repository-path OUTPUT_MODEL_REPOSITORY_PATH
                        Output model repository path used by Model Analyzer. This is the directory that will contain all the generated model configurations
  --override-output-model-repository
                        Will override the contents of the output model repository and replace it with the new results.
  -r CLIENT_MAX_RETRIES, --client-max-retries CLIENT_MAX_RETRIES
                        Specifies the max number of retries for any requests to Triton server.
  --client-protocol {http,grpc}
                        The protocol used to communicate with the Triton Inference Server
  --profile-models PROFILE_MODELS
                        List of the models to be profiled
  -b BATCH_SIZES, --batch-sizes BATCH_SIZES
                        Comma-delimited list of batch sizes to use for the profiling
  -c CONCURRENCY, --concurrency CONCURRENCY
                        Comma-delimited list of concurrency values or ranges <start:end:step> to be used during profiling
  --reload-model-disable
                        Flag to indicate whether or not to disable model loading and unloading in remote mode.
  --perf-analyzer-timeout PERF_ANALYZER_TIMEOUT
                        Perf analyzer timeout value in seconds.
  --perf-analyzer-cpu-util PERF_ANALYZER_CPU_UTIL
                        Maximum CPU utilization value allowed for the perf_analyzer.
  --perf-analyzer-path PERF_ANALYZER_PATH
                        The full path to the perf_analyzer binary executable
  --perf-output         Enables the output from the perf_analyzer to a file specified by perf_output_path. If perf_output_path is None, output will be written to stdout.
  --perf-output-path PERF_OUTPUT_PATH
                        Path to the file to which write perf_analyzer output, if enabled.
  --perf-analyzer-max-auto-adjusts PERF_ANALYZER_MAX_AUTO_ADJUSTS
                        Maximum number of times perf_analyzer is launched with auto adjusted parameters in an attempt to profile a model.
  --triton-launch-mode {local,docker,remote,c_api}
                        The method by which to launch Triton Server. 'local' assumes tritonserver binary is available locally. 'docker' pulls and launches a triton docker container with the specified version. 'remote' connects to a running server using given http, grpc and metrics endpoints. 'c_api' allows direct benchmarking of Triton locally without the use of endpoints.
  --triton-docker-image TRITON_DOCKER_IMAGE
                        Triton Server Docker image tag
  --triton-http-endpoint TRITON_HTTP_ENDPOINT
                        Triton Server HTTP endpoint url used by Model Analyzer client.
  --triton-grpc-endpoint TRITON_GRPC_ENDPOINT
                        Triton Server GRPC endpoint url used by Model Analyzer client.
  --triton-metrics-url TRITON_METRICS_URL
                        Triton Server Metrics endpoint url.
  --triton-server-path TRITON_SERVER_PATH
                        The full path to the tritonserver binary executable
  --triton-output-path TRITON_OUTPUT_PATH
                        The full path to the file to which Triton server instance will append their log output. If not specified, they are not written.
  --triton-docker-mounts TRITON_DOCKER_MOUNTS
                        A list of strings representing volumes to be mounted. The strings should have the format '<host path>:<container path>:<access mode>'.
  --triton-docker-shm-size TRITON_DOCKER_SHM_SIZE
                        The size of the /dev/shm for the triton docker container
  --triton-install-path TRITON_INSTALL_PATH
                        Path to Triton install directory i.e. the parent directory of 'lib/libtritonserver.so'. Required only when using triton_launch_mode=c_api.
  --early-exit-enable   Flag to indicate if Model Analyzer can skip some configurations when manually searching concurrency or max_batch_size
  --run-config-search-max-concurrency RUN_CONFIG_SEARCH_MAX_CONCURRENCY
                        Max concurrency value that run config search should not go beyond that.
  --run-config-search-min-concurrency RUN_CONFIG_SEARCH_MIN_CONCURRENCY
                        Min concurrency value that run config search should start with.
  --run-config-search-max-instance-count RUN_CONFIG_SEARCH_MAX_INSTANCE_COUNT
                        Max instance count value that run config search should not go beyond that.
  --run-config-search-min-instance-count RUN_CONFIG_SEARCH_MIN_INSTANCE_COUNT
                        Min instance count value that run config search should start with.
  --run-config-search-max-model-batch-size RUN_CONFIG_SEARCH_MAX_MODEL_BATCH_SIZE
                        Value for the model's max_batch_size that run config search will not go beyond.
  --run-config-search-min-model-batch-size RUN_CONFIG_SEARCH_MIN_MODEL_BATCH_SIZE
                        Value for the model's max_batch_size that run config search will start from.
  --run-config-search-mode {brute,quick}
                        The search mode for Model Analyzer to find and evaluate model configurations. 'brute' will brute force all combinations of configuration options. 'quick' will attempt to find a near-optimal configuration as fast as possible, but isn't guaranteed to find the best.
  --run-config-search-disable
                        Disable run config search.
  --run-config-profile-models-concurrently-enable
                        Enable the profiling of all supplied models concurrently.
  -e EXPORT_PATH, --export-path EXPORT_PATH
                        Full path to directory in which to store the results
  --filename-model-inference FILENAME_MODEL_INFERENCE
                        Specifies filename for storing model inference metrics
  --filename-model-gpu FILENAME_MODEL_GPU
                        Specifies filename for storing model GPU metrics
  --filename-server-only FILENAME_SERVER_ONLY
                        Specifies filename for server-only metrics
  --num-configs-per-model NUM_CONFIGS_PER_MODEL
                        The number of configurations to plot per model in the summary.
  --num-top-model-configs NUM_TOP_MODEL_CONFIGS
                        Model Analyzer will compare this many of the top models configs across all models.
  --inference-output-fields INFERENCE_OUTPUT_FIELDS
                        Specifies column keys for model inference metrics table
  --gpu-output-fields GPU_OUTPUT_FIELDS
                        Specifies column keys for model gpu metrics table
  --server-output-fields SERVER_OUTPUT_FIELDS
                        Specifies column keys for server-only metrics table
  --latency-budget LATENCY_BUDGET
                        Shorthand flag for specifying a maximum latency in ms.
  --min-throughput MIN_THROUGHPUT
                        Shorthand flag for specifying a minimum throughput.
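For reference, the config.yaml used earlier maps directly onto these flags; an equivalent pure-CLI invocation would look roughly like this (same endpoints as in the YAML, export path assumed to be /mnt):

model-analyzer profile \
    --model-repository /mnt/models \
    --profile-models densenet_onnx \
    --triton-launch-mode remote \
    --triton-grpc-endpoint 127.0.0.1:9001 \
    --triton-metrics-url 127.0.0.1:9002 \
    --export-path /mnt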
Triton provides a model configuration option called instance_group, which specifies how many execution instances of a model may run concurrently; each such parallel copy is called an instance. By default, Triton places a single copy of the model on each GPU, serving one request at a time; setting the model's instance_group parameter scales the number of concurrent instances up (see the instance_group examples further below). Models can also be chained into a pipeline via an ensemble: the following config feeds a preprocessing model's output into both a classification model and a segmentation model.
name: "ensemble_model" platform: "ensemble" max_batch_size: 1 input [ { name: "IMAGE" data_type: TYPE_STRING dims: [ 1 ] } ] output [ { name: "CLASSIFICATION" data_type: TYPE_FP32 dims: [ 1000 ] }, { name: "SEGMENTATION" data_type: TYPE_FP32 dims: [ 3, 224, 224 ] } ] ensemble_scheduling { step [ { model_name: "image_preprocess_model" model_version: -1 input_map { key: "RAW_IMAGE" value: "IMAGE" } output_map { key: "PREPROCESSED_OUTPUT" value: "preprocessed_image" } }, { model_name: "classification_model" model_version: -1 input_map { key: "FORMATTED_IMAGE" value: "preprocessed_image" } output_map { key: "CLASSIFICATION_OUTPUT" value: "CLASSIFICATION" } }, { model_name: "segmentation_model" model_version: -1 input_map { key: "FORMATTED_IMAGE" value: "preprocessed_image" } output_map { key: "SEGMENTATION_OUTPUT" value: "SEGMENTATION" } } ] }
max_batch_size: 0
input: [
{
name: "data_0",
data_type: TYPE_FP32,
dims: [ 1, 3, 224, 224]
}
]
output: [
{
name: "prob_1",
data_type: TYPE_FP32,
dims: [ 1, 1000, 1, 1 ]
}
]
instance_group [
  {
    count: 2
    kind: KIND_GPU    # create two instances on every available GPU
  }
]
instance_group [
  {
    count: 1
    kind: KIND_GPU
    gpus: [ 0 ]
  },
  {
    count: 2
    kind: KIND_GPU
    gpus: [ 1, 2 ]
  }
]                     # one instance on GPU 0, two instances each on GPU 1 and GPU 2
instance_group [
  {
    count: 2
    kind: KIND_CPU    # create two instances on the CPU
  }
]
instance_group [
  {
    count: 1
    kind: KIND_GPU
    gpus: [ 0, 1, 2 ]
    rate_limiter {
      resources [
        {
          name: "R1"
          count: 4
        },
        {
          name: "R2"
          global: True
          count: 2
        }
      ]
      priority: 2
    }
  }
]
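The rate_limiter block above only takes effect when rate limiting is enabled on the server; the startup table earlier showed rate_limit OFF, which is the default. A hedged sketch of switching it on via the tritonserver --rate-limit option:

# Enable the rate limiter so the R1/R2 resource counts and the priority are honored
tritonserver --model-repository=/mnt/models --rate-limit=execution_count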