This guide shows how to use vLLM to run offline batched inference on a dataset, deploy it as an API server for a large language model, and start an OpenAI-compatible API server.
Be sure to complete the installation instructions before continuing with this guide.
We first show an example of using vLLM for offline batched inference on a dataset. In other words, we use vLLM to generate texts for a list of input prompts.
Import LLM and SamplingParams from vLLM. The LLM class is the main class for running offline inference with the vLLM engine. The SamplingParams class specifies the parameters for the sampling process.
from vllm import LLM, SamplingParams
Define the list of input prompts and the sampling parameters for generation. The sampling temperature is set to 0.8 and the nucleus sampling probability is set to 0.95. For more information about the sampling parameters, refer to the class definition.
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
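Beyond temperature and top_p, SamplingParams accepts several other generation controls. A minimal sketch of a few commonly used fields (names as in recent vLLM releases; check the class definition linked above if your version differs):

# Cap the output length, request two sampled candidates per prompt,
# and stop generation at the first newline.
sampling_params = SamplingParams(
    temperature=0.8,
    top_p=0.95,
    max_tokens=64,
    n=2,
    stop=["\n"],
)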
Initialize vLLM's engine for offline inference with the LLM class and the OPT-125M model. The list of supported models can be found at supported models.
llm = LLM(model="facebook/opt-125m")
Or use a model from www.modelscope.cn (downloaded from a mirror site in China):
# Set the environment variable, either from Python:
#   import os
#   os.environ['VLLM_USE_MODELSCOPE'] = 'True'
# or in the shell before launching:
#   export VLLM_USE_MODELSCOPE=True

llm = LLM(model="qwen/Qwen-7B-Chat", revision="v1.1.8", trust_remote_code=True)
Call llm.generate to generate the outputs. It adds the input prompts to the vLLM engine's waiting queue and executes the vLLM engine to generate the outputs with high throughput. The outputs are returned as a list of RequestOutput objects, which include all the output tokens.
outputs = llm.generate(prompts, sampling_params)
Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
The code example can also be found in examples/offline_inference.py.
vLLM can be deployed as an LLM service. We provide an example FastAPI server. Check vllm/entrypoints/api_server.py for the server implementation. The server uses the AsyncLLMEngine class to support asynchronous processing of incoming requests.
$ python -m vllm.entrypoints.api_server
Or use a model from www.modelscope.cn (downloaded from the open-source community site in China):
$ VLLM_USE_MODELSCOPE=True python -m vllm.entrypoints.api_server \
$     --model="qwen/Qwen-7B-Chat" \
$     --revision="v1.1.8" \
$     --trust-remote-code
By default, this command starts the server at http://localhost:8000 with the OPT-125M model. Query the model in the shell:
$ curl http://localhost:8000/generate \
$     -d '{
$         "prompt": "San Francisco is a",
$         "use_beam_search": true,
$         "n": 4,
$         "temperature": 0
$     }'
A Python client version of the same request:
- """Example Python client for vllm.entrypoints.api_server"""
-
- import argparse
- import json
- from typing import Iterable, List
-
- import requests
-
-
- def clear_line(n: int = 1) -> None:
- LINE_UP = '\033[1A'
- LINE_CLEAR = '\x1b[2K'
- for _ in range(n):
- print(LINE_UP, end=LINE_CLEAR, flush=True)
-
-
- def post_http_request(prompt: str,
- api_url: str,
- n: int = 1,
- temperature=0.0,
- max_tokens=8192,
- stream: bool = False) -> requests.Response:
- headers = {"User-Agent": "Test Client"}
- pload = {
- "prompt": prompt,
- "n": n,
- "use_beam_search": True,
- "temperature": temperature,
- "max_tokens": max_tokens,
- "stream": stream,
- }
- response = requests.post(api_url, headers=headers, json=pload, stream=True)
- return response
-
-
- def get_streaming_response(response: requests.Response) -> Iterable[List[str]]:
- for chunk in response.iter_lines(chunk_size=8192,
- decode_unicode=False,
- delimiter=b"\0"):
- if chunk:
- data = json.loads(chunk.decode("utf-8"))
- output = data["text"]
- yield output
-
-
- def get_response(response: requests.Response) -> List[str]:
- data = json.loads(response.content)
- output = data["text"]
- return output
-
-
- if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--host", type=str, default="localhost")
- parser.add_argument("--port", type=int, default=8000)
- parser.add_argument("--n", type=int, default=4)
- parser.add_argument("--prompt", type=str, default="San Francisco is a")
- parser.add_argument("--stream", action="store_true")
- args = parser.parse_args()
- prompt = args.prompt
- api_url = f"http://{args.host}:{args.port}/generate"
- n = args.n
- stream = args.stream
-
- print(f"Prompt: {prompt!r}\n", flush=True)
- response = post_http_request(prompt, api_url, n, stream)
-
- if stream:
- num_printed_lines = 0
- for h in get_streaming_response(response):
- clear_line(num_printed_lines)
- num_printed_lines = 0
- for i, line in enumerate(h):
- num_printed_lines += 1
- print(f"Beam candidate {i}: {line!r}", flush=True)
- else:
- output = get_response(response)
- for i, line in enumerate(output):
- print(f"Beam candidate {i}: {line!r}", flush=True)
See examples/api_client.py for a more detailed client example.
vLLM can be deployed as a server that mimics the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using the OpenAI API. By default, it starts the server at http://localhost:8000. You can specify the address with the --host and --port arguments. The server currently hosts one model at a time (OPT-125M in the command below) and implements the list models, create chat completion, and create completion endpoints. We are actively adding support for more endpoints.
$ python -m vllm.entrypoints.openai.api_server \
$     --model facebook/opt-125m
Or use a model from www.modelscope.cn:
$ VLLM_USE_MODELSCOPE=True python -m vllm.entrypoints.openai.api_server \
$     --model="qwen/Qwen-7B-Chat" --revision="v1.1.8" --trust-remote-code
By default, the server uses a predefined chat template stored in the tokenizer. You can override this template by using the --chat-template argument:
$ python -m vllm.entrypoints.openai.api_server \
$     --model facebook/opt-125m \
$     --chat-template ./examples/template_chatml.jinja
This server can be queried in the same format as the OpenAI API. For example, list the models:
$ curl http://localhost:8000/v1/models
Query the model with input prompts:
$ curl http://localhost:8000/v1/completions \
$     -H "Content-Type: application/json" \
$     -d '{
$         "model": "facebook/opt-125m",
$         "prompt": "San Francisco is a",
$         "max_tokens": 7,
$         "temperature": 0
$     }'
Since this server is compatible with the OpenAI API, you can use it as a drop-in replacement for any application using the OpenAI API. For example, another way to query the server is via the openai Python package:
Modify OpenAI's API key and API base to use vLLM's API server.
from openai import OpenAI

# Option 1: set environment variables.
# import os
# os.environ['OPENAI_API_KEY'] = "ANY THING"
# os.environ['OPENAI_API_BASE'] = "http://localhost:8000/v1"

# Option 2: modify OpenAI's API key and API base in code to point at vLLM's API server.
openai_api_key = "ANY THING"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)
completion = client.completions.create(model="facebook/opt-125m",
                                        prompt="San Francisco is a")
print("Completion result:", completion)
For a more detailed client example, refer to examples/openai_completion_client.py.
The vLLM server is designed to support the OpenAI Chat API, allowing you to engage in dynamic conversations with the model. The chat interface is a more interactive way to communicate with the model, allowing back-and-forth exchanges that can be stored in the chat history. This is useful for tasks that require context or more detailed explanations.
Querying the model using OpenAI Chat API:
You can use the create chat completion endpoint to communicate with the model in a chat-like interface:
$ curl http://localhost:8000/v1/chat/completions \
$     -H "Content-Type: application/json" \
$     -d '{
$         "model": "facebook/opt-125m",
$         "messages": [
$             {"role": "system", "content": "You are a helpful assistant."},
$             {"role": "user", "content": "Who won the world series in 2020?"}
$         ]
$     }'
Python client example: using the openai Python package, you can also communicate with the model in a chat-like manner:
from openai import OpenAI

# Option 1: set environment variables.
# import os
# os.environ['OPENAI_API_KEY'] = "ANY THING"
# os.environ['OPENAI_API_BASE'] = "http://localhost:8000/v1"

# Option 2: modify OpenAI's API key and API base in code to point at vLLM's API server.
openai_api_key = "ANY THING"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

chat_response = client.chat.completions.create(
    model="facebook/opt-125m",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a joke."},
    ]
)
print("Chat response:", chat_response)
For more in-depth examples and advanced features of the chat API, you can refer to the official OpenAI documentation.