
guidellm
Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs
GuideLLM is a platform for evaluating and optimizing the deployment of large language models (LLMs). By simulating real-world inference workloads, GuideLLM enables users to assess the performance, resource requirements, and cost implications of deploying LLMs on various hardware configurations. This approach ensures efficient, scalable, and cost-effective LLM inference serving while maintaining high service quality.
- Performance Evaluation: Analyze LLM inference under different load scenarios to ensure your system meets your service level objectives (SLOs).
- Resource Optimization: Determine the most suitable hardware configurations for running your models effectively.
- Cost Estimation: Understand the financial impact of different deployment strategies and make informed decisions to minimize costs.
- Scalability Testing: Simulate scaling to handle large numbers of concurrent users without performance degradation.
Before installing, ensure you have the following prerequisites:
- OS: Linux or macOS
- Python: 3.9 – 3.13
The latest GuideLLM release can be installed using pip:

```bash
pip install guidellm
```

Or from source code using pip:

```bash
pip install git+https://github.com/vllm-project/guidellm.git
```
For detailed installation instructions and requirements, see the Installation Guide.
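If you prefer to keep GuideLLM isolated from system packages, a standard Python virtual environment works; this is a generic sketch rather than a GuideLLM-specific requirement:

```bash
# Create and activate an isolated environment, then install GuideLLM into it.
python3 -m venv guidellm-env
source guidellm-env/bin/activate
pip install guidellm
```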
Alternatively we publish container images at ghcr.io/vllm-project/guidellm. Running a container is (by default) equivalent to `guidellm benchmark run`:

```bash
podman run \
  --rm -it \
  -v "./results:/results:rw" \
  -e GUIDELLM_TARGET=http://localhost:8000 \
  -e GUIDELLM_RATE_TYPE=sweep \
  -e GUIDELLM_MAX_SECONDS=30 \
  -e GUIDELLM_DATA="prompt_tokens=256,output_tokens=128" \
  ghcr.io/vllm-project/guidellm:latest
```
> [!TIP]
> CLI options can also be specified as environment variables (e.g., `--rate-type sweep` -> `GUIDELLM_RATE_TYPE=sweep`). If both are specified, the CLI option overrides the environment variable.

Replace `latest` with `stable` for the newest tagged release, or set a specific release tag if desired.
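The same image can also be run under Docker; a minimal sketch pinned to the `stable` tag, with the environment values mirroring the Podman example above:

```bash
# Same benchmark run as above, using Docker and the stable image tag.
docker run \
  --rm -it \
  -v "./results:/results:rw" \
  -e GUIDELLM_TARGET=http://localhost:8000 \
  -e GUIDELLM_RATE_TYPE=sweep \
  -e GUIDELLM_MAX_SECONDS=30 \
  -e GUIDELLM_DATA="prompt_tokens=256,output_tokens=128" \
  ghcr.io/vllm-project/guidellm:stable
```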
GuideLLM requires an OpenAI-compatible server to run evaluations. vLLM is recommended for this purpose. After installing vLLM on your desired server (`pip install vllm`), start a vLLM server with a Llama 3.1 8B quantized model by running the following command:

```bash
vllm serve "neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16"
```
For more information on starting a vLLM server, see the vLLM Documentation.
For information on starting other supported inference servers or platforms, see the Supported Backends Documentation.
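If the server needs to listen on a different address or port, `vllm serve` accepts the standard `--host` and `--port` options; the values below are illustrative, and GuideLLM's `--target` must then point at the same address:

```bash
# Illustrative: serve on an explicit address/port (values are placeholders).
vllm serve "neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16" \
  --host 0.0.0.0 \
  --port 8000
```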
To run a GuideLLM benchmark, use the `guidellm benchmark` command with the target set to an OpenAI-compatible server. For this example, the target is set to `http://localhost:8000`, assuming that vLLM is active and running on the same server. Otherwise, update it to the appropriate location. By default, GuideLLM automatically determines the model available on the server and uses it. To target a different model, pass the desired name with the `--model` argument. Additionally, `--rate-type` is set to `sweep`, which automatically runs a range of benchmarks to determine the minimum and maximum rates that the server and model can support. Each benchmark run under the sweep will run for 30 seconds, as set by the `--max-seconds` argument. Finally, `--data` is set to a synthetic dataset with 256 prompt tokens and 128 output tokens per request. For more arguments, supported scenarios, and configurations, jump to the Configurations section or run `guidellm benchmark --help`.
Now, to start benchmarking, run the following command:

```bash
guidellm benchmark \
  --target "http://localhost:8000" \
  --rate-type sweep \
  --max-seconds 30 \
  --data "prompt_tokens=256,output_tokens=128"
```
The above command will begin the evaluation and print live progress updates to the terminal.
After the evaluation is completed, GuideLLM will summarize the results into three sections:
- Benchmarks Metadata: A summary of the benchmark run and the arguments used to create it, including the server, data, profile, and more.
- Benchmarks Info: A high-level view of each benchmark and the requests that were run, including the type, duration, request statuses, and number of tokens.
- Benchmarks Stats: A summary of the statistics for each benchmark run, including the request rate, concurrency, latency, and token-level metrics such as TTFT, ITL, and more.
These sections are printed to the console at the end of the run.
For more details about the metrics and definitions, please refer to the Metrics Documentation.
By default, the full results, including complete statistics and request data, are saved to a file named `benchmarks.json` in the current working directory. This file can be used for further analysis or reporting, and it can also be reloaded into Python using the `guidellm.benchmark.GenerativeBenchmarksReport` class. You can specify a different file name and extension with the `--output-path` argument.
For more details about the supported output file types, please take a look at the Outputs Documentation.
The results from GuideLLM are used to optimize your LLM deployment for performance, resource efficiency, and cost. By analyzing the performance metrics, you can identify bottlenecks, determine the optimal request rate, and select the most cost-effective hardware configuration for your deployment.
For example, when deploying a chat application, we likely want to ensure that our time to first token (TTFT) and inter-token latency (ITL) stay under certain thresholds to meet our service level objectives (SLOs) or service level agreements (SLAs). With the sample data from the example above, setting a TTFT threshold of 200 ms and an ITL threshold of 25 ms shows that, even though the server can handle up to 13 requests per second, we can only meet our SLOs for 99% of users at a request rate of 3.5 requests per second. If we relax the ITL constraint to 50 ms, we can meet the TTFT SLA for 99% of users at a request rate of approximately 10 requests per second.
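Once a candidate rate has been identified from a sweep, it can be validated with a fixed-rate run using the `constant` rate type described in the Configurations section below; the rate value here reuses the 3.5 requests per second figure from the example above and is purely illustrative:

```bash
# Re-run at a fixed request rate to confirm the TTFT/ITL SLOs hold at that load.
guidellm benchmark \
  --target "http://localhost:8000" \
  --rate-type constant \
  --rate 3.5 \
  --max-seconds 120 \
  --data "prompt_tokens=256,output_tokens=128"
```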
For further details on determining the optimal request rate and SLOs, refer to the SLOs Documentation.
GuideLLM offers a range of configurations through both the `guidellm benchmark` CLI command and environment variables, which provide default values and more granular controls. The most common configurations are listed below. A complete list is available by running `guidellm benchmark --help` or `guidellm config`, respectively.
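In addition to CLI flags, the granular settings listed by `guidellm config` can be supplied as environment variables with the `GUIDELLM__` prefix. For example, the throughput concurrency bound referenced in the list below could be set this way; the value is an illustrative placeholder, and exposing it as an environment variable is an assumption based on the naming convention rather than something documented here:

```bash
# Illustrative: bound the concurrency used by the throughput rate type.
export GUIDELLM__MAX_CONCURRENCY=256
```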
The `guidellm benchmark` command is used to run benchmarks against a generative AI backend/server. The command accepts a variety of arguments to customize the benchmark run. The most common arguments include (a combined example follows the list):

- `--target`: Specifies the target path for the backend to run benchmarks against. For example, `http://localhost:8000`. This is required to define the server endpoint.
- `--model`: Allows selecting a specific model from the server. If not provided, it defaults to the first model available on the server. Useful when multiple models are hosted on the same server.
- `--processor`: Used only for synthetic data creation or when the token source configuration is set to local for calculating token metrics locally. It must match the model's processor or tokenizer to ensure compatibility and correctness. This supports either a HuggingFace model ID or a local path to a processor or tokenizer.
- `--data`: Specifies the dataset to use. This can be a HuggingFace dataset ID, a local path to a dataset, or standard text files such as CSV, JSONL, and more. Additionally, synthetic data configurations can be provided using JSON or key-value strings. Synthetic data options include:
  - `prompt_tokens`: Average number of tokens for prompts.
  - `output_tokens`: Average number of tokens for outputs.
  - `TYPE_stdev`, `TYPE_min`, `TYPE_max`: Standard deviation, minimum, and maximum values for the specified type (e.g., `prompt_tokens`, `output_tokens`). If not provided, only the provided tokens value is used.
  - `samples`: Number of samples to generate; defaults to 1000.
  - `source`: Source text data for generation; defaults to a local copy of Pride and Prejudice.
- `--data-args`: A JSON string used to specify the columns to source data from (e.g., `prompt_column`, `output_tokens_count_column`) and additional arguments to pass into the HuggingFace datasets constructor.
- `--data-sampler`: Enables applying `random` shuffling or sampling to the dataset. If not set, no sampling is used.
- `--rate-type`: Defines the type of benchmark to run (default `sweep`). Supported types include:
  - `synchronous`: Runs a single stream of requests one at a time. `--rate` must not be set for this mode.
  - `throughput`: Runs all requests in parallel to measure the maximum throughput for the server (bounded by the `GUIDELLM__MAX_CONCURRENCY` config argument). `--rate` must not be set for this mode.
  - `concurrent`: Runs a fixed number of streams of requests in parallel. `--rate` must be set to the desired concurrency level/number of streams.
  - `constant`: Sends requests asynchronously at a constant rate set by `--rate`.
  - `poisson`: Sends requests at a rate following a Poisson distribution with the mean set by `--rate`.
  - `sweep`: Automatically determines the minimum and maximum rates the server can support by running synchronous and throughput benchmarks, and then runs a series of benchmarks equally spaced between the two rates. The number of benchmarks is set by `--rate` (default is 10).
- `--max-seconds`: Sets the maximum duration (in seconds) for each benchmark run. If not specified, the benchmark will run until the dataset is exhausted or the `--max-requests` limit is reached.
- `--max-requests`: Sets the maximum number of requests for each benchmark run. If not provided, the benchmark will run until `--max-seconds` is reached or the dataset is exhausted.
- `--warmup-percent`: Specifies the percentage of the benchmark to treat as a warmup phase. Requests during this phase are excluded from the final results.
- `--cooldown-percent`: Specifies the percentage of the benchmark to treat as a cooldown phase. Requests during this phase are excluded from the final results.
- `--output-path`: Defines the path to save the benchmark results. Supports JSON, YAML, or CSV formats. If a directory is provided, the results will be saved as `benchmarks.json` in that directory. If not set, the results will be saved in the current working directory.
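As a combined illustration of these options, the following sketch benchmarks a local server at a fixed rate with synthetic data, warmup/cooldown phases, and an explicit output path. All values are placeholders rather than recommendations, and the model/processor IDs simply reuse the quantized Llama 3.1 8B model from the quick-start example:

```bash
# Values below are illustrative placeholders; adjust to your own deployment.
guidellm benchmark \
  --target "http://localhost:8000" \
  --model "neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16" \
  --processor "neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16" \
  --rate-type constant \
  --rate 5 \
  --max-seconds 120 \
  --warmup-percent 10 \
  --cooldown-percent 10 \
  --data "prompt_tokens=256,prompt_tokens_stdev=32,output_tokens=128,output_tokens_max=256" \
  --output-path results/benchmarks.json
```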
GuideLLM UI is a companion frontend for visualizing the results of a GuideLLM benchmark run.
For either pathway below, you'll need to set the output path to `benchmarks.html` for your run, e.g. `--output-path=benchmarks.html`. Alternatively, load a saved run using the `from-file` command and likewise set the output to `benchmarks.html`.
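For example, the quick-start run from above can write the HTML report directly by changing only the output path (all other flags are as documented earlier):

```bash
guidellm benchmark \
  --target "http://localhost:8000" \
  --rate-type sweep \
  --max-seconds 30 \
  --data "prompt_tokens=256,output_tokens=128" \
  --output-path benchmarks.html
```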
- Use the Hosted Build (Recommended for Most Users): This is preconfigured. The latest stable version of the hosted UI (https://blog.vllm.ai/guidellm/ui/latest) is used to build the local HTML file. Execute your run, then open `benchmarks.html` in your browser; no further setup is required.
- Build and Serve the UI Locally (For Development): This option is useful if:
  - You are actively developing the UI
  - You want to test changes to the UI before publishing
  - You want full control over how the report is displayed

  ```bash
  npm install
  npm run build
  npm run serve
  ```

  This will start a local server (e.g., at http://localhost:3000). Then set the environment to `local` before running your benchmarks:

  ```bash
  export GUIDELLM__ENV=local
  ```

  Then you can execute your run.
Our comprehensive documentation offers detailed guides and resources to help you maximize the benefits of GuideLLM. Whether just getting started or looking to dive deeper into advanced topics, you can find what you need in our Documentation.
- Installation Guide - This guide provides step-by-step instructions for installing GuideLLM, including prerequisites and setup tips.
- Backends Guide - A comprehensive overview of supported backends and how to set them up for use with GuideLLM.
- Data/Datasets Guide - Information on supported datasets, including how to use them for benchmarking.
- Metrics Guide - Detailed explanations of the metrics used in GuideLLM, including definitions and how to interpret them.
- Outputs Guide - Information on the different output formats supported by GuideLLM and how to use them.
- Architecture Overview - A detailed look at GuideLLM's design, components, and how they interact.
- vLLM Documentation - Official vLLM documentation provides insights into installation, usage, and supported models.
We appreciate contributions to the code, examples, integrations, documentation, bug reports, and feature requests! Your feedback and involvement are crucial in helping GuideLLM grow and improve. Below are some ways you can get involved:
- DEVELOPING.md - Development guide for setting up your environment and making contributions.
- CONTRIBUTING.md - Guidelines for contributing to the project, including code standards, pull request processes, and more.
- CODE_OF_CONDUCT.md - Our expectations for community behavior to ensure a welcoming and inclusive environment.
Visit our GitHub Releases Page and review the release notes to stay updated with the latest releases.
GuideLLM is licensed under the Apache License 2.0.
If you find GuideLLM helpful in your research or projects, please consider citing it:
```bibtex
@misc{guidellm2024,
  title={GuideLLM: Scalable Inference and Optimization for Large Language Models},
  author={Neural Magic, Inc.},
  year={2024},
  howpublished={\url{https://github.com/vllm-project/guidellm}},
}
```