ServerlessLLM

Scalable and Efficient Serverless Deployment for Large AI Models.

ServerlessLLM is a fast, affordable, and easy-to-use library for multi-LLM serving, optimized for environments with limited GPU resources. It supports various leading LLM inference libraries, achieves fast model load times, and reduces model-switching overhead. The library supports deployment via Ray Cluster and Kubernetes, integrates with the OpenAI Query API, and is actively maintained by its contributors.

README:

ServerlessLLM

| Documentation | Paper | Discord |

ServerlessLLM is a fast, affordable, and easy-to-use library designed for multi-LLM serving, also known as Serverless Inference, Inference Endpoint, or Model Endpoints. The library is ideal for environments with limited GPU resources (GPU poor), as it allows efficient dynamic loading of models onto GPUs. By supporting high levels of GPU multiplexing, it maximizes GPU utilization without the need to dedicate GPUs to individual models.

News

  • [07/24] We are working towards the first release and getting the documentation ready. Stay tuned!

About

ServerlessLLM is Fast:

  • Supports various leading LLM inference libraries, including vLLM and HuggingFace Transformers.
  • Achieves 5-10X faster loading speed than Safetensors and the PyTorch Checkpoint Loader (see the loading sketch after this list).
  • Provides a start-time-optimized model loading scheduler, achieving 5-100X better LLM start-up latency than Ray Serve and KServe.
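
Curious what fast checkpoint loading looks like in code? Below is a minimal sketch based on the ServerlessLLM Store: it converts a HuggingFace Transformers checkpoint into the loading-optimized format once, then loads it back onto GPUs. The module path sllm_store.transformers and the save_model/load_model signatures are assumptions here; the ServerlessLLM Store Guide documents the exact API.

# Minimal sketch of fast checkpoint loading with the ServerlessLLM Store.
# NOTE: the module path (sllm_store.transformers) and the save_model/
# load_model signatures below are assumptions; see the Store Guide.
import torch
from transformers import AutoModelForCausalLM
from sllm_store.transformers import save_model, load_model  # assumed API

# 1. Convert a HuggingFace checkpoint into the ServerlessLLM format (one-off).
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
save_model(model, "./models/facebook/opt-1.3b")

# 2. Later, load it back onto GPUs through the loading-optimized path.
model = load_model(
    "facebook/opt-1.3b",
    device_map="auto",
    torch_dtype=torch.float16,
    storage_path="./models/",
)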

ServerlessLLM is Affordable:

  • Allows many LLM models to share a few GPUs, with low model-switching overhead and seamless live migration of inference.
  • Fully utilizes the local storage available on multi-GPU servers, reducing the need for costly storage servers and network bandwidth.

ServerlessLLM is Easy:

  • Facilitates easy deployment via Ray Cluster and Kubernetes.
  • Seamlessly integrates with the OpenAI Query API.

Getting Started

  1. Install ServerlessLLM following the Installation Guide.

  2. Start a local ServerlessLLM cluster following the Quick Start Guide (a query sketch follows this list).

  3. Just want to try out fast checkpoint loading in your own code? Check out the ServerlessLLM Store Guide.
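
Once a local cluster is running and a model is deployed, you can query it through the OpenAI-compatible endpoint mentioned above. The snippet below is a minimal sketch using the official openai Python client; the endpoint URL, API key, and model name are placeholders that depend on your cluster configuration, so take the real values from the Quick Start Guide.

# Minimal sketch of querying a deployed model through the OpenAI-compatible
# endpoint. The base_url, api_key, and model name are placeholders; use the
# values from your own deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8343/v1",  # placeholder: your cluster endpoint
    api_key="EMPTY",                      # placeholder: key may not be checked
)

response = client.chat.completions.create(
    model="facebook/opt-1.3b",  # placeholder: a model you have deployed
    messages=[{"role": "user", "content": "What is serverless inference?"}],
)
print(response.choices[0].message.content)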

Performance

A detailed analysis of ServerlessLLM's performance is available here.

Contributing

ServerlessLLM is actively maintained and developed by these contributors. We welcome new contributors to join us in making ServerlessLLM faster, better, and easier to use. Please check the Contributing Guide for details.

Citation

If you use ServerlessLLM for your research, please cite our paper:

@inproceedings{fu2024serverlessllm,
  title={ServerlessLLM: Low-Latency Serverless Inference for Large Language Models},
  author={Fu, Yao and Xue, Leyang and Huang, Yeqi and Brabete, Andrei-Octavian and Ustiugov, Dmitrii and Patel, Yuvraj and Mai, Luo},
  booktitle={18th USENIX Symposium on Operating Systems Design and Implementation (OSDI 24)},
  pages={135--153},
  year={2024}
}
