tiny-llm

A course on LLM inference serving on Apple Silicon for systems engineers: build a tiny vLLM + Qwen.

tiny-llm is a course on LLM serving with MLX, aimed at systems engineers. The codebase is built almost entirely on MLX array/matrix APIs, so you construct the model serving infrastructure from scratch and can dig into the optimizations. The goal is to serve large language models such as Qwen2 efficiently. The course covers implementing the core components in Python, building an inference system similar to vLLM, and more advanced topics on how the model interacts with the outside world, all without relying on high-level neural network APIs.

README:

tiny-llm - LLM Serving in a Week

A course on LLM serving using MLX for system engineers. The codebase is solely (almost!) based on MLX array/matrix APIs without any high-level neural network APIs, so that we can build the model serving infrastructure from scratch and dig into the optimizations.

The goal is to learn the techniques behind efficiently serving a large language model (e.g., Qwen2 models).

In week 1, you will implement the necessary components in Python (only Python!) to use the Qwen2 model to generate responses (attention, RoPE, etc.). In week 2, you will implement an inference system similar to, but much simpler than, vLLM (KV cache, continuous batching, flash attention, etc.). In week 3, we will cover more advanced topics and how the model interacts with the outside world.
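
For a taste of what building on raw array APIs looks like, here is a minimal sketch of the scaled dot-product attention that week 1 starts from, written directly against MLX array ops. The function name and shapes are illustrative, not the course's reference solution:

```python
import math

import mlx.core as mx


def scaled_dot_product_attention(q: mx.array, k: mx.array, v: mx.array) -> mx.array:
    """Attention over (batch, seq_len, head_dim) arrays -- no nn modules, only array ops."""
    scale = 1.0 / math.sqrt(q.shape[-1])
    # (batch, seq_len, seq_len) attention scores
    scores = mx.matmul(q, mx.transpose(k, (0, 2, 1))) * scale
    weights = mx.softmax(scores, axis=-1)
    return mx.matmul(weights, v)


q = k = v = mx.random.normal((1, 8, 64))  # batch=1, seq_len=8, head_dim=64
print(scaled_dot_product_attention(q, k, v).shape)  # (1, 8, 64)
```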

Why MLX: nowadays it's easier to get a macOS-based local development environment than to set up an NVIDIA GPU.

Why Qwen2: this was the first LLM I interacted with -- it's the go-to example in the vLLM documentation. I spent some time reading the vLLM source code and built up some knowledge around it.

Book

The tiny-llm book is available at https://skyzh.github.io/tiny-llm/. You can follow the guide and start building.

Community

You may join skyzh's Discord server and study with the tiny-llm community.

Roadmap

Week 1 is complete. Week 2 is in progress.

| Week + Chapter | Topic | Code | Test | Doc |
|----------------|-------|------|------|-----|
| 1.1 | Attention | ✅ | ✅ | ✅ |
| 1.2 | RoPE | ✅ | ✅ | ✅ |
| 1.3 | Grouped Query Attention | ✅ | ✅ | ✅ |
| 1.4 | RMSNorm and MLP | ✅ | ✅ | ✅ |
| 1.5 | Load the Model | ✅ | ✅ | ✅ |
| 1.6 | Generate Responses (aka Decoding) | ✅ | ✅ | ✅ |
| 1.7 | Sampling | ✅ | ✅ | ✅ |
| 2.1 | Key-Value Cache | ✅ | ✅ | ✅ |
| 2.2 | Quantized Matmul and Linear - CPU | ✅ | ✅ | 🚧 |
| 2.3 | Quantized Matmul and Linear - GPU | ✅ | ✅ | 🚧 |
| 2.4 | Flash Attention 2 - CPU | ✅ | ✅ | 🚧 |
| 2.5 | Flash Attention 2 - GPU | ✅ | ✅ | 🚧 |
| 2.6 | Continuous Batching | ✅ | ✅ | ✅ |
| 2.7 | Chunked Prefill | ✅ | ✅ | ✅ |
| 3.1 | Paged Attention - Part 1 | 🚧 | 🚧 | 🚧 |
| 3.2 | Paged Attention - Part 2 | 🚧 | 🚧 | 🚧 |
| 3.3 | MoE (Mixture of Experts) | 🚧 | 🚧 | 🚧 |
| 3.4 | Speculative Decoding | 🚧 | 🚧 | 🚧 |
| 3.5 | RAG Pipeline | 🚧 | 🚧 | 🚧 |
| 3.6 | AI Agent / Tool Calling | 🚧 | 🚧 | 🚧 |
| 3.7 | Long Context | 🚧 | 🚧 | 🚧 |
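
As a preview of chapter 2.1, here is a minimal sketch of a key-value cache for incremental decoding, again using only MLX array ops. The `KVCache` class and its `update` interface are illustrative assumptions, not the course's actual API:

```python
import mlx.core as mx


class KVCache:
    """Append-only cache: grows K/V along the sequence axis so each decode
    step only computes keys/values for the newly generated token."""

    def __init__(self):
        self.keys: mx.array | None = None
        self.values: mx.array | None = None

    def update(self, new_k: mx.array, new_v: mx.array) -> tuple[mx.array, mx.array]:
        # new_k/new_v: (batch, new_tokens, head_dim); append along axis 1 (sequence).
        if self.keys is None:
            self.keys, self.values = new_k, new_v
        else:
            self.keys = mx.concatenate([self.keys, new_k], axis=1)
            self.values = mx.concatenate([self.values, new_v], axis=1)
        return self.keys, self.values


cache = KVCache()
for _ in range(3):
    k = v = mx.random.normal((1, 1, 64))  # one new token per decode step
    keys, values = cache.update(k, v)
print(keys.shape)  # (1, 3, 64) -- attention can now read all cached tokens
```

The point of the cache is that each decode step attends over all previously generated tokens without recomputing their keys and values; the paged attention chapters (3.1-3.2) refine how this memory is laid out.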

Other topics not covered: quantized/compressed KV cache, prefix/prompt cache; sampling, fine-tuning; smaller kernels (softmax, SiLU, etc.).
