KAI Scheduler

KAI Scheduler is a robust, efficient, and scalable Kubernetes scheduler that optimizes GPU resource allocation for AI and machine learning workloads.

Designed to manage large-scale GPU clusters with thousands of nodes and high workload throughput, KAI Scheduler is built for extensive and demanding environments. It allows Kubernetes cluster administrators to dynamically allocate GPU resources to workloads.

KAI Scheduler supports the entire AI lifecycle, from small interactive jobs that require minimal resources to large-scale training and inference, all within the same cluster. It ensures optimal resource allocation while maintaining fairness between different consumers, and it can run alongside other schedulers installed on the cluster: workloads opt in per pod, as shown in the sketch below.
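
Because a pod is handed to KAI Scheduler only when its spec names it via schedulerName, coexistence with the default scheduler comes for free. Below is a minimal sketch of a GPU pod submitted to KAI Scheduler; the kai.scheduler/queue label key and the queue name test follow the project's examples and should be treated as assumptions to verify against the docs for your version (any CUDA-capable image works):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
  labels:
    kai.scheduler/queue: test      # queue assignment; label key per project examples (verify)
spec:
  schedulerName: kai-scheduler     # opt this pod in to KAI Scheduler
  restartPolicy: Never
  containers:
  - name: main
    image: nvidia/cuda:12.4.1-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1          # one whole GPU, exposed by the GPU Operator's device plugin
EOF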

Key Features

  • Batch Scheduling: Ensure all pods in a group are scheduled simultaneously or not at all.
  • Bin Packing & Spread Scheduling: Optimize node usage either by minimizing fragmentation (bin-packing) or increasing resiliency and load balancing (spread scheduling).
  • Workload Priority: Prioritize workloads effectively within queues.
  • Hierarchical Queues: Manage workloads with two-level queue hierarchies for flexible organizational control.
  • Resource Distribution: Customize quotas, over-quota weights, limits, and priorities per queue.
  • Fairness Policies: Ensure equitable resource distribution using Dominant Resource Fairness (DRF) and resource reclamation across queues.
  • Workload Consolidation: Reallocate running workloads intelligently to reduce fragmentation and increase cluster utilization.
  • Elastic Workloads: Dynamically scale workloads within defined minimum and maximum pod counts.
  • Dynamic Resource Allocation (DRA): Support vendor-specific hardware resources through Kubernetes ResourceClaims (e.g., GPUs from NVIDIA or AMD).
  • GPU Sharing: Allow multiple workloads to efficiently share single or multiple GPUs, maximizing resource utilization (see the sketch after this list).
  • Cloud & On-premise Support: Fully compatible with dynamic cloud infrastructures (including auto-scalers like Karpenter) as well as static on-premise deployments.
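
As an illustration of GPU sharing, fractional GPU requests are expressed per pod rather than per node. The sketch below assumes the gpu-fraction annotation shown in the project's GPU-sharing examples (verify the annotation name for your installed version); note that a fractional pod omits the nvidia.com/gpu resource request:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: half-gpu-pod
  annotations:
    gpu-fraction: "0.5"            # request half a GPU (assumed annotation; check the docs)
  labels:
    kai.scheduler/queue: test      # same assumed queue label as above
spec:
  schedulerName: kai-scheduler
  containers:
  - name: main
    image: ubuntu
    command: ["sleep", "infinity"]
EOF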

Prerequisites

Before installing KAI Scheduler, ensure you have:

  • A running Kubernetes cluster
  • Helm CLI installed
  • The NVIDIA GPU Operator installed, if you plan to schedule workloads that request GPU resources
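
A quick way to sanity-check these prerequisites before installing (this assumes the GPU Operator was deployed into its default gpu-operator namespace; adjust if yours differs):

kubectl version                    # confirms the cluster is reachable
helm version                       # confirms the Helm CLI is installed
kubectl get pods -n gpu-operator   # confirms the GPU Operator components are running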

Installation

KAI Scheduler is installed into the kai-scheduler namespace. When submitting workloads, make sure to use a dedicated namespace.

Installation Methods

KAI Scheduler can be installed:

  • From Production (Recommended)
  • From Source (Build it Yourself)

Install from Production

helm repo add nvidia-k8s https://helm.ngc.nvidia.com/nvidia/k8s
helm repo update
helm upgrade -i kai-scheduler nvidia-k8s/kai-scheduler -n kai-scheduler --create-namespace --set "global.registry=nvcr.io/nvidia/k8s"
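
Once the chart is installed, the scheduler components should come up in the kai-scheduler namespace. To confirm, check that all pods reach Running:

kubectl get pods -n kai-scheduler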

Build from Source

Follow the build instructions in the project's documentation.

Quick Start

To start scheduling workloads with KAI Scheduler, continue to the Quick Start example in the project's documentation.
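
As a rough sketch of the first step: workloads are scheduled against queues, so a queue must exist before pods can reference it. The resources below mirror the structure of the project's quick-start examples, but the scheduling.run.ai/v2 API group, the parentQueue field, and the use of -1 for unlimited quota are assumptions to verify against your installed version:

kubectl apply -f - <<EOF
apiVersion: scheduling.run.ai/v2   # assumed API group/version; verify with the docs
kind: Queue
metadata:
  name: default
spec:
  resources:
    cpu: {quota: -1, limit: -1, overQuotaWeight: 1}      # -1 = unlimited in the examples
    gpu: {quota: -1, limit: -1, overQuotaWeight: 1}
    memory: {quota: -1, limit: -1, overQuotaWeight: 1}
---
apiVersion: scheduling.run.ai/v2
kind: Queue
metadata:
  name: test
spec:
  parentQueue: default             # two-level hierarchy: test is a child of default
  resources:
    cpu: {quota: -1, limit: -1, overQuotaWeight: 1}
    gpu: {quota: -1, limit: -1, overQuotaWeight: 1}
    memory: {quota: -1, limit: -1, overQuotaWeight: 1}
EOF

Pods then reference the leaf queue through the kai.scheduler/queue label, as in the earlier pod sketch.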

Support and Getting Help

Please open an issue on the GitHub project for any questions. Your feedback is appreciated.
