axolotl

Go ahead and axolotl questions

Stars: 10333

Axolotl is an open-source tool that streamlines post-training for AI models. It supports a range of fine-tuning methods, including full fine-tuning, LoRA and QLoRA, GRPO, reward modelling, and quantization-aware training, and it scales from a single GPU to multi-node clusters via DDP, DeepSpeed, and FSDP. Training runs are driven by simple YAML configs, so whether you are a researcher or an ML engineer, Axolotl helps you go from a base model to a fine-tuned model with minimal boilerplate.

README:

Axolotl


🎉 Latest Updates

  • 2025/07:
    • ND Parallelism support has been added to Axolotl. Compose Context Parallelism (CP), Tensor Parallelism (TP), and Fully Sharded Data Parallelism (FSDP) within a single node and across multiple nodes; a config sketch appears after this list. Check out the blog post for more info.
    • Axolotl adds more models: GPT-OSS, Gemma 3n, Liquid Foundation Model 2 (LFM2), and Arcee Foundation Models (AFM).
    • FP8 finetuning with fp8 gather op is now possible in Axolotl via torchao. Get started here!
    • Voxtral, Magistral 1.1, and Devstral with mistral-common tokenizer support have been integrated into Axolotl!
    • TiledMLP support for single-GPU to multi-GPU training with DDP, DeepSpeed, and FSDP has been added to support Arctic Long Sequence Training (ALST). See examples for using ALST with Axolotl!
  • 2025/05: Quantization Aware Training (QAT) support has been added to Axolotl. Explore the docs to learn more!
  • 2025/03: Axolotl has implemented Sequence Parallelism (SP) support. Read the blog and docs to learn how to scale your context length when fine-tuning.
Older updates:
  • 2025/06: Magistral with mistral-common tokenizer support has been added to Axolotl. See examples to start training your own Magistral models with Axolotl!
  • 2025/04: Llama 4 support has been added in Axolotl. See examples to start training your own Llama 4 models with Axolotl's linearized version!
  • 2025/03: (Beta) Fine-tuning Multimodal models is now supported in Axolotl. Check out the docs to fine-tune your own!
  • 2025/02: Axolotl has added LoRA optimizations to reduce memory usage and improve training speed for LoRA and QLoRA in single GPU and multi-GPU training (DDP and DeepSpeed). Jump into the docs to give it a try.
  • 2025/02: Axolotl has added GRPO support. Dive into our blog and GRPO example and have some fun!
  • 2025/01: Axolotl has added Reward Modelling / Process Reward Modelling fine-tuning support. See docs.
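
The ND Parallelism item above is configured through the training YAML. Below is a minimal sketch of what composing TP, CP, and FSDP might look like; the key names (tensor_parallel_size, context_parallel_size, dp_shard_size) and values are assumptions for illustration, so check the blog post and docs for the authoritative options.

# Hypothetical excerpt of an Axolotl YAML config composing parallelisms
# across 8 GPUs; key names are assumptions, verify against the docs
tensor_parallel_size: 2     # shard each layer's matmuls across 2 GPUs
context_parallel_size: 2    # split long sequences across 2 GPUs
dp_shard_size: 2            # FSDP-sharded data parallelism over the remaining 2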

✨ Overview

Axolotl is a tool designed to streamline post-training for various AI models.

Features:

  • Train a wide range of model families, including Llama, GPT-OSS, Gemma 3n, LFM2, Magistral, and multimodal models
  • Multiple post-training methods: full fine-tuning, LoRA, QLoRA, GRPO, reward modelling, and quantization-aware training (QAT)
  • Performance optimizations such as Flash Attention, FP8 fine-tuning, and sequence parallelism for long contexts
  • Scales from a single GPU to multi-node clusters with DDP, DeepSpeed, and FSDP

🚀 Quick Start

Requirements:

  • NVIDIA GPU (Ampere or newer for bf16 and Flash Attention) or AMD GPU
  • Python 3.11
  • PyTorch ≥2.6.0

Google Colab

Open In Colab

Installation

Using pip

pip3 install -U packaging==23.2 setuptools==75.8.0 wheel ninja
pip3 install --no-build-isolation axolotl[flash-attn,deepspeed]

# Download example axolotl configs, deepspeed configs
axolotl fetch examples
axolotl fetch deepspeed_configs  # OPTIONAL

Using Docker

Installing with Docker can be less error-prone than installing in your own environment.

docker run --gpus '"all"' --rm -it axolotlai/axolotl:main-latest

Other installation approaches are described here.

Cloud Providers

Your First Fine-tune

# Fetch axolotl examples
axolotl fetch examples

# Or, specify a custom path
axolotl fetch examples --dest path/to/folder

# Train a model using LoRA
axolotl train examples/llama-3/lora-1b.yml
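
The example config is a YAML file describing the model, dataset, adapter, and training hyperparameters. Below is a minimal sketch in the same spirit; the model name, dataset, and values are illustrative assumptions, not the actual contents of lora-1b.yml.

# Minimal illustrative LoRA config (names and values are assumptions)
base_model: NousResearch/Llama-3.2-1B  # hypothetical ~1B base model
load_in_8bit: true

adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true

datasets:
  - path: mhenrichsen/alpaca_2k_test   # hypothetical dataset
    type: alpaca
val_set_size: 0.1

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.0002
optimizer: adamw_torch
lr_scheduler: cosine
bf16: auto
flash_attention: true
output_dir: ./outputs/lora-out

Any such file can be passed to axolotl train to launch a run.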

That's it! Check out our Getting Started Guide for a more detailed walkthrough.

📚 Documentation

🤝 Getting Help

🌟 Contributing

Contributions are welcome! Please see our Contributing Guide for details.

โค๏ธ Sponsors

Interested in sponsoring? Contact us at [email protected]

๐Ÿ“ Citing Axolotl

If you use Axolotl in your research or projects, please cite it as follows:

@software{axolotl,
  title = {Axolotl: Post-Training for AI Models},
  author = {{Axolotl maintainers and contributors}},
  url = {https://github.com/axolotl-ai-cloud/axolotl},
  license = {Apache-2.0},
  year = {2023}
}

📜 License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
