finetrainers

Memory-optimized training library for diffusion models

Stars: 931


FineTrainers is a work-in-progress library designed to support the training of video models, with a focus on LoRA training for popular video models in Diffusers. It aims to eventually extend support to other methods such as controlnets, control-loras, and distillation. The library provides tools for training custom models, handling large datasets, and multi-backend distributed training, and it also offers tooling for curating small, high-quality video datasets for fine-tuning.

README:

finetrainers 🧪

Finetrainers is a work-in-progress library to support (accessible) training of diffusion models. Our first priority is to support LoRA training for all popular video models in Diffusers, and eventually other methods like controlnets, control-loras, distillation, etc.

Video demos: CogVideoX LoRA training as the first iteration of this project, and a replication of PikaEffects.

Table of Contents

  • Quickstart
  • News
  • Support Matrix
  • Featured Projects
  • Acknowledgements

Quickstart

Clone the repository, install the requirements with pip install -r requirements.txt, and install Diffusers from source with pip install git+https://github.com/huggingface/diffusers. The requirements specify diffusers>=0.32.1, but it is always recommended to use the main branch of Diffusers for the latest features and bugfixes. Note that the main branch of finetrainers is also the development branch; stable support should be expected from the release tags.
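For example, after cloning the repository (a minimal sketch of the install steps described above):

```bash
# install the pinned requirements
pip install -r requirements.txt

# install diffusers from source for the latest features and bugfixes
pip install git+https://github.com/huggingface/diffusers
```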

Check out the latest release tag:

git fetch --all --tags
git checkout tags/v0.0.1

Follow the instructions mentioned in the README for the latest stable release.

Using the main branch

To get started quickly with example training scripts on the main development branch, refer to the following:

The following are some small datasets and Hugging Face organizations with good datasets for quick training tests:

Please check out docs/models and examples/training to learn more about the supported models and to find reproducible example training launch scripts.
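For orientation only, a LoRA training launch looks roughly like the sketch below; the script name, flags, and values are illustrative assumptions, and the reproducible commands in examples/training are the source of truth.

```bash
# Hypothetical invocation sketch -- not the library's exact CLI.
# Script name, flags, and values are placeholders; see examples/training for real launch scripts.
torchrun --nproc_per_node=1 train.py \
  --model_name ltx_video \
  --training_type lora \
  --dataset_config path/to/dataset_config.json \
  --output_dir ./outputs/ltx-video-lora
```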

[!IMPORTANT] It is recommended to use PyTorch 2.5.1 or above for training. Previous versions can lead to completely black videos, OOM errors, or other issues and are not tested. For fully reproducible training, please use the same environment as mentioned in environment.md.
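A quick, generic way to confirm the PyTorch version in your environment (not specific to finetrainers):

```bash
python -c "import torch; print(torch.__version__)"
```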

News

  • 🔥 2025-03-07: CogView4 support added!
  • 🔥 2025-03-03: Wan T2V support added!
  • 🔥 2025-03-03: We have shipped a complete refactor to support multi-backend distributed training, better precomputation handling for big datasets, model specification format (externally usable for training custom models), FSDP & more.
  • 🔥 2025-02-12: We have shipped a set of tooling to curate small and high-quality video datasets for fine-tuning. See video-dataset-scripts documentation page for details!
  • 🔥 2025-02-12: Check out eisneim/ltx_lora_training_i2v_t2v! It builds off of finetrainers to support image to video training for LTX-Video and STG guidance for inference.
  • 🔥 2025-01-15: Support for naive FP8 weight-casting training added! This allows training HunyuanVideo in under 24 GB, up to specific resolutions.
  • 🔥 2025-01-13: Support for T2V full-finetuning added! Thanks to @ArEnSc for taking up the initiative!
  • 🔥 2025-01-03: Support for T2V LoRA finetuning of CogVideoX added!
  • 🔥 2024-12-20: Support for T2V LoRA finetuning of Hunyuan Video added! We would like to thank @SHYuanBest for his work on a training script here.
  • 🔥 2024-12-18: Support for T2V LoRA finetuning of LTX Video added!

Support Matrix

[!NOTE] The following numbers were obtained from the release branch. The main branch is unstable at the moment and may use higher memory.

| Model Name | Tasks | Min. LoRA VRAM* | Min. Full Finetuning VRAM^ |
|---|---|---|---|
| LTX-Video | Text-to-Video | 5 GB | 21 GB |
| HunyuanVideo | Text-to-Video | 32 GB | OOM |
| CogVideoX-5b | Text-to-Video | 18 GB | 53 GB |
| Wan | Text-to-Video | TODO | TODO |
| CogView4 | Text-to-Image | TODO | TODO |

*Measured for training only (no validation), at resolution 49x512x768, LoRA rank 128, with pre-computation, using FP8 weights and gradient checkpointing. Pre-computation of conditions and latents may require more memory (but typically under 16 GB).
^Measured for training only (no validation), at resolution 49x512x768, with pre-computation, using BF16 weights and gradient checkpointing.

If you would like to use a custom dataset, refer to the dataset preparation guide here.
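As a loose illustration only (the dataset preparation guide defines the accepted formats; the file names below are assumptions), a simple text-to-video dataset is often a folder of videos plus line-aligned caption and path files:

```bash
# Hypothetical layout -- names are illustrative, not the guide's required format:
#   my-dataset/
#     prompts.txt        - one caption per line
#     videos.txt         - one relative video path per line, aligned with prompts.txt
#     videos/00000.mp4, videos/00001.mp4, ...
# Sanity check that captions and video paths line up:
wc -l my-dataset/prompts.txt my-dataset/videos.txt
```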

Featured Projects 🔥

Check out some amazing projects citing finetrainers:

Check out the following UIs built for finetrainers:

Acknowledgements

  • finetrainers builds on top of & takes inspiration from great open-source libraries - transformers, accelerate, torchtune, torchtitan, peft, diffusers, bitsandbytes, torchao and deepspeed - to name a few.
  • Some of the design choices of finetrainers were inspired by SimpleTuner.
