veRL: Volcano Engine Reinforcement Learning for LLM

veRL is a flexible and efficient reinforcement learning training framework designed for large language models (LLMs). It allows easy extension of diverse RL algorithms, seamless integration with existing LLM infrastructures, and flexible device mapping. The framework achieves state-of-the-art throughput and efficient actor model resharding with 3D-HybridEngine. It supports popular HuggingFace models and is suitable for users working with PyTorch FSDP, Megatron-LM, and vLLM backends.

veRL is a flexible, efficient and production-ready RL training framework designed for large language models (LLMs). veRL is the open-source implementation of the HybridFlow paper.

veRL is flexible and easy to use with:

  • Easy extension of diverse RL algorithms: The hybrid programming model combines the strengths of the single-controller and multi-controller paradigms, enabling flexible representation and efficient execution of complex post-training dataflows. This allows users to build RL dataflows in a few lines of code (see the sketch after this list).

  • Seamless integration of existing LLM infra with modular APIs: Decouples computation and data dependencies, enabling seamless integration with existing LLM frameworks, such as PyTorch FSDP, Megatron-LM and vLLM. Moreover, users can easily extend to other LLM training and inference frameworks.

  • Flexible device mapping: Supports various placement of models onto different sets of GPUs for efficient resource utilization and scalability across different cluster sizes.

  • Ready integration with popular HuggingFace models
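
As a rough illustration of the single-controller idea (a driver script orchestrating the RL stages while pools of workers execute them), here is a toy Ray sketch. The class and function names are hypothetical and this is not veRL's actual API; see the documentation for the real programming interface.

# Toy single-controller dataflow on Ray (hypothetical names, NOT veRL's API)
import ray

ray.init()

@ray.remote
class RolloutWorker:
    def generate(self, prompts):
        # placeholder: a real worker would call an inference engine such as vLLM
        return [p + " <response>" for p in prompts]

@ray.remote
class ActorWorker:
    def update(self, samples, rewards):
        # placeholder: a real worker would run a PPO update with FSDP or Megatron-LM
        return {"loss": 0.0, "num_samples": len(samples)}

def compute_rewards(samples):
    # placeholder reward function (a reward model or rule-based scorer in practice)
    return [1.0 for _ in samples]

# The driver below acts as the single controller: it expresses the dataflow
# (generate -> reward -> update) in a few lines, while each stage runs on
# its own worker processes.
rollout_workers = [RolloutWorker.remote() for _ in range(2)]
actor = ActorWorker.remote()

prompts = ["What is RL?", "Explain PPO."]
samples = sum(ray.get([w.generate.remote(prompts) for w in rollout_workers]), [])
rewards = compute_rewards(samples)
print(ray.get(actor.update.remote(samples, rewards)))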

veRL is fast with:

  • State-of-the-art throughput: By seamlessly integrating existing SOTA LLM training and inference frameworks, veRL achieves high generation and training throughput.

  • Efficient actor model resharding with 3D-HybridEngine: Eliminates memory redundancy and significantly reduces communication overhead during transitions between training and generation phases.

| Documentation | Paper | Slack |

News

Installation Guide

Below are the steps to install veRL in your environment.

Requirements

  • Python: Version >= 3.9
  • CUDA: Version >= 12.1
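
A minimal check for these prerequisites (it assumes nvcc is on your PATH; adjust if you verify the CUDA version another way):

# Minimal prerequisite check: Python >= 3.9 and CUDA >= 12.1 (via nvcc, if present)
import subprocess
import sys

assert sys.version_info >= (3, 9), f"Python >= 3.9 required, found {sys.version}"
print("Python:", sys.version.split()[0])

try:
    out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True, check=True)
    release = [line for line in out.stdout.splitlines() if "release" in line]
    print(release[0].strip() if release else out.stdout.strip())
except (FileNotFoundError, subprocess.CalledProcessError):
    print("nvcc not found; please verify CUDA >= 12.1 manually")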

veRL supports various backends. Currently, the following configurations are available:

  • FSDP and Megatron-LM for training.
  • vLLM for rollout generation.

Training backends

We recommend using the FSDP backend to investigate, research and prototype different models, datasets and RL algorithms. The guide for using the FSDP backend can be found in PyTorch FSDP Backend.

For users who need better scalability, we recommend using the Megatron-LM backend. We currently support Megatron-LM@core_v0.4.0 with fixes for some of its internal issues; see the additional installation guide. The guide for using the Megatron-LM backend can be found in Megatron-LM Backend.

Installation Options

1. From Docker Image

We provide pre-built Docker images for quick setup.

Image and tag: verlai/verl:vemlp-th2.4.0-cu124-vllm0.6.3-ray2.10-te1.7-v0.0.3

  1. Launch the desired Docker image:
docker run --runtime=nvidia -it --rm --shm-size="10g" --cap-add=SYS_ADMIN <image:tag>
  2. Inside the container, install veRL:
# install the nightly version
git clone https://github.com/volcengine/verl && cd verl && pip3 install -e .
# or install from pypi via `pip3 install verl`
  3. Set up Megatron (optional)

If you want to enable training with Megatron, the Megatron-LM code must be added to PYTHONPATH:

cd ..
git clone -b core_v0.4.0 https://github.com/NVIDIA/Megatron-LM.git
cp verl/patches/megatron_v4.patch Megatron-LM/
cd Megatron-LM && git apply megatron_v4.patch
pip3 install -e .
export PYTHONPATH=$PYTHONPATH:$(pwd)

Alternatively, you can clone a copy of Megatron-LM with veRL's patch already applied:

git clone -b core_v0.4.0_verl https://github.com/eric-haibin-lin/Megatron-LM
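
Once the steps above are done, you can run a quick sanity check inside the container (a minimal sketch; the Megatron import only succeeds if you completed the optional step 3 and exported PYTHONPATH as shown):

# Sanity check for the Docker-based install
import verl  # noqa: F401
print("veRL imported successfully")

try:
    from megatron.core import parallel_state  # noqa: F401
    print("Megatron-LM core is importable")
except ImportError:
    print("Megatron-LM is not on PYTHONPATH (fine if you skipped the optional step)")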

2. From Custom Environments

If you prefer setting up veRL in your own environment, follow the steps below.

Using conda is recommended for managing dependencies.

  1. Create a conda environment:
conda create -n verl python==3.9
conda activate verl
  2. Install common dependencies (required for all backends):
# install torch [or you can skip this step and let vllm install the correct version for you]
pip3 install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121

# install vllm
pip3 install vllm==0.6.3 # or you can install 0.5.4, 0.4.2 and 0.3.1
pip3 install ray

# flash attention 2
pip3 install flash-attn --no-build-isolation
  3. Install veRL:
# install the nightly version
git clone https://github.com/volcengine/verl && cd verl && pip3 install -e .
# or install from pypi via `pip3 install verl`
  4. Set up Megatron (optional)
# FOR Megatron-LM Backend
# apex
pip3 install -v --disable-pip-version-check --no-cache-dir --no-build-isolation \
         --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" \
         git+https://github.com/NVIDIA/apex

# transformer engine
pip3 install git+https://github.com/NVIDIA/[email protected]

# megatron core v0.4.0
cd ..
git clone -b core_v0.4.0 https://github.com/NVIDIA/Megatron-LM.git
cp verl/patches/megatron_v4.patch Megatron-LM/
cd Megatron-LM && git apply megatron_v4.patch
pip3 install -e .
export PYTHONPATH=$PYTHONPATH:$(pwd)
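
After completing the steps above, you can confirm the environment with a short import check (a minimal sketch; the Megatron import is only expected to succeed if you completed step 4):

# Quick import check for the custom environment
import torch, ray, vllm, flash_attn  # common dependencies from step 2
import verl                          # installed in step 3

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("vllm:", vllm.__version__)

try:
    import megatron.core  # optional, from step 4
    print("Megatron-LM core is importable")
except ImportError:
    print("Megatron-LM is not installed (optional)")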

Getting Started

Visit our documentation to learn more.

Quickstart:

Running a PPO example step by step:

Reproducible algorithm baselines:

For code explanation and advanced usage (extension):

Citation

@article{sheng2024hybridflow,
  title   = {HybridFlow: A Flexible and Efficient RLHF Framework},
  author  = {Guangming Sheng and Chi Zhang and Zilingfeng Ye and Xibin Wu and Wang Zhang and Ru Zhang and Yanghua Peng and Haibin Lin and Chuan Wu},
  year    = {2024},
  journal = {arXiv preprint arXiv:2409.19256}
}

@inproceedings{zhang2024framework,
  title={A Framework for Training Large Language Models for Code Generation via Proximal Policy Optimization},
  author={Zhang, Chi and Sheng, Guangming and Liu, Siyao and Li, Jiahao and Feng, Ziyuan and Liu, Zherui and Liu, Xin and Jia, Xiaoying and Peng, Yanghua and Lin, Haibin and Wu, Chuan},
  booktitle={NL2Code Workshop of ACM KDD},
  year={2024}
}

Publications Using veRL

@article{liu2024enhancing,
  title={Enhancing Multi-Step Reasoning Abilities of Language Models through Direct Q-Function Optimization},
  author={Liu, Guanlin and Ji, Kaixuan and Zheng, Renjie and Wu, Zheng and Dun, Chen and Gu, Quanquan and Yan, Lin},
  journal={arXiv preprint arXiv:2410.09302},
  year={2024}
}
