SLAM-LLM

Speech, Language, Audio, Music Processing with Large Language Model

Stars: 523

SLAM-LLM is a deep learning toolkit for training custom multimodal large language models (MLLMs) focused on speech, language, audio, and music processing. It provides detailed recipes for training and high-performance checkpoints for inference. The toolkit supports tasks such as automatic speech recognition (ASR), text-to-speech (TTS), visual speech recognition (VSR), automated audio captioning (AAC), spatial audio understanding, and music captioning (MC). Users can easily extend it to new models and tasks, use mixed precision training for faster training with less GPU memory, and perform multi-GPU training with data and model parallelism. Configuration is flexible, based on Hydra and dataclasses, allowing different configuration methods.

README:

SLAM-LLM

SLAM-LLM is a deep learning toolkit that allows researchers and developers to train custom multimodal large language models (MLLMs), focusing on Speech, Language, Audio, and Music processing. We provide detailed recipes for training and high-performance checkpoints for inference.

Table of Contents

  1. News
  2. Installation
  3. Usage
  4. Features
  5. Acknowledge
  6. Citation

News

  • [Update Oct. 12, 2024] Recipes for SLAM-AAC have been supported.
  • [Update Sep. 28, 2024] Recipes for CoT-ST have been supported.
  • [Update Sep. 25, 2024] Recipes for DRCap have been supported.
  • [Update Jun. 12, 2024] Recipes for MaLa-ASR have been supported.
  • [CALL FOR EXAMPLE] We sincerely invite developers and researchers to develop new applications and conduct academic research based on SLAM-LLM, and to submit pull requests with your examples! We also welcome engineering PRs (such as improving and speeding up multi-node training).
  • [Update May. 22, 2024] Please join our Slack or WeChat group. We will sync our updates and Q&A there.
  • [Update May. 21, 2024] Recipes for Spatial Audio Understanding have been supported.
  • [Update May. 20, 2024] Recipes for music captioning (MC) have been supported.
  • [Update May. 8, 2024] Recipes for visual speech recognition (VSR) have been supported.
  • [Update May. 4, 2024] Recipes for zero-shot text-to-speech (TTS) have been supported.
  • [Update Apr. 28, 2024] Recipes for automated audio captioning (AAC) have been supported.
  • [Update Mar. 31, 2024] Recipes for automatic speech recognition (ASR) have been supported.

Installation

# install transformers v4.35.2 from source
git clone https://github.com/huggingface/transformers.git
cd transformers
git checkout tags/v4.35.2
pip install -e .
cd ..
# install peft v0.6.0 from source
git clone https://github.com/huggingface/peft.git
cd peft
git checkout tags/v0.6.0
pip install -e .
cd ..
# install PyTorch with CUDA 11.8 wheels
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
# install SLAM-LLM
git clone https://github.com/ddlBoJack/SLAM-LLM.git
cd SLAM-LLM
pip install -e .
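
Optionally, you can verify the installed environment. The snippet below is a minimal sanity-check sketch; the expected version strings simply mirror the pins above.

# quick sanity check of the pinned dependencies (expected values mirror the pins above)
import torch
import torchaudio
import transformers
import peft

print("torch:", torch.__version__)                # expect 2.0.1+cu118
print("torchaudio:", torchaudio.__version__)      # expect 2.0.2+cu118
print("transformers:", transformers.__version__)  # expect 4.35.2
print("peft:", peft.__version__)                  # expect 0.6.0
print("CUDA available:", torch.cuda.is_available())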

For some examples, you may also need fairseq; install it as follows:

# you need to install fairseq before SLAM-LLM
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./

We also provide a Docker image for convenience:

# build docker image
docker build -t slam-llm:latest .

# run docker image with gpu
docker run -it --gpus all --name slam --shm-size=256g slam-llm:latest /bin/bash

Usage

List of Recipes

We provide reference implementations of various LLM-based speech, audio, and music tasks.

Configuration Priority

Configuration follows a hierarchical priority, from highest to lowest:

command-line (shell file) > Hydra configuration (yaml file) > dataclass configuration (Python file)
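
For example, with Hydra a dataclass supplies the defaults, a YAML file can override them, and command-line arguments override both. The sketch below is illustrative only; the field names are hypothetical and not SLAM-LLM's actual configuration schema.

# hypothetical sketch of the three configuration layers (field names are illustrative)
from dataclasses import dataclass

import hydra
from hydra.core.config_store import ConfigStore
from omegaconf import DictConfig

@dataclass
class TrainConfig:
    batch_size: int = 4   # dataclass default: lowest priority
    lr: float = 1e-4

cs = ConfigStore.instance()
cs.store(name="train_config", node=TrainConfig)

@hydra.main(version_base=None, config_name="train_config")
def main(cfg: DictConfig) -> None:
    # a YAML config selected via --config-path/--config-name overrides the
    # dataclass defaults, and a command-line override wins over both, e.g.:
    #   python train.py batch_size=16
    print(cfg.batch_size, cfg.lr)

if __name__ == "__main__":
    main()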

Features

  • Easily extend to new models and tasks.
  • Detailed recipes for training and high-performance checkpoints for inference.
  • Mixed precision training, which trains faster with less GPU memory on NVIDIA Tensor Cores.
  • Multi-GPU training with data and model parallelism, supporting DDP, FSDP, and DeepSpeed (still being improved).
  • Flexible configuration based on Hydra and dataclasses, allowing a combination of code, command-line, and file-based configuration.

Acknowledge

  • We borrow code from Llama-Recipes for the training process.
  • We borrow code from Fairseq for the DeepSpeed configuration.
  • We thank the contributors for providing diverse recipes.

Citation

SLAM-ASR:

@article{ma2024embarrassingly,
  title={An Embarrassingly Simple Approach for LLM with Strong ASR Capacity},
  author={Ma, Ziyang and Yang, Guanrou and Yang, Yifan and Gao, Zhifu and Wang, Jiaming and Du, Zhihao and Yu, Fan and Chen, Qian and Zheng, Siqi and Zhang, Shiliang and others},
  journal={arXiv preprint arXiv:2402.08846},
  year={2024}
}
