LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture

LongLLaVA scales multi-modal LLMs to 1000 images efficiently via a hybrid architecture. Training proceeds in three stages: single-image alignment, single-image instruction-tuning, and multi-image instruction-tuning. Evaluation is supported through a command-line interface, a Python inference API, and benchmark scripts. The project aims for GPT-4V-level capabilities and beyond, and provides scripts to reproduce the paper's efficiency and performance results.

README:


📃 Paper • 🌐 Demo • 🤗 LongLLaVA-53B-A13B • 🤗 LongLLaVA-9B

[Efficiency comparison figure]

🌈 Update

  • [2024.09.05] The LongLLaVA repo is published! 🎉 The code will be released soon.

Architecture

[Architecture diagram]

Results

  • Main Results
  • Diagnostic Results
  • Video-NIAH

Results reproduction

1. Environment Setup

pip install -r requirements.txt

2. Data Download and Construction

Dataset Taxonomy


  • Dataset Downloading and Construction

    Coming Soon.

3. Training

  • Downloading Language Models

    🤗 Jamba-9B-Instruct (a quick loading check is sketched after this list)

  • Stage I: Single-image Alignment.

    bash Align.sh
  • Stage II: Single-image Instruction-tuning.

    bash SingleImageSFT.sh
  • Stage III: Multi-image Instruction-tuning.

    bash MultiImageSFT.sh
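
As a quick sanity check that the downloaded backbone loads before launching the training stages, here is a minimal sketch using transformers. The Hub ID below is a placeholder, not the confirmed repository name, and native Jamba support needs a recent transformers release (around v4.40):

# Hedged sketch: verify the Jamba backbone loads before training.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = 'path-or-hub-id/Jamba-9B-Instruct'  # assumption: replace with the real checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto')  # requires accelerate

inputs = tokenizer('Hello', return_tensors='pt').to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0]))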

4. Evaluation

  • Command Line Interface
python cli.py --model_dir path-to-longllava
  • Model Inference (a multi-image variant is sketched after this list)
from cli import Chatbot

bot = Chatbot('path-to-longllava')  # path to the LongLLaVA checkpoint
query = 'What does the picture show?'
image_paths = ['image_path1']  # image or video paths

output = bot.chat(query, image_paths)
print(output)  # prints the model's response
  • Benchmarks
bash Eval.sh
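
Because LongLLaVA targets many-image inputs, the same Chatbot interface should accept longer path lists. A hedged sketch, assuming bot.chat keeps the signature shown above (the frame paths are hypothetical):

from cli import Chatbot

bot = Chatbot('path-to-longllava')

# Assumption: chat() accepts many paths at once, matching the single-image call above;
# the order of the paths defines the visual context.
query = 'Summarize what happens across these frames.'
image_paths = [f'frames/frame_{i:04d}.jpg' for i in range(128)]  # hypothetical frame paths
print(bot.chat(query, image_paths))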

5. Reproducing Other Results in the Paper

  • FLOPs
python ./utils/cal_flops.py
  • Prefill Time & Throughput & GPU Memory Usage
python ./benchmarks/Efficiency/evaluate.py
python ./benchmarks/Efficiency/evaluatevllm.py
  • Down-cycling to transfer Jamba-MoE to a dense model (the expert-averaging idea is sketched after this list)
python ./utils/dense_downcycling.py
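
The exact recipe used by dense_downcycling.py is not documented here; a common down-cycling approach collapses each MoE layer's expert FFNs into one dense FFN by averaging their weights. A minimal sketch under that assumption:

import torch

def downcycle_experts(expert_weights):
    # Average per-expert FFN weight matrices into a single dense matrix.
    # Assumption: uniform averaging; the real script may instead weight
    # experts by router statistics.
    return torch.stack(expert_weights, dim=0).mean(dim=0)

# Toy example: 8 experts, each an up-projection of shape (hidden, ffn).
experts = [torch.randn(512, 2048) for _ in range(8)]
dense_up_proj = downcycle_experts(experts)
print(dense_up_proj.shape)  # torch.Size([512, 2048])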

TO DO

  • [ ] Release Data Construction Code

Acknowledgement

  • LLaVA: Visual Instruction Tuning (LLaVA), a codebase built towards GPT-4V level capabilities and beyond.

Citation

@misc{wang2024longllavascalingmultimodalllms,
      title={LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture}, 
      author={Xidong Wang and Dingjie Song and Shunian Chen and Chen Zhang and Benyou Wang},
      year={2024},
      eprint={2409.02889},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.02889}, 
}
