
🍳 MiniCPM-V & o Cookbook

🏠 Main Repository | 📚 Full Documentation

Cook up amazing multimodal AI applications effortlessly with MiniCPM-o, bringing vision, speech, and live-streaming capabilities right to your fingertips!

✨ What Makes Our Recipes Special?

Easy Usage Documentation

Our comprehensive documentation website presents every recipe in a clear, well-organized manner. All features are displayed at a glance, making it easy for you to quickly find exactly what you need.

Broad User Spectrum

We support a wide range of users, from individuals to enterprises and researchers.

  • Individuals: Enjoy effortless inference using Ollama and Llama.cpp with minimal setup.
  • Enterprises: Achieve high-throughput, scalable performance with vLLM and SGLang.
  • Researchers: Leverage advanced frameworks including Transformers, LLaMA-Factory, SWIFT, and Align-anything for flexible model development and cutting-edge experimentation (see the Transformers sketch after this list).
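
For a taste of the researcher path, a single-image chat with Transformers can be as short as the sketch below. It follows the usage pattern published on the MiniCPM-V Hugging Face model cards; the model ID, dtype, and the trust-remote-code chat() signature are assumptions that can differ across MiniCPM versions, so treat it as a sketch rather than the canonical recipe.

```python
# Minimal single-image QA sketch, assuming the MiniCPM-V-2_6 model card API.
# The model ID and the custom chat() signature are assumptions to verify.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model_id = "openbmb/MiniCPM-V-2_6"  # assumed checkpoint; pick the one you use
model = AutoModel.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype=torch.bfloat16
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("example.jpg").convert("RGB")
msgs = [{"role": "user", "content": [image, "Describe this image."]}]

# chat() is a custom method the checkpoint ships via trust_remote_code.
answer = model.chat(image=None, msgs=msgs, tokenizer=tokenizer)
print(answer)
```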

Versatile Deployment Scenarios

Our ecosystem delivers optimal solutions for a variety of hardware environments and deployment demands.

  • Web demo: Launch an interactive multimodal AI web demo with FastAPI (a skeleton sketch follows this list).
  • Quantized deployment: Maximize efficiency and minimize resource consumption using GGUF, BNB, and AWQ.
  • Edge devices: Bring powerful AI experiences to iPhone and iPad, supporting offline and privacy-sensitive applications.
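
To make the web demo path concrete, here is a hypothetical FastAPI skeleton for an image-QA endpoint. It is not the cookbook's Omni Streaming demo; answer_question() is a stand-in for whatever inference backend you load (for example, the Transformers sketch above).

```python
# Hypothetical FastAPI skeleton for an image-QA endpoint -- not the
# cookbook's FastAPI recipe; answer_question() is a stand-in for a real
# MiniCPM-V inference call.
import io

from fastapi import FastAPI, File, Form, UploadFile
from PIL import Image

app = FastAPI()

def answer_question(image: Image.Image, question: str) -> str:
    # Stand-in: call your loaded model here, e.g. model.chat(...).
    raise NotImplementedError

@app.post("/chat")
async def chat(file: UploadFile = File(...), question: str = Form(...)):
    # Decode the uploaded image and hand it to the model.
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    return {"answer": answer_question(image, question)}

# Run with: uvicorn demo:app --port 8000
```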

⭐️ Live Demonstrations

Explore real-world examples of MiniCPM-V deployed on edge devices using our curated recipes. These demos highlight the model's high efficiency and robust performance in practical scenarios.


  • Run locally on an iPad with the iOS demo, observing the process of drawing a rabbit.

🔥 Inference Recipes

Ready-to-run examples

Vision Capabilities

  • 🖼️ Single-image QA: Question answering on a single image
  • 🧩 Multi-image QA: Question answering with multiple images
  • 🎬 Video QA: Video-based question answering
  • 📄 Document Parser: Parse and extract content from PDFs and webpages
  • 📝 Text Recognition: Reliable OCR for photos and screenshots

Audio Capabilities

  • 🎤 Speech-to-Text: Multilingual speech recognition
  • 🗣️ Text-to-Speech: Instruction-following speech synthesis
  • 🎭 Voice Cloning: Realistic voice cloning and role-play
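
As one concrete route into these recipes, single-image QA through Ollama's Python client looks roughly like this. The minicpm-v model tag is an assumption; check the Ollama model library and the cookbook's Ollama recipe for the exact name.

```python
# Single-image QA sketch via the ollama Python client.
# Assumes a prior `ollama pull minicpm-v`; the model tag is an assumption.
import ollama

response = ollama.chat(
    model="minicpm-v",
    messages=[{
        "role": "user",
        "content": "What is in this image?",
        "images": ["./photo.jpg"],  # local file path; the client encodes it
    }],
)
print(response["message"]["content"])
```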

πŸ‹οΈ Fine-tuning Recipes

Customize your model with your own ingredients

Data preparation

Follow the guidance to set up your training datasets.
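
As a rough orientation only: supervised samples for multimodal fine-tuning are commonly JSON records that pair an image path with a conversation. The field names below follow a generic LLaVA-style convention and are assumptions; the data preparation guide defines the authoritative schema.

```python
# Hypothetical training sample -- field names follow a generic LLaVA-style
# convention and are assumptions; defer to the data preparation guide.
import json

sample = {
    "id": "0",
    "image": "images/0001.jpg",
    "conversations": [
        {"role": "user", "content": "<image>\nWhat animal is in the photo?"},
        {"role": "assistant", "content": "A rabbit sitting on the grass."},
    ],
}

with open("train.json", "w", encoding="utf-8") as f:
    json.dump([sample], f, ensure_ascii=False, indent=2)
```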

Training

We provide training methods for different needs, as follows:

  • Transformers: Most flexible for customization
  • LLaMA-Factory: Modular fine-tuning toolkit
  • SWIFT: Lightweight and fast parameter-efficient tuning
  • Align-anything: Visual instruction alignment for multimodal models
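
On the Transformers route, parameter-efficient tuning usually means wrapping the model with LoRA adapters. The sketch below uses the generic peft library rather than the cookbook's scripts, and the target_modules names are assumptions that depend on the checkpoint's architecture.

```python
# Generic LoRA setup with peft -- a sketch, not the cookbook's recipe.
# target_modules names are assumptions; inspect the checkpoint to confirm.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "openbmb/MiniCPM-V-2_6",  # assumed checkpoint
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)

lora_cfg = LoraConfig(
    r=16,                # adapter rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # sanity check: only adapters are trainable
```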

📦 Serving Recipes

Deploy your model efficiently

  • vLLM: High-throughput GPU inference
  • SGLang: High-throughput GPU inference
  • Llama.cpp: Fast CPU inference on PC, iPhone, and iPad
  • Ollama: User-friendly setup
  • OpenWebUI: Interactive web demo with Open WebUI
  • Gradio: Interactive web demo with Gradio
  • FastAPI: Interactive Omni Streaming demo with FastAPI
  • iOS: Interactive iOS demo with llama.cpp
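
To illustrate the high-throughput path, vLLM can expose an OpenAI-compatible endpoint that the standard openai client then queries. The serve command, flags, and model ID below are assumptions; the vLLM recipe has the exact invocation.

```python
# Query a vLLM OpenAI-compatible server -- command, flags, and model ID
# are assumptions; see the vLLM recipe for the exact invocation.
# Start the server first, e.g.:
#   vllm serve openbmb/MiniCPM-V-2_6 --trust-remote-code
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="openbmb/MiniCPM-V-2_6",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/cat.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```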

🥄 Quantization Recipes

Compress your model to improve efficiency

  • GGUF: Simplest and most portable format
  • BNB: Simple and easy-to-use quantization method
  • AWQ: High-performance quantization for efficient inference
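
For a feel of the BNB path, on-the-fly 4-bit loading goes through Transformers' BitsAndBytesConfig. This is a generic bitsandbytes sketch; whether a given MiniCPM checkpoint should be quantized on the fly or downloaded pre-quantized is something to verify against the BNB recipe.

```python
# On-the-fly 4-bit (NF4) loading via bitsandbytes -- a generic sketch;
# the checkpoint ID is an assumption.
import torch
from transformers import AutoModel, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16, store in 4-bit
)

model = AutoModel.from_pretrained(
    "openbmb/MiniCPM-V-2_6",  # assumed checkpoint
    trust_remote_code=True,
    quantization_config=bnb_config,
    device_map="auto",
)
```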

Framework Support Matrix

Edge (On-device)

  • Llama.cpp: Llama.cpp Doc; upstream PR #15575 (2025-08-26); branch: master (2025-08-26); release: b6282
  • Ollama: Ollama Doc; upstream PR #12078 (2025-08-26); branch: merging; release: waiting for official release

Serving (Cloud)

  • vLLM: vLLM Doc; upstream PR #23586 (2025-08-26); branch: main (2025-08-27); release: v0.10.2
  • SGLang: SGLang Doc; upstream PR #9610 (2025-08-26); branch: merging; release: waiting for official release

Fine-tuning

  • LLaMA-Factory: LLaMA-Factory Doc; upstream PR #9022 (2025-08-26); branch: main (2025-08-26); release: waiting for official release

Quantization

  • GGUF: GGUF Doc
  • BNB: BNB Doc
  • AWQ: AWQ Doc

Demos

  • Gradio Demo: Gradio Demo Doc

If you'd like us to prioritize support for another open-source framework, please let us know via this short form.

Awesome Works using MiniCPM-V & o

  • text-extract-api: Document extraction API using OCR and Ollama-supported models
  • comfyui_LLM_party: Build LLM workflows and integrate them into existing image workflows
  • Ollama-OCR: OCR package that uses VLMs through Ollama to extract text from images and PDFs
  • comfyui-mixlab-nodes: ComfyUI node suite supporting Workflow-to-App, GPT & 3D, and more
  • OpenAvatarChat: Interactive digital-human conversation implementation on a single PC
  • pensieve: A privacy-focused passive recording project that captures screen content
  • paperless-gpt: Use LLMs to handle paperless-ngx with AI-powered titles, tags, and OCR
  • Neuro: A recreation of Neuro-Sama, running on local models on consumer hardware

👥 Community

Contributing

We love new recipes! Please share your creative dishes:

  1. Fork the repository
  2. Create your recipe
  3. Submit a pull request

Issues & Support

Institutions

This cookbook is developed by OpenBMB and OpenSQZ.

📜 License

This cookbook is served under the Apache-2.0 License - cook freely, share generously! 🍳

Citation

If you find our model/code/paper helpful, please consider citing our papers 📝 and starring us ⭐️!

@misc{yu2025minicpmv45cookingefficient,
      title={MiniCPM-V 4.5: Cooking Efficient MLLMs via Architecture, Data, and Training Recipe}, 
      author={Tianyu Yu and Zefan Wang and Chongyi Wang and Fuwei Huang and Wenshuo Ma and Zhihui He and Tianchi Cai and Weize Chen and Yuxiang Huang and Yuanqian Zhao and Bokai Xu and Junbo Cui and Yingjing Xu and Liqing Ruan and Luoyuan Zhang and Hanyu Liu and Jingkun Tang and Hongyuan Liu and Qining Guo and Wenhao Hu and Bingxiang He and Jie Zhou and Jie Cai and Ji Qi and Zonghao Guo and Chi Chen and Guoyang Zeng and Yuxuan Li and Ganqu Cui and Ning Ding and Xu Han and Yuan Yao and Zhiyuan Liu and Maosong Sun},
      year={2025},
      eprint={2509.18154},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2509.18154}, 
}

@article{yao2024minicpm,
  title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone},
  author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and others},
  journal={Nature Communications},
  volume={16},
  pages={5509},
  year={2025}
}
