Awesome_Mamba
Computation-Efficient Era: A Comprehensive Survey of State Space Models in Medical Image Analysis
Stars: 125
Awesome Mamba is a curated collection of research papers and articles on the Mamba architecture, a deep learning framework built on selective state spaces and known for modeling long sequences in linear time. The repository organizes the papers into categories spanning visual recognition, speech processing, remote sensing, video processing, activity recognition, image enhancement, medical imaging, reinforcement learning, natural language processing, 3D recognition, multi-modal understanding, time series analysis, graph neural networks, point cloud analysis, and tabular data.
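For readers new to the architecture, the "selective state spaces" in these papers refer to a state-space recurrence whose transition is discretized with an input-dependent step size, so the model can decide per token how much to write into and read from its hidden state. A minimal NumPy sketch of that recurrence is given below; the function and parameter names are illustrative assumptions, not code from any of the listed repositories.

import numpy as np

def selective_ssm_scan(x, A, W_B, W_C, W_dt):
    """Run a discretized selective state-space recurrence over one sequence.
    x              : (seq_len, d_model) input sequence
    A              : (d_state,) diagonal state transition (negative real part)
    W_B, W_C, W_dt : projections that make B, C and the step size depend on
                     the current input ("selective")
    """
    seq_len, d_model = x.shape
    d_state = A.shape[0]
    h = np.zeros((d_model, d_state))              # one hidden state per channel
    y = np.zeros_like(x)
    for t in range(seq_len):
        dt = np.log1p(np.exp(x[t] @ W_dt))        # softplus -> positive step size per channel
        B = x[t] @ W_B                            # input-dependent input projection, (d_state,)
        C = x[t] @ W_C                            # input-dependent readout, (d_state,)
        A_bar = np.exp(dt[:, None] * A[None, :])            # zero-order-hold discretization of A
        B_bar = (A_bar - 1.0) / A[None, :] * B[None, :]     # matching discretization of B
        h = A_bar * h + B_bar * x[t][:, None]     # h_t = A_bar * h_{t-1} + B_bar * x_t
        y[t] = h @ C                              # y_t = C * h_t
    return y

# Example: a 16-step sequence with 4 channels and an 8-dimensional state
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 4))
A = -np.exp(rng.standard_normal(8))               # stable diagonal dynamics
W_B, W_C = rng.standard_normal((4, 8)) * 0.1, rng.standard_normal((4, 8)) * 0.1
W_dt = rng.standard_normal((4, 4)) * 0.1
print(selective_ssm_scan(x, A, W_B, W_C, W_dt).shape)   # (16, 4)

In the actual Mamba implementation this scan runs as a hardware-aware parallel kernel rather than a Python loop; the sketch only shows the shape of the computation discussed in the papers listed below.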
README:
Awesome Mamba
🔥🔥 This is a collection of awesome articles about Mamba models (with a particular emphasis on Medical Image Analysis) 🔥🔥
- Our survey paper on arXiv: Computation-Efficient Era: A Comprehensive Survey of State Space Models in Medical Image Analysis ❤️
@misc{heidari2024computationefficient,
title={Computation-Efficient Era: A Comprehensive Survey of State Space Models in Medical Image Analysis},
author={Moein Heidari and Sina Ghorbani Kolahi and Sanaz Karimijafarbigloo and Bobby Azad and Afshin Bozorgpour and Soheila Hatami and Reza Azad and Ali Diba and Ulas Bagci and Dorit Merhof and Ilker Hacihaliloglu},
year={2024},
eprint={2406.03430},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
- 😎 First release: June 05, 2024
- Survey Papers
- Architecture Redesign
- Remote Sensing
- Speech Processing
- Video Processing
- Activity Recognition
- Image Enhancement
- Image & Video Generation
- Medical Imaging
- Image Segmentation
- Reinforcement Learning
- Natural Language Processing
- 3D Recognition
- Multi-Modal Understanding
- Time Series
- GNN
- Point Cloud
- Tabular Data
- From Generalization Analysis to Optimization Designs for State Space Models
- MambaOut: Do We Really Need Mamba for Vision? [Github]
- State Space Model for New-Generation Network Alternative to Transformers: A Survey [Github]
- A Survey on Visual Mamba
- Mamba-360: Survey of State Space Models as Transformer Alternative for Long Sequence Modelling: Methods, Applications, and Challenges [Github]
- Vision Mamba: A Comprehensive Survey and Taxonomy [Github]
- A Survey on Vision Mamba: Models, Applications and Challenges [Github]
- HiPPO: Recurrent Memory with Optimal Polynomial Projections [Github]
- S4: Efficiently Modeling Long Sequences with Structured State Spaces [Github]
- H3: Hungry Hungry Hippos: Toward Language Modeling with State Space Models [Github]
- LOCOST: State-Space Models for Long Document Abstractive Summarization [Github]
- Theoretical Foundations of Deep Selective State-Space Models
- S4++: Elevating Long Sequence Modeling with State Memory Reply
- Hieros: Hierarchical Imagination on Structured State Space Sequence World Models [Github]
- State Space Models as Foundation Models: A Control Theoretic Overview
- Selective Structured State-Spaces for Long-Form Video Understanding
- Retentive Network: A Successor to Transformer for Large Language Models [Github]
- Convolutional State Space Models for Long-Range Spatiotemporal Modeling [Github]
- Laughing Hyena Distillery: Extracting Compact Recurrences From Convolutions [Github]
- Resurrecting Recurrent Neural Networks for Long Sequences
- Hyena Hierarchy: Towards Larger Convolutional Language Models [Github]
- Mamba: Linear-time sequence modeling with selective state spaces [Github]
- Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality [Github]
- Locating and Editing Factual Associations in Mamba [Github]
- MambaMixer: Efficient Selective State Space Models with Dual Token and Channel Selection [Github]
- Jamba: A Hybrid Transformer-Mamba Language Model
- Mamba-ND: Selective State Space Modeling for Multi-Dimensional Data [Github]
- Incorporating Exponential Smoothing into MLP: A Simple but Effective Sequence Model [Github]
- PlainMamba: Improving Non-Hierarchical Mamba in Visual Recognition [Github]
- Understanding Robustness of Visual State Space Models for Image Classification
- Efficientvmamba: Atrous selective scan for light weight visual mamba [Github]
- Localmamba: Visual state space model with windowed selective scan [Github]
- Mamba4Rec: Towards Efficient Sequential Recommendation with Selective State Space Models [Github]
- The hidden attention of mamba models [Github]
- Learning method for S4 with Diagonal State Space Layers using Balanced Truncation
- BlackMamba: Mixture of Experts for State-Space Models [Github]
- MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts [Github]
- Scalable Diffusion Models with State Space Backbone [Github]
- ZigMa: Zigzag Mamba Diffusion Model [Github]
- Spectral State Space Models
- Mamba-unet: Unet-like pure visual mamba for medical image segmentation [Github]
- Mambabyte: Token-free selective state space model
- Vmamba: Visual state space model [Github]
- Vision mamba: Efficient visual representation learning with bidirectional state space model [Github]
- ChangeMamba: Remote Sensing Change Detection with Spatio-Temporal State Space Model [Github]
- RS-Mamba for Large Remote Sensing Image Dense Prediction [Github]
- RS3Mamba: Visual State Space Model for Remote Sensing Images Semantic Segmentation [Github]
- HSIMamba: Hyperspectral Imaging Efficient Feature Learning with Bidirectional State Space for Classification [Github]
- Rsmamba: Remote sensing image classification with state space model [Github]
- Samba: Semantic Segmentation of Remotely Sensed Images with State Space Model [Github]
- SPMamba: State-space model is all you need in speech separation [Github]
- Dual-path Mamba: Short and Long-term Bidirectional Selective Structured State Space Models for Speech Separation [Github]
- SpikeMba: Multi-Modal Spiking Saliency Mamba for Temporal Video Grounding
- Video mamba suite: State space model as a versatile alternative for video understanding [Github]
- SSM Meets Video Diffusion Models: Efficient Video Generation with Structured State Spaces [Github]
- Videomamba: State space model for efficient video understanding [Github]
- HARMamba: Efficient Wearable Sensor Human Activity Recognition Based on Bidirectional Selective SSM
- VMRNN: Integrating Vision Mamba and LSTM for Efficient and Accurate Spatiotemporal Forecasting [Github]
- Aggregating Local and Global Features via Selective State Spaces Model for Efficient Image Deblurring
- Serpent: Scalable and Efficient Image Restoration via Multi-scale Structured State Space Models
- VmambaIR: Visual State Space Model for Image Restoration [Github]
- Activating Wider Areas in Image Super-Resolution [Github]
- MambaIR: A Simple Baseline for Image Restoration with State-Space Model [Github]
- Pan-Mamba: Effective pan-sharpening with State Space Model [Github]
- U-shaped Vision Mamba for Single Image Dehazing [Github]
- Vim4Path: Self-Supervised Vision Mamba for Histopathology Images [Github]
- VMambaMorph: a Visual Mamba-based Framework with Cross-Scan Module for Deformable 3D Image Registration [Github]
- UltraLight VM-UNet: Parallel Vision Mamba Significantly Reduces Parameters for Skin Lesion Segmentation [Github]
- Rotate to Scan: UNet-like Mamba with Triplet SSM Module for Medical Image Segmentation
- Integrating Mamba Sequence Model and Hierarchical Upsampling Network for Accurate Semantic Segmentation of Multiple Sclerosis Legion
- CMViM: Contrastive Masked Vim Autoencoder for 3D Multi-modal Representation Learning for AD classification
- H-vmunet: High-order vision mamba unet for medical image segmentation [Github]
- ProMamba: Prompt-Mamba for polyp segmentation
- Vm-unet-v2: Rethinking vision mamba unet for medical image segmentation [Github]
- MD-Dose: A Diffusion Model based on the Mamba for Radiotherapy Dose Prediction [Github]
- Large Window-based Mamba UNet for Medical Image Segmentation: Beyond Convolution and Self-attention [Github]
- MambaMIL: Enhancing Long Sequence Modeling with Sequence Reordering in Computational Pathology [Github]
- Clinicalmamba: A generative clinical language model on longitudinal clinical notes [Github]
- Lightm-unet: Mamba assists in lightweight unet for medical image segmentation [Github]
- MedMamba: Vision Mamba for Medical Image Classification [Github]
- Weak-Mamba-UNet: Visual Mamba Makes CNN and ViT Work Better for Scribble-based Medical Image Segmentation [Github]
- P-Mamba: Marrying Perona Malik Diffusion with Mamba for Efficient Pediatric Echocardiographic Left Ventricular Segmentation
- Semi-Mamba-UNet: Pixel-Level Contrastive Cross-Supervised Visual Mamba-based UNet for Semi-Supervised Medical Image Segmentation [Github]
- FD-Vision Mamba for Endoscopic Exposure Correction [Github]
- Swin-umamba: Mamba-based unet with imagenet-based pretraining [Github]
- Vm-unet: Vision mamba unet for medical image segmentation [Github]
- Vivim: a video vision mamba for medical video object segmentation [Github]
- Segmamba: Long-range sequential modeling mamba for 3d medical image segmentation [Github]
- T-Mamba: Frequency-Enhanced Gated Long-Range Dependency for Tooth 3D CBCT Segmentation [Github]
- U-mamba: Enhancing long-range dependency for biomedical image segmentation [Github]
- MambaMorph: a Mamba-based Backbone with Contrastive Feature Learning for Deformable MR-CT Registration [Github]
- nnMamba: 3D Biomedical Image Segmentation, Classification and Landmark Detection with State Space Model [Github]
- MambaMIR: An Arbitrary-Masked Mamba for Joint Medical Image Reconstruction and Uncertainty Estimation [Github]
- ViM-UNet: Vision Mamba for Biomedical Segmentation [Github]
- VM-DDPM: Vision Mamba Diffusion for Medical Image Synthesis
- HC-Mamba: Vision MAMBA with Hybrid Convolutional Techniques for Medical Image Segmentation
- I2I-Mamba: Multi-modal Medical Image Synthesis via Selective State Space Modeling [GitHub]
- Decision Mamba: Reinforcement Learning via Sequence Modeling with Selective State Spaces [Github]
- MAMBA: an Effective World Model Approach for Meta-Reinforcement Learning [Github]
- RankMamba Benchmarking Mamba's Document Ranking Performance in the Era of Transformers [Github]
- Densemamba: State space models with dense hidden connection for efficient large language models [Github]
- Is Mamba Capable of In-Context Learning?
- Gamba: Marry Gaussian Splatting with Mamba for single view 3D reconstruction
- Motion mamba: Efficient and long sequence motion generation with hierarchical and bidirectional selective ssm [Github]
- Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models [Github]
- ReMamber: Referring Image Segmentation with Mamba Twister
- VL-Mamba: Exploring State Space Models for Multimodal Learning
- Cobra: Extending Mamba to Multi-Modal Large Language Model for Efficient Inference [Github]
- SurvMamba: State Space Model with Multi-grained Multi-modal Interaction for Survival Prediction
- MambaDFuse: A Mamba-based Dual-phase Model for Multi-modality Image Fusion
- APRICOT-Mamba: Acuity Prediction in Intensive Care Unit (ICU): Development and Validation of a Stability, Transitions, and Life-Sustaining Therapies Prediction Model
- SiMBA: Simplified Mamba-Based Architecture for Vision and Multivariate Time series [Github]
- Is Mamba Effective for Time Series Forecasting? [Github]
- TimeMachine: A Time Series is Worth 4 Mambas for Long-term Forecasting [Github]
- MambaStock: Selective state space model for stock prediction [Github]
- Bi-Mamba4TS: Bidirectional Mamba for Time Series Forecasting
- STG-Mamba: Spatial-Temporal Graph Learning via Selective State Space Model
- Graph Mamba: Towards Learning on Graphs with State Space Models [Github]
- Graph-Mamba: Towards long-range graph sequence modeling with selective state spaces [Github]
- Recurrent Distance Filtering for Graph Representation Learning [Github]
- Modeling multivariate biosignals with graph neural networks and structured state space models [Github]
- Point mamba: A novel point cloud backbone based on state space model with octree-based ordering strategy [Github]
- Point Cloud Mamba: Point Cloud Learning via State Space Model [Github]
- PointMamba: A Simple State Space Model for Point Cloud Analysis [Github]
- 3DMambaIPF: A State Space Model for Iterative Point Cloud Filtering via Differentiable Rendering
- 3DMambaComplete: Exploring Structured State Space Model for Point Cloud Completion
Alternative AI tools for Awesome_Mamba
Similar Open Source Tools
llm-continual-learning-survey
This repository is a continuously updated survey for Continual Learning of Large Language Models (CL-LLMs), providing a comprehensive overview of various aspects related to the continual learning of large language models. It covers topics such as continual pre-training, domain-adaptive pre-training, continual fine-tuning, model refinement, model alignment, multimodal LLMs, and miscellaneous aspects. The survey includes a collection of relevant papers, each focusing on different areas within the field of continual learning of large language models.
Awesome-LLMs-on-device
Welcome to the ultimate hub for on-device Large Language Models (LLMs)! This repository is your go-to resource for all things related to LLMs designed for on-device deployment. Whether you're a seasoned researcher, an innovative developer, or an enthusiastic learner, this comprehensive collection of cutting-edge knowledge is your gateway to understanding, leveraging, and contributing to the exciting world of on-device LLMs.
Awesome-LLM-Survey
This repository, Awesome-LLM-Survey, serves as a comprehensive collection of surveys related to Large Language Models (LLM). It covers various aspects of LLM, including instruction tuning, human alignment, LLM agents, hallucination, multi-modal capabilities, and more. Researchers are encouraged to contribute by updating information on their papers to benefit the LLM survey community.
AI-System-School
AI System School is a curated list of research in machine learning systems, focusing on ML/DL infra, LLM infra, domain-specific infra, ML/LLM conferences, and general resources. It provides resources such as data processing, training systems, video systems, autoML systems, and more. The repository aims to help users navigate the landscape of AI systems and machine learning infrastructure, offering insights into conferences, surveys, books, videos, courses, and blogs related to the field.
Efficient-LLMs-Survey
This repository provides a systematic and comprehensive review of efficient LLMs research. We organize the literature in a taxonomy consisting of three main categories, covering distinct yet interconnected efficient LLMs topics from **model-centric**, **data-centric**, and **framework-centric** perspectives, respectively. We hope our survey and this GitHub repository can serve as valuable resources to help researchers and practitioners gain a systematic understanding of the research developments in efficient LLMs and inspire them to contribute to this important and exciting field.
Embodied-AI-Guide
Embodied-AI-Guide is a comprehensive guide for beginners to understand Embodied AI, focusing on the path of entry and useful information in the field. It covers topics such as Reinforcement Learning, Imitation Learning, Large Language Model for Robotics, 3D Vision, Control, Benchmarks, and provides resources for building cognitive understanding. The repository aims to help newcomers quickly establish knowledge in the field of Embodied AI.
Recommendation-Systems-without-Explicit-ID-Features-A-Literature-Review
This repository is a collection of papers and resources related to recommendation systems, focusing on foundation models, transferable recommender systems, large language models, and multimodal recommender systems. It explores questions such as the necessity of ID embeddings, the shift from matching to generating paradigms, and the future of multimodal recommender systems. The papers cover various aspects of recommendation systems, including pretraining, user representation, dataset benchmarks, and evaluation methods. The repository aims to provide insights and advancements in the field of recommendation systems through literature reviews, surveys, and empirical studies.
Awesome-LLM-Compression
Awesome LLM compression research papers and tools to accelerate LLM training and inference.
awesome-AIOps
awesome-AIOps is a curated list of academic research and industrial materials related to Artificial Intelligence for IT Operations (AIOps). It includes resources such as competitions, white papers, blogs, tutorials, benchmarks, tools, companies, academic materials, talks, workshops, papers, and courses covering various aspects of AIOps like anomaly detection, root cause analysis, incident management, microservices, dependency tracing, and more.
awesome-llm-security
Awesome LLM Security is a curated collection of tools, documents, and projects related to Large Language Model (LLM) security. It covers various aspects of LLM security including white-box, black-box, and backdoor attacks, defense mechanisms, platform security, and surveys. The repository provides resources for researchers and practitioners interested in understanding and safeguarding LLMs against adversarial attacks. It also includes a list of tools specifically designed for testing and enhancing LLM security.
LLM-Tool-Survey
This repository contains a collection of papers related to tool learning with large language models (LLMs). The papers are organized according to the survey paper 'Tool Learning with Large Language Models: A Survey'. The survey focuses on the benefits and implementation of tool learning with LLMs, covering aspects such as task planning, tool selection, tool calling, response generation, benchmarks, evaluation, challenges, and future directions in the field. It aims to provide a comprehensive understanding of tool learning with LLMs and inspire further exploration in this emerging area.
Awesome-LLM4Graph-Papers
A collection of papers and resources about Large Language Models (LLMs) for graph learning. It covers work that integrates LLMs with graph learning techniques to enhance performance on graph learning tasks, categorizing approaches into four primary paradigms and nine secondary-level categories. Valuable for research or practice in self-supervised learning for recommendation systems.
Awesome-LLM4RS-Papers
This paper list is about Large Language Model-enhanced recommender systems. It also contains some related works. Keywords: recommendation system, large language models
awesome-weather-models
A catalogue and categorization of AI-based weather forecasting models, intended to enable discovery and comparison of the different model options available. The weather models are categorized based on metadata found in the JSON schema specification. The table includes information such as the name of the weather model, the organization that developed it, operational data availability, open-source status, and links for further details.
For similar tasks
NewEraAI-Papers
The NewEraAI-Papers repository provides links to collections of influential and interesting research papers from top AI conferences, along with open-source code to promote reproducibility and provide detailed implementation insights beyond the scope of the article. Users can stay up to date with the latest advances in AI research by exploring this repository. Contributions to improve the completeness of the list are welcomed, and users can create pull requests, open issues, or contact the repository owner via email to enhance the repository further.
OnAIR
The On-board Artificial Intelligence Research (OnAIR) Platform is a framework that enables AI algorithms written in Python to interact with NASA's cFS. It is intended to explore research concepts in autonomous operations in a simulated environment. The platform provides tools for generating environments, handling telemetry data through Redis, running unit tests, and contributing to the repository. Users can set up a conda environment, configure telemetry and Redis examples, run simulations, and conduct unit tests to ensure the functionality of their AI algorithms. The platform also includes guidelines for licensing, copyright, and contributions to the repository.
model-catalog
model-catalog is a repository containing standardized JSON descriptors for Large Language Model (LLM) model files. Each model is described in a JSON file with details about the model, authors, additional resources, available model files, and providers. The format captures factors like model size, architecture, file format, and quantization format. A Github action merges individual JSON files from the `models/` directory into a `catalog.json` file, which is validated using a JSON schema. Contributors can help by adding new model JSON files following the contribution process.
Devon
Devon is an open-source pair programmer tool designed to facilitate collaborative coding sessions. It provides features such as multi-file editing, codebase exploration, test writing, bug fixing, and architecture exploration. The tool supports Anthropic, OpenAI, and Groq APIs, with plans to add more models in the future. Devon is community-driven, with ongoing development goals including multi-model support, plugin system for tool builders, self-hostable Electron app, and setting SOTA on SWE-bench Lite. Users can contribute to the project by developing core functionality, conducting research on agent performance, providing feedback, and testing the tool.
Perplexica
Perplexica is an open-source AI-powered search engine that utilizes advanced machine learning algorithms to provide clear answers with sources cited. It offers various modes like Copilot Mode, Normal Mode, and Focus Modes for specific types of questions. Perplexica ensures up-to-date information by using SearxNG metasearch engine. It also features image and video search capabilities and upcoming features include finalizing Copilot Mode and adding Discover and History Saving features.
For similar jobs
unilm
The 'unilm' repository is a collection of tools, models, and architectures for Foundation Models and General AI, focusing on tasks such as NLP, MT, Speech, Document AI, and Multimodal AI. It includes various pre-trained models, such as UniLM, InfoXLM, DeltaLM, MiniLM, AdaLM, BEiT, LayoutLM, WavLM, VALL-E, and more, designed for tasks like language understanding, generation, translation, vision, speech, and multimodal processing. The repository also features toolkits like s2s-ft for sequence-to-sequence fine-tuning and Aggressive Decoding for efficient sequence-to-sequence decoding. Additionally, it offers applications like TrOCR for OCR, LayoutReader for reading order detection, and XLM-T for multilingual NMT.
llm-app-stack
LLM App Stack, also known as Emerging Architectures for LLM Applications, is a comprehensive list of available tools, projects, and vendors at each layer of the LLM app stack. It covers various categories such as Data Pipelines, Embedding Models, Vector Databases, Playgrounds, Orchestrators, APIs/Plugins, LLM Caches, Logging/Monitoring/Eval, Validators, LLM APIs (proprietary and open source), App Hosting Platforms, Cloud Providers, and Opinionated Clouds. The repository aims to provide a detailed overview of tools and projects for building, deploying, and maintaining enterprise data solutions, AI models, and applications.
awesome-deeplogic
Awesome deep logic is a curated list of papers and resources focusing on integrating symbolic logic into deep neural networks. It includes surveys, tutorials, and research papers that explore the intersection of logic and deep learning. The repository aims to provide valuable insights and knowledge on how logic can be used to enhance reasoning, knowledge regularization, weak supervision, and explainability in neural networks.
duo-attention
DuoAttention is a framework designed to optimize long-context large language models (LLMs) by reducing memory and latency during inference without compromising their long-context abilities. It introduces a concept of Retrieval Heads and Streaming Heads to efficiently manage attention across tokens. By applying a full Key and Value (KV) cache to retrieval heads and a lightweight, constant-length KV cache to streaming heads, DuoAttention achieves significant reductions in memory usage and decoding time for LLMs. The framework uses an optimization-based algorithm with synthetic data to accurately identify retrieval heads, enabling efficient inference with minimal accuracy loss compared to full attention. DuoAttention also supports quantization techniques for further memory optimization, allowing for decoding of up to 3.3 million tokens on a single GPU.
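To make the retrieval-head versus streaming-head split described above concrete, here is a toy Python sketch of the two KV-caching policies. The class name, the head split, and the sink/window sizes are illustrative assumptions, not the project's actual API or configuration.

import numpy as np

class DuoKVCache:
    """Toy per-head KV cache: retrieval heads keep every past token,
    streaming heads keep only a few initial "sink" tokens plus a recent window."""
    def __init__(self, n_heads, retrieval_mask, sink=4, window=64):
        self.retrieval_mask = retrieval_mask      # bool per head, True = retrieval head
        self.sink, self.window = sink, window
        self.keys = [[] for _ in range(n_heads)]
        self.values = [[] for _ in range(n_heads)]

    def append(self, k, v):
        # k, v: (n_heads, head_dim) key/value vectors for the newly decoded token
        for h, (kh, vh) in enumerate(zip(k, v)):
            self.keys[h].append(kh)
            self.values[h].append(vh)
            if not self.retrieval_mask[h] and len(self.keys[h]) > self.sink + self.window:
                # streaming head: evict the oldest non-sink entry, keeping a constant length
                del self.keys[h][self.sink]
                del self.values[h][self.sink]

    def size(self, h):
        return len(self.keys[h])

# Example: 8 heads, with heads 0 and 3 treated as retrieval heads
mask = np.zeros(8, dtype=bool)
mask[[0, 3]] = True
cache = DuoKVCache(n_heads=8, retrieval_mask=mask, sink=2, window=8)
for _ in range(100):
    cache.append(np.random.randn(8, 16), np.random.randn(8, 16))
print(cache.size(0), cache.size(1))               # 100 (retrieval head) vs 10 (streaming head)

Identifying which heads should be treated as retrieval heads is the part DuoAttention learns with its optimization-based procedure on synthetic data; the sketch simply assumes that split is already known.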
spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.
openvino
OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference. It provides a common API to deliver inference solutions on various platforms, including CPU, GPU, NPU, and heterogeneous devices. OpenVINO™ supports pre-trained models from Open Model Zoo and popular frameworks like TensorFlow, PyTorch, and ONNX. Key components of OpenVINO™ include the OpenVINO™ Runtime, plugins for different hardware devices, frontends for reading models from native framework formats, and the OpenVINO Model Converter (OVC) for adjusting models for optimal execution on target devices.