
dora
DORA (Dataflow-Oriented Robotic Architecture) is middleware designed to streamline and simplify the creation of AI-based robotic applications. It offers low latency, composable, and distributed dataflow capabilities. Applications are modeled as directed graphs, also referred to as pipelines.
Stars: 2483

Dataflow-oriented robotic application (dora-rs) is a framework that makes the creation of robotic applications fast and simple. Building a robotic application comes down to bringing together hardware, algorithms, and AI models, and making them communicate with each other. With dora-rs, we try to make the integration of hardware and software easy by supporting Python, C, C++, and also ROS2, and to keep communication low latency by using zero-copy Arrow messages. dora-rs is still experimental and you might experience bugs, but we're working very hard to make it as stable as possible.
README:
Website | Python API | Rust API | Guide | Discord
- dora-rs is a framework to run realtime multi-AI and multi-hardware applications.
- dora-rs internals are 100% Rust, making it extremely fast compared to alternatives: for example, ⚡️ 10-17x faster than ros2.
- Includes a large set of pre-packaged nodes for fast prototyping, which simplifies integration of hardware, algorithms, and AI models.

Latency benchmark with the Python API for both frameworks, sending 40M of random bytes.
2025
- [07/25] Added Kornia Rust nodes to the hub for V4L / Gstreamer cameras and Sobel image processing.
- [06/25] Add support for git-based nodes, dora-vggt for multi-camera depth estimation, and robot_descriptions_py as a default way to get URDFs within dora.
- [05/25] Add support for dora-pytorch-kinematics for FK and IK, dora-mediapipe for pose estimation, dora-rustypot for Rust serialport read/write, and points2d and points3d visualization in rerun.
- [04/25] Add support for dora-cotracker to track any point on a frame, dora-rav1e AV1 encoding up to 12-bit, and dora-dav1d AV1 decoding.
- [03/25] Add support for dora async Python.
- [03/25] Add support for Microsoft Phi4, Microsoft Magma.
- [03/25] dora-rs has been accepted to GSoC 2025, with the following idea list.
- [03/25] Add support for Zenoh for distributed dataflow.
- [03/25] Add support for Meta SAM2, Kokoro (TTS), and improved Qwen2.5 performance using llama.cpp.
- [02/25] Add support for Qwen2.5 (LLM), Qwen2.5-VL (VLM), outetts (TTS).
| | dora-rs |
|---|---|
| APIs | Python >= 3.7 including sync ⭐✅; Rust ✅; C/C++ 🆗; ROS2 >= Foxy 🆗 |
| OS | Linux: Arm 32 ⭐✅, Arm 64 ⭐✅, x64_86 ⭐✅; macOS: Arm 64 ⭐✅; Windows: x64_86 🆗; WSL: x64_86 🆗; Android: 🛠️ (blocked by https://github.com/elast0ny/shared_memory/issues/32); iOS: 🛠️ |
| Message Format | Arrow ✅; Standard Specification 🛠️ |
| Local Communication | Shared Memory ✅; Cuda IPC 📐 |
| Remote Communication | Zenoh 📐 |
| Metrics, Tracing, and Logging | Opentelemetry 📐 |
| Configuration | YAML ✅ |
| Package Manager | pip: Python Node ✅, Rust Node ✅, C/C++ Node 🛠️; cargo: Rust Node ✅ |
- ⭐ = Recommended
- ✅ = First Class Support
- 🆗 = Best Effort Support
- 📐 = Experimental and looking for contributions
- 🛠️ = Unsupported but hoped for through contributions

Everything is open for contributions.
Feel free to modify this README with your own nodes so that it benefits the community.
| Type | Title | Support | Description |
|---|---|---|---|
| Camera | PyOrbbeckSDK | 📐 | Image and depth from Orbbeck Camera |
| Camera | PyRealsense | Linux 📐, Mac 🛠️ | Image and depth from Realsense |
| Camera | OpenCV Video Capture | ✅ | Image stream from OpenCV Camera |
| Camera | Kornia V4L Capture | ✅ | Video stream for Linux Camera (Rust) |
| Camera | Kornia GST Capture | ✅ | Video capture using Gstreamer (Rust) |
| Peripheral | Keyboard | ✅ | Keyboard char listener |
| Peripheral | Microphone | ✅ | Audio from microphone |
| Peripheral | PyAudio (Speaker) | ✅ | Output audio from speaker |
| Actuator | Feetech | 📐 | Feetech Client |
| Actuator | Dynamixel | 📐 | Dynamixel Client |
| Chassis | Agilex - UGV | 📐 | Robomaster Client |
| Chassis | DJI - Robomaster S1 | 📐 | Robomaster Client |
| Chassis | Dora Kit Car | 📐 | Open Source Chassis |
| Arm | Alex Koch - Low Cost Robot | 📐 | Alex Koch - Low Cost Robot Client |
| Arm | Lebai - LM3 | 📐 | Lebai client |
| Arm | Agilex - Piper | 📐 | Agilex arm client |
| Robot | Pollen - Reachy 1 | 📐 | Reachy 1 Client |
| Robot | Pollen - Reachy 2 | 📐 | Reachy 2 client |
| Robot | Trossen - Aloha | 📐 | Aloha client |
| Voice Activity Detection (VAD) | Silero VAD | ✅ | Silero voice activity detection |
| Speech to Text (STT) | Whisper | ✅ | Transcribe audio to text |
| Object Detection | Yolov8 | ✅ | Object detection |
| Segmentation | SAM2 | Cuda ✅, Metal 🛠️ | Segment Anything |
| Large Language Model (LLM) | Qwen2.5 | ✅ | Large Language Model using Qwen |
| Vision Language Model (VLM) | Qwen2.5-vl | ✅ | Vision Language Model using Qwen2.5 VL |
| Vision Language Model (VLM) | InternVL | 📐 | InternVL is a vision language model |
| Vision Language Action (VLA) | RDT-1B | 📐 | Infer policy using Robotic Diffusion Transformer |
| Translation | ArgosTranslate | 📐 | Open Source translation engine |
| Translation | Opus MT | 📐 | Translate text between languages |
| Text to Speech (TTS) | Kokoro TTS | ✅ | Efficient Text to Speech |
| Recorder | Llama Factory Recorder | 📐 | Record data to train LLM and VLM |
| Recorder | LeRobot Recorder | 📐 | LeRobot Recorder helper |
| Visualization | Plot | ✅ | Simple OpenCV plot visualization |
| Visualization | Rerun | ✅ | Visualization tool |
| Simulator | Mujoco | 📐 | Mujoco Simulator |
| Simulator | Carla | 📐 | Carla Simulator |
| Simulator | Gymnasium | 📐 | Experimental OpenAI Gymnasium bridge |
| Image Processing | Kornia Sobel Operator | ✅ | Kornia image processing Sobel operator (Rust) |
| Type | Title | Description |
|---|---|---|
| Audio | Speech to Text (STT) | Transform speech to text. |
| Audio | Translation | Translate audio in real time. |
| Vision | Vision Language Model (VLM) | Use a VLM to understand images. |
| Vision | YOLO | Use YOLO to detect objects within images. |
| Vision | Camera | Simple webcam plot example. |
| Vision | Image Processing | Multi-camera image processing. |
| Model Training | Piper RDT | Piper RDT pipeline. |
| Model Training | LeRobot - Alexander Koch | Training the Alexander Koch low-cost robot with LeRobot. |
| ROS2 | C++ ROS2 Example | Example using C++ ROS2. |
| ROS2 | Rust ROS2 Example | Example using Rust ROS2. |
| ROS2 | Python ROS2 Example | Example using Python ROS2. |
| Benchmark | GPU Benchmark | GPU benchmark of dora-rs. |
| Benchmark | CPU Benchmark | CPU benchmark of dora-rs. |
| Tutorial | Rust Example | Example using Rust. |
| Tutorial | Python Example | Example using Python. |
| Tutorial | CMake Example | Example using CMake. |
| Tutorial | C Example | Example with a C node. |
| Tutorial | CUDA Example | Example using CUDA zero copy. |
| Tutorial | C++ Example | Example with a C++ node. |
Install the dora CLI with pip:

pip install dora-rs-cli

Additional installation methods

Install dora with our standalone installers, or from crates.io:

cargo install dora-cli

On Linux and macOS, with the standalone installer:

curl --proto '=https' --tlsv1.2 -LsSf https://github.com/dora-rs/dora/releases/latest/download/dora-cli-installer.sh | sh

On Windows, with the standalone installer:

powershell -ExecutionPolicy ByPass -c "irm https://github.com/dora-rs/dora/releases/latest/download/dora-cli-installer.ps1 | iex"

Or build from source:

git clone https://github.com/dora-rs/dora.git
cd dora
cargo build --release -p dora-cli
PATH=$PATH:$(pwd)/target/release
- Run the yolo python example:
## Create a virtual environment
uv venv --seed -p 3.11
## Install nodes dependencies of a remote graph
dora build https://raw.githubusercontent.com/dora-rs/dora/refs/heads/main/examples/object-detection/yolo.yml --uv
## Run yolo graph
dora run yolo.yml --uv
Make sure you have a webcam connected.
To stop your dataflow, you can use ctrl+c.
- To understand what is happening, you can look at the dataflow with:
cat yolo.yml
- Resulting in:
nodes:
- id: camera
build: pip install opencv-video-capture
path: opencv-video-capture
inputs:
tick: dora/timer/millis/20
outputs:
- image
env:
CAPTURE_PATH: 0
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480
- id: object-detection
build: pip install dora-yolo
path: dora-yolo
inputs:
image: camera/image
outputs:
- bbox
- id: plot
build: pip install dora-rerun
path: dora-rerun
inputs:
image: camera/image
boxes2d: object-detection/bbox
- In the above example, the camera node sends images to both the rerun viewer and the YOLO model; the model generates bounding boxes that are then visualized in rerun.
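To add your own processing step to such a graph, you write a node with the dora Python API and wire it into the YAML (for example, with an input referencing object-detection/bbox). The sketch below is a minimal, hypothetical consumer node: the event loop (Node, event "type", "id", "value") is the dora Python interface, while the input name and what is done with the data are only illustrative.

from dora import Node

node = Node()

for event in node:
    # Each INPUT event carries the sender's Arrow payload in event["value"].
    if event["type"] == "INPUT" and event["id"] == "bbox":
        bbox = event["value"]
        print(f"received {len(bbox)} bounding-box entries")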
The full documentation is available on our website. Many guides are available in this section of our website.
Dataflow-Oriented Robotic Architecture (dora-rs) is a framework that makes the creation of robotic applications fast and simple.
dora-rs implements a declarative dataflow paradigm where tasks are split between nodes isolated as individual processes.
The dataflow paradigm has the advantage of creating an abstraction layer that makes robotic applications modular and easily configurable.
Communication between nodes is handled with shared memory on the same machine and TCP between distributed machines. Our shared memory implementation tracks messages across processes and discards them when obsolete. Shared memory slots are cached to avoid new memory allocations.
Nodes communicate using the Apache Arrow data format.
Apache Arrow is a universal memory format for flat and hierarchical data. The Arrow memory format supports zero-copy reads for lightning-fast data access without serialization overhead. It defines a C data interface without any build-time or link-time dependency requirement, which means that dora-rs has no compilation step beyond the native compiler of your favourite language.
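As a concrete illustration of the zero-copy property (this is plain pyarrow, independent of dora; the buffer size is arbitrary), an Arrow array can be handed back to NumPy without duplicating the underlying buffer:

import numpy as np
import pyarrow as pa

# Build an Arrow array, e.g. a flattened camera frame a node would send as output.
frame = np.zeros(640 * 480, dtype=np.uint8)
arrow_frame = pa.array(frame)

# Read it back as a NumPy view; zero_copy_only=True raises instead of
# silently copying, so this round-trip never duplicates the pixel buffer.
view = arrow_frame.to_numpy(zero_copy_only=True)
print(view.shape, view.dtype)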
dora-rs uses Opentelemetry to record all your logs, metrics and traces. This means that the data and telemetry can be linked using a shared abstraction.
Opentelemetry is an open source observability standard that makes dora-rs telemetry collectable by most backends, such as Elasticsearch, Prometheus, and Datadog.
Opentelemetry is language independent and backend agnostic, and it easily collects distributed data, making it perfect for dora-rs applications.
Note: this feature is marked as unstable.
- Compilation-free message passing to ROS 2
- Automatic conversion ROS 2 Message <-> Arrow Array
import pyarrow as pa

# Configuration boilerplate (see the sketch below)...
turtle_twist_writer = ...

# Arrow-based ROS2 Twist message,
# which does not require any ROS2 import
message = pa.array([
    {
        "linear": {
            "x": 1,
        },
        "angular": {
            "z": 1,
        },
    }
])

turtle_twist_writer.publish(message)
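The configuration boilerplate elided above typically creates a ROS 2 context, node, topic, and publisher through dora's ROS 2 bridge. The following is a hedged sketch modeled on the dora python-ros2-dataflow example; the module path (dora.experimental.ros2_bridge) and option names may differ across dora versions, and the node, namespace, and topic names are only illustrative.

from dora.experimental import ros2_bridge

# Create a ROS 2 context and node from inside a dora node
# (no ROS 2 Python packages are imported).
ros2_context = ros2_bridge.Ros2Context()
ros2_node = ros2_context.new_node(
    "turtle_teleop",   # ROS 2 node name (illustrative)
    "/ros2_demo",      # namespace (illustrative)
    ros2_bridge.Ros2NodeOptions(rosout=True),
)

# Declare the topic and a publisher for geometry_msgs/Twist.
topic_qos = ros2_bridge.Ros2QosPolicies(reliable=True, max_blocking_time=0.1)
turtle_twist_topic = ros2_node.create_topic(
    "/turtle1/cmd_vel", "geometry_msgs/Twist", topic_qos
)
turtle_twist_writer = ros2_node.create_publisher(turtle_twist_topic)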
You might want to use ChatGPT to write the Arrow Formatting: https://chat.openai.com/share/4eec1c6d-dbd2-46dc-b6cd-310d2895ba15
We are passionate about supporting contributors of all levels of experience and would love to see you get involved in the project. See the contributing guide to get started.
Our main communication channels are:
Feel free to reach out on any topic, issue, or idea.
We also have a contributing guide.
This project is licensed under Apache-2.0. Check out NOTICE.md for more information.
Alternative AI tools for dora
Similar Open Source Tools

dora
Dataflow-oriented robotic application (dora-rs) is a framework that makes creation of robotic applications fast and simple. Building a robotic application can be summed up as bringing together hardwares, algorithms, and AI models, and make them communicate with each others. At dora-rs, we try to: make integration of hardware and software easy by supporting Python, C, C++, and also ROS2. make communication low latency by using zero-copy Arrow messages. dora-rs is still experimental and you might experience bugs, but we're working very hard to make it stable as possible.

dexto
Dexto is a lightweight runtime for creating and running AI agents that turn natural language into real-world actions. It serves as the missing intelligence layer for building AI applications, standalone chatbots, or as the reasoning engine inside larger products. Dexto features a powerful CLI and Web UI for running AI agents, supports multiple interfaces, allows hot-swapping of LLMs from various providers, connects to remote tool servers via the Model Context Protocol, is config-driven with version-controlled YAML, offers production-ready core features, extensibility for custom services, and enables multi-agent collaboration via MCP and A2A.

baibot
Baibot is a versatile chatbot framework designed to simplify the process of creating and deploying chatbots. It provides a user-friendly interface for building custom chatbots with various functionalities such as natural language processing, conversation flow management, and integration with external APIs. Baibot is highly customizable and can be easily extended to suit different use cases and industries. With Baibot, developers can quickly create intelligent chatbots that can interact with users in a seamless and engaging manner, enhancing user experience and automating customer support processes.

deeppowers
Deeppowers is a powerful Python library for deep learning applications. It provides a wide range of tools and utilities to simplify the process of building and training deep neural networks. With Deeppowers, users can easily create complex neural network architectures, perform efficient training and optimization, and deploy models for various tasks. The library is designed to be user-friendly and flexible, making it suitable for both beginners and experienced deep learning practitioners.

hyper-mcp
hyper-mcp is a fast and secure MCP server that enables adding AI capabilities to applications through WebAssembly plugins. It supports writing plugins in various languages, distributing them via standard OCI registries, and running them in resource-constrained environments. The tool offers sandboxing with WASM for limiting access, cross-platform compatibility, and deployment flexibility. Security features include sandboxed plugins, memory-safe execution, secure plugin distribution, and fine-grained access control. Users can configure the tool for global or project-specific use, start the server with different transport options, and utilize available plugins for tasks like time calculations, QR code generation, hash generation, IP retrieval, and webpage fetching.

jadx-mcp-server
JADX-MCP-SERVER is a standalone Python server that interacts with JADX-AI-MCP Plugin to analyze Android APKs using LLMs like Claude. It enables live communication with decompiled Android app context, uncovering vulnerabilities, parsing manifests, and facilitating reverse engineering effortlessly. The tool combines JADX-AI-MCP and JADX MCP SERVER to provide real-time reverse engineering support with LLMs, offering features like quick analysis, vulnerability detection, AI code modification, static analysis, and reverse engineering helpers. It supports various MCP tools for fetching class information, text, methods, fields, smali code, AndroidManifest.xml content, strings.xml file, resource files, and more. Tested on Claude Desktop, it aims to support other LLMs in the future, enhancing Android reverse engineering and APK modification tools connectivity for easier reverse engineering purely from vibes.

motia
Motia is an AI agent framework designed for software engineers to create, test, and deploy production-ready AI agents quickly. It provides a code-first approach, allowing developers to write agent logic in familiar languages and visualize execution in real-time. With Motia, developers can focus on business logic rather than infrastructure, offering zero infrastructure headaches, multi-language support, composable steps, built-in observability, instant APIs, and full control over AI logic. Ideal for building sophisticated agents and intelligent automations, Motia's event-driven architecture and modular steps enable the creation of GenAI-powered workflows, decision-making systems, and data processing pipelines.

lmnr
Laminar is an all-in-one open-source platform designed for engineering AI products. It allows users to trace, evaluate, label, and analyze LLM data efficiently. The platform offers features such as automatic tracing of common AI frameworks and SDKs, local and online evaluations, simple UI for data labeling, dataset management, and scalability with gRPC communication. Laminar is built with a modern open-source stack including RabbitMQ, Postgres, Clickhouse, and Qdrant for semantic similarity search. It provides fast and beautiful dashboards for traces, evaluations, and labels, making it a comprehensive tool for AI product development.

dot-ai
Dot-ai is a machine learning library designed to simplify the process of building and deploying AI models. It provides a wide range of tools and utilities for data preprocessing, model training, and evaluation. With Dot-ai, users can easily create and experiment with various machine learning algorithms without the need for extensive coding knowledge. The library is built with scalability and performance in mind, making it suitable for both small-scale projects and large-scale applications. Whether you are a beginner or an experienced data scientist, Dot-ai offers a user-friendly interface to streamline your AI development workflow.

GEN-AI
GEN-AI is a versatile Python library for implementing various artificial intelligence algorithms and models. It provides a wide range of tools and functionalities to support machine learning, deep learning, natural language processing, computer vision, and reinforcement learning tasks. With GEN-AI, users can easily build, train, and deploy AI models for diverse applications such as image recognition, text classification, sentiment analysis, object detection, and game playing. The library is designed to be user-friendly, efficient, and scalable, making it suitable for both beginners and experienced AI practitioners.

mcp-fundamentals
The mcp-fundamentals repository is a collection of fundamental concepts and examples related to microservices, cloud computing, and DevOps. It covers topics such as containerization, orchestration, CI/CD pipelines, and infrastructure as code. The repository provides hands-on exercises and code samples to help users understand and apply these concepts in real-world scenarios. Whether you are a beginner looking to learn the basics or an experienced professional seeking to refresh your knowledge, mcp-fundamentals has something for everyone.

ml-retreat
ML-Retreat is a comprehensive machine learning library designed to simplify and streamline the process of building and deploying machine learning models. It provides a wide range of tools and utilities for data preprocessing, model training, evaluation, and deployment. With ML-Retreat, users can easily experiment with different algorithms, hyperparameters, and feature engineering techniques to optimize their models. The library is built with a focus on scalability, performance, and ease of use, making it suitable for both beginners and experienced machine learning practitioners.

trustgraph
TrustGraph is a tool that deploys private GraphRAG pipelines to build a RDF style knowledge graph from data, enabling accurate and secure `RAG` requests compatible with cloud LLMs and open-source SLMs. It showcases the reliability and efficiencies of GraphRAG algorithms, capturing contextual language flags missed in conventional RAG approaches. The tool offers features like PDF decoding, text chunking, inference of various LMs, RDF-aligned Knowledge Graph extraction, and more. TrustGraph is designed to be modular, supporting multiple Language Models and environments, with a plug'n'play architecture for easy customization.

nndeploy
nndeploy is a tool that allows you to quickly build your visual AI workflow without the need for frontend technology. It provides ready-to-use algorithm nodes for non-AI programmers, including large language models, Stable Diffusion, object detection, image segmentation, etc. The workflow can be exported as a JSON configuration file, supporting Python/C++ API for direct loading and running, deployment on cloud servers, desktops, mobile devices, edge devices, and more. The framework includes mainstream high-performance inference engines and deep optimization strategies to help you transform your workflow into enterprise-level production applications.

ten-framework
TEN is an open-source ecosystem for creating, customizing, and deploying real-time conversational AI agents with multimodal capabilities including voice, vision, and avatar interactions. It includes various components like TEN Framework, TEN Turn Detection, TEN VAD, TEN Agent, TMAN Designer, and TEN Portal. Users can follow the provided guidelines to set up and customize their agents using TMAN Designer, run them locally or in Codespace, and deploy them with Docker or other cloud services. The ecosystem also offers community channels for developers to connect, contribute, and get support.

jadx-ai-mcp
JADX-AI-MCP is a plugin for the JADX decompiler that integrates with Model Context Protocol (MCP) to provide live reverse engineering support with LLMs like Claude. It allows for quick analysis, vulnerability detection, and AI code modification, all in real time. The tool combines JADX-AI-MCP and JADX MCP SERVER to analyze Android APKs effortlessly. It offers various prompts for code understanding, vulnerability detection, reverse engineering helpers, static analysis, AI code modification, and documentation. The tool is part of the Zin MCP Suite and aims to connect all android reverse engineering and APK modification tools with a single MCP server for easy reverse engineering of APK files.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.