
dora
DORA (Dataflow-Oriented Robotic Architecture) is middleware designed to streamline and simplify the creation of AI-based robotic applications. It offers low latency, composable, and distributed dataflow capabilities. Applications are modeled as directed graphs, also referred to as pipelines.
Stars: 2034

Dataflow-oriented robotic application (dora-rs) is a framework that makes the creation of robotic applications fast and simple. Building a robotic application can be summed up as bringing together hardware, algorithms, and AI models, and making them communicate with each other. At dora-rs, we try to: make integration of hardware and software easy by supporting Python, C, C++, and ROS2; make communication low latency by using zero-copy Arrow messages. dora-rs is still experimental and you might experience bugs, but we're working very hard to make it as stable as possible.
README:
Website | Python API | Rust API | Guide | Discord
- 🚀 dora-rs is a framework to run real-time multi-AI and multi-hardware applications.
- 🦀 dora-rs internals are 100% Rust, making it extremely fast compared to alternatives: ⚡️ 10-17x faster than ros2.
- ⚙️ Includes a large set of pre-packaged nodes for fast prototyping, which simplifies integration of hardware, algorithms, and AI models.
Latency benchmark with the Python API for both frameworks, sending 40MB of random bytes.
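The gain from zero-copy messaging can be sketched independently of dora with the standard library alone: duplicating a 40MB payload has a real per-message cost, while handing over a view of the same buffer is nearly free. This micro-benchmark is our own illustration, not dora's benchmark harness:

```python
import time

# 40MB payload, roughly matching the benchmark above.
payload = bytes(40 * 1024 * 1024)

t0 = time.perf_counter()
copied = bytearray(payload)   # full copy, as serialize-then-send would require
copy_ms = (time.perf_counter() - t0) * 1e3

t0 = time.perf_counter()
view = memoryview(payload)    # zero-copy view, as Arrow-style messaging allows
view_ms = (time.perf_counter() - t0) * 1e3

assert view.obj is payload    # the view shares the buffer; nothing was copied
print(f"copy: {copy_ms:.2f} ms, view: {view_ms:.6f} ms")
```

The absolute numbers depend on your machine, but the copy cost grows with payload size while the view cost does not, which is the effect the benchmark above measures end to end.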
2025
- [03/05] dora-rs has been accepted to GSoC 2025 🎉, with the following idea list.
- [03/04] Add support for Zenoh for distributed dataflow.
- [03/04] Add support for Meta SAM2, Kokoro (TTS), and improved Qwen2.5 performance using llama.cpp.
- [02/25] Add support for Qwen2.5 (LLM), Qwen2.5-VL (VLM), and outetts (TTS).
dora-rs | |
---|---|
APIs | Python >= 3.7 ✅ Rust ✅ C/C++ 🆗 ROS2 >= Foxy 🆗 |
OS | Linux: Arm 32 ✅ Arm 64 ✅ x64_86 ✅ MacOS: Arm 64 ✅ x64_86 ✅ Windows: x64_86 🆗 Android: 🛠️ (Blocked by: https://github.com/elast0ny/shared_memory/issues/32) iOS: 🛠️ |
Message Format | Arrow ✅ Standard Specification 🛠️ |
Local Communication | Shared Memory ✅ Cuda IPC 📐 |
Remote Communication | Zenoh 📐 |
Metrics, Tracing, and Logging | Opentelemetry 📐 |
Configuration | YAML ✅ |
Package Manager | pip: Python Node ✅ Rust Node ✅ C/C++ Node 🛠️ cargo: Rust Node ✅ |
- ✅ = First Class Support
- 🆗 = Best Effort Support
- 📐 = Experimental and looking for contributions
- 🛠️ = Unsupported but hoped for through contributions
Everything is open for contributions 🙋
Feel free to modify this README with your own nodes so that it benefits the community.
Type | Title | Support | Description |
---|---|---|---|
Camera | PyOrbbeckSDK | 📐 | Image and depth from Orbbeck Camera |
Camera | PyRealsense | Linux 🆗 Mac 🛠️ | Image and depth from Realsense |
Camera | OpenCV Video Capture | ✅ | Image stream from OpenCV Camera |
Peripheral | Keyboard | ✅ | Keyboard char listener |
Peripheral | Microphone | ✅ | Audio from microphone |
Peripheral | PyAudio (Speaker) | ✅ | Output audio from speaker |
Actuator | Feetech | 📐 | Feetech Client |
Actuator | Dynamixel | 📐 | Dynamixel Client |
Chassis | Agilex - UGV | 📐 | Robomaster Client |
Chassis | DJI - Robomaster S1 | 📐 | Robomaster Client |
Chassis | Dora Kit Car | 📐 | Open Source Chassis |
Arm | Alex Koch - Low Cost Robot | 📐 | Alex Koch - Low Cost Robot Client |
Arm | Lebai - LM3 | 📐 | Lebai client |
Arm | Agilex - Piper | 📐 | Agilex arm client |
Robot | Pollen - Reachy 1 | 📐 | Reachy 1 Client |
Robot | Pollen - Reachy 2 | 📐 | Reachy 2 client |
Robot | Trossen - Aloha | 📐 | Aloha client |
Voice Activity Detection (VAD) | Silero VAD | ✅ | Silero voice activity detection |
Speech to Text (STT) | Whisper | ✅ | Transcribe audio to text |
Object Detection | Yolov8 | ✅ | Object detection |
Segmentation | SAM2 | Cuda ✅ Metal 🛠️ | Segment Anything |
Large Language Model (LLM) | Qwen2.5 | ✅ | Large Language Model using Qwen |
Vision Language Model (VLM) | Qwen2.5-vl | ✅ | Vision Language Model using Qwen2.5 VL |
Vision Language Model (VLM) | InternVL | 📐 | InternVL is a vision language model |
Vision Language Action (VLA) | RDT-1B | 📐 | Infer policy using Robotic Diffusion Transformer |
Translation | ArgosTranslate | 🆗 | Open source translation engine |
Translation | Opus MT | 🆗 | Translate text between languages |
Text to Speech (TTS) | Kokoro TTS | ✅ | Efficient text to speech |
Recorder | Llama Factory Recorder | 🆗 | Record data to train LLMs and VLMs |
Recorder | LeRobot Recorder | 🆗 | LeRobot recorder helper |
Visualization | Plot | ✅ | Simple OpenCV plot visualization |
Visualization | Rerun | ✅ | Visualization tool |
Simulator | Mujoco | 📐 | Mujoco simulator |
Simulator | Carla | 📐 | Carla simulator |
Simulator | Gymnasium | 📐 | Experimental OpenAI Gymnasium bridge |
Type | Title | Description |
---|---|---|
Audio | Speech to Text (STT) | Transform speech to text. |
Audio | Translation | Translate audio in real time. |
Vision | Vision Language Model (VLM) | Use a VLM to understand images. |
Vision | YOLO | Use YOLO to detect objects within an image. |
Vision | Camera | Simple webcam plot example |
Model Training | Piper RDT | Piper RDT Pipeline |
Model Training | LeRobot - Alexander Koch | Training Alexander Koch Low Cost Robot with LeRobot |
ROS2 | C++ ROS2 Example | Example using C++ ROS2 |
ROS2 | Rust ROS2 Example | Example using Rust ROS2 |
ROS2 | Python ROS2 Example | Example using Python ROS2 |
Benchmark | GPU Benchmark | GPU benchmark of dora-rs |
Benchmark | CPU Benchmark | CPU benchmark of dora-rs |
Tutorial | Rust Example | Example using Rust |
Tutorial | Python Example | Example using Python |
Tutorial | CMake Example | Example using CMake |
Tutorial | C Example | Example with C node |
Tutorial | CUDA Example | Example using CUDA Zero Copy |
Tutorial | C++ Example | Example with C++ node |
Install the dora CLI with pip:

```bash
pip install dora-rs-cli
```

Additional installation methods

Install dora with our standalone installers, or from crates.io:

```bash
cargo install dora-cli
```

```bash
curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/dora-rs/dora/main/install.sh | bash
```

```powershell
powershell -c "irm https://raw.githubusercontent.com/dora-rs/dora/main/install.ps1 | iex"
```

Or build from source:

```bash
git clone https://github.com/dora-rs/dora.git
cd dora
cargo build --release -p dora-cli
PATH=$PATH:$(pwd)/target/release
```
- Run the yolo python example:

```bash
# Create a virtual environment
uv venv --seed -p 3.11

# Install node dependencies of a remote graph
dora build https://raw.githubusercontent.com/dora-rs/dora/refs/heads/main/examples/object-detection/yolo.yml --uv

# Run the yolo graph
dora run yolo.yml --uv
```

Make sure to have a webcam.

To stop your dataflow, you can use ctrl+c.

- To understand what is happening, you can look at the dataflow with:

```bash
cat yolo.yml
```
- Resulting in:

```yaml
nodes:
  - id: camera
    build: pip install opencv-video-capture
    path: opencv-video-capture
    inputs:
      tick: dora/timer/millis/20
    outputs:
      - image
    env:
      CAPTURE_PATH: 0
      IMAGE_WIDTH: 640
      IMAGE_HEIGHT: 480

  - id: object-detection
    build: pip install dora-yolo
    path: dora-yolo
    inputs:
      image: camera/image
    outputs:
      - bbox

  - id: plot
    build: pip install dora-rerun
    path: dora-rerun
    inputs:
      image: camera/image
      boxes2d: object-detection/bbox
```
- In the above example, the camera sends images both to the rerun viewer and to a YOLO model, which generates bounding boxes that are then visualized in rerun.
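Each entry in such a graph is an ordinary process. As a sketch of what a custom node looks like in Python (the event fields follow the dora Python API as we understand it; the `image_out` output and the pass-through logic are hypothetical placeholders):

```python
def handle_event(event):
    """Decide what one dora event produces: (output_id, payload) or None."""
    if event["type"] == "INPUT" and event["id"] == "image":
        # Placeholder for real work (e.g. inference): forward the payload.
        return ("image_out", event["value"])
    return None

def main():
    # Requires `pip install dora-rs` and a running dataflow; the node's id,
    # inputs, and outputs must be declared in the dataflow YAML.
    from dora import Node
    node = Node()
    for event in node:
        out = handle_event(event)
        if out is not None:
            node.send_output(out[0], out[1])

# dora launches this through the node's entry point; add an
# `if __name__ == "__main__": main()` guard when running it as a script.
```

Keeping the dispatch logic in a plain function like `handle_event` makes a node testable without spinning up a dataflow.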
The full documentation is available on our website. A lot of guides are available in this section of our website.
Dataflow-Oriented Robotic Architecture (dora-rs) is a framework that makes the creation of robotic applications fast and simple. dora-rs implements a declarative dataflow paradigm where tasks are split between nodes isolated as individual processes.
The dataflow paradigm has the advantage of creating an abstraction layer that makes robotic applications modular and easily configurable.
Communication between nodes is handled with shared memory on the same machine and TCP between distributed machines. Our shared memory implementation tracks messages across processes and discards them when obsolete. Shared memory slots are cached to avoid new memory allocations.
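The underlying idea can be sketched with Python's standard library alone: two handles attach to one named buffer, and no bytes cross a socket or serializer. This is only an illustration of the shared-memory principle, not dora's Rust implementation:

```python
from multiprocessing import shared_memory

# "Writer" side: allocate a shared-memory slot and fill it in place.
shm = shared_memory.SharedMemory(create=True, size=1024)
shm.buf[:5] = b"hello"

# "Reader" side (normally a separate process): attach to the slot by name.
reader = shared_memory.SharedMemory(name=shm.name)
print(bytes(reader.buf[:5]))  # b'hello'

# Discard the slot once the message is obsolete, as dora does with stale slots.
reader.close()
shm.close()
shm.unlink()
```

In dora the slot name and lifetime bookkeeping are handled by the runtime, so node authors never touch this layer directly.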
Nodes communicate with the Apache Arrow data format.
Apache Arrow is a universal memory format for flat and hierarchical data. The Arrow memory format supports zero-copy reads for lightning-fast data access without serialization overhead. It defines a C data interface without any build-time or link-time dependency requirement, which means that dora-rs has no compilation step beyond the native compiler of your favourite language.
dora-rs uses Opentelemetry to record all your logs, metrics, and traces. This means that data and telemetry can be linked using a shared abstraction.
Opentelemetry is an open-source observability standard that makes dora-rs telemetry collectable by most backends, such as Elasticsearch, Prometheus, and Datadog.
Opentelemetry is language-independent and backend-agnostic, and easily collects distributed data, making it a good fit for dora-rs applications.
Note: this feature is marked as unstable.
- Compilation-free message passing to ROS 2
- Automatic conversion ROS 2 Message <-> Arrow Array
```python
import pyarrow as pa

# Configuration boilerplate...
turtle_twist_writer = ...

# Arrow-based ROS2 Twist message,
# which does not require a ROS2 import
message = pa.array([{
    "linear": {
        "x": 1,
    },
    "angular": {
        "z": 1,
    },
}])

turtle_twist_writer.publish(message)
```
You might want to use ChatGPT to write the Arrow Formatting: https://chat.openai.com/share/4eec1c6d-dbd2-46dc-b6cd-310d2895ba15
Zenoh is a high-performance pub/sub and query protocol that unifies data in motion and at rest. In dora-rs, Zenoh is used for remote communication between nodes running on different machines, enabling distributed dataflow across networks.
- Definition: Zenoh is an open-source communication middleware offering pub/sub and query capabilities.
- Benefits in DORA:
  - Simplifies communication between distributed nodes.
  - Handles NAT traversal and inter-network communication.
  - Integrates with DORA to manage remote data exchange while local communication still uses efficient shared memory.
- Run a Zenoh router (`zenohd`): launch a Zenoh daemon to mediate communication. For example, using Docker:

```bash
docker run -p 7447:7447 -p 8000:8000 --name zenoh-router eclipse/zenohd:latest
```
- Create a Zenoh configuration file: create a file (e.g., `zenoh.json5`) with the router endpoint details:

```json5
{
  "connect": {
    "endpoints": [ "tcp/203.0.113.10:7447" ]
  }
}
```
On each machine, export the configuration and start the daemon:

```bash
export ZENOH_CONFIG=/path/to/zenoh.json5
dora daemon --coordinator-addr <COORD_IP> --machine-id <MACHINE_NAME>
```
Mark nodes for remote deployment using the `_unstable_deploy` key:

```yaml
nodes:
  - id: camera_node
    outputs: [image]

  - id: processing_node
    _unstable_deploy:
      machine: robot1
      path: /home/robot/dora-nodes/processing_node
    inputs:
      image: camera_node/image
    outputs: [result]
```
Run the coordinator on a designated machine and start the dataflow:

```bash
dora coordinator
dora start dataflow.yml
```
A complete distributed dataflow configuration then looks like:

```yaml
communication:
  zenoh: {}

nodes:
  - id: camera_node
    custom:
      run: ./camera_driver.py
    outputs:
      - image

  - id: processing_node
    _unstable_deploy:
      machine: robot1
      path: /home/robot/dora-nodes/processing_node
    inputs:
      image: camera_node/image
    outputs:
      - result
```
We are passionate about supporting contributors of all levels of experience and would love to see you get involved in the project. See the contributing guide to get started.
Our main communication channels are:
Feel free to reach out on any topic, issue, or idea.
We also have a contributing guide.
This project is licensed under Apache-2.0. Check out NOTICE.md for more information.