adk-rust
Rust Agent Development Kit (ADK-Rust): Build AI agents in Rust with modular components for models, tools, memory, realtime voice, and more. ADK-Rust is a flexible framework for developing AI agents with simplicity and power. Model-agnostic, deployment-agnostic, optimized for frontier AI models. Includes support for real-time voice agents.
Stars: 104
ADK-Rust is a comprehensive, production-ready Rust framework for building AI agents. It provides type-safe agent abstractions with async execution and event streaming; multiple agent types (LLM, workflow, and custom agents); realtime voice agents with bidirectional audio streaming; a tool ecosystem spanning function tools, Google Search, and MCP integration; and production features such as session management, artifact storage, memory systems, and REST/A2A APIs. The developer experience includes an interactive CLI, working examples, and comprehensive documentation. The framework follows a clean layered architecture and is actively maintained.
README:
🎉 v0.3.0 Released! ADK Studio action nodes & debugger, ADK UI support for A2UI, AG-UI, and MCP Apps, Skills.md augmentation, and Vertex AI integration, including an all-new ADK-Rust VS Code Extension! Get started →
A comprehensive and production-ready Rust framework for building AI agents. Create powerful and high-performance AI agent systems with a flexible, modular architecture. Model-agnostic. Type-safe. Blazingly fast.
ADK-Rust provides a comprehensive framework for building AI agents in Rust, featuring:
- Type-safe agent abstractions with async execution and event streaming
- Multiple agent types: LLM agents, workflow agents (sequential, parallel, loop), and custom agents
- Realtime voice agents: Bidirectional audio streaming with OpenAI Realtime API and Gemini Live API
- Tool ecosystem: Function tools, Google Search, MCP (Model Context Protocol) integration
- Production features: Session management, artifact storage, memory systems, REST/A2A APIs
- Developer experience: Interactive CLI, 80+ working examples, comprehensive documentation
Status: Production-ready, actively maintained
ADK-Rust follows a clean layered architecture from application interface down to foundational services.
LLM Agents: Powered by large language models with tool use, function calling, and streaming responses.
Workflow Agents: Deterministic orchestration patterns.
- `SequentialAgent`: Execute agents in sequence
- `ParallelAgent`: Execute agents concurrently
- `LoopAgent`: Iterative execution with exit conditions
Custom Agents: Implement the Agent trait for specialized behavior.
Realtime Voice Agents: Build voice-enabled AI assistants with bidirectional audio streaming.
Graph Agents: LangGraph-style workflow orchestration with state management and checkpointing.
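The custom-agent path above boils down to providing a type that satisfies the framework's agent contract and dispatching through it as a trait object. A minimal sketch of that pattern in plain Rust; the trait below is illustrative only and is NOT the real `adk_core::Agent` trait (which is async and event-based):

```rust
// Illustrative agent contract, not the adk_core::Agent trait.
trait Agent {
    fn name(&self) -> &str;
    fn run(&self, input: &str) -> String;
}

// A specialized agent is just a type implementing the trait.
struct EchoAgent;

impl Agent for EchoAgent {
    fn name(&self) -> &str { "echo" }
    fn run(&self, input: &str) -> String {
        format!("echo: {input}")
    }
}

fn main() {
    // Dispatch through a trait object, as a runner would.
    let agent: Box<dyn Agent> = Box::new(EchoAgent);
    let reply = agent.run("hi");
    assert_eq!(reply, "echo: hi");
    println!("{} -> {}", agent.name(), reply);
}
```

The real framework adds async execution, streaming events, and builder ergonomics on top, but the trait-object dispatch shown here is the underlying shape.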
ADK supports multiple LLM providers with a unified API:
| Provider | Model Examples | Feature Flag |
|---|---|---|
| Gemini | `gemini-3-pro-preview`, `gemini-2.5-flash`, `gemini-2.5-pro` | (default) |
| OpenAI | `gpt-5.2`, `gpt-5`, `gpt-4o`, `gpt-4o-mini` | `openai` |
| Anthropic | `claude-opus-4-20250514`, `claude-sonnet-4-20250514` | `anthropic` |
| DeepSeek | `deepseek-chat`, `deepseek-reasoner` | `deepseek` |
| Groq | `llama-3.3-70b-versatile`, `mixtral-8x7b-32768` | `groq` |
| Ollama | `llama3.2`, `qwen2.5`, `mistral` | `ollama` |
| mistral.rs | Phi-3, Mistral, Llama, Gemma, LLaVa, FLUX | git dependency |
All providers support streaming, function calling, and multimodal inputs (where available).
Built-in tools:
- Function tools (custom Rust functions)
- Google Search
- Artifact loading
- Loop termination
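The "function tool" idea from the list above, wrapping a plain Rust function with a name and description so a model can invoke it by name, can be sketched framework-free. All names here (`FunctionTool`, `get_weather`) are illustrative and are not the adk-tool API:

```rust
use std::collections::HashMap;

// Illustrative tool wrapper: a named, described function pointer.
// NOT the adk-tool FunctionTool API; just the underlying idea.
struct FunctionTool {
    name: &'static str,
    description: &'static str,
    call: fn(&str) -> String,
}

// A hypothetical tool body; a real one would hit an external API.
fn get_weather(city: &str) -> String {
    format!("{city}: 22C, clear")
}

fn main() {
    // The agent holds a registry keyed by tool name.
    let mut registry: HashMap<&str, FunctionTool> = HashMap::new();
    registry.insert("get_weather", FunctionTool {
        name: "get_weather",
        description: "Return current weather for a city",
        call: get_weather,
    });

    // The model emits a tool call by name; the runtime dispatches it.
    let tool = &registry["get_weather"];
    let result = (tool.call)("Paris");
    assert_eq!(result, "Paris: 22C, clear");
    println!("{} ({}): {}", tool.name, tool.description, result);
}
```

A real implementation also carries a JSON schema for the arguments so the model knows how to format the call; that part is elided here.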
MCP Integration: Connect to Model Context Protocol servers for extended capabilities.
- Session Management: In-memory and SQLite-backed sessions with state persistence
- Memory System: Long-term memory with semantic search and vector embeddings
- Servers: REST API with SSE streaming, A2A protocol for agent-to-agent communication
- Guardrails: PII redaction, content filtering, JSON schema validation
- Observability: OpenTelemetry tracing, structured logging
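Semantic search in a memory system like the one above reduces to nearest-neighbor lookup by similarity between embedding vectors. A framework-free sketch using cosine similarity; adk-memory uses real embedding models and Qdrant, and these 3-dimensional vectors are toy data:

```rust
// Cosine similarity: dot product divided by the product of norms.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

fn main() {
    // Toy "memories" with made-up embeddings.
    let query = [1.0f32, 0.0, 1.0];
    let memories = [
        ("user likes rust", [0.9f32, 0.1, 0.8]),
        ("meeting at noon", [0.0f32, 1.0, 0.1]),
    ];

    // Retrieve the memory whose embedding is closest to the query.
    let best = memories
        .iter()
        .max_by(|a, b| cosine(&query, &a.1).total_cmp(&cosine(&query, &b.1)))
        .unwrap();
    assert_eq!(best.0, "user likes rust");
    println!("best match: {}", best.0);
}
```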
| Crate | Purpose | Key Features |
|---|---|---|
| `adk-core` | Foundational traits and types | `Agent` trait, `Content`, `Part`, error types, streaming primitives |
| `adk-agent` | Agent implementations | `LlmAgent`, `SequentialAgent`, `ParallelAgent`, `LoopAgent`, builder patterns |
| `adk-skill` | AgentSkills parsing and selection | Skill markdown parser, `.skills` discovery/indexing, lexical matching, prompt injection helpers |
| `adk-model` | LLM integrations | Gemini, OpenAI, Anthropic, DeepSeek, Groq, Ollama clients, streaming, function calling |
| `adk-gemini` | Gemini client | Google Gemini API client with streaming and multimodal support |
| `adk-mistralrs` | Native local inference | mistral.rs integration, ISQ quantization, LoRA adapters (git-only) |
| `adk-tool` | Tool system and extensibility | `FunctionTool`, Google Search, MCP protocol, schema validation |
| `adk-session` | Session and state management | SQLite/in-memory backends, conversation history, state persistence |
| `adk-artifact` | Artifact storage system | File-based storage, MIME type handling, image/PDF/video support |
| `adk-memory` | Long-term memory | Vector embeddings, semantic search, Qdrant integration |
| `adk-runner` | Agent execution runtime | Context management, event streaming, session lifecycle, callbacks |
| `adk-server` | Production API servers | REST API, A2A protocol, middleware, health checks |
| `adk-cli` | Command-line interface | Interactive REPL, session management, MCP server integration |
| `adk-realtime` | Real-time voice agents | OpenAI Realtime API, Gemini Live API, bidirectional audio, VAD |
| `adk-graph` | Graph-based workflows | LangGraph-style orchestration, state management, checkpointing, human-in-the-loop |
| `adk-browser` | Browser automation | 46 WebDriver tools, navigation, forms, screenshots, PDF generation |
| `adk-eval` | Agent evaluation | Test definitions, trajectory validation, LLM-judged scoring, rubrics |
| `adk-guardrail` | Input/output validation | PII redaction, content filtering, JSON schema validation |
| `adk-auth` | Access control | Role-based permissions, SSO/OAuth, audit logging |
| `adk-telemetry` | Observability | Structured logging, OpenTelemetry tracing, span helpers |
| `adk-ui` | Dynamic UI generation | 28 components, 10 templates, React client, streaming updates |
| `adk-studio` | Visual development | Drag-and-drop agent builder, code generation, live testing |
Requires Rust 1.85 or later (Rust 2024 edition). Add to your Cargo.toml:

```toml
[dependencies]
adk-rust = "0.3.0"

# Or individual crates
adk-core = "0.3.0"
adk-agent = "0.3.0"
adk-model = "0.3.0" # Add features for providers: features = ["openai", "anthropic"]
adk-tool = "0.3.0"
adk-runner = "0.3.0"
```

Nightly (latest features):

```toml
adk-rust = { git = "https://github.com/zavora-ai/adk-rust", branch = "develop" }
```

Set your API key:
```bash
# For Gemini (default)
export GOOGLE_API_KEY="your-api-key"

# For OpenAI
export OPENAI_API_KEY="your-api-key"

# For Anthropic
export ANTHROPIC_API_KEY="your-api-key"

# For DeepSeek
export DEEPSEEK_API_KEY="your-api-key"

# For Groq
export GROQ_API_KEY="your-api-key"

# For Ollama (no key, just run: ollama serve)
```

Gemini (default):

```rust
use adk_rust::prelude::*;
use adk_rust::Launcher;

#[tokio::main]
async fn main() -> AnyhowResult<()> {
    dotenvy::dotenv().ok();
    let api_key = std::env::var("GOOGLE_API_KEY")?;
    let model = GeminiModel::new(&api_key, "gemini-2.5-flash")?;

    let agent = LlmAgentBuilder::new("assistant")
        .description("Helpful AI assistant")
        .instruction("You are a helpful assistant. Be concise and accurate.")
        .model(Arc::new(model))
        .build()?;

    Launcher::new(Arc::new(agent)).run().await?;
    Ok(())
}
```

OpenAI:

```rust
use adk_rust::prelude::*;
use adk_rust::Launcher;

#[tokio::main]
async fn main() -> AnyhowResult<()> {
    dotenvy::dotenv().ok();
    let api_key = std::env::var("OPENAI_API_KEY")?;
    let model = OpenAIClient::new(OpenAIConfig::new(api_key, "gpt-4o"))?;

    let agent = LlmAgentBuilder::new("assistant")
        .instruction("You are a helpful assistant.")
        .model(Arc::new(model))
        .build()?;

    Launcher::new(Arc::new(agent)).run().await?;
    Ok(())
}
```

Anthropic:

```rust
use adk_rust::prelude::*;
use adk_rust::Launcher;

#[tokio::main]
async fn main() -> AnyhowResult<()> {
    dotenvy::dotenv().ok();
    let api_key = std::env::var("ANTHROPIC_API_KEY")?;
    let model = AnthropicClient::new(AnthropicConfig::new(api_key, "claude-sonnet-4-20250514"))?;

    let agent = LlmAgentBuilder::new("assistant")
        .instruction("You are a helpful assistant.")
        .model(Arc::new(model))
        .build()?;

    Launcher::new(Arc::new(agent)).run().await?;
    Ok(())
}
```

DeepSeek:

```rust
use adk_rust::prelude::*;
use adk_rust::Launcher;

#[tokio::main]
async fn main() -> AnyhowResult<()> {
    dotenvy::dotenv().ok();
    let api_key = std::env::var("DEEPSEEK_API_KEY")?;

    // Standard chat model
    let model = DeepSeekClient::chat(api_key)?;
    // Or use reasoner for chain-of-thought reasoning
    // let model = DeepSeekClient::reasoner(api_key)?;

    let agent = LlmAgentBuilder::new("assistant")
        .instruction("You are a helpful assistant.")
        .model(Arc::new(model))
        .build()?;

    Launcher::new(Arc::new(agent)).run().await?;
    Ok(())
}
```

Groq:

```rust
use adk_rust::prelude::*;
use adk_rust::Launcher;

#[tokio::main]
async fn main() -> AnyhowResult<()> {
    dotenvy::dotenv().ok();
    let api_key = std::env::var("GROQ_API_KEY")?;
    let model = GroqClient::new(GroqConfig::llama70b(api_key))?;

    let agent = LlmAgentBuilder::new("assistant")
        .instruction("You are a helpful assistant.")
        .model(Arc::new(model))
        .build()?;

    Launcher::new(Arc::new(agent)).run().await?;
    Ok(())
}
```

Ollama:

```rust
use adk_rust::prelude::*;
use adk_rust::Launcher;

#[tokio::main]
async fn main() -> AnyhowResult<()> {
    dotenvy::dotenv().ok();
    // Requires: ollama serve && ollama pull llama3.2
    let model = OllamaModel::new(OllamaConfig::new("llama3.2"))?;

    let agent = LlmAgentBuilder::new("assistant")
        .instruction("You are a helpful assistant.")
        .model(Arc::new(model))
        .build()?;

    Launcher::new(Arc::new(agent)).run().await?;
    Ok(())
}
```

Run the examples:

```bash
# Interactive console (Gemini)
cargo run --example quickstart

# OpenAI examples (requires --features openai)
cargo run --example openai_basic --features openai
cargo run --example openai_tools --features openai

# DeepSeek examples (requires --features deepseek)
cargo run --example deepseek_basic --features deepseek
cargo run --example deepseek_reasoner --features deepseek

# Groq examples (requires --features groq)
cargo run --example groq_basic --features groq

# Ollama examples (requires --features ollama)
cargo run --example ollama_basic --features ollama

# REST API server
cargo run --example server

# Workflow agents
cargo run --example sequential_agent
cargo run --example parallel_agent

# See all examples
ls examples/
```

A visual development environment for building AI agents with drag-and-drop. Design complex multi-agent workflows, compile to production Rust code, and test live, all from your browser.
```bash
# Install and run
cargo install adk-studio
adk-studio
```

Features:
- Drag-and-drop canvas with LLM agents, workflow agents, and 14 action nodes
- Execution Timeline with step-by-step replay and State Inspector
- Debug mode with live input/output state visualization per node
- Real-time chat with SSE streaming and event trace
- 14 action nodes: Trigger, HTTP, Set, Transform, Switch, Loop, Merge, Wait, Code, Database, Email, Notification, RSS, File
- Triggers: Manual, Webhook (with auth), Cron Schedule, Event (with JSONPath filters)
- Code generation: Compile visual designs to production ADK-Rust with auto-detected dependencies
- Build, run, and deploy executables directly from Studio
Build voice-enabled AI assistants using the adk-realtime crate:
```rust
use adk_realtime::{RealtimeAgent, openai::OpenAIRealtimeModel, RealtimeModel};
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("OPENAI_API_KEY")?;
    let model: Arc<dyn RealtimeModel> = Arc::new(
        OpenAIRealtimeModel::new(&api_key, "gpt-4o-realtime-preview-2024-12-17")
    );

    let agent = RealtimeAgent::builder("voice_assistant")
        .model(model)
        .instruction("You are a helpful voice assistant.")
        .voice("alloy")
        .server_vad() // Enable voice activity detection
        .build()?;

    Ok(())
}
```

Supported Realtime Models:

| Provider | Model | Description |
|---|---|---|
| OpenAI | `gpt-4o-realtime-preview-2024-12-17` | Stable realtime model |
| OpenAI | `gpt-realtime` | Latest model with improved speech quality and function calling |
| Google | `gemini-2.0-flash-live-preview-04-09` | Gemini Live API |
Features:
- OpenAI Realtime API and Gemini Live API support
- Bidirectional audio streaming (PCM16, G711)
- Server-side Voice Activity Detection (VAD)
- Real-time tool calling during voice conversations
- Multi-agent handoffs for complex workflows
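Conceptually, the voice activity detection listed above is a per-frame "speech vs. silence" decision over the audio stream. A toy energy-threshold sketch over PCM16 samples; this is NOT how the OpenAI or Gemini server-side VAD is implemented, only the basic idea:

```rust
// RMS energy of one audio frame of signed 16-bit PCM samples.
fn frame_energy(samples: &[i16]) -> f64 {
    let sum: f64 = samples.iter().map(|&s| (s as f64) * (s as f64)).sum();
    (sum / samples.len() as f64).sqrt()
}

// Naive VAD: a frame is "speech" if its energy exceeds a threshold.
fn is_speech(samples: &[i16], threshold: f64) -> bool {
    frame_energy(samples) > threshold
}

fn main() {
    let silence = [3i16, -2, 4, -1];              // near-zero amplitude
    let speech = [9000i16, -8500, 9100, -8800];   // loud waveform
    assert!(!is_speech(&silence, 500.0));
    assert!(is_speech(&speech, 500.0));
    println!("vad sketch ok");
}
```

Production VAD models add spectral features, hysteresis, and padding around detected speech; with `server_vad()` the provider handles all of this for you.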
Run realtime examples:
```bash
cargo run --example realtime_basic --features realtime-openai
cargo run --example realtime_tools --features realtime-openai
cargo run --example realtime_handoff --features realtime-openai
```

Build complex, stateful workflows using the adk-graph crate (LangGraph-style):
```rust
use adk_graph::{prelude::*, node::AgentNode};
use adk_agent::LlmAgentBuilder;
use adk_model::GeminiModel;

// Create LLM agents for different tasks
let translator = Arc::new(LlmAgentBuilder::new("translator")
    .model(Arc::new(GeminiModel::new(&api_key, "gemini-2.0-flash")?))
    .instruction("Translate the input text to French.")
    .build()?);

let summarizer = Arc::new(LlmAgentBuilder::new("summarizer")
    .model(model.clone())
    .instruction("Summarize the input text in one sentence.")
    .build()?);

// Create AgentNodes with custom input/output mappers
let translator_node = AgentNode::new(translator)
    .with_input_mapper(|state| {
        let text = state.get("input").and_then(|v| v.as_str()).unwrap_or("");
        adk_core::Content::new("user").with_text(text)
    })
    .with_output_mapper(|events| {
        let mut updates = HashMap::new();
        for event in events {
            if let Some(content) = event.content() {
                let text: String = content.parts.iter()
                    .filter_map(|p| p.text())
                    .collect::<Vec<_>>()
                    .join("");
                updates.insert("translation".to_string(), json!(text));
            }
        }
        updates
    });

// Build graph with parallel execution
let agent = GraphAgent::builder("text_processor")
    .description("Translates and summarizes text in parallel")
    .channels(&["input", "translation", "summary"])
    .node(translator_node)
    .node(summarizer_node) // Similar setup
    .edge(START, "translator")
    .edge(START, "summarizer") // Parallel execution
    .edge("translator", "combine")
    .edge("summarizer", "combine")
    .edge("combine", END)
    .build()?;

// Execute
let mut input = State::new();
input.insert("input".to_string(), json!("AI is transforming how we work."));
let result = agent.invoke(input, ExecutionConfig::new("thread-1")).await?;
```

Features:
- AgentNode: Wrap LLM agents as graph nodes with custom input/output mappers
- Parallel & Sequential: Execute agents concurrently or in sequence
- Cyclic Graphs: ReAct pattern with tool loops and iteration limiting
- Conditional Routing: Dynamic routing via `Router::by_field` or custom functions
- Checkpointing: Memory and SQLite backends for fault tolerance
- Human-in-the-Loop: Dynamic interrupts based on state, resume from checkpoint
- Streaming: Multiple modes (values, updates, messages, debug)
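The execution model underneath these features, nodes that read and write a shared state while edges fix the order, can be sketched framework-free. Node names, the `State` alias, and the fixed pipeline below are illustrative, not the adk-graph API:

```rust
use std::collections::HashMap;

// Shared state flowing through the graph (adk-graph uses JSON values).
type State = HashMap<String, String>;
// A node reads from and writes to the shared state.
type Node = fn(&mut State);

fn translate(state: &mut State) {
    let input = state["input"].clone();
    state.insert("translation".into(), format!("fr({input})"));
}

fn summarize(state: &mut State) {
    let input = state["input"].clone();
    state.insert("summary".into(), format!("sum({input})"));
}

fn main() {
    // A precomputed topological order stands in for edge resolution;
    // the real engine derives this from START/END edges and can run
    // independent nodes in parallel.
    let pipeline: Vec<(&str, Node)> =
        vec![("translator", translate), ("summarizer", summarize)];

    let mut state = State::new();
    state.insert("input".into(), "AI is transforming work".into());
    for (_name, node) in &pipeline {
        node(&mut state);
    }
    assert_eq!(state["translation"], "fr(AI is transforming work)");
    assert_eq!(state["summary"], "sum(AI is transforming work)");
    println!("graph sketch ok");
}
```

Checkpointing then amounts to persisting the `State` snapshot after each node, which is what enables resume and human-in-the-loop interrupts.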
Run graph examples:
```bash
cargo run --example graph_agent        # Parallel LLM agents with callbacks
cargo run --example graph_workflow     # Sequential multi-agent pipeline
cargo run --example graph_conditional  # LLM-based routing
cargo run --example graph_react        # ReAct pattern with tools
cargo run --example graph_supervisor   # Multi-agent supervisor
cargo run --example graph_hitl         # Human-in-the-loop approval
cargo run --example graph_checkpoint   # State persistence
```

Give agents web browsing capabilities using the adk-browser crate:
```rust
use adk_browser::{BrowserSession, BrowserToolset, BrowserConfig};

// Create browser session
let config = BrowserConfig::new().webdriver_url("http://localhost:4444");
let session = Arc::new(BrowserSession::new(config));

// Get all 46 browser tools
let toolset = BrowserToolset::new(session);
let tools = toolset.all_tools();

// Add to agent
let mut builder = LlmAgentBuilder::new("web_agent")
    .model(model)
    .instruction("Browse the web and extract information.");
for tool in tools {
    builder = builder.tool(tool);
}
let agent = builder.build()?;
```

46 Browser Tools:
- Navigation: `browser_navigate`, `browser_back`, `browser_forward`, `browser_refresh`
- Extraction: `browser_extract_text`, `browser_extract_links`, `browser_extract_html`
- Interaction: `browser_click`, `browser_type`, `browser_select`, `browser_submit`
- Forms: `browser_fill_form`, `browser_get_form_fields`, `browser_clear_field`
- Screenshots: `browser_screenshot`, `browser_screenshot_element`
- JavaScript: `browser_evaluate`, `browser_evaluate_async`
- Cookies, frames, windows, and more

Requirements: WebDriver (Selenium, ChromeDriver, etc.)

```bash
docker run -d -p 4444:4444 selenium/standalone-chrome
cargo run --example browser_agent
```

Test and validate agent behavior using the adk-eval crate:
```rust
use adk_eval::{Evaluator, EvaluationConfig, EvaluationCriteria};

let config = EvaluationConfig::with_criteria(
    EvaluationCriteria::exact_tools()
        .with_response_similarity(0.8)
);

let evaluator = Evaluator::new(config);
let report = evaluator
    .evaluate_file(agent, "tests/my_agent.test.json")
    .await?;
assert!(report.all_passed());
```

Evaluation Capabilities:
- Trajectory validation (tool call sequences)
- Response similarity (Jaccard, Levenshtein, ROUGE)
- LLM-judged semantic matching
- Rubric-based scoring with custom criteria
- Safety and hallucination detection
- Detailed reporting with failure analysis
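The Jaccard similarity named in the list above is token-set intersection over token-set union. A minimal sketch of that metric, not adk-eval's actual scorer:

```rust
use std::collections::HashSet;

// Jaccard similarity over whitespace tokens: |A ∩ B| / |A ∪ B|.
fn jaccard(a: &str, b: &str) -> f64 {
    let sa: HashSet<&str> = a.split_whitespace().collect();
    let sb: HashSet<&str> = b.split_whitespace().collect();
    let inter = sa.intersection(&sb).count() as f64;
    let union = sa.union(&sb).count() as f64;
    if union == 0.0 { 1.0 } else { inter / union }
}

fn main() {
    // Word order is ignored: same token set means similarity 1.0.
    let expected = "the capital of france is paris";
    let actual = "paris is the capital of france";
    assert!((jaccard(expected, actual) - 1.0).abs() < 1e-9);

    // Disjoint token sets score 0.0.
    assert_eq!(jaccard(expected, "hello world"), 0.0);
    println!("jaccard sketch ok");
}
```

Because Jaccard ignores order, it pairs well with order-sensitive metrics like Levenshtein and ROUGE, which is why the framework offers all three.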
For native local inference without external dependencies, use the adk-mistralrs crate:

```rust
use adk_mistralrs::{MistralRsModel, MistralRsConfig, ModelSource, QuantizationLevel};
use adk_agent::LlmAgentBuilder;
use std::sync::Arc;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Load model with ISQ quantization for reduced memory
    let config = MistralRsConfig::builder()
        .model_source(ModelSource::huggingface("microsoft/Phi-3.5-mini-instruct"))
        .isq(QuantizationLevel::Q4_0)
        .paged_attention(true)
        .build();

    let model = MistralRsModel::new(config).await?;

    let agent = LlmAgentBuilder::new("local-assistant")
        .instruction("You are a helpful assistant running locally.")
        .model(Arc::new(model))
        .build()?;

    Ok(())
}
```

Note: adk-mistralrs is not on crates.io due to git dependencies. Add via:

```toml
adk-mistralrs = { git = "https://github.com/zavora-ai/adk-rust" }
# With Metal: features = ["metal"]
# With CUDA: features = ["cuda"]
```

Features: ISQ quantization, PagedAttention, multi-GPU splitting, LoRA/X-LoRA adapters, vision/speech/diffusion models, MCP integration.
The adk-ui crate enables agents to render rich user interfaces:

```rust
use adk_ui::{UiToolset, UI_AGENT_PROMPT};

let tools = UiToolset::all_tools(); // 10 render tools

let mut builder = LlmAgentBuilder::new("ui_assistant")
    .instruction(UI_AGENT_PROMPT); // Tested prompt for reliable UI generation
for tool in tools {
    builder = builder.tool(tool);
}
let agent = builder.build()?;
```

React Client: `npm install @zavora-ai/adk-ui-react`
Features: 28 components, 10 templates, dark mode, streaming updates, server-side validation
```bash
# See all available commands
make help

# Build all crates (CPU-only, works on all systems)
make build

# Build with all features (safe - adk-mistralrs excluded)
make build-all

# Build all examples
make examples

# Run tests
make test

# Run clippy lints
make clippy
```

```bash
# Build workspace (CPU-only)
cargo build --workspace

# Build with all features (works without CUDA)
cargo build --workspace --all-features

# Build examples with common features
cargo build --examples --features "openai,anthropic,deepseek,ollama,groq,browser,guardrails,sso"
```

adk-mistralrs is excluded from the workspace by default so that `--all-features` works without the CUDA toolkit. Build it explicitly:
```bash
# CPU-only (works on all systems)
make build-mistralrs
# or: cargo build --manifest-path adk-mistralrs/Cargo.toml

# macOS with Apple Silicon (Metal GPU)
make build-mistralrs-metal
# or: cargo build --manifest-path adk-mistralrs/Cargo.toml --features metal

# NVIDIA GPU (requires CUDA toolkit)
make build-mistralrs-cuda
# or: cargo build --manifest-path adk-mistralrs/Cargo.toml --features cuda
```

```bash
# Build and run examples with mistralrs
cargo run --example mistralrs_basic --features mistralrs

# With Metal GPU acceleration (macOS)
cargo run --example mistralrs_basic --features mistralrs,metal
```

Add to your Cargo.toml:
```toml
[dependencies]
# All-in-one crate
adk-rust = "0.3.0"

# Or individual crates for finer control
adk-core = "0.3.0"
adk-agent = "0.3.0"
adk-model = { version = "0.3.0", features = ["openai", "anthropic"] } # Enable providers
adk-tool = "0.3.0"
adk-runner = "0.3.0"

# Optional dependencies
adk-session = { version = "0.3.0", optional = true }
adk-artifact = { version = "0.3.0", optional = true }
adk-memory = { version = "0.3.0", optional = true }
adk-server = { version = "0.3.0", optional = true }
adk-cli = { version = "0.3.0", optional = true }
adk-realtime = { version = "0.3.0", features = ["openai"], optional = true }
adk-graph = { version = "0.3.0", features = ["sqlite"], optional = true }
adk-browser = { version = "0.3.0", optional = true }
adk-eval = { version = "0.3.0", optional = true }
```

See the examples/ directory for complete, runnable examples:
Getting Started
- `quickstart/` - Basic agent setup and chat loop
- `function_tool/` - Custom tool implementation
- `multiple_tools/` - Agent with multiple tools
- `agent_tool/` - Use agents as callable tools

OpenAI Integration (requires --features openai)
- `openai_basic/` - Simple OpenAI GPT agent
- `openai_tools/` - OpenAI with function calling
- `openai_workflow/` - Multi-agent workflows with OpenAI
- `openai_structured/` - Structured JSON output

DeepSeek Integration (requires --features deepseek)
- `deepseek_basic/` - Basic DeepSeek chat
- `deepseek_reasoner/` - Chain-of-thought reasoning mode
- `deepseek_tools/` - Function calling with DeepSeek
- `deepseek_caching/` - Context caching for cost reduction

Workflow Agents
- `sequential/` - Sequential workflow execution
- `parallel/` - Concurrent agent execution
- `loop_workflow/` - Iterative refinement patterns
- `sequential_code/` - Code generation pipeline

Realtime Voice Agents (requires --features realtime-openai)
- `realtime_basic/` - Basic text-only realtime session
- `realtime_vad/` - Voice assistant with VAD
- `realtime_tools/` - Tool calling in realtime sessions
- `realtime_handoff/` - Multi-agent handoffs

Graph Workflows
- `graph_agent/` - GraphAgent with parallel LLM agents and callbacks
- `graph_workflow/` - Sequential multi-agent pipeline
- `graph_conditional/` - LLM-based classification and routing
- `graph_react/` - ReAct pattern with tools and cycles
- `graph_supervisor/` - Multi-agent supervisor routing
- `graph_hitl/` - Human-in-the-loop with risk-based interrupts
- `graph_checkpoint/` - State persistence and time travel debugging

Browser Automation
- `browser_basic/` - Basic browser session and tools
- `browser_agent/` - AI agent with browser tools
- `browser_interactive/` - Full 46-tool interactive example

Agent Evaluation
- `eval_basic/` - Basic evaluation setup
- `eval_trajectory/` - Tool call trajectory validation
- `eval_semantic/` - LLM-judged semantic matching
- `eval_rubric/` - Rubric-based scoring

Guardrails
- `guardrail_basic/` - PII redaction and content filtering
- `guardrail_schema/` - JSON schema validation
- `guardrail_agent/` - Full agent integration with guardrails

mistral.rs Local Inference (requires git dependency)
- `mistralrs_basic/` - Basic text generation with local models
- `mistralrs_tools/` - Function calling with mistral.rs
- `mistralrs_vision/` - Image understanding with vision models
- `mistralrs_isq/` - In-situ quantization for memory efficiency
- `mistralrs_lora/` - LoRA adapter usage and hot-swapping
- `mistralrs_multimodel/` - Multi-model serving
- `mistralrs_mcp/` - MCP client integration

Dynamic UI
- `ui_agent/` - Agent with UI rendering tools
- `ui_server/` - UI server with streaming updates
- `ui_react_client/` - React client example

Production Features
- `load_artifacts/` - Working with images and PDFs
- `mcp/` - Model Context Protocol integration
- `server/` - REST API deployment
- `a2a/` - Agent-to-Agent communication
- `web/` - Web UI with streaming
- `research_paper/` - Complex multi-agent workflow
```bash
# Run all tests
cargo test

# Test specific crate
cargo test --package adk-core

# With output
cargo test -- --nocapture
```

```bash
# Linting
cargo clippy

# Formatting
cargo fmt

# Security audit
cargo audit
```

```bash
# Development build
cargo build

# Optimized release build
cargo build --release
```

- Wiki: GitHub Wiki - Comprehensive guides and tutorials
- API Reference: docs.rs/adk-rust - Full API documentation
- Examples: examples/README.md - 80+ working examples with detailed explanations
Optimized for production use:
- Zero-cost abstractions with Rust's ownership model
- Efficient async I/O via Tokio runtime
- Minimal allocations and copying
- Streaming responses for lower latency
- Connection pooling and caching support
Apache 2.0 (same as Google's ADK)
- ADK - Google's Agent Development Kit
- MCP Protocol - Model Context Protocol for tool integration
- Gemini API - Google's multimodal AI model
Contributions welcome! Please open an issue or pull request on GitHub.
Implemented (v0.3.0):
- adk-gemini overhaul — Vertex AI support (ADC, Service Accounts, WIF), v1 stable API, image generation, speech generation, thinking mode, content caching, batch processing, URL context
- Context compaction — automatic conversation history summarization to stay within token limits
- Production hardening — deterministic event ordering, bounded history, configurable limits across adk-core, adk-agent, adk-runner
- ADK Studio debug mode — Execution Timeline with step-by-step replay, State Inspector with per-node input/output visualization
- Action nodes code generation — HTTP (reqwest), Database (sqlx/mongodb/redis), Email (lettre/imap), Code (boa_engine JS sandbox) compile to production Rust
- 14 action nodes — Trigger, HTTP, Set, Transform, Switch, Loop, Merge, Wait, Code, Database, Email, Notification, RSS, File
- Triggers — Manual, Webhook (with bearer/API key auth), Cron Schedule (with timezone), Event (with JSONPath filters)
- A2UI protocol support — render_screen, render_page, render_kit tools with AG-UI and MCP Apps adapters
- SSO/OAuth integration — Auth0, Okta, Azure AD, Google OIDC providers in adk-auth
- Plugin system (adk-plugin) — dynamic agent/tool/model loading with hot-reload
Implemented (v0.2.0):
- Core framework and agent types
- Multi-provider LLM support (Gemini, OpenAI, Anthropic, DeepSeek, Groq, Ollama)
- Native local inference (adk-mistralrs) with ISQ quantization, LoRA adapters, vision/speech/diffusion
- Tool system with MCP support
- Agent Tool — use agents as callable tools
- Session and artifact management
- Memory system with vector embeddings
- REST and A2A servers
- CLI with interactive mode
- Realtime voice agents (OpenAI Realtime API, Gemini Live API)
- Graph-based workflows (LangGraph-style) with checkpointing and human-in-the-loop
- Browser automation (46 WebDriver tools)
- Agent evaluation framework with trajectory validation and LLM-judged scoring
- Dynamic UI generation (adk-ui) with 28 components, 10 templates, React client
- Guardrails (adk-guardrail) with PII redaction, content filtering, schema validation
- ADK Studio — visual agent builder with drag-and-drop, code generation, live streaming
Planned (see docs/roadmap/):
| Priority | Feature | Target | Status |
|---|---|---|---|
| 🔴 P0 | ADK-UI vNext (A2UI + Generative UI) | Q2-Q4 2026 | Planned |
| 🟡 P1 | Cloud Integrations | Q2-Q3 2026 | Planned |
| 🟢 P2 | Enterprise Features | Q4 2026 | Planned |
For Tasks:
Click tags to check more tools for each tasksFor Jobs:
Alternative AI tools for adk-rust
Similar Open Source Tools
adk-rust
ADK-Rust is a comprehensive and production-ready Rust framework for building AI agents. It features type-safe agent abstractions with async execution and event streaming, multiple agent types including LLM agents, workflow agents, and custom agents, realtime voice agents with bidirectional audio streaming, a tool ecosystem with function tools, Google Search, and MCP integration, production features like session management, artifact storage, memory systems, and REST/A2A APIs, and a developer-friendly experience with interactive CLI, working examples, and comprehensive documentation. The framework follows a clean layered architecture and is production-ready and actively maintained.
dexto
Dexto is a lightweight runtime for creating and running AI agents that turn natural language into real-world actions. It serves as the missing intelligence layer for building AI applications, standalone chatbots, or as the reasoning engine inside larger products. Dexto features a powerful CLI and Web UI for running AI agents, supports multiple interfaces, allows hot-swapping of LLMs from various providers, connects to remote tool servers via the Model Context Protocol, is config-driven with version-controlled YAML, offers production-ready core features, extensibility for custom services, and enables multi-agent collaboration via MCP and A2A.
alphora
Alphora is a full-stack framework for building production AI agents, providing agent orchestration, prompt engineering, tool execution, memory management, streaming, and deployment with an async-first, OpenAI-compatible design. It offers features like agent derivation, reasoning-action loop, async streaming, visual debugger, OpenAI compatibility, multimodal support, tool system with zero-config tools and type safety, prompt engine with dynamic prompts, memory and storage management, sandbox for secure execution, deployment as API, and more. Alphora allows users to build sophisticated AI agents easily and efficiently.
agentops
AgentOps is a toolkit for evaluating and developing robust and reliable AI agents. It provides benchmarks, observability, and replay analytics to help developers build better agents. AgentOps is open beta and can be signed up for here. Key features of AgentOps include: - Session replays in 3 lines of code: Initialize the AgentOps client and automatically get analytics on every LLM call. - Time travel debugging: (coming soon!) - Agent Arena: (coming soon!) - Callback handlers: AgentOps works seamlessly with applications built using Langchain and LlamaIndex.
mistral.rs
Mistral.rs is a fast LLM inference platform written in Rust. We support inference on a variety of devices, quantization, and easy-to-use application with an Open-AI API compatible HTTP server and Python bindings.
flyte-sdk
Flyte 2 SDK is a pure Python tool for type-safe, distributed orchestration of agents, ML pipelines, and more. It allows users to write data pipelines, ML training jobs, and distributed compute in Python without any DSL constraints. With features like async-first parallelism and fine-grained observability, Flyte 2 offers a seamless workflow experience. Users can leverage core concepts like TaskEnvironments for container configuration, pure Python workflows for flexibility, and async parallelism for distributed execution. Advanced features include sub-task observability with tracing and remote task execution. The tool also provides native Jupyter integration for running and monitoring workflows directly from notebooks. Configuration and deployment are made easy with configuration files and commands for deploying and running workflows. Flyte 2 is licensed under the Apache 2.0 License.
Neosgenesis
Neogenesis System is an advanced AI decision-making framework that enables agents to 'think about how to think'. It implements a metacognitive approach with real-time learning, tool integration, and multi-LLM support, allowing AI to make expert-level decisions in complex environments. Key features include metacognitive intelligence, tool-enhanced decisions, real-time learning, aha-moment breakthroughs, experience accumulation, and multi-LLM support.
augustus
Augustus is a Go-based LLM vulnerability scanner designed for security professionals to test large language models against a wide range of adversarial attacks. It integrates with 28 LLM providers, covers 210+ adversarial attacks including prompt injection, jailbreaks, encoding exploits, and data extraction, and produces actionable vulnerability reports. The tool is built for production security testing with features like concurrent scanning, rate limiting, retry logic, and timeout handling out of the box.
UMbreLLa
UMbreLLa is a tool designed for deploying Large Language Models (LLMs) for personal agents. It combines offloading, speculative decoding, and quantization to optimize single-user LLM deployment scenarios. With UMbreLLa, 70B-level models can achieve performance comparable to human reading speed on an RTX 4070Ti, delivering exceptional efficiency and responsiveness, especially for coding tasks. The tool supports deploying models on various GPUs and offers features like code completion and CLI/Gradio chatbots. Users can configure the LLM engine for optimal performance based on their hardware setup.
rust-genai
genai is a multi-provider AI library for Rust that aims to provide a common, ergonomic single API across generative AI providers such as OpenAI, Anthropic, Cohere, Ollama, and Gemini. It focuses on standardizing chat completion APIs across major AI services, prioritizing ergonomics and commonality. The library initially targets text chat APIs and plans to expand to images, function calling, and more in future versions. Version 0.1.x will have breaking changes in patch releases, while version 0.2.x will follow semver more strictly. genai does not aim for a full representation of any given AI provider; instead it smooths over their differences at a lower layer for ease of use.
LocalAGI
LocalAGI is a powerful, self-hostable AI Agent platform that allows you to design AI automations without writing code. It provides a complete drop-in replacement for OpenAI's Responses APIs with advanced agentic capabilities. With LocalAGI, you can create customizable AI assistants, automations, chat bots, and agents that run 100% locally, without the need for cloud services or API keys. The platform offers features like no-code agents, web-based interface, advanced agent teaming, connectors for various platforms, comprehensive REST API, short & long-term memory capabilities, planning & reasoning, periodic tasks scheduling, memory management, multimodal support, extensible custom actions, fully customizable models, observability, and more.
kiss_ai
KISS AI is a lightweight yet powerful multi-agent evolutionary framework that makes building AI agents as straightforward as possible, using native function calling for efficiency and accuracy. The framework includes multi-agent orchestration, agent evolution and optimization, a relentless coding agent for long-running tasks, output formatting, trajectory saving and visualization, GEPA for prompt optimization, KISSEvolve for algorithm discovery, self-evolving multi-agent setups, Docker integration, multiprocessing support, and support for models from OpenAI, Anthropic, Gemini, Together AI, and OpenRouter.
ai
The react-native-ai repository allows users to run Large Language Models (LLMs) locally in a React Native app using the Universal MLC LLM Engine, with compatibility with the Vercel AI SDK. Note that this project is experimental and not ready for production. The repository is licensed under MIT and was created with create-react-native-library.
json-io
json-io is a powerful and lightweight Java library that simplifies JSON5, JSON, and TOON serialization and deserialization while handling complex object graphs with ease. It preserves object references, handles polymorphic types, and maintains cyclic relationships in data structures. It offers full JSON5 support, TOON read/write capabilities, and is compatible with JDK 1.8 through JDK 24. The library is built with a focus on correctness over speed, providing extensive configuration options and two modes for data representation. json-io is designed for developers who require advanced serialization features and support for various Java types without external dependencies.
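json-io's reference preservation matters because plain JSON cannot represent a cycle at all. A Python toy of the @id/@ref marker scheme such serializers use (simplified to dicts only; not json-io's actual implementation):

```python
import json

# A cyclic object graph: the dict contains itself.
a = {"name": "a"}
a["self"] = a

def encode(obj, seen=None):
    # Replace repeat visits with a back-reference instead of recursing forever.
    seen = {} if seen is None else seen
    if isinstance(obj, dict):
        if id(obj) in seen:            # already emitted: refer back to it
            return {"@ref": seen[id(obj)]}
        seen[id(obj)] = len(seen) + 1
        return {"@id": seen[id(obj)],
                **{k: encode(v, seen) for k, v in obj.items()}}
    return obj

out = encode(a)
print(json.dumps(out))  # {"@id": 1, "name": "a", "self": {"@ref": 1}}
```

Naive `json.dumps(a)` raises a circular-reference error on the same input; the marker scheme keeps the graph round-trippable, which is the core trick behind cyclic-relationship support.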
z-ai-sdk-python
Z.ai Open Platform Python SDK is the official Python SDK for Z.ai's large model open interface, providing developers with easy access to Z.ai's open APIs. The SDK offers core features like chat completions, embeddings, video generation, audio processing, assistant API, and advanced tools. It supports various functionalities such as speech transcription, text-to-video generation, image understanding, and structured conversation handling. Developers can customize client behavior, configure API keys, and handle errors efficiently. The SDK is designed to simplify AI interactions and enhance AI capabilities for developers.
auto-round
AutoRound is an advanced weight-only quantization algorithm for low-bit LLM inference. It competes impressively against recent methods without introducing any additional inference overhead. The method uses sign gradient descent to fine-tune the rounding values and min-max clipping values of weights in just 200 steps, often significantly outperforming SignRound at the cost of more tuning time during quantization. AutoRound supports a wide range of models and consistently delivers noticeable improvements.
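A toy illustration of the idea, assuming a single linear layer and a straight-through gradient estimate (illustrative only; the real algorithm also tunes the clipping range and optimizes over calibration batches): learn per-weight rounding offsets v by stepping against the sign of the output-error gradient, so a weight may round "the wrong way" individually if that makes the layer output more accurate.

```python
import math

def rnd(x):  # round half up, avoiding Python's banker's rounding
    return math.floor(x + 0.5)

def quantize(w, scale, v):
    # Quantize with a learnable rounding offset v in [-0.5, 0.5] per weight.
    return [rnd(wi / scale + vi) * scale for wi, vi in zip(w, v)]

def autoround_toy(w, x, scale=1.0, steps=201, lr=1 / 64):
    target = sum(wi * xi for wi, xi in zip(w, x))   # full-precision output
    v = [0.0] * len(w)
    for _ in range(steps):
        out = sum(qi * xi for qi, xi in zip(quantize(w, scale, v), x))
        err = out - target
        for i in range(len(w)):
            g = err * x[i]  # straight-through gradient estimate through rnd()
            v[i] -= lr * (1 if g > 0 else -1 if g < 0 else 0)
            v[i] = max(-0.5, min(0.5, v[i]))
    return quantize(w, scale, v)

w, x = [0.3, 0.4], [1.0, 1.0]
print(quantize(w, 1.0, [0.0, 0.0]))  # [0.0, 0.0]: round-to-nearest, error 0.7
print(autoround_toy(w, x))           # [0.0, 1.0]: learned rounding, error 0.3
```

Round-to-nearest is optimal per weight but not per layer output; the signed updates discover that rounding the second weight up halves the output error.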
For similar tasks
AutoGPT
AutoGPT is a revolutionary tool that empowers everyone to harness the power of AI. With AutoGPT, you can effortlessly build, test, and delegate tasks to AI agents, unlocking a world of possibilities. Our mission is to provide the tools you need to focus on what truly matters: innovation and creativity.
agent-os
The Agent OS is an experimental framework and runtime to build sophisticated, long running, and self-coding AI agents. We believe that the most important super-power of AI agents is to write and execute their own code to interact with the world. But for that to work, they need to run in a suitable environment—a place designed to be inhabited by agents. The Agent OS is designed from the ground up to function as a long-term computing substrate for these kinds of self-evolving agents.
chatdev
ChatDev IDE is a tool for building your own AI agents. Whether it's NPCs in games or powerful agent tools, you can design what you want on this platform. It accelerates prompt engineering through JavaScript support, which allows implementing complex prompting techniques.
module-ballerinax-ai.agent
This library provides the functionality required to build ReAct agents using Large Language Models (LLMs).
npi
NPi is an open-source platform providing Tool-use APIs to empower AI agents with the ability to take action in the virtual world. It is currently under active development, and the APIs are subject to change in future releases. NPi offers a command line tool for installation and setup, along with a GitHub app for easy access to repositories. The platform also includes a Python SDK and examples like Calendar Negotiator and Twitter Crawler. Join the NPi community on Discord to contribute to the development and explore the roadmap for future enhancements.
ai-agents
The 'ai-agents' repository is a collection of books and resources focused on developing AI agents, including topics such as GPT models, building AI agents from scratch, machine learning theory and practice, and basic methods and tools for data analysis. The repository provides detailed explanations and guidance for individuals interested in learning about and working with AI agents.
llms
The 'llms' repository is a comprehensive guide on Large Language Models (LLMs), covering topics such as language modeling, applications of LLMs, statistical language modeling, neural language models, conditional language models, evaluation methods, transformer-based language models, practical LLMs like GPT and BERT, prompt engineering, fine-tuning LLMs, retrieval augmented generation, AI agents, and LLMs for computer vision. The repository provides detailed explanations, examples, and tools for working with LLMs.
ai-app
The 'ai-app' repository is a comprehensive collection of tools and resources related to artificial intelligence, focusing on topics such as server environment setup, PyCharm and Anaconda installation, large model deployment and training, Transformer principles, RAG technology, vector databases, AI image, voice, and music generation, and AI Agent frameworks. It also includes practical guides and tutorials on implementing various AI applications. The repository serves as a valuable resource for individuals interested in exploring different aspects of AI technology.
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise-level infrastructure that can power any LLM production use case. Here are some use cases for BricksLLM:
- Set LLM usage limits for users on different pricing tiers
- Track LLM usage on a per-user and per-organization basis
- Block or redact requests containing PII
- Improve LLM reliability with failovers, retries, and caching
- Distribute API keys with rate limits and cost limits for internal development/production use cases
- Distribute API keys with rate limits and cost limits for students
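The per-key limits such a gateway enforces can be sketched in a few lines (illustrative logic only, not BricksLLM's implementation): each issued key carries a requests-per-minute limit and a lifetime spend limit, checked before any request is proxied upstream.

```python
class ApiKey:
    """Toy gateway key with a fixed-window rate limit and a cost budget."""

    def __init__(self, rpm_limit: int, cost_limit_usd: float):
        self.rpm_limit = rpm_limit
        self.cost_limit = cost_limit_usd
        self.spent = 0.0
        self.window_start = 0.0
        self.window_count = 0

    def allow(self, request_cost: float, now: float) -> bool:
        if now - self.window_start >= 60:          # start a new 60s window
            self.window_start, self.window_count = now, 0
        if self.window_count >= self.rpm_limit:
            return False                           # rate limit hit
        if self.spent + request_cost > self.cost_limit:
            return False                           # budget exhausted
        self.window_count += 1
        self.spent += request_cost
        return True

key = ApiKey(rpm_limit=2, cost_limit_usd=0.05)
print([key.allow(0.02, now=0.0) for _ in range(3)])  # [True, True, False]
print(key.allow(0.02, now=61.0))                     # False: budget exceeded
```

A production gateway would use a sliding window or token bucket backed by shared storage so limits hold across replicas, but the decision logic per request has this shape.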
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.

