
AutoAgents
A multi-agent framework written in Rust that enables you to build, deploy, and coordinate multiple intelligent agents
Stars: 65

README:
AutoAgents is a cutting-edge multi-agent framework built in Rust that enables the creation of intelligent, autonomous agents powered by Large Language Models (LLMs) and the Ractor actor framework. Designed for performance, safety, and scalability, AutoAgents provides a robust foundation for building complex AI systems that can reason, act, and collaborate. With AutoAgents you can create cloud-native agents, edge-native agents, and hybrid deployments, and the framework is extensible enough that other ML models can be composed into complex pipelines on top of the actor framework.
- Built-in Tools: File operations, web scraping, API calls, and more coming soon!
- Custom Tools: Easy integration of external tools and services
- Tool Chaining: Complex workflows through tool composition
- Modular Design: Plugin-based architecture for easy extensibility
- Provider Agnostic: Support for multiple LLM providers
- Memory Systems: Configurable memory backends (sliding window, persistent, etc.)
- JSON Schema Support: Type-safe agent responses with automatic validation
- Custom Output Types: Define complex structured outputs for your agents
- Serialization: Built-in support for various data formats
- Sandboxed Environment: Secure and isolated execution of tools using WebAssembly
- Cross-Platform Compatibility: Run tools uniformly across diverse platforms and architectures
- Fast Startup & Low Overhead: Near-native performance with minimal resource consumption
- Safe Resource Control: Limit CPU, memory, and execution time to prevent runaway processes
- Extensibility: Easily add new tools from Hub (Coming Soon!)
- Reasoning: Advanced reasoning capabilities with step-by-step logic
- Acting: Tool execution with intelligent decision making
- Observation: Environmental feedback and adaptation
- Agent Coordination: Seamless communication and collaboration between multiple agents
- Type-Safe Pub/Sub: Rust-native publish/subscribe messaging with compile-time type safety
- Knowledge Sharing: Shared memory and context between agents (In Roadmap)
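The sliding-window memory backend mentioned above can be pictured as a fixed-capacity queue that evicts the oldest turns as new ones arrive. A minimal stdlib-only sketch of the idea (an illustration, not the framework's actual implementation):

```rust
use std::collections::VecDeque;

/// Toy sliding-window memory: keeps only the last `capacity` messages.
struct SlidingWindowMemory {
    capacity: usize,
    messages: VecDeque<String>,
}

impl SlidingWindowMemory {
    fn new(capacity: usize) -> Self {
        Self { capacity, messages: VecDeque::with_capacity(capacity) }
    }

    fn remember(&mut self, msg: impl Into<String>) {
        if self.messages.len() == self.capacity {
            self.messages.pop_front(); // evict the oldest turn
        }
        self.messages.push_back(msg.into());
    }

    fn recall(&self) -> Vec<&str> {
        self.messages.iter().map(String::as_str).collect()
    }
}

fn main() {
    let mut mem = SlidingWindowMemory::new(2);
    mem.remember("turn 1");
    mem.remember("turn 2");
    mem.remember("turn 3");
    // Only the two most recent turns survive.
    println!("{:?}", mem.recall()); // ["turn 2", "turn 3"]
}
```

The real backend additionally integrates with the agent's context assembly, but the eviction policy is the essential idea.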
AutoAgents supports a wide range of LLM providers, allowing you to choose the best fit for your use case:
Provider | Status |
---|---|
LiquidEdge (ONNX) | ✅ |
OpenAI | ✅ |
Anthropic | ✅ |
Ollama | ✅ |
DeepSeek | ✅ |
xAI | ✅ |
Phind | ✅ |
Groq | ✅ |
Azure OpenAI | ✅ |
Provider support is actively expanding based on community needs.
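In practice, "provider agnostic" means agents are written against a trait object, so any backend implementing the trait can be swapped in. A stdlib-only illustrative sketch of the pattern (not the actual autoagents::llm::LLMProvider trait, whose methods differ):

```rust
/// Hypothetical provider abstraction: the agent only sees this trait.
trait LlmProvider {
    fn complete(&self, prompt: &str) -> String;
}

/// Stand-in backend; real ones would wrap OpenAI, Anthropic, Ollama, etc.
struct EchoProvider;

impl LlmProvider for EchoProvider {
    fn complete(&self, prompt: &str) -> String {
        format!("echo: {prompt}")
    }
}

/// Agent-side code depends only on the trait, never a concrete provider.
fn ask(provider: &dyn LlmProvider, prompt: &str) -> String {
    provider.complete(prompt)
}

fn main() {
    let p = EchoProvider;
    println!("{}", ask(&p, "hello")); // echo: hello
}
```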
For contributing to AutoAgents or building from source:
- Rust (latest stable recommended)
- Cargo package manager
- LeftHook for Git hooks management
macOS (using Homebrew):
brew install lefthook
Linux/Windows:
# Using npm
npm install -g lefthook
# Clone the repository
git clone https://github.com/liquidos-ai/AutoAgents.git
cd AutoAgents
# Install Git hooks using lefthook
lefthook install
# Build the project
cargo build --release
# Run tests to verify setup
cargo test --all-features
The lefthook configuration will automatically:
- Format code with cargo fmt
- Run linting with cargo clippy
- Execute tests before commits
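For orientation, a lefthook configuration wiring up those three checks typically looks like the sketch below. This is a hypothetical shape; the repository's actual lefthook.yml may name its commands and flags differently.

```yaml
# Hypothetical lefthook.yml sketch; the repository's real config may differ.
pre-commit:
  commands:
    fmt:
      run: cargo fmt -- --check
    clippy:
      run: cargo clippy -- -D warnings
    test:
      run: cargo test --all-features
```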
use autoagents::core::actor::Topic;
use autoagents::core::agent::memory::SlidingWindowMemory;
use autoagents::core::agent::prebuilt::executor::{ReActAgentOutput, ReActExecutor};
use autoagents::core::agent::task::Task;
use autoagents::core::agent::{AgentBuilder, AgentDeriveT, AgentOutputT};
use autoagents::core::environment::Environment;
use autoagents::core::error::Error;
use autoagents::core::protocol::{Event, TaskResult};
use autoagents::core::runtime::{SingleThreadedRuntime, TypedRuntime};
use autoagents::core::tool::{ToolCallError, ToolInputT, ToolRuntime, ToolT};
use autoagents::llm::LLMProvider;
use autoagents_derive::{agent, tool, AgentOutput, ToolInput};
use colored::*;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::sync::Arc;
use tokio_stream::{wrappers::ReceiverStream, StreamExt};
#[derive(Serialize, Deserialize, ToolInput, Debug)]
pub struct AdditionArgs {
    #[input(description = "Left Operand for addition")]
    left: i64,
    #[input(description = "Right Operand for addition")]
    right: i64,
}

#[tool(
    name = "Addition",
    description = "Use this tool to Add two numbers",
    input = AdditionArgs,
)]
struct Addition {}

impl ToolRuntime for Addition {
    fn execute(&self, args: Value) -> Result<Value, ToolCallError> {
        let typed_args: AdditionArgs = serde_json::from_value(args)?;
        let result = typed_args.left + typed_args.right;
        Ok(result.into())
    }
}

/// Math agent output with Value and Explanation
#[derive(Debug, Serialize, Deserialize, AgentOutput)]
pub struct MathAgentOutput {
    #[output(description = "The addition result")]
    value: i64,
    #[output(description = "Explanation of the logic")]
    explanation: String,
    #[output(description = "If user asks other than math questions, use this to answer them.")]
    generic: Option<String>,
}

#[agent(
    name = "math_agent",
    description = "You are a Math agent",
    tools = [Addition],
    output = MathAgentOutput
)]
pub struct MathAgent {}

impl ReActExecutor for MathAgent {}

pub async fn simple_agent(llm: Arc<dyn LLMProvider>) -> Result<(), Error> {
    let sliding_window_memory = Box::new(SlidingWindowMemory::new(10));
    let agent = MathAgent {};
    let runtime = SingleThreadedRuntime::new(None);
    let test_topic = Topic::<Task>::new("test");

    let agent_handle = AgentBuilder::new(agent)
        .with_llm(llm)
        .runtime(runtime.clone())
        .subscribe_topic(test_topic.clone())
        .with_memory(sliding_window_memory)
        .build()
        .await?;

    // Create environment and set up event handling
    let mut environment = Environment::new(None);
    let _ = environment.register_runtime(runtime.clone()).await;
    let receiver = environment.take_event_receiver(None).await?;
    handle_events(receiver);

    // Publish message to all the subscribing actors
    runtime
        .publish(&test_topic, Task::new("what is 2 + 2?"))
        .await?;

    // Send a direct message for memory test
    println!("\n🧠 Sending direct message to test memory...");
    runtime
        .send_message(Task::new("What was the question I asked?"), agent_handle.addr())
        .await?;

    let _ = environment.run().await;
    Ok(())
}

fn handle_events(event_stream: Option<ReceiverStream<Event>>) {
    if let Some(mut event_stream) = event_stream {
        tokio::spawn(async move {
            while let Some(event) = event_stream.next().await {
                if let Event::TaskComplete { result, .. } = event {
                    if let TaskResult::Value(val) = result {
                        let agent_out: ReActAgentOutput =
                            serde_json::from_value(val).unwrap();
                        let math_out: MathAgentOutput =
                            serde_json::from_str(&agent_out.response).unwrap();
                        println!(
                            "{}",
                            format!(
                                "Math Value: {}, Explanation: {}",
                                math_out.value, math_out.explanation
                            )
                            .green()
                        );
                    }
                }
            }
        });
    }
}
Explore our comprehensive examples to get started quickly:
A simple agent demonstrating core functionality and event-driven architecture.
export OPENAI_API_KEY="your-api-key"
cargo run --package basic-example -- --usecase simple
A simple agent which can run tools in WASM runtime.
export OPENAI_API_KEY="your-api-key"
cargo run --package wasm-runner
A sophisticated ReAct-based coding agent with file manipulation capabilities.
export OPENAI_API_KEY="your-api-key"
cargo run --package coding_agent -- --usecase interactive
AutoAgents is built with a modular architecture:
AutoAgents/
├── crates/
│   ├── autoagents/   # Main library entry point
│   ├── core/         # Core agent framework
│   ├── llm/          # LLM provider implementations
│   ├── liquid-edge/  # Edge runtime implementation
│   └── derive/       # Procedural macros
└── examples/         # Example implementations
- Agent: The fundamental unit of intelligence
- Environment: Manages agent lifecycle and communication
- Memory: Configurable memory systems
- Tools: External capability integration
- Executors: Different reasoning patterns (ReAct, Chain-of-Thought)
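The ReAct pattern named above alternates reasoning (a model call), acting (a tool call), and observing the result, looping until the model emits a final answer. A stdlib-only toy of that control loop, with a hardcoded stand-in for the model (nothing here is framework API):

```rust
/// One decision from a (mocked) model: call a tool, or finish with an answer.
enum Step {
    Act { tool: &'static str, input: i64 },
    Finish(String),
}

/// Stand-in for an LLM call: one tool use, then a final answer.
fn mock_llm(transcript: &str) -> Step {
    if transcript.contains("Observation") {
        Step::Finish("2 + 2 = 4".to_string())
    } else {
        Step::Act { tool: "double", input: 2 }
    }
}

/// Toy tool registry with a single arithmetic tool.
fn run_tool(tool: &str, input: i64) -> i64 {
    match tool {
        "double" => input * 2,
        _ => panic!("unknown tool"),
    }
}

/// The ReAct loop: Thought -> Action -> Observation, repeated until Finish.
fn react_loop() -> String {
    let mut transcript = String::from("Question: what is 2 + 2?\n");
    loop {
        match mock_llm(&transcript) {
            Step::Act { tool, input } => {
                let obs = run_tool(tool, input);
                transcript.push_str(&format!("Action: {tool}({input})\nObservation: {obs}\n"));
            }
            Step::Finish(answer) => return answer,
        }
    }
}

fn main() {
    println!("{}", react_loop()); // 2 + 2 = 4
}
```

The framework's ReActExecutor plays the role of `react_loop`, with a real LLM deciding each step and the registered tools executing the actions.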
For development setup instructions, see the Installation section above.
# Run all tests
cargo test --all-features
# Run tests with coverage (requires cargo-tarpaulin)
cargo install cargo-tarpaulin
cargo tarpaulin --all-features --out html
This project uses LeftHook for Git hooks management. The hooks will automatically:
- Format code with cargo fmt --check
- Run linting with cargo clippy -- -D warnings
- Execute tests with cargo test --features full
We welcome contributions! Please see our Contributing Guidelines and Code of Conduct for details.
- API Documentation: Complete Framework Docs
- Examples: Practical implementation examples
- GitHub Issues: Bug reports and feature requests
- Discussions: Community Q&A and ideas
- Discord: Join our Discord Community using https://discord.gg/Ghau8xYn
AutoAgents is designed for high performance:
- Memory Efficient: Optimized memory usage with configurable backends
- Concurrent: Full async/await support with tokio
- Scalable: Horizontal scaling with multi-agent coordination
- Type Safe: Compile-time guarantees with Rust's type system
AutoAgents is dual-licensed under:
- MIT License (MIT_LICENSE)
- Apache License 2.0 (APACHE_LICENSE)
You may choose either license for your use case.
Built with ❤️ by the Liquidos AI team and our amazing community contributors.
Special thanks to:
- The Rust community for the excellent ecosystem
- OpenAI, Anthropic, and other LLM providers for their APIs
- All contributors who help make AutoAgents better
⭐ Star us on GitHub | 🐛 Report Issues | 💬 Join Discussions
Similar Open Source Tools

AgentNeo
AgentNeo is an advanced, open-source Agentic AI Application Observability, Monitoring, and Evaluation Framework designed to provide deep insights into AI agents, Large Language Model (LLM) calls, and tool interactions. It offers robust logging, visualization, and evaluation capabilities to help debug and optimize AI applications with ease. With features like tracing LLM calls, monitoring agents and tools, tracking interactions, detailed metrics collection, flexible data storage, simple instrumentation, interactive dashboard, project management, execution graph visualization, and evaluation tools, AgentNeo empowers users to build efficient, cost-effective, and high-quality AI-driven solutions.

Rankify
Rankify is a Python toolkit designed for unified retrieval, re-ranking, and retrieval-augmented generation (RAG) research. It integrates 40 pre-retrieved benchmark datasets and supports 7 retrieval techniques, 24 state-of-the-art re-ranking models, and multiple RAG methods. Rankify provides a modular and extensible framework, enabling seamless experimentation and benchmarking across retrieval pipelines. It offers comprehensive documentation, open-source implementation, and pre-built evaluation tools, making it a powerful resource for researchers and practitioners in the field.

anylabeling
AnyLabeling is a tool for effortless data labeling with AI support from YOLO and Segment Anything. It combines features from LabelImg and Labelme with an improved UI and auto-labeling capabilities. Users can annotate images with polygons, rectangles, circles, lines, and points, as well as perform auto-labeling using YOLOv5 and Segment Anything. The tool also supports text detection, recognition, and Key Information Extraction (KIE) labeling, with multiple language options available such as English, Vietnamese, and Chinese.

Avalonia-Assistant
Avalonia-Assistant is an open-source desktop intelligent assistant that aims to provide a user-friendly interactive experience based on the Avalonia UI framework and the integration of Semantic Kernel with OpenAI or other large LLM models. By utilizing Avalonia-Assistant, you can perform various desktop operations through text or voice commands, enhancing your productivity and daily office experience.

instill-core
Instill Core is an open-source orchestrator comprising a collection of source-available projects designed to streamline every aspect of building versatile AI features with unstructured data. It includes Instill VDP (Versatile Data Pipeline) for unstructured data, AI, and pipeline orchestration, Instill Model for scalable MLOps and LLMOps for open-source or custom AI models, and Instill Artifact for unified unstructured data management. Instill Core can be used for tasks such as building, testing, and sharing pipelines, importing, serving, fine-tuning, and monitoring ML models, and transforming documents, images, audio, and video into a unified AI-ready format.

opcode
opcode is a powerful desktop application built with Tauri 2 that serves as a command center for interacting with Claude Code. It offers a visual GUI for managing Claude Code sessions, creating custom agents, tracking usage, and more. Users can navigate projects, create specialized AI agents, monitor usage analytics, manage MCP servers, create session checkpoints, edit CLAUDE.md files, and more. The tool bridges the gap between command-line tools and visual experiences, making AI-assisted development more intuitive and productive.

llamafarm
LlamaFarm is a comprehensive AI framework that empowers users to build powerful AI applications locally, with full control over costs and deployment options. It provides modular components for RAG systems, vector databases, model management, prompt engineering, and fine-tuning. Users can create differentiated AI products without needing extensive ML expertise, using simple CLI commands and YAML configs. The framework supports local-first development, production-ready components, strategy-based configuration, and deployment anywhere from laptops to the cloud.

hugging-llm
HuggingLLM is a project that aims to introduce ChatGPT to a wider audience, particularly those interested in using the technology to create new products or applications. The project focuses on providing practical guidance on how to use ChatGPT-related APIs to create new features and applications. It also includes detailed background information and system design introductions for relevant tasks, as well as example code and implementation processes. The project is designed for individuals with some programming experience who are interested in using ChatGPT for practical applications, and it encourages users to experiment and create their own applications and demos.

human
AI-powered 3D Face Detection & Rotation Tracking, Face Description & Recognition, Body Pose Tracking, 3D Hand & Finger Tracking, Iris Analysis, Age & Gender & Emotion Prediction, Gaze Tracking, Gesture Recognition, Body Segmentation

Fay
Fay is an open-source digital human framework that offers different versions for various purposes. The complete sales edition ('带货完整版') is suitable for online and offline salespersons. The complete assistant edition ('助理完整版') serves as a human-machine interactive digital assistant that can also control devices upon command. The agent edition ('agent版') is designed to be an autonomous agent capable of making decisions and contacting its owner. The framework provides updates and improvements across its different versions, including features like emotion analysis integration, model optimizations, and compatibility enhancements. Users can access detailed documentation for each version through the provided links.

LabelQuick
LabelQuick_V2.0 is a fast image annotation tool designed and developed by the AI Horizon team. This version has been optimized and improved based on the previous version. It provides an intuitive interface and powerful annotation and segmentation functions to efficiently complete dataset annotation work. The tool supports video object tracking annotation, quick annotation by clicking, and various video operations. It introduces the SAM2 model for accurate and efficient object detection in video frames, reducing manual intervention and improving annotation quality. The tool is designed for Windows systems and requires a minimum of 6GB of memory.

AIResume
AIResume is an open-source resume creation platform that helps users easily create professional resumes, integrating AI technology to assist users in polishing their resumes. The project allows for template development using Vue 3, Vite, TypeScript, and Ant Design Vue. Users can edit resumes, export them as PDFs, switch between multiple resume templates, and collaborate on template development. AI features include resume refinement, deep optimization based on individual projects or experiences, and simulated interviews for user practice. Additional functionalities include theme color switching, high customization options, dark/light mode switching, real-time preview, drag-and-drop resume scaling, data export/import, data clearing, sample data prefilling, template market showcasing, and more.

bifrost
Bifrost is a high-performance AI gateway that unifies access to multiple providers through a single OpenAI-compatible API. It offers features like automatic failover, load balancing, semantic caching, and enterprise-grade functionalities. Users can deploy Bifrost in seconds with zero configuration, benefiting from its core infrastructure, advanced features, enterprise and security capabilities, and developer experience. The repository structure is modular, allowing for maximum flexibility. Bifrost is designed for quick setup, easy configuration, and seamless integration with various AI models and tools.

codemod
Codemod platform is a tool that helps developers create, distribute, and run codemods in codebases of any size. The AI-powered, community-led codemods enable automation of framework upgrades, large refactoring, and boilerplate programming with speed and developer experience. It aims to make dream migrations a reality for developers by providing a platform for seamless codemod operations.

J.A.R.V.I.S.2.0
J.A.R.V.I.S. 2.0 is an AI-powered assistant designed for voice commands, capable of tasks like providing weather reports, summarizing news, sending emails, and more. It features voice activation, speech recognition, AI responses, and handles multiple tasks including email sending, weather reports, news reading, image generation, database functions, phone call automation, AI-based task execution, website & application automation, and knowledge-based interactions. The assistant also includes timeout handling, automatic input processing, and the ability to call multiple functions simultaneously. It requires Python 3.9 or later and specific API keys for weather, news, email, and AI access. The tool integrates Gemini AI for function execution and Ollama as a fallback mechanism. It utilizes a RAG-based knowledge system and ADB integration for phone automation. Future enhancements include deeper mobile integration, advanced AI-driven automation, improved NLP-based command execution, and multi-modal interactions.