
AutoAgents
A multi-agent framework written in Rust that enables you to build, deploy, and coordinate multiple intelligent agents
Stars: 65

AutoAgents is a cutting-edge multi-agent framework built in Rust that enables the creation of intelligent, autonomous agents powered by Large Language Models (LLMs) and Ractor. Designed for performance, safety, and scalability, AutoAgents provides a robust foundation for building complex AI systems that can reason, act, and collaborate. With AutoAgents you can create cloud-native agents, edge-native agents, and hybrid deployments, and the framework is extensible enough that other ML models can be composed into complex pipelines on top of the actor framework.
README:
- Built-in Tools: File operations, web scraping, API calls, and more coming soon!
- Custom Tools: Easy integration of external tools and services
- Tool Chaining: Complex workflows through tool composition
- Modular Design: Plugin-based architecture for easy extensibility
- Provider Agnostic: Support for multiple LLM providers
- Memory Systems: Configurable memory backends (sliding window, persistent, etc.)
- JSON Schema Support: Type-safe agent responses with automatic validation
- Custom Output Types: Define complex structured outputs for your agents
- Serialization: Built-in support for various data formats
- Sandboxed Environment: Secure and isolated execution of tools using WebAssembly
- Cross-Platform Compatibility: Run tools uniformly across diverse platforms and architectures
- Fast Startup & Low Overhead: Near-native performance with minimal resource consumption
- Safe Resource Control: Limit CPU, memory, and execution time to prevent runaway processes
- Extensibility: Easily add new tools from Hub (Coming Soon!)
- Reasoning: Advanced reasoning capabilities with step-by-step logic
- Acting: Tool execution with intelligent decision making
- Observation: Environmental feedback and adaptation
- Agent Coordination: Seamless communication and collaboration between multiple agents
- Type-Safe Pub/Sub: Rust-native, strongly typed publish/subscribe between agents (see the sketch after this list)
- Knowledge Sharing: Shared memory and context between agents (on the roadmap)
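To make the typed pub/sub concrete, the sketch below is assembled entirely from calls that appear in the full example later in this README: a Topic is parameterized by its message type, and every agent built with AgentBuilder::subscribe_topic on that topic receives each published task.

use autoagents::core::actor::Topic;
use autoagents::core::agent::task::Task;
use autoagents::core::error::Error;
use autoagents::core::runtime::{SingleThreadedRuntime, TypedRuntime};

pub async fn fan_out() -> Result<(), Error> {
    let runtime = SingleThreadedRuntime::new(None);
    // Topic::<Task> only carries Task values, so publishing a message of
    // the wrong type fails at compile time rather than at runtime.
    let jobs = Topic::<Task>::new("jobs");
    // Delivered to every agent built with .subscribe_topic(jobs.clone()).
    runtime.publish(&jobs, Task::new("summarize the latest results")).await?;
    Ok(())
}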
AutoAgents supports a wide range of LLM providers, allowing you to choose the best fit for your use case:
Provider | Status
---|---
LiquidEdge (ONNX) | ✅
OpenAI | ✅
Anthropic | ✅
Ollama | ✅
DeepSeek | ✅
xAI | ✅
Phind | ✅
Groq | ✅
Azure OpenAI | ✅
Provider support is actively expanding based on community needs.
For contributing to AutoAgents or building from source:
- Rust (latest stable recommended)
- Cargo package manager
- LeftHook for Git hooks management
macOS (using Homebrew):
brew install lefthook
Linux/Windows:
# Using npm
npm install -g lefthook
# Clone the repository
git clone https://github.com/liquidos-ai/AutoAgents.git
cd AutoAgents
# Install Git hooks using lefthook
lefthook install
# Build the project
cargo build --release
# Run tests to verify setup
cargo test --all-features
The lefthook configuration will automatically:
- Format code with cargo fmt
- Run linting with cargo clippy
- Execute tests before commits
The example below builds a math agent with a custom Addition tool and a typed, structured output:

use autoagents::core::actor::Topic;
use autoagents::core::agent::memory::SlidingWindowMemory;
use autoagents::core::agent::prebuilt::executor::{ReActAgentOutput, ReActExecutor};
use autoagents::core::agent::task::Task;
use autoagents::core::agent::{AgentBuilder, AgentDeriveT, AgentOutputT};
use autoagents::core::environment::Environment;
use autoagents::core::error::Error;
use autoagents::core::protocol::{Event, TaskResult};
use autoagents::core::runtime::{SingleThreadedRuntime, TypedRuntime};
use autoagents::core::tool::{ToolCallError, ToolInputT, ToolRuntime, ToolT};
use autoagents::llm::LLMProvider;
use autoagents_derive::{agent, tool, AgentOutput, ToolInput};
use colored::*;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::sync::Arc;
use tokio_stream::{wrappers::ReceiverStream, StreamExt};

#[derive(Serialize, Deserialize, ToolInput, Debug)]
pub struct AdditionArgs {
    #[input(description = "Left Operand for addition")]
    left: i64,
    #[input(description = "Right Operand for addition")]
    right: i64,
}

#[tool(
    name = "Addition",
    description = "Use this tool to add two numbers",
    input = AdditionArgs,
)]
struct Addition {}

impl ToolRuntime for Addition {
    fn execute(&self, args: Value) -> Result<Value, ToolCallError> {
        let typed_args: AdditionArgs = serde_json::from_value(args)?;
        let result = typed_args.left + typed_args.right;
        Ok(result.into())
    }
}

/// Math agent output with Value and Explanation
#[derive(Debug, Serialize, Deserialize, AgentOutput)]
pub struct MathAgentOutput {
    #[output(description = "The addition result")]
    value: i64,
    #[output(description = "Explanation of the logic")]
    explanation: String,
    #[output(description = "If the user asks a non-math question, use this field to answer it.")]
    generic: Option<String>,
}

#[agent(
    name = "math_agent",
    description = "You are a Math agent",
    tools = [Addition],
    output = MathAgentOutput
)]
pub struct MathAgent {}

impl ReActExecutor for MathAgent {}

pub async fn simple_agent(llm: Arc<dyn LLMProvider>) -> Result<(), Error> {
    let sliding_window_memory = Box::new(SlidingWindowMemory::new(10));
    let agent = MathAgent {};
    let runtime = SingleThreadedRuntime::new(None);
    let test_topic = Topic::<Task>::new("test");

    let agent_handle = AgentBuilder::new(agent)
        .with_llm(llm)
        .runtime(runtime.clone())
        .subscribe_topic(test_topic.clone())
        .with_memory(sliding_window_memory)
        .build()
        .await?;

    // Create environment and set up event handling
    let mut environment = Environment::new(None);
    let _ = environment.register_runtime(runtime.clone()).await;
    let receiver = environment.take_event_receiver(None).await?;
    handle_events(receiver);

    // Publish a message to all subscribing actors
    runtime
        .publish(&test_topic, Task::new("what is 2 + 2?"))
        .await?;

    // Send a direct message to test memory
    println!("\nSending direct message to test memory...");
    runtime
        .send_message(Task::new("What was the question I asked?"), agent_handle.addr())
        .await?;

    let _ = environment.run().await;
    Ok(())
}

fn handle_events(event_stream: Option<ReceiverStream<Event>>) {
    if let Some(mut event_stream) = event_stream {
        tokio::spawn(async move {
            while let Some(event) = event_stream.next().await {
                if let Event::TaskComplete { result, .. } = event {
                    if let TaskResult::Value(val) = result {
                        // ReAct agents wrap the structured output in a JSON
                        // string, so deserialize in two steps.
                        let agent_out: ReActAgentOutput =
                            serde_json::from_value(val).unwrap();
                        let math_out: MathAgentOutput =
                            serde_json::from_str(&agent_out.response).unwrap();
                        println!(
                            "{}",
                            format!(
                                "Math Value: {}, Explanation: {}",
                                math_out.value, math_out.explanation
                            )
                            .green()
                        );
                    }
                }
            }
        });
    }
}
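To actually run simple_agent you still need a concrete backend behind Arc<dyn LLMProvider>. The entry point below is a hedged sketch: the OpenAI builder shown is a hypothetical constructor (the real provider setup lives in autoagents::llm and is shown in the basic-example crate), so treat it as wiring guidance rather than exact API.

#[tokio::main]
async fn main() -> Result<(), Error> {
    // ASSUMPTION: `OpenAI::builder()` is illustrative only; consult the
    // basic-example crate for the actual provider construction.
    let llm: Arc<dyn LLMProvider> = Arc::new(
        OpenAI::builder()
            .api_key(std::env::var("OPENAI_API_KEY").expect("OPENAI_API_KEY not set"))
            .build()
            .expect("failed to construct the LLM provider"),
    );
    simple_agent(llm).await
}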
Explore our comprehensive examples to get started quickly:
A simple agent demonstrating core functionality and event-driven architecture.
export OPENAI_API_KEY="your-api-key"
cargo run --package basic-example -- --usecase simple
A simple agent that runs its tools in a WASM runtime.
export OPENAI_API_KEY="your-api-key"
cargo run --package wasm-runner
A sophisticated ReAct-based coding agent with file manipulation capabilities.
export OPENAI_API_KEY="your-api-key"
cargo run --package coding_agent -- --usecase interactive
AutoAgents is built with a modular architecture:
AutoAgents/
├── crates/
│   ├── autoagents/    # Main library entry point
│   ├── core/          # Core agent framework
│   ├── llm/           # LLM provider implementations
│   ├── liquid-edge/   # Edge runtime implementation
│   └── derive/        # Procedural macros
└── examples/          # Example implementations
- Agent: The fundamental unit of intelligence
- Environment: Manages agent lifecycle and communication
- Memory: Configurable memory systems
- Tools: External capability integration
- Executors: Different reasoning patterns (ReAct, Chain-of-Thought); a minimal ReAct agent is sketched below
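Opting into a reasoning pattern is a one-line trait impl on the agent struct. The sketch below is illustrative and reuses only the macros and traits from the full example above; EchoAgent is a hypothetical name, and if your version of the agent macro requires an output type, declare one as with MathAgentOutput above.

use autoagents::core::agent::prebuilt::executor::ReActExecutor;
use autoagents_derive::agent;

// Hypothetical minimal agent: no tools and no structured output.
// ASSUMPTION: `tools = []` and an omitted `output` are accepted; if not,
// declare them as in the MathAgent example above.
#[agent(
    name = "echo_agent",
    description = "You are a helpful assistant",
    tools = [],
)]
pub struct EchoAgent {}

// Selecting the ReAct reasoning pattern is just an empty impl.
impl ReActExecutor for EchoAgent {}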
For development setup instructions, see the Installation section above.
# Run all tests
cargo test --all-features
# Run tests with coverage (requires cargo-tarpaulin)
cargo install cargo-tarpaulin
cargo tarpaulin --all-features --out html
This project uses LeftHook for Git hooks management. The hooks will automatically:
- Format code with cargo fmt --check
- Run linting with cargo clippy -- -D warnings
- Execute tests with cargo test --features full
We welcome contributions! Please see our Contributing Guidelines and Code of Conduct for details.
- API Documentation: Complete Framework Docs
- Examples: Practical implementation examples
- GitHub Issues: Bug reports and feature requests
- Discussions: Community Q&A and ideas
- Discord: Join our Discord Community using https://discord.gg/Ghau8xYn
AutoAgents is designed for high performance:
- Memory Efficient: Optimized memory usage with configurable backends
- Concurrent: Full async/await support with tokio
- Scalable: Horizontal scaling with multi-agent coordination
- Type Safe: Compile-time guarantees with Rust's type system
AutoAgents is dual-licensed under:
- MIT License (MIT_LICENSE)
- Apache License 2.0 (APACHE_LICENSE)
You may choose either license for your use case.
Built with ❤️ by the Liquidos AI team and our amazing community contributors.
Special thanks to:
- The Rust community for the excellent ecosystem
- OpenAI, Anthropic, and other LLM providers for their APIs
- All contributors who help make AutoAgents better
⭐ Star us on GitHub | 🐛 Report Issues | 💬 Join Discussions
Similar Open Source Tools


aegra
Aegra is a self-hosted AI agent backend platform that provides LangGraph power without vendor lock-in. Built with FastAPI + PostgreSQL, it offers complete control over agent orchestration for teams looking to escape vendor lock-in, meet data sovereignty requirements, enable custom deployments, and optimize costs. Aegra is Agent Protocol compliant and perfect for teams seeking a free, self-hosted alternative to LangGraph Platform with zero lock-in, full control, and compatibility with existing LangGraph Client SDK.

AgentNeo
AgentNeo is an advanced, open-source Agentic AI Application Observability, Monitoring, and Evaluation Framework designed to provide deep insights into AI agents, Large Language Model (LLM) calls, and tool interactions. It offers robust logging, visualization, and evaluation capabilities to help debug and optimize AI applications with ease. With features like tracing LLM calls, monitoring agents and tools, tracking interactions, detailed metrics collection, flexible data storage, simple instrumentation, interactive dashboard, project management, execution graph visualization, and evaluation tools, AgentNeo empowers users to build efficient, cost-effective, and high-quality AI-driven solutions.

ToolUniverse
ToolUniverse is a collection of 211 biomedical tools designed for Agentic AI, providing access to biomedical knowledge for solving therapeutic reasoning tasks. The tools cover various aspects of drugs and diseases, linked to trusted sources like US FDA-approved drugs since 1939, Open Targets, and Monarch Initiative.

claude-007-agents
Claude Code Agents is an open-source AI agent system designed to enhance development workflows by providing specialized AI agents for orchestration, resilience engineering, and organizational memory. These agents offer specialized expertise across technologies, AI system with organizational memory, and an agent orchestration system. The system includes features such as engineering excellence by design, advanced orchestration system, Task Master integration, live MCP integrations, professional-grade workflows, and organizational intelligence. It is suitable for solo developers, small teams, enterprise teams, and open-source projects. The system requires a one-time bootstrap setup for each project to analyze the tech stack, select optimal agents, create configuration files, set up Task Master integration, and validate system readiness.

mcp
Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to large language models (LLMs). It allows AI applications to connect with various data sources and tools in a consistent manner, enhancing their capabilities and flexibility. This repository contains core libraries, test frameworks, engineering systems, pipelines, and tooling for Microsoft MCP Server contributors to unify engineering investments and reduce duplication and divergence. For more details, visit the official MCP website.

Rankify
Rankify is a Python toolkit designed for unified retrieval, re-ranking, and retrieval-augmented generation (RAG) research. It integrates 40 pre-retrieved benchmark datasets and supports 7 retrieval techniques, 24 state-of-the-art re-ranking models, and multiple RAG methods. Rankify provides a modular and extensible framework, enabling seamless experimentation and benchmarking across retrieval pipelines. It offers comprehensive documentation, open-source implementation, and pre-built evaluation tools, making it a powerful resource for researchers and practitioners in the field.

mcp-memory-service
The MCP Memory Service is a universal memory service designed for AI assistants, providing semantic memory search and persistent storage. It works with various AI applications and offers fast local search using SQLite-vec and global distribution through Cloudflare. The service supports intelligent memory management, universal compatibility with AI tools, flexible storage options, and is production-ready with cross-platform support and secure connections. Users can store and recall memories, search by tags, check system health, and configure the service for Claude Desktop integration and environment variables.

evi-run
evi-run is a powerful, production-ready multi-agent AI system built on Python using the OpenAI Agents SDK. It offers instant deployment, ultimate flexibility, built-in analytics, Telegram integration, and scalable architecture. The system features memory management, knowledge integration, task scheduling, multi-agent orchestration, custom agent creation, deep research, web intelligence, document processing, image generation, DEX analytics, and Solana token swap. It supports flexible usage modes like private, free, and pay mode, with upcoming features including NSFW mode, task scheduler, and automatic limit orders. The technology stack includes Python 3.11, OpenAI Agents SDK, Telegram Bot API, PostgreSQL, Redis, and Docker & Docker Compose for deployment.

llamafarm
LlamaFarm is a comprehensive AI framework that empowers users to build powerful AI applications locally, with full control over costs and deployment options. It provides modular components for RAG systems, vector databases, model management, prompt engineering, and fine-tuning. Users can create differentiated AI products without needing extensive ML expertise, using simple CLI commands and YAML configs. The framework supports local-first development, production-ready components, strategy-based configuration, and deployment anywhere from laptops to the cloud.

handit.ai
Handit.ai is an autonomous engineer tool designed to fix AI failures 24/7. It catches failures, writes fixes, tests them, and ships PRs automatically. It monitors AI applications, detects issues, generates fixes, tests them against real data, and ships them as pull requestsβall automatically. Users can write JavaScript, TypeScript, Python, and more, and the tool automates what used to require manual debugging and firefighting.

shimmy
Shimmy is a 5.1MB single-binary local inference server providing OpenAI-compatible endpoints for GGUF models. It offers fast, reliable AI inference with sub-second responses, zero configuration, and automatic port management. Perfect for developers seeking privacy, cost-effectiveness, speed, and easy integration with popular tools like VSCode and Cursor. Shimmy is designed to be invisible infrastructure that simplifies local AI development and deployment.

finite-monkey-engine
FiniteMonkey is an advanced vulnerability mining engine powered purely by GPT, requiring no prior knowledge base or fine-tuning. Its effectiveness significantly surpasses most current related research approaches. The tool is task-driven, prompt-driven, and focuses on prompt design, leveraging 'deception' and hallucination as key mechanics. It has helped identify vulnerabilities worth over $60,000 in bounties. The tool requires PostgreSQL database, OpenAI API access, and Python environment for setup. It supports various languages like Solidity, Rust, Python, Move, Cairo, Tact, Func, Java, and Fake Solidity for scanning. FiniteMonkey is best suited for logic vulnerability mining in real projects, not recommended for academic vulnerability testing. GPT-4-turbo is recommended for optimal results with an average scan time of 2-3 hours for medium projects. The tool provides detailed scanning results guide and implementation tips for users.

RSTGameTranslation
RSTGameTranslation is a tool designed for translating game text into multiple languages efficiently. It provides a user-friendly interface for game developers to easily manage and localize their game content. With RSTGameTranslation, developers can streamline the translation process, ensuring consistency and accuracy across different language versions of their games. The tool supports various file formats commonly used in game development, making it versatile and adaptable to different project requirements. Whether you are working on a small indie game or a large-scale production, RSTGameTranslation can help you reach a global audience by making localization a seamless and hassle-free experience.

claude-flow
Claude-Flow is a workflow automation tool designed to streamline and optimize business processes. It provides a user-friendly interface for creating and managing workflows, allowing users to automate repetitive tasks and improve efficiency. With features such as drag-and-drop workflow builder, customizable templates, and integration with popular business tools, Claude-Flow empowers users to automate their workflows without the need for extensive coding knowledge. Whether you are a small business owner looking to streamline your operations or a project manager seeking to automate task assignments, Claude-Flow offers a flexible and scalable solution to meet your workflow automation needs.

nodetool
NodeTool is a platform designed for AI enthusiasts, developers, and creators, providing a visual interface to access a variety of AI tools and models. It simplifies access to advanced AI technologies, offering resources for content creation, data analysis, automation, and more. With features like a visual editor, seamless integration with leading AI platforms, model manager, and API integration, NodeTool caters to both newcomers and experienced users in the AI field.
For similar tasks

MiniAgents
MiniAgents is an open-source Python framework designed to simplify the creation of multi-agent AI systems. It offers a parallelism and async-first design, allowing users to focus on building intelligent agents while handling concurrency challenges. The framework, built on asyncio, supports LLM-based applications with immutable messages and seamless asynchronous token and message streaming between agents.


trpc-agent-go
A powerful Go framework for building intelligent agent systems with large language models (LLMs), hierarchical planners, memory, telemetry, and a rich tool ecosystem. tRPC-Agent-Go enables the creation of autonomous or semi-autonomous agents that reason, call tools, collaborate with sub-agents, and maintain long-term state. The framework provides detailed documentation, examples, and tools for accelerating the development of AI applications.

ISEK
ISEK is a decentralized agent network framework that enables building intelligent, collaborative agent-to-agent systems. It integrates the Google A2A protocol and ERC-8004 contracts for identity registration, reputation building, and cooperative task-solving, creating a self-organizing, decentralized society of agents. The platform addresses challenges in the agent ecosystem by providing an incentive system for users to pay for agent services, motivating developers to build high-quality agents and fostering innovation and quality in the ecosystem. ISEK focuses on decentralized agent collaboration and coordination, allowing agents to find each other, reason together, and act as a decentralized system without central control. The platform utilizes ERC-8004 for decentralized identity, reputation, and validation registries, establishing trustless verification and reputation management.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Use cases include setting LLM usage limits for users on different pricing tiers, tracking usage per user and per organization, blocking or redacting requests containing PII, improving reliability with failovers, retries, and caching, and distributing API keys with rate and cost limits for internal development, production, or student use.

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.