AutoAgents Logo

AutoAgents

A Modern Multi-Agent Framework in Rust

Crates.io Documentation License Build Status codecov

Documentation | Examples | Contributing


🚀 Overview

AutoAgents is a multi-agent framework built in Rust that enables the creation of intelligent, autonomous agents powered by Large Language Models (LLMs) and Ractor. Designed for performance, safety, and scalability, AutoAgents provides a robust foundation for building complex AI systems that can reason, act, and collaborate. With AutoAgents you can build cloud-native agents, edge-native agents, and hybrid deployments, and the framework is extensible enough that other ML models can be composed into complex pipelines through the underlying actor framework.


✨ Key Features

🔧 Extensive Tool Integration

  • Built-in Tools: File operations, web scraping, API calls, and more coming soon!
  • Custom Tools: Easy integration of external tools and services (see the sketch after this list)
  • Tool Chaining: Complex workflows through tool composition
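
As a sketch of the custom-tool extension point, here is a hypothetical WordCount tool. It follows the same #[tool] / ToolInput / ToolRuntime pattern as the Addition tool in the Quick Start below; the tool itself (its name, argument struct, and word-counting logic) is invented for illustration.

use autoagents::core::tool::{ToolCallError, ToolInputT, ToolRuntime, ToolT};
use autoagents_derive::{tool, ToolInput};
use serde::{Deserialize, Serialize};
use serde_json::Value;

// Hypothetical argument type; the #[input] descriptions are surfaced to the LLM.
#[derive(Serialize, Deserialize, ToolInput, Debug)]
pub struct WordCountArgs {
    #[input(description = "Text whose words should be counted")]
    text: String,
}

#[tool(
    name = "WordCount",
    description = "Use this tool to count whitespace-separated words in a text",
    input = WordCountArgs,
)]
struct WordCount {}

impl ToolRuntime for WordCount {
    fn execute(&self, args: Value) -> Result<Value, ToolCallError> {
        // Deserialize the JSON arguments into the typed struct, do the work,
        // and return a JSON value, mirroring the Addition tool in the Quick Start.
        let typed_args: WordCountArgs = serde_json::from_value(args)?;
        let count = typed_args.text.split_whitespace().count() as i64;
        Ok(count.into())
    }
}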

๐Ÿ—๏ธ Flexible Architecture

  • Modular Design: Plugin-based architecture for easy extensibility
  • Provider Agnostic: Support for multiple LLM providers
  • Memory Systems: Configurable memory backends (sliding window, persistent, etc.)

📊 Structured Outputs

  • JSON Schema Support: Type-safe agent responses with automatic validation
  • Custom Output Types: Define complex structured outputs for your agents (see the sketch after this list)
  • Serialization: Built-in support for various data formats
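
As a sketch of a custom output type, the hypothetical ReportOutput below follows the same AgentOutput derive and #[output(...)] attribute pattern as MathAgentOutput in the Quick Start; the field names and descriptions are invented for illustration.

use autoagents::core::agent::AgentOutputT;
use autoagents_derive::AgentOutput;
use serde::{Deserialize, Serialize};

// Each #[output] description becomes part of the JSON schema the agent is
// asked to satisfy, giving type-safe, validated responses.
#[derive(Debug, Serialize, Deserialize, AgentOutput)]
pub struct ReportOutput {
    #[output(description = "Short title of the report")]
    title: String,
    #[output(description = "Plain-text summary of the findings")]
    summary: String,
    #[output(description = "Optional follow-up question for the user")]
    follow_up: Option<String>,
}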

🕹️ WASM Runtime for Tool Execution

  • Sandboxed Environment: Secure and isolated execution of tools using WebAssembly
  • Cross-Platform Compatibility: Run tools uniformly across diverse platforms and architectures
  • Fast Startup & Low Overhead: Near-native performance with minimal resource consumption
  • Safe Resource Control: Limit CPU, memory, and execution time to prevent runaway processes
  • Extensibility: Easily add new tools from the Hub (coming soon!)

🎯 ReAct Framework

  • Reasoning: Advanced reasoning capabilities with step-by-step logic (see the wiring sketch after this list)
  • Acting: Tool execution with intelligent decision making
  • Observation: Environmental feedback and adaptation
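
Wiring an agent into this reason-act-observe loop is a two-step pattern, shown in full in the Quick Start below: derive the agent with its tools and output type, then attach the prebuilt ReAct executor. The ReportAgent here is hypothetical and reuses the WordCount and ReportOutput sketches from above.

use autoagents::core::agent::prebuilt::executor::ReActExecutor;
use autoagents::core::agent::{AgentDeriveT, AgentOutputT};
use autoagents_derive::agent;

// Hypothetical agent definition following the MathAgent pattern from the Quick Start.
#[agent(
    name = "report_agent",
    description = "You are an analyst agent that writes short reports",
    tools = [WordCount],
    output = ReportOutput
)]
pub struct ReportAgent {}

// The prebuilt executor supplies the loop: reason about the task, act by
// calling tools such as WordCount, observe the results, and repeat.
impl ReActExecutor for ReportAgent {}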

🤖 Multi-Agent Orchestration

  • Agent Coordination: Seamless communication and collaboration between multiple agents
  • Type-Safe Pub/Sub: Rust-native publish/subscribe messaging with compile-time type safety (see the sketch after this list)
  • Knowledge Sharing: Shared memory and context between agents (In Roadmap)
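
A minimal fan-out sketch of the pub/sub surface, assuming the Topic, AgentBuilder, SingleThreadedRuntime, and Task APIs used in the Quick Start below. The two ReportAgent instances come from the hypothetical sketch above; environment registration and event handling proceed exactly as in the Quick Start.

use autoagents::core::actor::Topic;
use autoagents::core::agent::memory::SlidingWindowMemory;
use autoagents::core::agent::task::Task;
use autoagents::core::agent::AgentBuilder;
use autoagents::core::error::Error;
use autoagents::core::runtime::{SingleThreadedRuntime, TypedRuntime};
use autoagents::llm::LLMProvider;
use std::sync::Arc;

pub async fn fan_out(llm: Arc<dyn LLMProvider>) -> Result<(), Error> {
    let runtime = SingleThreadedRuntime::new(None);
    // A typed topic: only Task messages can be published to it.
    let reports = Topic::<Task>::new("reports");

    // Two agents subscribe to the same topic...
    let _writer = AgentBuilder::new(ReportAgent {})
        .with_llm(llm.clone())
        .runtime(runtime.clone())
        .subscribe_topic(reports.clone())
        .with_memory(Box::new(SlidingWindowMemory::new(10)))
        .build()
        .await?;
    let _reviewer = AgentBuilder::new(ReportAgent {})
        .with_llm(llm)
        .runtime(runtime.clone())
        .subscribe_topic(reports.clone())
        .with_memory(Box::new(SlidingWindowMemory::new(10)))
        .build()
        .await?;

    // ...so a single publish reaches both of them. Register `runtime` with an
    // Environment and run it, as in the Quick Start, to process the task.
    runtime.publish(&reports, Task::new("Summarize this week's activity")).await?;
    Ok(())
}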

๐ŸŒ Supported LLM Providers

AutoAgents supports a wide range of LLM providers, allowing you to choose the best fit for your use case:

Provider            Status
LiquidEdge (ONNX)   ✅
OpenAI              ✅
Anthropic           ✅
Ollama              ✅
DeepSeek            ✅
xAI                 ✅
Phind               ✅
Groq                ✅
Google              ✅
Azure OpenAI        ✅

Provider support is actively expanding based on community needs.
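
All of these backends sit behind the LLMProvider trait, so agent code never names a concrete provider; the Quick Start's simple_agent below, for example, simply takes an Arc<dyn LLMProvider>. A minimal sketch of that seam; the helper is hypothetical and its body is elided.

use autoagents::llm::LLMProvider;
use std::sync::Arc;

// Agent, tool, and memory code depend only on the trait object, so switching
// from OpenAI to Ollama (or any other row above) is a change at the call site,
// not in the agent definitions.
async fn run_with(llm: Arc<dyn LLMProvider>) {
    // Build agents with AgentBuilder::new(..).with_llm(llm) as in the Quick Start.
    let _ = llm;
}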


📦 Installation

Development Setup

For contributing to AutoAgents or building from source:

Prerequisites

  • Rust (latest stable recommended)
  • Cargo package manager
  • LeftHook for Git hooks management

Install LeftHook

macOS (using Homebrew):

brew install lefthook

Linux/Windows:

# Using npm
npm install -g lefthook

Clone and Setup

# Clone the repository
git clone https://github.com/liquidos-ai/AutoAgents.git
cd AutoAgents

# Install Git hooks using lefthook
lefthook install

# Build the project
cargo build --release

# Run tests to verify setup
cargo test --all-features

The lefthook configuration will automatically:

  • Format code with cargo fmt
  • Run linting with cargo clippy
  • Execute tests before commits

🚀 Quick Start

Basic Usage

use autoagents::core::actor::Topic;
use autoagents::core::agent::memory::SlidingWindowMemory;
use autoagents::core::agent::prebuilt::executor::{ReActAgentOutput, ReActExecutor};
use autoagents::core::agent::task::Task;
use autoagents::core::agent::{AgentBuilder, AgentDeriveT, AgentOutputT};
use autoagents::core::environment::Environment;
use autoagents::core::error::Error;
use autoagents::core::protocol::{Event, TaskResult};
use autoagents::core::runtime::{SingleThreadedRuntime, TypedRuntime};
use autoagents::core::tool::{ToolCallError, ToolInputT, ToolRuntime, ToolT};
use autoagents::llm::LLMProvider;
use autoagents_derive::{agent, tool, AgentOutput, ToolInput};
use colored::*;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::sync::Arc;
use tokio_stream::{wrappers::ReceiverStream, StreamExt};

#[derive(Serialize, Deserialize, ToolInput, Debug)]
pub struct AdditionArgs {
    #[input(description = "Left Operand for addition")]
    left: i64,
    #[input(description = "Right Operand for addition")]
    right: i64,
}

#[tool(
    name = "Addition",
    description = "Use this tool to Add two numbers",
    input = AdditionArgs,
)]
struct Addition {}

impl ToolRuntime for Addition {
    fn execute(&self, args: Value) -> Result<Value, ToolCallError> {
        let typed_args: AdditionArgs = serde_json::from_value(args)?;
        let result = typed_args.left + typed_args.right;
        Ok(result.into())
    }
}

/// Math agent output with Value and Explanation
#[derive(Debug, Serialize, Deserialize, AgentOutput)]
pub struct MathAgentOutput {
    #[output(description = "The addition result")]
    value: i64,
    #[output(description = "Explanation of the logic")]
    explanation: String,
    #[output(description = "If user asks other than math questions, use this to answer them.")]
    generic: Option<String>,
}

#[agent(
    name = "math_agent",
    description = "You are a Math agent",
    tools = [Addition],
    output = MathAgentOutput
)]
pub struct MathAgent {}

impl ReActExecutor for MathAgent {}

pub async fn simple_agent(llm: Arc<dyn LLMProvider>) -> Result<(), Error> {
    let sliding_window_memory = Box::new(SlidingWindowMemory::new(10));

    let agent = MathAgent {};

    let runtime = SingleThreadedRuntime::new(None);

    let test_topic = Topic::<Task>::new("test");

    let agent_handle = AgentBuilder::new(agent)
        .with_llm(llm)
        .runtime(runtime.clone())
        .subscribe_topic(test_topic.clone())
        .with_memory(sliding_window_memory)
        .build()
        .await?;

    // Create environment and set up event handling
    let mut environment = Environment::new(None);
    let _ = environment.register_runtime(runtime.clone()).await;

    let receiver = environment.take_event_receiver(None).await?;
    handle_events(receiver);

    // Publish message to all the subscribing actors
    runtime.publish(&Topic::<Task>::new("test"), Task::new("what is 2 + 2?")).await?;
    // Send a direct message for memory test
    println!("\n📧 Sending direct message to test memory...");
    runtime.send_message(Task::new("What was the question I asked?"), agent_handle.addr()).await?;

    let _ = environment.run().await;
    Ok(())
}

fn handle_events(event_stream: Option<ReceiverStream<Event>>) {
    if let Some(mut event_stream) = event_stream {
        tokio::spawn(async move {
            while let Some(event) = event_stream.next().await {
                // Only completed tasks that carry a value are of interest here.
                if let Event::TaskComplete {
                    result: TaskResult::Value(val),
                    ..
                } = event
                {
                    // The ReAct executor wraps the agent's structured output as a
                    // JSON string inside its response envelope.
                    let agent_out: ReActAgentOutput = serde_json::from_value(val).unwrap();
                    let math_out: MathAgentOutput =
                        serde_json::from_str(&agent_out.response).unwrap();
                    println!(
                        "{}",
                        format!(
                            "Math Value: {}, Explanation: {}",
                            math_out.value, math_out.explanation
                        )
                        .green()
                    );
                }
            }
        });
    }
}
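
The Quick Start defines simple_agent but leaves out the entry point. Below is a minimal sketch of a main that drives it, assuming a Tokio runtime; the provider construction is left as a placeholder because it depends on which backend feature you enable.

use autoagents::core::error::Error;
use autoagents::llm::LLMProvider;
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Placeholder: construct the backend you compiled in (OpenAI, Anthropic,
    // Ollama, ...); the exact constructor depends on the enabled provider
    // feature and typically reads an API key such as OPENAI_API_KEY.
    let llm: Arc<dyn LLMProvider> = todo!("construct your LLM provider here");
    simple_agent(llm).await
}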

📚 Examples

Explore our comprehensive examples to get started quickly:

A simple agent demonstrating core functionality and event-driven architecture.

export OPENAI_API_KEY="your-api-key"
cargo run --package basic-example -- --usecase simple

A simple agent that runs its tools inside the WASM runtime.

export OPENAI_API_KEY="your-api-key"
cargo run --package wasm-runner

A sophisticated ReAct-based coding agent with file manipulation capabilities.

export OPENAI_API_KEY="your-api-key"
cargo run --package coding_agent -- --usecase interactive

๐Ÿ—๏ธ Architecture

AutoAgents Architecture

AutoAgents is built with a modular architecture:

AutoAgents/
├── crates/
│   ├── autoagents/     # Main library entry point
│   ├── core/           # Core agent framework
│   ├── llm/            # LLM provider implementations
│   ├── liquid-edge/    # Edge runtime implementation
│   └── derive/         # Procedural macros
├── examples/           # Example implementations

Core Components

  • Agent: The fundamental unit of intelligence
  • Environment: Manages agent lifecycle and communication
  • Memory: Configurable memory systems
  • Tools: External capability integration
  • Executors: Different reasoning patterns (ReAct, Chain-of-Thought)

🛠️ Development

Setup

For development setup instructions, see the Installation section above.

Running Tests

# Run all tests
cargo test --all-features

# Run tests with coverage (requires cargo-tarpaulin)
cargo install cargo-tarpaulin
cargo tarpaulin --all-features --out html

Git Hooks

This project uses LeftHook for Git hooks management. The hooks will automatically:

  • Format code with cargo fmt --check
  • Run linting with cargo clippy -- -D warnings
  • Execute tests with cargo test --features full

Contributing

We welcome contributions! Please see our Contributing Guidelines and Code of Conduct for details.


📖 Documentation


๐Ÿค Community

  • GitHub Issues: Bug reports and feature requests
  • Discussions: Community Q&A and ideas
  • Discord: Join our community at https://discord.gg/Ghau8xYn

📊 Performance

AutoAgents is designed for high performance:

  • Memory Efficient: Optimized memory usage with configurable backends
  • Concurrent: Full async/await support with tokio
  • Scalable: Horizontal scaling with multi-agent coordination
  • Type Safe: Compile-time guarantees with Rust's type system

📜 License

AutoAgents is dual-licensed; you may choose either license for your use case.


๐Ÿ™ Acknowledgments

Built with ❤️ by the Liquidos AI team and our amazing community contributors.

Special thanks to:

  • The Rust community for the excellent ecosystem
  • OpenAI, Anthropic, and other LLM providers for their APIs
  • All contributors who help make AutoAgents better

Ready to build intelligent agents? Get started with AutoAgents today!

โญ Star us on GitHub | ๐Ÿ› Report Issues | ๐Ÿ’ฌ Join Discussions

Star History

Star History Chart
