
LocalAGI
LocalAGI is a powerful, self-hostable AI Agent platform designed for maximum privacy and flexibility. A complete drop-in replacement for OpenAI's Responses APIs with advanced agentic capabilities. No clouds. No data leaks. Just pure local AI that works on consumer-grade hardware (CPU and GPU).
Stars: 1153

LocalAGI is a powerful, self-hostable AI Agent platform that allows you to design AI automations without writing code. It provides a complete drop-in replacement for OpenAI's Responses APIs with advanced agentic capabilities. With LocalAGI, you can create customizable AI assistants, automations, chat bots, and agents that run 100% locally, without the need for cloud services or API keys. The platform offers features like no-code agents, web-based interface, advanced agent teaming, connectors for various platforms, comprehensive REST API, short & long-term memory capabilities, planning & reasoning, periodic tasks scheduling, memory management, multimodal support, extensible custom actions, fully customizable models, observability, and more.
README:
Create customizable AI assistants, automations, chat bots and agents that run 100% locally. No need for agentic Python libraries or cloud service keys, just bring your GPU (or even just CPU) and a web browser.
LocalAGI is a powerful, self-hostable AI Agent platform that allows you to design AI automations without writing code. A complete drop-in replacement for OpenAI's Responses APIs with advanced agentic capabilities. No clouds. No data leaks. Just pure local AI that works on consumer-grade hardware (CPU and GPU).
Are you tired of AI wrappers calling out to cloud APIs, risking your privacy? So were we.
LocalAGI ensures your data stays exactly where you want it: on your hardware. No API keys, no cloud subscriptions, no compromise.
- No-Code Agents: Easy-to-configure multiple agents via Web UI.
- Web-Based Interface: Simple and intuitive agent management.
- Advanced Agent Teaming: Instantly create cooperative agent teams from a single prompt.
- Connectors: Built-in integrations with Discord, Slack, Telegram, GitHub Issues, and IRC.
- Comprehensive REST API: Seamless integration into your workflows. Every agent created will support the OpenAI Responses API out of the box.
- Short & Long-Term Memory: Powered by LocalRecall.
- Planning & Reasoning: Agents intelligently plan, reason, and adapt.
- Periodic Tasks: Schedule tasks with cron-like syntax.
- Memory Management: Control memory usage with options for long-term and summary memory.
- Multimodal Support: Ready for vision, text, and more.
- Extensible Custom Actions: Easily script dynamic agent behaviors in Go (interpreted, no compilation!).
- Fully Customizable Models: Use your own models or integrate seamlessly with LocalAI.
- Observability: Monitor agent status and view detailed observable updates in real-time.
# Clone the repository
git clone https://github.com/mudler/LocalAGI
cd LocalAGI
# CPU setup (default)
docker compose up
# NVIDIA GPU setup
docker compose -f docker-compose.nvidia.yaml up
# Intel GPU setup (for Intel Arc and integrated GPUs)
docker compose -f docker-compose.intel.yaml up
# AMD GPU setup
docker compose -f docker-compose.amd.yaml up
# Start with a specific model (see available models at models.localai.io, or localai.io to use any model from Hugging Face)
MODEL_NAME=gemma-3-12b-it docker compose up
# NVIDIA GPU setup with custom multimodal and image models
MODEL_NAME=gemma-3-12b-it \
MULTIMODAL_MODEL=moondream2-20250414 \
IMAGE_MODEL=flux.1-dev-ggml \
docker compose -f docker-compose.nvidia.yaml up
Now you can access and manage your agents at http://localhost:8080
Still having issues? See this YouTube video: https://youtu.be/HtVwIxW3ePg
LocalAI is now part of a comprehensive suite of AI tools designed to work together.
LocalAGI supports multiple hardware configurations through Docker Compose profiles:
CPU (default)
- No special configuration needed
- Runs on any system with Docker
- Best for testing and development
- Supports text models only

NVIDIA GPU
- Requires NVIDIA GPU and drivers
- Uses CUDA for acceleration
- Best for high-performance inference
- Supports text, multimodal, and image generation models
- Run with: docker compose -f docker-compose.nvidia.yaml up
- Default models:
  - Text: gemma-3-4b-it-qat
  - Multimodal: moondream2-20250414
  - Image: sd-1.5-ggml
- Environment variables:
  - MODEL_NAME: Text model to use
  - MULTIMODAL_MODEL: Multimodal model to use
  - IMAGE_MODEL: Image generation model to use
  - LOCALAI_SINGLE_ACTIVE_BACKEND: Set to true to enable single active backend mode

Intel GPU
- Supports Intel Arc and integrated GPUs
- Uses SYCL for acceleration
- Best for Intel-based systems
- Supports text, multimodal, and image generation models
- Run with: docker compose -f docker-compose.intel.yaml up
- Default models:
  - Text: gemma-3-4b-it-qat
  - Multimodal: moondream2-20250414
  - Image: sd-1.5-ggml
- Environment variables:
  - MODEL_NAME: Text model to use
  - MULTIMODAL_MODEL: Multimodal model to use
  - IMAGE_MODEL: Image generation model to use
  - LOCALAI_SINGLE_ACTIVE_BACKEND: Set to true to enable single active backend mode
You can customize the models used by LocalAGI by setting environment variables when running docker-compose. For example:
# CPU with custom model
MODEL_NAME=gemma-3-12b-it docker compose up
# NVIDIA GPU with custom models
MODEL_NAME=gemma-3-12b-it \
MULTIMODAL_MODEL=moondream2-20250414 \
IMAGE_MODEL=flux.1-dev-ggml \
docker compose -f docker-compose.nvidia.yaml up
# Intel GPU with custom models
MODEL_NAME=gemma-3-12b-it \
MULTIMODAL_MODEL=moondream2-20250414 \
IMAGE_MODEL=sd-1.5-ggml \
docker compose -f docker-compose.intel.yaml up
# With custom actions directory
LOCALAGI_CUSTOM_ACTIONS_DIR=/app/custom-actions docker compose up
If no models are specified, it will use the defaults:
- Text model: gemma-3-4b-it-qat
- Multimodal model: moondream2-20250414
- Image model: sd-1.5-ggml
Good (relatively small) models that have been tested are:
- qwen_qwq-32b (best at coordinating agents)
- gemma-3-12b-it
- gemma-3-27b-it
- Ultimate Privacy: No data ever leaves your hardware.
- Flexible Model Integration: Supports GGUF, GGML, and more thanks to LocalAI.
- Developer-Friendly: Rich APIs and intuitive interfaces.
- Effortless Setup: Simple Docker Compose setups and pre-built binaries.
- Feature-Rich: From planning and multimodal capabilities to Slack connectors and MCP support, LocalAGI has it all.
Explore the detailed documentation below, which covers connectors, the REST API, custom actions, and agent configuration.
Download ready-to-run binaries from the Releases page.
Requirements:
- Go 1.20+
- Git
- Bun 1.2+
# Clone repo
git clone https://github.com/mudler/LocalAGI.git
cd LocalAGI
# Build it
cd webui/react-ui && bun i && bun run build
cd ../..
go build -o localagi
# Run it
./localagi
LocalAGI can be used as a Go library to programmatically create and manage AI agents. Let's start with a simple example of creating a single agent:
Basic Usage: Single Agent
import (
	"log"

	"github.com/mudler/LocalAGI/core/agent"
	"github.com/mudler/LocalAGI/core/types"
)
// Create a new agent with basic configuration
agent, err := agent.New(
agent.WithModel("gpt-4"),
agent.WithLLMAPIURL("http://localhost:8080"),
agent.WithLLMAPIKey("your-api-key"),
agent.WithSystemPrompt("You are a helpful assistant."),
agent.WithCharacter(agent.Character{
Name: "my-agent",
}),
agent.WithActions(
// Add your custom actions here
),
agent.WithStateFile("./state/my-agent.state.json"),
agent.WithCharacterFile("./state/my-agent.character.json"),
agent.WithTimeout("10m"),
agent.EnableKnowledgeBase(),
agent.EnableReasoning(),
)
if err != nil {
log.Fatal(err)
}
// Start the agent
go func() {
if err := agent.Run(); err != nil {
log.Printf("Agent stopped: %v", err)
}
}()
// Stop the agent when done
agent.Stop()
This basic example shows how to:
- Create a single agent with essential configuration
- Set up the agent's model and API connection
- Configure basic features like knowledge base and reasoning
- Start and stop the agent (a signal-handling sketch follows below)
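If the agent runs inside a long-lived service, you will usually want Run and Stop wired to process signals. A minimal sketch, assuming the agent value constructed in the example above; the signal handling itself is plain Go standard library, not a LocalAGI API:

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

// Run the agent in the background and stop it cleanly on Ctrl+C / SIGTERM.
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)

go func() {
	if err := agent.Run(); err != nil {
		log.Printf("Agent stopped: %v", err)
	}
}()

<-sigCh      // block until a shutdown signal arrives
agent.Stop() // then ask the agent to stop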
Advanced Usage: Agent Pools
For managing multiple agents, you can use the AgentPool system:
import (
	"context"

	"github.com/mudler/LocalAGI/core/state"
	"github.com/mudler/LocalAGI/core/types"
)
// Create a new agent pool
pool, err := state.NewAgentPool(
"default-model", // default model name
"default-multimodal-model", // default multimodal model
"image-model", // image generation model
"http://localhost:8080", // API URL
"your-api-key", // API key
"./state", // state directory
"", // MCP box URL (optional)
"http://localhost:8081", // LocalRAG API URL
func(config *AgentConfig) func(ctx context.Context, pool *AgentPool) []types.Action {
// Define available actions for agents
return func(ctx context.Context, pool *AgentPool) []types.Action {
return []types.Action{
// Add your custom actions here
}
}
},
func(config *AgentConfig) []Connector {
// Define connectors for agents
return []Connector{
// Add your custom connectors here
}
},
func(config *AgentConfig) []DynamicPrompt {
// Define dynamic prompts for agents
return []DynamicPrompt{
// Add your custom prompts here
}
},
func(config *AgentConfig) types.JobFilters {
// Define job filters for agents
return types.JobFilters{
// Add your custom filters here
}
},
"10m", // timeout
true, // enable conversation logs
)
// Create a new agent in the pool
agentConfig := &AgentConfig{
Name: "my-agent",
Model: "gpt-4",
SystemPrompt: "You are a helpful assistant.",
EnableKnowledgeBase: true,
EnableReasoning: true,
// Add more configuration options as needed
}
err = pool.CreateAgent("my-agent", agentConfig)
// Start all agents
err = pool.StartAll()
// Get agent status
status := pool.GetStatusHistory("my-agent")
// Stop an agent
pool.Stop("my-agent")
// Remove an agent
err = pool.Remove("my-agent")
Available Features
Key features available through the library:
- Single Agent Management: Create and manage individual agents with basic configuration
- Agent Pool Management: Create, start, stop, and remove multiple agents
- Configuration: Customize agent behavior through AgentConfig
- Actions: Define custom actions for agents to perform
- Connectors: Add custom connectors for external services
- Dynamic Prompts: Create dynamic prompt templates
- Job Filters: Implement custom job filtering logic
- Status Tracking: Monitor agent status and history
- State Persistence: Automatic state saving and loading
For more details about available configuration options and features, refer to the Agent Configuration Reference section.
LocalAGI provides two powerful ways to extend its functionality with custom actions:
LocalAGI supports custom actions written in Go that can be defined inline when creating an agent. These actions are interpreted at runtime, so no compilation is required.
You can also place custom Go action files in a directory and have LocalAGI automatically load them. Set the LOCALAGI_CUSTOM_ACTIONS_DIR environment variable to point to a directory containing your custom action files. Each .go file in this directory will be automatically loaded and made available to all agents.
Example setup:
# Set the environment variable
export LOCALAGI_CUSTOM_ACTIONS_DIR="/path/to/custom/actions"
# Or in docker-compose.yaml
environment:
- LOCALAGI_CUSTOM_ACTIONS_DIR=/app/custom-actions
Directory structure:
custom-actions/
├── weather_action.go
├── file_processor.go
└── database_query.go
Each file should contain the three required functions (Run, Definition, RequiredFields) as described below.
When creating a new agent, select the "custom" action in the actions section; you can then add the Go code directly there.
Custom actions in LocalAGI require three main functions:
- Run(config map[string]interface{}) (string, map[string]interface{}, error): the main execution function
- Definition() map[string][]string: defines the action's parameters and their types
- RequiredFields() []string: specifies which parameters are required
Note: You can't pull in additional modules; only libraries included in the Go standard library are available.
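Before the full examples, here is a minimal skeleton that shows just this contract; the greeting logic is a made-up placeholder, not a built-in action:

import "fmt"

// Run receives the parameters the agent filled in and returns the result text.
func Run(config map[string]interface{}) (string, map[string]interface{}, error) {
	name, _ := config["name"].(string)
	return fmt.Sprintf("Hello, %s!", name), map[string]interface{}{}, nil
}

// Definition describes each parameter: its type followed by a description.
func Definition() map[string][]string {
	return map[string][]string{
		"name": {"string", "Who to greet"},
	}
}

// RequiredFields lists the parameters the agent must always provide.
func RequiredFields() []string {
	return []string{"name"}
}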
Here's a practical example of a custom action that fetches weather information:
import (
"encoding/json"
"fmt"
"net/http"
"io"
)
type WeatherParams struct {
City string `json:"city"`
Country string `json:"country"`
}
type WeatherResponse struct {
Main struct {
Temp float64 `json:"temp"`
Humidity int `json:"humidity"`
} `json:"main"`
Weather []struct {
Description string `json:"description"`
} `json:"weather"`
}
func Run(config map[string]interface{}) (string, map[string]interface{}, error) {
// Parse parameters
p := WeatherParams{}
b, err := json.Marshal(config)
if err != nil {
return "", map[string]interface{}{}, err
}
if err := json.Unmarshal(b, &p); err != nil {
return "", map[string]interface{}{}, err
}
// Make API call to weather service
url := fmt.Sprintf("http://api.openweathermap.org/data/2.5/weather?q=%s,%s&appid=YOUR_API_KEY&units=metric", p.City, p.Country)
resp, err := http.Get(url)
if err != nil {
return "", map[string]interface{}{}, err
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return "", map[string]interface{}{}, err
}
var weather WeatherResponse
if err := json.Unmarshal(body, &weather); err != nil {
return "", map[string]interface{}{}, err
}
// Format response
result := fmt.Sprintf("Weather in %s, %s: %.1f°C, %s, Humidity: %d%%",
p.City, p.Country, weather.Main.Temp, weather.Weather[0].Description, weather.Main.Humidity)
return result, map[string]interface{}{}, nil
}
func Definition() map[string][]string {
return map[string][]string{
"city": []string{
"string",
"The city name to get weather for",
},
"country": []string{
"string",
"The country code (e.g., US, UK, DE)",
},
}
}
func RequiredFields() []string {
return []string{"city", "country"}
}
Here's another example that demonstrates file system operations:
import (
	"encoding/json"
	"fmt"
	"os"
)
type FileParams struct {
Path string `json:"path"`
Action string `json:"action"`
Content string `json:"content,omitempty"`
}
func Run(config map[string]interface{}) (string, map[string]interface{}, error) {
p := FileParams{}
b, err := json.Marshal(config)
if err != nil {
return "", map[string]interface{}{}, err
}
if err := json.Unmarshal(b, &p); err != nil {
return "", map[string]interface{}{}, err
}
switch p.Action {
case "read":
content, err := os.ReadFile(p.Path)
if err != nil {
return "", map[string]interface{}{}, err
}
return string(content), map[string]interface{}{}, nil
case "write":
err := os.WriteFile(p.Path, []byte(p.Content), 0644)
if err != nil {
return "", map[string]interface{}{}, err
}
return fmt.Sprintf("Successfully wrote to %s", p.Path), map[string]interface{}{}, nil
case "list":
files, err := os.ReadDir(p.Path)
if err != nil {
return "", map[string]interface{}{}, err
}
var fileList []string
for _, file := range files {
fileList = append(fileList, file.Name())
}
result, _ := json.Marshal(fileList)
return string(result), map[string]interface{}{}, nil
default:
return "", map[string]interface{}{}, fmt.Errorf("unknown action: %s", p.Action)
}
}
func Definition() map[string][]string {
return map[string][]string{
"path": []string{
"string",
"The file or directory path",
},
"action": []string{
"string",
"The action to perform: read, write, or list",
},
"content": []string{
"string",
"Content to write (required for write action)",
},
}
}
func RequiredFields() []string {
return []string{"path", "action"}
}
To use custom actions, add them to your agent configuration:
- Via Web UI: In the agent creation form, add a "Custom" action and paste your Go code
- Via API: Include the custom action in your agent configuration JSON
- Via Library: Add the custom action to your agent's actions list
LocalAGI supports both local and remote MCP servers, allowing you to extend functionality with external tools and services.
The Model Context Protocol (MCP) is a standard for connecting AI applications to external data sources and tools. LocalAGI can connect to any MCP-compliant server to access additional capabilities.
Local MCP servers run as processes that LocalAGI can spawn and communicate with via STDIO.
{
"mcpServers": {
"github": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"GITHUB_PERSONAL_ACCESS_TOKEN",
"ghcr.io/github/github-mcp-server"
],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"
}
}
}
}
Remote MCP servers are HTTP-based and can be accessed over the network.
You can create MCP servers in any language that supports the MCP protocol and add the URLs of the servers to LocalAGI.
- Via Web UI: In the MCP Settings section of agent creation, add MCP servers
- Via API: Include MCP server configuration in your agent config
- Security: Always validate inputs and use proper authentication for remote MCP servers
- Error Handling: Implement robust error handling in your MCP servers
- Documentation: Provide clear descriptions for all tools exposed by your MCP server
- Testing: Test your MCP servers independently before integrating with LocalAGI
- Resource Management: Ensure your MCP servers properly clean up resources
The development workflow is similar to the source build, but with additional steps for hot reloading of the frontend:
# Clone repo
git clone https://github.com/mudler/LocalAGI.git
cd LocalAGI
# Install dependencies and start frontend development server
cd webui/react-ui && bun i && bun run dev
Then in separate terminal:
# Start development server
cd ../.. && go run main.go
Note: see webui/react-ui/.vite.config.js for env vars that can be used to configure the backend URL
Link your agents to the services you already use. Configuration examples below.
GitHub Issues
{
"token": "YOUR_PAT_TOKEN",
"repository": "repo-to-monitor",
"owner": "repo-owner",
"botUserName": "bot-username"
}
Discord
After creating your Discord bot:
{
"token": "Bot YOUR_DISCORD_TOKEN",
"defaultChannel": "OPTIONAL_CHANNEL_ID"
}
Don't forget to enable "Message Content Intent" in the Bot tab of your Discord application settings!
Slack
Use the included slack.yaml
manifest to create your app, then configure:
{
"botToken": "xoxb-your-bot-token",
"appToken": "xapp-your-app-token"
}
- Create the OAuth bot token from "OAuth & Permissions" -> "OAuth Tokens for Your Workspace"
- Create an app-level token from "Basic Information" -> "App-Level Tokens" (scopes: connections:write, authorizations:read)
Telegram
Get a token from @botfather, then:
{
"token": "your-bot-father-token",
"group_mode": "true",
"mention_only": "true",
"admins": "username1,username2"
}
Configuration options:
- token: Your bot token from BotFather
- group_mode: Enable/disable group chat functionality
- mention_only: When enabled, the bot only responds when mentioned in groups
- admins: Comma-separated list of Telegram usernames allowed to use the bot in private chats
- channel_id: Optional channel ID for the bot to send messages to
Important: For group functionality to work properly:
- Go to @BotFather
- Select your bot
- Go to "Bot Settings" > "Group Privacy"
- Select "Turn off" to allow the bot to read all messages in groups
- Restart your bot after changing this setting
IRC
Connect to IRC networks:
{
"server": "irc.example.com",
"port": "6667",
"nickname": "LocalAGIBot",
"channel": "#yourchannel",
"alwaysReply": "false"
}
Email
{
"smtpServer": "smtp.gmail.com:587",
"imapServer": "imap.gmail.com:993",
"smtpInsecure": "false",
"imapInsecure": "false",
"username": "[email protected]",
"email": "[email protected]",
"password": "correct-horse-battery-staple",
"name": "LogalAGI Agent"
}
Agent Management
Endpoint | Method | Description |
---|---|---|
/api/agents | GET | List all available agents |
/api/agent/:name/status | GET | View agent status history |
/api/agent/create | POST | Create a new agent |
/api/agent/:name | DELETE | Remove an agent |
/api/agent/:name/pause | PUT | Pause agent activities |
/api/agent/:name/start | PUT | Resume a paused agent |
/api/agent/:name/config | GET | Get agent configuration |
/api/agent/:name/config | PUT | Update agent configuration |
/api/meta/agent/config | GET | Get agent configuration metadata |
/settings/export/:name | GET | Export agent config |
/settings/import | POST | Import agent config |
Actions and Groups
Endpoint | Method | Description |
---|---|---|
/api/actions | GET | List available actions |
/api/action/:name/run | POST | Execute an action |
/api/agent/group/generateProfiles | POST | Generate group profiles |
/api/agent/group/create | POST | Create a new agent group |
Chat Interactions
Endpoint | Method | Description |
---|---|---|
/api/chat/:name | POST | Send message & get response |
/api/notify/:name | POST | Send notification to agent |
/api/sse/:name | GET | Real-time agent event stream |
/v1/responses | POST | Send message & get response (OpenAI's Responses API) |
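Because /v1/responses follows OpenAI's Responses API, any OpenAI-compatible client can talk to an agent. A hedged sketch using only the Go standard library: the request body uses OpenAI's model/input shape, and whether the model field should carry the agent name or the underlying model name is an assumption to verify against your deployment (the curl examples below use port 3000):

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// askAgent posts an OpenAI-style Responses request and returns the raw JSON reply.
func askAgent(baseURL, model, input string) (string, error) {
	body, _ := json.Marshal(map[string]string{"model": model, "input": input})
	resp, err := http.Post(baseURL+"/v1/responses", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("responses call failed: %s: %s", resp.Status, raw)
	}
	return string(raw), nil
}

// Example: reply, err := askAgent("http://localhost:3000", "my-agent", "Hello!")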
Curl Examples
curl -X GET "http://localhost:3000/api/agents"
curl -X GET "http://localhost:3000/api/agent/my-agent/status"
curl -X POST "http://localhost:3000/api/agent/create" \
-H "Content-Type: application/json" \
-d '{
"name": "my-agent",
"model": "gpt-4",
"system_prompt": "You are an AI assistant.",
"enable_kb": true,
"enable_reasoning": true
}'
curl -X DELETE "http://localhost:3000/api/agent/my-agent"
curl -X PUT "http://localhost:3000/api/agent/my-agent/pause"
curl -X PUT "http://localhost:3000/api/agent/my-agent/start"
curl -X GET "http://localhost:3000/api/agent/my-agent/config"
curl -X PUT "http://localhost:3000/api/agent/my-agent/config" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4",
"system_prompt": "You are an AI assistant."
}'
curl -X GET "http://localhost:3000/settings/export/my-agent" --output my-agent.json
curl -X POST "http://localhost:3000/settings/import" \
-F "file=@/path/to/my-agent.json"
curl -X POST "http://localhost:3000/api/chat/my-agent" \
-H "Content-Type: application/json" \
-d '{"message": "Hello, how are you today?"}'
curl -X POST "http://localhost:3000/api/notify/my-agent" \
-H "Content-Type: application/json" \
-d '{"message": "Important notification"}'
curl -N -X GET "http://localhost:3000/api/sse/my-agent"
Note: For proper SSE handling, you should use a client that supports SSE natively.
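If you want to follow that stream from Go without an SSE library, here is a minimal sketch that reads the data: lines with the standard library (the agent name and port match the curl examples above):

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

// followEvents prints every SSE "data:" payload the agent emits until the stream closes.
func followEvents(baseURL, agentName string) error {
	resp, err := http.Get(baseURL + "/api/sse/" + agentName)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "data:") {
			fmt.Println(strings.TrimSpace(strings.TrimPrefix(line, "data:")))
		}
	}
	return scanner.Err()
}

// Example: followEvents("http://localhost:3000", "my-agent")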
Configuration Structure
The agent configuration defines how an agent behaves and what capabilities it has. You can view the available configuration options and their descriptions by using the metadata endpoint:
curl -X GET "http://localhost:3000/api/meta/agent/config"
This will return a JSON object containing all available configuration fields, their types, and descriptions.
Here's an example of the agent configuration structure:
{
"name": "my-agent",
"model": "gpt-4",
"multimodal_model": "gpt-4-vision",
"hud": true,
"standalone_job": false,
"random_identity": false,
"initiate_conversations": true,
"enable_planning": true,
"identity_guidance": "You are a helpful assistant.",
"periodic_runs": "0 * * * *",
"permanent_goal": "Help users with their questions.",
"enable_kb": true,
"enable_reasoning": true,
"kb_results": 5,
"can_stop_itself": false,
"system_prompt": "You are an AI assistant.",
"long_term_memory": true,
"summary_long_term_memory": false
}
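If you drive agent creation from Go instead of curl, the same JSON can be built from a struct. A hedged sketch covering only the fields shown above; the struct fields mirror the JSON keys, but check /api/meta/agent/config for the authoritative schema:

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// AgentConfigRequest mirrors a subset of the configuration JSON shown above.
type AgentConfigRequest struct {
	Name            string `json:"name"`
	Model           string `json:"model"`
	SystemPrompt    string `json:"system_prompt"`
	PeriodicRuns    string `json:"periodic_runs,omitempty"`
	EnableKB        bool   `json:"enable_kb"`
	EnableReasoning bool   `json:"enable_reasoning"`
	LongTermMemory  bool   `json:"long_term_memory"`
}

// createAgent posts the configuration to the agent creation endpoint.
func createAgent(baseURL string, cfg AgentConfigRequest) error {
	body, err := json.Marshal(cfg)
	if err != nil {
		return err
	}
	resp, err := http.Post(baseURL+"/api/agent/create", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("create failed: %s", resp.Status)
	}
	return nil
}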
Environment Configuration
LocalAGI supports environment configurations. Note that these environment variables need to be specified in the localagi container in the docker-compose file to take effect.
Variable | What It Does |
---|---|
LOCALAGI_MODEL | Your go-to model |
LOCALAGI_MULTIMODAL_MODEL | Optional model for multimodal capabilities |
LOCALAGI_LLM_API_URL | OpenAI-compatible API server URL |
LOCALAGI_LLM_API_KEY | API authentication |
LOCALAGI_TIMEOUT | Request timeout settings |
LOCALAGI_STATE_DIR | Where state gets stored |
LOCALAGI_LOCALRAG_URL | LocalRecall connection |
LOCALAGI_SSHBOX_URL | LocalAGI SSHBox URL, e.g. user:pass@ip:port |
LOCALAGI_MCPBOX_URL | LocalAGI MCPBox URL, e.g. http://mcpbox:8080 |
LOCALAGI_ENABLE_CONVERSATIONS_LOGGING | Toggle conversation logs |
LOCALAGI_API_KEYS | A comma-separated list of API keys used for authentication |
LOCALAGI_CUSTOM_ACTIONS_DIR | Directory containing custom Go action files to be automatically loaded |
MIT License. See the LICENSE file for details.
LOCAL PROCESSING. GLOBAL THINKING.
Made with ❤️ by mudler
Alternative AI tools for LocalAGI
Similar Open Source Tools


osaurus
Osaurus is a native, Apple Silicon-only local LLM server built on Apple's MLX for maximum performance on M-series chips. It is a SwiftUI app + SwiftNIO server with OpenAI-compatible and Ollama-compatible endpoints. The tool supports native MLX text generation, model management, streaming and non-streaming chat completions, OpenAI-compatible function calling, real-time system resource monitoring, and path normalization for API compatibility. Osaurus is designed for macOS 15.5+ and Apple Silicon (M1 or newer) with Xcode 16.4+ required for building from source.

cua
Cua is a tool for creating and running high-performance macOS and Linux virtual machines on Apple Silicon, with built-in support for AI agents. It provides libraries like Lume for running VMs with near-native performance, Computer for interacting with sandboxes, and Agent for running agentic workflows. Users can refer to the documentation for onboarding, explore demos showcasing AI-Gradio and GitHub issue fixing, and utilize accessory libraries like Core, PyLume, Computer Server, and SOM. Contributions are welcome, and the tool is open-sourced under the MIT License.

dingo
Dingo is a data quality evaluation tool that automatically detects data quality issues in datasets. It provides built-in rules and model evaluation methods, supports text and multimodal datasets, and offers local CLI and SDK usage. Dingo is designed for easy integration into evaluation platforms like OpenCompass.

quantalogic
QuantaLogic is a ReAct framework for building advanced AI agents that seamlessly integrates large language models with a robust tool system. It aims to bridge the gap between advanced AI models and practical implementation in business processes by enabling agents to understand, reason about, and execute complex tasks through natural language interaction. The framework includes features such as ReAct Framework, Universal LLM Support, Secure Tool System, Real-time Monitoring, Memory Management, and Enterprise Ready components.

lumen
Lumen is a command-line tool that leverages AI to enhance your git workflow. It assists in generating commit messages, understanding changes, interactive searching, and analyzing impacts without the need for an API key. With smart commit messages, git history insights, interactive search, change analysis, and rich markdown output, Lumen offers a seamless and flexible experience for users across various git workflows.

oxylabs-mcp
The Oxylabs MCP Server acts as a bridge between AI models and the web, providing clean, structured data from any site. It enables scraping of URLs, rendering JavaScript-heavy pages, content extraction for AI use, bypassing anti-scraping measures, and accessing geo-restricted web data from 195+ countries. The implementation utilizes the Model Context Protocol (MCP) to facilitate secure interactions between AI assistants and web content. Key features include scraping content from any site, automatic data cleaning and conversion, bypassing blocks and geo-restrictions, flexible setup with cross-platform support, and built-in error handling and request management.

nexus
Nexus is a tool that acts as a unified gateway for multiple LLM providers and MCP servers. It allows users to aggregate, govern, and control their AI stack by connecting multiple servers and providers through a single endpoint. Nexus provides features like MCP Server Aggregation, LLM Provider Routing, Context-Aware Tool Search, Protocol Support, Flexible Configuration, Security features, Rate Limiting, and Docker readiness. It supports tool calling, tool discovery, and error handling for STDIO servers. Nexus also integrates with AI assistants, Cursor, Claude Code, and LangChain for seamless usage.

Groq2API
Groq2API is a REST API wrapper around the Groq2 model, a large language model trained by Google. The API allows you to send text prompts to the model and receive generated text responses. The API is easy to use and can be integrated into a variety of applications.

mcp-documentation-server
The mcp-documentation-server is a lightweight server application designed to serve documentation files for projects. It provides a simple and efficient way to host and access project documentation, making it easy for team members and stakeholders to find and reference important information. The server supports various file formats, such as markdown and HTML, and allows for easy navigation through the documentation. With mcp-documentation-server, teams can streamline their documentation process and ensure that project information is easily accessible to all involved parties.

LLMVoX
LLMVoX is a lightweight 30M-parameter, LLM-agnostic, autoregressive streaming Text-to-Speech (TTS) system designed to convert text outputs from Large Language Models into high-fidelity streaming speech with low latency. It achieves significantly lower Word Error Rate compared to speech-enabled LLMs while operating at comparable latency and speech quality. Key features include being lightweight & fast with only 30M parameters, LLM-agnostic for easy integration with existing models, multi-queue streaming for continuous speech generation, and multilingual support for easy adaptation to new languages.

mcphub.nvim
MCPHub.nvim is a powerful Neovim plugin that integrates MCP (Model Context Protocol) servers into your workflow. It offers a centralized config file for managing servers and tools, with an intuitive UI for testing resources. Ideal for LLM integration, it provides programmatic API access and interactive testing through the `:MCPHub` command.

open-responses
OpenResponses API provides enterprise-grade AI capabilities through a powerful API, simplifying development and deployment while ensuring complete data control. It offers automated tracing, integrated RAG for contextual information retrieval, pre-built tool integrations, self-hosted architecture, and an OpenAI-compatible interface. The toolkit addresses development challenges like feature gaps and integration complexity, as well as operational concerns such as data privacy and operational control. Engineering teams can benefit from improved productivity, production readiness, compliance confidence, and simplified architecture by choosing OpenResponses.

json-translator
The json-translator repository provides a free tool to translate JSON/YAML files or JSON objects into different languages using various translation modules. It supports CLI usage and package support, allowing users to translate words, sentences, JSON objects, and JSON files. The tool also offers multi-language translation, ignoring specific words, and safe translation practices. Users can contribute to the project by updating CLI, translation functions, JSON operations, and more. The roadmap includes features like Libre Translate option, Argos Translate option, Bing Translate option, and support for additional translation modules.

MassGen
MassGen is a cutting-edge multi-agent system that leverages the power of collaborative AI to solve complex tasks. It assigns a task to multiple AI agents who work in parallel, observe each other's progress, and refine their approaches to converge on the best solution to deliver a comprehensive and high-quality result. The system operates through an architecture designed for seamless multi-agent collaboration, with key features including cross-model/agent synergy, parallel processing, intelligence sharing, consensus building, and live visualization. Users can install the system, configure API settings, and run MassGen for various tasks such as question answering, creative writing, research, development & coding tasks, and web automation & browser tasks. The roadmap includes plans for advanced agent collaboration, expanded model, tool & agent integration, improved performance & scalability, enhanced developer experience, and a web interface.

mcp-omnisearch
mcp-omnisearch is a Model Context Protocol (MCP) server that acts as a unified gateway to multiple search providers and AI tools. It integrates Tavily, Perplexity, Kagi, Jina AI, Brave, Exa AI, and Firecrawl to offer a wide range of search, AI response, content processing, and enhancement features through a single interface. The server provides powerful search capabilities, AI response generation, content extraction, summarization, web scraping, structured data extraction, and more. It is designed to work flexibly with the API keys available, enabling users to activate only the providers they have keys for and easily add more as needed.
For similar tasks


cherry-studio
Cherry Studio is a desktop client that supports multiple LLM providers on Windows, Mac, and Linux. It offers diverse LLM provider support, AI assistants & conversations, document & data processing, practical tools integration, and enhanced user experience. The tool includes features like support for major LLM cloud services, AI web service integration, local model support, pre-configured AI assistants, document processing for text, images, and more, global search functionality, topic management system, AI-powered translation, and cross-platform support with ready-to-use features and themes for a better user experience.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.