
agent-sdk-go
A powerful Go framework for building production-ready AI agents!
A powerful Go framework for building production-ready AI agents that seamlessly integrates memory management, tool execution, multi-LLM support, and enterprise features into a flexible, extensible architecture.
- 🧠 Multi-Model Intelligence: Seamless integration with OpenAI, Anthropic, and Google Vertex AI (Gemini models).
- 🔧 Modular Tool Ecosystem: Expand agent capabilities with plug-and-play tools for web search, data retrieval, and custom operations
- 📝 Advanced Memory Management: Persistent conversation tracking with buffer and vector-based retrieval options
- 🔌 MCP Integration: Support for Model Context Protocol (MCP) servers via HTTP and stdio transports
- 🚦 Built-in Guardrails: Comprehensive safety mechanisms for responsible AI deployment
- 📈 Complete Observability: Integrated tracing and logging for monitoring and debugging
- 🏢 Enterprise Multi-tenancy: Securely support multiple organizations with isolated resources
- 🛠️ Structured Task Framework: Plan, approve, and execute complex multi-step operations
- 📄 Declarative Configuration: Define sophisticated agents and tasks using intuitive YAML definitions
- 🧙 Zero-Effort Bootstrapping: Auto-generate complete agent configurations from simple system prompts
Prerequisites:
- Go 1.23+
- Redis (optional, for distributed memory)
Add the SDK to your Go project:
go get github.com/Ingenimax/agent-sdk-go
Build and install the CLI tool for headless usage:
# Clone the repository
git clone https://github.com/Ingenimax/agent-sdk-go
cd agent-sdk-go
# Build the CLI tool
make build-cli
# Install to system PATH (optional)
make install
# Or run the installation script
./scripts/install-cli.sh
Quick CLI Start:
# Initialize configuration
./bin/agent-cli init
# Option 1: Set environment variables
export OPENAI_API_KEY=your_api_key_here
# Option 2: Use .env file (recommended)
cp env.example .env
# Edit .env with your API keys
# Run a simple query
./bin/agent-cli run "What's the weather in San Francisco?"
# Start interactive chat
./bin/agent-cli chat
The SDK uses environment variables for configuration. Key variables include:
- OPENAI_API_KEY: Your OpenAI API key
- OPENAI_MODEL: The model to use (e.g., gpt-4o-mini)
- LOG_LEVEL: Logging level (debug, info, warn, error)
- REDIS_ADDRESS: Redis server address (if using Redis for memory)
See .env.example for a complete list of configuration options.
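As a minimal sketch of how these variables feed an agent (using only constructors shown elsewhere in this README; the SDK's config.Get() helper resolves the same variables for you, and the default model below is an assumption for this sketch):

package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/Ingenimax/agent-sdk-go/pkg/agent"
    "github.com/Ingenimax/agent-sdk-go/pkg/llm/openai"
    "github.com/Ingenimax/agent-sdk-go/pkg/memory"
)

func main() {
    // Read the key variables directly; config.Get() performs equivalent lookups.
    apiKey := os.Getenv("OPENAI_API_KEY")
    if apiKey == "" {
        log.Fatal("OPENAI_API_KEY is not set")
    }
    model := os.Getenv("OPENAI_MODEL")
    if model == "" {
        model = "gpt-4o-mini" // assumed default for this sketch
    }

    // Wire the variables into an agent using the options from the examples below.
    assistant, err := agent.NewAgent(
        agent.WithLLM(openai.NewClient(apiKey, openai.WithModel(model))),
        agent.WithMemory(memory.NewConversationBuffer()),
        agent.WithSystemPrompt("You are a helpful AI assistant."),
    )
    if err != nil {
        log.Fatalf("Failed to create agent: %v", err)
    }

    response, err := assistant.Run(context.Background(), "Hello!")
    if err != nil {
        log.Fatalf("Failed to run agent: %v", err)
    }
    fmt.Println(response)
}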
Basic agent with tools and memory:

package main

import (
    "context"
    "fmt"

    "github.com/Ingenimax/agent-sdk-go/pkg/agent"
    "github.com/Ingenimax/agent-sdk-go/pkg/config"
    "github.com/Ingenimax/agent-sdk-go/pkg/llm/openai"
    "github.com/Ingenimax/agent-sdk-go/pkg/logging"
    "github.com/Ingenimax/agent-sdk-go/pkg/memory"
    "github.com/Ingenimax/agent-sdk-go/pkg/multitenancy"
    "github.com/Ingenimax/agent-sdk-go/pkg/tools"
    "github.com/Ingenimax/agent-sdk-go/pkg/tools/websearch"
)

func main() {
    // Create a logger
    logger := logging.New()

    // Get configuration
    cfg := config.Get()

    // Create a new agent with OpenAI
    openaiClient := openai.NewClient(cfg.LLM.OpenAI.APIKey, openai.WithLogger(logger))

    assistant, err := agent.NewAgent(
        agent.WithLLM(openaiClient),
        agent.WithMemory(memory.NewConversationBuffer()),
        agent.WithTools(createTools(logger).List()...),
        agent.WithSystemPrompt("You are a helpful AI assistant. When you don't know the answer or need real-time information, use the available tools to find the information."),
        agent.WithName("ResearchAssistant"),
    )
    if err != nil {
        logger.Error(context.Background(), "Failed to create agent", map[string]interface{}{"error": err.Error()})
        return
    }

    // Create a context with organization ID and conversation ID
    ctx := context.Background()
    ctx = multitenancy.WithOrgID(ctx, "default-org")
    ctx = context.WithValue(ctx, memory.ConversationIDKey, "conversation-123")

    // Run the agent
    response, err := assistant.Run(ctx, "What's the weather in San Francisco?")
    if err != nil {
        logger.Error(ctx, "Failed to run agent", map[string]interface{}{"error": err.Error()})
        return
    }
    fmt.Println(response)
}

func createTools(logger logging.Logger) *tools.Registry {
    // Get configuration
    cfg := config.Get()

    // Create tools registry
    toolRegistry := tools.NewRegistry()

    // Add web search tool if API keys are available
    if cfg.Tools.WebSearch.GoogleAPIKey != "" && cfg.Tools.WebSearch.GoogleSearchEngineID != "" {
        searchTool := websearch.New(
            cfg.Tools.WebSearch.GoogleAPIKey,
            cfg.Tools.WebSearch.GoogleSearchEngineID,
        )
        toolRegistry.Register(searchTool)
    }
    return toolRegistry
}
Creating agents from YAML configuration:

package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/Ingenimax/agent-sdk-go/pkg/agent"
    "github.com/Ingenimax/agent-sdk-go/pkg/llm/openai"
)

func main() {
    // Get OpenAI API key from environment
    apiKey := os.Getenv("OPENAI_API_KEY")
    if apiKey == "" {
        log.Fatal("OpenAI API key not provided. Set OPENAI_API_KEY environment variable.")
    }

    // Create the LLM client
    llm := openai.NewClient(apiKey)

    // Load agent configurations
    agentConfigs, err := agent.LoadAgentConfigsFromFile("agents.yaml")
    if err != nil {
        log.Fatalf("Failed to load agent configurations: %v", err)
    }

    // Load task configurations
    taskConfigs, err := agent.LoadTaskConfigsFromFile("tasks.yaml")
    if err != nil {
        log.Fatalf("Failed to load task configurations: %v", err)
    }

    // Create variables map for template substitution
    variables := map[string]string{
        "topic": "Artificial Intelligence",
    }

    // Create the agent for a specific task
    taskName := "research_task"
    taskAgent, err := agent.CreateAgentForTask(taskName, agentConfigs, taskConfigs, variables, agent.WithLLM(llm))
    if err != nil {
        log.Fatalf("Failed to create agent for task: %v", err)
    }

    // Execute the task
    fmt.Printf("Executing task '%s' with topic '%s'...\n", taskName, variables["topic"])
    result, err := taskAgent.ExecuteTaskFromConfig(context.Background(), taskName, taskConfigs, variables)
    if err != nil {
        log.Fatalf("Failed to execute task: %v", err)
    }

    // Print the result
    fmt.Println("\nTask Result:")
    fmt.Println(result)
}
Example YAML configurations:
agents.yaml:
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data analysis and research findings
  backstory: >
    You're a meticulous analyst with a keen eye for detail. You're known for
    your ability to turn complex data into clear and concise reports, making
    it easy for others to understand and act on the information you provide.
tasks.yaml:
research_task:
  description: >
    Conduct a thorough research about {topic}
    Make sure you find any interesting and relevant information given
    the current year is 2025.
  expected_output: >
    A list with 10 bullet points of the most relevant information about {topic}
  agent: researcher

reporting_task:
  description: >
    Review the context you got and expand each topic into a full section for a report.
    Make sure the report is detailed and contains any and all relevant information.
  expected_output: >
    A fully fledged report with the main topics, each with a full section of information.
    Formatted as markdown without '```'
  agent: reporting_analyst
  output_file: "{topic}_report.md"
The SDK supports defining structured output (JSON responses) directly in YAML configuration files. This allows you to automatically apply structured output when creating agents from YAML and unmarshal responses directly into Go structs.
agents.yaml with structured output:
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.
  response_format:
    type: "json_object"
    schema_name: "ResearchResult"
    schema_definition:
      type: "object"
      properties:
        findings:
          type: "array"
          items:
            type: "object"
            properties:
              title:
                type: "string"
                description: "Title of the finding"
              description:
                type: "string"
                description: "Detailed description"
              source:
                type: "string"
                description: "Source of the information"
        summary:
          type: "string"
          description: "Executive summary of findings"
        metadata:
          type: "object"
          properties:
            total_findings:
              type: "integer"
            research_date:
              type: "string"
tasks.yaml with structured output:
research_task:
  description: >
    Conduct a thorough research about {topic}
    Make sure you find any interesting and relevant information.
  expected_output: >
    A structured JSON response with findings, summary, and metadata
  agent: researcher
  output_file: "{topic}_report.json"
  response_format:
    type: "json_object"
    schema_name: "ResearchResult"
    schema_definition:
      # Same schema as above
Usage in Go code:
// Define your Go struct to match the YAML schema
type ResearchResult struct {
    Findings []struct {
        Title       string `json:"title"`
        Description string `json:"description"`
        Source      string `json:"source"`
    } `json:"findings"`
    Summary  string `json:"summary"`
    Metadata struct {
        TotalFindings int    `json:"total_findings"`
        ResearchDate  string `json:"research_date"`
    } `json:"metadata"`
}

// Create agent and execute task
taskAgent, err := agent.CreateAgentForTask("research_task", agentConfigs, taskConfigs, variables, agent.WithLLM(llm))
result, err := taskAgent.ExecuteTaskFromConfig(context.Background(), "research_task", taskConfigs, variables)

// Unmarshal structured output
var structured ResearchResult
err = json.Unmarshal([]byte(result), &structured)
For more details, see Structured Output with YAML Configuration.
Auto-generating agent configurations from a system prompt:

package main

import (
    "context"
    "fmt"
    "os"

    "github.com/Ingenimax/agent-sdk-go/pkg/agent"
    "github.com/Ingenimax/agent-sdk-go/pkg/config"
    "github.com/Ingenimax/agent-sdk-go/pkg/llm/openai"
)

func main() {
    // Load configuration
    cfg := config.Get()

    // Create LLM client
    openaiClient := openai.NewClient(cfg.LLM.OpenAI.APIKey)

    // Create agent with auto-configuration from the system prompt
    travelAgent, err := agent.NewAgentWithAutoConfig(
        context.Background(),
        agent.WithLLM(openaiClient),
        agent.WithSystemPrompt("You are a travel advisor who helps users plan trips and vacations. You specialize in finding hidden gems and creating personalized itineraries based on travelers' preferences."),
        agent.WithName("Travel Assistant"),
    )
    if err != nil {
        panic(err)
    }

    // Access the generated configurations
    agentConfig := travelAgent.GetGeneratedAgentConfig()
    taskConfigs := travelAgent.GetGeneratedTaskConfigs()

    // Print generated agent details
    fmt.Printf("Generated Agent Role: %s\n", agentConfig.Role)
    fmt.Printf("Generated Agent Goal: %s\n", agentConfig.Goal)
    fmt.Printf("Generated Agent Backstory: %s\n", agentConfig.Backstory)

    // Print generated tasks
    fmt.Println("\nGenerated Tasks:")
    for taskName, taskConfig := range taskConfigs {
        fmt.Printf("- %s: %s\n", taskName, taskConfig.Description)
    }

    // Save the generated configurations to YAML files
    agentConfigMap := map[string]agent.AgentConfig{
        "Travel Assistant": *agentConfig,
    }

    // Save agent configs to file
    agentYaml, _ := os.Create("agent_config.yaml")
    defer agentYaml.Close()
    agent.SaveAgentConfigsToFile(agentConfigMap, agentYaml)

    // Save task configs to file
    taskYaml, _ := os.Create("task_config.yaml")
    defer taskYaml.Close()
    agent.SaveTaskConfigsToFile(taskConfigs, taskYaml)

    // Use the auto-configured agent
    response, err := travelAgent.Run(context.Background(), "I want to plan a 3-day trip to Tokyo.")
    if err != nil {
        panic(err)
    }
    fmt.Println(response)
}
The auto-configuration feature uses LLM reasoning to derive a complete agent profile and associated tasks from a simple system prompt. The generated configurations include:
- Agent Profile: Role, goal, and backstory that define the agent's persona
- Task Definitions: Specialized tasks the agent can perform, with descriptions and expected outputs
- Reusable YAML: Save configurations for reuse in other applications
This approach dramatically reduces the effort needed to create specialized agents while ensuring consistency and quality.
The SDK supports both eager and lazy MCP server initialization:
- Eager: MCP servers are initialized when the agent is created
- Lazy: MCP servers are initialized only when their tools are first called (recommended)
Example with lazy MCP initialization:

package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/Ingenimax/agent-sdk-go/pkg/agent"
    "github.com/Ingenimax/agent-sdk-go/pkg/llm/openai"
    "github.com/Ingenimax/agent-sdk-go/pkg/memory"
)

func main() {
    // Create OpenAI LLM client
    apiKey := os.Getenv("OPENAI_API_KEY")
    llm := openai.NewClient(apiKey, openai.WithModel("gpt-4o-mini"))

    // Define lazy MCP configurations.
    // Note: The CLI supports dynamic tool discovery, but the SDK requires explicit tool definitions.
    lazyMCPConfigs := []agent.LazyMCPConfig{
        {
            Name:    "aws-api-server",
            Type:    "stdio",
            Command: "docker",
            Args:    []string{"run", "--rm", "-i", "public.ecr.aws/awslabs-mcp/awslabs/aws-api-mcp-server:latest"},
            Env:     []string{"AWS_REGION=us-west-2"},
            Tools: []agent.LazyMCPToolConfig{
                {
                    Name:        "suggest_aws_commands",
                    Description: "Suggest AWS CLI commands based on natural language",
                    Schema:      map[string]interface{}{"type": "object", "properties": map[string]interface{}{"query": map[string]interface{}{"type": "string"}}},
                },
            },
        },
        {
            Name:    "kubectl-ai",
            Type:    "stdio",
            Command: "kubectl-ai",
            Args:    []string{"--mcp-server"},
            Tools: []agent.LazyMCPToolConfig{
                {
                    Name:        "kubectl",
                    Description: "Execute kubectl commands against Kubernetes cluster",
                    Schema:      map[string]interface{}{"type": "object", "properties": map[string]interface{}{"command": map[string]interface{}{"type": "string"}}},
                },
            },
        },
    }

    // Create agent with lazy MCP configurations
    myAgent, err := agent.NewAgent(
        agent.WithLLM(llm),
        agent.WithLazyMCPConfigs(lazyMCPConfigs),
        agent.WithMemory(memory.NewConversationBuffer()),
        agent.WithSystemPrompt("You are an AI assistant with access to AWS and Kubernetes tools."),
    )
    if err != nil {
        log.Fatalf("Failed to create agent: %v", err)
    }

    // Use the agent - MCP servers will be initialized on first tool use
    response, err := myAgent.Run(context.Background(), "List my EC2 instances and show cluster pods")
    if err != nil {
        log.Fatalf("Failed to run agent: %v", err)
    }
    fmt.Println("Agent Response:", response)
}
Example with eager MCP initialization (servers connected up front):

package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/Ingenimax/agent-sdk-go/pkg/agent"
    "github.com/Ingenimax/agent-sdk-go/pkg/interfaces"
    "github.com/Ingenimax/agent-sdk-go/pkg/llm/openai"
    "github.com/Ingenimax/agent-sdk-go/pkg/mcp"
    "github.com/Ingenimax/agent-sdk-go/pkg/memory"
    "github.com/Ingenimax/agent-sdk-go/pkg/multitenancy"
)

func main() {
    logger := log.New(os.Stderr, "AGENT: ", log.LstdFlags)

    // Create OpenAI LLM client
    apiKey := os.Getenv("OPENAI_API_KEY")
    if apiKey == "" {
        logger.Fatal("Please set the OPENAI_API_KEY environment variable.")
    }
    llm := openai.NewClient(apiKey, openai.WithModel("gpt-4o-mini"))

    // Create MCP servers
    var mcpServers []interfaces.MCPServer

    // Connect to HTTP-based MCP server
    httpServer, err := mcp.NewHTTPServer(context.Background(), mcp.HTTPServerConfig{
        BaseURL: "http://localhost:8083/mcp",
    })
    if err != nil {
        logger.Printf("Warning: Failed to initialize HTTP MCP server: %v", err)
    } else {
        mcpServers = append(mcpServers, httpServer)
        logger.Println("Successfully initialized HTTP MCP server.")
    }

    // Connect to stdio-based MCP server
    stdioServer, err := mcp.NewStdioServer(context.Background(), mcp.StdioServerConfig{
        Command: "go",
        Args:    []string{"run", "./server-stdio/main.go"},
    })
    if err != nil {
        logger.Printf("Warning: Failed to initialize STDIO MCP server: %v", err)
    } else {
        mcpServers = append(mcpServers, stdioServer)
        logger.Println("Successfully initialized STDIO MCP server.")
    }

    // Create agent with MCP server support
    myAgent, err := agent.NewAgent(
        agent.WithLLM(llm),
        agent.WithMCPServers(mcpServers),
        agent.WithMemory(memory.NewConversationBuffer()),
        agent.WithSystemPrompt("You are an AI assistant that can use tools from MCP servers."),
        agent.WithName("MCPAgent"),
    )
    if err != nil {
        logger.Fatalf("Failed to create agent: %v", err)
    }

    // Create context with organization and conversation IDs
    ctx := context.Background()
    ctx = multitenancy.WithOrgID(ctx, "default-org")
    ctx = context.WithValue(ctx, memory.ConversationIDKey, "mcp-demo")

    // Run the agent with a query that will use MCP tools
    response, err := myAgent.Run(ctx, "What time is it right now?")
    if err != nil {
        logger.Fatalf("Agent run failed: %v", err)
    }
    fmt.Println("Agent response:", response)
}
The SDK follows a modular architecture with these key components (a custom-tool sketch follows the list):
- Agent: Coordinates the LLM, memory, and tools
- LLM: Interface to language model providers (OpenAI, Anthropic, Google Vertex AI)
- Memory: Stores conversation history and context
- Tools: Extend the agent's capabilities
- Vector Store: For semantic search and retrieval
- Guardrails: Ensure safe and responsible AI usage
- Execution Plan: Manages planning, approval, and execution of complex tasks
- Configuration: YAML-based agent and task definitions
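Custom tools plug into the same tools.Registry used for the web search tool in the quick start. The concrete Tool contract is not reproduced in this README, so the sketch below is hypothetical: it assumes an interface with Name, Description, and Execute methods purely to illustrate the pattern; check pkg/interfaces for the real definition.

// Hypothetical custom tool. The method set below is an assumption for
// illustration; the actual Tool interface lives in pkg/interfaces and may differ.
package main

import (
    "context"
    "time"

    "github.com/Ingenimax/agent-sdk-go/pkg/tools"
)

type ClockTool struct{}

func (t *ClockTool) Name() string        { return "clock" }
func (t *ClockTool) Description() string { return "Returns the current local time" }

// Execute is an assumed method name and signature.
func (t *ClockTool) Execute(ctx context.Context, input string) (string, error) {
    return time.Now().Format(time.RFC3339), nil
}

func newRegistry() *tools.Registry {
    registry := tools.NewRegistry()
    registry.Register(&ClockTool{}) // same Register call used for the web search tool above
    return registry
}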
Supported LLM providers (a provider-switching sketch follows the list):
- OpenAI: GPT-4, GPT-3.5, and other OpenAI models
- Anthropic: Claude 3.5 Sonnet, Claude 3 Haiku, and other Claude models
- Google Vertex AI: Gemini 1.5 Pro, Gemini 1.5 Flash, Gemini 2.0 Flash, and Gemini Pro Vision
  - Advanced reasoning modes (none, minimal, comprehensive)
  - Multimodal capabilities with vision models
  - Function calling and tool integration
  - Flexible authentication (ADC or service account files)
- Ollama: Local LLM server supporting various open-source models
  - Run models locally without external API calls
  - Support for Llama2, Mistral, CodeLlama, and other models
  - Model management (list, pull, switch models)
  - Local processing for reduced latency and privacy
- vLLM: High-performance local LLM inference with PagedAttention
  - Optimized for GPU inference with CUDA
  - Efficient memory management for large models
  - Support for Llama2, Mistral, CodeLlama, and other models
  - Model management (list, pull, switch models)
  - Local processing for reduced latency and privacy
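Because agent.WithLLM accepts whichever client you construct, switching providers is a one-line change. A fragment sketch, where anthropic.NewClient and the interfaces.LLM type name are assumptions made to mirror the OpenAI constructor shown above (see examples/llm/anthropic/ for the real API):

// Provider selection sketch; anthropic.NewClient is an assumed mirror of
// openai.NewClient, and interfaces.LLM is an assumed interface name --
// verify both against pkg/interfaces and examples/llm/anthropic/.
var llm interfaces.LLM
switch os.Getenv("LLM_PROVIDER") {
case "anthropic":
    llm = anthropic.NewClient(os.Getenv("ANTHROPIC_API_KEY"))
default:
    llm = openai.NewClient(os.Getenv("OPENAI_API_KEY"), openai.WithModel("gpt-4o-mini"))
}
myAgent, err := agent.NewAgent(agent.WithLLM(llm))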
The Agent SDK includes a powerful command-line interface for headless usage:
- 🤖 Multiple LLM Providers: OpenAI, Anthropic, Google Vertex AI, Ollama, vLLM
- 💬 Interactive Chat Mode: Real-time conversations with persistent memory
- 📝 Task Execution: Run predefined tasks from YAML configurations
- 🎨 Auto-Configuration: Generate agent configs from simple prompts
- 🔧 Flexible Configuration: JSON-based configuration with environment variables
- 🛠️ Rich Tool Integration: Web search, GitHub, MCP servers, and more
- 🔌 MCP Server Management: Add, list, remove, and test MCP servers
- 📄 .env File Support: Automatic loading of environment variables from .env files
# Initialize configuration
agent-cli init
# Run agent with a single prompt
agent-cli run "Explain quantum computing in simple terms"
# Direct execution (no setup required)
agent-cli --prompt "What is 2+2?"
# Direct execution with MCP server
agent-cli --prompt "List my EC2 instances" \
--mcp-config ./aws_api_server.json \
--allowedTools "mcp__aws__suggest_aws_commands,mcp__aws__call_aws" \
--dangerously-skip-permissions
# Execute predefined tasks
agent-cli task --agent-config=agents.yaml --task-config=tasks.yaml --task=research_task --topic="AI"
# Start interactive chat
agent-cli chat
# Generate configurations from system prompt
agent-cli generate --prompt="You are a travel advisor" --output=./configs
# List available resources
agent-cli list providers
agent-cli list models
agent-cli list tools
# Manage configuration
agent-cli config show
agent-cli config set provider anthropic
# Manage MCP servers
agent-cli mcp add --type=http --url=http://localhost:8083/mcp --name=my-server
agent-cli mcp list
agent-cli mcp remove --name=my-server
# Import/Export MCP servers from JSON config
agent-cli mcp import --file=mcp-servers.json
agent-cli mcp export --file=mcp-servers.json
# Direct execution with MCP servers and tool filtering
agent-cli --prompt "List my EC2 instances" \
--mcp-config ./aws_api_server.json \
--allowedTools "suggest_aws_commands,call_aws" \
--dangerously-skip-permissions
# Kubernetes management with kubectl-ai
agent-cli --prompt "List all pods in the default namespace" \
--mcp-config ./kubectl_ai.json \
--allowedTools "kubectl" \
--dangerously-skip-permissions
The CLI now supports dynamic tool discovery and flexible tool filtering:
- No Hardcoded Tools: MCP servers define their own tools and schemas
- Dynamic Discovery: Tools are discovered when MCP servers are first initialized
- Flexible Filtering: Use --allowedTools to specify exactly which tools can be used
- JSON Configuration: Load MCP server configurations from external JSON files (a sketch follows this list)
- Environment Variables: Each MCP server can specify custom environment variables
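The JSON schema for --mcp-config is not reproduced in this README; the hypothetical aws_api_server.json below simply mirrors the fields of the LazyMCPConfig struct from the SDK example above, so treat the exact key names as assumptions and check the CLI README for the real format.

{
  "name": "aws-api-server",
  "type": "stdio",
  "command": "docker",
  "args": ["run", "--rm", "-i", "public.ecr.aws/awslabs-mcp/awslabs/aws-api-mcp-server:latest"],
  "env": ["AWS_REGION=us-west-2"]
}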
Popular MCP Servers:
- AWS API Server: AWS CLI operations and suggestions
- kubectl-ai: Kubernetes cluster management via natural language
- Filesystem Server: File system operations and management
- Database Server: SQL query execution and database operations
For complete CLI documentation, see: CLI README
Check out the cmd/examples directory for complete examples:
- Simple Agent: Basic agent with system prompt
- YAML Configuration: Defining agents and tasks in YAML
- Auto-Configuration: Generating agent configurations from system prompts
- Agent Config Wizard: Interactive CLI for creating and using agents
- MCP Integration: Using Model Context Protocol servers with agents
- Multi-LLM Support: Examples using OpenAI, Azure OpenAI, Anthropic, and Vertex AI
- Vertex AI Integration: Comprehensive examples with Gemini models, reasoning modes, and tools
- examples/llm/openai/: OpenAI integration examples
- examples/llm/azureopenai/: Azure OpenAI integration examples with deployment-based configuration
- examples/llm/anthropic/: Anthropic Claude integration examples
- examples/llm/ollama/: Ollama local LLM integration examples
- examples/llm/vllm/: vLLM high-performance local LLM integration examples
This project is licensed under the MIT License - see the LICENSE file for details.
For more detailed information, refer to the additional documentation included with the repository.