
trpc-agent-go
trpc-agent-go is a powerful Go framework for building intelligent agent systems using large language models (LLMs) and tools.
Stars: 122

A powerful Go framework for building intelligent agent systems with large language models (LLMs), hierarchical planners, memory, telemetry, and a rich tool ecosystem. tRPC-Agent-Go enables the creation of autonomous or semi-autonomous agents that reason, call tools, collaborate with sub-agents, and maintain long-term state. The framework provides detailed documentation, examples, and tools for accelerating the development of AI applications.
README:
English | 中文
A powerful Go framework for building intelligent agent systems that transforms how you create AI applications. Build autonomous agents that think, remember, collaborate, and act with unprecedented ease.
Why tRPC-Agent-Go?
- Intelligent Reasoning: Advanced hierarchical planners and multi-agent orchestration
- Rich Tool Ecosystem: Seamless integration with external APIs, databases, and services
- Persistent Memory: Long-term state management and contextual awareness
- Multi-Agent Collaboration: Chain, parallel, and graph-based agent workflows
- Production Ready: Built-in telemetry, tracing, and enterprise-grade reliability
- High Performance: Optimized for scalability and low latency
Perfect for building:
- Customer Support Bots - Intelligent agents that understand context and solve complex queries
- Data Analysis Assistants - Agents that query databases, generate reports, and provide insights
- DevOps Automation - Smart deployment, monitoring, and incident response systems
- Business Process Automation - Multi-step workflows with human-in-the-loop capabilities
- Research & Knowledge Management - RAG-powered agents for document analysis and Q&A
// Chain agents for complex workflows
pipeline := chainagent.New("pipeline",
	chainagent.WithSubAgents([]agent.Agent{
		analyzer, processor, reporter,
	}))

// Or run them in parallel
parallel := parallelagent.New("concurrent",
	parallelagent.WithSubAgents(tasks))

// Persistent memory with search
memory := memorysvc.NewInMemoryService()
agent := llmagent.New("assistant",
	llmagent.WithTools(memory.Tools()),
	llmagent.WithModel(model))

// Memory service managed at runner level
runner := runner.NewRunner("app", agent,
	runner.WithMemoryService(memory))
// Agents remember context across sessions

// Any function becomes a tool
calculator := function.NewFunctionTool(
	calculate,
	function.WithName("calculator"),
	function.WithDescription("Math operations"))

// MCP protocol support
mcpTool := mcptool.New(serverConn)

// OpenTelemetry integration
runner := runner.NewRunner("app", agent,
	runner.WithTelemetry(telemetry.Config{
		TracingEnabled: true,
		MetricsEnabled: true,
	}))
- Use Cases
- Key Features
- Documentation
- Quick Start
- Examples
- Architecture Overview
- Using Built-in Agents
- Future Enhancements
- Contributing
- Acknowledgements
Ready to dive into tRPC-Agent-Go? Our documentation covers everything from basic concepts to advanced techniques, helping you build powerful AI applications with confidence. Whether you're new to AI agents or an experienced developer, you'll find detailed guides, practical examples, and best practices to accelerate your development journey.
- Go 1.21 or later
- LLM provider API key (OpenAI, DeepSeek, etc.)
- 5 minutes to build your first intelligent agent
Get started in 3 simple steps:
# 1. Clone and set up
git clone https://github.com/trpc-group/trpc-agent-go.git
cd trpc-agent-go
# 2. Configure your LLM
export OPENAI_API_KEY="your-api-key-here"
export OPENAI_BASE_URL="your-base-url-here" # Optional
# 3. Run your first agent!
cd examples/runner
go run . -model="gpt-4o-mini" -streaming=true
What you'll see:
- Interactive chat with your AI agent
- Real-time streaming responses
- Tool usage (calculator + time tools)
- Multi-turn conversations with memory
Try asking: "What's the current time? Then calculate 15 * 23 + 100"
package main

import (
	"context"
	"fmt"
	"log"

	"trpc.group/trpc-go/trpc-agent-go/agent/llmagent"
	"trpc.group/trpc-go/trpc-agent-go/model"
	"trpc.group/trpc-go/trpc-agent-go/model/openai"
	"trpc.group/trpc-go/trpc-agent-go/runner"
	"trpc.group/trpc-go/trpc-agent-go/tool"
	"trpc.group/trpc-go/trpc-agent-go/tool/function"
)

func main() {
	// Create the model.
	modelInstance := openai.New("deepseek-chat")

	// Create the tool.
	calculatorTool := function.NewFunctionTool(
		calculator,
		function.WithName("calculator"),
		function.WithDescription("Execute addition, subtraction, multiplication, and division. "+
			"Parameters: a and b are numeric values; op takes the values add/sub/mul/div; "+
			"returns result as the calculation result."),
	)

	// Enable streaming output.
	genConfig := model.GenerationConfig{
		Stream: true,
	}

	// Create the agent.
	agent := llmagent.New("assistant",
		llmagent.WithModel(modelInstance),
		llmagent.WithTools([]tool.Tool{calculatorTool}),
		llmagent.WithGenerationConfig(genConfig),
	)

	// Create the runner.
	runner := runner.NewRunner("calculator-app", agent)

	// Execute the conversation.
	ctx := context.Background()
	events, err := runner.Run(ctx,
		"user-001",
		"session-001",
		model.NewUserMessage("Calculate what 2+3 equals"),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Process the event stream, guarding against empty chunks.
	for event := range events {
		if event.Object == "chat.completion.chunk" && len(event.Choices) > 0 {
			fmt.Print(event.Choices[0].Delta.Content)
		}
	}
	fmt.Println()
}
func calculator(ctx context.Context, req calculatorReq) (calculatorRsp, error) {
	var result float64
	switch req.Op {
	case "add", "+":
		result = req.A + req.B
	case "sub", "-":
		result = req.A - req.B
	case "mul", "*":
		result = req.A * req.B
	case "div", "/":
		if req.B == 0 {
			return calculatorRsp{}, fmt.Errorf("division by zero")
		}
		result = req.A / req.B
	default:
		return calculatorRsp{}, fmt.Errorf("unknown op %q", req.Op)
	}
	return calculatorRsp{Result: result}, nil
}

type calculatorReq struct {
	A  float64 `json:"A" jsonschema:"description=First numeric operand,required"`
	B  float64 `json:"B" jsonschema:"description=Second numeric operand,required"`
	Op string  `json:"Op" jsonschema:"description=Operation type,enum=add,enum=sub,enum=mul,enum=div,required"`
}

type calculatorRsp struct {
	Result float64 `json:"result"`
}
The examples directory contains runnable demos covering every major feature.
1. Tools
- examples/agenttool – Wrap agents as callable tools; shows streaming and non-streaming patterns (sketched below).
- examples/multitools – Orchestrate multiple tools in one agent.
- examples/duckduckgo – Web search tool integration.
- examples/filetoolset – File operations as tools.
- examples/fileinput – Provide files as inputs.
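A minimal sketch of the agent-as-tool pattern. The agenttool constructor below is an assumption, not an API confirmed by this README; see examples/agenttool for the real usage.

// Hypothetical sketch: expose one agent as a tool for another.
summarizer := llmagent.New("summarizer",
	llmagent.WithModel(openai.New("gpt-4o-mini")),
	llmagent.WithInstruction("Summarize the given text concisely"))

coordinator := llmagent.New("coordinator",
	llmagent.WithModel(openai.New("gpt-4o-mini")),
	llmagent.WithTools([]tool.Tool{agenttool.New(summarizer)})) // agenttool.New is assumed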
2. LLM-Only Agent (examples/llmagent)
- Wrap any chat-completion model as an LLMAgent.
- Configure system instructions, temperature, max tokens, etc.
- Receive incremental event.Event updates while the model streams.
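For instance, a minimal LLM-only agent can be assembled from the same APIs used in the Quick Start above:

// LLM-only agent: instructions plus streaming, no tools.
poet := llmagent.New("poet",
	llmagent.WithModel(openai.New("gpt-4o-mini")),
	llmagent.WithInstruction("Answer in rhyming couplets"),
	llmagent.WithGenerationConfig(model.GenerationConfig{Stream: true}))

run := runner.NewRunner("poet-app", poet)
events, err := run.Run(ctx, "user-1", "sess-1",
	model.NewUserMessage("Describe the sea"))
if err != nil {
	log.Fatal(err)
}
for ev := range events {
	_ = ev // handle streamed event.Event updates as in the Quick Start
}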
3. Multi-Agent Runners (examples/multiagent)
- ChainAgent – linear pipeline of sub-agents.
- ParallelAgent – run sub-agents concurrently and merge results.
- CycleAgent – iterate until a termination condition is met.
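A rough sketch of a cycle, assuming a cycleagent package whose constructor and options mirror chainagent (the names are assumptions; see examples/multiagent for the real API):

// Assumed API: cycleagent mirroring chainagent's constructor and options.
loop := cycleagent.New("refine-loop",
	cycleagent.WithSubAgents([]agent.Agent{critic, rewriter}))
// The cycle repeats critic -> rewriter until the framework's
// termination condition is met.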
4. Graph Agent (examples/graph)
- GraphAgent – demonstrates building and executing complex, conditional workflows using the graph and agent/graph packages. It shows how to construct a graph-based agent, manage state safely, implement conditional routing, and orchestrate execution with the Runner.
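The graph-building API itself is not shown in this README. Purely as an illustration of the idea, a conditional workflow might be wired along these lines, where every name is hypothetical; consult examples/graph for the actual API:

// Hypothetical sketch only; names do not reflect the confirmed API.
g := graph.New() // assumed constructor
g.AddNode("classify", classifierAgent)
g.AddNode("billing", billingAgent)
g.AddNode("support", supportAgent)
// Conditional routing based on state written by the classify node.
g.AddConditionalEdge("classify", func(state graph.State) string {
	if state["topic"] == "billing" {
		return "billing"
	}
	return "support"
})
router := graphagent.New("router", graphagent.WithGraph(g)) // assumed
// Run router through runner.NewRunner like any other agent.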
5. Memory (examples/memory)
- In-memory and Redis memory services with CRUD, search, and tool integration.
- How to configure the services, call the memory tools, and customize prompts.
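The in-memory service already appears in the snippets near the top of this README; the wiring looks like this, with the Redis variant hedged as an assumption:

// In-memory service (API shown earlier in this README).
mem := memorysvc.NewInMemoryService()
// Hypothetical Redis-backed variant; the constructor name is an assumption:
// mem := memorysvc.NewRedisService(...)

assistant := llmagent.New("assistant",
	llmagent.WithModel(model),
	llmagent.WithTools(mem.Tools())) // memory operations exposed as tools

r := runner.NewRunner("memory-app", assistant,
	runner.WithMemoryService(mem)) // service managed at the runner level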
6. Knowledge (examples/knowledge)
- Basic RAG example: load sources, embed them into a vector store, and search.
- How to use conversation context and tune loading/concurrency options.
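As a rough sketch of that flow; the knowledge constructor and options below are assumptions rather than the confirmed API:

// Hypothetical sketch; see examples/knowledge for the real API.
kb := knowledge.New(
	knowledge.WithEmbedder(embedder),       // embeds sources and queries
	knowledge.WithVectorStore(vectorStore)) // stores and searches vectors
if err := kb.Load(ctx, docSource, urlSource); err != nil {
	log.Fatal(err) // loading and embedding the sources failed
}
ragAgent := llmagent.New("rag-assistant",
	llmagent.WithModel(model),
	llmagent.WithKnowledge(kb)) // assumed option wiring knowledge search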
7. Telemetry & Tracing (examples/telemetry)
- OpenTelemetry hooks across the model, tool, and runner layers.
- Export traces to an OTLP endpoint for real-time analysis.
8. MCP Integration (examples/mcptool)
- Wrapper utilities around trpc-mcp-go, an implementation of the Model Context Protocol (MCP).
- Provides structured prompts, tool calls, resource and session messages that follow the MCP specification.
- Enables dynamic tool execution and context-rich interactions between agents and LLMs.
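Putting the mcptool.New(serverConn) call from the snippets above into context; establishing serverConn via trpc-mcp-go is elided here:

// serverConn is an established trpc-mcp-go connection (setup elided).
mcpTool := mcptool.New(serverConn)

assistant := llmagent.New("assistant",
	llmagent.WithModel(model),
	llmagent.WithTools([]tool.Tool{mcpTool}))
// The agent can now invoke tools exposed by the MCP server at runtime.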
9. Debug Web Demo (examples/debugserver)
- Launches a debug server that speaks ADK-compatible HTTP endpoints.
- Front-end: google/adk-web connects via /run_sse and streams agent responses in real time.
- A great starting point for building your own chat UI.
Other notable examples:
- examples/humaninloop – Human-in-the-loop workflows.
- examples/codeexecution – Secure code execution.
See the individual README.md files in each example folder for usage details.
Architecture
- Runner orchestrates the entire execution pipeline with session management
- Agent processes requests using multiple specialized components
- Planner determines the optimal strategy and tool selection
- Tools execute specific tasks (API calls, calculations, web searches)
- Memory maintains context and learns from interactions
- Knowledge provides RAG capabilities for document understanding
Key packages:
Package | Responsibility
---|---
agent | Core execution unit, responsible for processing user input and generating responses.
runner | Agent executor, responsible for managing execution flow and connecting Session/Memory Service capabilities.
model | Supports multiple LLM models (OpenAI, DeepSeek, etc.).
tool | Provides various tool capabilities (Function, MCP, DuckDuckGo, etc.).
session | Manages user session state and events.
memory | Records user long-term memory and personalized information.
knowledge | Implements RAG knowledge retrieval capabilities.
planner | Provides Agent planning and reasoning capabilities.
For most applications you do not need to implement the agent.Agent interface yourself. The framework already ships with several ready-to-use agents that you can compose like Lego bricks:
Agent | Purpose
---|---
LLMAgent | Wraps an LLM chat-completion model as an agent.
ChainAgent | Executes sub-agents sequentially.
ParallelAgent | Executes sub-agents concurrently and merges output.
CycleAgent | Loops over a planner + executor until a stop signal.
// 1. Create a base LLM agent.
base := llmagent.New(
	"assistant",
	llmagent.WithModel(openai.New("gpt-4o-mini")),
)

// 2. Create a second LLM agent with a different instruction.
translator := llmagent.New(
	"translator",
	llmagent.WithInstruction("Translate everything to French"),
	llmagent.WithModel(openai.New("gpt-3.5-turbo")),
)

// 3. Combine them in a chain.
pipeline := chainagent.New(
	"pipeline",
	chainagent.WithSubAgents([]agent.Agent{base, translator}),
)

// 4. Run through the runner for sessions & telemetry.
run := runner.NewRunner("demo-app", pipeline)
events, _ := run.Run(ctx, "user-1", "sess-1",
	model.NewUserMessage("Hello!"))
for ev := range events { /* ... */ }
The composition API lets you nest chains, cycles, or parallels to build complex workflows without low-level plumbing.
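For example, a parallel fan-out stage can sit inside a chain, using only the constructors shown above:

// Fan out two analyzers concurrently, then summarize the merged output.
fanOut := parallelagent.New("fan-out",
	parallelagent.WithSubAgents([]agent.Agent{sentiment, keywords}))

workflow := chainagent.New("analyze-then-summarize",
	chainagent.WithSubAgents([]agent.Agent{fanOut, summarizer}))

run := runner.NewRunner("nested-app", workflow)
// run.Run(...) executes the nested workflow with sessions and telemetry.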
We ❤️ contributions! Join our growing community of developers building the future of AI agents.
- Report bugs or suggest features via Issues
- Improve documentation - help others learn faster
- Submit PRs - bug fixes, new features, or examples
- Share your use cases - inspire others with your agent applications
# Fork & clone the repo
git clone https://github.com/YOUR_USERNAME/trpc-agent-go.git
cd trpc-agent-go
# Run tests to ensure everything works
go test ./...
go vet ./...
# Make your changes and submit a PR!
Please read CONTRIBUTING.md for detailed guidelines and coding standards.
Special thanks to Tencent's business units including Tencent Yuanbao, Tencent Video, Tencent News, IMA, and QQ Music for their invaluable support and real-world validation. Production usage drives framework excellence!
Inspired by amazing frameworks like ADK, Agno, CrewAI, AutoGen, and many others. Standing on the shoulders of giants!
Licensed under the Apache 2.0 License - see LICENSE file for details.
Similar Open Source Tools

trpc-agent-go
A powerful Go framework for building intelligent agent systems with large language models (LLMs), hierarchical planners, memory, telemetry, and a rich tool ecosystem. tRPC-Agent-Go enables the creation of autonomous or semi-autonomous agents that reason, call tools, collaborate with sub-agents, and maintain long-term state. The framework provides detailed documentation, examples, and tools for accelerating the development of AI applications.

R2R
R2R (RAG to Riches) is a fast and efficient framework for serving high-quality Retrieval-Augmented Generation (RAG) to end users. The framework is designed with customizable pipelines and a feature-rich FastAPI implementation, enabling developers to quickly deploy and scale RAG-based applications. R2R was conceived to bridge the gap between local LLM experimentation and scalable production solutions. **R2R is to LangChain/LlamaIndex what NextJS is to React**. A JavaScript client for R2R deployments can be found here. ### Key Features * **Deploy**: Instantly launch production-ready RAG pipelines with streaming capabilities. * **Customize**: Tailor your pipeline with intuitive configuration files. * **Extend**: Enhance your pipeline with custom code integrations. * **Autoscale**: Scale your pipeline effortlessly in the cloud using SciPhi. * **OSS**: Benefit from a framework developed by the open-source community, designed to simplify RAG deployment.

MassGen
MassGen is a cutting-edge multi-agent system that leverages the power of collaborative AI to solve complex tasks. It assigns a task to multiple AI agents who work in parallel, observe each other's progress, and refine their approaches to converge on the best solution to deliver a comprehensive and high-quality result. The system operates through an architecture designed for seamless multi-agent collaboration, with key features including cross-model/agent synergy, parallel processing, intelligence sharing, consensus building, and live visualization. Users can install the system, configure API settings, and run MassGen for various tasks such as question answering, creative writing, research, development & coding tasks, and web automation & browser tasks. The roadmap includes plans for advanced agent collaboration, expanded model, tool & agent integration, improved performance & scalability, enhanced developer experience, and a web interface.

zotero-mcp
Zotero MCP is an open-source project that integrates AI capabilities with Zotero using the Model Context Protocol. It consists of a Zotero plugin and an MCP server, enabling AI assistants to search, retrieve, and cite references from Zotero library. The project features a unified architecture with an integrated MCP server, eliminating the need for a separate server process. It provides features like intelligent search, detailed reference information, filtering by tags and identifiers, aiding in academic tasks such as literature reviews and citation management.

unity-mcp
MCP for Unity is a tool that acts as a bridge, enabling AI assistants to interact with the Unity Editor via a local MCP Client. Users can instruct their LLM to manage assets, scenes, scripts, and automate tasks within Unity. The tool offers natural language control, powerful tools for asset management, scene manipulation, and automation of workflows. It is extensible and designed to work with various MCP Clients, providing a range of functions for precise text edits, script management, GameObject operations, and more.

klavis
Klavis AI is a production-ready solution for managing Multiple Communication Protocol (MCP) servers. It offers self-hosted solutions and a hosted service with enterprise OAuth support. With Klavis AI, users can easily deploy and manage over 50 MCP servers for various services like GitHub, Gmail, Google Sheets, YouTube, Slack, and more. The tool provides instant access to MCP servers, seamless authentication, and integration with AI frameworks, making it ideal for individuals and businesses looking to streamline their communication and data management workflows.

tunacode
TunaCode CLI is an AI-powered coding assistant that provides a command-line interface for developers to enhance their coding experience. It offers features like model selection, parallel execution for faster file operations, and various commands for code management. The tool aims to improve coding efficiency and provide a seamless coding environment for developers.

mem0
Mem0 is a tool that provides a smart, self-improving memory layer for Large Language Models, enabling personalized AI experiences across applications. It offers persistent memory for users, sessions, and agents, self-improving personalization, a simple API for easy integration, and cross-platform consistency. Users can store memories, retrieve memories, search for related memories, update memories, get the history of a memory, and delete memories using Mem0. It is designed to enhance AI experiences by enabling long-term memory storage and retrieval.

rkllama
RKLLama is a server and client tool designed for running and interacting with LLM models optimized for Rockchip RK3588(S) and RK3576 platforms. It allows models to run on the NPU, with features such as running models on NPU, partial Ollama API compatibility, pulling models from Huggingface, API REST with documentation, dynamic loading/unloading of models, inference requests with streaming modes, simplified model naming, CPU model auto-detection, and optional debug mode. The tool supports Python 3.8 to 3.12 and has been tested on Orange Pi 5 Pro and Orange Pi 5 Plus with specific OS versions.

shards
Shards is a high-performance, multi-platform, type-safe programming language designed for visual development. It is a dataflow visual programming language that enables building full-fledged apps and games without traditional coding. Shards features automatic type checking, optimized shard implementations for high performance, and an intuitive visual workflow for beginners. The language allows seamless round-trip engineering between code and visual models, empowering users to create multi-platform apps easily. Shards also powers an upcoming AI-powered game creation system, enabling real-time collaboration and game development in a low to no-code environment.

astrsk
astrsk is a tool that pushes the boundaries of AI storytelling by offering advanced AI agents, customizable response formatting, and flexible prompt editing for immersive roleplaying experiences. It provides complete AI agent control, a visual flow editor for conversation flows, and ensures 100% local-first data storage. The tool is true cross-platform with support for various AI providers and modern technologies like React, TypeScript, and Tailwind CSS. Coming soon features include cross-device sync, enhanced session customization, and community features.

local-deep-research
Local Deep Research is a powerful AI-powered research assistant that performs deep, iterative analysis using multiple LLMs and web searches. It can be run locally for privacy or configured to use cloud-based LLMs for enhanced capabilities. The tool offers advanced research capabilities, flexible LLM support, rich output options, privacy-focused operation, enhanced search integration, and academic & scientific integration. It also provides a web interface, command line interface, and supports multiple LLM providers and search engines. Users can configure AI models, search engines, and research parameters for customized research experiences.

probe
Probe is an AI-friendly, fully local, semantic code search tool designed to power the next generation of AI coding assistants. It combines the speed of ripgrep with the code-aware parsing of tree-sitter to deliver precise results with complete code blocks, making it perfect for large codebases and AI-driven development workflows. Probe supports various features like AI-friendly code extraction, fully local operation without external APIs, fast scanning of large codebases, accurate code structure parsing, re-rankers and NLP methods for better search results, multi-language support, interactive AI chat mode, and flexibility to run as a CLI tool, MCP server, or interactive AI chat.

MemOS
MemOS is an operating system for Large Language Models (LLMs) that enhances them with long-term memory capabilities. It allows LLMs to store, retrieve, and manage information, enabling more context-aware, consistent, and personalized interactions. MemOS provides Memory-Augmented Generation (MAG) with a unified API for memory operations, a Modular Memory Architecture (MemCube) for easy integration and management of different memory types, and multiple memory types including Textual Memory, Activation Memory, and Parametric Memory. It is extensible, allowing users to customize memory modules, data sources, and LLM integrations. MemOS demonstrates significant improvements over baseline memory solutions in multiple reasoning tasks, with a notable improvement in temporal reasoning accuracy compared to the OpenAI baseline.

enferno
Enferno is a modern Flask framework optimized for AI-assisted development workflows. It combines carefully crafted development patterns, smart Cursor Rules, and modern libraries to enable developers to build sophisticated web applications with unprecedented speed. Enferno's intelligent patterns and contextual guides help create production-ready SAAS applications faster than ever. It includes features like modern stack, authentication, OAuth integration, database support, task queue, frontend components, security measures, Docker readiness, and more.

probe
Probe is an AI-friendly, fully local, semantic code search tool designed to power the next generation of AI coding assistants. It combines the speed of ripgrep with the code-aware parsing of tree-sitter to deliver precise results with complete code blocks, making it perfect for large codebases and AI-driven development workflows. Probe is fully local, keeping code on the user's machine without relying on external APIs. It supports multiple languages, offers various search options, and can be used in CLI mode, MCP server mode, AI chat mode, and web interface. The tool is designed to be flexible, fast, and accurate, providing developers and AI models with full context and relevant code blocks for efficient code exploration and understanding.
For similar tasks

trpc-agent-go
A powerful Go framework for building intelligent agent systems with large language models (LLMs), hierarchical planners, memory, telemetry, and a rich tool ecosystem. tRPC-Agent-Go enables the creation of autonomous or semi-autonomous agents that reason, call tools, collaborate with sub-agents, and maintain long-term state. The framework provides detailed documentation, examples, and tools for accelerating the development of AI applications.

MiniAgents
MiniAgents is an open-source Python framework designed to simplify the creation of multi-agent AI systems. It offers a parallelism and async-first design, allowing users to focus on building intelligent agents while handling concurrency challenges. The framework, built on asyncio, supports LLM-based applications with immutable messages and seamless asynchronous token and message streaming between agents.

AutoAgents
AutoAgents is a cutting-edge multi-agent framework built in Rust that enables the creation of intelligent, autonomous agents powered by Large Language Models (LLMs) and Ractor. Designed for performance, safety, and scalability. AutoAgents provides a robust foundation for building complex AI systems that can reason, act, and collaborate. With AutoAgents you can create Cloud Native Agents, Edge Native Agents and Hybrid Models as well. It is so extensible that other ML Models can be used to create complex pipelines using Actor Framework.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.