
mcp-go
A Go implementation of the Model Context Protocol (MCP), enabling seamless integration between LLM applications and external data sources and tools.

MCP Go handles the complex protocol details and server management so developers can focus on building tools. It is designed to be fast, simple, and complete, providing a high-level, easy-to-use interface for developing MCP servers. The library is under active development: core features work, while advanced capabilities are still in progress.
README:
```go
package main

import (
	"context"
	"fmt"

	"github.com/mark3labs/mcp-go/mcp"
	"github.com/mark3labs/mcp-go/server"
)

func main() {
	// Create MCP server
	s := server.NewMCPServer(
		"Demo 🚀",
		"1.0.0",
	)

	// Add tool
	tool := mcp.NewTool("hello_world",
		mcp.WithDescription("Say hello to someone"),
		mcp.WithString("name",
			mcp.Required(),
			mcp.Description("Name of the person to greet"),
		),
	)

	// Add tool handler
	s.AddTool(tool, helloHandler)

	// Start the stdio server
	if err := server.ServeStdio(s); err != nil {
		fmt.Printf("Server error: %v\n", err)
	}
}

func helloHandler(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
	name, ok := request.Params.Arguments["name"].(string)
	if !ok {
		return mcp.NewToolResultError("name must be a string"), nil
	}

	return mcp.NewToolResultText(fmt.Sprintf("Hello, %s!", name)), nil
}
```
That's it!
MCP Go handles all the complex protocol details and server management, so you can focus on building great tools. It aims to be high-level and easy to use.
- Fast: High-level interface means less code and faster development
- Simple: Build MCP servers with minimal boilerplate
- Complete*: MCP Go aims to provide a full implementation of the core MCP specification
(*emphasis on aims)
🚨 🚧 🏗️ MCP Go is under active development, as is the MCP specification itself. Core features are working but some advanced capabilities are still in progress.
```
go get github.com/mark3labs/mcp-go
```
Let's create a simple MCP server that exposes a calculator tool:
```go
package main

import (
	"context"
	"fmt"

	"github.com/mark3labs/mcp-go/mcp"
	"github.com/mark3labs/mcp-go/server"
)

func main() {
	// Create a new MCP server
	s := server.NewMCPServer(
		"Calculator Demo",
		"1.0.0",
		server.WithResourceCapabilities(true, true),
		server.WithLogging(),
	)

	// Add a calculator tool
	calculatorTool := mcp.NewTool("calculate",
		mcp.WithDescription("Perform basic arithmetic operations"),
		mcp.WithString("operation",
			mcp.Required(),
			mcp.Description("The operation to perform (add, subtract, multiply, divide)"),
			mcp.Enum("add", "subtract", "multiply", "divide"),
		),
		mcp.WithNumber("x",
			mcp.Required(),
			mcp.Description("First number"),
		),
		mcp.WithNumber("y",
			mcp.Required(),
			mcp.Description("Second number"),
		),
	)

	// Add the calculator handler
	s.AddTool(calculatorTool, func(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
		// The parameters are declared Required above, so these
		// assertions assume the client sent valid arguments.
		op := request.Params.Arguments["operation"].(string)
		x := request.Params.Arguments["x"].(float64)
		y := request.Params.Arguments["y"].(float64)

		var result float64
		switch op {
		case "add":
			result = x + y
		case "subtract":
			result = x - y
		case "multiply":
			result = x * y
		case "divide":
			if y == 0 {
				return mcp.NewToolResultError("Cannot divide by zero"), nil
			}
			result = x / y
		}

		return mcp.NewToolResultText(fmt.Sprintf("%.2f", result)), nil
	})

	// Start the server
	if err := server.ServeStdio(s); err != nil {
		fmt.Printf("Server error: %v\n", err)
	}
}
```
The Model Context Protocol (MCP) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:
- Expose data through Resources (think of these sort of like GET endpoints; they are used to load information into the LLM's context)
- Provide functionality through Tools (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)
- Define interaction patterns through Prompts (reusable templates for LLM interactions)
- And more!
Server Examples
The server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:
```go
// Create a basic server
s := server.NewMCPServer(
	"My Server", // Server name
	"1.0.0",     // Version
)

// Start the server using stdio
if err := server.ServeStdio(s); err != nil {
	log.Fatalf("Server error: %v", err)
}
```
Resource Examples
Resources are how you expose data to LLMs. They can be anything - files, API responses, database queries, system information, etc. Resources can be:
- Static (fixed URI)
- Dynamic (using URI templates)
Here's a simple example of a static resource:
```go
// Static resource example - exposing a README file
resource := mcp.NewResource(
	"docs://readme",
	"Project README",
	mcp.WithResourceDescription("The project's README file"),
	mcp.WithMIMEType("text/markdown"),
	mcp.WithAnnotations([]mcp.Role{mcp.RoleAssistant}, 0.8),
)

// Add resource with its handler
s.AddResource(resource, func(ctx context.Context, request mcp.ReadResourceRequest) ([]interface{}, error) {
	content, err := os.ReadFile("README.md")
	if err != nil {
		return nil, err
	}

	return []interface{}{
		mcp.TextResourceContents{
			ResourceContents: mcp.ResourceContents{
				URI:      "docs://readme",
				MIMEType: "text/markdown",
			},
			Text: string(content),
		},
	}, nil
})
```
And here's an example of a dynamic resource using a template:
```go
// Dynamic resource example - user profiles by ID
template := mcp.NewResourceTemplate(
	"users://{id}/profile",
	"User Profile",
	mcp.WithTemplateDescription("Returns user profile information"),
	mcp.WithTemplateMIMEType("application/json"),
	mcp.WithTemplateAnnotations([]mcp.Role{mcp.RoleAssistant, mcp.RoleUser}, 0.5),
)

// Add template with its handler
s.AddResourceTemplate(template, func(ctx context.Context, request mcp.ReadResourceRequest) ([]interface{}, error) {
	// request.Params.URI holds the full matched URI (e.g. "users://42/profile");
	// parse the ID out of it before looking up the profile.
	userID := request.Params.URI
	profile, err := getUserProfile(userID) // Your DB/API call here
	if err != nil {
		return nil, err
	}

	return []interface{}{
		mcp.TextResourceContents{
			ResourceContents: mcp.ResourceContents{
				URI:      fmt.Sprintf("users://%s/profile", userID),
				MIMEType: "application/json",
			},
			Text: profile,
		},
	}, nil
})
```
The examples are simple but demonstrate the core concepts. Resources can be much more sophisticated - serving multiple contents, using annotations, integrating with databases or external APIs, etc.
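As a hedged sketch of that idea, the resource below returns multiple contents from a single read, reusing the same handler shape as the examples above (the docs://project URI and the file names are illustrative):

```go
// Sketch: one resource read that returns several contents.
resource := mcp.NewResource(
	"docs://project",
	"Project Docs",
	mcp.WithResourceDescription("README and LICENSE served together"),
	mcp.WithMIMEType("text/markdown"),
)

s.AddResource(resource, func(ctx context.Context, request mcp.ReadResourceRequest) ([]interface{}, error) {
	var contents []interface{}
	for _, file := range []string{"README.md", "LICENSE"} {
		data, err := os.ReadFile(file)
		if err != nil {
			return nil, err
		}
		// Each entry gets its own URI so clients can tell them apart.
		contents = append(contents, mcp.TextResourceContents{
			ResourceContents: mcp.ResourceContents{
				URI:      "docs://project/" + file,
				MIMEType: "text/markdown",
			},
			Text: string(data),
		})
	}
	return contents, nil
})
```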
Tool Examples
Tools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects. They're similar to POST endpoints in a REST API.
Simple calculation example:
```go
calculatorTool := mcp.NewTool("calculate",
	mcp.WithDescription("Perform basic arithmetic calculations"),
	mcp.WithString("operation",
		mcp.Required(),
		mcp.Description("The arithmetic operation to perform"),
		mcp.Enum("add", "subtract", "multiply", "divide"),
	),
	mcp.WithNumber("x",
		mcp.Required(),
		mcp.Description("First number"),
	),
	mcp.WithNumber("y",
		mcp.Required(),
		mcp.Description("Second number"),
	),
)

s.AddTool(calculatorTool, func(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
	op := request.Params.Arguments["operation"].(string)
	x := request.Params.Arguments["x"].(float64)
	y := request.Params.Arguments["y"].(float64)

	var result float64
	switch op {
	case "add":
		result = x + y
	case "subtract":
		result = x - y
	case "multiply":
		result = x * y
	case "divide":
		if y == 0 {
			return mcp.NewToolResultError("Division by zero is not allowed"), nil
		}
		result = x / y
	}

	return mcp.FormatNumberResult(result), nil
})
```
HTTP request example:
```go
httpTool := mcp.NewTool("http_request",
	mcp.WithDescription("Make HTTP requests to external APIs"),
	mcp.WithString("method",
		mcp.Required(),
		mcp.Description("HTTP method to use"),
		mcp.Enum("GET", "POST", "PUT", "DELETE"),
	),
	mcp.WithString("url",
		mcp.Required(),
		mcp.Description("URL to send the request to"),
		mcp.Pattern("^https?://.*"),
	),
	mcp.WithString("body",
		mcp.Description("Request body (for POST/PUT)"),
	),
)
```
```go
s.AddTool(httpTool, func(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
	method := request.Params.Arguments["method"].(string)
	url := request.Params.Arguments["url"].(string)
	body := ""
	if b, ok := request.Params.Arguments["body"].(string); ok {
		body = b
	}

	// Create and send the request, propagating the handler's context
	// so the request is cancelled if the client disconnects
	var req *http.Request
	var err error
	if body != "" {
		req, err = http.NewRequestWithContext(ctx, method, url, strings.NewReader(body))
	} else {
		req, err = http.NewRequestWithContext(ctx, method, url, nil)
	}
	if err != nil {
		return mcp.NewToolResultError(fmt.Sprintf("Failed to create request: %v", err)), nil
	}

	client := &http.Client{}
	resp, err := client.Do(req)
	if err != nil {
		return mcp.NewToolResultError(fmt.Sprintf("Request failed: %v", err)), nil
	}
	defer resp.Body.Close()

	// Return response
	respBody, err := io.ReadAll(resp.Body)
	if err != nil {
		return mcp.NewToolResultError(fmt.Sprintf("Failed to read response: %v", err)), nil
	}

	return mcp.NewToolResultText(fmt.Sprintf("Status: %d\nBody: %s", resp.StatusCode, string(respBody))), nil
})
```
Tools can be used for any kind of computation or side effect:
- Database queries
- File operations
- External API calls
- Calculations
- System operations
Each tool should (see the sketch after this list):
- Have a clear description
- Validate inputs
- Handle errors gracefully
- Return structured responses
- Use appropriate result types
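Putting that checklist into practice, here is a minimal sketch of a defensively written tool (the echo tool name is illustrative): it validates each argument with a checked type assertion rather than panicking, and reports problems as tool errors:

```go
echoTool := mcp.NewTool("echo",
	mcp.WithDescription("Echo a message back to the caller"),
	mcp.WithString("message",
		mcp.Required(),
		mcp.Description("The message to echo"),
	),
)

s.AddTool(echoTool, func(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
	// Validate inputs: a checked assertion avoids a panic if the
	// client sends a missing or mistyped argument.
	message, ok := request.Params.Arguments["message"].(string)
	if !ok {
		return mcp.NewToolResultError("message must be a string"), nil
	}
	if message == "" {
		return mcp.NewToolResultError("message must not be empty"), nil
	}

	// Return a structured text result.
	return mcp.NewToolResultText(message), nil
})
```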
Prompt Examples
Prompts are reusable templates that help LLMs interact with your server effectively. They're like "best practices" encoded into your server. Here are some examples:
```go
// Simple greeting prompt
s.AddPrompt(mcp.NewPrompt("greeting",
	mcp.WithPromptDescription("A friendly greeting prompt"),
	mcp.WithArgument("name",
		mcp.ArgumentDescription("Name of the person to greet"),
	),
), func(ctx context.Context, request mcp.GetPromptRequest) (*mcp.GetPromptResult, error) {
	// "name" is optional: the checked assertion leaves name empty
	// (instead of panicking) when the argument is missing.
	name, _ := request.Params.Arguments["name"].(string)
	if name == "" {
		name = "friend"
	}

	return mcp.NewGetPromptResult(
		"A friendly greeting",
		[]mcp.PromptMessage{
			mcp.NewPromptMessage(
				mcp.RoleAssistant,
				mcp.NewTextContent(fmt.Sprintf("Hello, %s! How can I help you today?", name)),
			),
		},
	), nil
})

// Code review prompt with embedded resource
s.AddPrompt(mcp.NewPrompt("code_review",
	mcp.WithPromptDescription("Code review assistance"),
	mcp.WithArgument("pr_number",
		mcp.ArgumentDescription("Pull request number to review"),
		mcp.RequiredArgument(),
	),
), func(ctx context.Context, request mcp.GetPromptRequest) (*mcp.GetPromptResult, error) {
	prNumber, _ := request.Params.Arguments["pr_number"].(string)
	if prNumber == "" {
		return nil, fmt.Errorf("pr_number is required")
	}

	return mcp.NewGetPromptResult(
		"Code review assistance",
		[]mcp.PromptMessage{
			mcp.NewPromptMessage(
				mcp.RoleSystem,
				mcp.NewTextContent("You are a helpful code reviewer. Review the changes and provide constructive feedback."),
			),
			mcp.NewPromptMessage(
				mcp.RoleAssistant,
				mcp.NewEmbeddedResource(mcp.ResourceContents{
					URI:      fmt.Sprintf("git://pulls/%s/diff", prNumber),
					MIMEType: "text/x-diff",
				}),
			),
		},
	), nil
})

// Database query builder prompt
s.AddPrompt(mcp.NewPrompt("query_builder",
	mcp.WithPromptDescription("SQL query builder assistance"),
	mcp.WithArgument("table",
		mcp.ArgumentDescription("Name of the table to query"),
		mcp.RequiredArgument(),
	),
), func(ctx context.Context, request mcp.GetPromptRequest) (*mcp.GetPromptResult, error) {
	tableName, _ := request.Params.Arguments["table"].(string)
	if tableName == "" {
		return nil, fmt.Errorf("table name is required")
	}

	return mcp.NewGetPromptResult(
		"SQL query builder assistance",
		[]mcp.PromptMessage{
			mcp.NewPromptMessage(
				mcp.RoleSystem,
				mcp.NewTextContent("You are a SQL expert. Help construct efficient and safe queries."),
			),
			mcp.NewPromptMessage(
				mcp.RoleAssistant,
				mcp.NewEmbeddedResource(mcp.ResourceContents{
					URI:      fmt.Sprintf("db://schema/%s", tableName),
					MIMEType: "application/json",
				}),
			),
		},
	), nil
})
```
Prompts can include (see the sketch after this list):
- System instructions
- Required arguments
- Embedded resources
- Multiple messages
- Different content types (text, images, etc.)
- Custom URI schemes
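As a hedged sketch combining several of these, the prompt below returns multiple messages and mixes content types. It assumes an mcp.NewImageContent(data, mimeType) helper mirroring NewTextContent, and the diagram.png path is illustrative:

```go
// Sketch: a prompt mixing text and image content across multiple messages.
s.AddPrompt(mcp.NewPrompt("diagram_review",
	mcp.WithPromptDescription("Review an architecture diagram"),
), func(ctx context.Context, request mcp.GetPromptRequest) (*mcp.GetPromptResult, error) {
	img, err := os.ReadFile("diagram.png") // illustrative path
	if err != nil {
		return nil, err
	}

	return mcp.NewGetPromptResult(
		"Architecture diagram review",
		[]mcp.PromptMessage{
			mcp.NewPromptMessage(
				mcp.RoleSystem,
				mcp.NewTextContent("You are a software architect. Review the attached diagram."),
			),
			mcp.NewPromptMessage(
				mcp.RoleUser,
				// Assumed helper: NewImageContent(base64Data, mimeType)
				mcp.NewImageContent(base64.StdEncoding.EncodeToString(img), "image/png"),
			),
		},
	), nil
})
```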
For examples, see the examples/ directory.
Go version >= 1.23
Create a fork of this repository, then clone it:
```
git clone https://github.com/mark3labs/mcp-go.git
cd mcp-go
```
Please make sure to test any new functionality. Your tests should be simple and atomic and anticipate change rather than cement complex patterns.
Run tests from the root directory:
```
go test -v './...'
```
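As an illustration, a simple, atomic test for the hello_world handler from the quickstart might look like the sketch below. It assumes the test lives in the same package as helloHandler and that the request type exposes the Params.Arguments map used throughout the examples above:

```go
package main

import (
	"context"
	"testing"

	"github.com/mark3labs/mcp-go/mcp"
)

func TestHelloHandler(t *testing.T) {
	// Build a minimal request by hand, using the same
	// Params.Arguments map the handlers above read from.
	var req mcp.CallToolRequest
	req.Params.Arguments = map[string]interface{}{"name": "World"}

	result, err := helloHandler(context.Background(), req)
	if err != nil {
		t.Fatalf("handler returned error: %v", err)
	}
	if result == nil {
		t.Fatal("expected a non-nil result")
	}
}
```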
Fork the repository and create a new branch:
```
git checkout -b my-branch
```
Make your changes and commit them:
```
git add . && git commit -m "My changes"
```
Push your changes to your fork:
```
git push origin my-branch
```
Feel free to reach out in a GitHub issue or discussion if you have any questions!
Similar Open Source Tools


gollm
gollm is a Go package designed to simplify interactions with Large Language Models (LLMs) for AI engineers and developers. It offers a unified API for multiple LLM providers, easy provider and model switching, flexible configuration options, advanced prompt engineering, prompt optimization, memory retention, structured output and validation, provider comparison tools, high-level AI functions, robust error handling and retries, and extensible architecture. The package enables users to create AI-powered golems for tasks like content creation workflows, complex reasoning tasks, structured data generation, model performance analysis, prompt optimization, and creating a mixture of agents.

whetstone.chatgpt
Whetstone.ChatGPT is a simple lightweight library that wraps the OpenAI API with support for dependency injection. It supports features like GPT 4, GPT 3.5 Turbo, chat completions, audio transcription and translation, vision completions, files, fine tunes, images, embeddings, moderations, and response streaming. The library provides a video walkthrough of a Blazor web app built on it and includes examples such as a command line bot. It offers quickstarts for dependency injection, chat completions, completions, file handling, fine tuning, image generation, and audio transcription.

swarmgo
SwarmGo is a Go package designed to create AI agents capable of interacting, coordinating, and executing tasks. It focuses on lightweight agent coordination and execution, offering powerful primitives like Agents and handoffs. SwarmGo enables building scalable solutions with rich dynamics between tools and networks of agents, all while keeping the learning curve low. It supports features like memory management, streaming support, concurrent agent execution, LLM interface, and structured workflows for organizing and coordinating multiple agents.

dashscope-sdk
DashScope SDK for .NET is an unofficial SDK maintained by Cnblogs, providing various APIs for text embedding, generation, multimodal generation, image synthesis, and more. Users can interact with the SDK to perform tasks such as text completion, chat generation, function calls, file operations, and more. The project is under active development, and users are advised to check the Release Notes before upgrading.

UniChat
UniChat is a pipeline tool for creating online and offline chat-bots in Unity. It leverages Unity.Sentis and text vector embedding technology to enable offline mode text content search based on vector databases. The tool includes a chain toolkit for embedding LLM and Agent in games, along with middleware components for Text to Speech, Speech to Text, and Sub-classifier functionalities. UniChat also offers a tool for invoking tools based on ReActAgent workflow, allowing users to create personalized chat scenarios and character cards. The tool provides a comprehensive solution for designing flexible conversations in games while maintaining developer's ideas.

modelfusion
ModelFusion is an abstraction layer for integrating AI models into JavaScript and TypeScript applications, unifying the API for common operations such as text streaming, object generation, and tool usage. It provides features to support production environments, including observability hooks, logging, and automatic retries. You can use ModelFusion to build AI applications, chatbots, and agents. ModelFusion is a non-commercial open source project that is community-driven. You can use it with any supported provider. ModelFusion supports a wide range of models including text generation, image generation, vision, text-to-speech, speech-to-text, and embedding models. ModelFusion infers TypeScript types wherever possible and validates model responses. ModelFusion provides an observer framework and logging support. ModelFusion ensures seamless operation through automatic retries, throttling, and error handling mechanisms. ModelFusion is fully tree-shakeable, can be used in serverless environments, and only uses a minimal set of dependencies.

continuous-eval
Open-Source Evaluation for LLM Applications. `continuous-eval` is an open-source package created for granular and holistic evaluation of GenAI application pipelines. It offers modularized evaluation, a comprehensive metric library covering various LLM use cases, the ability to leverage user feedback in evaluation, and synthetic dataset generation for testing pipelines. Users can define their own metrics by extending the Metric class. The tool allows running evaluation on a pipeline defined with modules and corresponding metrics. Additionally, it provides synthetic data generation capabilities to create user interaction data for evaluation or training purposes.

claude-api
claude-api is a web conversation library for ClaudeAI implemented in GoLang. It provides functionalities to interact with ClaudeAI for web-based conversations. Users can easily integrate this library into their Go projects to enable chatbot capabilities and handle conversations with ClaudeAI. The library includes features for sending messages, receiving responses, and managing chat sessions, making it a valuable tool for developers looking to incorporate AI-powered chatbots into their applications.

OpenAI-DotNet
OpenAI-DotNet is a simple C# .NET client library for OpenAI to use through their RESTful API. It is independently developed and not an official library affiliated with OpenAI. Users need an OpenAI API account to utilize this library. The library targets .NET 6.0 and above, working across various platforms like console apps, winforms, wpf, asp.net, etc., and on Windows, Linux, and Mac. It provides functionalities for authentication, interacting with models, assistants, threads, chat, audio, images, files, fine-tuning, embeddings, and moderations.

com.openai.unity
com.openai.unity is an OpenAI package for Unity that allows users to interact with OpenAI's API through RESTful requests. It is independently developed and not an official library affiliated with OpenAI. Users can fine-tune models, create assistants, chat completions, and more. The package requires Unity 2021.3 LTS or higher and can be installed via Unity Package Manager or Git URL. Various features like authentication, Azure OpenAI integration, model management, thread creation, chat completions, audio processing, image generation, file management, fine-tuning, batch processing, embeddings, and content moderation are available.

dynamiq
Dynamiq is an orchestration framework designed to streamline the development of AI-powered applications, specializing in orchestrating retrieval-augmented generation (RAG) and large language model (LLM) agents. It provides an all-in-one Gen AI framework for agentic AI and LLM applications, offering tools for multi-agent orchestration, document indexing, and retrieval flows. With Dynamiq, users can easily build and deploy AI solutions for various tasks.

LightRAG
LightRAG is a PyTorch library designed for building and optimizing Retriever-Agent-Generator (RAG) pipelines. It follows principles of simplicity, quality, and optimization, offering developers maximum customizability with minimal abstraction. The library includes components for model interaction, output parsing, and structured data generation. LightRAG facilitates tasks like providing explanations and examples for concepts through a question-answering pipeline.

Ollama
Ollama SDK for .NET is a fully generated C# SDK based on OpenAPI specification using OpenApiGenerator. It supports automatic releases of new preview versions, source generator for defining tools natively through C# interfaces, and all modern .NET features. The SDK provides support for all Ollama API endpoints including chats, embeddings, listing models, pulling and creating new models, and more. It also offers tools for interacting with weather data and providing weather-related information to users.

ChatRex
ChatRex is a Multimodal Large Language Model (MLLM) designed to seamlessly integrate fine-grained object perception and robust language understanding. By adopting a decoupled architecture with a retrieval-based approach for object detection and leveraging high-resolution visual inputs, ChatRex addresses key challenges in perception tasks. It is powered by the Rexverse-2M dataset with diverse image-region-text annotations. ChatRex can be applied to various scenarios requiring fine-grained perception, such as object detection, grounded conversation, grounded image captioning, and region understanding.

langchaingo
LangChain Go is a Go language implementation of LangChain, a framework for building applications with LLMs through composability. It provides a simple and easy-to-use API for interacting with LLMs, making it easy to add language-based features to your applications.
For similar jobs

minio
MinIO is a High Performance Object Storage released under GNU Affero General Public License v3.0. It is API compatible with Amazon S3 cloud storage service. Use MinIO to build high performance infrastructure for machine learning, analytics and application data workloads.

ai-on-gke
This repository contains assets related to AI/ML workloads on Google Kubernetes Engine (GKE). Run optimized AI/ML workloads with Google Kubernetes Engine (GKE) platform orchestration capabilities. A robust AI/ML platform considers the following layers: infrastructure orchestration that supports GPUs and TPUs for training and serving workloads at scale; flexible integration with distributed computing and data processing frameworks; and support for multiple teams on the same infrastructure to maximize utilization of resources.

kong
Kong, or Kong API Gateway, is a cloud-native, platform-agnostic, scalable API Gateway distinguished for its high performance and extensibility via plugins. It also provides advanced AI capabilities with multi-LLM support. By providing functionality for proxying, routing, load balancing, health checking, authentication (and more), Kong serves as the central layer for orchestrating microservices or conventional API traffic with ease. Kong runs natively on Kubernetes thanks to its official Kubernetes Ingress Controller.

AI-in-a-Box
AI-in-a-Box is a curated collection of solution accelerators that can help engineers establish their AI/ML environments and solutions rapidly and with minimal friction, while maintaining the highest standards of quality and efficiency. It provides essential guidance on the responsible use of AI and LLM technologies, specific security guidance for Generative AI (GenAI) applications, and best practices for scaling OpenAI applications within Azure. The available accelerators include: Azure ML Operationalization in-a-box, Edge AI in-a-box, Doc Intelligence in-a-box, Image and Video Analysis in-a-box, Cognitive Services Landing Zone in-a-box, Semantic Kernel Bot in-a-box, NLP to SQL in-a-box, Assistants API in-a-box, and Assistants API Bot in-a-box.

awsome-distributed-training
This repository contains reference architectures and test cases for distributed model training with Amazon SageMaker Hyperpod, AWS ParallelCluster, AWS Batch, and Amazon EKS. The test cases cover different types and sizes of models as well as different frameworks and parallel optimizations (Pytorch DDP/FSDP, MegatronLM, NemoMegatron...).

generative-ai-cdk-constructs
The AWS Generative AI Constructs Library is an open-source extension of the AWS Cloud Development Kit (AWS CDK) that provides multi-service, well-architected patterns for quickly defining solutions in code to create predictable and repeatable infrastructure, called constructs. The goal of AWS Generative AI CDK Constructs is to help developers build generative AI solutions using pattern-based definitions for their architecture. The patterns defined in AWS Generative AI CDK Constructs are high level, multi-service abstractions of AWS CDK constructs that have default configurations based on well-architected best practices. The library is organized into logical modules using object-oriented techniques to create each architectural pattern model.

model_server
OpenVINO™ Model Server (OVMS) is a high-performance system for serving models. Implemented in C++ for scalability and optimized for deployment on Intel architectures, the model server uses the same architecture and API as TensorFlow Serving and KServe while applying OpenVINO for inference execution. Inference service is provided via gRPC or REST API, making deploying new algorithms and AI experiments easy.

dify-helm
Deploy langgenius/dify, an LLM based chat bot app on kubernetes with helm chart.