mesh
One secure endpoint for every MCP server. Deploy anywhere.
Stars: 329
MCP Mesh is an open-source control plane for MCP traffic that provides a unified layer for authentication, routing, and observability. It replaces multiple integrations with a single production endpoint, simplifying configuration management. Built for multi-tenant organizations, it offers workspace/project scoping for policies, credentials, and logs. With core capabilities like MeshContext, AccessControl, and OpenTelemetry, it ensures fine-grained RBAC, full tracing, and metrics for tools and workflows. Users can define tools with input/output validation, access control checks, audit logging, and OpenTelemetry traces. The project structure includes apps for full-stack MCP Mesh, encryption, observability, and more, with deployment options ranging from Docker to Kubernetes. The tech stack includes Bun/Node runtime, TypeScript, Hono API, React, Kysely ORM, and Better Auth for OAuth and API keys.
README:
MCP-native · TypeScript-first · Deploy anywhere
One secure endpoint for every MCP server.
Docs · Discord · decocms.com/mesh
TL;DR:
- Route all MCP traffic through a single governed endpoint
- Enforce RBAC, policies, and audit trails at the control plane
- Full observability with OpenTelemetry: traces, costs, errors
- Runtime strategies as Virtual MCPs for optimal tool selection
- Self-host with Docker, Bun/Node, Kubernetes, or run locally
MCP Mesh is an open-source control plane for MCP traffic. It sits between your MCP clients (Cursor, Claude, Windsurf, VS Code, custom agents) and your MCP servers, providing a unified layer for auth, routing and observability.
It replaces M×N integrations (M MCP servers × N clients) with one production endpoint, so you stop maintaining separate configs in every client. Built for multi-tenant orgs: workspace/project scoping for policies, credentials, and logs.
┌──────────────────────────────────────────────────────────────────┐
│                          MCP Clients                             │
│           Cursor · Claude · VS Code · Custom Agents              │
└──────────────────────────────┬───────────────────────────────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────────┐
│                           MCP MESH                               │
│    Virtual MCP · Policy Engine · Observability · Token Vault     │
└──────────────────────────────┬───────────────────────────────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────────┐
│                          MCP Servers                             │
│      Salesforce · Slack · GitHub · Postgres · Your APIs          │
└──────────────────────────────────────────────────────────────────┘
# Clone and install
git clone https://github.com/decocms/mesh.git
bun install
# Run locally (client + API server)
bun run dev    # runs at http://localhost:3000 (client) + API server
Or use npx @decocms/mesh to instantly get a mesh running.
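Once the mesh is running, every client points at the single mesh endpoint instead of at each server individually. A hypothetical client-side config sketch, expressed as a TypeScript object for illustration (the exact shape, endpoint path, and header names vary by client and are assumptions, not the documented MCP Mesh API):

```typescript
// Hypothetical MCP client configuration. The `/mcp` path and the
// Authorization header are illustrative assumptions.
const mcpConfig = {
  mcpServers: {
    // One entry replaces M separate server configs per client.
    mesh: {
      url: "http://localhost:3000/mcp",
      headers: { Authorization: "Bearer <WORKSPACE_API_KEY>" },
    },
  },
};

console.log(JSON.stringify(mcpConfig, null, 2));
```

With N clients each holding this single entry and M servers registered once in the mesh, the M×N config matrix collapses to M + N.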
As tool surfaces grow, "send every tool definition to the model on every call" gets expensive and slow. The mesh models runtime strategies as Virtual MCPs: one endpoint, different ways of exposing tools.
Examples:
- Full-context: expose everything (simple and deterministic for small toolsets)
- Smart selection: narrow the toolset before execution
- Code execution: load tools on demand and run code in a sandbox
Virtual MCPs are configurable and extensible. You can add new strategies and also curate toolsets (see Virtual MCPs).
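The strategies above can be pictured as implementations of one small interface. The sketch below is illustrative only; `ToolStrategy`, `select`, and the keyword filter are assumptions, not the actual MCP Mesh API:

```typescript
// Illustrative sketch of pluggable tool-selection strategies.
// All names are hypothetical, not the MCP Mesh API.
interface ToolDef {
  name: string;
  description: string;
}

interface ToolStrategy {
  // Decide which tool definitions to expose for a given request.
  select(catalog: ToolDef[], query: string): ToolDef[];
}

// Full-context: expose everything (simple, fine for small toolsets).
const fullContext: ToolStrategy = {
  select: (catalog) => catalog,
};

// Smart selection: naive keyword match to narrow the toolset
// before the model ever sees it.
const smartSelection: ToolStrategy = {
  select: (catalog, query) => {
    const words = query.toLowerCase().split(/\s+/).filter((w) => w.length > 3);
    return catalog.filter((t) =>
      words.some((w) => t.description.toLowerCase().includes(w))
    );
  },
};

const catalog: ToolDef[] = [
  { name: "SLACK_POST", description: "Post a message to Slack" },
  { name: "GH_ISSUE", description: "Create a GitHub issue" },
];

console.log(fullContext.select(catalog, "anything").length);
console.log(smartSelection.select(catalog, "open a github issue").map((t) => t.name));
```

A real smart-selection strategy would likely use embeddings or model-driven routing rather than substring matching; the point is only that strategies share one surface and can be swapped per Virtual MCP.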
| Capability | What it does |
|---|---|
| MeshContext | Unified runtime interface providing auth, storage, observability, and policy control |
| defineTool() | Declarative API for typed, auditable, observable MCP tools |
| AccessControl | Fine-grained RBAC via Better Auth (OAuth 2.1 + API keys per workspace/project) |
| Multi-tenancy | Workspace/project isolation for config, credentials, policies, and audit logs |
| OpenTelemetry | Full tracing and metrics for tools, workflows, and UI interactions |
| Storage Adapters | Kysely ORM with SQLite / Postgres, easily swapped |
| Proxy Layer | Secure bridge to remote MCP servers with token vault + OAuth |
| Virtual MCPs | Compose and expose governed toolsets as new MCP servers |
| Event Bus | Pub/sub between connections with scheduled/cron delivery and at-least-once guarantees |
| Bindings | Capability contracts (e.g., agents, workflows, views) so apps target interfaces instead of specific MCP implementations |
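To make the Event Bus row's at-least-once guarantee concrete: an event leaves the queue only once a subscriber acknowledges it, so a failed delivery is retried. A minimal sketch with hypothetical types and names (not the real event-bus implementation):

```typescript
// Minimal at-least-once delivery sketch. All names are illustrative.
type BusEvent = { id: number; topic: string; payload: unknown };

class TinyBus {
  private pending: BusEvent[] = [];
  // A handler returns true to acknowledge the delivery.
  private subs = new Map<string, (e: BusEvent) => boolean>();

  subscribe(topic: string, handler: (e: BusEvent) => boolean): void {
    this.subs.set(topic, handler);
  }

  publish(e: BusEvent): void {
    this.pending.push(e);
    this.flush();
  }

  // Keep unacknowledged events queued; a scheduler would call this
  // periodically (the real bus also supports cron-style delivery).
  flush(): void {
    this.pending = this.pending.filter((e) => {
      const handler = this.subs.get(e.topic);
      return !(handler && handler(e)); // drop only on ack
    });
  }
}

const bus = new TinyBus();
let attempts = 0;
bus.subscribe("jobs", () => ++attempts >= 2); // nack the first delivery
bus.publish({ id: 1, topic: "jobs", payload: {} });
bus.flush(); // retry succeeds on the second attempt
console.log(attempts); // 2
```

The same ack-or-retry loop is what lets a crashed subscriber see the event again on the next scheduled flush; handlers therefore need to tolerate duplicate deliveries.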
Tools are first-class citizens. Type-safe, audited, observable, and callable via MCP.
import { z } from "zod";
import { defineTool } from "~/core/define-tool";
export const CONNECTION_CREATE = defineTool({
name: "CONNECTION_CREATE",
description: "Create a new MCP connection",
inputSchema: z.object({
name: z.string(),
connection: z.object({
type: z.enum(["HTTP", "SSE", "WebSocket"]),
url: z.string().url(),
token: z.string().optional(),
}),
}),
outputSchema: z.object({
id: z.string(),
scope: z.enum(["workspace", "project"]),
}),
handler: async (input, ctx) => {
await ctx.access.check();
const conn = await ctx.storage.connections.create({
projectId: ctx.project?.id ?? null,
...input,
createdById: ctx.auth.user!.id,
});
return { id: conn.id, scope: conn.projectId ? "project" : "workspace" };
},
});

Every tool call automatically gets: input/output validation, access control checks, audit logging, and OpenTelemetry traces.
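That automatic behavior can be pictured as a wrapper around the handler. The sketch below illustrates the idea only (hand-rolled validators instead of Zod, a simplified context, no real tracing); it is not the actual defineTool implementation:

```typescript
// Illustrative sketch of what a defineTool-style wrapper does:
// validate input, check access, run the handler, validate output,
// and write an audit entry. Names and shapes are hypothetical.
type Validator<T> = (value: unknown) => T;

interface Ctx {
  accessCheck(): Promise<void>;
  audit(entry: { tool: string; ok: boolean }): void;
}

interface ToolSpec<I, O> {
  name: string;
  validateInput: Validator<I>;
  validateOutput: Validator<O>;
  handler: (input: I, ctx: Ctx) => Promise<O>;
}

function defineToolSketch<I, O>(spec: ToolSpec<I, O>) {
  return async (raw: unknown, ctx: Ctx): Promise<O> => {
    const input = spec.validateInput(raw); // input validation
    await ctx.accessCheck();               // access control
    try {
      const out = spec.validateOutput(await spec.handler(input, ctx));
      ctx.audit({ tool: spec.name, ok: true }); // audit log (success)
      return out;
    } catch (err) {
      ctx.audit({ tool: spec.name, ok: false }); // audit log (failure)
      throw err;
    }
  };
}

// Tiny demo tool and context.
const log: string[] = [];
const ctx: Ctx = {
  accessCheck: async () => {},
  audit: (e) => { log.push(`${e.tool}:${e.ok}`); },
};

const echo = defineToolSketch<{ msg: string }, { msg: string }>({
  name: "ECHO",
  validateInput: (v) => {
    const obj = v as { msg?: unknown };
    if (typeof obj?.msg !== "string") throw new Error("msg must be a string");
    return { msg: obj.msg };
  },
  validateOutput: (v) => v as { msg: string },
  handler: async (input) => input,
});

(async () => {
  const out = await echo({ msg: "hi" }, ctx);
  console.log(out.msg, log);
})();
```

Because validation and auditing live in the wrapper rather than in each handler, every tool gets them for free and handlers stay focused on business logic.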
├── apps/
│   ├── mesh/                  # Full-stack MCP Mesh (Hono API + Vite/React)
│   │   ├── src/
│   │   │   ├── api/           # Hono HTTP + MCP proxy routes
│   │   │   ├── auth/          # Better Auth (OAuth + API keys)
│   │   │   ├── core/          # MeshContext, AccessControl, defineTool
│   │   │   ├── tools/         # Built-in MCP management tools
│   │   │   ├── storage/       # Kysely DB adapters
│   │   │   ├── event-bus/     # Pub/sub event delivery system
│   │   │   ├── encryption/    # Token vault & credential management
│   │   │   ├── observability/ # OpenTelemetry tracing & metrics
│   │   │   └── web/           # React 19 admin UI
│   │   └── migrations/        # Kysely database migrations
│   └── docs/                  # Astro documentation site
│
└── packages/
    ├── bindings/          # Core MCP bindings and connection abstractions
    ├── runtime/           # MCP proxy, OAuth, and runtime utilities
    ├── ui/                # Shared React components (shadcn-based)
    ├── cli/               # CLI tooling (deco commands)
    ├── create-deco/       # Project scaffolding (npm create deco)
    └── vite-plugin-deco/  # Vite plugin for Deco projects
# Install dependencies
bun install
# Run dev server (client + API)
bun run dev
# Run tests
bun test
# Type check
bun run check
# Lint
bun run lint
# Format
bun run fmt

bun run dev:client   # Vite dev server (port 4000)
bun run dev:server   # Hono server with hot reload
bun run migrate      # Run database migrations

# Docker Compose (SQLite)
docker compose -f deploy/docker-compose.yml up
# Docker Compose (PostgreSQL)
docker compose -f deploy/docker-compose.postgres.yml up
# Self-host with Bun
bun run build:client && bun run build:server
bun run start
# Kubernetes
kubectl apply -f k8s/

Runs on any infrastructure: Docker, Kubernetes, AWS, GCP, or local Bun/Node runtimes. No vendor lock-in.
| Layer | Tech |
|---|---|
| Runtime | Bun / Node |
| Language | TypeScript + Zod |
| Framework | Hono (API) + Vite + React 19 |
| Database | Kysely (SQLite / PostgreSQL) |
| Auth | Better Auth (OAuth 2.1 + API keys) |
| Observability | OpenTelemetry |
| UI | React 19 + Tailwind v4 + shadcn |
| Protocol | Model Context Protocol (MCP) |
- [ ] Multi-tenant admin dashboard
- [ ] MCP bindings (swap providers without rewrites)
- [ ] Version history for mesh configs
- [ ] NPM package runtime
- [ ] Edge debugger / live tracing
- [ ] Cost analytics and spend caps
- [ ] MCP Store β discover and install pre-built MCP apps
The MCP Mesh is the infrastructure layer of decoCMS.
| Layer | What it does |
|---|---|
| MCP Mesh | Connect, govern, and observe MCP traffic |
| MCP Studio (coming soon) | Package durable MCP capabilities into shareable apps (SDK + no-code admin) |
| MCP Store (coming soon) | Discover, install (and eventually monetize) pre-built MCP apps |
The MCP Mesh ships with a Sustainable Use License (SUL). See LICENSE.md.
- ✅ Free to self-host for internal use
- ✅ Free for client projects (agencies, SIs)
- ⚠️ Commercial license required for SaaS or revenue-generating production systems
Questions? [email protected]
We welcome contributions! Run the following before submitting a PR:
bun run fmt # Format code
bun run lint # Check linting
bun test            # Run tests

See AGENTS.md for detailed coding guidelines and conventions.