
fast-mcp
A Ruby Implementation of the Model Context Protocol

No complex protocols, no integration headaches, no compatibility issues. Just beautiful, expressive Ruby code.

AI models are powerful, but they need to interact with your applications to be truly useful. Traditional approaches mean wrestling with:

- Complex communication protocols and custom JSON formats
- Integration challenges with different model providers
- Compatibility issues between your app and AI tools
- Managing the state between AI interactions and your data

Fast MCP solves all these problems by providing a clean, Ruby-focused implementation of the Model Context Protocol, making AI integration a joy, not a chore.

- Tools API - Let AI models call your Ruby functions securely, with in-depth argument validation through Dry-Schema
- Resources API - Share data between your app and AI models
- Multiple Transports - Choose from STDIO, HTTP, or SSE based on your needs
- Framework Integration - Works seamlessly with Rails, Sinatra, and Hanami
- Authentication Support - Secure your AI endpoints with ease
- Real-time Updates - Subscribe to changes for interactive applications
```ruby
# Define tools for AI models to use
server = MCP::Server.new(name: 'recipe-ai', version: '1.0.0')

# Define a tool by inheriting from MCP::Tool
class GetRecipesTool < MCP::Tool
  description "Find recipes based on ingredients"

  # These arguments generate the JSON schema presented to the MCP client,
  # and they are validated at runtime. The validation is based on Dry-Schema,
  # with the addition of the description.
  arguments do
    required(:ingredients).array(:string).description("List of ingredients")
    optional(:cuisine).filled(:string).description("Type of cuisine")
  end

  def call(ingredients:, cuisine: nil)
    Recipe.find_by_ingredients(ingredients, cuisine: cuisine)
  end
end

# Register the tool with the server
server.register_tool(GetRecipesTool)

# Share data resources with AI models by inheriting from MCP::Resource
class IngredientsResource < MCP::Resource
  uri "food/popular_ingredients"
  resource_name "Popular Ingredients"
  mime_type "application/json"

  def default_content
    JSON.generate(Ingredient.popular.as_json)
  end
end

# Register the resource with the server
server.register_resource(IngredientsResource)

# Access the resource through the server
server.read_resource("food/popular_ingredients")

# Update the resource content through the server
server.update_resource("food/popular_ingredients", JSON.generate({ id: 1, name: 'tomato' }))

# Easily integrate with web frameworks
# config/application.rb (Rails)
config.middleware.use MCP::RackMiddleware.new(
  name: 'recipe-ai',
  version: '1.0.0'
) do |server|
  # Register tools and resources here
  server.register_tool(GetRecipesTool)
end

# Secure your AI endpoints
config.middleware.use MCP::AuthenticatedRackMiddleware.new(
  name: 'recipe-ai',
  version: '1.0.0',
  token: ENV['MCP_AUTH_TOKEN']
)

# Build real-time applications
server.on_resource_update do |resource|
  ActionCable.server.broadcast("recipe_updates", resource.metadata)
end
```
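Because tools are plain Ruby classes, they can also be exercised directly in a unit test before any transport is wired up. A minimal sketch (the test harness is ours, not part of the gem's documented API; it assumes `GetRecipesTool.new` takes no arguments and that a `Recipe` model is available or stubbed):

```ruby
require 'minitest/autorun'

class GetRecipesToolTest < Minitest::Test
  def test_finds_recipes_for_given_ingredients
    # Calls the tool's #call method directly, bypassing the MCP transport.
    result = GetRecipesTool.new.call(ingredients: ['tomato', 'basil'])

    refute_nil result
  end
end
```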
In your Gemfile:

```ruby
gem 'fast-mcp'
```

Then run:

```bash
bundle install
```

Or install it yourself:

```bash
gem install fast-mcp
```
```ruby
require 'fast_mcp'

# Create an MCP server
server = MCP::Server.new(name: 'my-ai-server', version: '1.0.0')

# Define a tool by inheriting from MCP::Tool
class SummarizeTool < MCP::Tool
  description "Summarize a given text"

  arguments do
    required(:text).filled(:string).description("Text to summarize")
    optional(:max_length).filled(:integer).description("Maximum length of summary")
  end

  def call(text:, max_length: 100)
    # Your summarization logic here
    text.split('.').first(3).join('.') + '...'
  end
end

# Register the tool with the server
server.register_tool(SummarizeTool)

# Create a resource by inheriting from MCP::Resource
class StatisticsResource < MCP::Resource
  uri "data/statistics"
  resource_name "Usage Statistics"
  description "Current system statistics"
  mime_type "application/json"

  def default_content
    JSON.generate(
      users_online: 120,
      queries_per_minute: 250,
      popular_topics: ["Ruby", "AI", "WebDev"]
    )
  end
end

# Register the resource with the server
server.register_resource(StatisticsResource)

# Start the server
server.start
```
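With a STDIO transport (which we assume `server.start` uses by default here), the server reads JSON-RPC 2.0 messages from stdin and writes responses to stdout, so you can smoke-test it from a shell. A rough sketch, assuming the quickstart above is saved as `my_ai_server.rb` (a strict MCP server may require the `initialize` handshake before answering `tools/list`):

```bash
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' | ruby my_ai_server.rb
```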
```ruby
# config/application.rb
module YourApp
  class Application < Rails::Application
    # ...
    config.middleware.use MCP::RackMiddleware.new(
      name: 'my-ai-server',
      version: '1.0.0'
    ) do |server|
      # Register tools and resources here
      server.register_tool(SummarizeTool)
    end
  end
end
```
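To confirm the middleware is actually mounted, the standard Rails middleware listing should show it (the grep pattern is just a guess at how the entry is printed):

```bash
bin/rails middleware | grep -i mcp
```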
The MCP team has developed a very useful inspector that you can use to validate your implementation. I suggest using the examples provided with this project as easy boilerplate: clone this project, then give it a go!
```bash
npx @modelcontextprotocol/inspector examples/server_with_stdio_transport.rb
```
Or to test with an SSE transport using a rack middleware:
```bash
npx @modelcontextprotocol/inspector examples/rack_middleware.rb
```
Or to test over SSE with an authenticated rack middleware:
```bash
npx @modelcontextprotocol/inspector examples/authenticated_rack_middleware.rb
```
You can test your custom implementation with the official MCP inspector by using:
```bash
# Test with a stdio transport:
npx @modelcontextprotocol/inspector path/to/your_ruby_file.rb

# Test with an HTTP / SSE server. In the UI, select SSE and enter your address.
npx @modelcontextprotocol/inspector
```
```ruby
# app.rb
require 'sinatra'
require 'fast_mcp'

use MCP::RackMiddleware.new(name: 'my-ai-server', version: '1.0.0') do |server|
  # Register tools and resources here
  server.register_tool(SummarizeTool)
end

get '/' do
  'Hello World!'
end
```
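To try it out, start the app and point the inspector at it. We assume Sinatra's default port 4567 here; the exact SSE endpoint path is defined by the middleware, so check the gem's transport docs:

```bash
ruby app.rb
# In a second terminal, open the inspector UI, select SSE, and enter the server address:
npx @modelcontextprotocol/inspector
```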
Add your server to your Claude Desktop configuration at:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "my-great-server": {
      "command": "ruby",
      "args": [
        "/Users/path/to/your/awesome/fast-mcp/server.rb"
      ]
    }
  }
}
```
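If your server lives inside a Bundler project, a variant along these lines may work better (the paths are placeholders; the `env` key is how Claude Desktop passes environment variables, and `BUNDLE_GEMFILE` points Bundler at your project's Gemfile):

```json
{
  "mcpServers": {
    "my-great-server": {
      "command": "bundle",
      "args": ["exec", "ruby", "/path/to/your/project/server.rb"],
      "env": {
        "BUNDLE_GEMFILE": "/path/to/your/project/Gemfile"
      }
    }
  }
}
```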
| Feature | Status |
|---|---|
| ✅ JSON-RPC 2.0 | Full implementation for communication |
| ✅ Tool Definition & Calling | Define and call tools with rich argument types |
| ✅ Resource Management | Create, read, update, and subscribe to resources |
| ✅ Transport Options | STDIO, HTTP, and SSE for flexible integration |
| ✅ Framework Integration | Rails, Sinatra, Hanami, and any Rack-compatible framework |
| ✅ Authentication | Secure your AI endpoints with token authentication |
| ✅ Schema Support | Full JSON Schema for tool arguments with validation |
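For reference, this is roughly what a tool invocation looks like on the wire, using the `tools/call` method from the MCP specification (the tool name shown is a guess at how `GetRecipesTool` would be exposed; fast-mcp may derive it differently):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_recipes",
    "arguments": { "ingredients": ["tomato", "basil"], "cuisine": "italian" }
  }
}
```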
- AI-powered Applications: Connect LLMs to your Ruby app's functionality
- Real-time Dashboards: Build dashboards with live AI-generated insights
- Microservice Communication: Use MCP as a clean protocol between services
- Interactive Documentation: Create AI-enhanced API documentation
- Chatbots and Assistants: Build AI assistants with access to your app's data
- Getting Started Guide
- Integration Guide
- Rails Integration
- Sinatra Integration
- Hanami Integration
- Resources
- Tools
- Transports
- API Reference
Check out the examples directory for more detailed examples:
- Basic Examples
- Web Integration

Requirements:

- Ruby 3.2+
We welcome contributions to Fast MCP! Here's how you can help:
- Fork the repository
- Create your feature branch (`git checkout -b my-new-feature`)
- Commit your changes (`git commit -am 'Add some feature'`)
- Push to the branch (`git push origin my-new-feature`)
- Create a new Pull Request
Please read our Contributing Guide for more details.
This project is available as open source under the terms of the MIT License.
- The Model Context Protocol team for creating the specification
- The Dry-Schema team for the argument validation
- All contributors to this project