
mcp
Laravel MCP makes it easy to add MCP servers to your project and let AI talk to your apps.
Stars: 331

Laravel MCP Server SDK makes it easy to add MCP servers to your project and let AI talk to your apps. It provides tools for creating servers, tools, resources, prompts, and registering servers for web-based and local access. The package includes features for handling tool inputs, annotating tools, tool results, streaming tool responses, creating resources, creating prompts, and authentication using Laravel Passport. The MCP Inspector tool is available for testing and debugging servers.
README:
[!IMPORTANT] This package is still in development and not recommended for public usage. It is currently intended only to power Boost.
Laravel MCP makes it easy to add MCP servers to your project and let AI talk to your apps.
To get started, install Laravel MCP via the Composer package manager:
composer require laravel/mcp
Next, publish the routes/ai.php file to define your MCP servers:
php artisan vendor:publish --tag=ai-routes
The package will automatically register MCP servers defined in this file.
Create the Server and Tool
First, create a new MCP server using the make:mcp-server Artisan command:
php artisan make:mcp-server DemoServer
Next, create a tool for the MCP server:
php artisan make:mcp-tool HelloTool
This will create two files: app/Mcp/Servers/DemoServer.php and app/Mcp/Tools/HelloTool.php.
Add the Tool to the Server
Open app/Mcp/Servers/DemoServer.php and add your new tool to the $tools property:
<?php

namespace App\Mcp\Servers;

use App\Mcp\Tools\HelloTool;
use Laravel\Mcp\Server;

class DemoServer extends Server
{
    public array $tools = [
        HelloTool::class,
    ];
}
Next, register your server in routes/ai.php:
use App\Mcp\Servers\DemoServer;
use Laravel\Mcp\Facades\Mcp;
Mcp::local('demo', DemoServer::class);
Finally, you can test it with the MCP Inspector tool:
php artisan mcp:inspector demo
A server is the central point that handles communication and exposes MCP methods, like tools and resources. Create a server with the make:mcp-server Artisan command:
php artisan make:mcp-server ExampleServer
Tools let your server expose functionality that clients can call, and that language models can use to perform actions, run code, or interact with external systems.
Use the make:mcp-tool Artisan command to generate a tool class:
php artisan make:mcp-tool ExampleTool
Your tools can request arguments from the MCP client using a tool input schema:
use Illuminate\JsonSchema\JsonSchema;

public function schema(JsonSchema $schema): array
{
    return [
        'name' => $schema->string()
            ->description('The name of the user')
            ->required(),
    ];
}
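Putting this together, here is a minimal sketch of a complete tool. The handle signature and ToolResult return type are described in the sections below; the greeting logic itself is purely illustrative:
<?php

namespace App\Mcp\Tools;

use Illuminate\JsonSchema\JsonSchema;
use Laravel\Mcp\Server\Tool;
use Laravel\Mcp\Server\Tools\ToolResult;

class ExampleTool extends Tool
{
    // Request a "name" argument from the MCP client.
    public function schema(JsonSchema $schema): array
    {
        return [
            'name' => $schema->string()
                ->description('The name of the user')
                ->required(),
        ];
    }

    // Arguments arrive as an array keyed by the schema's property names.
    public function handle(array $arguments): ToolResult
    {
        return ToolResult::text('Hello, '.$arguments['name'].'!');
    }
}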
You can optionally add annotations to your tools to provide hints to the MCP client about their behavior. This is done using PHP attributes on your tool class.
Annotation | Type | Description
---|---|---
#[Title] | string | A human-readable title for the tool.
#[IsReadOnly] | boolean | Indicates the tool does not modify its environment.
#[IsDestructive] | boolean | Indicates the tool may perform destructive updates. This is only meaningful when the tool is not read-only.
#[IsIdempotent] | boolean | Indicates that calling the tool repeatedly with the same arguments has no additional effect. This is only meaningful when the tool is not read-only.
#[IsOpenWorld] | boolean | Indicates the tool may interact with an "open world" of external entities.
Here's an example of how to add annotations to a tool:
<?php

namespace App\Mcp\Tools;

use Laravel\Mcp\Server\Tool;
use Laravel\Mcp\Server\Tools\Annotations\IsReadOnly;
use Laravel\Mcp\Server\Tools\Annotations\Title;

#[Title('A read-only tool')]
#[IsReadOnly]
class ExampleTool extends Tool
{
    // ...
}
The handle method of a tool must return an instance of Laravel\Mcp\Server\Tools\ToolResult. This class provides a few convenient methods for creating responses. For a simple text response, you can use the text() method:
$response = ToolResult::text('This is a test response.');
To indicate that the tool execution resulted in an error, use the error() method:
$response = ToolResult::error('This is an error response.');
A tool result can contain multiple content items. The items() method allows you to construct a result from different content objects, like TextContent:
use Laravel\Mcp\Server\Tools\TextContent;

$plainText = 'This is the plain text version.';
$markdown = 'This is the **markdown** version.';

$response = ToolResult::items(
    new TextContent($plainText),
    new TextContent($markdown),
);
For tools that send multiple updates or stream large amounts of data, you can return a generator from the handle() method. For web-based servers, this automatically opens an SSE stream and sends an event for each message the generator yields.
Within your generator, you can yield any number of Laravel\Mcp\Server\Tools\ToolNotification instances to send intermediate updates to the client. When you're done, yield a single Laravel\Mcp\Server\Tools\ToolResult to complete the execution.
This is particularly useful for long-running tasks or when you want to provide real-time feedback to the client, such as streaming tokens in a chat application:
<?php

namespace App\Mcp\Tools;

use Generator;
use Laravel\Mcp\Server\Tool;
use Laravel\Mcp\Server\Tools\ToolNotification;
use Laravel\Mcp\Server\Tools\ToolResult;

class ChatStreamingTool extends Tool
{
    public function handle(array $arguments): Generator
    {
        $tokens = explode(' ', $arguments['message']);

        // Yield a notification per token; on web-based servers each
        // one is delivered to the client as an SSE event.
        foreach ($tokens as $token) {
            yield new ToolNotification('chat/token', ['token' => $token.' ']);
        }

        // Finish by yielding a single ToolResult to complete the call.
        yield ToolResult::text('Message streamed successfully.');
    }
}
Resources let your server expose data and content that clients can read and use as context when interacting with language models.
Use the make:mcp-resource Artisan command to generate a resource class:
php artisan make:mcp-resource ExampleResource
To make a resource available to clients, you must register it in your server class in the $resources property, as shown in the combined example after the prompts section below.
Prompts let your server share reusable prompts that clients can use to prompt the LLM.
Use the make:mcp-prompt Artisan command to generate a prompt class:
php artisan make:mcp-prompt ExamplePrompt
To make a prompt available to clients, you must register it in your server class in the $prompts property:
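As a minimal sketch, registering resources and prompts mirrors the $tools array shown earlier. The App\Mcp\Resources and App\Mcp\Prompts namespaces here are assumptions, chosen by analogy with App\Mcp\Tools:
<?php

namespace App\Mcp\Servers;

use App\Mcp\Prompts\ExamplePrompt;
use App\Mcp\Resources\ExampleResource;
use Laravel\Mcp\Server;

class ExampleServer extends Server
{
    // Resources and prompts are registered on the server
    // the same way tools are (see the $tools example above).
    public array $resources = [
        ExampleResource::class,
    ];

    public array $prompts = [
        ExamplePrompt::class,
    ];
}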
The easiest way to register MCP servers is by publishing the routes/ai.php file included with the package. If this file exists, the package will automatically load any servers registered via the Mcp facade. You can expose a server over HTTP or make it available locally as an Artisan command.
To register a web-based MCP server that can be accessed via HTTP POST requests, use the web method:
use App\Mcp\Servers\ExampleServer;
use Laravel\Mcp\Facades\Mcp;
Mcp::web('/mcp/demo', ExampleServer::class);
This will make ExampleServer available at the /mcp/demo endpoint.
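As a rough smoke test, you could POST the initial MCP JSON-RPC message to the endpoint with curl. This sketch assumes the app is served locally via php artisan serve; the protocolVersion value and Accept header follow the MCP Streamable HTTP convention and may need adjusting for your setup:
curl -X POST http://localhost:8000/mcp/demo \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl","version":"0.0.0"}}}'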
To register a local MCP server that can be run as an Artisan command:
use App\Mcp\Servers\ExampleServer;
use Laravel\Mcp\Facades\Mcp;
Mcp::local('demo', ExampleServer::class);
This makes the server available via the mcp:start Artisan command:
php artisan mcp:start demo
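Local servers are typically launched by the MCP client itself and communicate over standard input and output. As a hypothetical illustration, an entry in a client configuration file (for example, Claude Desktop's mcpServers map; the exact file and schema depend on your client) might look like:
{
  "mcpServers": {
    "demo": {
      "command": "php",
      "args": ["/path/to/your/app/artisan", "mcp:start", "demo"]
    }
  }
}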
Web-based MCP servers can be protected using Laravel Passport, turning your MCP server into an OAuth2 protected resource.
If you already have Passport set up for your app, all you need to do is add the Mcp::oauthRoutes() helper to your routes/web.php file. This registers the required OAuth2 discovery and client registration endpoints. The method accepts an optional route prefix, which defaults to oauth.
use Laravel\Mcp\Facades\Mcp;
Mcp::oauthRoutes();
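To use a different prefix, pass it as the optional argument mentioned above; the prefix name here is just an example:
Mcp::oauthRoutes('mcp-oauth');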
Then, apply the auth:api middleware to your server registration in routes/ai.php:
use App\Mcp\Servers\ExampleServer;
use Laravel\Mcp\Facades\Mcp;
Mcp::web('/mcp/demo', ExampleServer::class)
    ->middleware('auth:api');
Your MCP server is now protected using OAuth.
The MCP Inspector is an interactive tool for testing and debugging your MCP servers. You can use it to connect to your server, verify authentication, and try out tools, resources, and other parts of the protocol.
Run the mcp:inspector Artisan command to test your server:
php artisan mcp:inspector demo
This will launch the MCP Inspector and print connection settings you can enter to verify your server is set up correctly.
Thank you for considering contributing to Laravel MCP! You can read the contribution guide here.
In order to ensure that the Laravel community is welcoming to all, please review and abide by the Code of Conduct.
Please review our security policy on how to report security vulnerabilities.
Laravel MCP is open-sourced software licensed under the MIT license.
Similar Open Source Tools


code2prompt
code2prompt is a command-line tool that converts your codebase into a single LLM prompt with a source tree, prompt templating, and token counting. It automates generating LLM prompts from codebases of any size, customizing prompt generation with Handlebars templates, respecting .gitignore, filtering and excluding files using glob patterns, displaying token count, including Git diff output, copying prompt to clipboard, saving prompt to an output file, excluding files and folders, adding line numbers to source code blocks, and more. It helps streamline the process of creating LLM prompts for code analysis, generation, and other tasks.

please-cli
Please CLI is an AI helper script designed to create CLI commands by leveraging the GPT model. Users can input a command description, and the script will generate a Linux command based on that input. The tool offers various functionalities such as invoking commands, copying commands to the clipboard, asking questions about commands, and more. It supports parameters for explanation, using different AI models, displaying additional output, storing API keys, querying ChatGPT with specific models, showing the current version, and providing help messages. Users can install Please CLI via Homebrew, apt, Nix, dpkg, AUR, or manually from source. The tool requires an OpenAI API key for operation and offers configuration options for setting API keys and OpenAI settings. Please CLI is licensed under the Apache License 2.0 by TNG Technology Consulting GmbH.

garak
Garak is a free tool that checks whether a Large Language Model (LLM) can be made to fail in undesirable ways. It probes for hallucination, data leakage, prompt injection, misinformation, toxicity generation, jailbreaks, and many other weaknesses. Its developers are always interested in adding functionality to support more applications.

hash
HASH is a self-building, open-source database which grows, structures and checks itself. With it, we're creating a platform for decision-making, which helps you integrate, understand and use data in a variety of different ways.

codespin
CodeSpin.AI is a set of open-source code generation tools that leverage large language models (LLMs) to automate coding tasks. With CodeSpin, you can generate code in various programming languages, including Python, JavaScript, Java, and C++, by providing natural language prompts. CodeSpin offers a range of features to enhance code generation, such as custom templates, inline prompting, and the ability to use ChatGPT as an alternative to API keys. Additionally, CodeSpin provides options for regenerating code, executing code in prompt files, and piping data into the LLM for processing. By utilizing CodeSpin, developers can save time and effort in coding tasks, improve code quality, and explore new possibilities in code generation.

log10
Log10 is a one-line Python integration to manage your LLM data. It helps you log both closed and open-source LLM calls, compare and identify the best models and prompts, store feedback for fine-tuning, collect performance metrics such as latency and usage, and perform analytics and monitor compliance for LLM powered applications. Log10 offers various integration methods, including a python LLM library wrapper, the Log10 LLM abstraction, and callbacks, to facilitate its use in both existing production environments and new projects. Pick the one that works best for you. Log10 also provides a copilot that can help you with suggestions on how to optimize your prompt, and a feedback feature that allows you to add feedback to your completions. Additionally, Log10 provides prompt provenance, session tracking and call stack functionality to help debug prompt chains. With Log10, you can use your data and feedback from users to fine-tune custom models with RLHF, and build and deploy more reliable, accurate and efficient self-hosted models. Log10 also supports collaboration, allowing you to create flexible groups to share and collaborate over all of the above features.

garak
Garak is a vulnerability scanner designed for LLMs (Large Language Models) that checks for various weaknesses such as hallucination, data leakage, prompt injection, misinformation, toxicity generation, and jailbreaks. It combines static, dynamic, and adaptive probes to explore vulnerabilities in LLMs. Garak is a free tool developed for red-teaming and assessment purposes, focusing on making LLMs or dialog systems fail. It supports various LLM models and can be used to assess their security and robustness.

ai-starter-kit
SambaNova AI Starter Kits is a collection of open-source examples and guides designed to facilitate the deployment of AI-driven use cases for developers and enterprises. The kits cover various categories such as Data Ingestion & Preparation, Model Development & Optimization, Intelligent Information Retrieval, and Advanced AI Capabilities. Users can obtain a free API key using SambaNova Cloud or deploy models using SambaStudio. Most examples are written in Python but can be applied to any programming language. The kits provide resources for tasks like text extraction, fine-tuning embeddings, prompt engineering, question-answering, image search, post-call analysis, and more.

fabric
Fabric is an open-source framework for augmenting humans using AI. It provides a structured approach to breaking down problems into individual components and applying AI to them one at a time. Fabric includes a collection of pre-defined Patterns (prompts) that can be used for a variety of tasks, such as extracting the most interesting parts of YouTube videos and podcasts, writing essays, summarizing academic papers, creating AI art prompts, and more. Users can also create their own custom Patterns. Fabric is designed to be easy to use, with a command-line interface and a variety of helper apps. It is also extensible, allowing users to integrate it with their own AI applications and infrastructure.

moxin
Moxin is an AI LLM client written in Rust to demonstrate the functionality of the Robius framework for multi-platform application development. It is currently in early stages of development and not fully functional. The tool supports building and running on macOS and Linux systems, with packaging options available for distribution. Users can install the required WasmEdge WASM runtime and dependencies to build and run Moxin. Packaging for distribution includes generating `.deb` Debian packages, AppImage, and pacman installation packages for Linux, as well as `.app` bundles and `.dmg` disk images for macOS. The macOS app is not signed, leading to a warning on installation, which can be resolved by removing the quarantine attribute from the installed app.

mods
AI for the command line, built for pipelines. LLM-based AI is really good at interpreting the output of commands and returning the results in CLI-friendly text formats like Markdown. Mods is a simple tool that makes it super easy to use AI on the command line and in your pipelines. Mods works with OpenAI, Groq, Azure OpenAI, and LocalAI. To get started, install Mods and check out some of the examples below. Since Mods has built-in Markdown formatting, you may also want to grab Glow to give the output some pizzazz.

openai_trtllm
OpenAI-compatible API for TensorRT-LLM and NVIDIA Triton Inference Server, which allows you to integrate with langchain.

termax
Termax is an LLM agent in your terminal that converts natural language to commands. Its features include:
- Personalized Experience: Optimize the command generation with RAG.
- Various LLMs Support: OpenAI GPT, Anthropic Claude, Google Gemini, Mistral AI, and more.
- Shell Extensions: Plugin with popular shells like `zsh`, `bash` and `fish`.
- Cross Platform: Able to run on Windows, macOS, and Linux.

sandbox
Sandbox is an open-source cloud-based code editing environment with custom AI code autocompletion and real-time collaboration. It consists of a frontend built with Next.js, TailwindCSS, Shadcn UI, Clerk, Monaco, and Liveblocks, and a backend with Express, Socket.io, Cloudflare Workers, D1 database, R2 storage, Workers AI, and Drizzle ORM. The backend includes microservices for database, storage, and AI functionalities. Users can run the project locally by setting up environment variables and deploying the containers. Contributions are welcome following the commit convention and structure provided in the repository.
For similar tasks

mcp-framework
MCP-Framework is a TypeScript framework for building Model Context Protocol (MCP) servers with automatic directory-based discovery for tools, resources, and prompts. It provides powerful abstractions, simple server setup, and a CLI for rapid development and project scaffolding.


zig-aio
zig-aio is a library that provides an io_uring-like asynchronous API and coroutine-powered IO tasks for the Zig programming language. It offers support for different operating systems and backends, such as io_uring, iocp, and posix. The library aims to provide efficient IO operations by leveraging coroutines and async IO mechanisms. Users can create servers and clients with ease using the provided API functions for socket operations, sending and receiving data, and managing connections.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM:
* Set LLM usage limits for users on different pricing tiers
* Track LLM usage on a per user and per organization basis
* Block or redact requests containing PIIs
* Improve LLM reliability with failovers, retries and caching
* Distribute API keys with rate limits and cost limits for internal development/production use cases
* Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.