
MCP2Lambda
Run any AWS Lambda function as a Large Language Model (LLM) tool without code changes using Anthropic's Model Context Protocol (MCP).
Stars: 57

README:
Architecture overview (Mermaid diagram):
graph LR
A[Model] <--> B[MCP Client]
B <--> C["MCP2Lambda<br>(MCP Server)"]
C <--> D[Lambda Function]
D <--> E[Other AWS Services]
D <--> F[Internet]
D <--> G[VPC]
style A fill:#f9f,stroke:#333,stroke-width:2px
style B fill:#bbf,stroke:#333,stroke-width:2px
style C fill:#bfb,stroke:#333,stroke-width:4px
style D fill:#fbb,stroke:#333,stroke-width:2px
style E fill:#fbf,stroke:#333,stroke-width:2px
style F fill:#dff,stroke:#333,stroke-width:2px
style G fill:#ffd,stroke:#333,stroke-width:2px
This MCP server acts as a bridge between MCP clients and AWS Lambda functions, allowing generative AI models to access and run Lambda functions as tools. This is useful, for example, to access private resources such as internal applications and databases without the need to provide public network access. This approach allows the model to use other AWS services, private networks, and the public internet.
From a security perspective, this approach implements segregation of duties by allowing the model to invoke the Lambda functions but not to access the other AWS services directly. The client only needs AWS credentials to invoke the Lambda functions. The Lambda functions can then interact with other AWS services (using the function role) and access public or private networks.
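As an illustration, a minimal IAM policy for the client credentials could look like the following sketch, assuming the default mcp2lambda- function name prefix (lambda:ListFunctions does not support resource-level scoping, so it is allowed on all resources):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:*:*:function:mcp2lambda-*"
    },
    {
      "Effect": "Allow",
      "Action": "lambda:ListFunctions",
      "Resource": "*"
    }
  ]
}
```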
The MCP server gives access to two tools:
- The first tool autodiscovers all Lambda functions in your account that match a prefix or an allowed list of names, and shares the function names and descriptions with the model.
- The second tool invokes those Lambda functions by name, passing the required parameters.
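Conceptually, these two tools map onto plain AWS Lambda API calls. The following boto3 sketch illustrates the idea (not the server's actual code; the function name and payload are hypothetical):

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# list_lambda_functions: discover functions and share names/descriptions.
functions = [
    {"name": f["FunctionName"], "description": f.get("Description", "")}
    for f in lambda_client.list_functions()["Functions"]
    if f["FunctionName"].startswith("mcp2lambda-")  # default prefix
]

# invoke_lambda_function: call a function by name with JSON parameters.
response = lambda_client.invoke(
    FunctionName="mcp2lambda-example",  # hypothetical function name
    Payload=json.dumps({"email": "user@example.com"}),  # hypothetical input
)
result = json.loads(response["Payload"].read())
```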
No code changes are required. To improve results, review the following configuration options.
The gateway supports two different strategies for handling Lambda functions:
- Pre-Discovery Mode (default: enabled): registers each Lambda function as an individual tool at startup. This provides a more intuitive interface where each function appears as its own named tool.
- Generic Mode: uses two generic tools (list_lambda_functions and invoke_lambda_function) to interact with Lambda functions.
You can control this behavior through:
- Environment variable: PRE_DISCOVERY=true|false
- CLI flag: --no-pre-discovery (disables pre-discovery mode)
Example:
# Disable pre-discovery mode
export PRE_DISCOVERY=false
python main.py
# Or using CLI flag to disable pre-discovery
python main.py --no-pre-discovery
- To provide the MCP client with the knowledge to use a Lambda function, the description of the Lambda function should indicate what the function does and which parameters it uses. See the sample functions for a quick demo and more details.
- To help the model use the tools available via AWS Lambda, you can add something like this to your system prompt:
Use the AWS Lambda tools to improve your answers.
MCP2Lambda enables LLMs to interact with AWS Lambda functions as tools, extending their capabilities beyond text generation. This allows models to:
- Access real-time and private data, including data sources in your VPCs
- Execute custom code using a Lambda function as a sandbox environment
- Interact with external services and APIs using the Lambda functions' internet access (and bandwidth)
- Perform specialized calculations or data processing
The server uses the MCP protocol, which standardizes the way AI models can access external tools.
By default, only functions whose name starts with mcp2lambda- will be available to the model.
Prerequisites:
- Python 3.12 or higher
- AWS account with configured credentials
- AWS Lambda functions (sample functions provided in the repo)
- An application using Amazon Bedrock with the Converse API
- An MCP-compatible client like Claude Desktop
To install MCP2Lambda for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @danilop/MCP2Lambda --client claude
- Clone the repository:
git clone https://github.com/yourusername/mcp2lambda.git
cd mcp2lambda
- Configure AWS credentials. For example, using the AWS CLI:
aws configure
This repository includes three sample Lambda functions that demonstrate different use cases. These functions have basic permissions and can only write to CloudWatch logs.
Retrieves a customer ID based on an email address. This function takes an email parameter and returns the associated customer ID, demonstrating how to build simple lookup tools. The function is hardcoded to reply to the [email protected] email address. For example, you can ask the model to get the customer ID for the email [email protected].
Retrieves detailed customer information based on a customer ID. This function returns customer details like name, email, and status, showing how Lambda can provide context-specific data. The function is hardcoded to reply to the customer ID returned by the previous function. For example, you can ask the model to "Get the customer status for email [email protected]". This will use both functions to get to the result.
Executes arbitrary Python code within a Lambda sandbox environment. This powerful function allows Claude to write and run Python code to perform calculations, data processing, or other operations not built into the model. For example, you can ask the model to "Calculate the number of prime numbers between 1 and 10, 1 and 100, and so on up to 1M".
The repository includes these sample Lambda functions in the sample_functions directory.
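To give a rough idea of the shape of such a function, the email-to-customer-ID lookup could be as simple as this hypothetical sketch (not the actual code in sample_functions):

```python
# Hypothetical sketch of the email-to-customer-ID sample; the real
# function hardcodes a single known email address.
def lambda_handler(event, context):
    email = event.get("email")
    if email == "customer@example.com":  # placeholder for the hardcoded address
        return {"customerId": "123456"}  # hypothetical customer ID
    return {"error": f"No customer found for {email}"}
```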
- Install the AWS SAM CLI: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/install-sam-cli.html
- Deploy the sample functions:
cd sample_functions
sam build
sam deploy
The sample functions will be deployed with the prefix mcp2lambda-.
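For context, the SAM template entry for one of these functions might look roughly like this hypothetical excerpt (note that the Description is what the model sees when discovering tools):

```yaml
Resources:
  CustomerIdFromEmail:  # hypothetical logical ID
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: mcp2lambda-customer-id-from-email  # matches the default prefix
      Handler: app.lambda_handler
      Runtime: python3.12
      Description: Return the customer ID for a given email address.
```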
MCP2Lambda can also be used with Amazon Bedrock's Converse API, allowing you to use the MCP protocol with any of the models supported by Bedrock.
The mcp_client_bedrock directory contains a client implementation that connects MCP2Lambda to Amazon Bedrock models.
See https://github.com/mikegc-aws/amazon-bedrock-mcp for more information.
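To sketch how a Lambda-backed tool surfaces in Bedrock, a single Converse API call with one registered tool looks roughly like this (the tool name and schema are hypothetical; the real wiring lives in mcp_client_bedrock):

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

# Hypothetical tool definition mirroring a Lambda function exposed via MCP.
tool_config = {
    "tools": [{
        "toolSpec": {
            "name": "mcp2lambda_run_python_code",  # hypothetical tool name
            "description": "Run Python code in a Lambda sandbox.",
            "inputSchema": {"json": {
                "type": "object",
                "properties": {"code": {"type": "string"}},
                "required": ["code"],
            }},
        }
    }]
}

response = bedrock.converse(
    modelId="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
    messages=[{"role": "user", "content": [{"text": "Compute 2**32."}]}],
    toolConfig=tool_config,
)
# If the model decides to use the tool, the response contains a toolUse
# block that the client forwards to the MCP server for execution.
```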
Prerequisites:
- Amazon Bedrock access and permissions to use models like Claude, Mistral, Llama, etc.
- Boto3 configured with appropriate credentials
- Navigate to the mcp_client_bedrock directory:
cd mcp_client_bedrock
- Install dependencies:
uv pip install -e .
- Run the client:
python main.py
The client is configured to use Anthropic's Claude 3.7 Sonnet by default, but you can modify the model_id in main.py to use other Bedrock models:
# Examples of supported models:
model_id = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
# model_id = "us.amazon.nova-pro-v1:0"
You can also customize the system prompt in the same file to change how the model behaves.
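For example, assuming the prompt is kept in a plain string in main.py (hypothetical variable name):

```python
# Hypothetical: adjust the system prompt used by the Bedrock client.
system_prompt = (
    "You are a helpful assistant. "
    "Use the AWS Lambda tools to improve your answers."
)
```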
- Start the MCP2Lambda server in one terminal:
cd mcp2lambda
uv run main.py
- Run the Bedrock client in another terminal:
cd mcp_client_bedrock
python main.py
- Interact with the model through the command-line interface. The model will have access to the Lambda functions deployed earlier.
Add the following to your Claude Desktop configuration file:
{
"mcpServers": {
"mcp2lambda": {
"command": "uv",
"args": [
"--directory",
"<full path to the mcp2lambda directory>",
"run",
"main.py"
]
}
}
}
To help the model use tools via AWS Lambda, you can add a sentence like this to the personal preferences in your settings profile:
Use the AWS Lambda tools to improve your answers.
Start the MCP server locally:
cd mcp2lambda
uv run main.py
Similar Open Source Tools

talon-ai-tools
Control large language models and AI tools through voice commands using the Talon Voice dictation engine. This tool is designed to help users quickly edit text and code by voice, reduce keyboard use for those with health issues, and speed up workflows with AI commands across the desktop. It prompts and extends tools like GitHub Copilot and the OpenAI API for text and image generation. Users can set up the tool by downloading the repo, obtaining an OpenAI API key, and customizing the endpoint URL for preferred models. The tool can be used without an OpenAI key and can be used exclusively with Copilot for those not needing LLM integration.

SWELancer-Benchmark
SWE-Lancer is a benchmark repository containing datasets and code for the paper 'SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?'. It provides instructions for package management, building Docker images, configuring environment variables, and running evaluations. Users can use this tool to assess the performance of language models in real-world freelance software engineering tasks.

vectara-answer
Vectara Answer is a sample app for Vectara-powered Summarized Semantic Search (or question-answering) with advanced configuration options. For examples of what you can build with Vectara Answer, check out Ask News, LegalAid, or any of the other demo applications.

tonic_validate
Tonic Validate is a framework for the evaluation of LLM outputs, such as Retrieval Augmented Generation (RAG) pipelines. Validate makes it easy to evaluate, track, and monitor your LLM and RAG applications. Validate allows you to evaluate your LLM outputs through the use of our provided metrics which measure everything from answer correctness to LLM hallucination. Additionally, Validate has an optional UI to visualize your evaluation results for easy tracking and monitoring.

telemetry-airflow
This repository codifies the Airflow cluster that is deployed at workflow.telemetry.mozilla.org (behind SSO) and commonly referred to as "WTMO" or simply "Airflow". Some links relevant to users and developers of WTMO:
- The `dags` directory in this repository contains some custom DAG definitions
- Many of the DAGs registered with WTMO don't live in this repository, but are instead generated from ETL task definitions in bigquery-etl
- The Data SRE team maintains a WTMO Developer Guide (behind SSO)

AgentIQ
AgentIQ is a flexible library designed to seamlessly integrate enterprise agents with various data sources and tools. It enables true composability by treating agents, tools, and workflows as simple function calls. With features like framework agnosticism, reusability, rapid development, profiling, observability, evaluation system, user interface, and MCP compatibility, AgentIQ empowers developers to move quickly, experiment freely, and ensure reliability across agent-driven projects.

CoolCline
CoolCline is a proactive programming assistant that combines the best features of Cline, Roo Code, and Bao Cline. It seamlessly collaborates with your command line interface and editor, providing the most powerful AI development experience. It optimizes queries, allows quick switching of LLM Providers, and offers auto-approve options for actions. Users can configure LLM Providers, select different chat modes, perform file and editor operations, integrate with the command line, automate browser tasks, and extend capabilities through the Model Context Protocol (MCP). Context mentions help provide explicit context, and installation is easy through the editor's extension panel or by dragging and dropping the `.vsix` file. Local setup and development instructions are available for contributors.

Bard-API
The Bard API is a Python package that returns responses from Google Bard through the value of a cookie. It is an unofficial API that operates through reverse-engineering, utilizing cookie values to interact with Google Bard for users struggling with frequent authentication problems or unable to authenticate via Google Authentication. The Bard API is not a free service, but rather a tool provided to assist developers with testing certain functionalities due to the delayed development and release of Google Bard's API. It has been designed with a lightweight structure that can easily adapt to the emergence of an official API. Therefore, using it for any other purposes is strongly discouraged. If you have access to a reliable official PaLM-2 API or Google Generative AI API, replace the provided response with the corresponding official code. Check out https://github.com/dsdanielpark/Bard-API/issues/262.

OpenAI-sublime-text
The OpenAI Completion plugin for Sublime Text provides first-class code assistant support within the editor. It utilizes LLM models to manipulate code, engage in chat mode, and perform various tasks. The plugin supports OpenAI, llama.cpp, and ollama models, allowing users to customize their AI assistant experience. It offers separated chat histories and assistant settings for different projects, enabling context-specific interactions. Additionally, the plugin supports Markdown syntax with code language syntax highlighting, server-side streaming for faster response times, and proxy support for secure connections. Users can configure the plugin's settings to set their OpenAI API key, adjust assistant modes, and manage chat history. Overall, the OpenAI Completion plugin enhances the Sublime Text editor with powerful AI capabilities, streamlining coding workflows and fostering collaboration with AI assistants.

slack-bot
The Slack Bot is a tool designed to enhance the workflow of development teams by integrating with Jenkins, GitHub, GitLab, and Jira. It allows for custom commands, macros, crons, and project-specific commands to be implemented easily. Users can interact with the bot through Slack messages, execute commands, and monitor job progress. The bot supports features like starting and monitoring Jenkins jobs, tracking pull requests, querying Jira information, creating buttons for interactions, generating images with DALL-E, playing quiz games, checking weather, defining custom commands, and more. Configuration is managed via YAML files, allowing users to set up credentials for external services, define custom commands, schedule cron jobs, and configure VCS systems like Bitbucket for automated branch lookup in Jenkins triggers.

geti-sdk
The Intel® Geti™ SDK is a python package that enables teams to rapidly develop AI models by easing the complexities of model development and enhancing collaboration between teams. It provides tools to interact with an Intel® Geti™ server via the REST API, allowing for project creation, downloading, uploading, deploying for local inference with OpenVINO, setting project and model configuration, launching and monitoring training jobs, and media upload and prediction. The SDK also includes tutorial-style Jupyter notebooks demonstrating its usage.

unitycatalog
Unity Catalog is an open and interoperable catalog for data and AI, supporting multi-format tables, unstructured data, and AI assets. It offers plugin support for extensibility and interoperates with Delta Sharing protocol. The catalog is fully open with OpenAPI spec and OSS implementation, providing unified governance for data and AI with asset-level access control enforced through REST APIs.

describer
Describer is a tool that analyzes codebases using AI to generate architectural overviews, documentation, explanations, bug reports, and more. It scans all files in a directory and uses Google's Gemini AI to provide insights such as markdown architectural overviews, codebase summaries, code pattern analysis, codebase structure documentation, bug identification, and test idea generation. The tool respects .gitignore rules by default but allows users to include/exclude specific files or patterns for analysis.

genai-toolbox
Gen AI Toolbox for Databases is an open source server that simplifies building Gen AI tools for interacting with databases. It handles complexities like connection pooling, authentication, and more, enabling easier, faster, and more secure tool development. The toolbox sits between the application's orchestration framework and the database, providing a control plane to modify, distribute, or invoke tools. It offers simplified development, better performance, enhanced security, and end-to-end observability. Users can install the toolbox as a binary, container image, or compile from source. Configuration is done through a 'tools.yaml' file, defining sources, tools, and toolsets. The project follows semantic versioning and welcomes contributions.