mcp-llm-bridge

MCP implementation that enables communication between MCP servers and OpenAI-compatible LLMs


README:

MCP LLM Bridge

A bridge connecting Model Context Protocol (MCP) servers to OpenAI-compatible LLMs. Primary support for OpenAI API, with additional compatibility for local endpoints that implement the OpenAI API specification.

The implementation provides a bidirectional protocol translation layer between MCP and OpenAI's function-calling interface. It converts MCP tool specifications into OpenAI function schemas and handles the mapping of function invocations back to MCP tool executions. This enables any OpenAI-compatible language model to leverage MCP-compliant tools through a standardized interface, whether using cloud-based models or local implementations like Ollama.
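
As a rough sketch of those two directions (illustrative only: the helper names below are hypothetical, assuming the MCP Python SDK's ClientSession and the tool-call objects returned by the openai v1 client, not the bridge's actual internals):

import json

def mcp_tool_to_openai_function(tool) -> dict:
    """Convert an MCP tool specification into an OpenAI function schema."""
    return {
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description or "",
            # MCP tools describe their arguments as JSON Schema, which is
            # also what OpenAI's "parameters" field expects.
            "parameters": tool.inputSchema,
        },
    }

async def dispatch_tool_call(session, tool_call) -> str:
    """Map an OpenAI function invocation back to an MCP tool execution."""
    args = json.loads(tool_call.function.arguments)
    result = await session.call_tool(tool_call.function.name, arguments=args)
    # The tool result is returned to the model as the body of a
    # "tool" role message on the next turn.
    return str(result.content)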

Read more about the Model Context Protocol (MCP) in Anthropic's announcement.


Quick Start

# Install uv, clone the repository, and install the package in a virtual environment
curl -LsSf https://astral.sh/uv/install.sh | sh
git clone https://github.com/bartolli/mcp-llm-bridge.git
cd mcp-llm-bridge
uv venv
source .venv/bin/activate
uv pip install -e .

# Create test database
python -m mcp_llm_bridge.create_test_db

Configuration

OpenAI (Primary)

Create .env:

OPENAI_API_KEY=your_key
OPENAI_MODEL=gpt-4o # or any other OpenAI model that supports tools

Note: if the keys in .env aren't picked up, reactivate the virtual environment: source .venv/bin/activate
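
To confirm the variables are actually visible to Python, a quick check (this assumes the python-dotenv package; the bridge itself may load .env differently):

import os
from dotenv import load_dotenv  # python-dotenv; an assumption, not necessarily how the bridge loads .env

load_dotenv()  # read key=value pairs from ./.env into the process environment
print(os.getenv("OPENAI_API_KEY") is not None)  # True if the key was loaded
print(os.getenv("OPENAI_MODEL", "gpt-4o"))      # falls back to gpt-4o, as in main.py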

Then configure the bridge in src/mcp_llm_bridge/main.py:

config = BridgeConfig(
    # Launch the SQLite MCP server over stdio against the test database
    mcp_server_params=StdioServerParameters(
        command="uvx",
        args=["mcp-server-sqlite", "--db-path", "test.db"],
        env=None
    ),
    # Point the bridge at OpenAI; base_url=None uses the default endpoint
    llm_config=LLMConfig(
        api_key=os.getenv("OPENAI_API_KEY"),
        model=os.getenv("OPENAI_MODEL", "gpt-4o"),
        base_url=None
    )
)
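
Here base_url=None keeps the bridge pointed at OpenAI's default endpoint; the sections below override it to target local servers instead.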

Additional Endpoint Support

The bridge also works with any endpoint implementing the OpenAI API specification:
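
Under the hood this only requires overriding the client's base_url; for instance, with the official openai v1 Python client (a sketch, assuming Ollama is serving at its default port as configured below):

from openai import OpenAI

# Any server implementing the OpenAI API specification can be targeted
# by overriding base_url; local servers typically ignore the api_key.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="mistral-nemo:12b-instruct-2407-q8_0",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)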

Ollama

llm_config=LLMConfig(
    api_key="not-needed",
    model="mistral-nemo:12b-instruct-2407-q8_0",
    base_url="http://localhost:11434/v1"
)

Note: After testing various models, including llama3.2:3b-instruct-fp16, I found that mistral-nemo:12b-instruct-2407-q8_0 handles complex queries more effectively.

LM Studio

llm_config=LLMConfig(
    api_key="not-needed",
    model="local-model",
    base_url="http://localhost:1234/v1"
)

I didn't test this, but it should work.

Usage

python -m mcp_llm_bridge.main

# Try: "What are the most expensive products in the database?"
# Exit with 'quit' or Ctrl+C

Running Tests

Install the package with test dependencies:

uv pip install -e ".[test]"

Then run the tests:

python -m pytest -v tests/

License

MIT

Contributing

PRs welcome.
