
mcphub.nvim
A powerful Neovim plugin for managing MCP (Model Context Protocol) servers
Stars: 61

MCPHub.nvim is a powerful Neovim plugin that integrates MCP (Model Context Protocol) servers into your workflow. It offers a centralized config file for managing servers and tools, with an intuitive UI for testing resources. Ideal for LLM integration, it provides programmatic API access and interactive testing through the `:MCPHub` command.
README:
A powerful Neovim plugin that integrates MCP (Model Context Protocol) servers into your workflow. Configure and manage MCP servers through a centralized config file while providing an intuitive UI for testing tools and resources. Perfect for LLM integration, offering both programmatic API access and interactive testing capabilities through the :MCPHub command.
(Screenshot: using the CodeCompanion chat plugin)
- Simple single-command interface (:MCPHub)
- Integrated Hub view for managing servers and tools
- Dynamically enable/disable servers and tools to optimize token usage
- Start/stop servers with persistent state
- Enable/disable specific tools per server
- State persists across restarts
- Parallel startup for improved performance
- Interactive UI for testing tools and resources
- Automatic server lifecycle management across multiple Neovim instances
- Smart shutdown handling with configurable delay
- Both sync and async operations supported
- Clean client registration/cleanup
- Comprehensive API for tool and resource access
Using lazy.nvim:
{
  "ravitemer/mcphub.nvim",
  dependencies = {
    "nvim-lua/plenary.nvim", -- Required for Job and HTTP requests
  },
  build = "npm install -g mcp-hub@latest", -- Installs the required mcp-hub npm module
  config = function()
    require("mcphub").setup({
      -- Required options
      port = 3000, -- Port for MCP Hub server
      config = vim.fn.expand("~/mcpservers.json"), -- Absolute path to config file

      -- Optional options
      on_ready = function(hub)
        -- Called when hub is ready
      end,
      on_error = function(err)
        -- Called on errors
      end,
      shutdown_delay = 0, -- Wait 0ms before shutting down server after last client exits
      log = {
        level = vim.log.levels.WARN,
        to_file = false,
        file_path = nil,
        prefix = "MCPHub",
      },
    })
  end,
}
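For example, the on_error hook above can route hub failures through Neovim's notification system. A minimal sketch, assuming the callback receives a printable error value (the message formatting is just one possible choice):

require("mcphub").setup({
  port = 3000,
  config = vim.fn.expand("~/mcpservers.json"),
  -- Surface hub failures as editor notifications.
  on_error = function(err)
    vim.notify("MCPHub error: " .. tostring(err), vim.log.levels.ERROR)
  end,
})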
Example configuration file:
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    },
    "todoist": {
      "command": "npx",
      "args": ["-y", "@abhiz123/todoist-mcp-server"],
      "disabled": true,
      "env": {
        "TODOIST_API_TOKEN": "your-api-token-here"
      }
    }
  }
}
Requirements:
- Neovim >= 0.8.0
- Node.js >= 18.0.0
- plenary.nvim
- mcp-hub (automatically installed via the build command)
- Open the MCPHub UI to manage servers, test tools, and monitor status with :MCPHub
You can:
- Start/stop servers directly from the Hub view
- Enable/disable specific tools for each server
- Test tools and resources interactively
- Monitor server status and logs
- Use the hub instance in your code:
-- Get hub instance after setup
local mcphub = require("mcphub")

-- Option 1: Use the on_ready callback
mcphub.setup({
  port = 3000,
  config = vim.fn.expand("~/mcpservers.json"),
  on_ready = function(hub)
    -- Hub is ready to use here
  end,
})

-- Option 2: Get the hub instance directly (might be nil if setup is in progress)
local hub = mcphub.get_hub_instance()

-- Call a tool (sync)
local response, err = hub:call_tool("server-name", "tool-name", {
  param1 = "value1",
}, {
  return_text = true, -- Parse response into LLM-suitable text
})

-- Call a tool (async)
hub:call_tool("server-name", "tool-name", {
  param1 = "value1",
}, {
  return_text = true,
  callback = function(response, err)
    -- Use response
  end,
})

-- Access a resource (sync)
local response, err = hub:access_resource("server-name", "resource://uri", {
  return_text = true,
})

-- Get prompt helpers for system prompts
local prompts = hub:get_prompts({
  use_mcp_tool_example = [[<use_mcp_tool>
<server_name>weather-server</server_name>
<tool_name>get_forecast</tool_name>
<arguments>
{
  "city": "San Francisco",
  "days": 5
}
</arguments>
</use_mcp_tool>]],
  access_mcp_resource_example = [[<access_mcp_resource>
<server_name>weather-server</server_name>
<uri>weather://san-francisco/current</uri>
</access_mcp_resource>]],
})
-- prompts.active_servers: lists currently active servers
-- prompts.use_mcp_tool: instructions for tool usage, with example
-- prompts.access_mcp_resource: instructions for resource access, with example
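The prompt fields documented above can be stitched into a system prompt for whatever chat integration you use. A minimal sketch, assuming `prompts` is the table returned by hub:get_prompts() and that each field is a plain string:

-- Compose a single system prompt from the hub's prompt helpers.
local system_prompt = table.concat({
  prompts.active_servers, -- which servers are currently active
  prompts.use_mcp_tool, -- how to call MCP tools, with the example above
  prompts.access_mcp_resource, -- how to read MCP resources, with the example above
}, "\n\n")
-- Hand `system_prompt` to your chat plugin's system-prompt option.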
MCPHub.nvim provides extensions that integrate with popular Neovim chat plugins. These extensions allow you to use MCP tools and resources directly within your chat interfaces.
Add MCP capabilities to CodeCompanion. Add mcphub.nvim as a dependency so it loads before CodeCompanion:
{
  "olimorris/codecompanion.nvim",
  dependencies = {
    "nvim-lua/plenary.nvim",
    "nvim-treesitter/nvim-treesitter",
    "ravitemer/mcphub.nvim",
  },
},
- Please note there are breaking changes in CodeCompanion v13 in the way tools are configured.
require("codecompanion").setup({
strategies = {
chat = {
tools = {
["mcp"] = {
callback = require("mcphub.extensions.codecompanion"),
description = "Call tools and resources from the MCP Servers",
opts = {
-- user_approval = true,
requires_approval = true,
}
}
}
}
}
})
See the extensions/ folder for more examples and implementation details.
The Avante extension automatically updates your [mode].avanterules file (e.g. planning.avanterules) whenever MCP servers or their tools are updated. This ensures Avante always has up-to-date information about available MCP capabilities in its system prompt.
To make Avante pick up an updated rules file, you must:
- Exit Neovim completely
- Start Neovim again
- Open Avante fresh
Commands like /reset, /new, or /clear do not cause Avante to reload the rules file.
Add MCP capabilities to Avante by including the MCP tool in your setup:
require("avante").setup({
-- other config
custom_tools = {
-- optional: mode is "planning" that is when you open the chat using toggle or <leader>aa
--optional: cwd can be a string or a function that should returns path
require("mcphub.extensions.avante").mcp_tool("planning",function()
return require("avante.utils").get_project_root()
end)
}
})
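Since cwd also accepts a plain string (per the comment above), a simpler variant is possible. A sketch, assuming the same two-argument mcp_tool(mode, cwd) signature and resolving the directory once at setup time:

require("avante").setup({
  custom_tools = {
    -- Static cwd string instead of a resolver function.
    require("mcphub.extensions.avante").mcp_tool("planning", vim.fn.getcwd()),
  },
})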
This extension modifies your [mode].avanterules file. When creating a new file or replacing content, it uses this template:
{% block mcp_servers %}
# Active MCP Servers:
- server1: tool1, tool2
- server2: tool3, tool4
# Available Tools and Resources:
[Detailed list of capabilities...]
{% endblock %}
File handling works as follows:
- If the file has the MCP servers block:
  - Only content within the block is modified
  - Content outside the block remains untouched
- If no block is present:
  - THE ENTIRE FILE CONTENT WILL BE REPLACED
  - A new file with the proper block structure will be created
  - All custom instructions will be lost
To safely use custom instructions:
- Add the MCP servers block at the END of your [mode].avanterules file
- Put your custom instructions BEFORE the block
- Keep the block at the END to prevent MCP content from being overwritten
Example .avanterules file with custom instructions:
# Your Custom Instructions
You should always write tests for your code.
Handle edge cases carefully.
# IMPORTANT: Keep this block at the end of file
{% block mcp_servers %}
[MCP server capabilities will be automatically updated here]
{% endblock %}
Note: You can also access the Express server directly at http://localhost:[port]/api
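Since plenary.nvim is already a dependency, its curl module can query that API from Lua as well. A minimal sketch; the /api/health route is an assumption for illustration, so check the mcp-hub documentation for the real endpoints:

-- Query the hub's Express API directly with plenary.curl.
local curl = require("plenary.curl")
local res = curl.get("http://localhost:3000/api/health", { -- hypothetical route
  headers = { ["Accept"] = "application/json" },
})
if res and res.status == 200 then
  print(res.body) -- raw JSON response body
end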
Troubleshooting:
- Environment Requirements
  - Ensure these are installed, as they're required by most MCP servers:
    node --version    # Should be >= 18.0.0
    python --version  # Should be installed
    uvx --version     # Should be installed
  - Most server commands use npx or uvx - verify these work in your terminal
- Port Issues
  - If you get an EADDRINUSE error, kill the existing process:
    lsof -i :[port]  # Find process ID
    kill [pid]       # Kill the process
- Configuration File
  - Ensure the config path is absolute
  - Verify the file contains valid JSON with an mcpServers key
  - Check server-specific configuration requirements
  - Validate that the server command and args are correct for your system
- MCP Server Issues
  - Validate server configurations using either:
    - MCP Inspector: GUI tool for verifying server operation
    - mcp-cli: command-line tool for testing servers with config files
  - Check server logs in the MCPHub UI (Logs view)
  - Test tools and resources individually to isolate issues
- Need Help?
  - Create a Discussion for questions
  - Open an Issue for bugs
MCPHub.nvim uses an Express server to manage MCP servers and handle client requests:
- When setup() is called:
  - Checks for the mcp-hub command installation
  - Verifies version compatibility
  - Starts mcp-hub with the provided port and config file
  - Creates an Express server at localhost:[port]
- After successful setup:
  - Calls the on_ready callback with the hub instance
  - The hub instance provides a REST API interface
  - The UI updates in real time via the :MCPHub command
- Express Server Features:
  - Manages MCP server configurations
  - Handles tool execution requests
  - Provides resource access
  - Multi-client support
  - Automatic cleanup
- When Neovim instances close:
  - They unregister as clients
  - The last client exit triggers the shutdown timer
  - The server waits shutdown_delay milliseconds before stopping
  - The timer is cancelled if a new client connects
This architecture ensures:
- Consistent server management
- Real-time status monitoring
- Efficient resource usage
- Clean process handling
- Multiple client support
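Because of this grace period, the shutdown_delay option (see the setup options above) can be raised so the hub survives quick Neovim restarts. A sketch; the 10-minute value is an arbitrary choice:

require("mcphub").setup({
  port = 3000,
  config = vim.fn.expand("~/mcpservers.json"),
  -- Keep the Express server alive for 10 minutes after the last client
  -- exits, so a restarted Neovim reconnects instead of cold-starting the hub.
  shutdown_delay = 10 * 60 * 1000, -- milliseconds
})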
The following Mermaid sequence diagram shows the server lifecycle across multiple Neovim clients:
sequenceDiagram
participant N1 as First Neovim
participant N2 as Other Neovims
participant S as MCP Hub Server
Note over N1,S: First Client Connection
N1->>S: Check if Running
activate S
S-->>N1: Not Running
N1->>S: start_hub()
Note over S: Server Start
S-->>N1: Ready Signal
N1->>S: Register Client
S-->>N1: Registration OK
Note over N2,S: Other Clients
N2->>S: Check if Running
S-->>N2: Running
N2->>S: Register Client
S-->>N2: Registration OK
Note over N1,S: Server stays active
Note over N2,S: Client Disconnection
N2->>S: Unregister Client
S-->>N2: OK
Note over S: Keep Running
Note over N1,S: Last Client Exit
N1->>S: Unregister Client
S-->>N1: OK
Note over S: Grace Period
Note over S: Auto Shutdown
deactivate S

Startup and status flow for a single Neovim instance:
sequenceDiagram
participant N as Neovim
participant P as Plugin
participant S as MCP Hub Server
N->>P: start_hub()
P->>S: Health Check
alt Server Not Running
P->>S: Start Server
S-->>P: Ready Signal
end
P->>S: Register Client
S-->>P: Registration OK
N->>P: :MCPHub
P->>S: Get Status
S-->>P: Server Status
P->>N: Display UI

Cleanup flow when Neovim exits:
flowchart LR
A[VimLeavePre] -->|Trigger| B[Stop Hub]
B -->|If Ready| C[Unregister Client]
C -->|Last Client| D[Server Auto-shutdown]
C -->|Other Clients| E[Server Continues]
B --> F[Clear State]
F --> G[Ready = false]
F --> H[Owner = false]

Tool and resource request flow from a chat plugin:
sequenceDiagram
participant C as Chat Plugin
participant H as Hub Instance
participant S as MCP Server
C->>H: call_tool()
H->>H: Check Ready
alt Not Ready
H-->>C: Error: Not Ready
end
H->>S: POST /tools
S-->>H: Tool Result
H-->>C: Return Result
Note over C,S: Similar flow for resources
C->>H: access_resource()
H->>H: Check Ready
H->>S: POST /resources
S-->>H: Resource Data
H-->>C: Return Data
Currently planning these features:
- Workflow integration with CodeCompanion for enhanced code assistance
- Enhanced help view with comprehensive documentation
- Community marketplace for sharing and discovering MCP servers
- Add custom descriptions for each MCP server through the UI
- Support server-specific configuration through the interface
Thanks to:
- nui.nvim for inspiring our text highlighting utilities
Similar Open Source Tools


daydreams
Daydreams is a generative agent library designed for playing onchain games by injecting context. It is chain agnostic and allows users to perform onchain tasks, including playing any onchain game. The tool is lightweight and powerful, enabling users to define game context, register actions, set goals, monitor progress, and integrate with external agents. Daydreams aims to be 'lite' and 'composable', dynamically generating code needed to play games. It is currently in pre-alpha stage, seeking feedback and collaboration for further development.

orra
Orra is a tool for building production-ready multi-agent applications that handle complex real-world interactions. It coordinates tasks across your existing stack, with agents and tools running as services, using intelligent reasoning. With features like smart pre-evaluated execution plans, domain grounding, durable execution, and automatic service health monitoring, Orra enables users to move fast with tools as services and revert state to handle failures. It provides real-time status tracking and webhook result delivery, making it ideal for developers looking to move beyond simple crews and agents.

pocketgroq
PocketGroq is a tool that provides advanced functionalities for text generation, web scraping, web search, and AI response evaluation. It includes features like an Autonomous Agent for answering questions, web crawling and scraping capabilities, enhanced web search functionality, and flexible integration with Ollama server. Users can customize the agent's behavior, evaluate responses using AI, and utilize various methods for text generation, conversation management, and Chain of Thought reasoning. The tool offers comprehensive methods for different tasks, such as initializing RAG, error handling, and tool management. PocketGroq is designed to enhance development processes and enable the creation of AI-powered applications with ease.

solana-agent-kit
Solana Agent Kit is an open-source toolkit designed for connecting AI agents to Solana protocols. It enables agents, regardless of the model used, to autonomously perform various Solana actions such as trading tokens, launching new tokens, lending assets, sending compressed airdrops, executing blinks, and more. The toolkit integrates core blockchain features like token operations, NFT management via Metaplex, DeFi integration, Solana blinks, AI integration features with LangChain, autonomous modes, and AI tools. It provides ready-to-use tools for blockchain operations, supports autonomous agent actions, and offers features like memory management, real-time feedback, and error handling. Solana Agent Kit facilitates tasks such as deploying tokens, creating NFT collections, swapping tokens, lending tokens, staking SOL, and sending SPL token airdrops via ZK compression. It also includes functionalities for fetching price data from Pyth and relies on key Solana and Metaplex libraries for its operations.

client-python
The Mistral Python Client is a tool inspired by cohere-python that allows users to interact with the Mistral AI API. It provides functionalities to access and utilize the AI capabilities offered by Mistral. Users can easily install the client using pip and manage dependencies using poetry. The client includes examples demonstrating how to use the API for various tasks, such as chat interactions. To get started, users need to obtain a Mistral API Key and set it as an environment variable. Overall, the Mistral Python Client simplifies the integration of Mistral AI services into Python applications.

OpenAI
OpenAI is a community-maintained Swift implementation of the OpenAI public API. OpenAI itself is an artificial intelligence research organization founded in San Francisco, California in 2015, whose mission is to ensure safe and responsible use of AI for civic good, economic growth, and other public benefits. The repository provides functionalities for text completions, chats, image generation, audio processing, edits, embeddings, models, moderations, utilities, and Combine extensions.

rag-chat
The `@upstash/rag-chat` package simplifies the development of retrieval-augmented generation (RAG) chat applications by providing Next.js compatibility with streaming support, built-in vector store, optional Redis compatibility for fast chat history management, rate limiting, and disableRag option. Users can easily set up the environment variables and initialize RAGChat to interact with AI models, manage knowledge base, chat history, and enable debugging features. Advanced configuration options allow customization of RAGChat instance with built-in rate limiting, observability via Helicone, and integration with Next.js route handlers and Vercel AI SDK. The package supports OpenAI models, Upstash-hosted models, and custom providers like TogetherAi and Replicate.

clarifai-python
The Clarifai Python SDK offers a comprehensive set of tools to integrate Clarifai's AI platform and leverage computer vision capabilities like classification, detection, and segmentation, as well as natural language capabilities like classification, summarisation, generation, and Q&A, in your applications. With just a few lines of code, you can leverage cutting-edge artificial intelligence to unlock valuable insights from visual and textual content.

aws-mcp
AWS MCP is a Model Context Protocol (MCP) server that facilitates interactions between AI assistants and AWS environments. It allows for natural language querying and management of AWS resources during conversations. The server supports multiple AWS profiles, SSO authentication, multi-region operations, and secure credential handling. Users can locally execute commands with their AWS credentials, enhancing the conversational experience with AWS resources.

Agentarium
Agentarium is a powerful Python framework for managing and orchestrating AI agents with ease. It provides a flexible and intuitive way to create, manage, and coordinate interactions between multiple AI agents in various environments. The framework offers advanced agent management, robust interaction management, a checkpoint system for saving and restoring agent states, data generation through agent interactions, performance optimization, flexible environment configuration, and an extensible architecture for customization.

ai-gateway
LangDB AI Gateway is an open-source enterprise AI gateway built in Rust. It provides a unified interface to all LLMs using the OpenAI API format, focusing on high performance, enterprise readiness, and data control. The gateway offers features like comprehensive usage analytics, cost tracking, rate limiting, data ownership, and detailed logging. It supports various LLM providers and provides OpenAI-compatible endpoints for chat completions, model listing, embeddings generation, and image generation. Users can configure advanced settings, such as rate limiting, cost control, dynamic model routing, and observability with OpenTelemetry tracing. The gateway can be run with Docker Compose and integrated with MCP tools for server communication.

acte
Acte is a framework designed to build GUI-like tools for AI Agents. It aims to address the issues of cognitive load and freedom degrees when interacting with multiple APIs in complex scenarios. By providing a graphical user interface (GUI) for Agents, Acte helps reduce cognitive load and constraints interaction, similar to how humans interact with computers through GUIs. The tool offers APIs for starting new sessions, executing actions, and displaying screens, accessible via HTTP requests or the SessionManager class.

VITA
VITA is an open-source interactive omni multimodal Large Language Model (LLM) capable of processing video, image, text, and audio inputs simultaneously. It stands out with features like Omni Multimodal Understanding, Non-awakening Interaction, and Audio Interrupt Interaction. VITA can respond to user queries without a wake-up word, track and filter external queries in real-time, and handle various query inputs effectively. The model utilizes state tokens and a duplex scheme to enhance the multimodal interactive experience.

instructor
Instructor is a Python library that makes it a breeze to work with structured outputs from large language models (LLMs). Built on top of Pydantic, it provides a simple, transparent, and user-friendly API to manage validation, retries, and streaming responses. Get ready to supercharge your LLM workflows!

agentlang
AgentLang is an open-source programming language and framework designed for solving complex tasks with the help of AI agents. It allows users to build business applications rapidly from high-level specifications, making it more efficient than traditional programming languages. The language is data-oriented and declarative, with a syntax that is intuitive and closer to natural languages. AgentLang introduces innovative concepts such as first-class AI agents, graph-based hierarchical data model, zero-trust programming, declarative dataflow, resolvers, interceptors, and entity-graph-database mapping.
For similar tasks


LLM-Tool-Survey
This repository contains a collection of papers related to tool learning with large language models (LLMs). The papers are organized according to the survey paper 'Tool Learning with Large Language Models: A Survey'. The survey focuses on the benefits and implementation of tool learning with LLMs, covering aspects such as task planning, tool selection, tool calling, response generation, benchmarks, evaluation, challenges, and future directions in the field. It aims to provide a comprehensive understanding of tool learning with LLMs and inspire further exploration in this emerging area.

tool-ahead-of-time
Tool-Ahead-of-Time (TAoT) is a Python package that enables tool calling for any model available through Langchain's ChatOpenAI library, even before official support is provided. It reformats model output into a JSON parser for tool calling. The package supports OpenAI and non-OpenAI models, following LangChain's syntax for tool calling. Users can start using the tool without waiting for official support, providing a more robust solution for tool calling.

1Panel
1Panel is an open-source, modern web-based control panel for Linux server management. It provides efficient management through a user-friendly web graphical interface, enabling users to effortlessly manage their Linux servers. Key features include host monitoring, file management, database administration, container management, rapid website deployment with WordPress integration, an application store for easy installation and updates, security and reliability through containerization and secure application deployment practices, integrated firewall management, log auditing capabilities, and one-click backup & restore functionality supporting various cloud storage solutions.
For similar jobs

weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.

agentcloud
AgentCloud is an open-source platform that enables companies to build and deploy private LLM chat apps, empowering teams to securely interact with their data. It comprises three main components: Agent Backend, Webapp, and Vector Proxy. To run this project locally, clone the repository, install Docker, and start the services. The project is licensed under the GNU Affero General Public License, version 3 only. Contributions and feedback are welcome from the community.

oss-fuzz-gen
This framework generates fuzz targets for real-world `C`/`C++` projects with various Large Language Models (LLM) and benchmarks them via the `OSS-Fuzz` platform. It manages to successfully leverage LLMs to generate valid fuzz targets (which generate non-zero coverage increase) for 160 C/C++ projects. The maximum line coverage increase is 29% from the existing human-written targets.

LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.

VisionCraft
The VisionCraft API is a free API for using over 100 different AI models. From images to sound.

kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.

PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.

Azure-Analytics-and-AI-Engagement
The Azure-Analytics-and-AI-Engagement repository provides packaged Industry Scenario DREAM Demos with ARM templates (containing a demo web application, Power BI reports, Synapse resources, AML Notebooks, etc.) that can be deployed in a customer's subscription using the CAPE tool within a matter of a few hours. Partners can also deploy DREAM Demos in their own subscriptions using DPoC.