
mcp-ui
SDK for UI over MCP. Create next-gen UI experiences!
Stars: 2365

mcp-ui is a collection of SDKs that bring interactive web components to the Model Context Protocol (MCP). It allows servers to define reusable UI snippets, render them securely in the client, and react to their actions in the MCP host environment. The SDKs include @mcp-ui/server (TypeScript) for generating UI resources on the server, @mcp-ui/client (TypeScript) for rendering UI components on the client, mcp_ui_server (Ruby) for generating UI resources in a Ruby environment, and mcp-ui-server (Python) for generating UI resources in a Python environment. The project is an experimental community playground for MCP UI ideas, with rapid iteration and enhancements.
README:
What's mcp-ui? • Core Concepts • Installation • Getting Started • Walkthrough • Examples • Supported Hosts • Security • Roadmap • Contributing • License
mcp-ui brings interactive web components to the Model Context Protocol (MCP). Deliver rich, dynamic UI resources directly from your MCP server to be rendered by the client. Take AI interaction to the next level!
This project is an experimental community playground for MCP UI ideas. Expect rapid iteration and enhancements!
mcp-ui is a collection of SDKs comprising:
- `@mcp-ui/server` (TypeScript): Utilities to generate UI resources (`UIResource`) on your MCP server.
- `@mcp-ui/client` (TypeScript): UI components (e.g., `<UIResourceRenderer />`) to render the UI resources and handle their events.
- `mcp_ui_server` (Ruby): Utilities to generate UI resources on your MCP server in a Ruby environment.
- `mcp-ui-server` (Python): Utilities to generate UI resources on your MCP server in a Python environment.
Together, they let you define reusable UI snippets on the server side, seamlessly and securely render them in the client, and react to their actions in the MCP host environment.
In essence, by using mcp-ui SDKs, servers and hosts can agree on contracts that enable them to create and render interactive UI snippets (as a path to a standardized UI approach in MCP).
The primary payload returned from the server to the client is the `UIResource`:
```typescript
interface UIResource {
  type: 'resource';
  resource: {
    uri: string;       // e.g., ui://component/id
    mimeType: 'text/html' | 'text/uri-list' | 'application/vnd.mcp-ui.remote-dom'; // HTML, URL, or remote-dom (JavaScript) content
    text?: string;     // Inline HTML, external URL, or remote-dom script
    blob?: string;     // Base64-encoded HTML, URL, or remote-dom script
  };
}
```
- `uri`: Unique identifier for caching and routing.
  - `ui://…` — UI resources (rendering method determined by `mimeType`)
- `mimeType`: `text/html` for HTML content (iframe `srcDoc`), `text/uri-list` for URL content (iframe `src`), `application/vnd.mcp-ui.remote-dom` for remote-dom content (JavaScript).
  - MCP-UI requires a single URL: while the `text/uri-list` format supports multiple URLs, MCP-UI uses only the first valid `http/s` URL and warns if additional URLs are found.
- `text` vs. `blob`: choose `text` for simple strings; use `blob` for larger or encoded content.
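For concreteness, here is a sketch of what two such payloads could look like in practice; the URIs and content values are illustrative, not fixed identifiers:
```typescript
// Illustrative UIResource payloads (using the UIResource interface shown above).
const inlineHtmlResource: UIResource = {
  type: 'resource',
  resource: {
    uri: 'ui://greeting/1',                // unique key used for caching and routing
    mimeType: 'text/html',                 // rendered via the iframe's srcDoc
    text: '<p>Hello, MCP UI!</p>',         // small inline HTML, so `text` is enough
  },
};

const externalUrlResource: UIResource = {
  type: 'resource',
  resource: {
    uri: 'ui://dashboard/main',
    mimeType: 'text/uri-list',             // rendered via the iframe's src
    text: 'https://example.com/dashboard', // only the first valid http/s URL is used
  },
};
```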
The UI Resource is rendered by the `<UIResourceRenderer />` component, which automatically detects the resource type and renders the appropriate component. It is available as a React component and as a Web Component.
React Component
It accepts the following props:
- `resource`: The resource object from an MCP tool response. It must include `uri`, `mimeType`, and content (`text` or `blob`).
- `onUIAction`: Optional callback for handling UI actions from the resource:
  ```typescript
  { type: 'tool', payload: { toolName: string, params: Record<string, unknown> }, messageId?: string }
    | { type: 'intent', payload: { intent: string, params: Record<string, unknown> }, messageId?: string }
    | { type: 'prompt', payload: { prompt: string }, messageId?: string }
    | { type: 'notify', payload: { message: string }, messageId?: string }
    | { type: 'link', payload: { url: string }, messageId?: string }
  ```
  When actions include a `messageId`, the iframe automatically receives response messages for asynchronous handling.
- `supportedContentTypes`: Optional array to restrict which content types are allowed (`['rawHtml', 'externalUrl', 'remoteDom']`).
- `htmlProps`: Optional props for the internal `<HTMLResourceRenderer>`:
  - `style`: Optional custom styles for the iframe.
  - `iframeProps`: Optional props passed to the iframe element.
  - `iframeRenderData`: Optional `Record<string, unknown>` to pass data to the iframe upon rendering. This enables advanced use cases where the parent application needs to provide initial state or configuration to the sandboxed iframe content.
  - `autoResizeIframe`: Optional `boolean | { width?: boolean; height?: boolean }` to automatically resize the iframe to the size of the content.
- `remoteDomProps`: Optional props for the internal `<RemoteDOMResourceRenderer>`:
  - `library`: Optional component library for Remote DOM resources (defaults to `basicComponentLibrary`).
  - `remoteElements`: Remote element definitions for Remote DOM resources.
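To illustrate how these props fit together, here is a hedged sketch of a host component wiring up a few of the optional props; the specific values (content-type restrictions, theme data, the logging-only action handler) are assumptions for the example, not defaults:
```tsx
import React from 'react';
import { UIResourceRenderer } from '@mcp-ui/client';

// Sketch of a host view using the optional props described above.
function ResourceView({ resource }: { resource: any }) {
  return (
    <UIResourceRenderer
      resource={resource}
      supportedContentTypes={['rawHtml', 'externalUrl']} // e.g., disallow remote-dom in this host
      htmlProps={{
        autoResizeIframe: { height: true },          // grow the iframe with its content height
        iframeProps: { title: 'mcp-ui snippet' },    // forwarded to the underlying <iframe>
        iframeRenderData: { theme: 'dark' },         // initial data handed to the iframe on render
      }}
      onUIAction={async (action) => {
        // Dispatch on the action types listed above; this sketch only logs tool requests.
        if (action.type === 'tool') {
          console.log('Tool requested:', action.payload.toolName, action.payload.params);
        }
      }}
    />
  );
}
```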
Web Component
The Web Component is available as `<ui-resource-renderer>`. It accepts the same props as the React component, but they must be passed as strings.
Example:
```html
<ui-resource-renderer
  resource='{ "mimeType": "text/html", "text": "<h2>Hello from the Web Component!</h2>" }'
></ui-resource-renderer>
```
The `onUIAction` prop can be handled by attaching an event listener to the component:
```js
const renderer = document.querySelector('ui-resource-renderer');
renderer.addEventListener('onUIAction', (event) => {
  console.log('Action:', event.detail);
});
```
The Web Component is available in the `@mcp-ui/client` package at `dist/ui-resource-renderer.wc.js`.
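Because attributes on a Web Component are strings, a host that builds the resource object in code has to serialize it before assigning it. A minimal sketch, assuming the bundled script mentioned above has already been loaded so `<ui-resource-renderer>` is registered as a custom element (the resource values are illustrative):
```typescript
// Create the renderer element, pass the resource as a JSON string, and listen for actions.
const renderer = document.createElement('ui-resource-renderer');
renderer.setAttribute(
  'resource',
  JSON.stringify({
    mimeType: 'text/html',
    text: '<h2>Hello from a dynamically created renderer!</h2>',
  })
);
renderer.addEventListener('onUIAction', (event) => {
  console.log('Action:', (event as CustomEvent).detail);
});
document.body.appendChild(renderer);
```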
HTML and external-URL resources are rendered using the internal `<HTMLResourceRenderer />` component, which displays content inside an `<iframe>`. This is suitable for self-contained HTML or embedding external apps.
- `mimeType`:
  - `text/html`: Renders inline HTML content.
  - `text/uri-list`: Renders an external URL. MCP-UI uses the first valid `http/s` URL.
Remote DOM resources are rendered using the internal `<RemoteDOMResourceRenderer />` component, which uses Shopify's `remote-dom`. The server responds with a script that describes the UI and its events. On the host, the script is securely run in a sandboxed iframe, and the UI changes are communicated to the host as JSON, where they're rendered using the host's component library. This is more flexible than iframes and allows for UIs that match the host's look and feel.
- `mimeType`: `application/vnd.mcp-ui.remote-dom+javascript; framework={react | webcomponents}`
UI snippets must be able to interact with the agent. In mcp-ui, this is done by hooking into events sent from the UI snippet and reacting to them in the host (see the `onUIAction` prop). For example, an HTML snippet may trigger a tool call when a button is clicked by sending an event, which is caught and handled by the client.
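Concretely, such a snippet could include a script like the following; the message shape mirrors the action types listed above, while the element id, tool name, and params are made-up placeholders:
```typescript
// Runs inside the sandboxed iframe, i.e., as part of the HTML snippet itself.
document.getElementById('buy-button')?.addEventListener('click', () => {
  window.parent.postMessage(
    {
      type: 'tool',
      payload: { toolName: 'addToCart', params: { sku: 'demo-123' } }, // hypothetical tool/params
      messageId: 'cart-1', // optional: the iframe then receives an async response message back
    },
    '*'
  );
});
```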
```sh
# using npm
npm install @mcp-ui/server @mcp-ui/client

# or pnpm
pnpm add @mcp-ui/server @mcp-ui/client

# or yarn
yarn add @mcp-ui/server @mcp-ui/client
```
```sh
# Ruby
gem install mcp_ui_server
```
```sh
# using pip
pip install mcp-ui-server

# or uv
uv add mcp-ui-server
```
You can use GitMCP to give your IDE access to mcp-ui's latest documentation!
- Server-side: Build your UI resources (see the tool-registration sketch after these steps)
```typescript
import { createUIResource } from '@mcp-ui/server';
import {
  createRemoteComponent,
  createRemoteDocument,
  createRemoteText,
} from '@remote-dom/core';

// Inline HTML
const htmlResource = createUIResource({
  uri: 'ui://greeting/1',
  content: { type: 'rawHtml', htmlString: '<p>Hello, MCP UI!</p>' },
  encoding: 'text',
});

// External URL
const externalUrlResource = createUIResource({
  uri: 'ui://greeting/1',
  content: { type: 'externalUrl', iframeUrl: 'https://example.com' },
  encoding: 'text',
});

// remote-dom
const remoteDomResource = createUIResource({
  uri: 'ui://remote-component/action-button',
  content: {
    type: 'remoteDom',
    script: `
      const button = document.createElement('ui-button');
      button.setAttribute('label', 'Click me for a tool call!');
      button.addEventListener('press', () => {
        window.parent.postMessage({ type: 'tool', payload: { toolName: 'uiInteraction', params: { action: 'button-click', from: 'remote-dom' } } }, '*');
      });
      root.appendChild(button);
    `,
    framework: 'react', // or 'webcomponents'
  },
  encoding: 'text',
});
```
- Client-side: Render in your MCP host
```tsx
import React from 'react';
import { UIResourceRenderer } from '@mcp-ui/client';

function App({ mcpResource }) {
  if (
    mcpResource.type === 'resource' &&
    mcpResource.resource.uri?.startsWith('ui://')
  ) {
    return (
      <UIResourceRenderer
        resource={mcpResource.resource}
        onUIAction={(result) => {
          console.log('Action:', result);
        }}
      />
    );
  }
  return <p>Unsupported resource</p>;
}
```
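To serve the resources built in the server-side step above, the server returns them from an MCP tool handler. A minimal sketch, assuming the `McpServer` / `tool()` registration API from the official `@modelcontextprotocol/sdk` TypeScript package; the tool name is illustrative and transport wiring is omitted:
```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { createUIResource } from '@mcp-ui/server';

const server = new McpServer({ name: 'ui-demo-server', version: '1.0.0' });

// Hypothetical tool that returns an inline-HTML UI resource as its content.
server.tool('show-greeting', async () => {
  const uiResource = createUIResource({
    uri: 'ui://greeting/1',
    content: { type: 'rawHtml', htmlString: '<p>Hello, MCP UI!</p>' },
    encoding: 'text',
  });
  return { content: [uiResource] };
});

// Connect a transport (stdio or streamable HTTP) as usual to expose the server.
```
On the host side, the `onUIAction` callback shown above is where a `tool` action can be turned back into a real MCP call (e.g., via your MCP client's tool-call method).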
Server-side: Build your UI resources (Python)
```python
from mcp_ui_server import create_ui_resource

# Inline HTML
html_resource = create_ui_resource({
    "uri": "ui://greeting/1",
    "content": { "type": "rawHtml", "htmlString": "<p>Hello, from Python!</p>" },
    "encoding": "text",
})

# External URL
external_url_resource = create_ui_resource({
    "uri": "ui://greeting/2",
    "content": { "type": "externalUrl", "iframeUrl": "https://example.com" },
    "encoding": "text",
})
```
Server-side: Build your UI resources (Ruby)
```ruby
require 'mcp_ui_server'

# Inline HTML
html_resource = McpUiServer.create_ui_resource(
  uri: 'ui://greeting/1',
  content: { type: :raw_html, htmlString: '<p>Hello, from Ruby!</p>' },
  encoding: :text
)

# External URL
external_url_resource = McpUiServer.create_ui_resource(
  uri: 'ui://greeting/2',
  content: { type: :external_url, iframeUrl: 'https://example.com' },
  encoding: :text
)

# remote-dom
remote_dom_resource = McpUiServer.create_ui_resource(
  uri: 'ui://remote-component/action-button',
  content: {
    type: :remote_dom,
    script: "
      const button = document.createElement('ui-button');
      button.setAttribute('label', 'Click me from Ruby!');
      button.addEventListener('press', () => {
        window.parent.postMessage({ type: 'tool', payload: { toolName: 'uiInteraction', params: { action: 'button-click', from: 'ruby-remote-dom' } } }, '*');
      });
      root.appendChild(button);
    ",
    framework: :react,
  },
  encoding: :text
)
```
For a detailed, simple, step-by-step guide on how to integrate mcp-ui into your own server, check out the full server walkthroughs on the mcp-ui documentation site. These guides show you how to add an mcp-ui endpoint to an existing server, create tools that return UI resources, and test your setup with the ui-inspector!
Client Examples
- ui-inspector — inspect local `mcp-ui`-enabled servers.
- MCP-UI Chat — interactive chat built with the `mcp-ui` client. Check out the hosted version!
- MCP-UI RemoteDOM Playground (`examples/remote-dom-demo`) — local demo app to test RemoteDOM resources (intended for hosts).
- MCP-UI Web Component Demo (`examples/wc-demo`) — local demo app to test the Web Component.
Server Examples
- TypeScript:
  - `typescript-server-demo`: A simple TypeScript server that demonstrates how to generate UI resources.
  - server: A full-featured TypeScript server deployed to a hosted Cloudflare environment for easy testing.
    - HTTP Streaming: https://remote-mcp-server-authless.idosalomon.workers.dev/mcp
    - SSE: https://remote-mcp-server-authless.idosalomon.workers.dev/sse
- Ruby: A barebones demo server that shows how to use the `mcp_ui_server` and `mcp` gems together.
- Python: A simple demo server that shows how to use the `mcp-ui-server` Python package.
Drop those URLs into any MCP-compatible host to see mcp-ui in action. For a supported local inspector, see the ui-inspector.
mcp-ui is supported by a growing number of MCP-compatible clients. Feature support varies by host:
| Host | Rendering | UI Actions |
|---|---|---|
| Postman | ✅ | |
| Goose | ✅ | |
| Smithery | ✅ | ❌ |
| MCPJam | ✅ | ❌ |
| fast-agent | ✅ | ❌ |
| VSCode (TBA) | ? | ? |

Legend:
- ✅: Supported
- ⚠️: Partial Support
- ❌: Not Supported (yet)
Host and user security is one of mcp-ui's primary concerns. In all content types, the remote code is executed in a sandboxed iframe.
- [X] Add online playground
- [X] Expand UI Action API (beyond tool calls)
- [X] Support Web Components
- [X] Support Remote-DOM
- [ ] Add component libraries (in progress)
- [ ] Add SDKs for additional programming languages (in progress; Ruby and Python available)
- [ ] Support additional frontend frameworks
- [ ] Add declarative UI content type
- [ ] Support generative UI?
mcp-ui is a project by Ido Salomon, in collaboration with Liad Yosef.
Contributions, ideas, and bug reports are welcome! See the contribution guidelines to get started.
Apache License 2.0 © The MCP-UI Authors
This project is provided "as is", without warranty of any kind. The mcp-ui authors and contributors shall not be held liable for any damages, losses, or issues arising from the use of this software. Use at your own risk.