
mcp-ui
SDK for UI over MCP. Create next-gen UI experiences!

mcp-ui is a collection of SDKs that bring interactive web components to the Model Context Protocol (MCP). It allows servers to define reusable UI snippets, render them securely in the client, and react to their actions in the MCP host environment. The SDKs include @mcp-ui/server (TypeScript) for generating UI resources on the server, @mcp-ui/client (TypeScript) for rendering UI components on the client, and mcp_ui_server (Ruby) for generating UI resources in a Ruby environment. The project is an experimental community playground for MCP UI ideas, with rapid iteration and enhancements.
README:
What's mcp-ui? • Core Concepts • Installation • Getting Started • Walkthrough • Examples • Supported Hosts • Security • Roadmap • Contributing • License
What's mcp-ui?
`mcp-ui` brings interactive web components to the Model Context Protocol (MCP). Deliver rich, dynamic UI resources directly from your MCP server to be rendered by the client. Take AI interaction to the next level!
This project is an experimental community playground for MCP UI ideas. Expect rapid iteration and enhancements!
`mcp-ui` is a collection of SDKs comprising:
- `@mcp-ui/server` (TypeScript): Utilities to generate UI resources (`UIResource`) on your MCP server.
- `@mcp-ui/client` (TypeScript): UI components (e.g., `<UIResourceRenderer />`) to render the UI resources and handle their events.
- `mcp_ui_server` (Ruby): Utilities to generate UI resources on your MCP server in a Ruby environment.
Together, they let you define reusable UI snippets on the server side, seamlessly and securely render them in the client, and react to their actions in the MCP host environment.
In essence, by using `mcp-ui` SDKs, servers and hosts can agree on contracts that enable them to create and render interactive UI snippets (as a path to a standardized UI approach in MCP).
Core Concepts
The primary payload returned from the server to the client is the `UIResource`:
```typescript
interface UIResource {
  type: 'resource';
  resource: {
    uri: string; // e.g., ui://component/id
    mimeType: 'text/html' | 'text/uri-list' | 'application/vnd.mcp-ui.remote-dom'; // text/html for HTML content, text/uri-list for URL content, application/vnd.mcp-ui.remote-dom for remote-dom content (JavaScript)
    text?: string; // Inline HTML, external URL, or remote-dom script
    blob?: string; // Base64-encoded HTML, URL, or remote-dom script
  };
}
```
- `uri`: Unique identifier for caching and routing.
  - `ui://…`: UI resources (rendering method determined by `mimeType`).
- `mimeType`: `text/html` for HTML content (iframe srcDoc), `text/uri-list` for URL content (iframe src), `application/vnd.mcp-ui.remote-dom` for remote-dom content (JavaScript).
  - MCP-UI requires a single URL: while the `text/uri-list` format supports multiple URLs, MCP-UI uses only the first valid `http/s` URL and warns if additional URLs are found.
- `text` vs. `blob`: Choose `text` for simple strings; use `blob` for larger or encoded content.
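Putting those fields together, a resource carrying a small inline-HTML snippet might look like the following sketch (the values are illustrative, not taken from the SDK):

```typescript
// Illustrative UIResource payload for inline HTML (small enough to use `text`).
const greetingResource: UIResource = {
  type: 'resource',
  resource: {
    uri: 'ui://greeting/1',                   // unique id used for caching and routing
    mimeType: 'text/html',                    // rendered as iframe srcDoc
    text: '<h1>Hello from the server!</h1>',  // inline HTML; use `blob` for base64-encoded content
  },
};
```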
UIResourceRenderer
The UI Resource is rendered by the `<UIResourceRenderer />` component. It automatically detects the resource type and renders the appropriate component.
It is available as a React component and as a Web Component.
React Component
It accepts the following props:
- `resource`: The resource object from an MCP Tool response. It must include `uri`, `mimeType`, and content (`text` or `blob`).
- `onUIAction`: Optional callback for handling UI actions from the resource:

  ```typescript
  | { type: 'tool', payload: { toolName: string, params: Record<string, unknown> }, messageId?: string }
  | { type: 'intent', payload: { intent: string, params: Record<string, unknown> }, messageId?: string }
  | { type: 'prompt', payload: { prompt: string }, messageId?: string }
  | { type: 'notify', payload: { message: string }, messageId?: string }
  | { type: 'link', payload: { url: string }, messageId?: string }
  ```

  When actions include a `messageId`, the iframe automatically receives response messages for asynchronous handling.
- `supportedContentTypes`: Optional array to restrict which content types are allowed (`['rawHtml', 'externalUrl', 'remoteDom']`).
- `htmlProps`: Optional props for the internal `<HTMLResourceRenderer>`:
  - `style`: Optional custom styles for the iframe.
  - `iframeProps`: Optional props passed to the iframe element.
  - `iframeRenderData`: Optional `Record<string, unknown>` to pass data to the iframe upon rendering. This enables advanced use cases where the parent application needs to provide initial state or configuration to the sandboxed iframe content.
  - `autoResizeIframe`: Optional `boolean | { width?: boolean; height?: boolean }` to automatically resize the iframe to the size of the content.
- `remoteDomProps`: Optional props for the internal `<RemoteDOMResourceRenderer>`:
  - `library`: Optional component library for Remote DOM resources (defaults to `basicComponentLibrary`).
  - `remoteElements`: Remote element definitions for Remote DOM resources.
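For orientation, here is a sketch of what an `onUIAction` handler could look like when it branches on these action types. The `callTool` and `sendPrompt` helpers are hypothetical stand-ins for whatever your host uses to invoke MCP tools and feed prompts back to the agent:

```typescript
// Hedged sketch of an onUIAction handler; the action shape mirrors the documented union type.
type UIAction =
  | { type: 'tool'; payload: { toolName: string; params: Record<string, unknown> }; messageId?: string }
  | { type: 'intent'; payload: { intent: string; params: Record<string, unknown> }; messageId?: string }
  | { type: 'prompt'; payload: { prompt: string }; messageId?: string }
  | { type: 'notify'; payload: { message: string }; messageId?: string }
  | { type: 'link'; payload: { url: string }; messageId?: string };

// Hypothetical host-side plumbing; replace with your own MCP client calls.
declare function callTool(name: string, params: Record<string, unknown>): Promise<unknown>;
declare function sendPrompt(prompt: string): Promise<void>;

async function handleUIAction(action: UIAction): Promise<void> {
  switch (action.type) {
    case 'tool':
      // Forward the requested tool call to the MCP server via your host's client.
      await callTool(action.payload.toolName, action.payload.params);
      break;
    case 'intent':
      // Map the intent to whatever behavior your host supports.
      console.log('Intent requested:', action.payload.intent, action.payload.params);
      break;
    case 'prompt':
      await sendPrompt(action.payload.prompt);
      break;
    case 'notify':
      console.log('Notification:', action.payload.message);
      break;
    case 'link':
      window.open(action.payload.url, '_blank');
      break;
  }
}
```

You would then pass `handleUIAction` as the `onUIAction` prop of `<UIResourceRenderer />`.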
Web Component
The Web Component is available as `<ui-resource-renderer>`. It accepts the same props as the React component, but they must be passed as strings.
Example:
```html
<ui-resource-renderer
  resource='{ "mimeType": "text/html", "text": "<h2>Hello from the Web Component!</h2>" }'
></ui-resource-renderer>
```
The `onUIAction` prop can be handled by attaching an event listener to the component:
```javascript
const renderer = document.querySelector('ui-resource-renderer');
renderer.addEventListener('onUIAction', (event) => {
  console.log('Action:', event.detail);
});
```
The Web Component is available in the `@mcp-ui/client` package at `dist/ui-resource-renderer.wc.js`.
HTML & External URL Resources
Rendered using the internal `<HTMLResourceRenderer />` component, which displays content inside an `<iframe>`. This is suitable for self-contained HTML or for embedding external apps.
- `mimeType`:
  - `text/html`: Renders inline HTML content.
  - `text/uri-list`: Renders an external URL. MCP-UI uses the first valid `http/s` URL.
Remote DOM Resources
Rendered using the internal `<RemoteDOMResourceRenderer />` component, which utilizes Shopify's remote-dom. The server responds with a script that describes the UI and its events. On the host, the script is securely executed in a sandboxed iframe, and the UI changes are communicated to the host as JSON, where they're rendered using the host's component library. This is more flexible than iframes and allows for UIs that match the host's look and feel.
- `mimeType`: `application/vnd.mcp-ui.remote-dom+javascript; framework={react | webcomponents}`
UI Actions
UI snippets must be able to interact with the agent. In `mcp-ui`, this is done by hooking into events sent from the UI snippet and reacting to them in the host (see the `onUIAction` prop). For example, an HTML snippet may trigger a tool call when a button is clicked by sending an event that is caught and handled by the client.
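For example, an inline-HTML resource built with `@mcp-ui/server` could wire a button click to a `tool` action; the `postMessage` shape mirrors the action types listed above, and the tool name and params here are illustrative:

```typescript
import { createUIResource } from '@mcp-ui/server';

// Sketch: a button inside the sandboxed iframe emits a 'tool' UI action on click.
const actionButtonResource = createUIResource({
  uri: 'ui://demo/action-button',
  content: {
    type: 'rawHtml',
    htmlString: `
      <button id="run">Run the tool</button>
      <script>
        document.getElementById('run').addEventListener('click', () => {
          // The host's <UIResourceRenderer /> receives this message and forwards it to onUIAction.
          window.parent.postMessage(
            { type: 'tool', payload: { toolName: 'uiInteraction', params: { action: 'button-click' } } },
            '*'
          );
        });
      </script>
    `,
  },
  encoding: 'text',
});
```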
Installation
```bash
# using npm
npm install @mcp-ui/server @mcp-ui/client

# or pnpm
pnpm add @mcp-ui/server @mcp-ui/client

# or yarn
yarn add @mcp-ui/server @mcp-ui/client
```
For the Ruby SDK:
```bash
gem install mcp_ui_server
```
Getting Started
You can use GitMCP to give your IDE access to `mcp-ui`'s latest documentation!
1. Server-side: Build your UI resources

```typescript
import { createUIResource } from '@mcp-ui/server';
import {
  createRemoteComponent,
  createRemoteDocument,
  createRemoteText,
} from '@remote-dom/core';

// Inline HTML
const htmlResource = createUIResource({
  uri: 'ui://greeting/1',
  content: { type: 'rawHtml', htmlString: '<p>Hello, MCP UI!</p>' },
  encoding: 'text',
});

// External URL
const externalUrlResource = createUIResource({
  uri: 'ui://greeting/1',
  content: { type: 'externalUrl', iframeUrl: 'https://example.com' },
  encoding: 'text',
});

// remote-dom
const remoteDomResource = createUIResource({
  uri: 'ui://remote-component/action-button',
  content: {
    type: 'remoteDom',
    script: `
      const button = document.createElement('ui-button');
      button.setAttribute('label', 'Click me for a tool call!');
      button.addEventListener('press', () => {
        window.parent.postMessage({ type: 'tool', payload: { toolName: 'uiInteraction', params: { action: 'button-click', from: 'remote-dom' } } }, '*');
      });
      root.appendChild(button);
    `,
    framework: 'react', // or 'webcomponents'
  },
  encoding: 'text',
});
```

2. Client-side: Render in your MCP host

```tsx
import React from 'react';
import { UIResourceRenderer } from '@mcp-ui/client';

function App({ mcpResource }) {
  if (
    mcpResource.type === 'resource' &&
    mcpResource.resource.uri?.startsWith('ui://')
  ) {
    return (
      <UIResourceRenderer
        resource={mcpResource.resource}
        onUIAction={(result) => {
          console.log('Action:', result);
        }}
      />
    );
  }
  return <p>Unsupported resource</p>;
}
```
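To get these resources to the client in the first place, your MCP tool handler returns them in the tool result's `content` array. A minimal sketch, assuming the official `@modelcontextprotocol/sdk` server API and a made-up tool name `showGreeting`:

```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { createUIResource } from '@mcp-ui/server';

const server = new McpServer({ name: 'ui-demo', version: '1.0.0' });

// Hypothetical tool whose result embeds an mcp-ui resource as a content block.
server.tool('showGreeting', async () => ({
  content: [
    createUIResource({
      uri: 'ui://greeting/1',
      content: { type: 'rawHtml', htmlString: '<p>Hello, MCP UI!</p>' },
      encoding: 'text',
    }),
  ],
}));

// Wire the server to a transport (stdio here; HTTP streaming also works).
await server.connect(new StdioServerTransport());
```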
Server-side (Ruby): Build your UI resources

```ruby
require 'mcp_ui_server'

# Inline HTML
html_resource = McpUiServer.create_ui_resource(
  uri: 'ui://greeting/1',
  content: { type: :raw_html, htmlString: '<p>Hello, from Ruby!</p>' },
  encoding: :text
)

# External URL
external_url_resource = McpUiServer.create_ui_resource(
  uri: 'ui://greeting/2',
  content: { type: :external_url, iframeUrl: 'https://example.com' },
  encoding: :text
)

# remote-dom
remote_dom_resource = McpUiServer.create_ui_resource(
  uri: 'ui://remote-component/action-button',
  content: {
    type: :remote_dom,
    script: "
      const button = document.createElement('ui-button');
      button.setAttribute('label', 'Click me from Ruby!');
      button.addEventListener('press', () => {
        window.parent.postMessage({ type: 'tool', payload: { toolName: 'uiInteraction', params: { action: 'button-click', from: 'ruby-remote-dom' } } }, '*');
      });
      root.appendChild(button);
    ",
    framework: :react,
  },
  encoding: :text
)
```
Walkthrough
For a detailed, step-by-step guide on integrating `mcp-ui` into your own server, check out the full server walkthroughs on the mcp-ui documentation site. These guides show you how to add an `mcp-ui` endpoint to an existing server, create tools that return UI resources, and test your setup with the ui-inspector!
Examples
Client Examples
- ui-inspector: inspect local `mcp-ui`-enabled servers.
- MCP-UI Chat: interactive chat built with the `mcp-ui` client. Check out the hosted version!
- MCP-UI RemoteDOM Playground (`examples/remote-dom-demo`): local demo app to test RemoteDOM resources (intended for hosts).
- MCP-UI Web Component Demo (`examples/wc-demo`): local demo app to test the Web Component.
Server Examples
- TypeScript:
  - typescript-server-demo: A simple TypeScript server that demonstrates how to generate UI resources.
  - server: A full-featured TypeScript server deployed to a hosted Cloudflare environment for easy testing.
    - HTTP Streaming: https://remote-mcp-server-authless.idosalomon.workers.dev/mcp
    - SSE: https://remote-mcp-server-authless.idosalomon.workers.dev/sse
- Ruby: A barebones demo server that shows how to use the `mcp_ui_server` and `mcp` gems together.

Drop those URLs into any MCP-compatible host to see `mcp-ui` in action. For a supported local inspector, see the ui-inspector.
Supported Hosts
`mcp-ui` is supported by a growing number of MCP-compatible clients. Feature support varies by host:

| Host | Rendering | UI Actions |
|---|---|---|
| Postman | ✅ | |
| Goose | ✅ | |
| Smithery | ✅ | ❌ |
| MCPJam | ✅ | ❌ |
| fast-agent | ✅ | ❌ |
| VSCode (TBA) | ? | ? |

Legend:
- ✅: Supported
- ⚠️: Partial Support
- ❌: Not Supported (yet)
Security
Host and user security is one of `mcp-ui`'s primary concerns. In all content types, the remote code is executed in a sandboxed iframe.
Roadmap
- [X] Add online playground
- [X] Expand UI Action API (beyond tool calls)
- [X] Support Web Components
- [X] Support Remote-DOM
- [ ] Add component libraries (in progress)
- [ ] Add SDKs for additional programming languages (in progress; Ruby available)
- [ ] Support additional frontend frameworks
- [ ] Add declarative UI content type
- [ ] Support generative UI?
Contributing
`mcp-ui` is a project by Ido Salomon, in collaboration with Liad Yosef.
Contributions, ideas, and bug reports are welcome! See the contribution guidelines to get started.
License
Apache License 2.0 © The MCP-UI Authors
This project is provided "as is", without warranty of any kind. The `mcp-ui` authors and contributors shall not be held liable for any damages, losses, or issues arising from the use of this software. Use at your own risk.