director
Context infrastructure for AI agents
Director is a context infrastructure tool for AI agents that simplifies managing MCP servers, prompts, and configurations by packaging them into portable workspaces accessible through a single endpoint. It allows users to define context workspaces once and share them across different AI clients, enabling seamless collaboration, instant context switching, and secure isolation of untrusted servers without cloud dependencies or API keys. Director offers features like workspaces, universal portability, local-first architecture, sandboxing, smart filtering, unified OAuth, observability, multiple interfaces, and compatibility with all MCP clients and servers.
curl -LsSf https://director.run/install.sh | sh
Director is a context engine that packages MCP servers, prompts, and configuration into workspaces — portable contexts accessible through a single endpoint.
Instead of configuring MCP servers individually for each agent, Director lets you define context workspaces once and use them everywhere. Share complete AI contexts between Claude, Cursor, VSCode or any MCP enabled client. Distribute workspaces to your team. Switch between development and production contexts instantly. Run untrusted servers in isolation. All without cloud dependencies, API keys or accounts.
# Install Director
$ curl -LsSf https://director.run/install.sh | sh
# Start the onboarding flow
$ director quickstart

MCP standardizes how AI agents access context. However, the ecosystem is still nascent and using it remains complicated.
Every agent needs its own configuration. You can't share context between Claude Code and Cursor. You definitely can't share with teammates. And running untrusted MCP servers means executing arbitrary code on your machine.
Director fixes this by treating context as infrastructure - something you define once and deploy everywhere.
| Problem | Current State | With Director |
|---|---|---|
| Agent Portability | Each agent has a proprietary config format | One workspace works with all MCP clients |
| Context Switching | Manual JSON editing to change tool sets | `director use production` switches instantly |
| Team Collaboration | "Send me your MCP config" "Which one?" "The working one" | `director export > context.yaml` - complete, working context |
| Token Efficiency | 50+ tools loaded, 5 actually needed | `include: [create_pr, review_code]` - load only what's relevant |
| Security | `npm install sketchy-mcp-server && pray` | `sandbox: docker` - full isolation |
| Debugging | Black box with no visibility | Structured JSON logs for every operation |
- 📚 Workspaces - Isolated contexts for different tasks or environments
- 🚀 Universal Portability - One workspace, all agents, any teammate
- 🏠 Local-First - Runs on your machine, not ours
- 🔐 Sandboxing - Docker/VM isolation for untrusted servers
- 🎯 Smart Filtering - Reduce token usage and improve accuracy
- 👤 Unified OAuth - Authenticate once, use everywhere
- 📊 Observability - Structured logs for debugging and compliance
- 🔧 Multiple Interfaces - CLI, YAML, Studio UI, or TypeScript SDK
- 🔌 MCP Native - Works with all clients and servers
A workspace isn't configuration — it's a complete context for your AI. Tools, prompts, environment, and security boundaries packaged together:
# Define a Workspace
workspaces:
  production_support:
    description: Investigate and resolve production issues
    servers:
      sentry: # alerts
        type: http
        url: https://mcp.sentry.dev/mcp
      cloudwatch: # logging
        type: stdio
        command: uvx
        args: ["awslabs.cloudwatch-mcp-server@latest"]
        env:
          AWS_PROFILE: "[The AWS Profile Name to use for AWS access]"
        tools:
          include: [search_logs, get_metrics] # No write access
      github: # code
        type: http
        url: https://api.githubcopilot.com/mcp/
        tools:
          include: [create_pr, search_code]
    prompts:
      - name: investigate
        content: |
          Check recent alerts, correlate with deployment times,
          search logs for errors, identify root cause
# Use with any MCP client
director connect production_support --target claude_code # Auto-configures Claude Code
director connect production_support --target cursor # Same workspace in Cursor
director export production_support > team-fix.yaml # Share with team

This workspace is:
- Portable: Works with any MCP client
- Shareable: One file contains everything
- Auditable: Every tool call is logged
- Safe: Dangerous operations filtered out
Director runs entirely on your machine. No cloud services, no accounts, no API keys. Your context never leaves your control.
# Everything runs locally
director start
# Or sandbox everything in Docker
director start --sandbox docker

Director meets you where you are. You can interact with it via YAML, the CLI, or the web-based management UI.
There are two ways to install Director:
# Option 1: Install Director and its dependencies (node, npm & uvx) using the installation script
$ curl -LsSf https://director.run/install.sh | sh
# Option 2: If you already have node installed, you can use npm
$ npm install -g @director.run/cli
# Start director & open the UI
$ director quickstart

Director is designed to be an always-on background service:
# Start director
director start
# Stop director
director stop

If you'd like to configure Director visually, this will open the management UI in your browser:
director studio

Director makes it easy to sandbox untrusted or insecure MCP servers:
# Run director (and all MCP servers) inside a docker sandbox
director start --sandbox docker

A workspace is a collection of MCP servers, prompts, and configuration that work together for a specific purpose: maintaining a changelog, fixing bugs, performing research, replying to support tickets, and so on.
You can create as many workspaces as you'd like:
director create <workspace_name>

Once you've created a workspace, you can add MCP servers. Director will proxy all tools, prompts and resources to the client.
# Add a server from the director registry
director server add <workspace_name> --entry <registry_entry>
# Add a stdio server by specifying the command to run
director server add <workspace_name> --name <server_name> --command "uvx ..."
# Add a streamable HTTP or SSE server by specifying its URL
director server add <workspace_name> --name <server_name> --url https://example.com/mcp

Director has full OAuth support. Currently, we only support OAuth in the CLI.
# Add an OAuth server by specifying the URL
director server add <workspace_name> --name notion --url https://mcp.notion.com/mcp
# If you query the workspace, you'll notice that the server is "unauthorized"
director get <workspace_name>
# This will trigger the OAuth flow in your browser
director auth <workspace_name> notion

MCP servers often add too many tools to your context, which can lead to hallucinations. You can use Director to include only the tools you need.
director update <workspace_name> <server_name> -a includeTools=[<tool_name_1>, <tool_name_2>]

You can use tool name prefixing to avoid conflicts when including multiple MCP servers that use the same tool name (for example, search).
director update <workspace_name> <server_name> -a toolPrefix="prefix__"

Director can manage client connections for you. Currently we support claude_code, claude, cursor & vscode.
# Connect the workspace to a client, currently: "claude_code", "claude", "cursor", "vscode"
director connect <workspace_name> -t <client_name>

If your client isn't supported yet, you can connect manually.
# This will print out the Streamable / SSE URL as well as the Stdio connection config
$ director connect test_workspace

Director will not only proxy prompts from the underlying MCP servers, but also lets you define your own prompts at the workspace level. This is helpful for capturing and sharing prompts that you re-use often.
# Add a prompt to a workspace, this will open up your editor for you to add in the prompt body.
director prompts add <workspace_name> --name <prompt_name>

You can now invoke the prompt from your favourite client as follows: \director__<prompt_name>
Director uses a flat configuration file to manage all of its state, which makes it trivial to make large edits to your context, as well as to share it.
Director will use the director.yaml file in the current directory if it is present. Otherwise, it will default to ~/.director/director.yaml.
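The lookup order described above (a `director.yaml` in the current directory, falling back to `~/.director/director.yaml`) can be sketched in a few lines. This is an illustration of the documented behaviour, not Director's actual implementation; `resolveConfigPath` is our own name:

```typescript
import { existsSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

// Resolve the config path the way the docs describe:
// prefer ./director.yaml, fall back to ~/.director/director.yaml.
function resolveConfigPath(cwd: string = process.cwd()): string {
  const local = join(cwd, "director.yaml");
  if (existsSync(local)) {
    return local;
  }
  return join(homedir(), ".director", "director.yaml");
}

console.log(resolveConfigPath());
```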
# Configuration file reference
workspaces:
  - name: code_review
    description: Automates code reviews
    servers:
      filesystem:
        type: stdio
        command: npx
        args: ["@modelcontextprotocol/server-filesystem", "./src"]
      github:
        type: http
        url: https://api.githubcopilot.com/mcp/
        tools:
          include: [create_issue, search_code]
    prompts:
      - name: code_review
        content: "Review this code for security vulnerabilities and performance issues"
      - name: write_tests
        content: "Write comprehensive unit tests including edge cases"

Every MCP operation is logged as JSON:
{
"timestamp": "2024-01-20T10:30:00Z",
"workspace": "production",
"server": "github",
"method": "tools/call",
"tool": "create_issue",
"duration_ms": 230,
"status": "success"
}

The log level can be configured via the LOG_LEVEL environment variable.
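Because every entry is a single JSON object, the log stream is easy to post-process with ordinary tooling. A minimal sketch: the field names follow the example entry above, while `slowCalls` and the sample data are our own, not part of Director:

```typescript
// Each Director log line is one JSON object. Parse a batch of lines
// and keep the operations slower than a threshold.
interface LogEntry {
  timestamp: string;
  workspace: string;
  server: string;
  method: string;
  tool?: string;
  duration_ms: number;
  status: string;
}

function slowCalls(lines: string[], thresholdMs: number): LogEntry[] {
  return lines
    .map((line) => JSON.parse(line) as LogEntry)
    .filter((entry) => entry.duration_ms > thresholdMs);
}

// Hypothetical sample entries shaped like the example above.
const sample = [
  '{"timestamp":"2024-01-20T10:30:00Z","workspace":"production","server":"github","method":"tools/call","tool":"create_issue","duration_ms":230,"status":"success"}',
  '{"timestamp":"2024-01-20T10:30:05Z","workspace":"production","server":"sentry","method":"tools/call","tool":"search_logs","duration_ms":40,"status":"success"}',
];

console.log(slowCalls(sample, 100).map((e) => e.tool)); // [ 'create_issue' ]
```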
Director also provides a few utilities to help you debug MCP servers:
director mcp list-tools <workspace_name>
director mcp get-tool <workspace_name> <toolName>
director mcp call-tool <workspace_name> <toolName>
Manage context for your AI agent
USAGE
director <command> [subcommand] [flags]
CORE COMMANDS
quickstart Start the gateway and open the studio in your browser
serve Start the web service
studio Open the UI in your browser
ls List proxies
get <workspaceId> [serverName] Show proxy details
auth <proxyId> <server> Authenticate a server
create <name> Create a new proxy
destroy <proxyId> Delete a proxy
connect <proxyId> [options] Connect a proxy to a MCP client
disconnect <proxyId> [options] Disconnect a proxy from an MCP client
add <proxyId> [options] Add a server to a proxy.
remove <proxyId> <serverName> Remove a server from a proxy
update <proxyId> [serverName] [options] Update proxy attributes
http2stdio <url> Proxy an HTTP connection (sse or streamable) to a stdio stream
env [options] Print environment variables
status Get the status of the director
REGISTRY
registry ls List all available servers in the registry
registry get <entryName> Get detailed information about a registry item
registry readme <entryName> Print the readme for a registry item
MCP
mcp list-tools <proxyId> List tools on a proxy
mcp get-tool <proxyId> <toolName> Get the details of a tool
mcp call-tool <proxyId> <toolName> [options] Call a tool on a proxy
PROMPTS
prompts ls <proxyId> List all prompts for a proxy
prompts add <proxyId> Add a new prompt to a proxy
prompts edit <proxyId> <promptName> Edit an existing prompt
prompts remove <proxyId> <promptName> Remove a prompt from a proxy
prompts get <proxyId> <promptName> Show the details of a specific prompt
FLAGS
-V, --version output the version number
EXAMPLES
$ director create my-proxy # Create a new proxy
$ director add my-proxy --entry fetch # Add a server to a proxy
$ director connect my-proxy --target claude # Connect my-proxy to claude
Programmatic control for advanced use cases:
import { Director } from '@director.run/sdk';
const director = new Director();
// Create workspace programmatically
const workspace = await director.workspaces.create({
name: 'ci-environment',
servers: [{
name: 'github',
command: 'mcp-server-github',
env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN }
}]
});
// Execute tools
const result = await workspace.callTool('github.create_issue', {
title: 'Automated issue from CI',
body: 'This issue was created by Director'
});

- apps/cli - The command-line interface, the primary way to interact with Director. Available on npm.
- apps/sdk - The TypeScript SDK, available on npm.
- apps/docker - The Director docker image, which allows you to run Director (and all MCP servers) securely inside a container. Available on Docker Hub.
- apps/docs - Project documentation hosted at https://docs.director.run
- apps/registry - Backend for the registry hosted at https://registry.director.run
- apps/sandbox - A tool for running Director (and all MCP servers) securely inside a VM. Apple Silicon only.
- packages/client-configurator - Library for managing MCP client configuration files
- packages/gateway - Core gateway and proxy logic
- packages/mcp - Extensions to the MCP SDK that add middleware functionality
- packages/utilities - Shared utilities used across all packages and apps
- packages/design - Design system: reusable UI components, hooks, and styles for all Director apps
This is a monorepo managed by Turborepo.
If you're using Director, have any ideas, or just want to chat about MCP, we'd love to hear from you:
- 💬 Join our Discord
- 📧 Send us an Email
- 🐛 Report a Bug
- 🐦 Follow us on X / Twitter
We welcome contributions! See CONTRIBUTING.mdx for guidelines.
# Fork and clone
git clone https://github.com/director_run/director
cd director
./scripts/setup-development.sh
bun run test

AGPL v3 - See LICENSE for details.