
director
Context infrastructure for AI agents

Director is a context infrastructure tool for AI agents that simplifies managing MCP servers, prompts, and configurations by packaging them into portable workspaces accessible through a single endpoint. It allows users to define context workspaces once and share them across different AI clients, enabling seamless collaboration, instant context switching, and secure isolation of untrusted servers without cloud dependencies or API keys. Director offers features like workspaces, universal portability, local-first architecture, sandboxing, smart filtering, unified OAuth, observability, multiple interfaces, and compatibility with all MCP clients and servers.
README:
Context infrastructure for AI agents
curl -LsSf https://director.run/install.sh | sh
Director is a context engine that packages MCP servers, prompts, and configuration into workspaces — portable contexts accessible through a single endpoint.
Instead of configuring MCP servers individually for each agent, Director lets you define context workspaces once and use them everywhere. Share complete AI contexts between Claude, Cursor, VSCode or any MCP enabled client. Distribute workspaces to your team. Switch between development and production contexts instantly. Run untrusted servers in isolation. All without cloud dependencies, API keys or accounts.
# Install Director
$ curl -LsSf https://director.run/install.sh | sh
# Start the onboarding flow
$ director quickstart
MCP standardizes how AI agents access context. However, the ecosystem is still nascent and using it remains complicated.
Every agent needs its own configuration. You can't share context between Claude Code and Cursor. You definitely can't share with teammates. And running untrusted MCP servers means executing arbitrary code on your machine.
Director fixes this by treating context as infrastructure - something you define once and deploy everywhere.
Problem | Current State | With Director |
---|---|---|
Agent Portability | Each agent has a proprietary config format | One workspace works with all MCP clients |
Context Switching | Manual JSON editing to change tool sets | director use production switches instantly |
Team Collaboration | "Send me your MCP config" "Which one?" "The working one" | director export > context.yaml - complete, working context |
Token Efficiency | 50+ tools loaded, 5 actually needed | include: [create_pr, review_code] - load only what's relevant |
Security | npm install sketchy-mcp-server && pray | sandbox: docker - full isolation |
Debugging | Black box with no visibility | Structured JSON logs for every operation |
- 📚 Workspaces - Isolated contexts for different tasks or environments
- 🚀 Universal Portability - One workspace, all agents, any teammate
- 🏠 Local-First - Runs on your machine, not ours
- 🔐 Sandboxing - Docker/VM isolation for untrusted servers
- 🎯 Smart Filtering - Reduce token usage and improve accuracy
- 👤 Unified OAuth - Authenticate once, use everywhere
- 📊 Observability - Structured logs for debugging and compliance
- 🔧 Multiple Interfaces - CLI, YAML, Studio UI, or TypeScript SDK
- 🔌 MCP Native - Works with all clients and servers
A workspace isn't configuration — it's a complete context for your AI. Tools, prompts, environment, and security boundaries packaged together:
# Define a Workspace
workspaces:
  production_support:
    description: Investigate and resolve production issues
    servers:
      sentry: # alerts
        type: http
        url: https://mcp.sentry.dev/mcp
      cloudwatch: # logging
        type: stdio
        command: uvx
        args: ["awslabs.cloudwatch-mcp-server@latest"]
        env:
          AWS_PROFILE: "[The AWS Profile Name to use for AWS access]"
        tools:
          include: [search_logs, get_metrics] # No write access
      github: # code
        type: http
        url: https://api.githubcopilot.com/mcp/
        tools:
          include: [create_pr, search_code]
    prompts:
      - name: investigate
        content: |
          Check recent alerts, correlate with deployment times,
          search logs for errors, identify root cause
# Use with any MCP client
director connect production_support --target claude_code # Auto-configures Claude Code
director connect production_support --target cursor # Same workspace in Cursor
director export production_support > team-fix.yaml # Share with team
This workspace is:
- Portable: Works with any MCP client
- Shareable: One file contains everything
- Auditable: Every tool call is logged
- Safe: Dangerous operations filtered out
Director runs entirely on your machine. No cloud services, no accounts, no API keys. Your context never leaves your control.
# Everything runs locally
director start
# Or sandbox everything in Docker
director start --sandbox docker
Director meets you where you are. You can interact with it via YAML, the CLI, or the web-based management UI.
There are two ways to install Director:
# Option 1: Install Director & its dependencies (node, npm & uvx) using the installation script
$ curl -LsSf https://director.run/install.sh | sh
# Option 2: If you already have node installed, you can use npm
$ npm install -g @director.run/cli
# Start director & open the UI
$ director quickstart
Director is designed to be an always-on background service:
# Start director
director start
# Stop director
director stop
If you'd like to configure Director visually, the following command will open the management UI in your browser:
director studio
Director makes it easy to sandbox untrusted or insecure MCP servers:
# Run director (and all MCP servers) inside a docker sandbox
director start --sandbox docker
A workspace is a collection of MCP servers, prompts, and configuration that work together for a specific purpose. For example, maintaining a changelog, fixing bugs, performing research, replying to support tickets...
You can create as many workspaces as you'd like:
director create <workspace_name>
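For example (the workspace name here is illustrative):
# Create a workspace for one of the use cases above
director create changelog_writer
# Confirm it appears in the list
director ls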
Once you've created a workspace, you can add MCP servers. Director will proxy all tools, prompts and resources to the client.
# Add a server from the director registry
director server add <workspace_name> --entry <registry_entry>
# Add a stdio server by specifying the command to run
director server add <workspace_name> --name <server_name> --command "uvx ..."
# Add a streamable HTTP or SSE server by specifying its URL
director server add <workspace_name> --name <server_name> --url https://example.com/mcp
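Putting it together, here's a minimal sketch using the fetch registry entry (the same entry used in the CLI examples below; the workspace name is illustrative):
# Add the fetch server from the registry to a "research" workspace
director server add research --entry fetch
# Inspect the tools the workspace now exposes
director mcp list-tools research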
Director has full OAuth support. Currently, we only support OAuth in the CLI.
# Add an OAuth server by specifying the URL
director server add <workspace_name> --name notion --url https://mcp.notion.com/mcp
# If you query the workspace, you'll notice that the server is "unauthorized"
director get <workspace_name>
# This will trigger the OAuth flow in your browser
director auth <workspace_name> notion
MCP servers often add too many tools to your context, which can lead to hallucinations. You can use Director to include only the tools you need.
director update <workspace_name> <server_name> -a includeTools=[<tool_name_1>, <tool_name_2>]
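The same filter can also be expressed declaratively in director.yaml, using the tools.include form from the workspace examples in this README:
servers:
  github:
    type: http
    url: https://api.githubcopilot.com/mcp/
    tools:
      include: [create_pr, search_code]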
You can use tool name prefixing to avoid conflicts when including multiple MCP servers that use the same tool name (for example, search).
director update <workspace_name> <server_name> -a toolPrefix="prefix__"
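For instance, if two servers in a workspace both expose a tool called search (the workspace and server names below are illustrative), prefixing disambiguates them:
# Prefix each server's tools to avoid the collision
director update my_workspace github -a toolPrefix="github__"
director update my_workspace notion -a toolPrefix="notion__"
# Clients now see github__search and notion__search instead of two conflicting tools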
Director can manage client connections for you. Currently we support claude_code, claude, cursor & vscode.
# Connect the workspace to a client, currently: "claude_code", "claude", "cursor", "vscode"
director connect <workspace_name> -t <client_name>
If your client isn't supported yet, you can connect manually.
# This will print out the Streamable / SSE URL as well as the Stdio connection config
$ director connect test_workspace
Director will not only proxy prompts from the underlying MCP servers, but also lets you define your own prompts at the workspace level. This is helpful for capturing and sharing prompts that you re-use often.
# Add a prompt to a workspace. This will open your editor so you can write the prompt body.
director prompts add <workspace_name> --name <prompt_name>
You can now invoke the prompt from your favourite client as follows: /director__<prompt_name> (for example, a prompt named investigate becomes /director__investigate).
Director uses a flat configuration file to manage all of its state, which makes it trivial to make large edits to your context and to share it.
Director will use the director.yaml file in the current directory if it is present. Otherwise, it will default to ~/.director/director.yaml.
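This lookup order makes per-project contexts straightforward. A sketch of the behaviour described above (paths are illustrative):
# Inside a project that ships its own director.yaml
cd ~/code/my-app
director ls # operates on ./director.yaml
# Anywhere else
cd ~
director ls # falls back to ~/.director/director.yaml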
# Configuration file reference
workspaces:
  code_review:
    description: Automates code reviews
    servers:
      filesystem:
        type: stdio
        command: npx
        args: ["@modelcontextprotocol/server-filesystem", "./src"]
      github:
        type: http
        url: https://api.githubcopilot.com/mcp/
        tools:
          include: [create_issue, search_code]
    prompts:
      - name: code_review
        content: "Review this code for security vulnerabilities and performance issues"
      - name: write_tests
        content: "Write comprehensive unit tests including edge cases"
Every MCP operation is logged as JSON:
{
  "timestamp": "2024-01-20T10:30:00Z",
  "workspace": "production",
  "server": "github",
  "method": "tools/call",
  "tool": "create_issue",
  "duration_ms": 230,
  "status": "success"
}
The log level can be configured via the LOG_LEVEL environment variable.
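For example, to turn up verbosity while debugging (the level names are an assumption; check the docs for the exact values your version accepts):
# Run with more verbose logging
LOG_LEVEL=debug director start
Because each operation is logged as a single JSON object, standard tooling applies. Assuming logs are written to stdout, you could surface failed operations with jq:
# Show only operations that did not succeed
LOG_LEVEL=debug director start | jq 'select(.status != "success")'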
Director also provides a few utilities to help you debug MCP servers:
director mcp list-tools <workspace_name>
director mcp get-tool <workspace_name> <toolName>
director mcp call-tool <workspace_name> <toolName>
Manage context for your AI agent
USAGE
director <command> [subcommand] [flags]
CORE COMMANDS
quickstart Start the gateway and open the studio in your browser
serve Start the web service
studio Open the UI in your browser
ls List proxies
get <workspaceId> [serverName] Show proxy details
auth <proxyId> <server> Authenticate a server
create <name> Create a new proxy
destroy <proxyId> Delete a proxy
connect <proxyId> [options] Connect a proxy to an MCP client
disconnect <proxyId> [options] Disconnect a proxy from an MCP client
add <proxyId> [options] Add a server to a proxy
remove <proxyId> <serverName> Remove a server from a proxy
update <proxyId> [serverName] [options] Update proxy attributes
http2stdio <url> Proxy an HTTP connection (sse or streamable) to a stdio stream
env [options] Print environment variables
status Get the status of the director
REGISTRY
registry ls List all available servers in the registry
registry get <entryName> Get detailed information about a registry item
registry readme <entryName> Print the readme for a registry item
MCP
mcp list-tools <proxyId> List tools on a proxy
mcp get-tool <proxyId> <toolName> Get the details of a tool
mcp call-tool <proxyId> <toolName> [options] Call a tool on a proxy
PROMPTS
prompts ls <proxyId> List all prompts for a proxy
prompts add <proxyId> Add a new prompt to a proxy
prompts edit <proxyId> <promptName> Edit an existing prompt
prompts remove <proxyId> <promptName> Remove a prompt from a proxy
prompts get <proxyId> <promptName> Show the details of a specific prompt
FLAGS
-V, --version output the version number
EXAMPLES
$ director create my-proxy # Create a new proxy
$ director add my-proxy --entry fetch # Add a server to a proxy
$ director connect my-proxy --target claude # Connect my-proxy to claude
Programmatic control for advanced use cases:
import { Director } from '@director.run/sdk';

const director = new Director();

// Create workspace programmatically
const workspace = await director.workspaces.create({
  name: 'ci-environment',
  servers: [{
    name: 'github',
    command: 'mcp-server-github',
    env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN }
  }]
});

// Execute tools
const result = await workspace.callTool('github.create_issue', {
  title: 'Automated issue from CI',
  body: 'This issue was created by Director'
});
- apps/cli - The command-line interface, the primary way to interact with Director. Available on npm.
- apps/sdk - The TypeScript SDK, available on npm.
- apps/docker - The Director Docker image, which allows you to run Director (and all MCP servers) securely inside a container. Available on Docker Hub.
- apps/docs - Project documentation hosted at https://docs.director.run
- apps/registry - Backend for the registry hosted at https://registry.director.run
- apps/sandbox - A tool for running Director (and all MCP servers) securely inside a VM. Apple Silicon only.
- packages/client-configurator - Library for managing MCP client configuration files
- packages/gateway - Core gateway and proxy logic
- packages/mcp - Extensions to the MCP SDK that add middleware functionality
- packages/utilities - Shared utilities used across all packages and apps
- packages/design - Design system: reusable UI components, hooks, and styles for all Director apps
This is a monorepo managed by Turborepo.
If you're using Director, have any ideas, or just want to chat about MCP, we'd love to hear from you:
- 💬 Join our Discord
- 📧 Send us an Email
- 🐛 Report a Bug
- 🐦 Follow us on X / Twitter
We welcome contributions! See CONTRIBUTING.mdx for guidelines.
# Fork and clone
git clone https://github.com/director_run/director
cd director
./scripts/setup-development.sh
bun run test
AGPL v3 - See LICENSE for details.
Similar Open Source Tools

model-compose
model-compose is an open-source, declarative workflow orchestrator inspired by docker-compose. It lets you define and run AI model pipelines using simple YAML files. Effortlessly connect external AI services or run local AI models within powerful, composable workflows. Features include declarative design, multi-workflow support, modular components, flexible I/O routing, streaming mode support, and more. It supports running workflows locally or serving them remotely, Docker deployment, environment variable support, and provides a CLI interface for managing AI workflows.

raycast_api_proxy
The Raycast AI Proxy is a tool that acts as a proxy for the Raycast AI application, allowing users to utilize the application without subscribing. It intercepts and forwards Raycast requests to various AI APIs, then reformats the responses for Raycast. The tool supports multiple AI providers and allows for custom model configurations. Users can generate self-signed certificates, add them to the system keychain, and modify DNS settings to redirect requests to the proxy. The tool is designed to work with providers like OpenAI, Azure OpenAI, Google, and more, enabling tasks such as AI chat completions, translations, and image generation.

rclip
rclip is a command-line photo search tool powered by the OpenAI's CLIP neural network. It allows users to search for images using text queries, similar image search, and combining multiple queries. The tool extracts features from photos to enable searching and indexing, with options for previewing results in supported terminals or custom viewers. Users can install rclip on Linux, macOS, and Windows using different installation methods. The repository follows the Conventional Commits standard and welcomes contributions from the community.

BuildCLI
BuildCLI is a command-line interface (CLI) tool designed for managing and automating common tasks in Java project development. It simplifies the development process by allowing users to create, compile, manage dependencies, run projects, generate documentation, manage configuration profiles, dockerize projects, integrate CI/CD tools, and generate structured changelogs. The tool aims to enhance productivity and streamline Java project management by providing a range of functionalities accessible directly from the terminal.

go-embeddings
This project provides API clients for fetching embeddings from various LLM providers. It includes implementations for OpenAI, Cohere, Google Vertex, VoyageAI, Ollama, and AWS Bedrock. Sample programs demonstrate how to use the client packages. The 'document' package offers text splitters inspired by Langchain framework. Environment variables are used to initialize API clients for each provider. Contributions are welcome.

agenticSeek
AgenticSeek is a voice-enabled AI assistant powered by DeepSeek R1 agents, offering a fully local alternative to cloud-based AI services. It allows users to interact with their filesystem, code in multiple languages, and perform various tasks autonomously. The tool is equipped with memory to remember user preferences and past conversations, and it can divide tasks among multiple agents for efficient execution. AgenticSeek prioritizes privacy by running entirely on the user's hardware without sending data to the cloud.

cursor-tools
cursor-tools is a CLI tool designed to enhance AI agents with advanced skills, such as web search, repository context, documentation generation, GitHub integration, Xcode tools, and browser automation. It provides features like Perplexity for web search, Gemini 2.0 for codebase context, and Stagehand for browser operations. The tool requires API keys for Perplexity AI and Google Gemini, and supports global installation for system-wide access. It offers various commands for different tasks and integrates with Cursor Composer for AI agent usage.

llm-vscode
llm-vscode is an extension designed for all things LLM, utilizing llm-ls as its backend. It offers features such as code completion with 'ghost-text' suggestions, the ability to choose models for code generation via HTTP requests, ensuring prompt size fits within the context window, and code attribution checks. Users can configure the backend, suggestion behavior, keybindings, llm-ls settings, and tokenization options. Additionally, the extension supports testing models like Code Llama 13B, Phind/Phind-CodeLlama-34B-v2, and WizardLM/WizardCoder-Python-34B-V1.0. Development involves cloning llm-ls, building it, and setting up the llm-vscode extension for use.

llm-functions
LLM Functions is a project that enables the enhancement of large language models (LLMs) with custom tools and agents developed in bash, javascript, and python. Users can create tools for their LLM to execute system commands, access web APIs, or perform other complex tasks triggered by natural language prompts. The project provides a framework for building tools and agents, with tools being functions written in the user's preferred language and automatically generating JSON declarations based on comments. Agents combine prompts, function callings, and knowledge (RAG) to create conversational AI agents. The project is designed to be user-friendly and allows users to easily extend the capabilities of their language models.

tiledesk-dashboard
Tiledesk is an open-source live chat platform with integrated chatbots written in Node.js and Express. It is designed to be a multi-channel platform for web, Android, and iOS, and it can be used to increase sales or provide post-sales customer service. Tiledesk's chatbot technology allows for automation of conversations, and it also provides APIs and webhooks for connecting external applications. Additionally, it offers a marketplace for apps and features such as CRM, ticketing, and data export.

runpod-worker-comfy
runpod-worker-comfy is a serverless API tool that allows users to run any ComfyUI workflow to generate an image. Users can provide input images as base64-encoded strings, and the generated image can be returned as a base64-encoded string or uploaded to AWS S3. The tool is built on Ubuntu + NVIDIA CUDA and provides features like built-in checkpoints and VAE models. Users can configure environment variables to upload images to AWS S3 and interact with the RunPod API to generate images. The tool also supports local testing and deployment to Docker hub using Github Actions.

chatgpt-cli
ChatGPT CLI provides a powerful command-line interface for seamless interaction with ChatGPT models via OpenAI and Azure. It features streaming capabilities, extensive configuration options, and supports various modes like streaming, query, and interactive mode. Users can manage thread-based context, sliding window history, and provide custom context from any source. The CLI also offers model and thread listing, advanced configuration options, and supports GPT-4, GPT-3.5-turbo, and Perplexity's models. Installation is available via Homebrew or direct download, and users can configure settings through default values, a config.yaml file, or environment variables.

Discord-AI-Chatbot
Discord AI Chatbot is a versatile tool that seamlessly integrates into your Discord server, offering a wide range of capabilities to enhance your communication and engagement. With its advanced language model, the bot excels at imaginative generation, providing endless possibilities for creative expression. Additionally, it offers secure credential management, ensuring the privacy of your data. The bot's hybrid command system combines the best of slash and normal commands, providing flexibility and ease of use. It also features mention recognition, ensuring prompt responses whenever you mention it or use its name. The bot's message handling capabilities prevent confusion by recognizing when you're replying to others. You can customize the bot's behavior by selecting from a range of pre-existing personalities or creating your own. The bot's web access feature unlocks a new level of convenience, allowing you to interact with it from anywhere. With its open-source nature, you have the freedom to modify and adapt the bot to your specific needs.

mycoder
An open-source mono-repository containing the MyCoder agent and CLI. It leverages Anthropic's Claude API for intelligent decision making, has a modular architecture with various tool categories, supports parallel execution with sub-agents, can modify code by writing itself, features a smart logging system for clear output, and is human-compatible using README.md, project files, and shell commands to build its own context.

iffy
Iffy is a tool for intelligent content moderation at scale, allowing users to keep unwanted content off their platform without the need to manage a team of moderators. It provides features such as a Moderation Dashboard to view and manage all moderation activity, User Lifecycle to automatically suspend users with flagged content, Appeals Management for efficient handling of user appeals, and Powerful Rules & Presets to create custom moderation rules. Users can choose between the managed Iffy Cloud or the free self-hosted Iffy Community version, each offering different features and setup requirements.
For similar tasks

cuckoo
Cuckoo is a Decentralized AI Platform that focuses on GPU-sharing for text-to-image generation and LLM inference. It provides a platform for users to generate images using Telegram or Discord.

Ling
Ling is a MoE LLM provided and open-sourced by InclusionAI. It includes two different sizes, Ling-Lite with 16.8 billion parameters and Ling-Plus with 290 billion parameters. These models show impressive performance and scalability for various tasks, from natural language processing to complex problem-solving. The open-source nature of Ling encourages collaboration and innovation within the AI community, leading to rapid advancements and improvements. Users can download the models from Hugging Face and ModelScope for different use cases. Ling also supports offline batched inference and online API services for deployment. Additionally, users can fine-tune Ling models using Llama-Factory for tasks like SFT and DPO.

langbase-examples
Langbase Examples is an open-source repository showcasing projects built using Langbase, a composable AI infrastructure for creating and deploying AI agents with hyper-personalized memory. Langbase offers AI Pipes for building custom AI agents as APIs and Memory (RAG) for managed search engine capabilities. The platform also includes AI Studio for collaboration and deployment of AI projects, providing a complete AI developer platform for teams to work together on building and deploying AI features.

fish-ai
fish-ai is a tool that adds AI functionality to Fish shell. It can be integrated with various AI providers like OpenAI, Azure OpenAI, Google, Hugging Face, Mistral, or a self-hosted LLM. Users can transform comments into commands, autocomplete commands, and suggest fixes. The tool allows customization through configuration files and supports switching between contexts. Data privacy is maintained by redacting sensitive information before submission to the AI models. Development features include debug logging, testing, and creating releases.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.