
aider-desk
Desktop application for Aider AI assistant
Stars: 107

AiderDesk is a desktop application that enhances coding workflow by leveraging AI capabilities. It offers an intuitive GUI, project management, IDE integration, MCP support, settings management, cost tracking, structured messages, visual file management, model switching, code diff viewer, one-click reverts, and easy sharing. Users can install it by downloading the latest release and running the executable. AiderDesk also supports Python version detection and auto update disabling. It includes features like multiple project management, context file management, model switching, chat mode selection, question answering, cost tracking, MCP server integration, and MCP support for external tools and context. Development setup involves cloning the repository, installing dependencies, running in development mode, and building executables for different platforms. Contributions from the community are welcome following specific guidelines.
README:
Supercharge your coding workflow with AiderDesk, a sleek desktop application that brings the power of aider to your fingertips with a modern GUI. Leverage AI to accelerate your coding tasks while enjoying seamless project management, cost tracking, and IDE integration.
Transform your AI coding experience with AiderDesk - all the power of the Aider console tool in an intuitive desktop interface. Whether you're managing multiple projects, integrating with your favorite IDE, or tracking costs, AiderDesk elevates your productivity to new heights.
- ✨ Key Features
- 📥 Installation
- 📸 Screenshots
- 🛠️ Model Context Protocol (MCP) Support
- 🌐 REST API
- 👨‍💻 Development Setup
- 🤝 Contributing
- ⭐ Star History
- 🖥️ Intuitive GUI - Replace command-line complexities with a sleek visual interface
- 📂 Project Management - Organize and switch between multiple codebases effortlessly
- 🔌 IDE Integration - Automatically manage context files in:
- 🌐 REST API - Expose functionality via REST API for external tools
- 🧩 MCP Support - Connect to Model Context Protocol servers for enhanced AI capabilities
- 🔑 Settings Management - Easily configure API keys and environment variables
- 💰 Cost Tracking - Monitor token usage and expenses with detailed insights
- 📨 Structured Messages - View code, prompts, and outputs in a clear, organized manner
- 📄 Visual File Management - Add, remove, and manage context files with ease
- 🔄 Model Switching - Seamlessly switch between different AI models while preserving context
- 🔍 Code Diff Viewer - Review changes with side-by-side comparison
- ⏪ One-Click Reverts - Undo specific AI-generated changes while keeping others
- 📋 Easy Sharing - Copy and share code changes or conversations instantly
- 💾 Session Management - Save and load coding sessions to preserve context and progress
- Python 3.9-3.12 installed on your system
- Download the latest release for your platform from Releases
- Run the downloaded executable
If you encounter issues with the application not detecting the correct Python version, you can specify the path to the desired Python executable using the AIDER_DESK_PYTHON environment variable. This is typically only needed on the initial run/setup of AiderDesk.
For example, on macOS or Linux:
export AIDER_DESK_PYTHON=/usr/bin/python3.10
Or on Windows (PowerShell):
$env:AIDER_DESK_PYTHON = "C:\Path\To\Python310\python.exe"
Replace /usr/bin/python3.10 or C:\Path\To\Python310\python.exe with the actual path to your Python executable.
If you want to disable automatic updates, you can set the AIDER_DESK_NO_AUTO_UPDATE environment variable to true. This is useful in environments where you want to control when updates are applied.
For example, on macOS or Linux:
export AIDER_DESK_NO_AUTO_UPDATE=true
Or on Windows (PowerShell):
$env:AIDER_DESK_NO_AUTO_UPDATE = "true"

Main application interface showing the chat interface, file management, and project overview

Aider settings and preferences

Manage and switch between multiple projects

Manage files included in the AI context

Switch between different models

Switch between different chat modes

Answer questions and run commands

Side-by-side code comparison and diff viewer

Token usage and cost tracking for the session, per project

Configure and manage Model Context Protocol servers for enhanced AI capabilities

Save and load coding sessions to preserve context and progress
AiderDesk integrates with the Model Context Protocol (MCP), enhancing your coding workflow with external tools and context:
MCP connects AI models to external tools like web browsers, documentation systems, and specialized programming utilities. AiderDesk can use these tools to gather information, then pass the results to Aider for implementing actual code changes.
- Tool Integration: Connect to browsers, documentation systems, and language-specific tools.
- Provider Options: Choose between OpenAI, Anthropic, Gemini, Deepseek, Amazon Bedrock and custom OpenAI compatible providers.
- Flexible Configuration: Enable/disable servers, customize settings, and control usage.
- Seamless Workflow: MCP tools gather information, then Aider implements the code changes.
- Aider Tools: Use Aider tools to add or drop context files and run prompts.
- Context Files: Add the content of context files into the MCP agent's chat.
AiderDesk should work with any MCP-compatible server, including Brave API MCP server for searching the web and custom language-specific tools.
AiderDesk comes with a built-in MCP server that provides tools for interacting with the AiderDesk API. This allows you to use MCP to manage context files, run prompts, and more.
To use the built-in MCP server, add the following configuration to your MCP settings:
Windows
{
  "mcpServers": {
    "aider-desk": {
      "command": "node",
      "args": ["path-to-appdata/aider-desk/mcp-server/aider-desk-mcp-server.js", "/path/to/project"],
      "env": {
        "AIDER_DESK_API_BASE_URL": "http://localhost:24337/api"
      }
    }
  }
}
Note: Replace path-to-appdata with the absolute path to your AppData directory. You can find this value by running echo %APPDATA% in your command prompt.
macOS
{
  "mcpServers": {
    "aider-desk": {
      "command": "node",
      "args": ["/path/to/home/Library/Application Support/aider-desk/mcp-server/aider-desk-mcp-server.js", "/path/to/project"],
      "env": {
        "AIDER_DESK_API_BASE_URL": "http://localhost:24337/api"
      }
    }
  }
}
Note: Replace /path/to/home with the absolute path to your home directory. You can find this value by running echo $HOME in your terminal.
Linux
{
  "mcpServers": {
    "aider-desk": {
      "command": "node",
      "args": ["/path/to/home/.config/aider-desk/mcp-server/aider-desk-mcp-server.js", "/path/to/project"],
      "env": {
        "AIDER_DESK_API_BASE_URL": "http://localhost:24337/api"
      }
    }
  }
}
Note: Replace /path/to/home with the absolute path to your home directory. You can find this value by running echo $HOME in your terminal.
The server supports the following:
Command-line arguments:
- First argument: Project directory path (default: current directory)
Environment variables:
- AIDER_DESK_API_BASE_URL: The base URL of the AiderDesk API (default: http://localhost:24337/api)
With this configuration, the MCP server will automatically use the specified project directory for all tool calls, so you don't need to specify the project directory when using the tools.
The AiderDesk MCP server provides the following tools:
- add_context_file: Add a file to the context of AiderDesk
- drop_context_file: Remove a file from the context of AiderDesk
- get_context_files: Get the list of context files in AiderDesk
- get_addable_files: Get the list of project files that can be added to the context
- run_prompt: Run a prompt in AiderDesk
These tools allow MCP clients (Claude Desktop, Claude Code, Cursor, Windsurf...) to interact with your AiderDesk, managing context files and running prompts.
Note: The AiderDesk application must be running for the MCP server to function.
AiderDesk provides a REST API for external tools to interact with the application. The API runs on the same port as the main application (default 24337, configurable via the AIDER_DESK_PORT environment variable).
/api/add-context-file
- Method: POST
- Request Body: { "projectDir": "path/to/your/project", "path": "path/to/the/file", "readOnly": false }
- Response: Returns the list of context files in the project.
[ { "path": "path/to/the/file", "readOnly": false } ]
/api/drop-context-file
- Method: POST
- Request Body: { "projectDir": "path/to/your/project", "path": "path/to/the/file" }
- Response: Returns the list of context files in the project.
[]
/api/get-context-files
- Method: POST
- Request Body: { "projectDir": "path/to/your/project" }
- Response: Returns the list of context files in the project.
[ { "path": "path/to/the/file", "readOnly": false } ]
/api/get-addable-files
- Method: POST
- Request Body: { "projectDir": "path/to/your/project", "searchRegex": "optional/regex/filter" }
- Response: Returns the list of files that can be added to the project.
[ { "path": "path/to/the/file" } ]
/api/run-prompt
- Method: POST
- Request Body:
{
  "projectDir": "path/to/your/project",
  "prompt": "Your prompt here",
  "editFormat": "code" // Optional: "code", "ask", or "architect"
}
- Response:
[
  {
    "messageId": "unique-message-id",
    "baseDir": "path/to/your/project",
    "content": "The AI generated response",
    "reflectedMessage": "Optional reflected message",
    "editedFiles": ["file1.txt", "file2.py"],
    "commitHash": "a1b2c3d4e5f6",
    "commitMessage": "Optional commit message",
    "diff": "Optional diff content",
    "usageReport": {
      "sentTokens": 100,
      "receivedTokens": 200,
      "messageCost": 0.5,
      "totalCost": 1.0,
      "mcpToolsCost": 0.2
    }
  }
]
If you want to run from source, you can follow these steps:
# Clone the repository
$ git clone https://github.com/hotovo/aider-desk.git
$ cd aider-desk
# Install dependencies
$ npm install
# Run in development mode
$ npm run dev
# Build executables
# For Windows
$ npm run build:win
# For macOS
$ npm run build:mac
# For Linux
$ npm run build:linux
We welcome contributions from the community! Here's how you can help improve aider-desk:
- Fork the repository on GitHub
- Create a new branch for your feature or bugfix: git checkout -b my-feature-branch
- Commit your changes with clear, descriptive messages
- Push your branch to your fork
- Create a Pull Request against the main branch of the original repository
Please follow these guidelines:
- Keep PRs focused on a single feature or bugfix
- Update documentation when adding new features
- Follow the existing code style and conventions
- Write clear commit messages and PR descriptions
For major changes, please open an issue first to discuss what you would like to change.
Thank you ❤️