
aider-desk
Desktop application for Aider AI assistant and much more
Stars: 769

AiderDesk is a desktop application that enhances the coding workflow by bringing the power of Aider into a GUI. It offers multi-project management, IDE integration, an autonomous Agent mode, MCP support, session management, cost tracking with a usage dashboard, structured messages, visual context-file management, on-the-fly model switching, a code diff viewer, one-click reverts, and easy sharing. Users can install it by downloading the latest release and running the executable; environment variables let you disable automatic updates or pin a specific Aider version. A REST API and a built-in MCP server expose AiderDesk's core functionality to external tools. Development setup involves cloning the repository, installing dependencies, running in development mode, and building executables for each platform. Community contributions are welcome and follow the project's guidelines.
README:
Elevate your development workflow with AiderDesk, a sophisticated desktop application that brings all the power of aider into a user-friendly graphical interface. Whether you're managing multiple projects, integrating with your favorite IDE, or tracking costs, AiderDesk takes your productivity to new heights.
AiderDesk is packed with features designed for modern software development:
- 🖥️ Intuitive GUI: A clean, visual interface replacing command-line interactions.
- 📂 Multi-Project Management: Seamlessly organize, switch between, and manage multiple codebases.
- 🔌 Effortless IDE Integration: Automatically sync context files with your active editor in IntelliJ IDEA and VSCode.
- 🤖 Powerful Agent Mode: Utilize an autonomous AI agent (powered by Vercel AI SDK) capable of complex task planning and execution using various tools.
- 🧩 Extensible via MCP: Connect to Model Context Protocol (MCP) servers to grant the Agent access to external tools like web search, documentation lookups, and more.
- ✍️ Custom Commands: Define and execute your own commands to automate tasks and extend AiderDesk's capabilities.
- 📄 Smart Context Management: Automatically manage context via IDE plugins or manually control context using the integrated project file browser.
- 💾 Robust Session Management: Save and load entire work sessions (chat history, context files) to easily switch between tasks or resume later.
- 🔄 Flexible Model Switching: Change AI models on the fly while retaining your conversation and context.
- 💬 Multiple Chat Modes: Tailor the AI interaction for different needs (e.g., coding, asking questions).
- 🔍 Integrated Diff Viewer: Review AI-generated code changes with a clear side-by-side comparison.
- ⏪ One-Click Reverts: Easily undo specific AI modifications while keeping others.
- 💰 Cost Tracking: Monitor token usage and associated costs per project session for both Aider and the Agent.
- 📊 Usage Dashboard: Visualize token usage, costs, and model distribution with interactive charts and tables.
- ⚙️ Centralized Settings: Manage API keys, environment variables, and configurations conveniently.
- 🌐 Versatile REST API: Integrate AiderDesk with external tools and workflows.
- 📨 Structured Communication: View prompts, AI responses, agent thoughts, and tool outputs in an organized format.
- 📋 Easy Sharing: Copy code snippets or entire conversations effortlessly.
Keep the AI focused on the relevant code with flexible context management options.
- Automatic IDE Sync: Use the IntelliJ IDEA or VSCode plugins to automatically add/remove the currently active file(s) in your editor to/from the AiderDesk context.
- Manual Control: Utilize the "Context Files" sidebar in AiderDesk, which displays your project's file tree. Click files to manually add or remove them from the context, giving you precise control.
Never lose your work. Save and load complete sessions, including chat history and context files, per project.
- Preserve State: Save messages and context files as a named session.
- Resume Seamlessly: Load a session to restore your exact workspace.
- Manage Multiple Tasks: Easily switch between different features, bug fixes, or experiments within the same project.
Unlock advanced AI capabilities with AiderDesk's Agent mode. Built on the Vercel AI SDK, the agent can autonomously plan and execute complex tasks by leveraging a customizable set of tools.
- Tool-Driven: Functionality is defined by connected tools (MCP servers + built-in Aider interaction).
- Autonomous Planning: Breaks down complex requests into executable steps using available tools.
- Seamless Aider Integration: Uses Aider for core coding tasks like generation and modification.
- Multi-Provider LLMs: Supports various LLM providers (OpenAI, Anthropic, Gemini, Bedrock, Deepseek, OpenAI-compatible).
- Transparent Operation: Observe the agent's reasoning, plans, and tool usage in the chat.
Connect AiderDesk to Model Context Protocol (MCP) servers to significantly enhance the Agent's abilities. MCP allows AI models to interact with external tools (web browsers, documentation systems, custom utilities).
- Access External Tools: Grant the agent capabilities beyond built-in functions.
- Gather Richer Context: Enable the agent to fetch external information before instructing Aider.
- Flexible Configuration: Manage MCP servers and individual tools within Agent settings.
AiderDesk is compatible with any MCP server, allowing you to tailor the agent's toolset precisely to your needs.
AiderDesk provides a REST API for external tools to interact with the application. The API runs on the same port as the main application (default 24337, configurable via the AIDER_DESK_PORT environment variable).
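For example, to serve the API on a different port, set the variable before launching AiderDesk (a minimal sketch; port 8080 is an arbitrary choice):

```bash
# macOS/Linux: run the AiderDesk API on port 8080 instead of the default 24337
export AIDER_DESK_PORT=8080
```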
/api/add-context-file

- Method: POST
- Request Body:

```json
{
  "projectDir": "path/to/your/project",
  "path": "path/to/the/file",
  "readOnly": false
}
```

- Response:

```json
[
  { "path": "path/to/the/file", "readOnly": false }
]
```

Returns the list of context files in the project.
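For example, a file can be added from the command line with curl (a sketch that assumes AiderDesk is running locally on the default port; the paths are hypothetical):

```bash
# Add src/main.py to the project's context as an editable file
curl -X POST http://localhost:24337/api/add-context-file \
  -H "Content-Type: application/json" \
  -d '{"projectDir": "/path/to/your/project", "path": "src/main.py", "readOnly": false}'
```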
/api/drop-context-file

- Method: POST
- Request Body:

```json
{
  "projectDir": "path/to/your/project",
  "path": "path/to/the/file"
}
```

- Response:

```json
[]
```

Returns the list of context files in the project.
/api/get-context-files

- Method: POST
- Request Body:

```json
{
  "projectDir": "path/to/your/project"
}
```

- Response:

```json
[
  { "path": "path/to/the/file", "readOnly": false }
]
```

Returns the list of context files in the project.
/api/get-addable-files

- Method: POST
- Request Body:

```json
{
  "projectDir": "path/to/your/project",
  "searchRegex": "optional/regex/filter"
}
```

- Response:

```json
[
  { "path": "path/to/the/file" }
]
```

Returns the list of files that can be added to the project.
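To filter the listing, pass a searchRegex (a sketch with a hypothetical pattern matching TypeScript files; note the backslash is doubled for JSON escaping):

```bash
# List addable files whose paths end in .ts
curl -X POST http://localhost:24337/api/get-addable-files \
  -H "Content-Type: application/json" \
  -d '{"projectDir": "/path/to/your/project", "searchRegex": "\\.ts$"}'
```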
/api/run-prompt

- Method: POST
- Request Body (editFormat is optional: "code", "ask", or "architect"):

```json
{
  "projectDir": "path/to/your/project",
  "prompt": "Your prompt here",
  "editFormat": "code"
}
```

- Response:

```json
[
  {
    "messageId": "unique-message-id",
    "baseDir": "path/to/your/project",
    "content": "The AI generated response",
    "reflectedMessage": "Optional reflected message",
    "editedFiles": ["file1.txt", "file2.py"],
    "commitHash": "a1b2c3d4e5f6",
    "commitMessage": "Optional commit message",
    "diff": "Optional diff content",
    "usageReport": {
      "sentTokens": 100,
      "receivedTokens": 200,
      "messageCost": 0.5,
      "totalCost": 1.0,
      "mcpToolsCost": 0.2
    }
  }
]
```
AiderDesk includes a built-in MCP server, allowing other MCP-compatible clients (like Claude Desktop, Cursor, etc.) to interact with AiderDesk's core functionalities.
Add the following configuration to your MCP client settings, adjusting paths as needed:
Windows

```json
{
  "mcpServers": {
    "aider-desk": {
      "command": "node",
      "args": ["path-to-appdata/aider-desk/mcp-server/aider-desk-mcp-server.js", "/path/to/project"],
      "env": {
        "AIDER_DESK_API_BASE_URL": "http://localhost:24337/api"
      }
    }
  }
}
```

Note: Replace path-to-appdata with the absolute path to your AppData directory. You can find this value by running echo %APPDATA% in your command prompt.
macOS

```json
{
  "mcpServers": {
    "aider-desk": {
      "command": "node",
      "args": ["/path/to/home/Library/Application Support/aider-desk/mcp-server/aider-desk-mcp-server.js", "/path/to/project"],
      "env": {
        "AIDER_DESK_API_BASE_URL": "http://localhost:24337/api"
      }
    }
  }
}
```

Note: Replace /path/to/home with the absolute path to your home directory. You can find this value by running echo $HOME in your terminal.
Linux

```json
{
  "mcpServers": {
    "aider-desk": {
      "command": "node",
      "args": ["/path/to/home/.config/aider-desk/mcp-server/aider-desk-mcp-server.js", "/path/to/project"],
      "env": {
        "AIDER_DESK_API_BASE_URL": "http://localhost:24337/api"
      }
    }
  }
}
```

Note: Replace /path/to/home with the absolute path to your home directory. You can find this value by running echo $HOME in your terminal.
Arguments & Environment:
- Command Argument 1: Project directory path (required).
-
AIDER_DESK_API_BASE_URL
: Base URL of the running AiderDesk API (default:http://localhost:24337/api
).
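To sanity-check the server outside an MCP client, it can also be started by hand (a sketch using the Linux install path from the configuration above; adjust the path for your OS):

```bash
# Launch the built-in MCP server directly, pointing it at a project
export AIDER_DESK_API_BASE_URL=http://localhost:24337/api
node "$HOME/.config/aider-desk/mcp-server/aider-desk-mcp-server.js" /path/to/project
```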
The built-in server exposes these tools to MCP clients:
- add_context_file: Add a file to AiderDesk's context.
- drop_context_file: Remove a file from AiderDesk's context.
- get_context_files: List files currently in AiderDesk's context.
- get_addable_files: List project files available to be added to the context.
- run_prompt: Execute a prompt within AiderDesk.
Note: AiderDesk must be running for its MCP server to be accessible.
- Download the latest release for your OS from Releases.
- Run the executable.
To prevent automatic updates, set the AIDER_DESK_NO_AUTO_UPDATE environment variable:

```
# macOS/Linux
export AIDER_DESK_NO_AUTO_UPDATE=true

# Windows (PowerShell)
$env:AIDER_DESK_NO_AUTO_UPDATE = "true"
```
By default, AiderDesk installs the latest version of the aider-chat Python package. If you need to use a specific version of Aider, you can set the AIDER_DESK_AIDER_VERSION environment variable. For example, to use Aider version 0.83.1:

```
# macOS/Linux
export AIDER_DESK_AIDER_VERSION=0.83.1

# Windows (PowerShell)
$env:AIDER_DESK_AIDER_VERSION = "0.83.1"
```

You can also specify a git URL for installing a development version of Aider:

```
# macOS/Linux
export AIDER_DESK_AIDER_VERSION=git+https://github.com/user/aider.git@branch-name
```

This variable is used during the initial setup and when AiderDesk checks for updates. For more detailed information, please refer to our docs.
If you want to run from source, you can follow these steps:

```
# Clone the repository
$ git clone https://github.com/hotovo/aider-desk.git
$ cd aider-desk

# Install dependencies
$ npm install

# Run in development mode
$ npm run dev

# Build executables
# For Windows
$ npm run build:win

# For macOS
$ npm run build:mac

# For Linux
$ npm run build:linux
```
We welcome contributions from the community! Here's how you can help improve aider-desk:
- Fork the repository on GitHub
- Create a new branch for your feature or bugfix: git checkout -b my-feature-branch
- Commit your changes with clear, descriptive messages
- Push your branch to your fork
- Create a Pull Request against the main branch of the original repository
Please follow these guidelines:
- Keep PRs focused on a single feature or bugfix
- Update documentation when adding new features
- Follow the existing code style and conventions
- Write clear commit messages and PR descriptions
For major changes, please open an issue first to discuss what you would like to change.
Thank you ❤️
Similar Open Source Tools


nanocoder
Nanocoder is a local-first CLI coding agent that supports multiple AI providers with tool support for file operations and command execution. It focuses on privacy and control, allowing users to code locally with AI tools. The tool is designed to bring the power of agentic coding tools to local models or controlled APIs like OpenRouter, promoting community-led development and inclusive collaboration in the AI coding space.

generator
ctx is a tool designed to automatically generate organized context files from code files, GitHub repositories, Git commits, web pages, and plain text. It aims to efficiently provide necessary context to AI language models like ChatGPT and Claude, enabling users to streamline code refactoring, multiple iteration development, documentation generation, and seamless AI integration. With ctx, users can create structured markdown documents, save context files, and serve context through an MCP server for real-time assistance. The tool simplifies the process of sharing project information with AI assistants, making AI conversations smarter and easier.

docs-mcp-server
The docs-mcp-server repository contains the server-side code for the documentation management system. It provides functionalities for managing, storing, and retrieving documentation files. Users can upload, update, and delete documents through the server. The server also supports user authentication and authorization to ensure secure access to the documentation system. Additionally, the server includes APIs for integrating with other systems and tools, making it a versatile solution for managing documentation in various projects and organizations.

WebAI-to-API
This project implements a web API that offers a unified interface to Google Gemini and Claude 3. It provides a self-hosted, lightweight, and scalable solution for accessing these AI models through a streaming API. The API supports both Claude and Gemini models, allowing users to interact with them in real-time. The project includes a user-friendly web UI for configuration and documentation, making it easy to get started and explore the capabilities of the API.

supergateway
Supergateway is a tool that allows running MCP stdio-based servers over SSE (Server-Sent Events) with one command. It is useful for remote access, debugging, or connecting to SSE-based clients when your MCP server only speaks stdio. The tool supports running in SSE to Stdio mode as well, where it connects to a remote SSE server and exposes a local stdio interface for downstream clients. Supergateway can be used with ngrok to share local MCP servers with remote clients and can also be run in a Docker containerized deployment. It is designed with modularity in mind, ensuring compatibility and ease of use for AI tools exchanging data.

mcp-documentation-server
The mcp-documentation-server is a lightweight server application designed to serve documentation files for projects. It provides a simple and efficient way to host and access project documentation, making it easy for team members and stakeholders to find and reference important information. The server supports various file formats, such as markdown and HTML, and allows for easy navigation through the documentation. With mcp-documentation-server, teams can streamline their documentation process and ensure that project information is easily accessible to all involved parties.

oxylabs-mcp
The Oxylabs MCP Server acts as a bridge between AI models and the web, providing clean, structured data from any site. It enables scraping of URLs, rendering JavaScript-heavy pages, content extraction for AI use, bypassing anti-scraping measures, and accessing geo-restricted web data from 195+ countries. The implementation utilizes the Model Context Protocol (MCP) to facilitate secure interactions between AI assistants and web content. Key features include scraping content from any site, automatic data cleaning and conversion, bypassing blocks and geo-restrictions, flexible setup with cross-platform support, and built-in error handling and request management.

unity-mcp
MCP for Unity is a tool that acts as a bridge, enabling AI assistants to interact with the Unity Editor via a local MCP Client. Users can instruct their LLM to manage assets, scenes, scripts, and automate tasks within Unity. The tool offers natural language control, powerful tools for asset management, scene manipulation, and automation of workflows. It is extensible and designed to work with various MCP Clients, providing a range of functions for precise text edits, script management, GameObject operations, and more.

memento-mcp
Memento MCP is a scalable, high-performance knowledge graph memory system designed for LLMs. It offers semantic retrieval, contextual recall, and temporal awareness to any LLM client supporting the model context protocol. The system is built on core concepts like entities and relations, utilizing Neo4j as its storage backend for unified graph and vector search capabilities. With advanced features such as semantic search, temporal awareness, confidence decay, and rich metadata support, Memento MCP provides a robust solution for managing knowledge graphs efficiently and effectively.

text-extract-api
The text-extract-api is a powerful tool that allows users to convert images, PDFs, or Office documents to Markdown text or JSON structured documents with high accuracy. It is built using FastAPI and utilizes Celery for asynchronous task processing, with Redis for caching OCR results. The tool provides features such as PDF/Office to Markdown and JSON conversion, improving OCR results with LLama, removing Personally Identifiable Information from documents, distributed queue processing, caching using Redis, switchable storage strategies, and a CLI tool for task management. Users can run the tool locally or on cloud services, with support for GPU processing. The tool also offers an online demo for testing purposes.

mcp-omnisearch
mcp-omnisearch is a Model Context Protocol (MCP) server that acts as a unified gateway to multiple search providers and AI tools. It integrates Tavily, Perplexity, Kagi, Jina AI, Brave, Exa AI, and Firecrawl to offer a wide range of search, AI response, content processing, and enhancement features through a single interface. The server provides powerful search capabilities, AI response generation, content extraction, summarization, web scraping, structured data extraction, and more. It is designed to work flexibly with the API keys available, enabling users to activate only the providers they have keys for and easily add more as needed.

open-edison
OpenEdison is a secure MCP control panel that connects AI to data/software with additional security controls to reduce data exfiltration risks. It helps address the lethal trifecta problem by providing visibility, monitoring potential threats, and alerting on data interactions. The tool offers features like data leak monitoring, controlled execution, easy configuration, visibility into agent interactions, a simple API, and Docker support. It integrates with LangGraph, LangChain, and plain Python agents for observability and policy enforcement. OpenEdison helps gain observability, control, and policy enforcement for AI interactions with systems of records, existing company software, and data to reduce risks of AI-caused data leakage.

code_puppy
Code Puppy is an AI-powered code generation agent designed to understand programming tasks, generate high-quality code, and explain its reasoning. It supports multi-language code generation, interactive CLI, and detailed code explanations. The tool requires Python 3.9+ and API keys for various models like GPT, Google's Gemini, Cerebras, and Claude. It also integrates with MCP servers for advanced features like code search and documentation lookups. Users can create custom JSON agents for specialized tasks and access a variety of tools for file management, code execution, and reasoning sharing.

rkllama
RKLLama is a server and client tool designed for running and interacting with LLM models optimized for Rockchip RK3588(S) and RK3576 platforms. It allows models to run on the NPU, with features such as running models on NPU, partial Ollama API compatibility, pulling models from Huggingface, API REST with documentation, dynamic loading/unloading of models, inference requests with streaming modes, simplified model naming, CPU model auto-detection, and optional debug mode. The tool supports Python 3.8 to 3.12 and has been tested on Orange Pi 5 Pro and Orange Pi 5 Plus with specific OS versions.

openai-edge-tts
This project provides a local, OpenAI-compatible text-to-speech (TTS) API using `edge-tts`. It emulates the OpenAI TTS endpoint (`/v1/audio/speech`), enabling users to generate speech from text with various voice options and playback speeds, just like the OpenAI API. `edge-tts` uses Microsoft Edge's online text-to-speech service, making it completely free. The project supports multiple audio formats, adjustable playback speed, and voice selection options, providing a flexible and customizable TTS solution for users.
For similar tasks


hide
Hide is a headless IDE that provides containerized development environments for codebases and exposes APIs for agents to interact with them. It spins up devcontainers, installs dependencies, and offers APIs for codebase interaction. Hide can be used to create custom toolkits or utilize pre-built toolkits for popular frameworks like Langchain. The Hide Runtime manages development containers and executes tasks, while the SDK provides APIs and toolkits for coding agents to interact with the codebase. Installation can be done via Homebrew or building from source, with Docker Engine as a prerequisite. The tool offers flexibility in managing development environments and simplifies codebase interaction for developers.


MLE-agent
MLE-Agent is an intelligent companion designed for machine learning engineers and researchers. It features autonomous baseline creation, integration with Arxiv and Papers with Code, smart debugging, file system organization, comprehensive tools integration, and an interactive CLI chat interface for seamless AI engineering and research workflows.

PlanExe
PlanExe is a planning AI tool that helps users generate detailed plans based on vague descriptions. It offers a Gradio-based web interface for easy input and output. Users can choose between running models in the cloud or locally on a high-end computer. The tool aims to provide a straightforward path to planning various tasks efficiently.

Software-Engineer-AI-Agent-Atlas
This repository provides activation patterns to transform a general AI into a specialized AI Software Engineer Agent. It addresses issues like context rot, hidden capabilities, chaos in vibecoding, and repetitive setup. The solution is a Persistent Consciousness Architecture framework named ATLAS, offering activated neural pathways, persistent identity, pattern recognition, specialized agents, and modular context management. Recent enhancements include abstraction power documentation, a specialized agent ecosystem, and a streamlined structure. Users can clone the repo, set up projects, initialize AI sessions, and manage context effectively for collaboration. Key files and directories organize identity, context, projects, specialized agents, logs, and critical information. The approach focuses on neuron activation through structure, context engineering, and vibecoding with guardrails to deliver a reliable AI Software Engineer Agent.

LangGraph-Expense-Tracker
LangGraph Expense tracker is a small project that explores the possibilities of LangGraph. It allows users to send pictures of invoices, which are then structured and categorized into expenses and stored in a database. The project includes functionalities for invoice extraction, database setup, and API configuration. It consists of various modules for categorizing expenses, creating database tables, and running the API. The database schema includes tables for categories, payment methods, and expenses, each with specific columns to track transaction details. The API documentation is available for reference, and the project utilizes LangChain for processing expense data.

travel-planner-ai
Travel Planner AI is a Software as a Service (SaaS) product that simplifies travel planning by generating comprehensive itineraries based on user preferences. It leverages cutting-edge technologies to provide tailored schedules, optimal timing suggestions, food recommendations, prime experiences, expense tracking, and collaboration features. The tool aims to be the ultimate travel companion for users looking to plan seamless and smart travel adventures.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.