
nanocoder
A beautiful local-first coding agent running in your terminal - built by the community for the community ⚒

Nanocoder is a local-first CLI coding agent that supports multiple AI providers with tool support for file operations and command execution. It focuses on privacy and control, allowing users to code locally with AI tools. The tool is designed to bring the power of agentic coding tools to local models or controlled APIs like OpenRouter, promoting community-led development and inclusive collaboration in the AI coding space.
README:
Nanocoder is a local-first CLI coding agent that brings the power of agentic coding tools like Claude Code and Gemini CLI to local models or controlled APIs like OpenRouter. Built with privacy and control in mind, it works with any AI provider that exposes an OpenAI-compatible endpoint, supports both tool-calling and non-tool-calling models, and ships tools for file operations and command execution.
This comes down to philosophy. OpenCode is a great tool, but it's owned and managed by a venture-backed company that keeps community and open-source involvement at the margins. With Nanocoder, the focus is on building a truly community-led project where anyone can contribute openly and directly. We believe AI is too powerful to be left in the hands of big corporations, and that everyone should have access to it.
We also strongly believe in the "local-first" approach, where your data, models, and processing stay on your machine whenever possible to ensure maximum privacy and user control. Beyond that, we're actively pushing to develop advancements and frameworks for small, local models to be effective at coding locally.
Not everyone will agree with this philosophy, and that's okay. We believe in fostering an inclusive community that's focused on open collaboration and privacy-first AI coding tools.
Firstly, we would love for you to be involved. You can get started contributing to Nanocoder in several ways; check out the Community section of this README.
Install globally and use anywhere:
npm install -g @motesoftware/nanocoder
Then run in any directory:
nanocoder
If you want to contribute or modify Nanocoder:
Prerequisites:
- Node.js 18+
- npm
Setup:
- Clone and install dependencies:
git clone [repo-url]
cd nanocoder
npm install
- Build the project:
npm run build
- Run locally:
npm run start
Or build and run in one command:
npm run dev
Nanocoder supports any OpenAI-compatible API through a unified provider configuration. Create agents.config.json in your working directory (where you run nanocoder):
{
  "nanocoder": {
    "providers": [
      {
        "name": "llama-cpp",
        "baseUrl": "http://localhost:8080/v1",
        "models": ["qwen3-coder:a3b", "deepseek-v3.1"]
      },
      {
        "name": "Ollama",
        "baseUrl": "http://localhost:11434/v1",
        "models": ["qwen2.5-coder:14b", "llama3.2"]
      },
      {
        "name": "OpenRouter",
        "baseUrl": "https://openrouter.ai/api/v1",
        "apiKey": "your-openrouter-api-key",
        "models": ["openai/gpt-4o-mini", "anthropic/claude-3-haiku"]
      },
      {
        "name": "LM Studio",
        "baseUrl": "http://localhost:1234/v1",
        "models": ["local-model"]
      }
    ]
  }
}
Common Provider Examples:
- llama.cpp server: "baseUrl": "http://localhost:8080/v1"
- llama-swap: "baseUrl": "http://localhost:9292/v1"
- Ollama (Local): first run ollama pull qwen2.5-coder:14b, then use "baseUrl": "http://localhost:11434/v1"
- OpenRouter (Cloud): use "baseUrl": "https://openrouter.ai/api/v1" (requires "apiKey": "your-api-key")
- LM Studio: "baseUrl": "http://localhost:1234/v1"
- vLLM: "baseUrl": "http://localhost:8000/v1"
- LocalAI: "baseUrl": "http://localhost:8080/v1"
- OpenAI: "baseUrl": "https://api.openai.com/v1"
Provider Configuration:
- name: display name used in the /provider command
- baseUrl: OpenAI-compatible API endpoint
- apiKey: API key (optional for local servers)
- models: available model list for the /model command
Nanocoder supports connecting to MCP servers to extend its capabilities with additional tools. Configure MCP servers in your agents.config.json:
{
  "nanocoder": {
    "mcpServers": [
      {
        "name": "filesystem",
        "command": "npx",
        "args": [
          "@modelcontextprotocol/server-filesystem",
          "/path/to/allowed/directory"
        ]
      },
      {
        "name": "github",
        "command": "npx",
        "args": ["@modelcontextprotocol/server-github"],
        "env": {
          "GITHUB_TOKEN": "your-github-token"
        }
      },
      {
        "name": "custom-server",
        "command": "python",
        "args": ["path/to/server.py"],
        "env": {
          "API_KEY": "your-api-key"
        }
      }
    ]
  }
}
When MCP servers are configured, Nanocoder will:
- Automatically connect to all configured servers on startup
- Make all server tools available to the AI model
- Show connected servers and their tools with the /mcp command
Popular MCP servers:
- Filesystem: Enhanced file operations
- GitHub: Repository management
- Brave Search: Web search capabilities
- Memory: Persistent context storage
- View more MCP servers
Note: The agents.config.json file should be placed in the directory where you run Nanocoder, allowing for project-by-project configuration with different models or API keys per repository.
Nanocoder automatically saves your preferences to remember your choices across sessions. Preferences are stored in ~/.nanocoder-preferences.json in your home directory.
What gets saved automatically:
- Last provider used: The AI provider you last selected (by name from your configuration)
- Last model per provider: Your preferred model for each provider
- Session continuity: Automatically switches back to your preferred provider/model when restarting
How it works:
- When you switch providers with /provider, your choice is saved
- When you switch models with /model, the selection is saved for that specific provider
- Next time you start Nanocoder, it will use your last provider and model
- Each provider remembers its own preferred model independently
Manual management:
- View current preferences: The file is human-readable JSON (see the sketch below)
- Reset preferences: Delete ~/.nanocoder-preferences.json to start fresh
- No manual editing needed: Use the /provider and /model commands instead
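As an illustration only (the exact schema isn't documented here, so these field names are assumptions based on the behavior described above), the preferences file might contain something like:
{
  "lastProvider": "Ollama",
  "lastModels": {
    "Ollama": "qwen2.5-coder:14b",
    "OpenRouter": "openai/gpt-4o-mini"
  }
}
Since it's plain JSON you can inspect it directly, but the /provider and /model commands remain the supported way to change it.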
- /help - Show available commands
- /init - Initialize project with intelligent analysis, create AGENTS.md and configuration files
- /clear - Clear chat history
- /model - Switch between available models
- /provider - Switch between configured AI providers
- /mcp - Show connected MCP servers and their tools
- /debug - Toggle logging levels (silent/normal/verbose)
- /custom-commands - List all custom commands
- /exit - Exit the application
- /export - Export the current session to a markdown file
- /theme - Select a theme for the Nanocoder CLI
- /update - Update Nanocoder to the latest version
- !command - Execute bash commands directly without leaving Nanocoder; output becomes context for the LLM (see the example below)
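For example, prefixing any shell command with ! runs it in place and feeds its output to the model (assuming git is installed):
!git status
The model can then reason about that output in its next response.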
Nanocoder supports custom commands defined as markdown files in the .nanocoder/commands directory. Like agents.config.json, this directory is created per codebase, allowing you to create reusable prompts with parameters and organize them by category specific to each project.
Example custom command (.nanocoder/commands/test.md):
---
description: 'Generate comprehensive unit tests for the specified component'
aliases: ['testing', 'spec']
parameters:
  - name: 'component'
    description: 'The component or function to test'
    required: true
---
Generate comprehensive unit tests for {{component}}. Include:
- Happy path scenarios
- Edge cases and error handling
- Mock dependencies where appropriate
- Clear test descriptions
Usage: /test component="UserService"
Features:
- YAML frontmatter for metadata (description, aliases, parameters)
- Template variable substitution with {{parameter}} syntax
- Namespace support through directories (e.g., /refactor:dry); see the layout sketch below
- Autocomplete integration for command discovery
- Parameter validation and prompting
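As a sketch of the directory-to-command mapping (file names inferred from the pre-installed commands listed below):
.nanocoder/commands/
  test.md            -> /test
  review.md          -> /review
  refactor/
    dry.md           -> /refactor:dry
    solid.md         -> /refactor:solid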
Pre-installed Commands:
- /test - Generate comprehensive unit tests for components
- /review - Perform thorough code reviews with suggestions
- /refactor:dry - Apply the DRY (Don't Repeat Yourself) principle
- /refactor:solid - Apply SOLID design principles
- Universal OpenAI compatibility: Works with any OpenAI-compatible API
- Local providers: Ollama, LM Studio, vLLM, LocalAI, llama.cpp
- Cloud providers: OpenRouter, OpenAI, and other hosted services
- Smart fallback: Automatically switches to available providers if one fails
- Per-provider preferences: Remembers your preferred model for each provider
- Dynamic configuration: Add any provider with just a name and endpoint
- Built-in tools: File operations, bash command execution
- MCP (Model Context Protocol) servers: Extend capabilities with any MCP-compatible tool
- Dynamic tool loading: Tools are loaded on-demand from configured MCP servers
- Tool approval: Optional confirmation before executing potentially destructive operations
- Markdown-based commands: Define reusable prompts in .nanocoder/commands/
- Template variables: Use {{parameter}} syntax for dynamic content
- Namespace organization: Organize commands in folders (e.g., refactor/dry.md)
- Autocomplete support: Tab completion for command discovery
- Rich metadata: YAML frontmatter for descriptions, aliases, and parameters
- Smart autocomplete: Tab completion for commands with real-time suggestions
- Prompt history: Access and reuse previous prompts with /history
- Configurable logging: Silent, normal, or verbose output levels
- Colorized output: Syntax highlighting and structured display
- Session persistence: Maintains context and preferences across sessions
- Real-time indicators: Shows token usage, timing, and processing status
- First-time directory security disclaimer: Prompts on first run and stores a per-project trust decision to prevent accidental exposure of local code or secrets.
- TypeScript-first: Full type safety and IntelliSense support
- Extensible architecture: Plugin-style system for adding new capabilities
- Project-specific config: Different settings per project via agents.config.json
- Debug tools: Built-in debugging commands and verbose logging
- Error resilience: Graceful handling of provider failures and network issues
We're a small community-led team building Nanocoder and would love your help! Whether you're interested in contributing code, documentation, or just being part of our community, there are several ways to get involved.
If you want to contribute to the code:
- Read our detailed CONTRIBUTING.md guide for information on development setup, coding standards, and how to submit your changes.
If you want to be part of our community or help with other aspects like design or marketing:
- Join our Discord server to connect with other users, ask questions, share ideas, and get help.
- Head to our GitHub issues or discussions to open and join current conversations with others in the community.
What does Nanocoder need help with?
Nanocoder could benefit from help across the board, such as:
- Adding support for new AI providers
- Improving tool functionality
- Enhancing the user experience
- Writing documentation
- Reporting bugs or suggesting features
- Marketing and getting the word out
- Designing and building more great software
All contributions and community participation are welcome!