specs.md
specs.md - AI-DLC spec-driven development framework
Stars: 61
AI-native development framework with pluggable flows for every use case. Three flows are optimized for different scenarios: Simple for quick spec generation and prototypes; FIRE for adaptive execution, brownfield projects, and monorepos; AI-DLC for full methodology with Domain-Driven Design (DDD), traceability, and regulated environments. Features include a flow switcher, active run tracking, intent visualization, and click-to-open spec files, with a VS Code extension for progress tracking. Works with AI coding tools such as Claude Code, Cursor, GitHub Copilot, and Google Antigravity, and stays tool-agnostic with portable markdown files for agents and specs.
README:
AI-native development framework with pluggable flows for every use case.
Choose the flow that matches your project needs: Simple for quick specs, FIRE for adaptive execution, or AI-DLC for full methodology with DDD.
Track your progress with our sidebar extension for VS Code and compatible IDEs.
AI-DLC Flow:
FIRE Flow:
Note: Works with any VS Code-based IDE including Cursor, Google Antigravity, Windsurf, and others.
Features:
- Flow switcher for AI-DLC and FIRE views
- Active run/bolt tracking with progress indicators
- Intent and work item visualization
- Click-to-open spec files
Install from:
- VS Code Marketplace — VS Code, GitHub Codespaces
- Open VSX Registry — Cursor, Windsurf, Amazon Kiro, Google Antigravity, VSCodium, Gitpod, Google IDX
- GitHub Releases (VSIX) — Manual installation
| Flow | Optimized For | Agents | Checkpoints |
|---|---|---|---|
| Simple | Spec generation, prototypes | 1 | 3 (phase gates) |
| FIRE | Adaptive execution, brownfield, monorepos | 3 | Adaptive (0-2) |
| AI-DLC | Full traceability, DDD, regulated environments | 4 | Comprehensive |
Not sure which flow? If you want quick specs without execution tracking, use Simple. If you want adaptive execution that right-sizes rigor, use FIRE. If you need comprehensive documentation and DDD, use AI-DLC.
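The decision guide above can be sketched as a tiny lookup. This is purely illustrative (the function and its parameters are not part of the specsmd CLI); it just encodes the criteria from the table:

```python
def choose_flow(needs_execution_tracking: bool, needs_full_traceability: bool) -> str:
    """Suggest a flow using the decision guide above (illustrative only)."""
    if not needs_execution_tracking:
        return "Simple"   # quick specs, prototypes, spec handoff
    if needs_full_traceability:
        return "AI-DLC"   # DDD, comprehensive docs, regulated environments
    return "FIRE"         # adaptive execution, brownfield, monorepos

print(choose_flow(False, False))  # Simple
print(choose_flow(True, False))   # FIRE
print(choose_flow(True, True))    # AI-DLC
```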
Prerequisites:
- Node.js 18 or higher
- An AI coding tool (Claude Code, Cursor, GitHub Copilot, or Google Antigravity)
> [!NOTE]
> Always use npx to get the latest version. Do not install globally with npm.
```shell
npx specsmd@latest install
```
During installation, select your flow:
```
? Select a development flow:
  Simple - Spec generation only (requirements, design, tasks)
❯ FIRE - Adaptive execution, brownfield & monorepo ready
  AI-DLC - Full methodology with DDD (comprehensive checkpoints)
```
The installer detects your AI coding tools and sets up agent definitions, slash commands, and project structure for your selected flow.
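The tool-detection step can be pictured as a directory probe. The sketch below is an assumption about how such a heuristic might work, not the installer's actual logic; the marker paths are taken from the integration table in this README:

```python
from pathlib import Path

# Marker directories for a few supported tools (subset, for illustration).
TOOL_MARKERS = {
    "Claude Code": ".claude/commands",
    "Cursor": ".cursor/rules",
    "GitHub Copilot": ".github/agents",
    "Windsurf": ".windsurf/rules",
}

def detect_tools(project_root: str = ".") -> list[str]:
    """Return tools whose marker directory exists under project_root."""
    root = Path(project_root)
    return [tool for tool, marker in TOOL_MARKERS.items() if (root / marker).is_dir()]

# Demo with a hypothetical project that only has Cursor configured:
(Path("demo-project") / ".cursor" / "rules").mkdir(parents=True, exist_ok=True)
print(detect_tools("demo-project"))  # ['Cursor']
```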
Track FIRE state continuously from the terminal:
```shell
npx specsmd@latest dashboard
```
Useful options:
```shell
npx specsmd@latest dashboard --flow fire --path . --refresh-ms 1000
npx specsmd@latest dashboard --no-watch
```
Track your progress visually with our sidebar extension:
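For a quick status check without the dashboard, the FIRE state file can be read directly. The sketch below parses only flat top-level `key: value` pairs with the standard library; the field names in the demo file are assumptions, and a real YAML parser (e.g. PyYAML) is the right tool for nested structures:

```python
from pathlib import Path

def read_top_level(path: str) -> dict:
    """Naive parse of top-level 'key: value' pairs from a YAML file.

    Indented lines and comments are skipped; keys with no inline
    value (e.g. nested mappings) map to None.
    """
    state = {}
    for line in Path(path).read_text().splitlines():
        if line and not line.startswith((" ", "#")) and ":" in line:
            key, _, value = line.partition(":")
            state[key.strip()] = value.strip() or None
    return state

# Demo with a hypothetical state file:
Path("state.yaml").write_text("active_run: run-003\nphase: build\nintents:\n  - checkout\n")
print(read_top_level("state.yaml"))
```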
- VS Code Marketplace — VS Code, GitHub Codespaces
- Open VSX Registry — Cursor, Windsurf, Amazon Kiro, Google Antigravity
```shell
# Check the manifest
cat .specsmd/manifest.yaml

# List installed agents (adjust path for your flow)
ls .specsmd/fire/agents/    # FIRE flow
ls .specsmd/simple/agents/  # Simple flow
ls .specsmd/aidlc/agents/   # AI-DLC flow
```
Spec generation only. Generate requirements, design, and task documents without execution tracking.
```
/specsmd-agent
```
Three Phases:
- Requirements → `requirements.md` - User stories, EARS criteria
- Design → `design.md` - Technical design, architecture diagrams
- Tasks → `tasks.md` - Implementation checklist
Best for: Prototypes, MVPs, spec handoff, projects that don't need AI-assisted execution.
Output structure:
```
specs/
└── {feature-name}/
    ├── requirements.md
    ├── design.md
    └── tasks.md
```
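The agent normally creates these files, but the layout is simple enough to scaffold by hand. A minimal sketch (the feature name is hypothetical, and this helper is not part of specsmd):

```python
from pathlib import Path

def scaffold_simple_spec(root: str, feature: str) -> list[str]:
    """Create the Simple-flow skeleton: specs/{feature}/{requirements,design,tasks}.md."""
    base = Path(root) / "specs" / feature
    base.mkdir(parents=True, exist_ok=True)
    created = []
    for name in ("requirements.md", "design.md", "tasks.md"):
        path = base / name
        path.write_text(f"# {feature}: {name.removesuffix('.md')}\n")
        created.append(str(path))
    return created

print(scaffold_simple_spec(".", "user-login"))
```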
Fast Intent-Run Engineering. Adaptive execution with brownfield and monorepo support. Ships in hours with 0-2 checkpoints based on task complexity.
```
/fire-orchestrator   # Entry point, routing
/fire-planner        # Intent capture, work item decomposition
/fire-builder        # Run execution, walkthrough generation
```
Key Features:
- Adaptive checkpoints - Autopilot (0), Confirm (1), or Validate (2) based on complexity
- First-class brownfield - Auto-detects existing patterns and conventions
- Monorepo support - Hierarchical standards with module-specific overrides
- Walkthrough generation - Documents every change automatically
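The adaptive-checkpoint idea can be illustrated with a small policy function. This is a sketch only: the README names the three checkpoint modes but does not specify how complexity is scored, so the tiers below are assumptions:

```python
def checkpoints_for(complexity: str) -> int:
    """Map a task-complexity tier to a FIRE checkpoint count (illustrative).

    Autopilot (0) for simple fixes, Confirm (1) for moderate changes,
    Validate (2) for critical work that needs design review.
    """
    policy = {"simple": 0, "moderate": 1, "critical": 2}
    if complexity not in policy:
        raise ValueError(f"unknown complexity tier: {complexity!r}")
    return policy[complexity]

print(checkpoints_for("simple"))    # 0 - autopilot
print(checkpoints_for("critical"))  # 2 - validate
```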
Best for: Teams who hate friction, brownfield projects, monorepos.
Output structure:
```
.specs-fire/
├── state.yaml        # Central state tracking
├── standards/        # Project standards
├── intents/          # Intent documentation
├── runs/             # Run logs
└── walkthroughs/     # Generated documentation
```
Full methodology. Implements the AI-Driven Development Lifecycle with Domain-Driven Design and comprehensive traceability.
```
/specsmd-master-agent         # Orchestrates & navigates
/specsmd-inception-agent      # Requirements, stories, bolt planning
/specsmd-construction-agent   # Execute bolts through DDD stages
/specsmd-operations-agent     # Deploy, verify, monitor
```
Three Sequential Phases:
- Inception - Capture intents, elaborate requirements, decompose into units
- Construction - Execute bolts: Model → Design → ADR → Implement → Test
- Operations - Deploy, verify, and monitor
Best for: Complex domains, multi-team coordination, regulated environments.
Output structure:
```
memory-bank/
├── standards/        # Project standards
├── intents/          # Intent documentation
│   └── {intent-name}/
│       ├── requirements.md
│       ├── system-context.md
│       └── units/
├── bolts/            # Bolt execution records
└── operations/       # Deployment context
```
From quick specs (Simple) to adaptive execution (FIRE) to full methodology (AI-DLC). Choose the flow that matches your project needs.
Right-sizes the rigor. Simple bug fixes burn through fast. Critical changes get design review. You configure your autonomy preference.
Auto-detects existing patterns and respects your conventions. Hierarchical standards with module-specific overrides.
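Hierarchical standards with module-specific overrides behave like a layered merge: module entries win, everything else falls through to the project root. A sketch of the resolution order (the keys and values here are hypothetical, not a specsmd schema):

```python
def resolve_standards(root: dict, module: dict) -> dict:
    """Merge project-wide standards with module-specific overrides.

    Module entries take precedence; unspecified keys inherit from root.
    """
    return {**root, **module}

root = {"lint": "eslint", "test_cmd": "npm test", "style": "prettier"}
module = {"test_cmd": "vitest"}  # e.g. one package overrides only its test runner
print(resolve_standards(root, module))
```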
Domain-Driven Design is integral to Construction, not an optional add-on. Every decision documented with full traceability.
Works with Claude Code, Cursor, GitHub Copilot, and other AI coding assistants. Markdown-based agents work anywhere—no vendor lock-in.
Structured artifacts provide persistent context for AI agents. Agents reload context each session—no more lost knowledge.
specs.md is IDE and AI-agnostic—your specs and agents are portable markdown files that work anywhere.
| Tool | Integration |
|---|---|
| Claude Code | Slash commands in `.claude/commands/` |
| Cursor | Rules in `.cursor/rules/` (`.mdc` format) |
| GitHub Copilot | Agents in `.github/agents/` (`.agent.md` format) |
| Google Antigravity | Agents in `.agent/agents/` |
| Windsurf | Rules in `.windsurf/rules/` |
| Amazon Kiro | Steering in `.kiro/steering/` |
| Gemini CLI | Commands in `.gemini/commands/` (`.toml` format) |
| Cline | Rules in `.clinerules/` |
| Roo Code | Commands in `.roo/commands/` |
| OpenAI Codex | Config in `.codex/` |
| OpenCode | Agents in `.opencode/agent/` |
Agent commands not recognized
Ensure specs.md is installed correctly:
```shell
# Check for your flow
ls .specsmd/fire/agents/    # FIRE
ls .specsmd/simple/agents/  # Simple
ls .specsmd/aidlc/agents/   # AI-DLC
```
If the directory is empty or missing, reinstall:
```shell
npx specsmd@latest install
```
Project artifacts missing
Check if the artifacts directory exists for your flow:
```shell
ls .specs-fire/   # FIRE flow
ls specs/         # Simple flow
ls memory-bank/   # AI-DLC flow
```
If missing, initialize your project using the appropriate agent.
Standards not being followed in generated code
Ensure standards are defined in your flow's standards directory:
- FIRE: `.specs-fire/standards/`
- AI-DLC: `memory-bank/standards/`
Run project initialization if missing.
Q: Which flow should I choose?
- Simple: Spec generation only, no execution tracking
- FIRE: Adaptive execution, brownfield/monorepo support
- AI-DLC: Full methodology with DDD and comprehensive traceability
Q: Can I switch flows after installation? Flows are independent—they're not an upgrade path. Each is designed for different use cases. You can reinstall to change flows, but artifacts are structured differently.
Q: Why don't agents remember previous context? Each agent invocation starts fresh: agents read context from artifacts at startup. Ensure artifacts are saved after each step.
Q: How do I reset project state? Delete the artifacts directory for your flow:
- FIRE: `.specs-fire/`
- Simple: `specs/`
- AI-DLC: `memory-bank/`
To remove specsmd entirely, also delete .specsmd/ and tool-specific command files.
Q: What project types is this suited for? specs.md supports everything from quick prototypes (Simple) to complex enterprise systems (AI-DLC). Choose the flow that matches your project needs.
- Documentation
- Choose Your Flow Guide
- AI-DLC Specification (AWS)
- npm Package
- GitHub Issues
- Discord Community
MIT License - see LICENSE for details.
Built by the specs.md team.