smart-ralph
Spec-driven development with smart compaction. A Claude Code plugin combining the Ralph Wiggum loop with a structured specification workflow.
Stars: 173
Smart Ralph is a Claude Code plugin for spec-driven development. It turns vague feature ideas into structured specs, then executes them task-by-task inside a self-contained execution loop with no external dependencies. It is named after the Ralph agentic loop pattern and after Springfield's famously uncomplicated student: like Ralph, it doesn't overthink, it just does the next task.
README:
Spec-driven development for Claude Code. Task-by-task execution with fresh context per task.
Self-contained execution loop. No external dependencies.
Smart Ralph is a Claude Code plugin that turns your vague feature ideas into structured specs, then executes them task-by-task. Like having a tiny product team in your terminal.
You: "Add user authentication"
Ralph: *creates research.md, requirements.md, design.md, tasks.md*
Ralph: *executes each task with fresh context*
Ralph: "I'm helping!"
Named after the Ralph agentic loop pattern and everyone's favorite Springfield student. Ralph doesn't overthink. Ralph just does the next task. Be like Ralph.
# Install Smart Ralph
/plugin marketplace add tzachbon/smart-ralph
/plugin install ralph-specum@smart-ralph
# Restart Claude Code

Troubleshooting & alternative methods
Install from GitHub directly:
/plugin install https://github.com/tzachbon/smart-ralph

Local development:
git clone https://github.com/tzachbon/smart-ralph.git
claude --plugin-dir ./smart-ralph/plugins/ralph-specum

# The smart way (auto-detects resume or new)
/ralph-specum:start user-auth Add JWT authentication
# Quick mode (skip spec phases, auto-generate everything)
/ralph-specum:start "Add user auth" --quick
# The step-by-step way
/ralph-specum:new user-auth Add JWT authentication
/ralph-specum:requirements
/ralph-specum:design
/ralph-specum:tasks
/ralph-specum:implement

| Command | What it does |
|---|---|
| `/ralph-specum:start [name] [goal]` | Smart entry: resume existing or create new |
| `/ralph-specum:start [goal] --quick` | Quick mode: auto-generate all specs and execute |
| `/ralph-specum:new <name> [goal]` | Create new spec, start research |
| `/ralph-specum:research` | Run/re-run research phase |
| `/ralph-specum:requirements` | Generate requirements from research |
| `/ralph-specum:design` | Generate technical design |
| `/ralph-specum:tasks` | Break design into executable tasks |
| `/ralph-specum:implement` | Execute tasks one-by-one |
| `/ralph-specum:index` | Scan codebase and generate component specs |
| `/ralph-specum:status` | Show all specs and progress |
| `/ralph-specum:switch <name>` | Change active spec |
| `/ralph-specum:cancel` | Cancel loop, clean up state |
| `/ralph-specum:help` | Show help |
"I want a feature!"
|
v
+---------------------+
| Research | <- Analyzes codebase, searches web
+---------------------+
|
v
+---------------------+
| Requirements | <- User stories, acceptance criteria
+---------------------+
|
v
+---------------------+
| Design | <- Architecture, patterns, decisions
+---------------------+
|
v
+---------------------+
| Tasks | <- POC-first task breakdown
+---------------------+
|
v
+---------------------+
| Execution | <- Task-by-task with fresh context
+---------------------+
|
v
"I did it!"
Each phase uses a specialized sub-agent:
| Phase | Agent | Superpower |
|---|---|---|
| Research | research-analyst | Web search, codebase analysis, feasibility checks |
| Requirements | product-manager | User stories, acceptance criteria, business value |
| Design | architect-reviewer | Architecture patterns, technical trade-offs |
| Tasks | task-planner | POC-first breakdown, task sequencing |
| Execution | spec-executor | Autonomous implementation, quality gates |
Tasks follow a 4-phase structure:
- Make It Work - POC validation, skip tests initially
- Refactoring - Clean up the code
- Testing - Unit, integration, e2e tests
- Quality Gates - Lint, types, CI checks
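As an illustration, a generated tasks.md might group its checklist along these four phases. The exact layout is produced by the task-planner agent and is not documented in this README; the sketch below is hypothetical:

```markdown
## Phase 1: Make It Work
- [ ] 1.1 Spike the happy-path implementation (skip tests)
- [ ] 1.2 Validate the POC end-to-end

## Phase 2: Refactoring
- [ ] 2.1 Extract helpers, remove duplication

## Phase 3: Testing
- [ ] 3.1 Unit tests
- [ ] 3.2 Integration / e2e tests

## Phase 4: Quality Gates
- [ ] 4.1 Lint, type-check, CI green
```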
Starting with v2.12.0, Smart Ralph can scan existing codebases and auto-generate component specs, making legacy code discoverable during new feature research.
When starting a new feature on an existing codebase, the research phase benefits from knowing what's already built. Without indexing, the research agent has limited visibility into your codebase structure.
The /ralph-specum:index command:
- Scans your codebase for controllers, services, models, helpers, and migrations
- Generates searchable specs for each component
- Indexes external resources (URLs, MCP servers, installed skills)
- Makes existing code discoverable in /ralph-specum:start
# Full interactive indexing (recommended for first-time)
/ralph-specum:index
# Quick mode - skip interviews, batch scan only
/ralph-specum:index --quick
# Dry run - preview what would be indexed
/ralph-specum:index --dry-run
# Index specific directory
/ralph-specum:index --path=src/api/
# Force regenerate all specs
/ralph-specum:index --force

/ralph-specum:index
|
v
+---------------------+
| Pre-Scan Interview | <- External URLs? Focus areas? Sparse areas?
+---------------------+
|
v
+---------------------+
| Component Scanner | <- Detects controllers, services, models...
+---------------------+
|
v
+---------------------+
| External Resources | <- Fetches URLs, queries MCP, introspects skills
+---------------------+
|
v
+---------------------+
| Post-Scan Review | <- Validates findings with user
+---------------------+
|
v
specs/.index/
├── index.md # Summary dashboard
├── components/ # Code component specs
└── external/ # External resource specs
| Option | Description |
|---|---|
| `--path=<dir>` | Limit indexing to a specific directory |
| `--type=<types>` | Filter by type: controllers, services, models, helpers, migrations |
| `--exclude=<patterns>` | Patterns to exclude (e.g., test, mock) |
| `--dry-run` | Preview without writing files |
| `--force` | Regenerate all specs (overwrites existing) |
| `--changed` | Regenerate only git-changed files |
| `--quick` | Skip interviews, batch scan only |
For best results, run /ralph-specum:index before starting new features on an existing codebase.
The research phase searches indexed specs to discover relevant existing components. Without an index, you may miss important context about what's already built.
# First time on a codebase? Index it first
/ralph-specum:index
# Then start your feature
/ralph-specum:start my-feature Add user authentication

When you run /ralph-specum:start:
- If no index exists, you'll see a hint suggesting to run /ralph-specum:index
- The spec scanner searches both regular specs AND indexed specs
- Indexed components appear in "Related Specs" during research
Components (detected by path/name patterns):
- Controllers: `**/controllers/**/*.{ts,js,py,go}`
- Services: `**/services/**/*.{ts,js,py,go}`
- Models: `**/models/**/*.{ts,js,py,go}`
- Helpers: `**/helpers/**/*.{ts,js,py,go}`
- Migrations: `**/migrations/**/*.{ts,js,sql}`
External Resources (discovered via interview):
- URLs (fetched via WebFetch)
- MCP servers (queried for tools/resources)
- Installed skills (commands/agents documented)
Default Excludes:
node_modules, vendor, dist, build, .git, __pycache__, test files
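The detection rules above boil down to glob-style matching on file paths: a path counts as a component if it lives under one of the known directory names, has a recognized extension, and isn't excluded. The README doesn't show the scanner's implementation; the Python sketch below only illustrates the idea, and all names in it are ours, not Smart Ralph's:

```python
from pathlib import PurePosixPath
from typing import Optional

# Path segments that identify each component type, mirroring the
# glob patterns listed above. Extensions are pooled here for brevity;
# the actual scanner applies them per-pattern.
COMPONENT_TYPES = ("controllers", "services", "models", "helpers", "migrations")
EXTENSIONS = {".ts", ".js", ".py", ".go", ".sql"}
EXCLUDES = {"node_modules", "vendor", "dist", "build", ".git", "__pycache__"}

def classify(path: str) -> Optional[str]:
    """Return the component type for a source file path, or None."""
    p = PurePosixPath(path)
    parts = set(p.parts)
    if parts & EXCLUDES or p.suffix not in EXTENSIONS:
        return None
    for component in COMPONENT_TYPES:
        if component in parts:
            return component
    return None

print(classify("src/api/controllers/user.ts"))   # -> controllers
print(classify("node_modules/pkg/models/x.js"))  # -> None (excluded)
```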
smart-ralph/
├── .claude-plugin/
│ └── marketplace.json
├── plugins/
│ ├── ralph-specum/ # Spec workflow (self-contained)
│ │ ├── .claude-plugin/
│ │ │ └── plugin.json
│ │ ├── agents/ # Sub-agent definitions
│ │ ├── commands/ # Slash commands
│ │ ├── hooks/ # Stop watcher (controls execution loop)
│ │ ├── templates/ # Spec templates
│ │ └── schemas/ # Validation schemas
│ └── ralph-speckit/ # Spec-kit methodology
│ ├── .claude-plugin/
│ │ └── plugin.json
│ ├── agents/ # spec-executor, qa-engineer
│ ├── commands/ # /speckit:* commands
│ └── templates/ # Constitution, spec, plan templates
└── README.md
Specs live in ./specs/ in your project:
./specs/
├── .current-spec # Active spec name
└── my-feature/
├── .ralph-state.json # Loop state (deleted on completion)
├── .progress.md # Progress tracking
├── research.md
├── requirements.md
├── design.md
└── tasks.md
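The README doesn't document the .ralph-state.json schema. Purely as a hypothetical sketch (every field name below is an assumption, not the plugin's actual format), the loop state might track something like:

```json
{
  "spec": "my-feature",
  "phase": "execution",
  "current_task": "2.3",
  "iteration": 1
}
```

Since the file is deleted on completion, its presence alone signals an in-progress loop.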
ralph-speckit is an alternative plugin implementing GitHub's spec-kit methodology with constitution-first governance.
| Feature | ralph-specum | ralph-speckit |
|---|---|---|
| Directory | `./specs/` | `.specify/specs/` |
| Naming | `my-feature/` | `001-feature-name/` |
| Constitution | None | `.specify/memory/constitution.md` |
| Spec structure | research, requirements, design, tasks | spec (WHAT/WHY), plan (HOW), tasks |
| Traceability | Basic | Full FR/AC annotations |
/plugin install ralph-speckit@smart-ralph

# Initialize constitution (first time only)
/speckit:constitution
# Create and develop a feature
/speckit:start user-auth "Add JWT authentication"
/speckit:specify
/speckit:plan
/speckit:tasks
/speckit:implement

| Command | What it does |
|---|---|
| `/speckit:constitution` | Create/update project constitution |
| `/speckit:start <name> [goal]` | Create new feature with auto ID |
| `/speckit:specify` | Define feature spec (WHAT/WHY) |
| `/speckit:plan [tech]` | Create technical plan with research |
| `/speckit:tasks` | Generate task breakdown by user story |
| `/speckit:implement` | Execute tasks one-by-one |
| `/speckit:status` | Show current feature status |
| `/speckit:switch <name>` | Switch active feature |
| `/speckit:cancel` | Cancel execution loop |
| `/speckit:clarify` | Optional: clarify ambiguous requirements |
| `/speckit:analyze` | Optional: check spec consistency |
.specify/
├── memory/
│ └── constitution.md # Project-level principles
├── .current-feature # Active feature pointer
└── specs/
├── 001-user-auth/
│ ├── .speckit-state.json
│ ├── .progress.md
│ ├── spec.md # Requirements (WHAT/WHY)
│ ├── research.md
│ ├── plan.md # Technical design (HOW)
│ └── tasks.md
└── 002-payment-flow/
└── ...
- ralph-specum: Quick iterations, personal projects, simple features
- ralph-speckit: Enterprise projects, team collaboration, audit trails needed
Task keeps failing?
After max iterations, the loop stops. Check .progress.md for errors. Fix manually, then /ralph-specum:implement to resume.
Want to start over?
/ralph-specum:cancel cleans up state files. Then start fresh.
Resume existing spec?
Just /ralph-specum:start - it auto-detects and continues where you left off.
More issues? See the full Troubleshooting Guide.
Self-contained execution loop (no more ralph-loop dependency)
Starting with v3.0.0, Smart Ralph is fully self-contained. The execution loop is handled by the built-in stop-hook.
Migration from v2.x:
- Update Smart Ralph to v3.0.0+
- Restart Claude Code
- Existing specs continue working. No spec file changes needed.
- You can optionally uninstall ralph-loop if you don't use it elsewhere
What changed:
- Ralph Loop dependency removed
- Stop-hook now controls the execution loop directly
- /implement runs the loop internally (no external invocation)
- /cancel only cleans up Smart Ralph state files
Why:
- Simpler installation (one plugin instead of two)
- No version compatibility issues between plugins
- Self-contained workflow
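The stop-hook mechanics aren't spelled out in this README, but the general shape of a stop-hook-driven loop is: each time the session would stop, check whether the state file still exists and whether tasks and iteration budget remain, then either continue or let the loop end. A sketch of that decision in Python (the state format, field names, and budget here are illustrative assumptions, not Smart Ralph's actual code):

```python
import json
from pathlib import Path

MAX_ITERATIONS = 5  # assumed budget; the real limit may differ

def should_continue(state_file: Path) -> bool:
    """Decide whether the stop-hook should keep the loop running.

    Illustrative only: the field names are hypothetical, not the
    plugin's actual state schema.
    """
    if not state_file.exists():  # state is deleted on completion
        return False
    state = json.loads(state_file.read_text())
    tasks_left = state.get("remaining_tasks", 0)
    iteration = state.get("iteration", 0)
    return tasks_left > 0 and iteration < MAX_ITERATIONS

# Example: two tasks left on iteration 1 -> keep looping
demo = Path("demo-state.json")
demo.write_text(json.dumps({"remaining_tasks": 2, "iteration": 1}))
print(should_continue(demo))  # True
demo.unlink()
```

This also explains the troubleshooting advice above: once the iteration budget is exhausted the loop stops, and re-running the implement command resets the cycle.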
Ralph Loop dependency required (superseded by v3.0.0)
v2.0.0 delegated task execution to the Ralph Loop plugin. This is no longer required as of v3.0.0.
PRs welcome! This project is friendly to first-time contributors.
- Fork it
- Create your feature branch (`git checkout -b feature/amazing`)
- Commit your changes
- Push to the branch
- Open a PR
- Ralph agentic loop pattern by Geoffrey Huntley
- Built for Claude Code
- Inspired by every developer who wished their AI could just figure out the whole feature
Made with confusion and determination
"The doctor said I wouldn't have so many nosebleeds if I kept my finger outta there."
MIT License
Alternative AI tools for smart-ralph
Similar Open Source Tools
bmalph
bmalph is a tool that bundles and installs two AI development systems, BMAD-METHOD for planning agents and workflows (Phases 1-3) and Ralph for autonomous implementation loop (Phase 4). It provides commands like `bmalph init` to install both systems, `bmalph upgrade` to update to the latest versions, `bmalph doctor` to check installation health, and `/bmalph-implement` to transition from BMAD to Ralph. Users can work through BMAD phases 1-3 with commands like BP, MR, DR, CP, VP, CA, etc., and then transition to Ralph for implementation.
azure-agentic-infraops
Agentic InfraOps is a multi-agent orchestration system for Azure infrastructure development that transforms how you build Azure infrastructure with AI agents. It provides a structured 7-step workflow that coordinates specialized AI agents through a complete infrastructure development cycle: Requirements → Architecture → Design → Plan → Code → Deploy → Documentation. The system enforces Azure Well-Architected Framework (WAF) alignment and Azure Verified Modules (AVM) at every phase, combining the speed of AI coding with best practices in cloud engineering.
everything-claude-code
The 'Everything Claude Code' repository is a comprehensive collection of production-ready agents, skills, hooks, commands, rules, and MCP configurations developed over 10+ months. It includes guides for setup, foundations, and philosophy, as well as detailed explanations of various topics such as token optimization, memory persistence, continuous learning, verification loops, parallelization, and subagent orchestration. The repository also provides updates on bug fixes, multi-language rules, installation wizard, PM2 support, OpenCode plugin integration, unified commands and skills, and cross-platform support. It offers a quick start guide for installation, ecosystem tools like Skill Creator and Continuous Learning v2, requirements for CLI version compatibility, key concepts like agents, skills, hooks, and rules, running tests, contributing guidelines, OpenCode support, background information, important notes on context window management and customization, star history chart, and relevant links.
ai-coders-context
The @ai-coders/context repository provides the Ultimate MCP for AI Agent Orchestration, Context Engineering, and Spec-Driven Development. It simplifies context engineering for AI by offering a universal process called PREVC, which consists of Planning, Review, Execution, Validation, and Confirmation steps. The tool aims to address the problem of context fragmentation by introducing a single `.context/` directory that works universally across different tools. It enables users to create structured documentation, generate agent playbooks, manage workflows, provide on-demand expertise, and sync across various AI tools. The tool follows a structured, spec-driven development approach to improve AI output quality and ensure reproducible results across projects.
roam-code
Roam is a tool that builds a semantic graph of your codebase and allows AI agents to query it with one shell command. It pre-indexes your codebase into a semantic graph stored in a local SQLite DB, providing architecture-level graph queries offline, cross-language, and compact. Roam understands functions, modules, tests coverage, and overall architecture structure. It is best suited for agent-assisted coding, large codebases, architecture governance, safe refactoring, and multi-repo projects. Roam is not suitable for real-time type checking, dynamic/runtime analysis, small scripts, or pure text search. It offers speed, dependency-awareness, LLM-optimized output, fully local operation, and CI readiness.
paperbanana
PaperBanana is an automated academic illustration tool designed for AI scientists. It implements an agentic framework for generating publication-quality academic diagrams and statistical plots from text descriptions. The tool utilizes a two-phase multi-agent pipeline with iterative refinement, Gemini-based VLM planning, and image generation. It offers a CLI, Python API, and MCP server for IDE integration, along with Claude Code skills for generating diagrams, plots, and evaluating diagrams. PaperBanana is not affiliated with or endorsed by the original authors or Google Research, and it may differ from the original system described in the paper.
llm4s
LLM4S provides a simple, robust, and scalable framework for building Large Language Models (LLM) applications in Scala. It aims to leverage Scala's type safety, functional programming, JVM ecosystem, concurrency, and performance advantages to create reliable and maintainable AI-powered applications. The framework supports multi-provider integration, execution environments, error handling, Model Context Protocol (MCP) support, agent frameworks, multimodal generation, and Retrieval-Augmented Generation (RAG) workflows. It also offers observability features like detailed trace logging, monitoring, and analytics for debugging and performance insights.
multi-agent-ralph-loop
Multi-agent RALPH (Reinforcement Learning with Probabilistic Hierarchies) Loop is a framework for multi-agent reinforcement learning research. It provides a flexible and extensible platform for developing and testing multi-agent reinforcement learning algorithms. The framework supports various environments, including grid-world environments, and allows users to easily define custom environments. Multi-agent RALPH Loop is designed to facilitate research in the field of multi-agent reinforcement learning by providing a set of tools and utilities for experimenting with different algorithms and scenarios.
claude-code-mastery
Claude Code Mastery is a comprehensive tool for maximizing Claude Code, offering a production-ready project template with 16 slash commands, deterministic hook enforcement, MongoDB wrapper, live AI monitoring, and three-layer security. It provides a security gatekeeper, project scaffolding blueprint, MCP server integration, workflow automation through custom commands, and emphasizes the importance of single-purpose chats to avoid performance degradation.
claude-craft
Claude Craft is a comprehensive framework for AI-assisted development with Claude Code, providing standardized rules, agents, and commands across multiple technology stacks. It includes autonomous sprint capabilities, documentation accuracy improvements, CI hardening, and test coverage enhancements. With support for 10 technology stacks, 5 languages, 40 AI agents, 157 slash commands, and various project management features like BMAD v6 framework, Ralph Wiggum loop execution, skills, templates, checklists, and hooks system, Claude Craft offers a robust solution for project development and management. The tool also supports workflow methodology, development tracks, document generation, BMAD v6 project management, quality gates, batch processing, backlog migration, and Claude Code hooks integration.
claude-talk-to-figma-mcp
A Model Context Protocol (MCP) plugin named Claude Talk to Figma MCP that enables Claude Desktop and other AI tools to interact directly with Figma for AI-assisted design capabilities. It provides document interaction, element creation, smart modifications, text mastery, and component integration. Users can connect the plugin to Figma, start designing, and utilize various tools for document analysis, element creation, modification, text manipulation, and component management. The project offers installation instructions, AI client configuration options, usage patterns, command references, troubleshooting support, testing guidelines, architecture overview, contribution guidelines, version history, and licensing information.
Legacy-Modernization-Agents
Legacy Modernization Agents is an open source migration framework developed to demonstrate AI Agents capabilities for converting legacy COBOL code to Java or C# .NET. The framework uses Microsoft Agent Framework with a dual-API architecture to analyze COBOL code and dependencies, then convert to either Java Quarkus or C# .NET. The web portal provides real-time visualization of migration progress, dependency graphs, and AI-powered Q&A.
terminator
Terminator is an AI-powered desktop automation tool that is open source, MIT-licensed, and cross-platform. It works across all apps and browsers, inspired by GitHub Actions & Playwright. It is 100x faster than generic AI agents, with over 95% success rate and no vendor lock-in. Users can create automations that work across any desktop app or browser, achieve high success rates without costly consultant armies, and pre-train workflows as deterministic code.
agentscope
AgentScope is a multi-agent platform designed to empower developers to build multi-agent applications with large-scale models. It features three high-level capabilities: Easy-to-Use, High Robustness, and Actor-Based Distribution. AgentScope provides a list of `ModelWrapper` to support both local model services and third-party model APIs, including OpenAI API, DashScope API, Gemini API, and ollama. It also enables developers to rapidly deploy local model services using libraries such as ollama (CPU inference), Flask + Transformers, Flask + ModelScope, FastChat, and vllm. AgentScope supports various services, including Web Search, Data Query, Retrieval, Code Execution, File Operation, and Text Processing. Example applications include Conversation, Game, and Distribution. AgentScope is released under Apache License 2.0 and welcomes contributions.
agents
AI agent tooling for data engineering workflows. Includes an MCP server for Airflow, a CLI tool for interacting with Airflow from your terminal, and skills that extend AI coding agents with specialized capabilities for working with Airflow and data warehouses. Works with Claude Code, Cursor, and other agentic coding tools. The tool provides a comprehensive set of features for data discovery & analysis, data lineage, DAG development, dbt integration, migration, and more. It also offers user journeys for data analysis flow and DAG development flow. The Airflow CLI tool allows users to interact with Airflow directly from the terminal. The tool supports various databases like Snowflake, PostgreSQL, Google BigQuery, and more, with auto-detected SQLAlchemy databases. Skills are invoked automatically based on user queries or can be invoked directly using specific commands.
For similar tasks
software-dev-prompt-library
A collection of AI-powered prompts designed to streamline software development workflows. The library contains prompts at various stages of development, with structured sequences of connected prompts, project initialization support, development assistance, and documentation generation. It aims to provide consistent guidance across different development phases, promote systematic development processes, and enable progress tracking and validation.
For similar jobs
weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.
ChatFAQ
ChatFAQ is an open-source comprehensive platform for creating a wide variety of chatbots: generic ones, business-trained, or even capable of redirecting requests to human operators. It includes a specialized NLP/NLG engine based on a RAG architecture and customized chat widgets, ensuring a tailored experience for users and avoiding vendor lock-in.
agentcloud
AgentCloud is an open-source platform that enables companies to build and deploy private LLM chat apps, empowering teams to securely interact with their data. It comprises three main components: Agent Backend, Webapp, and Vector Proxy. To run this project locally, clone the repository, install Docker, and start the services. The project is licensed under the GNU Affero General Public License, version 3 only. Contributions and feedback are welcome from the community.
LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.
VisionCraft
The VisionCraft API is a free API for using over 100 different AI models. From images to sound.
autogen
AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.
llama-recipes
The llama-recipes repository provides a scalable library for fine-tuning Llama 2, along with example scripts and notebooks to quickly get started with using the Llama 2 models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Llama 2 and other tools in the LLM ecosystem. The examples here showcase how to run Llama 2 locally, in the cloud, and on-prem.
llmware
LLMWare is a framework for quickly developing LLM-based applications including Retrieval Augmented Generation (RAG) and Multi-Step Orchestration of Agent Workflows. This project provides a comprehensive set of tools that anyone can use - from a beginner to the most sophisticated AI developer - to rapidly build industrial-grade, knowledge-based enterprise LLM applications. Our specific focus is on making it easy to integrate open source small specialized models and connecting enterprise knowledge safely and securely.