
commands
A collection of production-ready slash commands for Claude Code
Stars: 717

Production-ready slash commands for Claude Code that accelerate development through intelligent automation and multi-agent orchestration. Contains 52 commands organized into workflows and tools categories. Workflows orchestrate complex tasks with multiple agents, while tools provide focused functionality for specific development tasks. Commands can be used with prefixes for organization or flattened for convenience. Best practices include using workflows for complex tasks and tools for specific scopes, chaining commands strategically, and providing detailed context for effective usage.
README:
Production-ready slash commands for Claude Code that accelerate development through intelligent automation and multi-agent orchestration.
This repository contains 52 production-ready slash commands designed to enhance your development workflow with Claude Code. Commands are organized into two primary categories:
- Workflows: Multi-agent orchestration for complex, multi-step tasks
- Tools: Single-purpose utilities for specific operations
These commands require Claude Code and the Claude Code Subagents collection for full orchestration capabilities.
cd ~/.claude
git clone https://github.com/wshobson/commands.git
git clone https://github.com/wshobson/agents.git # Required for subagent orchestration
Commands are organized in tools/ and workflows/ directories. Use them with directory prefixes:
/tools:api-scaffold user management with authentication
/tools:security-scan check for vulnerabilities
/workflows:feature-development implement chat functionality
To use commands without prefixes, flatten the directories:
cp tools/*.md .
cp workflows/*.md .
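Once flattened, the same commands should be available without the directory prefix; for example:
/api-scaffold user management with authentication
/security-scan check for vulnerabilities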
Workflows orchestrate multiple specialized agents to handle complex, multi-domain tasks. They analyze requirements, delegate to appropriate specialists, and coordinate results.
- feature-development - Complete feature implementation with backend, frontend, testing, and deployment agents
- full-review - Comprehensive code analysis from multiple specialist perspectives
- smart-fix - Intelligent issue analysis and resolution with automatic delegation
- git-workflow - Git workflow implementation with branching strategies and PR templates
- improve-agent - Agent performance optimization through prompt engineering
- legacy-modernize - Legacy codebase modernization with specialized refactoring agents
- ml-pipeline - ML pipeline creation with data and ML engineering specialists
- multi-platform - Cross-platform application development with coordinated agents
- workflow-automate - CI/CD, monitoring, and deployment automation
- full-stack-feature - Multi-platform features with backend, frontend, and mobile agents
- security-hardening - Security-first implementation with specialized security agents
- data-driven-feature - ML-powered features with data science specialists
- performance-optimization - End-to-end optimization with performance specialists
- incident-response - Production incident resolution with operations specialists
Tools provide focused, single-purpose functionality for specific development tasks.
- ai-assistant - Production-ready AI assistants and chatbots
- ai-review - Specialized review for AI/ML codebases
- langchain-agent - LangChain/LangGraph agents with modern patterns
- ml-pipeline - End-to-end ML pipelines with MLOps
- prompt-optimize - AI prompt optimization for performance and quality
- code-explain - Detailed explanations of complex code
- code-migrate - Code migration between languages, frameworks, or versions
- refactor-clean - Code refactoring for maintainability and performance
- tech-debt - Technical debt analysis and prioritization
- data-pipeline - Scalable data pipeline architectures
- data-validation - Comprehensive data validation systems
- db-migrate - Advanced database migration strategies
- deploy-checklist - Deployment configurations and checklists
- docker-optimize - Container optimization strategies
- k8s-manifest - Production-grade Kubernetes deployments
- monitor-setup - Comprehensive monitoring and observability
- slo-implement - Service Level Objective (SLO) implementation
- workflow-automate - Development and operational workflow automation
- api-mock - Realistic API mocks for development and testing
- api-scaffold - Production-ready API endpoints with complete implementation
- test-harness - Comprehensive test suites with framework detection
- accessibility-audit - Accessibility testing and remediation
- compliance-check - Regulatory compliance verification (GDPR, HIPAA, SOC2)
- security-scan - Security scanning with automated remediation
- debug-trace - Advanced debugging and tracing strategies
- error-analysis - Error pattern analysis and resolution
- error-trace - Production error diagnosis
- issue - Well-structured issue creation for GitHub/GitLab
- config-validate - Application configuration validation and management
- deps-audit - Dependency security and licensing audit
- deps-upgrade - Safe dependency upgrades
- doc-generate - Comprehensive documentation generation
- pr-enhance - Pull request enhancement with quality checks
- standup-notes - Daily standup note generation
- cost-optimize - Cloud and infrastructure cost optimization
- onboard - Development environment setup for new team members
- context-save - Save project context and architectural decisions
- context-restore - Restore saved context for continuity
- multi-agent-review - Multi-perspective code review
- smart-debug - Deep debugging with specialized agents
- multi-agent-optimize - Full-stack optimization
# Start new feature with comprehensive implementation
/workflows:feature-development Add OAuth2 authentication with Google and GitHub
# Scaffold specific API endpoints
/tools:api-scaffold user profile CRUD with avatar upload
# Intelligent problem analysis and fix
/workflows:smart-fix Dashboard loading slowly, users seeing 5+ second wait times
# Trace specific errors
/tools:error-trace Analyze high memory usage in production pods
# Comprehensive multi-agent review
/tools:multi-agent-review Review PR #123 payment processing implementation
# Security scanning
/tools:security-scan Comprehensive scan for OWASP Top 10 vulnerabilities
# Generate deployment checklist
/tools:deploy-checklist OAuth2 authentication feature deployment
# Optimize containers
/tools:docker-optimize Optimize authentication service container
# Create Kubernetes manifests
/tools:k8s-manifest Production deployment with auto-scaling
Commands are designed to work together in sequences:
# 1. Implement with workflow
/workflows:feature-development Add real-time chat with WebSockets
# 2. Add security measures
/tools:security-scan Focus on WebSocket vulnerabilities
# 3. Optimize performance
/tools:multi-agent-optimize Optimize message delivery
# 4. Prepare deployment
/tools:deploy-checklist Real-time chat deployment
/tools:k8s-manifest WebSocket service with sticky sessions
# 1. Start modernization
/workflows:legacy-modernize Migrate Express.js v4 to modern architecture
# 2. Update dependencies
/tools:deps-upgrade Update all dependencies to latest versions
# 3. Clean up code
/tools:refactor-clean Remove deprecated patterns
# 4. Add tests
/tools:test-harness Ensure 100% test coverage
# 5. Deploy
/tools:docker-optimize Create production build
/tools:k8s-manifest Deploy with zero-downtime strategy
Use workflows when:
- Tackling complex, multi-domain problems
- Solution approach is unclear
- Need coordination between multiple specialists
- Implementing complete features end-to-end
- Require adaptive problem-solving
Examples:
- "Implement user authentication system" → /workflows:feature-development
- "Fix performance issues across the stack" → /workflows:smart-fix
- "Modernize legacy monolith" → /workflows:legacy-modernize
Use tools when:
- Task has a clear, specific scope
- Need precise control over implementation
- Working within a single domain
- Enhancing or refining existing work
- Integrating into manual workflows
Examples:
- "Create Kubernetes manifests" → /tools:k8s-manifest
- "Scan for security vulnerabilities" → /tools:security-scan
- "Generate API documentation" → /tools:doc-generate
Best practices:
- Let Claude Code suggest commands automatically based on context
- Start with workflows for complex tasks, then refine with tools
- Chain commands strategically for comprehensive solutions
- Provide comprehensive context including tech stack and constraints
- Use workflow output as input for specific tools
- Build incrementally on previous command outputs
- Allow workflows to complete before applying modifications
- Use tools for known, specific problems
- Use workflows for exploration and complex solutions
- Provide detailed requirements upfront to reduce iterations
- Keep simple edits in the main agent context
Slash commands are markdown files with a simple structure:
- Filename becomes the command name (e.g., tools/api-scaffold.md → /tools:api-scaffold)
- File content contains the prompt/instructions executed when invoked
- $ARGUMENTS placeholder captures user input
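To see this structure in practice, you can open any command file from the installed collection; for example (assuming the default install location under ~/.claude/commands):
cat ~/.claude/commands/tools/api-scaffold.md   # the prompt behind /tools:api-scaffold, including its $ARGUMENTS placeholder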
To add new commands:
- Create a .md file in the workflows/ or tools/ directory
- Use lowercase-hyphen naming convention
- Include the $ARGUMENTS placeholder for user input
- For workflows: focus on agent orchestration and delegation
- For tools: provide complete, production-ready implementations
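A minimal sketch of adding a new tool command (the changelog-draft name and prompt below are hypothetical, purely for illustration):
cd ~/.claude/commands
cat > tools/changelog-draft.md <<'EOF'
Draft a concise changelog entry for the change described below,
grouping items under Added, Changed, and Fixed.

$ARGUMENTS
EOF
# Then invoke it with:
/tools:changelog-draft Added OAuth2 login via Google and GitHub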
Troubleshooting
Command not found
- Verify files exist in ~/.claude/commands/tools/ or ~/.claude/commands/workflows/
- Check command syntax: /tools:command-name or /workflows:command-name
- Consider flattening directories if you prefer no prefixes
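As a quick sanity check (assuming the default install location above), listing the tools directory should show one markdown file per command:
ls ~/.claude/commands/tools/ | head   # expect files such as api-scaffold.md, security-scan.md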
Slow workflow execution
- Normal behavior - workflows coordinate multiple agents
- Complex tasks require analysis and delegation time
Generic or incomplete output
- Provide more specific context about your technology stack
- Include constraints and requirements in your request
Integration issues
- Verify file paths are correct
- Check command sequencing and dependencies
security-scan - Comprehensive security scanning with automated remediation:
- Multi-tool scanning (Bandit, Safety, Trivy, Semgrep, Snyk)
- Automated vulnerability fixes
- CI/CD security gate integration
- Container and secret scanning
docker-optimize - Advanced container optimization:
- Framework-specific multi-stage builds
- Modern build tools (BuildKit, Bun, UV)
- Security hardening with distroless images
- Cross-command integration support
k8s-manifest - Production-grade Kubernetes deployments:
- Pod Security Standards and Network Policies
- GitOps ready (FluxCD/ArgoCD)
- Built-in observability (Prometheus/OpenTelemetry)
- Auto-scaling and service mesh integration
db-migrate - Advanced database migration strategies:
- Multi-database support (PostgreSQL, MySQL, MongoDB, DynamoDB)
- Zero-downtime migration patterns
- Event sourcing with Kafka/Kinesis
- Polyglot persistence handling
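Following the same invocation pattern as the examples above, a hypothetical db-migrate request might look like:
/tools:db-migrate Zero-downtime PostgreSQL schema migration for the orders table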
Resources:
- Claude Code Documentation
- Slash Commands Documentation
- Subagents Documentation
- Claude Code GitHub
- Claude Code Subagents Collection
MIT License - See LICENSE file for details
For issues, questions, or contributions, please open an issue on GitHub.
Similar Open Source Tools

lyraios
LYRAIOS (LLM-based Your Reliable AI Operating System) is an advanced AI assistant platform built with FastAPI and Streamlit, designed to serve as an operating system for AI applications. It offers core features such as AI process management, memory system, and I/O system. The platform includes built-in tools like Calculator, Web Search, Financial Analysis, File Management, and Research Tools. It also provides specialized assistant teams for Python and research tasks. LYRAIOS is built on a technical architecture comprising FastAPI backend, Streamlit frontend, Vector Database, PostgreSQL storage, and Docker support. It offers features like knowledge management, process control, and security & access control. The roadmap includes enhancements in core platform, AI process management, memory system, tools & integrations, security & access control, open protocol architecture, multi-agent collaboration, and cross-platform support.

ai-doc-gen
An AI-powered code documentation generator that automatically analyzes repositories and creates comprehensive documentation using advanced language models. The system employs a multi-agent architecture to perform specialized code analysis and generate structured documentation.

finite-monkey-engine
FiniteMonkey is an advanced vulnerability mining engine powered purely by GPT, requiring no prior knowledge base or fine-tuning. Its effectiveness significantly surpasses most current related research approaches. The tool is task-driven, prompt-driven, and focuses on prompt design, leveraging 'deception' and hallucination as key mechanics. It has helped identify vulnerabilities worth over $60,000 in bounties. The tool requires PostgreSQL database, OpenAI API access, and Python environment for setup. It supports various languages like Solidity, Rust, Python, Move, Cairo, Tact, Func, Java, and Fake Solidity for scanning. FiniteMonkey is best suited for logic vulnerability mining in real projects, not recommended for academic vulnerability testing. GPT-4-turbo is recommended for optimal results with an average scan time of 2-3 hours for medium projects. The tool provides detailed scanning results guide and implementation tips for users.

gemini-cli
Gemini CLI is an open-source AI agent that provides lightweight access to Gemini, offering powerful capabilities like code understanding, generation, automation, integration, and advanced features. It is designed for developers who prefer working in the command line and offers extensibility through MCP support. The tool integrates directly into GitHub workflows and offers various authentication options for individual developers, enterprise teams, and production workloads. With features like code querying, editing, app generation, debugging, and GitHub integration, Gemini CLI aims to streamline development workflows and enhance productivity.

meeting-minutes
An open-source AI assistant for taking meeting notes that captures live meeting audio, transcribes it in real-time, and generates summaries while ensuring user privacy. Perfect for teams to focus on discussions while automatically capturing and organizing meeting content without external servers or complex infrastructure. Features include modern UI, real-time audio capture, speaker diarization, local processing for privacy, and more. The tool also offers a Rust-based implementation for better performance and native integration, with features like live transcription, speaker diarization, and a rich text editor for notes. Future plans include database connection for saving meeting minutes, improving summarization quality, and adding download options for meeting transcriptions and summaries. The backend supports multiple LLM providers through a unified interface, with configurations for Anthropic, Groq, and Ollama models. System architecture includes core components like audio capture service, transcription engine, LLM orchestrator, data services, and API layer. Prerequisites for setup include Node.js, Python, FFmpeg, and Rust. Development guidelines emphasize project structure, testing, documentation, type hints, and ESLint configuration. Contributions are welcome under the MIT License.

SynthLang
SynthLang is a tool designed to optimize AI prompts by reducing costs and improving processing speed. It brings academic rigor to prompt engineering, creating precise and powerful AI interactions. The tool includes core components like a Translator Engine, Performance Optimization, Testing Framework, and Technical Architecture. It offers mathematical precision, academic rigor, enhanced security, a modern interface, and instant testing. Users can integrate mathematical frameworks, model complex relationships, and apply structured prompts to various domains. Security features include API key management and data privacy. The tool also provides a CLI for prompt engineering and optimization capabilities.

OpenChat
OS Chat is a free, open-source AI personal assistant that combines 40+ language models with powerful automation capabilities. It allows users to deploy background agents, connect services like Gmail, Calendar, Notion, GitHub, and Slack, and get things done through natural conversation. With features like smart automation, service connectors, AI models, chat management, interface customization, and premium features, OS Chat offers a comprehensive solution for managing digital life and workflows. It prioritizes privacy by being open source and self-hostable, with encrypted API key storage.

monoscope
Monoscope is an open-source monitoring and observability platform that uses artificial intelligence to understand and monitor systems automatically. It allows users to ingest and explore logs, traces, and metrics in S3 buckets, query in natural language via LLMs, and create AI agents to detect anomalies. Key capabilities include universal data ingestion, AI-powered understanding, natural language interface, cost-effective storage, and zero configuration. Monoscope is designed to reduce alert fatigue, catch issues before they impact users, and provide visibility across complex systems.

evi-run
evi-run is a powerful, production-ready multi-agent AI system built on Python using the OpenAI Agents SDK. It offers instant deployment, ultimate flexibility, built-in analytics, Telegram integration, and scalable architecture. The system features memory management, knowledge integration, task scheduling, multi-agent orchestration, custom agent creation, deep research, web intelligence, document processing, image generation, DEX analytics, and Solana token swap. It supports flexible usage modes like private, free, and pay mode, with upcoming features including NSFW mode, task scheduler, and automatic limit orders. The technology stack includes Python 3.11, OpenAI Agents SDK, Telegram Bot API, PostgreSQL, Redis, and Docker & Docker Compose for deployment.

persistent-ai-memory
Persistent AI Memory System is a comprehensive tool that offers persistent, searchable storage for AI assistants. It includes features like conversation tracking, MCP tool call logging, and intelligent scheduling. The system supports multiple databases, provides enhanced memory management, and offers various tools for memory operations, schedule management, and system health checks. It also integrates with various platforms like LM Studio, VS Code, Koboldcpp, Ollama, and more. The system is designed to be modular, platform-agnostic, and scalable, allowing users to handle large conversation histories efficiently.

DreamLayer
DreamLayer AI is an open-source Stable Diffusion WebUI designed for AI researchers, labs, and developers. It automates prompts, seeds, and metrics for benchmarking models, datasets, and samplers, enabling reproducible evaluations across multiple seeds and configurations. The tool integrates custom metrics and evaluation pipelines, providing a streamlined workflow for AI research. With features like automated benchmarking, reproducibility, built-in metrics, multi-modal readiness, and researcher-friendly interface, DreamLayer AI aims to simplify and accelerate the model evaluation process.

layra
LAYRA is the world's first visual-native AI automation engine that sees documents like a human, preserves layout and graphical elements, and executes arbitrarily complex workflows with full Python control. It empowers users to build next-generation intelligent systems with no limits or compromises. Built for Enterprise-Grade deployment, LAYRA features a modern frontend, high-performance backend, decoupled service architecture, visual-native multimodal document understanding, and a powerful workflow engine.

local-deep-research
Local Deep Research is a powerful AI-powered research assistant that performs deep, iterative analysis using multiple LLMs and web searches. It can be run locally for privacy or configured to use cloud-based LLMs for enhanced capabilities. The tool offers advanced research capabilities, flexible LLM support, rich output options, privacy-focused operation, enhanced search integration, and academic & scientific integration. It also provides a web interface, command line interface, and supports multiple LLM providers and search engines. Users can configure AI models, search engines, and research parameters for customized research experiences.

RepoMaster
RepoMaster is an AI agent that leverages GitHub repositories to solve complex real-world tasks. It transforms how coding tasks are solved by automatically finding the right GitHub tools and making them work together seamlessly. Users can describe their tasks, and RepoMaster's AI analysis leads to auto discovery and smart execution, resulting in perfect outcomes. The tool provides a web interface for beginners and a command-line interface for advanced users, along with specialized agents for deep search, general assistance, and repository tasks.

bifrost
Bifrost is a high-performance AI gateway that unifies access to multiple providers through a single OpenAI-compatible API. It offers features like automatic failover, load balancing, semantic caching, and enterprise-grade functionalities. Users can deploy Bifrost in seconds with zero configuration, benefiting from its core infrastructure, advanced features, enterprise and security capabilities, and developer experience. The repository structure is modular, allowing for maximum flexibility. Bifrost is designed for quick setup, easy configuration, and seamless integration with various AI models and tools.
For similar tasks

apicat
ApiCat is an API documentation management tool that is fully compatible with the OpenAPI specification. With ApiCat, you can freely and efficiently manage your APIs. It integrates the capabilities of LLM, which not only helps you automatically generate API documentation and data models but also creates corresponding test cases based on the API content. Using ApiCat, you can quickly accomplish anything outside of coding, allowing you to focus your energy on the code itself.

tt-metal
TT-NN is a Python & C++ neural network op library. It provides a low-level programming model, TT-Metalium, enabling kernel development for Tenstorrent hardware.

mscclpp
MSCCL++ is a GPU-driven communication stack for scalable AI applications, providing a highly efficient and customizable communication stack for distributed GPU applications. It redefines inter-GPU communication interfaces, with a design tailored to the performance optimization scenarios often encountered in state-of-the-art AI applications. MSCCL++ offers communication abstractions both at the lowest level, close to hardware, and at the highest level, close to the application API. The lowest level of abstraction is ultra-lightweight, enabling a user to implement the data-movement logic of a collective operation such as AllReduce inside a GPU kernel extremely efficiently, without worrying about the memory ordering of different ops. The modularity of MSCCL++ lets a user construct its building blocks in a high-level Python abstraction and feed them to a CUDA kernel, improving productivity. MSCCL++ provides fine-grained synchronous and asynchronous 0-copy 1-sided abstractions for communication primitives such as `put()`, `get()`, `signal()`, `flush()`, and `wait()`. The 1-sided abstractions allow a user to asynchronously `put()` their data on the remote GPU as soon as it is ready, without requiring the remote side to issue any receive instruction; this makes it easy to implement flexible communication logic, such as overlapping communication with computation or building customized collective communication algorithms, without worrying about potential deadlocks. Additionally, the 0-copy capability enables MSCCL++ to transfer data directly between user buffers without intermediate internal buffers, saving GPU bandwidth and memory capacity. MSCCL++ provides consistent abstractions regardless of the location of the remote GPU (on the local node or on a remote node) or the underlying link (NVLink/xGMI or InfiniBand), simplifying inter-GPU communication code that is otherwise complex and error-prone due to the memory ordering of GPU/CPU reads and writes.

mlir-air
This repository contains tools and libraries for building AIR platforms, runtimes and compilers.

free-for-life
A massive list including a huge amount of products and services that are completely free. Categories include APIs, data & ML, artificial intelligence, BaaS, code editors, code generation, DNS, databases, design & UI, domains, email, fonts, resources for students, forms, Linux distributions, messaging & streaming, PaaS, payments & billing, and SSL.

AIMr
AIMr is an AI aimbot tool written in Python that leverages modern technologies to achieve an undetected system with a pleasing appearance. It works on any game that uses human-shaped models. To optimize its performance, users should build OpenCV with CUDA. For Valorant, additional perks in the Discord and an Arduino Leonardo R3 are required.

aika
AIKA (Artificial Intelligence for Knowledge Acquisition) is a new type of artificial neural network designed to mimic the behavior of a biological brain more closely and bridge the gap to classical AI. The network conceptually separates activations from neurons, creating two separate graphs to represent acquired knowledge and inferred information. It uses different types of neurons and synapses to propagate activation values, binding signals, causal relations, and training gradients. The network structure allows for flexible topology and supports the gradual population of neurons and synapses during training.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use case. Use cases for BricksLLM include: setting LLM usage limits for users on different pricing tiers; tracking LLM usage on a per-user and per-organization basis; blocking or redacting requests containing PII; improving LLM reliability with failovers, retries and caching; and distributing API keys with rate limits and cost limits for internal development/production use cases or for students.

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.