
commands
A collection of production-ready slash commands for Claude Code
Stars: 774

Production-ready slash commands for Claude Code that accelerate development through intelligent automation and multi-agent orchestration. Contains 56 commands (15 workflows, 41 tools) organized into workflows and tools categories. Workflows orchestrate complex tasks with multiple agents, while tools provide focused functionality for specific development tasks. Commands can be used with prefixes for organization or flattened for convenience. Best practices include using workflows for complex tasks and tools for specific scopes, chaining commands strategically, and providing detailed context for effective usage.
README:
A comprehensive collection of production-ready slash commands for Claude Code that provides intelligent automation and multi-agent orchestration capabilities for modern software development.
This repository provides 56 production-ready slash commands (15 workflows, 41 tools) that extend Claude Code's capabilities through:
- Workflows: Multi-agent orchestration systems that coordinate complex, multi-step operations across different domains
- Tools: Specialized single-purpose utilities for focused development tasks
Prerequisites:
- Claude Code installed and configured
- Claude Code Subagents collection for workflow orchestration capabilities
- Git for repository management
# Navigate to Claude configuration directory
cd ~/.claude
# Clone the commands repository
git clone https://github.com/wshobson/commands.git
# Clone the agents repository (required for workflow execution)
git clone https://github.com/wshobson/agents.git
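If both clones succeed, the command and agent files should be visible under ~/.claude. A quick sanity check, assuming the default locations used above:

```bash
# Confirm the repositories landed where Claude Code expects them
ls ~/.claude/commands/workflows/ | head
ls ~/.claude/commands/tools/ | head
ls ~/.claude/agents/ | head
```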
Commands are organized in tools/ and workflows/ directories and invoked using directory prefixes:
# Workflow invocation
/workflows:feature-development implement OAuth2 authentication
# Tool invocation
/tools:security-scan perform vulnerability assessment
# Multiple argument example
/tools:api-scaffold create user management endpoints with RBAC
To invoke commands without directory prefixes, copy files to the root directory:
cp tools/*.md .
cp workflows/*.md .
# Then invoke directly
/api-scaffold create REST endpoints
/feature-development implement payment system
Workflows implement multi-agent orchestration patterns for complex, cross-domain tasks. Each workflow analyzes requirements, delegates to specialized agents, and coordinates execution across multiple subsystems.
| Command | Purpose | Agent Coordination |
|---|---|---|
| feature-development | End-to-end feature implementation | Backend, frontend, testing, deployment |
| full-review | Multi-perspective code analysis | Architecture, security, performance, quality |
| smart-fix | Intelligent problem resolution | Dynamic agent selection based on issue type |
| tdd-cycle | Test-driven development orchestration | Test writer, implementer, refactoring specialist |
| Command | Purpose | Scope |
|---|---|---|
| git-workflow | Version control process automation | Branching strategies, commit standards, PR templates |
| improve-agent | Agent optimization | Prompt engineering, performance tuning |
| legacy-modernize | Codebase modernization | Architecture migration, dependency updates, pattern refactoring |
| ml-pipeline | Machine learning pipeline construction | Data engineering, model training, deployment |
| multi-platform | Cross-platform development | Web, mobile, desktop coordination |
| workflow-automate | CI/CD pipeline automation | Build, test, deploy, monitor |
| Command | Primary Focus | Specialized Agents |
|---|---|---|
| full-stack-feature | Multi-tier implementation | Backend API, frontend UI, mobile, database |
| security-hardening | Security-first development | Threat modeling, vulnerability assessment, remediation |
| data-driven-feature | ML-powered functionality | Data science, feature engineering, model deployment |
| performance-optimization | System-wide optimization | Profiling, caching, query optimization, load testing |
| incident-response | Production issue resolution | Diagnostics, root cause analysis, hotfix deployment |
Tools provide focused, single-purpose utilities for specific development operations. Each tool is optimized for its domain with production-ready implementations.
| Command | Functionality | Key Features |
|---|---|---|
| ai-assistant | AI assistant implementation | LLM integration, conversation management, context handling |
| ai-review | ML code review | Model architecture validation, training pipeline review |
| langchain-agent | LangChain agent creation | RAG patterns, tool integration, memory management |
| ml-pipeline | ML pipeline construction | Data processing, training, evaluation, deployment |
| prompt-optimize | Prompt engineering | Performance testing, cost optimization, quality metrics |
| Command | Purpose | Capabilities |
|---|---|---|
| code-explain | Code documentation | AST analysis, complexity metrics, flow diagrams |
| code-migrate | Migration automation | Framework upgrades, language porting, API migrations |
| refactor-clean | Code improvement | Pattern detection, dead code removal, structure optimization |
| tech-debt | Debt assessment | Complexity analysis, risk scoring, remediation planning |
| Command | Focus Area | Technologies |
|---|---|---|
| data-pipeline | ETL/ELT architecture | Apache Spark, Airflow, dbt, streaming platforms |
| data-validation | Data quality | Schema validation, anomaly detection, constraint checking |
| db-migrate | Database migrations | Schema versioning, zero-downtime strategies, rollback plans |
| Command | Domain | Implementation |
|---|---|---|
| deploy-checklist | Deployment preparation | Pre-flight checks, rollback procedures, monitoring setup |
| docker-optimize | Container optimization | Multi-stage builds, layer caching, size reduction |
| k8s-manifest | Kubernetes configuration | Deployments, services, ingress, autoscaling, security policies |
| monitor-setup | Observability | Metrics, logging, tracing, alerting rules |
| slo-implement | SLO/SLI definition | Error budgets, monitoring, automated responses |
| workflow-automate | Pipeline automation | CI/CD, GitOps, infrastructure as code |
| Command | Testing Focus | Framework Support |
|---|---|---|
| api-mock | Mock generation | REST, GraphQL, gRPC, WebSocket |
| api-scaffold | Endpoint creation | CRUD operations, authentication, validation |
| test-harness | Test suite generation | Unit, integration, e2e, performance |
| tdd-red | Test-first development | Failing test creation, edge case coverage |
| tdd-green | Implementation | Minimal code to pass tests |
| tdd-refactor | Code improvement | Optimization while maintaining green tests |
| Command | Security Domain | Standards |
|---|---|---|
| accessibility-audit | WCAG compliance | ARIA, keyboard navigation, screen reader support |
| compliance-check | Regulatory compliance | GDPR, HIPAA, SOC2, PCI-DSS |
| security-scan | Vulnerability assessment | OWASP, CVE scanning, dependency audits |
| Command | Analysis Type | Output |
|---|---|---|
| debug-trace | Runtime analysis | Stack traces, memory profiles, execution paths |
| error-analysis | Error patterns | Root cause analysis, frequency analysis, impact assessment |
| error-trace | Production debugging | Log correlation, distributed tracing, error reproduction |
| issue | Issue tracking | Standardized templates, reproduction steps, acceptance criteria |
| Command | Management Area | Features |
|---|---|---|
| config-validate | Configuration management | Schema validation, environment variables, secrets handling |
| deps-audit | Dependency analysis | Security vulnerabilities, license compliance, version conflicts |
| deps-upgrade | Version management | Breaking change detection, compatibility testing, rollback support |
| Command | Documentation Type | Format |
|---|---|---|
| doc-generate | API documentation | OpenAPI, JSDoc, TypeDoc, Sphinx |
| pr-enhance | Pull request optimization | Description generation, checklist creation, review suggestions |
| standup-notes | Status reporting | Progress tracking, blocker identification, next steps |
| Command | Operational Focus | Use Case |
|---|---|---|
| cost-optimize | Resource optimization | Cloud spend analysis, right-sizing, reserved capacity |
| onboard | Environment setup | Development tools, access configuration, documentation |
| context-save | State persistence | Architecture decisions, configuration snapshots |
| context-restore | State recovery | Context reload, decision history, configuration restore |
# Complete feature with multi-agent orchestration
/workflows:feature-development OAuth2 authentication with JWT tokens
# API-first development
/tools:api-scaffold REST endpoints for user management with RBAC
# Test-driven approach
/workflows:tdd-cycle shopping cart with discount calculation logic
# Intelligent issue resolution
/workflows:smart-fix high memory consumption in production workers
# Targeted error analysis
/tools:error-trace investigate Redis connection timeouts
# Performance optimization
/workflows:performance-optimization optimize database query performance
# Security assessment
/tools:security-scan OWASP Top 10 vulnerability scan
# Compliance verification
/tools:compliance-check GDPR data handling requirements
# Security hardening workflow
/workflows:security-hardening implement zero-trust architecture
# Complete TDD cycle with orchestration
/workflows:tdd-cycle payment processing with Stripe integration
# Manual TDD phases for granular control
/tools:tdd-red create failing tests for order validation
/tools:tdd-green implement minimal order validation logic
/tools:tdd-refactor optimize validation performance
# Feature implementation pipeline
/workflows:feature-development real-time notifications with WebSockets
/tools:security-scan WebSocket implementation vulnerabilities
/workflows:performance-optimization WebSocket connection handling
/tools:deploy-checklist notification service deployment requirements
/tools:k8s-manifest WebSocket service with session affinity
# Legacy system upgrade
/workflows:legacy-modernize migrate monolith to microservices
/tools:deps-audit check dependency vulnerabilities
/tools:deps-upgrade update to latest stable versions
/tools:refactor-clean remove deprecated patterns
/tools:test-harness generate comprehensive test coverage
/tools:docker-optimize create optimized container images
/tools:k8s-manifest deploy with rolling update strategy
| Criteria | Use Workflows | Use Tools |
|---|---|---|
| Problem Complexity | Multi-domain, cross-cutting concerns | Single domain, focused scope |
| Solution Clarity | Exploratory, undefined approach | Clear implementation path |
| Agent Coordination | Multiple specialists required | Single expertise sufficient |
| Implementation Scope | End-to-end features | Specific components |
| Control Level | Automated orchestration preferred | Manual control required |
| Requirement | Recommended Workflow | Rationale |
|---|---|---|
| "Build complete authentication system" | /workflows:feature-development | Multi-tier implementation required |
| "Debug production performance issues" | /workflows:smart-fix | Unknown root cause, needs analysis |
| "Modernize legacy application" | /workflows:legacy-modernize | Complex refactoring across stack |
| "Implement ML-powered feature" | /workflows:data-driven-feature | Requires data science expertise |
| Task | Recommended Tool | Output |
|---|---|---|
| "Generate Kubernetes configs" | /tools:k8s-manifest | YAML manifests with best practices |
| "Audit security vulnerabilities" | /tools:security-scan | Vulnerability report with fixes |
| "Create API documentation" | /tools:doc-generate | OpenAPI/Swagger specifications |
| "Optimize Docker images" | /tools:docker-optimize | Multi-stage Dockerfile |
- Technology Stack Specification: Include framework versions, database systems, deployment targets
- Constraint Definition: Specify performance requirements, security standards, compliance needs
- Integration Requirements: Define external services, APIs, authentication methods
- Output Preferences: Indicate coding standards, testing frameworks, documentation formats
- Progressive Enhancement: Start with workflows for foundation, refine with tools
- Pipeline Construction: Chain commands in logical sequence for complete solutions
- Iterative Refinement: Use tool outputs as inputs for subsequent commands
- Parallel Execution: Run independent tools simultaneously when possible
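For instance, a short hypothetical sequence that applies both ideas: the first command states the stack and constraints explicitly, and the follow-up tools build on its output (the technology choices are illustrative, not prescribed by the commands):

```
# Detailed context up front, then focused follow-up tools
/tools:api-scaffold user management endpoints in FastAPI with PostgreSQL, JWT auth, and OpenAPI docs
/tools:test-harness unit and integration tests for the user management endpoints
/tools:security-scan review the new endpoints for OWASP Top 10 issues
```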
- Workflows typically require 30-90 seconds for complete orchestration
- Tools execute in 5-30 seconds for focused operations
- Provide detailed requirements upfront to minimize iteration cycles
- Use saved context (context-save / context-restore) for multi-session projects
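For instance, a multi-session project might bracket each working session with the context commands (the argument phrasing is illustrative):

```
# End of a session: persist architecture decisions and configuration
/tools:context-save payment service architecture decisions and environment configuration

# Start of the next session: reload the saved state before continuing
/tools:context-restore payment service context
```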
Each slash command is a markdown file with the following characteristics:
| Component | Description | Example |
|---|---|---|
| Filename | Determines command name | api-scaffold.md → /tools:api-scaffold |
| Content | Execution instructions | Agent prompts and orchestration logic |
| Variables | $ARGUMENTS placeholder | Captures and processes user input |
| Directory | Command category | tools/ for utilities, workflows/ for orchestration |
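As an illustration of this anatomy, a hypothetical custom tool can be added with a single markdown file; the command name, path, and prompt below are invented for the example and are not files shipped with this repository:

```bash
# Create a hypothetical single-purpose tool; $ARGUMENTS receives the user's input
cat > ~/.claude/commands/tools/changelog-draft.md <<'EOF'
Draft a changelog entry for the following change: $ARGUMENTS

1. Summarize the change in one line.
2. List user-facing impacts and any required migration steps.
EOF

# The filename determines the command name, so this becomes:
#   /tools:changelog-draft describe the v2.1 API rate limiting changes
```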
~/.claude/commands/
├── workflows/ # Multi-agent orchestration commands
│ ├── feature-development.md
│ ├── smart-fix.md
│ └── ...
├── tools/ # Single-purpose utility commands
│ ├── api-scaffold.md
│ ├── security-scan.md
│ └── ...
└── README.md # This documentation
To create a new workflow:
- File Creation: Place in workflows/ directory with descriptive naming
- Agent Orchestration: Define delegation logic for multiple specialists
- Error Handling: Include fallback strategies and error recovery
- Output Coordination: Specify how agent outputs should be combined

To create a new tool:
- File Creation: Place in tools/ directory with single-purpose naming
- Implementation: Provide complete, production-ready code generation
- Framework Detection: Auto-detect and adapt to project stack
- Best Practices: Include security, performance, and scalability considerations
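A workflow file uses the same format but describes orchestration rather than a single task. A minimal hypothetical sketch, with the file name, agents, and steps invented for illustration:

```bash
# Hypothetical workflow skeleton: delegation, error handling, output coordination
cat > ~/.claude/commands/workflows/docs-refresh.md <<'EOF'
Coordinate a documentation refresh for: $ARGUMENTS

1. Delegate code analysis to a code-review specialist and collect findings.
2. Delegate documentation updates to a documentation specialist using those findings.
3. If either step fails, report partial results rather than stopping silently.
4. Combine both outputs into a single summary with follow-up tasks.
EOF
```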
- Use lowercase with hyphens: feature-name.md
- Be descriptive but concise: security-scan, not scan
- Indicate action clearly: deps-upgrade, not dependencies
- Maintain consistency with existing commands
| Issue | Cause | Resolution |
|---|---|---|
| Command not recognized | File missing or misnamed | Verify file exists in correct directory |
| Slow execution | Normal workflow behavior | Workflows coordinate multiple agents (30-90s typical) |
| Incomplete output | Insufficient context | Provide technology stack and requirements |
| Integration failures | Path or configuration issues | Check file paths and dependencies |
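For the most common case, a command that is not recognized, a quick filesystem check confirms the file is present under the expected directory (paths follow the structure shown above):

```bash
# Verify the command files exist where the directory layout expects them
ls ~/.claude/commands/tools/security-scan.md
ls ~/.claude/commands/workflows/feature-development.md
```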
- Context Caching: Use context-save for multi-session projects
- Batch Operations: Combine related tasks in single workflow
- Tool Selection: Use tools for known problems, workflows for exploration
- Requirement Clarity: Detailed specifications reduce iteration cycles
| Command | Type | Capabilities |
|---|---|---|
| tdd-cycle | Workflow | Complete red-green-refactor orchestration with test coverage analysis |
| tdd-red | Tool | Failing test generation with edge case coverage and mocking |
| tdd-green | Tool | Minimal implementation to achieve test passage |
| tdd-refactor | Tool | Code optimization while maintaining test integrity |
Framework Support: Jest, Mocha, PyTest, RSpec, JUnit, Go testing, Rust tests
| Command | Specialization | Key Features |
|---|---|---|
| security-scan | Vulnerability detection | SAST/DAST analysis, dependency scanning, secret detection |
| docker-optimize | Container optimization | Multi-stage builds, layer caching, size reduction (50-90% typical) |
| k8s-manifest | Kubernetes deployment | HPA, NetworkPolicy, PodSecurityPolicy, service mesh ready |
| monitor-setup | Observability | Prometheus metrics, Grafana dashboards, alert rules |
Security Tools Integration: Bandit, Safety, Trivy, Semgrep, Snyk, GitGuardian
| Command | Database Support | Migration Strategies |
|---|---|---|
| db-migrate | PostgreSQL, MySQL, MongoDB, DynamoDB | Blue-green, expand-contract, versioned schemas |
| data-pipeline | Batch and streaming | Apache Spark, Kafka, Airflow, dbt integration |
| data-validation | Schema and quality | Great Expectations, Pandera, custom validators |
Zero-Downtime Patterns: Rolling migrations, feature flags, dual writes, backfill strategies
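For example, an expand-contract schema change could be requested like this (the table and column names are illustrative):

```
# Hypothetical zero-downtime migration request
/tools:db-migrate add users.email_verified column using an expand-contract strategy with backfill and a rollback plan
```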
| Command | Analysis Type | Optimization Techniques |
|---|---|---|
| performance-optimization | Full-stack profiling | Query optimization, caching strategies, CDN configuration |
| cost-optimize | Cloud resource analysis | Right-sizing, spot instances, reserved capacity planning |
| docker-optimize | Container performance | Build cache optimization, minimal base images, layer reduction |
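As an example, a combined performance-and-cost pass might chain the two areas (the service name is illustrative):

```
# Profile first, then right-size based on the findings
/workflows:performance-optimization profile checkout service latency and query hotspots
/tools:cost-optimize right-size checkout service resources after the latency fixes
```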
# Complete feature development pipeline
/workflows:feature-development user authentication system
/tools:security-scan authentication implementation
/tools:test-harness authentication test suite
/tools:docker-optimize authentication service
/tools:k8s-manifest authentication deployment
/tools:monitor-setup authentication metrics
MIT License - See LICENSE file for complete terms.
- Issues: GitHub Issues
- Contributions: Pull requests welcome following the development guidelines
- Questions: Open a discussion in the repository
Alternative AI tools for commands
Similar Open Source Tools

agents
The 'agents' repository is a comprehensive collection of 83 specialized AI subagents for Claude Code, providing domain-specific expertise across software development, infrastructure, and business operations. Each subagent incorporates current industry best practices, production-ready patterns, deep domain expertise, modern technology stacks, and optimized model selection based on task complexity.

airport
The 'airport' repository provides free Clash Meta nodes sourced from the internet, with testing every 6 hours to ensure quality and low latency. It includes features such as node deduplication, regional renaming, and geographical grouping.

PredictorLLM
PredictorLLM is an advanced trading agent framework that utilizes large language models to automate trading in financial markets. It includes a profiling module to establish agent characteristics, a layered memory module for retaining and prioritizing financial data, and a decision-making module to convert insights into trading strategies. The framework mimics professional traders' behavior, surpassing human limitations in data processing and continuously evolving to adapt to market conditions for superior investment outcomes.

hcaptcha-challenger
hCaptcha Challenger is a tool designed to gracefully face hCaptcha challenges using a multimodal large language model. It does not rely on Tampermonkey scripts or third-party anti-captcha services, instead implementing interfaces for 'AI vs AI' scenarios. The tool supports various challenge types such as image labeling, drag and drop, and advanced tasks like self-supervised challenges and Agentic Workflow. Users can access documentation in multiple languages and leverage resources for tasks like model training, dataset annotation, and model upgrading. The tool aims to enhance user experience in handling hCaptcha challenges with innovative AI capabilities.

crabml
Crabml is a llama.cpp compatible AI inference engine written in Rust, designed for efficient inference on various platforms with WebGPU support. It focuses on running inference tasks with SIMD acceleration and minimal memory requirements, supporting multiple models and quantization methods. The project is hackable, embeddable, and aims to provide high-performance AI inference capabilities.

flute
FLUTE (Flexible Lookup Table Engine for LUT-quantized LLMs) is a tool designed for uniform quantization and lookup table quantization of weights in lower-precision intervals. It offers flexibility in mapping intervals to arbitrary values through a lookup table. FLUTE supports various quantization formats such as int4, int3, int2, fp4, fp3, fp2, nf4, nf3, nf2, and even custom tables. The tool also introduces new quantization algorithms like Learned Normal Float (NFL) for improved performance and calibration data learning. FLUTE provides benchmarks, model zoo, and integration with frameworks like vLLM and HuggingFace for easy deployment and usage.

aikit
AIKit is a one-stop shop to quickly get started to host, deploy, build and fine-tune large language models (LLMs). AIKit offers two main capabilities: Inference: AIKit uses LocalAI, which supports a wide range of inference capabilities and formats. LocalAI provides a drop-in replacement REST API that is OpenAI API compatible, so you can use any OpenAI API compatible client, such as Kubectl AI, Chatbot-UI and many more, to send requests to open-source LLMs! Fine Tuning: AIKit offers an extensible fine tuning interface. It supports Unsloth for fast, memory efficient, and easy fine-tuning experience.

awsome-distributed-training
This repository contains reference architectures and test cases for distributed model training with Amazon SageMaker Hyperpod, AWS ParallelCluster, AWS Batch, and Amazon EKS. The test cases cover different types and sizes of models as well as different frameworks and parallel optimizations (Pytorch DDP/FSDP, MegatronLM, NemoMegatron...).

SecReport
SecReport is a platform for collaborative information security penetration testing report writing and exporting, powered by ChatGPT. It standardizes penetration testing processes, allows multiple users to edit reports, offers custom export templates, generates vulnerability summaries and fix suggestions using ChatGPT, and provides APP security compliance testing reports. The tool aims to streamline the process of creating and managing security reports for penetration testing and compliance purposes.

beet
Beet is a collection of crates for authoring and running web pages, games and AI behaviors. It includes crates like `beet_flow` for scenes-as-control-flow bevy library, `beet_spatial` for spatial behaviors, `beet_ml` for machine learning, `beet_sim` for simulation tooling, `beet_rsx` for authoring tools for html and bevy, and `beet_router` for file-based router for web docs. The `beet` crate acts as a base crate that re-exports sub-crates based on feature flags, similar to the `bevy` crate structure.

jailbreak_llms
This is the official repository for the ACM CCS 2024 paper 'Do Anything Now': Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models. The project employs a new framework called JailbreakHub to conduct the first measurement study on jailbreak prompts in the wild, collecting 15,140 prompts from December 2022 to December 2023, including 1,405 jailbreak prompts. The dataset serves as the largest collection of in-the-wild jailbreak prompts. The repository contains examples of harmful language and is intended for research purposes only.

YuLan-Mini
YuLan-Mini is a lightweight language model with 2.4 billion parameters that achieves performance comparable to industry-leading models despite being pre-trained on only 1.08T tokens. It excels in mathematics and code domains. The repository provides pre-training resources, including data pipeline, optimization methods, and annealing approaches. Users can pre-train their own language models, perform learning rate annealing, fine-tune the model, research training dynamics, and synthesize data. The team behind YuLan-Mini is AI Box at Renmin University of China. The code is released under the MIT License with future updates on model weights usage policies. Users are advised on potential safety concerns and ethical use of the model.

OpenAI-CLIP-Feature
This repository provides code for extracting image and text features using OpenAI CLIP models, supporting both global and local grid visual features. It aims to facilitate multi visual-and-language downstream tasks by allowing users to customize input and output grid resolution easily. The extracted features have shown comparable or superior results in image captioning tasks without hyperparameter tuning. The repo supports various CLIP models and provides detailed information on supported settings and results on MSCOCO image captioning. Users can get started by setting up experiments with the extracted features using X-modaler.

OneClickLLAMA
OneClickLLAMA is a tool designed to run local LLM models such as Qwen2.5 and SakuraLLM with ease. It can be used in conjunction with various OpenAI format translators and analyzers, including LinguaGacha and KeywordGacha. By following the setup guides provided on the page, users can optimize performance and achieve a 3-5 times speed improvement compared to default settings. The tool requires a minimum of 8GB dedicated graphics memory, preferably NVIDIA, and the latest version of graphics drivers installed. Users can download the tool from the release page, choose the appropriate model based on usage and memory size, and start the tool by selecting the corresponding launch script.

llm4regression
This project explores the capability of Large Language Models (LLMs) to perform regression tasks using in-context examples. It compares the performance of LLMs like GPT-4 and Claude 3 Opus with traditional supervised methods such as Linear Regression and Gradient Boosting. The project provides preprints and results demonstrating the strong performance of LLMs in regression tasks. It includes datasets, models used, and experiments on adaptation and contamination. The code and data for the experiments are available for interaction and analysis.
For similar tasks

apicat
ApiCat is an API documentation management tool that is fully compatible with the OpenAPI specification. With ApiCat, you can freely and efficiently manage your APIs. It integrates the capabilities of LLM, which not only helps you automatically generate API documentation and data models but also creates corresponding test cases based on the API content. Using ApiCat, you can quickly accomplish anything outside of coding, allowing you to focus your energy on the code itself.

tt-metal
TT-NN is a python & C++ Neural Network OP library. It provides a low-level programming model, TT-Metalium, enabling kernel development for Tenstorrent hardware.

mscclpp
MSCCL++ is a GPU-driven communication stack for scalable AI applications, redefining inter-GPU communication interfaces with a highly efficient and customizable stack for distributed GPU workloads. Its design is tailored to the diverse performance optimization scenarios encountered in state-of-the-art AI applications. MSCCL++ provides communication abstractions from the lowest level, close to hardware, up to the highest level, close to the application API. The lowest level of abstraction is ultra-lightweight, enabling a user to implement the data-movement logic of a collective operation such as AllReduce inside a GPU kernel extremely efficiently without worrying about the memory ordering of different ops, while its modular building blocks can be composed at a high level in Python and fed to a CUDA kernel to improve productivity. MSCCL++ offers fine-grained synchronous and asynchronous zero-copy, one-sided abstractions for communication primitives such as `put()`, `get()`, `signal()`, `flush()`, and `wait()`. The one-sided abstractions let a user asynchronously `put()` data on a remote GPU as soon as it is ready, without requiring the remote side to issue any receive instruction, making it easy to overlap communication with computation or implement customized collective algorithms without risking deadlocks. The zero-copy capability transfers data directly between user buffers without intermediate internal buffers, saving GPU bandwidth and memory capacity. The abstractions remain consistent regardless of whether the remote GPU is on the local node or a remote node, and whether the underlying link is NVLink/xGMI or InfiniBand, which simplifies inter-GPU communication code that is otherwise error-prone due to the memory ordering of GPU/CPU reads and writes.

mlir-air
This repository contains tools and libraries for building AIR platforms, runtimes and compilers.

free-for-life
A massive list of products and services that are completely free, organized into categories such as APIs, data & ML, artificial intelligence, BaaS, code editors, code generation, DNS, databases, design & UI, domains, email, fonts, resources for students, forms, Linux distributions, messaging & streaming, PaaS, payments & billing, and SSL.

AIMr
AIMr is an AI aimbot tool written in Python that leverages modern technologies to achieve an undetected system with a pleasing appearance. It works on any game that uses human-shaped models. To optimize its performance, users should build OpenCV with CUDA. For Valorant, additional perks in the Discord and an Arduino Leonardo R3 are required.

aika
AIKA (Artificial Intelligence for Knowledge Acquisition) is a new type of artificial neural network designed to mimic the behavior of a biological brain more closely and bridge the gap to classical AI. The network conceptually separates activations from neurons, creating two separate graphs to represent acquired knowledge and inferred information. It uses different types of neurons and synapses to propagate activation values, binding signals, causal relations, and training gradients. The network structure allows for flexible topology and supports the gradual population of neurons and synapses during training.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise-level infrastructure that can power any LLM production use case, for example: setting LLM usage limits for users on different pricing tiers, tracking LLM usage on a per-user and per-organization basis, blocking or redacting requests containing PII, improving LLM reliability with failovers, retries and caching, and distributing API keys with rate limits and cost limits for internal development, production, or student use.

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.