
# zcf (Zero-Config Claude-Code Flow)

Stars: 1707

ZCF (Zero-Config Claude-Code Flow) provides zero-configuration, one-click setup for Claude Code with bilingual support, an intelligent agent system, and a personalized AI assistant. It offers an interactive menu for everyday operations and direct commands for quick execution, supports bilingual operation with automatic language switching, and lets you customize AI output styles. ZCF also bundles the BMad Workflow (an enterprise-grade workflow system), Spec Workflow for structured feature development, CCR (Claude Code Router) support for proxy routing, and CCometixLine for real-time usage tracking. It provides smart installation, complete configuration management, professional agents, a command system, and smart configuration; it is cross-platform (including Windows and Termux) and includes security features such as a dangerous-operation confirmation mechanism.

## README

Chinese | English | Japanese | Changelog
Zero-config, one-click setup for Claude Code with bilingual support, intelligent agent system and personalized AI assistant
npx zcf # Open interactive menu and choose operations based on your needs
Menu options include:
- `1` Full initialization (equivalent to `zcf i`)
- `2` Import workflows (equivalent to `zcf u`)
- `3` Configure API or CCR - API configuration or CCR proxy setup
- `4` Configure MCP - MCP service configuration and management
- `5` Configure default model - Set default model (opus/sonnet/opusplan/custom)
- `6` Configure AI memory - Configure AI output language and global output style
- `7` Configure environment permissions - Import environment variables and permissions
- `R` Claude Code Router management (enhanced in v2.8.1)
- `U` ccusage - Claude Code usage analysis
- `L` CCometixLine - High-performance statusline tool with Git integration and real-time usage tracking (v2.9.9+ new)
- `+` Check updates - Check and update Claude Code, CCR and CCometixLine versions (v2.9.9+ enhanced)
- More features...
Model Configuration (Option 5): Configure your default Claude model with flexible options:
- Default: Let Claude Code automatically choose the best model for each task
- Opus: Use the Opus model exclusively (high token consumption, use with caution)
- OpusPlan: Use Opus for planning, Sonnet for implementation (recommended balance)
- Custom: Specify your own model names for both primary and fast tasks (supports any custom model)
AI Memory Configuration (Option 6): Personalize your AI assistant:
- AI Output Language: Set the language for AI responses (Chinese, English, or custom)
- Global Output Style: Configure AI personality and response style
npx zcf i # Execute full initialization directly: Install Claude Code + Import workflows + Configure API + Set up MCP services
# or
npx zcf → select 1 # Execute full initialization via menu
npx zcf u # Update workflows only: Quick add AI workflows and command system
# or
npx zcf → select 2 # Execute workflow update via menu
Note:
- Since v2.0, `zcf` opens the interactive menu by default, providing a visual operation interface
- You can choose operations through the menu or use commands directly for quick execution: `zcf i` = full initialization, `zcf u` = update workflows only
ZCF supports bilingual operation with automatic language switching for all commands:
# Use Chinese for all operations
npx zcf --lang zh-CN # Interactive menu in Chinese
npx zcf init --lang zh-CN # Initialize with Chinese interface
npx zcf ccr --all-lang zh-CN # Configure CCR in Chinese
# Language parameter priority (highest to lowest):
# --all-lang > --lang > saved user preference > interactive prompt
Language Parameters:
- `--lang, -l`: ZCF interface language (applies to all commands)
- `--all-lang, -g`: Set all language parameters at once (most convenient)
- `--config-lang, -c`: Template files language (init/update commands only)
- `--ai-output-lang, -a`: AI assistant output language (init command only)
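As a hedged illustration of how these flags combine under the priority order above (the flag names come from this README; the custom output-language value is just an example):

```bash
# Set every language-related option at once (interface, templates, AI output)
npx zcf init --all-lang zh-CN

# Mix individual flags: English interface, Chinese templates, custom AI output language
npx zcf init --lang en --config-lang zh-CN --ai-output-lang "Français"
```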
For CI/CD and automated setups, use `--skip-prompt` with parameters:
# Shorthand version
npx zcf i -s -g zh-CN -t api_key -k "sk-xxx" -u "https://xxx.xxx"
# Complete version
npx zcf i --skip-prompt --all-lang zh-CN --api-type api_key --api-key "sk-xxx" --api-url "https://xxx.xxx"
When using `--skip-prompt`, the following parameters are available:

| Parameter | Description | Values | Required | Default |
| --- | --- | --- | --- | --- |
| `--skip-prompt, -s` | Skip all interactive prompts | - | Yes (for non-interactive mode) | - |
| `--lang, -l` | ZCF display language (applies to all commands) | `zh-CN`, `en` | No | `en` or user's saved preference |
| `--config-lang, -c` | Configuration language (template files language) | `zh-CN`, `en` | No | `en` |
| `--ai-output-lang, -a` | AI output language | `zh-CN`, `en`, custom string | No | `en` |
| `--all-lang, -g` | Set all language parameters (applies to all commands) | `zh-CN`, `en`, custom string | No | - (priority: allLang > lang > user preference > prompt; a custom string sets the AI output language to that value while interaction and config languages remain `en`) |
| `--config-action, -r` | Config handling | `new`, `backup`, `merge`, `docs-only`, `skip` | No | `backup` |
| `--api-type, -t` | API configuration type | `auth_token`, `api_key`, `ccr_proxy`, `skip` | No | `skip` |
| `--api-key, -k` | API key (for both API key and auth token types) | string | Required when api-type is not `skip` | - |
| `--api-url, -u` | Custom API URL | URL string | No | official API |
| `--mcp-services, -m` | MCP services to install (multi-select, comma-separated) | `context7`, `open-websearch`, `spec-workflow`, `mcp-deepwiki`, `Playwright`, `exa`, or `skip` for none | No | all |
| `--workflows, -w` | Workflows to install (multi-select, comma-separated) | `commonTools`, `sixStepsWorkflow`, `featPlanUx`, `gitWorkflow`, `bmadWorkflow`, or `skip` for none | No | all |
| `--output-styles, -o` | Output styles to install (multi-select, comma-separated) | `engineer-professional`, `nekomata-engineer`, `laowang-engineer`, or `skip` for none | No | all |
| `--default-output-style, -d` | Default output style | Same as output styles plus built-in: `default`, `explanatory`, `learning` | No | `engineer-professional` |
| `--install-cometix-line, -x` | Install CCometixLine statusline tool | `true`, `false` | No | `true` |
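Building on the table above, here is a hedged sketch of a fuller non-interactive setup (the key and URL are placeholders; adjust the selections to your needs):

```bash
npx zcf i --skip-prompt \
  --all-lang en \
  --api-type api_key \
  --api-key "sk-xxx" \
  --api-url "https://api.example.com" \
  --config-action backup \
  --mcp-services context7,mcp-deepwiki \
  --workflows commonTools,gitWorkflow \
  --output-styles engineer-professional \
  --default-output-style engineer-professional \
  --install-cometix-line false
```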
ZCF now supports customizable AI output styles to personalize your Claude Code experience:
Available Output Styles:
- `engineer-professional`: Professional software engineer following SOLID, KISS, DRY, YAGNI principles
- `nekomata-engineer`: Professional catgirl engineer UFO Nya, combining rigorous engineering with cute catgirl traits
- `laowang-engineer`: Laowang grumpy tech style, never tolerates code errors and non-standard code
- Built-in styles: `default`, `explanatory`, `learning` (always available)
Features:
- Install multiple styles and switch between them
- Set global default style for all projects
- Automatic cleanup of legacy personality files
- Template-based customization system
Usage Tips:
- Use the `/output-style` command to switch project-level output styles anytime
- Or modify the global output style via ZCF menu option 6
Important:
- Claude Code must be newer than version 1.0.81 to support output styles. Use `npx zcf check-updates` to update.
- Legacy global memory rules have been migrated to the `engineer-professional` output style, solving issues with excessive token usage and AI forgetting global memory.
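For a non-interactive setup, the output-style flags from the `--skip-prompt` table above can be combined; a hedged sketch:

```bash
# Install two styles and make one of them the global default
npx zcf i -s -g en -t skip \
  --output-styles engineer-professional,nekomata-engineer \
  --default-output-style nekomata-engineer
```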
BMad (BMad-Method: Universal AI Agent Framework) is an enterprise-grade workflow system that provides:
- Complete team of specialized AI agents (PO, PM, Architect, Dev, QA, etc.)
- Structured development process with quality gates
- Automatic documentation generation
- Support for both greenfield and brownfield projects
After installation, use `/bmad-init` to initialize the BMad workflow in your project.
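A hedged sketch of installing only the BMad workflow non-interactively (flags from the `--skip-prompt` table), then initializing it from inside Claude Code:

```bash
# Install just the BMad workflow during a non-interactive init
npx zcf i -s -g en -t skip -w bmadWorkflow
# Then, inside a Claude Code session in your project:
#   /bmad-init
```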
Spec Workflow is a comprehensive MCP service that provides structured feature development workflow from requirements to implementation:
- Requirements Analysis: Structured requirements gathering and documentation
- Design Phase: Detailed technical design and architecture planning
- Task Management: Automatic task breakdown and progress tracking
- Implementation Workflow: Systematic approach from requirements to implementation
- Interactive Dashboard: Built-in dashboard for workflow visualization and management
- Approval System: Review and approval process for each development phase
The Spec Workflow MCP provides an optional dashboard for workflow visualization. Users can manually launch the dashboard using:
npx -y @pimzino/spec-workflow-mcp@latest --dashboard
Alternatively, you can install the VS Code extension for integrated workflow management.
Usage Guide: For detailed usage instructions and best practices, see the official Spec Workflow documentation.
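A hedged sketch of selecting only the Spec Workflow MCP service during a non-interactive init, and then launching the optional dashboard manually:

```bash
# Install only the spec-workflow MCP service (value from the --skip-prompt table)
npx zcf i -s -g en -t skip -m spec-workflow

# Optional: launch the dashboard (command given earlier in this README)
npx -y @pimzino/spec-workflow-mcp@latest --dashboard
```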
Open Web Search is a versatile web search MCP service that provides access to multiple search engines:
- Multi-Engine Support: Search across DuckDuckGo, Bing, and Brave search engines
- Privacy-Focused: Uses privacy-respecting search engines as defaults
- Flexible Configuration: Customizable search engine preferences
- No API Key Required: Ready to use without additional authentication
- Search Aggregation: Ability to combine results from multiple engines
CCR is a powerful proxy router that enables:
- Free Model Access: Use free AI models (like Gemini, DeepSeek) through Claude Code interface
- Custom Routing: Route different types of requests to different models based on your rules
- Cost Optimization: Significantly reduce API costs by using appropriate models for different tasks
- Easy Management: Interactive menu for CCR configuration and service control
- Auto Updates: Automatic version checking and updates for CCR and Claude Code (v2.8.1+)
To access CCR features:
npx zcf ccr # Open CCR management menu
# or
npx zcf → select R
CCR menu options:
- Initialize CCR - Install and configure CCR with preset providers
- Start UI - Launch CCR web interface for advanced configuration
- Service Control - Start/stop/restart CCR service
- Check Status - View current CCR service status
After CCR setup, ZCF automatically configures Claude Code to use CCR as the API proxy.
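For scripted setups, `ccr_proxy` is also a valid `--api-type` in non-interactive mode; a hedged sketch (any additional CCR provider configuration you may still need afterwards is not covered here):

```bash
# Interactive CCR management
npx zcf ccr

# Non-interactive init that selects CCR as the API layer
npx zcf i -s -g en -t ccr_proxy
```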
Important Notice for v2.9.9 Users: If you previously installed CCometixLine using ZCF v2.9.9, please rerun the installation process to ensure the CCometixLine configuration is correctly added. Run `npx zcf` → select `L` → select `1` to add the CCometixLine configuration.
CCometixLine is a high-performance Rust-based statusline tool that provides:
- Real-time Usage Tracking: Monitor Claude Code API usage in real-time
- Git Integration: Display Git status and branch information
- Status Line Display: Native integration with your terminal statusline
- Performance Optimized: Built with Rust for minimal resource usage
- TUI Configuration: Interactive terminal UI for customizing themes, segments, and display options
- Auto Updates: Included in ZCF's update checking system
CCometixLine menu options (accessible via `npx zcf` → `L`):
- `1` Install or Update - Install or update CCometixLine using npm
- `2` Print Default Configuration - Display the current CCometixLine configuration
- `3` Custom Config - TUI Configuration Mode - Interactive terminal UI for customizing settings
Important Note for v2.9.9 Users: If you have previously used ZCF v2.9.9 to set up your environment, please re-run the initialization process to ensure the CCometixLine configuration is properly added. Run `npx zcf` and select the appropriate setup option to update your configuration with CCometixLine support.
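A hedged sketch of making sure CCometixLine is installed without going through the menu (the flag comes from the `--skip-prompt` table):

```bash
# Include the statusline tool in a non-interactive init
npx zcf i -s -g en -t skip --install-cometix-line true

# Or open the interactive menu and pick L → 1 (Install or Update)
npx zcf
```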
npx zcf check-updates # Check and update Claude Code, CCR and CCometixLine to latest versions
# or
npx zcf → select +
Full initialization (`npx zcf i`) will automatically:
- ✅ Detect and install Claude Code
- ✅ Select AI output language (new feature)
- ✅ Configure API keys or CCR proxy
- ✅ Select and configure MCP services
- ✅ Set up all necessary configuration files
After configuration, the following commands are available (a usage sketch follows this list):
- For first-time project use, it is strongly recommended to run `/init-project` to generate CLAUDE.md so the AI better understands the project architecture
- `<task description>` - Execute directly without a workflow, following SOLID, KISS, DRY, and YAGNI principles; suitable for small tasks like bug fixes
- `/feat <task description>` - Start new feature development, divided into plan and UI phases
- `/workflow <task description>` - Execute the complete development workflow; not automated, starts with multiple solution options, asks for user feedback at each step, allows plan modifications, maximum control
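These commands run inside a Claude Code session rather than in your shell; the task descriptions below are hypothetical examples:

```bash
# Inside Claude Code (not a shell prompt):
#   /init-project                      # generate CLAUDE.md for the project
#   /feat add a dark-mode toggle       # structured feature development (plan + UI phases)
#   /workflow migrate auth to OAuth    # full six-phase workflow with feedback at each step
```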
PS:
- Both feat and workflow have their advantages; try both and compare
- Generated documents are located by default at `.claude/xxx.md` in the project root; you can add `.claude/` to your project's `.gitignore`
- Script interaction language: Controls installation prompts language
- Configuration file language: Determines which configuration set to install (zh-CN/en)
- AI output language: Choose the language for AI responses (supports Chinese, English, and custom languages)
- AI output styles: Support multiple preset styles (Professional Engineer, Nekomata Engineer, Laowang Engineer) for customized experience
- Auto-detects Claude Code installation status
- Uses npm for automatic installation (ensures compatibility)
- Cross-platform support (Windows/macOS/Linux/WSL/Termux)
- Automatic MCP service configuration
- Smart configuration merging and partial modification support (v2.0 new)
- Enhanced command detection mechanism (v2.1 new)
- Dangerous operation confirmation mechanism (v2.3 new)
- `CLAUDE.md` - system instructions
- `settings.json` - configuration file
- `commands/` - custom commands
- `agents/` - AI agent configurations
- Supports two authentication methods:
- Auth Token: For tokens obtained via OAuth or browser login
- API Key: For API keys from Anthropic Console
- Custom API URL support
- Support for manual configuration later
- Partial modification: Update only needed configuration items (v2.0 new)
- Smart backup of existing configurations (all backups saved in ~/.claude/backup/)
- Configuration merge option (v2.0 enhanced: supports deep merge)
- Safe overwrite mechanism
- Automatic backup before MCP configuration changes
- Default model configuration (v2.0 new)
- AI memory management (v2.0 new)
- ZCF cache cleanup (v2.0 new)
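For example, a hedged sketch of re-running initialization while merging with (rather than overwriting) an existing configuration, using the `--config-action` values from the `--skip-prompt` table:

```bash
# Deep-merge with the existing config and skip API reconfiguration
npx zcf i -s -g en -t skip -r merge
```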
$ npx zcf
ZCF - Zero-Config Claude-Code Flow
? Select ZCF display language / 选择ZCF显示语言:
❯ 简体中文
English
Select function:
-------- Claude Code --------
1. Full initialization - Install Claude Code + Import workflow + Configure API or CCR proxy + Configure MCP services
2. Import workflow - Import/update workflow-related files only
3. Configure API - Configure API URL and authentication (supports CCR proxy)
4. Configure MCP - Configure MCP services (includes Windows fix)
5. Configure default model - Set default model (opus/sonnet/opusplan/custom)
6. Configure Claude global memory - Configure AI output language and output styles
7. Import recommended environment variables and permissions - Import privacy protection environment variables and system permissions
--------- Other Tools ----------
R. CCR - Claude Code Router management
U. ccusage - Claude Code usage analysis
L. CCometixLine - High-performance statusline tool with Git integration and real-time usage tracking
------------ ZCF ------------
0. Select display language / 更改显示语言 - Change ZCF interface language
-. Uninstall - Remove Claude Code configurations and tools from system
+. Check updates - Check and update Claude Code, CCR and CCometixLine versions
Q. Exit
Enter your choice: _
? Select Claude Code configuration language:
❯ 简体中文 (zh-CN) - Chinese (easier for Chinese users to customize)
English (en) - English (recommended, lower token consumption)
? Select AI output language:
AI will respond to you in this language
❯ 简体中文
English
Custom
(Supports Japanese, French, German, and more)
? Claude Code not found. Install automatically? (Y/n)
✔ Claude Code installed successfully
? Existing config detected. How to proceed?
❯ Backup and overwrite - Backup existing config to ~/.claude/backup/
Update docs only - Only update workflows and docs, keep existing API config
Merge config - Merge with existing config, preserve user customizations
Skip - Skip configuration update
? Select API authentication method
❯ Use Auth Token (OAuth authentication)
For tokens obtained via OAuth or browser login
Use API Key (Key authentication)
For API keys from Anthropic Console
Configure CCR Proxy (Claude Code Router)
Use free models and custom routing to reduce costs and explore the possibilities of Claude Code
Skip (configure manually later)
? Enter API URL: https://api.anthropic.com
? Enter Auth Token or API Key: xxx
? Select output styles to install:
❯ Engineer Professional - Professional software engineer following SOLID, KISS, DRY, YAGNI principles
Nekomata Engineer - Professional catgirl engineer UFO Nya, combining rigorous engineering with cute catgirl traits
Laowang Grumpy Tech - Laowang grumpy tech style, never tolerates code errors and non-standard code
? Select global default output style:
❯ Engineer Professional
? Configure MCP services? (Y/n)
? Select MCP services to install:
❯ context7 - Get latest library and framework documentation
mcp-deepwiki - Access deepwiki.com knowledge base
Playwright - Browser automation and web testing
exa - Advanced search and enterprise research tools
? Select workflows to install:
❯ Common Tools Workflow - init-project and related agents
Six Steps Workflow - Complete six-stage development process
Feature Planning UX - Complete feature development lifecycle
Git Workflow - Git operations and branch management
BMad Workflow - AI-driven agile development methodology
? Install CCometixLine statusline tool? (Y/n)
✔ Setup complete! Claude Code environment is ready
✔ All config files backed up to ~/.claude/backup/xxx
✔ Config files copied to ~/.claude
? Select workflows to install (space to select, enter to confirm)
❯ ◉ Common Tools (init-project + init-architect + get-current-datetime) - Essential project initialization and utility commands
  ◉ Six Steps Workflow (workflow) - Complete 6-phase development process
  ◉ Feature Planning and UX Design (feat + planner + ui-ux-designer) - Structured feature development
  ◉ Git Commands (commit + rollback + cleanBranches + worktree) - Streamlined Git operations
  ◉ BMAD-Method Extension Installer - Enterprise agile development workflow
✔ Installing workflows...
✔ Installed command: zcf/workflow.md
✔ Installed command: zcf/feat.md
✔ Installed agent: zcf/plan/planner.md
✔ Installed agent: zcf/plan/ui-ux-designer.md
✔ Installed command: zcf/git/git-commit.md
✔ Installed command: zcf/git/git-rollback.md
✔ Installed command: zcf/git/git-cleanBranches.md
✔ Installed command: zcf/git/git-worktree.md
✔ Installed command: zcf/bmad-init.md
✔ Workflow installation successful
✔ API configured
? Configure MCP services? (Y/n)
? Select MCP services to install (space to select, enter to confirm)
❯ ◯ Install all
  ◯ Context7 Documentation Query - Query latest library docs and code examples
  ◯ DeepWiki - Query GitHub repository docs and examples
  ◯ Playwright Browser Control - Direct browser automation control
  ◯ Exa AI Search - Web search using Exa AI
? Enter Exa API Key (get from https://dashboard.exa.ai/api-keys)
✔ MCP services configured
🎉 Setup complete! Use 'claude' command to start.
### Command Line Options
#### Commands Quick Reference
| Command | Alias | Description |
| ------------------- | ------- | ------------------------------------------------------------------------------------- |
| `zcf` | - | Show interactive menu (v2.0 default command) |
| `zcf init` | `zcf i` | Initialize Claude Code configuration |
| `zcf update` | `zcf u` | Update workflow-related md files with backup |
| `zcf ccu` | - | Run Claude Code usage analysis tool - [ccusage](https://github.com/ryoppippi/ccusage) |
| `zcf ccr` | - | Open CCR (Claude Code Router) management menu |
| `zcf uninstall` | - | Interactive uninstall tool for Claude Code configurations and tools |
| `zcf check-updates` | - | Check and update Claude Code, CCR and CCometixLine versions |
#### Common Options
```bash
# Specify configuration language
npx zcf --config-lang zh-CN
npx zcf -c zh-CN # Using short option
# Force overwrite existing configuration
npx zcf --force
npx zcf -f # Using short option
# Update workflow-related md files with backup (preserve API and MCP configs)
npx zcf u # Using update command
npx zcf update # Full command
# Show help information
npx zcf --help
npx zcf -h
# Show version
npx zcf --version
npx zcf -v
# Show interactive menu (default)
npx zcf
# First-time installation, complete initialization
npx zcf i
npx zcf init # Full command
# Update workflow-related md files with backup, keep API and MCP configs
npx zcf u
npx zcf update # Full command
# Force reinitialize with Chinese config
npx zcf i --config-lang zh-CN --force
npx zcf i -c zh-CN -f # Using short options
# Update to English prompts (lower token consumption)
npx zcf u --config-lang en
npx zcf u -c en # Using short option
# Run Claude Code usage analysis tool (powered by ccusage)
npx zcf ccu # Daily usage (default), or use: monthly, session, blocks
```
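The README notes that ccusage supports daily, monthly, session, and blocks views; a hedged sketch of passing those through:

```bash
npx zcf ccu            # daily usage (default)
npx zcf ccu monthly    # monthly aggregation
npx zcf ccu session    # per-session breakdown
npx zcf ccu blocks     # blocks view (see the ccusage docs for details)
```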
#### Project Structure

```
zcf/
├── README.md          # Documentation
├── package.json       # npm package configuration
├── bin/
│   └── zcf.mjs        # CLI entry point
├── src/               # Source code
│   ├── cli.ts         # CLI main logic
│   ├── commands/      # Command implementations
│   ├── utils/         # Utility functions
│   └── constants.ts   # Constant definitions
├── templates/         # Configuration templates
│   ├── CLAUDE.md      # Project level config (v2.0 new)
│   ├── settings.json  # Base configuration (with privacy env vars)
│   ├── en/            # English version
│   │   ├── rules.md           # Core principles (formerly CLAUDE.md)
│   │   ├── output-styles/     # AI output styles (v2.12+ new)
│   │   ├── mcp.md             # MCP services guide (v2.0 new)
│   │   ├── agents/            # AI agents
│   │   └── commands/          # Command definitions
│   └── zh-CN/         # Chinese version
│       └── ... (same structure)
└── dist/              # Build output
```
- Task Planner: Breaks down complex tasks into executable steps
- UI/UX Designer: Provides professional interface design guidance
- AI Personality: Supports multiple preset personalities and custom ones (v2.0 new)
- BMad Team (New): Complete agile development team including:
  - Product Owner (PO): Requirements elicitation and prioritization
  - Project Manager (PM): Planning and coordination
  - System Architect: Technical design and architecture
  - Developer: Implementation and coding
  - QA Engineer: Testing and quality assurance
  - Scrum Master (SM): Process facilitation
  - Business Analyst: Requirements analysis
  - UX Expert: User experience design
- Feature Development (`/feat`): Structured new feature development
- Workflow (`/workflow`): Complete six-phase development workflow
- Git Commands: Streamlined Git operations
  - `/git-commit`: Smart commit with automatic staging and message generation
  - `/git-rollback`: Safely roll back to previous commits with backup
  - `/git-cleanBranches`: Clean up merged branches and maintain repository hygiene
  - `/git-worktree`: Manage Git worktrees with IDE integration and content migration
- BMad Workflow (`/bmad-init`): Initialize the BMad workflow for enterprise development
  - Supports both greenfield (new projects) and brownfield (existing projects)
  - Provides comprehensive templates for PRDs, architecture docs, and user stories
  - Integrated quality gates and checklist system
- API key management (supports partial modification)
- Fine-grained permission control
- Multiple Claude model support (configurable default model)
- Interactive menu system (v2.0 new)
- AI memory management (v2.0 new)
- [Mode: Research] - Understand requirements
- [Mode: Ideate] - Design solutions
- [Mode: Plan] - Create detailed plan
- [Mode: Execute] - Implement development
- [Mode: Optimize] - Improve quality
- [Mode: Review] - Final assessment
# Clone the project
git clone https://github.com/UfoMiao/zcf.git
cd zcf
# Install dependencies (using pnpm)
pnpm install
# Build project
pnpm build
# Local testing
node bin/zcf.mjs
- Task Breakdown: Keep tasks independent and testable
- Code Quality: Follow SOLID, KISS, DRY, and YAGNI principles
- Documentation Management: The plan will be stored in the `.claude/plan/` directory at the project root
If you encounter issues:
- Re-run `npx zcf` to reconfigure
- Check configuration files in the `~/.claude/` directory
- Ensure Claude Code is properly installed
- If paths contain spaces, ZCF will automatically handle quote wrapping
- Prefer ripgrep (`rg`) for file searching for better performance
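A hedged set of quick checks when debugging (the paths are the ones documented in this README):

```bash
# Inspect the generated configuration
ls ~/.claude/
cat ~/.claude/settings.json

# Backups created by ZCF live here
ls ~/.claude/backup/

# Re-run the updater if tool versions look stale
npx zcf check-updates
```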
ZCF fully supports the Windows platform:
- Auto-detection: Automatically uses the compatible `cmd /c npx` format on Windows systems
- Config repair: Existing incorrect configurations are automatically fixed during updates
- Zero-config: Windows users don't need any extra steps; same experience as macOS/Linux

If you encounter MCP connection issues on Windows, running `npx zcf` will automatically fix the configuration format.
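As a hedged illustration (the exact JSON shape is an assumption, not quoted from this README), the Windows fix amounts to wrapping MCP server launch commands in `cmd /c`:

```bash
# Conceptually, an MCP entry that would otherwise invoke npx directly, e.g.
#   "command": "npx", "args": ["-y", "<mcp-server-package>"]
# is rewritten to the Windows-compatible form
#   "command": "cmd", "args": ["/c", "npx", "-y", "<mcp-server-package>"]
# Re-running ZCF applies this repair automatically:
npx zcf
```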
ZCF now provides comprehensive support for Windows Subsystem for Linux (WSL):
- Smart Detection: Multi-layered WSL environment detection using environment variables, system files, and mount points
- Distribution Recognition: Automatically identifies WSL distribution (Ubuntu, Debian, etc.) for optimized configuration
- Seamless Installation: Native Linux-style installation experience within WSL environment
- Path Management: Intelligent handling of WSL-specific configuration paths and file locations
If running in WSL, ZCF will automatically detect the environment and display appropriate installation messages.
ZCF now supports running in Android Termux environment:
- Auto-adaptation: Automatically detects Termux environment and uses compatible configuration
- Enhanced detection: Intelligently identifies available commands, ensuring normal operation in restricted environments
- Full functionality: Enjoy the same complete features in Termux as on desktop systems
To protect user data security, the following operations require explicit confirmation:
- File System: Delete files/directories, bulk modifications, move system files
- Code Commits: `git commit`, `git push`, `git reset --hard`
- System Config: Modify environment variables, system settings, permissions
- Data Operations: Database deletions, schema changes, bulk updates
- Network Requests: Send sensitive data, call production APIs
- Package Management: Global install/uninstall, update core dependencies
This project is inspired by and incorporates the following open source projects:
Thanks to these community contributors for sharing!
If you find this project helpful, please consider sponsoring its development. Your support is greatly appreciated!
A huge thank you to all our sponsors for their generous support!
- Tc (first sponsor)
- Argolinhas (first ko-fi sponsor ٩(•̤̀ᵕ•̤́๑))
- r*r (first anonymous sponsor🤣)
- **康 (first KFC sponsor🍗)
- 16°C coffee (My best friend🤪, offered Claude Code max $200 package)
If this project helps you, please give me a ⭐️ Star!