
zcf
Zero-Config Claude-Code Flow
Stars: 1199

ZCF (Zero-Config Claude-Code Flow) provides zero-configuration, one-click setup for Claude Code, with bilingual support, an intelligent agent system, and a personalized AI assistant. It offers an interactive menu for common operations and direct commands for quick execution, with automatic language switching and customizable AI output styles. ZCF also bundles the BMad enterprise-grade workflow system, Spec Workflow for structured feature development, CCR (Claude Code Router) support for proxy routing, and CCometixLine for real-time usage tracking. It handles smart installation and complete configuration management, is cross-platform (Windows, macOS, Linux, and Termux), and includes safety features such as a dangerous-operation confirmation mechanism.
README:
Zero-config, one-click setup for Claude Code with bilingual support, intelligent agent system and personalized AI assistant
npx zcf # Open interactive menu and choose operations based on your needs
Menu options include:
- 1 Full initialization (equivalent to `zcf i`)
- 2 Import workflows (equivalent to `zcf u`)
- 3-7 Configuration management (API/CCR, MCP, model settings, AI output style, environment permissions, etc.)
- R Claude Code Router management (enhanced in v2.8.1)
- U ccusage - Claude Code usage analysis
- L CCometixLine - High-performance statusline tool with Git integration and real-time usage tracking (v2.9.9+ new)
- + Check updates - Check and update Claude Code, CCR and CCometixLine versions (v2.9.9+ enhanced)
- More features...
npx zcf i # Execute full initialization directly: Install Claude Code + Import workflows + Configure API + Set up MCP services
# or
npx zcf → select 1 # Execute full initialization via menu
npx zcf u # Update workflows only: Quick add AI workflows and command system
# or
npx zcf → select 2 # Execute workflow update via menu
Note:
- Since v2.0, `zcf` opens the interactive menu by default, providing a visual operation interface
- You can choose operations through the menu or use commands directly for quick execution: `zcf i` = full initialization, `zcf u` = update workflows only
ZCF supports bilingual operation with automatic language switching for all commands:
# Use Chinese for all operations
npx zcf --lang zh-CN # Interactive menu in Chinese
npx zcf init --lang zh-CN # Initialize with Chinese interface
npx zcf ccr --all-lang zh-CN # Configure CCR in Chinese
# Language parameter priority (highest to lowest):
# --all-lang > --lang > saved user preference > interactive prompt
Language Parameters:
- `--lang, -l`: ZCF interface language (applies to all commands)
- `--all-lang, -g`: Set all language parameters at once (most convenient)
- `--config-lang, -c`: Template files language (init/update commands only)
- `--ai-output-lang, -a`: AI assistant output language (init command only)
For CI/CD and automated setups, use `--skip-prompt` with parameters:
# Shorthand version
npx zcf i -s -g zh-CN -t api_key -k "sk-xxx" -u "https://xxx.xxx"
# Complete version
npx zcf i --skip-prompt --all-lang zh-CN --api-type api_key --api-key "sk-xxx" --api-url "https://xxx.xxx"
When using `--skip-prompt`, the following parameters are available:
| Parameter | Description | Values | Required | Default |
| --- | --- | --- | --- | --- |
| `--skip-prompt, -s` | Skip all interactive prompts | - | Yes (for non-interactive mode) | - |
| `--lang, -l` | ZCF display language (applies to all commands) | `zh-CN`, `en` | No | `en` or user's saved preference |
| `--config-lang, -c` | Configuration language (template files language) | `zh-CN`, `en` | No | `en` |
| `--ai-output-lang, -a` | AI output language | `zh-CN`, `en`, custom string | No | `en` |
| `--all-lang, -g` | Set all language parameters (applies to all commands) | `zh-CN`, `en`, custom string | No | - (priority: allLang > lang > user preference > prompt; a custom string sets the AI output language to that value while interaction and config languages remain `en`) |
| `--config-action, -r` | Config handling | `new`, `backup`, `merge`, `docs-only`, `skip` | No | `backup` |
| `--api-type, -t` | API configuration type | `auth_token`, `api_key`, `ccr_proxy`, `skip` | No | `skip` |
| `--api-key, -k` | API key (for both API key and auth token types) | string | Required when api-type is not `skip` | - |
| `--api-url, -u` | Custom API URL | URL string | No | official API |
| `--mcp-services, -m` | MCP services to install (multi-select, comma-separated) | `context7`, `mcp-deepwiki`, `Playwright`, `exa`, or `skip` for none | No | all |
| `--workflows, -w` | Workflows to install (multi-select, comma-separated) | `commonTools`, `sixStepsWorkflow`, `featPlanUx`, `gitWorkflow`, `bmadWorkflow`, or `skip` for none | No | all |
| `--output-styles, -o` | Output styles to install (multi-select, comma-separated) | `engineer-professional`, `nekomata-engineer`, `laowang-engineer`, or `skip` for none | No | all |
| `--default-output-style, -d` | Default output style | Same as output styles plus built-in: `default`, `explanatory`, `learning` | No | `engineer-professional` |
| `--install-cometix-line, -x` | Install CCometixLine statusline tool | `true`, `false` | No | `true` |
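Putting the flags together, a fully non-interactive run might look like the sketch below. It is shown as a dry run that only prints the command; the API key and URL are placeholders, and the service/workflow selections are just one possible combination:

```shell
# Dry-run sketch: compose a non-interactive init from the flags documented above.
# The key and URL are placeholders - substitute real values, then run the
# printed command (or drop the echo and execute it directly).
CMD="npx zcf i --skip-prompt \
  --all-lang en \
  --api-type api_key --api-key sk-placeholder --api-url https://api.example.com \
  --mcp-services context7,mcp-deepwiki \
  --workflows commonTools,gitWorkflow \
  --output-styles engineer-professional \
  --default-output-style engineer-professional \
  --install-cometix-line true"
echo "$CMD"
```

Because `--config-action` is omitted, the default `backup` behavior applies, so an existing configuration would be backed up before being overwritten.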
ZCF now supports customizable AI output styles to personalize your Claude Code experience:
Available Output Styles:
- `engineer-professional`: Professional software engineer following SOLID, KISS, DRY, YAGNI principles
- `nekomata-engineer`: Professional catgirl engineer UFO Nya, combining rigorous engineering with cute catgirl traits
- `laowang-engineer`: Laowang's grumpy tech style; never tolerates code errors or non-standard code
- Built-in styles: `default`, `explanatory`, `learning` (always available)
Features:
- Install multiple styles and switch between them
- Set global default style for all projects
- Automatic cleanup of legacy personality files
- Template-based customization system
Usage Tips:
- Use the `/output-style` command to switch project-level output styles anytime
- Or modify the global output style via ZCF menu option 6
Important:
- Claude Code must be newer than 1.0.81 to support output styles; use `npx zcf check` to update.
- Legacy global memory rules have been migrated into the `engineer-professional` output style, solving issues with excessive token usage and the AI forgetting global memory.
BMad (BMad-Method: Universal AI Agent Framework) is an enterprise-grade workflow system that provides:
- Complete team of specialized AI agents (PO, PM, Architect, Dev, QA, etc.)
- Structured development process with quality gates
- Automatic documentation generation
- Support for both greenfield and brownfield projects
After installation, use `/bmad-init` to initialize the BMad workflow in your project.
Spec Workflow is a comprehensive MCP service that provides structured feature development workflow from requirements to implementation:
- Requirements Analysis: Structured requirements gathering and documentation
- Design Phase: Detailed technical design and architecture planning
- Task Management: Automatic task breakdown and progress tracking
- Implementation Workflow: Systematic approach from requirements to implementation
- Interactive Dashboard: Built-in dashboard for workflow visualization and management
- Approval System: Review and approval process for each development phase
The Spec Workflow MCP provides an optional dashboard for workflow visualization. Users can manually launch the dashboard using:
npx -y @pimzino/spec-workflow-mcp@latest --dashboard
Alternatively, you can install the VS Code extension for integrated workflow management.
Usage Guide: For detailed usage instructions and best practices, see the official Spec Workflow documentation.
CCR is a powerful proxy router that enables:
- Free Model Access: Use free AI models (like Gemini, DeepSeek) through Claude Code interface
- Custom Routing: Route different types of requests to different models based on your rules
- Cost Optimization: Significantly reduce API costs by using appropriate models for different tasks
- Easy Management: Interactive menu for CCR configuration and service control
- Auto Updates: Automatic version checking and updates for CCR and Claude Code (v2.8.1+)
To access CCR features:
npx zcf ccr # Open CCR management menu
# or
npx zcf → select R
CCR menu options:
- Initialize CCR - Install and configure CCR with preset providers
- Start UI - Launch CCR web interface for advanced configuration
- Service Control - Start/stop/restart CCR service
- Check Status - View current CCR service status
After CCR setup, ZCF automatically configures Claude Code to use CCR as the API proxy.
Important Notice for v2.9.9 Users: If you previously installed CCometixLine using ZCF v2.9.9, please rerun the installation to ensure that the CCometixLine configuration is correctly added: run `npx zcf`, select `L`, then select `1`.
CCometixLine is a high-performance Rust-based statusline tool that provides:
- Real-time Usage Tracking: Monitor Claude Code API usage in real-time
- Git Integration: Display Git status and branch information
- Status Line Display: Native integration with your terminal statusline
- Performance Optimized: Built with Rust for minimal resource usage
- TUI Configuration: Interactive terminal UI for customizing themes, segments, and display options
- Auto Updates: Included in ZCF's update checking system
CCometixLine menu options (accessible via `npx zcf` → `L`):
- 1 Install or Update - Install or update CCometixLine using npm
- 2 Print Default Configuration - Display the current CCometixLine configuration
- 3 Custom Config - TUI Configuration Mode - Interactive terminal UI for customizing settings
Important Note for v2.9.9 Users: If you previously used ZCF v2.9.9 to set up your environment, please re-run the initialization process to ensure the CCometixLine configuration is properly added. Run `npx zcf` and select the appropriate setup option to update your configuration with CCometixLine support.
npx zcf check-updates # Check and update Claude Code, CCR and CCometixLine to latest versions
# or
npx zcf → select +
Full initialization (`npx zcf i`) will automatically:
- ✔ Detect and install Claude Code
- ✔ Select AI output language (new feature)
- ✔ Configure API keys or CCR proxy
- ✔ Select and configure MCP services
- ✔ Set up all necessary configuration files
After configuration:
- For first-time project use, we strongly recommend running `/init-project` to generate CLAUDE.md so the AI better understands your project architecture
- `<task description>` - Execute directly without a workflow, following SOLID, KISS, DRY, and YAGNI principles; suitable for small tasks like bug fixes
- `/feat <task description>` - Start new feature development, divided into plan and UI phases
- `/workflow <task description>` - Execute the complete development workflow; not automated: it presents multiple solution options, asks for user feedback at each step, and allows plan modifications, giving maximum control
PS:
- Both feat and workflow have their advantages; try both and compare
- Generated documents are located by default at `.claude/xxx.md` in the project root; you can add `.claude/` to your project's `.gitignore`
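Ignoring the generated docs is a one-liner; run this from the project root (it creates `.gitignore` if it does not exist yet):

```shell
# Keep ZCF-generated planning docs out of version control
echo '.claude/' >> .gitignore
```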
- Script interaction language: Controls the language of installation prompts
- Configuration file language: Determines which configuration set to install (zh-CN/en)
- AI output language: The language for AI responses (supports Chinese, English, and custom languages)
- AI output styles: Multiple preset styles (Professional Engineer, Nekomata Engineer, Laowang Engineer) for a customized experience
- Auto-detects Claude Code installation status
- Uses npm for automatic installation (ensures compatibility)
- Cross-platform support (Windows/macOS/Linux/Termux)
- Automatic MCP service configuration
- Smart configuration merging and partial modification support (v2.0 new)
- Enhanced command detection mechanism (v2.1 new)
- Dangerous operation confirmation mechanism (v2.3 new)
- CLAUDE.md system instructions
- settings.json configuration file
- commands custom commands
- agents AI agent configurations
- Supports two authentication methods:
  - Auth Token: For tokens obtained via OAuth or browser login
  - API Key: For API keys from Anthropic Console
- Custom API URL support
- Support for manual configuration later
- Partial modification: Update only needed configuration items (v2.0 new)
- Smart backup of existing configurations (all backups saved in ~/.claude/backup/)
- Configuration merge option (v2.0 enhanced: supports deep merge)
- Safe overwrite mechanism
- Automatic backup before MCP configuration changes
- Default model configuration (v2.0 new)
- AI memory management (v2.0 new)
- ZCF cache cleanup (v2.0 new)
$ npx zcf
ZCF - Zero-Config Claude-Code Flow
? Select ZCF display language / 选择ZCF显示语言:
❯ 简体中文
  English
Select function:
-------- Claude Code --------
1. Full initialization - Install Claude Code + Import workflow + Configure API or CCR proxy + Configure MCP services
2. Import workflow - Import/update workflow-related files only
3. Configure API - Configure API URL and authentication (supports CCR proxy)
4. Configure MCP - Configure MCP services (includes Windows fix)
5. Configure default model - Set default model (opus/sonnet)
6. Configure Claude global memory - Configure AI output language and output styles
7. Import recommended environment variables and permissions - Import privacy protection environment variables and system permissions
--------- Other Tools ----------
R. CCR - Claude Code Router management
U. ccusage - Claude Code usage analysis
L. CCometixLine - High-performance statusline tool with Git integration and real-time usage tracking
------------ ZCF ------------
0. Select display language / 更改显示语言 - Change ZCF interface language
-. Clear preference cache - Clear preference language and other caches
+. Check updates - Check and update Claude Code, CCR and CCometixLine versions
Q. Exit
Enter your choice: _
? Select Claude Code configuration language:
❯ 简体中文 (zh-CN) - Chinese (easier for Chinese users to customize)
English (en) - English (recommended, lower token consumption)
? Select AI output language:
AI will respond to you in this language
❯ 简体中文
English
Custom
(Supports Japanese, French, German, and more)
? Claude Code not found. Install automatically? (Y/n)
✔ Claude Code installed successfully
? Existing config detected. How to proceed?
❯ Backup and overwrite - Backup existing config to ~/.claude/backup/
Update docs only - Only update workflows and docs, keep existing API config
Merge config - Merge with existing config, preserve user customizations
Skip - Skip configuration update
? Select API authentication method
❯ Use Auth Token (OAuth authentication)
For tokens obtained via OAuth or browser login
Use API Key (Key authentication)
For API keys from Anthropic Console
Configure CCR Proxy (Claude Code Router)
Use free models and custom routing to reduce costs and explore the possibilities of Claude Code
Skip (configure manually later)
? Enter API URL: https://api.anthropic.com
? Enter Auth Token or API Key: xxx
? Select output styles to install:
❯ Engineer Professional - Professional software engineer following SOLID, KISS, DRY, YAGNI principles
Nekomata Engineer - Professional catgirl engineer UFO Nya, combining rigorous engineering with cute catgirl traits
Laowang Grumpy Tech - Laowang grumpy tech style, never tolerates code errors and non-standard code
? Select global default output style:
❯ Engineer Professional
? Configure MCP services? (Y/n)
? Select MCP services to install:
❯ context7 - Get latest library and framework documentation
mcp-deepwiki - Access deepwiki.com knowledge base
Playwright - Browser automation and web testing
exa - Advanced search and enterprise research tools
? Select workflows to install:
❯ Common Tools Workflow - init-project and related agents
Six Steps Workflow - Complete six-stage development process
Feature Planning UX - Complete feature development lifecycle
Git Workflow - Git operations and branch management
BMad Workflow - AI-driven agile development methodology
? Install CCometixLine statusline tool? (Y/n)
✔ Setup complete! Claude Code environment is ready
✔ All config files backed up to ~/.claude/backup/xxx
✔ Config files copied to ~/.claude
? Select workflows to install (space to select, enter to confirm)
❯ ◉ Common Tools (init-project + init-architect + get-current-datetime) - Essential project initialization and utility commands
  ◉ Six Steps Workflow (workflow) - Complete 6-phase development process
  ◉ Feature Planning and UX Design (feat + planner + ui-ux-designer) - Structured feature development
  ◉ Git Commands (commit + rollback + cleanBranches + worktree) - Streamlined Git operations
  ◉ BMAD-Method Extension Installer - Enterprise agile development workflow
✔ Installing workflows...
✔ Installed command: zcf/workflow.md
✔ Installed command: zcf/feat.md
✔ Installed agent: zcf/plan/planner.md
✔ Installed agent: zcf/plan/ui-ux-designer.md
✔ Installed command: zcf/git/git-commit.md
✔ Installed command: zcf/git/git-rollback.md
✔ Installed command: zcf/git/git-cleanBranches.md
✔ Installed command: zcf/git/git-worktree.md
✔ Installed command: zcf/bmad-init.md
✔ Workflow installation successful
✔ API configured
? Configure MCP services? (Y/n)
? Select MCP services to install (space to select, enter to confirm)
❯ ◯ Install all
  ◯ Context7 Documentation Query - Query latest library docs and code examples
  ◯ DeepWiki - Query GitHub repository docs and examples
  ◯ Playwright Browser Control - Direct browser automation control
  ◯ Exa AI Search - Web search using Exa AI
? Enter Exa API Key (get from https://dashboard.exa.ai/api-keys)
✔ MCP services configured
🎉 Setup complete! Use 'claude' command to start.
### Command Line Options
#### Commands Quick Reference
| Command | Alias | Description |
| ------------------- | ------- | ------------------------------------------------------------------------------------- |
| `zcf` | - | Show interactive menu (v2.0 default command) |
| `zcf init` | `zcf i` | Initialize Claude Code configuration |
| `zcf update` | `zcf u` | Update workflow-related md files with backup |
| `zcf ccu` | - | Run Claude Code usage analysis tool - [ccusage](https://github.com/ryoppippi/ccusage) |
| `zcf ccr` | - | Open CCR (Claude Code Router) management menu |
| `zcf check-updates` | - | Check and update Claude Code, CCR and CCometixLine versions |
#### Common Options
```bash
# Specify configuration language
npx zcf --config-lang zh-CN
npx zcf -c zh-CN # Using short option
# Force overwrite existing configuration
npx zcf --force
npx zcf -f # Using short option
# Update workflow-related md files with backup (preserve API and MCP configs)
npx zcf u # Using update command
npx zcf update # Full command
# Show help information
npx zcf --help
npx zcf -h
# Show version
npx zcf --version
npx zcf -v
# Show interactive menu (default)
npx zcf
# First-time installation, complete initialization
npx zcf i
npx zcf init # Full command
# Update workflow-related md files with backup, keep API and MCP configs
npx zcf u
npx zcf update # Full command
# Force reinitialize with Chinese config
npx zcf i --config-lang zh-CN --force
npx zcf i -c zh-CN -f # Using short options
# Update to English prompts (lower token consumption)
npx zcf u --config-lang en
npx zcf u -c en # Using short option
# Run Claude Code usage analysis tool (powered by ccusage)
npx zcf ccu # Daily usage (default), or use: monthly, session, blocks
```
zcf/
├── README.md              # Documentation
├── package.json           # npm package configuration
├── bin/
│   └── zcf.mjs            # CLI entry point
├── src/                   # Source code
│   ├── cli.ts             # CLI main logic
│   ├── commands/          # Command implementations
│   ├── utils/             # Utility functions
│   └── constants.ts       # Constant definitions
├── templates/             # Configuration templates
│   ├── CLAUDE.md          # Project level config (v2.0 new)
│   ├── settings.json      # Base configuration (with privacy env vars)
│   ├── en/                # English version
│   │   ├── rules.md       # Core principles (formerly CLAUDE.md)
│   │   ├── output-styles/ # AI output styles (v2.12+ new)
│   │   ├── mcp.md         # MCP services guide (v2.0 new)
│   │   ├── agents/        # AI agents
│   │   └── commands/      # Command definitions
│   └── zh-CN/             # Chinese version
│       └── ... (same structure)
└── dist/                  # Build output
- Task Planner: Breaks down complex tasks into executable steps
- UI/UX Designer: Provides professional interface design guidance
- AI Personality: Supports multiple preset personalities and custom ones (v2.0 new)
- BMad Team (New): A complete agile development team, including:
  - Product Owner (PO): Requirements elicitation and prioritization
  - Project Manager (PM): Planning and coordination
  - System Architect: Technical design and architecture
  - Developer: Implementation and coding
  - QA Engineer: Testing and quality assurance
  - Scrum Master (SM): Process facilitation
  - Business Analyst: Requirements analysis
  - UX Expert: User experience design
- Feature Development (`/feat`): Structured new feature development
- Workflow (`/workflow`): Complete six-phase development workflow
- Git Commands: Streamlined Git operations
  - `/git-commit`: Smart commit with automatic staging and message generation
  - `/git-rollback`: Safely roll back to previous commits with backup
  - `/git-cleanBranches`: Clean up merged branches and maintain repository hygiene
  - `/git-worktree`: Manage Git worktrees with IDE integration and content migration
- BMad Workflow (`/bmad-init`): Initialize the BMad workflow for enterprise development
  - Supports both greenfield (new projects) and brownfield (existing projects)
  - Provides comprehensive templates for PRDs, architecture docs, and user stories
  - Integrated quality gates and checklist system
- API key management (supports partial modification)
- Fine-grained permission control
- Multiple Claude model support (configurable default model)
- Interactive menu system (v2.0 new)
- AI memory management (v2.0 new)
- [Mode: Research] - Understand requirements
- [Mode: Ideate] - Design solutions
- [Mode: Plan] - Create detailed plan
- [Mode: Execute] - Implement development
- [Mode: Optimize] - Improve quality
- [Mode: Review] - Final assessment
# Clone the project
git clone https://github.com/UfoMiao/zcf.git
cd zcf
# Install dependencies (using pnpm)
pnpm install
# Build project
pnpm build
# Local testing
node bin/zcf.mjs
- Task Breakdown: Keep tasks independent and testable
- Code Quality: Follow SOLID, KISS, DRY, and YAGNI principles
- Documentation Management: The plan will be stored in the `.claude/plan/` directory at the project root
If you encounter issues:
- Re-run `npx zcf` to reconfigure
- Check configuration files in the `~/.claude/` directory
- Ensure Claude Code is properly installed
- If paths contain spaces, ZCF will automatically handle quote wrapping
- Use ripgrep (`rg`) preferentially for file searching for better performance
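As a sketch of that last tip, a small wrapper can prefer `rg` and fall back to plain `grep` when ripgrep is not installed (the `search` helper name is purely illustrative, not part of ZCF):

```shell
# Hypothetical helper: use ripgrep when available, otherwise fall back to grep
search() {
  if command -v rg >/dev/null 2>&1; then
    rg -n "$1" "$2"       # -n prints line numbers, like grep -n
  else
    grep -rn "$1" "$2"
  fi
}

# Demo on a throwaway file
printf 'alpha\nbeta\n' > /tmp/zcf-search-demo.txt
search beta /tmp/zcf-search-demo.txt
```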
ZCF fully supports the Windows platform:
- Auto-detection: Automatically uses the compatible `cmd /c npx` format on Windows systems
- Config repair: Existing incorrect configurations are automatically fixed during updates
- Zero-config: Windows users need no extra steps; same experience as macOS/Linux
If you encounter MCP connection issues on Windows, running `npx zcf` will automatically fix the configuration format.
ZCF now supports running in Android Termux environment:
- Auto-adaptation: Automatically detects Termux environment and uses compatible configuration
- Enhanced detection: Intelligently identifies available commands, ensuring normal operation in restricted environments
- Full functionality: Enjoy the same complete features in Termux as on desktop systems
To protect user data security, the following operations require explicit confirmation:
- File System: Delete files/directories, bulk modifications, move system files
- Code Commits: `git commit`, `git push`, `git reset --hard`
- System Config: Modify environment variables, system settings, permissions
- Data Operations: Database deletions, schema changes, bulk updates
- Network Requests: Send sensitive data, call production APIs
- Package Management: Global install/uninstall, update core dependencies
This project is inspired by and incorporates the following open source projects:
Thanks to these community contributors for sharing!
If you find this project helpful, please consider sponsoring its development. Your support is greatly appreciated!
A huge thank you to all our sponsors for their generous support!
- Tc (first sponsor)
- Argolinhas (first ko-fi sponsor)
- r*r (first anonymous sponsor 🤣)
- 16°C coffee (my best friend 🤪, offered a Claude Code Max $200 package)
MIT License
If this project helps you, please give me a ⭐️ Star!
For Tasks:
Click tags to check more tools for each tasksFor Jobs:
Alternative AI tools for zcf
Similar Open Source Tools

zcf
ZCF (Zero-Config Claude-Code Flow) is a tool that provides zero-configuration, one-click setup for Claude Code with bilingual support, intelligent agent system, and personalized AI assistant. It offers an interactive menu for easy operations and direct commands for quick execution. The tool supports bilingual operation with automatic language switching and customizable AI output styles. ZCF also includes features like BMad Workflow for enterprise-grade workflow system, Spec Workflow for structured feature development, CCR (Claude Code Router) support for proxy routing, and CCometixLine for real-time usage tracking. It provides smart installation, complete configuration management, and core features like professional agents, command system, and smart configuration. ZCF is cross-platform compatible, supports Windows and Termux environments, and includes security features like dangerous operation confirmation mechanism.

sim
Sim is a platform that allows users to build and deploy AI agent workflows quickly and easily. It provides cloud-hosted and self-hosted options, along with support for local AI models. Users can set up the application using Docker Compose, Dev Containers, or manual setup with PostgreSQL and pgvector extension. The platform utilizes technologies like Next.js, Bun, PostgreSQL with Drizzle ORM, Better Auth for authentication, Shadcn and Tailwind CSS for UI, Zustand for state management, ReactFlow for flow editor, Fumadocs for documentation, Turborepo for monorepo management, Socket.io for real-time communication, and Trigger.dev for background jobs.

dotclaude
A sophisticated multi-agent configuration system for Claude Code that provides specialized agents and command templates to accelerate code review, refactoring, security audits, tech-lead-guidance, and UX evaluations. It offers essential commands, directory structure details, agent system overview, command templates, usage patterns, collaboration philosophy, sync management, advanced usage guidelines, and FAQ. The tool aims to streamline development workflows, enhance code quality, and facilitate collaboration between developers and AI agents.

evalchemy
Evalchemy is a unified and easy-to-use toolkit for evaluating language models, focusing on post-trained models. It integrates multiple existing benchmarks such as RepoBench, AlpacaEval, and ZeroEval. Key features include unified installation, parallel evaluation, simplified usage, and results management. Users can run various benchmarks with a consistent command-line interface and track results locally or integrate with a database for systematic tracking and leaderboard submission.

cortex.cpp
Cortex is a C++ AI engine with a Docker-like command-line interface and client libraries. It supports running AI models using ONNX, TensorRT-LLM, and llama.cpp engines. Cortex can function as a standalone server or be integrated as a library. The tool provides support for various engines and models, allowing users to easily deploy and interact with AI models. It offers a range of CLI commands for managing models, embeddings, and engines, as well as a REST API for interacting with models. Cortex is designed to simplify the deployment and usage of AI models in C++ applications.

BrowserAI
BrowserAI is a tool that allows users to run large language models (LLMs) directly in the browser, providing a simple, fast, and open-source solution. It prioritizes privacy by processing data locally, is cost-effective with no server costs, works offline after initial download, and offers WebGPU acceleration for high performance. It is developer-friendly with a simple API, supports multiple engines, and comes with pre-configured models for easy use. Ideal for web developers, companies needing privacy-conscious AI solutions, researchers experimenting with browser-based AI, and hobbyists exploring AI without infrastructure overhead.

Shellsage
Shell Sage is an intelligent terminal companion and AI-powered terminal assistant that enhances the terminal experience with features like local and cloud AI support, context-aware error diagnosis, natural language to command translation, and safe command execution workflows. It offers interactive workflows, supports various API providers, and allows for custom model selection. Users can configure the tool for local or API mode, select specific models, and switch between modes easily. Currently in alpha development, Shell Sage has known limitations like limited Windows support and occasional false positives in error detection. The roadmap includes improvements like better context awareness, Windows PowerShell integration, Tmux integration, and CI/CD error pattern database.

pipecat
Pipecat is an open-source framework designed for building generative AI voice bots and multimodal assistants. It provides code building blocks for interacting with AI services, creating low-latency data pipelines, and transporting audio, video, and events over the Internet. Pipecat supports various AI services like speech-to-text, text-to-speech, image generation, and vision models. Users can implement new services and contribute to the framework. Pipecat aims to simplify the development of applications like personal coaches, meeting assistants, customer support bots, and more by providing a complete framework for integrating AI services.

airunner
AI Runner is a multi-modal AI interface that allows users to run open-source large language models and AI image generators on their own hardware. The tool provides features such as voice-based chatbot conversations, text-to-speech, speech-to-text, vision-to-text, text generation with large language models, image generation capabilities, image manipulation tools, utility functions, and more. It aims to provide a stable and user-friendly experience with security updates, a new UI, and a streamlined installation process. The application is designed to run offline on users' hardware without relying on a web server, offering a smooth and responsive user experience.

agentscope
AgentScope is a multi-agent platform designed to empower developers to build multi-agent applications with large-scale models. It features three high-level capabilities: Easy-to-Use, High Robustness, and Actor-Based Distribution. AgentScope provides a list of `ModelWrapper` to support both local model services and third-party model APIs, including OpenAI API, DashScope API, Gemini API, and ollama. It also enables developers to rapidly deploy local model services using libraries such as ollama (CPU inference), Flask + Transformers, Flask + ModelScope, FastChat, and vllm. AgentScope supports various services, including Web Search, Data Query, Retrieval, Code Execution, File Operation, and Text Processing. Example applications include Conversation, Game, and Distribution. AgentScope is released under Apache License 2.0 and welcomes contributions.

chat
deco.chat is an open-source foundation for building AI-native software, providing developers, engineers, and AI enthusiasts with robust tools to rapidly prototype, develop, and deploy AI-powered applications. It empowers Vibecoders to prototype ideas and Agentic engineers to deploy scalable, secure, and sustainable production systems. The core capabilities include an open-source runtime for composing tools and workflows, MCP Mesh for secure integration of models and APIs, a unified TypeScript stack for backend logic and custom frontends, global modular infrastructure built on Cloudflare, and a visual workspace for building agents and orchestrating everything in code.

VT.ai
VT.ai is a multimodal AI platform that offers dynamic conversation routing with SemanticRouter, multi-modal interactions (text/image/audio), an assistant framework with code interpretation, real-time response streaming, cross-provider model switching, and local model support with Ollama integration. It supports various AI providers such as OpenAI, Anthropic, Google Gemini, Groq, Cohere, and OpenRouter, providing a wide range of core capabilities for AI orchestration.

auto-subs
Auto-subs is a tool designed to automatically transcribe editing timelines using OpenAI Whisper and Stable-TS for extreme accuracy. It generates subtitles in a custom style, is completely free, and runs locally within Davinci Resolve. It works on Mac, Linux, and Windows, supporting both Free and Studio versions of Resolve. Users can jump to positions on the timeline using the Subtitle Navigator and translate from any language to English. The tool provides a user-friendly interface for creating and customizing subtitles for video content.

mistral.rs
Mistral.rs is a fast LLM inference platform written in Rust. It supports inference on a variety of devices, quantization, and easy application development via an OpenAI-compatible HTTP server and Python bindings.
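Because the server speaks the OpenAI chat-completions protocol, any OpenAI-style client can talk to it. A minimal sketch of building such a request with only the standard library (the base URL, port, and model name here are illustrative assumptions, not mistral.rs defaults):

```python
import json
from urllib import request

def chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build an OpenAI-style /v1/chat/completions request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# Assumes a mistral.rs server is already listening locally on the chosen port.
req = chat_request("http://localhost:1234", "mistral", "Hello!")
# urllib.request.urlopen(req) would then return a chat-completions JSON body.
```

The same request shape works against any OpenAI-compatible endpoint, which is what makes drop-in local serving possible.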

Notate
Notate is a powerful desktop research assistant that combines AI-driven analysis with advanced vector search technology. It streamlines research workflow by processing, organizing, and retrieving information from documents, audio, and text. Notate offers flexible AI capabilities with support for various LLM providers and local models, ensuring data privacy. Built for researchers, academics, and knowledge workers, it features real-time collaboration, accessible UI, and cross-platform compatibility.

serve
Jina-Serve is a framework for building and deploying AI services that communicate via gRPC, HTTP and WebSockets. It provides native support for major ML frameworks and data types, high-performance service design with scaling and dynamic batching, LLM serving with streaming output, built-in Docker integration and Executor Hub, one-click deployment to Jina AI Cloud, and enterprise-ready features with Kubernetes and Docker Compose support. Users can create gRPC-based AI services, build pipelines, scale services locally with replicas, shards, and dynamic batching, deploy to the cloud using Kubernetes, Docker Compose, or JCloud, and enable token-by-token streaming for responsive LLM applications.
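Dynamic batching, one of the scaling features mentioned above, groups requests that arrive close together so the model runs once per batch instead of once per request. A minimal pure-Python sketch of the idea (a conceptual illustration, not Jina's actual API):

```python
from typing import Callable, List

def dynamic_batch(queue: List[str],
                  handler: Callable[[List[str]], List[str]],
                  max_batch_size: int = 4) -> List[str]:
    """Drain a request queue in chunks of at most max_batch_size,
    invoking the batched handler once per chunk."""
    results: List[str] = []
    while queue:
        batch, queue = queue[:max_batch_size], queue[max_batch_size:]
        results.extend(handler(batch))  # one model call serves the whole batch
    return results

# Toy "model": uppercases every document in a batch.
out = dynamic_batch(["a", "b", "c", "d", "e"],
                    lambda docs: [d.upper() for d in docs],
                    max_batch_size=2)
# Five requests are served in three handler calls instead of five.
```

In a real deployment the batcher also waits a short timeout for stragglers before flushing a partial batch, trading a little latency for much higher throughput.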
For similar tasks

KeyboardGPT
Keyboard GPT is an LSPosed module that integrates generative AI like ChatGPT into your keyboard, allowing for real-time AI responses, custom prompts, and web search capabilities. It works in all apps and supports popular keyboards like Gboard, SwiftKey, Fleksy, and Samsung Keyboard. Users can easily configure API providers, submit prompts, and perform web searches directly from their keyboard. The tool also supports multiple generative AI APIs such as ChatGPT, Gemini, and Groq. It offers an easy installation process for both rooted and non-rooted devices, making it a versatile and powerful tool for enhancing text input on mobile devices.

PokeLLMon
PokeLLMon is a tool that allows users to set up a local battle engine for Pokémon battles. It requires Python 3.8 or higher and the OpenAI Python package version 1.7.2 or higher. Users configure an OpenAI API key to power the battling agent, then engage in local battles by running the main Python script with their username and password.

Code-Atlas
Code Atlas is a lightweight interpreter developed in C++ that supports the execution of multi-language code snippets and partial Markdown rendering. It consumes significantly lower resources compared to similar tools, making it suitable for resource-limited devices. It leverages llama.cpp for local large-model inference and supports cloud-based large-model APIs. The tool provides features for code execution, Markdown rendering, local AI inference, and resource efficiency.

Cerebr
Cerebr is an intelligent AI assistant browser extension designed to enhance work efficiency and learning experience. It integrates powerful AI capabilities from various sources to provide features such as smart sidebar, multiple API support, cross-browser API configuration synchronization, comprehensive Q&A support, elegant rendering, real-time response, theme switching, and more. With a minimalist design and focus on delivering a seamless, distraction-free browsing experience, Cerebr aims to be your second brain for deep reading and understanding.

zcf
ZCF (Zero-Config Claude-Code Flow) is a tool that provides zero-configuration, one-click setup for Claude Code with bilingual support, intelligent agent system, and personalized AI assistant. It offers an interactive menu for easy operations and direct commands for quick execution. The tool supports bilingual operation with automatic language switching and customizable AI output styles. ZCF also includes features like BMad Workflow for enterprise-grade workflow system, Spec Workflow for structured feature development, CCR (Claude Code Router) support for proxy routing, and CCometixLine for real-time usage tracking. It provides smart installation, complete configuration management, and core features like professional agents, command system, and smart configuration. ZCF is cross-platform compatible, supports Windows and Termux environments, and includes security features like dangerous operation confirmation mechanism.

AGiXT
AGiXT is a dynamic Artificial Intelligence Automation Platform engineered to orchestrate efficient AI instruction management and task execution across a multitude of providers. Our solution infuses adaptive memory handling with a broad spectrum of commands to enhance AI's understanding and responsiveness, leading to improved task completion. The platform's smart features, like Smart Instruct and Smart Chat, seamlessly integrate web search, planning strategies, and conversation continuity, transforming the interaction between users and AI. By leveraging a powerful plugin system that includes web browsing and command execution, AGiXT stands as a versatile bridge between AI models and users. With an expanding roster of AI providers, code evaluation capabilities, comprehensive chain management, and platform interoperability, AGiXT is consistently evolving to drive a multitude of applications, affirming its place at the forefront of AI technology.

aiexe
aiexe is a cutting-edge command-line interface (CLI) and graphical user interface (GUI) tool that integrates powerful AI capabilities directly into your terminal or desktop. It is designed for developers, tech enthusiasts, and anyone interested in AI-powered automation. aiexe provides an easy-to-use yet robust platform for executing complex tasks with just a few commands. Users can harness the power of various AI models from OpenAI, Anthropic, Ollama, Gemini, and Groq to boost productivity and enhance decision-making processes.

claude.vim
Claude.vim is a Vim plugin that integrates Claude, an AI pair programmer, into your Vim workflow. It allows you to chat with Claude about what to build or how to debug problems, and Claude offers opinions, proposes modifications, or even writes code. The plugin provides a chat/instruction-centric interface optimized for human collaboration, with killer features like access to chat history and vimdiff interface. It can refactor code, modify or extend selected pieces of code, execute complex tasks by reading documentation, cloning git repositories, and more. Note that it is early alpha software and expected to rapidly evolve.
For similar jobs

promptflow
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation and prototyping through testing and evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.

deepeval
DeepEval is a simple-to-use, open-source LLM evaluation framework specialized for unit testing LLM outputs. It incorporates various metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., and runs locally on your machine for evaluation. It provides a wide range of ready-to-use evaluation metrics, allows for creating custom metrics, integrates with any CI/CD environment, and enables benchmarking LLMs on popular benchmarks. DeepEval is designed for evaluating RAG and fine-tuning applications, helping users optimize hyperparameters, prevent prompt drifting, and transition from OpenAI to hosting their own Llama2 with confidence.
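The unit-testing idea behind DeepEval is simple: score an LLM output with a metric, then assert the score clears a threshold, exactly like a conventional test. A library-free sketch of that pattern (DeepEval's real metrics, such as G-Eval or answer relevancy, are LLM-scored rather than this toy keyword overlap):

```python
def keyword_overlap(expected_keywords: list, output: str) -> float:
    """Toy metric: fraction of expected keywords present in the output."""
    hits = sum(1 for kw in expected_keywords if kw.lower() in output.lower())
    return hits / len(expected_keywords)

def assert_llm_test(output: str, expected_keywords: list,
                    threshold: float = 0.7) -> None:
    """Fail the test if the metric score falls below the threshold."""
    score = keyword_overlap(expected_keywords, output)
    assert score >= threshold, f"score {score:.2f} below threshold {threshold}"

# Passes: the answer contains both expected keywords.
assert_llm_test("Paris is the capital of France.", ["Paris", "capital"])
```

Because the check is just an assertion, it drops into pytest or any CI pipeline unchanged, which is how DeepEval integrates with existing test suites.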

MegaDetector
MegaDetector is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). The model is trained on several million images from a variety of ecosystems. MegaDetector is just one of many tools that aim to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out our overview of the field, affectionately titled "Everything I know about machine learning and camera traps".

leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped, resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, an API, and a UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden its Python base images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobufs and Python utilities for implementing backends over gRPC. LeapfrogAI offers UI options for common use cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, the United States Navy, the United States Air Force, and the United States Space Force.

llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.

carrot
The 'carrot' repository on GitHub provides a list of free and user-friendly ChatGPT mirror sites for easy access. The repository includes sponsored sites offering various GPT models and services. Users can find and share sites, report errors, and access stable and recommended sites for ChatGPT usage. The repository also includes a detailed list of ChatGPT sites, their features, and accessibility options, making it a valuable resource for ChatGPT users seeking free and unlimited GPT services.

TrustLLM
TrustLLM is a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, evaluation and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions. Specifically, it first proposes a set of principles for trustworthy LLMs spanning eight dimensions. Based on these principles, it establishes a benchmark across six dimensions: truthfulness, safety, fairness, robustness, privacy, and machine ethics. It then presents a study evaluating 16 mainstream LLMs on over 30 datasets. The documentation explains how to use the trustllm Python package to assess the trustworthiness of your own LLM more quickly. For more details about TrustLLM, please refer to the project website.

AI-YinMei
AI-YinMei is an AI virtual anchor Vtuber development tool (N card version). It supports fastgpt knowledge base chat dialogue, a complete set of solutions for LLM large language models: [fastgpt] + [one-api] + [Xinference], supports docking bilibili live broadcast barrage reply and entering live broadcast welcome speech, supports Microsoft edge-tts speech synthesis, supports Bert-VITS2 speech synthesis, supports GPT-SoVITS speech synthesis, supports expression control Vtuber Studio, supports painting stable-diffusion-webui output OBS live broadcast room, supports painting picture pornography public-NSFW-y-distinguish, supports search and image search service duckduckgo (requires magic Internet access), supports image search service Baidu image search (no magic Internet access), supports AI reply chat box [html plug-in], supports AI singing Auto-Convert-Music, supports playlist [html plug-in], supports dancing function, supports expression video playback, supports head touching action, supports gift smashing action, supports singing automatic start dancing function, chat and singing automatic cycle swing action, supports multi scene switching, background music switching, day and night automatic switching scene, supports open singing and painting, let AI automatically judge the content.