
auto-engineer
Build enterprise-grade apps that scale using AI
Stars: 61

Auto Engineer is a tool designed to automate the Software Development Life Cycle (SDLC) by building production-grade applications with a combination of human and AI agents. It offers a plugin-based architecture that allows users to install only the necessary functionality for their projects. The tool guides users through key stages including Flow Modeling, IA Generation, Deterministic Scaffolding, AI Coding & Testing Loop, and Comprehensive Quality Checks. Auto Engineer follows a command/event-driven architecture and provides a modular plugin system for specific functionalities. It supports TypeScript with strict typing throughout and includes a built-in message bus server with a web dashboard for monitoring commands and events.
README:
Put your SDLC on Auto, and build production-grade apps with humans and agents.
- Expect bugs as you use it!
- We are working hard on making it awesome
- We are actively using Auto with real-world clients and use-cases
- We are making a lot of design decisions as we battle test the approach
Stay up to date by watching 👀 and giving us a star ⭐ - join the 💬 Discord for conversations.
```sh
npx create-auto-app@latest
```
Prerequisites:
- Node.js >= 20.0.0
- pnpm >= 8.15.4
- At least one AI provider API key:
  - Anthropic Claude (highly recommended)
  - OpenAI
  - Google Gemini
  - X.AI Grok
Auto Engineer uses a plugin-based architecture. Install the CLI and only the plugins you need:
```sh
# Install the CLI globally (use npm or Yarn if you prefer)
pnpm install -g @auto-engineer/cli@latest

# Create a new project directory
mkdir my-app && cd my-app

# Install plugins for your use case
pnpm install @auto-engineer/flow @auto-engineer/server-generator-apollo-emmett

# Or install all common plugins
pnpm install @auto-engineer/flow @auto-engineer/server-generator-apollo-emmett @auto-engineer/server-implementer @auto-engineer/frontend-generator-react-graphql

# Configure your API keys
echo "ANTHROPIC_API_KEY=your-key-here" > .env
```
Create an `auto.config.ts` file in your project root to configure plugins:
```typescript
// auto.config.ts
export default {
  plugins: [
    '@auto-engineer/flow',
    '@auto-engineer/server-generator-apollo-emmett',
    '@auto-engineer/server-implementer',
    '@auto-engineer/frontend-generator-react-graphql',
    // Add more plugins as needed
  ],
  // Optional: override command aliases if there are conflicts
  aliases: {
    // 'command:name': '@auto-engineer/package-name'
  },
};
```
Auto Engineer uses a modular plugin architecture. Each plugin provides specific functionality:
| Plugin | Package | Commands | Description |
|---|---|---|---|
| Flow | `@auto-engineer/flow` | `create:example`, `export:schema` | Flow modeling DSL and schema export |
| Emmett Generator | `@auto-engineer/server-generator-apollo-emmett` | `generate:server` | Server code generation from schemas |
| Server Implementer | `@auto-engineer/server-implementer` | `implement:server`, `implement:slice` | AI-powered server implementation |
| React GraphQL Generator | `@auto-engineer/frontend-generator-react-graphql` | `generate:client`, `copy:example` | React client scaffolding |
| Frontend Implementer | `@auto-engineer/frontend-implementer` | `implement:client` | AI-powered client implementation |
| Information Architect | `@auto-engineer/information-architect` | `generate:ia` | Information architecture generation |
| Design System Importer | `@auto-engineer/design-system-importer` | `import:design-system` | Figma design system import |
| Server Checks | `@auto-engineer/server-checks` | `check:types`, `check:lint`, `check:tests` | Server validation suite |
| Frontend Checks | `@auto-engineer/frontend-checks` | `check:client` | Frontend validation suite |
| File Syncer | `@auto-engineer/file-syncer` | N/A (internal use) | File watching and synchronization |
| Create Auto App | `@auto-engineer/create-auto-app` | `create:app` | Bootstrap new Auto Engineer projects |
Install only the plugins you need:
```sh
# For server development
pnpm install @auto-engineer/flow @auto-engineer/server-generator-apollo-emmett @auto-engineer/server-implementer @auto-engineer/server-checks

# For frontend development
pnpm install @auto-engineer/frontend-generator-react-graphql @auto-engineer/frontend-implementer @auto-engineer/frontend-checks

# For design system integration
pnpm install @auto-engineer/design-system-importer @auto-engineer/information-architect
```
If multiple plugins register the same command alias, you'll see a clear error message:
```
❌ Command alias conflicts detected!

Multiple packages are trying to register the same command aliases.
Please add alias overrides to your auto.config.ts file:
```

```typescript
export default {
  plugins: [
    '@auto-engineer/package-a',
    '@auto-engineer/package-b',
  ],
  aliases: {
    // Specify which package handles each conflicting command
    'conflicting:command': '@auto-engineer/package-a',
  },
};
```
Note: each package can expose multiple commands. Alias resolution maps a specific command alias to the package that should handle it. For example, if both `package-a` and `package-b` provide a `check:types` command, you specify which package wins for that specific command alias.
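Concretely, the `check:types` collision just described would be resolved like this (the package names are the placeholders from the example above):

```typescript
// auto.config.ts — package-a wins check:types; all non-conflicting
// aliases from both packages remain available as usual.
export default {
  plugins: ['@auto-engineer/package-a', '@auto-engineer/package-b'],
  aliases: {
    'check:types': '@auto-engineer/package-a',
  },
};
```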
Notable capabilities include:
- Built-in event-driven message bus server with a web dashboard
- Real-time command and event monitoring at http://localhost:5555
- WebSocket support for live updates
- DSL functions for event handling and orchestration in `auto.config.ts` (a hypothetical sketch follows this list)
- All command handlers use a single `defineCommandHandler` function
- Type-safe command definitions with automatic CLI manifest generation
- Named parameters for all CLI commands (e.g., `--input-path=value`)
- Integrated help and examples in command definitions
- Automatic file watching and syncing for development workflows
- Support for TypeScript declaration files (.d.ts)
- Flow file synchronization with related dependencies
- The flow package works in browser environments
- Stub implementations for Node.js-specific modules
- Support for browser-based flow modeling tools
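The orchestration DSL itself is documented with the plugins; purely as a hypothetical illustration (the `on` and `dispatch` helpers and the `ServerGenerated` event name below are assumptions, not the published API), wiring an event to a follow-up command might look like:

```typescript
// auto.config.ts — hypothetical sketch only. `on`, `dispatch`, and the
// 'ServerGenerated' event name are illustrative assumptions, not the real API.
import { on, dispatch } from '@auto-engineer/message-bus';

export default {
  plugins: ['@auto-engineer/flow', '@auto-engineer/server-generator-apollo-emmett'],
  handlers: [
    // When server generation completes, kick off the AI implementation step.
    on('ServerGenerated', async (event: { destination: string }) => {
      await dispatch('implement:server', { serverDirectory: event.destination });
    }),
  ],
};
```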
Auto automates the SDLC through a configurable pipeline of agentic and procedural modules. The process turns high-level models into production-ready code through these key stages:
- Flow Modeling: You (or an AI) start by creating a high-level "Flow Model". This defines system behavior through command, query, and reaction "slices" that specify both frontend and server requirements. This is where the core design work happens.
- IA Generation: An "information architect" agent automatically generates an information architecture schema from your model, similar to how a UX designer creates wireframes.
- Deterministic Scaffolding: The IA schema is used to generate a complete, deterministic application scaffold.
- Spec-Driven Precision: The scaffold is populated with placeholders containing implementation hints and in-situ prompts. The initial flow model also generates deterministic tests. This combination of fine-grained prompts and tests precisely guides the AI.
- AI Coding & Testing Loop: An AI agent implements the code based on the prompts and context from the previous steps. As code is written, tests are run. If they fail, the AI receives the error feedback and self-corrects, usually within 1-3 attempts (see the sketch after this list).
- Comprehensive Quality Checks: After passing the tests, the code goes through further checks, including linting, runtime validation, and AI-powered visual testing to ensure design system compliance.
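To make the loop concrete, here is a minimal, generic sketch of the implement-test-self-correct cycle. It is not Auto Engineer's actual implementation; `generateCode` and `runTests` are assumed placeholder helpers:

```typescript
// Generic AI coding & testing loop: feed test failures back to the
// agent until the suite passes or attempts run out.
type TestResult = { passed: boolean; errors: string[] };

async function implementWithRetries(
  prompt: string,
  generateCode: (p: string) => Promise<string>, // assumed AI codegen helper
  runTests: (code: string) => Promise<TestResult>, // assumed test runner
  maxAttempts = 3,
): Promise<string> {
  let feedback = '';
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    // Include prior failures in the prompt so the agent can self-correct.
    const fullPrompt = feedback ? `${prompt}\n\nFix these test failures:\n${feedback}` : prompt;
    const code = await generateCode(fullPrompt);
    const result = await runTests(code);
    if (result.passed) return code; // done: tests are green
    feedback = result.errors.join('\n');
  }
  throw new Error(`Tests still failing after ${maxAttempts} attempts:\n${feedback}`);
}
```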
Commands are provided by installed plugins. Run `auto --help` to see the commands available for your configuration. All commands now use named parameters for clarity and consistency:
Flow Development
- `auto create:example --name=<project-name>` - Create an example project
- `auto export:schema --output-dir=<dir> --directory=<flows-dir>` - Export flow schemas

Server Generation
- `auto generate:server --schema-path=<schema> --destination=<dest>` - Generate a server from a schema
- `auto implement:server --server-directory=<dir>` - AI implements the server
- `auto implement:slice --server-directory=<dir> --slice=<name>` - Implement a specific slice

Frontend Generation
- `auto generate:ia --output-dir=<dir> --flow-files=<patterns>` - Generate the information architecture
- `auto generate:client --starter-template=<template> --client-dir=<dir> --ia-schema=<file> --gql-schema=<file>` - Generate a React client
- `auto implement:client --project-dir=<dir> --ia-scheme-dir=<dir> --design-system-path=<file>` - AI implements the client

Validation & Testing
- `auto check:types --target-directory=<dir> --scope=<project|changed>` - TypeScript type checking
- `auto check:tests --target-directory=<dir> --scope=<project|changed>` - Run test suites
- `auto check:lint --target-directory=<dir> --fix --scope=<project|changed>` - Linting with optional auto-fix
- `auto check:client --client-directory=<dir> --skip-browser-checks` - Full frontend validation

Design System
- `auto import:design-system --figma-file-id=<id> --figma-access-token=<token> --output-dir=<dir>` - Import from Figma
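Pieced together, a typical server-side session chains these commands; the directory and file names below are illustrative placeholders:

```sh
# Illustrative sequence; paths are placeholders, not required names.
auto create:example --name=my-app
auto export:schema --output-dir=./schemas --directory=./flows
auto generate:server --schema-path=./schemas/schema.json --destination=./server
auto implement:server --server-directory=./server
auto check:types --target-directory=./server --scope=project
auto check:tests --target-directory=./server --scope=project
```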
Auto Engineer follows a command/event-driven architecture:
- Plugin-based: Modular design allows installing only needed functionality
- Command Pattern: All operations are commands that can be composed
- Event-driven: Loosely coupled components communicate via events
- Type-safe: Full TypeScript with strict typing throughout
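As a rough illustration of the command/event pattern (a generic sketch, not Auto Engineer's actual message bus internals):

```typescript
// Minimal command/event bus: handlers turn commands into events,
// and loosely coupled subscribers react to those events.
type Command = { type: string; payload: unknown };
type BusEvent = { type: string; payload: unknown };

class MessageBus {
  private handlers = new Map<string, (cmd: Command) => Promise<BusEvent[]>>();
  private subscribers = new Map<string, Array<(evt: BusEvent) => void>>();

  register(type: string, handler: (cmd: Command) => Promise<BusEvent[]>): void {
    this.handlers.set(type, handler);
  }

  subscribe(type: string, fn: (evt: BusEvent) => void): void {
    this.subscribers.set(type, [...(this.subscribers.get(type) ?? []), fn]);
  }

  async send(cmd: Command): Promise<void> {
    const handler = this.handlers.get(cmd.type);
    if (!handler) throw new Error(`No handler registered for ${cmd.type}`);
    // Each handler returns events; subscribers never see the command itself.
    for (const evt of await handler(cmd)) {
      for (const fn of this.subscribers.get(evt.type) ?? []) fn(evt);
    }
  }
}
```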
To work on Auto Engineer itself, you'll need:
- Node.js >= 20.0.0
- pnpm >= 8.15.4
- Git
- At least one AI provider API key (see the Quick Start section)
1. Clone the repository:
```sh
git clone https://github.com/SamHatoum/auto-engineer.git
cd auto-engineer
```
2. Install dependencies:
```sh
pnpm install
```
3. Build all packages:
```sh
pnpm build
```
4. Set up environment variables:
```sh
# Create a .env file in the root directory
echo "ANTHROPIC_API_KEY=your-key-here" > .env
# Add other API keys as needed
```
When developing locally, you'll want to use the local packages instead of published npm versions:
1. Use the workspace protocol in example projects:
```sh
# In any example project (e.g., examples/shopping-app)
cd examples/shopping-app
# Install packages using the workspace protocol
pnpm add '@auto-engineer/cli@workspace:*' \
  '@auto-engineer/flow@workspace:*' \
  '@auto-engineer/server-checks@workspace:*'
# ... add other packages as needed
```
2. The workspace protocol ensures:
- Local packages are used instead of npm registry versions
- Changes to packages are immediately reflected
- No need for npm link or manual linking
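After installing with the workspace protocol, the example project's package.json records workspace ranges rather than registry versions, roughly like this (entries abbreviated):

```json
{
  "dependencies": {
    "@auto-engineer/cli": "workspace:*",
    "@auto-engineer/flow": "workspace:*",
    "@auto-engineer/server-checks": "workspace:*"
  }
}
```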
Auto Engineer includes a built-in message bus server with a web dashboard for monitoring commands and events:
```sh
# Start the server (runs on port 5555)
pnpm auto

# Or run with debug output
DEBUG=auto-engineer:* pnpm auto

# Access the dashboard at http://localhost:5555
```
The dashboard provides:
- Real-time command execution monitoring
- Event stream visualization
- Command handler registry
- WebSocket connection status
- Dark/light theme support
1. Make changes to packages: edit source files in packages/*/src/
2. Build affected packages:
```sh
# Build a specific package
pnpm build --filter=@auto-engineer/cli
# Or build all packages
pnpm build
```
3. Run tests:
```sh
# Run all tests
pnpm test
# Run tests for a specific package
pnpm test --filter=@auto-engineer/flow
```
4. Lint and type check:
```sh
# Run all checks
pnpm check
# Individual checks
pnpm lint
pnpm type-check
```
1. Create the package directory:
```sh
mkdir packages/my-plugin
cd packages/my-plugin
```
2. Initialize package.json:
```json
{
  "name": "@auto-engineer/my-plugin",
  "version": "0.1.0",
  "type": "module",
  "exports": {
    ".": "./dist/src/index.js"
  },
  "scripts": {
    "build": "tsc && tsx ../../scripts/fix-esm-imports.ts"
  }
}
```
3. Implement command handlers using the unified pattern:
```typescript
import { defineCommandHandler } from '@auto-engineer/message-bus';

export const commandHandler = defineCommandHandler({
  name: 'MyCommand',
  alias: 'my:command',
  description: 'Does something useful',
  category: 'My Plugin',
  fields: {
    inputPath: {
      description: 'Path to input file',
      required: true,
    },
  },
  examples: ['$ auto my:command --input-path=./file.txt'],
  handle: async (command) => {
    // Implementation
  },
});
```
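Once the package is built and listed in the plugins array of your auto.config.ts, the automatic CLI manifest generation described earlier should expose the new alias: `auto my:command --input-path=./file.txt` (from the examples field) becomes runnable, and `auto --help` lists it under its category.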
Port 5555 already in use:
```sh
# Find and kill the process
lsof -i :5555 | grep LISTEN | awk '{print $2}' | xargs kill -9
```
Module not found errors:
```sh
# Ensure all packages are built
pnpm build

# Or clear build artifacts and rebuild
pnpm clean
pnpm install
pnpm build
```
Dashboard not showing command handlers:
- Clear the browser cache and hard-refresh (Cmd+Shift+R)
- Check the browser console for JavaScript errors
- Verify packages are properly built
- Ensure auto.config.ts lists all required plugins
We welcome contributions! Please see our Contributing Guide for details.
Auto Engineer is licensed under the Elastic License 2.0 (EL2).