auto-engineer
Build enterprise-grade apps that scale using AI
Stars: 61
Auto Engineer is a tool designed to automate the Software Development Life Cycle (SDLC) by building production-grade applications with a combination of human and AI agents. It offers a plugin-based architecture that allows users to install only the necessary functionality for their projects. The tool guides users through key stages including Flow Modeling, IA Generation, Deterministic Scaffolding, AI Coding & Testing Loop, and Comprehensive Quality Checks. Auto Engineer follows a command/event-driven architecture and provides a modular plugin system for specific functionalities. It supports TypeScript with strict typing throughout and includes a built-in message bus server with a web dashboard for monitoring commands and events.
README:
Put your SDLC on Auto, and build production-grade apps with humans and agents.
- Expect bugs as you use it!
- We are working hard on making it awesome
- We are actively using Auto with real-world clients and use-cases
- We are making a lot of design decisions as we battle test the approach
Stay up to date by watching 👀 and giving us a star ⭐ - join the 💬 Discord for conversations.
npx create-auto-app@latest

Prerequisites:
- Node.js >= 20.0.0
- pnpm >= 8.15.4
- At least one AI provider API key:
  - Anthropic Claude (highly recommended)
  - OpenAI
  - Google Gemini
  - X.AI Grok
Auto Engineer uses a plugin-based architecture. Install the CLI and only the plugins you need:
# Install the CLI globally (use npm or Yarn if you prefer)
pnpm install -g @auto-engineer/cli@latest
# Create a new project directory
mkdir my-app && cd my-app
# Install plugins for your use case
pnpm install @auto-engineer/flow @auto-engineer/server-generator-apollo-emmett
# Or install all common plugins
pnpm install @auto-engineer/flow @auto-engineer/server-generator-apollo-emmett @auto-engineer/server-implementer @auto-engineer/frontend-generator-react-graphql
# Configure your API keys
echo "ANTHROPIC_API_KEY=your-key-here" > .env

Create an auto.config.ts file in your project root to configure plugins:
// auto.config.ts
export default {
plugins: [
'@auto-engineer/flow',
'@auto-engineer/server-generator-apollo-emmett',
'@auto-engineer/server-implementer',
'@auto-engineer/frontend-generator-react-graphql',
// Add more plugins as needed
],
// Optional: Override command aliases if there are conflicts
aliases: {
// 'command:name': '@auto-engineer/package-name'
},
};

Auto Engineer uses a modular plugin architecture. Each plugin provides specific functionality:
| Plugin | Package | Commands | Description |
|---|---|---|---|
| Flow | @auto-engineer/flow | create:example, export:schema | Flow modeling DSL and schema export |
| Emmett Generator | @auto-engineer/server-generator-apollo-emmett | generate:server | Server code generation from schemas |
| Server Implementer | @auto-engineer/server-implementer | implement:server, implement:slice | AI-powered server implementation |
| React GraphQL Generator | @auto-engineer/frontend-generator-react-graphql | generate:client, copy:example | React client scaffolding |
| Frontend Implementer | @auto-engineer/frontend-implementer | implement:client | AI-powered client implementation |
| Information Architect | @auto-engineer/information-architect | generate:ia | Information architecture generation |
| Design System Importer | @auto-engineer/design-system-importer | import:design-system | Figma design system import |
| Server Checks | @auto-engineer/server-checks | check:types, check:lint, check:tests | Server validation suite |
| Frontend Checks | @auto-engineer/frontend-checks | check:client | Frontend validation suite |
| File Syncer | @auto-engineer/file-syncer | N/A (internal use) | File watching and synchronization |
| Create Auto App | @auto-engineer/create-auto-app | create:app | Bootstrap new Auto Engineer projects |
Install only the plugins you need:
# For server development
pnpm install @auto-engineer/flow @auto-engineer/server-generator-apollo-emmett @auto-engineer/server-implementer @auto-engineer/server-checks
# For frontend development
pnpm install @auto-engineer/frontend-generator-react-graphql @auto-engineer/frontend-implementer @auto-engineer/frontend-checks
# For design system integration
pnpm install @auto-engineer/design-system-importer @auto-engineer/information-architect

If multiple plugins register the same command alias, you'll see a clear error message:
❌ Command alias conflicts detected!
Multiple packages are trying to register the same command aliases.
Please add alias overrides to your auto.config.ts file:
export default {
plugins: [
'@auto-engineer/package-a',
'@auto-engineer/package-b',
],
aliases: {
// Specify which package handles each conflicting command
'conflicting:command': '@auto-engineer/package-a',
}
};
Note: Each package can expose multiple commands. The alias resolution maps a specific command alias to the package that should handle it. For example, if both package-a and package-b provide a check:types command, you specify which package wins for that specific command alias.
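The resolution rule can be pictured with a short sketch. This is illustrative only, not the actual CLI source; the `Registration` type and `resolveAliases` function are invented for the example:

```typescript
// Illustrative sketch of alias-conflict resolution (not the actual CLI code).
// Each plugin declares the command aliases it provides; the `aliases` map from
// auto.config.ts decides the winner when two plugins claim the same alias.
type Registration = { pkg: string; alias: string };

function resolveAliases(
  registrations: Registration[],
  overrides: Record<string, string> = {},
): Map<string, string> {
  // Collect every package that claims each alias
  const claimants = new Map<string, string[]>();
  for (const { pkg, alias } of registrations) {
    claimants.set(alias, [...(claimants.get(alias) ?? []), pkg]);
  }

  const resolved = new Map<string, string>();
  for (const [alias, pkgs] of claimants) {
    if (pkgs.length === 1) {
      resolved.set(alias, pkgs[0]); // unambiguous: only one provider
    } else if (pkgs.includes(overrides[alias])) {
      resolved.set(alias, overrides[alias]); // conflict settled by config override
    } else {
      throw new Error(`Command alias conflict for "${alias}": ${pkgs.join(', ')}`);
    }
  }
  return resolved;
}
```

Unambiguous aliases resolve automatically; only genuinely conflicting aliases need an entry in `aliases`.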
- Built-in event-driven message bus server with web dashboard
- Real-time command and event monitoring at http://localhost:5555
- WebSocket support for live updates
- DSL functions for event handling and orchestration in auto.config.ts
- All command handlers now use a single defineCommandHandler function
- Type-safe command definitions with automatic CLI manifest generation
- Named parameters for all CLI commands (e.g., --input-path=value)
- Integrated help and examples in command definitions
- Automatic file watching and syncing for development workflows
- Support for TypeScript declaration files (.d.ts)
- Flow file synchronization with related dependencies
- Flow package now works in browser environments
- Stub implementations for Node.js-specific modules
- Support for browser-based flow modeling tools
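The stubbing approach can be illustrated with a minimal environment check. This is a generic pattern, not the flow package's actual stub code; `fsLike` and its shape are invented for the example:

```typescript
// Generic pattern for swapping a Node-only dependency for a browser stub.
// (Illustrative only; the flow package's real stubs may look different.)
const isNode =
  typeof process !== 'undefined' && typeof process.versions?.node === 'string';

// In Node we use the real capability; in the browser a harmless no-op stub
// keeps modules that import it loadable for browser-based flow modeling.
const fsLike = isNode
  ? { readTextFile: (path: string) => `contents of ${path}` } // stand-in for a real fs wrapper
  : { readTextFile: (_path: string) => '' };                  // browser stub: no-op
```

The point is that browser bundles never touch Node built-ins; they get an inert replacement with the same interface.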
Auto automates the SDLC through a configurable pipeline of agentic and procedural modules. The process turns high-level models into production-ready code through these key stages:
- Flow Modeling: You (or an AI) start by creating a high-level "Flow Model". This defines system behavior through command, query, and reaction "slices" that specify both frontend and server requirements. This is where the core design work happens.
- IA Generation: An "information architect" agent automatically generates an information architecture schema from your model, similar to how a UX designer creates wireframes.
- Deterministic Scaffolding: The IA schema is used to generate a complete, deterministic application scaffold.
- Spec-Driven Precision: The scaffold is populated with placeholders containing implementation hints and in-situ prompts. The initial flow model also generates deterministic tests. This combination of fine-grained prompts and tests precisely guides the AI.
- AI Coding & Testing Loop: An AI agent implements the code based on the prompts and context from previous steps. As code is written, tests are run. If they fail, the AI gets the error feedback and self-corrects, usually within 1-3 attempts.
- Comprehensive Quality Checks: After passing the tests, the code goes through further checks, including linting, runtime validation, and AI-powered visual testing to ensure design system compliance.
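The coding-and-testing loop in stage 5 can be sketched as a simple retry loop. This is a conceptual model only; `implement` and `runTests` stand in for the real agent and test runner provided by the plugins:

```typescript
// Conceptual sketch of the AI coding & testing feedback loop (illustrative;
// the real agent and test runner live in the implementer/checks plugins).
type TestResult = { passed: boolean; errors: string[] };

async function implementWithFeedback(
  implement: (feedback: string[]) => Promise<string>, // agent writes code, guided by prior errors
  runTests: (code: string) => Promise<TestResult>,    // deterministic tests from the flow model
  maxAttempts = 3,
): Promise<string> {
  let feedback: string[] = [];
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const code = await implement(feedback);
    const result = await runTests(code);
    if (result.passed) return code; // tests green: done
    feedback = result.errors;       // feed failures back so the agent can self-correct
  }
  throw new Error(`Tests still failing after ${maxAttempts} attempts`);
}
```

The bounded attempt count matches the "usually within 1-3 attempts" behavior described above: the loop either converges on passing code or surfaces the failure.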
Commands are provided by installed plugins. Run auto --help to see available commands based on your configuration.
All commands now use named parameters for clarity and consistency:
Flow Development
- auto create:example --name=<project-name> - Create an example project
- auto export:schema --output-dir=<dir> --directory=<flows-dir> - Export flow schemas
Server Generation
- auto generate:server --schema-path=<schema> --destination=<dest> - Generate server from schema
- auto implement:server --server-directory=<dir> - AI implements the server
- auto implement:slice --server-directory=<dir> --slice=<name> - Implement a specific slice
Frontend Generation
- auto generate:ia --output-dir=<dir> --flow-files=<patterns> - Generate information architecture
- auto generate:client --starter-template=<template> --client-dir=<dir> --ia-schema=<file> --gql-schema=<file> - Generate React client
- auto implement:client --project-dir=<dir> --ia-scheme-dir=<dir> --design-system-path=<file> - AI implements the client
Validation & Testing
- auto check:types --target-directory=<dir> --scope=<project|changed> - TypeScript type checking
- auto check:tests --target-directory=<dir> --scope=<project|changed> - Run test suites
- auto check:lint --target-directory=<dir> --fix --scope=<project|changed> - Linting with optional auto-fix
- auto check:client --client-directory=<dir> --skip-browser-checks - Full frontend validation
Design System
- auto import:design-system --figma-file-id=<id> --figma-access-token=<token> --output-dir=<dir> - Import from Figma
Auto Engineer follows a command/event-driven architecture:
- Plugin-based: Modular design allows installing only needed functionality
- Command Pattern: All operations are commands that can be composed
- Event-driven: Loosely coupled components communicate via events
- Type-safe: Full TypeScript with strict typing throughout
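The command/event pattern behind these bullets can be sketched minimally. This is a conceptual model only, not the @auto-engineer/message-bus API; `TinyBus` and the command/event names are invented for the example:

```typescript
// Minimal command/event bus sketch illustrating the architecture above.
// (Conceptual only; not the @auto-engineer/message-bus API.)
type Handler = (payload: unknown) => void;

class TinyBus {
  private handlers = new Map<string, Handler[]>();

  // Subscribe a handler to a command or event type
  on(type: string, handler: Handler): void {
    this.handlers.set(type, [...(this.handlers.get(type) ?? []), handler]);
  }

  // Dispatch a message to every subscriber of its type
  emit(type: string, payload: unknown): void {
    for (const handler of this.handlers.get(type) ?? []) handler(payload);
  }
}

// A command handler does its work, then publishes an event; other components
// subscribe to the event instead of calling the handler directly.
const bus = new TinyBus();
const received: string[] = [];
bus.on('GenerateServer', () => bus.emit('ServerGenerated', { ok: true }));
bus.on('ServerGenerated', () => received.push('ServerGenerated'));
bus.emit('GenerateServer', {});
```

Because producers and consumers only share message types, plugins can be added or removed without rewiring each other.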
- Node.js >= 20.0.0
- pnpm >= 8.15.4
- Git
- At least one AI provider API key (see Quick Start section)
1. Clone the repository
   git clone https://github.com/SamHatoum/auto-engineer.git
   cd auto-engineer
2. Install dependencies
   pnpm install
3. Build all packages
   pnpm build
4. Set up environment variables
   # Create a .env file in the root directory
   echo "ANTHROPIC_API_KEY=your-key-here" > .env
   # Add other API keys as needed
When developing locally, you'll want to use the local packages instead of published npm versions:
1. Use the workspace protocol in example projects
   # In any example project (e.g., examples/shopping-app)
   cd examples/shopping-app
   # Install packages using the workspace protocol
   pnpm add '@auto-engineer/cli@workspace:*' \
     '@auto-engineer/flow@workspace:*' \
     '@auto-engineer/server-checks@workspace:*'
   # ... add other packages as needed
2. The workspace protocol ensures:
   - Local packages are used instead of npm registry versions
   - Changes to packages are immediately reflected
   - No need for npm link or manual linking
Auto Engineer includes a built-in message bus server with a web dashboard for monitoring commands and events:
# Start the server (runs on port 5555)
pnpm auto
# Or run with debug output
DEBUG=auto-engineer:* pnpm auto
# Access the dashboard at http://localhost:5555

The dashboard provides:
- Real-time command execution monitoring
- Event stream visualization
- Command handler registry
- WebSocket connection status
- Dark/light theme support
1. Make changes to packages
   # Edit source files in packages/*/src/
2. Build affected packages
   # Build a specific package
   pnpm build --filter=@auto-engineer/cli
   # Or build all packages
   pnpm build
3. Run tests
   # Run all tests
   pnpm test
   # Run tests for a specific package
   pnpm test --filter=@auto-engineer/flow
4. Lint and type check
   # Run all checks
   pnpm check
   # Individual checks
   pnpm lint
   pnpm type-check
To create a new plugin:

1. Create the package directory
   mkdir packages/my-plugin
   cd packages/my-plugin
2. Initialize package.json
   {
     "name": "@auto-engineer/my-plugin",
     "version": "0.1.0",
     "type": "module",
     "exports": { ".": "./dist/src/index.js" },
     "scripts": { "build": "tsc && tsx ../../scripts/fix-esm-imports.ts" }
   }
3. Implement command handlers using the unified pattern
   import { defineCommandHandler } from '@auto-engineer/message-bus';

   export const commandHandler = defineCommandHandler({
     name: 'MyCommand',
     alias: 'my:command',
     description: 'Does something useful',
     category: 'My Plugin',
     fields: {
       inputPath: {
         description: 'Path to input file',
         required: true,
       },
     },
     examples: ['$ auto my:command --input-path=./file.txt'],
     handle: async (command) => {
       // Implementation
     },
   });
Port 5555 already in use
# Find and kill the process
lsof -i :5555 | grep LISTEN | awk '{print $2}' | xargs kill -9

Module not found errors
# Ensure all packages are built
pnpm build
# Clear build artifacts and rebuild
pnpm clean
pnpm install
pnpm build

Dashboard not showing command handlers
- Clear browser cache and refresh (Cmd+Shift+R)
- Check browser console for JavaScript errors
- Verify packages are properly built
- Ensure auto.config.ts lists all required plugins
We welcome contributions! Please see our Contributing Guide for details.
Auto Engineer is licensed under the Elastic License 2.0 (EL2).
Similar Open Source Tools
nosia
Nosia is a self-hosted AI RAG + MCP platform that allows users to run AI models on their own data with complete privacy and control. It integrates the Model Context Protocol (MCP) to connect AI models with external tools, services, and data sources. The platform is designed to be easy to install and use, providing OpenAI-compatible APIs that work seamlessly with existing AI applications. Users can augment AI responses with their documents, perform real-time streaming, support multi-format data, enable semantic search, and achieve easy deployment with Docker Compose. Nosia also offers multi-tenancy for secure data separation.
zcf
ZCF (Zero-Config Claude-Code Flow) is a tool that provides zero-configuration, one-click setup for Claude Code with bilingual support, intelligent agent system, and personalized AI assistant. It offers an interactive menu for easy operations and direct commands for quick execution. The tool supports bilingual operation with automatic language switching and customizable AI output styles. ZCF also includes features like BMad Workflow for enterprise-grade workflow system, Spec Workflow for structured feature development, CCR (Claude Code Router) support for proxy routing, and CCometixLine for real-time usage tracking. It provides smart installation, complete configuration management, and core features like professional agents, command system, and smart configuration. ZCF is cross-platform compatible, supports Windows and Termux environments, and includes security features like dangerous operation confirmation mechanism.
mcp-devtools
MCP DevTools is a high-performance server written in Go that replaces multiple Node.js and Python-based servers. It provides access to essential developer tools through a unified, modular interface. The server is efficient, with minimal memory footprint and fast response times. It offers a comprehensive tool suite for agentic coding, including 20+ essential developer agent tools. The tool registry allows for easy addition of new tools. The server supports multiple transport modes, including STDIO, HTTP, and SSE. It includes a security framework for multi-layered protection and a plugin system for adding new tools.
CrewAI-GUI
CrewAI-GUI is a Node-Based Frontend tool designed to revolutionize AI workflow creation. It empowers users to design complex AI agent interactions through an intuitive drag-and-drop interface, export designs to JSON for modularity and reusability, and supports both GPT-4 API and Ollama for flexible AI backend. The tool ensures cross-platform compatibility, allowing users to create AI workflows on Windows, Linux, or macOS efficiently.
tunacode
TunaCode CLI is an AI-powered coding assistant that provides a command-line interface for developers to enhance their coding experience. It offers features like model selection, parallel execution for faster file operations, and various commands for code management. The tool aims to improve coding efficiency and provide a seamless coding environment for developers.
evalchemy
Evalchemy is a unified and easy-to-use toolkit for evaluating language models, focusing on post-trained models. It integrates multiple existing benchmarks such as RepoBench, AlpacaEval, and ZeroEval. Key features include unified installation, parallel evaluation, simplified usage, and results management. Users can run various benchmarks with a consistent command-line interface and track results locally or integrate with a database for systematic tracking and leaderboard submission.
llm-context.py
LLM Context is a tool designed to assist developers in quickly injecting relevant content from code/text projects into Large Language Model chat interfaces. It leverages `.gitignore` patterns for smart file selection and offers a streamlined clipboard workflow using the command line. The tool also provides direct integration with Large Language Models through the Model Context Protocol (MCP). LLM Context is optimized for code repositories and collections of text/markdown/html documents, making it suitable for developers working on projects that fit within an LLM's context window. The tool is under active development and aims to enhance AI-assisted development workflows by harnessing the power of Large Language Models.
git-mcp-server
A secure and scalable Git MCP server providing AI agents with powerful version control capabilities for local and serverless environments. It offers 28 comprehensive Git operations organized into seven functional categories, resources for contextual information about the Git environment, and structured prompt templates for guiding AI agents through complex workflows. The server features declarative tools, robust error handling, pluggable authentication, abstracted storage, full-stack observability, dependency injection, and edge-ready architecture. It also includes specialized features for Git integration such as cross-runtime compatibility, provider-based architecture, optimized Git execution, working directory management, configurable Git identity, safety features, and commit signing.
sim
Sim is a platform that allows users to build and deploy AI agent workflows quickly and easily. It provides cloud-hosted and self-hosted options, along with support for local AI models. Users can set up the application using Docker Compose, Dev Containers, or manual setup with PostgreSQL and pgvector extension. The platform utilizes technologies like Next.js, Bun, PostgreSQL with Drizzle ORM, Better Auth for authentication, Shadcn and Tailwind CSS for UI, Zustand for state management, ReactFlow for flow editor, Fumadocs for documentation, Turborepo for monorepo management, Socket.io for real-time communication, and Trigger.dev for background jobs.
agents
AI agent tooling for data engineering workflows. Includes an MCP server for Airflow, a CLI tool for interacting with Airflow from your terminal, and skills that extend AI coding agents with specialized capabilities for working with Airflow and data warehouses. Works with Claude Code, Cursor, and other agentic coding tools. The tool provides a comprehensive set of features for data discovery & analysis, data lineage, DAG development, dbt integration, migration, and more. It also offers user journeys for data analysis flow and DAG development flow. The Airflow CLI tool allows users to interact with Airflow directly from the terminal. The tool supports various databases like Snowflake, PostgreSQL, Google BigQuery, and more, with auto-detected SQLAlchemy databases. Skills are invoked automatically based on user queries or can be invoked directly using specific commands.
mcp-apache-spark-history-server
The MCP Server for Apache Spark History Server is a tool that connects AI agents to Apache Spark History Server for intelligent job analysis and performance monitoring. It enables AI agents to analyze job performance, identify bottlenecks, and provide insights from Spark History Server data. The server bridges AI agents with existing Apache Spark infrastructure, allowing users to query job details, analyze performance metrics, compare multiple jobs, investigate failures, and generate insights from historical execution data.
TTP-Threat-Feeds
TTP-Threat-Feeds is a script-powered threat feed generator that automates the discovery and parsing of threat actor behavior from security research. It scrapes URLs from trusted sources, extracts observable adversary behaviors, and outputs structured YAML files to help detection engineers and threat researchers derive detection opportunities and correlation logic. The tool supports multiple LLM providers for text extraction and includes OCR functionality for extracting content from images. Users can configure URLs, run the extractor, and save results as YAML files. Cloud provider SDKs are optional. Contributions are welcome for improvements and enhancements to the tool.
VimLM
VimLM is an AI-powered coding assistant for Vim that integrates AI for code generation, refactoring, and documentation directly into your Vim workflow. It offers native Vim integration with split-window responses and intuitive keybindings, offline first execution with MLX-compatible models, contextual awareness with seamless integration with codebase and external resources, conversational workflow for iterating on responses, project scaffolding for generating and deploying code blocks, and extensibility for creating custom LLM workflows with command chains.
OSA
OSA (Open-Source-Advisor) is a tool designed to improve the quality of scientific open source projects by automating the generation of README files, documentation, CI/CD scripts, and providing advice and recommendations for repositories. It supports various LLMs accessible via API, local servers, or osa_bot hosted on ITMO servers. OSA is currently under development with features like README file generation, documentation generation, automatic implementation of changes, LLM integration, and GitHub Action Workflow generation. It requires Python 3.10 or higher and tokens for GitHub/GitLab/Gitverse and LLM API key. Users can install OSA using PyPi or build from source, and run it using CLI commands or Docker containers.
Free-GPT4-WEB-API
FreeGPT4-WEB-API is a Python server that allows you to have a self-hosted GPT-4 Unlimited and Free WEB API, via the latest Bing's AI. It uses Flask and GPT4Free libraries. GPT4Free provides an interface to the Bing's GPT-4. The server can be configured by editing the `FreeGPT4_Server.py` file. You can change the server's port, host, and other settings. The only cookie needed for the Bing model is `_U`.
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.
