
auto-engineer
Build enterprise-grade apps that scale using AI
Stars: 51

Auto Engineer is a tool designed to automate the Software Development Life Cycle (SDLC) by building production-grade applications with a combination of human and AI agents. It offers a plugin-based architecture that allows users to install only the necessary functionality for their projects. The tool guides users through key stages including Flow Modeling, IA Generation, Deterministic Scaffolding, AI Coding & Testing Loop, and Comprehensive Quality Checks. Auto Engineer follows a command/event-driven architecture and provides a modular plugin system for specific functionalities. It supports TypeScript with strict typing throughout and includes a built-in message bus server with a web dashboard for monitoring commands and events.
README:
Put your SDLC on Auto, and build production-grade apps with humans and agents.
- We are working hard on making it happen
- We are actively using Auto with real-world clients and use-cases
- We are making a lot of design decisions as we battle test the approach
Stay up to date by watching 👀 and giving us a star ⭐ - join the 💬 Discord for conversations.
Quick Start:
npx create-auto-app@latest
Prerequisites:
- Node.js >= 20.0.0
- pnpm >= 8.15.4
- At least one AI provider API key:
- Anthropic Claude (highly recommended)
- OpenAI
- Google Gemini
- X.AI Grok
Auto Engineer uses a plugin-based architecture. Install the CLI and only the plugins you need:
# Install the CLI globally (use npm or Yarn if you prefer)
pnpm install -g @auto-engineer/cli@latest
# Create a new project directory
mkdir my-app && cd my-app
# Install plugins for your use case
pnpm install @auto-engineer/flow @auto-engineer/server-generator-apollo-emmett
# Or install all common plugins
pnpm install @auto-engineer/flow @auto-engineer/server-generator-apollo-emmett @auto-engineer/server-implementer @auto-engineer/frontend-generator-react-graphql
# Configure your API keys
echo "ANTHROPIC_API_KEY=your-key-here" > .env
Create an auto.config.ts file in your project root to configure plugins:
// auto.config.ts
export default {
  plugins: [
    '@auto-engineer/flow',
    '@auto-engineer/server-generator-apollo-emmett',
    '@auto-engineer/server-implementer',
    '@auto-engineer/frontend-generator-react-graphql',
    // Add more plugins as needed
  ],
  // Optional: Override command aliases if there are conflicts
  aliases: {
    // 'command:name': '@auto-engineer/package-name'
  },
};
# With plugins configured, create a new app
auto create:example --name=shopping-assistant
# Navigate to the created project
cd shopping-assistant
pnpm install
# Export the flow schemas
auto export:schema --output-dir=./.context --directory=./flows
# Generate and implement the server
auto generate:server --schema-path=.context/schema.json --destination=.
auto implement:server --server-directory=./server
# Run server validation
auto check:types --target-directory=./server
auto check:tests --target-directory=./server
auto check:lint --target-directory=./server --fix
# Generate frontend (requires additional plugins)
auto generate:ia --output-dir=./.context --flow-files=./flows/*.flow.ts
auto generate:client --starter-template=./shadcn-starter --client-dir=./client \
--ia-schema=./auto-ia.json --gql-schema=./schema.graphql --figma-vars=./figma-vars.json
auto implement:client --project-dir=./client --ia-scheme-dir=./.context \
--design-system-path=./design-system.md
# Start the application
pnpm start
Auto Engineer uses a modular plugin architecture. Each plugin provides specific functionality:
Plugin | Package | Commands | Description
---|---|---|---
Flow | @auto-engineer/flow | create:example, export:schema | Flow modeling DSL and schema export
Emmett Generator | @auto-engineer/server-generator-apollo-emmett | generate:server | Server code generation from schemas
Server Implementer | @auto-engineer/server-implementer | implement:server, implement:slice | AI-powered server implementation
React GraphQL Generator | @auto-engineer/frontend-generator-react-graphql | generate:client, copy:example | React client scaffolding
Frontend Implementer | @auto-engineer/frontend-implementer | implement:client | AI-powered client implementation
Information Architect | @auto-engineer/information-architect | generate:ia | Information architecture generation
Design System Importer | @auto-engineer/design-system-importer | import:design-system | Figma design system import
Server Checks | @auto-engineer/server-checks | check:types, check:lint, check:tests | Server validation suite
Frontend Checks | @auto-engineer/frontend-checks | check:client | Frontend validation suite
File Syncer | @auto-engineer/file-syncer | N/A (internal use) | File watching and synchronization
Create Auto App | @auto-engineer/create-auto-app | create:app | Bootstrap new Auto Engineer projects
Install only the plugins you need:
# For server development
npm install @auto-engineer/flow @auto-engineer/server-generator-apollo-emmett @auto-engineer/server-implementer @auto-engineer/server-checks
# For frontend development
npm install @auto-engineer/frontend-generator-react-graphql @auto-engineer/frontend-implementer @auto-engineer/frontend-checks
# For design system integration
npm install @auto-engineer/design-system-importer @auto-engineer/information-architect
If multiple plugins register the same command alias, you'll see a clear error message:
❌ Command alias conflicts detected!
Multiple packages are trying to register the same command aliases.
Please add alias overrides to your auto.config.ts file:
export default {
  plugins: [
    '@auto-engineer/package-a',
    '@auto-engineer/package-b',
  ],
  aliases: {
    // Specify which package handles each conflicting command
    'conflicting:command': '@auto-engineer/package-a',
  },
};
Note: Each package can expose multiple commands. The alias resolution maps a specific command alias to the package that should handle it. For example, if both package-a and package-b provide a check:types command, you specify which package wins for that specific command alias.
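As a minimal sketch of that resolution rule (the types and function below are illustrative only, not the CLI's actual internals):
// Illustrative sketch of alias resolution; not the CLI's actual internals.
type AutoConfig = {
  plugins: string[];
  aliases?: Record<string, string>; // command alias -> winning package
};

function resolveAlias(alias: string, providers: string[], config: AutoConfig): string {
  if (providers.length === 1) return providers[0]; // no conflict
  const winner = config.aliases?.[alias]; // explicit override from auto.config.ts
  if (winner !== undefined && providers.includes(winner)) return winner;
  throw new Error(`Command alias conflict for "${alias}": add an override to auto.config.ts`);
}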
- Built-in event-driven message bus server with web dashboard
- Real-time command and event monitoring at http://localhost:5555
- WebSocket support for live updates
- DSL functions for event handling and orchestration in auto.config.ts
- All command handlers now use a single defineCommandHandler function
- Type-safe command definitions with automatic CLI manifest generation
- Named parameters for all CLI commands (e.g., --input-path=value)
- Integrated help and examples in command definitions
- Automatic file watching and syncing for development workflows
- Support for TypeScript declaration files (.d.ts)
- Flow file synchronization with related dependencies
- Flow package now works in browser environments
- Stub implementations for Node.js-specific modules
- Support for browser-based flow modeling tools
Auto automates the SDLC through a configurable pipeline of agentic and procedural modules. The process turns high-level models into production-ready code through these key stages:
- Flow Modeling: You (or an AI) start by creating a high-level "Flow Model". This defines system behavior through command, query, and reaction "slices" that specify both frontend and server requirements. This is where the core design work happens.
- IA Generation: An "information architect" agent automatically generates an information architecture schema from your model, similar to how a UX designer creates wireframes.
- Deterministic Scaffolding: The IA schema is used to generate a complete, deterministic application scaffold.
- Spec-Driven Precision: The scaffold is populated with placeholders containing implementation hints and in-situ prompts. The initial flow model also generates deterministic tests. This combination of fine-grained prompts and tests precisely guides the AI.
- AI Coding & Testing Loop: An AI agent implements the code based on the prompts and context from previous steps. As code is written, tests are run. If they fail, the AI gets the error feedback and self-corrects, usually within 1-3 attempts.
- Comprehensive Quality Checks: After passing the tests, the code goes through further checks, including linting, runtime validation, and AI-powered visual testing to ensure design system compliance.
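To make the coding-and-testing loop concrete, here is a minimal TypeScript sketch of its control flow. implementSlice and runTests are hypothetical stand-ins for the AI agent and the test runner, not Auto Engineer APIs:
// Illustrative sketch of the generate-test-fix loop; implementSlice and
// runTests are hypothetical stand-ins, not Auto Engineer APIs.
async function codingLoop(
  implementSlice: (feedback?: string) => Promise<void>,
  runTests: () => Promise<{ passed: boolean; errors: string }>,
  maxAttempts = 3,
): Promise<boolean> {
  let feedback: string | undefined;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    await implementSlice(feedback); // AI writes code from prompts and context
    const result = await runTests(); // deterministic tests from the flow model
    if (result.passed) return true; // done, usually within 1-3 attempts
    feedback = result.errors; // feed failures back so the AI can self-correct
  }
  return false; // hand off to a human after maxAttempts
}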
Commands are provided by installed plugins. Run auto --help to see available commands based on your configuration.
All commands now use named parameters for clarity and consistency (a small parsing sketch follows this reference):
Flow Development
- create:example --name=<project-name> - Create an example project
- export:schema --output-dir=<dir> --directory=<flows-dir> - Export flow schemas
Server Generation
- generate:server --schema-path=<schema> --destination=<dest> - Generate server from schema
- implement:server --server-directory=<dir> - AI implements server
- implement:slice --server-directory=<dir> --slice=<name> - Implement specific slice
Frontend Generation
- generate:ia --output-dir=<dir> --flow-files=<patterns> - Generate Information Architecture
- generate:client --starter-template=<template> --client-dir=<dir> --ia-schema=<file> --gql-schema=<file> - Generate React client
- implement:client --project-dir=<dir> --ia-scheme-dir=<dir> --design-system-path=<file> - AI implements client
Validation & Testing
- check:types --target-directory=<dir> --scope=<project|changed> - TypeScript type checking
- check:tests --target-directory=<dir> --scope=<project|changed> - Run test suites
- check:lint --target-directory=<dir> --fix --scope=<project|changed> - Linting with optional auto-fix
- check:client --client-directory=<dir> --skip-browser-checks - Full frontend validation
Design System
- import:design-system --figma-file-id=<id> --figma-access-token=<token> --output-dir=<dir> - Import from Figma
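The --key=value convention is simple to parse; a small illustrative TypeScript sketch (not the CLI's actual parser):
// Illustrative parser for --key=value flags; not the CLI's actual parser.
function parseNamedParams(argv: string[]): Record<string, string | boolean> {
  const params: Record<string, string | boolean> = {};
  for (const arg of argv) {
    if (!arg.startsWith('--')) continue;
    const eq = arg.indexOf('=');
    if (eq === -1) params[arg.slice(2)] = true; // bare flag, e.g. --fix
    else params[arg.slice(2, eq)] = arg.slice(eq + 1); // e.g. --client-dir=./client
  }
  return params;
}

// parseNamedParams(['--target-directory=./server', '--fix'])
// => { 'target-directory': './server', fix: true }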
Auto Engineer follows a command/event-driven architecture:
- Plugin-based: Modular design allows installing only needed functionality
- Command Pattern: All operations are commands that can be composed
- Event-driven: Loosely coupled components communicate via events
- Type-safe: Full TypeScript with strict typing throughout
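A rough TypeScript illustration of that shape; the Bus class and event names below are invented for this sketch and are not the @auto-engineer/message-bus API:
// Hypothetical sketch of command/event decoupling; not the actual
// @auto-engineer/message-bus types.
type Event = { name: string; payload: unknown };

class Bus {
  private subscribers = new Map<string, Array<(e: Event) => void>>();
  on(eventName: string, handler: (e: Event) => void): void {
    const list = this.subscribers.get(eventName) ?? [];
    list.push(handler);
    this.subscribers.set(eventName, list);
  }
  emit(event: Event): void {
    for (const handler of this.subscribers.get(event.name) ?? []) handler(event);
  }
}

// Components stay loosely coupled: one plugin emits an event when its
// command finishes; another reacts without knowing who produced it.
const bus = new Bus();
bus.on('ServerGenerated', () => console.log('next step: implement:server'));
bus.emit({ name: 'ServerGenerated', payload: { destination: './server' } });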
auto-engineer/
├── packages/
│ ├── cli/ # Main CLI with plugin loader
│ ├── flow/ # Flow modeling DSL
│ ├── server-generator-apollo-emmett/ # Server code generation
│ ├── server-implementer/ # AI server implementation
│ ├── frontend-generator-react-graphql/ # React client scaffolding
│ ├── frontend-implementer/ # AI client implementation
│ ├── information-architect/ # IA generation
│ ├── design-system-importer/ # Figma integration
│ ├── server-checks/ # Server validation
│ ├── frontend-checks/ # Frontend validation
│ ├── ai-gateway/ # Unified AI provider interface
│ ├── message-bus/ # Event-driven messaging
│ ├── file-store/ # File system operations
│ ├── file-syncer/ # File watching and synchronization
│ └── create-auto-app/ # Project bootstrapping
├── integrations/
│ ├── ai-chat-completion/ # AI provider integrations
│ ├── cart/ # Cart service integration
│ └── product-catalogue/ # Product catalog integration
└── examples/
├── cart-api/ # Example cart API
└── product-catalogue-api/ # Example product API
- Node.js >= 20.0.0
- pnpm >= 8.15.4
- Git
- At least one AI provider API key (see Quick Start section)
- Clone the repository
git clone https://github.com/SamHatoum/auto-engineer.git
cd auto-engineer
- Install dependencies
pnpm install
- Build all packages
pnpm build
- Set up environment variables
# Create .env file in the root directory
echo "ANTHROPIC_API_KEY=your-key-here" > .env
# Add other API keys as needed
When developing locally, you'll want to use the local packages instead of published npm versions:
- Use workspace protocol in example projects
# In any example project (e.g., examples/shopping-app)
cd examples/shopping-app
# Install packages using workspace protocol
pnpm add '@auto-engineer/cli@workspace:*' \
  '@auto-engineer/flow@workspace:*' \
  '@auto-engineer/server-checks@workspace:*'
# ... add other packages as needed
- The workspace protocol ensures:
- Local packages are used instead of npm registry versions
- Changes to packages are immediately reflected
- No need for npm link or manual linking
Auto Engineer includes a built-in message bus server with a web dashboard for monitoring commands and events:
# Start the server (runs on port 5555)
pnpm auto
# Or run with debug output
DEBUG=auto-engineer:* pnpm auto
# Access the dashboard at http://localhost:5555
The dashboard provides:
- Real-time command execution monitoring
- Event stream visualization
- Command handler registry
- WebSocket connection status
- Dark/light theme support
- Make changes to packages
# Edit source files in packages/*/src/
- Build affected packages
# Build a specific package
pnpm build --filter=@auto-engineer/cli
# Or build all packages
pnpm build
- Run tests
# Run all tests
pnpm test
# Run tests for a specific package
pnpm test --filter=@auto-engineer/flow
- Lint and type check
# Run all checks
pnpm check
# Individual checks
pnpm lint
pnpm type-check
- Create a package directory
mkdir packages/my-plugin
cd packages/my-plugin
- Initialize package.json
{
  "name": "@auto-engineer/my-plugin",
  "version": "0.1.0",
  "type": "module",
  "exports": { ".": "./dist/src/index.js" },
  "scripts": {
    "build": "tsc && tsx ../../scripts/fix-esm-imports.ts"
  }
}
- Implement command handlers using the unified pattern
import { defineCommandHandler } from '@auto-engineer/message-bus';

export const commandHandler = defineCommandHandler({
  name: 'MyCommand',
  alias: 'my:command',
  description: 'Does something useful',
  category: 'My Plugin',
  fields: {
    inputPath: {
      description: 'Path to input file',
      required: true,
    },
  },
  examples: ['$ auto my:command --input-path=./file.txt'],
  handle: async (command) => {
    // Implementation
  },
});
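With the handler exported, the new package presumably gets registered in a consuming project's auto.config.ts like any other plugin, following the config format shown earlier:
// auto.config.ts in a project that uses the new plugin
export default {
  plugins: [
    '@auto-engineer/flow',
    '@auto-engineer/my-plugin', // the package created above
  ],
};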
Port 5555 already in use
# Find and kill the process
lsof -i :5555 | grep LISTEN | awk '{print $2}' | xargs kill -9
Module not found errors
# Ensure all packages are built
pnpm build
# Clear build artifacts and rebuild
pnpm clean
pnpm install
pnpm build
Dashboard not showing command handlers
- Clear browser cache and refresh (Cmd+Shift+R)
- Check browser console for JavaScript errors
- Verify packages are properly built
- Ensure auto.config.ts lists all required plugins
We welcome contributions! Please see our Contributing Guide for details.
Auto Engineer is licensed under the Elastic License 2.0 (EL2).