
abi
AI Operating System - Build your own AI using ontologies as the unifying field connecting data, models, workflows, and systems.
Stars: 61

ABI (Agentic Brain Infrastructure) is a Python-based AI Operating System designed to serve as the core infrastructure for building an Agentic AI Ontology Engine. It empowers organizations to integrate, manage, and scale AI-driven operations with multiple AI models, focusing on ontology, agent-driven workflows, and analytics. ABI emphasizes modularity and customization, providing a customizable framework aligned with international standards and regulatory frameworks. It offers features such as configurable AI agents, ontology management, integrations with external data sources, data processing pipelines, workflow automation, analytics, and data handling capabilities.
README:
A multi-agent AI system that uses ontologies to unify data, AI models, and workflows. Star and follow to stay updated!
ABI (Agentic Brain Infrastructure) is an AI Operating System that uses intent-driven routing to match user requests with pre-configured responses and actions. When you make a request, ABI identifies your intent and triggers the appropriate response - whether that's a direct answer, tool usage, or routing to a specific AI agent.
The system combines multiple AI models (ChatGPT, Claude, Gemini, Grok, Llama, Mistral) with a semantic knowledge graph to map intents to actions, enabling intelligent routing based on what you're trying to accomplish.
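The intent-to-action mapping can be pictured with a toy router. This is an illustrative sketch, not ABI's actual API: the intent names, keyword sets, and actions below are invented, and real deployments would use embeddings and the knowledge graph rather than keyword overlap.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical intent-driven router: each intent pairs trigger keywords
# with an action, and the router picks the intent whose keywords best
# overlap the request. ABI itself uses vector search + SPARQL instead.

@dataclass
class Intent:
    name: str
    keywords: set[str]
    action: Callable[[str], str]

INTENTS = [
    Intent("analyze", {"analyze", "data", "report"},
           lambda req: f"routing to analytics agent: {req}"),
    Intent("code", {"write", "code", "function"},
           lambda req: f"routing to coding agent: {req}"),
]

def route(request: str) -> str:
    words = set(request.lower().split())
    # Score each intent by keyword overlap and take the best match.
    best = max(INTENTS, key=lambda i: len(i.keywords & words))
    return best.action(request)

print(route("analyze this sales data"))  # routing to analytics agent: ...
```

The same shape generalizes: swap the keyword score for cosine similarity over embeddings and the dispatch table for a SPARQL lookup.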
graph TD
%% === USER INTERACTION LAYER ===
USER["User"] <-->|"uses"| APPS["Apps<br/>Chat | API | Dashboard"]
APPS <-->|"talks to"| AGENTS
%% === AGENTS LAYER ===
subgraph AGENTS["Multi-Agents System"]
ABI["ABI<br/>AI SuperAssistant"]
CUSTOM_AGENTS["Agents<br/>Application & Domain Experts"]
ABI -->|"coordinates"| CUSTOM_AGENTS
end
%% === STORAGE LAYER ===
subgraph STORAGE["Storage"]
MEMORY[("Memory<br/>Persisting context")]
TRIPLESTORE[("Semantic Knowledge Graph<br/>Information, Relations & Reasoning")]
VECTORDB[("Vector DB<br/>Embeddings")]
FILES[("Object Storage<br/>Files")]
end
AGENTS <-->|"query"| TRIPLESTORE
AGENTS <-->|"search"| VECTORDB
AGENTS <-->|"retrieve"| FILES
AGENTS <-->|"access"| MEMORY
FILES -->|"index in"| VECTORDB
MEMORY -->|"index in"| VECTORDB
%% === EXECUTION LAYER ===
subgraph C["Components"]
MODELS["AI Models<br/>Open & Closed Source"]
ANALYTICS["Analytics<br/>Dashboards & Reports"]
WORKFLOWS["Workflows<br/>Processes"]
ONTOLOGIES["Ontologies<br/>BFO Structure"]
PIPELINES["Pipelines<br/>Data → Semantic"]
INTEGRATIONS["Integrations<br/>APIs, Files"]
end
AGENTS <-->|"use"| ONTOLOGIES
AGENTS -->|"execute"| INTEGRATIONS["Integrations<br/>APIs, Exports"]
AGENTS -->|"execute"| PIPELINES["Pipelines<br/>Data → Semantic"]
AGENTS -->|"access"| ANALYTICS["Analytics<br/>Dashboards & Reports"]
AGENTS -->|"execute"| WORKFLOWS["Workflows<br/>Processes"]
AGENTS -->|"use"| MODELS["AI Models<br/>Open & Closed Source"]
%% === DATA PIPELINE ===
ONTOLOGIES-->|"structure"| PIPELINES
PIPELINES-->|"use"| WORKFLOWS
PIPELINES-->|"use"| INTEGRATIONS
PIPELINES -->|"create triples"| TRIPLESTORE
WORKFLOWS-->|"use"| INTEGRATIONS
%% === KINETIC ACTIONS ===
TRIPLESTORE -.->|"trigger"| PIPELINES
TRIPLESTORE -.->|"trigger"| WORKFLOWS
TRIPLESTORE -.->|"trigger"| INTEGRATIONS
%% === FILE GENERATION ===
WORKFLOWS -->|"create"| FILES
WORKFLOWS -->|"generate"| ANALYTICS
INTEGRATIONS -->|"create"| FILES
%% === STYLING ===
classDef user fill:#2c3e50,stroke:#fff,stroke-width:2px,color:#fff
classDef abi fill:#e74c3c,stroke:#fff,stroke-width:3px,color:#fff
classDef apps fill:#9b59b6,stroke:#fff,stroke-width:2px,color:#fff
classDef agents fill:#3498db,stroke:#fff,stroke-width:2px,color:#fff
classDef workflows fill:#1abc9c,stroke:#fff,stroke-width:2px,color:#fff
classDef integrations fill:#f39c12,stroke:#fff,stroke-width:2px,color:#fff
classDef ontologies fill:#27ae60,stroke:#fff,stroke-width:2px,color:#fff
classDef pipelines fill:#e67e22,stroke:#fff,stroke-width:2px,color:#fff
classDef analytics fill:#8e44ad,stroke:#fff,stroke-width:2px,color:#fff
classDef models fill:#e91e63,stroke:#fff,stroke-width:2px,color:#fff
classDef infrastructure fill:#95a5a6,stroke:#fff,stroke-width:2px,color:#fff
class USER user
class ABI abi
class APPS apps
class CUSTOM_AGENTS agents
class WORKFLOWS workflows
class INTEGRATIONS integrations
class ONTOLOGIES ontologies
class PIPELINES pipelines
class ANALYTICS analytics
class MODELS models
class TRIPLESTORE,VECTORDB,MEMORY,FILES infrastructure
ABI is an AI Operating System that orchestrates intelligent agents, data systems, and workflows through semantic understanding and automated reasoning.
1. User Interaction Layer
- Multiple Interfaces: Chat, REST API, Web Dashboard, MCP Protocol (Claude Desktop integration)
- Universal Access: Single entry point to all AI capabilities and domain expertise
2. Agent Orchestration Layer using LangGraph agents
- ABI SuperAssistant: Central coordinator that analyzes requests and routes to optimal resources
- AI Model Agents: Access to ChatGPT, Claude, Gemini, Grok, Llama, Mistral, and local models
- Domain Expert Agents: 20+ specialized agents (Software Engineer, Data Analyst, Content Creator, etc.)
- Application Expert Agents: 20+ specialized agents (GitHub, Google, LinkedIn, PowerPoint, etc.)
3. Adaptive Storage Layer built on a hexagonal architecture
- Semantic Knowledge Graph: RDF triples in BFO-compliant ontologies for reasoning and relationships (default: Oxigraph)
- Vector Database: Intent embeddings for semantic similarity matching and context understanding (default: Qdrant)
- Memory System: Conversation history and persistent context (default: PostgreSQL)
- File Storage: Generated reports, documents, and workflow outputs (default: AWS S3)
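The hexagonal ("ports and adapters") idea behind this layer can be sketched in a few lines. The class and function names here are illustrative, not ABI's real code: agents program against a port interface, and concrete backends (Oxigraph, Qdrant, PostgreSQL, S3 in the defaults above) plug in as adapters.

```python
from typing import Protocol

class TripleStorePort(Protocol):
    """Port: what agents need from a triplestore, backend-agnostic."""
    def query(self, sparql: str) -> list[dict]: ...

class InMemoryTripleStore:
    """Toy adapter satisfying the port, useful for local testing."""
    def __init__(self) -> None:
        self.triples: list[tuple[str, str, str]] = []

    def add(self, s: str, p: str, o: str) -> None:
        self.triples.append((s, p, o))

    def query(self, sparql: str) -> list[dict]:
        # A real adapter would forward `sparql` to Oxigraph; this toy
        # one ignores it and returns every stored triple.
        return [{"s": s, "p": p, "o": o} for s, p, o in self.triples]

def count_facts(store: TripleStorePort) -> int:
    # Works with any adapter, because it depends only on the port.
    return len(store.query("SELECT * WHERE { ?s ?p ?o }"))

store = InMemoryTripleStore()
store.add("abi:Agent1", "rdf:type", "abi:Agent")
print(count_facts(store))  # 1
```

Swapping PostgreSQL for another memory backend, or MinIO for S3, then only means writing another adapter against the same port.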
4. Execution Components Layer
- Ontologies: BFO-structured knowledge that defines how data relates and flows
- Pipelines: Automated data processing that transforms raw information into semantic knowledge
- Workflows: Business process automation triggered by user requests or system events
- Integrations: Connections to external APIs, databases, and applications
- Analytics: Real-time dashboards and reporting capabilities
How a request flows through the system:
- Request Analysis: ABI receives your request through any interface
- Semantic Understanding: Vector search and SPARQL queries identify intent and context
- Agent Routing: Knowledge graph determines the best agent/model combination
- Resource Coordination: Agents access ontologies, trigger workflows, and use integrations as needed
- Knowledge Creation: Results are stored back into the knowledge graph for future reasoning
- Kinetic Actions: System automatically triggers related processes and workflows based on new knowledge
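The six steps above can be condensed into a hypothetical handler. Every name below is a placeholder for ABI's internal machinery, and the intent detection is a stand-in for the real vector search and SPARQL queries.

```python
def handle_request(request: str, graph: list[tuple[str, str, str]]) -> str:
    # 1-2. Analyze the request and identify intent (stand-in for
    #      embedding search + SPARQL over the knowledge graph).
    intent = "analysis" if "analyze" in request.lower() else "general"
    # 3. Route to the agent the knowledge graph deems best for the intent.
    agent = {"analysis": "DataAnalystAgent"}.get(intent, "ABISuperAssistant")
    # 4. The agent produces a result (resource coordination elided).
    result = f"{agent} handled: {request}"
    # 5. Knowledge creation: store the result back for future reasoning.
    graph.append((agent, "abi:produced", result))
    # 6. Kinetic action: the new knowledge triggers follow-up work.
    if intent == "analysis":
        graph.append(("system", "abi:triggered", "refresh_dashboard"))
    return result

kg: list[tuple[str, str, str]] = []
print(handle_request("analyze churn", kg))
print(len(kg))  # 2 triples: the result plus the triggered action
```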
Ontology-Based AI to Preserve the Freedom to Reason: The convergence of semantic alignment and kinetic action through ontology-driven systems represents one of the most powerful technologies ever created. When such transformative capabilities exist behind closed doors or within a single organization's control, it threatens the fundamental freedom to reason and act independently. We believe this power should be distributed, not concentrated - because the ability to understand, reason about, and act upon complex information is a cornerstone of human autonomy and democratic society.
Core Capabilities for the Innovation Community:
- Ontology-Driven Intelligence: Semantic understanding that connects data, meaning, and action
- Knowledge Graph Operations: Real-time reasoning over complex, interconnected information
- Automated Decision Systems: AI that understands context and triggers appropriate responses
- Semantic Data Integration: Connect disparate systems through shared understanding, not just APIs
The Open Source Advantage:
- Research & Education: Academic institutions and researchers can explore ontological AI without barriers
- Innovation Acceleration: Developers can build upon proven semantic architectures
- Community Collaboration: Collective advancement of ontology-based AI methodologies
- Accessible Entry Point: Learn and experiment with enterprise-grade semantic technologies
Moreover, this project is built with international standards and regulatory frameworks as guiding principles, including ISO/IEC 42001:2023 (AI Management Systems), ISO/IEC 21838-2:2021 (Basic Formal Ontology), and forward compatibility with emerging regulations such as the EU AI Act. ABI thus provides a customizable framework for individuals and organizations aiming to build intelligent, automated systems aligned with their needs.
For innovators who want to own their AI:
- Individuals: Run locally, choose your models, own your data
- Pro: Automate workflows, optimize AI costs
- Teams: Share knowledge, build custom agents
- Enterprise: Deploy organization-wide, integrate legacy systems, maintain full control
ABI Local & Open Source + Naas.ai Cloud = Complete AI Operating System
- Local: Open source, privacy-first, full control
- Cloud: Managed infrastructure, marketplace, enterprise features
- Hybrid: Start local, scale cloud, seamless migration
For cloud users, we offer Naas AI Credits that aggregate multiple AI models on our platform - giving you access to ChatGPT, Claude, Gemini, and more through a single, cost-optimized interface. Available for anyone with a naas.ai workspace account.
- Supervisor: ABI, a Supervisor Agent with intelligent routing across all AI models.
- Cloud: ChatGPT, Claude, Gemini, Grok, Llama, Mistral, Perplexity
- Local: Privacy-focused Qwen, DeepSeek, Gemma (via Ollama)

- Domain Expert Agents: 20+ specialized agents (Software Engineer, Content Creator, Data Engineer, Accountant, Project Manager, etc.)
- Applications Module: GitHub, LinkedIn, Google Search, PostgreSQL, ArXiv, Naas, Git, PowerPoint, and more
- Modular Architecture: Enable/disable any module via config.yaml
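A module toggle might look like the following. This is a hypothetical shape for illustration only; consult the repository for the actual config.yaml schema.

```yaml
# Hypothetical config.yaml fragment: per-module enable/disable switches.
modules:
  - name: github
    enabled: true
  - name: linkedin
    enabled: false
```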

- Semantic Knowledge Graph: BFO-compliant ontologies with Oxigraph backend
- SPARQL Queries: 30+ optimized queries for intelligent agent routing
- Vector Search: Intent matching via embeddings and similarity search
- Object Storage: File storage and retrieval with MinIO compatibility
- Memory: Persistent context and conversation history storage
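Intent matching by vector similarity reduces to a nearest-neighbor search over embeddings. The sketch below uses tiny hand-made vectors so the math stays visible; a real deployment would use model-generated embeddings stored in Qdrant, and the intent names here are invented.

```python
import math

# Toy "embeddings": each known intent maps to a small vector.
INTENT_VECTORS = {
    "route_to_analyst": [0.9, 0.1, 0.0],
    "route_to_coder":   [0.1, 0.9, 0.0],
}

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def match_intent(query_vec: list[float]) -> str:
    # Return the intent whose embedding is most similar to the query's.
    return max(INTENT_VECTORS, key=lambda k: cosine(query_vec, INTENT_VECTORS[k]))

print(match_intent([0.8, 0.2, 0.1]))  # route_to_analyst
```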

- Integrations: Seamless connectivity with external APIs and data export capabilities
- Workflows: End-to-end automation of complex business processes with intelligent task orchestration
- Pipelines: Data processing and semantic transformation
- Event-Driven: Real-time reactivity with automatic triggers based on knowledge graph updates
- Cache System: Intelligent caching layer to optimize API usage and manage rate limits efficiently
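The event-driven reactivity above amounts to callbacks subscribed to knowledge-graph updates. The observer-style sketch below is purely illustrative; ABI's actual trigger mechanism lives in its triplestore layer, and the class and predicate names here are invented.

```python
from typing import Callable

Triple = tuple[str, str, str]

class TriggeringGraph:
    """Toy graph that fires registered callbacks on matching inserts."""
    def __init__(self) -> None:
        self.triples: list[Triple] = []
        self.triggers: list[tuple[str, Callable[[Triple], None]]] = []

    def on_predicate(self, predicate: str, fn: Callable[[Triple], None]) -> None:
        # Subscribe `fn` to every future triple using `predicate`.
        self.triggers.append((predicate, fn))

    def insert(self, triple: Triple) -> None:
        self.triples.append(triple)
        # Kinetic action: new knowledge triggers related processes.
        for predicate, fn in self.triggers:
            if triple[1] == predicate:
                fn(triple)

fired: list[str] = []
g = TriggeringGraph()
g.on_predicate("abi:completed", lambda t: fired.append(f"workflow for {t[0]}"))
g.insert(("report42", "abi:completed", "true"))
print(fired)  # ['workflow for report42']
```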

- Terminal: Interactive chat with any agent
- REST API: HTTP endpoints for all agents and workflows
- MCP Protocol: Integration with Claude Desktop and VS Code
- Web UI: Knowledge graph explorer and SPARQL editor
git clone https://github.com/jupyter-naas/abi.git
cd abi
make
What happens:
- Setup wizard walks you through configuration (API keys, preferences)
- Local services start automatically (knowledge graph, database)
- ABI chat opens - your AI SuperAssistant that routes to the best model for each task
Chat commands:
- @claude analyze this data - Route to Claude for analysis
- @qwen write some code - Use local Qwen for privacy
- make chat agent=ChatGPTAgent - Run ChatGPT directly
- /? - Show all available agents and commands
- /exit - End session
Other interfaces:
make api # REST API (http://localhost:9879)
make oxigraph-explorer # Knowledge graph browser
ABI is under active research and development, with projects deployed in collaboration between:
- NaasAI - Applied research lab focused on creating a universal data & AI platform connecting the needs of individuals and organizations
- OpenTeams - Open SaaS infrastructure platform led by Python ecosystem pioneers, providing enterprise-grade open source AI/ML solutions and packaging expertise
- University at Buffalo - Research university providing academic foundation and institutional support
- National Center for Ontological Research (NCOR) - Leading research center for ontological foundations and formal knowledge representation
- Forvis Mazars - Global audit and consulting firm providing governance and risk management expertise
This collaborative effort aims to better manage and control the way we use AI in society, ensuring responsible development and deployment of agentic AI systems through rigorous research, international standards compliance, and professional oversight.
ABI development is supported through:
- Applied Research Grants - Funding for ontological AI research and development
- Academic Partnership - University at Buffalo research collaboration and institutional support
- Industry Partnerships - Strategic partnerships including Quansight, Forvis Mazars, VSquared AI, and other enterprise collaborators
- Open Source Community - Community contributions, collaborative development, and infrastructure support from OpenTeams
For funding opportunities, research partnerships, or enterprise support, contact us at [email protected]
We welcome contributions! Please read the contributing guidelines for more information.
The ABI Framework is open source and available under the MIT license. As this project evolves rapidly, professionals and enterprises are encouraged to contact our support team at [email protected] for custom services.
Alternative AI tools for abi
Similar Open Source Tools


ComfyUI-Copilot
ComfyUI-Copilot is an intelligent assistant built on the Comfy-UI framework that simplifies and enhances the AI algorithm debugging and deployment process through natural language interactions. It offers intuitive node recommendations, workflow building aids, and model querying services to streamline development processes. With features like interactive Q&A bot, natural language node suggestions, smart workflow assistance, and model querying, ComfyUI-Copilot aims to lower the barriers to entry for beginners, boost development efficiency with AI-driven suggestions, and provide real-time assistance for developers.

ai
Jetify's AI SDK for Go is a unified interface for interacting with multiple AI providers including OpenAI, Anthropic, and more. It addresses the challenges of fragmented ecosystems, vendor lock-in, poor Go developer experience, and complex multi-modal handling by providing a unified interface, Go-first design, production-ready features, multi-modal support, and extensible architecture. The SDK supports language models, embeddings, image generation, multi-provider support, multi-modal inputs, tool calling, and structured outputs.

chatbox
Chatbox is a desktop client for ChatGPT, Claude, and other LLMs, providing a user-friendly interface for AI copilot assistance on Windows, Mac, and Linux. It offers features like local data storage, multiple LLM provider support, image generation with Dall-E-3, enhanced prompting, keyboard shortcuts, and more. Users can collaborate, access the tool on various platforms, and enjoy multilingual support. Chatbox is constantly evolving with new features to enhance the user experience.

chatbox
Chatbox is a desktop client for ChatGPT, Claude, and other LLMs, providing features like local data storage, multiple LLM provider support, image generation, enhanced prompting, keyboard shortcuts, and more. It offers a user-friendly interface with dark theme, team collaboration, cross-platform availability, web version access, iOS & Android apps, multilingual support, and ongoing feature enhancements. Developed for prompt and API debugging, it has gained popularity for daily chatting and professional role-playing with AI assistance.

parlant
Parlant is a structured approach to building and guiding customer-facing AI agents. It allows developers to create and manage robust AI agents, providing specific feedback on agent behavior and helping understand user intentions better. With features like guidelines, glossary, coherence checks, dynamic context, and guided tool use, Parlant offers control over agent responses and behavior. Developer-friendly aspects include instant changes, Git integration, clean architecture, and type safety. It enables confident deployment with scalability, effective debugging, and validation before deployment. Parlant works with major LLM providers and offers client SDKs for Python and TypeScript. The tool facilitates natural customer interactions through asynchronous communication and provides a chat UI for testing new behaviors before deployment.

fast-llm-security-guardrails
ZenGuard AI enables AI developers to integrate production-level, low-code LLM (Large Language Model) guardrails into their generative AI applications effortlessly. With ZenGuard AI, ensure your application operates within trusted boundaries, is protected from prompt injections, and maintains user privacy without compromising on performance.

transformerlab-app
Transformer Lab is an app that allows users to experiment with Large Language Models by providing features such as one-click download of popular models, finetuning across different hardware, RLHF and Preference Optimization, working with LLMs across different operating systems, chatting with models, using different inference engines, evaluating models, building datasets for training, calculating embeddings, providing a full REST API, running in the cloud, converting models across platforms, supporting plugins, embedded Monaco code editor, prompt editing, inference logs, all through a simple cross-platform GUI.

memU
MemU is an open-source memory framework designed for AI companions, offering high accuracy, fast retrieval, and cost-effectiveness. It serves as an intelligent 'memory folder' that adapts to various AI companion scenarios. With MemU, users can create AI companions that remember them, learn their preferences, and evolve through interactions. The framework provides advanced retrieval strategies, 24/7 support, and is specialized for AI companions. MemU offers cloud, enterprise, and self-hosting options, with features like memory organization, interconnected knowledge graph, continuous self-improvement, and adaptive forgetting mechanism. It boasts high memory accuracy, fast retrieval, and low cost, making it suitable for building intelligent agents with persistent memory capabilities.

lancedb
LanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrieval, filtering, and management of embeddings. The key features of LanceDB include: Production-scale vector search with no servers to manage. Store, query, and filter vectors, metadata, and multi-modal data (text, images, videos, point clouds, and more). Support for vector similarity search, full-text search, and SQL. Native Python and Javascript/Typescript support. Zero-copy, automatic versioning, manage versions of your data without needing extra infrastructure. GPU support in building vector index(*). Ecosystem integrations with LangChain, LlamaIndex, Apache-Arrow, Pandas, Polars, DuckDB, and more on the way. LanceDB's core is written in Rust and is built using Lance, an open-source columnar format designed for performant ML workloads.

VisioFirm
VisioFirm is an open-source, AI-powered image annotation tool designed to accelerate labeling for computer vision tasks like classification, object detection, oriented bounding boxes (OBB), segmentation and video annotation. Built for speed and simplicity, it leverages state-of-the-art models for semi-automated pre-annotations, allowing you to focus on refining rather than starting from scratch. Whether you're preparing datasets for YOLO, SAM, or custom models, VisioFirm streamlines your workflow with an intuitive web interface and powerful backend. Perfect for researchers, data scientists, and ML engineers handling large image datasets: get high-quality annotations in minutes, not hours!

ramparts
Ramparts is a fast, lightweight security scanner designed for the Model Context Protocol (MCP) ecosystem. It scans MCP servers to identify vulnerabilities and provides security features such as discovering capabilities, multi-transport support, session management, static analysis, cross-origin analysis, LLM-powered analysis, and risk assessment. The tool is suitable for developers, MCP users, and MCP developers to ensure the security of their connections. It can be used for security audits, development testing, CI/CD integration, and compliance with security requirements for AI agent deployments.

dspy.rb
DSPy.rb is a Ruby framework for building reliable LLM applications using composable, type-safe modules. It enables developers to define typed signatures and compose them into pipelines, offering a more structured approach compared to traditional prompting. The framework embraces Ruby conventions and adds innovations like CodeAct agents and enhanced production instrumentation, resulting in scalable LLM applications that are robust and efficient. DSPy.rb is actively developed, with a focus on stability and real-world feedback through the 0.x series before reaching a stable v1.0 API.

robustmq
RobustMQ is a next-generation, high-performance, multi-protocol message queue built in Rust. It aims to create a unified messaging infrastructure tailored for modern cloud-native and AI systems. With features like high performance, distributed architecture, multi-protocol support, pluggable storage, cloud-native readiness, multi-tenancy, security features, observability, and user-friendliness, RobustMQ is designed to be production-ready and become a top-level Apache project in the message queue ecosystem by the second half of 2025.

NotelyVoice
Notely Voice is a free, modern, cross-platform AI voice transcription and note-taking application. It offers powerful Whisper AI Voice to Text capabilities, making it ideal for students, professionals, doctors, researchers, and anyone in need of hands-free note-taking. The app features rich text editing, simple search, smart filtering, organization with folders and tags, advanced speech-to-text, offline capability, seamless integration, audio recording, theming, cross-platform support, and sharing functionality. It includes memory-efficient audio processing, chunking configuration, and utilizes OpenAI Whisper for speech recognition technology. Built with Kotlin, Compose Multiplatform, Coroutines, Android Architecture, ViewModel, Koin, Material 3, Whisper AI, and Native Compose Navigation, Notely follows Android Architecture principles with distinct layers for UI, presentation, domain, and data.

chunkhound
ChunkHound is a tool that transforms your codebase into a searchable knowledge base for AI assistants using semantic and regex search. It integrates with AI assistants via the Model Context Protocol (MCP) and offers features such as cAST algorithm for semantic code chunking, multi-hop semantic search, natural language queries, regex search without API keys, support for 22 languages, and local-first architecture. It provides intelligent code discovery by following semantic relationships and discovering related implementations. ChunkHound is built on the cAST algorithm from Carnegie Mellon University, ensuring structure-aware chunking that preserves code meaning. It supports universal language parsing and offers efficient updates for large codebases.
For similar tasks

askui
AskUI is a reliable, automated end-to-end automation tool that only depends on what is shown on your screen instead of the technology or platform you are running on.

For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.