OpenMemory
Local persistent memory store for LLM applications, including Claude Desktop, GitHub Copilot, Codex, Antigravity, and others.
Stars: 3316
OpenMemory is a cognitive memory engine for AI agents, providing real long-term memory capabilities beyond simple embeddings. It is self-hosted and supports Python + Node SDKs, with integrations for various tools like LangChain, CrewAI, AutoGen, and more. Users can ingest data from sources like GitHub, Notion, Google Drive, and others directly into memory. OpenMemory offers explainable traces for recalled information and supports multi-sector memory, temporal reasoning, decay engine, waypoint graph, and more. It aims to provide a true memory system rather than just a vector database with marketing copy, enabling users to build agents, copilots, journaling systems, and coding assistants that can remember and reason effectively.
README:
Real long-term memory for AI agents. Not RAG. Not a vector DB. Self-hosted, Python + Node.
OpenMemory is a cognitive memory engine for LLMs and agents.
- 🧠 Real long-term memory (not just embeddings in a table)
- 💾 Self-hosted, local-first (SQLite / Postgres)
- 🐍 Python + 🟦 Node SDKs
- 🧩 Integrations: LangChain, CrewAI, AutoGen, Streamlit, MCP, VS Code
- 📥 Sources: GitHub, Notion, Google Drive, OneDrive, Web Crawler
- 🔍 Explainable traces (see why something was recalled)
Your model stays stateless. Your app stops being amnesiac.
You can either embed OpenMemory in your app via the SDKs or spin up a shared backend (HTTP API + MCP + dashboard). Use the SDKs when you want embedded local memory; use the server when you want multi‑user, org‑wide memory.
Install:
```bash
pip install openmemory-py
```

Use:

```python
from openmemory.client import Memory
mem = Memory()
mem.add("user prefers dark mode", user_id="u1")
results = mem.search("preferences", user_id="u1")
await mem.delete("memory_id")
```

Note:
`add`, `search`, `get`, and `delete` are async. Use `await` in async contexts.
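In an async application, that means wrapping calls in a coroutine. A minimal sketch (method names come from the quickstart above; the shape of the returned results is an assumption, so check the Python SDK docs):

```python
# Minimal async usage sketch; the result structure is an assumption.
import asyncio
from openmemory.client import Memory

async def main():
    mem = Memory()
    await mem.add("user prefers dark mode", user_id="u1")
    results = await mem.search("preferences", user_id="u1")
    print(results)

asyncio.run(main())
```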
OpenAI client integration:

```python
from openai import OpenAI

mem = Memory()
client = mem.openai.register(OpenAI(), user_id="u1")
resp = client.chat.completions.create(...)
```

LangChain message history:

```python
from openmemory.integrations.langchain import OpenMemoryChatMessageHistory

history = OpenMemoryChatMessageHistory(memory=mem, user_id="u1")
```

OpenMemory is designed to sit behind agent frameworks and UIs:
- Crew-style agents: use `Memory` as a shared long-term store
- AutoGen-style orchestrations: store dialog + tool calls as episodic memory
- Streamlit apps: give each user a persistent memory by `user_id` (a sketch follows below)
See the integrations section in the docs for concrete patterns.
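As a rough illustration of the Streamlit pattern, the sketch below keys all reads and writes to a per-user ID. It assumes the synchronous `Memory` calls from the quickstart; the `"demo-user"` fallback stands in for whatever your real auth layer provides:

```python
# Per-user memory in Streamlit: a hedged sketch, not an official pattern.
import streamlit as st
from openmemory.client import Memory

mem = Memory()
user_id = st.session_state.get("user_id", "demo-user")  # stand-in for real auth

note = st.text_input("Something to remember")
if note:
    mem.add(note, user_id=user_id)

query = st.text_input("Recall")
if query:
    st.write(mem.search(query, user_id=user_id))
```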
Install:
```bash
npm install openmemory-js
```

Use:

```javascript
import { Memory } from "openmemory-js"
const mem = new Memory()
await mem.add("user likes spicy food", { user_id: "u1" })
const results = await mem.search("food?", { user_id: "u1" })
await mem.delete("memory_id")
```

Drop this into:
- Node backends
- CLIs
- local tools
- anything that needs durable memory without running a separate service.
Ingest data from external sources directly into memory:
```python
# python
github = mem.source("github")
await github.connect(token="ghp_...")
await github.ingest_all(repo="owner/repo")
```

```javascript
// javascript
const github = await mem.source("github")
await github.connect({ token: "ghp_..." })
await github.ingest_all({ repo: "owner/repo" })
```

Available connectors: `github`, `notion`, `google_drive`, `google_sheets`, `google_slides`, `onedrive`, `web_crawler`.
OpenMemory can run inside your app or as a central service.
- ✅ Local SQLite by default
- ✅ Supports external DBs (via config)
- ✅ Great fit for LangChain / LangGraph / CrewAI / notebooks
Docs: https://openmemory.cavira.app/docs/sdks/python
- Same cognitive model as Python
- Ideal for JS/TS applications
- Can either run fully local or talk to a central backend
Docs: https://openmemory.cavira.app/docs/sdks/javascript
Use when you want:
- org‑wide memory
- HTTP API
- dashboard
- MCP server for Claude / Cursor / Windsurf
Run from source:

```bash
git clone https://github.com/CaviraOSS/OpenMemory.git
cd OpenMemory
cp .env.example .env
cd backend
npm install
npm run dev # default :8080
```

Or with Docker:

```bash
docker compose up --build -d
```

The backend exposes:
- `/api/memory/*` – memory operations
- `/api/temporal/*` – temporal knowledge graph
- `/mcp` – MCP server
- dashboard UI
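For example, a script can talk to the memory API over HTTP. The sketch below is hypothetical: only the `/api/memory/*` prefix comes from this README, and the concrete route and payload field names are assumptions, so check the API docs for the real schema:

```python
# Hypothetical request shape; the route name and JSON fields are assumptions.
import requests

resp = requests.post(
    "http://localhost:8080/api/memory/add",
    json={"content": "user prefers dark mode", "user_id": "u1"},
)
print(resp.status_code, resp.json())
```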
LLMs forget everything between messages.
Most “memory” solutions are really just RAG pipelines:
- text is chunked
- embedded into a vector store
- retrieved by similarity
They don’t understand:
- whether something is a fact, event, preference, or feeling
- how recent / important it is
- how it links to other memories
- what was true at a specific time
Cloud memory APIs add:
- vendor lock‑in
- latency
- opaque behavior
- privacy problems
OpenMemory gives you an actual memory system:
- 🧠 Multi‑sector memory (episodic, semantic, procedural, emotional, reflective)
- ⏱ Temporal reasoning (what was true when)
- 📉 Decay & reinforcement instead of dumb TTLs
- 🕸 Waypoint graph (associative, traversable links)
- 🔍 Explainable traces (see which nodes were recalled and why)
- 🏠 Self‑hosted, local‑first, you own the DB
- 🔌 SDKs + server + VS Code + MCP
It behaves like a memory module, not a “vector DB with marketing copy”.
Vector DB + LangChain (cloud-heavy, ceremony):

```python
import os
import time
from langchain.chains import ConversationChain
from langchain.memory import VectorStoreRetrieverMemory
from langchain_community.vectorstores import Pinecone
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
os.environ["PINECONE_API_KEY"] = "sk-..."
os.environ["OPENAI_API_KEY"] = "sk-..."
time.sleep(3) # cloud warmup
embeddings = OpenAIEmbeddings()
pinecone = Pinecone.from_existing_index(index_name="my-memory", embedding=embeddings)
retriever = pinecone.as_retriever(search_kwargs={"k": 2})
memory = VectorStoreRetrieverMemory(retriever=retriever)
conversation = ConversationChain(llm=ChatOpenAI(), memory=memory)
conversation.predict(input="I'm allergic to peanuts")
```

OpenMemory (3 lines, local file, no vendor lock-in):

```python
from openmemory.client import Memory
mem = Memory()
mem.add("user allergic to peanuts", user_id="user123")
results = mem.search("allergies", user_id="user123")✅ Zero cloud config • ✅ Local SQLite • ✅ Offline‑friendly • ✅ Your DB, your schema
- Multi-sector memory – episodic (events), semantic (facts), procedural (skills), emotional (feelings), reflective (insights).
- Temporal knowledge graph – `valid_from`/`valid_to`, point‑in‑time truth, evolution over time.
- Composite scoring – salience + recency + coactivation, not just cosine distance (a toy sketch follows this list).
- Decay engine – adaptive forgetting per sector instead of hard TTLs.
- Explainable recall – “waypoint” traces that show exactly which nodes were used in context.
- Embeddings – OpenAI, Gemini, Ollama, AWS, synthetic fallback.
- Integrations – LangChain, CrewAI, AutoGen, Streamlit, MCP, VS Code, IDEs.
- Connectors – import from GitHub, Notion, Google Drive, Google Sheets/Slides, OneDrive, Web Crawler.
- Migration tool – import memories from Mem0, Zep, Supermemory and more.
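To make “composite scoring” concrete, here is a toy illustration of blending similarity with salience, recency, and coactivation. This is not OpenMemory’s actual formula; the weights and decay shape are invented purely to show the idea:

```python
# Toy composite score: NOT the engine's real formula, just the principle
# that cosine similarity is one signal among several.
import math
import time

def composite_score(similarity, salience, last_access_ts, coactivations,
                    w_sim=0.5, w_sal=0.2, w_rec=0.2, w_coact=0.1,
                    half_life_days=30.0):
    age_days = (time.time() - last_access_ts) / 86400
    recency = math.exp(-math.log(2) * age_days / half_life_days)  # halves every half_life_days
    coact = 1.0 - math.exp(-coactivations)  # saturating bonus for co-recalled nodes
    return w_sim * similarity + w_sal * salience + w_rec * recency + w_coact * coact
```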
If you’re building agents, copilots, journaling systems, knowledge workers, or coding assistants, OpenMemory is the piece that turns them from “goldfish” into something that actually remembers.
OpenMemory ships a native MCP server, so any MCP‑aware client can treat it as a tool.
```bash
claude mcp add --transport http openmemory http://localhost:8080/mcp
```

Or add to `.mcp.json`:

```json
{
"mcpServers": {
"openmemory": {
"type": "http",
"url": "http://localhost:8080/mcp"
}
}
}
```

Available tools include:
- `openmemory_query`
- `openmemory_store`
- `openmemory_list`
- `openmemory_get`
- `openmemory_reinforce`
Your IDE assistant can query, store, list, and reinforce memories without you wiring every call manually.
OpenMemory treats time as a first‑class dimension.
- `valid_from`/`valid_to` – truth windows
- auto‑evolution – new facts close previous ones
- confidence decay – old facts fade gracefully
- point‑in‑time queries – “what was true on X?”
- timelines – reconstruct an entity’s history
- change detection – see when something flipped
```
POST /api/temporal/fact
{
"subject": "CompanyX",
"predicate": "has_CEO",
"object": "Alice",
"valid_from": "2021-01-01"
}
```

Then later:

```
POST /api/temporal/fact
{
"subject": "CompanyX",
"predicate": "has_CEO",
"object": "Bob",
"valid_from": "2024-04-10"
}
```

Alice’s term is automatically closed; timeline queries stay sane.
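A point-in-time lookup would then ask what was true on a given date. The endpoint below is hypothetical (only the `/api/temporal/*` prefix is documented above); see the temporal docs for the real query route:

```python
# Hypothetical point-in-time query; path and parameter names are assumptions.
import requests

resp = requests.get(
    "http://localhost:8080/api/temporal/query",
    params={"subject": "CompanyX", "predicate": "has_CEO", "at": "2022-06-01"},
)
print(resp.json())  # expected answer: Alice, whose term covers 2022-06-01
```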
The `opm` CLI talks directly to the engine / server.

```bash
cd packages/openmemory-js
npm install
npm run build
npm link # adds `opm` to your PATH
```

```bash
# Start the API server
opm serve
# In another terminal:
opm health
opm add "Recall that I prefer TypeScript over Python" --tags preference
opm query "language preference"opm add "user prefers dark mode" --user u1 --tags prefs
opm query "preferences" --user u1 --limit 5
opm list --user u1
opm delete <id>
opm reinforce <id>
opm stats
```

Useful for scripting, debugging, and non‑LLM pipelines that still want memory.
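For instance, a non-LLM pipeline can shell out to the CLI from Python. A small sketch (the CLI’s output format isn’t specified here, so stdout is printed raw):

```python
# Scripting sketch around the opm CLI; stdout is treated as opaque text.
import subprocess

out = subprocess.run(
    ["opm", "query", "preferences", "--user", "u1", "--limit", "5"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)
```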
OpenMemory uses Hierarchical Memory Decomposition with a temporal graph on top.
```mermaid
graph TB
classDef inputStyle fill:#eceff1,stroke:#546e7a,stroke-width:2px,color:#37474f
classDef processStyle fill:#e3f2fd,stroke:#1976d2,stroke-width:2px,color:#0d47a1
classDef sectorStyle fill:#fff3e0,stroke:#f57c00,stroke-width:2px,color:#e65100
classDef storageStyle fill:#fce4ec,stroke:#c2185b,stroke-width:2px,color:#880e4f
classDef engineStyle fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px,color:#4a148c
classDef outputStyle fill:#e8f5e9,stroke:#388e3c,stroke-width:2px,color:#1b5e20
classDef graphStyle fill:#e1f5fe,stroke:#0277bd,stroke-width:2px,color:#01579b
INPUT[Input / Query]:::inputStyle
CLASSIFIER[Sector Classifier]:::processStyle
EPISODIC[Episodic]:::sectorStyle
SEMANTIC[Semantic]:::sectorStyle
PROCEDURAL[Procedural]:::sectorStyle
EMOTIONAL[Emotional]:::sectorStyle
REFLECTIVE[Reflective]:::sectorStyle
EMBED[Embedding Engine]:::processStyle
SQLITE[(SQLite/Postgres<br/>Memories / Vectors / Waypoints)]:::storageStyle
TEMPORAL[(Temporal Graph)]:::storageStyle
subgraph RECALL_ENGINE["Recall Engine"]
VECTOR[Vector Search]:::engineStyle
WAYPOINT[Waypoint Graph]:::engineStyle
SCORING[Composite Scoring]:::engineStyle
DECAY[Decay Engine]:::engineStyle
end
subgraph TKG["Temporal KG"]
FACTS[Facts]:::graphStyle
TIMELINE[Timeline]:::graphStyle
end
CONSOLIDATE[Consolidation]:::processStyle
REFLECT[Reflection]:::processStyle
OUTPUT[Recall + Trace]:::outputStyle
INPUT --> CLASSIFIER
CLASSIFIER --> EPISODIC
CLASSIFIER --> SEMANTIC
CLASSIFIER --> PROCEDURAL
CLASSIFIER --> EMOTIONAL
CLASSIFIER --> REFLECTIVE
EPISODIC --> EMBED
SEMANTIC --> EMBED
PROCEDURAL --> EMBED
EMOTIONAL --> EMBED
REFLECTIVE --> EMBED
EMBED --> SQLITE
EMBED --> TEMPORAL
SQLITE --> VECTOR
SQLITE --> WAYPOINT
SQLITE --> DECAY
TEMPORAL --> FACTS
FACTS --> TIMELINE
VECTOR --> SCORING
WAYPOINT --> SCORING
DECAY --> SCORING
TIMELINE --> SCORING
SCORING --> CONSOLIDATE
CONSOLIDATE --> REFLECT
REFLECT --> OUTPUT
OUTPUT -.->|Reinforce| WAYPOINT
OUTPUT -.->|Salience| DECAY
```

OpenMemory ships a migration tool to import data from other memory systems.
Supported:
- Mem0
- Zep
- Supermemory
Example:

```bash
cd migrate
python -m migrate --from zep --api-key ZEP_KEY --verify
```

(See `migrate/` and docs for detailed commands per provider.)
- 🧬 Learned sector classifier (trainable on your data)
- 🕸 Federated / clustered memory nodes
- 🤝 Deeper LangGraph / CrewAI / AutoGen integrations
- 🔭 Memory visualizer 2.0
- 🔐 Pluggable encryption at rest
Star the repo to follow along.
Issues and PRs are welcome.
- Bugs: https://github.com/CaviraOSS/OpenMemory/issues
- Feature requests: use the GitHub issue templates
- Before large changes, open a discussion or small design PR
OpenMemory is licensed under Apache 2.0. See LICENSE for details.