
deep-research
Use any LLM (Large Language Model) for Deep Research. Supports an SSE API and an MCP server.
Stars: 3984

Deep Research is a lightning-fast tool that uses powerful AI models to generate comprehensive research reports in just a few minutes. It leverages advanced "Thinking" and "Task" models, combined with an internet connection, to provide fast and insightful analysis on various topics, and it protects privacy by processing and storing all data locally. The tool supports multi-platform deployment, a variety of large language models, web search, knowledge graph generation, research history preservation, local and server APIs, PWA installation, multi-key payloads, and multiple languages, and it is built with modern technologies such as Next.js and Shadcn UI. Deep Research is open source under the MIT License.
README:
Lightning-Fast Deep Research Report
Deep Research uses a variety of powerful AI models to generate in-depth research reports in just a few minutes. It leverages advanced "Thinking" and "Task" models, combined with an internet connection, to provide fast and insightful analysis on a variety of topics. Your privacy is paramount - all data is processed and stored locally.
- Rapid Deep Research: Generates comprehensive research reports in about 2 minutes, significantly accelerating your research process.
- Multi-platform Support: Supports rapid deployment to Vercel, Cloudflare and other platforms.
- Powered by AI: Utilizes advanced AI models for accurate and insightful analysis.
- Privacy-Focused: Your data remains private and secure, as all data is stored locally in your browser.
- Multi-LLM Support: Supports a variety of mainstream large language models, including Gemini, OpenAI, Anthropic, Deepseek, Grok, Mistral, Azure OpenAI, any OpenAI-compatible LLMs, OpenRouter, Ollama, etc.
- Web Search Support: Supports search engines such as Searxng, Tavily, Firecrawl, Exa, Bocha, etc., allowing LLMs without built-in search to use web search conveniently.
- Thinking & Task Models: Employs sophisticated "Thinking" and "Task" models to balance depth and speed, ensuring high-quality results quickly. The research models can be switched.
- Further Research: You can refine or adjust the research content at any stage of the project and re-run the research from that stage.
- Local Knowledge Base: Supports uploading and processing text, Office, PDF, and other resource files to build a local knowledge base.
- Artifact: Supports editing of research content with two editing modes, WYSIWYM and Markdown. You can adjust the reading level and article length, and translate the full text.
- Knowledge Graph: Supports one-click generation of a knowledge graph, giving you a systematic understanding of the report content.
- Research History: Research history can be preserved, so you can review previous research results at any time and conduct in-depth research again.
- Local & Server API Support: Offers flexibility with both local and server-side API calling options to suit your needs.
- SaaS and MCP Support: You can use this project as a deep research service (SaaS) through the SSE API, or use it in other AI services through the MCP server.
- PWA Support: With Progressive Web App (PWA) technology, you can install and use the project like a native application.
- Multi-Key Payload Support: Multiple API keys can be supplied to improve API response efficiency.
- Multi-language Support: English, 简体中文, Español.
- Built with Modern Technologies: Developed using Next.js 15 and Shadcn UI, ensuring a modern, performant, and visually appealing user experience.
- MIT Licensed: Open-source and freely available for personal and commercial use under the MIT License.
- [x] Support preservation of research history
- [x] Support editing final report and search results
- [x] Support for other LLM models
- [x] Support file upload and local knowledge base
- [x] Support SSE API and MCP server
- Get a Gemini API key
- One-click deploy the project; you can choose to deploy to Vercel or Cloudflare.
  Currently the project supports deployment to Cloudflare, but you need to follow "How to deploy to Cloudflare Pages" to do it.
- Start using:
  - Deploy the project to Vercel or Cloudflare
  - Set the LLM API key
  - Set the LLM API base URL (optional)
  - Start using
Follow these steps to get Deep Research up and running in your local browser.
- Clone the repository:
  git clone https://github.com/u14app/deep-research.git
  cd deep-research
- Install dependencies:
  pnpm install # or npm install or yarn install
- Set up environment variables:
  Rename the file env.tpl to .env, or create a .env file and write the variables to it.
  # For Development
  cp env.tpl .env.local
  # For Production
  cp env.tpl .env
- Run the development server:
  pnpm dev # or npm run dev or yarn dev
  Open your browser and visit http://localhost:3000 to access Deep Research.
The project allows a custom model list, but this only works in proxy mode. Add an environment variable named NEXT_PUBLIC_MODEL_LIST in the .env file or on the environment variables page.
A custom model list uses , to separate multiple models. To disable a model, prefix its name with the - symbol, i.e. -existing-model-name. To make only the specified models available, use -all,+new-model-name.
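For example, to hide all built-in models and expose a single one (the model id here is an illustrative placeholder):
NEXT_PUBLIC_MODEL_LIST=-all,+gemini-2.0-flash-exp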
Currently the project supports deployment to Cloudflare, but you need to follow How to deploy to Cloudflare Pages to do it.
The Docker version needs to be 20 or above, otherwise it will report that the image cannot be found.
⚠️ Note: Most of the time, the Docker image lags behind the latest release by 1 to 2 days, so an "update exists" prompt may keep appearing after deployment; this is normal.
docker pull xiangfa/deep-research:latest
docker run -d --name deep-research -p 3333:3000 xiangfa/deep-research
You can also specify additional environment variables:
docker run -d --name deep-research \
-p 3333:3000 \
-e ACCESS_PASSWORD=your-password \
-e GOOGLE_GENERATIVE_AI_API_KEY=AIzaSy... \
xiangfa/deep-research
or build your own docker image:
docker build -t deep-research .
docker run -d --name deep-research -p 3333:3000 deep-research
If you need to specify other environment variables, add -e key=value to the above command.
Deploy using docker-compose.yml:
version: '3.9'
services:
  deep-research:
    image: xiangfa/deep-research
    container_name: deep-research
    environment:
      - ACCESS_PASSWORD=your-password
      - GOOGLE_GENERATIVE_AI_API_KEY=AIzaSy...
    ports:
      - 3333:3000
or build your own docker compose:
docker compose -f docker-compose.yml build
You can also build a static page version directly, and then upload all files in the out directory to any website service that supports static pages, such as GitHub Pages, Cloudflare, Vercel, etc.
pnpm build:export
As mentioned in the "Getting Started" section, Deep Research utilizes the following environment variables for server-side API configurations:
Please refer to the file env.tpl for all available environment variables.
Important Notes on Environment Variables:
- Privacy Reminder: These environment variables are primarily used for server-side API calls. When using the local API mode, no API keys or server-side configurations are needed, further enhancing your privacy.
- Multi-key Support: Multiple keys are supported; separate each key with a comma, i.e. key1,key2,key3.
- Security Setting: By setting ACCESS_PASSWORD, you can better protect the security of the server API.
- Make variables effective: After adding or modifying an environment variable, redeploy the project for the changes to take effect.
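As a sketch, a server-side .env combining these notes might look like the following (the variable names appear in the examples above; the values are placeholders):
# Multiple keys for one provider, separated by commas
GOOGLE_GENERATIVE_AI_API_KEY=key1,key2,key3
# Protect the server-side API
ACCESS_PASSWORD=your-password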
Currently the project supports two forms of API: Server-Sent Events (SSE) and Model Context Protocol (MCP).
The Deep Research API provides a real-time interface for initiating and monitoring complex research tasks.
It is recommended to call the API via @microsoft/fetch-event-source. To get the final report, listen to the message event; the data is returned as a text stream (see the client sketch after the Headers definition below).
Endpoint: /api/sse
Method: POST
Body:
interface SSEConfig {
  // Research topic
  query: string;
  // AI provider. Possible values include: google, openai, anthropic, deepseek, xai, mistral, azure, openrouter, openaicompatible, pollinations, ollama
  provider: string;
  // Thinking model id
  thinkingModel: string;
  // Task model id
  taskModel: string;
  // Search provider. Possible values include: model, tavily, firecrawl, exa, bocha, searxng
  searchProvider: string;
  // Response language, also affects the search language. (optional)
  language?: string;
  // Maximum number of search results. Default: `5`. (optional)
  maxResult?: number;
  // Whether to include content-related images in the final report. Default: `true`. (optional)
  enableCitationImage?: boolean;
  // Whether to include citation links in search results and final reports. Default: `true`. (optional)
  enableReferences?: boolean;
}
Headers:
interface Headers {
  "Content-Type": "application/json";
  // If you set an access password
  // Authorization: "Bearer YOUR_ACCESS_PASSWORD";
}
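A minimal client sketch using @microsoft/fetch-event-source (the base URL, access password, and provider/model values below are illustrative placeholders; adjust them to your deployment):
import { fetchEventSource } from "@microsoft/fetch-event-source";

let report = "";

await fetchEventSource("http://localhost:3000/api/sse", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Only needed if ACCESS_PASSWORD is set on the server
    Authorization: "Bearer YOUR_ACCESS_PASSWORD",
  },
  body: JSON.stringify({
    query: "AI trends for this year",
    provider: "google", // any supported AI provider
    thinkingModel: "gemini-2.0-flash-thinking-exp", // illustrative model id
    taskModel: "gemini-2.0-flash-exp", // illustrative model id
    searchProvider: "model", // let the model's built-in search handle queries
  }),
  onmessage(event) {
    // The final report is streamed as text on the `message` event
    if (event.event === "message") {
      report += event.data;
    }
  },
  onerror(err) {
    // Rethrow to stop the library's automatic retry loop
    throw err;
  },
});

console.log(report);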
See the detailed API documentation.
This is an interesting implementation. You can watch the whole process of deep research directly through the URL just like watching a video.
You can access the deep research report via the following link:
http://localhost:3000/api/sse/live?query=AI+trends+for+this+year&provider=pollinations&thinkingModel=openai&taskModel=openai-fast&searchProvider=searxng
Query Params:
// The parameters are the same as the POST parameters
interface QueryParams extends SSEConfig {
  // If you set the `ACCESS_PASSWORD` environment variable, this parameter is required
  password?: string;
}
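For example, with ACCESS_PASSWORD set, the live URL above would carry the password as an extra query parameter (placeholder value shown):
http://localhost:3000/api/sse/live?query=AI+trends+for+this+year&provider=pollinations&thinkingModel=openai&taskModel=openai-fast&searchProvider=searxng&password=YOUR_ACCESS_PASSWORD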
Currently supports StreamableHTTP and SSE server transports.
- StreamableHTTP server endpoint: /api/mcp, transport type: streamable-http
- SSE server endpoint: /api/mcp/sse, transport type: sse
{
  "mcpServers": {
    "deep-research": {
      "url": "http://127.0.0.1:3000/api/mcp",
      "transportType": "streamable-http",
      "timeout": 600
    }
  }
}
Note: Since deep research takes a long time to execute, you need to set a longer timeout to avoid interrupting the run.
If your server sets ACCESS_PASSWORD, the MCP service will be protected, and you need to add an additional headers parameter:
{
  "mcpServers": {
    "deep-research": {
      "url": "http://127.0.0.1:3000/api/mcp",
      "transportType": "streamable-http",
      "timeout": 600,
      "headers": {
        "Authorization": "Bearer YOUR_ACCESS_PASSWORD"
      }
    }
  }
}
Enabling the MCP service requires setting global environment variables:
# MCP Server AI provider
# Possible values include: google, openai, anthropic, deepseek, xai, mistral, azure, openrouter, openaicompatible, pollinations, ollama
MCP_AI_PROVIDER=google
# MCP Server search provider. Default, `model`
# Possible values include: model, tavily, firecrawl, exa, bocha, searxng
MCP_SEARCH_PROVIDER=tavily
# MCP Server thinking model id, the core model used in deep research.
MCP_THINKING_MODEL=gemini-2.0-flash-thinking-exp
# MCP Server task model id, used for secondary tasks, high output models are recommended.
MCP_TASK_MODEL=gemini-2.0-flash-exp
Note: To ensure that the MCP service can be used normally, you need to set the environment variables of the corresponding model and search engine. For specific environment variable parameters, please refer to env.tpl.
- Research topic
  - Input research topic
  - Use local research resources (optional)
  - Start thinking (or rethinking)
- Propose your ideas
  - The system asks questions
  - Answer system questions (optional)
  - Write a research plan (or rewrite the research plan)
  - The system outputs the research plan
  - Start in-depth research (or re-research)
  - The system generates SERP queries
- Information collection
  - Initial research
    - Retrieve local research resources based on SERP queries
    - Collect information from the Internet based on SERP queries
  - In-depth research (this process can be repeated)
    - Propose research suggestions (optional)
    - Start a new round of information collection (the process is the same as the initial research)
- Generate Final Report
  - Make a writing request (optional)
  - Summarize all research materials into a comprehensive Markdown report
  - Regenerate research report (optional)
flowchart TB
  A[Research Topic]:::start
  subgraph Propose[Propose your ideas]
    B1[System asks questions]:::process
    B2[System outputs the research plan]:::process
    B3[System generates SERP queries]:::process
    B1 --> B2
    B2 --> B3
  end
  subgraph Collect[Information collection]
    C1[Initial research]:::collection
    C1a[Retrieve local research resources based on SERP queries]:::collection
    C1b[Collect information from the Internet based on SERP queries]:::collection
    C2[In-depth research]:::recursive
    Refine{More in-depth research needed?}:::decision
    C1 --> C1a
    C1 --> C1b
    C1a --> C2
    C1b --> C2
    C2 --> Refine
    Refine -->|Yes| C2
  end
  Report[Generate Final Report]:::output
  A --> Propose
  B3 --> C1
  %% Connect the exit from the loop/subgraph to the final report
  Refine -->|No| Report
  %% Styling
  classDef start fill:#7bed9f,stroke:#2ed573,color:black
  classDef process fill:#70a1ff,stroke:#1e90ff,color:black
  classDef recursive fill:#ffa502,stroke:#ff7f50,color:black
  classDef output fill:#ff4757,stroke:#ff6b81,color:black
  classDef collection fill:#a8e6cf,stroke:#3b7a57,color:black
  classDef decision fill:#c8d6e5,stroke:#8395a7,color:black
  class A start
  class B1,B2,B3 process
  class C1,C1a,C1b collection
  class C2 recursive
  class Refine decision
  class Report output
Why doesn't my Ollama or SearXNG work properly, displaying the error TypeError: Failed to fetch?
If your request triggers a CORS error due to browser security restrictions, you need to configure Ollama or SearXNG to allow cross-origin requests. Alternatively, consider the server proxy mode, in which the backend server makes the requests, which effectively avoids cross-origin issues.
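For Ollama, for example, cross-origin requests are typically allowed by setting the OLLAMA_ORIGINS environment variable before starting the server (a permissive development setting is shown; restrict it to your deployment's origin in production):
OLLAMA_ORIGINS=* ollama serve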
Deep Research is designed with your privacy in mind. All research data and generated reports are stored locally on your machine. We do not collect or transmit any of your research data to external servers (unless you explicitly use server-side API calls, in which case data is sent to the configured APIs, through your proxy if one is set). Your privacy is our priority.
- Next.js - The React framework for building performant web applications.
- Shadcn UI - Beautifully designed components that helped streamline the UI development.
- AI SDKs - Powering the intelligent research capabilities of Deep Research.
- Deep Research - Thanks to the project dzhng/deep-research for inspiration.
We welcome contributions to Deep Research! If you have ideas for improvements, bug fixes, or new features, please feel free to:
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Make your changes and commit them.
- Submit a pull request.
For major changes, please open an issue first to discuss your proposed changes.
If you have any questions, suggestions, or feedback, please create a new issue.
Deep Research is released under the MIT License. This license allows for free use, modification, and distribution for both commercial and non-commercial purposes.