
fortuna
A Library for Uncertainty Quantification.
Stars: 905

Fortuna is a library for uncertainty quantification that enables users to estimate predictive uncertainty, assess model reliability, trigger human intervention, and deploy models safely. It provides calibration and conformal methods for pre-trained models in any framework, supports Bayesian inference methods for deep learning models written in Flax, and is designed to be intuitive and highly configurable. Users can run benchmarks and bring uncertainty to production systems with ease.
README:
Fortuna
#######
.. image:: https://img.shields.io/pypi/status/Fortuna
    :target: https://img.shields.io/pypi/status/Fortuna
    :alt: PyPI - Status

.. image:: https://img.shields.io/pypi/dm/aws-fortuna
    :target: https://pypistats.org/packages/aws-fortuna
    :alt: PyPI - Downloads

.. image:: https://img.shields.io/pypi/v/aws-fortuna
    :target: https://img.shields.io/pypi/v/aws-fortuna
    :alt: PyPI - Version

.. image:: https://img.shields.io/github/license/awslabs/Fortuna
    :target: https://github.com/awslabs/Fortuna/blob/main/LICENSE
    :alt: License

.. image:: https://readthedocs.org/projects/aws-fortuna/badge/?version=latest
    :target: https://aws-fortuna.readthedocs.io
    :alt: Documentation Status
Proper estimation of predictive uncertainty is fundamental in applications that involve critical decisions. Uncertainty can be used to assess reliability of model predictions, trigger human intervention, or decide whether a model can be safely deployed in the wild.
Fortuna is a library for uncertainty quantification that makes it easy for users to run benchmarks and bring uncertainty to production systems.
Fortuna provides calibration and conformal methods starting from pre-trained models written in any framework,
and it further supports several Bayesian inference methods starting from deep learning models written in `Flax <https://flax.readthedocs.io/en/latest/index.html>`_.
The library is designed to be intuitive for practitioners unfamiliar with uncertainty quantification,
and is highly configurable.
Check the `documentation <https://aws-fortuna.readthedocs.io/en/latest/>`_ for a quickstart, examples and references.
Fortuna offers three different usage modes:
`From uncertainty estimates <https://github.com/awslabs/fortuna#from-uncertainty-estimates>`_,
`From model outputs <https://github.com/awslabs/fortuna#from-model-outputs>`_ and
`From Flax models <https://github.com/awslabs/fortuna#from-flax-models>`_.
These serve users according to the constraints dictated by their own applications.
Their pipelines are depicted in the following figure, each starting from one of the green panels.
.. image:: https://github.com/awslabs/fortuna/raw/main/docs/source/_static/pipeline.png
    :target: https://github.com/awslabs/fortuna/raw/main/docs/source/_static/pipeline.png
Starting from uncertainty estimates has minimal compatibility requirements, and it is the quickest level of interaction with the library.
This usage mode offers conformal prediction methods for both classification and regression.
These take uncertainty estimates as input, and return rigorous sets of predictions that retain a user-given level of probability.
In one-dimensional regression tasks, conformal sets may be thought of as calibrated versions of confidence or credible intervals.
Mind that if the uncertainty estimates that you provide as input are inaccurate,
conformal sets might be large and unusable.
For this reason, if your application allows it,
please consider the `From model outputs <https://github.com/awslabs/fortuna#from-model-outputs>`_ and
`From Flax models <https://github.com/awslabs/fortuna#from-flax-models>`_ usage modes.
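For background, methods of this kind typically follow the standard conformalized quantile regression recipe sketched below; the exact details of Fortuna's implementation may differ.
Given validation credible bounds :math:`[l_i, u_i]` and targets :math:`y_i`, and a target coverage error :math:`\alpha`, one computes conformity scores

.. math::

    s_i = \max(l_i - y_i,\; y_i - u_i), \qquad i = 1, \dots, n,

and takes :math:`\hat{q}` to be the :math:`\lceil (n + 1)(1 - \alpha) \rceil / n` empirical quantile of :math:`\{s_i\}`.
The conformal interval for a test point with credible bounds :math:`[l, u]` is then :math:`[l - \hat{q},\, u + \hat{q}]`.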
Example. Suppose you want to calibrate credible intervals with coverage error :code:`error`,
each corresponding to a different test input variable.
We assume that credible intervals are passed as arrays of lower and upper bounds,
respectively :code:`test_lower_bounds` and :code:`test_upper_bounds`.
You also have lower and upper bounds of credible intervals computed for several validation inputs,
respectively :code:`val_lower_bounds` and :code:`val_upper_bounds`.
The corresponding array of validation targets is denoted by :code:`val_targets`.
The following code produces conformal prediction intervals,
i.e. calibrated versions of your test credible intervals.
.. code-block:: python
    from fortuna.conformal import QuantileConformalRegressor

    conformal_intervals = QuantileConformalRegressor().conformal_interval(
        val_lower_bounds=val_lower_bounds,
        val_upper_bounds=val_upper_bounds,
        test_lower_bounds=test_lower_bounds,
        test_upper_bounds=test_upper_bounds,
        val_targets=val_targets,
        error=error,
    )
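If you want to try the snippet above end to end, the inputs are plain arrays; a minimal sketch of toy data, assuming a made-up Gaussian predictive model and a coverage error of 0.05 (both purely illustrative), could be:

.. code-block:: python

    import numpy as np

    # Hypothetical toy setup: predictive means and standard deviations for
    # validation and test inputs, from which 95% credible bounds are built.
    rng = np.random.default_rng(0)
    val_means = rng.normal(size=100)
    val_stds = np.abs(rng.normal(size=100)) + 0.1
    test_means = rng.normal(size=20)
    test_stds = np.abs(rng.normal(size=20)) + 0.1

    error = 0.05  # target coverage error, i.e. 95% nominal coverage
    z = 1.96      # two-sided 95% Gaussian quantile

    val_lower_bounds = val_means - z * val_stds
    val_upper_bounds = val_means + z * val_stds
    test_lower_bounds = test_means - z * test_stds
    test_upper_bounds = test_means + z * test_stds

    # Toy validation targets drawn around the means (purely illustrative).
    val_targets = val_means + val_stds * rng.normal(size=100)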
Starting from model outputs assumes you have already trained a model in some framework,
and arrive at Fortuna with model outputs in :code:`numpy.ndarray` format for each input data point.
This usage mode allows you to calibrate your model outputs, estimate uncertainty,
compute metrics and obtain conformal sets.
Compared to the `From uncertainty estimates <https://github.com/awslabs/fortuna#from-uncertainty-estimates>`_ usage mode,
this one offers better control,
as it can make sure uncertainty estimates have been appropriately calibrated.
However, if the model was trained with classical methods,
the resulting quantification of model (a.k.a. epistemic) uncertainty may be poor.
To mitigate this problem, please consider the `From Flax models <https://github.com/awslabs/fortuna#from-flax-models>`_
usage mode.
Example.
Suppose you have validation and test model outputs,
respectively :code:`val_outputs` and :code:`test_outputs`.
Furthermore, you have some arrays of validation and test target variables,
respectively :code:`val_targets` and :code:`test_targets`.
The following code provides a minimal classification example to get calibrated predictive entropy estimates.
.. code-block:: python
    from fortuna.output_calib_model import OutputCalibClassifier

    calib_model = OutputCalibClassifier()
    status = calib_model.calibrate(outputs=val_outputs, targets=val_targets)
    test_entropies = calib_model.predictive.entropy(outputs=test_outputs)
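To run the example above in isolation, the outputs and targets can be any arrays of the right shape; a minimal sketch of toy data (a made-up 3-class problem with random logits, purely for illustration) could be:

.. code-block:: python

    import numpy as np

    # Hypothetical toy data: logits for a 3-class problem and integer targets.
    rng = np.random.default_rng(0)
    num_classes = 3
    val_outputs = rng.normal(size=(500, num_classes))
    test_outputs = rng.normal(size=(100, num_classes))
    val_targets = rng.integers(0, num_classes, size=500)
    test_targets = rng.integers(0, num_classes, size=100)

Larger entropy values indicate more uncertain predictions for the corresponding test inputs.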
Starting from Flax models has higher compatibility requirements than the
`From uncertainty estimates <https://github.com/awslabs/fortuna#from-uncertainty-estimates>`_
and `From model outputs <https://github.com/awslabs/fortuna#from-model-outputs>`_ usage modes,
as it requires deep learning models written in `Flax <https://flax.readthedocs.io/en/latest/index.html>`_.
However, it enables you to replace standard model training with scalable Bayesian inference procedures,
which may significantly improve the quantification of predictive uncertainty.
Example. Suppose you have a Flax classification deep learning model :code:`model`
from inputs to logits, with output dimension given by :code:`output_dim`.
Furthermore, you have some training, validation and test TensorFlow data loaders,
respectively :code:`train_data_loader`, :code:`val_data_loader` and :code:`test_data_loader`.
The following code provides a minimal classification example to get calibrated probability estimates.
.. code-block:: python
    from fortuna.data import DataLoader

    train_data_loader = DataLoader.from_tensorflow_data_loader(train_data_loader)
    calib_data_loader = DataLoader.from_tensorflow_data_loader(val_data_loader)
    test_data_loader = DataLoader.from_tensorflow_data_loader(test_data_loader)

    from fortuna.prob_model import ProbClassifier

    prob_model = ProbClassifier(model=model)
    status = prob_model.train(
        train_data_loader=train_data_loader,
        calib_data_loader=calib_data_loader,
    )
    test_means = prob_model.predictive.mean(
        inputs_loader=test_data_loader.to_inputs_loader()
    )
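The :code:`model` above can be any Flax module mapping inputs to logits. As a minimal sketch (the architecture and :code:`output_dim = 10` are assumptions chosen purely for illustration), it could look like this:

.. code-block:: python

    import flax.linen as nn

    output_dim = 10  # hypothetical number of classes

    class MLP(nn.Module):
        """A minimal multi-layer perceptron from flattened inputs to logits."""

        output_dim: int

        @nn.compact
        def __call__(self, x):
            x = x.reshape((x.shape[0], -1))  # flatten all but the batch dimension
            x = nn.Dense(128)(x)
            x = nn.relu(x)
            return nn.Dense(self.output_dim)(x)

    model = MLP(output_dim=output_dim)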
NOTE: Before installing Fortuna, you are required to install `JAX <https://github.com/google/jax#installation>`_ in your virtual environment.
You can install Fortuna by typing
.. code-block::
    pip install aws-fortuna
Alternatively, you can build the package using `Poetry <https://python-poetry.org/docs/>`_.
If you choose this route, first install Poetry and add it to your PATH
(see `here <https://python-poetry.org/docs/#installation>`_). Then type
.. code-block::
    poetry install
All the dependencies will be installed at their required versions. Consider adding the following flags to the command above:
- :code:`-E transformers` if you want to use models and datasets from `Hugging Face <https://huggingface.co/>`_.
- :code:`-E sagemaker` if you want to install the dependencies necessary to run Fortuna on Amazon SageMaker.
- :code:`-E docs` if you want to install Sphinx dependencies to build the documentation.
- :code:`-E notebooks` if you want to work with Jupyter notebooks.
Finally, you can either access the virtualenv that Poetry created by typing :code:`poetry shell`,
or execute commands within the virtualenv using the :code:`run` command, e.g. :code:`poetry run python`.
Several usage examples are found in the
`/examples <https://github.com/awslabs/fortuna/tree/main/examples>`_
directory.
We offer a simple pipeline that allows you to run Fortuna on Amazon SageMaker with minimal effort.
- Create an AWS account - it is free! Store the account ID and the region where you want to launch training jobs.
- First, `update your local AWS credentials <https://docs.aws.amazon.com/cli/latest/userguide/cli-authentication-short-term.html>`_.
  Then you need to build and `push a Docker image to an Amazon ECR repository <https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html>`_.
  This `script <https://github.com/awslabs/fortuna/tree/main/fortuna/docker/build_and_push.sh>`_ will help you do so - it will require your AWS account ID and region.
  If you need other packages to be included in your Docker image, you should consider customizing the `Dockerfile <https://github.com/awslabs/fortuna/tree/main/fortuna/docker/Dockerfile>`_.
  NOTE: the script has been tested on an M1 MacOS. It is possible that different operating systems will need small modifications.
- Create an `S3 bucket <https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html>`_. You will need this to dump the results from your training jobs on Amazon SageMaker.
- Write a configuration :code:`yaml` file. This will include your AWS details, the path to the entrypoint script that you want to run on Amazon SageMaker,
  the arguments to pass to the script, the path to the S3 bucket where you want to dump the results, the metrics to monitor, and more.
  Check `this file <https://github.com/awslabs/fortuna/tree/main/benchmarks/transformers/sagemaker_entrypoints/prob_model_text_classification_config/default.yaml>`_ for an example.
- Finally, given :code:`config_dir`, that is the absolute path to the main configuration directory, and :code:`config_filename`,
  that is the name of the main configuration file (without the .yaml extension), enter Python and run the following:
.. code-block:: python
    from fortuna.sagemaker import run_training_job

    run_training_job(config_dir=config_dir, config_filename=config_filename)
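For concreteness, a hypothetical invocation might look like the following; the paths and file name are placeholders that you would replace with your own:

.. code-block:: python

    from fortuna.sagemaker import run_training_job

    # Placeholders: point these at your own configuration directory and file.
    config_dir = "/home/user/fortuna-configs"   # absolute path to the main configuration directory
    config_filename = "my_sagemaker_config"     # main configuration file name, without the .yaml extension

    run_training_job(config_dir=config_dir, config_filename=config_filename)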
- `AWS launch blog post <https://aws.amazon.com/blogs/machine-learning/introducing-fortuna-a-library-for-uncertainty-quantification/>`_
- `Fortuna: A Library for Uncertainty Quantification in Deep Learning [arXiv paper] <https://arxiv.org/abs/2302.04019>`_
To cite Fortuna:
.. code-block::
    @article{detommaso2023fortuna,
      title={Fortuna: A Library for Uncertainty Quantification in Deep Learning},
      author={Detommaso, Gianluca and Gasparin, Alberto and Donini, Michele and Seeger, Matthias and Wilson, Andrew Gordon and Archambeau, Cedric},
      journal={arXiv preprint arXiv:2302.04019},
      year={2023}
    }
If you wish to contribute to the project, please refer to our `contribution guidelines <https://github.com/awslabs/fortuna/blob/main/CONTRIBUTING.md>`_.
This project is licensed under the Apache-2.0 License.
See `LICENSE <https://github.com/awslabs/fortuna/blob/main/LICENSE>`_ for more information.
Alternative AI tools for fortuna
Similar Open Source Tools


dockershrink
Dockershrink is an AI-powered command-line tool designed to help reduce the size of Docker images. It combines traditional rule-based analysis with generative AI techniques to optimize image configurations. The tool supports NodeJS applications and aims to save costs on storage, data transfer, and build times while increasing developer productivity. By automatically applying advanced optimization techniques, Dockershrink simplifies the process for engineers and organizations, resulting in significant savings and efficiency improvements.

LLMeBench
LLMeBench is a flexible framework designed for accelerating benchmarking of Large Language Models (LLMs) in the field of Natural Language Processing (NLP). It supports evaluation of various NLP tasks using model providers like OpenAI, HuggingFace Inference API, and Petals. The framework is customizable for different NLP tasks, LLM models, and datasets across multiple languages. It features extensive caching capabilities, supports zero- and few-shot learning paradigms, and allows on-the-fly dataset download and caching. LLMeBench is open-source and continuously expanding to support new models accessible through APIs.

OlympicArena
OlympicArena is a comprehensive benchmark designed to evaluate advanced AI capabilities across various disciplines. It aims to push AI towards superintelligence by tackling complex challenges in science and beyond. The repository provides detailed data for different disciplines, allows users to run inference and evaluation locally, and offers a submission platform for testing models on the test set. Additionally, it includes an annotation interface and encourages users to cite their paper if they find the code or dataset helpful.

MultiPL-E
MultiPL-E is a system for translating unit test-driven neural code generation benchmarks to new languages. It is part of the BigCode Code Generation LM Harness and allows for evaluating Code LLMs using various benchmarks. The tool supports multiple versions with improvements and new language additions, providing a scalable and polyglot approach to benchmarking neural code generation. Users can access a tutorial for direct usage and explore the dataset of translated prompts on the Hugging Face Hub.

MARS5-TTS
MARS5 is a novel English speech model (TTS) developed by CAMB.AI, featuring a two-stage AR-NAR pipeline with a unique NAR component. The model can generate speech for various scenarios like sports commentary and anime with just 5 seconds of audio and a text snippet. It allows steering prosody using punctuation and capitalization in the transcript. Speaker identity is specified using an audio reference file, enabling 'deep clone' for improved quality. The model can be used via torch.hub or HuggingFace, supporting both shallow and deep cloning for inference. Checkpoints are provided for AR and NAR models, with hardware requirements of 750M+450M params on GPU. Contributions to improve model stability, performance, and reference audio selection are welcome.

langchain
LangChain is a framework for developing Elixir applications powered by language models. It enables applications to connect language models to other data sources and interact with the environment. The library provides components for working with language models and off-the-shelf chains for specific tasks. It aims to assist in building applications that combine large language models with other sources of computation or knowledge. LangChain is written in Elixir and is not aimed for parity with the JavaScript and Python versions due to differences in programming paradigms and design choices. The library is designed to make it easy to integrate language models into applications and expose features, data, and functionality to the models.

agno
Agno is a lightweight library for building multi-modal Agents. It is designed with core principles of simplicity, uncompromising performance, and agnosticism, allowing users to create blazing fast agents with minimal memory footprint. Agno supports any model, any provider, and any modality, making it a versatile container for AGI. Users can build agents with lightning-fast agent creation, model agnostic capabilities, native support for text, image, audio, and video inputs and outputs, memory management, knowledge stores, structured outputs, and real-time monitoring. The library enables users to create autonomous programs that use language models to solve problems, improve responses, and achieve tasks with varying levels of agency and autonomy.

gpt-subtrans
GPT-Subtrans is an open-source subtitle translator that utilizes large language models (LLMs) as translation services. It supports translation between any language pairs that the language model supports. Note that GPT-Subtrans requires an active internet connection, as subtitles are sent to the provider's servers for translation, and their privacy policy applies.

ScreenAgent
ScreenAgent is a project focused on creating an environment for Visual Language Model agents (VLM Agent) to interact with real computer screens. The project includes designing an automatic control process for agents to interact with the environment and complete multi-step tasks. It also involves building the ScreenAgent dataset, which collects screenshots and action sequences for various daily computer tasks. The project provides a controller client code, configuration files, and model training code to enable users to control a desktop with a large model.

AnkiAIUtils
Anki AI Utils is a powerful suite of AI-powered tools designed to enhance your Anki flashcard learning experience by automatically improving cards you struggle with. The tools include features such as adaptive learning, personalized memory hooks, automation readiness, universal compatibility, provider agnosticism, and infinite extensibility. The toolkit consists of tools like Illustrator for creating custom mnemonic images, Reformulator for rephrasing flashcards, Mnemonics Creator for generating memorable mnemonics, Explainer for providing detailed explanations, and Mnemonics Helper for quick mnemonic generation. The project aims to motivate others to package the tools into addons for wider accessibility.

local-genAI-search
Local-GenAI Search is a local generative search engine powered by the Llama3 model, allowing users to ask questions about their local files and receive concise answers with relevant document references. It utilizes MS MARCO embeddings for semantic search and can run locally on a 32GB laptop or computer. The tool can be used to index local documents, search for information, and provide generative search services through a user interface.

holohub
Holohub is a central repository for the NVIDIA Holoscan AI sensor processing community to share reference applications, operators, tutorials, and benchmarks. It includes example applications, community components, package configurations, and tutorials. Users and developers of the Holoscan platform are invited to reuse and contribute to this repository. The repository provides detailed instructions on prerequisites, building, running applications, contributing, and glossary terms. It also offers a searchable catalog of available components on the Holoscan SDK User Guide website.

claude.vim
Claude.vim is a Vim plugin that integrates Claude, an AI pair programmer, into your Vim workflow. It allows you to chat with Claude about what to build or how to debug problems, and Claude offers opinions, proposes modifications, or even writes code. The plugin provides a chat/instruction-centric interface optimized for human collaboration, with killer features like access to chat history and vimdiff interface. It can refactor code, modify or extend selected pieces of code, execute complex tasks by reading documentation, cloning git repositories, and more. Note that it is early alpha software and expected to rapidly evolve.

visualwebarena
VisualWebArena is a benchmark for evaluating multimodal autonomous language agents through diverse and complex web-based visual tasks. It builds on the reproducible evaluation introduced in WebArena. The repository provides scripts for end-to-end training, demos to run multimodal agents on webpages, and tools for setting up environments for evaluation. It includes trajectories of the GPT-4V + SoM agent on VWA tasks, along with human evaluations on 233 tasks. The environment supports OpenAI models and Gemini models for evaluation.

jaison-core
J.A.I.son is a Python project designed for generating responses using various components and applications. It requires specific plugins like STT, T2T, TTSG, and TTSC to function properly. Users can customize responses, voice, and configurations. The project provides a Discord bot, Twitch events and chat integration, and VTube Studio Animation Hotkeyer. It also offers features for managing conversation history, training AI models, and monitoring conversations.
For similar tasks


ck
Collective Mind (CM) is a collection of portable, extensible, technology-agnostic and ready-to-use automation recipes with a human-friendly interface (aka CM scripts) to unify and automate all the manual steps required to compose, run, benchmark and optimize complex ML/AI applications on any platform with any software and hardware: see online catalog and source code. CM scripts require Python 3.7+ with minimal dependencies and are continuously extended by the community and MLCommons members to run natively on Ubuntu, MacOS, Windows, RHEL, Debian, Amazon Linux and any other operating system, in a cloud or inside automatically generated containers while keeping backward compatibility - please don't hesitate to report encountered issues here and contact us via public Discord Server to help this collaborative engineering effort! CM scripts were originally developed based on the following requirements from the MLCommons members to help them automatically compose and optimize complex MLPerf benchmarks, applications and systems across diverse and continuously changing models, data sets, software and hardware from Nvidia, Intel, AMD, Google, Qualcomm, Amazon and other vendors: * must work out of the box with the default options and without the need to edit some paths, environment variables and configuration files; * must be non-intrusive, easy to debug and must reuse existing user scripts and automation tools (such as cmake, make, ML workflows, python poetry and containers) rather than substituting them; * must have a very simple and human-friendly command line with a Python API and minimal dependencies; * must require minimal or zero learning curve by using plain Python, native scripts, environment variables and simple JSON/YAML descriptions instead of inventing new workflow languages; * must have the same interface to run all automations natively, in a cloud or inside containers. CM scripts were successfully validated by MLCommons to modularize MLPerf inference benchmarks and help the community automate more than 95% of all performance and power submissions in the v3.1 round across more than 120 system configurations (models, frameworks, hardware) while reducing development and maintenance costs.

eval-dev-quality
DevQualityEval is an evaluation benchmark and framework designed to compare and improve the quality of code generation of Large Language Models (LLMs). It provides developers with a standardized benchmark to enhance real-world usage in software development and offers users metrics and comparisons to assess the usefulness of LLMs for their tasks. The tool evaluates LLMs' performance in solving software development tasks and measures the quality of their results through a point-based system. Users can run specific tasks, such as test generation, across different programming languages to evaluate LLMs' language understanding and code generation capabilities.

modelbench
ModelBench is a tool for running safety benchmarks against AI models and generating detailed reports. It is part of the MLCommons project and is designed as a proof of concept to aggregate measures, relate them to specific harms, create benchmarks, and produce reports. The tool requires LlamaGuard for evaluating responses and a TogetherAI account for running benchmarks. Users can install ModelBench from GitHub or PyPI, run tests using Poetry, and create benchmarks by providing necessary API keys. The tool generates static HTML pages displaying benchmark scores and allows users to dump raw scores and manage cache for faster runs. ModelBench is aimed at enabling users to test their own models and create tests and benchmarks.

Aidan-Bench
Aidan Bench is a tool that rewards creativity, reliability, contextual attention, and instruction following. It is weakly correlated with Lmsys, has no score ceiling, and aligns with real-world open-ended use. The tool involves giving LLMs open-ended questions and evaluating their answers based on novelty scores. Users can set up the tool by installing required libraries and setting up API keys. The project allows users to run benchmarks for different models and provides flexibility in threading options.

AirspeedVelocity.jl
AirspeedVelocity.jl is a tool designed to simplify benchmarking of Julia packages over their lifetime. It provides a CLI to generate benchmarks, compare commits/tags/branches, plot benchmarks, and run benchmark comparisons for every submitted PR as a GitHub action. The tool freezes the benchmark script at a specific revision to prevent old history from affecting benchmarks. Users can configure options using CLI flags and visualize benchmark results. AirspeedVelocity.jl can be used to benchmark any Julia package and offers features like generating tables and plots of benchmark results. It also supports custom benchmarks and can be integrated into GitHub actions for automated benchmarking of PRs.

orama-core
OramaCore is a database designed for AI projects, answer engines, copilots, and search functionalities. It offers features such as a full-text search engine, vector database, LLM interface, and various utilities. The tool is currently under active development and not recommended for production use due to potential API changes. OramaCore aims to provide a comprehensive solution for managing data and enabling advanced AI capabilities in projects.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.