fortuna
A Library for Uncertainty Quantification.
Stars: 905
Fortuna is a library for uncertainty quantification that enables users to estimate predictive uncertainty, assess model reliability, trigger human intervention, and deploy models safely. It provides calibration and conformal methods for pre-trained models in any framework, supports Bayesian inference methods for deep learning models written in Flax, and is designed to be intuitive and highly configurable. Users can run benchmarks and bring uncertainty to production systems with ease.
README:
Fortuna
#######
.. image:: https://img.shields.io/pypi/status/Fortuna
   :target: https://img.shields.io/pypi/status/Fortuna
   :alt: PyPI - Status

.. image:: https://img.shields.io/pypi/dm/aws-fortuna
   :target: https://pypistats.org/packages/aws-fortuna
   :alt: PyPI - Downloads

.. image:: https://img.shields.io/pypi/v/aws-fortuna
   :target: https://img.shields.io/pypi/v/aws-fortuna
   :alt: PyPI - Version

.. image:: https://img.shields.io/github/license/awslabs/Fortuna
   :target: https://github.com/awslabs/Fortuna/blob/main/LICENSE
   :alt: License

.. image:: https://readthedocs.org/projects/aws-fortuna/badge/?version=latest
   :target: https://aws-fortuna.readthedocs.io
   :alt: Documentation Status
Proper estimation of predictive uncertainty is fundamental in applications that involve critical decisions. Uncertainty can be used to assess the reliability of model predictions, trigger human intervention, or decide whether a model can be safely deployed in the wild.
Fortuna is a library for uncertainty quantification that makes it easy for users to run benchmarks and bring uncertainty to production systems.
Fortuna provides calibration and conformal methods starting from pre-trained models written in any framework,
and it further supports several Bayesian inference methods starting from deep learning models written in `Flax <https://flax.readthedocs.io/en/latest/index.html>`_.
The language is designed to be intuitive for practitioners unfamiliar with uncertainty quantification,
and is highly configurable.
Check the `documentation <https://aws-fortuna.readthedocs.io/en/latest/>`_ for a quickstart, examples and references.
Fortuna offers three different usage modes:
`From uncertainty estimates <https://github.com/awslabs/fortuna#from-uncertainty-estimates>`_,
`From model outputs <https://github.com/awslabs/fortuna#from-model-outputs>`_ and
`From Flax models <https://github.com/awslabs/fortuna#from-flax-models>`_.
These serve users according to the constraints dictated by their own applications.
Their pipelines are depicted in the following figure, each starting from one of the green panels.
.. image:: https://github.com/awslabs/fortuna/raw/main/docs/source/_static/pipeline.png
   :target: https://github.com/awslabs/fortuna/raw/main/docs/source/_static/pipeline.png
Starting from uncertainty estimates has minimal compatibility requirements and is the quickest way to interact with the library. This usage mode offers conformal prediction methods for both classification and regression. These take uncertainty estimates as input, and return rigorous sets of predictions that retain a user-given level of probability. In one-dimensional regression tasks, conformal sets may be thought of as calibrated versions of confidence or credible intervals.
Note that if the uncertainty estimates you provide as input are inaccurate,
conformal sets might be large and unusable.
For this reason, if your application allows it,
please consider the `From model outputs <https://github.com/awslabs/fortuna#from-model-outputs>`_ and
`From Flax models <https://github.com/awslabs/fortuna#from-flax-models>`_ usage modes.
Example. Suppose you want to calibrate credible intervals with coverage error :code:`error`,
each corresponding to a different test input variable.
We assume that credible intervals are passed as arrays of lower and upper bounds,
respectively :code:`test_lower_bounds` and :code:`test_upper_bounds`.
You also have lower and upper bounds of credible intervals computed for several validation inputs,
respectively :code:`val_lower_bounds` and :code:`val_upper_bounds`.
The corresponding array of validation targets is denoted by :code:`val_targets`.
The following code produces conformal prediction intervals,
i.e. calibrated versions of your test credible intervals.
.. code-block:: python

  from fortuna.conformal import QuantileConformalRegressor

  conformal_intervals = QuantileConformalRegressor().conformal_interval(
      val_lower_bounds=val_lower_bounds,
      val_upper_bounds=val_upper_bounds,
      test_lower_bounds=test_lower_bounds,
      test_upper_bounds=test_upper_bounds,
      val_targets=val_targets,
      error=error,
  )
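If you want a quick sanity check of the result, you can measure the empirical coverage of the conformal intervals. The snippet below is a minimal sketch in plain NumPy; it assumes you also have an array :code:`test_targets` of true test targets and that :code:`conformal_intervals` stacks lower and upper bounds column-wise, neither of which is guaranteed by the example above.

.. code-block:: python

  import numpy as np

  # Assumptions for illustration only: `test_targets` holds the true test targets and
  # `conformal_intervals` has one row per test point with [lower, upper] bounds.
  lower, upper = conformal_intervals[:, 0], conformal_intervals[:, 1]
  coverage = np.mean((test_targets >= lower) & (test_targets <= upper))
  print(f"Empirical coverage: {coverage:.3f} (nominal: {1 - error:.3f})")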
Starting from model outputs assumes you have already trained a model in some framework,
and arrive at Fortuna with model outputs in :code:`numpy.ndarray` format for each input data point.
This usage mode allows you to calibrate your model outputs, estimate uncertainty,
compute metrics and obtain conformal sets.
Compared to the `From uncertainty estimates <https://github.com/awslabs/fortuna#from-uncertainty-estimates>`_ usage mode,
this one offers better control,
as it can ensure that uncertainty estimates are appropriately calibrated.
However, if the model was trained with classical methods,
the resulting quantification of model (a.k.a. epistemic) uncertainty may be poor.
To mitigate this problem, please consider the `From Flax models <https://github.com/awslabs/fortuna#from-flax-models>`_
usage mode.
Example.
Suppose you have validation and test model outputs,
respectively :code:`val_outputs` and :code:`test_outputs`.
Furthermore, you have some arrays of validation and test target variables,
respectively :code:`val_targets` and :code:`test_targets`.
The following code provides a minimal classification example to get calibrated predictive entropy estimates.
.. code-block:: python

  from fortuna.output_calib_model import OutputCalibClassifier

  calib_model = OutputCalibClassifier()
  status = calib_model.calibrate(outputs=val_outputs, targets=val_targets)
  test_entropies = calib_model.predictive.entropy(outputs=test_outputs)
Starting from Flax models has higher compatibility requirements than the
`From uncertainty estimates <https://github.com/awslabs/fortuna#from-uncertainty-estimates>`_
and `From model outputs <https://github.com/awslabs/fortuna#from-model-outputs>`_ usage modes,
as it requires deep learning models written in `Flax <https://flax.readthedocs.io/en/latest/index.html>`_.
However, it enables you to replace standard model training with scalable Bayesian inference procedures,
which may significantly improve the quantification of predictive uncertainty.
Example. Suppose you have a Flax classification deep learning model :code:`model` from inputs to logits, with output
dimension given by :code:`output_dim`. Furthermore,
you have some training, validation and test TensorFlow data loaders :code:`train_data_loader`, :code:`val_data_loader`
and :code:`test_data_loader`, respectively.
The following code provides a minimal classification example to get calibrated probability estimates.
.. code-block:: python

  from fortuna.data import DataLoader
  from fortuna.prob_model import ProbClassifier

  # Convert the TensorFlow data loaders into Fortuna data loaders.
  train_data_loader = DataLoader.from_tensorflow_data_loader(train_data_loader)
  calib_data_loader = DataLoader.from_tensorflow_data_loader(val_data_loader)
  test_data_loader = DataLoader.from_tensorflow_data_loader(test_data_loader)

  # Build a probabilistic classifier around the Flax model and train it.
  prob_model = ProbClassifier(model=model)
  status = prob_model.train(
      train_data_loader=train_data_loader,
      calib_data_loader=calib_data_loader,
  )

  # Calibrated predictive mean probabilities on the test inputs.
  test_means = prob_model.predictive.mean(
      inputs_loader=test_data_loader.to_inputs_loader()
  )
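As a follow-up sanity check, you can turn the calibrated predictive means into hard predictions and score them with plain NumPy. The sketch below assumes you additionally have an array :code:`test_targets` of integer test labels and that :code:`test_means` contains one row of class probabilities per test input; both are assumptions for illustration, not part of the example above.

.. code-block:: python

  import numpy as np

  # Assumptions for illustration only: `test_means` has shape (n_test, output_dim) and
  # `test_targets` has shape (n_test,) with integer class labels.
  predicted_labels = np.argmax(np.asarray(test_means), axis=-1)
  accuracy = np.mean(predicted_labels == np.asarray(test_targets))
  print(f"Test accuracy: {accuracy:.3f}")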
NOTE: Before installing Fortuna, you are required to install `JAX <https://github.com/google/jax#installation>`_ in your virtual environment.
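For example, a CPU-only installation of JAX can typically be obtained with the command below; this is only a sketch, and you should follow the JAX installation instructions linked above for the command that matches your accelerator (CUDA, TPU, etc.).

.. code-block::

  pip install --upgrade "jax[cpu]"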
You can install Fortuna by typing
.. code-block::

  pip install aws-fortuna
Alternatively, you can build the package using `Poetry <https://python-poetry.org/docs/>`_.
If you choose this route, first install Poetry and add it to your PATH
(see `here <https://python-poetry.org/docs/#installation>`_). Then type
.. code-block::

  poetry install
All the dependencies will be installed at their required versions. Consider adding the following flags to the command above:
- :code:`-E transformers` if you want to use models and datasets from `Hugging Face <https://huggingface.co/>`_.
- :code:`-E sagemaker` if you want to install the dependencies necessary to run Fortuna on Amazon SageMaker.
- :code:`-E docs` if you want to install Sphinx dependencies to build the documentation.
- :code:`-E notebooks` if you want to work with Jupyter notebooks.
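For example, to pull in the Hugging Face and notebook dependencies in a single command, you can pass several flags at once (a sketch; pick whichever extras your workflow needs):

.. code-block::

  poetry install -E transformers -E notebooks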
Finally, you can either access the virtualenv that Poetry created by typing :code:`poetry shell`,
or execute commands within the virtualenv using the :code:`run` command, e.g. :code:`poetry run python`.
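Concretely, the two options described above look as follows:

.. code-block::

  poetry shell        # activate the virtualenv created by Poetry
  poetry run python   # or run a single command inside it without activating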
Several usage examples are found in the
`/examples <https://github.com/awslabs/fortuna/tree/main/examples>`_
directory.
We offer a simple pipeline that allows you to run Fortuna on Amazon SageMaker with minimal effort.
1. Create an AWS account - it is free! Store the account ID and the region where you want to launch training jobs.

2. First, `update your local AWS credentials <https://docs.aws.amazon.com/cli/latest/userguide/cli-authentication-short-term.html>`_.
   Then you need to build and `push a Docker image to an Amazon ECR repository <https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html>`_.
   This `script <https://github.com/awslabs/fortuna/tree/main/fortuna/docker/build_and_push.sh>`_ will help you do so - it will require your AWS account ID and region.
   If you need other packages to be included in your Docker image, you should consider customizing the `Dockerfile <https://github.com/awslabs/fortuna/tree/main/fortuna/docker/Dockerfile>`_.
   NOTE: the script has been tested on an M1 MacOS. Different operating systems may need small modifications.

3. Create an `S3 bucket <https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html>`_. You will need this to dump the results from your training jobs on Amazon SageMaker.

4. Write a configuration :code:`yaml` file. This will include your AWS details, the path to the entrypoint script that you want to run on Amazon SageMaker, the arguments to pass to the script, the path to the S3 bucket where you want to dump the results, the metrics to monitor, and more.
   Check `this file <https://github.com/awslabs/fortuna/tree/main/benchmarks/transformers/sagemaker_entrypoints/prob_model_text_classification_config/default.yaml>`_ for an example.

5. Finally, given :code:`config_dir`, that is the absolute path to the main configuration directory, and :code:`config_filename`, that is the name of the main configuration file (without the .yaml extension), enter Python and run the following:
.. code-block:: python

  from fortuna.sagemaker import run_training_job

  run_training_job(config_dir=config_dir, config_filename=config_filename)
- `AWS launch blog post <https://aws.amazon.com/blogs/machine-learning/introducing-fortuna-a-library-for-uncertainty-quantification/>`_
- `Fortuna: A Library for Uncertainty Quantification in Deep Learning [arXiv paper] <https://arxiv.org/abs/2302.04019>`_
To cite Fortuna:
.. code-block::
  @article{detommaso2023fortuna,
    title={Fortuna: A Library for Uncertainty Quantification in Deep Learning},
    author={Detommaso, Gianluca and Gasparin, Alberto and Donini, Michele and Seeger, Matthias and Wilson, Andrew Gordon and Archambeau, Cedric},
    journal={arXiv preprint arXiv:2302.04019},
    year={2023}
  }
If you wish to contribute to the project, please refer to our `contribution guidelines <https://github.com/awslabs/fortuna/blob/main/CONTRIBUTING.md>`_.
This project is licensed under the Apache-2.0 License.
See `LICENSE <https://github.com/awslabs/fortuna/blob/main/LICENSE>`_ for more information.
Similar Open Source Tools
artkit
ARTKIT is a Python framework developed by BCG X for automating prompt-based testing and evaluation of Gen AI applications. It allows users to develop automated end-to-end testing and evaluation pipelines for Gen AI systems, supporting multi-turn conversations and various testing scenarios like Q&A accuracy, brand values, equitability, safety, and security. The framework provides a simple API, asynchronous processing, caching, model agnostic support, end-to-end pipelines, multi-turn conversations, robust data flows, and visualizations. ARTKIT is designed for customization by data scientists and engineers to enhance human-in-the-loop testing and evaluation, emphasizing the importance of tailored testing for each Gen AI use case.
OlympicArena
OlympicArena is a comprehensive benchmark designed to evaluate advanced AI capabilities across various disciplines. It aims to push AI towards superintelligence by tackling complex challenges in science and beyond. The repository provides detailed data for different disciplines, allows users to run inference and evaluation locally, and offers a submission platform for testing models on the test set. Additionally, it includes an annotation interface and encourages users to cite their paper if they find the code or dataset helpful.
LLMeBench
LLMeBench is a flexible framework designed for accelerating benchmarking of Large Language Models (LLMs) in the field of Natural Language Processing (NLP). It supports evaluation of various NLP tasks using model providers like OpenAI, HuggingFace Inference API, and Petals. The framework is customizable for different NLP tasks, LLM models, and datasets across multiple languages. It features extensive caching capabilities, supports zero- and few-shot learning paradigms, and allows on-the-fly dataset download and caching. LLMeBench is open-source and continuously expanding to support new models accessible through APIs.
PromptAgent
PromptAgent is a repository for a novel automatic prompt optimization method that crafts expert-level prompts using language models. It provides a principled framework for prompt optimization by unifying prompt sampling and rewarding using MCTS algorithm. The tool supports different models like openai, palm, and huggingface models. Users can run PromptAgent to optimize prompts for specific tasks by strategically sampling model errors, generating error feedbacks, simulating future rewards, and searching for high-reward paths leading to expert prompts.
knowledge-graph-of-thoughts
Knowledge Graph of Thoughts (KGoT) is an innovative AI assistant architecture that integrates LLM reasoning with dynamically constructed knowledge graphs (KGs). KGoT extracts and structures task-relevant knowledge into a dynamic KG representation, iteratively enhanced through external tools such as math solvers, web crawlers, and Python scripts. Such structured representation of task-relevant knowledge enables low-cost models to solve complex tasks effectively. The KGoT system consists of three main components: the Controller, the Graph Store, and the Integrated Tools, each playing a critical role in the task-solving process.
kafka-ml
Kafka-ML is a framework designed to manage the pipeline of Tensorflow/Keras and PyTorch machine learning models on Kubernetes. It enables the design, training, and inference of ML models with datasets fed through Apache Kafka, connecting them directly to data streams like those from IoT devices. The Web UI allows easy definition of ML models without external libraries, catering to both experts and non-experts in ML/AI.
MegatronApp
MegatronApp is a toolchain built around the Megatron-LM training framework, offering performance tuning, slow-node detection, and training-process visualization. It includes modules like MegaScan for anomaly detection, MegaFBD for forward-backward decoupling, MegaDPP for dynamic pipeline planning, and MegaScope for visualization. The tool aims to enhance large-scale distributed training by providing valuable capabilities and insights.
mosec
Mosec is a high-performance and flexible model serving framework for building ML model-enabled backend and microservices. It bridges the gap between any machine learning models you just trained and the efficient online service API.

- **Highly performant**: web layer and task coordination built with Rust 🦀, which offers blazing speed in addition to efficient CPU utilization powered by async I/O
- **Ease of use**: user interface purely in Python 🐍, by which users can serve their models in an ML framework-agnostic manner using the same code as they do for offline testing
- **Dynamic batching**: aggregate requests from different users for batched inference and distribute results back
- **Pipelined stages**: spawn multiple processes for pipelined stages to handle CPU/GPU/IO mixed workloads
- **Cloud friendly**: designed to run in the cloud, with the model warmup, graceful shutdown, and Prometheus monitoring metrics, easily managed by Kubernetes or any container orchestration systems
- **Do one thing well**: focus on the online serving part, users can pay attention to the model optimization and business logic
rag-experiment-accelerator
The RAG Experiment Accelerator is a versatile tool that helps you conduct experiments and evaluations using Azure AI Search and RAG pattern. It offers a rich set of features, including experiment setup, integration with Azure AI Search, Azure Machine Learning, MLFlow, and Azure OpenAI, multiple document chunking strategies, query generation, multiple search types, sub-querying, re-ranking, metrics and evaluation, report generation, and multi-lingual support. The tool is designed to make it easier and faster to run experiments and evaluations of search queries and quality of response from OpenAI, and is useful for researchers, data scientists, and developers who want to test the performance of different search and OpenAI related hyperparameters, compare the effectiveness of various search strategies, fine-tune and optimize parameters, find the best combination of hyperparameters, and generate detailed reports and visualizations from experiment results.
ontogpt
OntoGPT is a Python package for extracting structured information from text using large language models, instruction prompts, and ontology-based grounding. It provides a command line interface and a minimal web app for easy usage. The tool has been evaluated on test data and is used in related projects like TALISMAN for gene set analysis. OntoGPT enables users to extract information from text by specifying relevant terms and provides the extracted objects as output.
facet
FACET is an open source library for human-explainable AI that combines model inspection and model-based simulation to provide better explanations for supervised machine learning models. It offers an efficient and transparent machine learning workflow, enhancing scikit-learn's pipelining paradigm with new capabilities for model selection, inspection, and simulation. FACET introduces new algorithms for quantifying dependencies and interactions between features in ML models, as well as for conducting virtual experiments to optimize predicted outcomes. The tool ensures end-to-end traceability of features using an augmented version of scikit-learn with enhanced support for pandas data frames. FACET also provides model inspection methods for scikit-learn estimators, enhancing global metrics like synergy and redundancy to complement the local perspective of SHAP. Additionally, FACET offers model simulation capabilities for conducting univariate uplift simulations based on important features like BMI.
aici
The Artificial Intelligence Controller Interface (AICI) lets you build Controllers that constrain and direct output of a Large Language Model (LLM) in real time. Controllers are flexible programs capable of implementing constrained decoding, dynamic editing of prompts and generated text, and coordinating execution across multiple, parallel generations. Controllers incorporate custom logic during the token-by-token decoding and maintain state during an LLM request. This allows diverse Controller strategies, from programmatic or query-based decoding to multi-agent conversations to execute efficiently in tight integration with the LLM itself.
eval-dev-quality
DevQualityEval is an evaluation benchmark and framework designed to compare and improve the quality of code generation of Language Model Models (LLMs). It provides developers with a standardized benchmark to enhance real-world usage in software development and offers users metrics and comparisons to assess the usefulness of LLMs for their tasks. The tool evaluates LLMs' performance in solving software development tasks and measures the quality of their results through a point-based system. Users can run specific tasks, such as test generation, across different programming languages to evaluate LLMs' language understanding and code generation capabilities.
CoLLM
CoLLM is a novel method that integrates collaborative information into Large Language Models (LLMs) for recommendation. It converts recommendation data into language prompts, encodes them with both textual and collaborative information, and uses a two-step tuning method to train the model. The method incorporates user/item ID fields in prompts and employs a conventional collaborative model to generate user/item representations. CoLLM is built upon MiniGPT-4 and utilizes pretrained Vicuna weights for training.
agentscript
AgentScript is an open-source framework for building AI agents that think in code. It prompts a language model to generate JavaScript code, which is then executed in a dedicated runtime with resumability, state persistence, and interactivity. The framework allows for abstract task execution without needing to know all the data beforehand, making it flexible and efficient. AgentScript supports tools, deterministic functions, and LLM-enabled functions, enabling dynamic data processing and decision-making. It also provides state management and human-in-the-loop capabilities, allowing for pausing, serialization, and resumption of execution.
For similar tasks
ck
Collective Mind (CM) is a collection of portable, extensible, technology-agnostic and ready-to-use automation recipes with a human-friendly interface (aka CM scripts) to unify and automate all the manual steps required to compose, run, benchmark and optimize complex ML/AI applications on any platform with any software and hardware: see online catalog and source code. CM scripts require Python 3.7+ with minimal dependencies and are continuously extended by the community and MLCommons members to run natively on Ubuntu, MacOS, Windows, RHEL, Debian, Amazon Linux and any other operating system, in a cloud or inside automatically generated containers while keeping backward compatibility - please don't hesitate to report encountered issues here and contact us via public Discord Server to help this collaborative engineering effort! CM scripts were originally developed based on the following requirements from the MLCommons members to help them automatically compose and optimize complex MLPerf benchmarks, applications and systems across diverse and continuously changing models, data sets, software and hardware from Nvidia, Intel, AMD, Google, Qualcomm, Amazon and other vendors: * must work out of the box with the default options and without the need to edit some paths, environment variables and configuration files; * must be non-intrusive, easy to debug and must reuse existing user scripts and automation tools (such as cmake, make, ML workflows, python poetry and containers) rather than substituting them; * must have a very simple and human-friendly command line with a Python API and minimal dependencies; * must require minimal or zero learning curve by using plain Python, native scripts, environment variables and simple JSON/YAML descriptions instead of inventing new workflow languages; * must have the same interface to run all automations natively, in a cloud or inside containers. CM scripts were successfully validated by MLCommons to modularize MLPerf inference benchmarks and help the community automate more than 95% of all performance and power submissions in the v3.1 round across more than 120 system configurations (models, frameworks, hardware) while reducing development and maintenance costs.
eval-dev-quality
DevQualityEval is an evaluation benchmark and framework designed to compare and improve the quality of code generation of Language Model Models (LLMs). It provides developers with a standardized benchmark to enhance real-world usage in software development and offers users metrics and comparisons to assess the usefulness of LLMs for their tasks. The tool evaluates LLMs' performance in solving software development tasks and measures the quality of their results through a point-based system. Users can run specific tasks, such as test generation, across different programming languages to evaluate LLMs' language understanding and code generation capabilities.
modelbench
ModelBench is a tool for running safety benchmarks against AI models and generating detailed reports. It is part of the MLCommons project and is designed as a proof of concept to aggregate measures, relate them to specific harms, create benchmarks, and produce reports. The tool requires LlamaGuard for evaluating responses and a TogetherAI account for running benchmarks. Users can install ModelBench from GitHub or PyPI, run tests using Poetry, and create benchmarks by providing necessary API keys. The tool generates static HTML pages displaying benchmark scores and allows users to dump raw scores and manage cache for faster runs. ModelBench is aimed at enabling users to test their own models and create tests and benchmarks.
Aidan-Bench
Aidan Bench is a tool that rewards creativity, reliability, contextual attention, and instruction following. It is weakly correlated with Lmsys, has no score ceiling, and aligns with real-world open-ended use. The tool involves giving LLMs open-ended questions and evaluating their answers based on novelty scores. Users can set up the tool by installing required libraries and setting up API keys. The project allows users to run benchmarks for different models and provides flexibility in threading options.
AirspeedVelocity.jl
AirspeedVelocity.jl is a tool designed to simplify benchmarking of Julia packages over their lifetime. It provides a CLI to generate benchmarks, compare commits/tags/branches, plot benchmarks, and run benchmark comparisons for every submitted PR as a GitHub action. The tool freezes the benchmark script at a specific revision to prevent old history from affecting benchmarks. Users can configure options using CLI flags and visualize benchmark results. AirspeedVelocity.jl can be used to benchmark any Julia package and offers features like generating tables and plots of benchmark results. It also supports custom benchmarks and can be integrated into GitHub actions for automated benchmarking of PRs.
orama-core
OramaCore is a database designed for AI projects, answer engines, copilots, and search functionalities. It offers features such as a full-text search engine, vector database, LLM interface, and various utilities. The tool is currently under active development and not recommended for production use due to potential API changes. OramaCore aims to provide a comprehensive solution for managing data and enabling advanced AI capabilities in projects.
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.