fortuna
A Library for Uncertainty Quantification.
Stars: 881
Fortuna is a library for uncertainty quantification that enables users to estimate predictive uncertainty, assess model reliability, trigger human intervention, and deploy models safely. It provides calibration and conformal methods for pre-trained models in any framework, supports Bayesian inference methods for deep learning models written in Flax, and is designed to be intuitive and highly configurable. Users can run benchmarks and bring uncertainty to production systems with ease.
README:
Fortuna
#######
.. image:: https://img.shields.io/pypi/status/Fortuna
   :target: https://img.shields.io/pypi/status/Fortuna
   :alt: PyPI - Status

.. image:: https://img.shields.io/pypi/dm/aws-fortuna
   :target: https://pypistats.org/packages/aws-fortuna
   :alt: PyPI - Downloads

.. image:: https://img.shields.io/pypi/v/aws-fortuna
   :target: https://img.shields.io/pypi/v/aws-fortuna
   :alt: PyPI - Version

.. image:: https://img.shields.io/github/license/awslabs/Fortuna
   :target: https://github.com/awslabs/Fortuna/blob/main/LICENSE
   :alt: License

.. image:: https://readthedocs.org/projects/aws-fortuna/badge/?version=latest
   :target: https://aws-fortuna.readthedocs.io
   :alt: Documentation Status
Proper estimation of predictive uncertainty is fundamental in applications that involve critical decisions. Uncertainty can be used to assess the reliability of model predictions, trigger human intervention, or decide whether a model can be safely deployed in the wild.
Fortuna is a library for uncertainty quantification that makes it easy for users to run benchmarks and bring uncertainty to production systems.
Fortuna provides calibration and conformal methods starting from pre-trained models written in any framework, and it further supports several Bayesian inference methods starting from deep learning models written in `Flax <https://flax.readthedocs.io/en/latest/index.html>`_. The library is designed to be intuitive for practitioners unfamiliar with uncertainty quantification, and it is highly configurable.

Check the `documentation <https://aws-fortuna.readthedocs.io/en/latest/>`_ for a quickstart, examples and references.
Fortuna offers three different usage modes: `From uncertainty estimates <https://github.com/awslabs/fortuna#from-uncertainty-estimates>`_, `From model outputs <https://github.com/awslabs/fortuna#from-model-outputs>`_ and `From Flax models <https://github.com/awslabs/fortuna#from-flax-models>`_. These serve users according to the constraints dictated by their own applications. Their pipelines are depicted in the following figure, each starting from one of the green panels.

.. image:: https://github.com/awslabs/fortuna/raw/main/docs/source/_static/pipeline.png
   :target: https://github.com/awslabs/fortuna/raw/main/docs/source/_static/pipeline.png
From uncertainty estimates
==========================

Starting from uncertainty estimates has minimal compatibility requirements, and it is the quickest way to interact with the library. This usage mode offers conformal prediction methods for both classification and regression. These take uncertainty estimates as input and return rigorous sets of predictions that retain a user-given level of probability. In one-dimensional regression tasks, conformal sets may be thought of as calibrated versions of confidence or credible intervals.

Mind that if the uncertainty estimates you provide as inputs are inaccurate, conformal sets might be large and unusable. For this reason, if your application allows it, please consider the `From model outputs <https://github.com/awslabs/fortuna#from-model-outputs>`_ and `From Flax models <https://github.com/awslabs/fortuna#from-flax-models>`_ usage modes.
Example. Suppose you want to calibrate credible intervals with coverage error :code:`error`, each corresponding to a different test input variable. We assume that the credible intervals are passed as arrays of lower and upper bounds, respectively :code:`test_lower_bounds` and :code:`test_upper_bounds`. You also have lower and upper bounds of credible intervals computed for several validation inputs, respectively :code:`val_lower_bounds` and :code:`val_upper_bounds`. The corresponding array of validation targets is denoted by :code:`val_targets`. The following code produces conformal prediction intervals, i.e. calibrated versions of your test credible intervals.
.. code-block:: python

  from fortuna.conformal import QuantileConformalRegressor

  conformal_intervals = QuantileConformalRegressor().conformal_interval(
      val_lower_bounds=val_lower_bounds,
      val_upper_bounds=val_upper_bounds,
      test_lower_bounds=test_lower_bounds,
      test_upper_bounds=test_upper_bounds,
      val_targets=val_targets,
      error=error,
  )
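If you do not already have credible intervals, one way to obtain the lower and upper bounds above is to take quantiles of samples drawn from a predictive distribution, e.g. Monte Carlo samples from an ensemble or a Bayesian posterior predictive. The following is a minimal sketch under that assumption; the sample arrays and their shapes are hypothetical placeholders, not part of Fortuna's API.

.. code-block:: python

  import numpy as np

  # Hypothetical predictive samples of shape (num_samples, num_inputs),
  # e.g. drawn from an ensemble or a Bayesian posterior predictive.
  val_samples = np.random.randn(100, 50)
  test_samples = np.random.randn(100, 20)

  error = 0.05  # desired coverage error

  # Credible-interval bounds as the error/2 and 1 - error/2 quantiles.
  val_lower_bounds = np.quantile(val_samples, error / 2, axis=0)
  val_upper_bounds = np.quantile(val_samples, 1 - error / 2, axis=0)
  test_lower_bounds = np.quantile(test_samples, error / 2, axis=0)
  test_upper_bounds = np.quantile(test_samples, 1 - error / 2, axis=0)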
From model outputs
==================

Starting from model outputs assumes you have already trained a model in some framework, and you arrive at Fortuna with model outputs in :code:`numpy.ndarray` format for each input data point. This usage mode allows you to calibrate your model outputs, estimate uncertainty, compute metrics and obtain conformal sets.

Compared to the `From uncertainty estimates <https://github.com/awslabs/fortuna#from-uncertainty-estimates>`_ usage mode, this one offers better control, as it can make sure that uncertainty estimates have been appropriately calibrated. However, if the model had been trained with classical methods, the resulting quantification of model (a.k.a. epistemic) uncertainty may be poor. To mitigate this problem, please consider the `From Flax models <https://github.com/awslabs/fortuna#from-flax-models>`_ usage mode.
Example. Suppose you have validation and test model outputs, respectively :code:`val_outputs` and :code:`test_outputs`. Furthermore, you have arrays of validation and test target variables, respectively :code:`val_targets` and :code:`test_targets`. The following code provides a minimal classification example to get calibrated predictive entropy estimates.
.. code-block:: python

  from fortuna.output_calib_model import OutputCalibClassifier

  calib_model = OutputCalibClassifier()
  status = calib_model.calibrate(outputs=val_outputs, targets=val_targets)
  test_entropies = calib_model.predictive.entropy(outputs=test_outputs)
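If you want to try the snippet above without a trained model at hand, you can substitute placeholder arrays of the expected shapes. The shapes and random values below are purely illustrative assumptions; in practice the outputs come from your own model.

.. code-block:: python

  import numpy as np

  num_classes = 3
  # Placeholder logits (one row per data point) and integer class labels.
  val_outputs = np.random.randn(1000, num_classes)
  test_outputs = np.random.randn(200, num_classes)
  val_targets = np.random.randint(0, num_classes, size=1000)
  test_targets = np.random.randint(0, num_classes, size=200)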
From Flax models
================

Starting from Flax models has higher compatibility requirements than the `From uncertainty estimates <https://github.com/awslabs/fortuna#from-uncertainty-estimates>`_ and `From model outputs <https://github.com/awslabs/fortuna#from-model-outputs>`_ usage modes, as it requires deep learning models written in `Flax <https://flax.readthedocs.io/en/latest/index.html>`_. However, it enables you to replace standard model training with scalable Bayesian inference procedures, which may significantly improve the quantification of predictive uncertainty.
Example. Suppose you have a Flax classification deep learning model :code:`model` from inputs to logits, with output dimension given by :code:`output_dim`. Furthermore, you have training, validation and test TensorFlow data loaders, respectively :code:`train_data_loader`, :code:`val_data_loader` and :code:`test_data_loader`. The following code provides a minimal classification example to get calibrated probability estimates.
.. code-block:: python

  from fortuna.data import DataLoader

  train_data_loader = DataLoader.from_tensorflow_data_loader(train_data_loader)
  calib_data_loader = DataLoader.from_tensorflow_data_loader(val_data_loader)
  test_data_loader = DataLoader.from_tensorflow_data_loader(test_data_loader)

  from fortuna.prob_model import ProbClassifier

  prob_model = ProbClassifier(model=model)
  status = prob_model.train(
      train_data_loader=train_data_loader,
      calib_data_loader=calib_data_loader,
  )
  test_means = prob_model.predictive.mean(
      inputs_loader=test_data_loader.to_inputs_loader(),
  )
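The example above assumes a Flax module :code:`model` mapping inputs to logits. As a rough sketch of what such a model could look like (the specific architecture below is an arbitrary assumption, not something prescribed by Fortuna):

.. code-block:: python

  import flax.linen as nn

  class MLPClassifier(nn.Module):
      """A small multi-layer perceptron returning logits."""
      output_dim: int

      @nn.compact
      def __call__(self, x):
          x = nn.Dense(128)(x)
          x = nn.relu(x)
          return nn.Dense(self.output_dim)(x)  # logits

  output_dim = 10  # e.g. number of classes in your task
  model = MLPClassifier(output_dim=output_dim)

Any Flax module producing logits of dimension :code:`output_dim` should be usable in the same way.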
Installation
============

NOTE: Before installing Fortuna, you are required to install `JAX <https://github.com/google/jax#installation>`_ in your virtual environment.
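The right JAX installation command depends on your hardware; see the linked JAX installation guide for the variant that matches your accelerator. As a rough sketch, a CPU-only environment has typically been set up with something like:

.. code-block::

  pip install --upgrade "jax[cpu]"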
You can install Fortuna by typing
.. code-block::

  pip install aws-fortuna
Alternatively, you can build the package using `Poetry <https://python-poetry.org/docs/>`_. If you choose to go this way, first install Poetry and add it to your PATH (see `here <https://python-poetry.org/docs/#installation>`_). Then type

.. code-block::

  poetry install
All the dependencies will be installed at their required versions. Consider adding the following flags to the command above (a combined example is shown after this list):

- :code:`-E transformers` if you want to use models and datasets from `Hugging Face <https://huggingface.co/>`_.
- :code:`-E sagemaker` if you want to install the dependencies necessary to run Fortuna on Amazon SageMaker.
- :code:`-E docs` if you want to install Sphinx dependencies to build the documentation.
- :code:`-E notebooks` if you want to work with Jupyter notebooks.
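For instance, to include both the Hugging Face and the notebook dependencies, the extras can be combined in a single command (shown here purely as an illustrative example):

.. code-block::

  poetry install -E transformers -E notebooks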
Finally, you can either access the virtualenv that Poetry created by typing :code:`poetry shell`, or execute commands within the virtualenv using the :code:`run` command, e.g. :code:`poetry run python`.
Examples
========

Several usage examples are found in the `/examples <https://github.com/awslabs/fortuna/tree/main/examples>`_ directory.
Running Fortuna on Amazon SageMaker
===================================

We offer a simple pipeline that allows you to run Fortuna on Amazon SageMaker with minimal effort.

- Create an AWS account - it is free! Store the account ID and the region where you want to launch training jobs.

- Update your local `AWS credentials <https://docs.aws.amazon.com/cli/latest/userguide/cli-authentication-short-term.html>`_. Then build and `push a Docker image to an Amazon ECR repository <https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html>`_. This `script <https://github.com/awslabs/fortuna/tree/main/fortuna/docker/build_and_push.sh>`_ will help you do so - it will require your AWS account ID and region. If you need other packages to be included in your Docker image, consider customizing the `Dockerfile <https://github.com/awslabs/fortuna/tree/main/fortuna/docker/Dockerfile>`_. NOTE: the script has been tested on an M1 macOS machine; other operating systems may need small modifications.

- Create an `S3 bucket <https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html>`_. You will need this to dump the results from your training jobs on Amazon SageMaker.

- Write a configuration yaml file. This will include your AWS details, the path to the entrypoint script that you want to run on Amazon SageMaker, the arguments to pass to the script, the path to the S3 bucket where you want to dump the results, the metrics to monitor, and more. Check `this file <https://github.com/awslabs/fortuna/tree/main/benchmarks/transformers/sagemaker_entrypoints/prob_model_text_classification_config/default.yaml>`_ for an example.

- Finally, given :code:`config_dir`, the absolute path to the main configuration directory, and :code:`config_filename`, the name of the main configuration file (without the .yaml extension), enter Python and run the following:
.. code-block:: python

  from fortuna.sagemaker import run_training_job

  run_training_job(config_dir=config_dir, config_filename=config_filename)
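For example, with hypothetical values (the directory path below is purely illustrative; point it at wherever your own configuration lives):

.. code-block:: python

  from fortuna.sagemaker import run_training_job

  config_dir = "/home/me/fortuna-configs"  # hypothetical absolute path
  config_filename = "default"              # refers to default.yaml inside config_dir

  run_training_job(config_dir=config_dir, config_filename=config_filename)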
Material
========

- `AWS launch blog post <https://aws.amazon.com/blogs/machine-learning/introducing-fortuna-a-library-for-uncertainty-quantification/>`_
- `Fortuna: A Library for Uncertainty Quantification in Deep Learning [arXiv paper] <https://arxiv.org/abs/2302.04019>`_
Citation
========

To cite Fortuna:

.. code-block::

  @article{detommaso2023fortuna,
    title={Fortuna: A Library for Uncertainty Quantification in Deep Learning},
    author={Detommaso, Gianluca and Gasparin, Alberto and Donini, Michele and Seeger, Matthias and Wilson, Andrew Gordon and Archambeau, Cedric},
    journal={arXiv preprint arXiv:2302.04019},
    year={2023}
  }
Contributing
============

If you wish to contribute to the project, please refer to our `contribution guidelines <https://github.com/awslabs/fortuna/blob/main/CONTRIBUTING.md>`_.

License
=======

This project is licensed under the Apache-2.0 License. See `LICENSE <https://github.com/awslabs/fortuna/blob/main/LICENSE>`_ for more information.