
metta
A reinforcement learning codebase focusing on the emergence of cooperation and alignment in multi-agent AI systems.
Stars: 88

Metta AI is an open-source research project focusing on the emergence of cooperation and alignment in multi-agent AI systems. It explores the impact of social dynamics like kinship and mate selection on learning and cooperative behaviors of AI agents. The project introduces a reward-sharing mechanism mimicking familial bonds and mate selection to observe the evolution of complex social behaviors among AI agents. Metta aims to contribute to the discussion on safe and beneficial AGI by creating an environment where AI agents can develop general intelligence through continuous learning and adaptation.
README:
A reinforcement learning codebase focusing on the emergence of cooperation and alignment in multi-agent AI systems.
- Discord: https://discord.gg/mQzrgwqmwy
- Short (5m) Talk: https://www.youtube.com/watch?v=bt6hV73VA8I
- Talk: https://foresight.org/summary/david-bloomin-metta-learning-love-is-all-you-need/
Metta AI is an open-source research project investigating the emergence of cooperation and alignment in multi-agent AI systems. By creating a model organism for complex multi-agent gridworld environments, the project aims to study the impact of social dynamics, such as kinship and mate selection, on learning and cooperative behaviors of AI agents.
Metta AI explores the hypothesis that social dynamics, akin to love in biological systems, play a crucial role in the development of cooperative AGI and AI alignment. The project introduces a novel reward-sharing mechanism mimicking familial bonds and mate selection, allowing researchers to observe the evolution of complex social behaviors and cooperation among AI agents. By investigating this concept in a controlled multi-agent setting, the project seeks to contribute to the broader discussion on the path towards safe and beneficial AGI.
Metta is a simulation environment (game) designed to train AI agents capable of meta-learning general intelligence. The core idea is to create an environment where incremental intelligence is rewarded, fostering the development of generally intelligent agents.
- Agents and Environment: Agents are shaped by their environment, learning policies that enhance their fitness. To develop general intelligence, agents need an environment where increasing intelligence is continually rewarded.
- Competitive and Cooperative Dynamics: A game with multiple agents and some competition creates an evolving environment where challenges increase with agent intelligence. Purely competitive games often reach a Nash equilibrium, where locally optimal strategies are hard to deviate from. Adding cooperative dynamics introduces more behavioral possibilities and smooths the behavioral space (see the toy payoff example after this list).
- Kinship Structures: The game features a flexible kinship structure, simulating a range of relationships from close kin to strangers. Agents must learn to coordinate with close kin, negotiate with more distant kin, and compete with strangers. This diverse social environment encourages continuous learning and intelligence growth.
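As a toy illustration of that point (invented payoff numbers, not anything from the Metta codebase): in a one-shot prisoner's-dilemma-style game, mutual defection is the Nash equilibrium, but letting an agent value its partner's reward, a crude stand-in for kinship-based reward sharing, flips its best response to cooperation.

```python
# Toy illustration only -- payoff numbers are invented, not Metta code.
payoff = {  # (my_move, their_move) -> my raw reward
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def effective(me: str, them: str, kinship: float) -> float:
    # I receive my own payoff plus a kinship-weighted share of my partner's.
    return payoff[(me, them)] + kinship * payoff[(them, me)]

for kinship in (0.0, 0.8):
    best = {them: max("CD", key=lambda me: effective(me, them, kinship)) for them in "CD"}
    print(f"kinship={kinship}: best responses {best}")
    # kinship=0.0 -> defect against everything; kinship=0.8 -> always cooperate.
```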
The game is designed to evolve with the agents, providing unlimited learning opportunities despite simple rules.
The current version of the game lives in this repository. It's a grid world with the following dynamics (a toy sketch of the resource loop follows the list):
- Agents and Vision: Agents can see a limited number of squares around them.
- Resources: Agents harvest diamonds, convert them to energy at charger stations, and use energy to power the "heart altar" for rewards.
- Energy Management: All actions cost energy, so agents learn to manage their energy budgets efficiently.
- Combat: Agents can attack others, temporarily freezing the target and stealing resources.
- Defense: Agents can toggle shields, which drain energy but absorb attacks.
- Cooperation: Agents can share energy or resources and use markers to communicate.
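The harvest → charge → altar loop can be sketched in a few lines. This is illustrative only; the names and costs below are invented, not the MettaGrid API.

```python
from dataclasses import dataclass

@dataclass
class ToyAgent:
    diamonds: int = 0
    energy: int = 10
    reward: float = 0.0

ACTION_COST = 1     # every action drains a little energy
CHARGE_RATE = 5     # energy gained per diamond at a charger station
ALTAR_COST = 8      # energy needed to power the heart altar
ALTAR_REWARD = 1.0  # reward for powering the altar

def harvest(a: ToyAgent) -> None:
    a.energy -= ACTION_COST
    a.diamonds += 1

def charge(a: ToyAgent) -> None:
    a.energy -= ACTION_COST
    if a.diamonds:
        a.diamonds -= 1
        a.energy += CHARGE_RATE

def power_altar(a: ToyAgent) -> None:
    if a.energy >= ALTAR_COST:
        a.energy -= ALTAR_COST
        a.reward += ALTAR_REWARD

agent = ToyAgent()
harvest(agent)
charge(agent)
power_altar(agent)
print(agent)  # ToyAgent(diamonds=0, energy=5, reward=1.0)
```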
The game offers numerous possibilities for exploration, including:
- Diverse Energy Profiles: Assigning different energy profiles to agents, essentially giving them different bodies and policies.
- Dynamic Energy Profiles: Allowing agents to change their energy profiles, reflecting different postures or emotions.
- Resource Types and Conversions: Introducing different resource types and conversion mechanisms.
- Environment Modification: Enabling agents to modify the game board by creating, destroying, or altering objects.
The game explores various kinship structures:
- Random Kinship Scores: Each pair of agents has a kinship score sampled from a distribution.
- Teams: Agents belong to teams with symmetric kinship among team members.
- Hives/Clans/Families: Structuring agents into larger kinship groups.
Future plans include incorporating mate-selection dynamics, where agents share future rewards at a cost, potentially leading to intelligence gains through a signaling arms race.
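One way to picture the kinship structures above is as symmetric agent-by-agent score matrices. The constructors below are a toy sketch with invented shapes, not the repository's configuration schema.

```python
import numpy as np

def random_kinship(n: int, rng: np.random.Generator) -> np.ndarray:
    """Each pair of agents gets a kinship score sampled from a distribution."""
    k = rng.uniform(0.0, 1.0, size=(n, n))
    k = (k + k.T) / 2           # kinship is symmetric
    np.fill_diagonal(k, 1.0)    # full weight on your own reward
    return k

def team_kinship(team_ids: list[int], in_team: float = 0.8) -> np.ndarray:
    """Symmetric kinship among teammates, none toward strangers."""
    ids = np.asarray(team_ids)
    k = np.where(ids[:, None] == ids[None, :], in_team, 0.0)
    np.fill_diagonal(k, 1.0)
    return k

print(random_kinship(3, np.random.default_rng(0)).round(2))
print(team_kinship([0, 0, 1, 1]))
```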
Metta aims to create a rich, evolving environment where AI agents can develop general intelligence through continuous learning and adaptation.
The project's modular design and open-source nature make it easy for researchers to adapt and extend the platform to investigate their own hypotheses in this domain. The highly performant, open-ended game rules provide a rich environment for studying these behaviors and their potential implications for AI alignment.
Some areas of research interest:
- Develop rich and diverse gridworld environments with complex dynamics, such as resource systems, agent diversity, procedural terrain generation, support for various environment types, population dynamics, and kinship schemes.
- Incorporate techniques like dense learning signals, surprise minimization, exploration strategies, and blending reinforcement and imitation learning.
- Investigate scalable training approaches, including distributed reinforcement learning and student-teacher architectures, to enable efficient training of large-scale multi-agent systems.
- Design and implement a comprehensive suite of intelligence evaluations for gridworld agents, covering navigation tasks, maze solving, in-context learning, cooperation, and competition scenarios.
- Develop tools and infrastructure for efficient management, tracking, and deployment of experiments, such as cloud cluster management, experiment tracking and visualization, and continuous integration and deployment pipelines.
This README provides only a brief overview of research explorations. Visit the research roadmap for more details.
Clone the repository and run the setup:

```bash
git clone https://github.com/Metta-AI/metta.git
cd metta
./install.sh  # Interactive setup - installs uv, configures metta, and installs components
```

After installation, you can use metta commands directly:

```bash
metta status     # Check component status
metta install    # Install additional components
metta configure  # Reconfigure for a different profile
```

Installation profiles:

```bash
./install.sh --profile=softmax   # For Softmax employees
./install.sh --profile=external  # For external collaborators
./install.sh --help              # Show all available options
```
The repository contains command-line tools in the `tools/` directory. `run.py` is a script that kicks off tasks like training, evaluation, and visualization. The runner looks up the task, builds its configuration, and runs it. The currently available tasks are:
- `experiments.recipes.arena.train`: Train on the arena curriculum.
  `./tools/run.py experiments.recipes.arena.train --args run=my_experiment`
- `experiments.recipes.navigation.train`: Train on the navigation curriculum.
  `./tools/run.py experiments.recipes.navigation.train --args run=my_experiment`
- `experiments.recipes.arena.play`: Play in the browser.
  `./tools/run.py experiments.recipes.arena.play`
- `experiments.recipes.arena.replay`: Replay a single episode from a saved policy.
  `./tools/run.py experiments.recipes.arena.replay --overrides policy_uri=wandb://run/local.alice.1`
- `experiments.recipes.arena.evaluate`: Evaluate a policy on the arena eval suite.
  `./tools/run.py experiments.recipes.arena.evaluate --args policy_uri=wandb://run/local.alice.1`
- Dry-run version, e.g. print the resolved config without executing it:
  `./tools/run.py experiments.recipes.arena.train --args run=my_experiment --dry-run`
Use the runner like this:
./tools/run.py <task_name> [--args key=value ...] [--overrides path.to.field=value ...] [--dry-run]
- `task_name`: a Python-style path to a task (for example, `experiments.recipes.arena.train`).
- `--args`: name=value pairs passed to the task function (these become constructor args of the Tool it returns).
  - Types: integers (`42`), floats (`0.1`), booleans (`true`/`false`), and strings.
  - Multiple args: add more pairs separated by spaces.
  - Example: `--args run=local.alice.1`
- `--overrides`: update fields inside the returned Tool configuration using dot paths.
  - Common fields: `system.device=cpu`, `wandb.enabled=false`, `trainer.total_timesteps=100000`, `trainer.rollout_workers=4`, `policy_uri=wandb://run/<name>` (for replay/eval).
  - Multiple overrides: add more pairs separated by spaces.
  - Example: `--overrides system.device=cpu wandb.enabled=false`
- `--dry-run`: print the fully resolved configuration as JSON and exit without running.
Quick examples:

```bash
# Faster local run on CPU, less logging
./tools/run.py experiments.recipes.arena.train \
  --args run=local.alice.1 \
  --overrides system.device=cpu wandb.enabled=false trainer.total_timesteps=100000

# Evaluate a specific policy URI on the arena suite
./tools/run.py experiments.recipes.arena.evaluate --args policy_uri=wandb://run/local.alice.1
```
Tips:
- Strings with spaces: quote the value, for example `notes="my local run"`.
- Booleans are lowercase: `true` and `false`.
- If a value looks numeric but should be a string, wrap it in quotes (for example, `run="001"`). A toy sketch of these coercion rules follows.
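The sketch below illustrates the value-typing behavior described above. It is an invented approximation for intuition, not the runner's actual parser.

```python
def coerce(raw: str):
    if raw in ("true", "false"):
        return raw == "true"
    for cast in (int, float):
        try:
            return cast(raw)
        except ValueError:
            pass
    return raw  # fall back to a plain string

assert coerce("42") == 42        # integer
assert coerce("0.1") == 0.1      # float
assert coerce("false") is False  # boolean
assert coerce("001") == 1        # looks numeric -- quote it (run="001") to keep the string
```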
A “task” is just a Python function (or class) that returns a Tool configuration. The runner loads it by name and runs its `invoke()` method.
What you write:
- A function that returns a Tool, for example `TrainTool`, `SimTool`, `PlayTool`, or `ReplayTool`.
- Place it anywhere importable (for personal use, `experiments/user/<your_file>.py` is convenient).
- The function name becomes part of the task name you run.
Minimal example:

```python
# experiments/user/my_tasks.py
from metta.mettagrid.config.envs import make_arena
from metta.rl.trainer_config import EvaluationConfig, TrainerConfig
from metta.sim.simulation_config import SimulationConfig
from metta.tools.train import TrainTool

def my_train(run: str = "local.me.1") -> TrainTool:
    trainer = TrainerConfig(
        evaluation=EvaluationConfig(
            simulations=[SimulationConfig(name="arena/basic", env=make_arena(num_agents=4))]
        )
    )
    return TrainTool(trainer=trainer, run=run)
```
Run your task:

```bash
./tools/run.py experiments.user.my_tasks.my_train --args run=local.me.2 \
  --overrides system.device=cpu wandb.enabled=false
```
Notes:
- Tasks can also be Tool classes (subclasses of `metta.common.config.tool.Tool`). The runner will construct them with `--args` and then apply `--overrides`. A hypothetical sketch follows these notes.
- Use `--dry-run` while developing to see the exact configuration your task produces.
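A minimal sketch of a Tool-class task, assuming only the contract stated above (constructed from `--args`, fields updated by `--overrides`, then `invoke()` called by the runner). The real base-class interface is defined in `metta.common.config.tool.Tool`; the file path and field names here are hypothetical.

```python
# experiments/user/my_tool.py -- hypothetical Tool-class task, sketch only.
from metta.common.config.tool import Tool

class MyTask(Tool):
    run: str = "local.me.1"  # filled from --args run=...
    device: str = "cuda"     # changeable via --overrides device=cpu

    def invoke(self) -> None:
        # Experiment logic goes here.
        print(f"running {self.run} on {self.device}")
```

You could then run it with something like `./tools/run.py experiments.user.my_tool.MyTask --args run=local.me.3` (path hypothetical).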
To use WandB with your personal account:
- Get your WandB API key from wandb.ai (click your profile → API keys).
- Add it to your `~/.netrc` file:
  `machine api.wandb.ai login user password YOUR_API_KEY_HERE`
- Edit `configs/wandb/external_user.yaml` and replace `???` with your WandB username:
  `entity: ???  # Replace with your WandB username`

Now you can run training with your personal WandB config:

`./tools/run.py experiments.recipes.arena.train --args run=local.yourname.123 --overrides wandb.enabled=true wandb.entity=<your_user>`
Mettascope allows you to run and view episodes in the environment you specify. It goes beyond spectator mode, letting you take over an agent and control it manually.
For more information, see ./mettascope/README.md.
./tools/run.py experiments.recipes.arena.play
Optional overrides:
- `policy_uri=<path>`: use a specific policy for NPC agents.
  - Local checkpoints: `file://./train_dir/<run>/checkpoints`
  - WandB artifacts: `wandb://run/<run_name>`

To replay a single episode from a saved policy:

`./tools/run.py experiments.recipes.arena.replay --overrides policy_uri=wandb://run/local.alice.1`
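The two policy URI schemes behave roughly as sketched below. This is a hypothetical helper for illustration; the actual resolution logic lives elsewhere in the codebase.

```python
def describe_policy_uri(uri: str) -> str:
    if uri.startswith("file://"):
        return f"local checkpoint directory: {uri[len('file://'):]}"
    if uri.startswith("wandb://run/"):
        return f"WandB artifact for run: {uri[len('wandb://run/'):]}"
    raise ValueError(f"unknown policy URI scheme: {uri}")

print(describe_policy_uri("file://./train_dir/my_run/checkpoints"))
print(describe_policy_uri("wandb://run/local.alice.1"))
```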
When you run training with WandB enabled, results for the eval suites will appear on your WandB run page. (This does not apply to anything trained before April 8th.)
If you want to run evaluation post-training to compare different policies, you can do the following:
Evaluate a policy against the arena eval suite:
./tools/run.py experiments.recipes.arena.evaluate --args policy_uri=wandb://run/local.alice.1
Evaluate on the navigation eval suite (provide the policy URI):
./tools/run.py experiments.recipes.navigation.eval --overrides policy_uris=wandb://run/local.alice.1
This repo implements a `MettaAgent` policy class. The underlying network is parameterized by config files in `configs/agent` (with `configs/agent/fast.yaml` used by default). See `configs/agent/reference_design.yaml` for an explanation of the config structure, and this wiki section for further documentation.
To use `MettaAgent` with a non-default architecture config:
- (Optional) Create your own configuration file, e.g. `configs/agent/my_agent.yaml`.
- Run with the configuration file of your choice:
  `./tools/run.py experiments.recipes.arena.train --overrides policy_architecture.agent_config=my_agent`
We support agent architectures without using the MettaAgent system:
- Implement your agent class under `metta/agent/src/metta/agent/pytorch/my_agent.py`. See `metta/agent/src/metta/agent/pytorch/fast.py` for an example (a minimal sketch follows this list).
- Register it in `metta/agent/src/metta/agent/pytorch/agent_mapper.py` by adding an entry to `agent_classes` with a key name (e.g., `"my_agent"`).
- Select it at runtime using the runner and an override on the agent config name:
  `./tools/run.py experiments.recipes.arena.train --overrides policy_architecture.name=pytorch/my_agent`
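For orientation, here is a minimal sketch of what such an agent might look like. The constructor signature and forward contract below are assumptions for illustration; `fast.py` defines the interface the trainer actually expects.

```python
# Illustrative sketch only -- see fast.py for the real interface.
import torch
import torch.nn as nn

class MyAgent(nn.Module):
    def __init__(self, obs_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, num_actions)  # action logits
        self.value_head = nn.Linear(hidden, 1)             # state-value estimate

    def forward(self, obs: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        h = self.encoder(obs)
        return self.policy_head(h), self.value_head(h)

# Registration (in agent_mapper.py): add `"my_agent": MyAgent` to `agent_classes`.
```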
Further updates to support bringing your own agent are coming soon.
To run the style checks and tests locally:

```bash
ruff format
ruff check
pyright metta  # optional, some stubs are missing
pytest
```
| Task | Command |
|---|---|
| Train (arena) | `./tools/run.py experiments.recipes.arena.train --args run=my_experiment` |
| Train (navigation) | `./tools/run.py experiments.recipes.navigation.train --args run=my_experiment` |
| Play (browser) | `./tools/run.py experiments.recipes.arena.play` |
| Replay (policy) | `./tools/run.py experiments.recipes.arena.replay --overrides policy_uri=wandb://run/local.alice.1` |
| Evaluate (arena) | `./tools/run.py experiments.recipes.arena.evaluate --args policy_uri=wandb://run/local.alice.1` |
| Evaluate (navigation suite) | `./tools/run.py experiments.recipes.navigation.eval --overrides policy_uris=wandb://run/local.alice.1` |
| Dry-run (print config) | `./tools/run.py experiments.recipes.arena.train --args run=my_experiment --dry-run` |
Running these commands mirrors our CI configuration and helps keep the codebase consistent.