
OpenManus-RL
Live-stream development of RL tuning for LLM agents
Stars: 3425

OpenManus-RL is an open-source initiative focused on enhancing reasoning and decision-making capabilities of large language models (LLMs) through advanced reinforcement learning (RL)-based agent tuning. The project explores novel algorithmic structures, diverse reasoning paradigms, sophisticated reward strategies, and extensive benchmark environments. It aims to push the boundaries of agent reasoning and tool integration by integrating insights from leading RL tuning frameworks and continuously updating progress in a dynamic, live-streaming fashion.
README:
OpenManus-RL is an open-source initiative collaboratively led by Ulab-UIUC and MetaGPT.
This project is an extended version of the original @OpenManus initiative. Inspired by successful RL tuning of reasoning LLMs such as DeepSeek-R1 and QwQ-32B, we explore new paradigms for RL-based LLM agent tuning, building upon these foundations.
We are committed to regularly updating our exploration directions and results in a dynamic, live-streaming fashion. All progress, including tuned models and rigorous testing on agent benchmarks such as GAIA, AgentBench, WebShop, and OSWorld, will be openly shared and continuously updated.
We warmly welcome contributions from the broader community. Join us in pushing the boundaries of agent reasoning and tool integration!
Code and dataset are now available! The verl submodule has been integrated for enhanced RL training capabilities.
- OpenManus-RL
- Running
- Related Work
- Acknowledgement
- Community Group
- Citation
- Documentation
- [2025-03-09] We have collected and open-sourced our agent SFT dataset on Hugging Face. Go try it!
- [2025-03-08] We are collaborating with @OpenManus from MetaGPT to work on this project together!
- [2025-03-06] We (Ulab-UIUC) are announcing our live-streaming project, OpenManus-RL.
@Kunlun Zhu (Ulab-UIUC), @Muxin Tian, @Zijia Liu (Ulab-UIUC), @Yingxuan Yang, @Jiayi Zhang (MetaGPT), @Xinbing Liang, @Weijia Zhang, @Haofei Yu (Ulab-UIUC), @Cheng Qian, @Bowen Jin
We wholeheartedly welcome suggestions, feedback, and contributions from the community! Feel free to:
We welcome contributions, including the fine-tuning codebase, tuning datasets, environment setup, and computing resources. Create issues for feature requests, bug reports, or ideas. Submit pull requests to help improve OpenManus-RL. Or simply reach out to us for direct collaboration. Important contributors will be listed as co-authors on our paper.
- Agent Environment Support: setting up LLM agent environments for online RL tuning.
- Agent Trajectories Data Collection: connecting to specialized reasoning models such as DeepSeek-R1 and QwQ-32B for more complex inference tasks and collecting comprehensive agent trajectories.
- RL-Tuning Model Paradigm: providing an RL fine-tuning approach for customizing the agent's behavior in our agent environment.
- Test on Agent Benchmarks: evaluating our framework on agentic benchmarks such as WebShop, GAIA, OSWorld, and AgentBench.
We propose an advanced reinforcement learning (RL)-based agent tuning framework designed to significantly enhance the reasoning and decision-making capabilities of large language models (LLMs). Drawing inspiration from RAGEN's Reasoning-Interaction Chain Optimization (RICO), our approach further explores novel algorithmic structures, diverse reasoning paradigms, sophisticated reward strategies, and extensive benchmark environments.
To benchmark the reasoning capabilities effectively, we evaluate multiple state-of-the-art reasoning models:
- GPT-O1
- DeepSeek-R1
- QwQ-32B
Each model provides unique reasoning capabilities that inform downstream optimization and training strategies.
We experiment with a variety of rollout strategies to enhance agent planning efficiency and reasoning robustness, including:
- Tree-of-Thoughts (ToT): Employs tree-based reasoning paths, enabling agents to explore branching possibilities systematically.
- Graph-of-Thoughts (GoT): Utilizes graph structures to represent complex reasoning dependencies effectively.
- DFSDT (Depth-First Search Decision Trees): Optimizes action selection through depth-first search, enhancing long-horizon planning.
- Monte Carlo Tree Search (MCTS): Explores reasoning and decision paths probabilistically, balancing exploration and exploitation effectively.
These methods help identify optimal rollout techniques for various reasoning tasks.
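To make one of these strategies concrete, below is a minimal UCT-style Monte Carlo Tree Search sketch over an abstract environment interface; the `env` methods (`actions`, `step`, `reward`) are hypothetical stand-ins for illustration, not the project's actual environment API.

```python
# Minimal UCT-style MCTS sketch. The env interface (actions/step/reward)
# is hypothetical, for illustration only.
import math
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    state: object
    parent: "Node | None" = None
    action: object = None
    children: list = field(default_factory=list)
    untried: list = field(default_factory=list)
    visits: int = 0
    value: float = 0.0

def uct_select(node: Node, c: float = 1.4) -> Node:
    # Pick the child maximizing mean value plus an exploration bonus.
    return max(node.children,
               key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(env, root_state, n_iter: int = 200, horizon: int = 20):
    root = Node(state=root_state, untried=list(env.actions(root_state)))
    for _ in range(n_iter):
        node = root
        # 1. Selection: descend through fully expanded nodes.
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. Expansion: instantiate one untried action.
        if node.untried:
            action = node.untried.pop()
            state = env.step(node.state, action)
            node.children.append(Node(state=state, parent=node, action=action,
                                      untried=list(env.actions(state))))
            node = node.children[-1]
        # 3. Simulation: random rollout up to the horizon.
        state = node.state
        for _ in range(horizon):
            acts = env.actions(state)
            if not acts:
                break
            state = env.step(state, random.choice(acts))
        ret = env.reward(state)
        # 4. Backpropagation: update statistics along the selected path.
        while node is not None:
            node.visits += 1
            node.value += ret
            node = node.parent
    # Most-visited root action is the recommended next step.
    return max(root.children, key=lambda ch: ch.visits).action
```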
We specifically analyze and compare several reasoning output formats, notably:
- ReAct: Integrates reasoning and action explicitly, encouraging structured decision-making.
- Outcome-based Reasoning: Optimizes toward explicit outcome predictions, driving focused goal alignment.
These formats are rigorously compared to derive the most effective reasoning representation for various tasks.
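As a rough illustration of the ReAct format, the loop below alternates model "Think/Act" output with environment observations; the `llm` and `env` callables are hypothetical placeholders, and the output pattern mirrors the dataset example shown later in this README.

```python
# Minimal ReAct-style interaction loop. `llm` and `env` are hypothetical
# placeholders; the Think/Act pattern mirrors the dataset example below.
import re

def react_episode(llm, env, task: str, max_turns: int = 8):
    history = f"Task: {task}"
    for _ in range(max_turns):
        output = llm(history)  # Expected form: "Think: ...\nAct: ..."
        match = re.search(r"Act:\s*(.+)", output, re.DOTALL)
        if match is None:
            break  # Malformed output: no action emitted.
        action = match.group(1).strip()
        if action.startswith("answer("):
            return action  # Terminal answer, e.g. "answer(220)".
        observation = env.execute(action)  # Ground the action in the environment.
        history += f"\n{output}\nObservation: {observation}"
    return None  # No answer within the turn budget.
```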
We investigate multiple post-training methodologies to fine-tune agent reasoning effectively:
- Supervised Fine-Tuning (SFT): Initializes reasoning capabilities using human-annotated instructions.
- Group Relative Policy Optimization (GRPO), sketched after this list, incorporating:
  - Format-based Rewards: Rewards adherence to specified reasoning structures.
  - Outcome-based Rewards: Rewards accurate task completion and goal attainment.
- Proximal Policy Optimization (PPO): Enhances agent stability through proximal updates.
- Direct Preference Optimization (DPO): Leverages explicit human preferences to optimize agent outputs directly.
- Preference-based Reward Modeling (PRM): Uses learned reward functions derived from human preference data.
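To ground the GRPO and PPO items above, here is a minimal sketch of the group-relative advantage (per the DeepSeekMath formulation) plugged into a PPO-style clipped surrogate; it is illustrative only and not the project's training code.

```python
# Group-relative advantages (GRPO) feeding a PPO-style clipped surrogate.
# Illustrative sketch, not the project's training code.
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # rewards: (group_size,) scalar rewards (e.g. format + outcome terms)
    # for a group of responses sampled from the same prompt. Normalizing
    # within the group removes the need for a learned value function.
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def clipped_surrogate(logp_new: torch.Tensor, logp_old: torch.Tensor,
                      adv: torch.Tensor, clip_eps: float = 0.2) -> torch.Tensor:
    # Standard PPO clipping, applied per sequence here for simplicity.
    ratio = torch.exp(logp_new - logp_old)
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    return torch.minimum(ratio * adv, clipped * adv).mean()  # maximize this

# Example: 4 sampled responses with combined format + outcome rewards.
adv = grpo_advantages(torch.tensor([1.0, 0.2, 0.8, 0.0]))
```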
We train specialized agent reward models using annotated data to accurately quantify nuanced reward signals. These models are then leveraged to guide agent trajectory selection during both training and evaluation phases.
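A common recipe for training such a reward model from preference-annotated pairs is a Bradley-Terry pairwise loss; a minimal sketch, assuming scalar trajectory scores from some scoring model, follows.

```python
# Pairwise Bradley-Terry loss for reward-model training (minimal sketch,
# assuming scalar scores for preferred/dispreferred trajectories).
import torch
import torch.nn.functional as F

def bradley_terry_loss(chosen: torch.Tensor, rejected: torch.Tensor) -> torch.Tensor:
    # Maximize log sigmoid of the score margin: -log sigmoid(r_chosen - r_rejected).
    return -F.logsigmoid(chosen - rejected).mean()

# Dummy scores for a batch of 3 annotated trajectory pairs.
loss = bradley_terry_loss(torch.tensor([1.2, 0.4, 0.9]),
                          torch.tensor([0.3, 0.5, -0.1]))
```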
During the inference phase, trajectory scaling methods are implemented, allowing agents to flexibly adapt to varying task complexities, thus enhancing robustness and performance in real-world scenarios.
Agents are equipped with action-space awareness, employing systematic exploration strategies designed to navigate complex action spaces effectively, ultimately maximizing expected rewards.
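One simple way to realize such action-space awareness is to mask the policy to currently valid actions while keeping an exploration floor; the sketch below is a generic illustration, not the project's exploration module.

```python
# Valid-action masking with an epsilon exploration floor (generic sketch,
# not the project's exploration module).
import torch

def select_action(logits: torch.Tensor, valid_mask: torch.Tensor,
                  epsilon: float = 0.05) -> int:
    # Restrict the policy to actions that exist in the current state.
    masked = logits.masked_fill(~valid_mask, float("-inf"))
    probs = torch.softmax(masked, dim=-1)
    # Mix in a uniform distribution over valid actions for exploration.
    uniform = valid_mask.float() / valid_mask.sum()
    mixed = (1 - epsilon) * probs + epsilon * uniform
    return int(torch.multinomial(mixed, 1).item())

# Example: 5 possible actions, only 3 currently valid.
action = select_action(torch.randn(5),
                       torch.tensor([True, False, True, True, False]))
```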
We integrate insights and methodologies from leading RL tuning frameworks, including:
- Verl - Integrated as Git Submodule - Our primary RL framework, providing advanced training capabilities for agent optimization
- TinyZero
- OpenR1
- Trlx
The verl submodule is fully integrated into OpenManus-RL, providing:
- Advanced RL Algorithms - PPO, DPO, and custom reward modeling
- Efficient Training - Optimized for large language model fine-tuning
- Flexible Configuration - Easy customization of training parameters
- Production Ready - Battle-tested framework from ByteDance
Through these frameworks, agents can effectively balance exploration and exploitation, optimize reasoning processes, and adapt dynamically to novel environments.
In summary, our method systematically integrates advanced reasoning paradigms, diverse rollout strategies, sophisticated reward modeling, and robust RL frameworks, significantly advancing the capability and adaptability of reasoning-enhanced LLM agents.
OpenManusRL-Dataset combines agent trajectories from AgentInstruct, Agent-FLAN, and AgentTraj-L (AgentGym) with the following features:
- ReAct Framework - Reasoning-Acting integration
- Structured Training - Separate format/reasoning learning
- Anti-Hallucination - Negative samples + environment grounding
- 6 Domains - OS, DB, Web, KG, Household, E-commerce
| Source | Trajectories | Avg Turns | Key Features |
|---|---|---|---|
| AgentInstruct | 1,866 | 5.24 | Multi-task QA, CoT reasoning |
| Agent-FLAN | 34,442 | 3-35 | Error recovery patterns, diverse real-world tasks |
| AgentTraj-L | 14,485 | 3-35 | Interactive environments and tasks |
| Combined | 50,793 | 4-20 | Enhanced generalization, uniform format with broader task coverage |
- text-generation: ReAct-style instruction following
- conversational-ai: Tool-augmented dialogues
Language: English
ReAct Pattern Example:
```json
{
  "id": "os_0",
  "conversations": [
    {"role": "user", "content": "Count files in /etc"},
    {"role": "assistant", "content": "Think: Need reliable counting method\nAct: bash\n```bash\nls -1 /etc | wc -l\n```"},
    {"role": "user", "content": "OS Output: 220"},
    {"role": "assistant", "content": "Think: Verified through execution\nAct: answer(220)"}
  ]
}
```
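To inspect records like the one above programmatically, something along these lines should work with the Hugging Face `datasets` library; the dataset ID below is a placeholder, so substitute the actual path from the Hugging Face link announced above.

```python
# Inspect dataset records (the dataset ID is a placeholder; use the actual
# Hugging Face path announced above).
from datasets import load_dataset

ds = load_dataset("<org>/OpenManusRL-Dataset", split="train")
sample = ds[0]
print(sample["id"])
for turn in sample["conversations"]:
    print(turn["role"], "->", turn["content"][:80])
```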
A simplified library for Supervised Fine-Tuning (SFT) and GRPO tuning of language models for agentic systems (developed on top of Verl from ByteDance). This part is still under active development; feedback is welcome.
This project uses git submodules. After cloning the repository, make sure to initialize and update the submodules:
```bash
# Clone the repository with submodules
git clone --recursive https://github.com/OpenManus/OpenManus-RL.git

# Or if already cloned, initialize and update submodules
git submodule update --init --recursive
```
First, create a conda environment and activate it:
```bash
# Create a new conda environment
conda create -n openmanus-rl python=3.10 -y
conda activate openmanus-rl
```
Then, install the required dependencies:
```bash
# Install PyTorch with CUDA support
pip3 install torch torchvision

# Install the main package (with vllm for efficient inference)
pip install -e .[vllm]

# Install flash attention 2
pip3 install flash-attn --no-build-isolation

# Install wandb for experiment tracking
pip install wandb
```
To set up the WebShop environment for evaluation:
```bash
# Change to the agentenv-webshop directory
cd openmanus_rl/environments/env_package/webshop/webshop/

# Create a new conda environment for WebShop
conda create -n agentenv_webshop python==3.10 -y
conda activate agentenv_webshop

# Setup the environment
bash ./setup.sh -d all
```
To set up the ALFWorld environment:
```bash
conda activate openmanus-rl
pip3 install gymnasium==0.29.1
pip3 install stable-baselines3==2.6.0
pip install alfworld
```
Download PDDL & Game files and the pre-trained MaskRCNN detector (stored in `~/.cache/alfworld/`):
```bash
alfworld-download -f
```
Use `--extra` to download pre-trained checkpoints and seq2seq data.
Make sure you have the required environments set up (see Environment Setup section above).
Download the OpenManus-RL dataset from Hugging Face.
```bash
conda activate openmanus-rl
bash scripts/ppo_train/train_alfworld.sh
```
- Offline Training of Language Model Agents with Functions as Learnable Weights. [paper]
- FireAct: Toward Language Agent Fine-tuning. [paper]
- AgentTuning: Enabling Generalized Agent Abilities for LLMs. [paper]
- ReAct Meets ActRe: When Language Agents Enjoy Training Data Autonomy. [paper]
- UI-TARS: Pioneering Automated GUI Interaction with Native Agents. [paper]
- ATLAS: Agent Tuning via Learning Critical Steps. [paper]
- Toolformer: Language Models Can Teach Themselves to Use Tools. [paper]
- ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs. [paper]
- Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models. [paper]
- AgentOhana: Design Unified Data and Training Pipeline for Effective Agent Learning. [paper]
- Training Language Models to Follow Instructions with Human Feedback. [paper]
- DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models. [paper]
- DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. [paper]
- AgentBench: Evaluating LLMs as Agents. [paper]
- WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents. [paper]
- GAIA: a benchmark for General AI Assistants. [paper]
- ALFWorld: Aligning Text and Embodied Environments for Interactive Learning. [paper]
- D4RL: Datasets for Deep Data-Driven Reinforcement Learning. [paper]
- Offline Reinforcement Learning with Implicit Q-Learning. [paper]
- Behavior Proximal Policy Optimization. [paper]
We extend our thanks to Ulab-UIUC (https://ulab-uiuc.github.io/) and the OpenManus (https://github.com/mannaandpoem/OpenManus) team from MetaGPT for their support and shared knowledge. Their mission and community contributions help drive innovations like OpenManus-RL forward.
We also gratefully thank Verl (https://github.com/volcengine/verl) and verl-agent (https://github.com/langfengQ/verl-agent) for their open-source contributions.
Developers interested in this project are welcome to reach out to us at [email protected].
Stay tuned for updates and the official release of our repository. Together, let's build a thriving open-source agent ecosystem!
Join our networking group on Feishu and share your experience with other developers!
Please cite the following paper if you find OpenManus-RL helpful!
```bibtex
@misc{OpenManus,
  author = {OpenManus-RL Team},
  title = {OpenManus-RL: Open Platform for Generalist LLM Reasoning Agents with RL optimization},
  year = {2025},
  organization = {GitHub},
  url = {https://github.com/OpenManus/OpenManus-RL},
}
```
```
OpenManus-RL/
├── verl/            # Verl RL framework submodule
├── openmanus_rl/    # Main OpenManus-RL library
├── scripts/         # Training and evaluation scripts
├── configs/         # Configuration files
├── environments/    # Agent environment implementations
├── docs/            # Documentation
└── examples/        # Usage examples
```
Similar Open Source Tools

OpenManus-RL
OpenManus-RL is an open-source initiative focused on enhancing reasoning and decision-making capabilities of large language models (LLMs) through advanced reinforcement learning (RL)-based agent tuning. The project explores novel algorithmic structures, diverse reasoning paradigms, sophisticated reward strategies, and extensive benchmark environments. It aims to push the boundaries of agent reasoning and tool integration by integrating insights from leading RL tuning frameworks and continuously updating progress in a dynamic, live-streaming fashion.

MM-RLHF
MM-RLHF is a comprehensive project for aligning Multimodal Large Language Models (MLLMs) with human preferences. It includes a high-quality MLLM alignment dataset, a Critique-Based MLLM reward model, a novel alignment algorithm MM-DPO, and benchmarks for reward models and multimodal safety. The dataset covers image understanding, video understanding, and safety-related tasks with model-generated responses and human-annotated scores. The reward model generates critiques of candidate texts before assigning scores for enhanced interpretability. MM-DPO is an alignment algorithm that achieves performance gains with simple adjustments to the DPO framework. The project enables consistent performance improvements across 10 dimensions and 27 benchmarks for open-source MLLMs.

llms-interview-questions
This repository contains a comprehensive collection of 63 must-know Large Language Models (LLMs) interview questions. It covers topics such as the architecture of LLMs, transformer models, attention mechanisms, training processes, encoder-decoder frameworks, differences between LLMs and traditional statistical language models, handling context and long-term dependencies, transformers for parallelization, applications of LLMs, sentiment analysis, language translation, conversation AI, chatbots, and more. The readme provides detailed explanations, code examples, and insights into utilizing LLMs for various tasks.

shandu
Shandu is an advanced AI research system that automates comprehensive research processes using language models, web scraping, and iterative exploration to generate well-structured reports with citations. It features intelligent state-based workflow, deep exploration, multi-source information synthesis, enhanced web scraping, smart source evaluation, content analysis pipeline, comprehensive report generation, parallel processing, adaptive search strategy, and full citation management.

fastRAG
fastRAG is a research framework designed to build and explore efficient retrieval-augmented generative models. It incorporates state-of-the-art Large Language Models (LLMs) and Information Retrieval to empower researchers and developers with a comprehensive tool-set for advancing retrieval augmented generation. The framework is optimized for Intel hardware, customizable, and includes key features such as optimized RAG pipelines, efficient components, and RAG-efficient components like ColBERT and Fusion-in-Decoder (FiD). fastRAG supports various unique components and backends for running LLMs, making it a versatile tool for research and development in the field of retrieval-augmented generation.

crawl4ai
Crawl4AI is a powerful and free web crawling service that extracts valuable data from websites and provides LLM-friendly output formats. It supports crawling multiple URLs simultaneously, replaces media tags with ALT, and is completely free to use and open-source. Users can integrate Crawl4AI into Python projects as a library or run it as a standalone local server. The tool allows users to crawl and extract data from specified URLs using different providers and models, with options to include raw HTML content, force fresh crawls, and extract meaningful text blocks. Configuration settings can be adjusted in the `crawler/config.py` file to customize providers, API keys, chunk processing, and word thresholds. Contributions to Crawl4AI are welcome from the open-source community to enhance its value for AI enthusiasts and developers.

SynthLang
SynthLang is a tool designed to optimize AI prompts by reducing costs and improving processing speed. It brings academic rigor to prompt engineering, creating precise and powerful AI interactions. The tool includes core components like a Translator Engine, Performance Optimization, Testing Framework, and Technical Architecture. It offers mathematical precision, academic rigor, enhanced security, a modern interface, and instant testing. Users can integrate mathematical frameworks, model complex relationships, and apply structured prompts to various domains. Security features include API key management and data privacy. The tool also provides a CLI for prompt engineering and optimization capabilities.

ComfyUI-Copilot
ComfyUI-Copilot is an intelligent assistant built on the Comfy-UI framework that simplifies and enhances the AI algorithm debugging and deployment process through natural language interactions. It offers intuitive node recommendations, workflow building aids, and model querying services to streamline development processes. With features like interactive Q&A bot, natural language node suggestions, smart workflow assistance, and model querying, ComfyUI-Copilot aims to lower the barriers to entry for beginners, boost development efficiency with AI-driven suggestions, and provide real-time assistance for developers.

DriveLM
DriveLM is a multimodal AI model that enables autonomous driving by combining computer vision and natural language processing. It is designed to understand and respond to complex driving scenarios using visual and textual information. DriveLM can perform various tasks related to driving, such as object detection, lane keeping, and decision-making. It is trained on a massive dataset of images and text, which allows it to learn the relationships between visual cues and driving actions. DriveLM is a powerful tool that can help to improve the safety and efficiency of autonomous vehicles.

CortexON
CortexON is an open-source, multi-agent AI system designed to automate and simplify everyday tasks. It integrates specialized agents like Web Agent, File Agent, Coder Agent, Executor Agent, and API Agent to accomplish user-defined objectives. CortexON excels at executing complex workflows, research tasks, technical operations, and business process automations by dynamically coordinating the agents' unique capabilities. It offers advanced research automation, multi-agent orchestration, integration with third-party APIs, code generation and execution, efficient file and data management, and personalized task execution for travel planning, market analysis, educational content creation, and business intelligence.

chunkhound
ChunkHound is a modern tool for transforming your codebase into a searchable knowledge base for AI assistants. It utilizes semantic search via the cAST algorithm and regex search, integrating with AI assistants through the Model Context Protocol (MCP). With features like cAST Algorithm, Multi-Hop Semantic Search, Regex search, and support for 22 languages, ChunkHound offers a local-first approach to code analysis and discovery. It provides intelligent code discovery, universal language support, and real-time indexing capabilities, making it a powerful tool for developers looking to enhance their coding experience.

holisticai
Holistic AI is an open-source library dedicated to assessing and improving the trustworthiness of AI systems. It focuses on measuring and mitigating bias, explainability, robustness, security, and efficacy in AI models. The tool provides comprehensive metrics, mitigation techniques, a user-friendly interface, and visualization tools to enhance AI system trustworthiness. It offers documentation, tutorials, and detailed installation instructions for easy integration into existing workflows.

RepoMaster
RepoMaster is an AI agent that leverages GitHub repositories to solve complex real-world tasks. It transforms how coding tasks are solved by automatically finding the right GitHub tools and making them work together seamlessly. Users can describe their tasks, and RepoMaster's AI analysis leads to auto discovery and smart execution, resulting in perfect outcomes. The tool provides a web interface for beginners and a command-line interface for advanced users, along with specialized agents for deep search, general assistance, and repository tasks.

kserve
KServe provides a Kubernetes Custom Resource Definition for serving predictive and generative machine learning (ML) models. It encapsulates the complexity of autoscaling, networking, health checking, and server configuration to bring cutting edge serving features like GPU Autoscaling, Scale to Zero, and Canary Rollouts to ML deployments. KServe enables a simple, pluggable, and complete story for Production ML Serving including prediction, pre-processing, post-processing, and explainability. It is a standard, cloud agnostic Model Inference Platform for serving predictive and generative AI models on Kubernetes, built for highly scalable use cases.

AI-Engineering.academy
AI Engineering Academy aims to provide a structured learning path for individuals looking to learn Applied AI effectively. The platform offers multiple roadmaps covering topics like Retrieval Augmented Generation, Fine-tuning, and Deployment. Each roadmap equips learners with the knowledge and skills needed to excel in applied GenAI. Additionally, the platform will feature Hands-on End-to-End AI projects in the future.

abi
ABI (Agentic Brain Infrastructure) is a Python-based AI Operating System designed to serve as the core infrastructure for building an Agentic AI Ontology Engine. It empowers organizations to integrate, manage, and scale AI-driven operations with multiple AI models, focusing on ontology, agent-driven workflows, and analytics. ABI emphasizes modularity and customization, providing a customizable framework aligned with international standards and regulatory frameworks. It offers features such as configurable AI agents, ontology management, integrations with external data sources, data processing pipelines, workflow automation, analytics, and data handling capabilities.
For similar tasks

Mortal
Mortal (凡夫) is a free and open source AI for Japanese mahjong, powered by deep reinforcement learning. It provides a comprehensive solution for playing Japanese mahjong with AI assistance. The project focuses on utilizing deep reinforcement learning techniques to enhance gameplay and decision-making in Japanese mahjong. Mortal offers a user-friendly interface and detailed documentation to assist users in understanding and utilizing the AI effectively. The project is actively maintained and welcomes contributions from the community to further improve the AI's capabilities and performance.

Smart-Connections-Visualizer
The Smart Connections Visualizer Plugin is a tool designed to enhance note-taking and information visualization by creating dynamic force-directed graphs that represent connections between notes or excerpts. Users can customize visualization settings, preview notes, and interact with the graph to explore relationships and insights within their notes. The plugin aims to revolutionize communication with AI and improve decision-making processes by visualizing complex information in a more intuitive and context-driven manner.

OpenManus-RL
OpenManus-RL is an open-source initiative focused on enhancing reasoning and decision-making capabilities of large language models (LLMs) through advanced reinforcement learning (RL)-based agent tuning. The project explores novel algorithmic structures, diverse reasoning paradigms, sophisticated reward strategies, and extensive benchmark environments. It aims to push the boundaries of agent reasoning and tool integration by integrating insights from leading RL tuning frameworks and continuously updating progress in a dynamic, live-streaming fashion.

RLHF-Reward-Modeling
This repository contains code for training reward models for RLHF (Reinforcement Learning from Human Feedback), iterative rejection-sampling fine-tuning, and iterative Direct Preference Optimization (DPO). The reward models are trained using a Bradley-Terry model based on the Gemma and Mistral language models. The resulting reward models achieve state-of-the-art performance on the RewardBench leaderboard for reward models with base models of up to 13B parameters.

h2o-llmstudio
H2O LLM Studio is a framework and no-code GUI designed for fine-tuning state-of-the-art large language models (LLMs). With H2O LLM Studio, you can easily and effectively fine-tune LLMs without the need for any coding experience. The GUI is specially designed for large language models, and you can finetune any LLM using a large variety of hyperparameters. You can also use recent finetuning techniques such as Low-Rank Adaptation (LoRA) and 8-bit model training with a low memory footprint. Additionally, you can use Reinforcement Learning (RL) to finetune your model (experimental), use advanced evaluation metrics to judge generated answers by the model, track and compare your model performance visually, and easily export your model to the Hugging Face Hub and share it with the community.

MathCoder
MathCoder is a repository focused on enhancing mathematical reasoning by fine-tuning open-source language models to use code for modeling and deriving math equations. It introduces MathCodeInstruct dataset with solutions interleaving natural language, code, and execution results. The repository provides MathCoder models capable of generating code-based solutions for challenging math problems, achieving state-of-the-art scores on MATH and GSM8K datasets. It offers tools for model deployment, inference, and evaluation, along with a citation for referencing the work.

Awesome-Text2SQL
Awesome Text2SQL is a curated repository containing tutorials and resources for Large Language Models, Text2SQL, Text2DSL, Text2API, Text2Vis, and more. It provides guidelines on converting natural language questions into structured SQL queries, with a focus on NL2SQL. The repository includes information on various models, datasets, evaluation metrics, fine-tuning methods, libraries, and practice projects related to Text2SQL. It serves as a comprehensive resource for individuals interested in working with Text2SQL and related technologies.

Awesome-LLM
Awesome-LLM is a curated list of resources related to large language models, focusing on papers, projects, frameworks, tools, tutorials, courses, opinions, and other useful resources in the field. It covers trending LLM projects, milestone papers, other papers, open LLM projects, LLM training frameworks, LLM evaluation frameworks, tools for deploying LLM, prompting libraries & tools, tutorials, courses, books, and opinions. The repository provides a comprehensive overview of the latest advancements and resources in the field of large language models.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.