LLM_MultiAgents_Survey_Papers
Large Language Model based Multi-Agents: A Survey of Progress and Challenges
Stars: 225
This repository maintains a list of research papers on LLM-based Multi-Agents, categorized into five main streams: Multi-Agents Framework, Multi-Agents Orchestration and Efficiency, Multi-Agents for Problem Solving, Multi-Agents for World Simulation, and Multi-Agents Datasets and Benchmarks. The repository also includes a survey paper on LLM-based Multi-Agents and a table summarizing the key findings of the survey.
README:
🔥 Paper 🔥
Our survey about LLM based Multi-Agents is available at: https://arxiv.org/abs/2402.01680
Our summarized LLM-based Multi-Agents architecture is:
The overview table is as follows. More details can be found in our paper. Any suggestions are very much appreciated.
[2024/02] We will update our paper list every two weeks and include all of the following papers in the next version of our paper. Please feel free to contact me in case we have missed any papers!
[2024/01] This repo was created to maintain LLM-based Multi-Agents papers. We categorize these papers into five main streams:
- Multi-Agents Framework
- Multi-Agents Orchestration and Efficiency
- Multi-Agents for Problem Solving
- Multi-Agents for World Simulation
- Multi-Agents Datasets and Benchmarks
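Most papers below study variations of one basic pattern: several LLM-backed agents exchanging messages over a shared transcript. As a purely illustrative sketch (the `call_llm` stub and all names here are hypothetical, not taken from any paper in this list), a minimal multi-agent debate loop might look like:

```python
# Minimal, hypothetical sketch of an LLM-based multi-agent debate loop.
# `call_llm` is a stand-in for a real chat-model API; it is stubbed here
# so the example runs offline and deterministically.

def call_llm(system_prompt: str, transcript: list[str]) -> str:
    # A real implementation would send `system_prompt` as the system message
    # and `transcript` as the conversation history to a chat-completion API.
    return f"[{system_prompt}] reply #{len(transcript) + 1}"

def debate(question: str, roles: list[str], rounds: int = 2) -> list[str]:
    """Each agent (one per role) speaks in turn for a fixed number of rounds,
    conditioning on the shared transcript accumulated so far."""
    transcript = [f"Question: {question}"]
    for _ in range(rounds):
        for role in roles:
            transcript.append(call_llm(role, transcript))
    return transcript

if __name__ == "__main__":
    for turn in debate("Is 17 prime?", roles=["proponent", "critic"], rounds=2):
        print(turn)
```

A real system would swap the stub for an actual model call and add termination or aggregation logic (e.g., majority voting over the agents' final answers), which is where the surveyed papers differ.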
Multi-Agents Framework
[2024/03] Are More LLM Calls All You Need? Towards Scaling Laws of Compound Inference Systems. Lingjiao Chen et al. [paper]
[2024/02] Rethinking the Bounds of LLM Reasoning: Are Multi-Agent Discussions the Key?. Qineng Wang et al. [paper]
[2024/02] AgentLite: A Lightweight Library for Building and Advancing Task-Oriented LLM Agent System. Zhiwei Liu et al. [paper]
[2023/12] Generative agent-based modeling with actions grounded in physical, social, or digital space using Concordia. Alexander Sasha Vezhnevets et al. [paper]
[2023/10] L2MAC: Large Language Model Automatic Computer for Extensive Code Generation. Samuel Holt et al. [paper]
[2023/10] OpenAgents: An Open Platform for Language Agents in the Wild. Tianbao Xie et al. [paper]
[2023/10] MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents. Yuan Li et al. [paper]
[2023/09] AutoAgents: A Framework for Automatic Agent Generation. Guangyao Chen et al. [paper]
[2023/09] Agents: An Open-source Framework for Autonomous Language Agents. Wangchunshu Zhou et al. [paper]
[2023/08] AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors. Weize Chen et al. [paper]
[2023/08] AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation. Qingyun Wu et al. [paper]
[2023/08] MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework. Sirui Hong et al. [paper]
[2023/03] CAMEL: Communicative Agents for “Mind” Exploration of Large Language Model Society. Guohao Li et al. [paper]
Multi-Agents Orchestration and Efficiency
[2024/02] Language Agents as Optimizable Graphs. Mingchen Zhuge et al. [paper]
[2024/02] More Agents Is All You Need. Junyou Li et al. [paper]
[2023/11] Controlling Large Language Model-based Agents for Large-Scale Decision-Making: An Actor-Critic Approach. Bin Zhang et al. [paper]
[2023/11] crewAI. joaomdmoura et al. [repo]
[2023/10] Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization. Zijun Liu et al. [paper]
[2023/10] Adapting LLM Agents Through Communication. Kuan Wang et al. [paper]
[2023/08] ProAgent: Building Proactive Cooperative AI with Large Language Models. Ceyao Zhang et al. [paper]
[2023/07] Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems. Nathalia Nascimento et al. [paper]
[2023/07] Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration. Zhenhailong Wang et al. [paper]
[2023/05] Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate. Tian Liang et al. [paper]
Multi-Agents for Problem Solving
[2024/02] Can Large Language Models Serve as Data Analysts? A Multi-Agent Assisted Approach for Qualitative Data Analysis. Zeeshan Rasheed et al. [paper]
[2024/01] XUAT-Copilot: Multi-Agent Collaborative System for Automated User Acceptance Testing with Large Language Model. Zhitao Wang et al. [paper]
[2023/12] AgentCoder: Multi-Agent-based Code Generation with Iterative Testing and Optimisation. Dong Huang et al. [paper]
[2023/10] L2MAC: Large Language Model Automatic Computer for Extensive Code Generation. Samuel Holt et al. [paper]
[2023/08] MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework. Sirui Hong et al. [paper]
[2023/07] Communicative agents for software development. Chen Qian et al. [paper]
[2023/04] Self-collaboration Code Generation via ChatGPT. Yihong Dong et al. [paper]
[2023/10] Multi-Agent Consensus Seeking via Large Language Models. Huaben Chen et al. [paper]
[2023/10] Co-NavGPT: Multi-Robot Cooperative Visual Semantic Navigation using Large Language Models. Bangguo Yu et al. [paper]
[2023/09] Scalable Multi-Robot Collaboration with Large Language Models: Centralized or Decentralized Systems?. Yongchao Chen et al. [paper]
[2023/07] RoCo: Dialectic Multi-Robot Collaboration with Large Language Models. Zhao Mandi et al. [paper]
[2023/07] Building Cooperative Embodied Agents Modularly with Large Language Models. Hongxin Zhang et al. [paper]
[2023/02] Collaborating with language models for embodied reasoning. Ishita Dasgupta et al. [paper]
[2024/01] ProtAgents: Protein discovery via large language model multi-agent collaborations combining physics and machine learning. A. Ghafarollahi et al. [paper]
[2023/11] ChatGPT Research Group for Optimizing the Crystallinity of MOFs and COFs. Zhiling Zheng et al. [paper]
[2023/04] ChemCrow: Augmenting large-language models with chemistry tools. Andres M Bran et al. [paper]
[2023/04] Emergent autonomous scientific research capabilities of large language models. Daniil A. Boiko et al. [paper]
[2024/01] Enhancing Diagnostic Accuracy through Multi-Agent Conversations: Using Large Language Models to Mitigate Cognitive Bias. Yu He Ke et al. [paper]
[2023/11] MechAgents: Large language model multi-agent collaborations can solve mechanics problems, generate new data, and integrate knowledge. Bo Ni et al. [paper]
[2023/11] MedAgents: Large Language Models as Collaborators for Zero-shot Medical Reasoning. Xiangru Tang et al. [paper]
[2023/08] ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate. Chi-Min Chan et al. [paper]
[2023/05] Improving Factuality and Reasoning in Language Models through Multiagent Debate. Yilun Du et al. [paper]
[2023/05] Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate. Kai Xiong et al. [paper]
[2023/12] D-Bot: Database Diagnosis System using Large Language Models. [paper]
[2023/08] LLM As DBA. [paper]
Multi-Agents for World Simulation
[2024/02] Can Large Language Model Agents Simulate Human Trust Behaviors? Chengxing Xie et al. [paper]
[2023/12] Large Language Model Enhanced Multi-Agent Systems for 6G Communications. Feibo Jiang et al. [paper]
[2023/10] Multi-Agent Consensus Seeking via Large Language Models. Huaben Chen et al. [paper]
[2023/10] Lyfe Agents: Generative agents for low-cost real-time social interactions. Zhao Kaiya et al. [paper]
[2023/08] Quantifying the Impact of Large Language Models on Collective Opinion Dynamics. Chao Li et al. [paper]
[2023/07] S3 Social-network Simulation System with Large Language Model-Empowered Agents. Chen Gao et al. [paper]
[2023/07] Are you in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks. Siyu Li et al. [paper]
[2023/06] User Behavior Simulation with Large Language Model based Agents. Lei Wang et al. [paper]
[2023/05] Can Large Language Models Transform Computational Social Science?. Caleb Ziems et al. [paper]
[2023/04] Generative Agents- Interactive Simulacra of Human Behavior. Joon Sung Park et al. [paper]
[2022/10] Social simulacra: Creating populated prototypes for social computing systems. Joon Sung Park et al. [paper]
[2023/12] Deciphering Digital Detectives: Understanding LLM Behaviors and Capabilities in Multi-Agent Mystery Games. Dekun Wu et al. [paper]
[2023/12] Can Large Language Models Serve as Rational Players in Game Theory? A Systematic Analysis. Caoyun Fan et al. [paper]
[2023/11] ALYMPICS: Language Agents Meet Game Theory. Shaoguang Mao et al. [paper]
[2023/10] Language Agents with Reinforcement Learning for Strategic Play in the Werewolf Game. Zelai Xu et al. [paper]
[2023/10] Theory of Mind for Multi-Agent Collaboration via Large Language Models. Huao Li et al. [paper]
[2023/10] Welfare Diplomacy: Benchmarking Language Model Cooperation. Gabriel Mukobi et al. [paper]
[2023/10] GameGPT: Multi-agent Collaborative Framework for Game Development. Dake Chen et al. [paper]
[2023/10] AVALONBENCH: Evaluating LLMs Playing the Game of Avalon. Jonathan Light et al. [paper]
[2023/10] Avalon’s Game of Thoughts: Battle Against Deception through Recursive Contemplation. Shenzhi Wang et al. [paper]
[2023/09] MindAgent: Emergent Gaming Interaction. Ran Gong et al. [paper]
[2023/09] Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf. Yuzhuang Xu et al. [paper]
[2023/05] Playing repeated games with Large Language Models. Elif Akata et al. [paper]
[2023/07] Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support. Zilin Ma et al. [paper]
[2023/07] The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents. Grgur Kovač et al. [paper]
[2023/10] Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View. Jintian Zhang et al. [paper]
[2022/08] Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies. Gati Aher et al. [paper]
[2023/10] CompeteAI: Understanding the Competition Behaviors in Large Language Model-based Agents. Qinlin Zhao et al. [paper]
[2023/10] Large Language Model-Empowered Agents for Simulating Macroeconomic Activities. Nian Li et al. [paper]
[2023/09] Rethinking the Buyer’s Inspection Paradox in Information Markets with Language Agents. [paper]
[2023/09] TradingGPT: Multi-Agent System with Layered Memory and Distinct Characters for Enhanced Financial Trading Performance. Yang Li et al. [paper]
[2023/01] Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?. John J. Horton et al. [paper]
[2023/10] On Generative Agents in Recommendation. An Zhang et al. [paper]
[2023/10] AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems. Junjie Zhang et al. [paper]
[2023/11] War and Peace (WarAgent): Large Language Model-based Multi-Agent Simulation of World Wars. Wenyue Hua et al. [paper]
[2023/11] Simulating Public Administration Crisis: A Novel Generative Agent-Based Simulation System to Lower Technology Barriers in Social Science Research. Bushi Xiao et al. [paper]
[2023/10] Multi-Agent Consensus Seeking via Large Language Models. Huaben Chen et al. [paper]
[2023/09] Generative Agent-Based Modeling: Unveiling Social System Dynamics through Coupling Mechanistic Models with Generative Artificial Intelligence. Navid Ghaffarzadegan et al. [paper]
[2023/07] Epidemic modeling with generative agents. Ross Williams et al. [paper]
Multi-Agents Datasets and Benchmarks
[2024/02] LLMArena: Assessing Capabilities of Large Language Models in Dynamic Multi-Agent Environments. Junzhe Chen et al. [paper]
[2023/11] Towards Reasoning in Large Language Models via Multi-Agent Peer Review Collaboration. Zhenran Xu et al. [paper]
[2023/11] MAgIC: Investigation of Large Language Model Powered Multi-Agent in Cognition, Adaptability, Rationality and Collaboration. Lin Xu et al. [paper]
[2023/10] SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents. Xuhui Zhou et al. [paper]
[2023/10] Evaluating Multi-Agent Coordination Abilities in Large Language Models. Saaket Agashe et al. [paper]
[2023/09] LLM-Deliberation: Evaluating LLMs with Interactive Multi-Agent Negotiation Games. Sahar Abdelnabi et al. [paper]
Because LLM-based Multi-Agents is a fast-growing research field, we may have missed some important related papers. Contributions to this repository are very welcome! Please feel free to submit a pull request or open an issue if you have anything to add or comment on.
Thanks!
Taicheng Guo
- Email: [email protected]
- Twitter: https://twitter.com/taioooorange