
Awesome-Papers-Autonomous-Agent
A collection of recent papers on building autonomous agents, covering two topics: RL-based and LLM-based agents.
Stars: 521

Awesome-Papers-Autonomous-Agent is a curated collection of recent papers on autonomous agents, with a particular focus on RL-based and LLM-based agents. The repository aims to provide a comprehensive resource for researchers and practitioners interested in intelligent agents that can achieve goals, acquire knowledge, and continually improve. The collection covers topics such as instruction following, building agents based on world models, using language as knowledge, leveraging LLMs as tools, generalization across tasks, continual learning, combining RL and LLMs, transformer-based policies, trajectory-to-language translation, trajectory prediction, multimodal agents, training LLMs for generalization and adaptation, task-specific design, multi-agent systems, experimental analysis, benchmarking, applications, and algorithm design.
README:
This is a collection of recent papers focusing on autonomous agents. Here is how Wikipedia defines an agent:
In artificial intelligence, an intelligent agent is an agent acting in an intelligent manner; it perceives its environment, takes actions autonomously in order to achieve goals, and may improve its performance with learning or acquiring knowledge. An intelligent agent may be simple or complex: a thermostat or other control system is considered an example of an intelligent agent, as is a human being, as is any system that meets the definition, such as a firm, a state, or a biome.
Thus, the key properties of an agent are that it can achieve goals, acquire knowledge, and continually improve. Traditional agents from RL research are not considered in this collection. Though LLM-based agents have caught people's attention in recent research, RL-based agents still hold a special position. Specifically, this repo is interested in two types of agents: RL-based agents and LLM-based agents.
Note that this paper list is under active maintenance. Feel free to open an issue if you find any missing papers that fit the topic.
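The perceive-act-improve loop in the definition above can be sketched in a few lines. This is a minimal illustrative example, not taken from any paper in this list: the toy environment and tabular agent below are hypothetical stand-ins, assuming a simple two-state world where matching the action to the state yields a reward.

```python
import random

class Env:
    """Toy two-state environment: acting to match the current state pays off."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        reward = 1.0 if action == self.state else 0.0
        self.state = random.randint(0, 1)  # environment moves to a random state
        return self.state, reward

class Agent:
    """Tabular agent that improves its action-value estimates from experience."""
    def __init__(self, lr=0.5, eps=0.1):
        self.q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
        self.lr, self.eps = lr, eps

    def act(self, state):
        if random.random() < self.eps:                        # explore
            return random.randint(0, 1)
        return max((0, 1), key=lambda a: self.q[(state, a)])  # exploit

    def learn(self, state, action, reward):
        # Move the value estimate toward the observed reward.
        self.q[(state, action)] += self.lr * (reward - self.q[(state, action)])

random.seed(0)
env, agent = Env(), Agent()
state = env.state
for _ in range(500):
    action = agent.act(state)              # act on the perceived state
    next_state, reward = env.step(action)  # environment responds
    agent.learn(state, action, reward)     # improve from feedback
    state = next_state

# After training, the agent prefers the matching action in each state.
print(agent.q[(0, 0)] > agent.q[(0, 1)], agent.q[(1, 1)] > agent.q[(1, 0)])
```

The same loop structure underlies both agent types collected here: an RL-based agent updates a policy or value function from rewards, while an LLM-based agent typically replaces the learned table with prompted or fine-tuned language-model reasoning.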
- 2024/01/31: Add a special list for surveys on autonomous agents.
- 2023/12/08: Add papers accepted by ICML'23 and ICLR'23.
- 2023/11/08: Add papers accepted by NeurIPS'23; add related links (project page or GitHub) to these accepted papers.
- 2023/10/25: Classify all papers by research topic; check the ToC for the classification scheme.
- 2023/10/18: Release the first version of the collection, including papers submitted to ICLR 2024.
Table of Contents
- A Survey on Large Language Model based Autonomous Agents
- The Rise and Potential of Large Language Model Based Agents: A Survey
- [NeurIPS'23] Natural Language-conditioned Reinforcement Learning with Inside-out Task Language Development and Translation
- [NeurIPS'23] Guide Your Agent with Adaptive Multimodal Rewards [project]
- Compositional Instruction Following with Language Models and Reinforcement Learning
- RT-1: Robotics Transformer for Real-World Control at Scale [blog]
- RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control [blog]
- Open X-Embodiment: Robotic Learning Datasets and RT-X Models [blog]
- [NeurIPS'23] Guide Your Agent with Adaptive Multimodal Rewards [project]
- LEO: An Embodied Generalist Agent in 3D World [project]
- [ICLR'23 Oral] Transformers are Sample-Efficient World Models [code]
- Learning to Model the World with Language
- MAMBA: an Effective World Model Approach for Meta-Reinforcement Learning
- Learning with Language Inference and Tips for Continual Reinforcement Learning
- Informing Reinforcement Learning Agents by Grounding Natural Language to Markov Decision Processes
- Language Reward Modulation for Pretraining Reinforcement Learning
- [NeurIPS'23] Efficient Policy Adaptation with Contrastive Prompt Ensemble for Embodied Agents
- [ICLR'23] Reward Design with Language Models [code]
- [ICML'23] RLang: A Declarative Language for Describing Partial World Knowledge to Reinforcement Learning Agents [Poster]
- [ICML'23] Do Embodied Agents Dream of Pixelated Sheep: Embodied Decision Making using Language Guided World Modelling [Project][Code]
- [ICML'23] Grounding Large Language Models in Interactive Environments with Online Reinforcement Learning
- Leveraging Large Language Models for Optimised Coordination in Textual Multi-Agent Reinforcement Learning
- Text2Reward: Dense Reward Generation with Language Models for Reinforcement Learning
- Language to Rewards for Robotic Skill Synthesis
- Eureka: Human-Level Reward Design via Coding Large Language Models
- STARLING: Self-supervised Training of Text-based Reinforcement Learning Agent with Large Language Models
- ADAPTER-RL: Adaptation of Any Agent using Reinforcement Learning
- Online Continual Learning for Interactive Instruction Following Agents
- [NeurIPS'23] A Definition of Continual Reinforcement Learning
- [NeurIPS'23] Large Language Models Are Semi-Parametric Reinforcement Learning Agents
- RoboGPT: An intelligent agent of making embodied long-term decisions for daily instruction tasks
- Can Language Agents Approach the Performance of RL? An Empirical Study On OpenAI Gym
- RLAdapter: Bridging Large Language Models to Reinforcement Learning in Open Worlds
- [NeurIPS'23] Cross-Episodic Curriculum for Transformer Agents. [project]
- [NeurIPS'23] State2Explanation: Concept-Based Explanations to Benefit Agent Learning and User Understanding
- [NeurIPS'23] Semantic HELM: A Human-Readable Memory for Reinforcement Learning
- [ICML'23] Distilling Internet-Scale Vision-Language Models into Embodied Agents
- Understanding Your Agent: Leveraging Large Language Models for Behavior Explanation
- Enhancing Human Experience in Human-Agent Collaboration: A Human-Centered Modeling Approach Based on Positive Human Gain
- A Competition Winning Deep Reinforcement Learning Agent in microRTS
- Aligning Agents like Large Language Models
- [ICML'23] PaLM-E: An Embodied Multimodal Language Model
- Steve-Eye: Equipping LLM-based Embodied Agents with Visual Perception in Open Worlds
- Multimodal Web Navigation with Instruction-Finetuned Foundation Models
- You Only Look at Screens: Multimodal Chain-of-Action Agents
- Learning Embodied Vision-Language Programming From Instruction, Exploration, and Environmental Feedback
- An Embodied Generalist Agent in 3D World
- JARVIS-1: Open-world Multi-task Agents with Memory-Augmented Multimodal Language Models
- FireAct: Toward Language Agent Finetuning
- Adapting LLM Agents Through Communication
- AgentTuning: Enabling Generalized Agent Abilities for LLMs
- Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
- [NeurIPS'23] Describe, Explain, Plan and Select: Interactive Planning with LLMs Enables Open-World Multi-Task Agents
- [NeurIPS'23] SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks [Github]
- Rethinking the Buyer's Inspection Paradox in Information Markets with Language Agents
- A Language-Agent Approach to Formal Theorem-Proving
- Agent Instructs Large Language Models to be General Zero-Shot Reasoners
- Ghost in the Minecraft: Hierarchical Agents for Minecraft via Large Language Models with Text-based Knowledge and Memory
- PaperQA: Retrieval-Augmented Generative Agent for Scientific Research
- Language Agents for Detecting Implicit Stereotypes in Text-to-image Models at Scale
- Suspicion-Agent: Playing Imperfect Information Games with Theory of Mind Aware GPT-4
- CoMM: Collaborative Multi-Agent, Multi-Reasoning-Path Prompting for Complex Problem Solving
- Building Cooperative Embodied Agents Modularly with Large Language Models
- OKR-Agent: An Object and Key Results Driven Agent System with Hierarchical Self-Collaboration and Self-Evaluation
- MetaGPT: Meta Programming for Multi-Agent Collaborative Framework
- AutoAgents: A Framework for Automatic Agent Generation
- Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization
- AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
- Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View
- REX: Rapid Exploration and eXploitation for AI agents
- Emergence of Social Norms in Large Language Model-based Agent Societies
- Identifying the Risks of LM Agents with an LM-Emulated Sandbox
- Evaluating Multi-Agent Coordination Abilities in Large Language Models
- Large Language Models as Gaming Agents
- Benchmarking Large Language Models as AI Research Agents
- Adaptive Environmental Modeling for Task-Oriented Language Agents
- CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization
- [ACL'24] A Controllable World of Apps and People for Benchmarking Interactive Coding Agents [website][blog]
- [ICLR'23] Task Ambiguity in Humans and Language Models [code]
- SmartPlay : A Benchmark for LLMs as Intelligent Agents
- AgentBench: Evaluating LLMs as Agents
- Put Your Money Where Your Mouth Is: Evaluating Strategic Planning and Execution of LLM Agents in an Auction Arena
- SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents
- SocioDojo: Building Lifelong Analytical Agents with Real-world Text and Time Series
- WebArena: A Realistic Web Environment for Building Autonomous Agents
- LLM-Deliberation: Evaluating LLMs with Interactive Multi-Agent Negotiation Game
- Evaluating Large Language Models at Evaluating Instruction Following
- CivRealm: A Learning and Reasoning Odyssey for Decision-Making Agents
- Lyfe Agents: generative agents for low-cost real-time social interactions
- AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
- [ICLR'23 Oral] ReAct: Synergizing Reasoning and Acting in Language Models [code]
- [NeurIPS'23] AdaPlanner: Adaptive Planning from Feedback with Language Models [github]
- Prospector: Improving LLM Agents with Self-Asking and Trajectory Ranking
- Formally Specifying the High-Level Behavior of LLM-Based Agents
- Cumulative Reasoning With Large Language Models
Similar Open Source Tools


Awesome-Embodied-AI
Awesome-Embodied-AI is a curated list of papers on Embodied AI and related resources, tracking and summarizing research and industrial progress in the field. It includes surveys, workshops, tutorials, talks, blogs, and papers covering various aspects of Embodied AI, such as vision-language navigation, large language model-based agents, robotics, and more. The repository welcomes contributions and aims to provide a comprehensive overview of the advancements in Embodied AI.

awesome-generative-ai
A curated list of Generative AI projects, tools, artworks, and models

Awesome-LLM4EDA
LLM4EDA is a repository dedicated to showcasing the emerging progress in utilizing Large Language Models for Electronic Design Automation. The repository includes resources, papers, and tools that leverage LLMs to solve problems in EDA. It covers a wide range of applications such as knowledge acquisition, code generation, code analysis, verification, and large circuit models. The goal is to provide a comprehensive understanding of how LLMs can revolutionize the EDA industry by offering innovative solutions and new interaction paradigms.

LMOps
LMOps is a research initiative focusing on fundamental research and technology for building AI products with foundation models, particularly enabling AI capabilities with Large Language Models (LLMs) and Generative AI models. The project explores various aspects such as prompt optimization, longer context handling, LLM alignment, acceleration of LLMs, LLM customization, and understanding in-context learning. It also includes tools like Promptist for automatic prompt optimization, Structured Prompting for efficient long-sequence prompts consumption, and X-Prompt for extensible prompts beyond natural language. Additionally, LLMA accelerators are developed to speed up LLM inference by referencing and copying text spans from documents. The project aims to advance technologies that facilitate prompting language models and enhance the performance of LLMs in various scenarios.

LLM-PLSE-paper
LLM-PLSE-paper is a repository focused on the applications of Large Language Models (LLMs) in Programming Language and Software Engineering (PL/SE) domains. It covers a wide range of topics including bug detection, specification inference and verification, code generation, fuzzing and testing, code model and reasoning, code understanding, IDE technologies, prompting for reasoning tasks, and agent/tool usage and planning. The repository provides a comprehensive collection of research papers, benchmarks, empirical studies, and frameworks related to the capabilities of LLMs in various PL/SE tasks.

PPTAgent
PPTAgent is an innovative system that automatically generates presentations from documents. It employs a two-step process for quality assurance and introduces PPTEval for comprehensive evaluation. With dynamic content generation, smart reference learning, and quality assessment, PPTAgent aims to streamline presentation creation. The tool follows an analysis phase to learn from reference presentations and a generation phase to develop structured outlines and cohesive slides. PPTEval evaluates presentations based on content accuracy, visual appeal, and logical coherence.

AI-Bootcamp
The AI Bootcamp is a comprehensive training program focusing on real-world applications to equip individuals with the skills and knowledge needed to excel as AI engineers. The bootcamp covers topics such as Real-World PyTorch, Machine Learning Projects, Fine-tuning Tiny LLM, Deployment of LLM to Production, AI Agents with GPT-4 Turbo, CrewAI, Llama 3, and more. Participants will learn foundational skills in Python for AI, ML Pipelines, Large Language Models (LLMs), AI Agents, and work on projects like RagBase for private document chat.

verl
veRL is a flexible and efficient reinforcement learning training framework designed for large language models (LLMs). It allows easy extension of diverse RL algorithms, seamless integration with existing LLM infrastructures, and flexible device mapping. The framework achieves state-of-the-art throughput and efficient actor model resharding with 3D-HybridEngine. It supports popular HuggingFace models and is suitable for users working with PyTorch FSDP, Megatron-LM, and vLLM backends.

awesome-lifelong-llm-agent
This repository is a collection of papers and resources related to Lifelong Learning of Large Language Model (LLM) based Agents. It focuses on continual learning and incremental learning of LLM agents, identifying key modules such as Perception, Memory, and Action. The repository serves as a roadmap for understanding lifelong learning in LLM agents and provides a comprehensive overview of related research and surveys.

Pai-Megatron-Patch
Pai-Megatron-Patch is a deep learning training toolkit built for developers to easily train and run inference on LLMs & VLMs using the Megatron framework. With the continuous development of LLMs, model structures and scales are rapidly evolving. Although these models can be conveniently built using the Transformers or DeepSpeed training frameworks, the training efficiency is comparatively low. This phenomenon becomes even more severe when the model scale exceeds 10 billion parameters. The primary objective of Pai-Megatron-Patch is to effectively utilize the computational power of GPUs for LLMs. The tool allows convenient training of commonly used LLMs with all the acceleration techniques provided by Megatron-LM.

Reflection_Tuning
Reflection-Tuning is a project focused on improving the quality of instruction-tuning data through a reflection-based method. It introduces Selective Reflection-Tuning, where the student model can decide whether to accept the improvements made by the teacher model. The project aims to generate high-quality instruction-response pairs by defining specific criteria for the oracle model to follow and respond to. It also evaluates the efficacy and relevance of instruction-response pairs using the r-IFD metric. The project provides code for reflection and selection processes, along with data and model weights for both V1 and V2 methods.

FinRobot
FinRobot is an open-source AI agent platform designed for financial applications using large language models. It transcends the scope of FinGPT, offering a comprehensive solution that integrates a diverse array of AI technologies. The platform's versatility and adaptability cater to the multifaceted needs of the financial industry. FinRobot's ecosystem is organized into four layers, including Financial AI Agents Layer, Financial LLMs Algorithms Layer, LLMOps and DataOps Layers, and Multi-source LLM Foundation Models Layer. The platform's agent workflow involves Perception, Brain, and Action modules to capture, process, and execute financial data and insights. The Smart Scheduler optimizes model diversity and selection for tasks, managed by components like Director Agent, Agent Registration, Agent Adaptor, and Task Manager. The tool provides a structured file organization with subfolders for agents, data sources, and functional modules, along with installation instructions and hands-on tutorials.

repromodel
ReproModel is an open-source toolbox designed to boost AI research efficiency by enabling researchers to reproduce, compare, train, and test AI models faster. It provides standardized models, dataloaders, and processing procedures, allowing researchers to focus on new datasets and model development. With a no-code solution, users can access benchmark and SOTA models and datasets, utilize training visualizations, extract code for publication, and leverage an LLM-powered automated methodology description writer. The toolbox helps researchers modularize development, compare pipeline performance reproducibly, and reduce time for model development, computation, and writing. Future versions aim to facilitate building upon state-of-the-art research by loading previously published study IDs with verified code, experiments, and results stored in the system.

VideoLingo
VideoLingo is an all-in-one video translation, localization, and dubbing tool designed to generate Netflix-quality subtitles. It aims to eliminate stiff machine translation and multi-line subtitles, and can even add high-quality dubbing, allowing knowledge from around the world to be shared across language barriers. Through an intuitive Streamlit web interface, the entire process from video link to embedded high-quality bilingual subtitles, and even dubbing, can be completed with just two clicks. Key features include: downloading videos from YouTube links with yt-dlp; word-level timeline subtitle recognition with WhisperX; subtitle segmentation based on sentence meaning using NLP and GPT; a GPT-summarized term knowledge base for context-aware translation; a three-step translate-reflect-adapt process to eliminate awkward machine translation; checking single-line subtitle length and translation quality against Netflix standards; high-quality aligned dubbing with GPT-SoVITS; and an integrated package for one-click startup and one-click output in Streamlit.

interpret
InterpretML is an open-source package that incorporates state-of-the-art machine learning interpretability techniques under one roof. With this package, you can train interpretable glassbox models and explain blackbox systems. InterpretML helps you understand your model's global behavior, or understand the reasons behind individual predictions. Interpretability is essential for:
- Model debugging - Why did my model make this mistake?
- Feature engineering - How can I improve my model?
- Detecting fairness issues - Does my model discriminate?
- Human-AI cooperation - How can I understand and trust the model's decisions?
- Regulatory compliance - Does my model satisfy legal requirements?
- High-risk applications - Healthcare, finance, judicial, ...
For similar tasks

Awesome-LLM-RAG
This repository, Awesome-LLM-RAG, aims to record advanced papers on Retrieval Augmented Generation (RAG) in Large Language Models (LLMs). It serves as a resource hub for researchers interested in promoting their work related to LLM RAG by updating paper information through pull requests. The repository covers various topics such as workshops, tutorials, papers, surveys, benchmarks, retrieval-enhanced LLMs, RAG instruction tuning, RAG in-context learning, RAG embeddings, RAG simulators, RAG search, RAG long-text and memory, RAG evaluation, RAG optimization, and RAG applications.

Awesome_LLM_System-PaperList
Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of papers on LLM inference and serving.

LLM-Tool-Survey
This repository contains a collection of papers related to tool learning with large language models (LLMs). The papers are organized according to the survey paper 'Tool Learning with Large Language Models: A Survey'. The survey focuses on the benefits and implementation of tool learning with LLMs, covering aspects such as task planning, tool selection, tool calling, response generation, benchmarks, evaluation, challenges, and future directions in the field. It aims to provide a comprehensive understanding of tool learning with LLMs and inspire further exploration in this emerging area.

Awesome-CVPR2024-ECCV2024-AIGC
A collection of papers and code for CVPR 2024 AIGC. This repository compiles and organizes research papers and code related to CVPR 2024 and ECCV 2024 AIGC (AI-Generated Content). It serves as a valuable resource for anyone interested in the latest advancements in computer vision and artificial intelligence. Users can find a curated list of papers and accompanying code repositories for further exploration and research. The repository encourages collaboration and contributions from the community through stars, forks, and pull requests.

LLMs-in-science
The 'LLMs-in-science' repository is a collaborative environment for organizing papers related to large language models (LLMs) and autonomous agents in the field of chemistry. The goal is to discuss trend topics, challenges, and the potential for supporting scientific discovery in the context of artificial intelligence. The repository aims to maintain a systematic structure of the field and welcomes contributions from the community to keep the content up-to-date and relevant.


awesome-lifelong-llm-agent
This repository is a collection of papers and resources related to Lifelong Learning of Large Language Model (LLM) based Agents. It focuses on continual learning and incremental learning of LLM agents, identifying key modules such as Perception, Memory, and Action. The repository serves as a roadmap for understanding lifelong learning in LLM agents and provides a comprehensive overview of related research and surveys.

OpenAGI
OpenAGI is an AI agent creation package designed for researchers and developers to create intelligent agents using advanced machine learning techniques. The package provides tools and resources for building and training AI models, enabling users to develop sophisticated AI applications. With a focus on collaboration and community engagement, OpenAGI aims to facilitate the integration of AI technologies into various domains, fostering innovation and knowledge sharing among experts and enthusiasts.
For similar jobs

weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.

LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.

VisionCraft
The VisionCraft API is a free API for using over 100 different AI models, spanning images to sound.

kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.

PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.

tabby
Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features: * Self-contained, with no need for a DBMS or cloud service. * OpenAPI interface, easy to integrate with existing infrastructure (e.g Cloud IDE). * Supports consumer-grade GPUs.

spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.

Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.