# Awesome-LLM-Robotics

A comprehensive list of papers using large language/multi-modal models for Robotics/RL, with links to papers, code, and related websites.

## Overview

* Surveys
* Reasoning
* Planning
* Manipulation
* Instructions and Navigation
* Simulation Frameworks
* Safety, Risks, Red Teaming, and Adversarial Testing
* Citation

This repo contains a curated list of papers using Large Language/Multi-Modal Models for Robotics/RL. Template from awesome-Implicit-NeRF-Robotics.
Please feel free to send me pull requests or email to add papers!
If you find this repository useful, please consider citing and STARing this list. Feel free to share this list with others!
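When adding a paper via pull request, it helps to match the entry format used throughout this list; a minimal sketch (the short name, venue, and link labels below are placeholders) looks like:

```markdown
- ShortName: "Full Paper Title", Venue, Month Year. [Paper] [Code] [Website]
```

Not every entry has all three links; include whichever are available for the paper.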
- "A Superalignment Framework in Autonomous Driving with Large Language Models", arXiv, Jun 2024, [Paper]
- "Neural Scaling Laws for Embodied AI", arXiv, May 2024. [Paper]
- "Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis", arXiv, Dec 2023. [Paper] [Paper List] [Website]
- "Language-conditioned Learning for Robotic Manipulation: A Survey", arXiv, Dec 2023, [Paper]
- "Foundation Models in Robotics: Applications, Challenges, and the Future", arXiv, Dec 2023, [Paper] [Paper List]
- "Robot Learning in the Era of Foundation Models: A Survey", arXiv, Nov 2023, [Paper]
- "The Development of LLMs for Embodied Navigation", arXiv, Nov 2023, [Paper]

## Reasoning

- AHA: "AHA: A Vision-Language-Model for Detecting and Reasoning over Failures in Robotic Manipulation", arXiv, Oct 2024. [Paper] [Website]
- ReKep: "ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation", arXiv, Sep 2024. [Paper] [Code] [Website]
- CLEAR: "Language, Camera, Autonomy! Prompt-engineered Robot Control for Rapidly Evolving Deployment", ACM/IEEE International Conference on Human-Robot Interaction (HRI), Mar 2024. [Paper] [Code]
- MoMa-LLM: "Language-Grounded Dynamic Scene Graphs for Interactive Object Search with Mobile Manipulation", arXiv, Mar 2024. [Paper] [Code] [Website]
- AutoRT: "Embodied Foundation Models for Large Scale Orchestration of Robotic Agents", arXiv, Jan 2024. [Paper] [Website]
- LEO: "An Embodied Generalist Agent in 3D World", arXiv, Nov 2023. [Paper] [Code] [Website]
- Robogen: "A generative and self-guided robotic agent that endlessly proposes and masters new skills.", arXiv, Nov 2023. [Paper] [Code] [Website]
- SayPlan: "Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning", Conference on Robot Learning (CoRL), Nov 2023. [Paper] [Website]
- [LLaRP] "Large Language Models as Generalizable Policies for Embodied Tasks", arXiv, Oct 2023. [Paper] [Website]
- [RT-X] "Open X-Embodiment: Robotic Learning Datasets and RT-X Models", arXiv, Oct 2023. [Paper] [Website]
- [RT-2] "RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control", arXiv, July 2023. [Paper] [Website]
- Instruct2Act: "Mapping Multi-modality Instructions to Robotic Actions with Large Language Model", arXiv, May 2023. [Paper] [Pytorch Code]
- TidyBot: "Personalized Robot Assistance with Large Language Models", arXiv, May 2023. [Paper] [Pytorch Code] [Website]
- Generative Agents: "Generative Agents: Interactive Simulacra of Human Behavior", arXiv, Apr 2023. [Paper] [Code]
- Matcha: "Chat with the Environment: Interactive Multimodal Perception using Large Language Models", IROS, Mar 2023. [Paper] [Github] [Website]
- PaLM-E: "PaLM-E: An Embodied Multimodal Language Model", arXiv, Mar 2023, [Paper] [Webpage]
- "Large Language Models as Zero-Shot Human Models for Human-Robot Interaction", arXiv, Mar 2023. [Paper]
- CortexBench: "Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?", arXiv, Mar 2023. [Paper]
- "Translating Natural Language to Planning Goals with Large-Language Models", arXiv, Feb 2023. [Paper]
- RT-1: "RT-1: Robotics Transformer for Real-World Control at Scale", arXiv, Dec 2022. [Paper] [GitHub] [Website]
- "PDDL Planning with Pretrained Large Language Models", NeurIPS, Oct 2022. [Paper] [Github]
- ProgPrompt: "Generating Situated Robot Task Plans using Large Language Models", arXiv, Sept 2022. [Paper] [Github] [Website]
- Code-As-Policies: "Code as Policies: Language Model Programs for Embodied Control", arXiv, Sept 2022. [Paper] [Colab] [Website]
- PIGLeT: "PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World", ACL, Jun 2021. [Paper] [Pytorch Code] [Website]
- Say-Can: "Do As I Can, Not As I Say: Grounding Language in Robotic Affordances", arXiv, Apr 2022. [Paper] [Colab] [Website]
- Socratic: "Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language", arXiv, Apr 2022. [Paper] [Pytorch Code] [Website]

## Planning

- LABOR Agent: "Large Language Models for Orchestrating Bimanual Robots", Humanoids, Nov 2024. [Paper] [Website] [Code]
- Wonderful Team: "Solving Robotics Problems in Zero-Shot with Vision-Language Models", arXiv, Jul 2024. [Paper] [Code] [Website]
- "Embodied AI in Mobile Robots: Coverage Path Planning with Large Language Models", arXiv, Jul 2024. [Paper]
- FLTRNN: "FLTRNN: Faithful Long-Horizon Task Planning for Robotics with Large Language Models", ICRA, May 2024. [Paper] [Code] [Website]
- LLM-Personalize: "LLM-Personalize: Aligning LLM Planners with Human Preferences via Reinforced Self-Training for Housekeeping Robots", arXiv, Apr 2024. [Paper] [Website] [Code]
- LLM3: "LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning", IROS, Mar 2024. [Paper] [Code]
- BTGenBot: "BTGenBot: Behavior Tree Generation for Robotic Tasks with Lightweight LLMs", arXiv, Mar 2024. [Paper] [Github]
- Attentive Support: "To Help or Not to Help: LLM-based Attentive Support for Human-Robot Group Interactions", arXiv, Mar 2024. [Paper] [Website] [Code]
- Beyond Text: "Beyond Text: Improving LLM's Decision Making for Robot Navigation via Vocal Cues", arXiv, Feb 2024. [Paper]
- SayCanPay: "SayCanPay: Heuristic Planning with Large Language Models Using Learnable Domain Knowledge", AAAI, Jan 2024. [Paper] [Code] [Website]
- ViLa: "Look Before You Leap: Unveiling the Power of GPT-4V in Robotic Vision-Language Planning", arXiv, Sep 2023, [Paper] [Website]
- CoPAL: "Corrective Planning of Robot Actions with Large Language Models", ICRA, Oct 2023. [Paper] [Website] [Code]
- LGMCTS: "LGMCTS: Language-Guided Monte-Carlo Tree Search for Executable Semantic Object Rearrangement", arXiv, Sep 2023. [Paper]
- Prompt2Walk: "Prompt a Robot to Walk with Large Language Models", arXiv, Sep 2023, [Paper] [Website]
- DoReMi: "Grounding Language Model by Detecting and Recovering from Plan-Execution Misalignment", arXiv, July 2023, [Paper] [Website]
- Co-LLM-Agents: "Building Cooperative Embodied Agents Modularly with Large Language Models", arXiv, Jul 2023. [Paper] [Code] [Website]
- LLM-Reward: "Language to Rewards for Robotic Skill Synthesis", arXiv, Jun 2023. [Paper] [Website]
- LLM-BRAIn: "LLM-BRAIn: AI-driven Fast Generation of Robot Behaviour Tree based on Large Language Model", arXiv, May 2023. [Paper]
- GLAM: "Grounding Large Language Models in Interactive Environments with Online Reinforcement Learning", arXiv, May 2023. [Paper] [Pytorch Code]
- LLM-MCTS: "Large Language Models as Commonsense Knowledge for Large-Scale Task Planning", arXiv, May 2023. [Paper]
- AlphaBlock: "AlphaBlock: Embodied Finetuning for Vision-Language Reasoning in Robot Manipulation", arXiv, May 2023. [Paper]
- LLM+P:"LLM+P: Empowering Large Language Models with Optimal Planning Proficiency", arXiv, Apr 2023, [Paper] [Code]
- ChatGPT-Prompts: "ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application", arXiv, Apr 2023, [Paper] [Code/Prompts]
- ReAct: "ReAct: Synergizing Reasoning and Acting in Language Models", ICLR, Apr 2023. [Paper] [Github] [Website]
- LLM-Brain: "LLM as A Robotic Brain: Unifying Egocentric Memory and Control", arXiv, Apr 2023. [Paper]
- "Foundation Models for Decision Making: Problems, Methods, and Opportunities", arXiv, Mar 2023, [Paper]
- LLM-planner: "LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models", arXiv, Mar 2023. [Paper] [Pytorch Code] [Website]
- Text2Motion: "Text2Motion: From Natural Language Instructions to Feasible Plans", arXiv, Mar 2023. [Paper] [Website]
- GD: "Grounded Decoding: Guiding Text Generation with Grounded Models for Robot Control", arXiv, Mar 2023. [Paper] [Website]
- PromptCraft: "ChatGPT for Robotics: Design Principles and Model Abilities", Blog, Feb 2023, [Paper] [Website]
- "Reward Design with Language Models", ICML, Feb 2023. [Paper] [Pytorch Code]
- "Planning with Large Language Models via Corrective Re-prompting", arXiv, Nov 2022. [Paper]
- Don't Copy the Teacher: "Don’t Copy the Teacher: Data and Model Challenges in Embodied Dialogue", EMNLP, Oct 2022. [Paper] [Website]
- COWP: "Robot Task Planning and Situation Handling in Open Worlds", arXiv, Oct 2022. [Paper] [Pytorch Code] [Website]
- LM-Nav: "Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action", arXiv, July 2022. [Paper] [Pytorch Code] [Website]
- InnerMonologue: "Inner Monologue: Embodied Reasoning through Planning with Language Models", arXiv, July 2022. [Paper] [Website]
- Housekeep: "Housekeep: Tidying Virtual Households using Commonsense Reasoning", arXiv, May 2022. [Paper] [Pytorch Code] [Website]
- FILM: "FILM: Following Instructions in Language with Modular Methods", ICLR, Apr 2022. [Paper] [Code] [Website]
- MOO: "Open-World Object Manipulation using Pre-Trained Vision-Language Models", arXiv, Mar 2023. [Paper] [Website]
- LID: "Pre-Trained Language Models for Interactive Decision-Making", arXiv, Feb 2022. [Paper] [Pytorch Code] [Website]
- "Collaborating with language models for embodied reasoning", NeurIPS, Feb 2022. [Paper]
- ZSP: "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents", ICML, Jan 2022. [Paper] [Pytorch Code] [Website]
- CALM: "Keep CALM and Explore: Language Models for Action Generation in Text-based Games", arXiv, Oct 2020. [Paper] [Pytorch Code]
- "Visually-Grounded Planning without Vision: Language Models Infer Detailed Plans from High-level Instructions", arXiV, Oct 2020, [Paper]

## Manipulation

- Manipulate-Anything: "Manipulate-Anything: Automating Real-World Robots using Vision-Language Models", CoRL, Nov 2024. [Paper] [Website]
- Plan-Seq-Learn: "Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks", ICLR, May 2024. [Paper] [PyTorch Code] [Website]
- ManipVQA: "ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models", arXiv, Mar 2024. [Paper] [PyTorch Code]
- BOSS: "Bootstrap Your Own Skills: Learning to Solve New Tasks with LLM Guidance", CoRL, Nov 2023. [Paper] [Website]
- Lafite-RL: "Accelerating Reinforcement Learning of Robotic Manipulations via Feedback from Large Language Models", CoRL Workshop, Nov 2023. [Paper]
- Octopus:"Octopus: Embodied Vision-Language Programmer from Environmental Feedback", arXiv, Oct 2023, [Paper] [PyTorch Code] [Website]
- [Text2Reward] "Text2Reward: Automated Dense Reward Function Generation for Reinforcement Learning", arXiv, Sep 2023, [Paper] [Website]
- PhysObjects: "Physically Grounded Vision-Language Models for Robotic Manipulation", arXiv, Sept 2023. [Paper]
- [VoxPoser] "VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models", arXiv, July 2023, [Paper] [Website]
- Scalingup: "Scaling Up and Distilling Down: Language-Guided Robot Skill Acquisition", arXiv, July 2023. [Paper] [Code] [Website]
- LIV:"LIV: Language-Image Representations and Rewards for Robotic Control", arXiv, Jun 2023, [Paper] [Pytorch Code] [Website]
- "Language Instructed Reinforcement Learning for Human-AI Coordination", arXiv, Jun 2023. [Paper]
- RoboCat: "RoboCat: A self-improving robotic agent", arXiv, Jun 2023. [Paper] [Website]
- SPRINT: "SPRINT: Semantic Policy Pre-training via Language Instruction Relabeling", arXiv, June 2023. [Paper] [Website]
- Grasp Anything: "Pave the Way to Grasp Anything: Transferring Foundation Models for Universal Pick-Place Robots", arXiv, June 2023. [Paper]
- LLM-GROP:"Task and Motion Planning with Large Language Models for Object Rearrangement", arXiv, May 2023. [Paper] [Website]
- VOYAGER:"VOYAGER: An Open-Ended Embodied Agent with Large Language Models", arXiv, May 2023. [Paper] [Pytorch Code] [Website]
- TIP: "Multimodal Procedural Planning via Dual Text-Image Prompting", arXiv, May 2023. [Paper]
- ProgramPort: "Programmatically Grounded, Compositionally Generalizable Robotic Manipulation", ICLR, Apr 2023. [Paper] [Website](https://progport.github.io/)
- VLaMP: "Pretrained Language Models as Visual Planners for Human Assistance", arXiv, Apr 2023. [Paper]
- "Towards a Unified Agent with Foundation Models", ICLR, Apr 2023. [Paper]
- CoTPC:"Chain-of-Thought Predictive Control", arXiv, Apr 2023, [Paper] [Code]
- Plan4MC:"Plan4MC: Skill Reinforcement Learning and Planning for Open-World Minecraft Tasks", arXiv, Mar 2023. [Paper] [Pytorch Code] [Website]
- ELLM:"Guiding Pretraining in Reinforcement Learning with Large Language Models", arXiv, Feb 2023. [Paper]
- DEPS:"Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents", arXiv, Feb 2023. [Paper] [Pytorch Code]
- LILAC:"No, to the Right – Online Language Corrections for Robotic Manipulation via Shared Autonomy", arXiv, Jan 2023, [Paper] [Pytorch Code]
- DIAL: "Robotic Skill Acquisition via Instruction Augmentation with Vision-Language Models", arXiv, Nov 2022. [Paper] [Website]
- Gato: "A Generalist Agent", TMLR, Nov 2022. [Paper] [Website]
- NLMap:"Open-vocabulary Queryable Scene Representations for Real World Planning", arXiv, Sep 2022, [Paper] [Website]
- R3M:"R3M: A Universal Visual Representation for Robot Manipulation", arXiv, Nov 2022, [Paper] [Pytorch Code] [Website]
- CLIP-Fields:"CLIP-Fields: Weakly Supervised Semantic Fields for Robotic Memory", arXiv, Oct 2022, [Paper] [PyTorch Code] [Website]
- VIMA:"VIMA: General Robot Manipulation with Multimodal Prompts", arXiv, Oct 2022, [Paper] [Pytorch Code] [Website]
- Perceiver-Actor:"A Multi-Task Transformer for Robotic Manipulation", CoRL, Sep 2022. [Paper] [Pytorch Code] [Website]
- LaTTe: "LaTTe: Language Trajectory TransformEr", arXiv, Aug 2022. [Paper] [TensorFlow Code] [Website]
- Robots Enact Malignant Stereotypes: "Robots Enact Malignant Stereotypes", FAccT, Jun 2022. [Paper] [Pytorch Code] [Website] [Washington Post] [Wired] (code access on request)
- ATLA: "Leveraging Language for Accelerated Learning of Tool Manipulation", CoRL, Jun 2022. [Paper]
- ZeST: "Can Foundation Models Perform Zero-Shot Task Specification For Robot Manipulation?", L4DC, Apr 2022. [Paper]
- LSE-NGU: "Semantic Exploration from Language Abstractions and Pretrained Representations", arXiv, Apr 2022. [Paper]
- MetaMorph: "MetaMorph: Learning Universal Controllers with Transformers", arXiv, Mar 2022. [Paper]
- Embodied-CLIP: "Simple but Effective: CLIP Embeddings for Embodied AI", CVPR, Nov 2021. [Paper] [Pytorch Code]
- CLIPort: "CLIPort: What and Where Pathways for Robotic Manipulation", CoRL, Sept 2021. [Paper] [Pytorch Code] [Website]

## Instructions and Navigation

- NaVid: "NaVid: Video-based VLM Plans the Next Step for Vision-and-Language Navigation", arXiv, Mar 2024. [Paper] [Website]
- OVSG: "Context-Aware Entity Grounding with Open-Vocabulary 3D Scene Graphs", CoRL, Nov 2023. [Paper] [Code] [Website]
- VLMaps: "Visual Language Maps for Robot Navigation", arXiv, Mar 2023. [Paper] [Pytorch Code] [Website]
- "Interactive Language: Talking to Robots in Real Time", arXiv, Oct 2022 [Paper] [Website]
- NLMap:"Open-vocabulary Queryable Scene Representations for Real World Planning", arXiv, Sep 2022, [Paper] [Website]
- ADAPT: "ADAPT: Vision-Language Navigation with Modality-Aligned Action Prompts", CVPR, May 2022. [Paper]
- "The Unsurprising Effectiveness of Pre-Trained Vision Models for Control", ICML, Mar 2022. [Paper] [Pytorch Code] [Website]
- CoW: "CLIP on Wheels: Zero-Shot Object Navigation as Object Localization and Exploration", arXiv, Mar 2022. [Paper]
- Recurrent VLN-BERT: "A Recurrent Vision-and-Language BERT for Navigation", CVPR, Jun 2021 [Paper] [Pytorch Code]
- VLN-BERT: "Improving Vision-and-Language Navigation with Image-Text Pairs from the Web", ECCV, Apr 2020 [Paper] [Pytorch Code]

## Simulation Frameworks

- ManiSkill3: "ManiSkill3: GPU Parallelized Robotics Simulation and Rendering for Generalizable Embodied AI", arXiv, Oct 2024. [Paper] [Code] [Website]
- GENESIS: "A generative world for general-purpose robotics & embodied AI learning.", arXiv, Nov 2023. [Code]
- ARNOLD: "ARNOLD: A Benchmark for Language-Grounded Task Learning With Continuous States in Realistic 3D Scenes", ICCV, Apr 2023. [Paper] [Code] [Website]
- OmniGibson: "OmniGibson: a platform for accelerating Embodied AI research built upon NVIDIA's Omniverse engine", CoRL, 2022. [Paper] [Code]
- MineDojo: "MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge", arXiv, Jun 2022. [Paper] [Code] [Website] [Open Database]
- Habitat 2.0: "Habitat 2.0: Training Home Assistants to Rearrange their Habitat", NeurIPS, Dec 2021. [Paper] [Code] [Website]
- BEHAVIOR: "BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments", CoRL, Nov 2021. [Paper] [Code] [Website]
- iGibson 1.0: "iGibson 1.0: a Simulation Environment for Interactive Tasks in Large Realistic Scenes", IROS, Sep 2021. [Paper] [Code] [Website]
- ALFRED: "ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks", CVPR, Jun 2020. [Paper] [Code] [Website]
- BabyAI: "BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning", ICLR, May 2019. [Paper](https://arxiv.org/abs/1810.08272) [Code]

## Safety, Risks, Red Teaming, and Adversarial Testing

- "LLM-Driven Robots Risk Enacting Discrimination, Violence, and Unlawful Actions", arXiv, Jun 2024. [Paper]
- "Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics", arXiv, Feb 2024. [Paper]
- "Robots Enact Malignant Stereotypes", FAccT, Jun 2022. [arXiv] [DOI] [Code] [Website]

## Citation

If you find this repository useful, please consider citing this list:

    @misc{kira2022llmroboticspaperslist,
        title = {Awesome-LLM-Robotics},
        author = {Zsolt Kira},
        journal = {GitHub repository},
        url = {https://github.com/GT-RIPL/Awesome-LLM-Robotics},
        year = {2022},
    }