DriveLM: Driving with Graph Visual Question Answering [ECCV 2024 Oral]

Autonomous Driving Challenge 2024 Driving-with-Language Leaderboard.

License: Apache 2.0 · arXiv · Hugging Face

https://github.com/OpenDriveLab/DriveLM/assets/54334254/cddea8d6-9f6e-4e7e-b926-5afb59f8dce2

Highlights

šŸ”„ We instantiate datasets (DriveLM-Data) built upon nuScenes and CARLA, and propose a VLM-based baseline approach (DriveLM-Agent) for jointly performing Graph VQA and end-to-end driving.

šŸ DriveLM serves as a main track in the CVPR 2024 Autonomous Driving Challenge. Everything you need for the challenge is HERE, including baseline, test data and submission format and evaluation pipeline!

News

  • [2024/07/16] The official DriveLM leaderboard has reopened!
  • [2024/07/01] DriveLM was accepted to ECCV 2024! Congrats to the team!
  • [2024/06/01] The challenge has ended! See the final leaderboard.
  • [2024/03/25] The challenge test server is online and the test questions are released. Check it out!
  • [2024/02/29] Challenge repo released: baseline, data and submission format, evaluation pipeline. Have a look!
  • [2023/12/22] DriveLM-nuScenes full v1.0 and the paper released.
  • [2023/08/25] DriveLM-nuScenes demo released.

Table of Contents

  1. Highlights
  2. Getting Started
  3. Current Endeavors and Future Directions
  4. TODO List
  5. DriveLM-Data
  6. License and Citation
  7. Other Resources

Getting Started

To get started with DriveLM:
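As a first step, you may want to inspect the question-answer annotations. The snippet below is a minimal sketch, assuming a nested scene → key frame → task-grouped QA layout for the DriveLM-nuScenes JSON; the field names here (`key_frames`, `QA`, `Q`, `A`) are illustrative and the real v1.0 schema may differ.

```python
# Hypothetical miniature of the DriveLM-nuScenes annotation layout:
# scenes map to key frames, which hold QA pairs grouped by task.
sample = {
    "scene_0001": {
        "key_frames": {
            "frame_a": {
                "QA": {
                    "perception": [
                        {"Q": "What objects are in front of the ego car?",
                         "A": "A pedestrian crossing at the crosswalk."}
                    ],
                    "planning": [
                        {"Q": "What should the ego car do?",
                         "A": "Slow down and yield to the pedestrian."}
                    ],
                }
            }
        }
    }
}

def count_qa_pairs(annotations: dict) -> int:
    """Count all QA pairs across scenes, key frames, and task groups."""
    total = 0
    for scene in annotations.values():
        for frame in scene["key_frames"].values():
            for qa_list in frame["QA"].values():
                total += len(qa_list)
    return total

print(count_qa_pairs(sample))  # 2
```

The same traversal applies unchanged after loading the full annotation file with `json.load`, since only the nesting depth, not the scene count, matters.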

(back to top)

Current Endeavors and Future Directions

  • The advent of GPT-style multimodal models in real-world applications motivates the study of the role of language in driving.
  • Dates below reflect the arXiv submission dates.
  • If there is any missing work, please reach out to us!

DriveLM attempts to address some of the challenges faced by the community.

  • Lack of data: DriveLM-Data serves as a comprehensive benchmark for driving with language.
  • Embodiment: GVQA provides a potential direction for embodied applications of LLMs / VLMs.
  • Closed-loop: DriveLM-CARLA attempts to explore closed-loop planning with language.

(back to top)

TODO List

  • [x] DriveLM-Data
    • [x] DriveLM-nuScenes
    • [x] DriveLM-CARLA
  • [x] DriveLM-Metrics
    • [x] GPT-score
  • [ ] DriveLM-Agent
    • [x] Inference code on DriveLM-nuScenes
    • [ ] Inference code on DriveLM-CARLA

(back to top)

DriveLM-Data

We facilitate the Perception, Prediction, Planning, Behavior, and Motion tasks, with human-written reasoning logic serving as the connection between them. We propose the task of GVQA on DriveLM-Data.
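The "graph" in Graph VQA can be pictured as QA pairs forming nodes, with directed edges encoding the logical dependencies between driving stages. The sketch below is illustrative only: the node texts and the single perception → prediction → planning → behavior → motion chain are assumptions, not the dataset's actual graph structure.

```python
from collections import deque

# QA nodes, one per driving stage (illustrative questions).
nodes = {
    "q_perception": "What is the moving pedestrian doing?",
    "q_prediction": "Where will the pedestrian be next?",
    "q_planning":   "Should the ego vehicle yield?",
    "q_behavior":   "What maneuver does the ego vehicle take?",
    "q_motion":     "What trajectory does the ego vehicle follow?",
}
# Directed edges: answering the target question depends on the source.
edges = [
    ("q_perception", "q_prediction"),
    ("q_prediction", "q_planning"),
    ("q_planning", "q_behavior"),
    ("q_behavior", "q_motion"),
]

def answer_order(nodes, edges):
    """Topologically sort the QA graph (Kahn's algorithm) so each
    question is answered only after the questions it depends on."""
    indeg = {n: 0 for n in nodes}
    succ = {n: [] for n in nodes}
    for u, v in edges:
        indeg[v] += 1
        succ[u].append(v)
    queue = deque(n for n, d in indeg.items() if d == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order

print(answer_order(nodes, edges))
```

Because the example graph is a single chain, the resulting order simply walks the five stages from perception to motion; a richer graph with branching dependencies would still yield a valid dependency-respecting order.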

šŸ“Š Comparison and Stats

DriveLM-Data is the first language-driving dataset facilitating the full stack of driving tasks with graph-structured logical dependencies.

See the linked documentation for details on the GVQA task, dataset features, and annotation.

(back to top)

License and Citation

All assets and code in this repository are under the Apache 2.0 license unless specified otherwise. The language data is under CC BY-NC-SA 4.0. Other datasets (including nuScenes) inherit their own distribution licenses. Please consider citing our paper and project if they help your research.

@article{sima2023drivelm,
  title={DriveLM: Driving with Graph Visual Question Answering},
  author={Sima, Chonghao and Renz, Katrin and Chitta, Kashyap and Chen, Li and Zhang, Hanxue and Xie, Chengen and Luo, Ping and Geiger, Andreas and Li, Hongyang},
  journal={arXiv preprint arXiv:2312.14150},
  year={2023}
}
@misc{contributors2023drivelmrepo,
  title={DriveLM: Driving with Graph Visual Question Answering},
  author={DriveLM contributors},
  howpublished={\url{https://github.com/OpenDriveLab/DriveLM}},
  year={2023}
}

(back to top)

Other Resources

Twitter Follow

OpenDriveLab

Twitter Follow

Autonomous Vision Group

(back to top)
