awesome-and-novel-works-in-slam

Collecting new ideas in SLAM (Semantic, 3DGS, BEV, Nav, LLM, Multi-session); the repo is updated weekly, for both engineering and academic use.


This repository contains a curated list of cutting-edge works in Simultaneous Localization and Mapping (SLAM). It includes research papers, projects, and tools related to various aspects of SLAM, such as 3D reconstruction, semantic mapping, novel algorithms, large-scale mapping, and more. The repository aims to showcase the latest advancements in SLAM technology and provide resources for researchers and practitioners in the field.


awesome-and-novel-works-in-slam Awesome

This repo contains a mostly cutting-edge (the new derives from the old) list of awesome and novel works in SLAM.

If you find this repository useful, please consider starring it, and feel free to share it with others! More comments can be found below; they are just my opinions. Let's make SLAMers great again. Please note that arXiv is a preprint platform whose content has not undergone peer review, and some of it is of very low quality; pay attention to the latest versions in formal journals or conference proceedings. There is also a significant gap between engineering and academic pursuits: many sophisticated concepts have substantial gaps in practical implementation. This is entirely normal and calls for case-by-case analysis of the challenges encountered.

For the latest Spatial-AI-related work, please refer to [SAI-arxiv-daily-Repo].

[Overview graph]


Overview


3DGSNeRF

Follow [Songyou Peng] for more endeavors on 3DGS; I will add more when I have time.

Follow [Jiheng Yang] for more information on NeRF/3DGS/EVER. He posts news weekly and runs the best Chinese community for NeRF/3DGS; I will add more when I have time.

Follow [ai kwea] for more practical experience with NeRF; I will add more when I have time.

Follow [Chongjie Ye] for more practical experience with 3DGS; I will add more when I have time.

  • EVER: "Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis", arxiv 2024. [Code]

  • STORM: "STORM: Spatio-Temporal Reconstruction Model for Large-Scale Outdoor Scenes", arxiv 2024. [Project]

  • NaVILA: "NaVILA: Legged Robot Vision-Language-Action Model for Navigation", arxiv 2024. [Paper]

  • BeNeRF: "BeNeRF: Neural Radiance Fields from a Single Blurry Image and Event Stream", arxiv 2024. [Paper]

  • SLGaussian: "SLGaussian: Fast Language Gaussian Splatting in Sparse Views", arxiv 2024. [Paper]

  • HUGSIM: "HUGSIM: A Real-Time, Photo-Realistic and Closed-Loop Simulator for Autonomous Driving", arxiv 2024. [Code]

  • GaussianRPG: "GaussianRPG: 3D Gaussian Rendering PlayGround", ECCV 2024. [Code]

  • HSFM: "Reconstructing People, Places, and Cameras", arxiv 2024. [Project]

  • Gaussian Splatting: "3D Gaussian Splatting for Real-Time Radiance Field Rendering", ACM Transactions on Graphics 2023. [Paper] [Code] (its compositing equation is restated at the end of this section)

  • Neural-Sim: "Learning to Generate Training Data with NeRF", ECCV 2022. [Paper] [Code] [Webpage]

  • iNeRF: "Inverting Neural Radiance Fields for Pose Estimation", IROS, 2021. [Paper] [Code] [Website] [Dataset]

  • iMAP: "Implicit Mapping and Positioning in Real-Time", ICCV, 2021. [Paper] [Code]

  • SHINE-Mapping: "Large-Scale 3D Mapping Using Sparse Hierarchical Implicit Neural Representations", ICRA, 2023. [Paper] [Code]

  • H2-Mapping: "Real-time Dense Mapping Using Hierarchical Hybrid Representation", RA-L, 2023. [Paper] [Code]

  • LATITUDE: "LATITUDE: Robotic Global Localization with Truncated Dynamic Low-pass Filter in City-scale NeRF", ICRA, 2023. [Paper] [Code]

  • NeuSE: "Neural SE(3)-Equivariant Embedding for Consistent Spatial Understanding with Objects", arXiv. [Paper] [Code]

  • ObjectFusion: "Accurate object-level SLAM with neural object priors", Graphical Models, 2022. [Paper]

  • NDF_Change: "Robust Change Detection Based on Neural Descriptor Fields", IROS, 2022. [Paper]

  • LNDF: "Local Neural Descriptor Fields: Locally Conditioned Object Representations for Manipulation", ICRA, 2023. [Paper] [Webpage]

  • NeRF-LOAM: "NeRF-LOAM: Neural Implicit Representation for Large-Scale Incremental LiDAR Odometry and Mapping", arXiv. [Paper] [Code]

  • "Implicit Map Augmentation for Relocalization", ECCV Workshop, 2022. [Paper]

  • Co-SLAM: "Joint Coordinate and Sparse Parametric Encodings for Neural Real-Time SLAM", CVPR, 2023. [Paper] [Website]

  • "Neural Implicit Dense Semantic SLAM", arXiv, 2023. [Paper]

  • NeRF-Navigation: "Vision-Only Robot Navigation in a Neural Radiance World", ICRA, 2022. [Paper] [Code] [Website]

  • ESDF: "Sampling-free obstacle gradients and reactive planning in Neural Radiance Fields", arXiv. [Paper]

  • Normal-NeRF: "Normal-NeRF: Ambiguity-Robust Normal Estimation for Highly Reflective Scenes", arXiv 2025. [Code]

  • GaussianProperty: "Integrating Physical Properties to 3D Gaussians with LMMs", arXiv 2025. [Project]

  • InfiniCube: "InfiniCube: Unbounded and Controllable Dynamic 3D Driving Scene Generation with World-Guided Video Models", arXiv 2024. [Paper]

  • GS-LIVOM: "Real-time Gaussian Splatting Assisted LiDAR-Inertial-Visual Odometry and Dense Mappings", arXiv 2024. [Code]

  • OpenGS-SLAM: "OpenGS-SLAM: Open-Set Dense Semantic SLAM with 3D Gaussian Splatting for Object-Level Scene Understanding", arXiv 2024. [Code]

  • AKF-LIO: "AKF-LIO: LiDAR-Inertial Odometry with Gaussian Map by Adaptive Kalman Filter", arXiv 2025. [Paper] [Code]

  • GSPR: "GSPR: Multimodal Place Recognition Using 3D Gaussian Splatting for Autonomous Driving", arXiv 2025. [Paper]

  • MTGS: "MTGS: Multi-Traversal Gaussian Splatting", arXiv 2025. [Paper]

  • Semantic Gaussians: "Semantic Gaussians: Open-Vocabulary Scene Understanding with 3D Gaussian Splatting", arXiv 2025. [Code]

  • diff-gaussian-rasterization: "3D Gaussian Splatting for Real-Time Rendering of Radiance Fields", ACM Transactions on Graphics 2023. [Code]

  • WildGS-SLAM: "WildGS-SLAM: Monocular Gaussian Splatting SLAM in Dynamic Environments", CVPR 2025. [Code]

  • SiLVR: "SiLVR: Scalable Lidar-Visual Reconstruction with Neural Radiance Fields for Robotic Inspection", arxiv 2024. [Paper]

  • DyNFL: "Dynamic LiDAR Re-simulation using Compositional Neural Fields", CVPR 2024. [Paper]

  • VPGS-SLAM: "VPGS-SLAM: Voxel-based Progressive 3D Gaussian SLAM in Large-Scale Scenes", arxiv 2025. [Paper]

  • MAC-Ego3D: "MAC-Ego3D: Multi-Agent Gaussian Consensus for Real-Time Collaborative Ego-Motion and Photorealistic 3D Reconstruction", CVPR 2025. [Code]

  • LucidDreamer: "LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes", arxiv 2023. [Code]

  • LODGE: "LODGE: Level-of-Detail Large-Scale Gaussian Splatting with Efficient Rendering", arxiv 2025. [Paper]

  • Omni-Scene: "Omni-Scene: Omni-Gaussian Representation for Ego-Centric Sparse-View Scene Reconstruction", CVPR 2025. [Paper]

  • QuadricFormer: "QuadricFormer: Scene as Superquadrics for 3D Semantic Occupancy Prediction", arxiv 2025. [Code] [Paper]

  • X-Scene: "X-Scene: Large-Scale Driving Scene Generation with High Fidelity and Flexible Controllability", arxiv 2025. [Project]

  • GaussNav: "GaussNav: Gaussian Splatting for Visual Navigation", arxiv 2024. [Code] [Paper]

  • SplatFormer: "SplatFormer: Point Transformer for Robust 3D Gaussian Splatting", ICLR 2025. [Paper]

  • AD-GS: "AD-GS: Object-Aware B-Spline Gaussian Splatting for Self-Supervised Autonomous Driving", ICCV 2025. [Project]

  • splatnav: "Safe Real-Time Robot Navigation in Gaussian Splatting Maps", arxiv 2024. [Project]

  • VR-Robo: "VR-Robo: A Real-to-Sim-to-Real Framework for Visual Robot Navigation and Locomotion", RAL 2025. [Project]

  • GaussianVLM: "GaussianVLM: Scene-centric 3D Vision-Language Models using Language-aligned Gaussian Splats for Embodied Reasoning and Beyond", arxiv 2025. [Paper]
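
For readers new to the splatting entries above: the image-formation model they share is the depth-ordered alpha blending of projected Gaussians from the original 3DGS paper, restated here for convenience,

    C = \sum_{i \in N} c_i \alpha_i \prod_{j=1}^{i-1} (1 - \alpha_j)

where c_i is the i-th Gaussian's view-dependent color (from spherical harmonics) and \alpha_i its learned opacity modulated by the projected 2D covariance; most of the works above differ mainly in how the Gaussians are parameterized, regularized, or optimized.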


v2x

Follow [Siheng Chen] for more information on V2X perception; I will add more when I have time.

Follow [Runsheng Xu] for more information on V2X perception; I will add more when I have time.

  • HighwayEnv: "An Environment for Autonomous Driving Decision-Making", GitHub. [Code]

  • OpenLane-V2: "The World's First Perception and Reasoning Benchmark for Scene Structure in Autonomous Driving.", GitHub. [Code]


semantic

  • Open3D: "A Modern Library for 3D Data Processing", arxiv. [Paper] [Code]

  • SuMa++: "SuMa++: Efficient LiDAR-based Semantic SLAM", IEEE/RSJ International Conference on Intelligent Robots and Systems 2019. [Paper] [Code]

  • Khronos: "Khronos: A Unified Approach for Spatio-Temporal Metric-Semantic SLAM in Dynamic Environments", Robotics: Science and Systems 2024. [Paper] [Code]


novel

  • Scene-Diffuser: "Diffusion-based Generation, Optimization, and Planning in 3D Scenes", CVPR 2023. [Project]

  • FastMap: "FastMap: Revisiting Dense and Scalable Structure from Motion", arxiv 2025. [Code]

  • VGGT: "VGGT: Visual Geometry Grounded Transformer", CVPR 2025. [Code]

  • ViPE: "ViPE: Video Pose Engine for 3D Geometric Perception", arxiv 2025. [Paper]

  • MASt3R-SLAM: "MASt3R-SLAM: Real-Time Dense SLAM with 3D Reconstruction Priors", CVPR 2025. [Code]

  • DRLSV: "Dense reinforcement learning for safety validation of autonomous vehicles", Nature 2023. [Paper]

  • Liquid AI: "Closed-form continuous-time neural networks", Nature 2023. [Paper]

  • UniDistill: "A Universal Cross-Modality Knowledge Distillation Framework for 3D Object Detection in Bird's-Eye View", CVPR 2023. [Code]

  • Let Occ Flow: "Let Occ Flow: Self-Supervised 3D Occupancy Flow Prediction", CoRL 2024. [Code]

  • UniLoc: "UniLoc: Towards Universal Place Recognition Using Any Single Modality", arxiv 2024. [Paper]

  • FAST-LIEO: "FAST-LIEO", github 2024. [Code]

  • Awesome-Robotics-Diffusion: "Awesome-Robotics-Diffusion", github 2024. [Code]

  • Diffusion Planner: "Diffusion-Based Planning for Autonomous Driving with Flexible Guidance", International Conference on Learning Representations (ICLR) 2025. [Code]

  • Learning More With Less: "Learning More With Less: Sample Efficient Dynamics Learning and Model-Based RL for Loco-Manipulation", arxiv 2025. [Paper]

  • SNN-VPR: "Applications of Spiking Neural Networks in Visual Place Recognition", IEEE Transactions on Robotics 2025. [Paper]

  • H3-Mapping: "H3-Mapping: Quasi-Heterogeneous Feature Grids for Real-time Dense Mapping Using Hierarchical Hybrid Representation", arxiv 2024. [Code]

  • PE3R: "PE3R: Perception-Efficient 3D Reconstruction", arxiv 2025. [Code]

  • SD-DefSLAM: "SD-DefSLAM: Semi-Direct Monocular SLAM for Deformable and Intracorporeal Scenes", 2021 IEEE International Conference on Robotics and Automation (ICRA). [Code]

  • robust-pose-estimator: "Learning how to robustly estimate camera pose in endoscopic videos", International Journal of Computer Assisted Radiology and Surgery 2023. [Code]

  • NR-SLAM: "NR-SLAM: Non-Rigid Monocular SLAM", 2023 arxiv. [Code]

  • level-k: "vehicle-interaction-decision-making", 2024 github. [Code]

  • Murre: "Multi-view Reconstruction via SfM-guided Monocular Depth Estimation", 2025 arxiv. [Code]

  • STDLoc: "From Sparse to Dense: Camera Relocalization with Scene-Specific Detector from Feature Gaussian Splatting", arxiv 2025. [Code]

  • Scene Graph: "Controllable 3D Outdoor Scene Generation via Scene Graphs", arxiv 2025. [Paper]

  • SPR: "SPR: Scene-agnostic Pose Regression for Visual Localization", CVPR 2025. [Code]

  • BEV-LIO-LC: "BEV Image Assisted LiDAR-Inertial Odometry with Loop Closure", arxiv 2025. [Code]

  • DRO: "DRO: Doppler-Aware Direct Radar Odometry", RSS 2025. [Code]

  • RIO: "EKF-based Radar Inertial Odometry using 4D mmWave radar sensors", github 2022. [Code]

  • ApexNav: "ApexNav: An Adaptive Exploration Strategy for Zero-Shot Object Navigation with Target-centric Semantic Fusion", arxiv 2025. [Paper]

  • LP2: "LP2: Language-based Probabilistic Long-term Prediction", RAL 2024. [Code]

  • Matrix3D: "Matrix3D: Large Photogrammetry Model All-in-One", arxiv 2025. [Paper]

  • ViewCrafter: "ViewCrafter: Taming Video Diffusion Models for High-fidelity View Synthesis", arxiv 2024. [Code]

  • GelSLAM: "GelSLAM: A Real-Time, High-Resolution, and Robust 3D Tactile SLAM System", ICRA 2025. [Project]

  • GF-SLAM: "GF-SLAM: A Hybrid Localization Method Incorporating Global and Arc Features", TASE 2024. [Paper]

  • MAC-VO: "MAC-VO: Metrics-Aware Covariance for Learning-based Stereo Visual Odometry", ICRA 2025. [Project]

  • GeoNav: "GeoNav: Empowering MLLMs with Explicit Geospatial Reasoning Abilities for Language-Goal Aerial Navigation", arxiv 2025. [Paper]

  • NavDP: "NavDP: Learning Sim-to-Real Navigation Diffusion Policy with Privileged Information Guidance", arxiv 2025. [Paper]

  • COMO: "COMO: Compact Mapping and Odometry", ECCV 2024. [Code]

  • π³: "π³: Scalable Permutation-Equivariant Visual Geometry Learning", arxiv 2025. [Project]

  • MGSfM: "MGSfM: Multi-Camera Geometry Driven Global Structure-from-Motion", ICCV 2025. [Paper]

  • NavigScene: "NavigScene: Bridging Local Perception and Global Navigation for Beyond-Visual-Range Autonomous Driving", ACM MM 2025. [Paper]

  • L-OGM: "Self-supervised Multi-future Occupancy Forecasting for Autonomous Driving", arxiv 2024. [Paper]

  • U-ViLAR: "U-ViLAR: Uncertainty-Aware Visual Localization for Autonomous Driving via Differentiable Association and Registration", arxiv 2025. [Paper]

  • Move to Understand a 3D Scene: "Move to Understand a 3D Scene: Bridging Visual Grounding and Exploration for Efficient and Versatile Embodied Navigation", arxiv 2025. [Paper]

  • GeNIE: "GeNIE: A Generalizable Navigation System for In-the-Wild Environments", arxiv 2025. [Paper]

  • CREStE: "CREStE: Scalable Mapless Navigation with Internet Scale Priors and Counterfactual Guidance", RSS 2025. [Code]

  • Reloc3r: "Reloc3r: Large-Scale Training of Relative Camera Pose Regression for Generalizable, Fast, and Accurate Visual Localization", CVPR 2025. [Code]

  • DualMap: "DualMap: Online Open-Vocabulary Semantic Mapping for Natural Language Navigation in Dynamic Changing Scenes", arxiv 2025. [Paper]

  • ScoreLiDAR: "Distilling Diffusion Models to Efficient 3D LiDAR Scene Completion", ICCV 2025. [Paper]

  • DiffMVS & CasDiffMVS: "Lightweight and Accurate Multi-View Stereo with Confidence-Aware Diffusion Model", TPAMI 2025. [Code]

  • SAIL-Recon: "SAIL-Recon: Large SfM by Augmenting Scene Regression with Localization", arxiv 2025. [Paper]


largemodel

  • DriveLikeAHuman: "Rethinking Autonomous Driving with Large Language Models", CVPR, 2023. [Code]

  • ReAct: "Synergizing Reasoning and Acting in Language Models" [Code]

  • GraphRAG [Code]

  • DreamerV3 [Code]

  • DINOv2 [Code]

  • CLIP [Code]

  • llama [Code]

  • Gato [Paper]

  • Open the Black Box of Transformers [Paper]

  • DeepSeek-V3 [Code]

  • DeepSeek-R1 [Code]

  • Lotus [Code]

  • VLN-CE-Isaac [Code]

  • ORION: "ORION: A Holistic End-to-End Autonomous Driving Framework by Vision-Language Instructed Action Generation", arxiv 2025. [Project] [Code]

  • GASP: "GASP: Unifying Geometric and Semantic Self-Supervised Pre-training for Autonomous Driving", arxiv 2025. [Paper]

  • SpatialLM: "SpatialLM: Large Language Model for Spatial Understanding", github 2025. [Code]

  • VL-Nav: "VL-Nav: Real-time Vision-Language Navigation with Spatial Reasoning", arxiv 2025. [Paper]

  • MiLA: "MiLA: Multi-view Intensive-fidelity Long-term Video Generation World Model for Autonomous Driving", arxiv 2025. [Paper]

  • GAIA-2: "GAIA-2: Pushing the Boundaries of Video Generative Models for Safer Assisted and Automated Driving", arxiv 2025. [Project]

  • NeuPAN: "NeuPAN: Direct Point Robot Navigation with End-to-End Model-based Learning", IEEE Transactions on Robotics 2025. [Paper] [Code]

  • OpenDriveVLA: "OpenDriveVLA: Towards End-to-end Autonomous Driving with Large Vision Language Action Model", arxiv 2025. [Code]

  • Distill Any Depth: "Distill Any Depth: Distillation Creates a Stronger Monocular Depth Estimator", arxiv 2025. [Code]

  • TAP: "Tracking Any Point (TAP)", github 2025. [Code]

  • FindAnything: "FindAnything: Open-Vocabulary and Object-Centric Mapping for Robot Exploration in Any Environment", arxiv 2025. [Paper]

  • Hier-SLAM: "Hier-SLAM: Scaling-up Semantics in SLAM with a Hierarchically Categorical Gaussian Splatting", ICRA 2025. [Code]

  • LightEMMA: "LightEMMA: Lightweight End-to-End Multimodal Model for Autonomous Driving", arxiv 2025. [Paper]

  • SOLVE: "SOLVE: Synergy of Language-Vision and End-to-End Networks for Autonomous Driving", arxiv 2025. [Paper]

  • VLM-3R: "VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction", arxiv 2025. [Code]

  • Impromptu-VLA: "Impromptu VLA: Open Weights and Open Data for Driving Vision-Language-Action Models", arxiv 2025. [Code] [Paper]

  • Cosmos-Drive-Dreams: "Cosmos-Drive-Dreams: Scalable Synthetic Driving Data Generation with World Foundation Models", arxiv 2025. [Project] [Paper]

  • AnyCam: "AnyCam: Learning to Recover Camera Poses and Intrinsics from Casual Videos", CVPR 2025. [Code]

  • Depth Any Camera: "Depth Any Camera: Zero-Shot Metric Depth Estimation from Any Camera", CVPR 2025. [Code]

  • LLaVA-4D: "LLaVA-4D: Embedding SpatioTemporal Prompt into LMMs for 4D Scene Understanding", arxiv 2025. [Paper]

  • StreamVLN: "StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling", arxiv 2025. [Code]

  • Mem4Nav: "Mem4Nav: Boosting Vision-and-Language Navigation in Urban Environments with a Hierarchical Spatial-Cognition Long-Short Memory System", arxiv 2025. [Paper]

  • Bench2ADVLM: "Bench2ADVLM: Closed-Loop Evaluation Framework for Vision-language Models in Autonomous Driving", arxiv 2025. [Project]

  • SpatialVLA: "SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Model", arxiv 2025. [Paper]

  • NavA3: "NavA3: Understanding Any Instruction, Navigating Anywhere, Finding Anything", arxiv 2025. [Paper]

  • DriveLM: "DriveLM: Driving with Graph Visual Question Answering", ECCV 2024. [Code]


bignames

Industry and off-campus research groups

UK

  • Oxford Control Group [Homepage]

  • Oxford Dynamic Robot Systems Group [Homepage]

  • Imperial College London Dyson Robotics Laboratory [Homepage]

  • UCL CDT in Foundational AI [Homepage]

  • UCL Robot Perception and Learning Lab [Homepage]

  • School of Informatics University of Edinburgh [Homepage]

Europe

  • ETHZ Autonomous Systems Lab [Homepage]

  • ETHZ Robotic Systems Lab [Homepage]

  • ETHZ Computer-Vision-and-Geometry-Lab [Homepage]

  • ETHZ Visual Intelligence and Systems Group [Homepage]

  • ETHZ / University of Cyprus Vision for Robotics Lab [Homepage]

  • ETHZ Learning and Adaptive Systems Group [Homepage]

  • ETHZ Photogrammetry and Remote Sensing Lab [Homepage]

  • UZH Robotics and Perception Group [Homepage]

  • EPFL CV LAB [Homepage]

  • EPFL Intelligent Maintenance and Operations Systems [Homepage]

  • TUM-Institute-of-Automotive-Technology [Homepage]

  • TUM-Computer-Vision-Group [Homepage]

  • TUM Smart Robotics Lab [Homepage]

  • Stuttgart Flight Robotics and Perception Group [Homepage]

  • Heidelberg Computer Vision and Learning Lab [Homepage]

  • UFR Robot Learning [Homepage]

  • University of Tübingen [Homepage]

  • Bonn Photogrammetry Robotics [Homepage]

  • Karlsruhe Institute of Technology Institute of Measurement and Control Systems [Homepage]

  • IGMR - RWTH Aachen University [Homepage]

  • RWTH Aachen Institute of Automatic Control [Homepage]

  • RWTH Aachen Institut für Kraftfahrzeuge ika [Homepage]

  • Goettingen Data Fusion Group [Homepage]

  • Albert-Ludwigs-Universität Freiburg Robot Learning Lab [Homepage]

  • KTH Robotics Perception and Learning [Homepage]

  • University of Turku Turku Intelligent Embedded and Robotic Systems Lab [Homepage]

  • Istituto Italiano di Tecnologia Robotics [Homepage]

  • NMBU Robotics [Homepage]

  • TU Delft 3D geoinformation research group [Homepage]

  • TU Delft Intelligent Vehicles [Homepage]

  • TU Delft Autonomous Multi-Robots Lab [Homepage]

  • Poznan University of Technology Mobile Robots Lab [Homepage]

  • Sapienza University of Rome Robots Vision and Perception [Homepage]

  • LTU Robotics & AI Engineering Group [Homepage]

  • University of Luxembourg Automation and Robotics Research Group (ARG) [Homepage]

  • Control of Networked Systems Group at University of Klagenfurt [Homepage]

  • Vision for Robotics and Autonomous Systems Czech Technical University in Prague (CTU) [Homepage]

  • Multi-robot Systems (MRS) group Czech Technical University in Prague (CTU) [Homepage]

  • LARICS Lab [Homepage]

North America

  • Stanford Autonomous Systems Lab [Homepage]

  • Stanford Robotics Embodied Artificial Intelligence Lab REAL [Homepage]

  • Stanford Vision and Learning Lab [Homepage]

  • Stanford Computational Vision and Geometry Lab [Homepage]

  • Stanford Center for Research on Foundation Models [Homepage]

  • Stanford Open Virtual Assistant Lab [Homepage]

  • Stanford NAV Lab [Homepage]

  • MIT-SPARKs [Homepage]

  • MIT CSAIL Computer Vision [Homepage]

  • MIT's Marine Robotics Group [Homepage]

  • MIT HAN Lab [Homepage]

  • MIT Aerospace Controls Laboratory [Homepage]

  • MIT Robust Robotics Group [Homepage]

  • MIT Urban Mobility Lab + Transit Lab [Homepage]

  • MIT Driverless Perception Team [Homepage]

  • CMU Robotics Institute AirLab [Homepage] [FRC]

  • CMU Robotic Exploration Lab [Homepage]

  • CMU Robot Perception Lab [Homepage]

  • Learning and Control for Agile Robotics Lab at CMU [Homepage]

  • Princeton Computational Imaging [Homepage]

  • Princeton Safe Robotics Lab [Homepage]

  • UPenn Perception Action Learning Group [Homepage]

  • UPenn Kumar Robotics [Homepage]

  • UCB Model Predictive Control Laboratory [Homepage]

  • Berkeley Automation Lab [Homepage]

  • JHU Laboratory for Computational Sensing and Robotics [Homepage]

  • UCLA Verifiable & Control-Theoretic Robotics Laboratory [Homepage]

  • UCLA Mobility Lab [Homepage]

  • Cornell Tech Kuleshov Group [Homepage]

  • UCSD Existential Robotics Lab [Homepage]

  • UCSD Autonomous Vehicle Laboratory [Homepage]

  • UCSD Hao Su's Lab [Homepage]

  • Umich Ford Center for Autonomous Vehicles FCAV [Homepage]

  • Umich Autonomous Robotic Manipulation Lab (motion planning, manipulation, human-robot collaboration) [Homepage]

  • Umich Dynamic Legged Locomotion Robotics Lab [Homepage]

  • Umich Computational Autonomy and Robotics Laboratory [Homepage]

  • Umich Mobility Transformation Lab [Homepage]

  • NYU AI4CE Lab [Homepage]

  • GaTech BORG Lab [Homepage]

  • GaTech Intelligent Vision and Automation Laboratory IVA Lab [Homepage]

  • Special Interest Group for Robotics Enthusiasts at UIUC [Homepage]

  • UIUC-Robotics [Homepage]

  • UIUC-iSE [Homepage]

  • GAMMA UMD [Homepage]

  • Texas Austin Autonomous Mobile Robotics Laboratory [Homepage]

  • Texas Austin Visual Informatics Group [Homepage]

  • Texas Robot Perception and Learning Lab [Homepage]

  • University of Delaware Robot Perception Navigation Group RPNG [Homepage]

  • Virginia Tech Transportation Institute [Homepage]

  • ASU Intelligent Robotics and Interactive Systems (IRIS) Lab [Homepage]

  • Unmanned Systems Lab at Texas A&M [Homepage]

  • SIT Robust Field Autonomy Lab [Homepage]

  • University at Buffalo Spatial AI & Robotics Lab [Homepage]

  • UCR Trustworthy Autonomous Systems Laboratory (TASL) [Homepage]

  • Toronto STARS Laboratory [Homepage]

  • Toronto Autonomous Space Robotics Lab [Homepage]

  • Toronto TRAIL Lab [Homepage]

  • UBC Computer Vision Group [Homepage]

  • UWaterloo CL2 [Homepage]

  • IntRoLab - Intelligent / Interactive / Integrated / Interdisciplinary Robot Lab @ Université de Sherbrooke [Homepage]

  • École Polytechnique de Montréal Making Innovative Space Technology Lab [Homepage]

  • York-SDCNLab [Homepage]

  • Université Laval Northern Robotics Laboratory [Homepage]

  • Queen's Estimation, Search, and Planning Research Group [Homepage]

  • Queen's offroad robotics [Homepage]

Asia

  • KAIST Urban Robotics Lab [Homepage]

  • KAIST Cognitive Learning for Vision and Robotics CLVR lab [Homepage]

  • Yonsei Computational Intelligence Laboratory [Homepage]

  • SNU RPM [Homepage]

  • SNU Machine Perception and Reasoning Lab [Homepage]

  • DGIST APRL [Homepage]

  • Japan National Institute of Advanced Industrial Science and Technology [Homepage]

  • Japan Nagoya [Homepage]

  • NUS showlab [Homepage]

  • NUS clearlab [Homepage]

  • NUS adacomp [Homepage]

  • NUS marmotlab [Homepage]

  • NUS cvrp lab [Homepage]

  • NTU AutoMan [Homepage]

  • NTU ARIS [Homepage]

  • CUHK OpenMMLab [Homepage]

  • CUHK T Stone [Homepage]

  • CUHK usr [Homepage]

  • CUHK Deep Vision Lab [Homepage]

  • CUHK DeciForce [Homepage]

  • HKU Mars Lab [Homepage]

  • HKU CVMI Lab [Homepage]

  • HKUST Aerial Robotics Group [Homepage]

  • HKUST XRIM-Lab [Homepage]

  • City University of Hong Kong MetaSLAM [Homepage]

  • HK PolyU Visual Learning and Reasoning Group [Homepage]

  • UMacau Intelligent Machine Research Lab (IMRL) [Homepage]

  • NTNU Autonomous Robots Lab [Homepage]

  • Tsinghua IIIS MARS Lab [Homepage]

  • Tsinghua Institute for AI Industry Research [Homepage]

  • SJTU Vision and Intelligent System Group [Homepage]

  • SJTU Intelligent Robotics and Machine Vision Lab [Homepage]

  • SJTU Thinklab [Homepage]

  • ZJU Advanced-Perception-on-Robotics-and-Intelligent-Learning-Lab [Homepage]

  • ZJU CAD CG [Homepage]

  • ZJU Advanced Intelligent Machines AIM [Homepage]

  • ZJU Robotics Lab [Homepage]

  • ZJU FAST Lab [Homepage]

  • ZJU OM-AI Lab [Homepage]

  • Fudan Zhang Vision Group [Homepage]

  • Chongxuan Li's research group @ Renmin University of China [Homepage]

  • Tongji Intelligent Electric Vehicle Research Group [Homepage]

  • Tongji Intelligent Sensing Perception and Computing Group [Homepage]

  • NUDT NuBot [Homepage]

  • HUST EIC Vision Lab [Homepage]

  • ShanghaiTech Vision and Intelligent Perception Lab [Homepage]

  • ShanghaiTech Automation and Robotics Center [Homepage]

  • HITSZ nROS-Lab [Homepage]

  • HITSZ CLASS-Lab [Homepage]

  • GAP-LAB-CUHK-SZ [Homepage]

  • Westlake University Audio Signal and Information Processing Lab [Homepage]

  • Wuhan University Urban Spatial Intelligence Research Group at LIESMARS [Homepage]

  • Wuhan University Integrated and Intelligent Navigation Group [Homepage]

  • WHU GREAT (GNSS+ Research, Application and Teaching) group [Homepage]

  • SYSU STAR Smart Aerial Robotics Group [Homepage]

  • SYSU RAPID Lab [Homepage]

  • SYSU Pengyu Team [Homepage]

  • SYSU Human Cyber Physical (HCP) Intelligence Integration Lab [Homepage]

  • NKU Robot Autonomy and Human-AI Collaboration Group [Homepage]

  • HKUSTGZ Research group of visual generative models [Homepage]

  • HNU Neuromorphic Automation and Intelligence Lab [Homepage]

  • NEU REAL [Homepage]

  • SZU College of Computer Science and Software Engineering [Homepage]

  • Israel Autonomous Navigation and Sensor Fusion Lab [Homepage]

Australia

  • CSIRORobotics Brisbane, Queensland, Australia [Homepage]

  • Robotics Institute, University of Technology Sydney, Sydney, Australia [Homepage]

  • Robotic Perception Group at the Australian Centre For Robotics, Sydney, Australia [Homepage]

See these links if you are looking for PhD positions: [I] [II].

Journals

Conferences

(Months below are approximate submission-deadline months, not the dates the conferences are held.)

  • Jan. SIGGRAPH Special Interest Group on Computer Graphics [Registration]

  • Mar. RSS Robotics: Science and Systems [Registration]

  • Apr. CASE IEEE International Conference on Automation Science and Engineering [Registration]

  • Mar. IROS IEEE/RSJ International Conference on Intelligent Robots and Systems [Registration]

  • Mar. ICCV International Conference on Computer Vision [Registration]

  • Mar. ECCV European Conference on Computer Vision [Registration]

  • May. ITSC International Conference on Intelligent Transportation Systems [Registration]

  • May. CoRL Conference on Robot Learning [Registration]

  • May. NeurIPS Annual Conference on Neural Information Processing Systems [Registration]

  • May. ACM MM ACM International Conference on Multimedia [Registration]

  • Aug. AAAI Annual AAAI Conference on Artificial Intelligence [Registration]

  • Sept. ICLR International Conference on Learning Representations [Registration]

  • Sept. ICRA IEEE International Conference on Robotics and Automation [Registration]

  • Oct. ICML International Conference on Machine Learning [Registration]

  • Nov. CVPR IEEE Conference on Computer Vision and Pattern Recognition [Registration]


Datasets

Find more on Papers with Code and in awesome-slam-datasets.

Competitions

Practice makes perfect, though competitions tend to lack innovation.

  • CVPR 2025 drivex [Link]
  • ICRA 2025 FR [Link]
  • ICRA 2023 Robodepth [Link]
  • ICRA 2024 RoboDrive [Link]
  • ICRA 2023 Sim2Real [Link]
  • nerfbaselines [Link]

Tools

Libs:

NeRF/3DGS:

Calibration:
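
Camera intrinsic calibration is the usual first step before any SLAM pipeline; as a minimal sketch with OpenCV (the 9x6 board size and the calib_*.png image set are illustrative assumptions, not a specific tool from this list):

    import glob

    import cv2
    import numpy as np

    board = (9, 6)  # inner corners per row and column (assumed board)
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

    obj_pts, img_pts, size = [], [], None
    for path in glob.glob("calib_*.png"):  # hypothetical capture set
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]

    # Returns reprojection RMS, intrinsics K, and distortion coefficients.
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)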

Evaluation:
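
Trajectory accuracy is usually reported as ATE/RPE (tools such as evo compute these, including SE(3)/Sim(3) alignment); a minimal ATE-RMSE sketch, assuming est and gt are already time-associated (N, 3) position arrays:

    import numpy as np

    def ate_rmse(est: np.ndarray, gt: np.ndarray) -> float:
        """Root-mean-square absolute trajectory error over associated poses."""
        err = np.linalg.norm(est - gt, axis=1)   # per-pose translation error
        return float(np.sqrt(np.mean(err ** 2)))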

Communication:

  • Message Queuing Telemetry Transport (MQTT) [Video]

  • Hypertext Transfer Protocol Secure (HTTPS) [Video]

  • Controller Area Network (CAN) [Video]
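
As a sketch of how such a protocol carries robot telemetry, a minimal MQTT publish with paho-mqtt (broker address, topic, and payload are illustrative assumptions):

    import json

    import paho.mqtt.client as mqtt

    client = mqtt.Client()  # paho-mqtt 1.x style; 2.x adds a CallbackAPIVersion argument
    client.connect("localhost", 1883)  # assumed local Mosquitto broker
    client.publish("robot/pose", json.dumps({"x": 1.0, "y": 2.0, "yaw": 0.3}))
    client.disconnect()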

Deep Learning Framework:

Writing:

Books and Reviews:

  • learnLLM [Code]

  • awesome-llm4tr [Code]

  • Foundations-of-LLMs [Code]

  • MathRL [Code]

  • SLAM-Handbook [Code]

  • slambook2 [Code]

  • SLAM Course (2013) [Video]

  • TransferLearning [Code]

  • Present and Future of SLAM in Extreme Environments: "Present and Future of SLAM in Extreme Environments: The DARPA SubT Challenge", 2022 IEEE Transactions on Robotics. [Paper]

  • General Place Recognition Survey: "General Place Recognition Survey: Towards Real-World Autonomy", 2024 arxiv. [Paper] [Code]

  • NeRF in Robotics: "NeRF in Robotics: A Survey", 2024 arxiv. [Paper]

  • Learning-based 3D Reconstruction in AD: "Learning-based 3D Reconstruction in Autonomous Driving: A Comprehensive Survey", 2025 arxiv. [Paper]

  • World Model: "The Role of World Models in Shaping Autonomous Driving: A Comprehensive Survey", [Code]

  • HFHWC: "Autonomous Driving in Unstructured Environments: How Far Have We Come?", [Code]

  • FutureMapping2: "FutureMapping 2: Gaussian Belief Propagation for Spatial AI", [Paper]

  • Vision-Language-Action Models: "Vision-Language-Action Models: Concepts, Progress, Applications and Challenges", [Paper]

  • Embodied Intelligence for 3D Understanding: "Embodied Intelligence for 3D Understanding: A Survey on 3D Scene Question Answering", [Paper]

  • Generative AI for AD: R: "Generative AI for Autonomous Driving: A Review", [Paper]

  • Generative AI for AD: FO: "Generative AI for Autonomous Driving: Frontiers and Opportunities", [Paper]

  • LLM4Drive: "LLM4Drive: A Survey of Large Language Models for Autonomous Driving", arxiv 2023. [Code]

  • MA-AD: "Multi-Agent-Autonomous-Driving", github 2025. [Code]

  • 4DRadar-tutorial: "4D Radar Technology and Advanced Sensor Fusion AI", ICRA 2025. [Project]

  • Toward Embodied AGI: "Toward Embodied AGI: A Review of Embodied AI and the Road Ahead", arxiv 2025. [Paper]

  • Multi-sensor Mapping: "LiDAR, GNSS and IMU Sensor Alignment through Dynamic Time Warping to Construct 3D City Maps", arxiv 2025. [Paper]

  • Embodied Navigation: "Embodied Navigation", Science China Information Sciences 2025. [Paper]

  • EvaluateLO: "A Comprehensive Evaluation of LiDAR Odometry Techniques", arxiv 2025. [Paper]

  • HDMap: "What Really Matters for Robust Multi-Sensor HD Map Construction?", arxiv 2025. [Paper]

  • MSFEAI: "A Survey of Multi-sensor Fusion Perception for Embodied AI: Background, Methods, Challenges and Prospects", arxiv 2025. [Paper]

  • awesome-3D-scene-graphs: "awesome-3D-scene-graphs", github 2025. [Code]

  • SSM-EN: "Sensing, Social, and Motion Intelligence in Embodied Navigation: A Comprehensive Survey", arxiv 2025. [Paper]


Sim (proven and promising software)


BuySensors

Multi-sensor options: RGB, RGBD, Infrared, Polarization, Event-based (future), Motion Capture (GT), 2D Spinning LiDAR, 3D Spinning LiDAR, 3D Solid-State LiDAR, 4D High-Resolution Radar (future), mmWave Radar, UWB, RTK (GT), Wheel Odometry, IMU, Ultrasonic Sensor.


FindIntern

For bridging college and industry; positions are mainly based in Shanghai, Beijing, Hangzhou, Suzhou, Guangzhou (Canton), Shenzhen, and Hong Kong.

Shanghai: AI Lab, DiDi, Horizon, Li Auto, BOSCH, Huawei, SenseTime

Beijing: Xiaomi, Baidu, Pony.ai, MSRA, QCraft

Hangzhou: Unitree, Damo

Suzhou: Momenta

Canton: ZhuoYu, WeRide, Xiaopeng, DeepRoute

Interview experience

BTW, RA/TA Opportunities:

Find More


WeMedia

Follow [XueLingFeiHua] for a general understanding of the sensors used by autonomous vehicles; I will add more when I have time.

Follow [MITRoboticsSeminar] for cutting-edge seminars in robotics; I will add more when I have time.

Follow [RSS] for cutting-edge seminars in robotics; I will add more when I have time.

Follow [IVDesign] [ADCol] [VLN] [ADFeed] for cutting-edge papers and talks, and scan the QR codes for more WeChat posts; I will add more when I have time.


WhySDMap

For autonomous driving vehicles and outdoor robotics, use a lightweight map rather than no map at all, much in the spirit of MapLite [demo].

For indoor environments especially, the natural prior is BIM [BIM].
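
As a sketch of what such a lightweight prior looks like in practice, OpenStreetMap road topology can be pulled with osmnx (the coordinates and the 500 m radius are illustrative assumptions):

    import osmnx as ox

    # Drivable road graph around an assumed query location.
    G = ox.graph_from_point((48.1351, 11.5820), dist=500, network_type="drive")
    for u, v, data in list(G.edges(data=True))[:3]:
        # Edge attributes (name, highway class, geometry) act as weak priors
        # for localization and routing, as in the OSM-based entries below.
        print(data.get("name"), data.get("highway"))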

  • AddressCLIP: "AddressCLIP: Empowering Vision-Language Models for City-wide Image Address Localization", arxiv 2024. [Paper]

  • RoadNet: "Translating Images to Road Network: A Non-Autoregressive Sequence-to-Sequence Approach", ICCV 2023 (Oral). [Paper] [Code]

  • OrienterNet: "OrienterNet: Visual Localization in 2D Public Maps with Neural Matching", CVPR 2023. [Paper] [Code]

  • OSMLoc: "OSMLoc: Single Image-Based Visual Localization in OpenStreetMap with Semantic and Geometric Guidances", arxiv 2024. [Paper] [Code]

  • OPAL: "OPAL: Visibility-aware LiDAR-to-OpenStreetMap Place Recognition via Adaptive Radial Fusion", CoRL 2025. [Paper] [Code]

  • svo-dt: "Drift-free Visual SLAM using Digital Twins", arxiv 2024. [Paper] [Code]

  • CityNav: "CityNav: Language-Goal Aerial Navigation Dataset with Geographic Information", arxiv 2024. [Paper]

  • BEVPlace2: "BEVPlace++: Fast, Robust, and Lightweight LiDAR Global Localization for Unmanned Ground Vehicles", arxiv 2024. [Code]

  • TripletLoc: "TripletLoc: One-Shot Global Localization using Semantic Triplet in Urban Environment", arxiv 2024. [Code]

  • Reliable-loc: "Reliable LiDAR global localization using spatial verification and pose uncertainty", arxiv 2024. [Code]

  • Render2Loc: "Render-and-Compare: Cross-view 6-DoF Localization from Noisy Prior", 2023 IEEE International Conference on Multimedia and Expo (ICME). [Code]

  • CityWalker: "CityWalker: Learning Embodied Urban Navigation from Web-Scale Videos", 2025 CVPR. [Code]

  • EI-Nav: "OPEN: Openstreetmap-enhanced oPen-air sEmantic Navigation", 2025 arxiv. [Code]

  • DeepGPS: "DeepGPS: deep learning enhanced GPS positioning in urban canyons", 2024 IEEE Transactions on Mobile Computing (TMC). [Code]

  • TESM: "Topological Exploration using Segmented Map with Keyframe Contribution in Subterranean Environments", 2023 arxiv. [Paper]

  • SD++: "SD++: Enhancing Standard Definition Maps by Incorporating Road Knowledge using LLMs", 2025 arxiv. [Paper]

  • ERPoT: "ERPoT: Effective and Reliable Pose Tracking for Mobile Robots Using Lightweight Polygon Maps", 2025 TRO. [Paper] [Code]

  • osmAG-LLM: "osmAG-LLM: Zero-Shot Open-Vocabulary Object Navigation via Semantic Maps and Large Language Models Reasoning", 2025 arxiv. [Code]

  • BevSplat: "BevSplat: Resolving Height Ambiguity via Feature-Based Gaussian Primitives for Weakly-Supervised Cross-View Localization", arxiv 2025 (possibly NeurIPS). [Paper]

  • GLEAM: "GLEAM: Learning to Match and Explain in Cross-View Geo-Localization", arxiv 2025. [Paper]

    Follow [Michael Milford], [Gmberton] and [Amar Ali-bey] for more information on Visual Place Recognition; I will add more when I have time.

    Follow [Sarlinpe] and [NAVER] for more information on SfM & map fusion; I will add more when I have time.

    Follow [Yuxiang Sun] for more information on SDMap-aided localization; I will add more when I have time.

    Follow [Yujiao Shi] and [tudelft-iv] for more information on satellite-image-aided localization; I will add more when I have time.

    Follow [Waipang Kwan] for more information on scene-understanding blogs.


WhyTraversability

For all-terrain navigation, because we're tired of filtering ground points...
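
For a concrete picture of the ground filtering in question, a minimal RANSAC plane-fit sketch with Open3D (the scan.pcd filename and the 0.2 m threshold are illustrative assumptions; rough terrain generally needs the learned methods listed below):

    import open3d as o3d

    pcd = o3d.io.read_point_cloud("scan.pcd")  # assumed input scan
    # Fit the dominant plane; on roughly level terrain this is the ground.
    plane, inliers = pcd.segment_plane(distance_threshold=0.2,
                                       ransac_n=3,
                                       num_iterations=1000)
    ground = pcd.select_by_index(inliers)
    obstacles = pcd.select_by_index(inliers, invert=True)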

  • salon: "Off-road driving by learning from interaction and demonstration". [Code] [Project]

  • MonoForce: "MonoForce: Self-supervised Learning of Physics-aware Model for Predicting Robot-terrain Interaction", arXiv 2022. [Paper] [Code]

  • TML: "A Global Traversability Mapping Library easily integratable with any SLAM System". [Code]

  • TMMP: "Bayesian Generalized Kernel Inference for Terrain Traversability Mapping", the 2nd Annual Conference on Robot Learning. [Code]

  • STEPP: "STEPP: Semantic Traversability Estimation using Pose Projected Features", not yet published. [Code]

  • EcSemMap: "Uncertainty-aware Evidential Bayesian Semantic Mapping (EBS)", [Code]

  • LoD: "Learning-on-the-Drive: Self-supervised Adaptive Long-range Perception for High-speed Offroad Driving", [Paper]

  • tadpo: "tadpo", [Code]

  • ROLO-SLAM: "ROLO-SLAM: Rotation-Optimized LiDAR-Only SLAM in Uneven Terrain with Ground Vehicle", [Code]

  • MGGPlanner: "Multi-robot Grid Graph Exploration Planner", [Code]

  • GroundGrid: "GroundGrid: LiDAR Point Cloud Ground Segmentation and Terrain Estimation", [Code]

  • Holistic Fusion: "Holistic Fusion: Task and Setup-agnostic Robot Localization and State Estimation with Factor Graphs", [Code]

    Follow [Jianhao Jiao] for more information on intelligent navigation; I will add more when I have time.


WhyLongTerm

For multi-session mapping and map updating (change detection), plus dynamic-object filtering.
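
Registering sessions into one frame is the first step of any multi-session pipeline; a minimal point-to-plane ICP sketch with Open3D (the filenames, identity initial guess, and 1.0 m correspondence distance are illustrative assumptions; the registration works below handle low overlap and degeneracy far more robustly):

    import numpy as np
    import open3d as o3d

    prior = o3d.io.read_point_cloud("session1_map.pcd")  # assumed prior-session map
    new = o3d.io.read_point_cloud("session2_map.pcd")    # assumed new-session map
    prior.estimate_normals()  # point-to-plane ICP needs target normals

    result = o3d.pipelines.registration.registration_icp(
        new, prior, max_correspondence_distance=1.0, init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    new.transform(result.transformation)  # both sessions now share one frame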

  • LT-Mapper: "Lt-mapper: A modular framework for lidar-based lifelong mapping", 2022 International Conference on Robotics and Automation (ICRA). [Paper] [Code]

  • RoLL: "ROLL: Long-Term Robust LiDAR-based Localization With Temporary Mapping in Changing Environments", arxiv 2025. [Code]

  • Elite: "Ephemerality meets LiDAR-based Lifelong Mapping", arxiv 2025. [Code]

  • SOLiD: "SOLiD: Spatially Organized and Lightweight Global Descriptor for FOV-constrained LiDAR Place Recognition", IEEE Robotics and Automation Letters 2024. [Code]

  • HeLiOS: "HeLiOS: Heterogeneous LiDAR Place Recognition via Overlap-based Learning and Local Spherical Transformer", 2025 International Conference on Robotics and Automation (ICRA). [Code]

  • HHRM: "Lifelong 3D Mapping Framework for Hand-held & Robot-mounted LiDAR Mapping Systems", IEEE Robotics and Automation Letters 2024. [Paper]

  • LiLoc: "LiLoc: Lifelong Localization using Adaptive Submap Joining and Egocentric Factor Graph", 2025 International Conference on Robotics and Automation (ICRA). [Code]

  • KISS-Matcher: "KISS-Matcher: Fast and Robust Point Cloud Registration Revisited", arxiv 2024. [Paper] [Code]

  • G3Reg: "Pyramid Semantic Graph-based Global Point Cloud Registration with Low Overlap", IEEE/RSJ International Conference on Intelligent Robots and Systems 2023. [Code]

  • SG-Reg: "SG-Reg: Generalizable and Efficient Scene Graph Registration", TRO under review. [Code]

  • rko_lio: "A Robust Approach for LiDAR-Inertial Odometry Without Sensor-Specific Modelling", 2025 arxiv. [Code]

  • DCReg: "DCReg: Decoupled Characterization for Efficient Degenerate LiDAR Registration", IJRR under review. [Code]

  • DeepPointMap: "DeepPointMap: Advancing LiDAR SLAM with Unified Neural Descriptors", AAAI Conference on Artificial Intelligence 2024. [Code]

  • GLoc3D: "Global Localization in Large-scale Point Clouds via Roll-pitch-yaw Invariant Place Recognition and Low-overlap Global Registration", IEEE Transactions on Circuits and Systems for Video Technology. [Code]

  • maplab 2.0: "maplab 2.0 – A Modular and Multi-Modal Mapping Framework", IEEE Robotics and Automation Letters. [Code]

  • MR-SLAM: "Disco: Differentiable scan context with orientation", IEEE Robotics and Automation Letters. [Code]

  • MS-Mapping: "MS-Mapping: An Uncertainty-Aware Large-Scale Multi-Session LiDAR Mapping System", arxiv 2024. [Paper] [Code]

  • NF-Atlas: "Multi-Volume Neural Feature Fields for Large Scale LiDAR Mapping", IEEE Robotics and Automation Letters 2023. [Paper] [Code]

  • DiSCo-SLAM: "DiSCo-SLAM: Distributed Scan Context-Enabled Multi-Robot LiDAR SLAM With Two-Stage Global-Local Graph Optimization", IEEE International Conference on Robotics and Automation 2022. [Paper] [Code]

  • imesa: "iMESA: Incremental Distributed Optimization for Collaborative Simultaneous Localization and Mapping", Robotics: Science and Systems 2024. [Paper] [Code]

  • DCL-SLAM: "Swarm-lio: Decentralized swarm lidar-inertial odometry", IEEE International Conference on Robotics and Automation 2023. [Paper] [Code]

  • ROAM: "Riemannian Optimization for Active Mapping with Robot Teams", IEEE Transactions on Robotics 2025. [Project]

  • BlockMap: "Block-Map-Based Localization in Large-Scale Environment", IEEE International Conference on Robotics and Automation 2024. [Paper] [Code]

  • SLIM: "SLIM: Scalable and Lightweight LiDAR Mapping in Urban Environments", arxiv 2025 (possibly TRO). [Paper] [Code]

  • RING#: "RING#: PR-by-PE Global Localization with Roto-translation Equivariant Gram Learning", TRO 2025. [Code]

  • LT-Gaussian: "LT-Gaussian: Long-Term Map Update Using 3D Gaussian Splatting for Autonomous Driving", [Paper]

  • GS-LTS: "GS-LTS: 3D Gaussian Splatting-Based Adaptive Modeling for Long-Term Service Robots", [Paper]

To reduce z-axis drift:

  • Norlab-icp: "A 2-D/3-D mapping library relying on the 'Iterative Closest Point' algorithm", [Code]

  • SuperOdometry: "SuperOdometry: Lightweight LiDAR-inertial Odometry and Mapping", [Code]

  • BUFFER-X: "BUFFER-X: Zero-Shot Point Cloud Registration", ICCV 2025. [Code]

    Follow [Ji Zhang] for more information on robust navigation; I will add more when I have time.

    Follow [Xiang Gao] for more information on SLAM; I will add more when I have time.

    Follow [Tong Qin] for more information on V-SLAM; I will add more when I have time.

    Follow [Gisbi Kim] for more information on map maintenance; I will add more when I have time.

    Follow [Qingwen Zhang] for more information on dynamic-object removal; I will add more when I have time.

    Follow [Zhijian Qiao] for more information on multi-session mapping; I will add more when I have time.

    Engineering:

    Follow [koide] for more information on lifelong LiDAR mapping; I will add more when I have time.

    Follow [Zikang Yuan] for more information on LIO; I will add more when I have time.

    Follow [Xiangcheng Hu] for more information on lifelong LiDAR mapping; I will add more when I have time.

    Follow [Chengwei Zhao] for more information on LiDAR SLAM; I will add more when I have time.

    Follow [Heming Liang] for more information on LiDAR SLAM; I will add more when I have time.


Citation

If you find this repository useful, please consider citing this repo:

@misc{runjtu2024slamrepo,
    title = {awesome-and-novel-works-in-slam},
    author = {Runheng Zuo},
    howpublished = {\url{https://github.com/runjtu/awesome-and-novel-works-in-slam}},
    year = {2024},
    note = "[Online; accessed 04-October-2024]"
}

Star History Chart
