Visual-Inertial Multi-Instance Dynamic SLAM with Object-Level Relocalisation
by Y. Ren, B. Xu, C. L. Choi and S. Leutenegger
Reference:
Visual-Inertial Multi-Instance Dynamic SLAM with Object-Level Relocalisation (Y. Ren, B. Xu, C. L. Choi and S. Leutenegger), In International Conference on Intelligent Robots and Systems (IROS), 2022. ([video] [project page])
Bibtex Entry:
@inproceedings{ren2020vimid,
 title = {Visual-Inertial Multi-Instance Dynamic SLAM with Object-Level Relocalisation},
 author = {Y. Ren and B. Xu and C. L. Choi and S. Leutenegger},
 year = {2022},
 pdf = {https://arxiv.org/pdf/2208.04274.pdf},
 keywords = {semanticslam, vslam},
 booktitle = {International Conference on Intelligent Robots and Systems (IROS)},
}

VI-MID: Visual-Inertial Multi-Instance Dynamic SLAM with Object-Level Relocalisation

Authors: Yifei Ren*, Binbin Xu*, Christopher L. Choi, Stefan Leutenegger

Abstract

In this paper, we present a tightly-coupled visual-inertial, object-level, multi-instance dynamic SLAM system. Even in extremely dynamic scenes, it can robustly optimise the camera pose, velocity, and IMU biases while building a dense, object-level 3D reconstruction of the environment. Thanks to its robust sensor and object tracking, our system can track and reconstruct the geometry, semantics, and motion of arbitrary objects by incrementally fusing associated colour, depth, semantic, and foreground-object probabilities into each object model. In addition, when an object is lost or moves outside the camera's field of view, our system can reliably recover its pose upon re-observation. We demonstrate the robustness and accuracy of our method through quantitative and qualitative evaluation on real-world data sequences.
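
To make the abstract's per-object bookkeeping concrete, below is a minimal conceptual sketch in Python of what such an object model could look like: incremental fusion of semantic and foreground-probability observations, and pose recovery on re-observation. The class, method names, and the simple running-average fusion rule are illustrative assumptions for this sketch only; VI-MID itself fuses observations into dense volumetric object models and is not implemented this way.

# Conceptual sketch (not the authors' code): a minimal per-object model that
# incrementally fuses per-frame semantic/foreground evidence and re-anchors
# the object's stored pose upon re-observation instead of duplicating it.
import numpy as np

class ObjectModel:
    def __init__(self, instance_id, T_world_object):
        self.instance_id = instance_id
        self.T_world_object = T_world_object  # 4x4 SE(3) pose, world <- object
        self.fused_frames = 0
        self.semantic_probs = None            # running class-probability estimate
        self.foreground_prob = 0.5            # prior: uncertain foreground/background
        self.lost = False                     # set once tracking of the object fails

    def fuse(self, semantic_probs, foreground_prob):
        """Fuse one frame's evidence; a running average stands in here for the
        weighted volumetric fusion used by a real dense SLAM system."""
        n = self.fused_frames
        obs = np.asarray(semantic_probs, dtype=float)
        if self.semantic_probs is None:
            self.semantic_probs = obs
        else:
            self.semantic_probs = (n * self.semantic_probs + obs) / (n + 1)
        self.foreground_prob = (n * self.foreground_prob + foreground_prob) / (n + 1)
        self.fused_frames = n + 1

    def relocalise(self, T_world_object_measured):
        """On re-observation of a lost object, recover its pose by re-anchoring
        the existing model at the newly estimated pose."""
        self.T_world_object = T_world_object_measured
        self.lost = False

A short usage example under the same assumptions:

obj = ObjectModel(instance_id=7, T_world_object=np.eye(4))
obj.fuse(semantic_probs=[0.1, 0.8, 0.1], foreground_prob=0.9)
obj.lost = True             # object moved outside the field of view
obj.relocalise(np.eye(4))   # pose recovered upon re-observation

The design point this sketch illustrates is that keeping one persistent model per object instance, rather than spawning a new one after tracking loss, is what allows accumulated geometry and semantics to be reused once the object is re-observed.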