VI-MID: Visual-Inertial Multi-Instance Dynamic SLAM with Object-Level Relocalisation

Authors: Yifei Ren*, Binbin Xu*, Christopher L. Choi, Stefan Leutenegger

Abstract

In this paper, we present a tightly-coupled visual-inertial, object-level, multi-instance dynamic SLAM system. Even in extremely dynamic scenes, it can robustly optimise for the camera pose, velocity and IMU biases, and build a dense, object-level 3D reconstruction of the environment. Thanks to its robust sensor and object tracking, our system can track and reconstruct the geometry, semantics and motion of arbitrary objects by incrementally fusing the associated colour, depth, semantic and foreground-object probabilities into each object model. In addition, when an object is lost or moves outside the camera's field of view, our system can reliably recover its pose upon re-observation. We demonstrate the robustness and accuracy of our method by testing it quantitatively and qualitatively on real-world data sequences.
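
As a rough illustration of the components mentioned above, the following minimal C++ sketch shows one possible form of the per-frame visual-inertial state (pose, velocity, IMU biases) and of the per-object voxel fusion of geometry, colour and foreground-object probability. This is an illustrative simplification under our own assumptions, not the actual VI-MID implementation; the type and function names (ViState, ObjectVoxel, fuseObservation) and the weighted-average fusion scheme are hypothetical.

// Hypothetical sketch only; names and weighting scheme are illustrative,
// not the paper's implementation.
#include <array>

// Visual-inertial state estimated per frame: sensor pose, velocity,
// and IMU biases (gyroscope and accelerometer).
struct ViState {
  std::array<double, 7> T_WS;  // pose: position (3) + unit quaternion (4)
  std::array<double, 3> v_W;   // velocity in the world frame
  std::array<double, 3> b_g;   // gyroscope bias
  std::array<double, 3> b_a;   // accelerometer bias
};

// One voxel of a per-object volumetric model holding fused geometry (TSDF),
// appearance, and a foreground-object probability.
struct ObjectVoxel {
  float tsdf = 0.f;      // truncated signed distance
  float fg_prob = 0.5f;  // probability that the voxel belongs to the object
  std::array<float, 3> rgb{0.f, 0.f, 0.f};
  float weight = 0.f;    // accumulated fusion weight
};

// Weighted running-average fusion of a new depth/colour/mask observation into
// a voxel, in the spirit of standard TSDF fusion (illustrative only).
inline void fuseObservation(ObjectVoxel& v, float tsdf_obs, float fg_obs,
                            const std::array<float, 3>& rgb_obs,
                            float w_obs, float w_max = 100.f) {
  const float w_new = v.weight + w_obs;
  v.tsdf = (v.weight * v.tsdf + w_obs * tsdf_obs) / w_new;
  v.fg_prob = (v.weight * v.fg_prob + w_obs * fg_obs) / w_new;
  for (int i = 0; i < 3; ++i)
    v.rgb[i] = (v.weight * v.rgb[i] + w_obs * rgb_obs[i]) / w_new;
  v.weight = (w_new < w_max) ? w_new : w_max;  // cap the accumulated weight
}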



Published in: International Conference on Intelligent Robots and Systems (IROS), 2022.