MID-Fusion: Octree-based object-level multi-instance dynamic SLAM (bibtex)
by B Xu, W Li, D Tzoumanikas, M Bloesch, A Davison and S Leutenegger
Reference:
MID-Fusion: Octree-based object-level multi-instance dynamic SLAM (B Xu, W Li, D Tzoumanikas, M Bloesch, A Davison and S Leutenegger), In 2019 International Conference on Robotics and Automation (ICRA), 2019.
Bibtex Entry:
@inproceedings{xu2019mid,
 title = {{MID-Fusion}: Octree-based object-level multi-instance dynamic {SLAM}},
 author = {B Xu and W Li and D Tzoumanikas and M Bloesch and A Davison and S Leutenegger},
 booktitle = {2019 International Conference on Robotics and Automation (ICRA)},
 pages = {5231--5237},
 year = {2019},
 organization = {IEEE},
 keywords = {objectlevel},
}

Semantic, Object-level and Dynamic SLAM

Multi-Object and Object-level Dynamic Mapping

In this ongoing work, we are exploring the segmentation and tracking of (rigid) objects into individual submaps. The underlying algorithms rely on instance-level semantic segmentation networks on the one hand, and on geometric and photometric tracking (also used to identify moving objects) as well as volumetric mapping on the other. Works include Fusion++ (Dyson Robotics Lab at Imperial College) and MID-Fusion (SRL at Imperial College).
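To illustrate the per-object volumetric mapping idea, here is a minimal sketch of a per-object submap with weighted-average TSDF fusion. This is a hypothetical, simplified stand-in (a hash-map of voxels rather than the octree used in MID-Fusion), and the class and method names are illustrative, not the actual API:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ObjectSubmap:
    """One per-object volumetric submap (illustrative sketch, not the
    MID-Fusion API): a voxel hash map standing in for an octree TSDF."""
    object_id: int
    pose: np.ndarray = field(default_factory=lambda: np.eye(4))  # object-to-world
    volume: dict = field(default_factory=dict)  # voxel index -> (tsdf, weight)

    def integrate(self, points, sdf_values, voxel_size=0.05):
        """Standard weighted-average TSDF fusion of new surface measurements,
        expressed in the object's own coordinate frame."""
        for p, sdf in zip(points, sdf_values):
            key = tuple((np.asarray(p) // voxel_size).astype(int))
            tsdf, w = self.volume.get(key, (0.0, 0.0))
            # running weighted average of signed-distance observations
            self.volume[key] = ((tsdf * w + sdf) / (w + 1.0), w + 1.0)

# Two observations of the same voxel average to (0.1 + 0.3) / 2 = 0.2
sub = ObjectSubmap(object_id=1)
sub.integrate([np.array([0.01, 0.01, 0.01])], [0.1])
sub.integrate([np.array([0.01, 0.01, 0.01])], [0.3])
```

Keeping one such submap per detected object instance is what allows each object to be tracked and reconstructed independently of the static background.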

Current collaborators:

Former collaborators:

SemanticFusion (Dyson Robotics Lab at Imperial College)

In SemanticFusion, we take a real-time capable dense RGB-D SLAM system, ElasticFusion, and add a semantic layer to it. In parallel to the localisation and mapping process, a CNN takes the same inputs (colour image and depth image) and outputs semantic segmentation predictions, which we aggregate in the map by means of Bayesian fusion. The work is significant for two reasons. First, such a real-time semantic mapping framework will play a core enabling role for future robots to perform more abstract reasoning, bridging the gap with AI and supporting intuitive user interaction. Second, we showed experimentally that using the map for semantic data association across many frames in fact boosts the accuracy of 2D semantic segmentation when compared to single-view predictions.
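The Bayesian fusion step above can be sketched as a recursive per-element update: the class distribution stored in the map is multiplied by each new per-pixel CNN prediction and renormalised. A minimal sketch (function name and toy numbers are illustrative, not SemanticFusion's actual code):

```python
import numpy as np

def bayesian_fuse(map_probs, frame_probs):
    """Recursive Bayesian update of per-element class distributions:
    multiply the stored distribution by the new CNN prediction and
    renormalise. Shapes: (N, C) = map elements x classes."""
    fused = map_probs * frame_probs
    return fused / fused.sum(axis=1, keepdims=True)

# Toy example: one map element, three classes. Agreement on class 0
# across views sharpens the estimate beyond either single prediction.
map_p = np.array([[0.5, 0.3, 0.2]])
frame_p = np.array([[0.6, 0.3, 0.1]])
fused = bayesian_fuse(map_p, frame_p)
```

Because the map provides the data association, predictions from many viewpoints accumulate on the same surface element, which is the mechanism behind the improvement over single-view segmentation.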

Former collaborators:

Datasets

Deep learning approaches are naturally data-hungry. We are therefore working on a number of datasets in which imagery is synthetically generated through realistic rendering. Furthermore, we can use these datasets to evaluate SLAM algorithms (pose and structure), as ground-truth trajectories and maps are available, along with complementary sensing modalities such as IMU measurements.

See Software & Datasets for downloads.