InteriorNet: Mega-scale multi-sensor photo-realistic indoor scenes dataset
by W Li, S Saeedi, J McCormac, R Clark, D Tzoumanikas, Q Ye, Y Huang, R Tang and S Leutenegger
Reference:
InteriorNet: Mega-scale multi-sensor photo-realistic indoor scenes dataset (W Li, S Saeedi, J McCormac, R Clark, D Tzoumanikas, Q Ye, Y Huang, R Tang and S Leutenegger), in British Machine Vision Conference (BMVC), 2018.
Bibtex Entry:
@inproceedings{li2018interiornet,
 title = {InteriorNet: Mega-scale multi-sensor photo-realistic indoor scenes dataset},
 author = {W Li and S Saeedi and J McCormac and R Clark and D Tzoumanikas and Q Ye and Y Huang and R Tang and S Leutenegger},
 booktitle = {British Machine Vision Conference (BMVC)},
 year = {2018},
}

Software & Datasets

Supereight (Imperial College)

We release supereight, our reference implementation of Efficient Octree-Based Volumetric SLAM Supporting Signed-Distance and Occupancy Mapping (see publications below), under the BSD 3-clause license:

See https://bitbucket.org/smartroboticslab/supereight-public
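
To illustrate the signed-distance side of such a volumetric map, here is a minimal sketch of a textbook truncated signed-distance (TSDF) voxel update using a weighted running average. The struct and the integrate helper are generic illustrations chosen here, not supereight's actual API (C++17):

    #include <algorithm>

    // Generic TSDF voxel -- an illustration, not supereight's actual data type.
    struct TsdfVoxel {
      float sdf = 0.0f;    // truncated signed distance to the nearest surface [m]
      float weight = 0.0f; // accumulated observation weight
    };

    // Fuse one new signed-distance observation into a voxel (hypothetical helper).
    // 'trunc' is the truncation band; capping the weight keeps the map adaptive
    // to scene changes instead of freezing after many observations.
    void integrate(TsdfVoxel& v, float measured_sdf, float trunc, float max_weight) {
      const float tsdf = std::clamp(measured_sdf, -trunc, trunc); // truncate
      v.sdf = (v.sdf * v.weight + tsdf) / (v.weight + 1.0f);      // running average
      v.weight = std::min(v.weight + 1.0f, max_weight);
    }

    int main() {
      TsdfVoxel v;
      integrate(v, 0.04f, 0.1f, 100.0f); // observation 4 cm in front of the surface
      return 0;
    }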

InteriorNet (Imperial College)

A dataset containing 20M images, created by the following pipeline:

(A) We collect around 1 million CAD models provided by world-leading furniture manufacturers; these models have been used in real-world production.
(B) Based on those models, around 1,100 professional designers create around 22 million interior layouts; most of these layouts have been used in real-world decoration projects.
(C) For each layout, we generate a number of configurations representing different random lightings and simulating scene changes over time in daily life.
(D) We provide an interactive simulator (ViSim) to help create ground-truth IMU and event data, as well as monocular or stereo camera trajectories, including hand-drawn, random-walk and neural-network-based realistic trajectories (a sketch of deriving IMU ground truth from a trajectory follows below).
(E) We release all supported image sequences and ground truth.

See https://interiornet.org/
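
Relating to step (D) above: given any ground-truth pose trajectory, ideal IMU measurements can be derived from it by differentiation. The following is a minimal sketch assuming Eigen, a uniformly sampled trajectory, and simple finite differences; it is illustrative only, not ViSim's actual implementation:

    #include <Eigen/Dense>

    // One pose sample of a (hypothetical) rendered camera trajectory.
    struct PoseSample {
      double t;                 // timestamp [s]
      Eigen::Quaterniond q_WB;  // body-to-world orientation
      Eigen::Vector3d p_W;      // position in the world frame [m]
    };

    // Angular rate in the body frame from two neighbouring orientations, using
    // the first-order approximation  omega ~= 2 * vec(q_a^-1 * q_b) / dt.
    Eigen::Vector3d gyro(const PoseSample& a, const PoseSample& b) {
      const double dt = b.t - a.t;
      const Eigen::Quaterniond dq = a.q_WB.conjugate() * b.q_WB;
      return 2.0 * dq.vec() / dt;
    }

    // Accelerometer: specific force = body-frame (linear acceleration - gravity),
    // with the world-frame acceleration from a central second difference.
    Eigen::Vector3d accel(const PoseSample& a, const PoseSample& b, const PoseSample& c) {
      const double dt = (c.t - a.t) / 2.0;
      const Eigen::Vector3d acc_W = (c.p_W - 2.0 * b.p_W + a.p_W) / (dt * dt);
      const Eigen::Vector3d g_W(0.0, 0.0, -9.81);
      return b.q_WB.conjugate() * (acc_W - g_W);
    }

    int main() {
      PoseSample a{0.00, Eigen::Quaterniond::Identity(), Eigen::Vector3d(0.0, 0.0, 0.0)};
      PoseSample b{0.01, Eigen::Quaterniond::Identity(), Eigen::Vector3d(0.0, 0.0, 0.0005)};
      PoseSample c{0.02, Eigen::Quaterniond::Identity(), Eigen::Vector3d(0.0, 0.0, 0.0020)};
      const Eigen::Vector3d omega = gyro(a, b); // [rad/s]
      const Eigen::Vector3d f = accel(a, b, c); // specific force [m/s^2]
      return 0;
    }

A real generator would additionally apply bias and noise models matching the specification of the IMU being simulated.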

OKVIS (ETH Zurich)

We are pleased to announce the open-source release of OKVIS: Open Keyframe-based Visual-Inertial SLAM under the terms of the BSD 3-clause license. OKVIS tracks the motion of a sensor assembly consisting of an Inertial Measurement Unit (IMU) and N cameras (tested: monocular, stereo and four-camera setups) and sparsely reconstructs the scene. This is the authors' implementation of the publications below. There is currently no loop-closure detection or optimisation included, but we are working on it.
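
Conceptually, a tightly-coupled estimator of this kind minimises a single nonlinear least-squares objective that combines visual and inertial terms. A sketch of such a cost, with the notation chosen here for illustration:

    J(\mathbf{x}) \;=\; \sum_{i=1}^{I} \sum_{k=1}^{K} \sum_{j \in \mathcal{J}(i,k)}
      {\mathbf{e}_r^{i,j,k}}^{\!\top} \mathbf{W}_r^{i,j,k} \, \mathbf{e}_r^{i,j,k}
      \;+\; \sum_{k=1}^{K-1} {\mathbf{e}_s^{k}}^{\!\top} \mathbf{W}_s^{k} \, \mathbf{e}_s^{k}

Here e_r are the reprojection errors of landmark j seen by camera i at keyframe k, e_s the IMU error terms linking successive keyframes, and the W matrices the corresponding information (inverse-covariance) weights; the optimisation runs over a bounded window of keyframes to keep the problem real-time.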

Copyright © 2016, Autonomous Systems Lab / ETH Zurich. Software authors and contributors: Stefan Leutenegger, Andreas Forster, Paul Furgale, Pascal Gohl, and Simon Lynen.

To obtain the ROS version, follow the instructions at http://ethz-asl.github.io/okvis_ros/. This version is ready to be used with a Skybotix VI-Sensor or to process ROS bags.

We also provide a non-ROS version for use as a generic CMake library, which includes some minimal examples for processing datasets: http://ethz-asl.github.io/okvis/

BRISK 2 (ETH Zurich and Imperial College)

NEWS: BRISK version 2, with shorter descriptors, higher speed and OpenCV 3 compatibility, is available here: brisk-2.0.2.zip

This is the authors' implementation of BRISK: Binary Robust Invariant Scalable Keypoints. Various (partly unpublished) extensions are provided. In particular, the default descriptor consists of 48 bytes instead of 64.

Note that the codebase provided here is free of charge and comes without any warranty; this is bleeding-edge research software, released under the 3-clause BSD license (see the LICENSE file). Supported operating systems: Linux or Mac OS X, tested on Ubuntu 14.04 and El Capitan. Vector instructions (SSE2 and SSSE3, or NEON) must be available. The library depends on OpenCV 2.4 or newer; OpenCV 3 is compatible but not extensively tested, and with it the demo application is somewhat limited in functionality. See README.md for instructions on how to build and use the library and the demo application.
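
For a quick impression of typical BRISK usage, here is a minimal detection-and-matching example using OpenCV's bundled BRISK reimplementation (cv::BRISK); note that OpenCV's version differs in details (e.g. descriptor length) from the library released here, so this is illustrative rather than a demo of this codebase:

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main(int argc, char** argv) {
      if (argc < 3) return 1;
      const cv::Mat img1 = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
      const cv::Mat img2 = cv::imread(argv[2], cv::IMREAD_GRAYSCALE);
      if (img1.empty() || img2.empty()) return 1;

      // Detect keypoints and extract binary descriptors in both images.
      cv::Ptr<cv::BRISK> brisk = cv::BRISK::create();
      std::vector<cv::KeyPoint> kp1, kp2;
      cv::Mat desc1, desc2;
      brisk->detectAndCompute(img1, cv::noArray(), kp1, desc1);
      brisk->detectAndCompute(img2, cv::noArray(), kp2, desc2);

      // Binary descriptors are compared with the Hamming distance.
      cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
      std::vector<cv::DMatch> matches;
      matcher.match(desc1, desc2, matches);
      return 0;
    }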

The original BRISK (ETH Zurich), the authors' implementation of the ICCV'11 paper, is still available: brisk.zip. It requires OpenCV 2.1-2.3.

Software and Datasets by the Dyson Robotics Lab at Imperial College

We were involved in the development and release of the dense SLAM system ElasticFusion, the semantic SLAM system SemanticFusion, and datasets such as SceneNet RGB-D. They are all available here: http://www.imperial.ac.uk/dyson-robotics-lab/downloads/