Machine Learning for Robotics
TUM School of Computation, Information and Technology
Technical University of Munich

Informatik IX

Professorship for Machine Learning for Robotics

Smart Robotics Lab

Boltzmannstrasse 3
85748 Garching
info@srl.cit.tum.de



The Chair of Applied Mechanics (Prof. Rixen) at TUM has a humanoid robot (see Figure) and a bipedal walker. Ideally, these can perceive the potentially uneven terrain in front of them in order to walk over it safely. Within this project, we would like to explore incorporating locally perceived elevation maps as (additional) inputs to learned gait control policies.
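As a rough illustration of what such a policy could look like, here is a minimal sketch (not the chair's actual controller) of a network that fuses a local elevation-map patch with proprioceptive state; all dimensions and the observation layout are illustrative assumptions.

<code python>
# Hedged sketch: an elevation-map-conditioned gait policy.
# All sizes (48-D proprioception, 32x32 map, 12 actions) are assumptions.
import torch
import torch.nn as nn

class ElevationGaitPolicy(nn.Module):
    def __init__(self, proprio_dim=48, map_size=32, action_dim=12):
        super().__init__()
        # Small CNN encoder for the (1, map_size, map_size) elevation patch.
        self.map_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ELU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ELU(),
            nn.Flatten(),
            nn.Linear(32 * (map_size // 4) ** 2, 64), nn.ELU(),
        )
        # MLP head fuses terrain features with proprioception
        # (joint angles/velocities, base orientation, etc.).
        self.head = nn.Sequential(
            nn.Linear(64 + proprio_dim, 128), nn.ELU(),
            nn.Linear(128, action_dim),  # e.g. target joint positions
        )

    def forward(self, elevation_map, proprio):
        z = self.map_encoder(elevation_map)
        return self.head(torch.cat([z, proprio], dim=-1))

policy = ElevationGaitPolicy()
action = policy(torch.zeros(1, 1, 32, 32), torch.zeros(1, 48))
</code>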
  

===== Dynamic Neural Object Reconstruction in Learned Dense SLAM =====
{{ :teaching:projects:dynamicobjectdroidslam.png?300|Image taken from [Ioan Andrei Barsan et al. 2018].}}
**Supervisor(s) and Contact**
  * [[members:zuox|Dr. Xingxing Zuo]]

**Context**

Learning-based SLAM has made significant progress in recent years, owing to the power of deep neural networks. However, most existing methods focus on static scenes. The pose and shape of dynamic objects are also critical for understanding the scene and benefit subsequent automation tasks. This project focuses on estimating the pose and shape of dynamic objects in a learned dense SLAM system. Alongside the recurrent iterative updates of camera pose and pixel-wise depth, we aim to also optimize the pose and shape of each object using implicit neural representations.
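To make the idea concrete, below is a hedged sketch of one possible shape-and-pose refinement step: a DeepSDF-style decoder (assumed pretrained on a shape category) maps a latent shape code and a 3D point to a signed distance, and the object's code and translation are refined so that observed surface points lie on the zero level set. Rotation and the coupling to the SLAM updates are omitted for brevity; the project's actual formulation may differ.

<code python>
# Hedged sketch: jointly refine a dynamic object's pose and an implicit,
# latent-code-conditioned shape. The decoder and its training are assumed.
import torch
import torch.nn as nn

class SDFDecoder(nn.Module):
    """Maps (latent shape code, 3D point) -> signed distance (DeepSDF-style)."""
    def __init__(self, code_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(code_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, code, pts):
        return self.net(torch.cat([code.expand(pts.shape[0], -1), pts], dim=-1))

decoder = SDFDecoder()                           # assume pretrained
code = torch.zeros(1, 64, requires_grad=True)    # latent shape code
t = torch.zeros(3, requires_grad=True)           # object translation
# (a full system would optimise the complete SE(3) pose)
pts_world = torch.randn(500, 3)                  # surface points from depth/flow

opt = torch.optim.Adam([code, t], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    sdf = decoder(code, pts_world - t)           # points in the object frame
    loss = sdf.abs().mean()                      # surface points: SDF == 0
    loss.backward()
    opt.step()
</code>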

===== Real-time 3D Completion and Semantic Reconstruction =====
{{ :teaching:projects:slamcompleiton.png?300|Image taken from [Shun-Cheng Wu et al. 2020].}}
**Supervisor(s) and Contact**
  * [[members:zuox|Dr. Xingxing Zuo]]
  * [[members:schaefer|Simon Schaefer]]

**Context**

This project focuses on 3D semantic reconstruction with an RGB-D camera. The depth sensors in RGB-D cameras usually produce invalid depth measurements on shiny, glossy, bright, or distant surfaces. Moreover, it is difficult to move the camera so that it covers the whole scene for a complete, high-quality reconstruction. To this end, we aim to use deep neural networks to learn prior knowledge of different scenes and to complete the missing structures incrementally in a real-time SLAM system.
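As an illustration of the learned completion component, the following is a minimal sketch, assuming a simple encoder-decoder that takes RGB plus the (partially invalid) sensor depth and predicts dense depth; a real system would be trained on large scene datasets and run incrementally inside the SLAM loop.

<code python>
# Hedged sketch of learned depth completion; architecture is an assumption.
import torch
import torch.nn as nn

class DepthCompletionNet(nn.Module):
    """Takes RGB (3ch) + sparse/invalid depth (1ch), predicts dense depth."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgb, sparse_depth):
        x = torch.cat([rgb, sparse_depth], dim=1)
        return self.decoder(self.encoder(x))

net = DepthCompletionNet()
rgb = torch.rand(1, 3, 240, 320)
depth = torch.rand(1, 1, 240, 320)   # zeros where the sensor failed
dense = net(rgb, depth)              # -> (1, 1, 240, 320)
</code>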

===== LiDAR-Inertial-Camera Volumetric Dense Mapping =====
{{ :teaching:projects:licmapping.png?300|Image taken from [Jiarong Lin et al. 2021].}}
**Supervisor(s) and Contact**
  * [[members:zuox|Dr. Xingxing Zuo]]
  * [[members:boche|Simon Boche]]

**Context**

3D LiDAR, IMU, and camera each have their own strengths and shortcomings for localization and mapping tasks. This project aims to develop a real-time LiDAR-Inertial-Camera mapping system that exploits the best of each sensor modality for robust mapping in challenging scenarios such as highly dynamic ego-motion, poor illumination, and adverse weather conditions. Building on the robust and efficient filter-based LIC-Fusion odometry, we aim to develop a volumetric mapping back-end for high-quality reconstruction.
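The volumetric back-end could, for instance, follow classic TSDF fusion. Below is a purely illustrative sketch that integrates LiDAR points into a truncated signed distance grid given poses from the odometry front-end (LIC-Fusion is assumed, not implemented here); grid size, resolution, and the weighting scheme are arbitrary choices.

<code python>
# Hedged sketch: naive TSDF integration of LiDAR points in world frame.
import numpy as np

VOXEL = 0.05          # 5 cm voxels (assumption)
TRUNC = 0.2           # truncation distance in metres (assumption)
DIM = 200             # 10 m cube centred at the origin
tsdf = np.ones((DIM, DIM, DIM), dtype=np.float32)
weight = np.zeros_like(tsdf)

def integrate(points_world, sensor_origin):
    """Update the TSDF along each LiDAR ray near the measured surface."""
    for p in points_world:
        ray = p - sensor_origin
        depth = np.linalg.norm(ray)
        direction = ray / depth
        # Sample voxels around the surface within the truncation band.
        for d in np.arange(depth - TRUNC, depth + TRUNC, VOXEL):
            v = ((sensor_origin + d * direction) / VOXEL).astype(int) + DIM // 2
            if np.any(v < 0) or np.any(v >= DIM):
                continue
            sdf = np.clip((depth - d) / TRUNC, -1.0, 1.0)
            i, j, k = v
            w = weight[i, j, k]
            # Running weighted average, as in standard TSDF fusion.
            tsdf[i, j, k] = (tsdf[i, j, k] * w + sdf) / (w + 1.0)
            weight[i, j, k] = w + 1.0

# Usage: points already transformed to world frame with the odometry pose.
integrate(np.random.rand(100, 3) * 4.0, sensor_origin=np.zeros(3))
</code>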

===== Learned plane-based visual-inertial SLAM and AR applications =====
{{ :teaching:projects:visual_inertial_pslam.png?300| Image taken from [Shichao Yang et al. 2018]. }}
**Supervisor(s) and Contact**
  * [[members:zuox|Dr. Xingxing Zuo]]
  * [[members:leuteneg|Prof. Dr. Stefan Leutenegger]]

**Context**

Monocular SLAM suffers from scale ambiguity, while a visual-inertial system, aided by an IMU, can estimate metric 6-DoF poses. Since structural planes are informative and essential for AR (augmented reality) applications, it is worth recovering 3D planes to build the layout of the environment. This project aims to develop a monocular visual-inertial SLAM system that leverages deep neural networks to detect and predict 3D planes, and incorporates these planes into the conventional geometric bundle adjustment.
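One way planes can enter the bundle adjustment is through a point-on-plane residual n^T X + d for landmarks assigned to a detected plane, optimized jointly with the usual reprojection and IMU terms. The toy sketch below shows only this plane term; everything else (plane detection, data association, the remaining BA residuals) is assumed.

<code python>
# Hedged sketch: a point-on-plane residual inside a toy optimisation.
import torch

def plane_residuals(points_w, normal, dist):
    """Signed distances of 3D landmarks to the plane {X : n^T X + d = 0}."""
    n = normal / normal.norm()        # keep the normal unit-length
    return points_w @ n + dist

# Toy joint refinement of plane parameters and landmark positions.
points = torch.randn(50, 3, requires_grad=True)
normal = torch.tensor([0.0, 0.0, 1.0], requires_grad=True)
dist = torch.tensor(0.1, requires_grad=True)

opt = torch.optim.Adam([points, normal, dist], lr=1e-2)
for _ in range(50):
    opt.zero_grad()
    r = plane_residuals(points, normal, dist)
    loss = (r ** 2).mean()            # a real BA adds reprojection + IMU terms
    loss.backward()
    opt.step()
</code>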

===== Dynamic Object-level SLAM in Neural Radiance Field =====
{{ :teaching:projects:objectslam_nerf.png?300|Image taken from [Julian Ost et al. 2021].}}
**Supervisor(s) and Contact**
  * [[members:zuox|Dr. Xingxing Zuo]]
  * [[members: |Binbin Xu]]

**Context**

Object-level SLAM has attracted a lot of attention and made tremendous progress recently; each object in the scene can be represented in an individual sub-map. The Smart Robotics Lab has developed one of the first dynamic object-level SLAM systems that can simultaneously segment, track, and reconstruct both static and moving objects in the scene. More recently, neural radiance fields have caught the attention of the vision community and have been adopted in object-level mapping frameworks. In such work, however, the object and camera poses are assumed to be given; the lack of a tightly coupled tracking component prevents these approaches from being used in real-world applications.

===== Implicit Neural SLAM with NeRF =====
{{ :teaching:projects:nerf.png?300|Image taken from [Zirui Wang et al. 2021].}}
**Supervisor(s) and Contact**
  * [[members:zuox|Dr. Xingxing Zuo]]
  * [[members: |Binbin Xu]]

**Context**

Recently, the neural radiance field (NeRF) has caught the attention of the vision community, and many extensions have been proposed; among them, iMAP uses this implicit map representation in a SLAM system, but requires depth input to perform tracking and mapping. More recently, DROID-SLAM has introduced a recurrent iterative update scheme that achieves reliable tracking and semi-dense mapping with a monocular camera. In this project, we would like to explore a tight integration of NeRF and DROID-SLAM to achieve a dense monocular SLAM system, ideally working even in dynamic environments.
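For context, the NeRF side of such an integration boils down to differentiable volume rendering along camera rays, which yields both a colour and a depth estimate that could be compared against DROID-SLAM's per-pixel predictions. The sketch below uses an untrained stand-in MLP and illustrative sampling parameters; it is not the project's actual pipeline.

<code python>
# Hedged sketch: NeRF-style volume rendering of colour and depth for one ray.
import torch
import torch.nn as nn

field = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 4))  # -> (rgb, sigma)

def render_ray(origin, direction, n_samples=64, near=0.1, far=4.0):
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction          # sample points along the ray
    out = field(pts)
    rgb, sigma = torch.sigmoid(out[:, :3]), torch.relu(out[:, 3])
    delta = t[1] - t[0]
    alpha = 1.0 - torch.exp(-sigma * delta)        # per-sample opacity
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]
    w = alpha * trans                              # rendering weights
    colour = (w[:, None] * rgb).sum(0)
    depth = (w * t).sum(0)                         # rendered depth, comparable
    return colour, depth                           # to SLAM's per-pixel depth

c, d = render_ray(torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
</code>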
  
