The Chair of Applied Mechanics (Prof. Rixen) at TUM has a humanoid robot (see Figure) and a bipedal walker. Ideally, these can perceive the potentially uneven terrain in front of them in order to walk over it safely. Within this project, we would like to explore incorporating locally perceived elevation maps as (additional) inputs to learned gait control policies.
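
As an illustration of what "elevation maps as additional policy inputs" could look like, here is a minimal PyTorch sketch; all network sizes, names, and input shapes are our own assumptions, not part of the project description. The policy encodes a robot-centric elevation patch with a small CNN and concatenates the embedding with the proprioceptive state.

<code python>
# Hypothetical sketch: gait policy conditioned on a local elevation map.
# Dimensions (48-D proprioception, 32x32 map, 12 joint targets) are assumptions.
import torch
import torch.nn as nn

class ElevationConditionedPolicy(nn.Module):
    def __init__(self, proprio_dim=48, map_size=32, action_dim=12):
        super().__init__()
        # CNN encoder for the robot-centric elevation map patch.
        self.map_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (map_size // 4) ** 2, 64), nn.ReLU(),
        )
        # MLP head over the concatenated [proprioception, map embedding].
        self.head = nn.Sequential(
            nn.Linear(proprio_dim + 64, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, proprio, elevation_map):
        z = self.map_encoder(elevation_map)                # (B, 64)
        return self.head(torch.cat([proprio, z], dim=-1))  # joint targets

policy = ElevationConditionedPolicy()
action = policy(torch.randn(1, 48), torch.randn(1, 1, 32, 32))  # (1, 12)
</code>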

===== Dynamic Neural Object Reconstruction in Learned Dense SLAM =====
{{ :
**Supervisor(s) and Contact**
  * [[members:

**Context**

Learning-based SLAM has made significant progress in recent years, driven by the power of deep neural networks.
- | |||
- | ===== Real-time 3D Completion and Semantic Reconstruction ===== | ||
- | {{ : | ||
- | **Supervisor(s) and Contact** | ||
- | * [[members: | ||
- | * [[members: | ||
- | |||
- | **Context** | ||
- | |||
This project focuses on 3D semantic reconstruction using an RGB-D camera. The depth sensors in RGB-D cameras typically return invalid depth measurements on shiny, glossy, bright, or distant surfaces. Moreover, it is difficult to move the camera so that it covers the whole scene for a complete, detailed reconstruction. To this end, we aim to use deep neural networks to learn prior knowledge of different scenes and to complete the missing structures incrementally in a real-time SLAM system.
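
To make the idea concrete, below is a minimal PyTorch sketch of learned depth completion; the network, its size, and the zero-means-invalid convention are illustrative assumptions. Invalid depth pixels are masked out, and the network predicts dense depth from RGB plus the partial measurement.

<code python>
# Hypothetical sketch: depth completion from RGB + partial depth,
# masking out invalid (zero) depth readings.
import torch
import torch.nn as nn

class DepthCompletionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Input channels: RGB (3) + measured depth (1) + validity mask (1).
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, rgb, depth):
        valid = (depth > 0).float()          # invalid pixels stored as 0
        x = torch.cat([rgb, depth * valid, valid], dim=1)
        return self.net(x)                   # dense depth prediction

net = DepthCompletionNet()
dense = net(torch.rand(1, 3, 120, 160), torch.rand(1, 1, 120, 160))
</code>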
- | |||
- | ===== LiDAR-Inertial-Camera Volumetric Dense Mapping ===== | ||
- | {{ : | ||
- | **Supervisor(s) and Contact** | ||
- | * [[members: | ||
- | * [[members: | ||
- | |||
- | **Context** | ||
- | |||
3D LiDAR, IMU, and camera each have their own strengths and shortcomings for localization and mapping tasks. This project aims to develop a real-time LiDAR-inertial-camera mapping system that exploits the best of each sensor modality, for robust mapping in challenging scenarios such as highly dynamic ego-motion and poor illumination.
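
As one concrete ingredient of such a system, here is a small NumPy sketch of a per-voxel truncated signed distance (TSDF) update with per-sensor confidence weights; the function and the weight values are our own illustrative assumptions. Weighting observations differently is one simple way to let each modality contribute where it is most reliable.

<code python>
# Hypothetical sketch: weighted running-average TSDF update for one voxel.
import numpy as np

def tsdf_update(tsdf, weight, sdf_obs, w_obs, trunc=0.1):
    """Fuse an observed signed distance into a voxel's running average."""
    d = np.clip(sdf_obs, -trunc, trunc)  # truncate far-from-surface values
    new_weight = weight + w_obs
    new_tsdf = (tsdf * weight + d * w_obs) / max(new_weight, 1e-9)
    return new_tsdf, new_weight

# E.g. trust a LiDAR return (w=1.0) more than a noisy camera depth (w=0.3).
tsdf, w = 0.0, 0.0
tsdf, w = tsdf_update(tsdf, w, 0.04, w_obs=1.0)  # LiDAR observation
tsdf, w = tsdf_update(tsdf, w, 0.08, w_obs=0.3)  # camera depth observation
</code>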
- | |||
- | ===== Learned plane-based visual-inertial SLAM and AR applications ===== | ||
- | {{ : | ||
- | **Supervisor(s) and Contact** | ||
- | * [[members: | ||
- | * [[members: | ||
- | |||
- | **Context** | ||
- | |||
A monocular SLAM system is agnostic to metric scale, while a visual-inertial system, with the aid of an IMU, can estimate metric 6DoF poses. Structural planes are informative and essential in AR (augmented reality) applications, which motivates a learned plane-based visual-inertial SLAM system.
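
To illustrate how plane landmarks can enter the estimator, here is a tiny NumPy sketch of a point-to-plane residual; the parameterisation n·x + d = 0 and the function name are illustrative assumptions. In a factor graph, such residuals would constrain both the plane parameters and the camera poses.

<code python>
# Hypothetical sketch: residuals of 3D points against a plane landmark.
import numpy as np

def point_to_plane_residuals(points_w, n, d):
    """Signed distances of world-frame points to the plane n.x + d = 0."""
    n = n / np.linalg.norm(n)  # keep the normal unit-length
    return points_w @ n + d

pts = np.array([[0.0, 0.0, 1.0],
                [1.0, 2.0, 1.1]])
r = point_to_plane_residuals(pts, n=np.array([0.0, 0.0, 1.0]), d=-1.0)
# r ~ [0.0, 0.1]: the optimiser drives these residuals toward zero.
</code>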
- | |||
- | ===== Dynamic Object-level SLAM in Neural Radiance Field ===== | ||
- | {{ : | ||
- | **Supervisor(s) and Contact** | ||
- | * [[members: | ||
- | * [[members: |Binbin Xu]] | ||
- | |||
- | **Context** | ||
- | |||
Object-level SLAM has attracted a lot of attention and made tremendous progress recently; each object in the scene can be represented as an individual sub-map. The Smart Robotics Lab has developed one of the first dynamic object-level SLAM systems that can simultaneously segment, track, and reconstruct both static and moving objects in the scene.
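
As a sketch of the underlying data structure (names, the TSDF resolution, and the interface are our own assumptions), each object can carry its own pose and volumetric sub-map, so a moving object is reconstructed in its local frame while the background stays static.

<code python>
# Hypothetical sketch: a per-object sub-map with its own pose and volume.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ObjectSubMap:
    obj_id: int
    # Pose of the object frame in the world (4x4 homogeneous transform).
    T_world_object: np.ndarray = field(default_factory=lambda: np.eye(4))
    # Small volumetric model kept per object (resolution is an assumption).
    tsdf: np.ndarray = field(default_factory=lambda: np.zeros((64, 64, 64)))

    def transform_to_object(self, points_w):
        """Bring world-frame points into this object's local frame."""
        T = np.linalg.inv(self.T_world_object)
        homog = np.hstack([points_w, np.ones((len(points_w), 1))])
        return (homog @ T.T)[:, :3]

obj = ObjectSubMap(obj_id=1)
local = obj.transform_to_object(np.array([[0.5, 0.0, 2.0]]))
</code>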
- | |||
- | ===== Implicit Neural SLAM with NeRF ===== | ||
- | {{ : | ||
- | **Supervisor(s) and Contact** | ||
- | * [[members: | ||
- | * [[members: |Binbin Xu]] | ||
- | |||
- | |||
- | **Context** | ||
- | |||
Recently, neural radiance fields have caught the attention of the vision community and many extensions have been proposed; among them, iMAP proposed using this implicit map representation inside a SLAM system. However, it requires depth input to perform tracking and mapping. More recently, DROID-SLAM introduced a recurrent, iterative update scheme that achieves reliable tracking and semi-dense mapping in a monocular camera setting. In this project, we would like to explore a tight integration of NeRF and DROID-SLAM to achieve a dense monocular SLAM system, ideally one that works even in dynamic environments.
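
For reference, the core NeRF operation such an integration builds on is differentiable volume rendering along a camera ray. Below is a minimal NumPy version of the standard compositing equation; the sample values are made up.

<code python>
# Sketch of standard NeRF volume rendering: composite colour along one ray
# from per-sample densities, colours, and step sizes.
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Alpha-composite samples (densities, RGB, step sizes) along a ray."""
    alphas = 1.0 - np.exp(-sigmas * deltas)          # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1] + 1e-10]))
    weights = alphas * trans                         # contribution per sample
    return weights @ colors, weights                 # rendered RGB, weights

rgb, w = render_ray(np.array([0.1, 2.0, 5.0]),   # densities along the ray
                    np.random.rand(3, 3),        # RGB at each sample
                    np.full(3, 0.05))            # step sizes
</code>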