{{:research:mid_fusion_2019.png?300 |}}
  
For meaningful interaction of a mobile robot with its environment, as well as with human operators, the availability of accurate pose estimates and dense maps will not be sufficient: instead, we need semantic information and a segmentation into 3-dimensional objects and hierarchies of objects -- importantly, concepts that are shared with humans. This deeper understanding allows a robot to operate safely with respect to its environment and tasks. Moreover, the motion of individual objects, or of a person performing a task, needs to be well understood by a robot in order to infer what is happening and to forecast what might happen in the future. [[research:semanicobjectlevelanddynamicslam|[+]]]
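To make the idea of an object-level, dynamic map more concrete, the following is a minimal illustrative sketch in Python (not SRL code): each object carries a human-interpretable semantic label, a pose, a velocity estimate, and an optional hierarchy of sub-objects, and a simple constant-velocity forecast predicts where it will be next. All names and the motion model are assumptions chosen purely for illustration.

<code python>
from dataclasses import dataclass, field

import numpy as np


@dataclass
class MapObject:
    """One reconstructed object in an object-level semantic map (illustrative only)."""
    object_id: int
    semantic_label: str        # human-interpretable class, e.g. "chair" or "person"
    pose: np.ndarray           # 4x4 homogeneous transform of the object in the world frame
    velocity: np.ndarray       # estimated linear velocity (3-vector) for motion forecasting
    children: list = field(default_factory=list)   # sub-objects, forming an object hierarchy


@dataclass
class ObjectLevelMap:
    """Set of tracked objects maintained alongside a dense background reconstruction."""
    objects: dict = field(default_factory=dict)     # object_id -> MapObject

    def forecast(self, dt: float) -> None:
        """Predict each object's pose dt seconds ahead under a constant-velocity assumption."""
        for obj in self.objects.values():
            obj.pose[:3, 3] = obj.pose[:3, 3] + obj.velocity * dt


# Usage sketch: a chair drifting at 0.1 m/s along x is forecast 0.5 s into the future.
chair = MapObject(0, "chair", np.eye(4), np.array([0.1, 0.0, 0.0]))
semantic_map = ObjectLevelMap({0: chair})
semantic_map.forecast(0.5)
</code>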
  
===== Machine Learning (Including Deep Learning) =====
{{:research:aerial_manipulation_2020.png?300 |}}
  
Beyond safe navigation, the mobile robots of tomorrow may want to interact physically with their environments in order to complete ever more complex tasks. Examples include robots grasping objects in warehouse automation or domestic settings (pick-and-place over long distances). As another example, mobile robots may be deployed in construction scenarios, where drones could carry out tasks such as painting and drilling, or where ground-based robots might assemble a structure. All of these applications crucially depend on a meaningful geometric and semantic understanding of the surroundings and further extend the safe navigation stack. [[research:physicalinteraction|[+]]]
  
===== Drones =====
