{{:research:mid_fusion_2019.png?300 |}}

For meaningful interaction of a mobile robot with its environment, as well as with human operators, the availability of accurate poses and dense maps will not be sufficient: instead, we need semantic information and a segmentation into 3-dimensional objects and hierarchies of objects -- importantly, concepts that are shared with humans. This deeper understanding allows a robot to operate safely with respect to its environment and tasks. Moreover, the motion of individual objects, or of a person performing a task, needs to be well understood by a robot in order to infer what is happening and to forecast what might happen in the future. [[research:semanicobjectlevelanddynamicslam|[+]]]

===== Machine Learning (Including Deep Learning) =====

For safe and efficient interaction between mobile robots and humans, it is key for the robot to understand the human's behavior, ranging from their 3D body pose to their high-level actions. This research area includes questions about good learning representations for human modeling, robustifying model predictions despite the large variety of human shapes and the multi-modality of human actions, and leveraging contextual information for accurate predictions. [[research:human|[+]]]

===== Robot Navigation =====
{{:research:aerial_manipulation_2020.png?300 |}}

Beyond safe navigation, mobile robots of tomorrow may want to interact physically with their environments in order to complete ever more complex tasks. Examples include robots grasping objects in a warehouse-automation or domestic setting (pick-and-place over long distances). As another example, mobile robots may be deployed in a construction scenario, where drones could accomplish tasks such as painting and drilling, or where ground-based robots might assemble a structure. All of these applications crucially depend on a meaningful geometric and semantic understanding of the surroundings and further extend the safe navigation stack. [[research:physicalinteraction|[+]]]

===== Drones =====