research [2022/10/12 14:51] Simon Schaefer
{{:research:mid_fusion_2019.png?300 |}}

For meaningful interaction of a mobile robot with its environment, as well as with human operators, the availability of accurate poses and dense maps will not be sufficient: instead, we need semantic information and a segmentation into 3-dimensional objects and hierarchies of objects; importantly, these must be concepts that are shared with humans. This deeper understanding allows a robot to operate safely with respect to its environment and tasks. Moreover, the motion of individual objects, or of a person performing a task, needs to be well understood by a robot in order to infer what is happening and to forecast what might happen in the future. [[research:semanicobjectlevelanddynamicslam|[+]]]

===== Machine Learning (Including Deep Learning) =====
{{:research:aerial_manipulation_2020.png?300 |}}

Beyond safe navigation, the mobile robots of tomorrow may want to interact physically with their environments in order to complete ever more complex tasks. Examples are robots accomplishing grasping in a warehouse-automation or domestic setting (pick-and-place over long distances). As another example, mobile robots may be deployed in a construction scenario, where drones could accomplish tasks such as painting and drilling, or where ground-based robots might assemble a structure. All of these applications crucially depend on a meaningful geometric and semantic understanding of the surroundings and further extend the safe navigation stack. [[research:physicalinteraction|[+]]]

===== Drones =====