<?xml version="1.0" encoding="utf-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="https://srl.cit.tum.de/lib/exe/css.php?s=feed" type="text/css"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>Machine Learning for Robotics research:projects</title>
    <subtitle></subtitle>
    <link rel="alternate" type="text/html" href="https://srl.cit.tum.de/"/>
    <id>https://srl.cit.tum.de/</id>
    <updated>2026-04-20T16:16:10+00:00</updated>
    <generator>FeedCreator 1.8 (info@mypapit.net)</generator>
    <link rel="self" type="application/atom+xml" href="https://srl.cit.tum.de/feed.php" />
    <entry>
        <title>BodySlam: Joint Camera Localisation, Mapping, and Human Motion Tracking</title>
        <link rel="alternate" type="text/html" href="https://srl.cit.tum.de/research/projects/bodyslam?rev=1658741670&amp;do=diff"/>
        <published>2022-07-25T11:34:30+00:00</published>
        <updated>2022-07-25T11:34:30+00:00</updated>
        <id>https://srl.cit.tum.de/research/projects/bodyslam?rev=1658741670&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category term="research:projects" />
        <content>BodySlam: Joint Camera Localisation, Mapping, and Human Motion Tracking

Authors: Dorian Henning*, Tristan Laidlow*, Stefan Leutenegger (*Dyson Robotics Lab)

Abstract

Estimating human motion from video is an active research area due to its many potential applications. Most state-of-the-art methods predict human shape and posture estimates for individual images and do not leverage the temporal information available in video. Many &quot;in the wild&quot; sequences of human motion are captured by a moving …</content>
        <summary>BodySlam: Joint Camera Localisation, Mapping, and Human Motion Tracking

Authors: Dorian Henning*, Tristan Laidlow*, Stefan Leutenegger (*Dyson Robotics Lab)

Abstract

Estimating human motion from video is an active research area due to its many potential applications. Most state-of-the-art methods predict human shape and posture estimates for individual images and do not leverage the temporal information available in video. Many &quot;in the wild&quot; sequences of human motion are captured by a moving …</summary>
    </entry>
    <entry>
        <title>Learning to Complete Object Shapes for Object-level Mapping in Dynamic Scenes</title>
        <link rel="alternate" type="text/html" href="https://srl.cit.tum.de/research/projects/cosom?rev=1660211789&amp;do=diff"/>
        <published>2022-08-11T11:56:29+00:00</published>
        <updated>2022-08-11T11:56:29+00:00</updated>
        <id>https://srl.cit.tum.de/research/projects/cosom?rev=1660211789&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category term="research:projects" />
        <content>Learning to Complete Object Shapes for Object-level Mapping in Dynamic Scenes

Authors: Binbin Xu*, Andrew Davison*, Stefan Leutenegger (*Dyson Robotics Lab)

Abstract

In this paper, we propose a novel object-level mapping system that can simultaneously segment, track, and reconstruct objects in dynamic scenes. It can further predict and complete their full geometries by conditioning on reconstructions from depth inputs and a category-level shape prior with the aim that completed object geometr…</content>
        <summary>Learning to Complete Object Shapes for Object-level Mapping in Dynamic Scenes

Authors: Binbin Xu*, Andrew Davison*, Stefan Leutenegger (*Dyson Robotics Lab)

Abstract

In this paper, we propose a novel object-level mapping system that can simultaneously segment, track, and reconstruct objects in dynamic scenes. It can further predict and complete their full geometries by conditioning on reconstructions from depth inputs and a category-level shape prior with the aim that completed object geometr…</summary>
    </entry>
    <entry>
        <title>Visual-Inertial SLAM with Tightly-Coupled Dropout-Tolerant GPS Fusion</title>
        <link rel="alternate" type="text/html" href="https://srl.cit.tum.de/research/projects/gpsokvis2?rev=1659435517&amp;do=diff"/>
        <published>2022-08-02T12:18:37+00:00</published>
        <updated>2022-08-02T12:18:37+00:00</updated>
        <id>https://srl.cit.tum.de/research/projects/gpsokvis2?rev=1659435517&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category term="research:projects" />
        <content>Visual-Inertial SLAM with Tightly-Coupled Dropout-Tolerant GPS Fusion

Authors: Simon Boche, Xingxing Zuo, Simon Schaefer, Stefan Leutenegger

Abstract

Robotic applications are continuously striving towards higher levels of autonomy. To achieve that goal, a highly robust and accurate state estimation is indispensable. Combining visual and inertial sensor modalities has proven to yield accurate and locally consistent results in short-term applications. Unfortunately, visual-inertial state estima…</content>
        <summary>Visual-Inertial SLAM with Tightly-Coupled Dropout-Tolerant GPS Fusion

Authors: Simon Boche, Xingxing Zuo, Simon Schaefer, Stefan Leutenegger

Abstract

Robotic applications are continuously striving towards higher levels of autonomy. To achieve that goal, a highly robust and accurate state estimation is indispensable. Combining visual and inertial sensor modalities has proven to yield accurate and locally consistent results in short-term applications. Unfortunately, visual-inertial state estima…</summary>
    </entry>
    <entry>
        <title>VI-MID: Visual-Inertial Multi-Instance Dynamic SLAM with Object-Level Relocalisation</title>
        <link rel="alternate" type="text/html" href="https://srl.cit.tum.de/research/projects/vimid?rev=1660031445&amp;do=diff"/>
        <published>2022-08-09T09:50:45+00:00</published>
        <updated>2022-08-09T09:50:45+00:00</updated>
        <id>https://srl.cit.tum.de/research/projects/vimid?rev=1660031445&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category term="research:projects" />
        <content>VI-MID: Visual-Inertial Multi-Instance Dynamic SLAM with Object-Level Relocalisation

Authors: Yifei Ren*, Binbin Xu*, Christopher L. Choi, Stefan Leutenegger

Abstract

In this paper, we present a tightly-coupled visual-inertial object-level multi-instance dynamic SLAM system. Even in extremely dynamic scenes, it can robustly optimise for the camera pose, velocity, and IMU biases, and build a dense object-level 3D reconstruction of the environment. Our system can robustly track and reconstruct t…</content>
        <summary>VI-MID: Visual-Inertial Multi-Instance Dynamic SLAM with Object-Level Relocalisation

Authors: Yifei Ren*, Binbin Xu*, Christopher L. Choi, Stefan Leutenegger

Abstract

In this paper, we present a tightly-coupled visual-inertial object-level multi-instance dynamic SLAM system. Even in extremely dynamic scenes, it can robustly optimise for the camera pose, velocity, and IMU biases, and build a dense object-level 3D reconstruction of the environment. Our system can robustly track and reconstruct t…</summary>
    </entry>
</feed>
