Intelligent robots require the ability to perceive their environment and to learn how their actions affect it. Our group investigates novel methods for 3D perception and for learning the effects of robot actions in the environment. Our aim is to enable robots to perform complex tasks such as autonomous navigation and object manipulation.
Computer Vision for Embodied Agents: Interpreting visual information plays a key role in autonomous systems that act purposefully in their environment. We develop approaches for visual scene understanding in intelligent systems. Specifically, we are interested in computer vision methods for simultaneous localization and mapping (SLAM) and for object-level dynamic scene understanding. We also pursue self-supervised learning and adaptation to increase the flexibility of robotic systems.
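As a concrete, purely illustrative example of the SLAM estimation problem, the following Python sketch optimizes a toy one-dimensional pose graph: noisy odometry constraints and a single loop-closure constraint are fused by linear least squares. All measurements and variable names are invented for illustration and do not reflect our actual systems.

```python
import numpy as np

# Toy 1D pose-graph sketch (hypothetical example): estimate robot
# positions x_0..x_3 from noisy relative measurements. Each constraint
# states that x_j - x_i should equal a measured displacement z.
constraints = [
    (0, 1, 1.1),  # odometry: moved ~1.0 forward (measured 1.1)
    (1, 2, 1.0),  # odometry
    (2, 3, 0.9),  # odometry
    (0, 3, 2.8),  # loop closure: conflicts with accumulated odometry (3.0)
]

n_poses = 4
# Build the linear least-squares system A x = b from the constraints.
A = np.zeros((len(constraints) + 1, n_poses))
b = np.zeros(len(constraints) + 1)
for row, (i, j, z) in enumerate(constraints):
    A[row, i], A[row, j], b[row] = -1.0, 1.0, z
# Anchor the first pose at 0 to remove the gauge freedom.
A[-1, 0], b[-1] = 1.0, 0.0

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated poses:", x)  # the loop closure pulls odometry drift back in
```

Jointly estimating all poses lets the loop closure correct the drift accumulated by odometry, which is the core benefit of formulating localization and mapping as one optimization problem.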
Dynamics Model Learning for Control and Planning: In recent years, learning-based approaches have gained traction because they promise to let robotic agents acquire skills through interaction and exploration in their environment. In this way, robots can adapt to novel situations and achieve behavior that generalizes better than manually engineered solutions. In contrast to model-free approaches, which learn policies and value functions with only an implicit understanding of the environment, our paradigm is to make environment models explicit and to learn such world models for model-based control and planning. Key challenges are to learn generalizable models that transfer across a variety of environments and systems, to adapt the models through perception, to achieve sample-efficient learning, and to build robust approaches that are aware of model uncertainty.
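To make the model-based paradigm concrete, here is a minimal, hypothetical Python sketch: an ensemble of one-step dynamics models is fit to interaction data from a toy damped point mass, and a random-shooting model-predictive controller plans through the learned models while penalizing ensemble disagreement as a crude proxy for model uncertainty. The dynamics, parameters, and helper names are all invented for illustration, not taken from our systems.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(s, a):
    """Unknown environment (here: a damped point mass). Used only to
    generate training data and to stand in for the real system."""
    pos, vel = s
    vel = 0.9 * vel + 0.1 * a
    return np.array([pos + vel, vel])

# Collect random-interaction data: (state, action) -> next state.
data, s = [], np.zeros(2)
for _ in range(500):
    a = rng.uniform(-1, 1)
    s_next = true_dynamics(s, a) + rng.normal(0.0, 0.01, size=2)
    data.append((s.copy(), a, s_next))
    s = s_next
X = np.array([np.concatenate([st, [a]]) for st, a, _ in data])
Y = np.array([sn for _, _, sn in data])

# Ensemble of linear one-step models fit on bootstrapped data; the
# spread of their predictions serves as a simple uncertainty estimate.
ensemble = []
for _ in range(5):
    idx = rng.integers(0, len(X), len(X))
    W, *_ = np.linalg.lstsq(X[idx], Y[idx], rcond=None)
    ensemble.append(W)

def plan(s0, goal, horizon=10, n_candidates=256):
    """Random-shooting planner: roll sampled action sequences through the
    learned ensemble; penalize disagreement to stay where models agree."""
    seqs = rng.uniform(-1, 1, (n_candidates, horizon))
    costs = np.zeros(n_candidates)
    for k, seq in enumerate(seqs):
        states = np.tile(s0, (len(ensemble), 1))
        disagreement = 0.0
        for a in seq:
            states = np.array([
                np.concatenate([st, [a]]) @ W
                for st, W in zip(states, ensemble)
            ])
            disagreement += states.std(axis=0).sum()
        costs[k] = np.abs(states.mean(axis=0)[0] - goal) + 0.1 * disagreement
    return seqs[np.argmin(costs)][0]  # MPC: execute only the first action

s = np.zeros(2)
for _ in range(30):
    s = true_dynamics(s, plan(s, goal=1.0))
print("final position:", s[0])  # should roughly approach the goal of 1.0
```

Penalizing ensemble disagreement is one simple way to keep the planner away from regions where the learned model is unreliable; probabilistic networks and Bayesian models are common alternatives for capturing model uncertainty.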