Motivation and Scope
Geometric knowledge of the environment allows us to infer more information about our surroundings. From a set of images, 3D geometric information about an environment can be obtained and used for localization and for virtual reconstruction of the environment for further analysis.
Simultaneous localization and mapping for autonomous driving is a challenging task, especially when only visual data is available. Current state-of-the-art large-scale 3D scene reconstruction frameworks assume the environment to be static, i.e. without any dynamic changes. In autonomous driving, however, the scene contains various motions, such as those of pedestrians and other vehicles. Therefore, these state-of-the-art frameworks cannot be applied directly in the context of autonomous driving.
This project, carried out in collaboration with the BMW Group, aims to localize the vehicle based on visual data obtained from an onboard stereo camera system and to generate a large-scale, consistent 3D map of the observed scene based on the localization results.