July 11, 2021 (8:00-14:30 CET)
A workshop in conjunction with IV 2021 in Nagoya, Japan
Registration via the Portal Site is open. Free registration is available for viewing the conference content; after registration, the content will be made available shortly. Please check the conference website for details.
Recent advancements in processing units have improved our ability to construct a variety of architectures for understanding the surroundings of vehicles. Deep learning methods developed for geometric and semantic understanding of environments in driving scenarios aim to increase the success of full autonomy, at the cost of requiring large amounts of data.
Recently proposed methods challenge this dependency by pre-processing, enhancing, and intelligently collecting and labeling the data. The dependency on data can also be relieved by generating synthetic data, which comes with cost-free annotations, and by using test drive data from the sensors and hardware mounted on a vehicle. Moreover, the state of the driver and passengers inside the cabin is of great importance for traffic safety and for a holistic spatio-temporal perception of the environment.
The aim of this workshop is to provide a platform for exchanging ideas and linking the scientific community active in the intelligent vehicles domain. This workshop will provide an opportunity to discuss applications and their data-dependent demands for spatio-temporal understanding of the surroundings as well as the interior of a vehicle, while addressing how the data can be exploited to improve results instead of changing the proposed architectures.
Please click here to view previous editions of the workshop.
DDIVA Workshop: July 11, 2021
Please also check the conference web page for updates.
Workshop paper submission: (extended) May 10th, 2021
Notification of workshop paper acceptance: May 15th, 2021
Final Workshop paper submission: May 31st, 2021
|Start||End||Time Zone: CET|
|8:00||8:10||Introduction & Welcome|
|8:10||8:50||Keynote - Abhinav Valada ((Self-)Supervised Learning for Perception and Tracking in Autonomous Driving)|
| || ||Keynote - Akshay Rangesh (Semi Automatic Labelling Techniques for Driver Behavior Models)|
|9:45||10:25||Keynote - Nazım Kemal Üre (How to Get the Most Out of Your Simulation Data for Designing Decision Making Systems for Autonomous Driving)|
|10:25||10:40||Accepted Paper - Hariprasath Govindarajan (Self-Supervised Representation Learning for Content Based Image Retrieval of Complex Scenes)|
|10:40||11:20||Keynote - Alexander Carballo (Recent research topics in data-driven driving behavior and driving scene understanding at Takeda Lab.)|
|12:20||13:00||Keynote - Julian Kooij (New sensing modalities for IV: Data-driven perception with acoustics and low-level radar)|
|13:00||13:40||Keynote - Fabian Oboril (Using the CARLA simulator for AV test and validation)|
|Speaker||Akshay Rangesh|
|Affiliation||Laboratory for Intelligent & Safe Automobiles, UC San Diego|
|Title of the talk||Semi Automatic Labelling Techniques for Driver Behavior Models|
Modern supervised machine learning models require large amounts of labelled data for timely convergence during training and good generalization during testing. Thus, labelling large amounts of data has become an inevitable bottleneck in the model production pipeline. In this talk, I will present techniques and workflows that could considerably reduce the labelling time and/or effort, and in some cases remove the need to label altogether. To further illustrate this, I will present three case studies where these ideas have been put to use in practice, and discuss the resulting outcomes of our approach. In particular, this talk will cover concepts such as simultaneous data and label capture, automatic labelling using auxiliary sensors, and task-specific data augmentation schemes. These techniques are meant for general use, and could be applied to or adapted for tasks beyond the ones covered in this talk.
|Speaker||Prof. Dr. Abhinav Valada|
|Affiliation||Robot Learning Lab, Albert-Ludwigs-Universität Freiburg|
|Title of the talk||(Self-)Supervised Learning for Perception and Tracking in Autonomous Driving|
Scene understanding and object tracking play a critical role in enabling autonomous vehicles to navigate in dense urban environments. The last decade has witnessed unprecedented progress in these tasks by exploiting learning techniques to improve performance and robustness. Despite these advances, most techniques today still require a tremendous amount of human-annotated training data and do not generalize well to diverse real-world environments. In this talk, I will discuss some of our efforts targeted at addressing these challenges by leveraging self-supervision and multi-task learning for various tasks ranging from panoptic segmentation to cross-modal object tracking using sound. Finally, I will conclude the talk with a discussion on opportunities for further scaling up the learning of these tasks.
|Speaker||Dr. Julian F.P. Kooij|
|Affiliation||Intelligent Vehicles Group, TU Delft|
|Title of the talk||New sensing modalities for IV: Data-driven perception with acoustics and low-level radar|
Most research on data-driven perception in Intelligent Vehicles focuses on camera and lidar perception tasks, including object detection and scene segmentation. In this talk, I will present our group's research on data-driven methods for other sensing modalities that traditionally rely on signal processing (radar), or are even completely ignored in current IV research (acoustics).
|Speaker||Dr.-Ing. Fabian Oboril|
|Affiliation||Research Scientist for Dependable Driving Automation, Intel Labs|
|Title of the talk||Using the CARLA simulator for AV test and validation|
Automated vehicles (AVs) are gaining increasing interest and their development is making great progress. However, assuring safe driving operation under all possible road and environment conditions is still an open challenge. In this regard, vehicle simulation is seen as a major cornerstone for test and validation. Recorded real-world challenges can be rebuilt in simulation (e.g. NHTSA pre-crash scenarios), and artificial corner cases can be added on top. These can then be utilized to test the complex software stack in various configurations to find possible safety or availability issues. For example, the same situation can be tested with different settings of the planning modules (driving policy) or road conditions to ensure that all possibilities result in safe driving. In this talk, we will present how the CARLA vehicle simulator, in combination with an open-source scenario editor, can be used to re-create traffic scenarios and replay them under various operating conditions, thereby taking one step towards safe autonomous driving.
|Speaker||Dr. Alexander Carballo|
|Affiliation||Designated Associate Professor, Nagoya University|
|Title of the talk||Recent research topics in data-driven driving behavior and driving scene understanding at Takeda Lab|
One of the most important efforts by Prof. Kazuya Takeda has focused on the field of signal processing technology research, in particular, understanding human behavior through data-centric approaches. Faithful to that tradition, in this talk we will introduce our recent data-driven work on driving behavior at the Takeda Laboratory.
|Speaker||Dr. Nazım Kemal Üre|
|Affiliation||Artificial Intelligence Research Center, Istanbul Technical University & Eatron Technologies|
|Title of the talk||How to Get the Most Out of Your Simulation Data for Designing Decision Making Systems for Autonomous Driving|
Developing reinforcement learning (RL) algorithms for automated tactical decision making has been an attractive topic in recent years. It is evident that designing RL-based autonomous driving systems can help tremendously with handling the performance- and safety-related issues of alternative planning and decision making approaches. That being said, most RL algorithms are trained in simulators and require large amounts of data to converge to good solutions. Thus, designing sample-efficient RL algorithms is important for accelerating design cycles and verifying the safety and robustness of RL solutions. In addition, good performance in simulation does not always imply good performance in real-life tests, so additional measures need to be taken to guarantee that trained RL agents generalize to real-life situations. In this talk, we go over three case studies that deal with these issues: i) how to utilize curriculum RL to boost an autonomous driving agent's performance using a limited amount of data; ii) how to take advantage of offline RL to inject external driving demonstrations for improving sample efficiency; iii) how to use multiple-fidelity simulators to transfer simulated performance to real life.
Spatio-temporal data is crucial for improving accuracy in deep learning applications. In this workshop, we mainly focus on data and deep learning, since data enables applications to infer more information about the environment for autonomous driving. This workshop will provide an opportunity to discuss applications and their data-dependent demands for understanding the environment of a vehicle, while addressing how the data can be exploited to improve results instead of changing the proposed architectures. The ambition of this full-day DDIVA workshop is to form a platform for exchanging ideas and linking the scientific community active in the intelligent vehicles domain.
To this end we welcome contributions with a strong focus on (but not limited to) the following topics within Data Driven Intelligent Vehicle Applications:
- Synthetic Data Generation
- Sensor Data Synchronization
- Sequential Data Processing
- Data Labeling
- Data Visualization
- Data Discovery
- Visual Scene Understanding
- Large Scale Scene Reconstruction
- Semantic Segmentation
- Object Detection
- In Cabin Understanding
- Emotion Recognition
Contact workshop organizers: walter.zimmer( at )tum.de
Please check the conference webpage for the details of submission guidelines.
Authors are encouraged to submit high-quality, original research (i.e., not previously published or accepted for publication in substantially similar form in any peer-reviewed venue, including a journal, conference, or workshop). Accepted workshop papers will be published in the conference proceedings. For publication, at least one author needs to be registered for the workshop and the conference and present their work.
While preparing your manuscript, please follow the formatting guidelines of IEEE available here and listed below. Papers submitted to this workshop as well as IV2021 must be original, not previously published or accepted for publication elsewhere, and they must not be submitted to any other event or publication during the entire review process.
- Language: English
- Paper size: US Letter
- Paper format: Two-column format in the IEEE style
- Paper limit: For the initial submission, a manuscript can be 6-8 pages. For the final submission, a manuscript should be 6 pages, with 2 additional pages allowed, but at an extra charge ($100 per page)
- Abstract limit: 200 words
- File format: A single PDF file, please limit the size of PDF to be 10 MB
- Compliance: check here for more info
The paper template is identical to that of the main IV2021 symposium:
To go to the paper submission site, please click here.