Master-Seminar – Deep Learning in Computer Graphics (IN2107, IN0014)

Lecturers Dr. Liwei Chen, Björn List, Erik Franz, Prof. Dr. Nils Thürey
Studies Master Informatics
Time, Place

Mondays 16:00-18:00

Kick-Off: Monday, April 20, 2020

Online "BigBlueButton" audio conference: the link will be sent through emails.

Begin Monday, April 20, 2020

Content

In this course, students autonomously investigate recent research on machine learning techniques in computer graphics. Independent further reading, critical analysis, and evaluation of the assigned topic are required.

Requirements

Participants are required to first read the assigned paper and then start writing a report; this will help you prepare for your presentation.

Attendance

  • You may miss at most two talks. If you have to miss one, please let us know in advance and write a one-page report about the paper in your own words.
  • Missing a third talk means failing the seminar.
  • Since the seminar is completely online this semester, we will ask you for short feedback or comments on each talk. This feedback is only used to check attendance, so it does not need to be long; a few sentences are enough.

Report

  • A short report (max. 4 pages excluding references, in the ACM SIGGRAPH TOG format (acmtog); you can download the precompiled LaTeX template) should be prepared and sent two weeks before the talk, i.e., by 23:59 on the Monday two weeks prior. A minimal document skeleton is sketched after this list.
  • Guideline: You can start with a summary of the work you present, but it is better to focus on your own analysis rather than stopping at a paper summary. Merely revisiting previous work is of limited interest; it is more meaningful to add your own reasoning about the work, such as pros and cons, limitations, possible future work, and your own ideas for addressing open issues.
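
For orientation, here is a minimal sketch of a report skeleton. It assumes the precompiled template is based on the standard acmart class with the acmtog option; the title, author, section names, and bibliography file are placeholders:

    % Minimal report skeleton (assumption: the template uses acmart with the acmtog option).
    \documentclass[acmtog]{acmart}

    % Placeholder metadata -- replace with your paper's title and your name.
    \title{Report: Title of the Presented Paper}
    \author{Your Name}

    \begin{document}
    \maketitle

    \section{Summary}
    % Brief summary of the presented paper.

    \section{Discussion}
    % Your own analysis: pros and cons, limitations, possible future work, own ideas.

    \bibliographystyle{ACM-Reference-Format}
    \bibliography{references} % expects a references.bib next to the .tex file

    \end{document}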

Presentation (slides)

  • You will present your topic in English, and the talk should last 30 minutes. A discussion session of about 10 minutes will follow.
  • The slides should be structured according to your presentation. You can use any layout or template you like.
  • Plagiarism should be avoided; please do not simply copy the original authors' slides. You can certainly refer to them.
  • The semi-final slides (PDF) should be sent one week before the talk; otherwise, the talk will be canceled.
  • We strongly encourage you to polish this semi-final version as far as possible. We will review it and give feedback, and you can keep revising your slides until your presentation.
  • Be ready in advance. Since the seminar is completely online this semester, giving a virtual talk may feel different from speaking to a live audience. To prepare for your talk with "BigBlueButton", please read this guidance by Lukas Prantl.

Schedule

08 Mar 2020 Deregistration due
22 Mar 2020 Send three preferred topics
27 Mar 2020 Assign topics
20 Apr 2020 Introduction lecture
04 May 2020 First talk

Papers

No Date Presenter Paper Contact Person
1 04.05 Lukas Goll 2019, Xu et al., Deep View Synthesis From Sparse Photometric Images, ACM Trans. Graph. Björn List
2 04.05 Dai Liu 2019, Meka et al., Deep Reflectance Fields - High-Quality Facial Reflectance Field Inference From Color Gradient Illumination, ACM Trans. Graph. Liwei Chen
3 11.05 Anuj Berwal 2019, Philip et al., Multi-View Relighting Using A Geometry-Aware Network, ACM Trans. Graph. (Cancelled) -
4 11.05 Nicolai Stein 2019, LeGendre et al., DeepLight: Learning Illumination for Unconstrained Mobile Mixed Reality, CVPR Erik Franz
5 18.05 Aman Kumar 2019, Koskela et al., Blockwise Multi-Order Feature Regression for Real-Time Path Tracing Reconstruction, ACM Trans. Graph. Björn List
6 - - 2019, Frühstück et al., TileGAN: Synthesis of Large-Scale Non-Homogeneous Textures, arXiv.org  
7 18.05 Yi Qiao 2019, Thies et al., Deferred Neural Rendering: Image Synthesis using Neural Textures, arXiv.org Liwei Chen
8 - - 2019, Rainer et al., Neural BTF Compression and Interpolation, Computer Graphics Forum  
9 25.05 Qianhao Li 2018, Wang et al., Video-to-Video Synthesis, arXiv.org Erik Franz
10 25.05 Hans Hsu 2019, Li et al., Learning the Depths of Moving People by Watching Frozen People, CVPR Björn List
11 08.06 Zhuo Shi 2019, Karras et al., A Style-Based Generator Architecture for Generative Adversarial Networks, CVPR Björn List
12 08.06 Jonas Gregor Wiese 2019, Karras et al., Analyzing and Improving the Image Quality of StyleGAN, arXiv.org Erik Franz
13 15.06 Sangyu Tian 2018, Zhang et al., Mode-Adaptive Neural Networks for Quadruped Motion Control, ACM Trans. Graph. Liwei Chen
14 - - 2020, Schelling et al., Enabling Viewpoint Learning through Dynamic Label Generation, arXiv.org  
15 15.06 Julian Eder 2020, Egiazarian et al., Deep Vectorization of Technical Drawings, arXiv.org Erik Franz
16 22.06 Aarav Malik 2019, Deep-learning the Latent Space of Light Transport, arXiv.org Liwei Chen
17 22.06 Ruijie Chen 2020, Paliwal & Kalantari, Deep Slow Motion Video Reconstruction with Hybrid Imaging System, arXiv.org Björn List
18 29.06 Joong-Won Seo 2019, Choi & Kweon, Deep Iterative Frame Interpolation for Full-frame Video Stabilization, arXiv.org Liwei Chen
19 - - 2020, Biland et al., Frequency-Aware Reconstruction of Fluid Simulations with Generative Networks, arXiv.org -
20 29.06 Tom Dörr 2019, Chu et al., Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation, arXiv.org Erik Franz

You can access the papers through the TUM library's eAccess.
