Master-Seminar – Deep Learning in Computer Graphics (IN2107, IN0014)

Lecturers Erik Franz, Georg Kohl, Nilam Tathawadekar, Prof. Dr. Nils Thürey
Studies Master Informatics
Time, Place

Mondays 14:00-16:00

Kick-Off: Monday, 2nd November 2020

Online "BigBlueButton" audio conference: access link will be sent via email

Begin 2nd November 2020

Content

In this course, students will independently investigate recent research on machine learning techniques in computer graphics. This requires further reading beyond the assigned paper, critical analysis, and evaluation of the topic.

Requirements

Participants are required to first read the assigned paper and start writing a report. This will help you prepare for your presentation.

Attendance
  • You may miss at most two talks. If you have to miss one, please let us know in advance and write a one-page summary of the paper in your own words. Missing a third talk means failing the seminar. 
  • As the seminar is completely online, we will ask you for short feedback or some comments on each talk. This feedback is only used to check attendance, so it does not need to be long; a few sentences are enough.
Report
  • A short report (max. 4 pages excluding references) in the ACM SIGGRAPH TOG format (acmtog; you can download the precompiled LaTeX template) should be prepared and sent two weeks before the talk, i.e., by 23:59 on the corresponding Monday.
  • Guideline: You can begin with a summary of the work you present as a starting point, but it is better to focus on your own analysis rather than stopping at a summary of the paper. We, including you, are not interested in merely revisiting prior work; it is more meaningful to add your own reasoning about it, such as pros and cons, limitations, possible future work, and your own ideas for addressing open issues.
  • For questions regarding your paper, or for feedback on a semi-final version of your report, you can contact your advisor.
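For reference, a minimal report skeleton using the acmtog option of ACM's acmart class might look as follows (a sketch only; the title, author, and bibliography file names are placeholders, and the precompiled template provided for the seminar should take precedence):

```latex
% Minimal ACM SIGGRAPH TOG-style report skeleton (acmart with acmtog option).
\documentclass[acmtog]{acmart}

% Placeholder metadata -- replace with your own paper and name.
\title{Report: Your Assigned Paper}
\author{Your Name}
\affiliation{\institution{Technical University of Munich}}

\begin{document}
\maketitle

\section{Summary}
% Brief summary of the presented work as a starting point.

\section{Discussion}
% Your own analysis: pros and cons, limitations, possible future work.

\bibliographystyle{ACM-Reference-Format}
\bibliography{references} % assumes a references.bib file
\end{document}
```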
Presentation (slides)
  • You will present your topic in English, and the talk should last 30 minutes. After that, a discussion session of about 10 minutes will follow.
  • The slides should be structured according to your presentation. You can use any layout or template you like, but make sure to choose suitable colors and font sizes for readability.
  • Plagiarism should be avoided; please do not simply copy the original authors' slides. You can certainly refer to them.
  • The semi-final slides (PDF) should be sent one week before the talk, otherwise the talk will be canceled.
  • We strongly encourage you to make the semi-final version as complete as possible. We will review it and give feedback, and you can revise your slides up until your presentation.
  • Be ready in advance. As this semester's seminar is completely online, giving a virtual talk differs from speaking in front of a live audience. To prepare for your talk with "BigBlueButton", please read this guide by Lukas Prantl.
  • The final slides should be sent after the talk.

Schedule

14th September 2020 Deregistration due
5th October 2020 Deadline for sending an e-mail with 3 preferred topics
8th October 2020 Notification of assigned paper
2nd November 2020 Introduction lecture
30th November 2020 First talk

Topics

No Date Presenter Paper Advisor
 1 --- --- 2019, Zexiang Xu et al., Deep View Synthesis From Sparse Photometric Images, ACM Trans. Graph. ---
 2 --- --- 2019, Meka et al., Deep Reflectance Fields - High-Quality Facial Reflectance Field Inference From Color Gradient Illumination, ACM Trans. Graph. ---
 3 11.01. Yiman Li 2019, Philip et al., Multi-View Relighting Using A Geometry-Aware Network, ACM Trans. Graph. Nilam Tathawadekar
 4 18.01. Elisa Xiao 2019, LeGendre et al., DeepLight: Learning Illumination for Unconstrained Mobile Mixed Reality, CVPR Georg Kohl
 5 30.11. Victor Oancea 2020, Nilsson & Akenine-Möller, Understanding SSIM, arXiv.org Georg Kohl
 6 07.12. Hanfeng Wu 2019, Frühstück et al., TileGAN: Synthesis of Large-Scale Non-Homogeneous Textures, arXiv.org Nilam Tathawadekar
 7 07.12. Manuel Wagner 2019, Thies et al., Deferred Neural Rendering: Image Synthesis using Neural Textures, arXiv.org Erik Franz
 8 30.11. Roman Kistol (cancelled) 2018, Zhang et al., The Unreasonable Effectiveness of Deep Features as a Perceptual Metric, CVPR Georg Kohl
 9 11.01. Mohamed Elshaer 2018, Wang et al., Video-to-Video Synthesis, arXiv.org Erik Franz
10 14.12. Shiyu Li 2020, Kim et al., Lagrangian Neural Style Transfer for Fluids, ACM Trans. Graph. Erik Franz
11 08.02. Xi Wang 2019, Karras et al., A Style-Based Generator Architecture for Generative Adversarial Networks, CVPR Nilam Tathawadekar
12 14.12. Anagha Moosad 2019, Karras et al., Analyzing and Improving the Image Quality of StyleGAN, arXiv.org Nilam Tathawadekar
13 08.02. Leonardo Santos Machado 2018, Zhang et al., Mode-Adaptive Neural Networks for Quadruped Motion Control, ACM Trans. Graph. Nilam Tathawadekar
14 01.02. Michael Sedrak (cancelled) 2020, Dupont et al., Equivariant Neural Rendering, ICML Georg Kohl
15 01.02. Maxi Barmetler (cancelled) 2020, Luo et al., Consistent Video Depth Estimation, ACM Trans. Graph. Georg Kohl
16 18.01. Thomas Barthel Brunner 2019, Hermosilla et al., Deep-learning the Latent Space of Light Transport, arXiv.org Erik Franz
17 --- --- 2020, Wang et al., Attribute2Font: Creating Fonts You Want From Attributes, ACM Trans. Graph. ---
18 25.01. Ruilin Qi 2019, Choi & Kweon, Deep Iterative Frame Interpolation for Full-frame Video Stabilization, arXiv.org Georg Kohl
19 25.01. Berk Saribas 2020, Xiao et al., Neural Supersampling for Real-Time Rendering, ACM Trans. Graph. Erik Franz
20 --- --- 2019, Chu et al., Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation, arXiv.org ---

You can access the papers through TUM library's eAccess.