Master-Seminar – Deep Learning in Computer Graphics (IN2107, IN0014)

Prof. Dr. Nils Thuerey, Steffen Wiewel, Nilam Tathawadekar, Stephan Rasp

Studies

Master Informatics
Time, Place

Mondays 16:00-20:00, Seminar room: MI 02.13.010

Begin

Oct. 14, 2019

Content

In this course, students independently investigate recent research on machine learning techniques in the field of computer graphics. Independent further reading, critical analysis, and evaluation of the chosen topic are required.

Requirements

Attendance
  • You may miss at most one time slot; missing a second one means failing the seminar. If you have to miss a session, please let us know in advance.
Presentation (slides)
  • The participants have to present their topics in a talk (in English), which should last 30 minutes. Don't put too many technical details into the talk; make sure the audience grasps the paper's main idea. Be prepared to answer questions about the technical details, for example with backup slides.
  • Afterwards, a short discussion session (roughly 10 minutes) will follow.
  • The slides should follow the structure of your presentation. You can use any layout or template you like.
  • Avoid plagiarism: do not simply copy the original authors' slides. You may, of course, refer to them.
  • The semi-final slides (PDF) must be sent to us one week before the talk; otherwise, the talk will be canceled. We strongly encourage you to make this version as close to final as possible. We will review it and give feedback, and you can keep revising your slides until your presentation.
  • Be ready in advance. We suggest testing the machines you are going to use before the session starts. You can bring your own laptop or ask us for one (as well as any adapter you need for the projector) in advance. A laser pointer will be provided, which you can use if you want.
Report
  • A short report (max. 4 pages excluding references) in the ACM SIGGRAPH TOG format (acmtog) must be prepared and sent within two weeks after the talk, i.e., by 23:59 on the corresponding Monday; you can download the precompiled LaTeX template. When you send the report, please include the final slides (PDF) as well. A minimal skeleton is sketched after this list.
  • Guideline: You can start with a summary of the work you present, but it is better to focus on your own analysis rather than stopping at a summary of the paper. Nobody, including you, gains much from merely revisiting prior work; it is more meaningful to add your own reasoning about it, such as pros and cons, limitations, possible future work, and your own ideas for addressing open issues.
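
For orientation, here is a minimal sketch of such a report, assuming the current acmart class with the acmtog option (the precompiled template provided for the course may differ in details):

    \documentclass[acmtog]{acmart}            % ACM TOG two-column layout

    % Hypothetical metadata, for illustration only
    \title{Report: Deferred Neural Rendering}
    \author{Your Name}
    \affiliation{\institution{Technical University of Munich}}

    \begin{document}

    \begin{abstract}
    One or two paragraphs summarizing the paper and your own analysis.
    \end{abstract}
    \maketitle                                % in acmart, the abstract precedes \maketitle

    \section{Introduction}
    Summary of the paper's main idea.

    \section{Discussion}
    Your own analysis: pros and cons, limitations, possible future work.

    \bibliographystyle{ACM-Reference-Format}  % references do not count toward the page limit
    \bibliography{references}                 % references.bib with the cited papers

    \end{document}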

Papers

Date       | Presenter         | Paper
11.11.2019 | Hu Mingzhe        | 2019, Zexiang Xu et al., Deep View Synthesis From Sparse Photometric Images, ACM Trans. Graph.
11.11.2019 | Daniel Mayau      | 2019, Meka et al., Deep Reflectance Fields: High-Quality Facial Reflectance Field Inference From Color Gradient Illumination, ACM Trans. Graph.
11.11.2019 | Lisa Kaldich      | 2019, Philip et al., Multi-View Relighting Using a Geometry-Aware Network, ACM Trans. Graph.
11.11.2019 | Domenik Popfinger | 2019, LeGendre et al., DeepLight: Learning Illumination for Unconstrained Mobile Mixed Reality, CVPR
16.12.2019 | Philipp Altrogge  | 2019, Koskela et al., Blockwise Multi-Order Feature Regression for Real-Time Path Tracing Reconstruction, ACM Trans. Graph.
16.12.2019 | Raphael Penz      | 2019, Frühstück et al., TileGAN: Synthesis of Large-Scale Non-Homogeneous Textures, arXiv.org
16.12.2019 | Alex Mueller      | 2019, Thies et al., Deferred Neural Rendering: Image Synthesis Using Neural Textures, arXiv.org
16.12.2019 | Marcel Kollovieh  | 2019, Rainer et al., Neural BTF Compression and Interpolation, Computer Graphics Forum
23.12.2019 | Felix Merkl       | 2018, Wang et al., Video-to-Video Synthesis, arXiv.org
23.12.2019 | Antonio Oroz      | 2019, Li et al., Learning the Depths of Moving People by Watching Frozen People, CVPR
13.01.2020 | Jakob Robbiani    | 2019, Vicini et al., A Learned Shape-Adaptive Subsurface Scattering Model, ACM Trans. Graph.
13.01.2020 | Oskar Homburg     | 2019, Karras et al., A Style-Based Generator Architecture for Generative Adversarial Networks, CVPR
13.01.2020 | Duc Thinh Nguyen  | 2018, Zhang et al., Mode-Adaptive Neural Networks for Quadruped Motion Control, ACM Trans. Graph.

You can access the papers through the TUM library's eAccess.
