Mondays 16:00-20:00, Seminar room: MI 02.13.010
Oct. 14, 2019
In this course, students will autonomously investigate recent research about machine learning techniques in the field of computer graphics. Independent investigation for further reading, critical analysis, and evaluation of the topic are required.
- You may miss at most one time slot; missing a second one means failing the seminar. If you have to miss a session, please let us know in advance.
- Participants present their topic in a 30-minute talk (in English). Do not put too many technical details into the talk; make sure the audience grasps the paper's main idea. Be prepared to answer questions about the technical details; you could prepare backup slides for that purpose.
- Each talk is followed by a short discussion session (~10 minutes).
- Structure your slides according to your presentation; you can use any layout or template you like.
- Plagiarism is taken seriously; please do not simply copy the original authors' slides. You may, of course, refer to them.
- The semi-final slides (PDF) must be sent one week before the talk; otherwise, the talk will be canceled. We strongly encourage you to make this version as complete as possible. We will review it and give feedback, and you can keep revising your slides until your presentation.
- Be ready in advance. We suggest testing the machine you are going to use before the lecture starts. You can bring your own laptop or ask us for one in advance (likewise for any adapter you need for the projector). A laser pointer will be provided; feel free to use it.
- A short report (4 pages max., excluding references, in the ACM SIGGRAPH TOG format (acmtog); you can download the precompiled LaTeX template) must be prepared and sent within two weeks after the talk, i.e., by 23:59 on Monday. When you send the report, please attach the final slides (PDF) as well.
- Guideline: You can begin with a summary of the work you present as a starting point, but it is better to focus on your own analysis rather than stopping at a summary of the paper. No one, including you, gains much from merely revisiting prior work; it is far more meaningful to add your own reasoning about it: pros and cons, limitations, possible future work, your own ideas for addressing the open issues, etc.
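For the report, a minimal document skeleton in the acmtog format might look as follows. This is a sketch assuming the current ACM `acmart` class with its `acmtog` option (the precompiled template linked above is authoritative; class options and front-matter commands may differ in older template versions):

```latex
% Minimal sketch of a seminar report in ACM TOG style.
% Assumes the acmart class with the acmtog option, as used by
% the official ACM LaTeX template.
\documentclass[acmtog]{acmart}

\title{Seminar Report: Paper Title Goes Here}
\author{Your Name}
\affiliation{%
  \institution{Technical University of Munich}
  \country{Germany}}

\begin{document}

\begin{abstract}
One-paragraph summary of the presented paper and your own analysis.
\end{abstract}

\maketitle

\section{Summary}
Brief summary of the presented work.

\section{Discussion}
Your own reasoning: pros and cons, limitations, possible future
work, and your ideas for addressing the open issues.

\bibliographystyle{ACM-Reference-Format}
\bibliography{references}

\end{document}
```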
|11.11.2019||Hu Mingzhe||2019, Zexiang Xu et al., Deep View Synthesis From Sparse Photometric Images, ACM Trans. Graph|
|11.11.2019||Daniel Mayau||2019, Meka et al., Deep Reflectance Fields - High-Quality Facial Reflectance Field Inference From Color Gradient Illumination, ACM Trans. Graph|
|11.11.2019||Lisa Kaldich||2019, Philip et al., Multi-View Relighting Using A Geometry-Aware Network, ACM Trans. Graph|
|11.11.2019||Domenik Popfinger||2019, LeGendre et al., DeepLight: Learning Illumination for Unconstrained Mobile Mixed Reality, CVPR|
|16.12.2019||Philipp Altrogge||2019, Koskela et al., Blockwise Multi-Order Feature Regression for Real-Time Path Tracing Reconstruction, ACM Trans. Graph|
|16.12.2019||Raphael Penz||2019, Frühstück et al., TileGAN: Synthesis of Large-Scale Non-Homogeneous Textures, arXiv.org|
|16.12.2019||Alex Mueller||2019, Thies et al., Deferred Neural Rendering: Image Synthesis using Neural Textures, arXiv.org|
|16.12.2019||Marcel Kollovieh||2019, Rainer et al., Neural BTF Compression and Interpolation, Computer Graphics Forum|
|23.12.2019||Felix Merkl||2018, Wang et al., Video-to-Video Synthesis, arXiv.org|
|23.12.2019||Antonio Oroz||2019, Li et al., Learning the Depths of Moving People by Watching Frozen People, CVPR|
|13.01.2020||Jakob Robbiani||2019, Vicini et al., A Learned Shape-Adaptive Subsurface Scattering Model, ACM Trans. Graph|
|13.01.2020||Oskar Homburg||2019, Karras et al., A Style-Based Generator Architecture for Generative Adversarial Networks, CVPR|
|13.01.2020||Duc Thinh Nguyen||2018, Zhang et al., Mode-Adaptive Neural Networks for Quadruped Motion Control, ACM Trans. Graph|
You can access the papers through TUM library's eAccess.
- Book: Bishop, Pattern Recognition and Machine Learning
- Book: Hastie et al., The Elements of Statistical Learning
- Online: Nielsen, Neural Networks and Deep Learning
- Online: Ruder, An Overview of Gradient Descent Optimization Algorithms