Seminar: Selected Topics in Machine Learning Research

(IN2107, IN4872)


  • Preliminary meeting: Wed, 29.01.2020, 17:30-18:30, MI HS1. Slides
  • Kick-off meeting: Thu, 23.04.2020, 12:00-14:00, MI 02.11.018
  • Seminar: 24.07.2020 9:00-18:00 and 25.07.2020 9:00-13:00, IGSSE seminar room (5530.EG.003)


This seminar is intended for Master's students only. You should have attended (and passed) the Machine Learning lecture (IN2064). Having attended Machine Learning for Graphs and Sequential Data (IN2323, formerly Mining Massive Datasets) is a plus.


The amount of research in machine learning has grown rapidly in recent years, uncovering many promising and successful research directions. In this seminar we will select and discuss a diverse set of topics from current research. Students will become acquainted with current machine learning research, explore new fields and ideas, and learn to analyze and critique recent publications.

To do so, each student will receive 2-5 research papers, which they should read and analyze carefully. Starting from these, they should explore the surrounding literature and summarize their findings, criticism, and research ideas in a 4-page, double-column paper. The students will then review each other's work to provide valuable feedback and criticism. Finally, each student will prepare a 25-minute presentation and present their work during a block seminar at the end of the semester.

You can find more information in the preliminary meeting slides.

Possible topics

Each topic can be skipped or split into multiple subtopics, depending on popularity.

Modern architectures:

  • Knowledge graph embeddings
  • Node embeddings
  • Multi-scale learning on graphs
  • Spectral graph neural networks
  • Message passing neural networks
  • Graph neural networks for molecules
  • Optimal transport for machine learning
  • Normalizing flows
  • Transformers
  • Active learning
  • Unsupervised machine translation

Properties of machine learning models:

  • Expressive power of graph neural networks
  • Equivariance in neural networks
  • Uncertainty
  • Explainability
  • Robustness