Program

Anna Harutyunyan, Reinforcement learning: an anti-tutorial

Abstract: In this “anti-tutorial”, we’ll examine the reinforcement learning framework through a critical lens. What are its core assumptions and narratives? What kind of intelligence does it truly model? What questions go unasked, and what answers remain out of reach?

We will then motivate and explore a complementary perspective, grounded in a different ontological starting point, and consider the new lenses it affords.

Finally, we’ll reflect on meta-principles for doing research that deliberately steps outside of its inherited frames.

 

Alex Iosevich, On discrete, continuous, and arithmetic aspects of Fourier uncertainty

Abstract: We are going to discuss the Fourier uncertainty principle in a variety of settings and apply the resulting estimates to the classical problem of exact signal recovery in electrical engineering. In the last two lectures, we are going to apply a refinement of these principles to the imputation of missing values in time series. Here theoretical results are combined with concrete Python programming to produce an imputation engine that will be tested on real-life data sets.
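As a rough illustration of the kind of Fourier-based imputation the abstract alludes to (this is a generic sketch, not the lecturers' engine): if a signal is approximately sparse in the Fourier domain, missing samples can be filled by alternating between re-imposing the observed values and projecting onto spectra with few nonzero frequencies.

```python
import numpy as np

def fourier_impute(x, mask, k, iters=200):
    """Fill missing entries (mask == False) of a signal assumed to be
    approximately k-sparse in the Fourier domain, by alternating
    projection onto the observed data and onto k-sparse spectra."""
    y = np.where(mask, x, 0.0)
    for _ in range(iters):
        X = np.fft.fft(y)
        keep = np.argsort(np.abs(X))[-k:]   # k largest-magnitude frequencies
        Xs = np.zeros_like(X)
        Xs[keep] = X[keep]
        y = np.real(np.fft.ifft(Xs))
        y[mask] = x[mask]                    # re-impose the observed samples
    return y

# demo: a two-tone signal (4 nonzero DFT bins) with four missing samples
t = np.arange(64)
sig = np.sin(2 * np.pi * 3 * t / 64) + 0.5 * np.sin(2 * np.pi * 7 * t / 64)
mask = np.ones(64, dtype=bool)
mask[[10, 20, 30, 40]] = False
rec = fourier_impute(sig, mask, k=4)
```

The sparsity level `k` and the iteration count are tuning choices; the lectures presumably derive when such recovery is exact from uncertainty-principle estimates.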

 

Gohar Kyureghyan, Mathematics of symmetric cryptography

Abstract: In the first two lectures, we give a brief introduction to symmetric cryptography. In the final lectures, we present some of its recent mathematical developments and challenges, including:
– how to measure the non-linearity of the mappings used
– constructions of optimal non-linear mappings
– an unexpectedly nice algebraic structure of the $\chi$-mapping, which is used in many modern cryptographic algorithms.

 

Gábor Lugosi, Introduction to Statistical Learning Theory

 

Charles Margossian, Bayesian Statistics: a practical introduction

Abstract: This lecture series introduces the key tenets of Bayesian statistics and showcases its application using the software Stan (https://mc-stan.org/). Two practical benefits of the Bayesian approach are its ability to incorporate prior information and its principled treatment of uncertainty. We’ll define what a Bayesian model is, show how to fit it to data using Markov chain Monte Carlo (MCMC), and explain how to check the quality of both the MCMC run and the fitted model. The course will include some coding demonstrations: students who bring their laptops will be encouraged to code along.
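To give a flavor of what MCMC does before the Stan demonstrations (this toy uses a plain random-walk Metropolis sampler, far simpler than Stan's Hamiltonian methods): for Bernoulli data with a Beta prior the posterior is known in closed form, so we can check the sampler's draws against the analytic answer.

```python
import numpy as np

def metropolis_beta_bernoulli(data, a=1.0, b=1.0, n_draws=20000, step=0.1, seed=0):
    """Random-walk Metropolis for the success probability theta of
    Bernoulli data with a Beta(a, b) prior."""
    rng = np.random.default_rng(seed)
    k, n = sum(data), len(data)

    def log_post(theta):
        if not 0.0 < theta < 1.0:
            return -np.inf
        # log prior + log likelihood, up to an additive constant
        return (a - 1 + k) * np.log(theta) + (b - 1 + n - k) * np.log(1 - theta)

    theta, draws = 0.5, []
    for _ in range(n_draws):
        prop = theta + step * rng.normal()
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop
        draws.append(theta)
    return np.array(draws[n_draws // 2:])   # discard the first half as warm-up

data = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]       # 7 successes in 10 trials
draws = metropolis_beta_bernoulli(data)
# with a Beta(1, 1) prior the posterior is Beta(8, 4), mean 8/12
```

Checking the Monte Carlo estimate against a known posterior like this is the simplest instance of the quality checks the course covers.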

 

Shant Navasardyan, Generative AI with Diffusion Models

Abstract: This short lecture series provides a mathematically grounded introduction to generative modeling with diffusion processes. We will build up from the foundational principles of diffusion models, covering both the theoretical underpinnings and key algorithmic ideas. The goal is to make sense of the core concepts—such as forward and reverse-time stochastic processes, score-based learning, and sampling techniques—while connecting them to major works that have shaped the field. Emphasis will be placed on clarity, mathematical rigor, and bridging gaps often found in the literature, making the material accessible to students and researchers with a solid background in probability and machine learning.
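The forward-time process the abstract mentions has a convenient closed form in the standard DDPM formulation: $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\varepsilon$. A minimal sketch (the linear noise schedule below is a common default, not necessarily the one used in the lectures):

```python
import numpy as np

def forward_diffusion(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form for a DDPM-style forward
    process: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # a common linear schedule
x0 = rng.normal(size=(5,))
xT, eps = forward_diffusion(x0, 999, betas, rng)
# by t = T, alpha_bar is tiny, so x_T is approximately pure Gaussian noise
```

Reverse-time sampling and score-based learning then amount to training a network to predict `eps` from `x_t` and inverting this process step by step.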

 

Razvan Pascanu, Intro to Deep Learning and LLMs

Abstract: In this series of lectures I’ll start by introducing a few basic concepts behind deep learning. In particular, I will provide some insights into the expressivity of neural networks, trying to answer why they might be a good choice for function approximation, and discuss the role of representation learning. This will be followed by a discussion of learnability, the core ideas behind backpropagation, and some of the counter-intuitive properties of the learning process. Afterwards I will move on to more modern architectures, going through some of the steps needed to arrive at typical modern DL systems, including normalization layers, skip connections, and attention layers. In the last part of my lectures I will talk about transformers and recurrent models, in particular recent state-space models as an alternative to the more traditional transformer architecture. Finally, we will have an open discussion about current models, what they can and cannot do, and their implications for everyday life.
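The attention layers mentioned above reduce, in their simplest single-head form, to scaled dot-product attention: each query produces a weighted average of the values, with weights given by a softmax over query-key similarities. A minimal NumPy sketch (omitting the learned projections and multi-head structure of real transformer layers):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries of dimension 8
K = rng.normal(size=(6, 8))   # 6 keys
V = rng.normal(size=(6, 8))   # 6 values
out = attention(Q, K, V)      # shape (4, 8): one output per query
```

The 1/sqrt(d) scaling keeps the pre-softmax scores from growing with the dimension, which would otherwise saturate the softmax.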

 

Armen Vagharshakyan, Analytic methods in learning theory

Abstract: We present a primer on the geometry of numbers as a part of analytic number theory, with an emphasis on the origins of its problems. We showcase its application in learning theory as applied to time series.

 

Michal Valko, World Discovery Models and Gamification of Large Language Models