– Anna Harutyunyan, Reinforcement learning: an anti-tutorial
Abstract: In this “anti-tutorial”, we’ll examine the reinforcement learning framework through a critical lens. What are its core assumptions and narratives? What kind of intelligence does it truly model? What questions go unasked, and what answers remain out of reach?
We will then motivate and explore a complementary perspective, grounded in a different ontological starting point, and consider the new lenses it affords.
Finally, we’ll reflect on meta-principles for doing research that deliberately steps outside of its inherited frames.
– Alex Iosevich, On discrete, continuous, and arithmetic aspects of Fourier uncertainty
Abstract: We are going to discuss the Fourier uncertainty principle in a variety of settings and apply the resulting estimates to the classical problem of exact signal recovery in electrical engineering. In the last two lectures, we are going to apply a refinement of these principles to the imputation of missing values in time series. Here, theoretical results are combined with concrete Python programming to produce an imputation engine that will be tested on real-life data sets.
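To give a flavor of the kind of imputation engine the abstract alludes to, here is a minimal, hypothetical Python sketch (an illustration, not the lecturer's code; the function name and parameters are invented for this example). It fills missing samples of an approximately band-limited signal by alternating projections between the observed samples and a low-frequency Fourier support, in the spirit of the classical Papoulis-Gerchberg method for exact recovery:

    import numpy as np

    def bandlimited_impute(x, known_mask, bandwidth, n_iters=200):
        """Fill missing samples of a band-limited signal by alternating
        projections: project onto signals supported on the 'bandwidth'
        lowest frequencies, then re-impose the observed samples."""
        n = len(x)
        est = np.where(known_mask, x, 0.0)
        freqs = np.fft.fftfreq(n)
        band = np.zeros(n, dtype=bool)
        band[np.argsort(np.abs(freqs))[:bandwidth]] = True
        for _ in range(n_iters):
            spec = np.fft.fft(est)
            spec[~band] = 0.0                # project onto band-limited signals
            est = np.fft.ifft(spec).real
            est[known_mask] = x[known_mask]  # re-impose observed samples
        return est

    # Demo: a low-frequency signal with roughly 30% of samples missing.
    rng = np.random.default_rng(0)
    t = np.arange(256)
    signal = np.sin(2 * np.pi * 3 * t / 256) + 0.5 * np.cos(2 * np.pi * 5 * t / 256)
    mask = rng.random(256) > 0.3
    recovered = bandlimited_impute(signal, mask, bandwidth=21)
    print(np.max(np.abs(recovered[~mask] - signal[~mask])))

For a strictly band-limited signal with enough observed samples relative to the bandwidth, this iteration converges to the unique consistent signal; uncertainty-principle estimates are what control when such recovery is possible.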
– Gohar Kyureghyan, Mathematics of symmetric cryptography
– Gábor Lugosi, Introduction to Statistical Learning Theory
– Charles Margossian, Bayesian Statistics: a practical introduction
Abstract: This lecture series introduces the key tenets of Bayesian statistics and showcases their application using the software Stan (https://mc-stan.org/). Two practical benefits of the Bayesian approach are its ability to incorporate prior information and its principled treatment of uncertainty. We'll define what a Bayesian model is and show how to fit it to data using Markov chain Monte Carlo (MCMC), how to check the quality of the MCMC run, and finally how to check the quality of the fitted model. The course will include some coding demonstrations: students who bring their laptops will be encouraged to code along.
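As a taste of what MCMC does, here is a hypothetical, self-contained Python sketch (not the course material, which uses Stan): random-walk Metropolis, the simplest MCMC algorithm, fitting the mean of a normal model with a normal prior, a case where the posterior is tractable so the sampler's output can be checked:

    import numpy as np

    # Bayesian model: y_i ~ Normal(mu, 1), prior mu ~ Normal(0, 10).
    rng = np.random.default_rng(1)
    y = rng.normal(loc=2.0, scale=1.0, size=50)

    def log_posterior(mu):
        log_prior = -0.5 * (mu / 10.0) ** 2
        log_lik = -0.5 * np.sum((y - mu) ** 2)
        return log_prior + log_lik

    # Random-walk Metropolis: propose mu' ~ Normal(mu, step) and accept
    # with probability min(1, p(mu' | y) / p(mu | y)).
    def metropolis(n_samples=5000, step=0.3, mu0=0.0):
        samples = np.empty(n_samples)
        mu, lp = mu0, log_posterior(mu0)
        accepted = 0
        for i in range(n_samples):
            prop = mu + step * rng.normal()
            lp_prop = log_posterior(prop)
            if np.log(rng.random()) < lp_prop - lp:
                mu, lp = prop, lp_prop
                accepted += 1
            samples[i] = mu
        return samples, accepted / n_samples

    draws, acc_rate = metropolis()
    burned = draws[1000:]  # discard burn-in
    print(f"posterior mean ~ {burned.mean():.3f}, acceptance rate {acc_rate:.2f}")

In Stan itself the same model is a few declarative lines and sampling is handled by dynamic Hamiltonian Monte Carlo; the point of a toy sampler like this is only to demystify what MCMC is doing underneath.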
– Shant Navasardyan, Generative AI with Diffusion Models
Abstract: This short lecture series provides a mathematically grounded introduction to generative modeling with diffusion processes. We will build up from the foundational principles of diffusion models, covering both the theoretical underpinnings and key algorithmic ideas. The goal is to make sense of the core concepts—such as forward and reverse-time stochastic processes, score-based learning, and sampling techniques—while connecting them to major works that have shaped the field. Emphasis will be placed on clarity, mathematical rigor, and bridging gaps often found in the literature, making the material accessible to students and researchers with a solid background in probability and machine learning.
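As a concrete anchor for the forward process mentioned above, here is a minimal numpy sketch (an illustration under standard DDPM conventions, not code from the lectures) of the closed-form forward noising step x_t = sqrt(alpha_bar_t) x_0 + sqrt(1 - alpha_bar_t) eps, which lets one sample any noise level directly without simulating the chain:

    import numpy as np

    # Linear variance schedule beta_1..beta_T, as in DDPM (Ho et al., 2020).
    T = 1000
    betas = np.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)  # alpha_bar_t = prod_{s<=t} (1 - beta_s)

    def forward_sample(x0, t, rng):
        """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I)."""
        eps = rng.normal(size=x0.shape)
        x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
        return x_t, eps

    rng = np.random.default_rng(0)
    x0 = rng.normal(size=(4,))            # a toy "data point"
    x_mid, _ = forward_sample(x0, 500, rng)
    x_end, _ = forward_sample(x0, 999, rng)
    print(alpha_bars[500], alpha_bars[999])  # signal fraction shrinks toward 0

Training then amounts to predicting eps from x_t and t; that predictor is, up to scaling, an estimate of the score of the noised marginal, which is the bridge to the score-based learning the abstract mentions.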
– Razvan Pascanu, Intro to Deep Learning and LLMs
– Armen Vagharshakyan, Analytic methods in learning theory
Abstract: We present a primer on the geometry of numbers as a part of analytic number theory, with an emphasis on the origins of its problems. We showcase its applications in learning theory, in particular to the analysis of time series.
– Michal Valko, World Discovery Models and Gamification of Large Language Models