Seminars and Colloquia by Series

Probability and variational methods in PDEs — optimal transport, regularity, and universality

Series
Job Candidate Talk
Time
Tuesday, December 12, 2023 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/96443370732
Speaker
Tobias Ried, Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany
In this talk I will present an overview of my research, highlighting two topics in more detail:
1. A purely variational approach to the regularity theory of optimal transportation, which is analogous to De Giorgi’s strategy for the regularity theory of minimal surfaces. I will show some interesting connections to Wasserstein barycenters, branched transport, and pattern formation in materials science, as well as applications in density functional theory. 
2. Variational methods for a singular stochastic PDE describing the magnetization ripple, a microstructure in thin-film ferromagnets triggered by the poly-crystallinity of the sample. I will describe how the universal character of the magnetization ripple can be addressed using variational methods based on Γ-convergence.

Staircases and cuspidal curves in symplectic four manifolds

Series
School of Mathematics Colloquium
Time
Friday, December 8, 2023 - 16:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Dusa McDuff, Barnard College, Columbia

Please Note: This colloquium will also be the opening talk for the 2023 Tech Topology Conference.

This talk will give an elementary introduction to my joint work with Kyler Siegel that shows how cuspidal curves in a symplectic manifold X such as the complex projective plane determine when an ellipsoid can be symplectically embedded into X.

"SAM as an Optimal Relaxation of Bayes" and "Lie Group updates for Learning Distributions on Machine Learning Parameters"

Series
Applied and Computational Mathematics Seminar
Time
Friday, December 8, 2023 - 11:00 for 1 hour (actually 50 minutes)
Location
https://gatech.zoom.us/j/98355006347
Speaker
Dr. Thomas Moellenhoff and Dr. Eren Mehmet Kıral, RIKEN

Please Note: Special time, due to the time zone difference with Japan. Joint with the SIAM GT Student Chapter Seminar.

Part I (SAM as an Optimal Relaxation of Bayes) Dr. Thomas Moellenhoff

Sharpness-aware minimization (SAM) and related adversarial deep-learning methods can drastically improve generalization, but their underlying mechanisms are not yet fully understood. In this talk, I will show how SAM can be interpreted as optimizing a relaxation of the Bayes objective in which the expected negative loss is replaced by its optimal convex lower bound, obtained via the so-called Fenchel biconjugate. This connection enables a new Adam-like extension of SAM that automatically obtains reasonable uncertainty estimates, while sometimes also improving accuracy.
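For readers unfamiliar with SAM itself, the basic update can be sketched in a few lines. This is a minimal illustration of the standard SAM step (ascend to a nearby "sharp" point, then descend using the gradient there) on a toy quadratic loss, not the Bayesian relaxation discussed in the talk; the step sizes `lr` and `rho` are illustrative choices.

```python
import numpy as np

def sam_step(w, loss_grad, lr=0.1, rho=0.05):
    """One sharpness-aware minimization (SAM) step: perturb the weights
    toward the locally worst-case direction, then descend using the
    gradient evaluated at the perturbed point."""
    g = loss_grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction, radius rho
    g_sharp = loss_grad(w + eps)                 # gradient at perturbed weights
    return w - lr * g_sharp

# Toy quadratic loss L(w) = 0.5 * ||w||^2, so grad L(w) = w.
grad = lambda w: w
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, grad)
print(np.linalg.norm(w))  # the norm shrinks toward the minimum at 0
```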

Part II (Lie Group updates for Learning Distributions on Machine Learning Parameters) Dr. Eren Mehmet Kıral

I will talk about our recent paper https://arxiv.org/abs/2303.04397 with Thomas Möllenhoff and Emtiyaz Khan, and other related results. Bayesian learning learns a distribution over the model parameters, allowing for different descriptions of the same data. This is contrary to classical learning, which "bets it all" on a single set of parameters when describing a given dataset and making predictions. We focus on classes of distributions that carry a transitive Lie group action given by pushforwards of an action on the parameter space. I will also specialize to a few concrete Lie groups and show distinct learning behaviors.

The Poisson point process and an application to semisimple symmetric spaces

Series
Job Candidate Talk
Time
Thursday, December 7, 2023 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 006; Streaming available via zoom
Speaker
Amanda Wilkens, UT Austin

Please Note: Link to join via Zoom: https://gatech.zoom.us/j/93394018195?pwd=MGJZaWIwQUhVYW9ZZDFoWWFOc29EZz09 Meeting ID: 933 9401 8195 Passcode: SoM

We define and motivate the Poisson point process, which is, informally, a “maximally random” scattering of points in some locally compact, second countable space. We introduce the ideal Poisson--Voronoi tessellation (IPVT), a new random object with intriguing geometric properties when considered on a semisimple symmetric space (the hyperbolic plane, for example). In joint work with Mikolaj Fraczyk and Sam Mellick, we use the IPVT to prove the minimal number of generators of a torsion-free lattice in a higher rank, semisimple Lie group is sublinear in the co-volume of the lattice. We give some intuition for the proof. No prior knowledge on Poisson point processes or symmetric spaces will be assumed.
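The informal description above has a simple constructive counterpart on a bounded window: the number of points is Poisson with mean (intensity × area), and given that count the points are i.i.d. uniform. A minimal simulation sketch (the window dimensions and intensity are arbitrary illustrative values):

```python
import numpy as np

def poisson_point_process(intensity, width, height, rng):
    """Sample a homogeneous Poisson point process on [0, width] x [0, height]:
    draw a Poisson(intensity * area) number of points, then place them
    i.i.d. uniformly in the window."""
    n = rng.poisson(intensity * width * height)
    xs = rng.uniform(0.0, width, n)
    ys = rng.uniform(0.0, height, n)
    return np.column_stack([xs, ys])

rng = np.random.default_rng(0)
pts = poisson_point_process(intensity=5.0, width=2.0, height=3.0, rng=rng)
print(pts.shape)  # about 5 * 6 = 30 points on average
```

The "maximal randomness" shows up in the two defining properties this construction realizes: counts in disjoint regions are independent, and each count is Poisson with mean proportional to the region's volume.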

Asymmetric Distribution of Extreme Values of Cubic L-functions on the 1-line

Series
Number Theory
Time
Wednesday, December 6, 2023 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Chantal David, Concordia University

A fundamental problem in analytic number theory is to calculate the maximal value of L-functions at a given point. For L-functions associated to quadratic Dirichlet characters at s = 1, the upper bounds and Ω-results of Littlewood differ by a factor of 2, and it is a long-standing (and still unsolved) problem to find out which one is closer to the truth. The important work of Granville and Soundararajan, who model the distribution of L(1, χ) by the distribution of random Euler products L(1, X) for random variables X(p) attached to each prime, shed more light to the question. We use similar techniques to study the distribution of L(1, χ) for cubic Dirichlet characters. Unlike the quadratic case, there is an asymmetry between lower and upper bounds for the cubic case, and small values are less probable than large values. This is a joint work with P. Darbar, M. Lalin and A. Lumley.

Spectral monotonicity under Gaussian convolution

Series
Analysis Seminar
Time
Wednesday, December 6, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Eli Putterman, Tel Aviv University

The Poincaré constant of a body, or more generally a probability density, in $\mathbb R^n$ measures how "spread out" the body is - for instance, this constant controls how long it takes heat to flow from an arbitrary point in the body to any other. It's thus intuitively reasonable that convolving a "sufficiently nice" measure with a Gaussian, which tends to flatten and smooth out the measure, would increase its Poincaré constant ("spectral monotonicity"). We show that this is true if the original measure is log-concave, via two very different strategies - a dynamic variant of Bakry-Émery's $\Gamma$-calculus, and a mass-transportation argument. Moreover, we show that the dynamic $\Gamma$-calculus argument can also be extended to the discrete setting of measures on $\mathbb Z$, and that spectral monotonicity holds in this setting as well. Some results joint with B. Klartag.
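For concreteness, one standard way to state the objects in this abstract (a sketch of the setting; here $\gamma_t$ denotes the centered Gaussian on $\mathbb{R}^n$ with covariance $t\,\mathrm{Id}$):

```latex
% C_P(\mu) is the smallest constant such that, for all smooth f,
\mathrm{Var}_\mu(f) \;\le\; C_P(\mu) \int_{\mathbb{R}^n} |\nabla f|^2 \, d\mu.
% Spectral monotonicity for log-concave \mu:
t \;\longmapsto\; C_P(\mu * \gamma_t) \quad \text{is nondecreasing.}
```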

Critical points of high-dimensional random functions

Series
Job Candidate Talk
Time
Tuesday, December 5, 2023 - 16:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Benjamin McKenna, Harvard University

How many critical points does a random function from R^N to R have for large N? Such functions appear naturally in probability, data science, and mathematical physics. Questions like this one, which have attracted longstanding interest from both physicists and mathematicians, can help explain both physical phase transitions and algorithmic thresholds. I will give an overview of this "landscape complexity" program, its motivations, and recent progress coming from random matrices.

Subsquares in random Latin squares and rectangles

Series
Graph Theory Seminar
Time
Tuesday, December 5, 2023 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Alex Divoux, Georgia Tech

A $k \times n$ partial Latin rectangle is \textit{$C$-sparse} if the number of nonempty entries in each row and column is at most $C$ and each symbol is used at most $C$ times. We prove that the probability a uniformly random $k \times n$ Latin rectangle, where $k < (1/2 - \alpha)n$, contains a $\beta n$-sparse partial Latin rectangle with $\ell$ nonempty entries is $(\frac{1 \pm \varepsilon}{n})^\ell$ for sufficiently large $n$ and sufficiently small $\beta$. Using this result, we prove that a uniformly random order-$n$ Latin square asymptotically almost surely has no Latin subsquare of order greater than $c\sqrt{n\log n}$ for an absolute constant $c$. This is joint work with Tom Kelly, Camille Kennedy, and Jasdeep Sidhu.

Quantitative acceleration of convergence to invariant distribution by irreversibility in diffusion processes

Series
PDE Seminar
Time
Tuesday, December 5, 2023 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Yuqing Wang, Georgia Tech

Sampling from the Gibbs distribution is a long-standing problem studied across various fields. Among many sampling algorithms, Langevin dynamics plays a crucial role, particularly for high-dimensional target distributions. In practical applications, accelerating sampling dynamics is always desirable. It has long been studied that adding an irreversible component to reversible dynamics, such as Langevin, can accelerate convergence. Concrete constructions of irreversible components have also been explored in specific scenarios. However, a general strategy for such construction is still elusive. In this talk, I will introduce the concept of leveraging irreversibility to accelerate general dynamics, along with the quantification of irreversible dynamics. Our theory is mainly based on designing a modified entropy functional originally developed for linear kinetic equations (Dolbeault et al., 2015).
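The classical construction alluded to above can be sketched concretely: adding a drift term $J\nabla V$ with skew-symmetric $J$ to overdamped Langevin dynamics leaves the Gibbs measure $\propto e^{-V}$ invariant (the perturbation is divergence-free with respect to it) while breaking reversibility. A minimal Euler-Maruyama sketch for a Gaussian target; the particular $J$, step size, and run length are illustrative assumptions, not the construction from the talk:

```python
import numpy as np

def nonreversible_langevin(grad_V, J, x0, rng, dt=1e-3, n_steps=100_000):
    """Euler-Maruyama discretization of the nonreversible Langevin diffusion
        dX_t = -(I + J) grad V(X_t) dt + sqrt(2) dW_t,
    with J skew-symmetric (J^T = -J). Skew-symmetry makes J grad V
    divergence-free w.r.t. exp(-V), so the invariant Gibbs measure is
    unchanged, but the dynamics are no longer reversible."""
    d = len(x0)
    x = np.array(x0, dtype=float)
    A = np.eye(d) + J
    traj = np.empty((n_steps, d))
    for i in range(n_steps):
        x = x - A @ grad_V(x) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(d)
        traj[i] = x
    return traj

# Anisotropic Gaussian target V(x) = 0.5 x^T S^{-1} x, with an illustrative
# choice of skew-symmetric perturbation J.
Sinv = np.diag([1.0, 4.0])
grad_V = lambda x: Sinv @ x
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
rng = np.random.default_rng(1)
traj = nonreversible_langevin(grad_V, J, x0=[2.0, 2.0], rng=rng)
print(traj[50_000:].mean(axis=0))  # sample mean near the target mean (0, 0)
```

Setting `J` to the zero matrix recovers standard (reversible) overdamped Langevin; the talk concerns how to choose and quantify such perturbations in general.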
