Seminars and Colloquia by Series

Sum-Product with few primes

Series
Additional Talks and Lectures
Time
Monday, November 27, 2023 - 16:00 for 1.5 hours (actually 80 minutes)
Location
Skiles 005
Speaker
Brandon Hanson, University of Maine

This talk concerns improving sum-product exponents for sets $A$ of integers under the condition that each element of $A$ has no more than $k$ prime factors. The argument combines combinatorics, harmonic analysis, and number theory.
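As background (a standard statement, not the talk's new result): the sum-product phenomenon of Erdős and Szemerédi asserts that for a finite set $A$ of integers, with sumset $A+A=\{a+b : a,b\in A\}$ and product set $A\cdot A=\{ab : a,b\in A\}$, one has

$$\max\big(|A+A|,\,|A\cdot A|\big)\ \gg\ |A|^{1+c}$$

for some absolute constant $c>0$, and conjecturally for every $c<1$. "Improving sum-product exponents" means enlarging the admissible $c$, here under the extra hypothesis bounding the number of prime factors.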

Generative Machine Learning Models for Uncertainty Quantification

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 27, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Feng Bao, Florida State University

Generative machine learning models, including variational auto-encoders (VAEs), normalizing flows (NFs), generative adversarial networks (GANs), and diffusion models, have dramatically improved the quality and realism of generated content, whether images, text, or audio. In science and engineering, generative models can serve as powerful tools for probability density estimation or high-dimensional sampling, capabilities that are critical in uncertainty quantification (UQ), e.g., Bayesian inference for parameter estimation. Studies on generative models for image/audio synthesis focus on improving the quality of individual samples, which often makes the models complicated and difficult to train. UQ tasks, on the other hand, usually focus on accurate approximation of statistics of interest without regard to the quality of any individual sample, so direct application of existing generative models to UQ tasks may lead to inaccurate approximations or an unstable training process. To alleviate these challenges, we developed several new generative diffusion models for various UQ tasks, including diffusion-model-assisted supervised learning of generative models, a score-based nonlinear filter for recursive Bayesian inference, and a training-free ensemble score filter for tracking high-dimensional stochastic dynamical systems. We will demonstrate the effectiveness of these methods on a range of UQ tasks, including density estimation, learning stochastic dynamical systems, and data assimilation problems.
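To fix ideas, here is a minimal NumPy sketch (generic notation, not the speaker's implementation) of how a score-based diffusion model produces samples once an approximation of the score $\nabla_x \log p_t(x)$ is in hand, by integrating the reverse-time SDE with Euler-Maruyama:

import numpy as np

def reverse_diffusion_sample(score, n_samples, dim, n_steps=500, T=1.0, seed=0):
    """Sample by integrating a reverse-time SDE with Euler-Maruyama.

    Forward model: dx = dW (variance-exploding, unit diffusion), so p_T is
    approximately N(0, T*I).  Running time backward from t = T to t = 0,
    each step adds score(x, t) * dt plus fresh Gaussian noise, where
    score(x, t) is assumed to approximate grad_x log p_t(x), e.g. a
    trained network.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.sqrt(T) * rng.standard_normal((n_samples, dim))  # start from ~p_T
    for i in range(n_steps):
        t = T - i * dt
        x = x + score(x, t) * dt + np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

For UQ purposes, the virtue of such samplers is that statistics of the target law can be estimated from the ensemble without any individual sample needing to be of high quality.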

Chebyshev varieties

Series
Algebra Seminar
Time
Monday, November 27, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Chiara Meroni, Harvard John A. Paulson School of Engineering and Applied Sciences

Please Note: There will be a pre-seminar (aimed toward grad students and postdocs) from 11 am to 11:30 am in Skiles 006.

Chebyshev polynomials offer a natural basis for solving polynomial equations. When we switch from monomials to Chebyshev polynomials, we can replace toric varieties with Chebyshev varieties. We will introduce these objects and discuss their main properties, including equations, dimension, and degree. This is an ongoing project with Zaïneb Bel-Afia and Simon Telen.
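As background, the Chebyshev polynomials (of the first kind) satisfy $T_0(x)=1$, $T_1(x)=x$, and the three-term recurrence $T_{k+1}(x)=2x\,T_k(x)-T_{k-1}(x)$, equivalently $T_n(\cos\theta)=\cos(n\theta)$. A small NumPy sketch of the basis (illustrative only, not tied to the talk's results):

import numpy as np

def chebyshev_T(n, x):
    """Evaluate T_0, ..., T_n at the points x via the recurrence
    T_{k+1}(x) = 2 x T_k(x) - T_{k-1}(x)."""
    x = np.asarray(x, dtype=float)
    T = [np.ones_like(x), x.copy()]
    for k in range(1, n):
        T.append(2 * x * T[k] - T[k - 1])
    return np.stack(T[: n + 1])

# Sanity check against the defining identity T_n(cos theta) = cos(n theta)
theta = np.linspace(0.0, np.pi, 7)
assert np.allclose(chebyshev_T(4, np.cos(theta))[4], np.cos(4 * theta))

Expanding polynomial systems in this basis rather than in monomials is what replaces the toric picture with Chebyshev varieties.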

Physics-inspired learning of differential equations from data.

Series
CDSNS Colloquium
Time
Friday, November 24, 2023 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 249
Speaker
Matthew Golden, Georgia Tech

Please Note: Seminar is in-person. Zoom link available: https://gatech.zoom.us/j/91390791493?pwd=QnpaWHNEOHZTVXlZSXFkYTJ0b0Q0UT09

Continuum theories of physics are traditionally described by local partial differential equations (PDEs). In this talk I will discuss the Sparse Physics-Informed Discovery of Empirical Relations (SPIDER) algorithm: a general method combining the weak formulation, symmetry covariance, and sparse regression to discover quantitatively accurate and qualitatively simple PDEs directly from data. This method is applied to simulated 3D turbulence and to experimental 2D active turbulence. A complete mathematical model is found in both cases.
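The sparse-regression ingredient can be illustrated with sequentially thresholded least squares, a standard routine in data-driven model discovery (a generic sketch, not the SPIDER code; in practice each column of the library matrix would hold one candidate term evaluated in the weak form against test functions):

import numpy as np

def sparse_regression(Theta, b, threshold=0.1, n_iters=10):
    """Sequentially thresholded least squares: solve Theta @ xi ~ b, then
    repeatedly zero out coefficients below `threshold` and refit on the
    surviving columns, yielding a sparse (hence interpretable) relation."""
    xi, *_ = np.linalg.lstsq(Theta, b, rcond=None)
    for _ in range(n_iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        active = ~small
        if not active.any():
            break
        xi[active], *_ = np.linalg.lstsq(Theta[:, active], b, rcond=None)
    return xi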

The most likely evolution of diffusing and vanishing particles: Schrodinger Bridges with unbalanced marginals

Series
PDE Seminar
Time
Tuesday, November 21, 2023 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Yongxin Chen, Georgia Tech

Stochastic flows of an advective-diffusive nature are ubiquitous in biology and the physical sciences. Of particular interest is the problem of reconciling observed marginal distributions with a given prior, posed by E. Schroedinger in 1931/32 and known as the Schroedinger Bridge Problem (SBP). It turns out that Schroedinger's problem can be viewed as a problem in large deviations, a modeling problem, as well as a control problem. Due to the fundamental significance of this problem, interest in SBP and in its deterministic (zero-noise limit) counterpart of Optimal Transport (OT) has in recent years enticed a broad spectrum of disciplines, including physics, stochastic control, computer science, and geometry. Yet, while the mathematics and applications of SBP/OT have been developing at a considerable pace, accounting for marginals of unequal mass has received scant attention; the problem of interpolating between “unbalanced” marginals has been approached by introducing source/sink terms into the transport equations, in an ad hoc manner, chiefly driven by applications in image registration. Nevertheless, losses are inherent in many physical processes, and thereby models that account for lossy transport may also need to be reconciled with observed marginals following Schroedinger's dictum; that is, to adjust the probability of trajectories of particles, including those that do not make it to the terminal observation point, so that the updated law represents the most likely way that particles may have been transported, or vanished, at some intermediate point.

Thus, the purpose of this talk is to present recent results on stochastic evolutions with losses, whereupon particles are “killed” (jump into a coffin/extinction state) according to a probabilistic law, and thereby mass is gradually lost along their stochastically driven flow. Through a suitable embedding we turn the problem into an SBP for stochastic processes that combine diffusive and jump characteristics. Then, following a large-deviations formalism in the style of Schroedinger, given a prior law that allows for losses, we explore the most probable evolution of particles along with the most likely killing rate as the particles transition between the specified marginals. Our approach differs sharply from previous work involving a Feynman-Kac multiplicative reweighting of the reference measure which, as we argue, is far from Schroedinger's quest. We develop a suitable Schroedinger system of coupled PDEs for this problem, an iterative Fortet-IPF-Sinkhorn algorithm for computations, and finally formulate and solve a related fluid-dynamic control problem for the flow of one-time marginals where both the drift and the new killing rate play the role of control variables. Joint work with Tryphon Georgiou and Michele Pavon.
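For orientation, in the discrete, lossless setting the Fortet-IPF-Sinkhorn iteration mentioned above takes a particularly simple form; the talk's algorithm extends this picture to priors with killing. A minimal sketch of the classical case:

import numpy as np

def sinkhorn_bridge(K, mu, nu, n_iters=500):
    """Classical Fortet/IPF/Sinkhorn iteration for a balanced Schroedinger
    bridge on a finite state space: find positive scalings u, v so that the
    coupling diag(u) @ K @ diag(v) has marginals mu and nu, where K is the
    (entrywise positive) prior transition kernel, e.g. a heat kernel."""
    u = np.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)  # match the terminal marginal
        u = mu / (K @ v)    # match the initial marginal
    return u[:, None] * K * v[None, :]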

Machine learning, optimization, & sampling through a geometric lens

Series
School of Mathematics Colloquium
Time
Monday, November 20, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Suvrit Sra, MIT & TU Munich

Please Note: Joint {School of Math Colloquium} and {Applied & Computational Math Seminar}. Note: *special time*. Speaker will present in person.

Geometry arises in myriad ways within machine learning and related areas. In this talk I will focus on settings where geometry helps us understand problems in machine learning, optimization, and sampling. For instance, when sampling from densities supported on a manifold, understanding geometry and the impact of curvature are crucial; surprisingly, progress on geometric sampling theory helps us understand certain generalization properties of SGD for deep learning! Another fascinating viewpoint afforded by geometry is in non-convex optimization: geometry can help us make training algorithms more practical (e.g., in deep learning), reveal tractability despite non-convexity (e.g., via geodesically convex optimization), or simply help us understand existing methods better (e.g., SGD, eigenvector computation, etc.).

Ultimately, I hope to offer the audience some insights into geometric thinking and share with them some new tools that help us design, understand, and analyze models and algorithms. To make the discussion concrete I will recall a few foundational results arising from our research, provide several examples, and note some open problems.

––
Bio: Suvrit Sra is an Alexander von Humboldt Professor of Artificial Intelligence at the Technical University of Munich (Germany) and an Associate Professor of EECS at MIT (USA), where he is also a member of the Laboratory for Information and Decision Systems (LIDS) and of the Institute for Data, Systems, and Society (IDSS). He obtained his PhD in Computer Science from the University of Texas at Austin. Before TUM & MIT, he was a Senior Research Scientist at the Max Planck Institute for Intelligent Systems, Tübingen, Germany. He held visiting positions at UC Berkeley (EECS) and Carnegie Mellon University (Machine Learning Department) during 2013-2014. His research bridges mathematical topics such as differential geometry, matrix analysis, convex analysis, probability theory, and optimization with machine learning. He founded the OPT (Optimization for Machine Learning) series of workshops, held from 2008 to 2017 at the NeurIPS conference, and has co-edited a book of the same name (MIT Press, 2011). He is also a co-founder and chief scientist of Pendulum, a global AI+logistics startup.

Machine learning, optimization, & sampling through a geometric lens

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 20, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Suvrit Sra, MIT & TU Munich

Please Note: Joint {Applied & Computational Math Seminar} and {School of Math Colloquium}. Speaker will present in person.

Geometry arises in myriad ways within machine learning and related areas. In this talk I will focus on settings where geometry helps us understand problems in machine learning, optimization, and sampling. For instance, when sampling from densities supported on a manifold, understanding geometry and the impact of curvature are crucial; surprisingly, progress on geometric sampling theory helps us understand certain generalization properties of SGD for deep learning! Another fascinating viewpoint afforded by geometry is in non-convex optimization: geometry can help us make training algorithms more practical (e.g., in deep learning), reveal tractability despite non-convexity (e.g., via geodesically convex optimization), or simply help us understand existing methods better (e.g., SGD, eigenvector computation, etc.).

Ultimately, I hope to offer the audience some insights into geometric thinking and share with them some new tools that help us design, understand, and analyze models and algorithms. To make the discussion concrete I will recall a few foundational results arising from our research, provide several examples, and note some open problems.

––
Bio: Suvrit Sra is an Alexander von Humboldt Professor of Artificial Intelligence at the Technical University of Munich (Germany) and an Associate Professor of EECS at MIT (USA), where he is also a member of the Laboratory for Information and Decision Systems (LIDS) and of the Institute for Data, Systems, and Society (IDSS). He obtained his PhD in Computer Science from the University of Texas at Austin. Before TUM & MIT, he was a Senior Research Scientist at the Max Planck Institute for Intelligent Systems, Tübingen, Germany. He held visiting positions at UC Berkeley (EECS) and Carnegie Mellon University (Machine Learning Department) during 2013-2014. His research bridges mathematical topics such as differential geometry, matrix analysis, convex analysis, probability theory, and optimization with machine learning. He founded the OPT (Optimization for Machine Learning) series of workshops, held from 2008 to 2017 at the NeurIPS conference, and has co-edited a book of the same name (MIT Press, 2011). He is also a co-founder and chief scientist of Pendulum, a global AI+logistics startup.

Geometry and the complexity of matrix multiplication

Series
Algebra Seminar
Time
Monday, November 20, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Austin Conner, Harvard University

Please Note: There will be a pre-seminar (aimed toward grad students and postdocs) from 11 am to 11:30 am in Skiles 006.

Determining the computational complexity of matrix multiplication has been one of the central open problems in theoretical computer science ever since Strassen presented, in 1969, an algorithm for multiplying n by n matrices using only O(n^2.81) arithmetic operations. The data describing this method is equivalently an expression of the structure tensor of the 2 by 2 matrix algebra as a sum of 7 decomposable tensors. Any such decomposition of an n by n matrix algebra yields a Strassen-type algorithm, and Strassen showed that such algorithms are general enough to determine the exponent of matrix multiplication. Bini later showed all of the above remains true when we allow the decomposition to depend on a parameter and take limits.
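For concreteness, Strassen's seven products translate directly into a recursive algorithm; a standard sketch for power-of-two sizes:

import numpy as np

def strassen(A, B):
    """Multiply square matrices of power-of-two size using Strassen's seven
    recursive products instead of the classical eight, which is what gives
    the O(n^{log2 7}) = O(n^2.81) operation count."""
    n = A.shape[0]
    if n == 1:
        return A * B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4,           M1 - M2 + M3 + M6]])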

I present a recent technique for proving lower bounds for this decomposition problem, border apolarity. Two key ideas behind this technique are (i) to consider not just the sequence of decompositions, but the sequence of ideals of the point sets determining the decompositions, and (ii) to exploit the symmetry of the matrix multiplication tensor to insist that the limiting ideal has an extremely restrictive structure. I discuss its applications to the matrix multiplication tensor and to other tensors potentially useful for obtaining upper bounds via Strassen's laser method. This talk discusses joint work with JM Landsberg, Alicia Harper, and Amy Huang.

A Polynomial Method for Counting Colorings of $S$-labeled Graphs

Series
Combinatorics Seminar
Time
Friday, November 17, 2023 - 15:15 for 1 hour (actually 50 minutes)
Location
Skiles 308
Speaker
Hemanshu Kaul, Illinois Institute of Technology

The notion of $S$-labeling, where $S$ is a subset of the symmetric group, is a common generalization of signed $k$-coloring, signed $\mathbb{Z}_k$-coloring, DP (or correspondence) coloring, group coloring, and coloring of gain graphs that was introduced in 2019 by Jin, Wong, and Zhu. In this talk we use a well-known theorem of Alon and Füredi to present an algebraic technique for bounding the number of colorings of an $S$-labeled graph from below. While applicable in the broad context of counting colorings of $S$-labeled graphs, we will focus on the case where $S$ is a symmetric group, which corresponds to DP-coloring (or correspondence coloring) of graphs, and the case where $S$ is a set of linear permutations, which is applicable to the coloring of signed graphs, among others.
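For reference, one common formulation of the Alon-Füredi theorem (stated as background; this is the standard form, possibly not the exact variant used in the talk): if $\mathbb{F}$ is a field, $A_1,\dots,A_n$ are finite nonempty subsets of $\mathbb{F}$, and $P\in\mathbb{F}[x_1,\dots,x_n]$ does not vanish identically on the grid $A_1\times\cdots\times A_n$, then the number of points of the grid at which $P$ is nonzero is at least

$$\min\left\{\prod_{i=1}^n q_i \;:\; 1\le q_i\le |A_i|,\ \ \sum_{i=1}^n q_i\ \ge\ \sum_{i=1}^n |A_i|-\deg P\right\}.$$

Encoding proper colorings as the non-vanishing points of a suitable graph polynomial turns such statements into lower bounds on the number of colorings.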

 

This technique allows us to prove exponential lower bounds on the number of colorings of any $S$-labeling of graphs that satisfy certain sparsity conditions. We apply these to give exponential lower bounds on the number of DP-colorings (and consequently, the number of list colorings or usual colorings) of families of planar graphs, and on the number of colorings of families of signed (planar) graphs. These lower bounds either improve previously known results or are the first known results of their kind.

This is joint work with Samantha Dahlberg and Jeffrey Mudrock.

Controlled SPDEs: Peng’s Maximum Principle and Numerical Methods

Series
SIAM Student Seminar
Time
Friday, November 17, 2023 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Lukas Wessels, Georgia Tech

In this talk, we consider a finite-horizon optimal control problem for stochastic reaction-diffusion equations. First, we apply the spike variation method, which relies on introducing the first- and second-order adjoint states. We give a novel characterization of the second-order adjoint state as the solution to a backward SPDE. Using this representation, we prove the maximum principle for controlled SPDEs.

In the second part, we present a numerical algorithm that allows the efficient approximation of optimal controls in the case of stochastic reaction-diffusion equations with additive noise by first reducing the problem to controls of feedback form and then approximating the feedback function using finitely based approximations. Numerical experiments using artificial neural networks as well as radial basis function networks illustrate the performance of our algorithm.
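To give a flavor of the feedback approximation (an illustrative toy with assumed centers, widths, and a plain ridge fit; the finitely based approximations and training procedure in the actual work may differ):

import numpy as np

class RBFFeedback:
    """Toy radial basis function network u(y) = sum_j w_j exp(-|y-c_j|^2/(2 s^2))
    mapping an observed state y to a control value: one generic way to
    parametrize a feedback law by finitely many basis functions."""

    def __init__(self, centers, width):
        self.centers = np.asarray(centers, dtype=float)  # (m, d) RBF centers
        self.width = float(width)
        self.weights = np.zeros(len(self.centers))

    def _features(self, y):
        d2 = ((y[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.width ** 2))      # (n, m) features

    def fit(self, y, u_target, reg=1e-6):
        """Ridge-regress the weights so the network matches target controls."""
        Phi = self._features(np.asarray(y, dtype=float))
        G = Phi.T @ Phi + reg * np.eye(Phi.shape[1])
        self.weights = np.linalg.solve(G, Phi.T @ np.asarray(u_target, dtype=float))

    def __call__(self, y):
        return self._features(np.asarray(y, dtype=float)) @ self.weights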

This talk is based on joint work with Wilhelm Stannat and Alexander Vogler. Talk will also be streamed: https://gatech.zoom.us/j/93808617657?pwd=ME44NWUxbk1NRkhUMzRsK3c0ZGtvQT09
