Seminars and Colloquia by Series

Generative modeling through time reversal and reflection of diffusion processes

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 29, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Nicole Yang, Emory University

Please Note: Speaker will present in person.

In this talk, we discuss generative modeling algorithms motivated by the time reversal and reflection properties of diffusion processes. Score-based diffusion models (SBDMs) have recently emerged as state-of-the-art approaches for image generation. We develop SBDMs in the infinite-dimensional setting, that is, we model the training data as functions supported on a rectangular domain. Besides the quest for generating images at ever higher resolution, our primary motivation is to create a well-posed infinite-dimensional learning problem so that we can discretize it consistently at multiple resolution levels. We demonstrate how to overcome two shortcomings of current SBDM approaches in the infinite-dimensional setting by ensuring the well-posedness of the forward and reverse processes, and we derive the convergence of the multilevel training approximation. We illustrate that approximating the score function with an operator network is beneficial for multilevel training.
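As a point of reference for the finite-dimensional setting the talk builds on, the sketch below simulates the standard forward Ornstein-Uhlenbeck noising process and its reverse-time SDE for a one-dimensional Gaussian data distribution, where the score is available in closed form. All parameter choices (beta, T, step counts) are illustrative assumptions, and no operator network or infinite-dimensional machinery from the talk is involved.

```python
# Minimal finite-dimensional sketch of a score-based diffusion model (SBDM):
# forward Ornstein-Uhlenbeck noising and reverse-time sampling.  For a
# Gaussian data distribution the score of the noised marginal is known in
# closed form, so no neural network is needed for this illustration.
import numpy as np

rng = np.random.default_rng(0)
beta, T, n_steps, n_samples = 1.0, 8.0, 800, 10_000
dt = T / n_steps
mu0, sig0 = 2.0, 0.5          # "data" distribution N(mu0, sig0^2)

def marginal(t):
    """Mean/variance of the OU marginal p_t when p_0 = N(mu0, sig0^2)."""
    m = mu0 * np.exp(-0.5 * beta * t)
    v = sig0**2 * np.exp(-beta * t) + 1.0 - np.exp(-beta * t)
    return m, v

def score(x, t):
    """Exact score grad_x log p_t(x) for this Gaussian example."""
    m, v = marginal(t)
    return -(x - m) / v

# Reverse-time Euler-Maruyama: start from the (approximate) prior N(0, 1)
# and integrate dx = [-1/2 beta x - beta * score] dt + sqrt(beta) dW backward.
x = rng.standard_normal(n_samples)
for k in range(n_steps, 0, -1):
    t = k * dt
    drift = -0.5 * beta * x - beta * score(x, t)
    x = x - drift * dt + np.sqrt(beta * dt) * rng.standard_normal(n_samples)

print("generated mean/std:", x.mean(), x.std(), " target:", mu0, sig0)
```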

In the second part of this talk, we propose the Reflected Schrödinger Bridge algorithm: an entropy-regularized optimal transport approach tailored to generating data within diverse bounded domains. We derive reflected forward-backward stochastic differential equations with Neumann and Robin boundary conditions, extend divergence-based likelihood training to bounded domains, and demonstrate its scalability in constrained generative modeling.
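For readers unfamiliar with reflected dynamics, here is a minimal sketch of the reflection mechanism itself: an Euler-Maruyama step whose iterates are folded back into the unit box, the discrete analogue of a Neumann (no-flux) boundary condition. It illustrates boundary reflection only, not the Reflected Schrödinger Bridge training procedure, and the step size and domain are assumptions made for the example.

```python
# Euler-Maruyama step for diffusion-type dynamics confined to [0, 1]^d by
# mirror reflection at the boundary.  Illustration of the reflection
# mechanism only, not the full Reflected Schrodinger Bridge algorithm.
import numpy as np

def reflect_into_unit_box(x):
    """Fold a point back into [0, 1]^d by repeated mirror reflection."""
    y = np.mod(x, 2.0)                      # period-2 sawtooth ...
    return np.where(y > 1.0, 2.0 - y, y)    # ... folded onto [0, 1]

def reflected_em_step(x, drift, sigma, dt, rng):
    x_new = x + drift(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return reflect_into_unit_box(x_new)

rng = np.random.default_rng(1)
x = rng.uniform(0.2, 0.8, size=(1000, 2))   # particles in [0, 1]^2
for _ in range(200):
    x = reflected_em_step(x, drift=lambda z: np.zeros_like(z),
                          sigma=1.0, dt=1e-3, rng=rng)
assert x.min() >= 0.0 and x.max() <= 1.0    # all particles stay in the box
```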

Monotone generative modeling via a geometry-preserving mapping

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 15, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Wonjun Lee, University of Minnesota, Twin Cities

Generative Adversarial Networks (GANs) are powerful tools for creating new content, but they face challenges such as sensitivity to starting conditions and mode collapse. To address these issues, we propose a deep generative model that utilizes the Gromov-Monge embedding (GME). It helps identify the low-dimensional structure of the underlying measure of the data and then map it, while preserving its geometry, into a measure in a low-dimensional latent space, which is then optimally transported to the reference measure. We guarantee the preservation of the underlying geometry by the GME and c-cyclical monotonicity of the generative map, where c is an intrinsic embedding cost employed by the GME. The latter property is a first step in guaranteeing better robustness to initialization of parameters and mode collapse. Numerical experiments demonstrate the effectiveness of our approach in generating high-quality images, avoiding mode collapse, and exhibiting robustness to different starting conditions.
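As a rough illustration of the geometry-preservation idea, the sketch below compares pairwise distances of a minibatch in data space with those of its latent image; adding such a distortion penalty to a generative loss is one schematic way to encourage a geometry-preserving map. The function names and the squared-distance penalty are illustrative choices, not the exact GME objective from the talk.

```python
# Schematic geometry-preservation penalty in the spirit of the Gromov-Monge
# embedding: compare pairwise distances of a minibatch in data space with
# those of its image in the latent space.
import numpy as np

def pairwise_dist(x):
    """All pairwise Euclidean distances between the rows of x."""
    diff = x[:, None, :] - x[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def gromov_distortion(x_data, z_latent):
    """Mean squared mismatch between data-space and latent-space distances."""
    dx = pairwise_dist(x_data)
    dz = pairwise_dist(z_latent)
    return ((dx - dz) ** 2).mean()

# Usage sketch: add  lam * gromov_distortion(x_batch, encoder(x_batch))  to
# the generative training loss so the learned embedding preserves geometry.
```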

Diffusion Models: Theory and Applications (in PDEs)

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 8, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Yulong Lu, University of Minnesota, Twin Cities

Diffusion models, particularly score-based generative models (SGMs), have emerged as powerful tools in diverse machine learning applications, spanning from computer vision to natural language processing. In the first part of this talk, we delve into the generalization theory of SGMs, exploring their capacity for learning high-dimensional distributions. Our analysis shows that SGMs achieve a dimension-free generation error bound when applied to a class of sub-Gaussian distributions characterized by certain low-complexity structures. In the second part of the talk, we consider the application of diffusion models in solving partial differential equations (PDEs). Specifically, we present the development of a physics-guided diffusion model designed for reconstructing high-fidelity solutions from their low-fidelity counterparts. This application showcases the adaptability of diffusion models and their potential in scientific computing.
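To make the physics-guidance idea concrete, here is a hedged sketch in which each reverse-diffusion step is followed by a gradient step on a discrete PDE residual ||A u - f||^2; the operator A, the placeholder denoiser, and the guidance weight are all assumptions for the example and do not reproduce the model from the talk.

```python
# Sketch of the "physics-guided" idea: inside each reverse-diffusion step,
# after the learned denoising update, nudge the sample down the gradient of a
# discrete PDE residual ||A u - f||^2.  Here A is a small screened-Poisson
# operator (-u'' + u on a unit grid) and the denoiser is a placeholder.
import numpy as np

n = 64
A = (np.diag(3.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))            # discrete -u'' + u, unit spacing
f = np.ones(n)                                 # right-hand side of A u = f

def residual_grad(u):
    """Gradient of the physics residual ||A u - f||^2."""
    return 2.0 * A.T @ (A @ u - f)

def guided_reverse_step(u, denoise_step, guidance_weight=0.02):
    """One reverse step: model update, then a physics-residual correction."""
    u = denoise_step(u)                        # placeholder for the learned reverse step
    return u - guidance_weight * residual_grad(u)

# Toy usage with an identity "denoiser": the guidance alone already drives the
# residual toward zero, which is the role it plays alongside a trained model.
u = np.zeros(n)
for _ in range(500):
    u = guided_reverse_step(u, denoise_step=lambda v: v)
print("physics residual:", np.linalg.norm(A @ u - f))
```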

Accelerating Molecular Discovery with Machine Learning: A Geometric, Sampling and Optimization Perspective

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 1, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Yuanqi Du, Cornell University

Please Note: Speaker will present in person. Bio: Yuanqi Du is a PhD student in the Department of Computer Science at Cornell University, studying AI and its intersection with scientific discovery, advised by Prof. Carla P. Gomes. His research interests include Geometric Deep Learning, Probabilistic Machine Learning, Sampling, Optimization, and AI for Science (with a focus on molecular discovery). Aside from his research, he is passionate about education and community building. He leads the organization of a series of events, such as the Learning on Graphs conference, the AI for Science and Probabilistic Machine Learning workshops at ML conferences, and an educational initiative (AI for Science101) to bridge the AI and science communities.

Recent advancements in machine learning have paved the way for groundbreaking opportunities in the realm of molecular discovery. At the forefront of this evolution are improved computational tools with proper inductive biases and efficient optimization. In this talk, I will delve into our efforts around these themes from a geometry, sampling and optimization perspective. I will first introduce how to encode symmetries in the design of neural networks and the balance of expressiveness and computational efficiency. Next, I will discuss how generative models enable a wide range of design and optimization tasks in molecular discovery. In the third part, I will talk about how the advancements in stochastic optimal control, sampling and optimal transport can be applied to find transition states in chemical reactions.
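As a toy illustration of the symmetry-encoding theme, the sketch below builds features of a molecular conformation from sorted pairwise interatomic distances, which are invariant to rotations and translations by construction; the five-atom "molecule" and the feature choice are assumptions made purely for the example, not a model from the talk.

```python
# Toy example of encoding a symmetry as an inductive bias: features built
# from sorted pairwise distances are invariant to rotations and translations
# of a molecular point cloud by construction.
import numpy as np

rng = np.random.default_rng(0)

def invariant_features(coords):
    """Sorted pairwise distances of a point cloud (rotation/translation invariant)."""
    diff = coords[:, None, :] - coords[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(coords), k=1)
    return np.sort(d[iu])

coords = rng.standard_normal((5, 3))               # a toy 5-atom "molecule"
q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a random orthogonal transform
transformed = coords @ q + rng.standard_normal(3)  # rotate/reflect and translate

# The features do not change under the symmetry transformation.
assert np.allclose(invariant_features(coords), invariant_features(transformed))
```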

Function approximation with one-bit Bernstein polynomials and one-bit neural networks

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 25, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Weilin Li, City College of New York
The celebrated universal approximation theorems for neural networks typically state that every sufficiently nice function can be arbitrarily well approximated by a neural network with carefully chosen real parameters. With the emergence of large neural networks and the desire to use them on low-power devices, there has been increased interest in neural network quantization, i.e., replacing the real parameters with ones from a much smaller finite set. In this talk, we ask whether it is even possible to quantize neural networks without sacrificing their approximation power, especially in the extreme one-bit {+1,-1} case. We present several naive quantization strategies that yield universal approximation theorems for quantized neural networks and discuss their advantages and disadvantages. From there, we offer an alternative approach based on Bernstein polynomials and show that {+1,-1} linear combinations of multivariate Bernstein polynomials can efficiently approximate smooth functions. This strategy can be implemented by means of a one-bit neural network and computed from point samples/queries. Joint work with Sinan Gunturk.
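The sketch below gives a univariate flavor of the idea: a degree-n Bernstein approximation of f on [0,1], followed by a naive first-order Sigma-Delta (noise-shaping) pass that replaces each coefficient f(k/n) with a +1/-1 symbol. The target function, degree, and quantization rule are illustrative assumptions rather than the precise scheme analyzed in the talk, and the function is assumed to be bounded by 1.

```python
# Univariate sketch: approximate f on [0, 1] with the degree-n Bernstein
# polynomial, then replace each coefficient f(k/n) by +1/-1 symbols using
# first-order Sigma-Delta (noise-shaping) quantization.  Assumes |f| <= 1.
import numpy as np
from math import comb

def bernstein_basis(n, x):
    """Matrix B with B[i, k] = C(n, k) x_i^k (1 - x_i)^(n - k)."""
    k = np.arange(n + 1)
    c = np.array([comb(n, kk) for kk in k], dtype=float)
    return c * x[:, None]**k * (1.0 - x[:, None])**(n - k)

def sigma_delta_one_bit(coeffs):
    """Greedy first-order noise shaping of coefficients into {+1, -1}."""
    q = np.empty_like(coeffs)
    u = 0.0
    for i, c in enumerate(coeffs):
        q[i] = 1.0 if u + c >= 0 else -1.0
        u = u + c - q[i]
    return q

f = lambda x: 0.9 * np.sin(2 * np.pi * x)      # target, bounded by 1
n = 200
x = np.linspace(0.0, 1.0, 400)
B = bernstein_basis(n, x)
coeffs = f(np.arange(n + 1) / n)               # samples f(k/n)

exact = B @ coeffs                              # classical Bernstein approximation
one_bit = B @ sigma_delta_one_bit(coeffs)       # +-1 coefficients only

print("Bernstein error :", np.abs(exact - f(x)).max())
print("one-bit error   :", np.abs(one_bit - f(x)).max())
```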


Diffusion Models for Arbitrary Discrete Markov Processes

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 4, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Zachary Fox, Oak Ridge National Laboratory

Please Note: Speaker will present in person.

Diffusion models have become ubiquitous for image generation and are increasingly being used for scientific applications. To date, many flavors of diffusion models have been developed by varying not only the stochastic process that noises the data but also the domain on which these processes act. Typically, generative diffusion models rely on a Gaussian diffusion process for training the backward transformations, which can then be used to generate samples from Gaussian noise. However, real-world data often lives in discrete state spaces, including in many scientific applications. Here we develop a theoretical formulation for arbitrary discrete-state Markov processes in the forward diffusion process using exact analysis. We relate the theory to the existing continuous-state Gaussian diffusion in discrete and continuous time. The approach is validated using a simple stochastic decay process, in which the reverse process generates images from a single all-black image rather than from a noisy prior distribution.
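In the same spirit as the stochastic-decay example mentioned above, the following sketch simulates a simple discrete-state forward process, an independent per-pixel decay chain, and checks the simulated marginal against its closed form; the state space size, decay rate, and number of steps are assumptions made for the illustration, not the exact setup from the talk.

```python
# Tiny sketch of a discrete-state forward process: an independent "decay"
# chain on pixel values {0, ..., K-1} in which each pixel drops to state 0
# with probability gamma per step and otherwise stays put.  The closed-form
# marginal after t steps is compared against simulation.
import numpy as np

rng = np.random.default_rng(0)
K, gamma, t = 8, 0.1, 20
x0 = rng.integers(1, K, size=100_000)           # "image" pixels, nonzero states

# Simulate the forward chain.
x = x0.copy()
for _ in range(t):
    decays = rng.random(x.size) < gamma
    x = np.where(decays, 0, x)

# Exact marginal: a pixel has decayed by step t with probability 1 - (1 - gamma)^t.
p_decayed_exact = 1.0 - (1.0 - gamma) ** t
print("simulated fraction decayed:", (x == 0).mean())
print("exact fraction decayed    :", p_decayed_exact)
```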

On Expressivity and Stability of Positional Encoding for Graph Neural Networks and Graph Transformers

Series
Applied and Computational Mathematics Seminar
Time
Monday, February 26, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Pan Li, Georgia Institute of Technology

Designing effective positional encodings for graphs is key to building powerful graph transformers and enhancing the expressive power of message-passing graph neural networks. However, since graph-structured data lacks a canonical node ordering, the choice of positional encodings for graphs is often tricky. For example, the Laplacian eigenmap is used as a positional encoding in many works, but it faces two fundamental challenges: (1) non-uniqueness: there are many different eigen-decompositions of the same Laplacian, and (2) instability: small perturbations to the Laplacian can result in completely different eigenvectors, leading to unpredictable changes in the positional encoding. The instability is governed by the Davis-Kahan theorem and further harms model generalization. In this talk, we will introduce some ideas for building stable positional encodings and show their benefits for out-of-distribution generalization. The ideas extend to some other types of node positional encodings. Finally, we evaluate the effectiveness of our method on molecular property prediction, link prediction, and out-of-distribution generalization tasks, finding improved generalization compared to existing positional encoding methods.
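The non-uniqueness issue is easy to see in a few lines: eigenvectors of a graph Laplacian are only defined up to sign (and up to an arbitrary orthogonal basis within repeated eigenvalues), so two equally valid decompositions yield different raw positional encodings for the same graph. The small example below is illustrative only.

```python
# Non-uniqueness of Laplacian-eigenvector positional encodings: flipping the
# sign of an eigenvector gives an equally valid eigendecomposition, yet a
# different per-node encoding for the same graph.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)     # a small undirected graph
L = np.diag(A.sum(1)) - A                     # combinatorial Laplacian

evals, evecs = np.linalg.eigh(L)
pe = evecs[:, 1:3]                            # first nontrivial eigenvectors as the encoding
pe_flipped = pe * np.array([-1.0, 1.0])       # equally valid choice of eigenvectors

# Both pe and pe_flipped consist of unit eigenvectors of L for the same
# eigenvalues, yet the node-wise encodings differ -- a model consuming raw
# eigenvectors must be made invariant (or at least stable) to such choices.
assert np.allclose(L @ pe_flipped, pe_flipped * evals[1:3])
```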

I will mainly talk about three papers:

1. Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning, NeurIPS 2020. Pan Li, Yanbang Wang, Hongwei Wang, Jure Leskovec.

2. Equivariant and Stable Positional Encoding for More Powerful Graph Neural Networks, ICLR 2022. Haorui Wang, Haoteng Yin, Muhan Zhang, Pan Li.

3. On the Stability of Expressive Positional Encodings for Graphs, ICLR 2024. Yinan Huang, William Lu, Joshua Robinson, Yu Yang, Muhan Zhang, Stefanie Jegelka, Pan Li.


Transferable Neural Networks for Partial Differential Equations

Series
Applied and Computational Mathematics Seminar
Time
Monday, February 12, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Lili Ju, University of South Carolina

Transfer learning for partial differential equations (PDEs) aims to develop a pre-trained neural network that can be used to solve a wide class of PDEs. Existing transfer learning approaches require substantial information about the target PDEs, such as their formulation and/or solution data, for pre-training. In this work, we propose to design transferable neural feature spaces for shallow neural networks from a purely function-approximation perspective, without using any PDE information. The construction of the feature space involves a re-parameterization of the hidden neurons and uses auxiliary functions to tune the resulting feature space. Theoretical analysis shows the high quality of the produced feature space, i.e., uniformly distributed neurons. We use the proposed feature space as the predetermined feature space of a random feature model and use existing least-squares solvers to obtain the weights of the output layer. Extensive numerical experiments verify the outstanding performance of our method, including significantly improved transferability, e.g., using the same feature space for various PDEs with different domains and boundary conditions, and superior accuracy, e.g., mean squared errors several orders of magnitude smaller than those of state-of-the-art methods.
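The last step of this pipeline, a fixed feature space plus a least-squares solve for the output weights, can be sketched generically as follows; the random tanh features used here stand in for the talk's carefully constructed transferable feature space and are an assumption made for the example.

```python
# Generic random-feature workflow: fix a (here, plain random) hidden feature
# map and solve for the output-layer weights by linear least squares.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_samples = 300, 1000

# Training data: noisy samples of a target function on [-1, 1].
x = rng.uniform(-1.0, 1.0, size=(n_samples, 1))
y = np.sin(3.0 * x[:, 0]) + 0.01 * rng.standard_normal(n_samples)

# Fixed (random) hidden layer: features tanh(w x + b).
W = 3.0 * rng.standard_normal((1, n_features))
b = rng.uniform(-3.0, 3.0, n_features)
Phi = np.tanh(x @ W + b)

# Output weights from a linear least-squares solve.
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

x_test = np.linspace(-1.0, 1.0, 200)[:, None]
y_pred = np.tanh(x_test @ W + b) @ coef
print("max test error:", np.abs(y_pred - np.sin(3.0 * x_test[:, 0])).max())
```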

Structure-Preserving Methods for Nonlinear Hyperbolic Waves

Series
Applied and Computational Mathematics Seminar
Time
Monday, February 5, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Philippe G. LeFloch, Sorbonne University and CNRS

Many numerical methods have been developed in recent years for computing weak solutions (with shock waves) to nonlinear hyperbolic conservation laws. My research, specifically, concerns the design of well-balanced numerical algorithms that preserve certain key structures of these equations in various applications, including problems involving moving phase boundaries and other scale-dependent interfaces. In particular, in this lecture, I will focus on the evolution of a compressible fluid in spherical symmetry on a Schwarzschild curved background, for which I have designed a class of well-balanced numerical algorithms up to third order of accuracy. Both the relativistic Burgers-Schwarzschild model and the relativistic Euler-Schwarzschild model are considered, and the proposed numerical algorithms take advantage of the explicit or implicit forms available for the stationary solutions of these models. The schemes follow the finite volume methodology, preserve the stationary solutions and, most importantly, allow us to investigate the global asymptotic behavior of such flows and determine the asymptotic behavior of the mass density and velocity field of the fluid. Blog: philippelefloch.org
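As background on the finite volume methodology mentioned above, here is a generic first-order finite volume update for the inviscid Burgers equation with a local Lax-Friedrichs flux; it is a textbook sketch under assumed periodic boundary conditions, not the well-balanced Burgers-Schwarzschild or Euler-Schwarzschild schemes discussed in the lecture.

```python
# Generic first-order finite-volume update for the inviscid Burgers equation
# u_t + (u^2 / 2)_x = 0 with a local Lax-Friedrichs numerical flux.
import numpy as np

def llf_flux(ul, ur):
    """Local Lax-Friedrichs flux for f(u) = u^2 / 2."""
    a = np.maximum(np.abs(ul), np.abs(ur))
    return 0.5 * (0.5 * ul**2 + 0.5 * ur**2) - 0.5 * a * (ur - ul)

n, cfl = 400, 0.4
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
u = np.sin(2 * np.pi * x)                     # smooth data that forms a shock

t, t_final = 0.0, 0.5
while t < t_final:
    dt = min(cfl * dx / max(np.abs(u).max(), 1e-12), t_final - t)
    ul, ur = u, np.roll(u, -1)                # periodic boundary conditions
    F = llf_flux(ul, ur)                      # flux at the interface i + 1/2
    u = u - dt / dx * (F - np.roll(F, 1))     # conservative cell-average update
    t += dt
```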

"SAM as an Optimal Relaxation of Bayes" and "Lie Group updates for Learning Distributions on Machine Learning Parameters"

Series
Applied and Computational Mathematics Seminar
Time
Friday, December 8, 2023 - 11:00 for 1 hour (actually 50 minutes)
Location
https://gatech.zoom.us/j/98355006347
Speaker
Dr. Thomas Moellenhoff and Dr. Eren Mehmet Kıral, RIKEN

Please Note: Note special time, due to time zone difference from Japan. Joint with SIAM GT Student Chapter Seminar

Part I (SAM as an Optimal Relaxation of Bayes) Dr. Thomas Moellenhoff

Sharpness-aware minimization (SAM) and related adversarial deep-learning methods can drastically improve generalization, but their underlying mechanisms are not yet fully understood. In this talk, I will show how SAM can be interpreted as optimizing a relaxation of the Bayes objective in which the expected negative loss is replaced by its optimal convex lower bound, obtained by using the so-called Fenchel biconjugate. The connection enables a new Adam-like extension of SAM that automatically obtains reasonable uncertainty estimates, while sometimes also improving accuracy.
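For orientation, the basic SAM update that this relaxation reinterprets can be sketched in a few lines: a normalized ascent step of radius rho followed by a descent step using the gradient at the perturbed weights. The learning rate, radius, and toy quadratic loss below are assumptions for the example, and the Fenchel-biconjugate relaxation itself is not shown.

```python
# Minimal sketch of the basic sharpness-aware minimization (SAM) update:
# ascend to a nearby "worst-case-ish" point of radius rho, then descend with
# the gradient evaluated there.
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # normalized ascent direction
    g_sharp = grad_fn(w + eps)                    # gradient at the perturbed weights
    return w - lr * g_sharp

# Toy usage on the quadratic loss 0.5 * ||w||^2 (gradient = w).
w = np.array([3.0, -2.0])
for _ in range(100):
    w = sam_step(w, grad_fn=lambda v: v)
print(w)   # approaches the minimizer at the origin
```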

Part II (Lie Group updates for Learning Distributions on Machine Learning Parameters) Dr. Eren Mehmet Kıral

I will talk about our recent paper https://arxiv.org/abs/2303.04397 with Thomas Möllenhoff and Emtiyaz Khan, and other related results. Bayesian learning learns a distribution over the model parameters, allowing for different descriptions of the same data. This is in contrast to classical learning, which "bets it all" on a single set of parameters when describing a given dataset and making predictions. We focus on classes of distributions that admit a transitive Lie group action given by pushforwards of an action on the parameter space. I will also specialize to a few concrete Lie groups and show distinct learning behavior.
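A toy version of the multiplicative-update flavor: for a mean-field Gaussian q = N(mu, sigma^2), the scale sigma lives in the multiplicative group of positive reals, so updating it through the exponential map keeps it positive by construction, in contrast to a naive additive step. The conjugate Gaussian objective and step sizes below are assumptions made for the illustration, not the algorithm of the paper linked above.

```python
# Toy illustration of a multiplicative (Lie-group-style) update: the scale of
# a variational Gaussian is updated through the exponential map of the
# multiplicative group of positive reals, so it can never leave the group.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.5, 1.0, size=50)
n, dbar = data.size, data.mean()

def grads(mu, sigma):
    """Gradients of E_q[0.5 * sum((data - w)^2)] - log(sigma) for q = N(mu, sigma^2),
    taken with respect to mu and log(sigma) (closed form for this conjugate toy)."""
    g_mu = n * (mu - dbar)
    g_log_sigma = n * sigma**2 - 1.0
    return g_mu, g_log_sigma

mu, sigma, lr = 0.0, 1.0, 0.01
for _ in range(1000):
    g_mu, g_ls = grads(mu, sigma)
    mu = mu - lr * g_mu                    # additive step on the mean
    sigma = sigma * np.exp(-lr * g_ls)     # multiplicative (group) step on the scale
print(mu, sigma)   # approaches dbar and 1/sqrt(n), with sigma > 0 throughout
```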
