Seminars and Colloquia by Series

Sparse Solution Technique for Local Clustering and Function Approximation

Series
Applied and Computational Mathematics Seminar
Time
Monday, December 4, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Zhaiming Shen, University of Georgia

Sparse solutions obtained from greedy optimization approaches such as orthogonal matching pursuit can be very useful and have applications in many directions. In this talk, I will present two research projects that make use of this sparse solution technique: one on semi-supervised local clustering and the other on function approximation. We will show that the target cluster can be effectively retrieved in the local clustering task, and that the curse of dimensionality can be overcome for a dense subclass of the space of continuous functions via the Kolmogorov superposition theorem. Both theoretical and numerical results will be discussed.
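For readers unfamiliar with the greedy solver mentioned above, here is a minimal sketch of orthogonal matching pursuit in Python (a generic textbook version, not the speaker's code; the matrix A, data y, and sparsity level k are illustrative):

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: build a k-sparse x with
    A @ x ~= y. A generic textbook sketch, not the speaker's code."""
    n = A.shape[1]
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Usage: recover a 3-sparse signal from 50 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[[5, 17, 42]] = [1.0, -2.0, 0.5]
print(np.flatnonzero(omp(A, A @ x_true, k=3)))  # expect [5, 17, 42]
```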

Generative Machine Learning Models for Uncertainty Quantification

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 27, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Feng Bao, Florida State University

Generative machine learning models, including variational auto-encoders (VAEs), normalizing flows (NFs), generative adversarial networks (GANs), and diffusion models, have dramatically improved the quality and realism of generated content, whether images, text, or audio. In science and engineering, generative models can serve as powerful tools for probability density estimation or high-dimensional sampling, capabilities that are critical in uncertainty quantification (UQ), e.g., Bayesian inference for parameter estimation. Studies of generative models for image/audio synthesis focus on improving the quality of individual samples, which often makes the models complicated and difficult to train. UQ tasks, on the other hand, usually focus on accurate approximation of statistics of interest without regard to the quality of any individual sample, so directly applying existing generative models to UQ tasks may lead to inaccurate approximations or an unstable training process. To alleviate these challenges, we developed several new generative diffusion models for various UQ tasks, including diffusion-model-assisted supervised learning of generative models, a score-based nonlinear filter for recursive Bayesian inference, and a training-free ensemble score filter for tracking high-dimensional stochastic dynamical systems. We will demonstrate the effectiveness of these methods on various UQ tasks, including density estimation, learning stochastic dynamical systems, and data assimilation problems.
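As background on what "score-based" means (a toy illustration, not the speaker's filters): unadjusted Langevin dynamics can sample a density using only its score, the gradient of the log-density, which is exactly the object score-based diffusion models learn from data. The closed-form Gaussian score below is a hypothetical stand-in for a trained network:

```python
import numpy as np

# Unadjusted Langevin dynamics: draw samples from a density p using only
# its score, grad log p -- the quantity score-based diffusion models learn.
# The closed-form Gaussian score here stands in for a trained network.
def score(x, mu=1.0, sigma=0.5):
    return -(x - mu) / sigma**2     # score of N(mu, sigma^2)

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)     # arbitrary initialization
eps = 1e-3                          # step size
for _ in range(5_000):
    x += eps * score(x) + np.sqrt(2 * eps) * rng.standard_normal(x.size)

print(x.mean(), x.std())            # approaches 1.0 and 0.5 after mixing
```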

Machine learning, optimization, & sampling through a geometric lens

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 20, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Suvrit Sra, MIT & TU Munich

Please Note: Joint Applied & Computational Math Seminar and School of Math Colloquium. Speaker will present in person.

Geometry arises in myriad ways within machine learning and related areas. In this talk I will focus on settings where geometry helps us understand problems in machine learning, optimization, and sampling. For instance, when sampling from densities supported on a manifold, understanding the geometry and the impact of curvature is crucial; surprisingly, progress on geometric sampling theory helps us understand certain generalization properties of SGD for deep learning! Another fascinating viewpoint afforded by geometry is in non-convex optimization: geometry can help us make training algorithms more practical (e.g., in deep learning), reveal tractability despite non-convexity (e.g., via geodesically convex optimization), or simply help us understand existing methods better (e.g., SGD, eigenvector computation, etc.).

Ultimately, I hope to offer the audience some insights into geometric thinking and share with them some new tools that help us design, understand, and analyze models and algorithms. To make the discussion concrete I will recall a few foundational results arising from our research, provide several examples, and note some open problems.
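To make the eigenvector example above concrete, here is a hedged sketch (not taken from the talk) of Riemannian gradient ascent on the unit sphere; the problem max x^T A x is non-convex, yet the geometric method reliably finds a leading eigenvector:

```python
import numpy as np

# Riemannian gradient ascent on the unit sphere for max x^T A x: a
# non-convex problem that geometry makes tractable, since its maximizer
# is a leading eigenvector. A generic sketch, not taken from the talk.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M + M.T                          # symmetric test matrix
x = rng.standard_normal(50)
x /= np.linalg.norm(x)

step = 0.01
for _ in range(2000):
    egrad = 2 * A @ x                # Euclidean gradient of x^T A x
    rgrad = egrad - (x @ egrad) * x  # project onto the tangent space
    x += step * rgrad
    x /= np.linalg.norm(x)           # retract back onto the sphere

print(x @ A @ x, np.linalg.eigvalsh(A)[-1])  # the two should nearly agree
```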

––
Bio: Suvrit Sra is an Alexander von Humboldt Professor of Artificial Intelligence at the Technical University of Munich (Germany) and an Associate Professor of EECS at MIT (USA), where he is also a member of the Laboratory for Information and Decision Systems (LIDS) and of the Institute for Data, Systems, and Society (IDSS). He obtained his PhD in Computer Science from the University of Texas at Austin. Before TUM & MIT, he was a Senior Research Scientist at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. He held visiting positions at UC Berkeley (EECS) and Carnegie Mellon University (Machine Learning Department) during 2013-2014. His research bridges mathematical topics such as differential geometry, matrix analysis, convex analysis, probability theory, and optimization with machine learning. He founded the OPT (Optimization for Machine Learning) series of workshops, held from 2008 to 2017 at the NeurIPS conference, and has co-edited a book of the same name (MIT Press, 2011). He is also a co-founder and chief scientist of Pendulum, a global AI+logistics startup.

 

On inverse problems to mean field game system

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 13, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Kui Ren, Columbia University

Mean field game models have been developed in a variety of application areas. We discuss here inverse problems for mean field game models, in which we are interested in reconstructing missing information from observed data. We present a few different scenarios where differential data allows for the unique reconstruction of model parameters in various forms. The talk is mainly based on recent joint works with Nathan Soedjak and Kewei Wang.
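For orientation, the forward model underlying such inverse problems is typically the standard mean field game system (textbook form, not necessarily the exact setting of the talk), coupling a backward Hamilton-Jacobi-Bellman equation for the value function u with a forward Fokker-Planck equation for the population density m:

```latex
% Standard MFG system; the inverse problem asks to recover, e.g., the
% Hamiltonian H or the couplings F, G from observations of (u, m).
\begin{aligned}
  -\partial_t u - \nu \Delta u + H(x, \nabla u) &= F(x, m), \\
  \partial_t m - \nu \Delta m
      - \nabla\cdot\big(m\, \nabla_p H(x, \nabla u)\big) &= 0, \\
  m(x, 0) = m_0(x), \qquad u(x, T) &= G\big(x, m(\cdot, T)\big).
\end{aligned}
```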
 

Multifidelity Scientific Machine Learning

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 6, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347 (to be confirmed)
Speaker
Dr. Panos Stinis, Pacific Northwest National Laboratory

Please Note: Speaker will present in person

In many applications across science and engineering it is common to have access to disparate types of data or models with different levels of fidelity. In general, low-fidelity data are easier to obtain in greater quantities, but they may be too inaccurate or too sparse to accurately train a machine learning model on their own. High-fidelity data are more accurate but costly to obtain, so they may not be available in quantities sufficient for training. A small amount of high-fidelity data, such as from measurements or simulations, can improve predictions when combined with low-fidelity data. The key step in such constructions is the representation of the correlations between the low- and high-fidelity data. In this talk, we will present two frameworks for multifidelity machine learning. The first puts particular emphasis on operator learning, building on the Deep Operator Network (DeepONet). The second is inspired by the concept of model reduction. We will present the main constructions along with applications to closures for multiscale systems and to continual learning. Moreover, we will discuss how multifidelity approaches fit into a broader framework that includes ideas from deep learning, stochastic processes, numerical methods, computability theory, and the renormalization of complex systems.
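A toy two-fidelity example in the classical linear-correction style (illustrative only; the talk's frameworks build on DeepONet and model reduction instead): with abundant low-fidelity data and a handful of high-fidelity samples, one learns the correlation y_hi ~= rho * y_lo + delta(x). All functions and sample sizes below are hypothetical.

```python
import numpy as np

# Two-fidelity toy: learn y_hi ~= rho * y_lo + delta(x) from a few
# high-fidelity samples. Everything here is illustrative.
def f_lo(x):                           # cheap, biased model
    return 0.5 * np.sin(8 * x) + 0.2 * x
def f_hi(x):                           # expensive, accurate model
    return np.sin(8 * x) + x

x_lo = np.linspace(0, 1, 50)           # plentiful low-fidelity data
x_hi = np.array([0.1, 0.4, 0.6, 0.9])  # scarce high-fidelity data

# Surrogate for the low-fidelity model (a polynomial fit suffices here).
lo_fit = np.polynomial.Polynomial.fit(x_lo, f_lo(x_lo), deg=10)

# Fit rho and a linear discrepancy delta(x) = a + b*x by least squares.
Phi = np.column_stack([lo_fit(x_hi), np.ones_like(x_hi), x_hi])
rho, a, b = np.linalg.lstsq(Phi, f_hi(x_hi), rcond=None)[0]

x_test = np.linspace(0, 1, 200)
y_pred = rho * lo_fit(x_test) + a + b * x_test
print(np.max(np.abs(y_pred - f_hi(x_test))))   # small: correlation captured
```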

Flexible Krylov methods for advanced regularization

Series
Applied and Computational Mathematics Seminar
Time
Monday, October 23, 2023 - 14:00
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Malena Landman Sabate, Emory University

Inverse problems involve the reconstruction of hidden objects from possibly noisy indirect measurements and are ubiquitous in a variety of scientific and engineering applications. Problems of this kind have two main features that make them interesting yet challenging to solve. First, they tend to be ill-posed: the reconstruction is very sensitive to perturbations in the measurements. Second, real-world applications are often large-scale, resulting in computationally demanding tasks. In this talk I will focus on discrete linear problems, giving a general overview of the well-established class of solvers called Krylov subspace methods and their regularizing properties, as well as flexible variants that make them suitable for more challenging optimization tasks. I will show results and examples from different imaging applications.
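As background on the regularizing property mentioned above (a standard textbook CGLS sketch, not the flexible variants of the talk): running a Krylov method on min ||Ax - b|| and stopping early acts as implicit regularization, with the characteristic semi-convergence behavior visible in the toy deblurring problem below.

```python
import numpy as np

# Textbook CGLS on min ||A x - b||: each iteration enlarges a Krylov
# subspace, and stopping early acts as implicit regularization
# (semi-convergence). Not the flexible variants discussed in the talk.
def cgls(A, b, n_iter):
    x = np.zeros(A.shape[1])
    r = b.copy()                 # residual b - A @ x
    s = A.T @ r
    p = s.copy()
    for _ in range(n_iter):
        q = A @ p
        alpha = (s @ s) / (q @ q)
        x += alpha * p
        r -= alpha * q
        s_new = A.T @ r
        p = s_new + ((s_new @ s_new) / (s @ s)) * p
        s = s_new
    return x

# Ill-posed toy problem: Gaussian blurring operator plus noisy data.
n = 100
t = np.linspace(0, 1, n)
A = np.exp(-50 * (t[:, None] - t[None, :]) ** 2) / n
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)
for k in (5, 20, 200):
    err = np.linalg.norm(cgls(A, b, k) - x_true) / np.linalg.norm(x_true)
    print(k, err)    # error typically dips, then grows as noise creeps in
```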

Balanced truncation for Bayesian inference

Series
Applied and Computational Mathematics Seminar
Time
Monday, October 2, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Clough Commons 125 and https://gatech.zoom.us/j/98355006347
Speaker
Elizabeth Qian, School of Aerospace Engineering and School of Computational Science and Engineering at Georgia Tech

We consider the Bayesian approach to the linear Gaussian inference problem of inferring the initial condition of a linear dynamical system from noisy output measurements taken after the initial time. In practical applications, the large dimension of the dynamical system state poses a computational obstacle to computing the exact posterior distribution. Model reduction offers a variety of computational tools that seek to reduce this computational burden. In particular, balanced truncation is a control-theoretic approach to model reduction that obtains an efficient reduced-dimension dynamical system by projecting the system operators onto state directions that balance reachability and observability. We define an analogous balanced truncation procedure for the Bayesian inference setting based on the trade-off between prior uncertainty and data information. The resulting reduced model inherits desirable theoretical properties for both the control and inference settings. Numerical demonstrations on two benchmark problems show that our method can yield near-optimal posterior covariance approximations with order-of-magnitude reductions in state dimension.
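For reference, here is a compact sketch of the classical control-theoretic balanced truncation recipe (standard material; the talk's Bayesian-inference analogue is not shown here, and the random test system is hypothetical):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Classical balanced truncation for a stable LTI system x' = Ax + Bu,
# y = Cx. The talk's analogous construction for Bayesian inference
# replaces the Gramians with prior and data quantities (not shown).
def sqrt_factor(M):
    # Symmetric square-root factor, robust to near-singular Gramians.
    w, V = np.linalg.eigh((M + M.T) / 2)
    return V * np.sqrt(np.clip(w, 0.0, None))

rng = np.random.default_rng(0)
n, r = 50, 6
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stable w.h.p.
B = rng.standard_normal((n, 2))
C = rng.standard_normal((1, n))

P = solve_continuous_lyapunov(A, -B @ B.T)    # reachability Gramian
Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian

U, s, Vt = np.linalg.svd(sqrt_factor(Q).T @ sqrt_factor(P))
S = np.diag(s[:r] ** -0.5)          # s holds the Hankel singular values

T = sqrt_factor(P) @ Vt[:r].T @ S   # reconstruction map
W = sqrt_factor(Q) @ U[:, :r] @ S   # projection map; W.T @ T ~= I_r
Ar, Br, Cr = W.T @ A @ T, W.T @ B, C @ T      # reduced-order system
```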

Physics-guided interpretable data-driven simulations

Series
Applied and Computational Mathematics Seminar
Time
Monday, September 18, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
https://gatech.zoom.us/j/98355006347
Speaker
Youngsoo Choi, Lawrence Livermore National Laboratory

Please Note: This is a virtual seminar. Speaker Bio: Youngsoo is a computational math scientist in the Center for Applied Scientific Computing (CASC) within the Computing Directorate at LLNL. His research focuses on developing efficient reduced order models for various physical simulations arising in time-sensitive, multi-query decision-making problems, such as inverse problems, design optimization, and uncertainty quantification. His expertise spans various scientific computing disciplines. Together with his team and collaborators, he has developed powerful model order reduction techniques, such as machine learning-based nonlinear manifolds, space–time reduced order models, and latent space dynamics identification methods for nonlinear dynamical systems. He has also developed the component-wise reduced order model optimization algorithm, which enables fast and accurate computational modeling tools for lattice-structure design. He currently leads the data-driven physical simulation team at LLNL, with which he developed the open-source codes libROM (https://www.librom.net), LaghosROM (https://github.com/CEED/Laghos/tree/rom/rom), LaSDI (https://github.com/LLNL/LaSDI), gLaSDI (https://github.com/LLNL/gLaSDI), and GPLaSDI (https://github.com/LLNL/GPLaSDI). He earned his undergraduate degree in Civil and Environmental Engineering from Cornell University and his Ph.D. in Computational and Mathematical Engineering from Stanford University. He was a postdoctoral scholar at Sandia National Laboratories and Stanford University before joining LLNL in 2017.

A computationally expensive physical simulation is a huge bottleneck to advances in science and technology. Fortunately, many data-driven approaches have emerged to accelerate such simulations, thanks to recent advances in machine learning (ML) and artificial intelligence. For example, a well-trained 2D convolutional deep neural network can predict the solution of the complex Richtmyer–Meshkov instability problem with a speed-up of 100,000x [1]. However, traditional black-box ML models do not incorporate existing governing equations, which embed underlying physics such as conservation of mass, momentum, and energy. Therefore, black-box ML models often violate important physical laws, which greatly concerns physicists, and they require big data to compensate for the missing physics. They also come with other disadvantages, such as lack of structure preservation, a computationally expensive training phase, non-interpretability, and vulnerability in extrapolation. To resolve these issues, we can bring physics into the data-driven framework. Physics can be incorporated at different stages of data-driven modeling, i.e., the sampling stage and the model-building stage. A physics-informed greedy sampling procedure minimizes the number of training data required for a target accuracy [2]. A physics-guided data-driven model better preserves the physical structure and is more robust in extrapolation than traditional black-box ML models. Numerical results, e.g., for hydrodynamics [3,4], particle transport [5], plasma physics, and 3D printing, will be shown to demonstrate the performance of these data-driven approaches. Their benefits will also be illustrated in multi-query decision-making applications, such as design optimization [6,7].
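To fix ideas about what a projection-based reduced order model does, here is a generic POD-Galerkin sketch in Python (illustrative only; this is not libROM, LaSDI, or any of the speaker's codes, and the heat-equation test problem is hypothetical):

```python
import numpy as np

# Generic POD-Galerkin reduced order model for x' = A x (1D heat
# equation): compress snapshots with the SVD, evolve the reduced
# dynamics, lift back. Illustrative only.
n, r, dt, steps = 200, 8, 1e-3, 300
A = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) * (n + 1) ** 2 / 10

x = np.exp(-100 * (np.linspace(0, 1, n) - 0.5) ** 2)   # initial bump
M = np.linalg.inv(np.eye(n) - dt * A)                  # implicit Euler
snaps = [x]
for _ in range(steps):
    x = M @ x
    snaps.append(x)
S = np.array(snaps).T                                  # snapshot matrix

# POD basis: r dominant left singular vectors of the snapshots.
U = np.linalg.svd(S, full_matrices=False)[0][:, :r]
Ar = U.T @ A @ U                                       # reduced operator

Mr = np.linalg.inv(np.eye(r) - dt * Ar)
xr = U.T @ S[:, 0]
for _ in range(steps):
    xr = Mr @ xr
err = np.linalg.norm(U @ xr - S[:, -1]) / np.linalg.norm(S[:, -1])
print(err)    # small when r captures the dominant modes
```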

 

References
[1] Jekel, Charles F., Dane M. Sterbentz, Sylvie Aubry, Youngsoo Choi, Daniel A. White, and Jonathan L. Belof. “Using Conservation Laws to Infer Deep Learning Model Accuracy of Richtmyer-Meshkov Instabilities.” arXiv preprint arXiv:2208.11477 (2022).
[2] He, Xiaolong, Youngsoo Choi, William D. Fries, Jon Belof, and Jiun-Shyan Chen. “gLaSDI: Parametric Physics-informed Greedy Latent Space Dynamics Identification.” arXiv preprint arXiv:2204.12005 (2022).
[3] Copeland, Dylan Matthew, Siu Wun Cheung, Kevin Huynh, and Youngsoo Choi. “Reduced order models for Lagrangian hydrodynamics.” Computer Methods in Applied Mechanics and Engineering 388 (2022): 114259.
[4] Kim, Youngkyu, Youngsoo Choi, David Widemann, and Tarek Zohdi. “A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder.” Journal of Computational Physics 451 (2022): 110841.
[5] Choi, Youngsoo, Peter Brown, William Arrighi, Robert Anderson, and Kevin Huynh. “Space–time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems.” Journal of Computational Physics 424 (2021): 109845.
[6] McBane, Sean, and Youngsoo Choi. “Component-wise reduced order model lattice-type structure design.” Computer Methods in Applied Mechanics and Engineering 381 (2021): 113813.
[7] Choi, Youngsoo, Gabriele Boncoraglio, Spenser Anderson, David Amsallem, and Charbel Farhat. “Gradient-based constrained optimization using a database of linear reduced-order models.” Journal of Computational Physics 423 (2020): 109787.

 

Recent Advances in Finite Element Methods for Solving Poisson-Nernst-Planck Ion Channel Models

Series
Applied and Computational Mathematics Seminar
Time
Monday, August 28, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347 (to be confirmed)
Speaker
Dexuan Xie, University of Wisconsin-Milwaukee

Ion channels are a class of proteins embedded in biological membranes, acting as biological devices or 'valves' for cells and playing a critical role in controlling various biological functions. To compute macroscopic ion channel kinetics, such as Gibbs free energy, electric currents, transport fluxes, membrane potential, and electrochemical potential, Poisson-Nernst-Planck ion channel (PNPIC) models have been developed as systems of nonlinear partial differential equations. However, they are difficult to solve numerically due to solution singularities, exponential nonlinearities, multiple-physical-domain issues, and the requirement that ionic concentrations remain positive. In this talk, I will present recent progress in the development of finite element methods for solving PNPIC models. Specifically, I will introduce our improved PNPIC models and describe the mathematical and numerical techniques we used to develop efficient finite element iterative methods. I will also introduce the related software packages we developed for a voltage-dependent anion-channel protein and a mixture solution of multiple ionic species. Finally, I will present numerical results that demonstrate the fast convergence of our iterative methods and the high performance of our software package. This work was partially supported by the National Science Foundation through award number DMS-2153376 and the Simons Foundation through research award 711776.

Two Phases of Scaling Laws for Nearest Neighbor Classifiers

Series
Applied and Computational Mathematics Seminar
Time
Thursday, May 25, 2023 - 10:30 for 1 hour (actually 50 minutes)
Location
https://gatech.zoom.us/j/98355006347
Speaker
Jingzhao Zhang, Tsinghua University

Please Note: Special time & day. Remote only.

A scaling law refers to the observation that the test performance of a model improves as the amount of training data increases. A fast scaling law implies that one can solve machine learning problems simply by boosting the data and model sizes. Yet, in many cases, the benefit of adding more data can be negligible. In this work, we study the rate of scaling laws of nearest neighbor classifiers. We show that a scaling law can have two phases: in the first phase, the generalization error depends polynomially on the data dimension and decreases fast, whereas in the second phase, the error depends exponentially on the data dimension and decreases slowly. Our analysis highlights the complexity of the data distribution in determining the generalization error. When the data distribution is benign, our result suggests that the nearest neighbor classifier can achieve a generalization error that depends polynomially, instead of exponentially, on the data dimension.
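A small empirical sketch of the object under study (illustrative only, not the paper's analysis): one can trace a scaling curve by measuring the 1-nearest-neighbor test error as the sample size n grows at fixed dimension d. The distribution below, with labels depending on a single coordinate, is a hypothetical stand-in for a benign case.

```python
import numpy as np

# Empirically trace a scaling law for the 1-nearest-neighbor classifier:
# test error versus sample size n at fixed dimension d.
def one_nn_error(n, d, n_test=1000, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, (n, d))
    y = (X[:, 0] > 0).astype(int)        # labels from one coordinate
    Xt = rng.uniform(-1, 1, (n_test, d))
    yt = (Xt[:, 0] > 0).astype(int)
    # Squared distances via ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2.
    d2 = (Xt**2).sum(1)[:, None] - 2 * Xt @ X.T + (X**2).sum(1)[None, :]
    pred = y[d2.argmin(axis=1)]          # label of the nearest neighbor
    return float((pred != yt).mean())

for n in (100, 400, 1600, 6400):
    print(n, one_nn_error(n, d=10))      # error falls as n grows; how
                                         # fast reflects the role of d
```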
