TBA
- Series
- School of Mathematics Colloquium
- Time
- Thursday, April 23, 2026 - 11:00 (50 minutes)
- Location
- Skiles 005
- Speaker
- Charles Bordenave – Institut de Mathématiques de Marseille – charles.bordenave@univ-amu.fr
Each year, millions complete brackets to predict the outcomes of the NCAA men’s and women’s basketball tournaments—an activity centered on a fundamental question in sports analytics: Who is number one? Ranking algorithms provide mathematical frameworks for addressing this question and are widely used in postseason selection and predictive modeling.
This talk examines two influential rating systems—the Colley Method and the Massey Method—both of which compute team rankings by solving systems of linear equations based on game outcomes. We discuss extensions that incorporate factors such as late-season momentum and home-field advantage, and we evaluate their impact on predictive performance.
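To make the linear-algebra formulation concrete, here is a minimal sketch of the Colley method for a made-up four-game mini-league (the teams and results are invented for illustration, not data from the talk). The Colley matrix has 2 plus each team's total games on the diagonal, minus the number of head-to-head games off the diagonal, and the right-hand side is 1 plus half the win–loss differential:

```python
import numpy as np

# Hypothetical mini-league: game results as (winner, loser) pairs.
games = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "A")]
teams = sorted({t for g in games for t in g})
idx = {t: i for i, t in enumerate(teams)}
n = len(teams)

# Colley matrix: C[i][i] = 2 + games played by team i,
# C[i][j] = -(number of games between i and j);
# right-hand side: b[i] = 1 + (wins_i - losses_i) / 2.
C = 2.0 * np.eye(n)
b = np.ones(n)
for w, l in games:
    i, j = idx[w], idx[l]
    C[i, i] += 1.0
    C[j, j] += 1.0
    C[i, j] -= 1.0
    C[j, i] -= 1.0
    b[i] += 0.5
    b[j] -= 0.5

# Ratings come from one linear solve; higher rating = better rank.
ratings = np.linalg.solve(C, b)
ranking = sorted(zip(teams, ratings), key=lambda p: -p[1])
```

A convenient sanity check is that Colley ratings always average to 1/2 over the league, so close games shuffle teams around that midpoint rather than inflating scores.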
Applications across sports, including basketball and soccer, will be presented, with particular attention to NCAA tournament bracket construction. Research-driven implementations of these methods have produced brackets that outperformed over 90% of the millions of brackets submitted to ESPN. The talk concludes with open questions and broader applications of ranking methodology.
Bio: Dr. Tim Chartier is the Joseph R. Morton Professor of Mathematics and Computer Science at Davidson College, where he specializes in data analytics. He has consulted with ESPN, The New York Times, the U.S. Olympic & Paralympic Committee, and teams in the NBA, NFL, MLB, and NASCAR. He founded and grew a sports analytics group to nearly 100 student researchers annually. The group, now student-run, provides analytics for Davidson College athletic teams.
His scholarship and leadership have been recognized nationally through service in the Mathematical Association of America (MAA) and with multiple honors, including an Alfred P. Sloan Research Fellowship, the MAA Southeastern Section Distinguished Teaching Award, and the MAA’s Euler Book Prize. He has also collaborated with educational initiatives at Google and Pixar and served as the 2022–23 Distinguished Visiting Professor at the National Museum of Mathematics.
We introduce proximal optimal transport divergences that provide a unifying variational framework interpolating between classical f-divergences and Wasserstein metrics. From a gradient-flow perspective, these divergences generate stable and robust dynamics in probability space, enabling the learning of distributions with singular structure, including strange attractors, extreme events, and low-dimensional manifolds, with provable guarantees in sample size.
We illustrate how this mathematical structure leads naturally to generative particle flows for reconstructing nonlinear cellular dynamics from snapshot single-cell RNA sequencing data, including real patient datasets, highlighting the role of proximal regularization in stabilizing learning and inference in high dimensions.
Bio: Markos Katsoulakis is a Professor of Applied Mathematics and an Adjunct Professor of Chemical & Biomolecular Engineering at UMass Amherst, whose research lies at the interface of PDEs, uncertainty quantification, scientific machine learning, and information theory. He serves on the editorial boards of the SIAM/ASA Journal on Uncertainty Quantification, the SIAM Journal on Scientific Computing, and the SIAM Mathematical Modeling and Computation book series. He received his Ph.D. in Applied Mathematics from Brown University and his B.Sc. from the University of Athens. His work has been supported by AFOSR, DARPA, NSF, DOE, and the ERC.
Please Note: Zoom link: https://gatech.zoom.us/j/97380260276?pwd=3965vnstqsCn7jcIJHrHXX5GlhwQRC.1
One of the biggest discoveries in the theory of dynamical systems was that smooth (deterministic) systems can behave very randomly. Since then, a rich theory of chaotic properties of smooth dynamical systems has been developed using geometric, topological, and probabilistic methods. In the talk, we will present ideas, highlight main results, and discuss techniques developed over the last 70 years. In the second part, we plan to discuss more recent advances and present the main open questions in the field. In the last part, I will focus on connections between smooth ergodic theory and number theory.
Abstract: In recent years we have witnessed a symbiotic trend wherein LLMs are combined with provers, solvers, and computer algebra systems, resulting in dramatic breakthroughs in AI for math. Following this trend, we have developed two lines of work in my research group. The first is the idea that "good" joint embeddings (JEs) can dramatically improve the efficacy of LLM-based auto-formalization tools. We say that JEs are good if they respect the following invariant: semantically equivalent but formally dissimilar objects (e.g., pairs of semantically equivalent natural- and formal-language proofs) must be "close by" in the embedding space, and semantically inequivalent ones "far apart". We use such JE models as part of a successful RAG-based auto-formalization pipeline, demonstrating that such JEs are a critical AI-for-math technology. The second idea is Reinforcement Learning with Symbolic Feedback (RLSF), a class of techniques that addresses the LLM hallucination problem in contexts where we have access to rich symbolic feedback, such as in math, physics, and code, demonstrating that it too is critical to the success of AI for math.
Bio: Dr. Vijay Ganesh is a professor of computer science at Georgia Tech and the associate director of the Institute for Data Engineering and Science (IDEaS), also at Georgia Tech. Additionally, he is a co-founder and steering committee member of the Centre for Mathematical AI at the Fields Institute, and an AI Fellow at the BSIA in Waterloo, Canada. Prior to joining Georgia Tech in 2023, Vijay was a professor at the University of Waterloo in Canada from 2012 to 2023, a co-director of the Waterloo AI Institute from 2021 to 2023, and a research scientist at the Massachusetts Institute of Technology from 2007 to 2012. Vijay completed his PhD in computer science from Stanford University in 2007.
Vijay's primary area of research is the theory and practice of SAT/SMT solvers, combinations of machine learning and automated reasoning, and their applications in neurosymbolic AI, software engineering, security, mathematics, and physics. In this context he has led the development of many SAT/SMT solvers, most notably STP, the Z3str family of string solvers, Z3-alpha, MapleSAT, AlphaMapleSAT, and MathCheck. He also leads the development of several neurosymbolic AI tools aimed at mathematics, physics, and software engineering. On the theoretical side, he works on topics in mathematical logic and proof complexity. For his research, Vijay has won over 35 awards, honors, and medals, including an ACM Impact Paper Award at ISSTA 2019, the ACM Test of Time Award at CCS 2016, and a Ten-Year Most Influential Paper citation at DATE 2008.
Heegaard Floer homology is a tool for studying three- and four-dimensional manifolds, using methods inspired by symplectic geometry. Bordered Floer homology is a tool, currently under construction, for reconstructing the Heegaard Floer homology of a manifold in terms of invariants associated to its pieces. This approach has both conceptual and computational ramifications. In this talk, I will sketch the outlines of Heegaard Floer homology, with an emphasis on recent progress in bordered Floer homology. Heegaard Floer homology was developed in collaboration with Zoltan Szabo; bordered Floer homology is joint work with Robert Lipshitz and Dylan Thurston.
Sperner's lemma is a simple combinatorial result that is surprisingly powerful and useful, bringing together ideas in combinatorics, geometry, and topology while attracting interest from economists and game theorists. I'll explain why, show some old and new proofs, and present some recent generalizations with diverse applications.
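For readers unfamiliar with the result, the standard labeling form of the lemma can be stated as follows (this statement is supplied for reference and is not a quotation from the talk):

```latex
\textbf{Sperner's lemma.}
Let $T$ be a triangulation of the $n$-simplex
$\Delta^n = \operatorname{conv}(v_0, \dots, v_n)$, and let
$\ell$ assign to each vertex of $T$ a label in $\{0, \dots, n\}$
such that every vertex lying on a face
$\operatorname{conv}(v_{i_0}, \dots, v_{i_k})$ receives a label in
$\{i_0, \dots, i_k\}$ (a \emph{Sperner labeling}).
Then the number of simplices of $T$ whose vertices carry all
$n+1$ labels is odd; in particular, at least one such
``rainbow'' simplex exists.
```

The $n = 2$ case already illustrates the flavor: any Sperner-labeled triangulation of a triangle contains a small triangle with all three labels, which is the combinatorial engine behind proofs of the Brouwer fixed-point theorem and fair-division results.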
I will discuss recent results in two research directions at the intersection of scientific machine learning and modeling of dynamical systems.
First, we consider systems of interacting agents or particles, which are commonly used in models throughout the sciences and can exhibit complex, emergent large-scale dynamics even when driven by simple interaction laws. We consider the following inference problem: given only observations of trajectories of the agents in the system, can we learn the unknown laws of interaction? We cast this as an inverse problem, discuss when it is well-posed, and construct estimators for the interaction kernels with provably good statistical and computational properties, even in the nonparametric regime where only minimal information is available about the form of the interaction laws. We also demonstrate numerically that the estimated systems can accurately reproduce the emergent behaviors of the original systems, even when the observations are so short that no emergent behavior was witnessed in the training data. Finally, we discuss the case where the agents sit on an unknown network, so that both the interaction kernel and the network must be estimated.
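As a toy illustration of this inference problem (not the speaker's actual estimator; the kernel, parameter values, and basis choice below are all invented for the sketch), one can simulate a first-order interacting-particle system with a known kernel and then recover the kernel nonparametrically by least squares over a piecewise-constant basis in the pairwise distance:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, steps, dt = 8, 2, 200, 0.01
phi_true = lambda r: np.exp(-r)  # hypothetical "unknown" interaction kernel

# Simulate  x_i' = (1/N) * sum_j phi(|x_j - x_i|) (x_j - x_i)  by forward Euler,
# recording positions and the exact velocities as the "observed" trajectory data.
X = rng.normal(size=(N, d))
snapshots = []
for _ in range(steps):
    diff = X[None, :, :] - X[:, None, :]          # diff[i, j] = x_j - x_i
    r = np.linalg.norm(diff, axis=-1)             # pairwise distances
    V = (phi_true(r)[:, :, None] * diff).mean(axis=1)
    snapshots.append((V.copy(), diff.copy(), r.copy()))
    X = X + dt * V

# Nonparametric estimation: expand phi in K piecewise-constant basis functions
# on [0, rmax] and solve a least-squares problem for the coefficients.
K = 20
rmax = max(r.max() for _, _, r in snapshots) + 1e-9
edges = np.linspace(0.0, rmax, K + 1)

A_blocks, b_blocks = [], []
for V, diff, r in snapshots:
    bins = np.minimum(np.digitize(r, edges) - 1, K - 1)
    # F[i, k] = (1/N) * sum over j with r_ij in bin k of (x_j - x_i)
    F = np.zeros((N, K, d))
    for i in range(N):
        for j in range(N):
            if i != j:
                F[i, bins[i, j]] += diff[i, j] / N
    A_blocks.append(F.transpose(0, 2, 1).reshape(N * d, K))
    b_blocks.append(V.reshape(N * d))

A, b = np.vstack(A_blocks), np.concatenate(b_blocks)
coef, *_ = np.linalg.lstsq(A, b, rcond=None)      # coef[k] ~ phi on the k-th bin
```

On bins that the observed distances actually visit, `coef` approximates the true kernel; empty bins are unidentifiable (the minimum-norm solution leaves them at zero), which mirrors the well-posedness discussion in the abstract: the kernel can only be learned where the dynamics explore.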
In the second part of the talk, I will discuss recent applications of deep learning in the context of digital twins in cardiology, in particular the use of operator-learning architectures for predicting solutions of parametric PDEs, or functionals thereof, on a family of diffeomorphic domains (the patient-specific hearts), which we apply to predicting medically relevant electrophysiological features of heart digital twins.