Seminars and Colloquia by Series

Sparse Quadratic Optimization via Polynomial Roots

Series
ACO Student Seminar
Time
Friday, December 2, 2022 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Kevin Shu, Georgia Tech Math

We'll talk about problems of optimizing a quadratic function subject to quadratic constraints, in addition to a sparsity constraint that requires that solutions have only a few nonzero entries. Such problems include sparse versions of linear regression and principal components analysis. We'll see that this problem can be formulated as a convex conical optimization problem over a sparse version of the positive semidefinite cone, and then see how we can approximate such problems using ideas arising from the study of hyperbolic polynomials. We'll also describe a fast algorithm for such problems, which performs well in practical situations.
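
To fix notation, here is one representative member of this problem class, sparse PCA, written as a sparsity-constrained quadratic program (the notation is illustrative, not the speaker's):

$$\max_{x \in \mathbb{R}^n} \; x^T \Sigma x \quad \text{subject to} \quad \|x\|_2 = 1, \;\; \|x\|_0 \leq k,$$

where $\Sigma$ is a sample covariance matrix and $\|x\|_0$ counts nonzero entries. Lifting to the matrix variable $X = xx^T$ is what turns such problems into conic programs over a sparse analogue of the positive semidefinite cone.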

Breaking the quadratic gap for strongly polynomial solvers to combinatorial linear programs

Series
ACO Student Seminar
Time
Friday, November 18, 2022 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Bento Natura, Georgia Tech ISyE

Recent years have seen tremendous progress in high-accuracy solvers for Maximum Flow, Minimum-Cost Flow, and general Linear Programs (LP). Progress on strongly polynomial solvers for combinatorial LP, on the other hand, has stalled. The computational gap between high-accuracy solvers and strongly polynomial solvers is linear in the number of variables only for combinatorial LP on directed graphs. For combinatorial LP beyond directed graphs, this gap is currently quadratic and has been known since the 1980s, due to a seminal result of Tardos.

We finally break the quadratic gap and design a strongly polynomial interior-point method for combinatorial LP, which reduces the gap to only a linear factor. Thus, we match the known linear gap for LP on directed graphs. Furthermore, we close the linear gap between feasibility and optimization of combinatorial LP.
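
For context, the standard definitions at play here (not specific to this work): a linear program in standard form is

$$\min \; c^T x \quad \text{subject to} \quad Ax = b, \;\; x \geq 0,$$

and a solver is strongly polynomial if its number of arithmetic operations is bounded by a polynomial in the dimensions of $A$ alone, independent of the bit-lengths of the input numbers; the "gaps" above compare the running times of the best strongly polynomial solvers against the best high-accuracy ones.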

Self-Adjusting Data Structures

Series
ACO Student Seminar
Time
Friday, November 11, 2022 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Robert Tarjan, Princeton University

Please Note: Robert Tarjan is the James S. McDonnell Distinguished University Professor of Computer Science at Princeton University. He has held academic positions at Cornell, Berkeley, Stanford, and NYU, and industrial research positions at Bell Labs, NEC, HP, Microsoft, and Intertrust Technologies. He has invented or co-invented many of the most efficient known data structures and graph algorithms. He was awarded the first Nevanlinna Prize from the International Mathematical Union in 1982 for “outstanding contributions to mathematical aspects of information science,” the Turing Award in 1986 with John Hopcroft for “fundamental achievements in the design and analysis of algorithms and data structures,” and the Paris Kanellakis Award in Theory and Practice in 1999 with Daniel Sleator for the invention of splay trees. He is a member of the U.S. National Academy of Sciences, the U.S. National Academy of Engineering, the American Academy of Arts and Sciences, and the American Philosophical Society.

Data structures are everywhere in computer software.  Classical data structures are specially designed to make each individual operation fast.  A more flexible approach is to design the structure so that it adapts to its use.  This idea has produced data structures that perform well in practice and have surprisingly good performance guarantees.  In this talk I’ll review some recent work on such data structures, specifically on self-adjusting search trees and self-adjusting heaps.
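
As a concrete illustration of the self-adjusting idea (a minimal sketch of the splay operation behind splay trees, not code from the talk): every access rotates the accessed key to the root through zig-zig and zig-zag steps, so frequently accessed keys drift toward the top and each access costs O(log n) amortized.

    # Minimal recursive splay on a plain binary search tree.
    # Illustrative sketch; omits insert/delete and assumes distinct keys.
    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def rotate_right(y):          # promote y.left
        x = y.left
        y.left, x.right = x.right, y
        return x

    def rotate_left(x):           # promote x.right
        y = x.right
        x.right, y.left = y.left, x
        return y

    def splay(t, key):
        """Rotate the node with `key` (or the last node on its search
        path) to the root, restructuring the tree as a side effect."""
        if t is None or t.key == key:
            return t
        if key < t.key:
            if t.left is None:
                return t
            if key < t.left.key:                      # zig-zig
                t.left.left = splay(t.left.left, key)
                t = rotate_right(t)
            elif key > t.left.key:                    # zig-zag
                t.left.right = splay(t.left.right, key)
                if t.left.right is not None:
                    t.left = rotate_left(t.left)
            return rotate_right(t) if t.left is not None else t
        else:
            if t.right is None:
                return t
            if key > t.right.key:                     # zig-zig
                t.right.right = splay(t.right.right, key)
                t = rotate_left(t)
            elif key < t.right.key:                   # zig-zag
                t.right.left = splay(t.right.left, key)
                if t.right.left is not None:
                    t.right = rotate_right(t.right)
            return rotate_left(t) if t.right is not None else t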

Introduction to Quantum Computing and Its Role in Combinatorial Optimization

Series
ACO Student Seminar
Time
Friday, November 4, 2022 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Reuben Tate, Georgia Tech Math

In recent years, there has been increased interest in using quantum computing to solve problems in combinatorial optimization. No prior knowledge of quantum computing is necessary for this talk; in particular, the talk will be divided into three parts: (1) a gentle high-level introduction to the basics of quantum computing, (2) a general framework for solving combinatorial optimization problems with quantum computing (the Quantum Approximate Optimization Algorithm introduced by Farhi et al.), and (3) some recent results that my colleagues and I have found. Our group has looked at the Max-Cut problem and has developed a new quantum algorithm that utilizes classically obtained warm-starts in order to improve upon already-existing quantum algorithms. This talk will discuss both theoretical and experimental results associated with our approach: we obtain a 0.658-approximation for Max-Cut, our approach provably converges to the maximum cut as a parameter (called the quantum circuit depth) increases, and, on small graphs, our approach is able to empirically beat the (classical) Goemans-Williamson algorithm at a relatively low quantum circuit depth (i.e., using a small amount of quantum resources). This work is joint with Jai Moondra, Bryan Gard, Greg Mohler, and Swati Gupta.
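
For orientation, the textbook depth-$p$ QAOA ansatz for Max-Cut on a graph $G = (V, E)$ with $n$ vertices is

$$C = \sum_{(i,j) \in E} \tfrac{1}{2}(1 - Z_i Z_j), \qquad B = \sum_{i \in V} X_i, \qquad |\gamma, \beta\rangle = \prod_{l=1}^{p} e^{-i \beta_l B} e^{-i \gamma_l C} \, |+\rangle^{\otimes n},$$

with the $2p$ angles optimized classically. Roughly speaking, a warm start replaces the uniform initial state $|+\rangle^{\otimes n}$ (and possibly the mixer $B$) with versions biased toward a classically obtained cut; see the talk for the precise construction.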

Coloring Graphs with Forbidden Almost Bipartite Subgraphs

Series
ACO Student Seminar
Time
Friday, October 28, 2022 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
James Anderson, Georgia Tech Math

For graphs with maximum degree $\Delta$, a greedy algorithm (sketched after this abstract) shows $\chi(G) \leq \Delta + 1$. Brooks improved this to $\chi(G) \leq \Delta$ when $G$ has no cliques of size $\Delta + 1$, provided $\Delta \geq 3$. It is conjectured that if one forbids other graphs, the bound can be pushed further: for instance, Alon, Krivelevich, and Sudakov conjecture that, for any graph $F$, there is a constant $c(F) > 0$ such that $\chi(G) \leq (c(F) + o(1)) \Delta / \log\Delta$ for all $F$-free graphs $G$ of maximum degree $\Delta$. The only graphs $F$ for which this conjecture has been verified so far---by Alon, Krivelevich, and Sudakov themselves---are the so-called almost bipartite graphs, i.e., graphs that can be made bipartite by removing at most one vertex. Equivalently, a graph is almost bipartite if it is a subgraph of the complete tripartite graph $K_{1,t,t}$ for some $t \in \mathbb{N}$. The best heretofore known upper bound on $c(F)$ for almost bipartite $F$ is due to Davies, Kang, Pirot, and Sereni, who showed that $c(K_{1,t,t}) \leq t$. We prove that in fact $c(F) \leq 4$ for any almost bipartite graph $F$, thus making the bound independent of $F$ in all the known cases of the conjecture. We also establish a more general version of this result in the setting of DP-coloring (also known as correspondence coloring), to which we give a gentle introduction. Finally, we consider consequences of this result in the context of sublinear algorithms.

 

This is joint work with Anton Bernshteyn and Abhishek Dhawan.
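
The $\Delta + 1$ bound in the opening sentence of this abstract comes from a one-pass argument, sketched below in Python for concreteness (not part of the paper): when a vertex is colored, its at most $\Delta$ neighbors block at most $\Delta$ colors, so some color in $\{0, 1, \ldots, \Delta\}$ remains free.

    def greedy_coloring(adj):
        """adj: dict mapping each vertex to an iterable of its neighbors.
        Colors vertices in the given order with the smallest color not
        used by an already-colored neighbor, so a graph of maximum
        degree Delta never needs more than Delta + 1 colors."""
        color = {}
        for v in adj:
            used = {color[u] for u in adj[v] if u in color}
            c = 0
            while c in used:
                c += 1
            color[v] = c
        return color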

Stability, Optimality, and Fairness in Federated learning

Series
ACO Student Seminar
Time
Friday, October 21, 2022 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Kate Donahue, Cornell

Federated learning is a distributed learning paradigm where multiple agents, each only with access to local data, jointly learn a global model. There has recently been an explosion of research aiming not only to improve the accuracy rates of federated learning, but also provide certain guarantees around social good properties such as total error or fairness. In this talk, I describe two papers analyzing federated learning through the lens of cooperative game theory (both joint with Jon Kleinberg).

 

In the first paper, we discuss fairness in federated learning, which relates to how error rates differ between federating agents. In this work, we consider two notions of fairness: egalitarian fairness (which aims to bound how dissimilar error rates can be) and proportional fairness (which aims to reward players for contributing more data). For egalitarian fairness, we obtain a tight multiplicative bound on how widely error rates can diverge between agents federating together. For proportional fairness, we show that sub-proportional error (relative to the number of data points contributed) is guaranteed for any individually rational federating coalition.

The second paper explores optimality in federated learning with respect to the objective of minimizing the average error rate among federating agents. In this work, we provide and prove the correctness of an efficient algorithm to calculate an optimal (error-minimizing) arrangement of players. Building on this, we give the first constant-factor bound on the performance gap between stability and optimality, proving that the total error of the worst stable solution can be no higher than 9 times the total error of an optimal solution (a Price of Anarchy bound of 9).
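
In symbols, the Price of Anarchy guarantee from the second paper can be paraphrased as follows (notation mine, not the paper's):

$$\mathrm{PoA} = \frac{\max_{\pi\ \text{stable}} \sum_i \mathrm{err}_i(\pi)}{\min_{\pi} \sum_i \mathrm{err}_i(\pi)} \leq 9,$$

where $\pi$ ranges over arrangements of agents into federating coalitions and $\mathrm{err}_i(\pi)$ is agent $i$'s expected error under arrangement $\pi$.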


Relevant links:
https://arxiv.org/abs/2010.00753
https://arxiv.org/abs/2106.09580
https://arxiv.org/abs/2112.00818

Bio:
Kate Donahue is a fifth-year computer science PhD candidate at Cornell, advised by Jon Kleinberg. She works on algorithmic problems relating to the societal impact of AI, such as fairness, human/AI collaboration, and game-theoretic models of federated learning. Her work has been supported by an NSF fellowship and recognized by a FAccT Best Paper award. During her PhD, she has interned at Microsoft Research, Amazon, and Google.

Sparse Cholesky factorization by greedy conditional selection

Series
ACO Student Seminar
Time
Friday, October 7, 2022 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Stephen Huan, Georgia Tech CS

Dense kernel matrices resulting from pairwise evaluations of a kernel function arise naturally in machine learning and statistics. Previous work in constructing sparse transport maps or sparse approximate inverse Cholesky factors of such matrices by minimizing Kullback-Leibler divergence recovers the Vecchia approximation for Gaussian processes. However, these methods often rely only on geometry to construct the sparsity pattern, ignoring the conditional effect of adding an entry. In this work, we construct the sparsity pattern by leveraging a greedy selection algorithm that maximizes mutual information with target points, conditional on all points previously selected. For selecting $k$ points out of $N$, the naive time complexity is $O(N k^4)$, but by maintaining a partial Cholesky factor we reduce this to $O(N k^2)$. Furthermore, for multiple ($m$) targets we achieve a time complexity of $O(N k^2 + N m^2 + m^3)$, which is maintained in the setting of aggregated Cholesky factorization, where a selected point need not condition every target. We directly apply the selection algorithm to image classification and the recovery of sparse Cholesky factors. By minimizing Kullback-Leibler divergence, we apply the algorithm to Cholesky factorization, Gaussian process regression, and preconditioning for the conjugate gradient method, improving over k-nearest neighbors particularly in high-dimensional, unusual, or otherwise messy geometries with non-isotropic kernel functions.
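
A rough single-target sketch of the selection loop, in NumPy with hypothetical names (the aggregated multi-target version described above is more involved): the point is that each pick appends one column to a partial Cholesky factor, so conditioning is updated incrementally rather than recomputed from scratch.

    import numpy as np

    def greedy_mi_select(K, y, k):
        """Pick k candidate indices maximizing mutual information with
        target index y, conditioning on previous picks via a partial
        Cholesky factor L (one new column per pick), for O(N k^2) total
        work instead of the naive O(N k^4). Assumes K is symmetric
        positive definite. Illustrative sketch only."""
        idx = np.array([i for i in range(K.shape[0]) if i != y])
        N = len(idx)
        L = np.zeros((N, k))
        var = K[idx, idx].astype(float)    # Var(x_i | picks so far)
        cov = K[idx, y].astype(float)      # Cov(x_i, target | picks so far)
        picked = []
        for t in range(k):
            # For Gaussians, the MI gain of candidate i is monotone in
            # cov_i^2 / var_i, so the argmax of this ratio is the greedy pick.
            gain = cov**2 / np.maximum(var, 1e-12)
            gain[picked] = -np.inf         # never re-pick a point
            p = int(np.argmax(gain))
            picked.append(p)
            s = np.sqrt(var[p])
            # Conditional covariances with the pivot form the new column.
            col = (K[idx, idx[p]] - L[:, :t] @ L[p, :t]) / s
            alpha = cov[p] / s
            L[:, t] = col
            var -= col**2                  # condition everything on the pivot
            cov -= alpha * col
        return idx[picked]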

Convexity of quadratic maps and convex hull via aggregation

Series
ACO Student Seminar
Time
Friday, September 30, 2022 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Shengding Sun, Georgia Tech Math

Quadratic forms and their hidden convexity have been studied for decades, dating back to famous theorems of Toeplitz-Hausdorff, Dines, and Brickman. The subject has a very rich connection to optimization via Yakubovich's S-lemma. I will introduce these results, as well as ongoing work on obtaining convex hulls via aggregation, in which we introduce the closely related notion of hidden hyperplane convexity.
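
As one concrete instance of the hidden convexity being referenced, Dines' theorem says that the joint range of two real quadratic forms is convex: for any symmetric $A, B \in \mathbb{R}^{n \times n}$,

$$\{\, (x^T A x, \; x^T B x) : x \in \mathbb{R}^n \,\} \subseteq \mathbb{R}^2 \quad \text{is convex},$$

even though the map $x \mapsto (x^T A x, x^T B x)$ itself is far from linear.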

Determinant Maximization via Matroid Intersection Algorithms

Series
ACO Student Seminar
Time
Friday, September 23, 2022 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Aditi Laddha

The determinant maximization problem gives a general framework that models problems arising in fields as diverse as statistics, convex geometry, fair allocations, combinatorics, spectral graph theory, network design, and random processes. In an instance of the determinant maximization problem, we are given a collection of vectors $U = \{v_1, \ldots, v_n\}$ in $d$ dimensions, and the goal is to pick a subset $S$ of the given vectors to maximize the determinant of the matrix $\sum_{i \in S} v_i v_i^T$. Often, the set $S$ of picked vectors must satisfy additional combinatorial constraints, such as a cardinality constraint ($|S| \leq k$) or a matroid constraint ($S$ is a basis of a matroid defined on the vectors). In this talk, we give a polynomial-time deterministic algorithm that returns an $r^{O(r)}$-approximation for any matroid of rank $r \leq d$. Our algorithm builds on combinatorial algorithms for matroid intersection, which iteratively improve a solution by finding an alternating negative cycle in the exchange graph defined by the matroids. While the determinant function is not linear, we show that taking appropriate linear approximations at each iteration suffices to give the improved approximation algorithm.

 

This talk is based on joint work with Adam Brown, Madhusudhan Pittu, Mohit Singh, and Prasad Tetali.
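
For intuition on why single exchanges can be scored, recall the matrix determinant lemma (a standard fact, stated here for background; not necessarily the paper's exact linearization): for invertible $M$,

$$\det(M + v v^T) = \det(M)\,(1 + v^T M^{-1} v),$$

so inserting one vector changes $\log \det$ by $\log(1 + v^T M^{-1} v)$, the kind of per-element quantity that a linear approximation over an exchange graph can track.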

Resolving Matrix Spencer Conjecture Up to Polylogarithmic Rank

Series
ACO Student Seminar
Time
Monday, September 12, 2022 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Haotian Jiang, University of Washington

In this talk, I will present a simple proof of the matrix Spencer conjecture up to poly-logarithmic rank: given symmetric $d \times d$ matrices $A_1, \ldots, A_n$, each with operator norm at most 1 and rank at most $n/\log^3 n$, one can efficiently find $\pm 1$ signs $x_1, \ldots, x_n$ such that their signed sum has spectral norm $\|\sum_{i=1}^n x_i A_i\|_{\mathrm{op}} = O(\sqrt{n})$. This result also implies a $(\log n - 3 \log \log n)$ qubit lower bound for quantum random access codes encoding $n$ classical bits with advantage $\gg 1/\sqrt{n}$. Our proof uses the recent refinement of the non-commutative Khintchine inequality in [Bandeira, Boedihardjo, van Handel, 2022] for random matrices with correlated Gaussian entries.
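
For context, the classical non-commutative Khintchine inequality bounds a Gaussian matrix series as

$$\mathbb{E}\,\Big\| \sum_{i=1}^n g_i A_i \Big\|_{\mathrm{op}} \lesssim \sigma \sqrt{\log d}, \qquad \sigma = \Big\| \sum_{i=1}^n A_i^2 \Big\|_{\mathrm{op}}^{1/2},$$

with $g_i$ i.i.d. standard Gaussians; since $\|A_i\|_{\mathrm{op}} \leq 1$ gives $\sigma \leq \sqrt{n}$, the $\sqrt{\log d}$ factor is precisely what the low-rank assumption is used to remove (this display is the standard form, not the refined matrix-dependent version of Bandeira, Boedihardjo, and van Handel used in the proof).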
