Seminars and Colloquia by Series

Friday, March 10, 2017 - 13:05 , Location: Skiles 005 , Peng Zhang , College of Computing, Georgia Tech , Organizer: Marcel Celaya
Spielman and Teng (2004) showed that linear systems in Laplacian matrices can be solved in nearly linear time. Since then, a major research goal has been to develop fast solvers for linear systems in other classes of matrices. Recently, this has led to fast solvers for directed Laplacians (CKPPRSV'17) and connection Laplacians (KLPSS'16). Connection Laplacians are a special case of PSD-Graph-Structured Block Matrices (PGSBMs): block matrices whose non-zero structure corresponds to a graph, and which additionally can be expressed as a sum of positive semi-definite matrices, each corresponding to an edge of the graph. A major open question is whether fast solvers can be obtained for all PGSBMs (Spielman, 2016). Fast solvers for connection Laplacians provided some hope for this. Other important families of matrices in the PGSBM class include truss stiffness matrices, which have many applications in engineering, and multi-commodity Laplacians, which arise when solving multi-commodity flow problems. In this talk, we show that multi-commodity and truss linear systems are unlikely to be solvable in nearly linear time, unless general linear systems (with integral coefficients) can be solved in nearly linear time. Joint work with Rasmus Kyng.
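Ordinary graph Laplacians are the simplest members of the PGSBM class described above. The following minimal sketch (Python/NumPy; the 4-vertex edge list is made up for illustration) shows the defining decomposition in the scalar case: the Laplacian is a sum of one PSD rank-one term per edge, and the PGSBM class generalizes this by letting each edge contribute an arbitrary PSD block.

```python
import numpy as np

def laplacian_from_edges(n, edges):
    """Assemble a graph Laplacian as a sum of per-edge PSD matrices.

    Each edge (u, v, w) contributes w * (e_u - e_v)(e_u - e_v)^T, which is
    rank-one and positive semi-definite."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        b = np.zeros(n)
        b[u], b[v] = 1.0, -1.0
        L += w * np.outer(b, b)
    return L

# Hypothetical 4-vertex example: L is PSD, and L @ x = b is the kind of
# system that nearly-linear-time Laplacian solvers handle.
L = laplacian_from_edges(4, [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 3, 1.0)])
print(np.linalg.eigvalsh(L))  # all eigenvalues >= 0 (up to round-off)
```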
Friday, February 17, 2017 - 13:05 , Location: Skiles 005 , David Durfee , College of Computing, Georgia Tech , Organizer: Marcel Celaya
We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in O(n^{5/3} m^{1/3}) time. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, O(n^\omega). For the special case of unweighted graphs, this improves upon the best previously known running time of Õ(min{n^\omega, m n^{1/2}, m^{4/3}}) for m >> n^{7/4} (Colbourn et al. '96, Kelner-Madry '09, Madry et al. '15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew the determinant-based and random-walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination, and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute ε-approximate effective resistances for a set S of vertex pairs via approximate Schur complements in Õ(m + (n + |S|) ε^{-2}) time, without using the Johnson-Lindenstrauss lemma, which requires Õ(min{(m + |S|) ε^{-2}, m + n ε^{-4} + |S| ε^{-2}}) time. We combine this approximation procedure with an error correction procedure for handling edges where our estimate isn't sufficiently accurate. Joint work with Rasmus Kyng, John Peebles, Anup Rao, and Sushant Sachdeva.
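Two facts the abstract relies on can be checked directly in the exact (dense) setting: effective resistances are quadratic forms in the Laplacian pseudoinverse, and they are preserved by Schur complements. A minimal NumPy sketch on a made-up 4-vertex graph; this is the exact computation, not the paper's fast approximate procedure.

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def effective_resistance(L, u, v):
    # R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v), with L^+ the pseudoinverse.
    b = np.zeros(L.shape[0]); b[u], b[v] = 1.0, -1.0
    return b @ np.linalg.pinv(L) @ b

edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 0.5)]
L = laplacian(4, edges)

# Eliminate vertex 3 by taking the Schur complement onto vertices {0, 1, 2}.
keep, drop = [0, 1, 2], [3]
S = (L[np.ix_(keep, keep)]
     - L[np.ix_(keep, drop)] @ np.linalg.inv(L[np.ix_(drop, drop)]) @ L[np.ix_(drop, keep)])

# Effective resistances among the kept vertices are preserved exactly.
print(effective_resistance(L, 0, 2), effective_resistance(S, 0, 2))
```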
Friday, February 10, 2017 - 13:05 , Location: Skiles 005 , Sarah Cannon , College of Computing, Georgia Tech , Organizer: Marcel Celaya
We give the first polynomial upper bound on the mixing time of the edge-flip Markov chain for unbiased dyadic tilings, resolving an open problem originally posed by Janson, Randall, and Spencer in 2002. The technique used, adapted from spin system analysis in statistical physics and not widely used in the computer science literature, involves a multilevel decomposition of the state space and is of independent interest. A dyadic tiling of size n is a tiling of the unit square by n non-overlapping dyadic rectangles, each of area 1/n, where a dyadic rectangle is any rectangle that can be written in the form [a2^{-s}, (a+1)2^{-s}] x [b2^{-t}, (b+1)2^{-t}] for non-negative integers a,b,s,t. The edge-flip Markov chain selects a random edge of the tiling and replaces it with its perpendicular bisector if doing so yields a valid dyadic tiling. Specifically, we show that the relaxation time of the edge-flip Markov chain for dyadic tilings is at most O(n^{4.09}), which implies that the mixing time is at most O(n^{5.09}). We complement this by showing that the relaxation time is at least \Omega(n^{1.38}), improving upon the previously best lower bound of \Omega(n log n) coming from the diameter of the chain. This is joint work with David Levin and Alexandre Stauffer.
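The combinatorial objects here are elementary enough to check by hand. A minimal Python sketch of the definitions from the abstract (the helper names and the quarter-square example are made up; a full validity checker would also verify interior-disjointness and coverage of the unit square):

```python
from fractions import Fraction

def is_dyadic_interval(lo, hi):
    """True iff [lo, hi] = [a*2^-s, (a+1)*2^-s] for non-negative integers a, s."""
    length = hi - lo
    if length <= 0 or lo < 0:
        return False
    s = 0
    while length < Fraction(1, 2 ** s):       # find s with 2^-s <= length
        s += 1
    if length != Fraction(1, 2 ** s):
        return False
    return (lo / length).denominator == 1     # lo is an integer multiple of 2^-s

def looks_like_dyadic_tiling(rects, n):
    """Checks that each of the n rectangles (x0, x1, y0, y1) is dyadic with area 1/n."""
    return len(rects) == n and all(
        (x1 - x0) * (y1 - y0) == Fraction(1, n)
        and is_dyadic_interval(x0, x1) and is_dyadic_interval(y0, y1)
        for x0, x1, y0, y1 in rects)

F = Fraction  # four quarter-squares: the simplest size-4 dyadic tiling
quads = [(F(0), F(1, 2), F(0), F(1, 2)), (F(1, 2), F(1), F(0), F(1, 2)),
         (F(0), F(1, 2), F(1, 2), F(1)), (F(1, 2), F(1), F(1, 2), F(1))]
print(looks_like_dyadic_tiling(quads, 4))  # True
```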
Friday, December 2, 2016 - 13:05 , Location: Skiles 006 , Daniel Zink , Georgia Tech , Organizer: Marcel Celaya
Conditional gradient algorithms (also often called Frank-Wolfe algorithms) are popular because they require only a linear optimization oracle, and they have recently gained significant traction in online learning as well. While simple in principle, in many cases the actual implementation of the linear optimization oracle is costly. We show a general method to lazify various conditional gradient algorithms, which in actual computations leads to several orders of magnitude of speedup in wall-clock time. This is achieved by replacing the linear optimization oracle with a faster separation oracle, relying on only a few linear optimization oracle calls.
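For context, here is a minimal sketch of the vanilla (non-lazy) conditional gradient method; the lazified variants in the talk replace the per-iteration linear optimization call with a cheaper separation/caching step. The toy simplex instance is made up for illustration.

```python
import numpy as np

def frank_wolfe(grad, lp_oracle, x0, steps=200):
    """Vanilla conditional gradient: each iteration calls the LP oracle once."""
    x = x0.copy()
    for t in range(steps):
        v = lp_oracle(grad(x))           # argmin_{v in P} <grad f(x), v>
        gamma = 2.0 / (t + 2.0)          # standard step size
        x = (1 - gamma) * x + gamma * v  # convex combination stays in P
    return x

# Toy instance: minimize ||x - y||^2 over the probability simplex.
y = np.array([0.1, 0.7, 0.4])
grad = lambda x: 2 * (x - y)

def simplex_lp_oracle(c):                # cheapest vertex of the simplex
    v = np.zeros_like(c)
    v[np.argmin(c)] = 1.0
    return v

print(frank_wolfe(grad, simplex_lp_oracle, np.ones(3) / 3))
```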
Friday, November 18, 2016 - 13:05 , Location: Skiles 005 , Tim Duff , School of Mathematics, Georgia Tech , Organizer: Marcel Celaya
At the intersection of computability and algebraic geometry, the following question arises: does an integral polynomial system of equations have any integral solutions? Famously, the combined work of Robinson, Davis, Putnam, and Matiyasevich answers this in the negative: no algorithm can decide the question in general. Nonetheless, algorithms have played an increasing role in the development of algebraic geometry and its many applications. I will address some research related to this general theme and some outstanding questions.
Friday, November 11, 2016 - 13:15 , Location: Skiles 005 , Chi Ho Yuen , School of Mathematics, Georgia Tech , Organizer: Marcel Celaya
The Jacobian (or sandpile group) of a graph is a well-studied group associated with the graph, known to be in bijection with the set of spanning trees of the graph via a number of classical combinatorial mappings. The algebraic definition of the Jacobian extends to regular matroids, but without the notion of vertices, many such combinatorial bijections fail to generalize. In this talk, I will discuss how orientations provide a way to overcome this obstacle. We give a novel, effectively computable bijection scheme between the Jacobian and the set of bases of a regular matroid, in which polyhedral geometry plays an important role; along the way we also obtain new enumerative results related to the Tutte polynomial. This is joint work with Spencer Backman and Matt Baker.
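The counting identity behind these bijections is easy to verify on a small example: the order of the Jacobian equals the determinant of the reduced Laplacian, which by the Matrix-Tree theorem counts the spanning trees that the bijections match it with. A minimal NumPy sketch for K4 (the group structure itself comes from the Smith normal form, not computed here):

```python
import numpy as np

def reduced_laplacian(n, edges, root=0):
    """Graph Laplacian with the row and column of `root` deleted."""
    L = np.zeros((n, n), dtype=np.int64)
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    keep = [i for i in range(n) if i != root]
    return L[np.ix_(keep, keep)]

# K4: |Jacobian| = det(reduced Laplacian) = number of spanning trees.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(round(np.linalg.det(reduced_laplacian(4, edges))))  # 16 = 4^{4-2}
```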
Friday, November 4, 2016 - 13:05 , Location: Skiles 005 , Aurko Roy , Georgia Tech , Organizer: Marcel Celaya
We study the cost function for hierarchical clusterings introduced by Dasgupta, where hierarchies are treated as first-class objects rather than deriving their cost from projections into flat clusters. Dasgupta also showed that a top-down algorithm returns a hierarchical clustering of cost at most O(α_n log n) times the cost of the optimal hierarchical clustering, where α_n is the approximation ratio of the Sparsest Cut subroutine used. Thus, using the best known approximation algorithm for Sparsest Cut due to Arora-Rao-Vazirani, the top-down algorithm returns a hierarchical clustering of cost at most O(log^{3/2} n) times the cost of the optimal solution. We improve this by giving an O(log n)-approximation algorithm for this problem. Our main technical ingredients are a combinatorial characterization of ultrametrics induced by this cost function, an Integer Linear Programming (ILP) formulation for this family of ultrametrics, and a procedure that iteratively rounds an LP relaxation of this formulation using the idea of sphere growing, which has been extensively used in the context of graph partitioning. We also prove that our algorithm returns an O(log n)-approximate hierarchical clustering for a generalization of this cost function also studied by Dasgupta. This is joint work with Sebastian Pokutta and will appear in NIPS 2016 (oral presentation).
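Dasgupta's objective charges each edge by the size of the smallest cluster in the hierarchy that still contains both endpoints, so good hierarchies separate heavy edges as late as possible. A small self-contained sketch of the cost function (the 4-point weights are made up for illustration):

```python
import itertools

def dasgupta_cost(tree, weights):
    """Sum over edges {i, j} of w_ij times the number of leaves of the
    smallest cluster containing both i and j. `tree` is a nested tuple of
    leaf labels; `weights` maps frozenset({i, j}) to w_ij (0 for non-edges)."""
    def leaves(t):
        return {t} if not isinstance(t, tuple) else set().union(*map(leaves, t))
    cost = 0.0
    def walk(t):
        nonlocal cost
        if not isinstance(t, tuple):
            return
        kids = [leaves(c) for c in t]
        size = sum(len(k) for k in kids)
        # pairs split apart at this node have their lowest common ancestor here
        for a, b in itertools.combinations(kids, 2):
            for i in a:
                for j in b:
                    cost += size * weights.get(frozenset({i, j}), 0.0)
        for c in t:
            walk(c)
    walk(tree)
    return cost

# Two tight pairs {0,1} and {2,3} with a weak cross edge.
w = {frozenset({0, 1}): 1.0, frozenset({2, 3}): 1.0, frozenset({0, 2}): 0.1}
print(dasgupta_cost(((0, 1), (2, 3)), w))  # 4.4: heavy edges cut last (cheaper)
print(dasgupta_cost(((0, 2), (1, 3)), w))  # 8.2: heavy edges cut at the root
```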
Friday, October 28, 2016 - 13:05 , Location: Skiles 005 , Kevin Lai , College of Computing, Georgia Tech , Organizer: Marcel Celaya
We consider the problem of estimating the mean and covariance of a distribution from iid samples in R^n in the presence of an η fraction of malicious noise; this is in contrast to much recent work where the noise itself is assumed to be from a distribution of known type. This agnostic learning problem includes many interesting special cases, e.g., learning the parameters of a single Gaussian (or finding the best-fit Gaussian) when an η fraction of the data is adversarially corrupted, agnostically learning a mixture of Gaussians, agnostic ICA, etc. We present polynomial-time algorithms to estimate the mean and covariance with error guarantees in terms of information-theoretic lower bounds. We also give an agnostic algorithm for estimating the 2-norm of the covariance matrix of a Gaussian. This is joint work with Santosh Vempala and Anup Rao and appeared in FOCS 2016.
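The failure mode motivating this line of work is easy to see numerically: a single adversarial η fraction of samples can move the empirical mean arbitrarily far. The sketch below contrasts it with a coordinate-wise median, a naive baseline that is far less affected but is not one of the estimators from the talk (whose error guarantees scale much better with dimension). All parameters here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eta = 1000, 20, 0.1                 # samples, dimension, corruption rate

data = rng.normal(loc=1.0, size=(n, d))   # i.i.d. N(1, I) samples
data[: int(eta * n)] = 100.0              # adversary replaces an eta fraction

true_mean = np.ones(d)
print(np.linalg.norm(data.mean(axis=0) - true_mean))     # dragged far off
print(np.linalg.norm(np.median(data, axis=0) - true_mean))  # still close
```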
Friday, October 14, 2016 - 13:00 , Location: Skiles 005 , Matthew Fahrbach , College of Computing, Georgia Tech , Organizer: Marcel Celaya
Graded posets are partially ordered sets equipped with a unique rank function that respects the partial order and such that neighboring elements in the Hasse diagram have ranks that differ by one. We frequently find them throughout combinatorics, including the canonical partial order on Young diagrams and plane partitions, where their respective rank functions are the area and volume under the configuration. We ask when it is possible to efficiently sample elements with a fixed rank in a graded poset. We show that for certain classes of posets, a biased Markov chain that connects elements in the Hasse diagram allows us to approximately generate samples from any fixed rank in expected polynomial time. While varying a bias parameter to increase the likelihood of a sample of a desired size is common in statistical physics, one typically needs properties such as log-concavity in the number of elements of each size to generate desired samples with sufficiently high probability. Here we do not even require unimodality in order to guarantee that the algorithm succeeds in generating samples of the desired rank efficiently. This joint work with Prateek Bhakta, Ben Cousins, and Dana Randall will appear at SODA 2017. 
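The simplest graded poset on which to see the bias mechanism is the Boolean lattice of subsets of [n], with rank equal to subset size; the harder posets in the talk (Young diagrams, plane partitions) need the multilevel analysis described above. A minimal Metropolis-style sketch of a biased chain on Hasse-diagram moves (function names and parameters are made up for illustration):

```python
import random

def biased_chain_sample(n, lam, steps=20000, seed=0):
    """Chain on subsets of {0, ..., n-1}: propose toggling one element (a
    Hasse-diagram move), accept with probability min(1, lam^(rank change)).
    Stationary weight is proportional to lam^rank(S), so lam tunes the bias."""
    rng = random.Random(seed)
    state = set()
    for _ in range(steps):
        x = rng.randrange(n)
        if x in state:                          # move down one rank
            if rng.random() < min(1.0, 1.0 / lam):
                state.remove(x)
        else:                                   # move up one rank
            if rng.random() < min(1.0, lam):
                state.add(x)
    return state

# lam > 1 favors high ranks, lam < 1 favors low ranks, lam = 1 is uniform.
print(len(biased_chain_sample(20, lam=3.0)))   # typically a large subset
print(len(biased_chain_sample(20, lam=0.3)))   # typically a small subset
```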
Friday, September 23, 2016 - 13:05 , Location: Skiles 005 , Richard Peng , College of Computing, Georgia Tech , Organizer: Marcel Celaya
Parallel algorithms study ways of speeding up sequential algorithms by splitting work onto multiple processors. Theoretical studies of parallel algorithms often focus on performing a small number of operations, but assume more generous models of communication. Recent progress has led to parallel algorithms for many graph optimization problems that had proven difficult to parallelize. In this talk I will survey routines at the core of these results: low diameter decompositions, random sampling, and iterative methods.
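As a tiny illustration of the last ingredient, a Jacobi sweep is a classic iterative method in which every coordinate can be updated simultaneously, which is why such methods are natural building blocks for parallel solvers. A minimal NumPy sketch on a made-up diagonally dominant system (a stand-in for, and much simpler than, the solvers surveyed in the talk):

```python
import numpy as np

def jacobi(A, b, iters=200):
    """Jacobi iteration x <- D^{-1}(b - (A - D)x); each sweep's coordinate
    updates are independent and can run in parallel."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

# Strictly diagonally dominant system, for which Jacobi converges.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(A, b), np.linalg.solve(A, b))  # the two agree to high precision
```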
