Seminars and Colloquia by Series

Friday, March 28, 2014 - 12:05 , Location: Skiles 005 , Ioannis Panageas , Georgia Tech , Organizer:
Since the 1950s and Nash's general proof of equilibrium existence in games, it has been well understood that even simple games may have many, even uncountably many, equilibria with different properties. In such cases a natural question arises: which equilibrium is the right one? In this work, we perform average-case analysis of evolutionary dynamics in such games. Intuitively, we assign to each equilibrium a probability mass proportional to the size of its region of attraction. We develop new techniques to compute these likelihoods for classic games such as the Stag Hunt game (and generalizations) as well as balls-and-bins games. Our proofs combine techniques from information theory (relative entropy), dynamical systems (center manifold theorem), and algorithmic game theory. Joint work with Georgios Piliouras.
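A minimal Python sketch of the basin-of-attraction idea, using replicator dynamics on a Stag Hunt with assumed illustrative payoffs (stag pays 5 when matched, 0 otherwise; hare pays 3 regardless); the payoffs and discretization are my own choices, not taken from the paper.

def replicator_limit(x, dt=0.01, steps=20000):
    """Integrate x' = x(1-x)(f_stag - f_hare) and return the limit point;
    x is the fraction of the population playing Stag."""
    for _ in range(steps):
        f_stag = 5 * x          # expected payoff of Stag against the population
        f_hare = 3              # Hare's payoff is constant
        x += dt * x * (1 - x) * (f_stag - f_hare)
    return x

# Estimate each equilibrium's likelihood as the uniform mass of its
# region of attraction, by sampling initial conditions on a grid.
# The mixed equilibrium 5x = 3 puts the basin boundary at x* = 0.6.
grid = [i / 1000 for i in range(1, 1000)]
stag_basin = sum(replicator_limit(x) > 0.5 for x in grid) / len(grid)
print("P(all-Stag) ~", stag_basin)      # ~0.4, matching the boundary x* = 0.6
print("P(all-Hare) ~", 1 - stag_basin)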
Friday, November 22, 2013 - 13:05 , Location: Skiles 005 , Cristóbal Guzmán , ISyE, Georgia Tech , cguzman@gatech.edu , Organizer:
First-order (a.k.a. subgradient) methods in convex optimization are a popular choice when facing extremely large-scale problems, where medium-accuracy solutions suffice. The limits of performance of first-order methods can be partially understood through the lens of black-box oracle complexity. In this talk I will present some of the limitations of worst-case black-box oracle complexity, and I will show two recent extensions of the theory.

First, we extend the notion of oracle complexity to the distributional setting, where complexity is measured as the worst average running time of (deterministic) algorithms against a distribution of instances. In this model, the distribution of instances is part of the input to the algorithm, and thus algorithms can potentially exploit it to accelerate their running time. However, we will show that for nonsmooth convex optimization, distributional lower bounds coincide with worst-case complexity up to a constant factor, and thus all notions of complexity collapse; we can further extend these lower bounds to prove that the running time is high with high probability (this is joint work with Sebastian Pokutta and Gabor Braun).

Second, we extend the worst-case lower bounds for smooth convex optimization to non-Euclidean settings. Our construction mimics the classical proof for the nonsmooth case (based on piecewise-linear functions), but with a local smoothening of the instances. We establish a general lower bound for a wide class of finite-dimensional Banach spaces, and then apply the results to \ell^p spaces, for p\in[2,\infty]. A further reduction allows us to extend the lower bounds to p\in[1,2). As consequences, we prove the near-optimality of the Frank-Wolfe algorithm for the box and the spectral norm ball, and we prove near-optimality results for function classes that contain the standard convex relaxation of the sparse recovery problem (this is joint work with Arkadi Nemirovski).
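To fix ideas, here is a minimal sketch of the black-box first-order-oracle model for nonsmooth convex minimization; the test function and step sizes are illustrative assumptions, not instances from the lower-bound constructions.

import numpy as np

def oracle(x, b):
    """Black-box first-order oracle for f(x) = ||x - b||_1:
    returns the value f(x) and one subgradient g in the subdifferential."""
    return np.abs(x - b).sum(), np.sign(x - b)

def subgradient_method(b, steps=10000):
    """Classical subgradient method x_{t+1} = x_t - eta_t * g_t with
    eta_t ~ 1/sqrt(t); the best iterate converges at rate O(1/sqrt(T))."""
    x = np.zeros_like(b)
    best = float("inf")
    for t in range(1, steps + 1):
        f, g = oracle(x, b)
        best = min(best, f)
        x -= g / np.sqrt(t)
    return best

b = np.linspace(-1.0, 1.0, 50)
print(subgradient_method(b))  # slowly approaches the optimal value 0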
Friday, November 8, 2013 - 13:05 , Location: Skiles 005 , Gustavo Angulo , ISyE, Georgia Tech , Organizer:
In this talk, we introduce and study the forbidden-vertices problem. Given a polytope P and a subset X of its vertices, we study the complexity of linear optimization over the subset of vertices of P that are not contained in X. This problem is closely related to finding the k best basic solutions to a linear program. We show that the complexity of the problem changes significantly depending on how both P and X are described, that is, on the encoding of the input data. For example, we analyze the case where the complete linear formulation of P is provided, as opposed to the case where P is given by an implicit description (to be defined in the talk). When P has binary vertices only, we provide additional tractability results and linear formulations of polynomial size. Some applications and extensions to integral polytopes will be discussed. Joint work with Shabbir Ahmed, Santanu S. Dey, and Volker Kaibel.
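For the binary case, one standard way to exclude a known 0/1 vertex is a "no-good" cut; the brute-force Python sketch below illustrates only that idea, not the formulations from the talk.

from itertools import product

def no_good_cut(v):
    """Return a test for the linear inequality that cuts off exactly the
    0/1 point v:  sum_{i: v_i=0} x_i + sum_{i: v_i=1} (1 - x_i) >= 1."""
    return lambda x: sum(x[i] if v[i] == 0 else 1 - x[i] for i in range(len(v))) >= 1

def best_vertex(n, c, forbidden):
    """Maximize c.x over {0,1}^n minus the forbidden vertices (brute force)."""
    cuts = [no_good_cut(v) for v in forbidden]
    feasible = (x for x in product((0, 1), repeat=n) if all(cut(x) for cut in cuts))
    return max(feasible, key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))

c = (3, -1, 2)
print(best_vertex(3, c, forbidden=[(1, 0, 1)]))  # (1, 1, 1): the optimum (1,0,1) is cut off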
Friday, November 1, 2013 - 13:05 , Location: Skiles 005 , Yingyu Liang , College of Computing, Georgia Tech , Organizer:
Recently, Bilu and Linial formalized an implicit assumption often made when choosing a clustering objective: that the optimum clustering to the objective should be preserved under small multiplicative perturbations to distances between points. They showed that for max-cut clustering it is possible to circumvent NP-hardness and obtain polynomial-time algorithms for instances resilient to large (factor O(\sqrt{n})) perturbations, and subsequently Awasthi et al. considered center-based objectives, giving algorithms for instances resilient to O(1) factor perturbations. In this talk, for center-based objectives, we present an algorithm that can optimally cluster instances resilient to (1+\sqrt{2})-factor perturbations, solving an open problem of Awasthi et al. For k-median, a center-based objective of special interest, we additionally give algorithms for a more relaxed assumption in which we allow the optimal solution to change in a small fraction of the points after perturbation. We give the first bounds known for k-median under this more realistic and more general assumption. We also provide positive results for min-sum clustering, which is generally a much harder objective than center-based objectives. Our algorithms are based on new linkage criteria that may be of independent interest.
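As a concrete illustration of the resilience assumption, here is a toy brute-force check in Python, with made-up data; it only tests the definition and is not one of the algorithms from the talk.

import itertools, random

points = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
dist = {(i, j): ((points[i][0]-points[j][0])**2 + (points[i][1]-points[j][1])**2) ** 0.5
        for i in range(5) for j in range(5)}

def kmedian_partition(d, k=2):
    """Brute-force optimal k-median clustering: try all k-subsets of points
    as centers, assign each point to its closest center."""
    def cost(centers):
        return sum(min(d[(i, c)] for c in centers) for i in range(5))
    centers = min(itertools.combinations(range(5), k), key=cost)
    assign = lambda i: min(centers, key=lambda c: d[(i, c)])
    return frozenset(frozenset(i for i in range(5) if assign(i) == c) for c in centers)

base = kmedian_partition(dist)
alpha = 1 + 2 ** 0.5
for _ in range(100):  # random multiplicative perturbations up to factor alpha
    d2 = {ij: d * random.uniform(1, alpha) for ij, d in dist.items()}
    assert kmedian_partition(d2) == base  # optimal partition unchanged
print("optimal partition stable under sampled", round(alpha, 3), "-factor perturbations")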
Friday, October 25, 2013 - 13:05 , Location: Skiles 005 , Ton Dieker , ISyE, Georgia Tech , Organizer:
This talk revolves around Markov functions, i.e., when a function of a Markov chain results in another Markov chain. We focus on two examples where this concept yields new results and insights: (1) the evolution of reflected stochastic processes in the study of stochastic networks, and (2) spectral analysis for a special high-dimensional Markov chain.
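A minimal numeric illustration (my own example, not from the talk) of when a function of a Markov chain is again Markov, via the classical strong-lumpability condition:

import numpy as np

# Transition matrix on states {0, 1, 2}; we lump states 1 and 2 together.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.4, 0.3, 0.3]])
blocks = [[0], [1, 2]]

# Strong lumpability: for every block B, the total probability P(i, B)
# must be the same for all states i inside a common block.
for B in blocks:
    for A in blocks:
        assert len({round(P[i, B].sum(), 12) for i in A}) == 1

# The block label of the chain is then itself Markov, with lumped
# transition matrix Q[a, b] = P(i, block_b) for any i in block_a.
Q = np.array([[P[A[0], B].sum() for B in blocks] for A in blocks])
print(Q)  # [[0.2, 0.8], [0.4, 0.6]]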
Friday, October 11, 2013 - 13:05 , Location: Skiles 005 , Jugal Garg , College of Computing, Georgia Tech , Organizer:
Although production is an integral part of the Arrow-Debreu market model, most of the work in theoretical computer science has so far concentrated on markets without production, i.e., the exchange economy. In this work, we take a significant step towards understanding computational aspects of markets with production. We first define the notion of separable, piecewise-linear concave (SPLC) production by analogy with SPLC utility functions. We then obtain a linear complementarity problem (LCP) formulation that captures exactly the set of equilibria for Arrow-Debreu markets with SPLC utilities and SPLC production, and we give a complementary pivot algorithm for finding an equilibrium. This settles a question asked by Eaves in 1975. Since this is a path-following algorithm, we obtain a proof of membership of this problem in PPAD, using a result of Todd (1976). We also obtain an elementary proof of existence of equilibrium (i.e., without using a fixed point theorem), rationality, and oddness of the number of equilibria. Experiments show that our algorithm runs fast on randomly chosen examples, and unlike previous approaches, it does not suffer from issues of numerical instability. Additionally, it is strongly polynomial when the number of goods or the number of agents and firms is constant. This extends the result of Devanur and Kannan (2008) to markets with production. Based on joint work with Vijay V. Vazirani.
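As background, a minimal Python sketch of the two building blocks named above, with illustrative numbers of my own (not the paper's formulation): an SPLC utility for one good, and what a linear complementarity problem asks for.

import numpy as np

def splc_utility(x, segments):
    """Separable piecewise-linear concave utility for a single good:
    segments = [(length, slope), ...] with strictly decreasing slopes."""
    total, remaining = 0.0, x
    for length, slope in segments:
        step = min(remaining, length)
        total += slope * step
        remaining -= step
    return total

# Utility of 2.5 units under slopes 4, 2, 1 on segments of length 1, 1, inf.
print(splc_utility(2.5, [(1, 4), (1, 2), (float("inf"), 1)]))  # 4 + 2 + 0.5 = 6.5

# LCP(M, q): find z >= 0 with w = M z + q >= 0 and z . w = 0.
M = np.eye(2)
q = np.array([-1.0, 2.0])
z = np.array([1.0, 0.0])          # candidate solution
w = M @ z + q
assert (z >= 0).all() and (w >= 0).all() and np.isclose(z @ w, 0)
print("z =", z, "solves LCP(M, q)")  # complementarity: z_i * w_i = 0 for all i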
Friday, September 27, 2013 - 13:05 , Location: Skiles 005 , Ernie Croot , School of Math, Georgia Tech , Organizer:
If A is a set of n integers such that the sumset A+A = {a+b : a,b in A} has size 2n-1, then it turns out to be relatively easy to prove that A is an arithmetic progression {c, c+d, c+2d, c+3d, ..., c+(n-1)d}. But what if you only know something a bit weaker, say |A+A| < 10n? Well, then there is a famous theorem due to G. Freiman that says that A is a "dense subset of a generalized arithmetic progression" (whatever that is -- you'll find out). Recently, this subject has been revolutionized by some remarkable results due to Tom Sanders. In this talk I will discuss what these are.
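A quick numeric illustration in Python (my own toy examples, not from the talk) of how the doubling ratio |A+A|/|A| separates structured from random sets:

import random

def doubling(A):
    """Return |A+A| / |A| for a set of integers A."""
    return len({a + b for a in A for b in A}) / len(A)

n = 50
ap = {7 + 3 * i for i in range(n)}                           # arithmetic progression
gap = {5 * i + 61 * j for i in range(10) for j in range(5)}  # generalized AP of rank 2
rand = set(random.sample(range(10**9), n))                   # random set

print(doubling(ap))    # (2n-1)/n, just under 2: the extreme structured case
print(doubling(gap))   # still small (about 3.4): dense in a 2-dimensional progression
print(doubling(rand))  # about n/2: essentially no additive structure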
Friday, September 13, 2013 - 13:05 , Location: Skiles 005 , Ying Xiao , College of Computing, Georgia Tech , Organizer:
Fourier PCA is Principal Component Analysis of the covariance matrix obtained after reweighting a distribution with a random Fourier weighting. It can also be viewed as PCA applied to the Hessian matrix of the logarithm of the characteristic function of the underlying distribution. Extending this technique to higher derivative tensors and developing a general tensor decomposition method, we derive the following results: (1) a polynomial-time algorithm for general independent component analysis (ICA), not requiring the component distributions to be discrete or distinguishable from Gaussian in their fourth moment (unlike in the previous work); (2) the first polynomial-time algorithm for underdetermined ICA, where the number of components can be arbitrarily higher than the dimension; (3) an alternative algorithm for learning mixtures of spherical Gaussians with linearly independent means. These results also hold in the presence of Gaussian noise.
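A minimal simulation of the reweighting mechanics, under the simplifying assumption (mine, for this sketch) of an orthogonal mixing matrix; the actual algorithms handle the general case.

import numpy as np

rng = np.random.default_rng(0)
n, d = 200_000, 3

# ICA model x = A s with iid non-Gaussian components and, for this toy
# sketch only, an orthogonal mixing matrix A.
A = np.linalg.qr(rng.standard_normal((d, d)))[0]
s = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, d))
x = s @ A.T

# Fourier reweighting: w = exp(i u.x) for a random frequency u (kept small
# so the characteristic function stays away from zero), then the complex
# reweighted covariance -- the Hessian of log E[exp(i u.x)] up to sign.
u = 0.5 * rng.standard_normal(d)
w = np.exp(1j * x @ u)
m0 = w.mean()
m1 = (x * w[:, None]).mean(axis=0) / m0
M2 = (x[:, :, None] * x[:, None, :] * w[:, None, None]).mean(axis=0) / m0
H = M2 - np.outer(m1, m1)

# Independence gives H = A D A^T with D diagonal, so for orthogonal A the
# eigenvectors of Re(H) recover the columns of A up to sign and permutation.
vecs = np.linalg.eigh(H.real)[1]
overlap = np.abs(vecs.T @ A)   # should be close to a permutation matrix
print(np.round(overlap, 2))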
Wednesday, August 21, 2013 - 13:00 , Location: ISyE Executive classroom , Daniel Dadush , Courant Institute, NYU , Organizer:
In 2011, Rothvoß showed that there exists a 0/1 polytope such that any higher-dimensional polytope projecting to it must have an exponential number of facets, i.e., its linear extension complexity is exponential. The question of whether there exists a 0/1 polytope having high PSD extension complexity was left open, i.e., is there a 0/1 polytope such that any spectrahedron projecting to it must be the intersection of an exponential-sized semidefinite cone and an affine space? We answer this question in the affirmative, using a new technique to rescale semidefinite factorizations.
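As a small illustration of the objects in play (my own toy example, not the construction from the paper): a semidefinite factorization of a slack matrix, and a rescaling operation that preserves it.

import numpy as np

rng = np.random.default_rng(1)

# Slack matrix of the triangle conv{(0,0),(1,0),(0,1)} with facets
# x >= 0, y >= 0, x + y <= 1: S[i, j] = slack of vertex j in facet i.
S = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

# A size-3 PSD factorization S[i, j] = Tr(U[i] @ V[j]): diagonal (hence
# PSD) factors always work, with size equal to the nonnegative rank.
U = [np.diag(S[i]) for i in range(3)]
V = [np.diag(e) for e in np.eye(3)]
assert np.allclose([[np.trace(Ui @ Vj) for Vj in V] for Ui in U], S)

# Rescaling by any invertible R keeps every factor PSD (congruence) and
# preserves each inner product: Tr(R U R^T . R^-T V R^-1) = Tr(U V).
R = rng.standard_normal((3, 3))
Ri = np.linalg.inv(R)
U2 = [R @ Ui @ R.T for Ui in U]
V2 = [Ri.T @ Vj @ Ri for Vj in V]
assert np.allclose([[np.trace(Ui @ Vj) for Vj in V2] for Ui in U2], S)
print("rescaled factorization still represents S")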
Friday, April 26, 2013 - 13:05 , Location: Skiles 005 , Santanu Dey , ISyE, Georgia Tech , Organizer:
This is a review talk on an infinite-dimensional relaxation of mixed integer programs (MIPs) that was developed by Gomory and Johnson. We will discuss the relationship between cutting planes for the original MIP and for its infinite-dimensional relaxation. Time permitting, various structural results about the infinite-dimensional problem and some open problems will be presented.
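To make the object concrete: the best-known valid function for the one-row infinite relaxation is the Gomory mixed-integer function; below is my own quick numeric check of its defining properties, not material from the talk.

# Gomory mixed-integer cut function for the one-row infinite group
# relaxation with fractional right-hand side f: piecewise linear,
# subadditive, with pi(0) = 0 and the symmetry pi(u) + pi(f - u) = 1.
f = 0.35

def pi(u):
    u %= 1.0
    return u / f if u <= f else (1.0 - u) / (1.0 - f)

eps = 1e-9
grid = [i / 500 for i in range(500)]
assert pi(0.0) == 0.0
assert all(pi(u) + pi(v) >= pi(u + v) - eps for u in grid for v in grid)  # subadditive
assert all(abs(pi(u) + pi((f - u) % 1.0) - 1.0) < eps for u in grid)      # symmetric
print("pi is a minimal valid function for f =", f)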
