### Tuesday, November 10, 2015

#### Optimal Estimation of Low Rank Density Matrices

Tue, 11/10/2015 - 3:00pm, Skiles 005

Dong Xia, Georgia Inst. of Technology, School of Mathematics

Organizer: Karim Lounici

Density matrices are positive semi-definite Hermitian matrices of unit trace that describe the state of a quantum system. We develop minimax lower bounds on the error rates of estimating low rank density matrices in trace regression models used in quantum state tomography (in particular, in the case of Pauli measurements), with explicit dependence of the bounds on the rank and other complexity parameters. Such bounds are established for several statistically relevant distances, including quantum versions of the Kullback-Leibler divergence (relative entropy distance) and of the Hellinger distance (the so-called Bures distance), as well as Schatten p-norm distances. Sharp upper bounds and oracle inequalities for the least squares estimator with von Neumann entropy penalization are obtained, showing that the minimax lower bounds are attained (up to logarithmic factors) for these distances.

Joint work with Vladimir Koltchinskii.
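For reference, the three families of distances mentioned in the abstract have standard explicit forms. For density matrices $\rho$ and $\sigma$ (with full support where logarithms are taken):

```latex
% quantum relative entropy (quantum Kullback-Leibler divergence)
K(\rho \,\|\, \sigma) = \operatorname{Tr}\bigl[\rho(\log \rho - \log \sigma)\bigr],
% Bures distance, a quantum analogue of the Hellinger distance
d_B^2(\rho, \sigma) = 2\Bigl(1 - \operatorname{Tr}\bigl[(\sigma^{1/2}\rho\,\sigma^{1/2})^{1/2}\bigr]\Bigr),
% Schatten p-norm distance
\|\rho - \sigma\|_p = \bigl(\operatorname{Tr}|\rho - \sigma|^{\,p}\bigr)^{1/p}.
```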

### Tuesday, October 27, 2015

#### Generalized Dantzig Selector: Application to the k-support norm

Tue, 10/27/2015 - 3:00pm, Skiles 249

Changong Li, Georgia Inst. of Technology, School of Mathematics

Organizer: Karim Lounici

We propose a Generalized Dantzig Selector (GDS) for linear models, in which any norm encoding the parameter structure can be leveraged for estimation. We investigate both computational and statistical aspects of the GDS. Based on the conjugate proximal operator, we design a flexible inexact ADMM framework for solving the GDS, and we establish non-asymptotic high-probability bounds on the estimation error, which rely on the Gaussian widths of the unit norm ball and of a suitable set encompassing the estimation error. Further, we consider a non-trivial example of the GDS using the k-support norm. We derive an efficient method to compute the proximal operator for the k-support norm, since existing methods are inapplicable in this setting. For the statistical analysis, we provide upper bounds for the Gaussian widths needed in the GDS analysis, yielding the first statistical recovery guarantee for estimation with the k-support norm. The experimental results confirm our theoretical analysis.

A review of a recent paper by Chatterjee et al. (arXiv:1406.5291).
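In rough notation (a plausible reconstruction, not quoted from the paper): the GDS minimizes an arbitrary structure-inducing norm $R(\cdot)$ subject to a dual-norm constraint on the correlation of the residual, and the k-support norm is defined by an infimal decomposition into k-sparse pieces:

```latex
% Generalized Dantzig Selector with norm R and its dual norm R^*
\hat{\theta} = \operatorname*{argmin}_{\theta} \; R(\theta)
  \quad \text{s.t.} \quad R^{*}\!\bigl(X^{\top}(y - X\theta)\bigr) \le \lambda_n,
% k-support norm: infimum over decompositions into pieces supported on
% index sets I of size at most k
\|\theta\|_{k}^{sp} = \min \Bigl\{ \textstyle\sum_{I} \|v_I\|_2 \;:\;
  \operatorname{supp}(v_I) \subseteq I,\; |I| \le k,\; \textstyle\sum_{I} v_I = \theta \Bigr\}.
```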

### Tuesday, October 20, 2015

#### Perturbation of linear forms of singular vectors under Gaussian noise

Tue, 10/20/2015 - 3:00pm, Skiles 005

Dong Xia, Georgia Inst. of Technology

Organizer: Karim Lounici

Let A be an m x n matrix with singular value decomposition A = UDV', where the columns of U are the left singular vectors and the columns of V are the right singular vectors of A. Suppose X is an m x n noise matrix whose entries are i.i.d. Gaussian random variables, and consider A' = A + X. Let u_k be the k-th left singular vector of A and u'_k be its counterpart for A'. We develop sharp upper bounds on the concentration of linear forms <u'_k, x> for any vector x. In particular, we obtain component-wise perturbation bounds for u'_k. Similar results can be obtained for the right singular vectors of A'. The talk is based on joint work with Vladimir Koltchinskii.
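The phenomenon is easy to observe numerically. A minimal sketch (an illustration only, not the paper's method; the sizes and noise level are hypothetical): generate a low rank A, add Gaussian noise, and compare the linear form <u'_1, x> with <u_1, x> for a coordinate vector x.

```python
import numpy as np

rng = np.random.default_rng(42)
m, n, r = 50, 40, 3

# a rank-3 matrix A = U D V' with well-separated singular values
U, _ = np.linalg.qr(rng.standard_normal((m, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
A = U @ np.diag([30.0, 20.0, 10.0]) @ V.T

# A' = A + X with i.i.d. Gaussian noise of standard deviation 0.05
Ap = A + 0.05 * rng.standard_normal((m, n))

u1 = np.linalg.svd(A)[0][:, 0]    # top left singular vector of A
u1p = np.linalg.svd(Ap)[0][:, 0]  # its counterpart for A'
u1p *= np.sign(u1 @ u1p)          # resolve the sign ambiguity

# the linear form <u'_1, x>; with x = e_1 this is the first component,
# so |<u'_1, e_1> - <u_1, e_1>| is a component-wise perturbation
x = np.zeros(m)
x[0] = 1.0
print(abs(u1p @ x - u1 @ x))
```

Since the noise level is small relative to the singular value gap, the perturbed singular vector stays close to the original in every component.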

### Tuesday, October 6, 2015

#### Minimax and adaptive estimation of the Wigner function in QHT

Tue, 10/06/2015 - 3:05pm, Skiles 269

Katia Meziani, Université Paris-Dauphine

Organizer: Karim Lounici

### Tuesday, September 23, 2014

#### The Gaussian Radon Transform for Infinite-Dimensional Banach Spaces

Tue, 09/23/2014 - 3:00pm, Skiles 170

Irina Holmes, School of Mathematics, Georgia Tech

Organizer: Karim Lounici

In this talk we construct an infinite-dimensional, stochastic version of the Radon transform. We work within the framework of abstract Wiener spaces, introduced by Leonard Gross. We present some basic properties of this transform, as well as compute some specific examples on the classical Wiener space.

### Tuesday, September 9, 2014

#### Variable Selection Consistency of Linear Programming Discriminant Estimator

Tue, 09/09/2014 - 3:00pm, Skiles 005

Dong Xia, School of Mathematics, Georgia Tech

Organizer: Karim Lounici

The linear programming discriminant (LPD) estimator is used in sparse linear discriminant analysis for high-dimensional classification problems. In this talk we will give a sufficient condition for the variable selection property of the LPD estimator; our result provides an optimal bound on the required sample size $n$ and on the magnitude of the components of the Bayes direction.

### Tuesday, October 23, 2012

#### Rademacher Averages and Phase Transitions in Glivenko–Cantelli Classes

Tue, 10/23/2012 - 3:05pm, Skiles 005

Krishnakumar Balasubramanian, Georgia Institute of Technology

Organizer: Karim Lounici

I will be presenting the paper by S. Mendelson titled 'Rademacher Averages and Phase Transitions in Glivenko–Cantelli Classes'. The fat-shattering dimension and its use in characterizing GC classes will be introduced. Then a new quantity based on the growth rate of the Rademacher averages will be introduced; this parameter enables one to provide improved complexity estimates for the agnostic learning problem with respect to any norm. A phase transition phenomenon that appears in the sample complexity estimates, the covering numbers estimates, and the growth rate of the Rademacher averages will be discussed. Further, estimates on (i) the covering numbers of a class when considered as a subset of various spaces and (ii) the fat-shattering dimension of the convex hull of a given class will be discussed.
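Empirical Rademacher averages are easy to approximate by Monte Carlo, which makes the central quantity of the talk concrete. A small sketch (a finite function class represented by its value matrix on the sample; all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 50
# a finite function class evaluated on an n-point sample: F[i, j] = f_j(x_i),
# with values in [-1, 1]
F = rng.uniform(-1.0, 1.0, size=(n, m))

def empirical_rademacher(F, trials=2000):
    """Monte Carlo estimate of E_sigma sup_f (1/n) sum_i sigma_i f(x_i)."""
    n = F.shape[0]
    vals = []
    for _ in range(trials):
        sigma = rng.choice([-1.0, 1.0], size=n)  # i.i.d. Rademacher signs
        vals.append(np.max(sigma @ F) / n)       # sup over the finite class
    return float(np.mean(vals))

R = empirical_rademacher(F)
print(R)
```

The growth of this quantity as n increases is what drives the complexity estimates and the phase transition discussed in the paper.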

### Tuesday, October 2, 2012

#### Statistical Algorithms and a Lower Bound for Detecting a Planted Clique

Tue, 10/02/2012 - 3:05pm, Skiles 005

Santosh Vempala, Georgia Institute of Technology

Organizer: Karim Lounici

We present a framework for proving lower bounds on computational problems over distributions, including optimization and unsupervised learning. The framework is based on defining a restricted class of algorithms, called statistical algorithms, that instead of directly accessing samples from the input distribution can only obtain an estimate of the expectation of any given function on the input distribution. Our definition captures many natural algorithms used in theory and practice, e.g., moment-based methods, local search, linear programming, MCMC and simulated annealing. Our techniques are inspired by the statistical query model of Kearns from learning theory, which addresses the complexity of PAC learning. For specific well-known problems over distributions, we obtain lower bounds on the complexity of any statistical algorithm. These include an exponential lower bound for moment maximization and a nearly optimal lower bound for detecting planted clique distributions when the planted clique has size n^{1/2-\delta} for any constant \delta > 0. Variants of the latter problem have been assumed hard in order to prove hardness of other problems and for cryptographic applications. Our lower bounds provide concrete evidence of hardness. This is joint work with V. Feldman, E. Grigorescu, L. Reyzin and Y. Xiao.
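The access model is easy to mock up. A toy sketch (class and method names are illustrative, not from the paper): the algorithm interacts with the data only through an oracle that returns expectations of bounded functions up to a tolerance tau, never seeing individual samples.

```python
import random

class StatOracle:
    """A toy STAT oracle: answers expectation queries about a hidden
    sample to within tolerance tau; the algorithm never sees the
    individual samples."""

    def __init__(self, samples, tau):
        self._samples = samples  # hidden from the "algorithm"
        self.tau = tau

    def query(self, h):
        # h maps a sample point into [0, 1]; any answer within tau of the
        # true expectation is a valid response, here we snap to a tau-grid
        est = sum(h(x) for x in self._samples) / len(self._samples)
        return round(est / self.tau) * self.tau

random.seed(0)
hidden = [random.gauss(0.3, 1.0) for _ in range(10000)]
oracle = StatOracle(hidden, tau=0.05)

# estimate P(x > 0) using a single statistical query
p_pos = oracle.query(lambda x: 1.0 if x > 0 else 0.0)
print(p_pos)
```

Moment-based methods, local search, and the other algorithms named above can all be simulated through queries of this form, which is why lower bounds against statistical algorithms cover them.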

### Tuesday, September 25, 2012

#### Uniform entropy numbers for VC classes

Tue, 09/25/2012 - 3:05pm, Skiles 005

Nishant Mehta, Georgia Institute of Technology

Organizer: Karim Lounici

I'll be presenting bounds on the uniform entropy numbers for VC classes of sets, VC classes of functions, and symmetric convex hulls of VC classes of functions. The material is in Section 2.6 of Van der Vaart and Wellner.

### Tuesday, September 4, 2012

#### Generic Chaining

Tue, 09/04/2012 - 3:05pm, Skiles 005

William Mantzel, School of Electrical and Computer Engineering, Georgia Tech

Organizer: Karim Lounici

Recap of generic chaining from last time and further discussion of it. Then the lower Dudley bound (Theorem 2.1.1), the Bernoulli upper bound (Theorem 4.1.2), and the statement of the Bernoulli conjecture (lower bound) will be covered from The Generic Chaining book.

### Friday, July 6, 2012

#### A geometric analysis of subspace clustering with outliers

Fri, 07/06/2012 - 3:05pm, Skiles 005

Mahdi Soltanolkotabi, Stanford University

Organizer: Karim Lounici

One of the most fundamental steps in data analysis and dimensionality reduction consists of approximating a given dataset by a single low-dimensional subspace, which is classically achieved via Principal Component Analysis (PCA). However, in many applications, the data often lie near a union of low-dimensional subspaces, reflecting the multiple categories or classes a set of observations may belong to. In this talk we discuss the problem of clustering a collection of unlabeled data points assumed to lie near a union of lower dimensional planes. Simply stated, the task is to assign each data point to a cluster so as to recover all the hidden subspaces. As is common in computer vision or unsupervised learning applications, we do not know in advance how many subspaces there are, nor do we have any information about their dimensions. We present a novel geometric analysis of an algorithm named sparse subspace clustering (SSC), which significantly broadens the range of problems where it is provably effective. For instance, we show that SSC can recover multiple subspaces, each of dimension comparable to the ambient dimension. We also show that SSC can correctly cluster data points even when the subspaces of interest intersect. Further, we develop an extension of SSC that succeeds when the data set is corrupted with possibly overwhelmingly many outliers. Underlying our analysis are clear geometric insights, which may bear on other sparse recovery problems. We will also demonstrate the effectiveness of these methods by various numerical studies.
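The first step of SSC can be made concrete in a few lines. A minimal illustration (toy data and an ISTA lasso solver written from scratch, not the authors' code): each point is expressed as a sparse combination of the other points, and when the subspaces are well separated the nonzero coefficients select only points from the same subspace.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(D, x, lam, iters=500):
    """Minimize 0.5*||x - D c||^2 + lam*||c||_1 by ISTA."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant
    c = np.zeros(D.shape[1])
    for _ in range(iters):
        c = soft(c - step * (D.T @ (D @ c - x)), step * lam)
    return c

rng = np.random.default_rng(0)
# two orthogonal one-dimensional subspaces of R^3: span(e1) and span(e2)
S1 = np.outer([1.0, 0.0, 0.0], rng.uniform(1, 2, 6))  # 6 points on span(e1)
S2 = np.outer([0.0, 1.0, 0.0], rng.uniform(1, 2, 6))  # 6 points on span(e2)
X = np.hstack([S1, S2])                               # 3 x 12 data matrix

# self-expressiveness: write point 0 as a sparse combination of the others
c = lasso_ista(X[:, 1:], X[:, 0], lam=0.01)
same = np.abs(c[:5]).sum()   # coefficients on points from the same subspace
other = np.abs(c[5:]).sum()  # coefficients on the other subspace
print(same, other)
```

In the full algorithm, the matrix of such coefficients is symmetrized into an affinity graph and spectral clustering recovers the subspaces; the geometric analysis in the talk concerns when this subspace-selection property holds.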

### Tuesday, February 21, 2012

#### Minimax Rates of Estimation for Sparse PCA in High Dimensions

Tue, 02/21/2012 - 4:00pm, Skiles 006

Karim Lounici, Georgia Institute of Technology, School of Mathematics

Organizer: Karim Lounici

This presentation is based on the papers by D. Paul and I. Johnstone (2007) and V.Q. Vu and J. Lei (2012). Here is the abstract of the second paper. We study sparse principal components analysis in the high-dimensional setting, where $p$ (the number of variables) can be much larger than $n$ (the number of observations). We prove optimal, non-asymptotic lower and upper bounds on the minimax estimation error for the leading eigenvector when it belongs to an $l_q$ ball for $q\in [0,1]$. Our bounds are sharp in $p$ and $n$ for all $q\in[0,1]$ over a wide class of distributions. The upper bound is obtained by analyzing the performance of $l_q$-constrained PCA. In particular, our results provide convergence rates for $l_1$-constrained PCA.

### Tuesday, February 14, 2012

#### Estimation of Low Rank Kernels on Graphs

Tue, 02/14/2012 - 4:00pm, Skiles 006

Vladimir Koltchinskii, Georgia Institute of Technology, School of Mathematics

Organizer: Karim Lounici

Let (V, E) be a graph with vertex set V and edge set E. Let (X, X', Y) \in V × V × {-1, 1} be a random triple, where X, X' are independent uniformly distributed vertices and Y is a label indicating whether X, X' are "similar" or not. Our goal is to estimate the regression function S_*(u, v) = \mathbb{E}(Y|X = u, X' = v), u, v \in V, based on n i.i.d. copies (X_1, X'_1, Y_1), ..., (X_n, X'_n, Y_n) of (X, X', Y). We are interested in this problem in the case when S_*: V × V \mapsto [-1, 1] is a symmetric low rank kernel (or can be well approximated by low rank kernels). In addition, assume that S_* is "smooth" on the graph. We study estimators based on a modified least squares method with complexity penalization involving both the nuclear norm and Sobolev-type norms of symmetric kernels on the graph (defined in terms of its Laplacian). We prove error bounds for such estimators that improve the existing bounds in low rank matrix recovery in the cases when the target kernel is not only low rank but also sufficiently smooth. The talk is based in part on a joint project with Pedro Rangel.
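As a hedged sketch of the estimator described above (the exact penalty and its tuning are in the paper, this is only a plausible form): with $L$ the graph Laplacian and $s$ a smoothness order, a doubly penalized least squares estimator could be written as

```latex
\hat{S} = \operatorname*{argmin}_{S}\;
  \frac{1}{n}\sum_{j=1}^{n}\bigl(Y_j - S(X_j, X'_j)\bigr)^2
  \;+\; \varepsilon \,\|S\|_{1}
  \;+\; \tau \,\langle L^{s} S, S \rangle,
```

where $\|S\|_1$ is the nuclear norm of $S$ viewed as a symmetric matrix on $V$ and $\langle L^{s} S, S\rangle$ is a Sobolev-type norm measuring smoothness with respect to the graph.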

### Tuesday, February 7, 2012

#### Estimation of Low Rank Kernels on Graphs -- Rescheduled

Tue, 02/07/2012 - 4:00pm, Skiles 006

Vladimir Koltchinskii, School of Mathematics, Georgia Tech

Organizer: Karim Lounici

Vladimir Koltchinskii's talk has been rescheduled for next Tuesday, February 14

### Tuesday, November 15, 2011

#### Randomized Approximation Schemes for Cuts and Flows in Capacitated Graphs

Tue, 11/15/2011 - 4:00pm, Skiles 006

Yingyu Liang, School of Computer Science, Georgia Tech

Organizer: Karim Lounici

We will talk about how to approximate an arbitrary graph by a sparse graph with respect to cuts and flows, using random sampling techniques. More specifically, we will describe a near-linear-time randomized combinatorial construction that transforms any graph on n vertices into an O(n log n)-edge graph on the same vertices whose cuts have approximately the same values as the original graph's. The new graph can be used to accelerate cut and flow algorithms, leading to approximate solutions on the original graph. The construction algorithms for the sparse graph are based on a general theorem analyzing the concentration of cut values near their expectations in random graphs.
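The sampling idea behind the construction can be illustrated in a few lines. A hedged sketch, not the actual construction (which samples edges non-uniformly according to connectivity): even uniform sampling with reweighting preserves every cut value in expectation, which is the starting point of the concentration analysis.

```python
import random

random.seed(0)
n = 60
# the complete graph K_60: every pair of vertices is a unit-weight edge
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]

# keep each edge independently with probability p and reweight the kept
# edges by 1/p, so that every cut value is preserved in expectation
p = 0.3
sparse = [(i, j, 1.0 / p) for (i, j) in edges if random.random() < p]

S = set(range(20))  # the cut separating vertices 0..19 from the rest
true_cut = sum(1 for (i, j) in edges if (i in S) != (j in S))
approx_cut = sum(w for (i, j, w) in sparse if (i in S) != (j in S))
print(true_cut, approx_cut)
```

The point of the general theorem discussed in the talk is that, with the right sampling probabilities, all exponentially many cuts are close to their expectations simultaneously, not just one fixed cut as here.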