
Series: Stochastics Seminar

Random k-SAT is a distribution over Boolean formulas studied widely in both statistical physics and theoretical computer science for its intriguing behavior at its phase transition. I will present results on the satisfiability threshold in a geometric model of random k-SAT: labeled Boolean literals are placed uniformly at random in a d-dimensional cube, and for each set of k literals contained in a ball of radius r, a k-clause is added to the random formula. Unlike standard random k-SAT, this model exhibits dependence between the clauses. For all k we show that the satisfiability threshold is sharp, and for k=2 we also locate the threshold. I will also discuss connections between this model, the random geometric graph, and other probabilistic models. This is based on joint work with Milan Bradonjic.
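As a concrete illustration, here is a toy sketch of the k = 2 case of this geometric model (for k = 2, a pair of points lies in some ball of radius r exactly when their distance is at most 2r). All function names are ours, and the brute-force satisfiability check is only meant for small instances:

```python
import itertools
import random

def geometric_2sat(n_vars, r, d=2, seed=0):
    """Sample the geometric random 2-SAT model sketched in the abstract:
    each literal (x_i and its negation) is a uniform point in [0,1]^d,
    and every pair of literals within distance 2r (i.e. contained in
    some ball of radius r) contributes a 2-clause."""
    rng = random.Random(seed)
    # literal (i, s): variable i with sign s (True = positive)
    pts = {(i, s): tuple(rng.random() for _ in range(d))
           for i in range(n_vars) for s in (True, False)}

    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    return [(u, v) for u, v in itertools.combinations(pts, 2)
            if dist2(pts[u], pts[v]) <= (2 * r) ** 2]

def satisfiable(n_vars, clauses):
    """Brute-force satisfiability check (fine for small n_vars)."""
    for bits in itertools.product((True, False), repeat=n_vars):
        if all(any(bits[i] == s for (i, s) in cl) for cl in clauses):
            return True
    return False

clauses = geometric_2sat(4, 0.2)
sat = satisfiable(4, clauses)   # is this particular sample satisfiable?
```

Note the dependence between clauses that the abstract mentions: a literal placed in a dense region participates in many clauses at once, unlike in standard random k-SAT.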

Series: Stochastics Seminar

The problem of finding large average submatrices of a real-valued matrix arises in the exploratory analysis of data from disciplines as diverse as genomics and the social sciences. Motivated in part by previous work on this applied problem, this talk will present several new theoretical results concerning large average submatrices of an n x n Gaussian random matrix. We will begin by considering the average value and joint distribution of the k x k submatrix having the largest average value (the global maximum). We then turn our attention to submatrices with dominant row and column sums, which arise as the local maxima of a practical iterative search procedure for large average submatrices. I will present a result characterizing the value and joint distribution of a local maximum, and show that a typical local maximum has an average value within a constant factor of the global maximum. In the last part of the talk I will describe several results concerning the *number* L_n(k) of k x k local maxima, including the asymptotic behavior of its mean and variance for fixed k and increasing n, and a central limit theorem for L_n(k) that is based on Stein's method for normal approximation.
Joint work with Shankar Bhamidi (UNC) and Partha S. Dey (UIUC).
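The iterative search procedure mentioned above can be sketched roughly as follows (a toy version with names of our own choosing; the procedure analyzed in the talk may differ in details): starting from random columns, alternate between taking the k rows with the largest sums and the k columns with the largest sums, so that any fixed point is a k x k submatrix with dominant row and column sums.

```python
import numpy as np

def large_average_submatrix(X, k, seed=0, max_iter=100):
    """One run of an alternating search for a large average k x k
    submatrix.  Each step weakly increases the submatrix sum, so the
    iteration stops at a local maximum: a submatrix whose row and
    column sums dominate all alternatives."""
    rng = np.random.default_rng(seed)
    cols = rng.choice(X.shape[1], size=k, replace=False)
    rows = np.empty(0, dtype=int)
    for _ in range(max_iter):
        # k rows with largest sums over the current columns ...
        new_rows = np.argsort(X[:, cols].sum(axis=1))[-k:]
        # ... then k columns with largest sums over those rows.
        new_cols = np.argsort(X[new_rows, :].sum(axis=0))[-k:]
        if set(new_rows) == set(rows) and set(new_cols) == set(cols):
            break
        rows, cols = new_rows, new_cols
    return np.sort(rows), np.sort(cols), X[np.ix_(rows, cols)].mean()
```

In practice one restarts from many random initializations; the talk's results describe the typical value of the local maxima that such restarts find, relative to the global maximum.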

Series: Stochastics Seminar

We consider optimal alignments of i.i.d. random sequences of length n. For such alignments we count how often each letter gets aligned with each other letter. This gives us, for every optimal alignment, the frequencies of the aligned letter pairs. These frequencies, expressed as relative frequencies and put in vector form, are called the "empirical distribution of letter pairs along an optimal alignment". It was previously established that if the scoring function is chosen at random, then the empirical distribution of letter pairs along an optimal alignment converges. We show an upper bound for its rate of convergence, which is larger than the rate for the alignment score. The rate for the alignment score can be obtained directly by Azuma-Hoeffding, but not so for the empirical distribution of the aligned letter pairs seen along an optimal alignment: when changing one letter in one of the sequences, the optimal alignment score changes by at most a fixed quantity, but the empirical distribution of the aligned letter pairs could potentially change entirely.
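The bounded-difference property invoked above is easy to check in code for a concrete alignment score, the longest common subsequence (our choice of example; the talk's scoring functions are more general): changing one letter moves the score by at most 1, which is exactly the hypothesis of the Azuma-Hoeffding inequality.

```python
import random

def lcs_score(x, y):
    """Length of the longest common subsequence of x and y, a basic
    example of an optimal-alignment score (unit score for matches,
    no gap penalty), computed by dynamic programming."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j],
                                   dp[i][j] + (x[i] == y[j]))
    return dp[m][n]

# Bounded differences: flipping any single letter of x changes the
# score by at most 1.  No such stability holds for the empirical
# distribution of aligned letter pairs, which is the difficulty the
# abstract points to.
rng = random.Random(1)
x = [rng.choice("ab") for _ in range(50)]
y = [rng.choice("ab") for _ in range(50)]
base = lcs_score(x, y)
for i in range(len(x)):
    x2 = x[:]
    x2[i] = "a" if x2[i] == "b" else "b"
    assert abs(lcs_score(x2, y) - base) <= 1
```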

Series: Stochastics Seminar

We analyze active learning algorithms, which only receive the classifications of examples when they ask for them, and traditional passive (PAC) learning algorithms, which receive classifications for all training examples, under log-concave and nearly log-concave distributions. By using an aggressive localization argument, we prove that active learning provides an exponential improvement over passive learning when learning homogeneous linear separators in these settings. Building on this, we then provide a computationally efficient algorithm with optimal sample complexity for passive learning in such settings. This provides the first bound for a polynomial-time algorithm that is tight for an interesting infinite class of hypothesis functions under a general class of data distributions, and also characterizes the distribution-specific sample complexity for each distribution in the class. We also illustrate the power of localization for efficiently learning linear separators in two challenging noise models (malicious noise and the agnostic setting), where we provide efficient algorithms with significantly better noise tolerance than previously known.
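A toy sketch of the margin-based idea (entirely our own simplification, not the algorithm from the talk): labels are requested only for points falling in a shrinking band around the current hypothesis, so far fewer labels are consumed than a passive learner would use.

```python
import numpy as np

def margin_based_active_learn(oracle, d, rounds=10, batch=200, seed=0):
    """Toy active learner for a homogeneous linear separator.
    `oracle(x)` returns a label in {-1, +1} and is called ONLY on the
    points we choose to query.  Each round draws an unlabeled pool,
    queries only points inside a shrinking margin band around the
    current hypothesis, and refits by averaging label-weighted queried
    points.  (The real algorithm uses a careful localization schedule
    and loss minimization; this only conveys the idea.)"""
    rng = np.random.default_rng(seed)
    w = np.ones(d) / np.sqrt(d)          # initial unit-norm guess
    queries = 0
    for t in range(rounds):
        pool = rng.standard_normal((batch, d))
        margin = 1.0 / (1 + t)           # shrinking localization band
        near = pool[np.abs(pool @ w) <= margin]
        if len(near) == 0:
            continue
        labels = np.array([oracle(x) for x in near])
        queries += len(near)
        w_new = (labels[:, None] * near).mean(axis=0)
        nrm = np.linalg.norm(w_new)
        if nrm > 0:
            w = w_new / nrm
    return w, queries

calls = []
def sign_oracle(x):                      # labels from the separator e_1
    calls.append(1)
    return 1 if x[0] >= 0 else -1

w_hat, n_queries = margin_based_active_learn(sign_oracle, d=2)
```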

Series: Stochastics Seminar

We consider two approaches to address angles between random subspaces: classical random matrix theory and free probability. In the former, one constructs random subspaces from vectors with independent random entries. In the latter, one has historically started with the uniform distribution on subspaces of appropriate dimension. We point out when these two approaches coincide and present new results for both. In particular, we present the first universality result for the random matrix theory approach and the first result beyond the uniform distribution for the free probability approach. We further show that, unexpectedly, discrete uncertainty principles play a natural role in this setting. Parts of this work are joint with L. Erdos and G. Anderson.
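The principal angles in question can be computed from the singular values of the product of two orthonormal bases. The snippet below also illustrates the coincidence mentioned above: the span of a matrix with i.i.d. Gaussian entries is rotation-invariant, hence uniformly (Haar) distributed, so for Gaussian entries the two approaches describe the same subspaces.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the column spans of A and B, computed
    from the singular values of Q_A^T Q_B (which all lie in [0, 1])."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

rng = np.random.default_rng(0)
# Random-matrix-theory construction: subspaces spanned by columns with
# i.i.d. Gaussian entries.  By rotation invariance these spans are
# uniformly distributed on the Grassmannian.
A = rng.standard_normal((100, 5))
B = rng.standard_normal((100, 5))
theta = principal_angles(A, B)
```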

Series: Stochastics Seminar

The classical Freidlin--Wentzell theory on small random perturbations of dynamical systems operates mainly at the level of large deviation estimates. In many cases it would be interesting and useful to supplement those with central limit theorem type results. We are able to describe a class of situations where a Gaussian scaling limit for the exit point of conditioned diffusions holds. Our main tools are Doob's h-transform and new gradient estimates for Hamilton--Jacobi equations. Joint work with Andrzej Swiech.

Series: Stochastics Seminar

We show that on any Riemannian manifold with non-negative Ricci curvature we can construct a coupling of two Brownian motions which stay at a fixed distance for all times. We prove a more general version of this for the case of Ricci curvature bounded from below uniformly by a constant k. In the terminology of Burdzy, Kendall and others, a shy coupling is a coupling in which, with positive probability, the Brownian motions do not couple in finite time. What we construct here is a strong version of shy couplings on Riemannian manifolds. On the other hand, this can be put in contrast with results of von Renesse and K. T. Sturm which characterize the lower bound on the Ricci curvature in terms of couplings of Brownian motions; our construction optimizes this choice in a way which will be explained. This is joint work with Mihai N. Pascu.
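In the flat case (zero Ricci curvature) the synchronous coupling already has the fixed-distance property, which the following toy simulation makes explicit; the content of the talk is the far less trivial construction on curved manifolds.

```python
import numpy as np

def synchronous_coupling(x0, y0, steps=1000, dt=1e-3, seed=0):
    """Flat-space illustration: drive two Brownian motions by the SAME
    increments (the synchronous coupling), so their distance
    |X_t - Y_t| stays exactly |x0 - y0| for all time.  On a manifold
    with non-negative Ricci curvature the analogous fixed-distance
    coupling is what the talk constructs."""
    rng = np.random.default_rng(seed)
    x, y = np.array(x0, float), np.array(y0, float)
    dists = []
    for _ in range(steps):
        dw = np.sqrt(dt) * rng.standard_normal(x.shape)
        x = x + dw          # same noise for both motions
        y = y + dw
        dists.append(np.linalg.norm(x - y))
    return np.array(dists)

d = synchronous_coupling([0.0, 0.0], [1.0, 0.0])
```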

Series: Stochastics Seminar

The Kardar-Parisi-Zhang (KPZ) equation is a non-linear stochastic partial differential equation proposed as the scaling limit for random growth models in physics. This equation is, in standard terms, ill-posed, and the notion of solution has attracted considerable attention in recent years. The purpose of this talk is twofold: on one side, an introduction to the KPZ equation and the so-called KPZ universality classes is given. On the other side, we give recent results that generalize the notion of viscosity solutions from deterministic PDE to the stochastic case and apply these results to the KPZ equation. The main technical tool for this program to go through is a non-linear version of the Feynman-Kac formula that uses Doubly Backward Stochastic Differential Equations (stochastic differential equations with time flowing backwards and forwards at the same time) as the basis for the representation.
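For reference, the KPZ equation is usually written as

```latex
\partial_t h(t,x) = \nu\, \partial_x^2 h(t,x)
  + \frac{\lambda}{2}\bigl(\partial_x h(t,x)\bigr)^2 + \xi(t,x),
```

where \xi is space-time white noise. The quadratic term is the source of the ill-posedness mentioned above: h(t, \cdot) is expected to be only Brownian-rough in x, so the square of its derivative has no classical meaning.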

Series: Stochastics Seminar

Wigner stated the general hypothesis that the distribution of eigenvalue spacings of large complicated quantum systems is universal, in the sense that it depends only on the symmetry class of the physical system and not on other detailed structures. The simplest case of this hypothesis concerns large but finite dimensional matrices. Spectacular progress was made in the past two decades in proving universality for random matrices exhibiting an orthogonal, unitary or symplectic invariance. These models correspond to log-gases with respective inverse temperature 1, 2 or 4. I will report on joint work with L. Erdos and H.-T. Yau which yields universality for log-gases at arbitrary inverse temperature at the microscopic scale. A main step consists in the optimal localization of the particles, and the techniques involved include a multiscale analysis and a local logarithmic Sobolev inequality.
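The universality being discussed can be sampled numerically in the simplest invariant case. The snippet below (a standard illustration, not from the talk) draws a GUE matrix, i.e. the beta = 2 log-gas, and returns normalized bulk eigenvalue spacings; their histogram approximately follows the Wigner surmise p(s) = (32/pi^2) s^2 exp(-4 s^2 / pi), an instance of the universal microscopic statistics.

```python
import numpy as np

def gue_spacings(n=400, seed=0):
    """Sample a GUE matrix (the beta = 2 log-gas) and return the
    spacings of bulk eigenvalues, normalized to unit mean."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (A + A.conj().T) / 2             # Hermitian (GUE up to scale)
    ev = np.linalg.eigvalsh(H)           # sorted ascending
    bulk = ev[n // 4: 3 * n // 4]        # stay away from spectral edges
    s = np.diff(bulk)
    return s / s.mean()                  # unit mean spacing

spacings = gue_spacings(200)
```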

Series: Stochastics Seminar

The systematic study of model selection procedures, especially since the early nineties, has led to the design of penalties that often allow one to achieve minimax rates of convergence and adaptivity for the selected model, in the general setting of risk minimization (Koltchinskii [3], Massart [4]).

However, the proposed penalties often suffer from their dependence on unknown or unrealistic constants. As a matter of fact, under-penalization generally has disastrous effects in terms of efficiency: the model selection procedure then loses any bias-variance trade-off and so tends to select one of the biggest models in the collection.

Birgé and Massart ([2]) quite recently proposed a method that empirically adjusts the level of penalization in a linear Gaussian setting. This calibration method is called the "slope heuristics" by the authors, and is proved to be optimal in their setting. It is based on the existence of a minimal penalty, which is shown to be half the optimal one.

Arlot and Massart ([1]) then extended the slope heuristics to the more general framework of empirical risk minimization. They succeeded in proving the optimality of the method in heteroscedastic least-squares regression, a case where the ideal penalty is no longer linear in the dimension of the models, nor even a function of it. However, they restricted their analysis to histograms for technical reasons, and conjectured a wide range of applicability for the method.

We will present results that prove the validity of the slope heuristics in heteroscedastic least-squares regression for more general linear models than histograms. The models considered here are equipped with a localized orthonormal basis, among other conditions. We show that some piecewise polynomials and Haar expansions satisfy the prescribed conditions.

We will insist on the analysis when the model is fixed. In particular, we will focus on deviation bounds for the true and empirical excess risks of the estimator. Empirical process theory and concentration inequalities are central tools here, and the results at a fixed model may be of independent interest.

References

[1] S. Arlot and P. Massart. Data-driven calibration of penalties for least-squares regression. J. Mach. Learn. Res., 10:245-279 (electronic), 2009.

[2] L. Birgé and P. Massart. Minimal penalties for Gaussian model selection. Probab. Theory Related Fields, 138(1-2):33-73, 2007.

[3] V. Koltchinskii. Oracle inequalities in empirical risk minimization and sparse recovery problems, volume 2033 of Lecture Notes in Mathematics. Springer, Heidelberg, 2011. Lectures from the 38th Probability Summer School held in Saint-Flour, 2008, École d'Été de Probabilités de Saint-Flour.

[4] P. Massart. Concentration inequalities and model selection, volume 1896 of Lecture Notes in Mathematics. Springer, Berlin, 2007. Lectures from the 33rd Summer School on Probability Theory held in Saint-Flour, July 6-23, 2003. With a foreword by Jean Picard.
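A toy numerical version of the slope heuristics (our own simplified sketch, not the procedure analyzed in the talk) illustrates the role of the minimal penalty: as the penalty constant C grows, the selected model dimension jumps down sharply once C crosses the minimal level, and the heuristics then uses twice that level as the final penalty.

```python
import numpy as np

def slope_heuristic(emp_risk, dims, n, c_grid=None):
    """Toy slope-heuristics calibration: for each constant C, select
    the model minimizing emp_risk + C * d / n.  The minimal penalty
    reveals itself as the C_hat where the selected dimension drops
    most sharply from near-maximal; the final penalty is 2 * C_hat
    times d / n, reflecting 'minimal penalty = half the optimal one'."""
    if c_grid is None:
        c_grid = np.linspace(0.0, 5.0, 501)
    selected = np.array([dims[np.argmin(emp_risk + c * dims / n)]
                         for c in c_grid])
    jump = np.argmax(selected[:-1] - selected[1:])   # largest drop
    c_hat = c_grid[jump + 1]
    d_final = dims[np.argmin(emp_risk + 2 * c_hat * dims / n)]
    return c_hat, d_final

# Toy check: an empirical risk of the form 1/d - d/n (fit improves
# with dimension, overfitting gain d/n) has minimal-penalty slope 1,
# so C_hat should sit near 1 and the final model near sqrt(n).
dims = np.arange(1, 51, dtype=float)
n = 100
c_hat, d_final = slope_heuristic(1.0 / dims - dims / n, dims, n)
```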