Thursday, January 12, 2017 - 11:05 , Location: Skiles 006 , Tengyuan Liang , University of Pennsylvania , Organizer: Michael Damron
Network data analysis has wide applications in computational social science, computational biology, online social media, and data visualization. For many of these network inference questions, the brute-force (yet statistically optimal) methods involve combinatorial optimization, which is computationally prohibitive for large-scale networks. It is therefore important to understand the effect on statistical inference of restricting attention to computationally tractable methods. In this talk, we will discuss three closely related statistical models for different network inference problems. These models answer inference questions about cliques, communities, and ties, respectively. For each model, we will describe the statistical setup, propose new computationally efficient algorithms, and study the theoretical properties and numerical performance of the algorithms. Further, we will quantify computational optimality by describing the intrinsic barrier for certain classes of efficient algorithms, and investigate the computational-to-statistical gap theoretically. A key feature shared by our studies is that, as the parameters of the model change, the problems exhibit different phases of computational difficulty.
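To make the combinatorial cost concrete, here is a toy brute-force clique search (an illustration of the complexity barrier mentioned in the abstract, not code from the talk); the planted-triangle setup and all names are illustrative.

```python
import itertools
import random

def has_clique(adj, nodes):
    """True if every pair of `nodes` is joined by an edge in `adj`."""
    return all(adj[u][v] for u, v in itertools.combinations(nodes, 2))

def max_clique_brute_force(adj):
    """Exhaustive search over vertex subsets, largest first.

    This is the statistically optimal but combinatorial approach:
    the number of subsets grows as 2^n, which is what makes
    brute-force methods prohibitive on large networks."""
    n = len(adj)
    for k in range(n, 0, -1):
        for nodes in itertools.combinations(range(n), k):
            if has_clique(adj, nodes):
                return list(nodes)
    return []

# Small Erdos-Renyi-style graph with a planted triangle {0, 1, 2}.
random.seed(0)
n = 8
adj = [[0] * n for _ in range(n)]
for u in range(n):
    for v in range(u + 1, n):
        adj[u][v] = adj[v][u] = int(random.random() < 0.3)
for u, v in [(0, 1), (0, 2), (1, 2)]:
    adj[u][v] = adj[v][u] = 1

clique = max_clique_brute_force(adj)
print(len(clique))
```

Even at n = 8 the search touches up to 2^8 subsets; the computationally efficient algorithms of the talk are precisely about avoiding this enumeration.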
Thursday, December 1, 2016 - 15:05 , Location: Skiles 005 , Kolyan Ray , Leiden Univ. , Organizer: Ionel Popescu
Asymptotic equivalence between two statistical models means that they have the same asymptotic (large-sample) properties with respect to all decision problems with bounded loss. In nonparametric (infinite-dimensional) statistical models, asymptotic equivalence is useful because it allows one to derive certain results by studying simpler models. One of the key results in this area is Nussbaum's theorem, which states that nonparametric density estimation is asymptotically equivalent to a Gaussian shift model, provided that the densities are smooth enough and uniformly bounded away from zero. We will review the notion of asymptotic equivalence and existing results before presenting recent work on the extent to which the assumption of being bounded away from zero can be relaxed. We further derive the optimal (Le Cam) distance between these models, which quantifies how close they are for finite samples. As an application, we also consider Poisson intensity estimation with low count data. This is joint work with Johannes Schmidt-Hieber.
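For reference, the Le Cam deficiency and distance mentioned above can be written as follows (standard definitions; the notation is mine, not the speaker's):

```latex
% delta(E, F): how well experiment E can "simulate" F via a Markov kernel M.
\[
\delta(\mathcal{E}, \mathcal{F})
  = \inf_{M} \sup_{\theta \in \Theta}
    \| M P_\theta - Q_\theta \|_{\mathrm{TV}},
\qquad
\Delta(\mathcal{E}, \mathcal{F})
  = \max\bigl\{ \delta(\mathcal{E}, \mathcal{F}),\;
                \delta(\mathcal{F}, \mathcal{E}) \bigr\}.
\]
% Asymptotic equivalence of (E_n) and (F_n) means Delta(E_n, F_n) -> 0.
% In Nussbaum's theorem, the Gaussian model paired with density estimation
% from n i.i.d. observations with density f is the white noise model
\[
dY_t = \sqrt{f(t)}\, dt + \frac{1}{2\sqrt{n}}\, dW_t,
\qquad t \in [0, 1].
\]
```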
Wednesday, January 27, 2016 - 11:00 , Location: Skiles 006 , Jacopo De Simoi , Paris Diderot University , Organizer: Federico Bonetto
Dynamical billiards constitute a very natural class of Hamiltonian systems: in 1927 George Birkhoff conjectured that, among all billiards inside smooth planar convex domains, only billiards in ellipses are integrable. In this talk we will prove a version of this conjecture for convex domains that are sufficiently close to an ellipse of small eccentricity. We will also describe some remarkable relations with inverse spectral theory and the spectral rigidity of planar convex domains. Our techniques can in fact be fruitfully adapted to prove spectral rigidity among generic (finitely) smooth axially symmetric domains that are sufficiently close to a circle. This gives a partial answer to a question of P. Sarnak.
Thursday, January 21, 2016 - 11:05 , Location: Skiles 006 , Craig Schroeder , UCLA , Organizer: Haomin Zhou
Hybrid particle/grid numerical methods have been around for a long time, and their usage is common in some fields, from plasma physics to artist-directed fluids. I will explore the use of hybrid methods to simulate many different complex phenomena occurring all around you, from wine to shaving foam and from sand to the snow in Disney's Frozen. I will also talk about some of the practical advantages and disadvantages of hybrid methods and how one of the weaknesses that has long plagued them can now be fixed.
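A minimal sketch of the particle/grid idea in 1D (a pure PIC-style transfer with linear hat weights; this is the simplest variant, not the method from the talk, and all names are illustrative):

```python
# Scatter particle mass and momentum onto grid nodes, then interpolate
# grid velocities back to particles -- the two transfers at the heart
# of hybrid particle/grid methods.
def p2g(xp, vp, mp, nx, dx):
    """Particle-to-grid: linear (hat-function) scatter of mass/momentum."""
    mass = [0.0] * nx
    mom = [0.0] * nx
    for x, v, m in zip(xp, vp, mp):
        i = int(x / dx)           # index of the left grid node
        f = x / dx - i            # fractional offset within the cell
        for node, w in ((i, 1.0 - f), (i + 1, f)):
            if 0 <= node < nx:
                mass[node] += w * m
                mom[node] += w * m * v
    vel = [p / m if m > 0 else 0.0 for p, m in zip(mom, mass)]
    return mass, vel

def g2p(xp, vel, dx):
    """Grid-to-particle: interpolate nodal velocities back (pure PIC)."""
    out = []
    for x in xp:
        i = int(x / dx)
        f = x / dx - i
        vl = vel[i] if i < len(vel) else 0.0
        vr = vel[i + 1] if i + 1 < len(vel) else 0.0
        out.append((1.0 - f) * vl + f * vr)
    return out

xp = [0.25, 0.6]   # particle positions
vp = [1.0, -2.0]   # particle velocities
mp = [1.0, 1.0]    # particle masses
mass, vel = p2g(xp, vp, mp, nx=5, dx=0.25)
vnew = g2p(xp, vel, dx=0.25)
```

In a full simulation, forces would be applied on the grid between the two transfers; the choice of how much grid velocity to blend back into the particles is one of the long-standing trade-offs the talk alludes to.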
Tuesday, January 19, 2016 - 15:05 , Location: Skiles 005 , Yang Ning , Princeton University , Organizer: Ionel Popescu
We consider the problem of controlling measures of false scientific discovery in high-dimensional models. Towards this goal, we focus on uncertainty assessment for low-dimensional components of high-dimensional models. Specifically, we propose a novel decorrelated-likelihood-based framework to obtain valid p-values for generic penalized M-estimators. Unlike most existing inferential methods, which are tailored to individual models, our method provides a general framework for high-dimensional inference and is applicable to a wide variety of settings, including generalized linear models, graphical models, classification, and survival analysis. The proposed method provides optimal tests and confidence intervals. Extensions to general estimating equations will be discussed. Finally, we show that the p-values can be combined to control the false discovery rate in multiple hypothesis testing.
Friday, January 15, 2016 - 11:05 , Location: Skiles 006 , Daniel Sussman , Department of Statistics, Harvard University , Organizer:
The eigendecomposition of an adjacency matrix provides a way to embed a graph as points in finite dimensional Euclidean space. This embedding allows the full arsenal of statistical and machine learning methodology for multivariate Euclidean data to be deployed for graph inference. Our work analyzes this embedding, a graph version of principal component analysis, in the context of various random graph models with a focus on the impact for subsequent inference. We show that for a particular model this embedding yields a consistent estimate of its parameters and that these estimates can be used to accurately perform a variety of inference tasks including vertex clustering, vertex classification as well as estimation and hypothesis testing about the parameters.
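A minimal sketch of the embedding described above (the scaled-eigenvector form of adjacency spectral embedding); the two-block example and all names are illustrative, not taken from the talk:

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """Embed a graph in R^d: take the d largest-magnitude eigenvalues of
    the adjacency matrix and scale the corresponding eigenvectors."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

# Toy two-block graph: vertices 0-2 form one community, 3-5 the other.
rng = np.random.default_rng(0)
B = np.array([[0.9, 0.1], [0.1, 0.9]])   # block connection probabilities
labels = np.array([0, 0, 0, 1, 1, 1])
P = B[labels][:, labels]
A = (rng.random((6, 6)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                               # symmetric, no self-loops

X = adjacency_spectral_embedding(A, d=2)
print(X.shape)
```

The rows of `X` are the Euclidean points on which standard multivariate methods (clustering, classification, hypothesis tests) can then be run, as the abstract describes.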
Friday, November 20, 2015 - 11:00 , Location: Skiles 006 , Mayya Zhilova , Weierstrass Institute , Organizer: Karim Lounici
The bootstrap is one of the most powerful and common tools in statistical inference. In this talk a multiplier bootstrap procedure is considered for the construction of likelihood-based confidence sets. Theoretical results justify the bootstrap validity for small or moderate sample sizes and allow one to control the impact of the parameter dimension p: the bootstrap approximation works if p^3/n is small, where n is the sample size. The main result about bootstrap validity continues to apply even if the underlying parametric model is misspecified, under a so-called small modelling bias condition. In the case when the true model deviates significantly from the considered parametric family, the bootstrap procedure is still applicable but becomes conservative: the size of the constructed confidence sets is increased by the modelling bias. The approach is also extended to the problem of simultaneous confidence estimation. A simultaneous multiplier bootstrap procedure is justified for the case of an exponentially large number of models. Numerical experiments for misspecified regression models nicely confirm our theoretical results.
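As a toy illustration of the multiplier bootstrap idea (reweighting centred terms by i.i.d. random multipliers), here is a sketch for a confidence interval for a mean; this is a drastic simplification of the likelihood-based procedure in the talk, and all names are mine:

```python
import random

def multiplier_bootstrap_ci(x, n_boot=2000, level=0.95, seed=1):
    """Symmetric CI for the mean: each bootstrap replicate reweights the
    centred observations by i.i.d. N(0, 1) multipliers, and the CI
    half-width is the `level` quantile of the absolute deviations."""
    rng = random.Random(seed)
    n = len(x)
    xbar = sum(x) / n
    devs = []
    for _ in range(n_boot):
        w = [rng.gauss(0.0, 1.0) for _ in range(n)]
        devs.append(abs(sum(wi * (xi - xbar) for wi, xi in zip(w, x)) / n))
    devs.sort()
    q = devs[min(int(level * n_boot), n_boot - 1)]
    return xbar - q, xbar + q

random.seed(2)
data = [random.gauss(5.0, 1.0) for _ in range(200)]
lo, hi = multiplier_bootstrap_ci(data)
print(round(lo, 2), round(hi, 2))
```

The appeal of the multiplier scheme, as the abstract notes, is that its validity can be established non-asymptotically and survives model misspecification up to a modelling-bias term.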
Tuesday, November 17, 2015 - 11:05 , Location: Skiles 005 , Lutz Warnke , University of Cambridge , Organizer: Xingxing Yu
Random graphs are the basic mathematical models for large-scale disordered networks in many different fields (e.g., physics, biology, sociology). Their systematic study was pioneered by Erdős and Rényi around 1960, and one key feature of many classical models is that the edges appear independently. While this makes them amenable to a rigorous analysis, it is desirable (both mathematically and in terms of applications) to understand more complicated situations. In this talk I will discuss some of my work on so-called Achlioptas processes, which (i) are evolving random graph models with dependencies between the edges and (ii) give rise to more interesting percolation phase transition phenomena than the classical Erdős–Rényi model.
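A minimal simulation sketch of an Achlioptas process under the product rule (one common choice; the talk treats a broader class), with illustrative names throughout:

```python
import random

def find(parent, v):
    """Union-find root lookup with path halving."""
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v

def achlioptas_product_rule(n, m, seed=0):
    """Run m steps on n vertices: each step offers TWO random candidate
    edges, and the one minimizing the product of its endpoints' component
    sizes is added. This dependence between edges is what distinguishes
    the process from the classical Erdos-Renyi model.
    Returns the size of the largest component."""
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n
    for _ in range(m):
        e1 = (rng.randrange(n), rng.randrange(n))
        e2 = (rng.randrange(n), rng.randrange(n))

        def cost(e):
            a, b = find(parent, e[0]), find(parent, e[1])
            return size[a] * size[b]

        edge = e1 if cost(e1) <= cost(e2) else e2
        a, b = find(parent, edge[0]), find(parent, edge[1])
        if a != b:
            if size[a] < size[b]:
                a, b = b, a
            parent[b] = a
            size[a] += size[b]
    return max(size[find(parent, v)] for v in range(n))

n = 2000
small = achlioptas_product_rule(n, int(0.7 * n))
big = achlioptas_product_rule(n, int(1.2 * n))
print(small, big)
```

Comparing the largest component before and after roughly n steps illustrates the delayed (and sharper) percolation transition relative to the independent-edge model.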
Tuesday, March 31, 2015 - 11:05 , Location: Skiles 005 , Andrei Martinez-Finkelshtein , Universidad de Almeria, Spain , Organizer: Jeff Geronimo
Polynomials defined either by some type of orthogonality or by differential equations they satisfy are pervasive in approximation theory, random matrix theory, special functions, harmonic analysis, scientific computing, and applications. Numerical simulations show that their zeros exhibit a common feature: they align themselves along certain curves in the plane. What are these curves? In some cases we can answer this question, at least asymptotically. The answer connects fascinating mathematical objects, such as extremal problems in electrostatics, Riemann surfaces, trajectories of quadratic differentials, and algebraic functions; this list is not complete. This talk is a brief survey of some ideas related to this problem, from the breakthrough developments of the 1980s to the present, finishing with some recent results and open problems.
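A quick numerical illustration of the zero-alignment phenomenon (my example, not the speaker's): the zeros of a Legendre polynomial are real, lie in (-1, 1), and accumulate near the endpoints.

```python
import numpy as np

# Zeros of the degree-40 Legendre polynomial, computed from the
# coefficient vector [0, ..., 0, 1] in the Legendre basis.
deg = 40
zeros = np.polynomial.legendre.legroots([0] * deg + [1])
print(zeros.min(), zeros.max())
```

Here the "curve" along which the zeros align is simply the segment [-1, 1]; for non-classical orthogonality the limiting curves become the more exotic objects (quadratic differentials, algebraic functions) mentioned in the abstract.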
Tuesday, March 3, 2015 - 11:05 , Location: Skiles 006 , Albert Fathi , ENS Lyon , Organizer: Andrzej Swiech
The goal of this lecture is to explain and motivate the connection between Aubry-Mather theory (dynamical systems) and viscosity solutions of the Hamilton-Jacobi equation (PDE). This connection is the content of weak KAM theory. The talk should be accessible to the "generic" mathematician. No a priori knowledge of either of the two subjects is assumed.