Seminars and Colloquia by Series

Wednesday, January 12, 2011 - 14:00 , Location: Skiles 006 , Nguyen Hoai-Minh , Courant Institute of Mathematical Sciences , Organizer: Wilfrid Gangbo
A region of space is cloaked for a class of measurements if observers are not only unaware of its contents, but also unaware of the presence of the cloak using such measurements. One approach to cloaking is the change of variables scheme introduced by Greenleaf, Lassas, and Uhlmann for electrical impedance tomography and by Pendry, Schurig, and Smith for the Maxwell equations. They used a singular change of variables which blows up a point into the cloaked region. To avoid this singularity, various regularized schemes have been proposed. In this talk I present results related to cloaking via change of variables for the Helmholtz equation using the natural regularized scheme introduced by Kohn, Shen, Vogelius, and Weinstein, where the authors used a transformation which blows up a small ball instead of a point into the cloaked region. I will discuss the degree of invisibility for a finite range or the full range of frequencies, and the possibility of achieving perfect cloaking. At the end of my talk, I will also discuss some results related to the wave equation in 3d.
Tuesday, April 13, 2010 - 11:00 , Location: Skiles 269 , Rafael de la Llave , Department of Mathematics, University of Texas, Austin , Organizer:
Many mechanical systems have the property that some small perturbations can accumulate over time to lead to large effects, while other perturbations just average out and cancel. It is interesting in applications to find out which systems have these properties, and which perturbations average out and which ones grow. A complete answer is far from known, but it is known to be complicated and that, for example, number theory plays a role. In recent times, there has been some progress in understanding mechanisms that lead to instability. One can find landmarks that organize the long-term behavior and provide a skeleton for the dynamics. Some of these landmarks provide highways along which the perturbations can accumulate.
Thursday, February 4, 2010 - 15:00 , Location: Skiles 269 , Karim Lounici , University of Cambridge , Organizer:
We consider the statistical deconvolution problem where one observes $n$ replications from the model $Y=X+\epsilon$, where $X$ is the unobserved random signal of interest and where $\epsilon$ is an independent random error with distribution $\varphi$. Under weak assumptions on the decay of the Fourier transform of $\varphi$ we derive upper bounds for the finite-sample sup-norm risk of wavelet deconvolution density estimators $f_n$ for the density $f$ of $X$, where $f: \mathbb R \to \mathbb R$ is assumed to be bounded. We then derive lower bounds for the minimax sup-norm risk over Besov balls in this estimation problem and show that wavelet deconvolution density estimators attain these bounds. We further show that linear estimators adapt to the unknown smoothness of $f$ if the Fourier transform of $\varphi$ decays exponentially, and that a corresponding result holds true for the hard thresholding wavelet estimator if $\varphi$ decays polynomially. We also analyze the case where $f$ is a 'supersmooth'/analytic density. We finally show how our results and recent techniques from Rademacher processes can be applied to construct global nonasymptotic confidence bands for the density $f$.
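To make the observation model concrete, here is a minimal simulation sketch (my own illustration, not taken from the talk): the unobserved signal $X$ is taken standard normal and the error $\epsilon$ Laplace, an error distribution whose Fourier transform decays polynomially. Only $Y = X + \epsilon$ is observed, and the deconvolution problem is to recover the density of $X$ from the $Y$ sample.

```python
import random
import statistics

# Hypothetical instance of the deconvolution model Y = X + eps:
# X ~ N(0, 1) (the unobserved signal of interest), eps an independent
# Laplace(0, b) error, whose Fourier transform decays polynomially.
random.seed(0)
n = 50_000
b = 0.5  # Laplace scale, so Var(eps) = 2 * b**2 = 0.5

X = [random.gauss(0.0, 1.0) for _ in range(n)]
# Laplace(0, b) as a difference of two independent Exponential(1/b) draws
eps = [random.expovariate(1 / b) - random.expovariate(1 / b) for _ in range(n)]
Y = [x + e for x, e in zip(X, eps)]  # only Y is observed

# By independence, Var(Y) = Var(X) + Var(eps) = 1 + 2 * b**2 = 1.5;
# the statistical task is to undo the convolution and estimate the density of X.
var_Y = statistics.variance(Y)
```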
Thursday, January 28, 2010 - 16:00 , Location: Skiles 269 , Josephine Yu , Georgia Tech , Organizer: Matt Baker
Tropical geometry can be thought of as geometry over the tropical semiring, which is the set of real numbers together with the operations max and +. Just as ordinary linear and polynomial algebra give rise to convex geometry and algebraic geometry, tropical linear and polynomial algebra give rise to tropical convex geometry and tropical algebraic geometry. I will introduce the basic objects and problems in tropical geometry and discuss some relations with, and applications to, polyhedral geometry, computational algebra, and algebraic geometry.
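The semiring operations described above can be sketched in a few lines. In this toy snippet (my addition, not part of the talk), tropical "addition" is max and tropical "multiplication" is ordinary +, so a tropical matrix product takes a max of sums and a tropical polynomial evaluates to a max of affine functions:

```python
def trop_add(a, b):
    """Tropical addition: max."""
    return max(a, b)

def trop_mul(a, b):
    """Tropical multiplication: ordinary +."""
    return a + b

def trop_mat_mul(A, B):
    """Tropical matrix product: (A*B)[i][j] = max_k (A[i][k] + B[k][j])."""
    n, m, p = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def trop_poly(coeffs, x):
    """Tropicalization of sum_i a_i x^i: the piecewise-linear convex
    function max_i (a_i + i*x)."""
    return max(a + i * x for i, a in enumerate(coeffs))
```

For example, `trop_mat_mul([[0, 1], [2, 3]], [[0, 1], [2, 3]])` gives `[[3, 4], [5, 6]]`, and the graph of `trop_poly` is a convex piecewise-linear function whose breakpoints are the tropical roots.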
Tuesday, January 26, 2010 - 11:05 , Location: Skiles 269 , , Institute for Advanced Study, Princeton , , Organizer: Christopher Heil
In the lecture I will explain how various fundamental structures from group representation theory appear naturally in the context of discrete harmonic analysis and can be applied to solve concrete problems from digital signal processing. I will begin the lecture by describing our solution to the problem of finding a canonical orthonormal basis of eigenfunctions of the discrete Fourier transform (DFT). Then I will explain how to generalize the construction to obtain a larger collection of functions that we call "The oscillator dictionary". Functions in the oscillator dictionary admit many interesting pseudo-random properties, in particular, I will explain several of these properties which arise in the context of problems of current interest in communication theory.
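As background to the eigenbasis problem, one can check numerically that the unitary DFT matrix $F$ satisfies $F^4 = I$, so its eigenvalues lie among $\{1, -1, i, -i\}$ and each eigenspace is highly degenerate; this degeneracy is why a canonical choice of orthonormal eigenbasis is nontrivial. A small illustrative sketch (my own, not from the talk):

```python
import cmath

def dft_matrix(N):
    """Unitary DFT matrix: F[j][k] = w^(j*k) / sqrt(N), w = exp(-2*pi*i/N)."""
    w = cmath.exp(-2j * cmath.pi / N)
    s = N ** -0.5
    return [[s * w ** (j * k) for k in range(N)] for j in range(N)]

def mat_mul(A, B):
    """Plain complex matrix product (fine for small N)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

N = 8
F = dft_matrix(N)
F2 = mat_mul(F, F)
F4 = mat_mul(F2, F2)

# F^4 should equal the identity (up to floating-point error), so the
# eigenvalues of F are fourth roots of unity with large multiplicities.
err = max(abs(F4[i][j] - (1 if i == j else 0))
          for i in range(N) for j in range(N))
```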
Thursday, January 21, 2010 - 15:00 , Location: Skiles 255 , , Carnegie Mellon University , Organizer: Sung Ha Kang
In this talk, I will first discuss several chemotaxis models, including the classical Keller-Segel model. Chemotaxis is the phenomenon in which cells, bacteria, and other single-cell or multicellular organisms direct their movements according to certain chemicals (chemoattractants) in their environment. The mathematical models of chemotaxis are usually described by highly nonlinear, time-dependent systems of PDEs. Therefore, accurate and efficient numerical methods are very important for the validation and analysis of these systems. Furthermore, a common property of all existing chemotaxis systems is their ability to model a concentration phenomenon that mathematically results in solutions rapidly growing in small neighborhoods of concentration points/curves. The solutions may blow up or may exhibit a very singular, spiky behavior. In either case, capturing such solutions numerically is a challenging problem. In our work we propose a family of stable (even at times near blow-up) and highly accurate numerical methods, based on interior penalty discontinuous Galerkin (IPDG) schemes, for the Keller-Segel chemotaxis model with parabolic-parabolic coupling. This model is the basic step in the modeling of many real biological processes, and it is described by a system of a convection-diffusion equation for the cell density, coupled with a reaction-diffusion equation for the chemoattractant concentration. We prove theoretical hp error estimates for the proposed discontinuous Galerkin schemes. Our proof is valid for pre-blow-up times, since we assume boundedness of the exact solution. Numerical experiments demonstrating the stability and accuracy of the proposed methods for chemotaxis models, and comparisons with other methods, will be presented. Ongoing research projects will be discussed as well.
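For orientation, here is a toy explicit finite-difference discretization of the one-dimensional parabolic-parabolic Keller-Segel system (with chemotactic sensitivity 1); this is my own illustration of the model, not the IPDG method of the talk, and all grid parameters are arbitrary choices. Writing the cell-density equation in conservative flux form with zero-flux walls keeps the total cell mass exactly constant.

```python
import math

# Toy 1-D parabolic-parabolic Keller-Segel system (sensitivity chi = 1):
#   u_t = (u_x - u c_x)_x     (cell density, conservative flux form)
#   c_t = c_xx - c + u        (chemoattractant concentration)
# Explicit finite differences with zero-flux (Neumann) boundaries.
N, L = 50, 1.0
dx, dt, steps = L / N, 1e-4, 200          # dt/dx^2 = 0.25 keeps diffusion stable
u = [1.0 + 0.1 * math.cos(math.pi * (i + 0.5) * dx) for i in range(N)]
c = [0.0] * N

def flux(u, c, i):
    """Numerical flux across cell interface i+1/2: diffusion minus chemotaxis."""
    return ((u[i + 1] - u[i]) / dx
            - 0.5 * (u[i] + u[i + 1]) * (c[i + 1] - c[i]) / dx)

for _ in range(steps):
    # Zero flux at the two walls; interior fluxes from the formula above.
    F = [0.0] + [flux(u, c, i) for i in range(N - 1)] + [0.0]
    u_new = [u[i] + dt / dx * (F[i + 1] - F[i]) for i in range(N)]
    # Ghost-cell Neumann Laplacian for the chemoattractant equation.
    cl = [c[0]] + c[:-1]
    cr = c[1:] + [c[-1]]
    c = [c[i] + dt * ((cl[i] - 2 * c[i] + cr[i]) / dx ** 2 - c[i] + u[i])
         for i in range(N)]
    u = u_new

mass = sum(u) * dx  # telescoping fluxes conserve total cell mass (here = 1)
```

Near blow-up this naive explicit scheme loses stability and positivity, which is precisely the difficulty the IPDG methods described in the abstract are designed to handle.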
Tuesday, January 19, 2010 - 11:05 , Location: Skiles 269 , , University of Michigan , Organizer: Prasad Tetali
The Edrei-Thoma theorem characterizes totally positive functions, and plays an important role in character theory of the infinite symmetric group. The Loewner-Whitney theorem characterizes totally positive elements of the general linear group, and is fundamental for Lusztig's theory of total positivity in reductive groups. In this work we derive a common generalization of the two theorems. The talk is based on joint work with Thomas Lam.
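A totally positive matrix is one all of whose minors (determinants of square submatrices) are strictly positive. The brute-force check below (a hypothetical illustration of the definition, feasible only for small matrices) makes this concrete:

```python
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion along the first row
    (fine for the small submatrices checked here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_totally_positive(M):
    """True iff every minor of M is strictly positive."""
    n, m = len(M), len(M[0])
    for k in range(1, min(n, m) + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(m), k):
                if det([[M[i][j] for j in cols] for i in rows]) <= 0:
                    return False
    return True
```

For instance, `[[1, 1], [1, 2]]` is totally positive (its minors are 1, 1, 1, 2, and 1), while `[[1, 2], [3, 4]]` is not, since its determinant is negative.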
Thursday, January 7, 2010 - 14:00 , Location: Skiles 269 , , Tufts University , , Organizer: John Etnyre
Attached to every homeomorphism of a surface is a real number called its dilatation.  For a generic (i.e. pseudo-Anosov) homeomorphism, the dilatation is an algebraic integer that records various properties of the map.  For instance, it determines the entropy (dynamics), the growth rate of lengths of geodesics under iteration (geometry), the growth rate of intersection numbers under iteration (topology), and the length of the corresponding loop in moduli space (complex analysis). The set of possible dilatations is quite mysterious.  In this talk I will explain the discovery, joint with Benson Farb and Chris Leininger, of two universality phenomena.  The first can be described as "algebraic complexity implies dynamical complexity", and the second as "geometric complexity implies dynamical complexity".
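A toy analogue of dilatation, useful for intuition (my illustration, not from the talk): a linear Anosov map of the torus induced by an integer matrix with determinant 1 and trace of absolute value greater than 2 stretches by its larger eigenvalue, which plays the role of the dilatation, and the entropy is its logarithm.

```python
import math

def dilatation(a, b, c, d):
    """Larger eigenvalue of the integer matrix [[a, b], [c, d]],
    assumed to have determinant 1 and |trace| > 2 (Anosov case).
    This eigenvalue is an algebraic integer, as in the surface setting."""
    tr = a + d
    assert a * d - b * c == 1 and abs(tr) > 2
    return (abs(tr) + math.sqrt(tr * tr - 4)) / 2

lam = dilatation(2, 1, 1, 1)   # larger root of x^2 - 3x + 1, i.e. (3 + sqrt 5)/2
entropy = math.log(lam)        # topological entropy of the induced torus map
```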
Tuesday, December 8, 2009 - 14:00 , Location: Skiles 269 , Xia Hua , Massachusetts Institute of Technology , Organizer: Christian Houdre
In a regression model, say Y_i = f(X_i) + \epsilon_i, where (X_i, Y_i) are observed and f is an unknown regression function, the errors \epsilon_i may satisfy what we call the "weak" assumption that they are orthogonal with mean 0 and the same variance, and often the further "strong" assumption that they are i.i.d. N(0,\sigma^2) for some \sigma \geq 0. In this talk, I will focus on the polynomial regression model, namely f(x) = \sum_{i=0}^n a_i x^i for unknown parameters a_i, under the strong assumption on the errors. When the a_i are estimated via least squares (equivalent to maximum likelihood) by \hat a_i, we obtain the {\it residuals} \hat\epsilon_j := Y_j - \sum_{i=0}^n \hat a_i X_j^i. We would like to test the hypothesis that the nth order polynomial regression holds with \epsilon_j i.i.d. N(0,\sigma^2), while the alternative can simply be the negation or be more specific, e.g., polynomial regression with order higher than n. I will talk about two possible tests: first, the rather well known turning point test, and second, a possibly new "convexity point test." Here the errors \epsilon_j are unobserved, but for large enough n, if the model holds, \hat a_i will be close enough to the true a_i that \hat\epsilon_j will have approximately the properties of \epsilon_j. The turning point test would be applicable either by this approximation or in case one can evaluate the distribution of the turning point statistic for residuals. The "convexity point test," for which the test statistic is actually the same whether applied to the errors \epsilon_j or the residuals \hat\epsilon_j, avoids the approximation required in applying the turning point test to residuals. On the other hand, the null distribution of the convexity point statistic depends on the assumption of i.i.d. normal (not only continuously distributed) errors.
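The turning point test can be sketched concretely. A turning point is an index j where the sequence has a local maximum or minimum; for n i.i.d. continuous random variables the number of turning points T has E[T] = 2(n-2)/3 and Var(T) = (16n-29)/90 (classical results), so the standardized statistic is approximately standard normal for large n. A minimal illustration (my own sketch, not code from the talk):

```python
import math

def turning_points(x):
    """Count indices j with x[j-1] < x[j] > x[j+1] (local max)
    or x[j-1] > x[j] < x[j+1] (local min)."""
    return sum(1 for j in range(1, len(x) - 1)
               if (x[j - 1] < x[j] > x[j + 1]) or (x[j - 1] > x[j] < x[j + 1]))

def turning_point_z(x):
    """Standardized turning point statistic. For i.i.d. continuous errors,
    E[T] = 2(n-2)/3 and Var(T) = (16n-29)/90, so z is approximately
    standard normal under the null for large n."""
    n = len(x)
    t = turning_points(x)
    mean = 2 * (n - 2) / 3
    var = (16 * n - 29) / 90
    return (t - mean) / math.sqrt(var)
```

Applied to the residuals \hat\epsilon_j, a z-value far from 0 casts doubt on the hypothesis that the errors behave like an i.i.d. continuous sequence; e.g. a smooth trend left in the residuals produces far fewer turning points than expected and a large negative z.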
Monday, December 7, 2009 - 14:05 , Location: Skiles 255 , , IHES/Courant , Organizer: Igor Belegradek
The Dehn function is a group invariant which connects geometric and combinatorial group theory; it measures both the difficulty of the word problem and the area necessary to fill a closed curve in an associated space with a disc.  The behavior of the Dehn function for high-rank lattices in high-rank symmetric spaces has long been an open question; one particularly interesting case is SL(n,Z).  Thurston conjectured that SL(n,Z) has a quadratic Dehn function when n>=4. This differs from the behavior for n=2 (when the Dehn function is linear) and for n=3 (when it is exponential).  I have proved Thurston's conjecture when n>=5, and in this talk, I will give an introduction to the Dehn function, discuss some of the background of the problem and, time permitting, give a sketch of the proof.