
Series: Analysis Seminar

TBA

Series: Geometry Topology Seminar

Series: CDSNS Colloquium

Consider a dynamical system $T:X\to X$ with a $T$-invariant probability measure $m$, and let $f$ be a measurable function with Birkhoff sums $S_nf=\sum_{j=0}^{n-1}f\circ T^j$. The occupation time of $f$ is the sequence $\ell_n(A,x)$ ($A\subset \mathbb R$, $x\in X$) counting the number of $j\le n$ for which $S_jf(x)\in A$. The talk will discuss conditions which ensure that this sequence, properly normalized, converges weakly to some limit distribution. It turns out that this distribution is Mittag-Leffler; in particular, the result covers the case when $f\circ T^j$ is a fractional Gaussian noise of Hurst parameter $H>3/4$.
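The occupation-time sequence is easy to simulate numerically. The following is a toy sketch (not from the talk): the map, observable, interval, and starting point are all illustrative choices, with the tripling map standing in for a generic measure-preserving system.

```python
import numpy as np

def occupation_time(f, T, x0, n, a, b):
    """Count j <= n with Birkhoff partial sums S_j f(x0) in A = [a, b].

    Toy illustration: T and f below are arbitrary choices, not the
    systems considered in the talk.
    """
    x, s, count = x0, 0.0, 0
    for _ in range(n):
        s += f(x)              # S_j f(x0) = sum_{i<j} f(T^i x0)
        if a <= s <= b:
            count += 1
        x = T(x)
    return count

T = lambda x: (3.0 * x) % 1.0          # tripling map, preserves Lebesgue measure
f = lambda x: np.cos(2 * np.pi * x)    # mean-zero observable
ell = occupation_time(f, T, x0=0.1234, n=10_000, a=-1.0, b=1.0)
```

The question addressed in the talk is how $\ell_n(A,x)$ behaves as $n\to\infty$ after rescaling, and for which systems the rescaled limit is Mittag-Leffler.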

Series: GT-MAP Seminars

Since its inception, neuroscience has largely focused on the neuron as the functional unit of the nervous system. However, recent evidence demonstrates that populations of neurons within a brain area collectively show emergent functional properties ("dynamics"), properties that are not apparent at the level of individual neurons. These emergent dynamics likely serve as the brain's fundamental computational mechanism. This shift compels neuroscientists to characterize emergent properties – that is, interactions between neurons – to understand computation in brain networks. Yet this introduces a daunting challenge – with millions of neurons in any given brain area, characterizing interactions within an area, and further, between brain areas, rapidly becomes intractable.

I will demonstrate a novel unsupervised tool, Latent Factor Analysis via Dynamical Systems ("LFADS"), that can accurately and succinctly capture the emergent dynamics of large neural populations from limited sampling. LFADS is based around deep learning architectures (variational sequential auto-encoders), and builds a model of an observed neural population's dynamics using a nonlinear dynamical system (a recurrent neural network). When applied to neuronal ensemble recordings (~200 neurons) from macaque primary motor cortex (M1), we find that modeling population dynamics yields accurate estimates of the state of M1, as well as accurate predictions of the animal's motor behavior, on millisecond timescales. I will also demonstrate how our approach allows us to infer perturbations to the dynamical system (i.e., unobserved inputs to the neural population), and further allows us to leverage population recordings across long timescales (months) to build more accurate models of M1's dynamics.

This approach demonstrates the power of deep learning tools to model nonlinear dynamical systems and infer accurate estimates of the states of large biological networks. In addition, we will discuss future directions, where we aim to pry open the "black box" of the trained recurrent neural networks, in order to understand the computations being performed by the modeled neural populations.

Pre-print available: lfads.github.io
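The generative picture behind LFADS can be sketched as a small forward model: a recurrent network evolves low-dimensional latent factors, which drive Poisson firing rates for the observed neurons. This is a toy sketch under assumed dimensions, not the actual LFADS implementation (for that, see lfads.github.io); all weights and sizes here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model in the spirit of an RNN generator (a sketch, not the
# LFADS code): latent factors evolve under a nonlinear dynamical system
# and drive Poisson spiking in a population of neurons.
n_latent, n_neurons, n_steps = 8, 200, 100
W_rec = rng.normal(scale=1.0 / np.sqrt(n_latent), size=(n_latent, n_latent))
W_out = rng.normal(scale=0.1, size=(n_neurons, n_latent))

z = rng.normal(size=n_latent)          # initial latent state
spikes = np.empty((n_steps, n_neurons))
for t in range(n_steps):
    z = np.tanh(W_rec @ z)             # nonlinear dynamical system (RNN step)
    rates = np.exp(W_out @ z)          # log-link firing rates
    spikes[t] = rng.poisson(rates)     # observed spike counts
```

LFADS inverts this picture: given only the spike counts, a variational sequential auto-encoder is trained to recover the latent dynamics that generated them.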

Series: Math Physics Seminar

This talk will focus on large deviation theory (LDT) for Schr\"odinger cocycles over a quasi-periodic or skew-shift base. We will discuss its connection to the positivity and regularity of the Lyapunov exponent, as well as to localization, and mention some open problems for the skew-shift model.
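For a concrete picture of the quantity at the center of the talk, the Lyapunov exponent of a quasi-periodic Schrödinger cocycle can be estimated from the growth rate of transfer-matrix products. The sketch below uses the almost Mathieu transfer matrix over an irrational rotation; the parameter values are illustrative choices, not taken from the talk.

```python
import numpy as np

def lyapunov_exponent(E, lam, alpha, n=100_000, x0=0.0):
    """Estimate the Lyapunov exponent of a quasi-periodic Schrodinger
    cocycle from the norm growth of transfer-matrix products.
    """
    x, v, log_growth = x0, np.array([1.0, 0.0]), 0.0
    for _ in range(n):
        # almost Mathieu transfer matrix at energy E, coupling lam
        A = np.array([[E - 2 * lam * np.cos(2 * np.pi * x), -1.0],
                      [1.0, 0.0]])
        v = A @ v
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)     # accumulate log-growth ...
        v /= norm                      # ... and renormalize to avoid overflow
        x = (x + alpha) % 1.0          # irrational rotation of the base
    return log_growth / n

alpha = (np.sqrt(5) - 1) / 2           # golden-mean frequency
L = lyapunov_exponent(E=0.0, lam=2.0, alpha=alpha)
```

At coupling $\lambda=2>1$ the exponent is positive (bounded below by $\log\lambda$), which is the kind of positivity statement that LDT techniques are used to establish and refine.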

Series: Math Physics Seminar

TBA

Wednesday, March 28, 2018 - 14:00
Location: Atlanta
Justin Lanier, GaTech
Organizer: Anubhav Mukherjee

Series: Analysis Seminar

Series: Research Horizons Seminar

Many data sets in image analysis and signal processing are in a high-dimensional space but exhibit a low-dimensional structure. We are interested in building efficient representations of these data for the purpose of compression and inference. In the setting where a data set in $R^D$ consists of samples from a probability measure concentrated on or near an unknown $d$-dimensional manifold with $d$ much smaller than $D$, we consider two sets of problems: low-dimensional geometric approximations to the manifold and regression of a function on the manifold. In the first case, we construct multiscale low-dimensional empirical approximations to the manifold and give finite-sample performance guarantees. In the second case, we exploit these empirical geometric approximations of the manifold and construct multiscale approximations to the function. We prove finite-sample guarantees showing that we attain the same learning rates as if the function was defined on a Euclidean domain of dimension $d$. In both cases our approximations can adapt to the regularity of the manifold or the function even when this varies at different scales or locations.
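The basic phenomenon — data in $R^D$ concentrated near a $d$-dimensional manifold, locally well approximated by a $d$-dimensional plane — can be seen in a toy experiment. This is an illustrative sketch, not the speaker's multiscale construction: it samples a circle ($d=1$) embedded in $R^{10}$ and uses the singular values of a local neighborhood to read off the intrinsic dimension. All sizes and thresholds are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample points near a 1-dimensional manifold (a circle) embedded in R^10,
# with small ambient noise.
D, n = 10, 2000
theta = rng.uniform(0, 2 * np.pi, size=n)
X = np.zeros((n, D))
X[:, 0], X[:, 1] = np.cos(theta), np.sin(theta)
X += 0.005 * rng.normal(size=(n, D))

# Local linear approximation at one point: take its 50 nearest neighbors
# and inspect the singular values of the centered neighborhood.
p = X[0]
idx = np.argsort(np.linalg.norm(X - p, axis=1))[:50]
local = X[idx] - X[idx].mean(axis=0)
sing = np.linalg.svd(local, compute_uv=False)
d_hat = int(np.sum(sing > 0.3 * sing[0]))   # dominant directions ~ intrinsic d
```

One dominant singular value survives the threshold, recovering $d=1$ even though the ambient dimension is 10; a multiscale version of this idea repeats the analysis over neighborhoods of varying size to adapt to the regularity of the manifold.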

Series: Geometry Topology Seminar

For oriented manifolds of dimension at least 4 that are simply connected at infinity, it is known that end summing (the noncompact analogue of boundary summing) is a uniquely defined operation. Calcut and Haggerty showed that more complicated fundamental group behavior at infinity can lead to nonuniqueness. We will examine how and when uniqueness fails. There are examples in various categories (homotopy, TOP, PL and DIFF) of nonuniqueness that cannot be detected in a weaker category. In contrast, we will present a group-theoretic condition that guarantees uniqueness. As an application, the monoid of smooth manifolds homeomorphic to R^4 acts on the set of smoothings of any noncompact 4-manifold. (This work is joint with Jack Calcut.)