
Series: ACO Student Seminar

Understanding how the physiology of organisms arises through the dynamic interaction of the molecular constituents of life is an important goal of molecular systems biology, for which mathematical modeling can be very helpful. Different modeling strategies have been used for this purpose. Dynamic mathematical models can be broadly divided into two classes: continuous, such as systems of differential equations and their stochastic variants, and discrete, such as Boolean networks and their generalizations. This talk will focus on the discrete modeling approach. Applications will include the study of stochasticity in this setting. No background in mathematical biology is required, and the talk will be accessible to a broad audience.
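To make the discrete approach concrete, here is a minimal Python sketch of a Boolean network (the three-node wiring is hypothetical, chosen purely for illustration). The stochastic variant shows one common way randomness enters such models: each node applies its update rule only with some probability.

```python
import random

def step(state, rules):
    """Synchronous update: every node applies its Boolean rule at once."""
    return tuple(rule(state) for rule in rules)

def stochastic_step(state, rules, p=0.9):
    """Each node applies its rule with probability p and otherwise keeps
    its current value -- one simple way to introduce stochasticity."""
    return tuple(rule(state) if random.random() < p else s
                 for s, rule in zip(state, rules))

# Hypothetical 3-node network:
#   x0' = x1 AND x2,   x1' = NOT x0,   x2' = x0 OR x1
rules = [lambda s: s[1] and s[2],
         lambda s: not s[0],
         lambda s: s[0] or s[1]]

state = (True, False, True)
for _ in range(5):
    state = step(state, rules)
# The deterministic dynamics return to the initial state: a length-5 cycle.
```

Attractors (fixed points and cycles) of such networks are the discrete analogue of steady states in differential-equation models.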

Series: ACO Student Seminar

In this talk, we introduce the Linearized Proximal Alternating Minimization Algorithm and its variants for total variation based variational models. Since the proposed method does not require any special inner solver (e.g. FFT or DCT), which is quite often required in augmented Lagrangian based approaches such as ADMM, it shows better performance for large scale problems. In addition, we briefly introduce a new regularization method (nonconvex higher-order total variation).
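The selling point above (no FFT or DCT inner solve) can be illustrated with a related, much simpler first-order scheme: projected gradient ascent on the dual of the 1D ROF model, in the style of Chambolle. This is a sketch, not the LPAMA of the talk; the point is that each iteration uses only finite differences and clipping.

```python
import numpy as np

def tv_denoise_1d(f, lam, iters=500, tau=0.25):
    """Solve min_u 0.5*||u - f||^2 + lam*TV(u) by projected gradient
    ascent on the dual.  Every operation is a finite difference or a
    clip -- no FFT/DCT inner solver is ever needed."""
    p = np.zeros(len(f) - 1)                 # one dual variable per edge
    dT = lambda q: -np.diff(np.concatenate(([0.0], q, [0.0])))  # D^T q
    for _ in range(iters):
        u = f - dT(p)                        # primal update (closed form)
        p = np.clip(p + tau * np.diff(u), -lam, lam)  # ascent + projection
    return f - dT(p)

# Toy piecewise-constant signal plus noise:
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
u = tv_denoise_1d(f, lam=0.5)
```

The denoised signal keeps the jump but removes the small oscillations, and the mean of the signal is preserved (up to rounding) because $D^T p$ telescopes to zero.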

Series: ACO Student Seminar

In this talk we consider the problem of finding basic solutions to linear programs where some vertices are excluded. We study the complexity of this and related problems, most of which turn out to be hard. On the other hand, we show that forbidding vertices from 0-1 polytopes can be carried out with a compact extended formulation. A similar result holds for integer programs having a box-integrality property. We discuss some applications of our results.

Series: ACO Student Seminar

The hard-core model has attracted much attention across several
disciplines, representing lattice gases in statistical physics and
independent sets in discrete mathematics and computer science. On
finite graphs, we are given a parameter \lambda, and an independent
set I arises with probability proportional to \lambda^{|I|}. We
are interested in determining the mixing time of local Markov
chains that add or remove a small number of vertices in each step.
On finite regions of Z^2 it is conjectured that there is a phase
transition at some critical point \lambda_c that is approximately
3.79. It is known that local chains are rapidly mixing when
\lambda < 2.3882. We give complementary results showing that
local chains will mix slowly when \lambda > 5.3646 on regions
with periodic (toroidal) boundary conditions and when
\lambda > 7.1031 with non-periodic (free) boundary conditions.
The proofs use a combinatorial characterization of configurations
based on the presence or absence of fault lines and an enumeration
of a new class of self-avoiding walks called taxi walks.
(Joint work with Antonio Blanca, David Galvin and Prasad Tetali)
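For intuition, a single-site version of such a local chain (heat-bath updates for the hard-core model on an n x n torus) can be sketched as follows; n and lambda here are toy values, not the thresholds discussed above.

```python
import random

def glauber_step(I, n, lam):
    """One heat-bath step: pick a uniform vertex of the n x n torus; if no
    neighbor is occupied, occupy it with probability lam/(1+lam), else
    leave it unoccupied (it is forced out by the hard-core constraint)."""
    x, y = random.randrange(n), random.randrange(n)
    nbrs = [((x + 1) % n, y), ((x - 1) % n, y),
            (x, (y + 1) % n), (x, (y - 1) % n)]
    I.discard((x, y))
    if all(u not in I for u in nbrs) and random.random() < lam / (1 + lam):
        I.add((x, y))

random.seed(0)
n, lam = 10, 1.0
I = set()                 # start from the empty independent set
for _ in range(20000):
    glauber_step(I, n, lam)
# I is now an approximate sample; it is always a valid independent set.
```

Chains that add or remove a small number of vertices per step generalize this single-site update; the mixing-time results above concern this local family.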

Series: ACO Student Seminar

Mechanism design for distributed systems is fundamentally concerned with aligning individual incentives with social welfare to avoid socially inefficient outcomes that can arise from agents acting autonomously. One simple and natural approach is to centrally broadcast non-binding advice intended to guide the system to a socially near-optimal state while still harnessing the incentives of individual agents. The analytical challenge is proving fast convergence to near-optimal states, and we present the first results showing that carefully constructed advice vectors yield stronger guarantees. We apply this approach to a broad family of potential games modeling vertex cover and set cover optimization problems in a distributed setting. This class of problems is interesting because finding exact solutions to the underlying optimization problems is NP-hard, yet highly inefficient equilibria exist, so a solution in which agents simply optimize locally is not satisfactory. We show that with an arbitrary advice vector, a set cover game quickly converges to an equilibrium with cost of the same order as the square of the social cost of the advice vector. More interestingly, we show how to efficiently construct an advice vector with a particular structure whose cost is $O(\log n)$ times the optimal social cost, and we prove that the system quickly converges to an equilibrium with social cost of this same order.
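The $O(\log n)$ advice cost matches the guarantee of the classical greedy set cover heuristic. The talk's actual advice construction may differ, but greedy is the natural benchmark and is easy to sketch (the instance below is a hypothetical toy example):

```python
def greedy_set_cover(universe, sets):
    """Repeatedly pick the set covering the most still-uncovered elements.
    The classical analysis bounds its cost by H_n = O(log n) times the
    optimal cover -- the same order quoted for the advice vector."""
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("instance is not coverable")
        chosen.append(best)
        uncovered -= best
    return chosen

# Hypothetical toy instance with universe {1, ..., 7}:
cover = greedy_set_cover(range(1, 8),
                         [{1, 2, 3, 4}, {4, 5, 6}, {6, 7}, {1, 5, 7}])
```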

Series: ACO Student Seminar

Inpainting, deblurring and denoising images are common tasks required in a number of applications in science and engineering. Since the seminal work of Rudin, Osher and Fatemi, image regularization by total variation (TV) has become a standard heuristic for these tasks. In this talk, I will introduce the TV regularization model and some connections with sparse optimization and compressed sensing. Later, I will summarize some of the fastest existing methods for solving TV regularization problems. Motivated by improving the super-linear (in the dimension) running time of these algorithms, we propose two heuristics for image regularization models: the first replaces the TV by the $\ell^1$ norm of the Laplacian, and the second is a new (to the best of our knowledge) approximation of the TV seminorm, based on a redundant parameterization of the gradient field. We prove that the latter regularizer is an $O(\log n)$ approximation of the TV seminorm. The proof is based on basic techniques from discrete Fourier analysis and an estimate, due to Mangad, of the fundamental solutions of the Laplace equation on a grid. Finally, we present preliminary computational results for the three models on mid-scale images. This talk will be self-contained. Joint work with Arkadi Nemirovski.
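To make the first heuristic concrete, the following sketch evaluates the anisotropic TV seminorm and the $\ell^1$ norm of the 5-point discrete Laplacian on a toy image with a single vertical edge; the image and values are illustrative only.

```python
import numpy as np

def tv_seminorm(u):
    """Anisotropic discrete TV: l1 norm of forward differences in x and y."""
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

def l1_laplacian(u):
    """l1 norm of the 5-point Laplacian on interior nodes -- the first
    substitute regularizer proposed in the abstract."""
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4.0 * u[1:-1, 1:-1])
    return np.abs(lap).sum()

u = np.zeros((6, 6))
u[:, 3:] = 1.0            # toy image: a single vertical edge
# The two regularizers charge the same edge differently:
# tv_seminorm(u) = 6.0, while l1_laplacian(u) = 8.0.
```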

Series: ACO Student Seminar

We solve a 20-year-old problem posed by M. Yannakakis and prove that there exists no polynomial-size linear program (LP) whose associated polytope projects to the traveling salesman polytope, even if the LP is not required to be symmetric. Moreover, we prove that this also holds for the maximum cut problem and the stable set problem. These results follow from a new connection that we make between one-way quantum communication protocols and semidefinite programming reformulations of LPs. (Joint work with Samuel Fiorini, Serge Massar, Hans Raj Tiwary, and Ronald de Wolf.)

Series: ACO Student Seminar

In this talk I will briefly survey results on vertex sparsification and some of our results on mimicking networks (also called exact cut sparsifiers). Ankur Moitra introduced the notion of vertex sparsification to construct a smaller graph that preserves the properties of a huge network relevant to its terminals. Given a capacitated undirected graph $G=(V,E)$ with a set of terminals $K \subset V$, a vertex cut sparsifier is a smaller graph $H=(V_H,E_H)$ that approximately (with quality $f \ge 1$) preserves all the minimum cuts between the terminals. Mimicking networks are the best-quality vertex cut sparsifiers, i.e., those with quality 1. We improve both the previous upper bound ($2^{2^{k}}$) and lower bound ($k+1$) for mimicking networks, reducing the doubly-exponential gap between them to a single-exponential gap.

1. Given a graph $G$, we exhibit a construction of a mimicking network with at most the $k$'th Hosten-Morris number ($\approx 2^{{(k-1)} \choose {\lfloor {(k-1)}/2 \rfloor}}$) of vertices (independent of the size of $V$). Furthermore, we show that the construction is optimal among all {\it restricted mimicking networks} -- a natural class of mimicking networks obtained by clustering vertices together.

2. There exist graphs with $k$ terminals that have no mimicking network of size smaller than $2^{\frac{k-1}{2}}$.

3. We also exhibit constructions of better mimicking networks for trees ($\lfloor \frac{3k}{2} - 1 \rfloor$), outerplanar graphs ($5k-9$) and graphs of tree-width bounded by $t$ ($k \cdot 2^{{(2t+1)} \choose {(2t+1)/2}}$).

The talk will be self-contained, with no prerequisites.

Series: ACO Student Seminar

We present new algorithms for learning the class of two-sided disjunctions in
the semi-supervised PAC setting and in the active learning model. These
algorithms are efficient and have good sample complexity. By exploiting the
power of active learning, we are able to find consistent, compatible
hypotheses -- a task which is computationally intractable in the
semi-supervised setting.

Series: ACO Student Seminar

A branching random walk consists of a population of individuals, each of whom performs a random walk step before giving birth to a random number of offspring and dying. The offspring then perform their own independent random steps and branching. I will present classic results on the convergence of the empirical particle measure to the Gaussian distribution, then present new results on large deviations of this empirical measure. The talk will be self-contained and can serve as an introduction to both the branching random walk and large deviation theory. The format will be 40 minutes of introduction and presentation, followed by a short break and then 20 minutes of discussion of open problems for those interested.
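A minimal simulation makes the object concrete. The sketch below uses deterministic binary branching and standard normal steps (a simplification of the random offspring numbers above); each particle's position at generation n is a sum of n i.i.d. steps, so the positions have variance of order n, consistent with the Gaussian limit of the empirical measure.

```python
import random
import statistics

def branching_random_walk(generations, offspring=2):
    """Each individual produces `offspring` children, and each child takes
    an independent N(0, 1) step from its parent's position.  Deterministic
    binary branching stands in for the random offspring counts."""
    positions = [0.0]
    for _ in range(generations):
        positions = [x + random.gauss(0.0, 1.0)
                     for x in positions
                     for _ in range(offspring)]
    return positions

random.seed(42)
pos = branching_random_walk(10)          # 2**10 = 1024 particles
m = statistics.fmean(pos)                # empirical mean, near 0
v = statistics.pvariance(pos)            # empirical variance, of order 10
```

Note that the particles are correlated through shared ancestry, which is exactly what makes the large-deviation behavior of the empirical measure more delicate than in the i.i.d. case.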