
Series: Geometry Topology Seminar

In many physical situations we are interested in topological lower bounds for the L^2-energy of volume-preserving vector fields. Such situations include, for instance, the evolution of a magnetic field in ideal magnetohydrodynamics. Classical energy bounds involve topological invariants, like helicity, which measure the linkage of orbits in the flow. In this talk I will present a new lower bound in terms of the third-order helicity, an invariant measuring a third-order linkage of orbits. I will also discuss how the third-order helicity can be derived from Milnor's \bar{\mu} invariant for 3-component links.
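
For orientation, here is the classical first-order bound the talk generalizes, as a hedged sketch (standard definitions only, not details from the talk): for a divergence-free field B = \nabla \times A on a domain M, the helicity and the Arnold-type energy bound read

```latex
% Classical helicity of a divergence-free field B = \nabla \times A on a
% domain M, and the energy bound it yields; the constant C_M depends only
% on the geometry of M.
\[
  \mathcal{H}(B) = \int_M A \cdot B \,\mathrm{d}V,
  \qquad
  E(B) = \int_M |B|^2 \,\mathrm{d}V \;\ge\; C_M\, \lvert \mathcal{H}(B) \rvert .
\]
```

The talk's contribution is an analogous lower bound with \mathcal{H} replaced by the third-order invariant.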

Series: Graph Theory Seminar

In this lecture I will introduce the method and sketch some recent applications. The main idea is to exploit a natural connection between the evolution of discrete random processes and continuous functions on the real numbers. Roughly speaking, the method is as follows: Given a discrete random process, we calculate the expected change in the random variable (or variables) of interest in one step of the process, write a differential equation suggested by the expected change, and show that the evolution of the random variable over the course of the process is sharply concentrated around the trajectory given by the solution of the differential equation. This allows us to translate simple facts (often combinatorial in nature) about expected changes in one step of the process into strong statements about sharp concentration of the random variable over the entire course of the process.
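
As a hedged illustration of this recipe (my toy example, not one from the lecture), consider the coupon-collector process: the expected one-step change of the number D_t of distinct coupons is 1 - D_t/n, the suggested ODE is x' = 1 - x, and the simulated trajectory hugs its solution n(1 - e^{-t/n}).

```python
import math
import random

def distinct_coupons(n, steps):
    """Draw a uniformly random coupon from {0, ..., n-1} at each step and
    track how many distinct coupons have been seen so far."""
    seen = set()
    trajectory = []
    for _ in range(steps):
        seen.add(random.randrange(n))
        trajectory.append(len(seen))
    return trajectory

n = 100_000
traj = distinct_coupons(n, 2 * n)

# One-step expected change: E[D_{t+1} - D_t | D_t] = 1 - D_t/n.  Rescaling
# time and space by n (tau = t/n, x = D/n) suggests dx/dtau = 1 - x, whose
# solution with x(0) = 0 is x(tau) = 1 - e^{-tau}; the method's conclusion
# is that the process concentrates sharply around n * x(t/n).
for tau in (0.5, 1.0, 1.5, 2.0):
    t = int(tau * n)
    print(f"tau = {tau}: simulated = {traj[t - 1]}, "
          f"ODE prediction = {n * (1 - math.exp(-tau)):.0f}")
```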

Wednesday, October 22, 2008 - 15:00
Location: Skiles 269
ZhengJun Zhang
University of Wisconsin
Organizer: Liang Peng

Various correlation measures have been introduced in statistical inference and applications. Each of them may be used to measure the strength of association, or to test independence, between two random variables. The quotient correlation is defined here as an alternative to Pearson's correlation that is more intuitive and flexible in cases where the tail behavior of the data is important. It measures nonlinear dependence where the regular correlation coefficient is generally not applicable. One of its most useful features is a test statistic that has high power when testing nonlinear dependence in cases where Fisher's Z-transformation test may fail to reach the right conclusion. Unlike most asymptotic test statistics, which are either normal or \chi^2, this test statistic has a limiting gamma distribution (henceforth the gamma test statistic). Beyond the common uses of correlation, the quotient correlation can easily and intuitively be adjusted to values in the tails. This adjustment generates two new concepts: the tail quotient correlation and the tail independence test statistics, which are also gamma statistics. Because there is no analogue of the correlation coefficient in extreme value theory, and no efficient tail independence test statistic exists, these two new concepts may open up a new field of study. In addition, an alternative to Spearman's rank correlation, a rank-based quotient correlation, is also defined. The advantages of using these new concepts are illustrated with simulated data and real data analyses of internet traffic, tobacco markets, and financial markets.
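
The abstract does not reproduce the definition. The sketch below implements the max-of-ratios form from Zhang's published work on the quotient correlation; the exact normalization should be read as my assumption, not a transcription of the talk.

```python
import numpy as np

def quotient_correlation(x, y):
    """Max-ratio form of the quotient correlation for positive samples
    (following Zhang's published definition; the normalization here is an
    assumption, not taken from the talk):
        q = (max(y/x) + max(x/y) - 2) / (max(y/x) * max(x/y) - 1)
    q is near 0 for independent samples and near 1 for strongly dependent ones."""
    r1, r2 = np.max(y / x), np.max(x / y)
    return (r1 + r2 - 2.0) / (r1 * r2 - 1.0)

rng = np.random.default_rng(0)
x = rng.exponential(size=10_000)
print("independent:", quotient_correlation(x, rng.exponential(size=10_000)))
print("dependent:  ", quotient_correlation(x, x * rng.uniform(0.9, 1.1, size=10_000)))
```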

Wednesday, October 22, 2008 - 11:00
Location: Skiles 255
Andy Bommarius
School of Chemistry & Biochemistry, Georgia Tech
Organizer: Christine Heitsch

After rational protein design and combinatorial protein engineering (directed evolution), data-driven protein engineering emerges as a third generation of techniques for improving protein properties. Data-driven protein engineering relies heavily on mathematical algorithms. In the first example, we developed a method for predicting the positions in the amino acid sequence that are critical for the catalytic activity of a protein. With nucleotide sequences of both functional and non-functional variants and a Support Vector Machine (SVM) learning algorithm, we set out to narrow the interesting sequence space of proteins, i.e., to find the truly relevant positions. Variants of TEM-1 β-lactamase were created in silico using simulations of both mutagenesis and recombination protocols. The algorithm was shown to be able to predict critical positions that can tolerate up to two different amino acids. Some pairs of amino acid residues are known to lead to inactive sequences unless they are mutated jointly. In the second example, we combine SVM, Boolean learning (BL), and the combination of the two, BLSVM, to find such interactive residues. Results on interactive residues in two fluorescent proteins, Discosoma Red Fluorescent Protein (Ds-Red) and monomeric Red Fluorescent Protein (mRFP), will be presented.
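
A minimal sketch of the general idea on synthetic data (the sequences, the "critical positions", and the use of scikit-learn's LinearSVC are all invented for illustration; this is not the authors' pipeline): one-hot encode variant sequences, train a linear SVM on functional vs. non-functional labels, and rank positions by the magnitude of their learned weights.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
AA = 20                                    # amino-acid alphabet size
SEQ_LEN, N_VARIANTS = 30, 500
CRITICAL = {5, 17}                         # hypothetical critical positions

wild_type = rng.integers(0, AA, size=SEQ_LEN)

def random_variant():
    """Mutate three random positions; the variant stays functional only if
    every critical position retains its wild-type residue."""
    seq = wild_type.copy()
    for pos in rng.choice(SEQ_LEN, size=3, replace=False):
        seq[pos] = rng.integers(0, AA)
    return seq, int(all(seq[p] == wild_type[p] for p in CRITICAL))

def one_hot(seq):
    v = np.zeros((SEQ_LEN, AA))
    v[np.arange(SEQ_LEN), seq] = 1.0
    return v.ravel()

pairs = [random_variant() for _ in range(N_VARIANTS)]
X = np.array([one_hot(s) for s, _ in pairs])
y = np.array([label for _, label in pairs])

clf = LinearSVC(C=1.0, max_iter=10_000).fit(X, y)

# Rank positions by the total weight the linear SVM puts on their residues;
# the invented critical positions 5 and 17 should surface near the top.
scores = np.abs(clf.coef_.reshape(SEQ_LEN, AA)).sum(axis=1)
print("top-ranked positions:", np.argsort(scores)[::-1][:5])
```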

Wednesday, October 22, 2008 - 10:00
Location: Skiles 269
Arthur Szlam
UCLA
Organizer: Haomin Zhou

SPECIAL TIME AND LOCATION FOR THIS WEEK ONLY

The k-planes method is the generalization of k-means where the representatives of each cluster are affine linear sets. In this talk I will describe some possible modifications of this method for discriminative learning problems.
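
A minimal sketch of the plain (non-discriminative) k-planes iteration, assuming the usual alternation between nearest-plane assignment and PCA refitting; the discriminative modifications are the subject of the talk and are not reproduced here.

```python
import numpy as np

def fit_affine(points, dim):
    """Fit a dim-dimensional affine subspace by PCA: the mean plus the span
    of the top right-singular vectors of the centered points."""
    mean = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - mean, full_matrices=False)
    return mean, vt[:dim]                      # rows of vt[:dim] are a basis

def distance(x, mean, basis):
    d = x - mean
    return np.linalg.norm(d - basis.T @ (basis @ d))   # residual off the plane

def k_planes(X, k, dim, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(X))
    for _ in range(iters):
        planes = []
        for j in range(k):
            pts = X[labels == j]
            if len(pts) <= dim:                # reseed a degenerate cluster
                pts = X[rng.choice(len(X), size=dim + 1, replace=False)]
            planes.append(fit_affine(pts, dim))
        labels = np.array([np.argmin([distance(x, m, b) for m, b in planes])
                           for x in X])
    return labels, planes

# Two noisy lines through the origin: ordinary k-means splits them badly,
# while k-planes with dim=1 recovers them.
rng = np.random.default_rng(1)
t = rng.uniform(-1, 1, size=(200, 1))
line1 = np.hstack([t, 2 * t]) + 0.01 * rng.normal(size=(200, 2))
line2 = np.hstack([t, -2 * t]) + 0.01 * rng.normal(size=(200, 2))
labels, _ = k_planes(np.vstack([line1, line2]), k=2, dim=1)
print("cluster sizes:", np.bincount(labels))
```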

Tuesday, October 21, 2008 - 16:30
Location: Skiles 255
Chris Godsil
University of Waterloo
Organizer: Robin Thomas

Refreshments will be served at 4PM in Skiles 236.

The possibility of a quantum computer has led to much new work in theoretical physics and, naturally enough, this work has raised many new mathematical problems. What is perhaps surprising is that it has led to interesting problems in algebraic graph theory. For example, questions about the relative power of quantum computers and classical computers lead to questions about the chromatic number of certain graphs. In my talk I will discuss some of these problems and the progress that has been made.

Series: PDE Seminar

I will discuss a few ways in which reaction-diffusion models have been applied to pattern formation. In particular, in the setting of Cdc42 transport to and from the membrane in a yeast cell, I will present a simple model that achieves polarization. The model and its analysis exhibit some striking differences between deterministic and probabilistic versions of the model.
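
For reference, a generic two-species reaction-diffusion system has the form below; the abstract does not give the equations of the Cdc42 model, so the reaction terms f and g are left abstract.

```latex
% Generic two-species reaction-diffusion system: u and v are concentrations
% (e.g., membrane-bound and cytosolic pools), D_u and D_v are diffusion
% coefficients, and f and g are reaction terms left unspecified here.
\[
  \partial_t u = D_u \Delta u + f(u, v),
  \qquad
  \partial_t v = D_v \Delta v + g(u, v).
\]
```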

Series: ACO Seminar

A central characteristic of a computer chip is the speed at which it processes data, determined by the time it takes electrical signals to travel through the chip. A major challenge in the design of a chip is to achieve timing closure, that is, to find a physical realization fulfilling the speed specifications. We give an overview of the major tasks in optimizing the performance of computer chips and present several new algorithms. For the topology generation of repeater trees, we introduce a variant of the Steiner tree problem and present a fast algorithm that balances efficiently between resource consumption and performance. Another indispensable task is gate sizing, a discrete optimization problem with nonlinear or PDE constraints, for which a fast heuristic is introduced. Its effectiveness in practice is demonstrated by comparison with newly developed lower bounds on the achievable delay. We conclude with a variant of the time-cost tradeoff problem from project management; in contrast to the usual formulation, cycles are allowed. We present a new method to compute the time-cost tradeoff curve in such instances using combinatorial algorithms. Several problems in chip design can be modeled as time-cost tradeoff problems, e.g., threshold voltage optimization or plane assignment.
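
As a hedged sketch of the last ingredient, here is the classical acyclic time-cost tradeoff ("project crashing") written as a linear program; the talk's contribution concerns the variant with cycles and combinatorial (rather than LP) methods, which this toy does not attempt. All instance data are made up.

```python
import numpy as np
from scipy.optimize import linprog

# Toy project network (classical acyclic variant): edges are
# (u, v, normal_duration, max_crash, crash_cost_per_unit).
# Node 0 is the start event, node 3 the finish event.
edges = [(0, 1, 5, 2, 3.0), (0, 2, 4, 1, 1.0),
         (1, 3, 6, 3, 2.0), (2, 3, 7, 2, 4.0)]
n_nodes, deadline = 4, 9

m = len(edges)
n_vars = m + n_nodes                       # [crash amounts | event times]
c = np.zeros(n_vars)
c[:m] = [e[4] for e in edges]              # minimize total crashing cost

A, b = [], []
for i, (u, v, dur, _, _) in enumerate(edges):
    row = np.zeros(n_vars)
    row[i] = -1.0                          # -x_e
    row[m + u] = 1.0                       # +t_u
    row[m + v] = -1.0                      # -t_v
    A.append(row); b.append(-dur)          # encodes t_v >= t_u + dur - x_e
row = np.zeros(n_vars); row[m + n_nodes - 1] = 1.0
A.append(row); b.append(deadline)          # finish event meets the deadline

bounds = ([(0, e[3]) for e in edges]       # 0 <= x_e <= max_crash
          + [(0, 0)] + [(0, None)] * (n_nodes - 1))   # pin t_0 = 0
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds)

# Both paths have length 11, so meeting deadline 9 requires crashing each
# by 2; the cheapest combination for this instance costs 9.0.
print("minimum crashing cost:", res.fun, "crash amounts:", res.x[:m].round(2))
```

Sweeping the deadline and re-solving traces out the time-cost tradeoff curve for such an instance.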

Series: Research Horizons Seminar

We consider the pseudodifferential operators H_{m,\Omega} associated by the prescriptions of quantum mechanics to the Klein-Gordon Hamiltonian when restricted to a compact domain \Omega in {\mathbb R}^d. When the mass m is 0 the operator H_{0,\Omega} coincides with the generator of the Cauchy stochastic process with a killing condition on \partial \Omega. (The operator H_{0,\Omega} is sometimes called the fractional Laplacian with power 1/2.) We prove several universal inequalities for the eigenvalues (joint work with Evans Harrell).
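
For concreteness, the operator can be written as a Fourier multiplier restricted to \Omega (a standard convention; the talk may use an equivalent one):

```latex
% H_{m,\Omega} as the multiplier sqrt(|xi|^2 + m^2) acting on functions
% supported in Omega:
\[
  H_{m,\Omega}\, u
  = \left.\sqrt{-\Delta + m^2}\; u\right|_{\Omega}
  = \left.\mathcal{F}^{-1}\!\big[\,\sqrt{|\xi|^2 + m^2}\,\widehat{u}(\xi)\,\big]\right|_{\Omega},
  \qquad u \in C_c^{\infty}(\Omega),
\]
% so m = 0 gives H_{0,\Omega} = (-\Delta)^{1/2}, the fractional Laplacian
% with power 1/2 mentioned in the abstract.
```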

Series: Other Talks

We argue that computational models have an essential role in uncovering the principles behind a variety of biological phenomena that cannot be approached by other means. In this talk we shall focus on evolution. Living organisms function according to complex mechanisms that operate in different ways depending on conditions. Darwin's theory of evolution suggests that such mechanisms evolved through random variation guided by natural selection. However, there has existed no theory that would explain quantitatively which mechanisms can so evolve in realistic population sizes within realistic time periods, and which are too complex. Here we suggest such a theory. Evolution is treated as a form of computational learning from examples in which the course of learning depends only on the aggregate fitness of the current hypothesis on the examples, and not otherwise on individual examples. We formulate a notion of evolvability that distinguishes function classes that are evolvable with polynomially bounded resources from those that are not. For example, we can show that monotone Boolean conjunctions and disjunctions are demonstrably evolvable over the uniform distribution, while Boolean parity functions are demonstrably not. We shall discuss some broader issues in evolution and intelligence that can be addressed via such an approach.
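
A caricature of the framework in code (my toy, with invented parameters; the formal model is defined more carefully, with explicit neighborhoods, tolerances, and polynomial resource bounds): evolve a monotone conjunction by single-variable mutations, accepting a mutation only when its aggregate fitness on fresh random examples is clearly higher. Runs like this one typically recover the target conjunction.

```python
import random

random.seed(2)
N_VARS = 5
TARGET = frozenset({0, 3})          # ideal function: the conjunction x0 AND x3
SAMPLES = 3000                      # examples behind each aggregate fitness call

def conj(hyp, x):
    return all(x[i] for i in hyp)   # the empty conjunction is identically true

def fitness(hyp):
    """Aggregate performance: fraction of agreement with the ideal function
    over fresh uniform random examples. As in the evolvability framework,
    the search sees only this aggregate, never individual examples."""
    hits = 0
    for _ in range(SAMPLES):
        x = [random.random() < 0.5 for _ in range(N_VARS)]
        hits += conj(hyp, x) == conj(TARGET, x)
    return hits / SAMPLES

hyp = set()                          # start from the empty conjunction
for _ in range(200):
    i = random.randrange(N_VARS)     # neighborhood: toggle a single variable
    cand = hyp ^ {i}
    if fitness(cand) > fitness(hyp) + 0.015:   # keep clearly beneficial mutations
        hyp = cand
print("evolved:", sorted(hyp), " target:", sorted(TARGET))
```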