Series: ACO Seminar

We consider the problem of finding an unknown graph by using two types of queries with an additive property. Given a graph, an additive query asks the number of edges in a set of vertices, while a cross-additive query asks the number of edges crossing between two disjoint sets of vertices. For weighted graphs, the queries return the sum of the edge weights instead. These query types are partially motivated by DNA shotgun sequencing and the linkage discovery problem in artificial intelligence. For a given unknown weighted graph G with n vertices, m edges, and a certain mild condition on the weights, we prove that there exists a non-adaptive algorithm to find the edges of G using O\left(\frac{m\log n}{\log m}\right) queries of both types, provided that m \geq n^{\epsilon} for any constant \epsilon > 0. For an unweighted graph, the same bound is shown to hold for the full range of m. This settles a conjecture of Grebinski for finding an unweighted graph using additive queries. We also consider the problem of finding the Fourier coefficients of a certain class of pseudo-Boolean functions. A similar coin weighing problem is also considered. (This is joint work with S. Choi.)
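The two query types can be sketched concretely. The snippet below is a minimal illustration, assuming a hypothetical representation of the hidden weighted graph as a dict mapping vertex pairs to weights; in the talk's model these answers come from an oracle, not from direct access to the graph.

```python
# Hypothetical representation: G maps frozenset({u, v}) -> edge weight.

def additive_query(G, S):
    """Sum of weights of edges with both endpoints in S."""
    return sum(w for e, w in G.items() if e <= S)

def cross_additive_query(G, S, T):
    """Sum of weights of edges with one endpoint in S and the other in T."""
    assert not (S & T), "S and T must be disjoint"
    return sum(w for e, w in G.items()
               if len(e & S) == 1 and len(e & T) == 1)

# Toy graph on vertices {0, 1, 2, 3}
G = {frozenset({0, 1}): 2.0, frozenset({1, 2}): 3.0, frozenset({2, 3}): 1.0}
print(additive_query(G, {0, 1, 2}))             # edges {0,1}, {1,2} -> 5.0
print(cross_additive_query(G, {0, 1}, {2, 3}))  # only edge {1,2} crosses -> 3.0
```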

Series: ACO Seminar

Toda proved in 1989 that the (discrete) polynomial time hierarchy, {\bf PH}, is contained in the class {\bf P}^{#\bf P}, namely the class of languages that can be decided by a Turing machine in polynomial time given access to an oracle with the power to compute a function in the counting complexity class #{\bf P}. This result, which illustrates the power of counting, is considered a seminal result in computational complexity theory. An analogous result in complexity theory over the reals (in the sense of Blum-Shub-Smale real Turing machines) has been missing so far. We formulate and prove a real analogue of Toda's theorem. Unlike Toda's proof in the discrete case, which relied on sophisticated combinatorial arguments, our proof is topological in nature. (Joint work with Thierry Zell.)

Series: ACO Seminar

Low-rank models are frequently used in machine learning and statistics. An important domain of application is provided by collaborative filtering, whereby a low-rank matrix describes the ratings that a large set of users attribute to a large set of products. The problem is in this case to predict future ratings from a sparse subset currently available. The dataset released for the Netflix challenge provides an ideal testbed for theory and algorithms for learning low-rank matrices. Given M, a random n x n matrix of rank r, we assume that a uniformly random subset E of its entries is observed. We describe an efficient procedure that reconstructs M from |E| = O(rn) observed entries with arbitrarily small root mean square error, whenever M satisfies an appropriate incoherence condition. If r = O(1), the algorithm reconstructs M exactly from O(n log n) entries. This settles a recent open problem by Candes and Recht. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices. [Based on joint work with R. H. Keshavan and S. Oh]
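The information-theoretic point, that far fewer than n^2 entries can determine a low-rank matrix, is easiest to see in the rank-1 case. The toy sketch below is not the talk's spectral algorithm; it assumes an idealized observation pattern (the full first row and first column, with a nonzero corner entry) and shows that 2n - 1 entries then pin down all n^2.

```python
# Rank-1 toy case: if M = u v^T with M[0][0] != 0, then every entry factors
# as M[i][j] = M[i][0] * M[0][j] / M[0][0], so the first row and first
# column (2n - 1 observed entries) determine the whole matrix.

def complete_rank1(row0, col0):
    """Reconstruct a rank-1 matrix from its first row and first column."""
    assert row0[0] == col0[0] and abs(row0[0]) > 1e-12
    return [[ci * rj / row0[0] for rj in row0] for ci in col0]

u = [1.0, 2.0, 3.0]
v = [4.0, 5.0, 6.0]
M = [[ui * vj for vj in v] for ui in u]          # hidden rank-1 matrix
M_hat = complete_rank1(M[0], [r[0] for r in M])  # uses only 2n - 1 entries
assert M_hat == M
```

With random (rather than hand-picked) observations, this exact factoring is unavailable, which is why the general problem requires the incoherence condition and the spectral machinery described in the abstract.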

Series: ACO Seminar

We consider the problem of a search engine trying to assign a sequence of search keywords to a set of competing bidders, each with a daily spending limit. The goal is to maximize the revenue generated by these keyword sales, bearing in mind that, as some bidders may eventually exceed their budget, not all keywords should be sold to the highest bidder. We assume that the sequence of keywords (or equivalently, of bids) is revealed on-line. Our concern will be the competitive ratio for this problem versus the off-line optimum.
We extend the current literature on this problem by considering the setting where the keywords arrive in a random order. In this setting we are able to achieve a competitive ratio of 1-\epsilon under some mild, but necessary, assumptions. In contrast, it is already known that when the keywords arrive in an adversarial order, the best competitive ratio is bounded away from 1. Our algorithm is motivated by PAC learning, and proceeds in two parts: a training phase, and an exploitation phase.
Joint work with Tom Hayes, UNM.
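For contrast with the speakers' two-phase training/exploitation algorithm, the baseline below sketches plain budget-capped greedy allocation on a random-order keyword sequence. The bidder names, bids, and budgets are invented for illustration.

```python
import random

def greedy_allocate(keywords, bids, budgets):
    """Assign each arriving keyword to the bidder with the highest
    effective bid, where bids[b][k] is bidder b's bid on keyword k and
    an effective bid is capped by the bidder's remaining budget."""
    remaining = dict(budgets)
    revenue = 0.0
    for k in keywords:
        best = max(remaining,
                   key=lambda b: min(bids[b].get(k, 0.0), remaining[b]))
        pay = min(bids[best].get(k, 0.0), remaining[best])
        remaining[best] -= pay
        revenue += pay
    return revenue

# Two bidders with symmetric preferences and equal budgets (made-up data).
bids = {"A": {"shoes": 2.0, "hats": 1.0}, "B": {"shoes": 1.0, "hats": 2.0}}
budgets = {"A": 2.0, "B": 2.0}
seq = ["shoes", "hats", "shoes", "hats"]
random.shuffle(seq)  # random arrival order, as in the talk's setting
print(greedy_allocate(seq, bids, budgets))  # 4.0 for every order of this toy input
```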

Series: ACO Seminar

A central characteristic of a computer chip is the speed at which it processes data, determined by the time it takes electrical signals to travel through the chip. A major challenge in the design of a chip is to achieve timing closure, that is, to find a physical realization fulfilling the speed specifications. We give an overview of the major tasks for optimizing the performance of computer chips and present several new algorithms. For the topology generation of repeater trees, we introduce a variant of the Steiner tree problem and present a fast algorithm that balances efficiently between resource consumption and performance. Another indispensable task is gate sizing, a discrete optimization problem with nonlinear or PDE constraints, for which a fast heuristic is introduced. Its effectiveness in practice is demonstrated by comparison with newly developed lower bounds on the achievable delay. We conclude with a variant of the time-cost tradeoff problem from project management. In contrast to the usual formulation, cycles are allowed. We present a new method to compute the time-cost tradeoff curve in such instances using combinatorial algorithms. Several problems in chip design can be modeled as time-cost tradeoff problems, e.g. threshold voltage optimization or plane assignment.
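In the classical acyclic time-cost tradeoff (unlike the cyclic variant the talk addresses), project duration is the longest path through the activity network, and "crashing" an activity buys a shorter duration at extra cost. The sketch below computes that critical-path length on a made-up network and shows how crashing one critical activity shortens the project.

```python
def project_duration(succ, source, sink):
    """Longest (critical) path length from source to sink in an acyclic
    activity network; succ[v] lists (successor, activity duration) pairs."""
    memo = {}
    def dist(v):
        if v == sink:
            return 0
        if v not in memo:
            memo[v] = max(d + dist(w) for w, d in succ[v])
        return memo[v]
    return dist(source)

# Made-up network: two parallel chains s->a->t and s->b->t.
net = {"s": [("a", 5), ("b", 2)], "a": [("t", 4)], "b": [("t", 4)], "t": []}
print(project_duration(net, "s", "t"))      # 9, via the critical path s-a-t
# Crashing the 5-day activity on the critical path down to 3 days:
crashed = {"s": [("a", 3), ("b", 2)], "a": [("t", 4)], "b": [("t", 4)], "t": []}
print(project_duration(crashed, "s", "t"))  # 7
```

With cycles allowed, as in the talk, this longest-path recursion no longer terminates, which is what makes the cyclic variant require different combinatorial methods.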
