
Series: Dissertation Defense

This thesis addresses the asymptotic behavior of, and statistical inference methods for, several newly proposed risk measures, including relative risk and conditional value-at-risk. These metrics are intended to capture tail risk and/or systemic risk in financial markets. We first consider conditional value-at-risk based on a linear regression model, extending the assumptions on the predictors and errors to make the model more flexible for financial data. We then consider a relative risk measure based on a benchmark variable, proposed as a monitoring index for systemic risk of the financial system. We also propose a new tail dependence measure based on the limit of conditional Kendall's tau, which can distinguish between asymptotic independence and asymptotic dependence in extreme value theory. For each of these measures we derive both normal and chi-squared approximations, which serve as the basis for inference methods. For the normal approximation, the asymptotic variances are too complicated to estimate because of the complex forms of the risk measures, yet quantifying uncertainty is a practical and important issue in risk management; we therefore propose several empirical likelihood methods that construct interval estimates from the chi-squared approximation.
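
As a toy illustration of the empirical versions of these tail-risk measures (the function name and the dyadic level 0.75 are illustrative choices, not the thesis's regression-based or empirical likelihood estimators), a sketch of sample VaR and CVaR:

```python
import math

def var_cvar(losses, alpha=0.95):
    """Empirical VaR (the alpha-quantile of the losses) and CVaR (the mean
    loss at or beyond that quantile, i.e. the expected tail loss)."""
    xs = sorted(losses)
    k = math.ceil(alpha * len(xs)) - 1    # index of the empirical alpha-quantile
    tail = xs[k:]
    return xs[k], sum(tail) / len(tail)

# toy loss sample 1, 2, ..., 100; alpha = 0.75 is dyadic, so the toy
# arithmetic is exact: VaR = 75, CVaR = mean of 75..100 = 87.5
losses = list(range(1, 101))
var75, cvar75 = var_cvar(losses, alpha=0.75)
```

CVaR is always at least VaR, since it averages losses beyond the quantile; that monotonicity is one reason it is preferred as a tail-risk summary.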

Series: Dissertation Defense

A subdivision of a graph G, also known as a topological G and denoted by TG, is a graph obtained from G by replacing certain edges of G with internally vertex-disjoint paths. This dissertation has two parts: the first studies a structural problem and the second an extremal problem. In the first part, we focus on TK_5, the subdivisions of K_5. A well-known theorem of Kuratowski from 1932 states that a graph is planar if and only if it does not contain a subdivision of K_5 or K_{3,3}. Wagner proved in 1937 that if a graph other than K_5 does not contain any subdivision of K_{3,3}, then it is planar or it admits a cut of size at most 2. Kelmans and, independently, Seymour conjectured in the 1970s that if a graph does not contain any subdivision of K_5, then it is planar or it admits a cut of size at most 4. In this dissertation, we give a proof of the Kelmans-Seymour conjecture. We also discuss several related results and problems. The second part of this dissertation concerns subdivisions of large cliques in C_4-free graphs. Mader conjectured that every C_4-free graph with average degree d contains a TK_l with l = \Omega(d). Komlos and Szemeredi reduced the problem to expanders and proved Mader's conjecture for n-vertex expanders with average degree d < exp((log n)^(1/8)). In this dissertation, we show that Mader's conjecture is true for n-vertex expanders with average degree d < n^0.3, which improves Komlos and Szemeredi's quasi-polynomial bound to a polynomial bound. As a consequence, we show that every C_4-free graph with average degree d contains a TK_l with l = \Omega(d/(log d)^c) for any c > 3/2. We note that Mader's conjecture has recently been verified by Liu and Montgomery.

Series: Dissertation Defense

Dissertation advisor: Luca Dieci

Numerical optimal transport is an important area of research, but most problems are too large and complex for easy computation. Because continuous transport problems are generally solved by conversion to either discrete or semi-discrete forms, I focused on methods for those two.
I developed a discrete algorithm specifically for fast approximation with controlled error bounds: the general auction method. It works directly on real-valued transport problems, with guaranteed termination and a priori error bounds.
I also developed the boundary method for semi-discrete transport. It works on unaltered ground cost functions, rapidly identifying locations in the continuous space where transport destinations change. Because the method computes over region boundaries, rather than the entire continuous space, it reduces the effective dimension of the discretization.
The general auction is the first relaxation method designed for compatibility with real-valued costs and weights. The boundary method is the first transport technique designed explicitly around the semi-discrete problem and the first to use the shift characterization to reduce dimensionality. No truly comparable methods exist.
The general auction and boundary method are able to solve many transport problems that are intractable using other approaches. Even where other solution methods exist, testing suggests that the general auction and boundary method outperform them.
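
The auction idea that the general auction method builds on can be sketched for the discrete assignment problem (a textbook Bertsekas-style forward auction with illustrative names, not the thesis's general auction itself): each unassigned bidder bids its marginal value plus eps on its favorite object, and any eps > 0 guarantees termination with total benefit within n * eps of optimal — the kind of a priori error bound mentioned above.

```python
def auction_assignment(benefit, eps=1e-3):
    """Forward auction for the assignment problem.

    benefit[i][j] is the value of giving object j to bidder i.  With eps > 0
    the auction terminates, and the result is within n * eps of optimal.
    Returns assigned, where assigned[i] is the object owned by bidder i.
    """
    n = len(benefit)
    prices = [0.0] * n
    owner = [None] * n          # object -> bidder
    assigned = [None] * n       # bidder -> object
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        # net value of each object for bidder i at current prices
        vals = [benefit[i][j] - prices[j] for j in range(n)]
        j_best = max(range(n), key=vals.__getitem__)
        best = vals[j_best]
        second = max((vals[j] for j in range(n) if j != j_best), default=best)
        prices[j_best] += best - second + eps   # bid up the favorite object
        if owner[j_best] is not None:           # evict the previous owner
            assigned[owner[j_best]] = None
            unassigned.append(owner[j_best])
        owner[j_best] = i
        assigned[i] = j_best
    return assigned

# a 3x3 toy instance whose optimal matching is the diagonal
benefit = [[4, 1, 0], [2, 3, 1], [0, 2, 5]]
match = auction_assignment(benefit)
```

Because the n * eps suboptimality bound here is smaller than the gap to the next-best matching, the auction is forced to find the exact optimum on this instance.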

Series: Dissertation Defense

This thesis explores topics from two distinct fields of mathematics. The first part addresses a theme in abstract harmonic analysis, while the second part focuses on a topic in compressive sensing. The first part of this dissertation explores the domination of operators in harmonic analysis by sparse operators. We use pointwise sparse domination to prove weighted inequalities for Calderón-Zygmund operators, the Hardy-Littlewood maximal operator, and their fractional analogues. Dominating bilinear forms by sparse forms allows us to derive weighted inequalities for oscillatory integral operators (polynomially modulated CZOs) and random discrete Hilbert transforms. The latter are defined on sets of integers with asymptotic density zero, making these weighted inequalities particularly attractive. We also discuss a characterization of a certain weighted BMO space by commutators of multiplication operators with fractional integral operators. Compressed sensing demonstrates the possibility of acquiring and reconstructing sparse signals via underdetermined (linear) systems. It is believed that i.i.d. Gaussian measurement vectors give near-optimal results, with the necessary number of measurements on the order of s log(n/s), where n is the ambient dimension and s the sparsity threshold. The recovery algorithm relies on a certain quasi-isometry property of the measurement matrix. A surprising result is that the same order of measurements gives an analogous quasi-isometry in the extreme quantization of one-bit sensing. Bilyk and Lacey obtained this result as a consequence of a certain stochastic process on the sphere. We will discuss an alternative method that relies heavily on the VC dimension of a class of subsets of the sphere.
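
A toy sketch of one-bit sensing under assumed parameters (function and variable names are illustrative; this is a naive back-projection estimator, not the VC-dimension argument of the thesis): each measurement keeps only the sign of a Gaussian inner product, so only the direction of the signal, never its magnitude, can be recovered.

```python
import math
import random

def one_bit_recover(A, y, s):
    """Back-project the sign measurements and keep the s largest entries."""
    m, n = len(A), len(A[0])
    xhat = [sum(A[k][j] * y[k] for k in range(m)) / m for j in range(n)]
    top = set(sorted(range(n), key=lambda j: abs(xhat[j]), reverse=True)[:s])
    return [xhat[j] if j in top else 0.0 for j in range(n)]

def norm(v):
    return math.sqrt(sum(t * t for t in v))

random.seed(0)
n, s, m = 20, 2, 300
x = [0.0] * n
x[3], x[11] = 1.0, -0.5                    # an s-sparse signal
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
# one-bit measurements: only the sign of each linear measurement survives
y = [1 if sum(a * u for a, u in zip(row, x)) >= 0 else -1 for row in A]
xhat = one_bit_recover(A, y, s)
cosine = sum(a * b for a, b in zip(x, xhat)) / (norm(x) * norm(xhat))
```

With m on the order of a few hundred sign measurements in dimension 20, the recovered direction correlates strongly with the true one, which is the quasi-isometry phenomenon described above in miniature.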

Series: Dissertation Defense

The well-known Kelmans-Seymour conjecture states that every nonplanar 5-connected graph contains TK_5. Ma and Yu proved the conjecture for graphs containing K_4^-. In this thesis, we find special copies of TK_5 in graphs containing K_4^-; that is, we treat two generalizations of their result separately.

Series: Dissertation Defense

The main goal of this thesis is to study integro-differential equations, which arise naturally in the study of stochastic processes with jumps. Such processes are of particular interest in finance, physics, and ecology. In this thesis, we study existence, uniqueness, and regularity of solutions of integro-PDEs in domains of R^n.

Series: Dissertation Defense

This thesis studies the effect of transverse surgery on open books, the Heegaard Floer contact invariant, and tightness. We show that surgery on the connected binding of a genus g open book that supports a tight contact structure preserves tightness if the surgery coefficient is greater than 2g-1. We also give criteria for when positive contact surgery on Legendrian knots will result in an overtwisted manifold.

Series: Dissertation Defense

Advisor: Dr. Federico Bonetto

We consider a model of N particles interacting through a Kac-style collision process, with m particles among them interacting, in addition, with a thermostat. When m = N, we show exponential approach to the equilibrium canonical distribution in terms of the L2 norm, in relative entropy, and in the Gabetta-Toscani-Wennberg (GTW) metric, at a rate independent of N. When m < N, the exponential rate of approach to equilibrium in L2 is shown to behave as m/N for N large, while the relative entropy and the GTW distance from equilibrium exhibit (at least) an "eventually exponential" decay, with a rate scaling as m/N^2 for large N. As an allied project, we obtain a rigorous microscopic description of the thermostat used, based on a model of a tagged particle colliding with an infinite gas in equilibrium at the thermostat temperature. These results are based on joint work with Federico Bonetto, Michael Loss and Hagop Tossounian.
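
A minimal pure-Python sketch of the underlying Kac collision step (a hypothetical toy simulation with illustrative names, not the analysis in the thesis): each collision rotates the velocities of a random pair by a random angle, which conserves the pair energy and hence the total kinetic energy exactly.

```python
import math
import random

def kac_step(v, theta):
    """One Kac collision: rotate the velocity pair (v_i, v_j) by angle theta.
    The rotation preserves v_i**2 + v_j**2, so total energy is conserved."""
    i, j = random.sample(range(len(v)), 2)
    c, s = math.cos(theta), math.sin(theta)
    v[i], v[j] = c * v[i] + s * v[j], -s * v[i] + c * v[j]

random.seed(1)
N = 100
v = [random.gauss(0, 1) for _ in range(N)]
e0 = sum(u * u for u in v)                 # total kinetic energy before
for _ in range(10000):
    kac_step(v, random.uniform(0, 2 * math.pi))
e1 = sum(u * u for u in v)                 # ... and after 10000 collisions
```

Energy conservation is what confines the dynamics to a sphere in velocity space; the thesis's questions concern how fast the distribution on that sphere relaxes to equilibrium, with and without the thermostat.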

Series: Dissertation Defense

The thesis investigates a version of the sum-product inequality studied by
Chang in which one tries to prove the h-fold sumset is large under the
assumption that the 2-fold product set is small. Previous bounds were
logarithmic in the exponent, and we prove the first super-logarithmic
bound. We will also discuss a new technique inspired by convex geometry to
find an order-preserving Freiman 2-isomorphism between a set with small
doubling and a small interval. Time permitting, we will discuss some
combinatorial applications of this result.
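
For small sets these quantities can be computed directly; a minimal sketch (standard definitions only, with a hypothetical example set, not the thesis's techniques) showing the tension that sum-product inequalities quantify: a geometric progression has a small product set but a large sumset.

```python
def sumset(A, B):
    """All pairwise sums a + b."""
    return {a + b for a in A for b in B}

def productset(A, B):
    """All pairwise products a * b."""
    return {a * b for a in A for b in B}

# Powers of 2: products collide (2^i * 2^j depends only on i + j, giving
# 9 values), while distinct binary expansions keep all 15 sums distinct.
A = {1, 2, 4, 8, 16}
n_sums = len(sumset(A, A))          # 15
n_products = len(productset(A, A))  # 9
```

Chang-type results run this in reverse: assuming the 2-fold product set is small, they force an h-fold sumset to be large.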

Series: Dissertation Defense

The thesis considers two distinct strategies for algebraic computation with polynomials in high dimension. The first concerns ideals and varieties with symmetry, which often arise in applications from areas such as algebraic statistics and optimization. We explore the commutative algebra properties of such objects, and work towards classifying when symmetric ideals admit finite descriptions including equivariant Gröbner bases and generating sets. Several algorithms are given for computing such descriptions. Specific focus is given to the case of symmetric toric ideals. A second area of research is on problems in numerical algebraic geometry. Numerical algorithms such as homotopy continuation can efficiently compute the approximate solutions of systems of polynomials, but generally have trouble with multiplicity. We develop techniques to compute local information about the scheme structure of an ideal at approximate zeros. This is used to create a hybrid numeric-symbolic algorithm for computing a primary decomposition of the ideal.
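
As a minimal illustration of the homotopy continuation idea mentioned above (a univariate toy with hypothetical function names, not the hybrid numeric-symbolic algorithm of the thesis): deform a start system with a known root into the target system, correcting with Newton's method at each step of the deformation.

```python
def homotopy_solve(f, df, g, dg, x0, steps=100, newton_iters=5):
    """Track a root of H(x, t) = (1 - t) * g(x) + t * f(x) from a known
    root x0 of the start system g (t = 0) to a root of the target f (t = 1)."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):        # Newton corrector at this t
            h = (1 - t) * g(x) + t * f(x)
            dh = (1 - t) * dg(x) + t * df(x)
            x -= h / dh
    return x

# start system x^2 - 1 = 0 (root at 1) deforms to target x^2 - 4 = 0;
# the tracked path is x(t) = sqrt(1 + 3t), ending at the root 2
root = homotopy_solve(lambda x: x * x - 4, lambda x: 2 * x,
                      lambda x: x * x - 1, lambda x: 2 * x, 1.0)
```

The difficulty the thesis addresses begins exactly where this sketch breaks down: at multiple roots the Jacobian degenerates, and plain Newton correction loses its quadratic convergence.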