Discrete Analysis Seminar

This seminar is held weekly on Fridays, 12:30-2pm, in Evans 891. Contact the organizers if you would like to be added to the mailing list.

Schedule (Spring 2023)

Date Presenter Topic Links
January 20 Louis Golowich Quantum local testability of the hemicubic code
Quantum locally testable codes provide a means for encoding quantum data that protects against errors in such a way that the errors can be detected using only local tests on the encoded data. This notion of quantum local testability (for a certain class of codes, namely CSS codes) can be expressed as a form of high-dimensional expansion, which generalizes graph expansion to chain complexes. We will describe the state-of-the-art construction of such codes, which is based on the hemicube, a cellular complex that is topologically equivalent to real projective space. No prior background in quantum codes will be assumed.
arxiv
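For reference, one common way to formalize the soundness of a locally testable code (applied to each of the X and Z check matrices in the CSS case; normalizations vary, so this is only one standard convention, not necessarily the one used in the talk): a code \(C = \ker H\) with check matrix \(H \in \mathbb{F}_2^{m \times n}\) has soundness \(\rho\) if every word far from the code violates proportionally many checks,
\[ \frac{|Hx|}{m} \;\ge\; \rho \cdot \frac{d(x, C)}{n} \qquad \text{for all } x \in \mathbb{F}_2^n, \]
where \(|Hx|\) is the number of violated checks and \(d(x, C)\) is the Hamming distance from \(x\) to the code.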
January 27 Louis Golowich Recent work and open questions on quantum locally testable codes
In a continuation of last week's talk, we will first finish the analysis of the hemicubic code. While the length-\(N\) hemicubic code has order \(\sqrt N\) distance and \(1/\log N\) soundness, making it essentially the state-of-the-art quantum locally testable code, it remains an open question to improve these parameters to linear distance and constant soundness (as well as to improve the rate and locality parameters). We will discuss this open problem in more depth, and describe some recent constructions that improve certain parameters at the cost of others.
arxiv
February 3 Zack Stier A quantum algorithm for functions of multiple commuting Hermitian matrices
We will introduce the multivariate quantum eigenvalue transform, and discuss an application to matrix functions of normal matrices. Joint work with Yonah Borns-Weil and Tahsin Saffat.
February 10 Edward Zeng Supersymmetry in Random Matrix Theory
We will present supercommutative Gaussian integration and show how it can be used to compute important statistics in random matrix theory.
Tao's blog notes
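As background, the basic identities behind the supersymmetric method pair an ordinary complex Gaussian integral with its Grassmann (anticommuting) counterpart; the normalizations below are one standard convention, not necessarily the one used in the talk. For a matrix \(A\) with positive-definite Hermitian part,
\[ \int_{\mathbb{C}^n} e^{-\bar z^{\mathsf T} A z} \prod_{i=1}^n \frac{d\bar z_i\, dz_i}{\pi} \;=\; \frac{1}{\det A}, \qquad \int e^{-\bar\theta^{\mathsf T} A \theta} \prod_{i=1}^n d\bar\theta_i\, d\theta_i \;=\; \det A, \]
so ratios of characteristic polynomials, and hence many spectral statistics, can be written as Gaussian superintegrals.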
February 17 Elizabeth Yang A Matrix Trickle-Down Theorem
Given a high-dimensional expander, Oppenheim's trickle-down theorem gives a clean way to deduce the expansion (i.e. upper bound the second eigenvalue) of the lower-order links using bounds on the expansion of the highest-order links. We provide a generalization of Oppenheim's trickle-down theorem that replaces bounds on the second eigenvalue with matrix inequalities instead. Time permitting, we will demonstrate a problem for which the matrix-trickle-down theorem has an advantage over Oppenheim's trickle-down theorem.
arxiv
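For comparison, the scalar statement being generalized, in one common formulation: if \(X\) is a simplicial complex whose 1-skeleton is connected and each of whose vertex links is connected with second eigenvalue at most \(\lambda\) (for the normalized random-walk operator), then
\[ \lambda_2(X) \;\le\; \frac{\lambda}{1 - \lambda}. \]
Applying this recursively propagates eigenvalue bounds from the top-dimensional links down to all lower-order links.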
February 24 Joao Basso Recent advances on the union-closed sets conjecture
A union-closed family \(\mathcal F\) of sets is one where the union of any two sets in the family is also in the family. Frankl conjectured in 1979 that, given such a family, there must exist an element that is present in at least half of the sets. Until recently, the best guarantee was that the most frequent element is present in an \(\Omega(1/\log|\mathcal F|)\) fraction of the sets. Three months ago, Justin Gilmer used an information-theoretic approach to improve this to a constant of 0.01. Just a few days later, other papers came out improving this constant to 0.38.... We will discuss the history of this conjecture, Gilmer's technical breakthrough, and the follow-up improvements.
arxiv
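To make the definitions concrete, here is a small self-contained Python check of the two quantities in play (union-closedness, and the frequency of the most common element); the example family below is ours, chosen only for illustration.

from itertools import combinations

def is_union_closed(family):
    """Check that the union of any two sets in the family is also in the family."""
    fam = set(frozenset(s) for s in family)
    return all(frozenset(a | b) in fam for a, b in combinations(fam, 2))

def max_element_frequency(family):
    """Return the largest fraction of sets containing any single element."""
    fam = [set(s) for s in family]
    universe = set().union(*fam)
    return max(sum(x in s for s in fam) / len(fam) for x in universe)

# Example: a small union-closed family; Frankl's conjecture asserts
# that the value printed below is at least 1/2.
F = [set(), {1}, {2}, {1, 2}, {1, 2, 3}]
assert is_union_closed(F)
print(max_element_frequency(F))  # element 1 appears in 3 of 5 sets -> 0.6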
March 3 Sebi Cioaba Optimal distortion embeddings of distance-regular graphs into Euclidean spaces
Embedding graphs into Euclidean spaces with least distortion is a topic well-studied in mathematics and computer science. Despite this research, there are just a few graphs for which the precise least distortion and a least-distortion embedding are known. In 2008, Vallentin studied this problem for distance-regular graphs and obtained a lower bound for the least distortion of a distance-regular graph. In addition, he showed that this bound is tight for Hamming and Johnson graphs as well as strongly regular graphs, and conjectured that his bound is always tight for distance-regular graphs. In this talk, I will describe our current progress on this problem. This is joint work with Himanshu Gupta (University of Delaware), Ferdinand Ihringer (Ghent University, Belgium), and Hirotake Kurihara (Yamaguchi University, Japan).
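For reference, the quantity being optimized, in one standard formulation: for a graph \(G\) with shortest-path metric \(d_G\), an embedding \(\varphi : V \to \mathbb{R}^k\) has distortion
\[ \mathrm{dist}(\varphi) \;=\; \Big( \max_{u \neq v} \frac{\|\varphi(u) - \varphi(v)\|}{d_G(u,v)} \Big) \cdot \Big( \max_{u \neq v} \frac{d_G(u,v)}{\|\varphi(u) - \varphi(v)\|} \Big), \]
and the least distortion of \(G\) into Euclidean space is the infimum of \(\mathrm{dist}(\varphi)\) over all such embeddings.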
March 10 Tarun Kathuria Continuous-Time Derivations of Accelerated Methods with applications to regression problems
In this talk, I'll first describe a Lagrangian/Hamiltonian framework from which one can derive the accelerated gradient methods of Nesterov in continuous time. I will then show how different discretizations of this ODE lead to Nesterov's method for (higher-order) acceleration, as well as a better discretization scheme due to Monteiro-Svaiter which gives better results than Nesterov's scheme under higher-order smoothness assumptions. The latter is essential for deriving the best known rates for \(\ell_p\) regression and approximate \(\ell_{\infty}\) regression, which also captures the max flow problem. Time permitting, I will go into those applications.
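One well-known instance of this continuous-time picture is the Su-Boyd-Candès ODE, which arises as a limit of Nesterov's accelerated gradient method (presumably recoverable as a special case of the Lagrangian framework mentioned above, though the derivation in the talk may differ in details): for convex \(f\),
\[ \ddot X(t) + \frac{3}{t} \dot X(t) + \nabla f(X(t)) = 0, \qquad f(X(t)) - f(x^\star) = O\!\left(\frac{1}{t^2}\right), \]
and the different discretizations mentioned above correspond to different ways of stepping this ODE (and its higher-order analogues) in discrete time.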
March 17 Sidhanth Mohanty Hyperbolic space and expansion
Hyperbolic space refers to a plane with a distance metric where parallel lines diverge. In this talk, we will discuss some basics of hyperbolic space, and illustrate how one can prove expansion of some simple-to-describe graphs using spectra of hyperbolic surfaces.
springer (see chapter 4)
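For concreteness, one standard model of the hyperbolic plane is the upper half-plane \(\mathbb{H} = \{x + iy : y > 0\}\) with metric
\[ ds^2 = \frac{dx^2 + dy^2}{y^2}, \]
whose geodesics are vertical lines and semicircles meeting the real axis at right angles; the hyperbolic surfaces in question are quotients \(\Gamma \backslash \mathbb{H}\) by discrete groups \(\Gamma\), and it is their Laplacian spectra that get related to graph expansion.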
March 24 Rikhav Shah Is a graph curved like a manifold?
Last semester we concluded that graphs were not manifolds. However, it's still fruitful to compute graph analogs of manifold properties/quantities. One such quantity is 'curvature', and many graph analogs have been proposed. A paper from February of this year explores a newly proposed one, showing it has nice implications for diameter, expansion, mixing times, and more.
arxiv
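For orientation, one widely studied graph analog of Ricci curvature (not necessarily the notion from the paper discussed in the talk) is Ollivier-Ricci curvature: writing \(\mu_x\) for the one-step random-walk distribution at \(x\) and \(W_1\) for the transportation (Wasserstein) distance,
\[ \kappa(x, y) \;=\; 1 - \frac{W_1(\mu_x, \mu_y)}{d(x, y)}, \]
and uniform lower bounds on \(\kappa\) are known to control diameter, mixing, and concentration, in the spirit of the implications mentioned above.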
April 7 Vilas Winstein A Bird's-Eye View of Longest Increasing Subsequences
How long is the longest increasing subsequence in a random permutation? We will discuss the answer(s) to this question and some of the ideas involved, skipping almost all of the details. The proof goes through the discovery of a limit shape for large random Young diagrams, which correspond to permutations via the Robinson-Schensted algorithm. In this talk, we will mainly focus on the LLN-esque result due to Vershik, Kerov, Logan, and Shepp, but we will also state and briefly discuss the CLT-esque result due to Baik, Deift, and Johansson, where a mysterious (to me, at least) connection to random matrix theory appears.
notes
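The two results mentioned above, stated for the length \(L_n\) of the longest increasing subsequence of a uniformly random permutation of \(\{1, \dots, n\}\):
\[ \frac{L_n}{\sqrt n} \to 2 \text{ in probability (Vershik-Kerov, Logan-Shepp)}, \qquad \frac{L_n - 2\sqrt n}{n^{1/6}} \Rightarrow F_{\mathrm{TW}} \text{ (Baik-Deift-Johansson)}, \]
where \(F_{\mathrm{TW}}\) is the Tracy-Widom distribution governing the largest-eigenvalue fluctuations of a large GUE random matrix, which is where the random matrix connection enters.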
April 14 Zack Stier Concentration of Gaussian Cayley matrices
Gaussian Cayley matrices are random matrices formed by weighting representations of a given group by i.i.d. Gaussian random variables. In this talk, we will discuss recent work on controlling the expectation of the operator norm of Gaussian Cayley matrices. Time permitting, we will discuss an application to special cases of the matrix Spencer conjecture.
notes
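Concretely (notation ours, following the description above): given a finite group \(G\) with a unitary representation \(\pi\), the random matrix in question is
\[ A \;=\; \sum_{g \in G} x_g\, \pi(g), \qquad x_g \text{ i.i.d. } \mathcal{N}(0, 1), \]
and the talk concerns bounds on \(\mathbb{E}\,\|A\|_{\mathrm{op}}\).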
April 21 Rikhav Shah How to count badly but efficiently
Let's define a 'counting' data structure. It supports two operations: increment and query. Increment increases some internal counter by one, and query returns the value of the counter. Counting to \(n\) requires \(\log n\) bits of memory. However, for some real-world systems \(n\) is large enough to make this memory cost prohibitive. If we can tolerate some error in our count and have access to randomness, then we can do much better. In particular, if query needs only to return a constant-factor approximation of the true count with high probability, one only needs \(\log \log n\) bits.
Recently, there's been interest in finding the optimal pseudo-deterministic data structure for this task. That is, we have access to randomness, but the output of the query operation should be highly concentrated on a single value (so the data structure 'appears' deterministic to an outside observer). A couple of weeks ago, a lower bound of \(\Omega(\log n)\) bits was shown.
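One classical construction achieving the \(\log \log n\) bound mentioned above is Morris's approximate counter; the Python sketch below only illustrates that idea, and is not the scheme analyzed in the recent lower bound.

import random

class MorrisCounter:
    """Morris's approximate counter: stores only x, which is about log2(n),
    so the memory used is O(log log n) bits."""
    def __init__(self):
        self.x = 0

    def increment(self):
        # Increase the stored exponent with probability 2^(-x).
        if random.random() < 2.0 ** (-self.x):
            self.x += 1

    def query(self):
        # 2^x - 1 is an unbiased estimate of the number of increments.
        return 2 ** self.x - 1

# Averaging independent copies reduces the variance enough to give a
# constant-factor approximation with high probability, while keeping
# the total memory at O(log log n) bits.
n = 100_000
counters = [MorrisCounter() for _ in range(30)]
for _ in range(n):
    for c in counters:
        c.increment()
print(sum(c.query() for c in counters) / len(counters))  # roughly n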
April 28 Sergio Escobar The Eastin-Knill no-go theorem and a circumvention using quantum error correction
A vital component of reliable quantum computation is the ability to protect against errors that occur during computation. Quantum error correcting codes (QECCs) are designed to address this problem. However, achieving full protection against arbitrary errors comes at a price: it has been shown that there does not exist a quantum error correcting code which can correct arbitrary single-qubit errors and is capable of transversally implementing a universal gate set. We present a way to circumvent this restriction on quantum gate sets by introducing triorthogonal stabilizer codes.
arxiv