This seminar is held weekly on Thursdays, 12:30-2pm, in Evans 732. We meet for lunch just outside Evans at 11:30 before each seminar. Contact the organizers if you would like to be added to the mailing list.
Date | Presenter | Topic and abstract | Links |
---|---|---|---|
September 7 | Zack Stier |
Recent improvements on the Solovay--Kitaev theorem
The Solovay--Kitaev theorem, which has important
ramifications in quantum computing, gives guarantees on
approximations of an arbitrary element of \(SU(2)\)
using a fixed finite set, so long as that set generates
a dense subgroup. Given an element \(g\) and a precision
\(\varepsilon\), it was known how to compute, in
\(\textnormal{poly}\log(1/\varepsilon)\) time, an
\(\varepsilon\)-approximation that uses
\(O(\log^{3.001}(1/\varepsilon))\)-many gates from the
set. Kuperberg recently improved this count to
\(O(\log^{1.45}(1/\varepsilon))\). We will discuss the
proof of this new development.
|
arxiv |
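To make the setting of this talk concrete, here is a minimal Python sketch of the brute-force base case that Solovay--Kitaev-style algorithms recurse on: enumerate words over a fixed finite gate set and keep the word closest to a target element of \(SU(2)\), up to global phase. The gate set \(\{H, T\}\), the search depth, and the helper names are illustrative choices of mine, not taken from the paper; the recursion (classical or Kuperberg's) drives the error down on top of such a base case using only polylogarithmically many gates.

```python
# Brute-force base case of a Solovay--Kitaev-style approximation:
# search words over the gate set {H, T} for the one closest (up to
# global phase) to a target element of SU(2).
import itertools
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard gate
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])   # pi/8 gate
GATES = {"H": H, "T": T}

def dist(U, V):
    # vanishes exactly when U = (phase) * V; abs() guards float error
    return np.sqrt(abs(2 - abs(np.trace(U.conj().T @ V))))

def best_word(target, max_len):
    best, best_d = "", dist(np.eye(2), target)
    for L in range(1, max_len + 1):
        for word in itertools.product("HT", repeat=L):
            U = np.eye(2)
            for g in word:
                U = GATES[g] @ U
            d = dist(U, target)
            if d < best_d:
                best, best_d = "".join(word), d
    return best, best_d

theta = 0.3   # target: a rotation in SU(2)
target = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
print(best_word(target, max_len=10))
```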
September 14 | Omar Alrabiah |
A notion of log concavity on the Boolean cube
Log-concave distributions over \(\mathbb{R}^n\) are
distributions with density functions of the form \(e^V\)
where \(V\) is a concave function over \(\mathbb{R}^n\).
Such distributions demonstrate strong concentration
properties and yet are broad enough to include
distributions such as the Gaussian distribution, the
Laplace distribution, and uniform measures on convex
sets. In search of an analogous notion over the Boolean
cube, Eldan and Shamir study
\(\beta\)-semi-log-concave measures over the
Boolean cube, which are measures \(\nu\) over \(\{-1,
1\}^n\) whose multilinear extension \(f\) satisfies
\(\nabla^2 \log f(x) \le \beta I_n\) for some \(\beta
\ge 0\). They prove that any Hamming-Lipschitz test
function \(\varphi\) over \(\{-1, 1\}^n\) satisfies
\(\mathrm{Var}_{\nu}[\varphi] \le n^{2 - C_\beta}\) for
some \(C_\beta > 0\), which demonstrates a concentration
of measure for \(\nu\). In this talk, we will present the
proof of their result via
stochastic localization.
|
arxiv |
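As a toy illustration of the definition in this abstract (my own construction, not from the paper), the snippet below builds the multilinear extension \(f\) of a random measure \(\nu\) on \(\{-1,1\}^3\) and estimates the largest eigenvalue of \(\nabla^2 \log f\) at a point by finite differences; \(\nu\) is \(\beta\)-semi-log-concave when this quantity is at most \(\beta\) at every interior point.

```python
# Numerically probe beta-semi-log-concavity: compute the largest
# eigenvalue of the Hessian of log f at x, where f is the multilinear
# extension of a random measure nu on {-1,1}^n.
import itertools
import numpy as np

n = 3
rng = np.random.default_rng(0)
cube = np.array(list(itertools.product([-1, 1], repeat=n)))
nu = rng.random(len(cube)); nu /= nu.sum()     # a random measure

def f(x):
    # multilinear extension: E_nu[ prod_i (1 + x_i sigma_i) / 2 ]
    return float(nu @ np.prod((1 + cube * x) / 2, axis=1))

def hessian_logf(x, h=1e-4):
    # central finite differences of log f
    H = np.zeros((n, n))
    g = lambda y: np.log(f(y))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (g(x + ei + ej) - g(x + ei - ej)
                       - g(x - ei + ej) + g(x - ei - ej)) / (4 * h * h)
    return H

x = np.zeros(n)
print("largest eigenvalue of Hess(log f) at 0:",
      np.linalg.eigvalsh(hessian_logf(x)).max())
```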
September 21 | Omar Alrabiah |
A notion of log concavity on the Boolean cube (part 2)
Last week, we presented the martingale method for the
uniform measure (more generally, product measures) over
\(C_n \triangleq \{-1, 1\}^n\), which shows that
\(\mathrm{Var}_{\nu}(\varphi) \le O(n)\) for any
\(1\)-Hamming-Lipschitz function \(\varphi : C_n \to
\mathbb{R}\). We then observed that the martingale
method applies to any localization process on \(\nu\). We
ended with motivating stochastic localization as a
continuous-time version of the martingale method. This
week, we will show that stochastic localization on
\(\nu\) is in fact a localization process, making it
feasible to apply the martingale method in
continuous time. We will then reduce the problem to
showing that the rate of change of exponential tilts on
\(\nu\) with respect to the Wasserstein distance is at
most \(O(\beta n^{1 - c_\beta})\) for some \(c_\beta >
0\), which is the heart of the analysis and the part
that crucially uses the \(\beta\)-semi-log-concavity of
our measure \(\nu\).
|
|
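For readers new to the key object in this talk, a short sketch of an exponential tilt: \(\nu_y(\sigma) \propto \nu(\sigma) e^{\langle y, \sigma\rangle}\). Stochastic localization evolves the tilt parameter \(y_t\) by an SDE; the snippet below (an illustration of mine, not the paper's construction) just computes a tilt and its barycenter for a small random measure on the cube.

```python
# Exponential tilts of a measure nu on {-1,1}^n:
# nu_y(sigma) is proportional to nu(sigma) * exp(<y, sigma>).
import itertools
import numpy as np

n = 3
cube = np.array(list(itertools.product([-1, 1], repeat=n)))
rng = np.random.default_rng(1)
nu = rng.random(len(cube)); nu /= nu.sum()

def tilt(y):
    w = nu * np.exp(cube @ y)
    return w / w.sum()

def mean(y):
    return cube.T @ tilt(y)   # barycenter of the tilted measure

# tilting in the e_1 direction pulls the barycenter toward x_1 = +1
print(mean(np.zeros(n)), mean(np.array([2.0, 0.0, 0.0])))
```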
September 28 | David Wu |
Sampling from the Sherrington-Kirkpatrick Gibbs measure
via algorithmic stochastic localization
Ising models are distributions on the Boolean hypercube
that allow for pairwise spin interactions; these
models have a rich history throughout statistical
physics and TCS. The Sherrington-Kirkpatrick (SK)
ensemble is a restricted family of Ising models whose
interaction matrices have iid Gaussian entries. A longstanding
open problem in this area is to precisely nail down the
temperature threshold for efficiently sampling from the
SK model. El Alaoui, Montanari, and Sellke eschew the
typical Markov chain approach for approximate sampling
(in a total variation sense) and instead shoot for
weaker Wasserstein sampling by simulating the stochastic
localization SDE. The key idea is to use Approximate
Message Passing (AMP) as a subroutine for accurately
estimating the means of the intermediate tilted
distributions in the localization process. With this
approach, they are able to get within a constant factor
of the conjectured temperature threshold for efficient
sampling, and also prove a complementary hardness result
for stable sampling algorithms above the conjectured
threshold. In this talk we will contextualize their
results and explain the main ideas behind their proofs.
|
arxiv |
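The following toy simulation conveys the algorithmic idea of this talk under heavy simplifications: it runs an Euler--Maruyama discretization of the localization SDE \(dy_t = m(y_t)\,dt + dB_t\) for a tiny SK instance, computing the tilted mean \(m(y)\) by brute-force enumeration rather than by AMP (which is the paper's whole point at scale); the size, temperature, and horizon below are arbitrary choices of mine.

```python
# Toy stochastic localization sampler for a small SK Gibbs measure.
import itertools
import numpy as np

n, beta, T, dt = 8, 0.5, 30.0, 0.01
rng = np.random.default_rng(2)
G = rng.normal(size=(n, n)) / np.sqrt(n)
A = (G + G.T) / np.sqrt(2)                  # symmetric Gaussian couplings
cube = np.array(list(itertools.product([-1, 1], repeat=n)))
logw = beta / 2 * np.einsum('si,ij,sj->s', cube, A, cube)

def tilted_mean(y):
    # mean of mu(sigma) * exp(<y, sigma>), by enumeration (stand-in for AMP)
    e = logw + cube @ y
    w = np.exp(e - e.max())
    return cube.T @ (w / w.sum())

y = np.zeros(n)
for _ in range(int(T / dt)):                # Euler--Maruyama steps
    y += tilted_mean(y) * dt + np.sqrt(dt) * rng.normal(size=n)
print(np.sign(tilted_mean(y)).astype(int))  # approximate sample from mu
```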
October 12 | Vilas Winstein |
Cutoff for Glauber Dynamics via Information Percolation
When you want a sample from a high-temperature Ising
model, you run the Glauber dynamics Markov chain until
it is close to the stationary distribution. But how do
you know when that happens? The cutoff phenomenon says
that there is a particular time before which the Markov
chain is far from stationary, and after which the Markov
chain is close to stationary. In this talk we will
discuss a proof of the cutoff phenomenon for this model,
which uses a framework called Information Percolation to
achieve quite precise results.
|
notes |
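For concreteness, here is what the chain in this talk looks like for the Ising model on a cycle (a minimal sketch; the size, temperature, and step count are arbitrary): each step picks a uniformly random site and resamples its spin from the conditional distribution given its neighbors.

```python
# Glauber dynamics for the Ising model on an n-cycle.
import numpy as np

n, beta, steps = 100, 0.3, 20000
rng = np.random.default_rng(3)
sigma = rng.choice([-1, 1], size=n)
for _ in range(steps):
    i = rng.integers(n)
    local = sigma[(i - 1) % n] + sigma[(i + 1) % n]   # neighbor sum
    p_plus = 1 / (1 + np.exp(-2 * beta * local))      # P(spin_i = +1 | rest)
    sigma[i] = 1 if rng.random() < p_plus else -1
print("magnetization:", sigma.mean())
```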
October 19 | Tarun Kathuria |
Convergence of Non-linear Reversible Markov Chains
I'll talk about polynomial Markov chains, a non-linear
generalization of Markov chains that arose in genetic
algorithms and chemical reaction networks. For such
dynamics satisfying a certain stationarity condition,
even convergence is an open problem. Under the added
assumption that the dynamics are reversible (detailed
balanced), a case in which convergence was also open,
I'll present a proof of convergence from any starting
point away from the boundary. This settles the
Global Attractor and Persistence Conjecture for
Reversible Non-Linear Markov Chains.
|
|
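To fix ideas, a quadratic (degree-two) Markov chain evolves a distribution \(x\) on \(n\) states by \(x'_i = \sum_{j,k} K[i,j,k]\, x_j x_k\), where each \(K[\cdot, j, k]\) is a probability vector (think of \(j, k\) as two "parents" in a genetic algorithm). The sketch below iterates a random such chain; note that a random kernel need not satisfy the reversibility condition the talk's result requires, so this illustrates only the dynamics, not the theorem.

```python
# Iterate a random quadratic Markov chain on the probability simplex.
import numpy as np

n = 4
rng = np.random.default_rng(4)
K = rng.random((n, n, n))
K /= K.sum(axis=0)                  # each K[:, j, k] is a distribution
K = (K + K.transpose(0, 2, 1)) / 2  # symmetrize in the two parents

x = np.full(n, 1 / n)               # start in the interior of the simplex
for _ in range(200):
    x = np.einsum('ijk,j,k->i', K, x, x)
print(x, x.sum())                   # the iterates stay on the simplex
```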
October 26 | Rikhav Shah |
Algorithmic Lovasz Local Lemma via Entropy Compression
Suppose we are attempting to construct an object which
satisfies a given list of constraints. The Lovasz Local
Lemma states that such an object will exist provided
that (a) the constraints are not too dependent on each
other and (b) each individual constraint is satisfied
with high probability. It turns out that a simple randomized
algorithm will efficiently find a construction
satisfying all the constraints. The analysis uses an
interesting technique called
entropy compression which this talk will
describe.
|
|
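The simple randomized algorithm is Moser's resampling procedure; here is a minimal Python sketch for \(k\)-SAT, the setting in which entropy compression is usually presented: while some clause is violated, resample all variables of one violated clause. The example formula at the bottom is made up for illustration.

```python
# Moser-style resampling for k-SAT. Entropy compression shows this
# terminates quickly when each clause shares variables with few others.
import random

def moser_resample(n, clauses, rng=random.Random(0)):
    # clauses: lists of signed literals over variables 1..n, e.g. [1, -3, 4]
    x = [rng.random() < 0.5 for _ in range(n + 1)]   # 1-indexed assignment
    def violated(c):
        return all(x[abs(l)] != (l > 0) for l in c)  # every literal false
    while True:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return x[1:]
        for l in bad[0]:            # resample one violated clause's variables
            x[abs(l)] = rng.random() < 0.5

clauses = [[1, 2, 3], [-1, 2, 4], [1, -2, -4], [-3, -4, 2]]
print(moser_resample(4, clauses))
```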
November 2 | Zack Stier |
Asano's method for Lee--Yang-type theorems
In this talk we will discuss Asano's method, which can
be used to prove Lee--Yang-type theorems concerning
zero-free regions of partition functions for systems in
statistical mechanics, which in turn correspond to an
absence of phase transitions. Asano's method was first
formulated in the quantum setting using special
differential operators on a certain class of rational
functions, but can be realized in the classical setting
as a specific graph contraction. In the classical
setting, we will discuss the ferromagnetic Ising model,
and in the quantum setting we will discuss the
Heisenberg XXZ model.
|
notes |
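The combinatorial core of the classical setting is easy to state: for a multi-affine polynomial \(P = a + bz + cw + dzw\), the Asano contraction identifies \(z\) and \(w\) and keeps only \(a + dz\); if \(P\) is nonvanishing whenever \(|z|, |w| > 1\), the contraction is nonvanishing for \(|z| > 1\), which is how edge partition functions get glued into a Lee--Yang theorem. A small symbolic sketch (the edge factor is an illustrative ferromagnetic choice of mine):

```python
# One Asano contraction, symbolically.
import sympy as sp

z, w = sp.symbols('z w')

def asano_contract(P, z, w):
    # keep the constant term and the z*w coefficient: a + d*z
    P = sp.expand(P)
    a = P.coeff(z, 0).coeff(w, 0)
    d = P.coeff(z, 1).coeff(w, 1)
    return a + d * z

# ferromagnetic Ising edge factor with coupling J (fugacities z, w)
J = sp.Symbol('J', positive=True)
P_edge = sp.exp(J) + sp.exp(-J) * (z + w) + sp.exp(J) * z * w
print(asano_contract(P_edge, z, w))   # exp(J) + exp(J)*z
```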
November 9 | Zack Stier |
Asano's method for Lee--Yang-type theorems (part 2)
We will discuss Asano's method, which can be used to
prove Lee--Yang-type theorems concerning zero-free
regions of partition functions for systems in
statistical mechanics, which in turn correspond to an
absence of phase transitions. Asano's method was first
formulated in the quantum setting using special
differential operators on a certain class of rational
functions, but can be realized in the classical setting
as a specific graph contraction. In the classical
setting, the subject of the first part of the talk, we
discussed the ferromagnetic Ising model, and in the
quantum setting this week we will discuss the Heisenberg
XXZ model.
|
|
December 1 | Rikhav Shah |
"Inverting" singular matrices with the Grushin problem
method.
Sometimes we want to invert non-invertible matrices; for
instance, we may wish to solve an overspecified system
Ax=b, or compute the resolvent of a perturbed operator
at a possible eigenvalue. The "Grushin problem method"
suggests we augment the matrix by adding some rows and
columns in a way that guarantees the resulting matrix is
invertible. The inverse of this augmented matrix often
encodes enough information about the original matrix to
solve the problem at hand.
|
intro example |
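As a minimal numerical illustration (a toy example of mine, separate from the linked intro example): take a rank-one symmetric \(2 \times 2\) matrix, border it with a row and a column that see the kernel and cokernel, and observe that the top-left block of the inverse of the bordered matrix solves \(Ax = b\) whenever \(b\) lies in the range of \(A\).

```python
# Bordering a singular matrix to make it invertible (a Grushin problem).
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0]])       # rank 1, hence singular
# ker(A) and ker(A^T) are both spanned by (2, -1)
R_minus = np.array([[2.0], [-1.0]])          # column patching the missing range
R_plus = np.array([[2.0, -1.0]])             # row pinning down the kernel
M = np.block([[A, R_minus],
              [R_plus, np.zeros((1, 1))]])   # bordered ("Grushin") matrix
print("bordered matrix invertible:", np.linalg.matrix_rank(M) == 3)

E = np.linalg.inv(M)
b = A @ np.array([1.0, 1.0])                 # a right-hand side in range(A)
x = E[:2, :2] @ b                            # top-left block acts as an inverse
print("A @ x == b:", np.allclose(A @ x, b))
```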