Preprints
Sparse Pseudospectral Shattering [PDF / arXiv:2411.19926]
Rikhav Shah, Nikhil Srivastava, Edward Zeng
Abstract: The eigenvalues and eigenvectors of
nonnormal matrices can be unstable under perturbations of
their entries. This presents an obstacle to the analysis of
numerical algorithms for non-Hermitian eigenvalue problems.
A recent technique for handling this issue is pseudospectral
shattering [BGVKS23], which shows that adding a random
perturbation to any matrix has a regularizing effect on the
stability of its eigenvalues and eigenvectors. Prior work
has analyzed the regularizing effect of dense Gaussian
perturbations, in which independent noise is added to every
entry of a given matrix [BVKS20, BGVKS23, BKMS21, JSS21].
We show that the same effect
can be achieved by adding a sparse random perturbation. In
particular, we show that given any $n\times n$ matrix $M$ of
polynomially bounded norm: (a) perturbing $O(n\log^2(n))$
random entries of $M$ by adding i.i.d. complex Gaussians
yields $\log\kappa_V(A)=O(\text{poly}\log(n))$ and $\log
(1/\eta(A))=O(\text{poly}\log(n))$ with high probability;
(b) perturbing $O(n^{1+\alpha})$ random entries of $M$ for
any constant $\alpha>0$ yields
$\log\kappa_V(A)=O_\alpha(\log(n))$ and
$\log(1/\eta(A))=O_\alpha(\log(n))$ with high probability.
Here, $\kappa_V(A)$ denotes the condition number of the
eigenvectors of the perturbed matrix $A$ and $\eta(A)$
denotes its minimum eigenvalue gap.
A key mechanism of the proof
is to reduce the study of $\kappa_V(A)$ to control of the
pseudospectral area and minimum eigenvalue gap of $A$, which
are further reduced to estimates on the least two singular
values of shifts of $A$. We obtain the required least
singular value estimates via a streamlining of an argument
of Tao and Vu [TV07] specialized to the case of sparse
complex Gaussian perturbations.
*Submitted to STOC
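As an illustration of case (a), here is a minimal numerical sketch of a sparse complex Gaussian perturbation. All choices below (the sampling model for entry locations, the noise scale, and the Jordan-block test matrix) are illustrative assumptions, not the paper's exact scheme or constants.

```python
import numpy as np

def sparse_gaussian_perturbation(M, m, sigma=1.0, rng=None):
    """Add i.i.d. complex Gaussians (variance sigma^2) to m uniformly
    random entries of M.  Illustrative sketch only: the location
    sampling and noise scale are assumptions, not the paper's scheme."""
    rng = rng or np.random.default_rng(0)
    n = M.shape[0]
    A = M.astype(complex)               # astype copies, so M is untouched
    rows = rng.integers(0, n, size=m)
    cols = rng.integers(0, n, size=m)
    noise = (rng.standard_normal(m) + 1j * rng.standard_normal(m)) * (sigma / np.sqrt(2))
    np.add.at(A, (rows, cols), noise)   # accumulate noise at repeated positions
    return A

n = 50
M = np.eye(n, k=1)                      # nilpotent Jordan block: highly nonnormal
m = int(n * np.log(n) ** 2)             # O(n log^2(n)) perturbed entries, as in case (a)
A = sparse_gaussian_perturbation(M, m)

eigvals, V = np.linalg.eig(A)
kappa_V = np.linalg.cond(V)             # proxy for the eigenvector condition number
eta = np.min(np.abs(eigvals[:, None] - eigvals[None, :])[~np.eye(n, dtype=bool)])
```

For the unperturbed Jordan block, the eigenvector matrix is singular and the eigenvalue gap is zero; after the sparse perturbation, both \(\kappa_V(A)\) and \(1/\eta(A)\) are finite, which is the regularizing effect the theorem quantifies.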
The Pseudospectrum of Random Compressions of Matrices [PDF / arXiv:2501.01418]
Rikhav Shah
Abstract: The compression of a matrix \(A\in\mathbb C^{n\times n}\) onto a subspace \(V\subset\mathbb C^n\) is the matrix \(Q^*AQ\) where the columns of \(Q\) form an orthonormal basis for \(V\). This is an important object in both operator theory and numerical linear algebra. Of particular interest are the eigenvalues of the compression and their stability under perturbations. This paper considers compressions onto subspaces sampled from the Haar measure on the complex Grassmannian. We show the expected area of the \(\varepsilon\)-pseudospectrum of such compressions is bounded by \(\textnormal{poly}(n)\log^2(1/\varepsilon)\cdot\varepsilon^\beta\), where \(\beta=6/5,4/3\), or \(2\) depending on some mild assumptions on \(A\). Along the way, we obtain (a) tail bounds for the least singular value of compressions and (b) non-asymptotic small-ball estimates for random non-Hermitian quadratic forms surpassing bounds achieved by existing methods.
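A minimal sketch of sampling such a random compression: the column span of the Q factor of a complex Ginibre matrix is Haar-distributed on the Grassmannian, so \(Q^*AQ\) has the stated distribution. The test matrix and dimensions below are illustrative assumptions.

```python
import numpy as np

def haar_compression(A, k, rng=None):
    """Compress A onto a Haar-random k-dimensional subspace of C^n.

    The column span of the Q factor of a complex Ginibre matrix is
    Haar-distributed on the complex Grassmannian, so the compression
    Q* A Q matches the model in the abstract."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[0]
    G = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
    Q, _ = np.linalg.qr(G)              # orthonormal basis for the random subspace
    return Q.conj().T @ A @ Q

n, k = 60, 30
A = np.diag(np.linspace(-1.0, 1.0, n)).astype(complex)  # a simple normal test matrix
B = haar_compression(A, k)
```

For a Hermitian test matrix like this one, the compression is again Hermitian with eigenvalues inside the numerical range of \(A\); the nonnormal case, where pseudospectra carry real information, is the paper's focus.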
A Kaczmarz-Inspired Method for Orthogonalization [PDF / arXiv:2411.16101]
Rikhav Shah and Isabel Detherage
Abstract: This paper asks whether it is possible to
orthogonalize a set of \(n\) linearly independent unit
vectors in \(n\) dimensions by considering only random
pairs of vectors at a time. In
particular, at each step one accesses a random pair of
vectors and can replace them with some different basis for
their span. If one could instead deterministically decide
which pair of vectors to access, one could run the
Gram-Schmidt procedure. In our setting, however, the pair is
selected at random, so the answer is less clear.
We answer the question in the affirmative: given a pair of
vectors at each iteration, replace one with its component
perpendicular to the other, renormalized. This produces a
sequence that almost surely
converges to an orthonormal set. Quantitatively, we can
measure the rate of convergence in terms of the condition
number \(\kappa(A)\) of the square matrix \(A\) formed by
taking the \(n\) vectors to be its columns. When the
condition number is initially very large, we prove a rapidly
decreasing upper bound on the expected logarithm of the
condition number after \(t\) iterations. The bound initially
decreases by \(O(1/n^2)\) each iteration, but the decrease
tapers off as the condition number improves. We then show
\(O(n^4/\delta\varepsilon^2+n^3\log(\kappa(A))\log\log(\kappa(A)))\)
iterations suffice to bring the condition number below
\(1+\varepsilon\) with probability \(1-\delta\).
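The iteration described above fits in a few lines. This is a hedged sketch, not the paper's experiments: the starting matrix, iteration count, and random seed are all illustrative assumptions.

```python
import numpy as np

def random_pair_orthogonalize(A, num_iters, rng=None):
    """Random-pair iteration from the abstract: pick a random pair of
    columns (i, j), replace column j with its component perpendicular
    to column i, and renormalize.  Real case for simplicity."""
    rng = rng or np.random.default_rng(0)
    A = A / np.linalg.norm(A, axis=0)    # start from unit vectors (copy)
    n = A.shape[1]
    for _ in range(num_iters):
        i, j = rng.choice(n, size=2, replace=False)
        v = A[:, j] - (A[:, i] @ A[:, j]) * A[:, i]
        A[:, j] = v / np.linalg.norm(v)
    return A

n = 8
A = np.eye(n) + 0.5                      # mildly ill-conditioned starting basis
k0 = np.linalg.cond(A / np.linalg.norm(A, axis=0))
B = random_pair_orthogonalize(A, 5000)
k1 = np.linalg.cond(B)                   # should improve toward 1
```

Each step is a single Gram-Schmidt update applied to a randomly chosen pair, so every column stays a unit vector by construction and only the angles between columns evolve.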