This is a full list of coursework I have studied as part of both my undergraduate (University of Washington) and graduate (Berkeley) education, as well as some of the material I have studied in my free time.

Nonparametric regression/classification/testing; Minimax Theory; Lasso; Ridge; Overparameterization; Boosting and Slow Learning; Implicit Regularization; Conformal Prediction; Model Aggregation; Forecasting Theory; Sequential Decision Making. While there was no formal course text, I followed *The Elements of Statistical Learning* and *Statistical Learning with Sparsity* by Hastie et al.

Professor: Ryan Tibshirani Grade: (in progress)

Classical Ensembles (GOE, GUE, GSE), LLN for empirical densities, Wigner semicircle law, Marchenko-Pastur and Wachter laws, determinantal correlation functions for GUE, local limits of correlation functions, Tracy-Widom distribution, Airy process, Pfaffian correlation functions for GOE, multivariate Bessel functions, Harish-Chandra-Itzykson-Zuber integral formula, asymptotics of Bessel functions, Dyson Brownian motion, global fluctuations of eigenvalues through the Gaussian free field, universality for local limits of eigenvalues, random matrix distributions as limits of models in 2D statistical mechanics, lozenge tilings, KPZ universality class. There was no course text, but I followed Tao's Random Matrix Theory text for part of the course.

Professor: Vadim Gorin Grade: (in progress)

Linear codes, dual codes, Hamming/Singleton/GV/Elias-Bassalygo/Johnson bounds, asymptotically good codes, parity check and Hadamard codes, BCH codes, Reed-Solomon and Folded Reed-Solomon codes, list-decoding, concatenated codes and the Zyablov bound, Justesen codes, Wozencraft ensemble, Welch-Berlekamp algorithm for Reed-Solomon decoding, decoding of concatenated codes, Tanner codes, expander codes, pseudorandomness, decoding expander codes and distance amplification, random errors and Shannon's model, information theory and Shannon's coding theorems, Binary Symmetric Channel, polarization and polar codes, list decoding of Reed-Solomon / Folded Reed-Solomon codes, locally decodable and correctable codes, Reed-Muller codes, list recovery, quantum error correcting codes. We followed Venkat's own text for the course. The text and notes for the course can be found here.

Professor: Venkatesan Guruswami Grade: A-

Markov chain mixing times; continuous time Markov chains; couplings; monotone chains and monotone grand couplings; coupling from the past; Ising Model; Solid-on-Solid model; Gaussian Free Field; Interchange process; Exclusion processes; height functions; hydrodynamic limits; 6-vertex/ice models; percolation; KPZ universality class; dimer models; lozenge tilings. We followed the work of recent papers in the field.

Professor: Shirshendu Ganguly Grade: A+

Non-asymptotic viewpoint on high-dimensional statistics. Concentration inequalities; uniform laws; metric entropy and Gaussian/Rademacher complexity; empirical process theory; random matrices and covariance estimation; LASSO; PCA; nonparametric least squares; minimax bounds. We followed *High-Dimensional Statistics* by Wainwright.

Professor: Song Mei Grade: A

The theory of boundary value and initial value problems for partial differential equations, with emphasis on nonlinear equations. Laplace's equation, heat equation, wave equation, nonlinear first-order equations, conservation laws, Hamilton-Jacobi equations, Fourier transform, Sobolev spaces. We followed *Partial Differential Equations* by Evans.

Professor: Daniel Tataru Grade: B+

Frequentist and Bayesian aspects of modeling, inference, and decision-making. Statistical decision theory; point estimation; minimax and admissibility; Bayesian methods; exponential families; hypothesis testing; confidence intervals; small and large sample theory; and M-estimation. We followed *Theoretical Statistics* by Keener.

Professor: Will Fithian Grade: A-

Supervised learning, loss functions, likelihood ratio tests, Neyman-Pearson lemma, ROC curves, nondiscrimination/fairness criteria, empirical risk minimization, perceptron, features and representations, optimization, gradient descent, convexity, SVM, logistic regression, step-size decay, minibatching, momentum, regularization, kernel methods, generalization gap, overparameterization, model complexity, algorithm stability, Rademacher complexity, VC dimension, deep learning and neural networks, machine learning benchmarks, causality and causal inference, causal graphs, reinforcement learning. In this course we followed the professors' own book *Patterns, Predictions, and Actions: A Story about Machine Learning*.
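As a small illustration of the perceptron and empirical risk minimization topics above, here is a minimal perceptron sketch in Python; the toy dataset, function name, and training loop are my own invention for illustration, not material from the course:

```python
# A minimal perceptron sketch (hypothetical toy data, not course material).
def perceptron(points, labels, epochs=20):
    """Learn w, b so that sign(w . x + b) matches each +1/-1 label."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            # Perceptron update rule: only misclassified points move the weights.
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:
                w[0] += y * x1
                w[1] += y * x2
                b += y
    return w, b

# A linearly separable toy dataset.
points = [(2.0, 1.0), (3.0, 2.0), (-1.0, -1.5), (-2.0, -0.5)]
labels = [1, 1, -1, -1]
w, b = perceptron(points, labels)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1 for (x1, x2) in points]
```

On separable data like this the perceptron converges to a separating hyperplane after finitely many updates.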

Professors: Benjamin Recht and Moritz Hardt Grade: A

Smooth manifolds and maps, tangent and normal bundles. Sard's theorem and transversality, Whitney embedding theorem. Differential forms, Stokes' theorem, Frobenius theorem. Basic degree theory. Flows, Lie derivative, Lie groups and algebras. The course follows *Introduction to Smooth Manifolds* by John Lee.

Professor: Richard Bamler Grade: A-

Stochastic processes and stationarity; ergodicity; continuous-time Markov chains; Brownian motion; jump processes; Lévy processes; renewal theory; Poisson and compound Poisson processes; birth-death processes; stochastic integration and Itô's formula. We followed *Foundations of Modern Probability* by Kallenberg.

Professor: Jim Pitman Grade: A

Diffeomorphisms and flows on manifolds. Ergodic theory. Stable manifolds, generic properties, structural stability. We followed the Professor’s lecture notes.

Professor: Fraydoun Rezakhanlou Grade: A

Convex optimization is a class of nonlinear optimization problems where the objective to be minimized, and the constraints, are both convex. The course covers some convex optimization theory and algorithms, and describes various applications arising in engineering design, machine learning and statistics, finance, and operations research. The course includes laboratory assignments, which consist of hands-on experiments with the optimization software CVX, and a discussion section. We used Boyd's *Convex Optimization* text, covering chapters 1-7 and 11.
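Since the course centers on minimizing convex objectives subject to convex constraints, here is a tiny projected-gradient sketch in Python (not the CVX software itself; the toy problem, step size, and iteration count are my own choices): minimize (x - 3)^2 subject to 0 <= x <= 1, whose constrained optimum is x = 1.

```python
# Projected gradient descent for: minimize (x - 3)^2 subject to 0 <= x <= 1.
# Toy problem for illustration; the constrained optimum is x = 1.
def project(x, lo=0.0, hi=1.0):
    """Project x onto the feasible interval [lo, hi]."""
    return max(lo, min(hi, x))

x = 0.5
for _ in range(200):
    grad = 2.0 * (x - 3.0)       # gradient of the objective
    x = project(x - 0.1 * grad)  # gradient step, then projection
```

For convex objectives and convex feasible sets, each projection step keeps the iterate feasible while the gradient step decreases the objective, so the iterates settle at the constrained minimizer.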

Professors: Laurent El Ghaoui and Somayeh Sojoudi Grade: A

Metric spaces and general topological spaces. Compactness and connectedness. Characterization of compact metric spaces. Theorems of Tychonoff, Urysohn, Tietze. Complete spaces and the Baire category theorem. Function spaces; Arzela-Ascoli and Stone-Weierstrass theorems. Partitions of unity. Locally compact spaces; one-point compactification. Introduction to measure and integration. Sigma algebras of sets. Measures and outer measures. Lebesgue measure on the line and R^n. Construction of the integral. Dominated convergence theorem. We used Folland's *Real Analysis: Modern Techniques and Their Applications*.

Professor: Alan Hammond Grade: A+

Measure theory concepts needed for probability. Expectation, distributions. Laws of large numbers and central limit theorems for independent random variables. Characteristic function methods. Conditional expectations, martingales and martingale convergence theorems. Markov chains. Stationary processes. Brownian motion. We covered the first 5 chapters of Durrett's *Probability: Theory and Examples*.

Professor: Shirshendu Ganguly Grade: A+

- Stein's Method (Spring 2023), following *Fundamentals of Stein's Method* by Nathan Ross
- Large Deviations in Random Graphs (Fall 2022), following *Large Deviations for Random Graphs* by Sourav Chatterjee
- Gaussian Free Field (Spring 2022), following *Lecture Notes on the Gaussian Free Field* by Werner and Powell
- Markov Chain Mixing (Fall 2021), following *Markov Chains and Mixing Times (2ed)* by Levin and Peres

- Quantum Mechanics (Fall 2022), following *Quantum Mechanics for Mathematicians* by Takhtajan

- Stochastic Calculus (in progress), following *Brownian Motion: An Introduction to Stochastic Processes* by René Schilling and *Introduction to Stochastic Integration* by Chung
- Quantum Computation (Fall 2022), following *Quantum Computation and Quantum Information* by Nielsen and Chuang
- Information Theory (Summer 2022), following *Elements of Information Theory (2ed)* by Cover and Thomas
- Harmonic Functions (Summer 2022), following *Harmonic Function Theory (2ed)* by Axler, Bourdon, and Ramey
- Category Theory (Fall 2021), following *Basic Category Theory* by Tom Leinster

The following are all the relevant math courses I took during my time as an undergraduate at University of Washington. You can find a full transcript here.


This course followed Principles of Mathematical Analysis (3ed) by Walter Rudin (aka Baby Rudin). We quickly reviewed the first 4 chapters on continuity, then thoroughly covered chapters 5, 6, and 7 on differentiation, development of the Riemann-Stieltjes Integral, and sequences/series of functions, as well as the implications of uniform convergence.

Professor: Bobby Wilson Grade: 4.0

This course follows Principles of Mathematical Analysis by Walter Rudin (Baby Rudin). It covers metric spaces (compactness, connectedness, continuity, and completeness), functions of several variables, normed linear spaces, partial derivatives, inverse function theorem, contraction mapping theorem, and the implicit function theorem.

Professor: Simon Bortz Grade: 4.0

This course followed Complex Variables by Joseph Taylor. Complex numbers; analytic functions; sequences and series; complex integration; Cauchy integral formula; Taylor and Laurent series; uniform convergence; residue theory; conformal mapping. Topics chosen from: Fourier series and integrals, Laplace transforms, infinite products, complex dynamics.

Professor: Hart Smith Grade: 4.0

This is a continuation of MATH 427 and also teaches out of Complex Variables by Joseph Taylor, covering chapters 4 through 6 on general Cauchy theorems, the Residue Theorem, Fourier/Laplace transforms, and conformal mappings.

Professor: Hart Smith Grade: 3.8

This course followed A Walk Through Combinatorics by Miklos Bona. We covered chapters 1 through 11 including the binomial theorem, cycles/permutations, combinations, set/integer partitions, the sieve method, generating functions, and basic graph theory (isomorphisms, paths/cycles, planar graphs, and matchings).

Professor: Ioana Dumitriu Grade: 4.0

The course followed Topology (2ed) by Munkres. We covered chapters 1 through 4 on topological spaces, continuous functions between topological spaces, connectedness and compactness, and the Countability/Separation Axioms. Furthermore, we briefly covered the beginning of chapter 9 on Algebraic Topology, which introduces homotopy and the fundamental group.

Professor: Judith Arms Grade: 4.0

Examines curves in the plane and 3-space, surfaces in 3-space, tangent planes, first and second fundamental forms, curvature, the Gauss-Bonnet Theorem, and possibly other selected topics.

Professor: (not listed) Grade: 3.8

Maximization and minimization of linear functions subject to constraints consisting of linear equations and inequalities; linear programming and mathematical modeling. Simplex method, elementary games and duality.

Professor: James Burke Grade: 4.0

Maximization and minimization of nonlinear functions, constrained and unconstrained; nonlinear programming problems and methods. Lagrange multipliers; Kuhn-Tucker conditions; convexity. Quadratic programming. Algorithms and analyzing convergence rates.

Professor: Dmitriy Drusvyatskiy Grade: 4.0

This course followed Abstract Algebra (2ed) by Thomas Hungerford and covered chapters 1 through 6 and chapter 10 as well as a supplement on the Gaussian Integers. Elementary theory of rings and fields: polynomial rings. Ideals, homomorphisms, quotients, and fundamental isomorphism theorems. Fields and maximal ideals. Euclidean rings. Field extensions. Algebraic extensions. Vector spaces and degrees of extensions. Adjoining roots of polynomials. Finite fields.

Professor: David Collingwood Grade: 4.0

Introduction to methods of discrete mathematics, including topics from graph theory, network flows, and combinatorics. Emphasis on these tools to formulate models and solve problems arising in a variety of applications, such as computer science, biology, and management science.

Professor: Sean Griffin Grade: 4.0

This course followed Introduction to Probability by David Anderson. Axiomatic definitions of probability; random variables; conditional probability and Bayes' theorem; expectations and variance; named distributions: binomial, geometric, Poisson, uniform (discrete and continuous), normal and exponential; normal and Poisson approximations to binomial. Transformations of a single random variable. Markov's and Chebyshev's inequalities. Weak law of large numbers for finite variance.
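A short simulation (my own toy example, not from the course) illustrates the weak law of large numbers mentioned above: the sample mean of many fair-die rolls concentrates near E[X] = 3.5.

```python
import random

# Weak law of large numbers for a fair six-sided die (illustrative sketch;
# the seed and sample size are arbitrary choices).
random.seed(0)
n = 100_000
rolls = [random.randint(1, 6) for _ in range(n)]
sample_mean = sum(rolls) / n
# Chebyshev's inequality bounds P(|sample_mean - 3.5| >= t) by Var(X) / (n * t^2),
# so the deviation shrinks as n grows.
```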

Professor: Soumik Pal Grade: 4.0

This course followed Introduction to Probability by David Anderson. Jointly distributed random variables; conditional distributions and densities; conditional expectations and variance; covariance, correlation, and Cauchy-Schwarz inequality; bivariate normal distribution; multivariate transformations; moment generating functions; sums of independent random variables; Central Limit Theorem; Chernoff's inequality; Jensen's inequality.

Professor: Michael Perlman Grade: 4.0

Concepts of probability and statistics. Conditional probability, independence, random variables, distribution functions. Descriptive statistics, transformations, sampling errors, confidence intervals, least squares and maximum likelihood. Exploratory data analysis and interactive computing.

Professor: Caren Marzban Grade: 4.0

This course followed Elementary Differential Equations and Boundary Value Problems by Boyce. We covered chapters 1 through 4, which covered linear differential equations (of first, second, and higher order) and the associated techniques for solving them, as well as chapter 6, which covered the Laplace transform.

Professor: (not listed) Grade: 4.0

This class covered the fundamentals of linear algebra such as matrix multiplication, viewing linear transformations as matrix multiplication, bases/coordinate systems, the fundamental subspaces, matrix decompositions, vector spaces, and eigenvalues and eigenvectors.
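The eigenvalue/eigenvector material above can be illustrated with a small power-iteration sketch in Python (my own toy example, assuming a symmetric 2x2 matrix with a dominant eigenvalue; not code from the class):

```python
def power_iteration(A, iters=100):
    """Estimate the dominant eigenvalue of a 2x2 matrix by power iteration."""
    v = [1.0, 1.0]
    for _ in range(iters):
        # Multiply by A, then renormalize to unit length.
        w = [A[0][0] * v[0] + A[0][1] * v[1],
             A[1][0] * v[0] + A[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    # Rayleigh quotient v^T A v approximates the dominant eigenvalue.
    Av = [A[0][0] * v[0] + A[0][1] * v[1],
          A[1][0] * v[0] + A[1][1] * v[1]]
    return Av[0] * v[0] + Av[1] * v[1]

lam = power_iteration([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues are 3 and 1
```

Repeated multiplication by A stretches any starting vector toward the eigenvector of largest-magnitude eigenvalue, which is why the Rayleigh quotient converges to that eigenvalue.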

Professor: (not listed) Grade: 4.0

This course followed Elementary Differential Equations and Boundary Value Problems by Boyce. We covered chapters 7 through 9 on systems of first-order linear differential equations, the phase plane and stability analysis, as well as a brief supplement on PDEs such as the wave equation and heat equation.

Professor: (not listed) Grade: 4.0

This course roughly followed Mathematical Biology by James Murray. Examines fundamental models that arise in biology and their analysis through modern scientific computing. Covers discrete and continuous-time dynamics, in deterministic and stochastic settings, with application from molecular biology to neuroscience to population dynamics; statistical analysis of experimental data.

Professor: Eric Shea-Brown Grade: 4.0

This class introduced fundamental concepts of network science and graph theory for complex dynamical systems. Merges concepts from model selection, information theory, statistical inference, neural networks, deep learning, and machine learning for building reduced-order models of dynamical systems using sparse sampling of high-dimensional data.

Professor: Nathan Kutz Grade: 4.0

This class covered a variety of topics in signal processing and data analysis including Fourier/Gabor/wavelet transforms for frequency-dependent signals, SVD/PCA and low-rank approximations, Dynamic Mode Decomposition, and supervised/unsupervised learning.

Professor: Nathan Kutz Grade: 4.0

Introductory survey of applied mathematics with emphasis on modeling of physical and biological problems in terms of differential equations. Formulation, solution, and interpretation of the results. Stability, phase-space, and bifurcation analysis.

Professor: Samuel Rudy Grade: 4.0

An intro to the use of MATLAB and its associated scientific computing packages. This included numerical integration/differentiation schemes, data visualization, and basic coding practices.

Professor: (not listed) Grade: 4.0

This course roughly followed Theoretical Neuroscience by Dayan and Abbott. Introduced mathematical models for neural communication, the challenges of neural encoding/decoding, and probabilistic models for neuronal activation.

Professor: Eric Shea-Brown Grade: 4.0

This course followed Machine Learning: A Probabilistic Perspective by Kevin Murphy. Methods for designing systems that learn from data and improve with experience. Supervised learning and predictive modeling: decision trees, rule induction, Maximum-Likelihood models, nearest neighbors, Bayesian methods, Expectation-Maximization, neural networks, support vector machines, and model ensembles. Unsupervised learning and clustering.

Professor: (not listed) Grade: 4.0

This course followed Algorithm Design by Kleinberg and Tardos. We covered the basic search algorithms (BFS, DFS), runtime analysis and big-O notation, dynamic programming, greedy algorithms and divide-and-conquer approaches, and NP-completeness.
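As an example of the dynamic programming covered in the course, here is a classic O(n^2) longest-increasing-subsequence sketch in Python (my own illustration, not the course's code):

```python
def lis_length(seq):
    """Length of the longest strictly increasing subsequence (O(n^2) DP)."""
    if not seq:
        return 0
    # dp[i] = length of the longest increasing subsequence ending at seq[i]
    dp = [1] * len(seq)
    for i in range(len(seq)):
        for j in range(i):
            if seq[j] < seq[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)

length = lis_length([3, 1, 4, 1, 5, 9, 2, 6])  # one such subsequence: 1, 4, 5, 9
```

The recurrence reuses optimal answers to smaller prefixes, which is the defining idea of dynamic programming.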

Professor: (not listed) Grade: 4.0

Introduction to different algorithms for state-space searches both in the presence and absence of some kind of adversary. Reinforcement learning and policy updates with Q-learning. Also covered probabilistic decision making and model making (Bayesian Inference, Markov Decision Processes, and Hidden Markov Models).

Professor: (not listed) Grade: 4.0

Covers data models (relational and NoSQL), query languages (SQL, Datalog, etc.), managing transactions, and parallel data processing (Hadoop/Spark). Also gained some experience using large-scale cloud-computing systems such as Microsoft Azure and Amazon Web Services (AWS).

Professor: (not listed) Grade: 4.0

Covered efficient implementations of many common data structures such as stacks/queues, dictionaries, heaps, binary/AVL trees, and disjoint sets, as well as how to use them as part of a larger architecture.
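As a sketch of one structure from the list above, here is a minimal disjoint-set (union-find) implementation in Python with path halving; the class and its API are my own illustration, not the course's code:

```python
class DisjointSet:
    """Union-find with path halving; near-constant amortized operations."""

    def __init__(self, n):
        # Each element starts as the root of its own singleton set.
        self.parent = list(range(n))

    def find(self, x):
        # Path halving: point every other node at its grandparent while climbing.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        # Merge the sets containing a and b by linking their roots.
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

ds = DisjointSet(5)
ds.union(0, 1)
ds.union(1, 2)  # now {0, 1, 2}, {3}, {4}
```

Path compression (here, the halving variant) is what keeps `find` cheap even after many unions.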

Professor: (not listed) Grade: 4.0

Basic Java and introduction to object oriented programming.

Professor: (not listed) Grade: 4.0