# Spring 2016

## Mondays, 5pm-6pm, 939 Evans Hall

### Dinner afterwards

The Applied Algebra Seminar is an interdisciplinary seminar on applications of algebra to fields outside mathematics. Our goal is to connect mathematicians with researchers from other fields using algebraic tools in their work.

Talks are aimed at an interdisciplinary graduate student audience including mathematicians, computer scientists, engineers, biologists, economists, statisticians, and more.

## Organizers

Please contact David Dynerman or Elina Robeva if you are interested in giving a talk! Both mathematicians and scientists in other fields are warmly invited to present.

## February 1 — Carlos Amendola, TU Berlin (Math)

### Studying Inference for Gaussian Mixtures via Algebra: from Crabs to Moment Varieties

Gaussian mixture models have a rich history in statistics, perhaps first introduced by Karl Pearson in 1894 to explain asymmetry observed in measurements of a population of crabs from Naples. They are used in a wide range of applications, including economics, engineering, and genetics. A fundamental problem is to learn the defining model parameters from a sample; possible solutions include Pearson's original method of moments and the popular EM algorithm for maximum likelihood optimization.

In this talk, we will explore these methods from an algebraic perspective, exposing both theoretical and computational consequences. For example, maximum likelihood estimates for Gaussian mixtures are transcendental while moment estimates are algebraic numbers. This will suggest the study of geometric objects that we will call Gaussian moment varieties. Based on joint work with Mathias Drton, Jean-Charles Faugere and Bernd Sturmfels.
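
Pearson's method of moments rests on the fact that the raw moments of a Gaussian mixture are polynomial expressions in the model parameters. A minimal sketch of this forward map (the function names are mine, not from the talk), using the standard Gaussian moment recursion:

```python
def gaussian_raw_moment(mu, sigma, k):
    # E[X^k] for X ~ N(mu, sigma^2), via the recursion
    # m_k = mu * m_{k-1} + (k - 1) * sigma^2 * m_{k-2}
    m = [1.0, mu]
    for j in range(2, k + 1):
        m.append(mu * m[j - 1] + (j - 1) * sigma**2 * m[j - 2])
    return m[k]

def mixture_raw_moment(lam, mu1, s1, mu2, s2, k):
    # Raw moments of a mixture are convex combinations of component moments
    return (lam * gaussian_raw_moment(mu1, s1, k)
            + (1 - lam) * gaussian_raw_moment(mu2, s2, k))
```

Equating these polynomial expressions to sample moments yields the algebraic system solved in the method of moments, which is why the moment estimates are algebraic numbers.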

## February 8 — GAC Reading Group

### Global Attractor Conjecture: Meeting 1

An informal meeting of the GAC reading group.

## February 22 — GAC Reading Group

### Global Attractor Conjecture: Meeting 2

An informal meeting of the GAC reading group.

## February 29 — Elizabeth Gross, San Jose State University (Math)

### The maximum likelihood threshold of a graph

The maximum likelihood threshold of a graph is the smallest number of data points that guarantees that maximum likelihood estimates exist almost surely in the Gaussian graphical model associated to the graph. In this talk, we will show that this graph parameter is connected to the theory of combinatorial rigidity. In particular, if the edge set of a graph G is an independent set in the (n-1)-dimensional generic rigidity matroid, then the maximum likelihood threshold of G is less than or equal to n. This connection allows us to make several new statements regarding the existence of the maximum likelihood estimator in situations of limited data.

This is joint work with Seth Sullivant.

## March 7 — Andrew Bridy, University of Rochester (Math)

### Automatic Sequences and Curves over Finite Fields

An amazing theorem of Christol states that a power series $$y$$ with coefficients in a finite field is an algebraic function if and only if its coefficient sequence can be produced by a finite automaton, which is a limited model of a computer with no memory. The proof uses combinatorics and linear algebra, but hidden in the theorem there is geometric information about a curve that contains $$y$$ in its function field. I make this explicit by demonstrating a precise link between the complexity of the automaton and the geometry of the curve.
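
A classical illustration of Christol's theorem (my own example, not taken from the talk): over $$\mathbb{F}_2$$, the series $$f = \sum_n x^{2^n}$$ is algebraic, satisfying $$f^2 + f + x = 0$$, and its coefficient sequence is produced by a three-state automaton reading the binary digits of the exponent. A sketch verifying the correspondence on truncated series:

```python
N = 64  # truncation order for power series over F_2

def series_from_automaton(n_terms):
    # Three-state DFA reading the binary digits of k (most significant first);
    # state 0: no '1' seen, state 1: exactly one '1' seen, state 2: dead.
    # Accepting state 1 means k is a power of two.
    delta = {(0, '0'): 0, (0, '1'): 1,
             (1, '0'): 1, (1, '1'): 2,
             (2, '0'): 2, (2, '1'): 2}
    coeffs = []
    for k in range(n_terms):
        state = 0
        for bit in bin(k)[2:]:
            state = delta[(state, bit)]
        coeffs.append(1 if state == 1 else 0)
    return coeffs

def mul_f2(a, b):
    # Truncated product of power series with coefficients in F_2
    n = len(a)
    c = [0] * n
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b[:n - i]):
                c[i + j] ^= bj
    return c

f = series_from_automaton(N)          # f = x + x^2 + x^4 + x^8 + ...
f2 = mul_f2(f, f)
x = [0, 1] + [0] * (N - 2)
residual = [(f2[i] + f[i] + x[i]) % 2 for i in range(N)]
# residual is identically zero: the automatic series satisfies f^2 + f + x = 0
```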

## March 14 — Annie Raymond, University of Washington (Math)

### Symmetry and Turan Sums of Squares

Given a graph $$H$$, the Turan graph problem asks for the maximum number of edges in an $$n$$-vertex graph that does not contain any subgraph isomorphic to $$H$$. In recent years, Razborov's flag algebra methods have been applied to Turan hypergraph problems with great success. We show that these techniques embed naturally in standard symmetry-reduction methods for sum of squares representations of invariant polynomials. This connection gives an alternate computational framework for Turan problems with the potential to go further, as well as new tools to certify symmetric non-negative polynomials over certain hypercubes. Our results expose the rich combinatorics coming from the representation theory of the symmetric group present in flag algebra methods.

This is joint work with James Saunderson, Mohit Singh and Rekha Thomas.

## March 18 — Applied Algebra Afternoon

### 2:15 Anna Seigal: A siaga of seven pictures

The Society for Industrial and Applied Mathematics, SIAM, has recently released a journal of Applied Algebra and Geometry called SIAGA. See here for more information on the new journal.

The poster for the journal features seven pictures. In this talk I discuss these pictures and the mathematics that they represent.

More background is available at the math blog Picture This Maths.

### 3:00 Jameson Cahill: What is frame theory?

Frame theory emerged in the early 90s as a branch of applied functional/harmonic analysis. Since then, motivated by the fact that this theory is meant to be used in practical applications, many researchers have began restricting their attention to the finite dimensional setting.

In this finite dimensional setting many of the problems take on distinctly algebraic and combinatorial characteristics. I will briefly explain this history and discuss several examples where algebraic methods have already proven to be useful to solve problems of interest in the frame theory community.

### 4:00 Shamgar Gurevich: The orthogonal group I.

I will present the classical material on diagonalization of the Laplacian acting on functions on the sphere (eigenvalues, eigenspaces) using concrete models (spherical harmonics) of the irreducible representations of the rotation group SO(3). I think the presentation will be understood to advanced undergraduate student.

### 4:30 Joe Kileel: The orthogonal group II.

This is the second half of a joint expository talk on the representation theory of the orthogonal groups. In my portion of the talk, I will emphasize Lie algebra methods, including highest weights. Time permitting, an example of how to use this theory in Applied Algebra research will be presented.

## April 18 — Anna Seigal, UC-Berkeley (Math)

### Structured Clustering using Integer Programming

For biological applications, it is often important to cluster data based on some observable features, but simultaneously to impose structural restrictions on which combinations of data items can be in each cluster. Here we look at the example of breast cancer cell lines exposed to a range of signaling molecules. We show how integer programming, using a tensor format, can be used to take an initial clustering assignment that does not respect the structural restrictions to a provably closest clustering that incorporates the structural information. This enables interpretable groupings to be found from the biological information. The method can also be adapted to cluster the data from the outset via integer programming, finding the optimal clustering with respect to some cost function.

## April 25 — Paolo Aluffi, Florida State University (Math)

### The number of equations needed to determine a Segre class

According to a result of Samuel, the multiplicity of a variety along a codimension c subvariety may be determined by an ideal generated by c elements. On the other hand, bounds are known for the number of generators of a reduction of a given ideal. We will show that such results are facets of a more general fact on the number of equations needed to define a scheme with a given Segre class. Segre classes are fundamental invariants determining the behavior of a variety from the point of view of intersection theory. The result has applications to the computation of characteristic classes.

## May 9 — Shaowei Lin, Singapore University of Technology and Design

### The Singularity is Near: When Machines Transcend Data

Artificial intelligence thrives on efficient algorithms and powerful machines that transcend the observed data and accurately predict outcomes for the unseen future. In statistical learning theory, this capability is known as generalization. For instance, despite often having more model parameters than data points, deep learning is able to avoid overfitting and generalize to new data. This algorithm has seen incredible success in computer vision, speech recognition, natural language processing, strategy games and self-driving cars. To understand why deep learning generalizes, we need to study algebraic singularities. In 1998, Sumio Watanabe showed that the generalization error of a learning algorithm depends on the structure of model singularities near the observed data distribution. In this talk, we explore this theory through a simple example. If time permits, we might sketch out the important role that algebraic geometry may play in the race for artificial intelligence.