painting the corners
The Littlewood-Richardson coefficients are numbers which show up as tensor product multiplicities in representation theory, as intersection numbers in algebraic geometry, and as structure constants for the multiplication of Schur polynomials. They have an incredibly rich combinatorial structure, and there are many beautiful constructions realizing them as ways of counting various objects.
Inspired by Kyu-Hwan Lee’s paper on Kronecker coefficients, I decided to try training a model to learn Littlewood-Richardson coefficients.
A first natural task is to train a classifier to recognize the Horn cone, that is, to decide whether a given coefficient is zero or not.
A Littlewood-Richardson coefficient $c_{\lambda,\mu}^{\nu}$ (henceforth, an LR number) depends on three partitions, $\lambda$, $\mu$, and $\nu$, which we just represent as nonincreasing lists of nonnegative integers. Padding each partition with zeros to a fixed length $n$ and concatenating, a triple of partitions becomes a vector in $\mathbb{R}^{3n}$; the triples for which $c_{\lambda,\mu}^{\nu}\neq 0$ form a convex cone, cut out by linear inequalities. This is a highly structured set, and much is known about it. (See for instance Fulton’s 2000 survey.)
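For example, the square of a single Schur polynomial already exhibits nontrivial vanishing: $s_{(2,1)}\, s_{(2,1)} = s_{(4,2)} + s_{(4,1,1)} + s_{(3,3)} + 2\,s_{(3,2,1)} + s_{(3,1,1,1)} + s_{(2,2,2)} + s_{(2,2,1,1)}$, so $c_{(2,1),(2,1)}^{(3,2,1)} = 2$, while $c_{(2,1),(2,1)}^{(5,1)} = 0$.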
I used Anders Buch’s lrcalc to generate tables of triples of partitions, each labelled 0 or 1 according to whether the corresponding LR number is zero or nonzero. Then I fed this into LightGBM to train a gradient-boosted decision-tree model and tested its predictive accuracy. Trained on approximately half of the data for partitions fitting inside a 5-by-5 box, the model predicted the label 0 with 95.79% accuracy, and the label 1 with 99.59% accuracy.
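For concreteness, here is a minimal sketch of this kind of pipeline, using the python-lrcalc bindings and LightGBM’s scikit-learn interface. This is not the code from ml-lrc: the filtering of triples, the 50/50 split, and the default hyperparameters are all illustrative assumptions.

```python
# Sketch: label triples of partitions in an N-by-N box by whether the LR
# number is nonzero, then train a LightGBM classifier on the raw vectors.
# Brute-force enumeration; for N = 5 this takes a few minutes.
from itertools import product

import numpy as np
import lrcalc
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split

N = 5  # partitions fit inside an N-by-N box


def partitions_in_box(n):
    """All partitions with at most n parts, each part at most n, zero-padded to length n."""
    return [p for p in product(range(n, -1, -1), repeat=n)
            if all(p[i] >= p[i + 1] for i in range(n - 1))]


def strip_zeros(p):
    """lrcalc takes partitions as lists of positive parts."""
    return [x for x in p if x > 0]


parts = partitions_in_box(N)  # 252 partitions for N = 5

X, y = [], []
for lam, mu, nu in product(parts, repeat=3):
    # c_{lam,mu}^{nu} = 0 automatically unless |nu| = |lam| + |mu|; here we
    # keep only the triples satisfying this constraint (an assumption --
    # the repository's filtering may differ).
    if sum(nu) != sum(lam) + sum(mu):
        continue
    c = lrcalc.lrcoef(strip_zeros(nu), strip_zeros(lam), strip_zeros(mu))
    X.append(lam + mu + nu)  # concatenated triple: a vector in R^{3N}
    y.append(int(c != 0))

X, y = np.asarray(X), np.asarray(y)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

clf = LGBMClassifier()
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
for label in (0, 1):
    mask = y_test == label
    print(f"accuracy on class {label}: {(pred[mask] == label).mean():.4f}")
```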
Details are in the repository ml-lrc.
This exercise was mainly an experiment in setting up ML tools for algebraic combinatorics. It’s not clear (yet) whether it will lead to interesting mathematics or interesting interpretability ideas. But here are a few quick observations:
Thanks to Kyu-Hwan Lee for his related work. My work on this project was partially supported by an NSF CAREER grant.