Kyle Chapman Research

My research spans multiple areas, but the common threads are questions about diagrams of strings and questions that admit algorithmic solutions.

Planar Algebras

A planar algebra is a collection of diagrams involving boxes and strands attached to those boxes. The prototypical example is the Temperley-Lieb planar algebra, which consists only of the ways of attaching strands to the top and bottom of a box. An in-depth explanation of planar algebras is given by Jones in his arXiv paper, Planar Algebras, I.
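To make the phrase "ways of attaching strands" concrete, here is a minimal sketch, purely my own illustration, that encodes a Temperley-Lieb diagram on n strands as a pairing of its 2n boundary points and composes two diagrams by stacking, with each closed loop created by the stacking contributing a factor of the loop parameter (written delta below). The sketch does not check that the pairings are actually planar.

```python
def compose(a, b, n, delta):
    """Stack Temperley-Lieb diagram `a` below diagram `b`, gluing a's
    top boundary points to b's bottom boundary points.

    A diagram on n strands is a dict pairing the 2n boundary points
    0..2n-1 (0..n-1 along the bottom, n..2n-1 along the top), with
    d[i] == j meaning a strand joins point i to point j.

    Returns (diagram, coefficient), where the coefficient is
    delta ** (number of closed loops produced by the gluing).
    """
    used_middle = set()   # glued points consumed while tracing strands
    new = {}

    def walk(side, p):
        """Follow a strand through the glued middle until it exits."""
        while True:
            if side == 'a':
                q = a[p]
                if q < n:                # exits at a's bottom boundary
                    return q
                used_middle.add(q - n)   # hits the glue line
                side, p = 'b', q - n
            else:
                q = b[p]
                if q >= n:               # exits at b's top boundary
                    return q
                used_middle.add(q)       # hits the glue line
                side, p = 'a', q + n

    for s in range(n):                   # bottom points of the composite
        if s not in new:
            e = walk('a', s)
            new[s], new[e] = e, s
    for s in range(n, 2 * n):            # top points of the composite
        if s not in new:
            e = walk('b', s)
            new[s], new[e] = e, s

    # Any glued point never reached lies on a closed loop.
    loops = 0
    for m in range(n):
        if m in used_middle:
            continue
        loops += 1
        side, p = 'b', m                 # trace the loop, marking its points
        while True:
            if side == 'b':
                q = b[p]                 # q < n: the loop stays on the glue line
                used_middle.add(q)
                side, p = 'a', q + n
            else:
                q = a[p]                 # q >= n: a top point of a, another glue point
                used_middle.add(q - n)
                side, p = 'b', q - n
            if side == 'b' and p == m:
                break
    return new, delta ** loops

# Example: the "cup-cap" generator e in TL_2 satisfies e * e = delta * e.
e = {0: 1, 1: 0, 2: 3, 3: 2}
print(compose(e, e, n=2, delta=2))   # ({0: 1, 1: 0, 2: 3, 3: 2}, 2)
```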

Each planar algebra has a combinatorial invariant called the principal graph. A major open question in planar algebra theory is which principal graphs can be realized by planar algebras. Alongside this is the question of finding reasonable presentations for those planar algebras which are known to exist. Towards the second question, Stephen Bigelow and I are working on a paper that gives a family of planar algebras in which the strands are colored. This family gives nice presentations for all planar algebras whose principal graphs have norm the square root of n, for n up to 4. In particular, it gives a presentation for the planar algebras whose principal graphs are the extended Dynkin diagrams En tilde, and shows that these planar algebras are nested.

Knot, Link and Braid Theory

A braid is a collection of strands in a slab of three-space, each of which is strictly increasing in the vertical direction. Wikipedia provides a reasonable introduction here. These strands can be closed up to form a link, and theorems of Alexander and Markov describe exactly the correspondence between braids and their closures. The braid group can be represented faithfully by matrices of sufficiently large size, using the Lawrence-Krammer representation.
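The Lawrence-Krammer matrices act on a space of dimension n(n-1)/2 and their entries are somewhat involved, so as a simpler illustration of how braid generators become matrices, the sketch below uses the older unreduced Burau representation instead (which, unlike Lawrence-Krammer, is known to be unfaithful for five or more strands). The function names and the sample parameter value are just for demonstration.

```python
import numpy as np

def burau_generator(i, n, t):
    """Unreduced Burau matrix of the braid generator sigma_i in B_n:
    the identity with the 2x2 block [[1-t, t], [1, 0]] in rows/columns
    i, i+1 (1-indexed)."""
    m = np.eye(n)
    m[i - 1:i + 1, i - 1:i + 1] = [[1 - t, t], [1, 0]]
    return m

def burau_of_word(word, n, t):
    """Multiply generator matrices for a braid word such as [1, 2, -1],
    where a negative entry means the inverse of that generator."""
    m = np.eye(n)
    for g in word:
        gen = burau_generator(abs(g), n, t)
        m = m @ (np.linalg.inv(gen) if g < 0 else gen)
    return m

# Sanity check: the braid relation s1 s2 s1 = s2 s1 s2 holds in B_3.
t = 0.7
lhs = burau_of_word([1, 2, 1], 3, t)
rhs = burau_of_word([2, 1, 2], 3, t)
print(np.allclose(lhs, rhs))   # True
```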

I sought to generalize this representation to the much more general string-link monoid, which is like the braid group except that strands need not be strictly increasing. This generalization failed, but it appears that the additional condition I needed in order to get a representation is an isotopy invariant. Since the condition is automatically satisfied by braids, any string-link which fails it must be a proper string-link, that is, one not isotopic to a braid. This potential test for when a string-link is isotopic to a braid may be useful in determining a link's braid index, the minimum number of strands a braid must have in order to give that link as its closure.

Physical Knot Theory

Knots are typically considered only up to isotopy, which means that strands may be moved arbitrarily as long as they never intersect. For physical applications, however, we want to consider knots as having certain geometric properties, such as thickness, which are generally ignored in traditional knot theory. Using equilateral polygonal knots, we can randomly sample the space of geometric knots. This is generally done with Markov chain Monte Carlo, which involves starting with a regular polygon and applying random moves to it enough times to generate a random knot.
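As an illustration of one such chain, here is a minimal sketch assuming the commonly used crankshaft move for equilateral closed polygons (I am not claiming this is the exact move set used in this work): pick two vertices and rotate the arc of vertices strictly between them by a random angle about the chord joining them, which preserves every edge length.

```python
import numpy as np

rng = np.random.default_rng(0)

def regular_polygon(n):
    """Planar regular n-gon with vertices on a circle (equilateral)."""
    theta = 2 * np.pi * np.arange(n) / n
    return np.column_stack([np.cos(theta), np.sin(theta), np.zeros(n)])

def rotate_about_axis(points, p, u, angle):
    """Rotate each row of `points` about the line through p with unit
    direction u, using Rodrigues' rotation formula."""
    c, s = np.cos(angle), np.sin(angle)
    q = points - p
    return p + q * c + np.cross(u, q) * s + np.outer(q @ u, u) * (1 - c)

def crankshaft_move(verts):
    """One random crankshaft move: rotate the arc of vertices strictly
    between two chosen vertices about the chord joining them.  All edge
    lengths are preserved, so the polygon stays equilateral."""
    n = len(verts)
    i = rng.integers(n)
    k = rng.integers(2, n - 1)                 # arc length, 2 <= k <= n-2
    idx = [(i + m) % n for m in range(1, k)]   # vertices strictly inside the arc
    p, q = verts[i], verts[(i + k) % n]
    u = (q - p) / np.linalg.norm(q - p)
    angle = rng.uniform(0, 2 * np.pi)
    new = verts.copy()
    new[idx] = rotate_about_axis(verts[idx], p, u, angle)
    return new

# Run the chain: many random moves starting from a regular polygon.
verts = regular_polygon(30)
for _ in range(10_000):
    verts = crankshaft_move(verts)

# The spread of edge lengths stays ~0 (up to floating point error).
edges = np.roll(verts, -1, axis=0) - verts
print(np.ptp(np.linalg.norm(edges, axis=1)))
```

A chain intended to sample uniformly also needs the moves chosen so that ergodicity and detailed balance hold; the sketch only shows the mechanics of a single move.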

When we introduce requirements like thickness, this same method can be applied and the resulting random sample "binned" according to which thickness constraints each knot satisfies. This requires a very large sample so that the sub-sample at each thickness is of sufficient size, since larger thicknesses are much rarer. I am working on a paper that modifies the Markov chain so that it only moves through knots of the desired thickness, allowing for potentially faster generation times.
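One simple way to restrict such a chain, sketched below under assumptions of my own, is to propose a move and reject it whenever a thickness measure would drop below the target. The measure used here is only a crude proxy, the smallest distance between non-adjacent edges; the true thickness of a polygon also involves its turning angles, and the restricted chain in the paper may work quite differently.

```python
import numpy as np

def segment_distance(p1, q1, p2, q2):
    """Minimum distance between the 3D segments [p1, q1] and [p2, q2]
    (assumes both segments have positive length)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e = d1 @ d1, d2 @ d2
    b, c, f = d1 @ d2, d1 @ r, d2 @ r
    denom = a * e - b * b
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e
    if t < 0.0:
        t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
    elif t > 1.0:
        t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def thickness_proxy(verts):
    """Crude stand-in for thickness: the smallest distance between any
    two edges of the closed polygon that do not share a vertex."""
    n = len(verts)
    best = np.inf
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:      # these edges share vertex 0
                continue
            d = segment_distance(verts[i], verts[(i + 1) % n],
                                 verts[j], verts[(j + 1) % n])
            best = min(best, d)
    return best

def constrained_step(verts, propose, min_thickness):
    """One step of a chain restricted to thick configurations: propose a
    move and keep it only if the thickness proxy stays above the target."""
    candidate = propose(verts)
    if thickness_proxy(candidate) >= min_thickness:
        return candidate
    return verts
```

With the crankshaft_move from the previous sketch, the restricted chain is simply repeated calls to constrained_step(verts, crankshaft_move, min_thickness).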

Game Theory

I am interested in many aspects of game theory, but one for which I am formulating a paper involves an algorithm for completely solving matrix games. A matrix game is a finite, two-person, zero-sum game, which is discussed here. There are two well-known algorithms for solving matrix games. The first, the Shapley-Snow algorithm, gives all solutions but is extremely computationally intensive. The second, based on linear programming, quickly finds a single solution.
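Here is a minimal sketch of the linear-programming route, assuming scipy (the function name is mine): the row player's optimal mixed strategy is found by maximizing the payoff v that the strategy guarantees against every column.

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(A):
    """One optimal mixed strategy for the row player of the zero-sum
    game with payoff matrix A (rows = row player's pure strategies),
    found by maximizing v subject to x^T A >= v on every column,
    x >= 0, sum(x) = 1."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # Variables: (x_1, ..., x_m, v).  linprog minimizes, so use -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # Column constraints: v - sum_i x_i * A[i, j] <= 0 for every column j.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probability constraint: sum_i x_i = 1.
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.ones(1)
    bounds = [(0, None)] * m + [(None, None)]   # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Example: matching pennies has value 0 and optimal strategy (1/2, 1/2).
x, v = solve_matrix_game([[1, -1], [-1, 1]])
print(x, v)
```

The column player's optimal strategy can be found the same way by solving the game whose payoff matrix is the negative transpose of A.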

I have found a modification of the Shapley-Snow algorithm which accelerates the computation using the information gathered by the linear programming algorithm. This does not improve the worst-case computation time, but so far it appears to be a great improvement in the average case.
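For contrast, here is a sketch of the classical, unaccelerated Shapley-Snow enumeration as I understand it (this is not the modification described above): every square submatrix is treated as a candidate kernel, and the value and strategies built from its adjugate are kept only when they actually solve the full game. The number of submatrices grows exponentially, which is why the classical procedure is so expensive.

```python
import numpy as np
from itertools import combinations

def adjugate(M):
    """Adjugate (classical adjoint) of a square matrix via cofactors."""
    k = M.shape[0]
    if k == 1:
        return np.ones((1, 1))
    adj = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

def shapley_snow(A, tol=1e-9):
    """Enumerate basic optimal strategy pairs of the matrix game A by
    checking every square submatrix (kernel).  Each candidate pair is
    kept only if it is verified to be optimal for the full game."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    solutions = []
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                M = A[np.ix_(rows, cols)]
                adj = adjugate(M)
                s = adj.sum()
                if abs(s) < tol:
                    continue
                v = np.linalg.det(M) / s            # candidate game value
                x = np.zeros(m)
                y = np.zeros(n)
                x[list(rows)] = adj.sum(axis=0) / s  # row player's weights
                y[list(cols)] = adj.sum(axis=1) / s  # column player's weights
                if (x >= -tol).all() and (y >= -tol).all() \
                        and (x @ A >= v - tol).all() and (A @ y <= v + tol).all():
                    solutions.append((x, y, v))
    return solutions

# Matching pennies: the only basic solution is x = y = (1/2, 1/2), value 0.
print(shapley_snow([[1, -1], [-1, 1]]))
```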