I am particularly interested in group theory, graph theory, linear algebra, combinatorics, parallel algorithms, and parallel computing. While this set of interests is largely theoretical, the combination (pun intended) of all of them for the purpose of large-scale data analysis lends itself to a huge number of applications. Being able to sift through massive amounts of data is paramount to the success of any large organization, and having effective techniques to gather, parse, analyze, and store data is necessary for that success. Further advances in medicine, education, political science, and countless other fields rely on highly optimized data-processing systems.

The power that comes with these systems shouldn't be abused, however. Combating inequality, inequity, and injustice should be the primary aim of their use. Large-scale data analysis is crucial to detecting (or at least testing for) gerrymandering, fighting poverty, and increasing the efficacy of our educational system.

I am also part of the Metric Geometry and Gerrymandering Group. The group is dedicated to investigating the mathematics behind detecting and mitigating gerrymandering and its effects. I am heavily involved in a few projects, and have started researching some of our remaining questions here at the University of Iowa.

k-medoids Clustering on Graphs

In an attempt to intelligently seed MCMC random walk algorithms, we have developed a way to "cluster" the vertices of a graph using a modified k-medoids algorithm, similar to k-means. Unlike k-means, k-medoids selects a representative point that is already in the dataset rather than computing a centroid. This approach is reasonably novel and is extensible beyond path length: virtually any graph metric can be substituted for shortest-path distance.
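The idea can be sketched as follows. This is a minimal illustration, not the actual package code: the graph representation, helper names, and tie-breaking choices here are my own assumptions, and BFS path length stands in for whatever metric is actually used.

```python
# Sketch: k-medoids clustering on a graph, with unweighted BFS
# shortest-path length as the dissimilarity metric.
from collections import deque
import random

def bfs_distances(graph, source):
    """Unweighted shortest-path lengths from source to every vertex."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def k_medoids(graph, k, iterations=100, seed=0):
    """Alternate assign/update steps; medoids are always actual
    vertices of the graph, unlike k-means centroids."""
    rng = random.Random(seed)
    dist = {v: bfs_distances(graph, v) for v in graph}  # all-pairs distances
    medoids = rng.sample(sorted(graph), k)
    for _ in range(iterations):
        # Assignment step: each vertex joins its nearest medoid's cluster.
        clusters = {m: [] for m in medoids}
        for v in graph:
            nearest = min(medoids, key=lambda m: dist[m][v])
            clusters[nearest].append(v)
        # Update step: the new medoid minimizes total in-cluster distance.
        new_medoids = [
            min(members, key=lambda c: sum(dist[c][v] for v in members))
            for members in clusters.values()
        ]
        if set(new_medoids) == set(medoids):
            break  # converged
        medoids = new_medoids
    return clusters

# Example: a path graph 0-1-2-3-4-5 split into two clusters.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(k_medoids(path, 2))
```

Swapping in a different graph metric only requires replacing `bfs_distances`; the assign/update loop is untouched.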


This package hasn't been worked on in a little while, but the math involved is quite important. Developing ways to quickly identify the faces of a graph matters for gerrymandering (and a host of other applied problems) while also being an interesting and nuanced mathematical problem.
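As a small taste of the math (and not the package's actual approach, which I won't speculate on): for a connected planar graph, Euler's formula V - E + F = 2 already pins down how many faces there are, even before you identify them, since F = E - V + 2 once the unbounded outer face is counted.

```python
# Illustration only: counting faces of a connected planar graph
# via Euler's formula V - E + F = 2, i.e. F = E - V + 2.
# (Actually *identifying* the faces requires an embedding, e.g. a
# rotation system; this only counts them.)
def face_count(num_vertices, edges):
    """Faces of a connected planar graph, including the outer face."""
    return len(edges) - num_vertices + 2

# A square with one diagonal: 4 vertices, 5 edges -> 3 faces
# (two triangles plus the unbounded outer face).
square_with_diagonal = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(face_count(4, square_with_diagonal))  # -> 3
```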


This project is a Markov chain Monte Carlo (MCMC) implementation that allows us to sample from the space of possible districting plans. A few questions remain that we intend to investigate in upcoming work:

  1. Are we sure that this Markov chain is irreducible?
  2. How far from the original plan does this chain get?
  3. How does imposing constraints (e.g. ε-population equivalence, connectivity) affect the irreducibility and periodicity of the chain?
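To make the first question concrete: a finite Markov chain is irreducible exactly when its transition graph is strongly connected, and on a toy state space that is directly checkable. The chain below is a made-up illustration, not an actual districting chain, where the real state space is far too large for this kind of brute-force check.

```python
# Toy sketch of question 1: a finite Markov chain is irreducible iff
# its transition graph is strongly connected. For a small transition
# matrix P, strong connectivity follows from two BFS passes (forward
# and reverse) from any single state.
from collections import deque

def reachable(adj, source):
    """Set of states reachable from source in the given adjacency lists."""
    seen = {source}
    queue = deque([source])
    while queue:
        s = queue.popleft()
        for t in adj[s]:
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

def is_irreducible(P):
    """True iff every state communicates with every other state under P."""
    n = len(P)
    adj = [[j for j in range(n) if P[i][j] > 0] for i in range(n)]   # forward edges
    radj = [[j for j in range(n) if P[j][i] > 0] for i in range(n)]  # reversed edges
    return len(reachable(adj, 0)) == n and len(reachable(radj, 0)) == n

# A 3-state chain where state 2 is absorbing: not irreducible.
P_absorbing = [[0.5, 0.5, 0.0],
               [0.0, 0.5, 0.5],
               [0.0, 0.0, 1.0]]
# A 3-state cycle: irreducible (though periodic, with period 3).
P_cycle = [[0.0, 1.0, 0.0],
           [0.0, 0.0, 1.0],
           [1.0, 0.0, 0.0]]
print(is_irreducible(P_absorbing), is_irreducible(P_cycle))  # False True
```

The same framing applies to question 3: adding a constraint zeroes out some transitions, which can disconnect the transition graph and break irreducibility.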