Only a few days after talking about work on redistricting via geometric clustering, I came across (via Andrew Gelman) yet another paper on automatic redistricting, by Roland Fryer and Richard Holden.
It's a very interesting paper, and at the risk of sounding patronizing, I have to say that I'm impressed by their use of power diagrams and the Voronoi trick, tools that many theoryCS folks may not have encountered. Once again, the problem they're looking at is redistricting: can you design voting districts to satisfy certain fairness criteria ?
One specific question they ask is: can you evaluate the quality of a state's redistricting plan ? Their answer takes the form of an approximation ratio: they fix a measure of quality for a redistricting, and take the ratio of the actual plan's cost to the cost of the optimal plan. Their cost is a well-known clustering measure: the cost of a cluster (here, a district) is the sum of squared distances over all pairs of points in the cluster, and the cost of a clustering (here, a redistricting) is the sum of the cluster costs. One extra twist is that they require all clusters to be equal in size, up to rounding. That is, a 2-clustering of 9 people must have one cluster of 5 and one of 4.
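To make the measure concrete, here is a minimal Python sketch of this cost (my own illustration; the toy two-district plan is made up):

```python
import itertools

def cluster_cost(cluster):
    # Cost of one district: sum of squared Euclidean distances over all
    # unordered pairs of points in it.
    return sum(sum((a - b) ** 2 for a, b in zip(p, q))
               for p, q in itertools.combinations(cluster, 2))

def plan_cost(clusters):
    # Cost of a redistricting: sum of the district costs.
    return sum(cluster_cost(c) for c in clusters)

# Made-up toy plan: two districts of three voters each.
plan = [[(0, 0), (1, 0), (0, 1)], [(10, 10), (11, 10), (10, 11)]]
print(plan_cost(plan))  # 8
# The paper's quality measure is then plan_cost(actual) / plan_cost(optimal).
```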
The problem of computing an optimal clustering is NP-hard if k, the number of clusters, is part of the input. However, if k is a fixed constant, there's an elegant method for solving these kinds of problems, which I'll call the "Voronoi trick".
The Voronoi Trick:
A clustering of n items into k clusters is a partitioning problem, and as such there are roughly k^n ways of assigning items to clusters (k^n comes from treating the clusters as distinct; treating them as interchangeable reduces the count by a factor of up to k!). But for geometric problems, we shouldn't expect arbitrary partitions to be candidates for the optimum: proximity constraints and the triangle inequality rule most of them out. Is there some way of restricting the set of possible partitions ?
Consider the following set of possible partitions of a set of n points in Euclidean space: we put down k centers, and assign each point of the input to its nearest center. Not all placements of k centers lead to distinct partitions: I can always perturb a center slightly without changing the combinatorial structure of the partition. We'll call such a partition a Voronoi partition, from the connection to Voronoi diagrams.
Now it's clear that not all partitions of the input are realizable as Voronoi partitions. For example, for k=2 in the plane, the bisector of the two centers is a line, so every Voronoi partition is a split of the points by a line: there are only O(n^2) such partitions, rather than the 2^n ways of partitioning a set of items into two groups. In general, for a set of n points in d dimensions, there are roughly n^{kd} distinct Voronoi partitions: intuitively, a partition is determined by the kd coordinates of the centers, and as the centers move through this kd-dimensional configuration space, the partition changes only when some input point crosses a bisector between two centers, so the resulting arrangement has roughly n^{kd} cells. This is polynomial in n, rather than exponential.
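Here's a quick empirical way to see this shrinkage (my own toy experiment, not from any of the papers): sample many random placements of two centers and count the distinct partitions they induce.

```python
import random

# Toy experiment: for a fixed point set, sample many random placements of
# two centers and count how many *distinct* partitions they induce. The
# count stays on the order of n^2, far below the 2^n two-colorings.
random.seed(0)
points = [(random.random(), random.random()) for _ in range(10)]

def voronoi_partition(points, centers):
    # Assign each point to its nearest center; return the partition in a
    # canonical, order-independent form so duplicates collapse.
    def nearest(p):
        return min(range(len(centers)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(p, centers[i])))
    groups = {}
    for idx, p in enumerate(points):
        groups.setdefault(nearest(p), set()).add(idx)
    return frozenset(frozenset(g) for g in groups.values())

seen = set()
for _ in range(20000):
    centers = [(random.uniform(-1, 2), random.uniform(-1, 2))
               for _ in range(2)]
    seen.add(voronoi_partition(points, centers))
print(len(seen), "distinct partitions, versus 2^10 =", 2 ** 10)
```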
Now here comes the trick: for many clustering problems, including the one mentioned above, we can show that the optimal clustering must be realizable as a Voronoi partition. This kind of result has appeared in many different forms: Inaba et al showed that for two different kinds of minimum variance clustering (one where the cost of a cluster is the sum of squared distances from the center, and the other where the cost is the pairwise sum of squared distances of points in the cluster), the optimal clustering is a Voronoi partition. Other forms of this trick were used by Aurenhammer et al in their paper on least-squares partitioning where they showed that we could compute the optimal RMS matching between two point sets using Voronoi partitions.
One aspect of this trick that I elided: often the Voronoi partitioning doesn't come from a straight distance function, but from a weighted one. One way this is done is to associate a weight w(s) with each site, and define the Voronoi bisector as the set of points satisfying d(x,s) - w(s) = d(x,s') - w(s'), rather than the usual d(x,s) = d(x,s'). If you recall the view of a Voronoi diagram as the lower envelope of a collection of cones in one higher dimension, with apexes at the site locations, then this additively weighted Voronoi diagram corresponds to lifting the cones by different (even negative) heights given by w(s). The variant relevant here weights the squared distance instead, with bisectors d(x,s)^2 - w(s) = d(x,s')^2 - w(s'); these bisectors are hyperplanes, and the resulting diagram is called a power diagram.
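A minimal sketch of the power-diagram assignment rule (my own illustration; the sites, weights, and points are made up): each point goes to the site minimizing d(x,s)^2 - w(s), and raising a site's weight enlarges its cell.

```python
def power_assign(points, sites, weights):
    # Assign each point to the site minimizing the power distance
    # ||x - s||^2 - w(s).
    def power_dist(p, s, w):
        return sum((a - b) ** 2 for a, b in zip(p, s)) - w
    return [min(range(len(sites)),
                key=lambda i: power_dist(p, sites[i], weights[i]))
            for p in points]

sites = [(0.0, 0.0), (4.0, 0.0)]
weights = [0.0, 6.0]   # the heavier site's cell expands past the midpoint
points = [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
print(power_assign(points, sites, weights))  # [0, 1, 1]
```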
The Voronoi trick and clustering:
Apart from reducing the space of partitions we need to consider, the Voronoi trick also allows us to isolate the influence of points on individual clusters. This idea was first exploited in the above-mentioned paper by Inaba et al, and has since been used in approximate clustering work by Badoiu, Har-Peled and Indyk, as well as the recent linear-time k-means algorithms by Kumar, Sabharwal and Sen. The argument goes something like this:
Since we know that each center is "influenced" only by the points in its Voronoi cell, we can focus on the problem of finding a single center out of the k. This is typically done by running a (simpler) 1-median or 1-means algorithm on a random sample of points that is likely to contain many points from a single cluster. Once this is done, we can "peel off" this center and repeat. The ability to sample and peel comes from the idea of irreducibility: namely, either a k-clustering can be approximated by a (k-1)-clustering, or, if not, the k^th cluster is "reasonably well separated" from the rest and can be isolated using the above ideas.
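Here is a very loose Python sketch of the sample-and-peel outline (my own simplification: it omits the careful sampling bounds and approximation analysis in those papers, and the sample size, number of trials, and peel fraction below are arbitrary):

```python
import random

random.seed(1)

def mean(pts):
    d = len(pts[0])
    return tuple(sum(p[i] for p in pts) / len(pts) for i in range(d))

def sqdist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def sample_and_peel(points, k, trials=50, sample_size=10):
    # Loose sketch: find one good center at a time from random samples,
    # peel off the ~n/k points nearest to it, and recurse on the rest.
    pts = list(points)
    centers = []
    for _ in range(k):
        # Candidate centers are means of small random samples; keep the one
        # with the best 1-means cost on the points that remain.
        cands = [mean(random.sample(pts, min(sample_size, len(pts))))
                 for _ in range(trials)]
        best = min(cands, key=lambda c: sum(sqdist(p, c) for p in pts))
        centers.append(best)
        pts.sort(key=lambda p: sqdist(p, best))
        pts = pts[len(points) // k:]   # peel off the nearest chunk
    return centers

# Two made-up Gaussian blobs; the sketch should find one center near each.
data = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(60)] +
        [(random.gauss(8, 1), random.gauss(8, 1)) for _ in range(60)])
print(sample_and_peel(data, 2))
```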
Returning to redistricting:
Coming back to the redistricting paper: what they do, essentially, is show that their all-pairs measure of cluster cost reduces to the "cost to the centroid" measure, since all clusters are required to have the same size. They can then use the power diagram to search efficiently for solutions. If I may nitpick, I wonder why they didn't look into the many approximation schemes available for these problems (but this is really a nitpick).
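The reduction rests on a standard identity: for a cluster of m points with centroid c, the all-pairs sum of squared distances equals m times the sum of squared distances to c, so when every cluster has the same size m, the two cost measures agree up to the fixed factor m. A quick numerical check (with made-up data):

```python
import itertools, random

random.seed(2)
cluster = [(random.random(), random.random()) for _ in range(7)]
m, d = len(cluster), len(cluster[0])
centroid = tuple(sum(p[i] for p in cluster) / m for i in range(d))

def sqdist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

allpairs = sum(sqdist(p, q) for p, q in itertools.combinations(cluster, 2))
to_centroid = sum(sqdist(p, centroid) for p in cluster)
print(abs(allpairs - m * to_centroid) < 1e-9)  # True
```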
Postscript:
There are other tricks that are also worthy of being called a 'Voronoi trick'. One such idea dates back to a paper by Dobkin, Drysdale and Guibas from 1983 ("D.P. Dobkin, R.L. Drysdale, III, and L.J. Guibas. Finding smallest polygons. In Advances in Computing Research, Vol. 1, JAI Press, 1983, 181-214."), and works as follows:
Suppose you want to find the k points from a point set of size n that optimize some function: for example, the k points with the smallest enclosing ball, or the k points with the smallest convex hull area, and so on. Further, let's say you have some expensive algorithm that can solve this problem in time O(n^c). For many such functions, it can be shown that the optimal set of k points must lie within a single cell of the O(k)^th order Voronoi diagram of the point set. In case you haven't encountered this notion before, the k^th order Voronoi diagram is what you get if, instead of taking the lower envelope of your cones, you take the k^th level of the arrangement. Intuitively, it consists of cells within which every point has the same set of k nearest neighbours.
The k^th order Voronoi diagram of points in the plane has complexity O(kn). Using the above fact, we can enumerate the cells and run the expensive algorithm on the O(k) points associated with each cell. This gives a running time of O(kn) * O(k^c) = O(k^{c+1} n), rather than O(n^c). This idea works up to a point: see this paper by fellow bloggers David and Jeff for improvements that bypass the Voronoi construction in favor of iterated nearest neighbors.
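To convey the flavor of restricting an expensive search to small neighborhoods, here is a toy Python version of the iterated-nearest-neighbor idea (my own simplification, not the algorithm from either paper: the "expensive algorithm" is brute force over k-subsets, so this only makes sense for tiny k, and the neighborhood size 3k is an arbitrary stand-in for the O(k) bound):

```python
import itertools, math, random

def diameter(subset):
    return max(math.dist(p, q) for p, q in itertools.combinations(subset, 2))

def min_diameter_k_subset(points, k, c=3):
    # Instead of brute force over all C(n, k) subsets, look only at each
    # point's c*k nearest neighbors and brute-force within that small set.
    best = None
    for p in points:
        neighbors = sorted(points, key=lambda q: math.dist(p, q))[:c * k]
        for subset in itertools.combinations(neighbors, k):
            d = diameter(subset)
            if best is None or d < best[0]:
                best = (d, subset)
    return best

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(40)]
print(min_diameter_k_subset(pts, k=3)[0])
```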
Comments:

What you are talking about here sounds very similar to the `segmentation problems' of Kleinberg et al. In particular, the price segmentation problem is as follows: client i would buy a quantity q_i(p) = a_i - b_i p if the price is p; the constants a_i and b_i describe customer i. You can choose for each client what price to use when selling, but you can only choose from k predefined possibilities (say, because you need to prepare in advance some documentation arguing why that price is fair). What should the k prices be? There are two variants of the problem: one considers k given, and one allows you to choose the optimal k.
Hi, I think this post is very interesting and enlightening. I came across your article after doing a search on Voronoi diagrams. Can you please explain why "for a set of n points in d dimensions, there are roughly n^{kd} distinct Voronoi partitions"? Perhaps it comes from choosing k points to be in distinct Voronoi regions? But I don't see how the dimension of the space comes into play.
I tried looking at the Inaba et al reference but couldn't understand their proof. I also don't see how you can enumerate all distinct Voronoi partitions in polynomial time.
Thank you very much.