Parametric complexity is the notion of 'intrinsic hardness' that will be used to show the main lower bound. In this post, we'll go over this notion and view s-t min cuts (and, dually, max flows) in this light.
Consider the max-flow problem on $n$ vertices. It is parametrized by the capacity of each edge. Suppose now that we replace each such capacity by a linear function of a parameter $\lambda$, taking care that for some fixed range $[p,q]$ all capacities remain non-negative. We get a parametrized instance in which the optimal flow value $f$ is a function of $\lambda$. Moreover, since the capacities are linear functions of $\lambda$, $f$ itself is piecewise linear.
Let $\phi(f)$ be the number of breakpoints of $f$, i.e. the number of points at which the slope of $f$ changes. To picture this, imagine that some edge is critical for $f$, in that increasing the capacity on this edge increases $f$. Clearly, as $\lambda$ increases, the capacity increases linearly, and so does $f$. At some point though, the capacity becomes large enough that this edge no longer plays a key role in capping $f$. At this point, some other edge (with capacity changing at a different rate) starts playing this key role, and $f$ starts changing linearly again, but at a different rate.
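To make this concrete, here's a tiny hypothetical example (mine, not from the paper): a two-edge path whose capacities are linear in the parameter. The max flow is the pointwise minimum of two lines, with a single breakpoint where the bottleneck edge changes.

```python
from fractions import Fraction

# Toy parametric max-flow on a single path s -> a -> t (hypothetical example).
# cap(s, a) = 1 + lam and cap(a, t) = 3 - lam, both non-negative on [-1, 3].
# The max flow is min(1 + lam, 3 - lam): piecewise linear in lam, with one
# breakpoint at lam = 1, where the bottleneck edge changes.

def max_flow(lam):
    return min(Fraction(1) + lam, Fraction(3) - lam)

def breakpoints(f, p, q, step=Fraction(1, 8)):
    """Count slope changes of a piecewise-linear f on [p, q] by sampling."""
    xs = []
    x = p
    while x <= q:
        xs.append(x)
        x += step
    # Exact slope on each sample interval; a breakpoint is a slope change.
    slopes = [(f(xs[i + 1]) - f(xs[i])) / step for i in range(len(xs) - 1)]
    return sum(1 for s1, s2 in zip(slopes, slopes[1:]) if s1 != s2)

print(breakpoints(max_flow, Fraction(-1), Fraction(3)))  # -> 1
```

The sampling step is a crude heuristic that works here because the breakpoints are well separated; the definition in the paper counts slope changes exactly.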
Now suppose we compute $\phi(f)$ for all possible parametrizations. The maximum value is called $\phi$, and is the parametric complexity of the problem. Technically, $\phi$ is also a function of both $n$, the number of vertices in the graph, and $\beta$, the bitlength of the input. The main result of this paper is the following two-step dance:
- Max-flow has high (exponential) parametric complexity
- A problem with high parametric complexity has high parallel complexity
Before we go on, another simple observation. Parametric complexity starts looking a lot less mysterious if you think geometrically. Let's look at min-cut, since it's a bit easier to see what's going on. Any s-t cut is some set of edges, and its capacity is the sum of the corresponding edge capacities. Therefore, the capacity of a cut is a linear function of $\lambda$. Graphically, we can think of this capacity as a line segment starting at $\lambda = p$ and ending at $\lambda = q$.
We can do this for all cuts in the graph (and there are exponentially many of them). What we get is a familiar beast: the arrangement of the collection of line segments. Now what's the min cut? For any value of $\lambda$, it's the cut with the smallest value. In other words, the function describing the min cut as a function of $\lambda$ is the lower envelope of this arrangement of line segments, and $\phi$ is merely the complexity of this lower envelope (the number of pieces in it).
Now the complexity of the lower envelope of a collection of lines (note that although these are line segments, they extend across the domain of interest and therefore function as lines for combinatorial purposes) in 2D is linear in the number of lines. Since there are exponentially many cuts, we can expect that the complexity of the lower envelope is exponential. The hard part is showing that all the dependencies (a single edge can participate in many cuts) don't prevent us from achieving this.
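The envelope view is easy to compute directly. The sketch below (a hypothetical helper, not from the paper) counts the pieces of the lower envelope of a set of lines, given as (slope, intercept) pairs with rational coefficients, over an interval $[p,q]$:

```python
from fractions import Fraction
from itertools import combinations

def envelope_pieces(lines, p, q):
    """Count maximal intervals of [p, q] on which a single line is lowest.

    Exact method: collect all pairwise intersection points inside (p, q),
    then check which line attains the minimum on each gap between them.
    Pass p, q (and any non-integer coefficients) as Fractions.
    """
    xs = {p, q}
    for (m1, b1), (m2, b2) in combinations(lines, 2):
        if m1 != m2:
            x = Fraction(b2 - b1, m1 - m2)
            if p < x < q:
                xs.add(x)
    xs = sorted(xs)
    # The winning line is constant strictly inside each gap, so test midpoints.
    winners = []
    for lo, hi in zip(xs, xs[1:]):
        mid = (lo + hi) / 2
        winners.append(min(range(len(lines)),
                           key=lambda i: lines[i][0] * mid + lines[i][1]))
    return 1 + sum(1 for w1, w2 in zip(winners, winners[1:]) if w1 != w2)

# Three lines: y = x, y = -x + 2, y = 1/2; the envelope has three pieces.
print(envelope_pieces([(1, 0), (-1, 2), (0, Fraction(1, 2))],
                      Fraction(-2), Fraction(4)))  # -> 3
```

For $k$ lines the envelope has at most $k$ pieces, which is the sense in which an exponential piece count forces exponentially many candidate cuts.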
Some technical points:
- We assume that non-numeric parts of an input remain fixed for all values of the parameter (i.e. an edge doesn't disappear or reappear)
- All linear functions come with rational coefficients.
- The bitsize of the problem is defined as the maximum bitsize of any of the vertex descriptions and the coefficients of the linear functions.
High parametric complexity bounds have already been proved for many problems of interest. Combinatorial linear programming with linear bitsizes has parametric complexity $2^{\Omega(n)}$. Min-cost flow has similar complexity, and the same bound can be proved for max-flow with quadratic bitsizes. Mulmuley in fact reproves this last result with what he says is a simpler argument, and also shows that, in contrast, the global min-cut problem has polynomial parametric complexity (which is a good thing, since there's an RNC algorithm for it!).
Let's take a closer look at the parametric complexity of max-flow (actually its dual, min cut). The goal is to come up with a construction that exhibits a parametric complexity exponential in the input. So we need a graph, and a way to parametrize edge capacities.
The graph itself is quite simple. It has a source s, a sink t, and vertices 1 through n (call this set A) and 1' through n' (call this set B). s is connected to all the interior vertices, as is t. Each of A and B forms a clique. Moreover, vertices i and j', and j and i', are connected (for $i \neq j$). One possible cut of the graph consists of all vertices in A (along with s). Another consists of all vertices in B (along with s). It will turn out that by choosing weights appropriately, both of these will represent min cuts, as will any set formed by exchanging some vertices from A with their primed counterparts in B. Since there are $2^n$ ways of doing this, we'll get the desired bound.
This is where things start getting a little tricky, and I must confess that I don't have a clean way of explaining what happens next (which is a short way of saying that you might be better off reading the relevant construction!). In brief, the idea works like this: an edge from s to i (or i') is given a capacity $c_i$, where $c_i$ is a constant, and the capacities are set up to be exponentially decaying: each $c_i$ is a constant factor larger than $c_{i+1}$. An edge from i to t is given a capacity proportional to $T + \lambda$, and an edge from i' to t is given a capacity proportional to $T - \lambda$. The parameter $\lambda$ is set to run from $-T$ to $T$. Note that at these two extremes, either all edges from A to t have capacity zero (at $\lambda = -T$), or all edges from B to t have capacity zero (at $\lambda = T$), making them good candidates for cut edges.
There are also cross edges to consider. An edge from i (or i') to the opposite side has a fixed "default" capacity. However, edges in the cliques (inside A and B) are weighted differently: the edge from i to j in A has a capacity depending on both endpoints, and the edge from i' to j' has the same capacity.
What's the capacity of the cut that contains s and A on one side? We add up all edges from s to B (a sum of the constants $c_i$), all edges from A to B (another constant), and all edges from A to t (a term proportional to $T + \lambda$). Clearly, this is minimized at $\lambda = -T$, and it turns out that this is in fact a min cut. A similar argument works for the cut containing s and B, which is minimized at $\lambda = T$.
But what happens as $\lambda$ increases? Note that it makes no sense to have i and i' on the same side of a cut, because there's no edge between them and so it's always better to keep them separate. This means that the min cut is always "complementary" with respect to A and B, which is useful. It also means that the cut will change by having i and i' swap positions as $\lambda$ increases. The question that remains is how many times this can happen.
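Here's a small sketch of that sweep. The capacity values below (the constants `c` and `a`, the cross-edge capacity `CROSS`, and the clique capacity `CLIQUE`) are made-up placeholders rather than the paper's actual values, and edges are treated as undirected; the point is only to show the mechanics: each complementary cut is a line in $\lambda$, and the min cut changes by swapping i with i' as $\lambda$ grows.

```python
from fractions import Fraction
from itertools import product, combinations

# Toy parametric min-cut sweep in the spirit of the construction above
# (n = 2). All capacity values are illustrative placeholders.
n = 2
T = 4
c = [16, 4]    # s->i and s->i': exponentially decaying constants
a = [2, 1]     # i->t has capacity a[i]*(T + lam); i'->t has a[i]*(T - lam)
CROSS = 2      # "default" capacity of a cross edge i--j' (i != j)
CLIQUE = 1     # capacity of a clique edge i--j in A (same for i'--j' in B)

def cut_line(x):
    """Capacity of the complementary cut with vertex i on the s-side iff
    x[i] == 1 (and then i' on the t-side), as (slope, intercept) in lam."""
    slope = sum(a[i] if x[i] else -a[i] for i in range(n))
    const = sum(c) + T * sum(a)   # s-edges plus the constant part of t-edges
    for i, j in combinations(range(n), 2):
        if x[i] == x[j]:
            const += 2 * CROSS    # cross edges i--j' and j--i' are both cut
        else:
            const += 2 * CLIQUE   # clique edges i--j and i'--j' are both cut
    return slope, const

# One line per complementary cut: 2^n candidates, all with distinct slopes.
lines = {x: cut_line(x) for x in product((1, 0), repeat=n)}

def min_cut_at(lam):
    return min(lines, key=lambda x: lines[x][0] * lam + lines[x][1])

# Sweep lam across [-T, T], recording each time the optimal cut changes.
sequence = []
lam = Fraction(-T)
while lam <= T:
    x = min_cut_at(lam)
    if not sequence or sequence[-1] != x:
        sequence.append(x)
    lam += Fraction(1, 4)

print(sequence)
```

With these placeholder numbers the sweep starts at the all-A cut `(1, 1)` at $\lambda = -T$ and ends at the all-B cut `(0, 0)` at $\lambda = T$, passing through the mixed cuts in between; the paper's construction arranges the capacities so that the number of such changes is exponential in $n$.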
It turns out that via a reasonably clever argument (albeit one with a tedious inductive proof), it can be shown that the range spanned by the parameter can be divided into $2^n$ regions by a binary division, and when each region is encoded using a standard bitstring, the string encodes the min cut set (1s denote elements from A, 0s denote elements from B, and so on). The proof works by arguing that within any region defined by the binary division, the min cut remains invariant, and can be read off from the binary expansion of the region. This is proved by induction on the length of the bitstring.
Finally, since each min cut defines a linear function of the parameter with a different slope, we obtain the desired result: namely that the number of distinct slope changes is exponential.