Ruminations on computational geometry, algorithms, theoretical computer science and life
Friday, April 30, 2004
SoCG Early Registration Deadline is May 1
More info here
Holes in sets
The annual sausage convention is nearing, and it's time to look at the coming attractions...
A "Ramsey theorem" says loosely that one can find a structured subset of any size in a larger set, as long as the set is large enough.
The usual example of this is that if you take 6 points, connect all pairs, and then color each edge with either red or blue, you will always get a monochromatic triangle.
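This instance of Ramsey's theorem (R(3,3) = 6) is small enough to confirm by exhaustive search. A quick sketch in Python (the helper name has_mono_triangle is mine, purely for illustration):

```python
from itertools import combinations, product

def has_mono_triangle(coloring, n):
    """True if some triangle among n labeled points is monochromatic."""
    for a, b, c in combinations(range(n), 3):
        if coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]:
            return True
    return False

# Exhaustively check all 2^15 = 32768 red/blue colorings of K6.
edges6 = list(combinations(range(6), 2))
assert all(
    has_mono_triangle(dict(zip(edges6, bits)), 6)
    for bits in product([0, 1], repeat=len(edges6))
)

# 6 points are necessary: on 5 points, color the pentagon red and the
# pentagram diagonals blue -- both are 5-cycles, hence triangle-free.
pentagon = {tuple(sorted((i, (i + 1) % 5))) for i in range(5)}
c5 = {e: (0 if e in pentagon else 1) for e in combinations(range(5), 2)}
assert not has_mono_triangle(c5, 5)
```

The pentagon/pentagram coloring also shows why 6 is tight: with 5 points one can dodge the monochromatic triangle.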
Erdos and Szekeres proved one of the first "geometric" Ramsey theorems, which says:
Theorem. For any k, there exists an n = n(k) such that every set of n(k) points in the plane in general position contains a convex subset of size k.
This subset is in convex position: every one of its points is a vertex of the subset's convex hull.
However, if you change the problem slightly, things start getting interesting. Suppose that instead of asking for a convex subset, you wanted an empty convex subset. In general, suppose you want a "k-hole": k points in convex position whose convex hull contains no other point of the set. Then the above theorem breaks down. Specifically,
Theorem (Horton). There exist arbitrarily large finite point sets in the plane that do not contain a 7-hole.
But we also know:
Every sufficiently large point set in general position in the plane contains a 5-hole.
Which leaves a tantalizing open question: what about 6 ?
Pinchasi, Radoicic and Sharir have a paper that develops some key combinatorial equalities and inequalities around this question.
Update: k = 6 is resolved.
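For small point sets, these definitions can be explored by brute force: a subset is a k-hole if it is in convex position and its hull contains no other point of the set. A sketch using exact integer arithmetic (the helper names convex_hull and has_k_hole are mine, not taken from any of the papers above):

```python
from itertools import combinations

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in ccw order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def has_k_hole(points, k):
    """True if some k points are in convex position with no other
    point of the set strictly inside their hull."""
    for sub in combinations(points, k):
        hull = convex_hull(sub)
        if len(hull) < k:
            continue  # not in convex position
        inside = any(
            all(cross(hull[i], hull[(i + 1) % k], q) > 0 for i in range(k))
            for q in points if q not in sub
        )
        if not inside:
            return True
    return False

# A (near-)regular pentagon plus its center: it has a 4-hole, but the
# only convex 5-subset (the pentagon itself) contains the center point.
pts = [(0, 100), (-95, 31), (-59, -81), (59, -81), (95, 31), (0, 0)]
assert has_k_hole(pts, 4)
assert not has_k_hole(pts, 5)
```

This exhaustive approach obviously doesn't scale, but it is handy for sanity-checking small configurations like Horton sets.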
Books that read like a thriller...
Occasionally one finds technical books that, either because of the subject matter, or the style of writing, or some other inexplicable properties, make for such gripping reading that one almost wants to read from cover to cover in a single sitting.
Two such books that I have come across are Combinatorial Geometry, by Janos Pach and Pankaj Agarwal, and Jiri Matousek's Lectures on Discrete Geometry. These books overlap a little, but are largely disjoint in their coverage, and contain some of the true gems in computational geometry, like Helly's theorem and other theorems in convex geometry, the theory of epsilon-nets, a wealth of Erdos problems, and many many more.
I first read Combinatorial Geometry in grad school, and remember my advisor remarking that he couldn't put the book down. At that time I found the comment strange, and intriguing enough to look at the book. Five hours later, I knew what he meant...
Tuesday, April 27, 2004
We are lucky...
Having "come of age" in the Internet era, I consider myself rather fortunate. Till the crash of 2000, it was not hard to convince myself of the indispensability of computer science (and computer scientists) and the research job market reflected that. People who had been around longer used to talk about the early 1990s when CS research jobs were hard to come by and graduating Ph.Ds did postdocs as a matter of course to bide their time.
It seems a lot worse now, but compared to other professions, we are incredibly fortunate. This musing was prompted by an article from the Chronicle of Higher Education about the Invisible Adjunct, an anonymous adjunct in the humanities who wrote an eponymous and very popular blog. She just had to stop writing, having given up on her quest of finding a tenure-track position after having 'adjuncted' for five years.
An excerpt:
The Invisible Adjunct grew up in a working-class family in Canada. She put herself through college near home. That's where advisers first gave her the counsel she now tries not to hate them for: "You're too smart for law school," they told her. "You're one of us." And with that, she was off to an elite graduate school in the United States.
Five or so years later, she had a new doctorate in history and she waded into the job market. She managed to score a campus interview at an elite research university, but came in second. Given how small the world of academe is, she doesn't want to tell everyone where she almost landed. Let's just say you've all heard of it.
But in the five years since, after coming so close at one of the nation's top institutions, she has never been granted another campus interview at an American college, though she has applied all over. To be so close and then to disappear into the fog of adjunctdom makes the system seem all the more unfair.
There's an interesting related article at Crooked Timber about academic Calvinism (short summary: the worthy are rewarded with tenured jobs, and the unworthy go to Hell (or adjunctland)).
Link via Brad DeLong
Discrete Spacetime ?
I've been reading Lee Smolin's 'Three Roads to Quantum Gravity' recently. For those of you who may not know about his work, Lee Smolin is a physicist working on efforts to unify quantum theory with general relativity. This book describes three paths to doing so, and is part of a new attempt to popularize some of the more radical theories on the cutting edge of physics (Brian Greene's 'Elegant Universe' is another). The March issue of Scientific American has a brief writeup on Smolin's work as well.
There is much that I don't understand in this book, and there are some fascinating and counter-intuitive ideas there. However, what intrigued me the most was this (from Chapter 8):
With matter there is a limit to how small we can divide something, for at some point we are left with individual atoms. Is the same true of space ? If we continue dividing, do we eventually come to a smallest unit of space, some smallest possible volume ? Or can we go on forever, dividing space into smaller and smaller bits, without ever having to stop ? All three of the roads I described in the Prologue favour the same answer to this question: that there is indeed a smallest unit of space. It is much smaller than an atom of matter, but nevertheless, [...snip...], there are good reasons to believe that the continuous appearance of space is as much an illusion as the smooth appearance of matter.
As he goes on to explain, it is not that good ol' Euclidean space is extra chunky; the notion of space itself breaks down at a certain level, giving rise to discrete graph-like structures. Now to a discrete geometer or a combinatorial person, the idea that our universe itself is discrete is a delicious idea. The idea of entropy (in the physical sense) has been related to entropy (in the information-theoretic sense) using such ideas.
Further, it begins to make me wonder about the Church-Turing thesis (well almost anything will, but that's a different story). One of the objections that has been raised regarding the universality of the Church-Turing thesis is the fact that Turing machines only operate on discrete entities, and have no way of dealing with real numbers. Now I am going way out on a limb here, but could a discrete universe make this a moot point ?
Update: I just picked up 'The Elegant Universe'. The key difference is in the approach each emphasizes: Greene stresses the importance of string theory, while Smolin, though discussing three different paths to quantum gravity, favors loop quantum gravity (both reflecting the authors' own expertise). As works of writing, though, I have to say that I find 'The Elegant Universe' a bit more of a pleasant read. It feels like Smolin is struggling to hit the right metaphors to explain the theories he discusses (no fault of his - this is one of the deepest areas of science, after all).
Monday, April 26, 2004
Geometry and Combinatorics
Geometry and combinatorics belong together: this is a statement that most computational geometry folks can agree on. It is also a statement that encapsulates the primary source of tension and profundity in this field, namely the interplay between the discrete (algorithms) and the continuous (geometry).
Algorithms take as input (and generate as output) discrete entities; at the most basic level strings of 0s and 1s. A geometric entity such as a line or point cannot always be represented so concisely. It would not be an exaggeration to say that it is this conflict that has inspired many of the techniques, theories, and areas of computational geometry.
The most immediate effect has been practical. As anyone who has tried to implement a geometric algorithm quickly realizes, issues of precision and robustness are not dry academic concerns when your convex hull starts looking like spaghetti ! Some wonderful work has come out of the problems in implementing geometric algorithms: robust computation, exact polynomial evaluation, determining the precision needed for a computation... In fact computational geometry has stood out among all algorithmic disciplines in its attention to issues of implementation.
There is a deeper level of interplay between the discrete and the continuous in geometric algorithms though. In a sense, some of the best theory in computational and combinatorial geometry has come from trying to abstract out a combinatorial structure that describes a geometric setting, and then using that combinatorial structure to prove a desired result. The trick has always been to take what is needed from the underlying geometry to construct an appropriate combinatorial space to work with.
Examples of this abound. Simple results on arrangements of lines can be proved merely by observing that the lines and their intersections can be represented as a planar graph. The Koebe-Andreev-Thurston theorem characterizes planar graphs in terms of contacts between circles.
Many theorems on arrangements use only incidence axioms, and thus are true in the projective plane, which in turn can be represented combinatorially.
What is so fascinating about this is that you can get deeper and deeper into the geometric structure, and new combinatorial structures keep emerging. If projective spaces give you a lower bound (and they often do), then it tells you that you need more than just incidence relationships. If you need to use convexity, then Helly's theorem (and Radon's theorem and others) gives you a nice way of handling convexity combinatorially. If combinatorial set systems defined on geometric objects are giving weak bounds, then VC-dimension arguments give you a way of capturing geometry in a combinatorial way.
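As a tiny illustration of how convexity can be handled combinatorially, consider Helly's theorem in dimension one, where convex sets are intervals: if every two of them intersect, they all share a common point. A sketch (the helper names are mine):

```python
import random
from itertools import combinations

def pairwise_intersect(intervals):
    # two intervals [a0,a1], [b0,b1] meet iff max(a0,b0) <= min(a1,b1)
    return all(max(a[0], b[0]) <= min(a[1], b[1])
               for a, b in combinations(intervals, 2))

def have_common_point(intervals):
    # the common intersection, if any, is [max of lefts, min of rights]
    return max(a for a, b in intervals) <= min(b for a, b in intervals)

# Helly in dimension 1: pairwise intersection forces a common point.
random.seed(0)
for _ in range(1000):
    ivs = [tuple(sorted(random.randint(0, 50) for _ in range(2)))
           for _ in range(8)]
    if pairwise_intersect(ivs):
        assert have_common_point(ivs)
```

In the plane the hypothesis must be strengthened to triples of sets, which is exactly the d+1 pattern of Helly's theorem in dimension d.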
The relationship goes both ways; planar graphs and interval graphs are but two examples of rich graph classes that are defined geometrically. Results in coding theory come from arguments about sphere packing. One of the great breakthroughs in complexity theory, Mulmuley's proof separating P and a slightly restricted variant of NC, uses geometric and topological methods.
Ultimately, this is what I enjoy about computational geometry; it provides a playground for rich algorithms, and the means to utilize tools from some of the deepest areas in mathematics. Truly, sitting in this middle ground is no mean pleasure...
Friday, April 23, 2004
Deadlines:
Lance mentions some upcoming conference deadlines.
Some geometry-related activities:
The Canadian Conference on Computational Geometry (alas, no food-based acronym here): submission deadline is May 3.
Graph Drawing 2004: submission deadline is May 31
Infection: The Solution
And here it is: the solution to last week's problem:
Consider any configuration of infected cells. Calculate the length of the boundary of these cells (a single side being 1 unit long) and call it L.
L never increases as more cells are infected
In the picture above, the blue lines mark the boundary of the infected region. Observe that when a new cell is infected, it destroys at least two boundary pieces (the sides it shares with the cells that infected it) and adds at most two (its other sides). Hence L never increases. Since the boundary of the entire grid is of length 4n, and each infected cell can contribute at most 4 units to L, there must be at least n infected squares to start with.
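The perimeter invariant is easy to check by simulation. A sketch (the helper names perimeter and spread are mine), growing random infections on a small grid and verifying that the boundary length never grows:

```python
import random

NBRS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def perimeter(infected):
    """Each cell contributes one unit per side not shared with
    another infected cell."""
    return sum((x + dx, y + dy) not in infected
               for (x, y) in infected for dx, dy in NBRS)

def spread(infected, n):
    """Infect any cell with >= 2 infected orthogonal neighbours,
    until nothing changes."""
    infected = set(infected)
    changed = True
    while changed:
        changed = False
        for x in range(n):
            for y in range(n):
                if (x, y) not in infected and sum(
                        (x + dx, y + dy) in infected
                        for dx, dy in NBRS) >= 2:
                    infected.add((x, y))
                    changed = True
    return infected

random.seed(1)
n = 8
for _ in range(200):
    start = {(random.randrange(n), random.randrange(n)) for _ in range(6)}
    final = spread(start, n)
    assert perimeter(final) <= perimeter(start)  # boundary never grows
```

Note that the perimeter can strictly decrease: a cell infected by three neighbours destroys three boundary pieces and adds only one, which is why the invariant is "never increases" rather than "stays constant".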
Credit to Shankar Krishnan, and Rudolf Fleischer, who supplied a variant of this proof.
Thursday, April 22, 2004
ACM Turing Award 2003 Goes To Alan Kay
ACM just announced the Turing Award for 2003. It goes to Alan Kay:
For pioneering many of the ideas at the root of contemporary object-oriented programming languages, leading the team that developed Smalltalk, and for fundamental contributions to personal computing.
For those who don't know, the Turing Award is the most prestigious award in computer science and is awarded annually.
Wednesday, April 21, 2004
Ipe
Xfig is the drawing tool of choice for people writing papers that use the latex->dvi->ps pipeline. However, Xfig lacks a geometric sensibility: a way to draw incidences, tangencies, etc. precisely without having to rely on a steady hand.
Ipe is a tool first developed by Otfried Cheong that can handle such features. You can request that objects be snapped to boundaries of other objects, intersections, grid lines, etc. It is extensible (via Ipelets), and generates LaTeX for easy inclusion in files. This is a godsend especially when you have text and pictures interspersed in a drawing (Xfig can do this, but it is a little trickier and you don't really get WYSIWYG behaviour). Ipe also has plugins for computing Voronoi diagrams and Delaunay triangulations.
I've known about Ipe for a while but have been too lazy to change over; playing with it recently was encouraging, as the learning curve appears to be quite gentle.
thanks to Sariel
Interviewing
It is interview season at the labs, and a stream of candidates flows through our corridors. You can always tell who the candidate is by picking out the person in a suit (though that can occasionally be wrong at a corporate lab; thankfully I haven't yet accidentally asked my chairman which university he's interviewing from).
Taking the measure of a candidate is a tricky business in a research lab. Having come through the university system, I feel comfortable evaluating a candidate for a university position; one can look at letters, research interest, general drive and all kinds of parameters. In a research laboratory things can get a bit trickier. One wants the stellar academic credentials of course, but one also likes to see a certain kind of "renaissance" personality; the sense that the person you are interviewing is broader than the story their resume tells.
It is hard to determine this in our modern era of specialization. A candidate usually wants to show depth and expertise in an area, but it seems to me that one of the parameters for success in a lab is the ability to be flexible; to be able to take the skills one has and apply them both in one's own work, and when lending a hand in other projects. This calls for the ability to abstract, generalize and apply, rather than to specialize and drill down. I feel that in this respect at least, labs and universities differ, and this is a more interesting distinction between the two than the standard cliches about how much freedom of choice one has, how much business-oriented work one does, etc etc.
I wonder what kinds of questions can reveal whether or not a potential new hire has this kind of sensibility ?
Tuesday, April 20, 2004
WWW and WWB ?
It is interesting to see how the evolution of the blogosphere has paralleled the evolution of the WWW.
* Tim Berners-Lee and others (Dave Winer and others) develop hypertext link tools like HTML (RSS)
* People start creating web pages (blogs) frenetically in an initial growth spurt.
* Bookmark lists (blogrolls) become all the rage as people try to find more and more web pages (blogs)
* Meta-organizing entities (Yahoo/Kinja) start emerging, as a way to organize and classify the web (blogosphere)
* Google (Technorati) starts using link structure to determine "interesting" pages (blogs)
* Major organizations start creating web pages (RSS feeds) in response to user demand.
Based on this, one can expect the following to happen in the future:
* blogrolls, and the wonderful diversity of blogs, will start shrinking, at least in terms of hits (subscriptions). Prominent blogs will start drawing more and more of the traffic away from smaller ones.
* Audio/video content delivery mechanisms (the enclosure field in RSS is already an example of this) will start gaining importance (napblog anyone?)
* Media outlets will figure out a way to get advertising into blogs, in a more prominent way than currently exists. One example is to force you to clickthru an ad to read a full entry from a blog or a news feed.
What else ?
Sunday, April 18, 2004
A meta-proof
Inspired by Pandagon: a meta-proof of P=/!=NP, coming to a newsgroup near you:
P: I would like to announce my proof of P=/!=NP. The proof is very short and demonstrates how to solve/not solve SAT in polynomial time. You may find a write up of the proof here.
|-- V: I started reading your proof and when you claim 'foobar' do you mean 'foobar' or 'phishbang' ?
|----P: I meant 'phishbang'. Thanks for pointing that out. An updated version is here.
|------V: Well if you meant 'phishbang' then statement "in this step we assume the feefum" is incorrect.
|--------P: No no, you don't understand. I can assume feefum because my algorithm has a glemish.
|-----------V: It has a glemish ? !! But having a glemish doesn't imply anything. All algorithms have glemishes !!
|----V': Yes, and in fact in the 3rd step of argument 4, your glemish contradicts the first propum.
|--V'': I think you need to understand some basic facts about complicity theory before you can go further. Here is a book to read.
|----P: My proof is quite clear, and I don't see why I have to explain it to you if you don't understand. I have spent a long time on this.
|------V': Um, this is a famous problem, and there are many false proofs, and so you do have to convince us that the argument using glemishes can actually work.
|--------P: But what is wrong in my proof ? I don't see any problems with it, and if you can't point one out, how can you say it is wrong.
|----------V'''': I don't have to read the entire proof: glemished algorithms are well known not to work.
|------------V'''''': Check out this reference to see why.
P: <silence>
|--P: <answering earlier post>. This is what I mean by a glemish. it is really a flemish, not a glemish, which answers your objection.
|----P': Keep up the good work P. I tried publishing my result, and these people savaged my proof without even trying to identify a problem. All great mathematical progress has come from amateurs like us. See this link of all the theorems proved by non-experts.
|------V': Oh jeez, not P' again. I thought we had established that your proof was wrong.
|--------P': no you didn't: in fact I have a new version that explains the proof in such simple language even dumb&%&%s like you can get it.
|------P: Thanks P', I understand that there will be resistance from the community since I have proved what they thought to be so hard.
|--V': P, I'm trying to understand your proof, with the flemishes, and it seems that maybe there is a problem in step 517 with the brouhaha technique.
P: <silence>
|----P: V', thanks for pointing out that mistake. you are right. Instead of a brouhaha technique I need a slushpit. The details are complicated, so I will fix it and post a corrected version of the proof shortly. Thanks to all those who gave me constructive advice. I am glad that at least some of you have an open mind to accept new ideas.
This was prompted by the latest P=NP fiasco on comp.theory. I can only express my amazement and wonder at the few tireless souls (and they seem to be the same ones) who patiently try to address each new crackpot proof that comes down the pipe.
Thursday, April 15, 2004
Problem 1: Infection.
People have suggested starting a problem of the day/week/month/ series. The problem of course is that once you start, you have to continue !
My compromise is that I will occasionally post interesting problems that come my way (and will take suggestions), with solutions a week later. No regularity in the supply of problems is guaranteed though.
So, to start, a problem that comes from the puzzle master, Peter Winkler (via Shankar Krishnan):
This is a Game of Life variant. Consider an n × n grid in the plane. Let a grid cell become infected if two of its neighbours are infected (diagonals are not included). The figure below illustrates how a cell can get infected.
Prove that in order for all the cells to become infected, at least n of the cells must have been infected to start with.
As with many of Peter's problems, there exists a Book proof. Your mission, should you choose to accept it, is to find it. Email me or post your solution in the comments. The first k people to get the right answer, (where k is a random number between 1 and 10), will be rewarded with their 10 bytes of blogosphere fame.
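If you want to play with the infection rule before hunting for the Book proof, here is a small simulation sketch (the function name and the choice of the main diagonal as a starting set are mine, not part of the problem statement):

```python
def infect(n, initial):
    """Spread infection on an n x n grid: a healthy cell becomes
    infected once at least two of its 4-neighbours (no diagonals)
    are infected. Returns the final set of infected cells."""
    infected = set(initial)
    changed = True
    while changed:
        changed = False
        for r in range(n):
            for c in range(n):
                if (r, c) in infected:
                    continue
                sick = sum((r + dr, c + dc) in infected
                           for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))
                if sick >= 2:
                    infected.add((r, c))
                    changed = True
    return infected

# The main diagonal -- exactly n cells -- ends up infecting the whole
# grid, which shows the bound n in the problem cannot be improved.
n = 8
assert len(infect(n, [(i, i) for i in range(n)])) == n * n
```

Experiments like this show that n starting cells can suffice; the problem is to show they are also necessary.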
Wednesday, April 14, 2004
The Fafblog
WARNING: Non-geometry/algorithms/math post ahead...
There are all kinds of blogs on the net, but there is only one Fafblog.
For a take on the day's events that is so maniacally funny you wonder what the author is smoking....here's a sample, written after the Bush press conference
On our resolve:
It is strong.
Also it is resolute and firm. It is so firm! You have no idea how firm our resolve is. Whew! It's pretty firm I'll tell you that.
In addition to resolve we have resoluteness and strength. These are also very strong and resolute.
Also, freedom is beautiful. Look at those mountains! Smell that ocean air!
On the hideous bug invaders:
Reports regarding the hideous bug invaders have been greatly exaggerated and their laying of eggs in soldiers' brains is in no way to be perceived as a setback in the war against terror. Our resolve is so strong!
They have laid their eggs in far fewer soldiers' brains than the media is reporting. Man media there is something wrong with you!
Why from the looks of things on CNN you'd think theyd insectofied three whole platoons of soldiers by now which is crazy talk! Ha ha! Crazy talk, our resolve is strong!
and now back to previously scheduled geomblogging....
Theory and Practice
Straddling the theory + practice divide is always vexing, and I do this a lot when working at a research lab. The following note from an essay by Don Knuth is interesting:
The word 'theory' actually comes from the same root as the word 'theater'. .... the root word 'θέα' (a sight) led to θέατρον (a place for viewing), θεατής (a spectator), and θεωρία (a seeing, a spectacle).
And what about 'practice'? It's another Greek word, indeed another theatrical word: Practice means performance.
...
thus we can say that theory is to practice as rigor is to vigor.
-- from 'Theory and Practice II', in Selected papers in Computer Science
Knuth's point is not that theory is in any way subservient to practice: far from it. What he does argue is that theory far removed from practice becomes stale and ossified. His arguments, fleshed out over a series of articles, are more nuanced than I can do justice to in this note; suffice it to say that he wants to foster a far more active interaction between theory and practice than mathematicians (and theoretical computer scientists) are possibly comfortable with.
The essay ends with a beautiful bit of word play on the root word for theorem and the related root word for God:
Let us not confuse big-Omicron with big-Omega in making a theology out of theory. Let us rather practice what we preach. This will not only give the world better applications, it will ultimately give the world better theory.
Tuesday, April 13, 2004
The Abel Prize
I was not aware of this, but there is now a prize in mathematics that more closely resembles a Nobel prize. It is called the Abel Prize (after Niels Henrik Abel), and has been awarded in 2003 and 2004.
From kuro5hin:
As is well known, there is no Nobel Prize in Mathematics. Instead, most people have looked to the Fields Medal as the most prestigious award for mathematical achievement. The Fields Medal, however, differs significantly from the Nobel Prize in that it is only awarded every four years (at each meeting of the International Congress of Mathematicians), to two, three or four recipients. Moreover, it is only awarded to mathematicians not older than 40, in order to encourage further achievements. This also has the pleasant side effect of rewarding recent work, rather than work done half a century earlier, as is too often the case with Nobel Prizes. Tellingly, both Atiyah (1966) and Serre (1954) have also won Fields Medals.
The Abel Prize, by contrast, seems to resemble the Nobel Prize more closely. It is awarded annually by a Scandinavian learned society (the Nobel Prize is awarded by the Swedish Academy of Sciences), it is worth a considerable amount of money, and so far all three recipients are over seventy years old. Happily, this allows rewarding those (relatively few) mathematicians who missed the Fields Medals due to their age.
This year's winners: Michael Atiyah and Isadore Singer, for the discovery of the index theorem (explained here by John Rognes).
Monday, April 12, 2004
More Computer Science Poetry
Continuing the April Poetry Month theme, check out the poems written for Satish Rao's complexity class at Berkeley.
Here are some samples:
A set of problems,
Intractable or easy?
We may never know. -- Nemanja Isailovic
And this one from 'Wes', written to mimic an actual poem:
Bits in your multitudes
Scarce to be counted
We need but sample
A few random ones.
-CS 270, "Bits"
and the original:
Stars in your multitudes
Scarce to be counted
Filling the darkness
With order and light.
-Les Miserables, "Stars"
Friday, April 09, 2004
Mathematical Jabberwocky
April is National Poetry Month. In that spirit, I present this offering, courtesy of The Cyberiad, by Stanislaw Lem (translated by Michael Kandel).
The setting:
Trurl the constructor has created an electronic bard that can compose any poem. Klapaucius, Trurl's friend and co-constructor, requests
A love poem, lyrical, pastoral, and expressed in the language of pure mathematics. Tensor algebra mainly, with a little topology and higher calculus, if need be. But with feeling, you understand, and in the cybernetic spirit.
The result:
Come, let us hasten to a higher plane,
Where dyads tread the fairy fields of Venn,
Their indices bedecked from one to n,
Commingled in an endless Markov chain!
Come, every frustum longs to be a cone,
And every vector dreams of matrices.
Hark to the gentle gradient of the breeze:
It whispers of a more ergodic zone.
In Riemann, Hilbert or in Banach space
Let superscripts and subscripts go their ways.
Our asymptotes no longer out of phase,
We shall encounter, counting, face to face.
I'll grant thee random access to my heart,
Thou'lt tell me all the constants of thy love;
And so we two shall all love's lemmas prove,
And in our bound partition never part.
For what did Cauchy know, or Christoffel,
Or Fourier, or any Boole or Euler,
Wielding their compasses, their pens and rulers,
Of thy supernal sinusoidal spell?
Cancel me not -- for what then shall remain?
Abscissas, some mantissas, modules, modes,
A root or two, a torus and a node:
The inverse of my verse, a null domain.
Ellipse of bliss, converse, O lips divine!
The product of our scalars is defined!
Cyberiad draws nigh, and the skew mind
cuts capers like a happy haversine.
I see the eigenvalue in thine eye,
I hear the tender tensor in thy sigh.
Bernoulli would have been content to die,
Had he but known such a squared cosine 2 phi!
I hope this encourages you to read the book. The Cyberiad is an amazingly joyful playground of words and inventive notions, far different from much of the science fiction we see today.
arxiv.org RSS feeds
Hein Roehrig points out that arxiv.org now has their own RSS feeds for new papers. For example, the computational geometry feed is at http://arxiv.org/rss/cs.CG, and other feeds are generated in the same fashion, using the URL
http://arxiv.org/rss/<main-category>.<subcategory>
Thursday, April 08, 2004
because Google is the source of all knowledge
I felt I had to create a page with this...
Jeff Erickson took me to task (and rightly so) for not starting the Geomblog with
The Sacred Sausage Invocation
Let P be a set of n points in the plane in general position. Amen.
Amen indeed.....
For those who don't get it, the primary conference in computational geometry is the Symposium on Computational Geometry, often called SoCG.
Wednesday, April 07, 2004
Kepler's Conjecture
It appears that the NYT story I referred to in a previous post may not have been entirely on the money in its description of the situation regarding the publication of Hales' proof. The matter will resolve itself soon hopefully....
Square Wheels
An interesting study of polygonal motions (article by Ivars Peterson on Sciencenews, found via Radio):
A square wheel can roll smoothly, keeping the axle moving in a straight line and at a constant velocity, if it travels over evenly spaced bumps of just the right shape. This special shape is called an inverted catenary.
A catenary is the curve describing a rope or chain hanging loosely between two supports. At first glance, it looks like a parabola. In fact, it corresponds to the graph of a function called the hyperbolic cosine. Turning the curve upside down gives you an inverted catenary.
This picture is too funny:
(credit: Stan Wagon)
Some interesting points:
* A triangular wheel cannot roll smoothly on any surface
* Is there a road-worthy combination of wheel and surface in which the wheel and surface have the same shape ?
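To see why a catenary "at first glance looks like a parabola", one can compare cosh with its second-order Taylor approximation. This little sketch is mine, not part of Peterson's article:

```python
import math

# Near x = 0 the catenary y = cosh(x) is well approximated by the
# parabola y = 1 + x^2/2 (its Taylor expansion to second order),
# but the two curves separate as |x| grows.
for x in (0.0, 0.5, 1.0, 2.0):
    catenary = math.cosh(x)
    parabola = 1.0 + x * x / 2.0
    print(f"x={x:3.1f}  cosh(x)={catenary:.4f}  "
          f"1+x^2/2={parabola:.4f}  gap={catenary - parabola:.4f}")
```

The gap is tiny near the origin, which is why the eye confuses the two curves; only the catenary makes the square wheel's axle ride level.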
Tuesday, April 06, 2004
Writing Papers
PhD is a hilarious comic strip on grad student life, especially in the engineering disciplines. I used to read it when I was a grad student at Stanford, and still find it spot-on in its depictions of life as a grunt in the trenches of science.
Here is the strip from Wednesday, April 05:
Monday, April 05, 2004
Kepler's Conjecture Theorem
Kepler's conjecture, one of the oldest conjectures in geometry, was that the optimal packing of unit spheres could never achieve a density of more than pi/sqrt(18). A few years ago, Thomas Hales (with help from Samuel Ferguson) proved the conjecture, with extensive use of computers to assist in the proof. The New York Times, in their infinite wisdom, managed to greet this proof with a line that went something like 'Mathematicians have now proved what greengrocers have known all along' ....sigh...
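As a sanity check on the number pi/sqrt(18), here is a short computation of the density of the face-centred cubic ("greengrocer") packing that attains the bound; the cell geometry is standard, but the snippet itself is my own sketch:

```python
import math

# Kepler's bound: no sphere packing beats pi/sqrt(18), about 74%.
density = math.pi / math.sqrt(18)

# Face-centred cubic packing: a cubic cell of side a = 2*sqrt(2)*r
# contains 4 full spheres of radius r (8 corner eighths + 6 face halves).
r = 1.0
a = 2.0 * math.sqrt(2) * r
fcc_density = 4 * (4.0 / 3.0) * math.pi * r**3 / a**3

assert abs(fcc_density - density) < 1e-12  # FCC attains the bound exactly
print(f"pi/sqrt(18) = {density:.4f}")      # about 0.7405
```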
What prompted this note is a new article (NYT; reg. reqd) that indicates that Thomas Hales' proof is going to be split into two parts:
But Dr. Hales's proof of the problem, known as the Kepler Conjecture, hinges on a complex series of computer calculations, too many and too tedious for mathematicians reviewing his paper to check by hand.
Believing it thus, at some level, requires faith that the computer performed the calculations flawlessly, without any programming bugs. For a field that trades in dispassionate logic and supposedly unambiguous truths and falsehoods, that is an uncomfortably gray in-between.
Because of the ambiguities, the journal, the prestigious Annals of Mathematics, has decided to publish only the theoretical parts of the proof, which have been checked in the traditional manner. A more specialized journal, Discrete and Computational Geometry, will publish the computer sections.
Now I don't quite trust the NYT when it comes to matters of science, but if this is true, it does send a rather unpleasant message: that the Annals of Mathematics is the place for rigorous proofs, and Discrete and Computational Geometry is the place for "computer-assisted proofs" (you can almost hear the sneer in that line). Now D&CG is one of the primary journals of reference for work in computational geometry, and most of the papers published there are as theoretical as any. So I am somewhat puzzled at why this decision was made. If someone has more information to share, I'd like to hear more about it.
Saturday, April 03, 2004
Submissions to Research Conferences
In computer science, the number of submissions to conferences has been increasing steadily over the past few years. There are many hypotheses attempting to explain this phenomenon, and as they are all completely unvalidated by real data, I will spare you the details.
However, in a pioneering and breathtakingly original piece of work, Cormode, Czumaj and Muthukrishnan have proposed a number of solutions to the problem of conference submission glut, as well as the problem of artificially lowered conference acceptance ratios. The abstract:
In the beginning was the pub. This work was triggered by a pub conversation where the authors observed that many resumes list acceptance ratios of conferences where their papers appear, boasting the low acceptance ratio. The lower the ratio, the better your paper looks. The list might look equally impressive if one listed the rejection ratio of conferences where one's paper was submitted and rejected. We decided to lampoon rather than lament the effort the PC typically put in: wouldn't the world be better if we could encourage only high quality submissions and so run top conferences with very high acceptance ratios? This paper captures our thoughts, and it is best consumed in a pub (and in color).
Enjoy....
Friday, April 02, 2004
Funny Paper Titles
I collect paper titles that I consider to be "interesting". Research paper titles have gone the way of non-fiction books in recent years, with the use of the execrable A:B format, which can loosely be explained as:
A (Let's say something that we think is clever and pithy):B (maybe no one got the joke, so let's explain it in boring detail).
Consider this post a plea for help. Don't let the dark side lure you !!! There are many ways to title a paper cleverly (should you so choose), without being sucked into A:B inanity. Here are some of the gems. All of the titles below refer to papers in (some area of) computer science. Warning: this is a highly idiosyncratic view; please don't complain if I list (or don't list) your title here. If you have contributions, post them in the comments or email me.
Short and sweet
Division is good
Reductions that Lie
T(A)=T(B)?
Funny
Mick gets some (the odds are on his side) (This one is particularly noteworthy, seeing as it mixes cultural references in)
O-O: What have they done to DB2
It's okay to be skinny, if your friends are fat.
Just Weird
Gray Visiting Motzkins
Cracking the cracking problem with coons patches
Pass the butter
And finally, a paper title that isn't yet, but should be !!
Searching for Spock in a Quantum Haystack
Thursday, April 01, 2004
Erdos and proofs 'from the Book' (cont..)
Yesterday I talked about Erdos's beautiful idea of proofs from the Book, and mentioned an example problem whose proof could be viewed as being a 'Book proof'.
Here is the proof (due to L. M. Kelly):
Theorem. There is no way to arrange n points in the plane (not on a line) so that a line through any two points will always pass through a third.
Proof:
Suppose the theorem is false. Let P be the given set of points, and let L be the set of all lines that pass through at least two points of P. Over all pairs (p, l) where p is in P and l is in L, and p does not lie on l, pick the pair (pp, ll) such that the distance from pp to ll is the smallest. By assumption, all lines pass through at least three points, and so there are three points on ll, two of which (call them q1, q2) must lie on one side of the perpendicular from pp to ll.
Say q1 is closer to the perpendicular and q2 is further. Consider the pair (q1, lll), where lll is the line joining pp and q2. A simple drawing shows that q1 is closer to lll than pp is to ll, contradicting the minimality of the pair (pp,ll). Hence the theorem is proved.
end proof.
Bonus Points
You might notice that the above proof uses metric properties of the Euclidean plane (i.e. the notion of distance), and order axioms (the notion of betweenness). It is possible to prove the above result without using metric properties (merely using order axioms), but it is not possible to drop the order axioms themselves. The reason is a topic for another note, on the limitations of combinatorics in geometry. Stay tuned....
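For a brute-force illustration of the theorem, the sketch below (function names are mine) hunts for "ordinary" lines, lines through exactly two points of the set, which Kelly's argument guarantees exist whenever the points are not all collinear:

```python
from itertools import combinations

def collinear(p, q, r):
    """Exact collinearity test for integer coordinates (cross product)."""
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

def ordinary_lines(points):
    """Return all pairs whose line passes through no third point of the set."""
    return [(p, q) for p, q in combinations(points, 2)
            if not any(collinear(p, q, r)
                       for r in points if r != p and r != q)]

# Every line of a collinear set carries a third point: no ordinary lines,
# which is why the theorem must exclude the all-on-a-line case.
assert ordinary_lines([(0, 0), (1, 1), (2, 2)]) == []

# A 3x3 grid is not collinear, so the theorem promises an ordinary line;
# for instance the "knight-move" pair (0,0)-(1,2) gives one.
grid = [(x, y) for x in range(3) for y in range(3)]
assert ((0, 0), (1, 2)) in ordinary_lines(grid)
```

Integer coordinates keep the collinearity test free of floating-point error, which matters for exactly this kind of degeneracy check.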