Comments on The Geomblog: Counter-examples

Suresh (2004-10-25 12:46): Thanks, this is a good example!

Anonymous (2004-10-25 12:43): Forgot to sign last comment. -David Molnar

Anonymous (2004-10-25 12:42): Cryptography. The entire early history of the field is a litany of counterexamples -- for example, knapsack cryptosystems were conjectured to be secure because they were "based on" NP-hard problems. In practice, the notion of "based on" wasn't particularly formal, and eventually we found out they weren't all that hard to break. Andrew Odlyzko's survey "The Rise and Fall of Knapsack Cryptosystems" is pretty good, although slightly dated (the Chor-Rivest scheme was broken in 1998):
http://citeseer.ist.psu.edu/odlyzko90rise.html

I suspect this history helped motivate the more recent interest in the hardness of lattice problems. For example, the Ajtai-Dwork cryptosystem actually has a randomized reduction from the underlying lattice problem to breaking the cryptosystem, and the underlying family of lattices has a worst-case to average-case reduction. (Although we later found out that for "practical" parameters it doesn't seem to be hard enough.)

In general, broken cryptosystems serve as counterexamples that push the development of theory in cryptography.
They force us to reconsider our assumptions and look for more rigorous foundations.

There's a second strain of counterexamples in cryptography as well, and these might be closer to the spirit of the mathematical ones you mention. This strain looks at definitions of security and comes up with cryptosystems that meet the definition but lack some property we would "intuitively" expect a "secure" cryptosystem to have.

For example, the standard definition of security for public-key encryption is indistinguishability of encryptions: if I give you a string X and promise you it is the encryption of either a "0" bit or a "1" bit, you shouldn't be able to tell which (with better than negligible advantage). You can further augment this definition with a chosen-ciphertext oracle and show that the stronger definition implies non-malleability. You can also show it implies semantic security, which says the adversary "learns nothing" about the plaintext given the ciphertext. It looks like a perfect definition.

But does it hide which public key was used to encrypt? We want this for applications to anonymous messaging; if the encryption reveals the recipient, things become a lot harder. In fact the definition doesn't guarantee it, and you can come up with a silly counterexample that leaks the recipient's identity with 100% certainty on every message.

Another example here would be the random oracle model counterexamples of Canetti, Goldreich, and Halevi. They show there are signature schemes that are secure in the random oracle model, but not secure at all for *any* instantiation of the random oracle. I'm still trying to wrap my head around that one.
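The "silly counterexample" for key privacy can be made concrete with a small sketch. This is a toy, not any construction from the literature: the base scheme below is just a random XOR pad standing in for an arbitrary encryption scheme, and all names are made up. The point is only the wrapper, which tags every ciphertext with the recipient's key identity in the clear -- plaintext indistinguishability is untouched (the adversary in that game already knows the public key), but anyone observing traffic learns the recipient.

```python
import os

# Toy "base scheme": XOR with a per-recipient random pad. NOT a real
# public-key scheme; it merely stands in for any encryption scheme.
def keygen():
    return os.urandom(16)            # recipient's key (stand-in for a public key)

def enc(key, msg: bytes) -> bytes:
    assert len(msg) == len(key)
    return bytes(a ^ b for a, b in zip(key, msg))  # XOR pad; enc is its own inverse

# The counterexample wrapper: prepend the recipient's key identity to
# every ciphertext. The encrypted bit stays as hidden as before, but
# the recipient is leaked with certainty on every message.
def enc_leaky(key, msg: bytes):
    return (key.hex(), enc(key, msg))  # (recipient identity in the clear, ciphertext)

alice, bob = keygen(), keygen()
tag, ct = enc_leaky(alice, b"0" * 16)
print(tag == alice.hex())  # True: an eavesdropper identifies the recipient
```

The wrapper never weakens the indistinguishability game, since that game hands the adversary the public key anyway; it only violates the *extra* property (key privacy) that the definition was silently assumed to cover.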