The Course of Reason

When Good Proofs Go Bad

September 5, 2013

I am at a conference about formal methods in computer security. In formal methods, they use math talk to make proofs about systems. For example, cryptography is really important for computer security, and it would be nice if we could formally prove that a cryptographic system is secure. But the strangest thing always seems to happen. Some intrepid formal methods researcher will publish a proof that a cryptosystem is secure, and then a year or so later some other intrepid researcher will publish a successful attack on that very system. So, what is going on here? How could the proof turn out to be wrong? That’s been a big issue at the conference.

 

There was a cryptography symposium out in Oakland this year where, at 9:30 AM, some researchers presented a proof that the TLS cryptosystem is secure. At 1:30 PM, at the same symposium, someone else presented a successful attack on it. This is clearly a problem. Is it a problem only for formal methods people in computer security, or is something deeper going on?

The consensus we’re moving towards is that the trouble comes from two parts of the reasoning process. The big picture reasoning process looks like this:

1. Create a formal model of the real world system you want to prove stuff about.

2. Use a formal language to prove stuff about the model.

3. Take the stuff you proved about the model, and say that it is true about the real world system.

Clearly, the only way to make a mistake in step 2 is to make some logically incorrect move. This is not an interesting way to mess up a proof, although it does happen. But that’s not the type of error we care about, because it doesn’t cause the kind of trouble we’re interested in. We are interested in cases where we have a rock-solid, legitimate proof with no logical errors, and yet its conclusion later turns out to be false of the real world system.

Logic: the beginning of wisdom, not the end. 

So, interesting errors can happen in steps 1 and 3. And they do. All the time. Those are the non-logical aspects of the reasoning process. By "non-logical", I mean there is no step-by-step deductive method for translating the real world system into the formal system, and then back again.

At step 1, a human being has to make decisions about which parts of the real world system to focus on. You simply can’t accurately model the entire system; that would defeat the purpose of having a model. You might as well just use the real world system itself. So, every model is missing some details. Sometimes, humans leave out details that turn out to be important. Or they might abstract a little too much and blur a concept that needs to be more precise.
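
To make that concrete, here is a minimal, purely hypothetical sketch of the kind of detail a model can leave out. The secret, the function names (check_model, check_reality), and the exaggerated per-character delay are all made up for illustration; nothing here comes from any real verified system. The model treats password checking as one atomic yes-or-no comparison, while a naive implementation compares character by character, so how long it takes depends on how much of the guess is right.

```python
# A made-up password checker, purely for illustration. SECRET, check_model,
# and check_reality are hypothetical names, not from any real system.

import time

SECRET = "hunter2"


def check_model(guess: str) -> bool:
    # What the formal model assumes: a single atomic comparison that
    # reveals nothing beyond True or False.
    return guess == SECRET


def check_reality(guess: str) -> bool:
    # What a naive implementation actually does: compare character by
    # character and stop at the first mismatch. The time this takes now
    # depends on how many leading characters are correct -- a detail the
    # model abstracted away.
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
        time.sleep(0.001)  # exaggerated per-character cost, to make the leak visible
    return True


def timed(check, guess: str) -> float:
    start = time.perf_counter()
    check(guess)
    return time.perf_counter() - start


if __name__ == "__main__":
    # In the model, every wrong guess looks the same. In "reality", guesses
    # that share a longer prefix with the secret take measurably longer.
    for guess in ["zzzzzzz", "hzzzzzz", "huntzzz"]:
        print(guess, f"{timed(check_reality, guess):.4f} s")
```

Run it and the wrong guesses no longer look identical: the more correct leading characters a guess has, the longer the check takes.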

In step 3, when we transfer the proof results from the formal model to the real world system, we forget about those details we ignored earlier. So, we take our proof and say, “hey look at this proof. It’s basically the highest level of confidence you can achieve. So, yeah, *nerd snicker*, I guess you could say your system is secure…” But those details we ignored will haunt us.
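
And here, continuing that toy example, is how an ignored detail comes back to haunt you. In the model, a wrong guess teaches the attacker nothing, so the best possible attack is brute force. The sketch below (again hypothetical, with a deliberately short secret and an exaggerated delay) recovers the secret one character at a time just by timing the naive check, never brute-forcing the whole space.

```python
# Hypothetical continuation of the sketch above: an attacker who exploits
# only the detail the model left out (timing), never the secret itself.

import string
import time

SECRET = "cab"  # a deliberately short, made-up secret over a-z


def naive_check(guess: str) -> bool:
    # Character-by-character comparison with an exaggerated per-character
    # cost, standing in for the naive implementation.
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
        time.sleep(0.001)
    return True


def slowest_next_char(prefix: str) -> str:
    # Try each candidate character after the known prefix and keep the one
    # whose check takes longest: an extra delay means one more correct
    # leading character.
    timings = {}
    for c in string.ascii_lowercase:
        guess = (prefix + c).ljust(len(SECRET), "a")
        start = time.perf_counter()
        naive_check(guess)
        timings[c] = time.perf_counter() - start
    return max(timings, key=timings.get)


if __name__ == "__main__":
    recovered = ""
    for _ in range(len(SECRET)):
        recovered += slowest_next_char(recovered)
    # Recovers "cab" with 26 * 3 timed guesses instead of up to 26**3.
    print("recovered:", recovered)
```

Nothing the proof said about the model was false; the model simply said nothing about time.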

Even with a legitimate proof, we cannot be sure that the proof applies to the real world system we developed it for. Yes, it is still a proof, but its truth is guaranteed only for the FORMAL MODEL, not for the real world.

Is this problem unique to computer science? I don’t think so. I think it is a troll that lurks beneath every non-purely-mathematical discipline’s mathematical bridges. Engineering. Physics. Economics. Philosophy. In each of these disciplines we see the transfer of judgments from the deductively safe formal realm to the messy realm of nature. So while the equations themselves may be sound, their application to reality is always subject to error.

So if someone presents you with a proof and tries to force you to conclude something about concrete reality, it is okay to remain skeptical, and probe their methods in steps 1 and 3 of the reasoning process above.

You can always ask:

1. Did you do a good job formalizing the real world system?

2. Are you applying the formal stuff to the real stuff correctly?

These are thorny questions, and the proof advocate might double down and try to dismiss them. Don’t let that happen. Really probe the processes of formalization and application, and look for potential mistakes.

[This post was originally published July 25, 2013 at the MU SASHA blog.]

 

 

About the Author: Seth Kurtenbach

Seth Kurtenbach is pursuing his PhD in computer science at the University of Missouri. His current research focuses on the application of formal logic to questions about knowledge and rationality. He has his Master's degree in philosophy from the University of Missouri, and is growing an epic beard in order to maintain his philosophical powers. You can email Seth at Seth.Kurtenbach@gmail.com or follow him on Twitter: @SJKur.

