Tuesday, November 6, 2007

Abductivist Refutation of Skepticism

A Refutation of Skepticism via Inference to the Best Explanation

Here’s an infallibilist argument for radical skepticism:

1) Really knowing anything requires an infallible, perfect kind of certainty.
2) This requires that no mistake is even possible.
3) It seems that for us fallible creatures, perhaps this is never the case.
4) Therefore, it seems that we know nothing at all.

Here’s a problem: Are propositions 1-4 themselves infallibly certain? Apparently not. Especially if nothing can be. More importantly, claim 1) looks quite dubious, right off the bat. Many philosophers think it’s false (and so do I). So this infallibilist argument doesn’t seem to really get off the ground.

Let’s try a better argument for radical skepticism:

a) If you’re justified in believing you really have hands, then you’re justified in believing you’re not merely a deluded brain floating in a vat hooked up to electrodes that feed you experiences from a super-computer.
b) But it’s logically possible that you’re a brain in a vat, AND
c) It’s physically possible that you’re a brain in a vat.
d) Therefore, it’s really and truly possible that you actually are, as a matter of fact and existence, a handless brain in a vat.
e) Therefore, you're not justified in believing that you’re not in a vat.
f) Therefore, you're not justified in believing that you have hands.

But, does e) really follow, once we jettison infallibilism? Once we realize that infallibilism is dubious, then it appears that knowledge and justification may be a game of more or less, rather than all or nothing. In other words, justification seems to be a matter of relative degree, rather than a game of ones and zeroes.

Once we realize this, we can begin to see that the argument fails to go through. Why? Let’s scrutinize claim c) more closely.

c) It’s physically possible that you’re a brain in a vat.

This is the claim that holds the key.

Of course, the skeptic and the anti-skeptic agree that claim c) —“it’s physically possible…”—whatever else it is, isn’t something that we know with a perfect, infallible certainty to be really, actually true. After all, it may turn out, perhaps, that electrodes just won’t stick to such a wet surface, even with crazy glue. Or maybe it would just take too many electrodes, and there’s not enough surface area. Could it really be done? Could anybody now living pretend to be able to accomplish such a feat? What kind of fluid would need to be used in the vat? Would you, even for a minute, believe a guy who claimed he’d already done it—that he had a brain at home right now, living in a vat of fluid, hooked up to a big computer via electrodes that were fooling the brain into thinking it had hands? It would be tough to believe. He might almost as well have told you he had a Cartesian demon living under his coffee table.

In other words, nobody claims to be infallibly certain that it actually is, really and truly, physically possible for one to be a brain in a vat.

Remember, unlike claim b), claim c) says nothing about what is or isn’t logically possible. It only concerns the question of what’s physically possible. I can’t stress this enough. It says:

c) It’s physically possible that you’re a brain in a vat.

Let’s put it more simply.

Here’s the question: Is the skeptic’s claim c) infallibly certain? If it is, then a matter of fact about the external world is really known with perfect infallible certainty, and skepticism is wrong. If it’s not perfectly, infallibly certain, then perhaps it’s more like a matter of degree. It might be relatively more or less plausible. Perhaps, for example, it seems to us to have some amount of plausibility, because we think that a brain could fit pretty easily into a vat, if the vat was big enough, and electrodes might be able to stimulate the brain somehow, and computers are pretty amazing, and it seems to us that brains, electrodes, and vats really exist.

But, that's a problem for the skeptic. If this isn't really an all-or-nothing kind of contest, but, instead, a more or less kind of contest, then the skeptic is in trouble. To see the problem, let’s compare these claims:

x) Brains really exist.
xi) Brains really exist, and they really and truly can survive in vats.

If we’re not infallibilists, then we’re not going to pretend that either of these claims is perfectly, infallibly certain. Certainty is a red herring. Infallibility is a mirage. So, we simply ask: which claim seems relatively better off? That looks like a pretty easy call: the first one, x), is relatively better off, because claim xi) can’t be true unless claim x) is true, while claim x) doesn’t depend on claim xi) to be true. Obviously, the two are not equals.

Whatever relative plausibility we can credit to the bank account of the Vat story is credit backed up solely by an uncertain check written against the Mundane story's bank account. However uncertain our mundane worldview may be, the skeptic's paranoid fantasies can only be relatively even less certain.

Abstractly put, for any Q, Q + P has a greater risk of error than Q alone, unless P is supposed to be infallibly certain, or Q is supposed to be impossible, neither of which is the case here, where Q stands for claim x), and P stands for claim c). Claim xi) is Q + P.
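The abstract point is just the conjunction rule of probability: a conjunction can never be more probable than either conjunct alone. Here is a minimal numerical sketch of that rule; the specific probability values are purely illustrative assumptions, not anything the author assigns:

```python
# Purely illustrative plausibility values (not from the original argument):
# Q = "brains really exist"                 (claim x)
# P = "a brain really can survive in a vat" (claim c)
p_q = 0.99          # plausibility we grant to Q
p_p_given_q = 0.5   # plausibility of P, even granting Q

# Conjunction rule: P(Q and P) = P(Q) * P(P | Q)
p_q_and_p = p_q * p_p_given_q

# Unless P is certain given Q (P(P|Q) = 1) or Q is impossible
# (P(Q) = 0), the conjunction is strictly less probable than Q alone.
assert p_q_and_p < p_q
```

However high we set the two inputs, so long as neither is pinned at certainty, claim xi) comes out strictly less probable than claim x).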

Put less abstractly, if a chain is only as strong as its weakest link, then it seems you can only make it more likely to break by adding another link, unless the newly added link is somehow unbreakable. But is anything in this vale of tears unbreakable? (Why is the vat story like a chain? Because, like many stories, it can’t really be true if one link fails. If, after all, there isn’t really any possible fluid in the world that could do the job, then the vat story can’t be true. If electrodes, or computers, or brains, aren’t really up to the task, then it can’t really be true that you are, actually, in a vat.)

If it seems so much as possible that brains might exist without being able to survive in vats, then xi) loses a “more or less” contest. If it is even possible for x) to be true and xi) false, then the two claims are not equal, because the reverse is impossible. For xi) to equal x) in plausibility or probability, it would have to be infallibly certain that brains really can, physically, survive in vats. But neither the skeptic nor the anti-skeptic pretends that this is infallibly certain, of course.

Again, we’re not talking about what is or isn’t logically possible. The thing turns on the question of what’s physically possible.

What if the skeptic objects that he doesn’t need claim c)? What if he says he doesn’t need to be able to say that the vat story is physically possible, just so long as it’s logically possible? This won’t do. If a thing isn’t physically possible, then it isn’t possible that it’s really, actually true. In other words, it would be, as a matter of fact, impossible. Anything which is logically possible, but not physically possible, can’t possibly be really, actually, true.

The anti-skeptic isn’t pretending to know whether or not a brain can live in a vat. Nor does he need to. He’s not pretending to know whether c) is actually true or false. He’s not pretending to know if the vat story is physically possible or not; he’s merely asking whether or not c) appears to be infallibly certain. The anti-skeptic is not refuting the skeptic by refuting claim c); he's refuting the skeptic by pointing out that the skeptic needs c) to be infallible. And, of course, nobody thinks that it is.

Look at it this way. The Brain in the Vat scenario has come to replace the Cartesian Demon. Why? Because, when it comes to the question of the plausibility of a scenario's physical possibility, the Vat is better than the Cartesian Demon, and somewhere down deep, we realize that matters. If it didn't matter, the Demon would be just as good as the Vat.

The key to solving the puzzle is to keep in mind that this is not necessarily a contest of all or nothing, but instead a relative contest of more or less.

Quee Nelson
posted November 6, 2007
(For a more robust account of this approach, see Quee Nelson, The Slightest Philosophy, 2007.)