Tuesday, November 6, 2007

Abductivist Refutation of Skepticism

A Refutation of Skepticism via Inference to the Best Explanation


Here’s an infallibilist argument for radical skepticism:

1) Really knowing anything requires an infallible, perfect kind of certainty.
2) This requires that no mistake is even possible.
3) It seems that for us fallible creatures, perhaps this is never the case.
4) Therefore, it seems that we know nothing at all.

Here’s a problem: Are propositions 1-4 themselves infallibly certain? Apparently not. Especially if nothing can be. More importantly, claim 1) looks quite dubious, right off the bat. Many philosophers think it’s false (and so do I). So this infallibilist argument doesn’t seem to really get off the ground.

Let’s try a better argument for radical skepticism:

a) If you’re justified in believing you really have hands, then you’re justified in believing you’re not merely a deluded brain floating in a vat hooked up to electrodes that feed you experiences from a super-computer.
b) But it’s logically possible that you’re a brain in a vat, AND
c) It’s physically possible that you’re a brain in a vat.
d) Therefore, it’s really and truly possible that you actually are, as a matter of fact and existence, a handless brain in a vat.
e) Therefore, you're not justified in believing that you’re not in a vat.
f) Therefore, you're not justified in believing that you have hands.

But does e) really follow, once we jettison infallibilism? Once we realize that infallibilism is dubious, it appears that knowledge and justification may be a game of more or less, rather than all or nothing. In other words, it seems to be a matter of relative degree, rather than a game of ones and zeroes.

Once we realize this, we can begin to see that the argument fails to go through. Why? Let’s scrutinize claim c) more closely.

c) It’s physically possible that you’re a brain in a vat.

This is the claim that holds the key.

Of course, the skeptic and the anti-skeptic agree that claim c)—“it’s physically possible…”—whatever else it is, isn’t something that we know with a perfect, infallible certainty to be really, actually true. After all, it may turn out, perhaps, that electrodes just won’t stick to such a wet surface, even with crazy glue. Or maybe it would just take too many electrodes, and there’s not enough surface area. Could it really be done? Could anybody now living pretend to be able to accomplish such a feat? What kind of fluid would need to be used in the vat? Would you, even for a minute, believe a guy who claimed he’d already done it—that he had a brain at home right now, living in a vat of fluid, hooked up to a big computer via electrodes that were fooling the brain into thinking it had hands? It would be tough to believe. He might almost as well have told you he had a Cartesian demon living under his coffee table.

In other words, nobody claims to be infallibly certain that it actually is, really and truly, physically possible for one to be a brain in a vat.

Remember, unlike claim b), claim c) says nothing about what is or isn’t logically possible. It only concerns the question of what’s physically possible. I can’t stress this enough. It says:

c) It’s physically possible that you’re a brain in a vat.

Let’s put it more simply.

Here’s the question: Is the skeptic’s claim c) infallibly certain? If it is, then a matter of fact about the external world is really known with perfect infallible certainty, and skepticism is wrong. If it’s not perfectly, infallibly certain, then perhaps it’s more like a matter of degree. It might be relatively more or less plausible. Perhaps, for example, it seems to us to have some amount of plausibility, because we think that a brain could fit pretty easily into a vat, if the vat was big enough, and electrodes might be able to stimulate the brain somehow, and computers are pretty amazing, and it seems to us that brains, electrodes, and vats really exist.

But, that's a problem for the skeptic. If this isn't really an all-or-nothing kind of contest, but, instead, a more or less kind of contest, then the skeptic is in trouble. To see the problem, let’s compare these claims:

x) Brains really exist.
xi) Brains really exist, and they really and truly can survive in vats.

If we’re not infallibilists, then we’re not going to pretend that either of these claims is perfectly, infallibly certain. Certainty is a red herring. Infallibility is a mirage. So, we simply ask: which claim seems relatively better off? That looks like a pretty easy call: the first one, x), is relatively better off, because the second one can’t be true unless the first one is true, while the reverse is not the case. Obviously, the two are not equals, since claim xi) depends upon the truth of claim x), but claim x) doesn’t depend on claim xi) to be true.

Whatever relative plausibility we can credit to the bank account of the Vat story is credit backed up solely by an uncertain check written against the Mundane story's bank account. However uncertain our mundane worldview may be, the skeptic's paranoid fantasies can only be relatively even less certain.

Abstractly put, for any Q, Q + P has a greater risk of error than Q alone, unless P is supposed to be infallibly certain, or Q is supposed to be impossible, neither of which is the case here, where Q stands for claim x), and P stands for claim c). Claim xi) is Q + P.
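
To make the abstract point explicit, here is the one-line calculation behind it (a minimal sketch in standard probability notation, which is mine and not part of the skeptic's argument):

\[
P(Q \wedge P) \;=\; P(Q)\cdot P(P \mid Q) \;\le\; P(Q),
\]

with equality only when \(P(P \mid Q) = 1\) (P is infallibly certain, given Q) or \(P(Q) = 0\) (Q is impossible). Conjoining P to Q can never raise the probability, and it lowers it in every other case.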

Put less abstractly, if a chain is only as strong as its weakest link, then it seems you can only make it more likely to break by adding another link, unless the newly added link is somehow unbreakable. But is anything in this vale of tears unbreakable? (Why is the vat story like a chain? Because, like many stories, it can’t really be true if one link fails. If, after all, there isn’t really any possible fluid in the world that could do the job, then the vat story can’t be true. If electrodes, or computers, or brains, aren’t really up to the task, then it can’t really be true that you are, actually, in a vat.)

If it seems so much as possible that brains might exist without being able to survive in vats, then xi) loses a “more or less” contest. If it is even possible for x) to be true and xi) false, then the two claims are not equal, because the reverse is impossible. For xi) to equal x) in plausibility or probability, it would have to be infallibly certain that brains really can, physically, survive in vats. But neither the skeptic nor the anti-skeptic pretends that this is infallibly certain, of course.

Again, we’re not talking about what is or isn’t logically possible. The thing turns on the question of what’s physically possible.

What if the skeptic objects that he doesn’t need claim c)? What if he says he doesn’t need to be able to say that the vat story is physically possible, just so long as it’s logically possible? This won’t do. If a thing isn’t physically possible, then it isn’t possible that it’s really, actually true. In other words, it would be, as a matter of fact, impossible. Anything which is logically possible, but not physically possible, can’t possibly be really, actually, true.

The anti-skeptic isn’t pretending to know whether or not a brain can live in a vat. Nor does he need to. He’s not pretending to know whether c) is actually true or false. He’s not pretending to know if the vat story is physically possible or not; he’s merely asking whether or not c) appears to be infallibly certain. The anti-skeptic is not refuting the skeptic by refuting claim c); he's refuting the skeptic by pointing out that the skeptic needs c) to be infallible. And, of course, nobody thinks that it is.

Look at it this way. The Brain in the Vat scenario has come to replace the Cartesian Demon. Why? Because, when it comes to the question of the plausibility of a scenario's physical possibility, the Vat is better than the Cartesian Demon, and somewhere down deep, we realize that matters. If it didn't matter, the Demon would be just as good as the Vat.

The key to solving the puzzle is to keep in mind that this is not necessarily a contest of all or nothing, but instead a relative contest of more or less.

Quee Nelson
posted November 6, 2007
(For a more robust account of this approach, see Quee Nelson, The Slightest Philosophy, 2007.)

26 comments:

Anonymous said...

"x) Brains really exist.
xi) Brains really exist, and they really and truly can survive in vats."

What about 'xii) Brains really exist, and they really and truly can survive in skulls'?

If we truly are brains in vats, the world must be so radically different that it isn't relevant that there is no known fluid to sustain a brain in such a way; if every experience we have had comes from some mysterious scientist's stimulations, we can't trust anything that we think of as physically possible.

Quee Nelson said...

You forget that the anti-skeptic isn't pretending to know whether or not a brain can live in a vat. Nor does he need to.

He's not pretending to know whether c) is actually true or false. He's not pretending to know if the vat story is physically possible or not; he's merely asking whether or not c) appears to be infallibly certain.

The anti-skeptic is not refuting the skeptic by refuting claim c); he's refuting the skeptic by pointing out that the skeptic needs c) to be infallible.

And, of course, nobody thinks that it is.

Anonymous said...

But why does being a brain in a vat need to be infallible if having hands doesn't? Couldn't your argument be reversed? i.e.

a) If you're justified in believing you're a brain in a vat, then you're justified in believing you don't have hands.
b) But it's logically possible that you have hands, AND
c) It's physically possible that you have hands.
d) Therefore, it’s really and truly possible that you actually do, as a matter of fact and existence, have hands.
e) Therefore, you're not justified in believing that you don't have hands
f) Therefore, you're not justified in believing that you are not a brain in a vat.

What makes this argument any different? If you can challenge the infallibility of c, then isn't the anti-skeptic refuted just as much as the skeptic?

The only way I can think of arguing that there is a difference is if having hands is more probable than being a brain in a vat, but what evidence could there be of that?

Quee Nelson said...

As you know, the skeptic, like an idealist, doesn't deny you any of the rich and varied experiences you have, which some of them call "the Phenomenal World."

I mean, he's not taking anything away from your common-sense account of the world, really, but just adding on to it -- tacking on a sort of "rider." For example, the Matrix adds a new, second world, the one with the jelly pods and flying robots in it.

Likewise, the vat scenario contains a second world, the one with the evil scientists in it. This adds various elements which, compared to the mundane story, are (like Descartes's demon) add-ons. Think about the miraculous fluid the vat would need to contain. Or the astonishing super-glue that could keep the electrodes glued in place on a wet surface. Or the amazing super-computer (gosh, how big would it have to be?)

The skeptical hypothesis requires both brains AND the miraculous fluid to be physically possible. The mundane hypothesis only requires brains to be physically possible. And for any Q, Q + P has a greater risk of error than Q alone, unless P is supposed to be infallibly certain, or Q is supposed to be impossible, neither of which is the case here.

This isn't an appeal to Ockham's razor. It's something closer to what I like to call "Fumerton's razor," which I quote on page 180:

"Suppose I am considering two incompatible theories T1 and T2, which, relative to my evidence, are equally likely to be true. Suppose, further, that after acquiring some additional evidence (let us call my new total body of evidence E) I find it necessary to add an hypothesis H1 to T2. Now provided the probability of T1 and T2 relative to E remains the same and assuming that the probability of H1 relative to E is less than 1, we can deduce from the probability calculus that, relative to E, T1 is more likely to be true than the complex theory (T2 and H1). Intuitively, T2 by itself ran the same risk of error as T1, so with the addition of another hypothesis which might be false it runs a greater risk of error than T1."

(Richard Fumerton, "Induction and Reasoning to the Best Explanation," 1980.)
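
In symbols, the deduction from the probability calculus that Fumerton alludes to runs roughly like this (a sketch in standard notation, not anything taken from his paper):

\[
P(T_2 \wedge H_1 \mid E) \;=\; P(T_2 \mid E)\cdot P(H_1 \mid T_2 \wedge E) \;\le\; P(T_2 \mid E) \;=\; P(T_1 \mid E),
\]

and the inequality is strict whenever \(P(H_1 \mid T_2 \wedge E) < 1\). So the complex theory (T2 and H1) can never do better than T1, and does worse unless H1 is certain.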

Anonymous said...

What if the skeptic argued that mental states are all that exist? Then the anti-skeptic requires mental states (Q) and the physical world (P), so your argument would suggest that (if we reject the idea that we are just brains in vats) we should reject the physical world as well.

You say that the vat scenario involves a second world, but so does physical existence; the first world is that of our experiences, and the sensory information we receive. Anything further than that requires something of which we are uncertain, and there's no way to say one second world is any more probable than another. It's possible to assume that our senses provide us with an accurate world view, but how can that be more probable than a false one? In fact, given the infinite possibilities for error (as opposed to the one possibility for accuracy) it's far more probable that our world view is mistaken.

Anonymous said...

Surely you could formulate the sceptical argument as follows:

1) If I don't know that I'm not a BIV, then I don't know that I have hands.
2) I don't know that I'm not a BIV.
3) So I don't know that I have hands.

Physical possibility does not come into the picture. All that we require is that being a BIV would make no difference to our experiences, which it wouldn't.

The correct way to deal with the sceptic seems to be to take a Humean line, and argue that I simply DO know that I have hands. But rather than run the closure principle to show from this that I know that not-BIV, I would argue that it is plainly obvious that we don't know not-BIV. So closure cannot apply to all cases.

The best way of explaining the failure of closure is via Dretske's account of relevant alternatives. "The Pragmatic Dimension of Knowledge" is a fantastic paper of his which, I think, coupled with the Humean approach, offers a great refutation of the sceptic.

The obvious criticism that closure cannot be denied is daft. It is in fact common practice to debate whether closure applies to a case. E.g., I know my car is parked outside. You say, how do you know it hasn't been stolen - lots of cars are stolen in this city, so you don't know it's parked outside. But I clearly do! The alternative that it has been stolen, I say, is irrelevant, I don't care, and I don't want to hear your silly sophisms... But you say, no it IS a relevant alternative...

And so the conversation goes.

The correct refutation of the sceptic is the denial of closure, by coupling Humean naturalism with Dretske's theory of knowledge as an absolute concept and relevant alternatives.

Anonymous said...

Justification is not knowledge. It is merely an excuse to believe. In poker, one might be justified (due to an understanding of probability) in betting that a card is not an Ace, but it is still, nonetheless, a bet. It is quite a different thing to know whether it is an Ace.
Attempting to reduce knowledge to justification will not undermine Descartes's argument.

Quee Nelson said...

Are you thinking here that knowledge requires certainty?

Alternatively, are you thinking the Mundane scenario labors under a special "burden of proof" not shared equally by the Vat scenario?

If you believe there's no Santa, and you're justified in believing it, and it also happens as a matter of lucky fact to be true that there's no Santa, then you are one of the kids in your class who is lucky (or unlucky) enough to know there's no Santa.

Quee Nelson said...

So far, the above comments seem to gesture towards a dozen or so various moves that a skeptic might deploy.

In the book, I give these moves various names, and most of them appear as section titles in the Table of Contents, above. This makes it easier to look them up in the book, so, if you have it handy, see, for example, sections titled "Iterative Skepticism," "The Ubiquitous Burden of Proof Cheat," "Super Simple," "Cartesian Prejudice," "The Problem of the Criterion," etc.

Basically, we need at least pp. 152-220 (most of Chapter 4) to do justice to your questions.

Anonymous said...

I like the Santa example, but it fails too. I'll grant, It's ridiculous...I never did...ever, but it is nonetheless a belief, and I'd even say a justified belief. The only problem, is that I have to admit that it is POSSIBLE, though highly unlikely in my estimation, that there is a jolly old man (or a series of them) that deliver toys to children in a mode that corresponds to the traditional tales. There would be issues about getting to visit all of the children in the world in one night, but I'll admit I don't know enough about time/space to say that it is not possible...particularly with some method of changing or altering time. Furthermore, Santa could have changed his modus operandi, and may be using agents now to accomplish his goals.
As a corollary, I'm not too sure that we can (though it's attractive) use the propositions of modern science to combat skepticism. What I mean is that scientific precepts are themselves in flux. Our understanding of the big picture (quantum physics) changes every day (and is even hotly debated by experts whom we trust, on faith, to be right...every day). The big picture has a necessary impact on our "understanding" of sense phenomena, and these, along with a set of core beliefs I seem to have, form my view of reality.
Where I'm going with all this is that our story of the brain in the vat appeals more to our sensibilities (set of core beliefs), but really has no more validity (or lack of validity) than the demon story...except maybe subjectively to each reader. That still doesn't quite get us to objective truth based on "sense impressions" or, in other words, objective a posteriori knowledge. Can you really prove that the devil (or a being like him/her) absolutely doesn't exist? Without absolute proof, you can't prove something absolutely impossible...which is what you need to remove the risk of it being the case.

By the way, I really like where you are going with your arguments...I think you might be on to something.

Quee Nelson said...

Thank you. That’s very kind, and I appreciate your thoughtful comments too.

In your last one, you write:

“I like the Santa example, but it fails too. I'll grant, It's ridiculous...I never did [know there’s no Santa]...ever, but it is nonetheless a belief, and I'd even say a justified belief. The only problem, is that I have to admit that it is POSSIBLE, though highly unlikely in my estimation, that there is a jolly old man…”

Similarly, later, you write:

“Without absolute proof, you can't prove something absolutely impossible...which is what you need to remove the risk of it being the case.”

Again, I want to ask, do you feel that one must be infallibly/perfectly certain about something in order to know it? Here, I’ll assume you would say “yes.”

Sometimes we hear people say something like, “well, you don’t really know -- I mean, you can’t be 100 percent absolutely certain -- I mean, it’s POSSIBLE that you’re wrong here.” In my experience, they’ll usually draw out the word know, as if to italicize it: “knooow.” We might imagine more than one definition of “know” in the dictionary, one which requires 100 percent infallible/perfect certainty, and another more humble, ordinary, pedestrian definition of “know” in the sense of “You and I both know there’s no Santa, but watch what you say, because Cindy doesn’t know yet.”

Professional epistemologists who, like me, reject the requirement of perfect/infallible certainty, call themselves “fallibilists.” (I think it fair to say that as of today many of them might fit this description, so being a fallibilist doesn’t make me special.)

At this point I’d like to point you to three other places:


1.) The book section titled “Infallible Certainty,” pp. 159-164:

http://books.google.com/books?id=J9aX878kHuAC&q=infallible


2.) Back to the top of the original argument:

http://queenelson.blogspot.com/2007/01/how-realist-can-beat-skeptic-quee.html


3.) And last, but not least:

http://plato.stanford.edu/entries/certainty/

Anonymous said...

Thanks for the links. I'll check them out and then get back to you. It'd probably be rude to get back to you without fully delving into the material. It may take some time though, as I work about 80 hours a week.

Also, it must have gotten chewed off in an edit, but I had originally written that I never believed in the proposition that Santa Claus exists...ever. The paragraph I wrote starts out a bit rough without that being clear.

Anonymous said...

If the big point you're trying to get at is simply and truly that the skeptic needs c) to be infallible, this seems like a fairly meaningless point to win. By your own logic, it seems like we could give a good amount of justification to c) without proving it infallible, and therefore have good justification for skepticism.

As the first Anonymous said:

x) Brains really exist.
xi) Brains really exist, and they really and truly can survive in vats
xii) Brains really exist, and they really and truly can survive in skulls

This is a key point. We aren't really comparing the likelihood of x) and xi), but of xi) and xii). We have no way of gauging the relative likelihood of xi) and xii).

This presents a big problem for you. Supposing that the "miraculous" chemicals and set-up required for a brain to function in a vat exist is no more fanciful than supposing that the "miraculous" chemicals and set-up required for a brain to function in a skull cavity exist. You are making an argument from personal incredulity when you claim that a brain-supporting fluid is any more fanciful than a brain-supporting skull.

It's not a simple matter of comparing the likelihood of Q to Q+1, but of Q+1 to Q+1′.

Unknown said...

“Without absolute proof, you can't prove something absolutely impossible...which is what you need to remove the risk of it being the case.”

I think you have to except things that are logically impossible--things that are false by definition. We can safely call a square circle absolutely impossible, for example.

Quee Nelson said...

Second Anonymous,

You write:

"...it seems like we could give a good amount of justification to c) without proving it infallible and therefore have good justification for skepticism."

Note that c) is used as a premise in the "better" argument for skepticism, but the whole point of that argument is to prove its conclusion f). So this move won't work, since whatever empirical story is offered to support the empirical claim c) will clash with f).

Of course, you may notice that all skeptics are hemmed in by this constant danger of self-refutation. (If we can't know anything, then how can we know that we can't know anything?) See pp. 175-177.

Unknown said...

Why is it so important that the skeptic not be skeptical of skepticism? I don't think any skeptic thinks it is more likely that we are brains in vats, or manipulated by evil demons, just that it is conceivably possible, therefore all our knowledge claims can be doubted.

Unknown said...

Abduction doesn't refute skepticism, it merely pushes it to the side or replaces it with a different kind of skepticism. Shouldn't I really be skeptical about any inference to the best explanation, since tomorrow it might easily be replaced by a better explanation?

Quee Nelson said...

Josh, of course you're right that a Skeptic normally doesn't claim the Vat is more likely. The claim is always that it's NO LESS likely. It's supposed to be a stalemate. A tie. "No more" [one side than the other], as Sextus Empiricus put it. That's what the Skeptic is claiming.

You ask why this won't do. But, if you think about it, the whole essay is basically one long answer to your question. Maybe you might read it, from the top, and get back to me again? I don't want to just re-post the whole thing here. But I hope to hear from you again! Thanks for commenting.

Quee Nelson said...

Tony, you ask, "Shouldn't I really be skeptical about any inference to the best explanation, since tomorrow it might easily be replaced by a better explanation?"

Yes. Of course, our present beliefs about the world are almost certainly going to continue to slowly change and evolve, to a certain degree, as they always have. The best story is always the best one we have right now, at this point in time.

Nothing is infallibly certain. Nothing is written in stone.

Is this what you mean by "a different kind of skepticism?" You could say it's the kind ordinary people call a "healthy skepticism."

But the kind of skepticism that epistemology has traditionally been obsessed with, is the "radical epistemological" or "global" skepticism of the tradition, such as that of Pyrrho, David Hume, Peter Unger, etc., etc. That's what I'm out to refute. See Hume's quotes here:

http://queenelson.blogspot.com/2008/07/appendix.html

Unknown said...

My point was that the skeptic can willingly concede that we are probably not brains in vats, and maintain their skepticism. They do not claim that it is likely that knowledge claims are unjustified, they just need the slightest doubt to argue that knowledge claims are unjustifiable.

Once you willingly jettison infallibilism, of course, you take a lot of the bite out of the skeptical argument, and I think that most people end up with the kind of 'more or less likely' pragmatism you endorse. Otherwise it would be impossible to function in the world.

So why is it that it is less likely that "we are brains in vats believing we are brains in flesh"? It has a fact proposition (We are brains in vats) and a belief proposition (we believe we are brains in flesh). How is that more complicated than "we are brains in flesh believing we are brains in flesh?" Both have a fact proposition and a belief proposition.

To put it another way, why believe it is physically possible for brains to exist in flesh? We get some supporting data from our senses, which could be being misled...

Quee Nelson said...

Here’s the question: Is the skeptic’s claim c) infallibly certain? If it is, then a matter of fact about the external world is really known with perfect infallible certainty, and skepticism is wrong.

If it’s not perfectly, infallibly certain, then perhaps it’s more like a matter of degree. It might be relatively more or less plausible. Perhaps, for example, it seems to us to have some amount of plausibility, because we think that a brain could fit pretty easily into a vat, if the vat was big enough, and electrodes might be able to stimulate the brain somehow, and computers are pretty amazing, and it seems to us that brains, electrodes, and vats really exist.

But, that's a problem for the skeptic. If this isn't really an all-or-nothing kind of contest, but, instead, a more or less kind of contest, then the skeptic is in trouble. To see the problem, let’s compare these claims:

x) Brains really exist.
xi) Brains really exist, and they really and truly can survive in vats.

If we’re not infallibilists, then we’re not going to pretend that either of these claims is perfectly, infallibly certain. Certainty is a red herring. Infallibility is a mirage. So, we simply ask: which claim seems relatively better off? That looks like a pretty easy call: the first one, x), is relatively better off, because the second one can’t be true unless the first one is true, while the reverse is not the case.

Obviously, the two are not equals, since claim xi) depends upon the truth of claim x), but claim x) doesn’t depend on claim xi) to be true.

Whatever relative plausibility we can credit to the bank account of the Vat story is credit backed up solely by an uncertain check written against the Mundane story's bank account. However uncertain our mundane worldview may be, the skeptic's paranoid fantasies can only be relatively even less certain.

Look at it this way. The Brain in the Vat scenario has come to replace the Cartesian Demon. Why?

Because, when it comes to the question of the plausibility of a scenario's physical possibility, the Vat is better than the Cartesian Demon, and somewhere down deep, we realize that matters.

If it didn't matter, the Demon would be just as good as the Vat. Do you think it is?

C. said...

Quee, I very much like your blog, your book, and - especially - your pioneering, definitive classification of philosophers' hairstyles. I would like to take more time to delve into all of these.
However: I would like to point out for now that nothing like "inference to the best explanation" actually refutes the scepticism of Hume. Hume still makes the point that certain knowledge of causation, self, etc. is not available. What passes for refutation, then, is just a work-around to make us feel better about what knowledge we do have, and perhaps, better, to show how what we call 'knowledge' actually works. The problem with "best explanation" is that there is no theory of it, and perhaps not even consistency to its workings. We could go farther and question what we could actually mean by 'certainty', as, when faced with sceptical arguments, we did with 'knowledge'; Wittgenstein does all of this in the collection called "On Certainty". (My favorite is a dialog that goes something like: "There's a tree"; "How do you know that's a tree?"; "Now I know I'm talking with a philosopher!")
Have you read Daniel Dennett's piece on the brain in a vat, called "Where Am I?" Not quite the point I'm making here, but amusing, entertaining and compelling.
In any case, thanks for your blog!

Quee Nelson said...

C., Since my book argues against Wittgenstein's philosophy (for being too postmodern/fideist), it doesn't help me to refer to him as an authority figure. See, for example, the famous/infamous Wittgenstein quotes I've included, above, from On Certainty. The philosophers are arranged in historical order, so he's near the bottom of the "Appendix."

When you say "certain knowledge," I take it you mean you feel that knowledge must be "certain" to be knowledge at all? But then you refer to "the knowledge that we DO have..." Do we, then, have any? I don't understand.

When you express your (certain knowledge?) that Hume can't be beaten, is it because you have his Problem of Induction in mind? Your reference to "causation" suggests this.

Did you notice that my book has a section on "Infallible Certainty," and also on "Hume's Riddle of Induction"? I assume you've not read these sections; you can read a few parts of the book here, and on Google books, but not all of it, and unfortunately the chapter on induction is only available to people who actually have the physical book.

When I have the time, I hope to post more of the book. But, in the meantime, you can buy a used one very cheap on Amazon, or request your librarian to have one sent over.

If you do then decide to "delve more deeply," I would be glad if you would explain the arguments you have in mind.

j said...

You really get to start off your argument with, "I really don't believe in infallibilist bs," and move on? Really? REALLY? You can just let that one go with no argument? Wow. That's handy.

Pharaohfitz said...

I don't know if I have a brain but believe I do. If I believe, do I know? I don't know if my brain is in a vat or not, but it doesn't matter to me. I believe that I have hands and that if I place them in a fire, they will get burned, whether my brain is in a vat or not. Linear thinking and logic get in the way of knowledge or its precursors, belief and consciousness. Whether my knowledge is based on a logical fallacy doesn't change the fact that I think putting my hands in a fire will burn them, whether there are "Other Minds" (the title of a book by Wittgenstein's student Wisdom) or not, whether my supposed brain is in a vat or not. Love the hair stuff...

Unknown said...

Seems like "brain in a vat" is unfalsifiable, and therefore uninteresting. These questions are necessarily of the implicit form "Assuming we aren't a brain in a vat, we can conclude x, y & z".

As for the "wiring up electrodes", you can push it much further than that. What if the rules of quantum mechanics are but a computer-simulated game in some other universe? We would feel, observe, and maybe even "be", just as we are, with none of our conclusions being altered (I think of Hofstadter's chapter on Einstein's brain encoded as a book with one page per neuron describing its behavior). What's "real" in this sense is rather a meaningless question.