The blog of Dr Glenn Andrew Peoples on Theology, Philosophy, and Social Issues

Nuts and Bolts 001: What is knowledge?


In a recent discussion with one of the commenters over at M and M’s blog (see the interchange between myself and someone using the nickname “Heraclides”), it occurred to me yet again that there are people – especially on the internet – who frequently wander into arguments about what are essentially philosophical subjects, who clearly don’t have a background in philosophy, who appear not to have done much (or any) reading in the area they are arguing about, and who are at times not really familiar with some of the basic terminology involved (even though they are using it). There’s nothing terrible about any of this so far – but then you realise that they are talking as though they are absolutely certain that they are experts in the field. You offer a little advice, but you are told by this obvious newcomer that you couldn’t possibly know what you’re talking about.

Take my recent encounter. I said that scientists treat theories as provisional, but they do not treat knowledge as provisional. Knowledge is, after all, warranted true belief, so a scientist only knows something if he has become convinced that it is true. The reply that I was promptly given was “Theories *are* knowledge 😉 This suggests to me that you don’t understand what a theory really is.” Oh, and as for the fact that knowledge is warranted true belief, this is what my zealous fellow blog visitor had to say:  “Only a religious person would write “knowledge is warranted true belief”. This both shows that you don’t understand science (and thereby aren’t in a position to criticise it) and that you don’t understand the failing of insisting something is “true belief” either (it’s blind to any revision or new information).”

Rather than simply get further frustrated at the bleak intellectual scene that one often finds in the comments section at blogs out there (as illustrated by the above encounter), I have decided to put a little more energy into becoming part of the solution. I’m adding a new category to my blog. The category is called “nuts and bolts.” In this new category, I’ll add posts that spell out basic terms and concepts used in the various subject areas in philosophy. You might think this is a bit redundant. After all, there are plenty of online dictionaries and encyclopedias out there. And you’re right, there are. But the way I see it, the more good basic information is out there, the more likely somebody will be to stumble upon it. So here it is, the very first post in the nuts and bolts category.

What is knowledge?

What does it mean to say that you know something? The first thing about knowledge – the stuff that I know – is that it consists of belief. I have a lot of beliefs. Some of them count as knowledge, and some of them don’t. That’s why it makes sense, when someone’s not really sure about a claim I’ve just made, for them to say “Glenn, you might think that’s so, but you don’t know it to be so.” Likewise, if someone asks me “Where’s my wallet?” and I have absolutely no idea where it is – no beliefs whatsoever about its location – I’ll say “I don’t know.” You can’t know X if you don’t hold the belief that we’re calling X.

So far, knowledge = belief

But knowledge is more than just a belief, right? Otherwise, a person who believed in the flying spaghetti monster (OK, so no actual such people exist, but just hang with me on this one) could truthfully say “I know that the flying spaghetti monster is real. I believe that he is real, therefore I know that he’s real.” Take another example: There’s a jar full of jellybeans on my table, and I ask you how many there are. You decide (somehow) that there are 154 jellybeans in the jar. You tell me “I believe that there are 154 jellybeans in that jar.” Next, I pour out all of the jellybeans onto the table and we count them. There are 135. If knowledge were just belief, then you actually knew that there were more jellybeans in the jar than there actually were! That’s obviously not right, so we need something extra to add to the definition before it’s really a definition of knowledge. Knowledge requires that the belief actually be correct. When someone says to me “I know what your name is,” their claim can only be true if the name they have in mind really is my name, because a belief that my name is something else would not be true.

So far, knowledge = true belief

But is this enough? Clearly not. What if, for example, someone had guessed wildly that there were actually 135 jellybeans in my jar? What if someone had just randomly pulled a name out of a hat when trying to determine what my name is, and amazingly, the name “Glenn” was the one they drew? Obviously neither of these would be an example of genuine knowledge; it would just be good luck in both cases. The belief would be accidentally true. Thus far the facts stated are absolutely uncontroversial. All agree that any plausible definition of knowledge must include the stipulation that knowledge must be a belief and it must be true. But something extra is needed, and it is this extra ingredient that has generated the bulk of the discussion in the literature.

Is knowledge justified true belief?

The first candidate for this missing ingredient was justification. The idea of justification here is much like the legal one – to justify your actions is to show that you did nothing wrong. Likewise, to justify holding your beliefs is to show that you did nothing wrong in forming them. You’ve got to be able to show that you had good grounds for holding the belief. Here’s an example that I put together: You’re driving in your car, and you’re traveling at 50 kilometres per hour. You look down at your speedometer, which indicates that you’re traveling at 50 kilometres per hour. You therefore form the belief that you’re traveling at 50 kilometres per hour. Here we’ve got three ingredients. Firstly, it’s a belief. Secondly, it’s true (you really are traveling at 50 kilometres per hour). Thirdly, you’re justified in holding the belief – you checked your speedometer, and under the circumstances of driving, this is how anyone could be expected to find out how fast they were traveling. If knowledge is justified true belief, then the driver of this car now knows that he is traveling at 50 kilometres per hour (give or take a little).

For a while, and to most in the field of epistemology (the study of knowledge and belief), this seemed correct. The bubble was burst in 1963 by a short article, just over two pages long, by Edmund Gettier, titled “Is Justified True Belief Knowledge?” Without using the specific examples that Gettier used (I for one find them unnecessarily complicated), the heart of his paper is that some beliefs could be justified and yet still only accidentally true. Think again of the driving example I used earlier. I deliberately left something out. Let’s review the facts again: You’re driving in your car and in fact you are traveling at 50 kilometres per hour (so it’s true that you’re traveling at 50 kilometres per hour), you’ve just checked your speedometer, which indicates that you’re traveling at 50 kilometres per hour (so you’re justified in believing that this is your actual speed), and so you form the belief that you’re traveling at 50 kilometres per hour. So you’ve got a justified true belief. Now here’s what I didn’t tell you before: Your speedometer is broken, and it is stuck on 50. No matter how fast you had actually been traveling, your speedometer would have indicated that you were traveling at 50 kilometres per hour. An observer who is aware of this can now say “he didn’t really know that he was traveling at 50 kilometres per hour after all. He was just lucky!”

Once we realise how “Gettier problems” work, it becomes clear that there could be thousands, perhaps millions of them. If I become blind and mentally unwell, and I hallucinate that there is a tree five metres in front of me, then I am not doing anything wrong when I form that belief, so it is a justified belief. And if it coincidentally turns out that there really is a tree five metres in front of me, then the belief is also true, but it is not knowledge.

Warranted true belief

Obviously the fact that a belief is justified isn’t quite enough to make a true belief into knowledge. Think again of that car speedometer. Had it been working correctly, then you would have known that you were traveling at 50. As it is, you were just lucky. The idea of justification needed to be replaced to take that difference into account. That replacement is what epistemologists now refer to as warrant. Warrant takes Gettier examples into account and avoids them. Warrant is like justification, but more carefully qualified. It takes into account our environment, and requires that it be congenial to passing on reliable information (so it rules out things like faulty speedometers), and it takes into account the proper functioning of our belief-forming faculties in a truth-aimed way (so it rules out things like hallucinations). Technically stated, after carefully exploring other options and rejecting them, epistemologist Alvin Plantinga summed up:

[T]he best way to construe warrant is in terms of proper function: a belief has warrant, for a person, if it is produced by her cognitive faculties functioning properly in a congenial epistemic environment according to a design plan successfully aimed at the production of true or verisimilitudinous belief.

Alvin Plantinga, Warrant and Proper Function, 278.

Knowledge, therefore, is now widely (and I think quite correctly) defined as “warranted true belief.” It is a belief that you actually hold, which is true, and which is formed in the right way: by belief-forming structures that are working correctly in a truth-aimed way, in an environment that is conducive to providing those faculties with reliable information.
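
For readers who find this easier to see as a worked example, here is a minimal toy sketch in Python (purely illustrative; the field names and the idea of reducing warrant to a single true/false flag are my own simplifications, not anything drawn from the epistemology literature). It treats the ingredients discussed above as flags and runs the stuck-speedometer case through both the “justified true belief” test and the “warranted true belief” test:

    from dataclasses import dataclass

    @dataclass
    class Belief:
        held: bool        # you actually hold the belief
        true: bool        # the belief matches how things actually are
        justified: bool   # you did nothing wrong in forming it
        warranted: bool   # produced by properly functioning, truth-aimed faculties
                          # in a congenial environment (Plantinga's sense of warrant)

    def justified_true_belief(b: Belief) -> bool:
        return b.held and b.true and b.justified

    def knowledge(b: Belief) -> bool:
        # knowledge = warranted true belief
        return b.held and b.true and b.warranted

    # The stuck-speedometer case: the driver really is doing 50 km/h and checked
    # the gauge, so the belief is justified and true; but the gauge is stuck, so
    # the environment is not passing on reliable information and warrant fails.
    stuck_speedometer = Belief(held=True, true=True, justified=True, warranted=False)

    print(justified_true_belief(stuck_speedometer))  # True  (a Gettier-style case)
    print(knowledge(stuck_speedometer))              # False (not knowledge)

The point of the toy model is only to make the difference between the two definitions explicit; nothing about warrant itself is really captured by a single flag.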

For further reading:

Richard Foley, “Conceptual Diversity in Epistemology,” in Paul K. Moser (ed.), The Oxford Handbook of Epistemology (Oxford: Oxford University Press, 2002), 177-203.

Peter Klein, “Knowledge, Concept of,” in Edward Craig (ed.), Routledge Encyclopedia of Philosophy, vol. 5.

Jonathan Kvanvig (ed.), Warrant in Contemporary Epistemology (Rowman and Littlefield, 1996).

Alvin Plantinga, Warrant and Proper Function (New York: Oxford University Press, 1993).

Glenn Peoples


66 Comments

  1. Johnnieboy

    Thanks for this entry – two thumbs up. You’ve taken the normally inaccessible field of philosophy and broken it down to an easily digestible form for the layman. Keep it coming!

  2. Kenny

    Good summary Glenn.

  3. Tuckster

    You might think this is a bit redundant.

    Not redundant, Glenn – there’s a difference between explaining something… and explaining it so that the layman can understand. You have a talent for the latter.

    (Or, what Johnnieboy said.)

  4. Glenn

    It might interest you to note that both Ken and Heraclides are now denying they said these things on the discussion thread at Open Parachute. A philosopher friend of theirs attacked me, and in his post stated that no sensible person could doubt that knowledge involves true belief; when I pounced on this, they claimed I am misrepresenting the exchanges.

    • Matt, they DENIED it? Perhaps they’ve forgotten that MS Windows warning that advises people that information submitted to the internet can be viewed by other people. 🙂 Heraclides definitely said:

      Only a religious person would write “knowledge is warranted true belief”. This both shows that you don’t understand science (and thereby aren’t in a position to criticise it) and that you don’t understand the failing of insisting something is “true belief” either (it’s blind to any revision or new information).

      That’s pretty explicit. I trust you provided the readers of Ken’s blog with a link to show exactly where the claims were made. But seriously, the fact that they simply denied it is just incredible.

  5. Ken

    Glen – perhaps you can clarify things with some specific scientific examples. (Matt won’t respond to requests for practical situations.)

    You have stated “that scientists treat theories as provisional, but they do not treat knowledge as provisional. Knowledge is, after all, warranted true belief, so a scientist only knows something if he has become convinced that it is true.” – Now to me that doesn’t accommodate the dynamic, lively nature of scientific knowledge – or my own research experience. That, of course, may be a matter of terminology – but I wonder if it is, at base, a different understanding of how knowledge is obtained.

    I realise you are presenting an abstract philosophical definition. However, that is no use if it doesn’t have practical application. (It also concerns me that the practical application could well be opportunist and misleading. For example, the claim of “biblical and theological truths”.)

    However, to take your example of the jellybean jar. Sure, before investigation people can have “beliefs” about the number of beans. Now – we have sighted 135 beans and the claim can be made that there are 135 jellybeans in the jar.

    From your perspective:
    Is that a “theory”? Or is it “knowledge”?

    I would also like you to clarify your claim that “theories are not knowledge. Theories may or may not be true. Knowledge is true.” Again this doesn’t correspond to my experience.

    It’s best to clarify this with specific examples of what scientific ideas you consider knowledge and which you consider theories. (And why you think these differ.) I realise you may have problems thinking of specific examples yourself. Perhaps you could reply to my specific questions.

    Do any of the following aspects of scientific knowledge qualify as “knowledge” according to the abstract definition you are promoting:

    Thermodynamic laws – specifically the first (conservation) and second law (entropy);

    Atoms and atomic theory – especially the standard model;

    The standard cosmological model;

    Boyle’s gas laws

    Newton’s laws of motion

    String theory.

    Of course, words like theory and law are used here as part of the normal nomenclature rather than implying any particular status of knowledge.

    If none of these qualify – could you give a few specific examples of what you would consider as scientific “knowledge” and scientific “theory?”

    Look forward to your response.

  6. Hi Ken

    Believe it or not, what I’ve offered here isn’t any sort of “abstract philosophical” definition of knowledge. It’s what philosophers have said, yes, but it’s what they have said when formulating a definition of the everyday, ordinary concept of knowledge that we’re all familiar with. It refers to the thing that we all refer to when we say that we actually know things.

    I gave a few practical applications in the blog post itself (jellybeans, my name and cars), but in regard to your additional examples, here’s how the definition of knowledge ties in:

    However, to take your example of the jellybean jar. Sure, before investigation people can have “beliefs” about the number of beans. Now – we have sighted 135 beans and the claim can be made that there are 135 jellybeans in the jar.

    From your perspective:
    Is that a “theory”? Or is it “knowledge”?

    Well some theories are right, and some are wrong. Whether something is knowledge is independent of whether or not it is a theory.

    In the above example, we have counted the jellybeans so we have a very well supported theory (some theories are not well supported). So yes, it is a theory. However, our belief, encapsulated in this theory, is also true – and it’s not true because we have counted the jellybeans, it’s true because as a matter of fact (independent of what we may or may not think) there really are 135 jellybeans in the jar. What makes it a theory is that we believe it and the way we formulate the claim. What makes it knowledge is the fact that it is true, and we believe it, and we have sufficient warrant for believing it.

    Ideally, we would only believe theories when we do have sufficient warrant for believing them and they are true. However, since it’s possible to believe theories that are untrue and unwarranted, the set of things that are theories and the set of things that are knowledge are intersecting sets, but not identical sets.

    I think that on reflection you’d grant that this really does correspond to your experience. Scientists once held to the respectable theory that the orbits of the planets were circular. It was based on some of the observable evidence (i.e. it wasn’t just a crazy theory), and they *thought* they knew it to be true. However, we now know that it wasn’t knowledge after all. It was a false theory. They didn’t actually know that planetary orbits were circular, they merely believed it.

    This is why we should say that many scientific theories should be described as provisional. Theories could be wrong, even if we’re fairly sure of them at the time. Knowledge, however, is not provisional. When we describe a theory as provisional, what we are saying is: “I provisionally claim that this theory constitutes knowledge. It might not be knowledge, but I provisionally declare that it is.” Knowledge is not provisional because “knowledge” is what we say about beliefs that are in fact true. We might not know how many of our beliefs really are knowledge, but we absolutely must say: “those beliefs I hold that constitute knowledge are definitely true.”

    You say: “Do any of the following aspects of scientific knowledge qualify as “knowledge” according to the abstract definition you are promoting:”

    Well again, it’s absolutely not an abstract definition of knowledge at all. It’s the run of the mill, everyday ordinary definition of knowledge that we all use when we are correctly using English. That being said, I hope that what I’ve said above is enough to show how these theories (or any theories at all) connect with the idea of knowledge.

  7. Ken

    [NOTE: This has been posted by Glenn, as Ken has advised me that he is experiencing a unique difficulty in posting at this site, and that his posts are being rejected. The following is Ken’s post.]

    I suspect Heraclides was correct when he identified the problem as one of you not understanding what is meant by a scientific theory. From my perspective and experience you have described theory in a naive mechanical way as either true or false. In reality scientific theories are dynamic, constantly developing so as to provide better and better reflections of reality. But they are reflections of reality – not reality itself.

    So they should not be described as absolutely true – although they will contain elements of absolute truth (as well as relative truth). I briefly described this in my post at Open Parachute (Epistemolo-what?!!). (Sorry, I can’t put a link as I think this causes rejection of my comments.)

    I think this is the problem with adopting an abstract approach (no matter how many philosophers do this). Unless our knowledge is intimately connected to reality, via practice, we are bound to come to conclusions which can’t really be used.

    We can take your example of the planets to illustrate scientific dynamism. At one stage orbits were assumed to be circular (good philosophical reasons for that belief, by the way). Evidence accumulated showing that this model produced the wrong predictions. Eventually it was recognised that the orbits were elliptical with the sun at one of the foci.

    Now, the planetary science knowledge or theories were not either “true” or “false”. They were, and still are, just imperfect reflections of reality. We didn’t throw away that theory or knowledge when further evidence came in, we modified it. As philosopher of science Alan Chalmers points out, almost inevitably (but not completely so) “new” theories contain within themselves the “old” theories as limiting cases, which are usually still very useful. After all, for most purposes we still use Newtonian mechanics, don’t we?

    The fact is that all knowledge, all scientific theories are provisional – it arises out of their dynamic nature. Scientists don’t normally describe their theories or knowledge as absolute truths (except to be provocative as Jerry Coyne did with the title of his recent book). We don’t normally talk about “scientific truths” in the way that theologians talk about “biblical truths.”

    As for examples – I had hoped that your classification of the different areas of scientific knowledge would illustrate your understanding of theory. I guess we are stuck with the jelly beans.

    Now you say that the theory that there are 135 beans is a “well supported theory”, a “true belief”, “true as a matter of fact” and therefore “knowledge.”

    You have presumably come to that conclusion because of the empirical evidence, the fact that we have interacted with reality, sighted the beans and counted them.

    As a scientist I would say that it is scientific knowledge and a scientific theory (we actually tend to use the word theory only where ideas are well supported. Where we don’t yet have support we usually describe them as hypotheses). But I would never say it was “true as a matter of fact” or a “true belief.”

    While this is a trivial example I can expand this to parallel the real world situation we have in cosmology at the moment.

    OK – I have got by with the scientific theory that there are 135 beans in the jar. But after a while I notice there are things which just don’t agree. (Just like planetary orbits). Perhaps I start to realise that the volume in the jar doesn’t really correspond with that number, or some conflict exists with apparent weight. So I make some exact measurements (including re-sighting and counting the beans), like accurate measurements of the weight of the jar contents. This produces a result conflicting with the number. The improved evidence actually shows that the mass of beans in the jar is about 500% greater than it would be if there were only 135 beans. I am therefore forced to revise my theory, revise my knowledge, about the number of beans. I get a more exact picture – a better reflection of reality, and actually produce a new “problem.” There appear to be jelly beans that just don’t interact with electromagnetic radiation, but have mass and can therefore be weighed. This is the sort of problem scientists love – something new to investigate. I can assure you that in my own research a result which conflicted with my own (or others) theory was actually very welcome. At times like that I felt we had a good chance of making progress.

    As I said, a trivial imaginary example, but it parallels the discovery of dark matter in galaxies.

    The point is that our knowledge about the jelly beans is dynamic – not “true belief”. It changes as we get more information. It may well be that for many purposes we can ignore these new types of jelly beans and the old theory would suffice. But for others we have to use the “new” theory (which contains within it the “old” theory).

    Briefly, that is how scientific epistemology works. Even though you may have a number of philosophers who you think don’t agree with that. But then again they aren’t in the process of obtaining knowledge about reality. Science is, and most sensible people will agree it has been extremely successful at this task.

  8. Ken

    I think the best way to see why I don’t share your concerns about my position is to ask the reader to read my post, and then to read the way that you have depicted my stance. They are not the same.

    You continue to depict this ordinary, basic, boring definition of knowledge as idiosyncratic or strangely philosophical. It really isn’t; it is the totally normal English meaning. It explains very run-of-the-mill ways of speaking such as “wait – you don’t know that! You only THINK that you do.”

    While I have said that knowledge is warranted true belief, I certainly haven’t said, as you seem to think I have, that we will know with certainty which of our beliefs are false and which are not.

    Take the jelly bean example again. You said:

    Now you say that the theory that there are 135 beans is a “well supported theory”, a “true belief”, “true as a matter of fact” and therefore “knowledge.”

    Actually I never said that at all. Have another look at the post you’re responding to. I never said that the claim about 135 jellybeans is a true belief, and therefore it is knowledge. Truth is a necessary, but not a sufficient, condition of knowledge. Knowledge is not merely true belief, it is warranted true belief. If you’ve neglected important testing or ways of finding information, then your belief is not warranted, and if in reality there’s a different number of jelly beans, then the belief isn’t true. In either case, you don’t have knowledge.

    If you came across further evidence that called into question the claim that there are 135 jellybeans, then you don’t say “I will now revise my knowledge.” Far from it! What you say is “maybe I thought I had knowledge, when I didn’t really have knowledge after all. I will revise my theory, because I want my theory to get closer to being actual knowledge.”

    Calling a theory “knowledge” is just to express your belief that the theory is correct, because knowledge is warranted belief that is true.

    So as you hopefully see, I have no problem – as I already stated – with holding theories provisionally. Holding them provisionally just means being humble about the fact that they may not turn out to constitute knowledge.

  9. Nick

    Hi Glen,

    Thanks for the definition, perhaps you could clarify something for me. It seems to me (hard to tell, but I get a whiff) that you are a bit hung up on the “true” portion of this definition. Correct me if I am wrong, but it seems that the necessarily provisional nature of scientific theories somehow places them in another category other than knowledge for you.

    What I am driving at here, is that the true portion of your definition is actually not practically useful, at least as a binary TRUE/FALSE value as there appears to be no way to achieve 100% certainty about the truth of anything. Without the ability to achieve this 100% certainty, I would say that the true bit of the definition is redundant and actually practically meaningless. Unless of course, you know of any ways to achieve this 100% certainty?

    For the record, I have no direct experience of science, or of philosophy (outside of the pub anyways), but I do have an opinion, and this is a blog, so:

    In fact, for practical applications, it seems to me that a more useful (and honest) definition would be to replace this true stuff with a probability of certainty. Perhaps this is unorthodox, but I would then define knowledge as information about something, that has a probability of certainty. Nb. Of course, with such a definition, both the information and the probability of certainty can be incredibly complex and interrelated things.

    Where does science come into this? I rather like the image of science (and scientists) as providing an evolutionary fitness function in developing/evolving knowledge. Achieving ever more detailed, useful and certain information about reality. It also seems to be hugely practical in that any technique that can be used to improve the certainty of the knowledge can be, and is, used to do so. This includes, but is not limited to, philosophical ideas where possible.

  10. Correct me if I am wrong, but it seems that the necessarily provisional nature of scientific theories somehow places them in another category other than knowledge for you.
    Depends on the theory. Some scientific theories are true and we are warranted in believing them; these constitute knowledge. Other scientific theories are false. Geocentrism, for example, was a scientific theory and it is false, hence it is not knowledge and it never was. It is not the case that in the 13th century many people knew the world was the fixed centre of the universe; it is that they were mistaken.
    What I am driving at here, is that the true portion of your definition is actually not practically useful, at least as a binary TRUE/FALSE value as there appears to be no way to achieve 100% certainty about the truth of anything. Without the ability to achieve this 100% certainty, I would say that the true bit of the definition is redundant and actually practically meaningless.
    Not at all. You seem to be conflating the issue of a proposition being true with a person having 100% certainty that a proposition is true. To know something, it only has to be true; one does not have to have certainty that it is true.

  11. Ken

    Beliefs are provisional, not knowledge. If you have to revise a theory because part of it is mistaken then that part of the theory was false and hence was not knowledge.

    Of course beliefs can be justified and mistaken, and we can later discover things which mean we change our opinion. But that does not mean knowledge is provisional; it means that our beliefs, even our justified ones, are provisional.

  12. Nick

    Matthew @12 I think you have missed my point. I don’t disagree that you can define a category of things that are true. What I am saying is that we have no way of knowing, to 100% certainty, what the members of the bucket labelled “true things” are. Without a way of determining the members of that category, the category itself is practically useless. Thus, a more practically useful definition of knowledge should contain some concept of probability or degree of certainty, through which we can use that knowledge.

    Perhaps there could be some utility for such an abstract category of true things, but I haven’t yet read anything in this thread that supports such utility.

    I would probably even go so far out on a limb as to propose that the category itself does not exist for human beings, in that our languages and even thoughts are not precise enough to even compose information/knowledge that can be categorised as a 100% true description of reality. Maybe there is always room for refinement of knowledge. Perhaps the resident philosopher can flesh this out with some discussion on philosophy of language?

  13. Nick, “Maybe there is always room for refinement of knowledge.”

    As a whole, perhaps, but there are definitely some propositions that I think it’s safe to say that we know. For example “1 + 1 = 2” or “I am now posting on my blog.”

  14. Nick

    @14 When you say “it might be difficult in many cases to be 100% certain that we have knowledge”, I disagree. I am saying that it appears to be impossible to know anything as a 100% certainty. In other words, I don’t think that there is any knowledge that we are 100% certain of. I am happy to be proved wrong if somebody has an example of some 100% certain knowledge along with the reason why we are 100% certain.

    When you say “You call the idea of knowledge thus defined “useless” because we can’t have this type of certainty. I disagree. Knowledge is that to which we aspire.”,

    I suppose I can appreciate what you are saying on that point. If we consider such a definition as an unreachable target, then perhaps then the concept has some use as an aspiration. Maybe a bit like the concept of a limit in mathematics. Does this concept have any other, perhaps more practical usages? I still find my definition with a clear statement about probability to be a bit more descriptive of the actual nature of knowledge.

  15. Nick

    Apologies if @16 is not so clear, my quoting of text with quotes in it was probably not a good idea.

    @17 I would agree that we could probably weight the first of those statements up near the high end of certainty, but doesn’t this lead into the idea of platonic knowledge and a possible debate about whether mathematics has reality and is discovered, or is an invented construct that exists within certain constraints? To put it another way, is it possible to imagine scenarios in which we could discover that 1+1 != 2? I think I can just about do this enough to lower the certainty of that statement below 100%. To put it another way, if you deconstruct the equation, there are some pretty meaty justifications needed for the different concepts here. You will of course be aware of the contributions of Bertrand Russell in this area. Having read some of his writing on this, I would suggest that the seemingly simple certainty of the equation can be seen as anything but.

    The 2nd of your statements is more problematic, particularly from where I am sitting. You could be a hacker posting from Estonia for all I know. Given the context though, I am perfectly happy to accept that you are posting on your blog as a working hypothesis.

    Your second statement gets even more interesting from where you are sitting, in that we could rapidly get into all sorts of thorny sense of self and consciousness issues and even good old existentialism. All of which, in my opinion is sufficient to again reduce this certainty below 100%.

  16. Nick

    Oops, now my numbering attempts seem to have gone awry. @17 should of course be @15.

  17. Nick

    @17. I am not having a good night. Of course my not-equals less-than and greater-than symbols disappeared (html tags). The sentence should read as follows: “To put it another way, is it possible to imagine scenarios in which we could discover that 1+1 != 2?” Using the C style not equals operator.

  18. Nick:

    What I am driving at here, is that the true portion of your definition is actually not practically useful, at least as a binary TRUE/FALSE value as there appears to be no way to achieve 100% certainty about the truth of anything. Without the ability to achieve this 100% certainty, I would say that the true bit of the definition is redundant and actually practically meaningless. Unless of course, you know of any ways to achieve this 100% certainty?

    I think the issue may simply be one of terminology. Yes, given the definition of knowledge as being true warranted belief, it might be difficult in many cases to be 100% certain that we have knowledge. I have no problem with this, so we are not disagreeing there.

    You call the idea of knowledge thus defined “useless” because we can’t have this type of certainty. I disagree. Knowledge is that to which we aspire. We try as much as we can to get our beliefs and theories to line up with the way things really are, and to the extent that we think they do, we provisionally call them knowledge.

    When you say that we can’t have 100% certainty, I think that’s what all scientists say. They admit that their theories are provisional. Important yes, probably true yes (as far as we currently know), and provisional. I don’t think this diminishes or trivialises them at all.

  19. Nick – just a brief comment (I’m not at home, I will say more later perhaps), but when I referred to the belief that I am now posting on the blog, I was referring to my own belief that I am now posting on the blog, not your belief (I could, after all, be a hacker as far as you know).

  20. Nick

    Ok, I have done a little bit of reading in the epistemic externalism area. I am not sure if I would describe this as subtle, more opaque, and to be honest, am not sure that I really want to spend much more time on this.

    However, I will offer some initial observations/comments and we can see where the discussion goes.

    Brain in vat examples seem to abound in this area. In fact, lots of this stuff seems to be composed as rejections of skepticism. From the wikipedia article:

    Either I am a BIV, or I am not a BIV.
    If I am not a BIV, then when I say “I am not a BIV”, it is true.
    If I am a BIV, then, when I say “I am not a BIV”, it is true (because “brain” and “vat” would only pick out the brains and vats being simulated).
    Therefore, my utterance of “I am not a BIV” is true.

    Seems to be all about defining some sort of objective truth here. I would instead add the concept of context to this. In other words, what is truth or reality in this example depends on the frame of reference. From the frame of reference of the simulator, there is a brain in a vat, from the frame of reference of the unknowing brain in the vat, there is not a brain in the vat.

    Perhaps there is a more sophisticated way of looking at this. You have a consciousness. This consciousness exists/is implemented on a given substrate. This substrate contains everything that is relevant to define the consciousness. In modern neuroscience, for example, there is support for the view that the self of a person is not just a manifestation of the physical brain, but rather of the sum total of the physical processes, which also include the body, and not least the interactions between the brain/body and external physical reality.

    In the brain in vat example, the substrate for consciousness is the brain in vat, plus the simulated reality, plus the interactions between the brain in vat and the simulated reality.

    In fact, I don’t know why people bother with the brain in vat at all, why not just simulate the brain/body as well? In other words, the consciousness lives entirely within the simulation; all you have is simulation. This should be starting to sound a bit familiar now, and this is the real point: the word simulation ceases to have meaning outside of the concept of agency (i.e. the simulator). All that is required to consider our reality a simulation is to have a simulator, or creator. This should be starting to sound real familiar now.

  21. Nick

    @Glen. Also, where do you stand on Reliabilism? Is a reliable process for justification sufficient for your concept of knowledge?

    I would again claim context as important. If you are talking about knowledge of reality, then in my opinion, a reliable process of justification is not sufficient, as it is easy to demonstrate that you can take an input from reality, run it through a reliable process, and then have an output that does not compare to reality.

    If you however narrow the context of your knowledge, for example to a mathematical context, you could argue that given some basic axioms as the input, then all you need is a reliable process (logic) to generate/justify knowledge. There are of course some problems even here (as helpfully pointed out by Scott on Ken’s blog), as Gödel’s incompleteness theorems show.

    Also, what definition do you use for reliable here? I have assumed repeatable, this is the normal one I use.

  22. I think “brain in vat” type examples are not really representative of the serious types of material one can find on externalism vs internalism.

    And yes, context is vital. This is contained in the idea of warrant, which is not mere justification. As I discussed in the initial blog post, there are contexts in which a person can be justified by a misleading process. I used the example of a stuck speedometer.

    Warrant, however, takes context (that is, the belief forming environment) thoroughly into account. As explained in the technical but succinct definition of warrant that I quoted then:

    [T]he best way to construe warrant is in terms of proper function: a belief has warrant, for a person, if it is produced by her cognitive faculties functioning properly in a congenial epistemic environment according to a design plan successfully aimed at the production of true or verisimilitudinous belief.

    Alvin Plantinga, Warrant and Proper Function, 278.

    A “congenial epistemic environment” is one in which my belief forming processes/structures interact with stimulus in the right kind of way to form beliefs that correspond to the way the world is.

    Also, in reliabilism, the word “reliable” is not applied to the justification process. It refers to the belief forming faculties (e.g. senses, brain etc). What I advocate is much like reliabilism, but the vital thing is the concept of warrant, in the normal sense in which I have been using that term throughout my posts.

    And no, I don’t use “reliable” to mean repeatable. It has nothing to do with whether something can be repeated. For example, a measuring device can be reliable but then break down so that it is no longer reliable. “Reliable” simply means that something works in the right way as to supply correct information.

    Really, I don’t see any reason for you to give up on warrant. It may be that it’s not the term that you’re accustomed to using, but it has all the features required for the ordinary process of knowledge acquisition that everyday human beings employ on a daily basis.

  23. Nick

    I will consider your other points a bit further before commenting, but in terms of your definition of reliability, doesn’t this just open up the Pandora’s box of certainty again? How do you know that something is working the right way or that it is supplying correct information?

    Isn’t this somewhat circular if you are introducing this into a discussion on justifying beliefs?

    Perhaps this is why I am used to a more process orientated concept of reliability. If you start with the same inputs and use the same process/function, then it is a reliable process/function when you get the same results. NB. This does not presuppose the same results each time: you could have a process that produces a random result from an input, but then you might be able to say that this reliably produces a random result.

  24. Nick

    @24. Perhaps that’s not quite clear enough. The process could be considered reliable if it produced the same probability distribution of results. As I understand it, much of quantum electrodynamics rests on this principle. The famous photon through the slits experiment is not deterministic when looking at one event, but when looked at over many events fits a deterministic probability distribution.

  25. Nick

    @23 Ahhaa, I had not realized that warrant and justification were supposed to be two separate things. I cannot see the grounds for this. Isn’t this just introducing another form of unexplained certainty?

    I think we can find the seed of the problem within your statement:
    A “congenial epistemic environment” is one in which my belief forming processes/structures interact with stimulus in the right kind of way to form beliefs that correspond to the way the world is.

    I can accept that such belief forming processes/structures could exist, but: Again, how would you know whether your belief forming structures fit this specification? Testing against reality would be my pick.

    Also, it has been progressively more clearly demonstrated that the belief forming mechanisms of human beings don’t fit this specification. I have been trying to point this out to all comers over on Ken’s blog for a while, but most people ignore this point. Humans have very badly skewed senses of probability. We see patterns in randomness, agency where there is none. There are some very good explanations for this coming out of research in various aspects of human cognition where evolutionary theory is applied. You could argue that a lot of these have a touch of “just so story” to them, but they are the best explanations that we have.

    I think your final statement in that post sheds further light on this “warrant” idea:
    Really, I don’t see any reason for you to give up on warrant. It may be that it’s not the term that you’re accustomed to using, but it has all the features required for the ordinary process of knowledge acquisition that everyday human beings employ on a daily basis.

    How is this different from gut feeling, or personal conviction (both terms that you had no time for earlier in the discussion), both things that human beings use in knowledge acquisition on a daily basis.

    The fact that the human knowledge acquisition process is deeply flawed is (I think) one of the reasons why people with experience of science put effort into communicating with the public. They spend their lives struggling with their own biases and where possible removing or minimising these biases to further their research. They have experience of how their biases have led them down blind pathways. I think this is one reason that you find a high level of skepticism about “philosophical” thinking that gets a bit too far into “beautiful thought castle in the sky” territory.

  26. Nick

    Further to this, I really recommend that you have a look at the following link. http://www.edge.org/3rd_culture/bargh09/bargh09_index.html This is a discussion with John A. Bargh a social psychologist from Yale. He even has some warnings for professional philosophers such as yourself at some point in there.

  27. Nick, what I gave as the definition of reliable is not circular. I think what you’re referring to is not circularity, but rather a problem of infinite regress. This would be a problem if I endorsed a form of internalism, but I don’t, so it isn’t.

    This also applies to the “how would you know” question about a congenial epistemic environment in the post that followed. The whole point of externalism is that it rejects the need to enter into that sort of regress. I think that in order to know X, we simply need to know X. Internalism, by contrast, says that in order to know X, we need to know that we know X. Of course, we then need to know that we know that we know that we know (etc), into infinity. This is why I reject internalism.

    I think you misunderstand my reference to what people do every day. I said that the concept of warrant that I am employing is one that in fact people do use every day. You say “How is this different from gut feeling, or personal conviction (both terms that you had no time for earlier in the discussion), both things that human beings use in knowledge acquisition on a daily basis.” It’s different because warrant is not gut feeling or personal conviction. Warrant is described earlier in this thread. Sure, it has one thing in common with those other things you listed: Namely that it plays a role in the way that people form beliefs on a daily basis, but beyond that the ideas are simply not the same.

  28. Nick

    Glen, in terms of reliability, yep sure, you can call what I was referring to as an infinite regress, but I stick to circular, as to my mind, you are trying to justify a statement about truth with the same statement, which is also circular.

    In fact I would probably go further and say that this is recursive. As I have explained, I do not have a background in science or philosophy, so probably use some words differently to you. No mind, I am flexible, and will swap to your term.

    Now as far as your point on externalism. You have stated here, that the whole point of externalism is to avoid this infinite recursion.

    The whole point of externalism is that it rejects the need to enter into that sort of regress. I think that in order to know X, we simply need to know X. Internalism, by contrast, says that in order to know X, we need to know that we know X. Of course, we then need to know that we know that we know that we know (etc), into infinity. This is why I reject internalism.
    This clarifies things for me. I would now offer the following observations.

    This seems to be a definitional approach to avoiding this infinite regress. In other words, we will propose a concept (warrant) which by definition means that it is a true belief, and thus we have avoided infinities, and everybody is happy.

    I would say this is a false approach. As stated before, I have no problem with the concept of warrant as true belief, in that it could exist. And if this were a mathematical equation, I can see how you could avoid the infinity with this concept. However, I don’t think that this necessarily then applies to reality, and almost certainly, I don’t see any grounds to say that this applies to humans. Quite the reverse.

    I would also ask: Why the overwhelming need to avoid the infinite regress? This has come up before in discussion with people about the origins of the universe, and the prime mover ideas. In terms of explorations of anything, I don’t see that there is a qualitative difference between an infinite regress, or not. Surely this is just what you find somewhere. The main difference is to the amount of traction we can apply to the subject with logic or maths. An infinite regress can make that sort of analysis not so successful, but that is no reason to just define that it is not there, as we then have stepped away from reality into our own story.

    I would go further, and ask: What makes people so uncomfortable with infinities? Sure, the concept can be a bit hard to wrap your head around originally, but after a while, you can see that we can deal with them, we just need to accept that there will always be a bit of uncertainty. In maths, science and technology, people deal with these issues pragmatically and successfully on a daily basis, essentially because they don’t have the luxury of making them go away by definition.

    Some examples: Recursion (which is essentially an infinite regression) is very often used, and useful in computer programming. The concept of limits in calculus, can be regarded as a way of dealing with infinite detail.

  29. Nick, here’s why we need to avoid infinite regress in the way that I have been saying:

    If it’s really true that we need to “know that we know” something in order to actually know it (this is the position I reject), then we do not know anything. This is because we can never actually get to the beginning of an infinite regress (otherwise it would be finite).

    So the options are: 1) reject the model that requires the infinite regress, or 2) affirm that we know absolutely nothing about anything.

    Now it strikes me that 2) seems wildly false, and actually smacks of self contradiction. If I say “I know nothing,” the obvious question is “really? Do you know that?” So I think the infinite regress has completely unacceptable consequences.

    The issue is by no means, as you put it, “a bit of uncertainty.” It is not uncertainty at all that’s the problem. The problem is that if we accept internalism (the model that I think requires an infinite regress) then we must not merely be uncertain, but we must be prepared to state as fact that we have no knowledge.

  30. Nick

    I think we are really getting somewhere now. I think the specific issue lies with your statement below.
    If it’s really true that we need to “know that we know” something in order to actually know it (this is the position I reject), then we do not know anything. This is because we can never actually get to the beginning of an infinite regress (otherwise it would be finite).

    I think that this is false. The infinite regression here does not mean that we do not know anything. Rather, this is the precise reason why all knowledge is provisional. The sensible way to handle this is to accept the provisional nature of knowledge and then get about pragmatically making use of it anyway.

    This is possible by trying to make knowledge as precise as the evidence allows, but always being open to the possibility of review when new information arises, perhaps through traversing another layer of the regression. This often happens in science; consider the oft-cited revision of Newtonian gravity with general relativity. But I am sure you have heard this all before, or?

    So, I must add option number 3 to your interpretations of the fore mentioned infinite regress:

    3) Accept that it is not possible to have absolute certainty about the truth of any knowledge, and proceed under the pragmatic basis that we can have differing degrees of certainty about knowledge. In other words, this is not an all or nothing question.

    In colloquial terms, you have rejected option 2) because the prospect of an infinite regress makes you uneasy and decided instead to ignore it by definition, and thus open up a massive blind spot in any thinking you undertake.

    I think that the flaw in this approach is probably even demonstrable. It would be hard to test, but I would predict that the utility of your strategy would actually be measurably lower than the utility of the provisional knowledge approach, as you could have all manner of unsupported beliefs that you have defined as true. The unrevisability of these beliefs would then place you at a disadvantage when dealing with reality vs someone willing to revise their beliefs as the evidence arrives. And I think this is what we actually see. Of course the scale of these difficulties would depend on the scale of the 100% certain beliefs and how successfully they are compartmentalized.

  31. Nick, to call knowledge provisional is to say that it could later be proven wrong. Obviously if knowledge is warranted true belief, then it’s not provisional.

    Again, scientific claims are provisional. That has nothing to do with the infinite regress or with having to “know that we know.” It’s a different issue and a quite separate controversy to the one about the infinite regress. It’s very easy to see how the infinite regress means that we have no knowledge at all.

    Let k1 = the knowledge that X is the case.

    Let k2 = the knowledge that I have k1.

    Let k3 = the knowledge that I have k2.

    And so on. As I have said, k1 is the warranted true belief that X is the case. According to the view that I have said I hold, in order to know X, all I need is k1. This is externalism.

    According to internalism, the view I reject, for any belief X, in order to actually know X, I need k2. But this means that in order to actually have k2, I need k3. But then in order to have k3, I also need k4. This continues into infinity. Since we can never actually reach infinity in this process, it follows deductively that we can never know X.

    This has nothing to do with merely being able to say that we can have “provisional” knowledge of X. It just means that whatever else we can say about X, whatever feelings of certainty or confidence we might think we can have, we cannot know it. Ever. No matter what.

    This means that you can never even know whether scientific beliefs should be regarded as provisional. It means that we cannot know that it’s wrong to declare utter certainty about all of our beliefs. It means we cannot know anything at all.

    In other words, the infinite regress problem is not one that says that we should be less dogmatic or more uncertain. It tells us with certainty that we don’t know anything.
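
    (For anyone who finds the regress easier to see in code, here is a toy sketch; the function names and the “meta_levels_available” parameter are invented purely for illustration and are not anyone’s official formulation. It models the internalist requirement as characterised above, where every layer of knowledge demands a further layer, alongside the externalist condition, where k1 alone suffices.)

        def knows_internalist(p: str, meta_levels_available: int) -> bool:
            # Internalism, on the reading criticised above: to know p you must also
            # know that you know p, so the same requirement fires again one level up.
            # "meta_levels_available" stands in for however many layers of reflective
            # awareness a finite knower can actually supply.
            if meta_levels_available == 0:
                return False                      # one more layer is always still owed
            return knows_internalist("I know that: " + p, meta_levels_available - 1)

        def knows_externalist(believed: bool, true: bool, warranted: bool) -> bool:
            # Externalism, as described above: k1 (a warranted true belief) suffices.
            return believed and true and warranted

        print(knows_internalist("there are 135 jellybeans in the jar", 500))  # False
        print(knows_externalist(True, True, True))                            # True

    However many finite layers you supply, the internalist check never bottoms out in a “yes”; that is just the regress stated in programming terms.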

    You’re also attacking a straw man when you talk about “unsupported beliefs” or “unrevisable” beliefs. After all, I have repeatedly said that knowledge is not just true belief. It is warranted true belief. If a belief is warranted, then it is hardly unsupported – and I have certainly never said that we should be unwilling to revisit and revise our beliefs if necessary, even if we once thought that we knew them to be true.

    Edit:

    I want to tread carefully so as not to condescend, because it’s clearly a subject in which you have a genuine interest and I want to encourage, rather than annoy, so please hear this as an attempt to do that: I get the impression that until this discussion you hadn’t heard of either epistemic internalism or externalism, and these categories and the standard critiques of them, which are very well known within the subject, are a bit unfamiliar to you. For that reason, and because it looks like perhaps you’d be interested in exploring it a bit further, there are some really good resources out there (some free online ones, some books for purchase). You might find these useful:

    Michael Bergmann, Justification Without Awareness: A Defense of Epistemic Externalism (for sale at fishpond here).

    Alvin Goldman, “Reliabilism,” The Stanford Encyclopedia of Philosophy (online here)

    George Pappas, “Internalist vs. Externalist Conceptions of Epistemic Justification,” The Stanford Encyclopedia of Philosophy (online here)

    Matthias Steup, “The Analysis of Knowledge,” The Stanford Encyclopedia of Philosophy (online here)

    Juan Comesaña, “We are (almost) all Externalists Now,” online at Dr Comesaña’s page at the University of Wisconsin here.

  32. Nick

    You say that the infinite regression is not what makes things provisional. In your words then, what makes a scientific claim provisional?

    A few points of this infinite regression. You said:

    And so on. As I have said, k1 is the warranted true belief that X is the case. According to the view that I have said I hold, in order to know X, all I need is k1. This is externalism.

    To me, this seems to be an attempt to avoid the infinite regression by definition. In practice, I don’t see how this actually avoids it, however. You can still ask the question (which I have) of why you would define things in this way. This jumps right back into that infinite regression.

    It seems that your answer to the why question above is to say, “because then we avoid the infinite regression”. Well, I would then have a further very practical question. Why is it better to avoid the infinite regression? Where is the utility in doing this?

    As I stated earlier, I have no problem with the concept that you could have 100% certainty about something given a way of doing this. I am just skeptical that you could ever know if you had this ability. Welcome back, infinite regression.

    I could see, however, that you might want to postulate this if it would help you make progress in some way, perhaps as a working hypothesis. I then have some very real questions for you. What does your choice of externalism bring you? What does it offer me? Why should I spend time to find out more about it? Where is the productive output, or is this expected at some point in the future? What does this bring to science?

    In summary, I see this reasoning of ignoring the infinite regress as faulty and analogous to Georg Cantor, on discovering multiple levels of infinities (in fact an infinity of them) in the real numbers, throwing up his hands and saying “that can’t be, this must mean that real numbers do not exist”.

    Finally, it also seems to me that you have not spotted at least one more infinite regression that lurks in your definition of knowledge. To keep this post readable, I will leave that for another post.

  33. Nick

    A quick additional point on the above post. I just found out that there is a school of thought in the philosophy of mathematics called finitism. Perhaps this is a close cousin of externalism. Personally, I find real numbers very useful.

  34. Nick, avoiding the infinite regression isn’t a matter of definition. The question involved is: in order to actually know something, do we need to have an awareness of what makes it warranted? This isn’t a question about meaning. The externalist (like me) says no.

    A reason for saying no as I do is illustrated by way of the reductio ad absurdum that I used earlier. That’s when you start by assuming that someone’s view is correct, and then show that it leads to absurdity. In this case, I argued that assuming internalism leads to the infinite regress that shows that we know nothing at all. Since this is an absurd conclusion, it counts as an argument against internalism.

    This is also the answer to your question: “Why is it better to avoid the infinite regression?” It is better to avoid it because it results in a conclusion that all of us (I would think) reject: the conclusion that we know nothing at all.

    You ask: “In your words then, what makes a scientific claim provisional?” The fact is there is a very large number of possible things that would serve to make our claims provisional. A very short list would include the following:

    * The fact that in reaching a particular conclusion, it’s possible that we’ve missed some of the relevant evidence, so we admit that possibility by saying that our claim is provisional.

    * The fact that our current technology that enables us to engage in a certain line of inquiry has limitations that might be superseded in the future, enabling us to make more accurate claims which might differ from the claims we now make.

    * The fact that in the future somebody might think up a theory that fits the facts just as well as the theory we are proposing, but it has greater explanatory power.

    * The fact that we might, unbeknown to us, have underestimated the number of tests (or the sample size) that is required to make the kind of generalisations that we have made.

    And so on. The fact that these or any other number of facts may be the case is what means that we should deem many of our scientific claims to be provisional, rather than certain declarations of knowledge.

    Again, note that a lack of 100% certainty isn’t because of the infinite regression. As an externalist, I realise that some of my beliefs could be false. Perhaps (for example) I have at times relied on a data gathering mechanism that isn’t reliable after all. Or perhaps I was mistaken in thinking that I had warrant for a belief. That’s not the same issue as whether or not the infinite regress is a problem.

  35. Nick

    Again, I think you have an assumption here that is leading your reasoning astray. You say:
    In this case, I argued that assuming internalism leads to the infinite regress that shows that we know nothing at all. Since this is an absurd conclusion, it counts as an argument against internalism.

    You have not as yet described why an infinite regress shows we know nothing at all. You have assumed this. I have given some examples of where we can work with infinites very productively, and thus this is not the barrier that you present. I am happy to give much more explicit examples here if you would like, just ask.

    To develop this a little further: the other infinite regress that I mentioned last time is one of precision. This is perhaps though another facet of the original infinite regress. That is, any statement of knowledge, or imagining of knowledge, is, by the nature of the medium in which that statement is constructed, imprecise. Language, thoughts, even something so precise as mathematics have an implicit infinite regression, as you can always ask for more precision. Even such a seemingly obvious thing as your earlier example of 1+1=2 can open Pandora’s box. What are these symbols 1 and 2, and what do they mean? What does plus mean as an operation? What is equality? I would argue that I could continue to ask further questions about any answer given to the previous questions, and hence we are back with an infinite regression.

    I have done a little skimming of some of the articles you mentioned before, and have seen one or two things that might be a bit interesting concerning relative degrees of certainty, so will probably read a little further. One thing I was immediately struck by, however, was what seem to be some pretty large assumptions that the various arguers did not seem to be aware of, or at least did not seem to be addressing. That is: the assumption that the tools with which they are posing their arguments are reliable and/or unlimited in applicability. That is, logic (or induction if you prefer) and human reasoning, senses, thought or even consciousness. There did not seem to be any consideration of these points (perhaps a bit with the sensory side), so it seemed to me rather naive (in a non-condescending way).

    This leads me to an important point relating to the infinite regress. Have you considered that this might actually be an artifact of the tools you are using? In other words, infinite regresses, contradictions, paradoxes etc. all exist in the architecture of logic itself. In fact, I think that is the true interpretation of these things.

    With the above point in mind, please reread your definitions of provisional, in particular:
    * The fact that our current technology that enables us to engage in a certain line of inquiry has limitations that might be superseded in the future, enabling us to make more accurate claims which might differ from the claims we now make.

    Does this not also apply to the technology/tools that you are using to even make your argument, in this case logic? Doesn’t this just further undermine this whole approach?

    Finally, I would say the provisional nature of things is not just defined by the infinite regress, but this infinite regress is indicating that we cannot, with the tool of logic, prove anything to 100% certainty.

  36. Nick, again, the provisional nature of claims is not caused by the infinite regress of internalism. It’s merely caused by our own humility in admitting that we might be getting it wrong. But I’ll start from the top of your post:

    1) “You have not as yet described why an infinite regress shows we know nothing at all.”

    In fact I have. I did this in comment #32.

    2) The issue of precision and improvement is not an issue of infinite regress. In fact it is not a regress at all. A regress is where we are pushed one step back. For example, if a person says that he “knows that he knows” X, we can make him move back up the chain of justification and ask him “but do you know that you know that you know X”? Gaining precision is not a regression at all. Rather it is to hone in on an existing belief with greater care or with better tools, or to examine the same data in a better way. The idea of regression doesn’t even enter the picture.

    3) I don’t understand what you are getting at when you complain about externalists “not even addressing” the starting point of sensory experience or logic. In a way, I think you’re getting externalism mixed up with internalism. Bear in mind that many externalists are reliabilists, so they intentionally do not go down the road of justifying things like using logic or taking data from sense experience. An internalist might worry about scurrying down the path of regress, justifying our trust of our senses, and then justifying that justification, and then justifying that one, etc into infinity, but there’s just no need for an externalist to do that. It’s not naive. It’s intentional, and I think, quite proper.

    4) I gave some of the many possible reasons for saying that our scientific claims should be deemed provisional. I used an example about the fact that our technology might increase in the future, enabling more accurate claims to be made. You then said: “Doesn’t this just further undermine this whole approach?”

    In fact it does not undermine it. You refer to the tools that I am using to make my argument, like logic. However logic is not an example of the kind of thing I was talking about at all. Think along the lines of a telescope that we use to gain data and make claims. In the future, there may well be much better telescopes that can improve the accuracy of those claims. That’s what I mean by “technology.” Logic is not technology. Nobody’s ever going to release, say, Logic v. 2.10.

  37. Nick

    I am on my way out, but some quick responses to your last points:

    1) No, I disagree, all you did was explain where the infinite regression lies, and then state that because of this we can know nothing.

    How would you answer the proposal that: The levels of regression relate to layers of justification, and as such provide one mechanism whereby levels of certainty can be determined. In other words, one measure of certainty of a piece of knowledge could be how many layers of justification underpin it.

    In this way, we could never be 100% certain of anything as we could never reach the bottom of the regress, but we could qualitatively compare two statements about something by comparing how many layers of justification (regression) they have. This strikes me as a very much more useful (and perhaps sophisticated) interpretation of the infinite regression than just ignoring it. This is where my questions about the utility of externalism came from.

    2) You have missed my point. I have interpreted your dislike of the infinite regression as being about the infinite part, not the regression part. I am assuming here that you would be happy with a finite regression. And thus I am using concrete examples of other, related types of infinities to demonstrate that infinities are things that we can deal with, regressions or otherwise. Also, see my point above: what I propose there is very much along the lines of interpreting the infinite regression as an aspect of precision. The more iterations of regression, the higher the precision.

    3) Sorry, I don’t think that I understand what you are saying here. If you are saying that people should not consider these issues, then haven’t you just constructed a nice little safe box to play in? If the rules of those discussions exclude questioning the rules of the discussions, I must just repeat my question and ask: Where is the utility here? Why would you restrict things in this way? What do you expect to gain from such a restricted avenue of discussion?

    4) It appears to me that you are now claiming perfect techniques/knowledge for philosophical discussions. Logic as a branch of mathematics, like other mathematics, is not a static, perfect, complete thing. And the substrate that it runs on is definitely not. If you think this, then my question is, when was logic v1.0 completed (isn’t there a rule of thumb about trusting version 1.0 products…)? We could also have a discussion about platonic realism here, but I think it doesn’t really matter if you think mathematics (and logic) is discovered, or created, my point still remains. You could still say either we have more to discover, or more room to create.

    All of which supports my earlier point: it seems that you guys are not considering these things, or worse, choosing to ignore them without seeing or caring about the limiting consequences of that choice.

  38. Nick,

    1) I didn’t just state that we can know nothing. I explained why not. I don’t see that repeating my explanation would help, so I won’t; I will just comment on your new comments.

    You say that the levels of regress could be something like levels of justification. So if we can go back one step, we have weak justification, but if we can go back, say, eight steps, then we have a stronger justification and hence more confidence.

    With respect, this fails to appreciate why the infinite regress is a problem. Let’s say we go back eight steps. How strong is that step? Well, it’s only as strong, according to the internalist, as our awareness of how that step is justified. So if we only go back eight steps, the eighth step has ZERO justification because it depends on the ninth step. So how does the eighth step back help us at all? In fact, even if we go back a thousand steps, the same result will arise, because the regress is actually an infinite one, and NO step will be justified independently of the step behind it.

    2) You say that I’ve missed your point which was all about using precision as a way of employing a potentially infinite sequence (since precision can always be improved). But your reply now has no mention of precision at all.

    My problem is not with infinity per se, because there are potential infinities to improve on things like accuracy. The problem is with an infinite regression, because it requires us to step back to a point that can, in principle, never be reached.

    3) Your comments about a nice safe box don’t really engage the issue. If you embrace internalism, which carries with it absurd demands that result in an infinite regress, while others reject it, embracing views that do not lead to such problems, it’s no use complaining about other people embracing a less complex view, as though they are somehow cheating.

    4) Yes, we have more to discover, but we are never going to revise, say, the law of non-contradiction or the law of identity.

  39. Nick

    1) Yes, if you look at each layer of justification on its own, it would not be better (all else being equal) than any other layer of justification on its own. The point is that you don’t look at it on its own. We are comparing stacks of justification, not individual layers. Consider the phrase “the weight of evidence”. Can you see how two concepts could be evaluated by comparing the weights of two respective stacks of justifications?

    This is a probabilistic argument. As such, perhaps a probabilistic example would help.

    Let’s say somebody is throwing a multi-sided dice out of your sight. You do not know how many sides this dice has, but they tell you what number they get every time they throw it. Now consider the following series of results: 1, 6, 3…. At position one (as per your reasoning), you have only one event, so no way of making a reasonable statement about the number of sides. At position 2, obviously 2-5 have been ruled out, but 6 to infinity are still in play. At this point you propose a hypothesis, being that the dice has six sides. With only two events, however, this is quite a weak hypothesis, as it would be easy to imagine that a 10 or an 18 is thrown next. Now continue the series on for another 10 digits: 5, 2, 4, 4, 2, 5, 5, 1, 6, 2. Now compare the hypothesis that you had at position two with the hypothesis proposed at position 12 that the dice has six sides. Which one is the stronger hypothesis?

    Your reasoning in point one leads only to a binary true/false capability. Given this reasoning, by what mechanism could we possibly have differing levels of certainty about something?
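
    To make the comparison concrete, here is a rough sketch (my own illustration only, treating the rolls as independent and uniform on a dice with a fixed number of sides) of how two hypotheses pull apart as the observations stack up:

        def likelihood(rolls, sides):
            # Probability of this exact sequence if the dice really has `sides` sides.
            if max(rolls) > sides:
                return 0.0  # the hypothesis is ruled out outright
            return (1.0 / sides) ** len(rolls)  # independent, uniform rolls

        after_two = [1, 6]
        after_twelve = [1, 6, 3, 5, 2, 4, 4, 2, 5, 5, 1, 6]

        for rolls in (after_two, after_twelve):
            ratio = likelihood(rolls, 6) / likelihood(rolls, 10)
            print(len(rolls), "rolls: six sides is", round(ratio, 1), "times as likely as ten sides")

    The point is only that the same hypothesis, looked at with more layers of evidence beneath it, carries more weight, without ever reaching 100% certainty.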

    2) I am arguing that you could look at a regression as just a matter of precision.

    3) I am not saying that people are cheating. In fact, I have said that I can see why you could adopt such a hypothesis so as to simplify the question, and thus learn something new. As I understand it, this type of technique is regularly used in maths and physics to make progress in a field when confronted with seemingly insurmountable barriers.

    What I am asking you is: What is your justification for doing this? What are the new things you are learning, or trying to learn by adopting this hypothesis? In short, where is the utility of this approach?

    My point here, is that from the little I have read so far, it seems that people in this area either don’t know that they are working within the constraints of an hypothesis that they have chosen, or they have forgotten this.

    4) I think you might be generating another infinite regress here. I would ask how you could justify this, and you would say it is because you have defined this as being so, a la the 100% certainty argument to avoid the infinite regress. I could then bring up an argument that Scott thoughtfully supplied for me over at Ken’s blog:

    Logic is a branch of mathematics. You could perhaps interpret Gödel’s incompleteness theorems to say that there are true things that you are unable to prove using mathematical logic. In other words, logic is incomplete. Logic itself has proved that it is incomplete. Somewhat less than a perfect unchanging tool then.

  40. Nick, the dice example is totally unlike the infinite regress. That example is not about basing a truth judgement on a belief that it is justified, based on a reason for thinking that this second belief is justified, based on a reason for thinking that this third belief is justified, and so on.

    The trouble with saying (my paraphrase) “sure you’re right when it comes to looking at each level on its own, so let’s look at them all together” is that considered together, the total is no better than the sum of its parts. I have offered reasons for saying that each part (i.e. each level in the infinite regress) is no better than the step that preceded it, and I have argued that no matter which level we stop at, the level of justification is zero. If you grant that I am correct “if you look at each layer of justification on its own,” then you should concede the conclusion: The whole process provides zero justification.

    We certainly can have degrees of certainty, but this has absolutely nothing to do with the existence of the infinite regress created by internalism.

    We can have degrees of confidence based on the amount or strength of the evidence available to us. The infinite regress does not even enter the picture. It is a non-factor, a different subject altogether.

    4) In saying that we are not going to revise the law of non-contradiction, I certainly am not “generating another infinite regress.” To merely claim that something is the case does not generate an infinite regress unless a person is an internalist who proposes the model of justification that I reject. Secondly, I have not “defined this as being so.” I have merely observed that this is so.

    The law of non-contradiction is (in English): “It’s not the case that A and not A.”

    There’s nothing incomplete about this claim as it is a necessary truth. If you believe that it is incomplete and possibly subject to revision, you will need to clearly explain how, and not by referring to a person’s name, but by going into detail.
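
    If it helps, here is a trivial illustration (my own, nothing more) of what that amounts to at the propositional level: the formula comes out true under every possible assignment, which is just what being a tautology of classical two-valued logic means.

        # "It's not the case that A and not A", checked under both truth values of A.
        for A in (True, False):
            print(A, not (A and not A))  # True in both rows: a classical tautology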

  41. Ken

    Don’t want to disturb the smooth flow of discussion between Nick and Glenn. Just want to support the concept of a logic 2.0 – especially when it comes to scientific knowledge/investigation. As Frank Wilczek has pointed out (Edge interview: http://www.edge.org/3rd_culture/wilczek09/wilczek09_index.html), “The classic structures of logic are really far from adequate to do justice to what we find in the physical world.”

    I think this gets back to Plantinga’s understanding of “warrant” – which I think is mechanical and would rule out much of scientific knowledge – knowledge we know to be extremely effective. This is the concept that our cognitive system is “designed”/“evolved” for specific cognitive functions and therefore may be operating in a different environment to the one it was “intended” for.

    We are now investigating phenomena well outside the environment in which our cognitive systems evolved. We have to accept that there may well be conflicts with our intuitive understanding of logic in such environments. But I think it is well justified for us to think that we can do this quite successfully (even though some would say such investigation would have no “warrant”).

    Well, that is the case for much of scientific research now. Yet we have ways of getting around that – although the comprehension of our results in the manner we are normally used to is extremely difficult (we often have to see things mathematically, rather than as particles or waves, for example). Consequently we are faced with surpassing a logic which says “it’s not the case that A and not A”. Quantum logic is postulating, at least at the level of logic, that it can be A and not A. Serious attempts are being made to develop computers using this logic.

  42. Nick

    Glenn, I disagree. The dice example is very like the infinite regression, for the simple reason that the layers of the infinite regression are layers of justification.

    You can think about each layer of justification as an answer to a “why would you think that” question about the previous layer. Thus, all else being equal, you can be more certain about something argued down through 100 levels of justification than you can be about something argued down through 2. The reason why this is so is the exact same reason as for the dice. You could reach a layer of justification in which there is no good answer to the why question. In other words, the whole structure is false. This possibility can be considered less likely the more layers down you go.

    This is the same as the dice. You can never be certain how many sides it has, because you can keep rolling it an infinite number of times, and there is always a chance that a 10 or 10000 pops up at some point further down in the progression. That doesn’t stop you, however, being certain in a practical sense after a reasonable sample size that the dice has six sides, just not 100% certain. This is very like knowledge in my opinion, even to the extent of showing why falsifiability works.

    But wait, it doesn’t stop there, I have yet another argument why defining uncertainty of knowledge away by definition, ala externalism, is a bad idea.

    As you have stated, this is done to avoid the infinite regress. A key question then is: how do you actually know that this is an infinite regress? Perhaps it is only an apparent infinite regress.

    This argument is similar to the prime mover arguments in that what logic has brought you is what looks like an infinite regression, but you cannot be sure that it actually is an infinite regression without traversing all the layers of the regression. This is one of the charming things about infinities. So as I see it, you cannot be certain that it is actually an infinite regression as you describe, because there are at least two other possibilities. These are:

    1) At some point, down in the layers of regression, you run into a, let’s call it… prime justification. That is, something that incontrovertibly and without needing further justification proves the previous layer. Perhaps the product of logic V8.5. NB: I am not talking about your solution-through-definition approach here; I am suggesting that perhaps there is a point of singularity in the regression, below which it is not possible to go, and thus it is finite.

    2) At some point, down in the layers of regression, you run into… the original top layer of the regression. In which case, you have something a little bit different: you have a circular progression. I have not yet really thought through all the implications of this one, but what you are left with is definitely different from an infinite regression.

  43. Ken, Wilczek’s comment is only that logic is inadequate for a full understanding of the world. That’s true, but it doesn’t suggest that logical laws like the law of noncontradiction might be rejected one day. It only means that there is more to know than such laws tell us.

    You say that the idea of warrant would rule out scientific knowledge, but I just don’t see how. You say: “We are now investigating phenomena well outside the environment in which our cognitive systems evolved.”

    That’s not important. What’s important is that when we use our belief forming faculties, we’re using them in an environment that is conducive to their reliably gathering information. Whether this is the environment in which they developed in history is not the issue.

    As far as the subject of quantum logic scenarios in which “A and not A” can be the case, you’d need some examples there.

  44. Nick

    Also, I had a little difficulty with your new security code captcha. The initial code it shows did not seem to work, but I pushed the change button a few times and then that worked. That happened again with this post, so I think that perhaps it is initially showing a false code.

  45. Nick, it’s important (inasmuch as any of these discussions are important, that is) that you realise why the dice / layers of justification example is definitely unlike the infinite regress. Let me offer another attempt, and if this one fails too, I’ll leave it there.

    Here’s how the dice example works: At the time you make the hypothesis about there being 6 sides, based on current observation it has justification of a certain strength. Then you roll again, and the number of observations increases, strengthening the level of justification. And so on, through more and more “layers” of justification. It’s very clear to both of us how this example works. At each point, if someone says “Do you currently have justification for believing your hypothesis about 6 sides?” The correct answer is “Yes, to some degree, based on knowledge we have already gained from observation.” (Take note of the word “yes,” because this will become important.)

    But that is absolutely not how the infinite regress works. It’s a difference not merely in degree, but in kind. Here’s how the infinite regress works – I will use the labels k1, k2, k3 and so on as used earlier.

    I hold a belief k1. If someone says “do you have justification for k1?”, here I cannot say yes. All I can say is “Only if I have some justification for k2. Otherwise I have none at all.” If they then ask “Do you have justification for k2?” I must say “Only if I have some justification for k3. Otherwise I have none at all.”

    And so on. So the dice example and the idea of levels of justification are not cases like the infinite regress. They are entirely separate issues.

  46. Nick

    I disagree again. Firstly, with the dice example, perhaps the temporal difference of the two hypotheses has thrown you. Consider then two different hypotheses that are proposed even before the dice is thrown. 1) The dice has six sides. 2) The dice has 10 sides. As you move through the progression, the relative probability of these statements being true diverges, and thus they can be compared from a probabilistic angle, and a preferred hypothesis selected.

    In your last post, you say:
    “I hold a belief k1. If someone says ‘do you have justification for k1?’, here I cannot say yes. All I can say is ‘Only if I have some justification for k2. Otherwise I have none at all.’ If they then ask ‘Do you have justification for k2?’ I must say ‘Only if I have some justification for k3. Otherwise I have none at all.’”
    Now, this is a little different to how you stated this earlier in the thread as:
    Let k1 = the knowledge that X is the case.
    Let k2 = the knowledge that I have k1.
    Let k3 = the knowledge that I have k2
    You seem to have reversed the direction of the regression, and inserted the word “only”. In my opinion, your insertion of the word “only” in the previous quote is just another way of stating your conclusion that you can know nothing. In other words, you have loaded the question to reach your conclusion. Not too good from my perspective.

    Further to your wording in the last post, if somebody asks me if I have justification for k1, of course I can say yes. My justification is k2, and so on down the line… It seems to me that, under your reasoning, strength of evidence can play no role, and that there is no possible way to compare two uncertain hypotheses. This comes back to my argument for utility. The utility of your reasoning seems to me much lower than the utility of the alternative approach.

  47. Nick

    I sense that you are getting a little tired of this discussion, Glenn, so perhaps I could make a bit of a summary of my position for the sake of clarity.

    The essential point of contention is related to the infinite regression you find when justifying knowledge, in your words:

    Let k1 = the knowledge that X is the case.
    Let k2 = the knowledge that I have k1.
    Let k3 = the knowledge that I have k2

    You are unhappy with the infinite regression, so have by definition removed it. In your words:
    The whole point of externalism is that it rejects the need to enter into that sort of regress. I think that in order to know X, we simply need to know X. Internalism, by contrast, says that in order to know X, we need to know that we know X. Of course, we then need to know that we know that we know that we know (etc), into infinity. This is why I reject internalism.

    This amounts to a claim of 100% certainty that we know things (in this case X). I think this is an extraordinary claim, and as such requires extraordinary justification. In this case, I would argue that you need to have a 100% certain justification to support it. To undermine this claim therefore, only requires the injection of some level of doubt.
    I consider that I presented some very real grounds for doubt. I will try and summarise the main areas for doubt below.

    1) What is wrong with infinity anyway? Infinities do not mean that we know nothing; they just mean that we cannot be 100% certain. We can, and do, work productively with infinities in many different ways. In the case of this particular infinite regression, we can consider the number of layers of regression as a weight of evidence, and use this (all else being equal) to compare hypotheses in a qualitative way.

    2) Are you sure that this is actually an infinite regression? Perhaps at some point in the layers of justification we run into a prime justification, that for whatever reason doesn’t require further justification, and then this is actually finite. Or, we could run into the original hypothesis again, in which case we have a circular justification. You cannot be sure that this is not the case without traversing all the layers of justification.

    3) The argument from utility. Cutting off the infinite regression by definition is just a less useful approach to take. Treating everything as having a level of uncertainty avoids the trap of incorrect 100% certain beliefs, at no cost above having 100% certain beliefs. Thus, 100% certainty is less useful.

    4) The argument from precision. The infinite regression found with the justification of knowledge is not the only infinity here. There is also a question of precision. How can something be 100% certain without it being 100% precise? I would argue that it might not even be possible to be 100% precise using language, human thoughts or even mathematics. This could however in some ways be considered as another aspect of the justification infinite regress.

    5) Are the tools with which you make your argument 100% reliable? This is an argument made with logic, using the human consciousness and communicated using human languages. All of these are arguably incomplete. In the case of the human consciousness, modern research is unearthing large amounts of evidence about the biases and blind spots that we as a species have. One of the most relevant blind spots here is that our instinctive senses of probability are very skewed.

    6) Perhaps the infinite regression is just an artifact of the limitations of the logic used. When we see infinities within maths, and logic as a part of maths, you could argue that these are not aspects of reality, and are just limitations inherent in the actual architecture of maths. Thus: The infinite regression is not actual, but just a limitation of our toolset that could possibly be resolved using another method. This is perhaps really part of 5, but I have split it out for clarity.

    Please note, that I am not necessarily saying that I believe all of these arguments are true, I am just saying that they can be made logically, and as such open up doubt on your position.

    All of this aside though, I would again finally suggest to you, that if you could provide an example of the utility of your position, then I would have some interest. As it stands, I don’t see any grounds for it, and no use for it either. In short, it has nothing to offer.

  48. Nick, you’re right that I am a little tired of it, so I will comment on only one thing, and then leave it. Others can assess how well your other comments stack up, but I think I’ve said enough to address them.

    This amounts to a claim of 100% certainty that we know things (in this case X). I think this is an extraordinary claim, and as such requires extraordinary justification. In this case, I would argue that you need to have a 100% certain justification to support it. To undermine this claim therefore, only requires the injection of some level of doubt.

    This is wrong. All I am claiming with 100% certainty is the claim that we have knowledge, as you note. I am not claiming that when we know something (i.e. when an epistemic state of affairs is met), we will experience 100% certainty that we know (i.e. a psychological state of affairs will obtain). I’m not an internalist, after all. Therefore I am not committed to the claim that we need a “100% justification” for any given belief (I’m just using your terms to keep things simple). At no point have I ever said that knowledge results in certainty. For that reason I can happily reject the objection, even if you think you’ve introduced a small degree of doubt about any belief.

    I think I’ve said enough about the other points raised.

  49. Nick

    I am happy also to leave the discussion there. Just some final words about my motivations in engaging in this discussion.

    The discussion that occurred previously on Ken’s blog had a very high emotional content that I thought was obscuring some of the issues. Fortunately there’s none of that here. With one of your final comments in that discussion you mentioned epistemic externalism as a way of being sure (which I read as 100% sure, i.e. certain) that we have knowledge, or can know something.

    Having no direct experience of science or philosophy I was interested in how this idea worked, particularly in terms of its utility to us in gaining knowledge.

    Along the way I have read a few interesting things that could speak to evaluating the place of intuition in the formation/gathering of knowledge. This is one of my personal interests.

    However, also along the way, I discovered a key point of disagreement with you about the way to interpret this infinite regress in the justification of knowledge.

    With your latest phrasing of this infinite regression, you have included the word only, and used it in such a way as to effectively disallow evidence. I.e. you will accept k1 only if you have some justification for k2. If we accepted this, then I think that we have no way of comparing levels of certainty between hypotheses etc., and as such, I have no choice but to agree with Ken that this is an anti-scientific stance. I am sorry if this offends you, but that is my reading of your position.

    PS: That new security code system you have does not work at all well; it takes many attempts to get the posts through.

  50. Heraclides

    The first portion of your article is essentially a straight-forward personal attack.

    In this you’ve misrepresented my position and what I wrote several times, very badly. (The use of loaded adjectives should be a clue to readers, ‘zealous’, etc.) I did try to explain these specific things that I wrote to you some time ago, so there is little need for it here, really.

    If you really just wanted to explain what you meant, you could have done just that, without ranting about someone, let alone leaving out material that explains why they wrote what they did.

    I tried as delicately as I could to explain to you that a problem you seem to have in writing to others is not considering that they can only read your posts at face value. In particular, if you are intending a narrow, niche meaning for a phrase that has a different well-established meaning, people will naturally use the common meaning unless otherwise told. This really is up to you to make clear, as it is unrealistic to ask people to second-guess possible meanings for every phrase you write! 😉

    At no point did I assert that I was an expert in philosophy, as you imply in your article above. In fact at several points I said explicitly that I was not. I did say that I knew what a scientific theory is and that I understood how science works.

    I also explained quite carefully why I wrote “Only a religious person would write “knowledge is warranted true belief””, but you haven’t bothered to inform your readers of the reason, in effect misrepresenting me. Others got my explanation the first time I presented it, seemingly with little effort, so I find it odd that you are still complaining about this. (I would guess that the main reason that you are throwing all this abuse at me is that you feel that I was “attacking” religious people, when in fact I was not.)

    It is quite unfair to represent me out of context, without including my explanations, and furthermore not considering that your writing had a (large) part to play in it, as I tried to let you understand. You do have a habit of assuming that people will “just know” what you mean, when they have no way of knowing without you explaining yourself. There is little point in getting angry at a reader for that.

    I haven’t time to read the explanation part of your article (nor all the comments) but given your initial words, I think I can be excused for hardly wanting to!

    I would appreciate an apology, the initial portion of this article is unnecessary, inaccurate and quite out of line.

    PS: I haven’t read the comments, except the last few. It strikes me that the conversation seems to have ended on points that are similar to ones that I raised myself, which makes me doubly wonder why all the abuse was hurled at me.

    • Heraclides, I’m quite comfortable with the way that I have represented your comments.

  51. Ken

    Glenn – No, Wilczek’s comment is not about the inadequacy of logic alone. He’s pointing out that classical logic doesn’t measure up to the task in modern physics.

    It’s not a matter of one day rejecting classical logic – rather extending it from the common sense world we evolved in to the world of the extremely small, extremely fast, extremely massive, etc., which we face in modern physics.

    I am pleased your interpretation of “warrant” doesn’t exclude the attempts to understand reality well outside the sort of world we live in and evolved in. This is what we confront when we have to consider things like the origin of the universe, or even the extremely small. It’s just that my reading of Plantinga leaves that interpretation possible. I actually think there’s a can of worms there.

    Quantum logic. Whereas today’s computers operate with 0 and 1 states, quantum computers will have the extra state of both 0 and 1 – at the same time. This enables a huge leap in computing power. However, it relies on creating such states with systems large enough to use for practical computing. So far this has only been done at the molecular scale. But, I understand, the experts see great possibilities.
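
    For what it’s worth, here is a small numerical sketch (only the standard textbook picture, not anything specific to the machines being built) of how that “0 and 1 at the same time” talk is usually cashed out: the qubit carries amplitudes for both outcomes at once, though any single measurement still returns exactly one of them.

        import numpy as np

        ket0 = np.array([1.0, 0.0])
        ket1 = np.array([0.0, 1.0])
        psi = (ket0 + ket1) / np.sqrt(2)  # equal superposition of 0 and 1

        probs = np.abs(psi) ** 2          # Born rule: probability of each outcome
        print(probs)                      # [0.5 0.5]

        rng = np.random.default_rng(0)
        print(rng.choice([0, 1], p=probs))  # one definite result per measurement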

    What I am saying is that our everyday, common sense logic breaks down in some situations and therefore has to be extended (made more powerful). In the same way, Newtonian mechanics was extended by (and became a limiting case of) Einsteinian mechanics. This is what also happened with the jelly beans when we considered matter which doesn’t interact with EM radiation.

    • One more thing Ken – if your comment doesn’t appear right after you comment, it’s awaiting moderation and will appear after I’ve checked it. Most comments will appear right away. If you like I can delete the earlier version of the same comment.

  53. Ken, this could merely be my own lack of creativity, but I just don’t know what it even means to talk about “extending,” say, the law of non-contradiction. Could you elaborate, perhaps with an example?

  54. Ken

    Simply that our common logic, intuition, etc. arise from our existence in, and the evolution of our species in, what we consider the common sense world, where things move at common sense speeds, have common sense sizes and masses, etc. But this is only a limiting case of a vastly greater reality. When we probe that greater reality we often find our common sense logic and common sense intuitions don’t apply.

    While the Schroedinger’s cat thought experiment was originally proposed as a way of ridiculing some interpretations of quantum mechanics, it does demonstrate the difficulty we have of conceptualising that vaster reality.

    And the thing is that much of that vaster reality is now part of our common sense reality, even though cognitively we have to find new ways of dealing with it. Our modern day chemistry and other technology is underpinned by it.

    Quantum computing logic and Schroedinger’s cat are just two examples. Everybody quotes the 2 slit experiment. There are phenomena like quantum tunnelling in physics and chemistry, atomic and molecular electronic orbitals, etc., etc.

  55. Heraclides

    They are misrepresentations Glenn. I explain why I wrote what I did but you have presented them as having been written for other reasons.

    Any one who is a decent person can verify this for themselves.

    Please apologise and remove your inaccurate and misleading statements about me.

  56. Ken, I just can’t see you giving an example of how the law of non-contradiction could be extended. I’m not just being obstinate in saying this – I re-read your comment several times and I still can’t see it.

  57. Ken

    Well, Glenn, I have.

    Do you not understand what I mean by the 2 slit experiment or Schroedinger’s cat?

    Is the problem that you have trouble with QM (don’t we all)?

    However, I think you will appreciate that I can’t give an introductory course in the comments section of a blog.

    Just accept that it shows that an electron (say) can be in position A and position B, or in position A and not in position A, at the same time.

    And that you use technology every day based on that logic.

    (By the way – can’t you change the background colour of your code – it’s almost impossible to read).

  58. Heraclides, I am not interested in arguing about that with you. I have re-read my post and your complaint, and I think that your complaint lacks merit. I correctly reproduced what you said (verbatim), and I correctly relayed my impression of your approach (see where I said “as though”). I believe that I have fairly portrayed what you were getting at, but if there’s doubt, I gave a link so that readers can go and check for themselves. That’s about as fair as I can be.

    If you are not satisfied with this you may contact me privately.

  59. Ken, I’m familiar in very basic terms with the double slit experiment, but it definitely has nothing to say about the law of non-contradiction being extended. I’m assuming then that you think that the Schroedinger’s cat example carries the full weight of the explanation.

    However, Schroedinger presented his example precisely because he recognised that there was an interpretation of quantum mechanics out there (namely the Copenhagen interpretation) which resulted in a contradiction, and he wanted to show that this was so, so that he could get people to reject that model. Granted, the following quote is from a Wikipedia article, but it’s a fair summary: “Schrödinger did not wish to promote the idea of dead-and-alive cats as a serious possibility; quite the reverse. The thought experiment serves to illustrate the bizarreness of quantum mechanics and the mathematics necessary to describe quantum states.”

    None of this implies that the law of non-contradiction might be incomplete or that it could be altered or extended.

    Now, I don’t think that either outcome of this very specific issue has implications for the subject of the blog post, but I really can’t see a case for saying that laws of logic might be “extended,” if only because there’s no coherent explanation of what that actually means.

    By the way, if the contrast between the code and the background was too great it could be read by some spam bots. It’s supposed to require human effort 🙂

  60. Ken

    I imagine that the Wikipedia articles didn’t use the term “law of non-contradiction.” However, these experiments do show that classical logic is not up to the task when it comes to considering reality at this level. Hence the Wikipedia quote about the bizarreness of QM and the necessity of using mathematics to deal with quantum states.

    Specifically, the double slit experiment shows the need for a logic which considers that an electron (or photon, or whatever) can be here and not here at the same time. The particle/wave duality is a manifestation of that. Similarly with quantum tunnelling. If classical logic applied in the latter case, the sun would not shine and very many chemical reactions would not occur.

    That is the nature of scientific knowledge. We must always be prepared to go beyond old conceptions and ways of thinking – go where the evidence indicates.

    This is why I raised the problem of Plantinga’s definitions of “warrant.” To me they encourage investigators to limit their possible outlook, because of the requirement to have the appropriate cognitive apparatus (we don’t have this for the “quantum world”) and to use it in the environment for which it was “designed” (evolved). We did not evolve in the “quantum world.”

    I suspect your difficulty in understanding what I mean by extending our logic beyond the classical (as Frank Wilczek was pointing out is necessary) is connected to your acceptance of Plantinga’s mechanical presentation of the concept of knowledge.

    Regarding the code – I can understand why, now. What I can’t understand is why you want that control. It does put commenters off and I would think that is the last thing you would want. My experience is that the normal spam software catches spam bots in almost every case.

  61. [I’ve edited this comment of mine, as the original was needlessly impatient and therefore unkind. My apologies for this.]

    Ken, I don’t imagine you’ll like me saying so, but I think you just misunderstand the double slit experiment if you think that it literally shows that a photon can be there and yet in the same sense not be there at the same time. It doesn’t do this, yet you’ve become convinced that it does.

    Also (and doubtless you will like this no more than the previous comment), you are still wrong in your description of Plantinga’s idea of warrant as “mechanical,” and I consider that you’re using the term only for rhetorical effect. You also mistake me if you think that it is on account of my notion of warrant that I deny that the law of non-contradiction can be extended. In fact the two issues are barely related at all.

    In all honesty, I don’t see much constructive purpose in continuing to re-visit the same issues with you, so I am just going to leave it there.

    Regarding the code – I made the change because the normal spam filter seemed to target you and only you for some reason (apart from the typical spam/porn bots), preventing you from commenting. You were the only person claiming that they were being prevented from commenting (although I’m inclined to think you were pressing “preview” instead of “post/comment”). Be flattered at the change!

  62. Gene

    Glenn,
    Totally appreciate the Nuts and Bolts. I wish more people would take time to read this.

  63. Jon

    Glenn, maybe you can help me out here.

    I know that Plantinga doesn’t hold that only true beliefs can have warrant. Some false beliefs can be warranted. But only true beliefs can have warrant sufficient for knowledge (let’s say k-warrant).

    With this in mind, why say that knowledge is Warranted *True* Belief? If a belief has k-warrant, wouldn’t it have the property of being true simply by default? It seems that if the belief in question couldn’t have k-warrant if false, then a better candidate for knowledge would be something like K-Warranted Belief, or Fully Warranted Belief, since it’s not enough for the belief to be *merely* warranted. But then if we say K-Warranted True Belief, it’s almost a waste of words, since to have k-warrant is a property that only true beliefs can have in the first place.

    This may be a minor quibble, but it does seem to make the “True” aspect of Warranted True Belief a bit superfluous. I don’t know.

    Thoughts?
