• InPitzotl
    880
    If math is discovered (knowledge), the B (belief) in the JTB definition of knowledge is an error.
    Agent Smith
    If math is invented (knowledge), the T (true) in the JTB definition of knowledge loses significance.
    Agent Smith
    I don't see how either of those things follow.

    If I discover a new route to work, I believe it to be a new route to work. Not even remotely a contradiction there, by any stretch I can imagine.

    If I invent a new way to clean the snow off my roof, then it can certainly be true that that method can clean the snow off my roof. Again, not even remotely a contradiction there, by any stretch I can imagine.
  • Isaac
    10.3k
    In the ordinary sense (of folk language games, of the type we would play when we say "the table is solid"), "R murdered W" can only be true if the state of affairs is such that R murdered/killed W, regardless of what a community of peers agrees on.
    InPitzotl

    Yeah, this is the point I thought you were making. If I agreed with it, I wouldn't be making the point I'm making, would I?

    they are more critically points being made, with said points challenging some previously made point
    InPitzotl

    But there's no challenge. You're just repeating a basic correspondence theory of truth. I don't hold to such a theory. As I said, you can either shake your head in disbelief or discuss the reasons why you hold to a correspondence theory, but as yet, all you've done is simply declare it to be the case as if I might have somehow missed the concept.

    I've heard nothing here challenging the notion of sufficient to warrant belief.
    InPitzotl

    The aim is not to challenge sufficiency of warrant, it's to say that it is, on occasion, no different from a pragmatic notion of truth, or a deflationary notion of truth.

    if PO and PS have different truth values, they cannot possibly be applying the same truth criteria. Since they have different truth criteria, they cannot possibly be the same proposition. So all that really follows is that a particular sentence can express different propositions in different contexts.
    InPitzotl

    Again, this simply assumes a theory of truth (here a coherentist theory it seems). I don't hold to those theories of truth. Truth is not, for me, a property of propositions at all.
  • creativesoul
    11.9k
    In response to the OP...

    The cloth was not a cow. The farmer believed the cloth was a cow. All Gettier problems are accounting malpractices of another's belief. Plain and simple. All of them.
  • InPitzotl
    880
    Yeah, this is the point I thought you were making.
    Isaac
    You don't even understand the point (I can say that with the hindsight of reading your entire post... oh boy, is it broken). The point was an extension of this post:
    Again, we're talking about an actual word here that people use in real language games. — Isaac
    Regarding that, Dr. Richard Kimble did not murder his wife
    InPitzotl
    ...your claim here is about how people use "an actual word" in real language games (referring to 3, the T condition of JTB). Here I'm offering rules of fiction as examples of how people use actual words in language games. By the way, this is the third indicator of such I've given in these interchanges.
    If I agreed with it, I wouldn't be making the point I'm making, would I?
    Isaac
    But you said this: "What you've said doesn't seem related to what I'm arguing in the slightest"

    they are more critically points being made, with said points challenging some previously made point — InPitzotl
    ...
    But there's no challenge.
    Isaac
    Nope, not going to even start debating the meaning of the word "challenge" here. Just pretend I invented a shade of meaning of "challenge" relevant to your charge: "What you've said doesn't seem related to what I'm arguing in the slightest". Pretend it means related to what you're arguing in the slightest in just the right way such that if you agreed to it, you wouldn't be making the point you're making.
    You're just repeating a basic correspondence theory of truth.
    Isaac
    Nope (see below).
    I don't hold to such a theory.
    Isaac
    That's irrelevant to your claim: "we're talking about an actual word here that people use in real language games".
    or discuss the reasons why you hold to a correspondence theory
    Isaac
    Why I hold to this (but see below) theory is irrelevant here, because it's not the topic here. The topic here is "we're talking about an actual word here that people use in real language games".
    but as yet, all you've done is simply declare it to be the case
    Isaac
    No, I have offered how people use the word in fictive language games as an indicator of how they use the actual word in real language games. Also, this is the third indicator I have offered for how people use the word.
    as if I might have somehow missed the concept.
    Isaac
    Well, you are missing any support for something you keep claiming outside of 100% horse grade pretense, and any semblance whatsoever of any sort of falsifiability condition.
    Again, this simply assumes a theory of truth
    Isaac
    As advertised, all it's doing is addressing a particular non-trivial reading of this:
    I'm arguing that both 'know' and 'true' have different meanings in different contexts and as such JTB has no special claim to be a definition of 'knowledge'.
    Isaac
    ...and that's still ambiguous out the wazoo (and I directly invited you to rephrase it). The reading it addresses is one where your "This table is solid" being true in one sense and false in another "as such" suggests anything at all about the JTB theory of knowledge. Assuming JTB for this purpose is not problematic.

    Incidentally:
    or discuss the reasons why you hold to a correspondence theory
    Isaac
    here a coherentist theory it seems
    Isaac
    In terms of what I've described, there's no difference between these two views. So why do you think it's a correspondence theory in the first quote and then suddenly a coherentist theory in another?
  • Agent Smith
    9.5k
    JTB theory of knowledge:

    S (a person) knows P (a proposition) iff

    1. S believes P
    2. P is true
    3. P is justified

    When (1) S believes P it means, for S, P is true or S thinks P is true.

    The JTB theory of knowledge:

    S knows P iff
    1. S thinks P is true
    2. P is true
    3. P is justified

    The content of belief is propositional for the simple reason that only propositions can be true.
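
    In standard epistemic-logic shorthand (one conventional way to symbolize this schema, with $K_S$, $B_S$, and $J_S$ read as "S knows", "S believes", and "S is justified in believing"; the notation is a gloss on the list above, not part of the original definition):

    $$ K_S(P) \;\leftrightarrow\; B_S(P) \,\wedge\, P \,\wedge\, J_S(P) $$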
  • invizzy
    149
    Is it people's intuition that the Gettier problem is solved if we are relativists regarding what is true?

    What I mean by that is that while I think we converge on what 'truth' is (stuff to do with the objective world out there and so on) we don't necessarily converge on what 'true' is. It seems to me that 'true' smuggles in the fact that the speaker might be wrong in a way that 'truth' doesn't.
    I think that is relevant for the idea that knowledge is JTB. That stands for 'justified true belief' not 'justified truth belief' after all!

    Is the cost of this sort of idea too high? I wouldn't want to be accused of a certain sort of relativism where there is no truth for instance.
  • Ludwig V
    1.7k
    Excuse me joining this so late in the day. I wasn't a member nine months ago.

    I certainly don't agree that the Gettier problem is solved by relativism about truth. I think that if relativism is true (which I don't accept) then the concept of knowledge is meaningless.

    Gettier creates the problem by offering a justification based on a false belief. Which seems to me not a justification for anything - even if it is a reasonable belief. He combines this with a story that provides a truth-condition for the proposed knowledge quite independent of the justification. The result is a set of conditions that escape the definition. The story is not catered for by the definition. The mistake is to try and classify it as knowledge or not.

    I think what I have said falls under the slogan "no false lemmas".
  • Sam26
    2.7k
    The Gettier problems conflate two things, viz., the difference between a claim to know and the definition of knowledge. Because one believes that they know X, it doesn't follow that they do know it. It must be demonstrated that they do indeed know. All Gettier problems are mistaken knowledge claims. Why do you think the phrase "I thought I knew" (Wittgenstein) has a use? It's because we are often mistaken. The problem isn't with the definition of knowledge as JTB, it's with the claims people make. It's as if Gettier is saying that my claim, based on what I believe is JTB, is the same as its being JTB. It's not. I find it strange that so many philosophers think that Gettier's examples actually tell us something important about the failure of JTB.
  • Ludwig V
    1.7k
    I agree with you that it is strange that so many people think that Gettier is important. But somehow the problem gets under one's skin. I think it's because the solution seems so simple, but then turns out to be so hard to pin down. I agree with you that believing that one knows is not sufficient for knowledge. It has to be endorsed by someone else. That's the effect of the T clause in the JTB. But Gettier doesn't claim that his characters know. On the contrary, he claims that they have a justified true belief and not knowledge. That's the point.

    And it's a feature of the stories that the main character doesn't know the full circumstances; I assume that is because if the main character knew the full circumstances, they would immediately recognize that their justification is not a justification and would then not even believe.

    So although I agree with your conclusion, I don't agree with your diagnosis. Sorry.
  • Sam26
    2.7k
    Gettier doesn't claim that his characters know. On the contrary, he claims that they have a justified true belief and not knowledge. That's the point.
    Ludwig V

    I understand that Gettier is saying that they don't know, but those who are having the experience of seeing X claim they know. Gettier is saying they don't know, based on his examination of JTB. Again, Gettier is confused. You can't infer from someone's claim that they know, that they do indeed know, and that's what Gettier is doing. He's saying, see, they're using JTB and it failed to give knowledge. He's conflating one's claim to knowledge with actually having knowledge. There's nothing difficult about this. That's my point.
  • Srap Tasmaner
    5k
    The mistake is to try and classify it as knowledge or not.
    Ludwig V

    The one thing everyone agrees on is that there is no knowledge here, so I wonder why you think there's a problem saying there is or isn't.

    "No false lemmas" is discussed even on the Wikipedia page for the Gettier problem, in a section that begins with this amusing banner:

    This article needs additional citations for verification. (October 2021)

    Also on the SEP, which says

    However, this “no false lemmas” proposal is not successful in general.
    SEP

    That's at least some places to start if you're sympathetic to the "no false lemmas" response.

    He's saying, see, they're using JTB and it failed to give knowledge. He's conflating one's claim to knowledge with actually having knowledge.
    Sam26

    This is not even in the ballpark of the Gettier problem.


    For completeness, here's the IEP page on the Gettier problem.
  • Andrew M
    1.6k
    However, this “no false lemmas” proposal is not successful in general.
    — SEP

    That's at least some places to start if you're sympathetic to the "no false lemmas" response.
    Srap Tasmaner

    I regard the "no false lemmas" condition as essentially correct. The criticisms are really around what counts as a lemma. But if one is to construct a Gettier case, it requires the failed knowledge to depend in some relevant way on something false.

    To take the robot dog example from SEP, James thought he observed a dog and consequently concluded that there is a dog in the field. His conclusion was correct, and justified, but not knowledge because the lemma that he observed a dog was false. He in fact didn't.

    The SEP analysis frames it as observing an "apparent" dog, which is why they think it's a counterexample to the "no false lemmas" condition.
  • Srap Tasmaner
    5k


    If justification and truth run on separate tracks, then justification can sometimes lead, quite reasonably, to falsehood, just as we can sometimes hold true beliefs by luck. (Lotteries provide the clearest examples for both: you can pick the winning number, without justification, and you can only be justified in believing that you didn't, given the odds, but you can't know it.)
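
    For concreteness (the numbers are illustrative, not taken from the post): in a fair lottery with $N$ tickets, the probability that a given ticket loses is

    $$ \Pr(\text{lose}) = 1 - \tfrac{1}{N}, $$

    which for $N = 10^{7}$ is $0.9999999$: overwhelming odds, and so seemingly ample justification for believing you lost, yet still compatible with holding the winning ticket.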

    "No false lemmas," by stipulating their conjunction, doesn't really address the main issue: either the true, justified lemma is knowledge, or it should face a Gettier case of its own — that is, you will be lucky that your premise is true. (If it's knowledge, then we've taken a step toward Williamson's E = K, the idea that rational beliefs are based on knowledge; but to claim that knowledge must be based on knowledge is either empty — because of course we'll take valid inference to be knowledge-preserving — or circular. If there's a third option, it's pretty subtle, but maybe there is.)

    I'll admit, though, that it does seem to help. In Russell's example, checking the time from a clock that's stopped, had you looked a minute earlier or later, you would have formed a false belief, so you were lucky to have looked when you did. Now suppose that the clock was working and had the correct time, but stopped right after you looked; now I think we want to say you do have knowledge even though a minute later you would have formed a false belief. You were genuinely lucky in looking while the clock was still such that it was knowledge imparting.

    So what's changed? If you look a minute later, we're exactly in Russell's scenario; a minute earlier, and you're fine. What if we compress things: suppose the clock stopped this time yesterday, briefly surges into life as you approach, just long enough to tell the right time for a minute or so, and then fails again. Now your window of luck is a range of a minute or so — too early or too late is still Russell, but for a brief span, the clock is knowledge imparting. Does that sound right? It sounds a bit dodgier now; you have been nearly as lucky as in Russell's scenario. The clock starting again feels wrong; had it started a minute earlier it would carry on being ahead until it failed, later and it would remain behind. What's missing is the clock actually being set; if a worker had just gotten the clock to work, and set the right time, you would again be acceptably lucky to look while it's keeping the correct time, even if it only did so for a minute before the worker cursed and set to work again.

    To say that the clock has been set properly is to say that the time it displays is not only true, but justified, I suppose. But we can keep pushing the problem of luck back into these ceteris paribus conditions, which will grow without bound. Was the worker going by his own watch? What if his watch only happened to have the correct time? We're either going to continue demanding that truth and justification stay conjoined, or we're going to allow them to separate at some point, and that's the point at which Gettier will take hold.

    Perhaps though what we're seeing here is that Gettier is the inevitable result of treating beliefs as atomic, and that the revenge cases are indicating that our beliefs never confront reality singly but as a whole, the Quinean view, I guess.
  • creativesoul
    11.9k
    Beliefs are not equivalent to P. <-------That's a basic problem underwriting Gettier problems. The 'logical' rules of entailment are another. Treating beliefs as though they are 'naked' propositions with no speaker is yet another.
  • Ludwig V
    1.7k
    The one thing everyone agrees on is that there is no knowledge here, so I wonder why you think there's a problem saying there is or isn't.

    I'm sorry I wasn't very clear. Some people think that there is no knowledge in Gettier cases, but that there is justified true belief. Hence they conclude that the JTB definition is inadequate. Others, like me, think that the JTB is correct, (subject to some caveats). They think that if there is no knowledge, there cannot be justified true belief. The question comes down to whether the main character's belief is justified or not; the stories create situations in which it isn't possible to give a straight answer. Or that's my view.

    I was aware that not everyone agrees with "no false lemmas". I confess that I don't know what the full definition of a lemma would be so I'm not in a position to argue with them. For the sake of brevity, I ignored them. The "apparent dog" is not an impressive counter-example. An apparent dog is not a dog. One might argue that a robot dog is a kind of dog, but that would blow the point of the story, so we don't need to worry about that.
  • Srap Tasmaner
    5k
    The question comes down to whether the main character's belief is justified or not; the stories create situations in which it isn't possible to give a straight answer.
    Ludwig V

    I think that's a pretty common reaction. "No false lemmas" can itself be taken as meaning that the belief in question wasn't really knowledge because it wasn't really justified, or as a fourth condition, separate from justification.

    I find the whole approach suspect, as I think justification belongs with rational belief formation, where it's perfectly natural to consider the support offered by evidence as probabilistic, and the beliefs derived as partial. That leaves knowledge nowhere (as some would have it) or as a separate mental state, not belief that's really really justified.

    But I'm open to argument that JTB-NFL can be made to work.
  • Andrew M
    1.6k
    If justification and truth run on separate tracks, then justification can sometimes lead, quite reasonably, to falsehood, just as we can sometimes hold true beliefs by luck. (Lotteries provide the clearest examples for both: you can pick the winning number, without justification, and you can only be justified in believing that you didn't, given the odds, but you can't know it.)
    Srap Tasmaner

    Yes.

    "No false lemmas," by stipulating their conjunction, doesn't really address the main issue: either the true, justified lemma is knowledge, or it should face a Gettier case of its own — that is, you will be lucky that your premise is true. (If it's knowledge, then we've taken a step toward Williamson's E = K, the idea that rational beliefs are based on knowledge; but to claim that knowledge must be based on knowledge is either empty — because of course we'll take valid inference to be knowledge-preserving — or circular. If there's a third option, it's pretty subtle, but maybe there is.)Srap Tasmaner

    I think it's a reasonable view that the lemma be knowledge (which admittedly is a higher standard than simply truth), but it does need to be contextualized. That is, what counts as knowledge depends on the relevant standard in the particular context.

    So the lottery example makes a particular set of possibilities salient. The belief that one will lose is justified in one sense (i.e., highly likely to be correct), but not another (i.e., it's not a valid inference). But it's worth noting that there are other less obvious ways things can go wrong (or right). Maybe Alice bought all the tickets, but then the lottery was cancelled. Or maybe Bob bribes someone and "wins" on that basis.

    It's like the coin flip that purportedly has 50/50 odds of heads or tails, but instead lands on its edge.

    I'll admit, though, that it does seem to help. In Russell's example, checking the time from a clock that's stopped, had you looked a minute earlier or later, you would have formed a false belief, so you were lucky to have looked when you did. Now suppose that the clock was working and had the correct time, but stopped right after you looked; now I think we want to say you do have knowledge even though a minute later you would have formed a false belief. You were genuinely lucky in looking while the clock was still such that it was knowledge imparting.
    Srap Tasmaner

    Yes, that's exactly the issue as I see it. More below.

    So what's changed? If you look a minute later, we're exactly in Russell's scenario; a minute earlier, and you're fine. What if we compress things: suppose the clock stopped this time yesterday, briefly surges into life as you approach, just long enough to tell the right time for a minute or so, and then fails again. Now your window of luck is a range of a minute or so — too early or too late is still Russell, but for a brief span, the clock is knowledge imparting. Does that sound right? It sounds a bit dodgier now; you have been nearly as lucky as in Russell's scenario. The clock starting again feels wrong; had it started a minute earlier it would carry on being ahead until it failed, later and it would remain behind. What's missing is the clock actually being set; if a worker had just gotten the clock to work, and set the right time, you would again be acceptably lucky to look while it's keeping the correct time, even if it only did so for a minute before the worker cursed and set to work again.

    To say that the clock has been set properly is to say that the time it displays is not only true, but justified, I suppose. But we can keep pushing the problem of luck back into these ceteris paribus conditions, which will grow without bound. Was the worker going by his own watch? What if his watch only happened to have the correct time? We're either going to continue demanding that truth and justification stay conjoined, or we're going to allow them to separate at some point, and that's the point at which Gettier will take hold.

    Perhaps though what we're seeing here is that Gettier is the inevitable result of treating beliefs as atomic, and that the revenge cases are indicating that our beliefs never confront reality singly but as a whole, the Quinean view, I guess.
    Srap Tasmaner

    Yes. So I think what is important here is context. In one sense every part of the world connects to every other part however you carve it up, even if only indirectly. But it's not very useful that we should have to account for everything in order to know anything at all. So we take a slice that is, in some practical and reasonable sense, separable and that is what justification applies to. That might mean keeping all possibilities of a lottery together as inseparable (since they are salient), but not all the ways a clock can go wrong. However, if one focuses on that aspect, as you have done above, then the boundaries of what counts may move or be contestable, at least for the moment. But they will probably move back again when one's focus changes to something else. Consider the coin toss example above. We don't want to miss the forest for the trees by over-analyzing it. (Though experience also counts here - we don't want to continually have financial crises, wars and pandemics and have everyone always say, "Well, who could've known?". They aren't black swans.)

    So a particular belief may or may not be justified, depending on where one sets the contextual boundaries. But the clock time is either correct or not independent of those standards. That is, truth keeps us connected to the world and keeps us honest.

    It's not clear to me that a knowledge-first view especially helps with these issues. There's still the question of whether you knew the time or not, and what counts as evidence.
  • Andrew M
    1.6k
    I was aware that not everyone agrees with "no false lemmas". I confess that I don't know what the full definition of a lemma would be so I'm not in a position to argue with them. For the sake of brevity, I ignored them. The "apparent dog" is not an impressive counter-example. An apparent dog is not a dog. One might argue that a robot dog is a kind of dog, but that would blow the point of the story, so we don't need to worry about that.
    Ludwig V

    A lemma, here, is a premise of one's purported knowledge. So the stated premise in the case of the robot dog was that James had observed an apparent dog. Framed that way, it's a true premise, so avoids the "no false lemmas" condition. However I think that framing is problematic and agree that it's not an impressive counter-example.
  • creativesoul
    11.9k
    S (a person) knows P (a proposition) iff

    1. S believes P
    2. P is true
    3. P is justified

    When (1) S believes P it means, for S, P is true or S thinks P is true.
    Agent Smith

    Okay, so long as...

    P need not be true in order for it to be believed. The phrase "for S, P is true" conflates truth and belief. There are plenty of cases when/where S can believe P, but P be false. Belief is necessary but insufficient for well-grounded true belief. Truth is necessary but insufficient for well-grounded true belief. Justification follows suit if being justified requires being argued for. If justification requires putting one's reasons into words in a manner which somehow dovetails with current conventional rules governing the practice, well then, we cannot possibly account for well-grounded true belief that emerges prior to the complex language use necessary for becoming a successful practitioner of justification endeavors.

    Toddlers cannot do that, yet they can certainly know when some statement is false on its face. They can know right away that what was said to be the case was not the case, despite their complete incompetence to put that into words.

    A twenty-seven month old child knew when "Thur's nuthin in thur for yew!" was false despite the fact that she was incapable of uttering it or her reasons for not believing it. She knew that that claim was false because she knew that there were things in there. She opened the door and demonstrated her knowledge to each and every individual in the room. Not all that uncommon an occurrence, I would think.

    -----------------------------------------------------------------------------------------------------------------------

    The problem with JTB that Gettier called attention to was not a problem with well-grounded true belief at all. It was and remains a problem with the so-called 'rules' of logical entailment. They permitted Edmund to change Smith's belief from being about himself to being about someone else. It is when we forget that that we fall into error.

    While it may be true that "the man with ten coins in his pocket will get the job" is entailed by "I have ten coins in my pocket, and I'm going to get the job", it is most certainly impossible for Smith to believe that anyone else will get the job.
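
    The entailment appealed to here is just existential generalization. In symbols (one standard rendering, with $C(x)$ for "x has ten coins in his pocket", $G(x)$ for "x will get the job", and $s$ for Smith; the lettering is illustrative, not Gettier's own):

    $$ C(s) \wedge G(s) \;\vdash\; \exists x\,\big(C(x) \wedge G(x)\big) $$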

    FULL STOP.
  • creativesoul
    11.9k
    Smith is not justified in believing that anyone else has ten coins in their pocket. He is also not justified in believing anyone else with ten coins in their pocket will get the job, but that's exactly what happened - contrary to his belief that he was the man with ten coins in his pocket who would get the job.

    FULL STOP.
  • creativesoul
    11.9k
    Gettier demonstrates how the rules of entailment do not successfully preserve truth.


    Why, again, is it an acceptable thing to do? We employ the rules of entailment, completely change Smith's beliefs from false ones to true ones, and then somehow think that this is all acceptable?
  • Ludwig V
    1.7k


    I’ve read all your posts and I think you’ll find replies or reactions in what follows.

    I would like to explain why I said that the mistake is to answer the question.

    Faced with a problem like this one, it can be helpful to look at things from a fresh perspective. That can be achieved here by putting oneself in the place of the subject and considering the situation, not so much from the question whether it counts as knowledge or not but considering the related question “was I right or not”.

    Take the Gettier case at the beginning of this thread:-

    It's dusk, you're a farmer. You go into your fields and see a cowish shape (it actually happens to be a cloth swaying in the wind). You conclude that there's a cow in your field. There, in fact, is a cow in your field.
    TheMadFool

    If you know that you didn't see a cow, but just a cowishly shaped rag, you will withdraw your claim that there's a cow in the field. But you will notice that there is a cow in the field, though you couldn't have seen it. So you were right, but for the wrong reasons. If it had been a bet, you would have won it. But it is precisely to differentiate winning a bet from knowing that the J clause was invented. So it is clearly not knowledge or even justified true belief because the J clause fails.

    Now, Gettier stipulates that it is possible for one to be justified in believing that p even when p is false. This opens the door to his counter-examples, but I am reluctant to find fault with it.

    However, there is a problem with the next step. He further stipulates that if one believes that p and if p entails q, one is entitled to deduce that q and believe it. He does not say that it is sufficient to believe that p entails q. Hence, even though we must accept the belief that p, if it is asserted by S, we must agree that p entails q, if the justification is to be valid. Assuming that we are not talking about the truth-functional definition of implication, it is clear that even if p does entail q, one is not entitled to deduce q if p is false. So the cases all fail.

    Russell’s clock is not a classic Gettier problem (and Russell himself treats it as a simple case of true belief which is not knowledge). It raises the rather different problem, that we nearly always make assumptions which could be taken into account, but are ignored for one reason or another, or even for no particular reason. Sometimes these assumptions fail, and the result is awkward to classify. Jennifer Nagel calls this the Harman-Vogel paradox.

    The classic example is parking your car in the street to attend a meeting or party or whatever. If all goes well, you will be perfectly comfortable saying that you know that your car is safe. But suppose the question arises “Is your car safe? Are you sure it hasn’t been stolen?” You ignored that possibility when you parked, assuming that the area was safe. But perhaps you aren’t quite sure, after all. It is perfectly possible that my car will be stolen while I’ve left it. I do not know how to answer this. Our yearning for certainty, for which knowledge caters, collides with the practical need to take risks and live with uncertainty. One might point out that we take risks every time we assert something; if it goes wrong, we have to withdraw the assertion. But that is just a description of the situation, not a solution.

    I think there may be something to be said for the knowledge-first view, but I haven’t done any detailed work on it. It might well be worth following up. It occurs to me that it would be much easier to teach the use of “know” to someone who didn’t know either “know” or “believe” than the other way round.

    The J clause is a bit of a rag-bag and I’m not sure it is capable of a strict definition. But I’m not sure how much, if at all, that matters.
  • creativesoul
    11.9k


    Russell's clock is another example of accounting malpractices in my view. The person believed that a stopped clock was working. That was the false belief. It is common to attribute something much different in the form of a proposition that they would agree to at the time(that clock is working). I think that that is a mistake when it comes to false belief.

    It is humanly impossible to knowingly be mistaken(to knowingly hold false belief). The mistaken false belief is not in the form of a proposition that the person knowingly believes, such as one they would assent to if asked at the time. To quite the contrary, the belief is impossible for them to knowingly hold. It is only when they become aware of the fact that they were mistaken, that they once held false belief, that they will readily assent to such.

    It is only after believing that a stopped clock was working and later becoming aware that the clock had stopped that it is possible to know that one had once believed that a stopped clock was working.

    Believing that a cloth is a cow is not equivalent to believing that "a cloth is a cow" is true. The same holds good for "a broken clock is working".

    Beliefs are not equivalent to propositional attitudes.
  • Ludwig V
    1.7k


    I agree that
    It is humanly impossible to knowingly be mistaken(to knowingly hold false belief).
    creativesoul
    But I don't quite understand why you say it is humanly impossible. It seems to me self-contradictory to assert "I believe that p and it is not the case that p". It is equivalent to "p is true and p is false." (Moore's paradox, of course.)

    And I don't understand what you mean when you say
    Beliefs are not equivalent to propositional attitudes.
    creativesoul
    I was under the impression that belief was one of the paradigmatic propositional attitudes.

    Perhaps you are referring to your point that
    Believing that a cloth is a cow is not equivalent to believing that "a cloth is a cow" is true.
    creativesoul
    It is true that sometimes people explicitly verbalize a belief, whether to themselves or others and sometimes they don't - and of course, animals believe things, but clearly don't verbalize them. But I don't understand why that makes any difference here.
  • Srap Tasmaner
    5k
    So it is clearly not knowledge or even justified true belief because the J clause fails.
    Ludwig V

    This is just to deny one of Gettier's premises. And that's fine, of course, but on what grounds?

    Gettier is deliberately pretty vague about justification so that his argument applies to various formulations in the literature. He describes the evidence his protagonist has, with the assumption that it's the sort of thing we would usually consider adequate to justify holding a belief. If it's not, in our view, adequate, then we ought to strengthen the evidence until it is.

    I take it as obviously true that we can have very strong evidence for a proposition that is false, for the simple reason that evidence is mostly a matter of probability.

    In the case at hand, we have a farmer looking out at a field, and he's probably used to the way a cow standing in the field catches just a bit of light so that it's an indistinct bright spot in the otherwise dark field. He's never known anything else to be in the field that had this effect, though obviously a great number of things, including the cloth that turns out to be there, could, so when he sees such an indistinct bright spot he thinks 'cow'. That seems to me a perfectly reasonable belief and he has probably formed a true belief on just such a basis thousands of times before.

    There is obviously a gap between the evidence and his conclusion; that gap has never mattered before, but this time reality falls into that gap. C'est la vie. You find the gap too large to allow him to claim to know there is a cow in his field; the point of the Gettier problems is to ask how small the gap has to be before you are willing to allow such a knowledge claim. If there has to be no gap at all, then it's a little unclear how useful the idea of 'justification' is. We may believe inference from empirical evidence can approach demonstrative proof, but we don't generally believe it can actually reach it.

    Whatever size gap you choose to accept, even in a single case, that's where your situation can be Gettierized. That's my read. Rational belief is the sort of thing you have evidence for, not knowledge.



    I find Lewis's contextualism a pretty obscure doctrine, so I'm not ready to go there yet, and I have more reading to do before taking on contextualism in general. Your view seems to be some sort of hybrid, in which knowledge is still a sort of justified belief, but what counts as justification is context-dependent. (Usually contextualism passes right by justification.)

    For this case, Lewis might say that until that fateful evening when the farmer mistook an old shirt for a cow, the possibility of something making the same sort of bright spot in the field as a cow does was irrelevant, but it's not irrelevant for us as the constructors of the hypothetical, so we have to refuse to attribute knowledge to him. But now what? Must the farmer forevermore wonder whether the bright spot is a cow or an old shirt? Because we know he was mistaken once? We might well ask, but Lewis specifically does not make such demands on the farmer, who either will or won't. This is a puzzling theory, that the less imaginative you are the more you know.

    What does seem to be the kernel of truth in this story is that — atomism incoming — some states of affairs are relevant to the truth of a proposition P, and some aren't, and some states of affairs are relevant to your knowing that P, and some aren't.

    One of the roles of knowledge is to raise our standards of knowledge. I mean cases like this: suppose my son and I haul out the air compressor to inflate his car's tires, and then he's to put the compressor away in the shed. If I later ask him if he got it squared away, his report amounts to a claim to know that he did, and we can itemize that report: he knows to coil up the extension cord; he knows to bleed the hose, else it's too stiff to coil up; he knows to switch off the outlet it's plugged into or unplug it; but does he know to bleed the pressure from the tank? Did I even show him how to do that? If he didn't bleed the tank, then he only believes he put it away properly, but he doesn't know he did because a step has been left out, something relevant to the truth of "I put away the compressor properly."

    That sort of relevance analysis, or contextualism, if that's what it is, I'd endorse. It does mean that the more you know, the more cases of putative knowledge might be excluded, because the world is rich with half-assery. But that's nothing like Lewis's acceptance of knowledge going bad when doing epistemology; I'm not imagining that the tank should be bled, I know it. And so it is with those benighted contestants on Deal or No Deal who claim to know the million dollars is in their case: they know no such thing; I know that they don't know not because I'm more imaginative, but because I know how random choice works, and they apparently don't.
  • Michael
    15.6k
    Assuming that we are not talking about the truth-functional definition of implication, it is clear that even if p does entail q, one is not entitled to deduce q if p is false. So the cases all fail.
    Ludwig V

    I don't think this adequately addresses the reasoning.

    The reasoning is: if the belief that p is justified, and if p entails q, then the belief that q is justified.

    For example, assume that I am justified in believing that my car is in the garage. If my car is in the garage then it entails that my car is not on the road. Therefore, I am justified in believing that my car is not on the road.

    This seems a reasonable argument.
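
    Schematically, the closure principle being relied on can be written (one common way to symbolize it, with $J_S(p)$ read as "S is justified in believing that p" and $\vdash$ for entailment; the notation is illustrative):

    $$ J_S(p) \;\wedge\; (p \vdash q) \;\Rightarrow\; J_S(q) $$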
  • Ludwig V
    1.7k

    This seems a reasonable argument.
    Michael
    Yes, it is a reasonable argument. I didn’t pay attention to the point that if S is justified in believing that p, S is justified in asserting that p.

    It’s not really a surprise that a blanket refutation like that doesn’t work. It would have been noticed long ago if there were such a thing.

    But I do stand by my opinion that “S is justified in believing that p and p is false” is problematic. I’m still working this through, so I’ll say no more here.

    It doesn’t rescue my argument, because in a Gettier case, the falsity of p is not known to S.

    I stand by the observation that S is not the final authority on whether p does entail q. So justification is not simply up to S’s say-so.

    Which doesn’t rescue my argument either.



    You raise the problem of certainty.

    I see it this way. If we accept “S knows that p” when p is false, the concept of knowledge has lost what makes it distinct from belief, so I’m very reluctant to do that. Perhaps it sets a high bar to knowledge, almost certainly higher than everyday non-philosophical use would expect. It is not fatal. It just means that any claim to knowledge is open to revision unless and until certainty is achieved, (and it may never be). (And I’m using “certainty”, not in the sceptic’s sense, but in the sense that certainty is defined for each kind of proposition by the language-game in which it is embedded.) In normal life, we have to determine questions of truth and falsity as best we can, withdrawing mistaken beliefs when they appear. Final certainty as regulative ideal, not always achieved.

    One may be justified in believing that p even if p is false. This opens the door to Gettier cases, no matter how stingy or generous the criteria are. The problems actually arise when S believes the right thing for the wrong, but justifiable, reasons.

    How to respond? Well, my response to your farmer is 1) he thought he saw a cow, 2) he didn’t see a cow, 3) there was a cow. I observe that a) 1) and 3) are reasons for saying that he knew and that b) 2) is a reason for saying that he didn’t. I conclude that it is not proven that he knew, and that it is not proven that he didn’t, so I classify the case as unclassifiable.

    Unclassifiability is not that uncommon, and there are various ways in ordinary language of dealing with it. Within philosophy, there is no appetite for abandoning the JTB (not even Gettier actually suggests that). There is no consensus on what modification or addition to the JTB would resolve this (and anyway philosophers aren’t legislators except within their own discipline (or sub-discipline)). Perhaps the suggestion of treating “know” as primitive could help, but failing that, there is no solution.

    I don’t know what you mean by “making JTB-NFL work”. But I think this is a description of the situation. If you have a better one, I would very much like to know it.
  • Andrew M
    1.6k
    I find Lewis's contextualism a pretty obscure doctrine, so I'm not ready to go there yet, and I have more reading to do before taking on contextualism in general. Your view seems to be some sort of hybrid, in which knowledge is still a sort of justified belief, but what counts as justification is context-dependent. (Usually contextualism passes right by justification.)
    Srap Tasmaner

    Yes, my view perhaps differs from Lewis' in that regard. In the case of the clock example, I know it's 3pm as long as I see that the clock says 3pm and the clock is working properly. Now if I was asked whether I knew that the clock definitely hadn't stopped 5 minutes ago, then I don't know that. But wouldn't it then follow that I don't know the time? [*]

    My view is that different standards of justification are being applied here. I know the time according to a pragmatic standard (the clock was working and I looked). As Williamson notes, "Knowledge doesn’t require infallibility. What it requires is that, in the situation, you couldn't too easily have been mistaken."

    The clock question switches the context and applies a higher standard (for the moment anyway). That is, I don't know that the clock didn't stop 5 minutes ago and so don't know the time per that higher standard. I assume it didn't, but that's not the same thing. But I'm also not merely assuming it's 3pm. I did look at the clock which is all that is ordinarily expected. (Though if our lives depended on getting the time correct in some context, more might be expected - thus raising the standard in that context.)

    Must the farmer forevermore wonder whether the bright spot is a cow or an old shirt? Because we know he was mistaken once? We might well ask, but Lewis specifically does not make such demands on the farmer, who either will or won't. This is a puzzling theory, that the less imaginative you are the more you know.
    Srap Tasmaner

    No, he shouldn't do that. Being more imaginative doesn't change the fact that he ordinarily knows there's a cow there. However, if the shirt situation commonly occurred, that would violate Williamson's requirement above - that you can't too easily have been mistaken. This is exploited by the "fake barn" Gettier case - what is normally a low probability case is instead a high probability case in that region. One doesn't know they have seen a barn in that region unless they look more closely.

    --

    [*] Which is the Harman-Vogel paradox that @Ludwig V referred to. Jennifer Nagel has a useful survey of some of the responses (contextualism, relativism, interest-relative invariantism, error theory) and her own solution (dual-process theory) in "The Psychological Basis of the Harman-Vogel Paradox".
  • creativesoul
    11.9k
    I agree that
    It is humanly impossible to knowingly be mistaken(to knowingly hold false belief).
    — creativesoul
    But I don't quite understand why you say it is humanly impossible.
    Ludwig V

    Well, I say it because it seems pretty clear to me that in each and every instance - at the precise moment in time - when we become aware of the fact that and/or come to know that... something is not true or that something is not the case... it is quite literally impossible for us to believe otherwise.




    It seems to me self-contradictory to assert "I believe that p and it is not the case that p". It is equivalent to "p is true and p is false." (Moore's paradox, of course.)

    Seems pretty clear to myself also that asserting "I believe that p and it is not the case that p" is self-contradictory. That's just an inevitable consequence of what the words mean(how they're most commonly used). I'm also inclined to agree that it is very often(perhaps most often) semantically equivalent to asserting "p is true and p is false". The exceptions do not matter here.



    I'm glad Moore's paradox has been mentioned...

    Moore's paradox has him wondering why we can say something about someone else that we cannot also say about ourselves. He offers an example of our knowing when someone else holds false belief and then pointing it out while they still hold it. He asks, "why can we not do that with ourselves?" or words to that effect/affect. The reason why we can say "It's raining outside, but they do not believe it", but we cannot say the same thing about ourselves is because we are completely unaware of holding false beliefs while holding them, but we can be aware of others' while they hold them.




    And I don't understand what you mean when you say
    Beliefs are not equivalent to propositional attitudes.

    Honestly, I'm not at all surprised by any hesitation. It's well-founded, especially if you're unfamiliar with my position on the relevant matters. The worldview I argue for - what makes the most sense to me - is uniquely my own; a Frankenstein's monster of sorts, built from globally sourced parts. Epicurus, Xeno, Plato, Aristotle, Hume, Kant, Heidegger, Witt, Russell, Moore, Ayer, Tarski, Kripke, Quine, Davidson, Searle, Austin, and Dennett were all influential to my view. I'm certain there are many more. That was right off the top of my head, which happens to mirror exactly how I prefer to practice this discipline.

    I take serious issue with how academic convention has been taking account of meaningful human thought and belief.

    I've yet to have seen a school of thought practicing a conception of meaningful thought and/or belief, consciousness, or any other sort of meaningful experience that is simple but adequate enough to be able to take account of the initial emergence, and yet rich enough in potential to be able to also account for the complexity that complex written language use has facilitated, such as the metacognitive endeavors we're currently engaged in here.

    I've yet to have seen one capable of bridging the gap between language-less creatures and language users in terms that are easily amenable to evolutionary progression.




    I was under the impression that belief was one of the paradigmatic propositional attitudes.

    Indeed, it is! Rightly so, as well...

    ...when and if we're specifically discussing belief about propositions, assertions, statements, utterances, etc. Not all belief is about language use. It very often is however, and when that is the case, it makes perfect sense for us to say that if one has an attitude towards the proposition "there is a cow in the field" such that they hold that the proposition is true, then they have a particular belief that amounts to a propositional attitude. I'm in complete agreement with that much - on its face.

    However, and this is what's crucial to grasp, if one believes that a piece of cloth is a cow, they most certainly do not - cannot - have an attitude towards the proposition "a piece of cloth is a cow" such that they hold that that proposition is true. That belief is not equivalent to a propositional attitude.

    "There is a cow in the field" is not entailed by belief that a piece of cloth is a cow. The same holds good with barn facades and stopped clocks.




    Perhaps you are referring to your point that
    Believing that a cloth is a cow is not equivalent to believing that "a cloth is a cow" is true.
    — creativesoul
    It is true that sometimes people explicitly verbalize a belief, whether to themselves or others and sometimes they don't - and of course, animals believe things, but clearly don't verbalize them. But I don't understand why that makes any difference here.

    The point wasn't specifically about whether or not people explicitly verbalize a belief. The point is that we cannot explicitly verbalize some false belief while holding it, because we cannot know we hold them - at the time. As before...

    We cannot knowingly believe a falsehood.

    Verbalizing belief(false ones too!) requires knowing what you believe. We can believe that a piece of cloth is a cow. We can believe that a barn facade is a barn. We can believe that a stopped clock is working.

    What we cannot believe is that "a piece of cloth is a cow", or "a barn facade is a barn", or "a stopped clock is working" are true statements/assertions/propositions/etc. If we do not know that we believe a piece of cloth is a cow, if we do not know that we believe a barn facade is a barn, if we do not know that we believe a stopped clock is working, then we cannot possibly explicitly verbalize it.

    Our beliefs during such situations are not equivalent to propositional attitudes.

    I suppose my position could be taken as rejecting the J, T, and B aspects of those candidates.
  • Ludwig V
    1.7k


    The question of context is obviously very important in all this. It seems to me that two important features of the context are the probability of being right and the risks if we are wrong.

    I haven’t got any interesting conclusions about “S is justified in believing that p and p is false”. Just muddle.

    There is so much going on now that I'm having trouble keeping up. It is a great problem to have. Thanks to you all.

    Here are some comments:-

    As Williamson notes, "Knowledge doesn’t require infallibility. What it requires is that, in the situation, you couldn't too easily have been mistaken."
    Andrew M

    All right. But when you find you are mistaken, you need to withdraw the claim to know.

    Which is the Harman-Vogel paradox that Ludwig V referred to. Jennifer Nagel has a useful survey of some of the responses (contextualism, relativism, interest-relative invariantism, error theory) and her own solution (dual-process theory) in "The Psychological Basis of the Harman-Vogel Paradox".
    Andrew M

    I’m not sure whether to classify the classic Gettier cases as variants of that paradox or a completely different variety. But I am sure that this paradox is much more difficult and more important than the Gettier cases. I have looked at some of what Jennifer Nagel has written about this. I didn’t find any of the theories particularly appealing. I’m certain it deserves treatment separate from the Gettier cases.

    Moore's paradox has him wondering why we can say something about someone else that we cannot also say about ourselves. He offers an example of our knowing when someone else holds false belief and then pointing it out while they still hold it. He asks, "why can we not do that with ourselves?" or words to that effect/affect.
    creativesoul

    I read Moore's stuff about this so long ago that I'm afraid I can't remember exactly what he said. But I would have thought that the answer was pretty clear. Once you recognize that a belief is false, you have to abandon it.

    And I'm sure you know that there are other paradoxes of self-reference which he must have been aware of. (e.g., the liar paradox and Russell’s paradox about sets that are members of themselves.)

    One of the differences between those two cases is that in the case of the liar paradox, the contradiction is created in the act of asserting it, not by the proposition itself. Moore’s paradox is like that.

    What we cannot believe is that "a piece of cloth is a cow", or "a barn facade is a barn", or "a stopped clock is working" are true statements/assertions/propositions/etc. If we do not know that we believe a piece of cloth is a cow, if we do not know that we believe a barn facade is a barn, if we do not know that we believe a stopped clock is working, then we cannot possibly explicitly verbalize it.
    creativesoul

    You are right in the first sentence. In the second sentence, while we cannot verbalize that belief to ourselves, other people can, and they can prove that what they say is true by observing what you do. When I have realized what the situation is, I can verbalize it in various ways without any problem.

    It is true that it is odd to describe this as a propositional attitude (and it is also odd in the case of language-less creatures). I don't really know what a proposition is. I just use the term because it is a grammatical feature that conveniently groups together certain words that seem to belong together. So I'm not in a position to explain.