• Sam26
    3k
    Many acknowledge this, but when pushed will rely only on science, as if it were really the only method that counts. This confusion exists even among scientists. The problem is that most people, scientists included, don't have a good epistemology.
  • Sam26
    3k
    From my paper:

    Much of the contemporary discussion treats Gettier’s paper as showing that JTB is insufficient. I do not think this is the right lesson. The examples do not undermine the model itself. They depend on a confusion between what looks justified on the surface and what is genuinely justified within a practice. Once we attend to the structure of justification, including its graded and fallible character, it becomes clear that these cases fail to satisfy the justification condition in the first place. They rest on false grounds or on a lack of the relevant conceptual competence, and so they fall outside the classical model rather than threatening it. Seen in this way, Gettier does not overturn JTB; it signals the need to make explicit features of justification that the classical formulation left implicit. That is the task taken up by JTB+U in the sections that follow.

    Worked Gettier example (diagnostic use). Consider the familiar “ten coins” case. Smith has strong evidence that Jones will get the job, and Smith has counted ten coins in Jones’s pocket. Smith forms the belief, “The man who will get the job has ten coins in his pocket,” by straightforward logical inference from what he takes himself to know. Unknown to Smith, Jones will not get the job. Smith will get the job, and Smith also happens to have ten coins in his own pocket. The belief is true, and it can look well supported, but it does not have the standing required for knowledge.

    What fails is not truth, and not belief, but justification. The support Smith relies on depends on what is not the case, namely that Jones will get the job, and this triggers No False Grounds. One can say that Smith’s inference is valid, but validity is not enough, because justification is not merely a logical relation among propositions. It is a standing within a practice, fixed by public criteria that settle what counts as competent support in the context. The same case also brings Practice Safety into view. Smith stumbles into the truth by luck. In ordinary situations where the evidence is similar, he would draw the same conclusion, yet it would be false, so the belief is not practice safe. Defeater screening makes the point plain: once it is determined that Jones may not get the job, the belief loses its standing, and the only repair is to replace the faulty ground. Gettier does not refute JTB; it corrects a picture of justification as a private sense of assurance or a merely formal inference, rather than a public standing fixed by our epistemic practice.
  • Sam26
    3k
    I'm currently writing a book, Why Christianity Fails, using this epistemic model. Specifically, I analyze the testimonial evidence for the resurrection and demonstrate the weakness of that evidence.
  • Alexander Hine
    61
    Science is distinctive because it tends to force convergence by building systematic error detection into the practice. But the justificatory work still flows through the same routes. That is why it is a mistake to treat “science” as the only path to knowledge, and also a mistake to treat testimony as automatically inferior. The real question is the quality of the route in the case at hand, and whether the guardrails hold.
    Sam26

    You mean to elucidate for this audience that your project is a taxonomy of scientific method.
  • Sam26
    3k
    You mean to elucidate for this audience that your project is a taxonomy of scientific method.
    Alexander Hine

    Not quite. What I am offering is a taxonomy of routes of justification that operate across many practices: testimony, everyday logical inference, sensory experience, linguistic training, and pure logic in a boundary-setting role. Science is one prominent domain where these routes are integrated and disciplined by unusually strong correction mechanisms, but the taxonomy is not confined to science, and it is not meant to reduce every kind of knowing to scientific procedure.

    The purpose is practical: when someone claims knowledge, I want to be able to ask, which route is doing the work here, what standards govern it in that domain, what would count as a mistake or defeater, and do the guardrails hold. That applies to science, but it also applies to ordinary life, history, law, engineering, and philosophy when philosophy is making knowledge claims rather than offering a mere stance.

    If you want a quick check, a lot of what I call “knowledge” is acquired by testimony and linguistic training long before anyone does anything recognizably scientific.
  • Alexander Hine
    61
    The purpose is practical: when someone claims knowledge, I want to be able to ask, which route is doing the work here, what standards govern it in that domain, what would count as a mistake or defeater, and do the guardrails hold.
    Sam26

    Isn't the enunciation of knowledge itself bound to the character of a localised hermeneutic? Do you give the least weight to individual or subjective testimony? Where is the rationale in your system for the weighted significance of each of what you term 'routes', or of a combination of them?
  • Sam26
    3k
    Isn't the enunciation of knowledge itself bound to the character of a localised hermeneutic? Do you give the least weight to individual or subjective testimony? Where is the rationale in your system for the weighted significance of each of what you term 'routes', or of a combination of them?
    Alexander Hine

    Yes, the enunciation of knowledge is always situated in a local hermeneutic: a language, a practice, a way of drawing distinctions. I'm not trying to deny that. My point is that this doesn't reduce justification to “mere interpretation,” because within a practice there are criteria for correct and incorrect application, there are recognized mistake-conditions, and there are ways of correcting ourselves when the practice turns up errors. The hermeneutic is real, but it isn't the whole story.

    On individual or subjective testimony, I do give it weight. Testimony is one of the primary routes by which we acquire knowledge, and that includes first-person reports. The question isn't whether the report is subjective; it's how it stands within the standards that govern testimonial support: provenance, competence, independence, convergence, and defeater sensitivity. A single report is rarely self-authenticating, but it can still carry justificatory standing, especially when it's consistent, detailed, and later supported by independent checks.

    As for weighting the routes, I'm not assigning a fixed hierarchy; the weight is determined by the case. In a given context we ask which route is actually doing the work, what would count as a mistake in this domain, what would count as a defeater, and how strong the available correction mechanisms are. Then we look for convergence across routes, because that's often what turns fragile support into stable standing. So the rationale for weight isn't that one route always dominates, but that different practices and different questions demand different standards, and the guardrails (No False Grounds, Practice Safety, and Defeater Screening) discipline whatever routes are in play.
  • Tom Storm
    10.8k
    I'm currently writing a book, Why Christianity Fails, using this epistemic model. Specifically, I analyze the testimonial evidence for the resurrection and demonstrate the weakness of that evidence.
    Sam26

    A digression perhaps, and forgive my tone, which is not intended to be strident. Are there not already innumerable contributions on variations of this matter, from Bart Ehrman to Richard Carrier?

    Does Christianity fail if the Jesus story can’t be demonstrated? And what does “fail” mean here?

    We already know that there’s no eyewitness testimony from the time of Jesus, let alone for a resurrection. The Gospels were written years later by anonymous authors and survive only as copies of translations of earlier copies. We also know that Jews didn’t think much of the preacher's claims. Do we need more on this? I sometimes wonder if debunking the evidence in detail just makes some people take the story more seriously.
  • Sam26
    3k
    I'll probably start a separate thread on that subject, Tom, rather than getting into it here. I'll just say this: most of the testimonial evidence is secondhand (hearsay), so by definition it's weak.
  • T Clark
    16k
    Three guardrails that discipline justification

    If justification is a standing within a practice, it still needs discipline. Not every chain of support confers standing, and not every true belief that happens to be well supported counts as knowledge. In the paper I use three guardrails to mark common ways justification fails, even when a belief looks respectable.

    No False Grounds (NFG)....

    Practice Safety...

    Defeater Screening...
    Sam26

    I'm trying to think of how I would translate this into an engineering, or at least pragmatic, approach. I guess I would call your No False Grounds guardrail "quality control and assurance." These are the procedures you follow and the standards you apply to assure the quality of the data you use as input. For engineering or scientific activities, these procedures and standards will generally be formal, concrete, and mandatory. For less critical activities, they will be applied less formally, although the general principles are similar. This is a complex issue and is at the heart of my understanding of "truth." Here's something I wrote years ago that might shed some light, keeping in mind that it addresses just a small part of the issues covered by an overall quality control program.

    Say I have data: chemical laboratory analyses and field measurements for 100 water samples across 10 chemical constituents, giving me a 10 x 100 table of data. Is it true? What does that even mean? What can possibly go wrong?

    • It's the wrong data.
    • The data was tabulated incorrectly.
    • Samples were collected incorrectly in the field.
    • Samples were not packaged correctly, e.g., not kept refrigerated.
    • The wrong analytical methods or detection limits were specified.
    • Samples were not analyzed within holding times.
    • The analysis was not performed in accordance with standard operating procedures.
    • The appropriate quality assurance procedures were not followed.
    • The analyses did not meet the laboratory's quality assurance standards.
    • And lots more.
    These issues would be addressed by the use of what are called standard operating procedures (SOPs) during data collection. Data validation would then be performed after data collection and reduction to verify that the procedures have been met. To put this in more general terms, for situations where this level of formality is not required: for all the "grounds" you use to establish truth, you must know where they came from, how you know them, and what the uncertainties are.
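    To make the flavor of that concrete, here is a minimal sketch of what a data validation pass might look like in code. Everything in it (the field names, the approved-method list, the holding time) is invented for illustration; a real program would take these from the project's SOPs and quality assurance plan.

        from datetime import datetime, timedelta

        # All names and limits below are hypothetical, for illustration only.
        MAX_HOLDING_TIME = timedelta(days=14)        # invented holding time
        APPROVED_METHODS = {"EPA_8260", "EPA_6010"}  # invented method list

        def validate_record(rec):
            """Return a list of QA problems found in one sample record."""
            problems = []
            if rec["method"] not in APPROVED_METHODS:
                problems.append("unapproved analytical method")
            if rec["analyzed_at"] - rec["collected_at"] > MAX_HOLDING_TIME:
                problems.append("holding time exceeded")
            if not rec["chain_of_custody_complete"]:
                problems.append("chain of custody incomplete")
            if not rec["kept_refrigerated"]:
                problems.append("sample not kept refrigerated in transit")
            return problems

        # Screen every record; anything flagged is set aside until the
        # problem is resolved or documented.
        records = [{
            "sample_id": "MW-01",
            "method": "EPA_8260",
            "collected_at": datetime(2024, 5, 1),
            "analyzed_at": datetime(2024, 5, 20),  # past the holding time
            "chain_of_custody_complete": True,
            "kept_refrigerated": True,
        }]
        flagged = {}
        for r in records:
            problems = validate_record(r)
            if problems:
                flagged[r["sample_id"]] = problems
        print(flagged)  # {'MW-01': ['holding time exceeded']}

    The point is not the code but the shape: each check corresponds to a known way the data can go wrong, and a record only counts as a usable ground once it has passed all of them.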

    I guess your Practice Safety guardrail could be comparable to an engineering standard of practice. These are formal requirements established by regulations, codes, technical standards, and administrative standards created by governments, industry groups, engineering societies, and other organizations.

    I'm not sure how I would fit your "defeater screening" procedure into the system I'm describing.
  • Sam26
    3k
    I'm no engineer, but it might look something like the following:

    No False Grounds (NFG) = “Are we building on bad inputs?”
    This is your QA/QC point. It asks whether the data or the key assumptions are incorrect in a way that would make the conclusion questionable.

    Examples: wrong sample, mishandled sample, wrong method, transcription error, the lab did not follow procedures, etc.

    Practice Safety = “Is the method we used a safe, normal way to reach this kind of conclusion?”
    This is closer to a standard of practice. It is not perfection; it is “we used a route that usually catches mistakes.”

    Examples: proper calibration, chain of custody, replication, using accepted modeling procedures, etc.

    Defeater Screening = “Even if the data are good, is there something that would overturn the conclusion?”

    This is the part that is easiest to miss, because it happens after you think you are done.
    It is the deliberate search for “what would make this conclusion fail.”

    Examples in your setting:

    • A different source could explain the same contaminant pattern.
    • A missing geological feature changes the direction of some flow.
    • Seasonal changes would modify an important consideration.
    • Another dataset (borings, field observations, historical site use) conflicts with the story you are telling.

    So in one line:

    • NFG: inputs are not false.
    • Practice Safety: the route to the conclusion is not fragile.
    • Defeater Screening: no overlooked “gotcha” would overturn the conclusion.

    That is how your quality program maps onto my epistemology. That's the best I can do, not being an engineer. It's just a matter of getting used to the procedure; engineering has these procedures built into its conclusions.
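    If it helps, here is a toy sketch of the three guardrails written as explicit checks on a conclusion. All of the data structures and predicates are invented purely for illustration; nothing here is meant as real engineering practice.

        def no_false_grounds(grounds):
            # NFG: every input the conclusion rests on must itself check out.
            return all(g["verified"] for g in grounds)

        def practice_safe(route, accepted_routes):
            # Practice Safety: the route used must be one that normally
            # catches mistakes in this domain.
            return route in accepted_routes

        def surviving_defeaters(conclusion, candidate_defeaters):
            # Defeater Screening: deliberately search for what would
            # overturn the conclusion, and keep whatever survives scrutiny.
            return [d for d in candidate_defeaters if d["undermines"](conclusion)]

        def has_standing(conclusion, grounds, route, accepted_routes, defeaters):
            # A conclusion earns its standing only when all three guardrails hold.
            return (no_false_grounds(grounds)
                    and practice_safe(route, accepted_routes)
                    and not surviving_defeaters(conclusion, defeaters))

    The order of the checks matters less than the fact that the conclusion only earns its standing when all three pass.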