JTB+U and the Grammar of Knowing: Justification, Understanding, and Hinges (Paper-Based Thread) Continuing with the paper...
Post #12
7. JTB+U and Artificial Intelligence: Why AI Does Not “Know”
The present interest in artificial intelligence has brought an old temptation back into view. We are inclined to treat fluent performance as if it were knowledge, and to treat the production of correct-sounding answers as if it were understanding. This temptation is understandable. The outputs of large language models can resemble the surface of competent human speech. They can summarize, explain, and argue, and they can do so in a way that often passes casual scrutiny. Yet the resemblance holds only at the level of appearance, not at the level of grammar. When we look more closely, we see that the ordinary criteria for knowledge are not satisfied, not because the machine lacks a private mental state, but because it does not stand within the practices that give the concept of knowledge its use.
Truth remains the success condition for knowledge, and nothing in what follows weakens that point. An artificial system can produce a true statement, sometimes with striking reliability. But knowledge is not merely the arrival at truth. Knowledge is true belief that stands within a practice of justification, and the standing is not a decorative label. It depends on the routes by which the belief is supported, the guardrails that discipline that support, and the background of bedrock certainties that makes the whole practice possible. This is the first reason the language of knowledge becomes slippery when we apply it to machines. The system produces assertions, but it does not participate in the language-games in which assertion, challenge, withdrawal, and justification have their life.
This is also where the role of bedrock certainties becomes decisive. Human justification presupposes a background that stands fast for us. These certainties are not items we know. They are the inherited conditions under which doubting and knowing take place. They form a hierarchy in the sense that some stand deeper than others, and they are displayed in action rather than defended by argument. The point is not that a machine lacks a set of stored assumptions. It is that the machine is not trained into a form of life in which such certainties function as the background of justification. An AI system does not stand within the practices that define what counts as a mistake, what counts as correction, and what counts as the withdrawal of a claim. It can be updated, constrained, and fine-tuned, but this is not the same as occupying the human space in which bedrock certainties show themselves as what stands fast.
The five routes also clarify the difference. When a person justifies a belief through testimony, inference, sensory experience, or linguistic training, the support is situated within a practice in which the believer can be held responsible to standards. These standards are public and they include the possibility of being corrected in the relevant way. A language model can mimic the outward form of these routes. It can cite sources, draw inferences, and use perceptual language, but these are linguistic gestures, not placements within the practice itself. The model does not have testimony in the human sense, since it is not a participant in the practices that give testimony its standing. It does not infer in the human sense, since it does not operate with the conceptual competence that makes an inference a movement within a language-game rather than a pattern of token transitions. It does not perceive, and so it does not have sensory experience as a route of justification. It displays linguistic training in the limited sense that it has been trained on linguistic material, but this is not the kind of training by which a human learner comes to grasp the use of a concept within a form of life. It is closer to the acquisition of a statistical profile of usage than to the possession of a concept.
This is why the distinction between statistical competence and conceptual competence matters. A model can be statistically competent, in the sense that it produces language that fits patterns in its training data. It can do this at scale and with impressive fluency. But conceptual competence is not the possession of patterns. It is the ability to use a concept correctly within a practice, to respond to correction, to recognize when a challenge is relevant, and to withdraw a claim when the practice requires it. These are not private mental achievements. They are displayed in the way one stands within a language-game. The machine can be made to output a retraction. It can be prompted to list possible objections. Yet these are outputs, not the standing of a belief within a practice of justification.
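To make the contrast concrete, here is a minimal sketch, not drawn from the paper, of what the acquisition of a statistical profile of usage can amount to in its simplest form: a toy bigram table built from a tiny corpus. The corpus, the function name continue_from, and the sampling scheme are all illustrative assumptions, and real language models are vastly more sophisticated; the point of the toy is only that it produces fluent-looking continuations while nothing in it uses a concept, responds to correction, or stands within a practice of justification.

```python
# A toy illustration (not the paper's example): a bigram "model" that
# acquires a statistical profile of usage from a tiny corpus and then
# emits plausible-sounding continuations by pattern matching alone.
import random
from collections import defaultdict

corpus = (
    "water boils at one hundred degrees "
    "water freezes at zero degrees "
    "the kettle boils the water"
).split()

# Record which word follows which: the "statistical profile of usage".
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def continue_from(word, length=6):
    """Sample observed successors to produce a fluent-looking continuation."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(continue_from("water"))
# e.g. "water boils at one hundred degrees": fluent and sometimes true,
# but the output is a pattern match, not a claim the system can own,
# defend, or withdraw within a practice of justification.
```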
The guardrails bring the point into sharper focus. No False Grounds matters because a model can generate support that looks acceptable and yet includes a false claim doing essential work. Practice Safety matters because a model’s correct output may be the result of a fortunate match rather than stable standing, especially in domains where the system has not been constrained by reliable sources. Defeater Screening matters because, while a model can generate lists of objections, it does not occupy the public discipline in which defeaters arise as challenges that change the standing of a belief. The model can simulate the discourse of justification, but it does not stand within a practice where its claims are owned, corrected, and withdrawn in the way that our language-games require.
None of this implies that AI is useless in epistemic life. The opposite is true. Artificial systems can be powerful instruments within human practices of justification, especially when their outputs are constrained by reliable data and when they are treated as aids rather than as knowers. They can help gather information, surface patterns, and organize arguments. But this usefulness does not collapse the grammatical distinction between producing true sentences and knowing. To treat the machine as a knower is to project the grammar of our concept of knowledge onto something that does not meet its criteria of use.
This is why JTB+U is especially valuable in the present environment. It gives us a disciplined way to distinguish persuasion from justification, fluency from conceptual competence, and the appearance of support from genuine standing within a practice. It also helps explain why the language of certainty is often misused in discussions of AI. A model can produce confident-sounding claims, and this can resemble subjective certainty. But hinge certainty is not confidence, and epistemic certainty is not mere persistence under repetition. The kinds of certainty that matter to knowledge are rooted in practices and in what stands fast within them. The machine does not inhabit that structure, even when its outputs resemble the surface of a human epistemic performance.
For these reasons it is better to say that artificial systems can produce true statements, and can assist human beings in practices of justification, without saying that they know. The temptation to speak otherwise is understandable, but it blurs the grammar of knowledge at exactly the moment when we most need it to be clear.