You challenge me (within the norms of politeness too, another ethical frame) in the name of inferential norms, calling upon me to defend my claim. Indeed, in making that claim, I have committed myself to its defense. If I can't defend it against a strong challenge, it's my duty to withdraw or modify the claim. — plaque flag
Ultimately, epistemic agreements and disagreements rest upon assumptions as to what speakers mean by their words:
Doesn't it strike you as odd, the assumption that a person can believe in something impossible? For what is said to be impossible is also said not to exist, and so cannot be said to be the cause of the person's belief. So how can a belief even refer to something that is impossible?
And what then of falsified beliefs? Aren't they also a problematic concept for similar reasons? For weren't one's previously held beliefs, which one presently judges to be "falsified", also caused by something that fully explains their previous existence?
Isn't a physicalist, who is committed to a causal understanding of cognition, forced in the name of objective science to always side with the epistemic opinions of the speaker, no matter how wrong, mad or contradictory the speaker might sound? For shouldn't the physicalist always interpret a speaker's utterances in the same manner that he interprets a sneeze, which is understood to refer to nothing more than its immediate causes?
(When interpreted with empathy, do Flat-Earthers really exist?)
One supports this approach phenomenologically, which is to say by simply bringing us to awareness of what we have been doing all along. Scan this forum. See us hold one another responsible for keeping our stories straight. See which inferences are tolerated, which rejected. Bots can learn this stuff from examples, just as children do. — plaque flag
From the perspective of an engineer who has a causal understanding of AI technology and a responsibility to fix it, the gibberish spoken by an "untrained" or buggy chatbot is meaningful in a way that it isn't for a naive user of the technology who intends to play a different language-game with it. And obviously, any agent of finite capacity can only learn to play one language-game well at the expense of playing worse at the others.
I am also not allowed to contradict myself, for the self is the kind of thing (almost by definition) that ought not disagree with itself --- must strive toward coherence, to perform or be a unity. — plaque flag
When you find yourself disagreeing with the beliefs of your earlier self, are you really contradicting your earlier self? For didn't the facts of the matter that you were responding to change?
Consider a sequence of beliefs {b1, b2, b3, ...} regarding the truth of the "liar sentence" L, unfolded over time, where L = "this statement is false":
b1. Presently at time t = 1, L is believed to be true.
b2. Presently at time t = 2, b1 is understood to imply that L is false, and hence that b1 is false.
b3. Presently at time t = 3, b2 is understood to imply that L is true, and hence that b2 is false.
b4. Presently at time t = 4, b3 is understood to imply that L is false, and hence that b3 is false.
In spite of the fact that each belief negates the previous one, each belief can nevertheless be considered "true" at the time of its construction without entailing contradiction with any of the other beliefs, for none of the beliefs were simultaneously held to be true, and each belief refers to a different object, namely its own temporal context.
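The temporally indexed sequence above can be sketched in code. This is a minimal illustration (the function name and record fields are hypothetical, chosen only for this example): each belief is stamped with its own time index, the next belief negates the previous one per the liar sentence's self-referential rule, and because no two beliefs share a time index, the set of beliefs contains no contradiction.

```python
def liar_belief_sequence(steps):
    """Generate beliefs b1..b_steps about L = "this statement is false".

    Each belief records the truth value assigned to L at its own time t.
    Each successive belief negates the previous one, mirroring the
    oscillation b1, b2, b3, ... described above.
    """
    beliefs = []
    l_is_true = True  # b1: at t = 1, L is believed to be true
    for t in range(1, steps + 1):
        beliefs.append({"time": t, "L_is_true": l_is_true})
        l_is_true = not l_is_true  # the next belief negates this one
    return beliefs

if __name__ == "__main__":
    for b in liar_belief_sequence(4):
        print(b)
```

Note that no two records carry the same `time` value, so even though the truth values alternate, no single moment holds both "L is true" and "L is false": each belief refers only to its own temporal context.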