Comments

  • Sleeping Beauty Problem
    SB's answer: "Because the protocol ties one lamp to Heads-runs and two lamps to Tails-runs, among the awakenings that actually occur across repeats, the lamp I'm under now will have turned out to be a T-lamp about two times out of three. So my credence that the current coin toss result is Tails is 2/3." (A biased coin would change these proportions; no indifference is assumed.)

    The coin's fairness fixes the branches and the long-run frequencies they generate. The protocol fixes how many stopping points each branch carries. Beauty's "what are the odds?" becomes precise only when she specifies what it is that she is counting.

    Note on indifference: The Thirder isn't cutting the pie into thirds because the three interview situations feel the same. It's the other way around: SB is indifferent because she already knows their long-run frequencies are equal. The protocol plus the fair coin guarantee that, among the awakenings that actually occur, the two T-awakenings together occur twice as often as the single H-awakening, and within each coin outcome the Monday vs Tuesday T-awakenings occur equally often. So her equal treatment of the three interview cases is licensed by known frequencies, not assumed by a principle of indifference. Change the coin bias or the schedule and her "indifference" (and her credence) would change accordingly.
    Pierre-Normand
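The long-run frequency claim in the quoted passage can be checked by simulation. Note that this counts awakenings across repeats, which is precisely the counting convention the reply below disputes as overcounting (a minimal sketch; the trial count and seed are arbitrary):

```python
import random

def fraction_tails_awakenings(n_trials=100_000, seed=1):
    """Simulate repeated SB experiments and count awakenings.

    Heads -> one awakening (Monday); Tails -> two awakenings
    (Monday and Tuesday). Returns the fraction of all awakenings
    that are T-awakenings.
    """
    rng = random.Random(seed)
    tails_awakenings, total = 0, 0
    for _ in range(n_trials):
        if rng.random() < 0.5:   # Heads: one awakening
            total += 1
        else:                    # Tails: two awakenings
            tails_awakenings += 2
            total += 2
    return tails_awakenings / total

# With a fair coin this converges to 2/3.
```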

Thirders who argue their position on the basis of frequential probabilities are conflating the subject waking up twice in a single trial (in the case of Tails) with two independent and identically distributed repeated trials. But the subject waking up twice in a single trial constitutes a single outcome, not two outcomes. Frequentist Thirders are therefore overcounting.

There is only one aleatorically acceptable probability for P(Head | Monday OR Tuesday) (which is the question of the SB problem):

    P(Head | Monday OR Tuesday) =
P(Monday OR Tuesday | Head) x P(Head) / P(Monday OR Tuesday)

    where

    P(Head) = 0.5 by assumption.
P(Monday OR Tuesday) = 1 by assumption.

    P(Monday OR Tuesday | Head) = P(Monday | Head) + P(Tuesday | Head) = 1 + 0 = 1.

    P(Head | Monday OR Tuesday) = 1 x 0.5 / 1 = 0.5.
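The halfer calculation above can be transcribed directly (a minimal sketch; the probabilities are exactly those stated in the text):

```python
p_head = 0.5              # fair coin, by assumption
p_days_given_head = 1.0   # P(Monday | Head) + P(Tuesday | Head) = 1 + 0
p_days = 1.0              # she is certainly awoken on Monday or Tuesday

# Bayes' theorem as written above
p_head_given_days = p_days_given_head * p_head / p_days
print(p_head_given_days)  # 0.5
```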
  • Sleeping Beauty Problem
    Then what would you say it is? If you say Q, then your credence in Tails must be 1-Q, and you have a paradox.JeffJo

If you insist that credence must be expressed as a number Q, then in general I would refuse to assign a credence for that very reason: cases like SB, in which credences are artificially constrained to single probability values, don't merely result in harmless paradoxes but in logical contradictions (Dutch books) with respect to causal premises. Likewise, I am generally more likely to bet on a binary outcome when I know for sure that the aleatoric probability is 50/50 than on a binary outcome for which I don't know the aleatoric probability.

In order to avoid unintended inferences, the purpose of assigning credences needs to be known. For example, decisions are often made by taking posterior probability ratios of the form P(Hypothesis A | Observation O) / P(Hypothesis B | Observation O). For this purpose, assigning the prior credence P(Hypothesis A) = 0.5 is actually a way of saying that credences don't matter for decision making using the ratio, since in that case the priors cancel out in the posterior probability ratio to produce the likelihood ratio P(Observation O | Hypothesis A) / P(Observation O | Hypothesis B), which appeals only to causal (frequential) information. This is also the position of Likelihoodism, a view aligned with classical frequential statistics, according to which prior probabilities shouldn't play a part in decision making unless they are statistically derived from earlier experiments.
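The cancellation described above can be made concrete. In this sketch the likelihood values are hypothetical, chosen only for illustration:

```python
def posterior_ratio(lik_a, lik_b, prior_a, prior_b):
    """Posterior odds of hypothesis A over B given one observation O."""
    return (lik_a * prior_a) / (lik_b * prior_b)

# Hypothetical likelihoods P(O|A) = 0.8 and P(O|B) = 0.2.
# With equal priors, the posterior ratio reduces to the likelihood ratio:
equal_prior = posterior_ratio(0.8, 0.2, 0.5, 0.5)
assert abs(equal_prior - 0.8 / 0.2) < 1e-12
```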

An acceptable alternative to assigning likelihoods, which often cannot be estimated, as in single-experiment situations, is simply to list the possible outcomes without quantifying them. Sometimes there is enough causal information to at least order possibilities in terms of their relative likelihood, even if quantification of their likelihoods isn't possible or meaningful.
  • Sleeping Beauty Problem
    The SB problem is a classic illustration of confusing what probability is about. It is not a property of the system (the coin in the SB problem), it is a property of what is known about the system.JeffJo

Then you are referring to subjective probability, which is controversial for reasons illustrated by the SB problem. Aleatory probability, by contrast, is physical probability, and directly or indirectly refers to frequencies of occurrence.


    That is, your credence in an outcome is not identically the prior probability that it will occur. Example:

    I have a coin that I have determined, through extensive experimentation, is biased 60%:40% toward one result. But I am not going to tell you what result is favored.
    I just flipped this coin. What is your credence that the result was Heads?
    JeffJo

It is correct to point out that credence does not traditionally refer to physical probability but to subjective probability. It is my strong opinion, however, that credence ought to refer to physical probability. For example, my answer to your question is to say that my credence is exactly what you've just told me and nothing more; that is, my credence is 60/40 in favour of Heads or 60/40 in favour of Tails.

    Even though you know that the probability-of-occurrence is either 60% or 40%, your credence in Heads should be 50%. You have no justification to say that Heads is the favored result, or that Tails is. So your credence is 50%. To justify, say, Tails being more likely than Heads, you would need to justify Tails being more likely to be the favored result. And you can't.JeffJo

    I definitely would not say that my credence is 50/50, because any statistic computed with that credence would not be reflective of the physical information that you have provided.
  • Sleeping Beauty Problem
I don't see any questionable appeal to the principle of indifference being made in the standard Thirder arguments (though JeffJo may be making a redundant appeal to it, which isn't needed for his argument to go through, in my view.) Sleeping Beauty isn't ignorant about frequency information since the relevant information can be straightforwardly deduced from the experiment's protocol. SB doesn't infer that her current awakening state is a T-awakening with probability 1/3 because she doesn't know which one of three indistinguishable states it is that she currently is experiencing (two of which are T-awakenings). That would indeed be invalid. She rather infers it because she knows the relative long-run frequency of such awakenings to be 2/3 by design.Pierre-Normand

But the SB experiment is only assumed to be performed once; SB isn't assumed to have undergone repeated trials of the Sleeping Beauty experiment, let alone to have memories of previous trials, but only to have been woken once or twice in a single experiment, for which no frequency information is available, except for common knowledge of coin flips. So SB is in fact appealing to a principle of indifference, as per the standard explanation of the Thirder position (e.g. Wikipedia).

    In any case, a frequentist interpretation of P(Coin is Tails) = 0.5 isn't compatible with a frequentist interpretation of P(awoken on Tuesday) = 1/3.

    For sake of argument, suppose P(Coin is Tails) = 0.5 and that this is a frequential probability, and that inductive reasoning based on this is valid.

    Now if P(awoken on Tuesday) = 1/3, then it must also be the case that

P(awoken on Tuesday | Coin is Tails) x P(Coin is Tails) = 1/3, as typically assumed by thirders at the outset. But this in turn implies that

P(awoken on Tuesday | Coin is Tails) = (1/3)/0.5 = 2/3.

Certainly this isn't a frequential probability unless SB, having undergone repeated trials, notices that she is in fact woken more times on a Tuesday than a Monday in cases of Tails, in contradiction to the declared experimental protocol. Furthermore, this value doesn't even look reasonable as a credence, because merely knowing a priori that the outcome of the coin is Tails shouldn't imply a higher credence of being awoken on Tuesday rather than Monday.

Credences are a means of expressing the possession of knowledge without expressing what that knowledge is. To assign consistent credences requires testing every implied credence for possible inconsistencies. Thirders fail this test. Furthermore, credences should not be assigned on the basis of ignorance; a rational SB would not believe that every possible (day, coin-outcome) pair has equal prior probability; rather, she would only assume what is logically necessary, namely that one of the pairs will obtain with either unknown or undefined probability.
  • Sleeping Beauty Problem
What the SB problem amounts to is a reductio ad absurdum against the principle of indifference being epistemically normative; a principle that in any case is epistemically inadmissible, psychologically implausible, and technically unnecessary when applying probability theory. A rational person refrains from assigning probabilities when ignorant of frequency information; accepting equal odds is not a representation of ignorance (e.g. Bertrand's Paradox).

- It is commonly but falsely argued by thirders that halvers are susceptible to a Dutch-book argument, by virtue of losing twice as much money if the coin lands Tails as they gain if the coin lands Heads (the Dutch book being defined as an awoken SB placing and losing two bets in the case of Tails, each costing her $1, one on Monday and one on Tuesday, versus her placing and winning only one bet, rewarding her with $1 on Monday, if the coin lands Heads). But this Dutch-book argument is invalidated by the fact that it is equivalent to SB being a priori willing to win $1 in the case of Heads and to lose $2 in the case of Tails, i.e. SB knowingly accepting a Dutch book with an expected loss of 0.5 x 1 - 0.5 x 2 = -$0.5 before the experiment begins, given her prior knowledge that P(H) = 0.5. So the Dutch-book argument is invalid and is actually an argument against the thirder position.
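The expected-value arithmetic in this argument can be checked directly, with the stakes as described above:

```python
p_heads = 0.5
win_if_heads = 1.0    # one winning $1 bet on Monday
loss_if_tails = 2.0   # two losing $1 bets, Monday and Tuesday

# Expected value per experiment, evaluated before the experiment begins
expected_value = p_heads * win_if_heads - (1 - p_heads) * loss_if_tails
print(expected_value)  # -0.5
```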

    The (frankly unnecessary) lesson of SB is that meaningful probabilities express causal assumptions, and not feelings of indifference about outcomes.
  • "Ought" and "Is" Are Not Two Types of Propositions
    "The 'ought' you mentioned, as in 'it ought to rain,' is a prediction. In contrast, the 'must' in a normative conclusion is a requirement for action—a behavioral standard that everyone ought to abide by."panwei

    Your definition of 'must' is circular here. Circular definitions are characteristic of speech acts ("Tie your shoelaces! because I said so!") and also of analytic propositions ("Bachelor" means "unmarried man").

    In such contexts, it is right to point out that their use is not necessarily inferential, because they might represent instructions, wishes, promises, postulates, conventions, orders etc, rather than assumptions or facts. But the English meaning of "ought" is used both as a speech act and as an inference, depending on the context, which reflects the fact that we often cannot know whether a sentence is meant as a speech act or as a hypothesis, especially when considering the fact that speech acts are often issued on the basis of assumptions.

    This also reflects a fundamental asymmetry of information between speaker and listener; When a speaker uses "ought", they might intend it as a speech act or as a prediction but the listener cannot be certain as to what the speaker meant, even after asking the speaker to clarify himself, because we are back to circular definitions.
  • "Ought" and "Is" Are Not Two Types of Propositions
    Are 'oughts' inferences, and are 'ises' reducible to 'oughts'?

    In ordinary language, "ought" is also used to signify predictive confidence, as in "it ought to rain"; so "oughts" aren't necessarily used in relation to utility maximisation. Furthermore, we understand what an agent is trying to achieve in terms of our theory of the agent's mind, which is partly based on our observations of their past behaviour. So an inference of what an agent 'ought' to do on the basis of what 'is' can perhaps be understood as an application of Humean induction. And our description of what 'is' tends to invoke teleological concepts, e.g. if we describe a ball as being a snooker ball it is because we believe that it ought to behave in the normal way that we expect of snooker balls from past experience.

    So if descriptions of what is the case are necessarily inferential, and if our understanding of moral obligations are in terms of our theory of minds which in turn are inferred from behavioral observations, then perhaps there is an argument for saying that only oughts exist, even if we are never sure which ones.
  • "Ought" and "Is" Are Not Two Types of Propositions
    In Decision Theory, States and Actions are generally treated as logically orthogonal concepts; an 'is' refers to the current state of an agent, and an 'ought' refers to the possible action that has the highest predicted utility in relation to the agent's 'is'. This treatment allows causal knowledge of the world to be separated from the agent's subjective preferences.
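On this decision-theoretic reading, the 'ought' is just an argmax over predicted utilities given the current state. A minimal sketch; the state names and utility values are hypothetical:

```python
# Hypothetical (state, action) -> predicted utility table.
utility = {
    ("low_battery", "recharge"): 5.0,
    ("low_battery", "explore"): -2.0,
}

def ought(state, utility):
    """Return the action the agent 'ought' to take: the one with
    highest predicted utility in the agent's current state."""
    candidates = {a: u for (s, a), u in utility.items() if s == state}
    return max(candidates, key=candidates.get)

assert ought("low_battery", utility) == "recharge"
```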

Paradoxically, this can imply that the psychological distinction between states and action utilities is less clear, considering the fact that agents don't generally have the luxury of perfect epistemic knowledge of their worlds prior to taking an action (e.g. as required to solve the Bellman equation).

Also, an action is only as good as the state that it leads to: rewards are related to (state, action) pairs, so utility values can be thought of as equivalence classes of states quotiented with respect to action utilities. This is practically important, since agents don't generally have the memory capacity to store perfect world knowledge even if it were available. Agents tend to visit and focus their learning on the state->action->(reward, state) chains that correspond to highest reward, and then learn compressed representations of these visited states in terms of a small number of features that efficiently predict utility, e.g. chess engines estimate the utility of a board position by representing the board in terms of a manageably small number of spatial relations between pieces, especially in relation to the kings. So the representational distinction between states and action reward values in the mind of an agent is muddied.
  • How LLM-based chatbots work: their minds and cognition
    In order to fully dislodge the Cartesian picture, that Searle's internalist/introspective account of intentionally contentful mental states (i.e. states that have intrinsic intentionality) indeed seem not to have fully relinquished, an account of first person authority must be provided that is consistent with Wittgenstein's (and Ryle and Davidson's) primary reliance on public criteria.Pierre-Normand

Quine provided the most useful conceptual framework for scientists, technologists and philosophers alike, since LLMs can be naturally interpreted as physically instantiating Quine's web of belief, namely an associative memory of most public knowledge. The nature and knowledge of LLMs can then be appraised in line with Quine's classification of sentence types.

(A short paraphrase of Quinean sentence types, as returned by Google Gemini)

    Theoretical sentences: Describe things not directly observable, such as "Atoms are the basic building blocks of matter". They require complex background knowledge and cannot be verified by a simple, direct observation.

    Observation categoricals: Sentences that involve a relationship between two events, often derived from theory and hypothesis together, such as "When the sun comes up, the birds sing".

    Occasion sentences: Sentences that are sometimes true and sometimes false, like "It is raining". An observation sentence can also be an occasion sentence, as "It is cold" is true on some occasions and false on others.

    "Myth of the museum" sentences: Traditional view of language where sentences are like labels for pre-existing meanings, which Quine rejects because it assumes meanings exist independently of observable behavior.


It is these "Chinese room" types of sentences, bearing no specific relationship to the sensory inputs of a particular language user, that are encoded in LLMs, by contrast to Quine's last category of sentences, namely the observation sentences, whose meaning is "private", in other words whose meaning reduces to ostensive demonstration and the use of indexicals on a per-language-user basis.
  • Banning AI Altogether
I find the appeals to Wittgenstein as a gold standard of philosophical writing ironic, considering how indispensable AI is for the lay reader who wishes to engage with Wittgenstein's thinking in a historically accurate fashion. This is all thanks to Wittgenstein's apparent inability to articulate himself, and because of a greater irony that the anti-AI brigade of this forum overlooks: Wittgenstein never quoted the philosophers he was targeting or stealing from, leading to great difficulties when it comes to understanding, criticising and appraising the originality of his ideas. (I'm not aware of any idea of Wittgenstein's that wasn't more precisely articulated by an earlier American pragmatist such as Dewey or Peirce, by a contemporary logician such as Russell, Frege or Ramsey, or by a post-positivist such as Quine.) And yet these more articulate philosophers are rarely discussed on this forum; I would argue because precise writing is more technical, and therefore more cognitively demanding, than giving hot takes on aphorisms.

Wittgenstein's standard of philosophical writing wasn't publishable in his own time, at least not by the standards required of analytic philosophy, let alone in ours. So if AI should not be quoted because of source uncertainty, then what is the justification on this forum for allowing people to quote Wittgenstein?
  • Banning AI Altogether
    Let's focus on the actual harms that AI use has so far wrought upon this forum: What are they?
  • Banning AI Altogether
    I think this is all a storm in a teacup. It is obvious etiquette to quote an AI response in the same way that one would quote a remark from a published author, and nobody should object to a quoted AI response that is relevant and useful to the context of the thread.

    Also, for those of us who use AI for studying subjective and controversial philosophical topics, it can be useful to read the AI responses that other people are getting on the same topic, due to the fact that AI responses can be influenced by conversation history and can be biased towards the user's opinion. Community feedback can therefore help people objectively appraise the AI responses they are getting.
  • Banning AI Altogether
One thing to bear in mind about LLMs is that they are fine-tuned by human expert supervision after the internet scraping, tokenization and compression stage, although not all subjects are supervised equally. So it isn't the case, as it was when LLMs initially burst onto the scene, that they are mere statistical auto-completers regressing to the wisdom of crowds. Whilst they are generally reliable when it comes to traditional academic subjects and mainstream knowledge, they can be expected to revert to responses closer to auto-completion in fringe subject areas; which is why human discussion forums remain useful for checking and refining AI-assisted ideas. Notably, although ChatGPT can estimate its own ignorance on a topic, which is a necessary feature for it to know when to consult external sources of information to accurately answer a user query, it never presents a confidence estimate when replying to the user. This lack of transparency, together with its reversion to auto-completion, can be a problem, for example, when relying upon an LLM to learn domain-specific languages that aren't popular, or when relying on LLMs to learn constructed languages such as Ithkuil or Lojban; which is a presently unfortunate state of affairs for those of us who see great potential in LLMs for the purposes of experimental philosophy.
  • Banning AI Altogether
    ChatGPT and Gemini start by mirroring society's default communicative presumption, namely of a public world of shared referents that all competent speakers access during the course of conversation, and so debates invariably involve the AI initially using words in the normal intersubjective mode, leading to the appearance of it defending metaphysical realism, followed by it shifting to using words in the subjective mode when the communicative presumption is questioned, leading to the appearance of the AI retreating to psychological realism or idealism. But all that is actually happening, is that the AI is switching between two grammatical modes of speaking that correspond to two distinct sub-distributions of language use (namely intersubjective communication that purposely omits perspective to produce the illusion of shared-world semantics, versus subjective expression that reduces to perspective).

AI demonstrates that self-reflection isn't needed for a competent performance of philosophical reasoning, because all that is needed to be an outwardly competent philosopher is mastery of the statistics of natural language use, in spite of the fact that the subject of philosophy and the data of natural language use are largely products of self-reflection. So it is ironic that humans can be sufficiently bad at self-reflection that they can benefit from the AI reminding them of the workings of their own language.
  • First vs Third person: Where's the mystery?
    From an external point of view, cognition is private and indirect. From an internal point of view, cognition is public and direct. So Husserl and Descartes can be both semantically correct, provided that we don't mix their postulates and apply them in different contexts.
  • First vs Third person: Where's the mystery?
IMO, Chalmers and Dennett both had a tendency to misconstrue the meaning of "physical" as denoting a metaphysical category distinct from first-personal experience, as opposed to denoting a semantic delineation between third-personal and first-personal meaning.

In the case of Dennett, his misunderstanding is evident when he conjectures that Mary the colour scientist can learn the meaning of red through a purely theoretical understanding. But this argument fails to acknowledge that physical concepts are intersubjectively defined without reference to first-personal perceptual judgements. Hence there are no public semantic rules to build a bridge from physical theory, whose symbols have public, universal meaning, to perceptual judgements that are not public but specific to each language user, as would be required for Mary to learn appearances from theory.

In the case of Chalmers (or perhaps we should say "the early Chalmers"), his misunderstanding is evident in his belief in a hard problem. Chalmers was correct to understand that first-person awareness isn't reducible to physical concepts, but wrong to think of this as a problem. For if physical properties are understood to be definitionally irreducible to first-person experience, as is logically necessary for physical concepts to serve as a universal protocol of communication, then the hard problem isn't a problem but an actually useful, even indispensable, semantic constraint for enabling universal communication.

Semaphore provides a good analogy; obviously there is a difference between using a flag as a poker to stoke one's living-room fire, versus waving the flag in accordance with a convention to signal to neighbours the presence of the fire that they cannot see. We can think of the semantics of theoretical physics as akin to semaphore flag waving, and the semantics of first-person phenomenology as akin to fire stoking. These distinct uses of the same flag (i.e. uses of the same lexicon) are not reducible to each other, and the resulting linguistic activities are incommensurable yet correlated in a non-public way that varies with each language user. This dual usage of language gives rise to a predicate dualism, which advocates of the hard problem mistake for a substance or property dualism.
  • Thoughts on Epistemology
    Your question "how do you know that what you think are defeaters and are progressive evolution really are?" is the right question to ask, because it highlights the difference between thinking one has a defeater and actually having one. JTB+U is built precisely to keep that distinction clear.Sam26

    Isn't understanding the same thing as justification? I'm not sure what the U adds to JTB, given that we assess understanding in terms of justifications.

As for deciding whether a refutation is valid or not, this rests upon the truth of one's auxiliary hypotheses. So unless those can also be tested, one cannot know whether the refutation is valid; this is the staple criticism of Popper's falsificationism, that individual hypotheses are impossible to test in isolation, since their validity stands and falls with the truth of every other hypothesis. So the bridge from practical refutation in everyday life, which often involves the testing of individual hypotheses under the assumption of true auxiliary hypotheses, doesn't withstand skeptical scrutiny and the standards demanded by scientific epistemology; an essentially unattainable standard, relegating JTB to the realm of the impossible, or to the realm of semantics that is epistemically vacuous.
  • Thoughts on Epistemology
    I think this relates to another question. Practices and language clearly evolve over time. What causes them to change the way they do? Presumably, this is how J might relate to T and U.

    In my own work I have drawn a parallel between these hinges and Gödel’s incompleteness theorems,
    just as Gödel showed that no consistent formal system strong enough for arithmetic can prove all the truths it contains or even establish its own consistency from within, Wittgenstein shows that epistemic systems rest on unprovable certainties. Both reveal a structural limit on internal justification. Far from undermining knowledge, these limits are enabling conditions: mathematics requires axioms it cannot justify, and our epistemic practices require hinges that stand fast without proof.
    — Sam26

I am not sure about this comparison, axioms are justified and questioned all the time. If you tried to present a system with arbitrary axioms, or ones that seemed prima facie false, no one is likely to take them seriously. The gold standard is that they seem self-evident (arguably, a sort of justification). There have been intense debates over axioms, which can take place because "justification" is not itself bound by any axiomatized system. After all, what are the axioms for English, German, or Latin? Axioms are assessed by intuition, consequence, coherence, explanatory success, or even aesthetics, etc. Reasons/justifications are given.
    Count Timothy von Icarus

    I think that axioms are a misleading interpretation of Wittgenstein's hinges.

    i) Axioms are typically used to represent truth-apt empirical hypotheses.
    ii) Axioms are stated in advance of proving theorems.
iii) Axioms are detachable and optional parts of a reasoning system.

I suspect that none of i), ii), or iii) is generally true of Wittgenstein's hinges. To think this way would be to construe Wittgenstein as committed to a traditional foundationalist epistemology built upon logical atomism, as naturally embodied by the intended interpretation of an axiomatic system, which most Wittgensteinians think to be a gross misconstrual of his later ideas.

Nevertheless, the later Wittgenstein's epistemological views still come across as immature and lacking in sophistication when compared to the detailed accounts of scientific knowledge and justification by Carnap and Quine. To me, Wittgenstein sometimes comes across as a descriptive Carnapian, in the sense that, like Carnap, Wittgenstein seemed to think (as in OC) that it was useful to delineate the internal questions of truth and justification that make sense from within a particular linguistic framework from the external questions concerning the choice of linguistic framework. But unlike Carnap, I don't think that Wittgenstein saw the internal-external distinction as having prescriptive epistemological value, for essentially the same reasons as Quine; namely, due to rejecting the analytic-synthetic distinction.

If Wittgenstein had fully rejected the logical atomism of the Tractatus, and if he wasn't committed to the picture theory of meaning and the accompanying idea of intentional propositional attitudes to which the picture theory is wedded, and if he wasn't committed to the analytic-synthetic distinction, then presumably Wittgenstein's later epistemological views were closer to Quine's confirmation holism, in which case hinges are merely entrenched but revisable assertions, even if they are fixed for all intents and purposes within specific cases of reasoning.
  • Thoughts on Epistemology
    This is a false dilemma. John's subjective truth will be conditioned by his understanding of what mathematical truth is, which he has learnt through interaction with others who teach him. Unless that has happened John may have a subjective opinion, but it doesn't count as a mathematical opinion.Ludwig V


Yes, the keyword here is interaction; more specifically, John's ongoing interactions with his environment, which maintain a correlation between his conditioning and external truth-makers. The critical importance of ongoing interaction is both overlooked by, and many would argue incompatible with, the traditional epistemological notion of a priori, intentional belief states that we are supposed to believe can make semantic and epistemological contact with truth-makers before interaction. For it isn't feasible that a propositional attitude with respect to a future-contingent proposition can access the truth-maker of the proposition in advance of the actual interactive use of the proposition.

As Wittgenstein might have put it, both the meaning and the truth of a future-contingent proposition are up in the air, because the referential semantics of a future-contingent proposition cannot be decided before the truth of the proposition is evaluated, which critically undermines the traditional epistemological concept of intentional belief states, naively presumed to consist of a teleological mental state holding in mind a possible outcome of the future before it happens.

Hence emphasising interaction rather than beliefs can resolve the dilemma of semantic externalism or trivialism in the same way that Bayesian statistics does: pragmatically, by making it clear that beliefs are not intentional mental states but conventions used for interpreting and controlling behavioural conditioning, in a sense that rejects the traditionally internalist and static epistemological notion of belief states.
  • Thoughts on Epistemology
    John points to the white board, which has the figure 2 written on it. He says, "That is a prime number." We'll call the sentence he uttered S.

    The cause of his use of S is a factor in determining the truth conditions. That cause is not the truth conditions, though. Or if it is, how?
    frank

    Here we must ask if John's understanding of mathematics is relevant to the mathematical truth of his utterance:

    From the perspective of the mathematics community other than John, the answer is clearly no; for whether 2 is a prime number is not decided by John's understanding of prime numbers but by a computable proof by contradiction, written down on paper and simulated on a computer, which bears no necessary relationship to the hidden causal process of John's neuro-psychology, even if the two are correlated due to John being a trained mathematician.

    On the other hand, from the perspective of John, who isn't in a position to distinguish his personal understanding of mathematics from our actual mathematics, the answer is clearly yes. So we have two distinct notions of truth in play: intersubjective mathematical truth, for which the truth-maker is independent of John's judgements whether or not those judgements are correlated with intersubjective mathematical truth, versus what we might call "John's subjective truth", in which the truth-maker is identified with the neuropsychological causes of John's utterances. If John is a well-respected mathematician, then we might be tempted to conflate the two notions of truth, but we shouldn't forget that the two notions (causally determined versus community determined) aren't the same notion of truth.
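    The point that "2 is a prime number" is settled mechanically, independently of any particular speaker's understanding, can be made concrete with a minimal sketch (the helper `is_prime` is my own illustration, not anything from the thread):

```python
# Hypothetical illustration: the truth-maker of John's utterance S is a
# mechanical, checkable procedure, not John's neuro-psychology.

def is_prime(n: int) -> bool:
    """Trial division: n is prime iff n > 1 and no d in 2..n-1 divides it."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, n))

assert is_prime(2)       # S is true regardless of who utters it
assert not is_prime(1)   # and false utterances are settled the same way
```

    Any community member (or computer) running this check arrives at the same verdict, which is what makes the truth intersubjective rather than subjective.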
  • Thoughts on Epistemology
    Do I have to know that X is true in order to use it as the T in a JTB statement?J

    Under the strongest possible interpretation of truth-conditional semantics (the principle of maximal charity), the meaning of your use of a sentence S refers to the actual cause of your use of S; in which case, the answer to your question is vacuously yes, because your utterance of S is necessarily true whenever it has been correctly understood.

    On the other hand, if the community gets to decide the truth-maker of your use of S irrespective of whatever caused you to utter S (the principle of minimal charity), then you cannot know that S is true until after you have used S and received feedback. In which case, the truth of S isn't a quality of your mental state when you used S.
  • Thoughts on Epistemology
    Truth-conditional semantics does not escape the dilemma between the postulation of belief intentionality, causal semantics and trivialism on the one hand, versus the postulation of false beliefs and community-decided truth-makers on the other, but it illustrates how the dichotomy is muddied in actual linguistic practice through a process of biased radical translation.

    On the one hand, a radical translation of a speaker's utterances in terms of truth-conditional semantics, interprets the speaker's utterances as denoting statistical correlations between his mental state and his external world (charity). But on the other hand, the radical translator gets to decide the cases when the speaker's utterances are supposedly "false" (uncharity), in accordance with the translator's personal agenda, as opposed to in terms of the actual causes of the speaker's utterances when he said the "wrong" thing.

    Davidson's proposal is scientifically useful but non-philosophical, and aligns with how the concept of "beliefs" is used practically and non-seriously in AI and machine learning, especially in the case of Bayesian reinforcement learning, when we calibrate a neural network's responses to the external states of the environment and call the resulting neuron activations "beliefs" (which denote our wishes). But Davidson, like machine learning, ducks the philosophical question as to how to rehabilitate epistemology, given that any realist notion of beliefs seems untenable.
  • Thoughts on Epistemology
    I don't quite understand this. Our community ascribes false beliefs to people all the time and that's why they are called "intentional"Ludwig V

    And that is the idea I am attacking. Supposedly, Intentionality refers to "The quality of mental states (e.g. thoughts, beliefs, desires, hopes) which consists in their being directed towards some object or state of affairs." - Google Gemini

    So according to this definition of intentionality, the intentionality of a mental state has nothing to do with the opinions and linguistic biases of a community, and concerns a genuine, real relationship between a believer and an object that his beliefs are directed towards. But if this relationship is a causal relationship between the object of the belief and the mental state of the believer, then how is a false belief possible?

    Notice that we don't attribute false beliefs to a glitchy measurement device - rather we refer to the device as uncalibrated or as not functioning in accordance with its specification. So we don't consider measurement error as an attribute of the state of the measuring device; rather we consider the device as not functioning in accordance with our wishes, in that it is we who choose the "truth-maker" of what we want the device to be measuring. And hence we do not attribute intentionality to the state of the device with respect to our desired truth-maker.

    The situation isn't different with humans as measuring devices. And hence, as with the example of a thermometer, either humans have intentional belief states, in which case their beliefs cannot be false, the object of their beliefs being whatever caused them; or else their beliefs are permitted to be false, in which case the truth-maker of their belief is decided externally by their community.
  • Thoughts on Epistemology
    Thermometers never commit epistemic errors; they can only mislead those who uncritically rely upon them. Likewise, the same can be said of a 'believer's' utterances.

    The dilemma is either

    A. a belief merely refers to the coexistence of a believer's mental state and an external truth-maker, where the external truth-maker is decided by the linguistic community rather than the believer. In which case the intentionality associated with the believer's mental state is irrelevant with respect to the belief that the community ascribes to the believer as a matter of linguistic convention rather than of neurological fact.

    or

    B. Beliefs refer to the actual physical causes of the believer's mental-state - in which case the believer's intentionality is relevant - so much so, that it is epistemically impossible for the believer to have false beliefs. (Trivialism).

    So you either have to sacrifice belief intentionality or you have to accept trivialism. There is no "in-between" alternative IMO. Either way, the naive conception of beliefs as binary truth-apt intentional states is untenable and ought to be eliminated from discourse.
  • AI cannot think
    Don't think of thinking as a solitary activity, as in a circular causal process. Think of thinking as open communication between two or more processes, with each process defining a notion of truth for the other process, leading to semi-autonomous adaptive behaviour.

    E.g. try to visualize a horse without any assistance and draw it on paper. This is your generative psychological process 1. Then automatically notice the inaccuracy of your horse drawing. This is your critical psychological process 2. Then iterate to improve the drawing. This instance of thinking is clearly a circular causal process involving two or more partially-independent psychological actors. Then show the drawing to somebody (Process 3) and ask for feedback and repeat.

    So in general, it is a conceptual error to think of AI systems as closed systems that possess independent thoughts, except as an ideal and ultimately false abstraction. Individual minds, like individual computer programs, are "half-programs": reactive systems waiting for external input, whose behaviour isn't reducible to an individual internal state.
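    The generate-criticise-iterate picture above can be sketched as a toy loop (my own illustration, with sqrt(2) standing in for the "horse drawing"; the generator proposes, the critic supplies the error signal, and only the interaction between them converges):

```python
# Toy sketch of thinking as two interacting processes, not a solitary one.
# Process 1 proposes; Process 2 criticises; iteration does the "thinking".

def generator(guess: float, feedback: float) -> float:
    """Process 1: revise the proposal using the critic's feedback."""
    return guess - feedback

def critic(guess: float, target: float = 2.0) -> float:
    """Process 2: measure how badly the proposal fails guess**2 == target."""
    return (guess * guess - target) / (2 * guess)  # Newton-style error signal

def think(initial: float, rounds: int = 10) -> float:
    """Neither process converges alone; the loop between them does."""
    guess = initial
    for _ in range(rounds):
        guess = generator(guess, critic(guess))
    return guess
```

    Showing the result to a third party and repeating (Process 3) would simply add another critic to the loop; the structure stays open-ended rather than closed.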
  • Idealism in Context
    If mathematics were merely convention, then its success in physics would indeed be a miracle — why should arbitrary symbols line up so exactly with the predictability of nature? And if it were merely empirical, then we could never be sure it applies universally and necessarily...Wayfarer

    Science isn't committed to the reality of alethic modalities (necessity, possibility, probability) in the devout epistemological sense you seem to imply here, for they are merely tools of logic and language - the modalities do not express propositional content unless they are falsifiable, which generally isn't the case.

    A nice case of the “unreasonable effectiveness” is Dirac’s prediction of anti-matter — it literally “fell out of the equations” long before there was any empirical validation of it. That shows mathematics is not just convention or generalisation, but a way of extending knowledge synthetically a priori.Wayfarer

    IMO, that is merely an instance of an inductive argument happening to succeed. A purpose of any theory is to predict the future by appealing to induction -- but there is no evidence of inductive arguments being more right than wrong on average. Indeed, even mathematics expresses that it cannot be unreasonably effective, viz. Wolpert's No Free Lunch theorems of statistical learning theory.

    Humans have a very selective memory when it comes to remembering successes as opposed to failures. Until the conjecture is tested under scrutiny, it can be dismissed.
  • Idealism in Context
    But Kant’s point is that neither account explains why mathematics is both necessary and informative. If it were analytic, it would be tautological; if empirical, it would be contingent. The synthetic a priori is his way of capturing that “in-between” character. It also has bearing on how mathematics is 'unreasonably efficacious in the natural sciences.'Wayfarer

    Or rather, it explains why mathematics is simply efficacious: mathematical conventions are arbitrary and independent of facts, and hence a priori; and yet the mathematical proofs built upon them require labour and resources to compute, which implies that the truth of mathematical theorems is physically contingent and hence synthetic a posteriori. Hence the conjecture of unreasonable effectiveness is not-even-wrong nonsense, due to the impossibility of giving an a priori definition of mathematical truth.
  • Thoughts on Epistemology
    Here is my position:

    1). I cannot know false propositions a priori.
    2). I can have known false propositions a posteriori.

    This is because I cannot distinguish the truth from my beliefs a priori, and yet I do make the distinction in hindsight. My concept of truth is in flux, so there is no contradiction here, even if this position isn't compatible with common grammatical usage of the verb "to know" or "to have known".
  • Evidence of Consciousness Surviving the Body
    A seventh misconception treats negative cases as field-defeaters (“if some reports are wrong, the thesis fails”). The thesis of this chapter is proportionate: it does not depend on unanimity or on universal accuracy. It claims that some anchored cases survive ordinary scrutiny and that these anchors stabilize the larger testimonial field. One counterexample to a weak report does not touch a different case whose particulars were independently confirmed.Sam26

    But you haven't presented any cases that can be expected to survive an ordinary degree of scientific scrutiny.

    A third misconception claims “there are no controls,” implying that without randomized trials, testimony cannot carry weight. Prospective hospital protocols supply a different kind of control: fixed clinical clocks, environmental constraints (taped eyes, sealed rooms), hidden-target or procedure-bound particulars, and independent confirmation. These features limit post-hoc embroidery and allow specific claims to be checked. They do not turn testimony into lab instrumentation, but they do make some reports probative under ordinary public standards.Sam26

    Randomized trials aren't a requirement, but a controlled environment is necessary so as to eliminate the possibility that supposedly unconscious subjects are actually conscious and are physically sensing and cognitively reconstructing their immediate environments by normal sensory means during EEG flat-lining. One such experiment is the Human Consciousness Project, which investigated awareness during resuscitation of cardiac-arrest patients in collaboration with 25 medical centers across the US and Europe. That investigation, among other things, controlled the environment so as to assess the possibility that NDE subjects were sensing information that they couldn't possibly deduce by normal bodily means (remote viewing).

    "The study was to introduce a multi-disciplinary perspective, cerebral monitoring techniques, and innovative tests.[7]. Among the innovative research designs was the placement of images in resuscitation areas. The images were placed on shelves below the ceiling and could only be seen from above. The design was constructed in order to verify the possibility of out-of-body experiences"

    The results were negative, with none of the patients recalling seeing the test information that was situated above their heads:

    " The authors reported that 101 out of 140 patients completed stage 2 interviews. They found that 9 out of 101 cardiac arrest survivors had experiences that could be classified as near-death experiences. 46% could retrieve memories from their cardiac arrest, and the memories could be subdivided into the following categories: fear; animals/plants; bright light; violence/persecution; deja-vu; family; recalling events post-CA. Of these, 2% fulfilled the criteria of the Greyson NDE scale and reported an out-of-body experience with awareness of the resuscitation situation. Of these, 1 person described details related to technical resuscitation equipment. None of the patients reported seeing the test design with upward facing images."

  • Evidence of Consciousness Surviving the Body
    In modern western societies, a testimony that appeals to clairvoyance falls under misrepresentation of evidence, an inevitable outcome under witness cross examination in relation to critical norms of rational enquiry and expert testimony, possibly resulting in accusations of perjury against the witness. I would hazard a guess that the last time an American court accepted 'spectral' evidence was during the Salem witch trials.

    The need for expert testimony is even enshrined in the code of Hammurabi of ancient Mesopotamia; not even the ancients accepted unfettered mass testimony.

    So much for us "naysaying materialists" refusing to accept courtroom standards of evidence (unless we are talking about courtrooms in a backward or corrupt developing country).
  • Evidence of Consciousness Surviving the Body
    I am guessing that if EEGs are flatlining while patients are developing memories associated with NDEs, then this is evidence for sparse neural encoding of memories during sleep, which does not involve the global electrical activity of millions of neurons that is entailed by the denser neural encoding an EEG would detect.

    Which seems ironic, in the sense that Sheldrake proponents seem to think that apparent brain death during memory formation is evidence for radically holistic encoding of memories extending beyond the brain. But when you think about it for more than a split second, the opposite seems far more likely, namely atomistic, symbol-like memories being formed that slip under the EEG radar.
  • Evidence of Consciousness Surviving the Body
    Sam, name one reproducible experiment under controlled laboratory conditions that confirms that NDEs entail either clairvoyance or disembodied cognition.

    Intersubjective reproducibility of stimulus-responses of subjects undergoing NDEs is critical for the intersubjective interpretation of NDE testimonies, for otherwise we merely have a set of cryptic testimonies expressed in the private languages of NDE subjects.
  • Evidence of Consciousness Surviving the Body
    Sure, so the question is whether proponents of physical explanations for "consciousness" and purported anomalous phenomena share that sentiment, in which case everyone is arguing at cross purposes, assuming of course that both sides can agree that the evidence for telepathy and remote viewing is sorely lacking.
  • Evidence of Consciousness Surviving the Body
    Why must it be physical? this assumes from the outset that everything real must be made of particles or fields described by physics. But that is precisely the point in dispute.

    Consider an analogy: in modern physics, atoms aren’t little billiard balls but excitations of fields. Yet fields themselves are puzzling entities—mathematically precise but ontologically unclear. No one thinks an electromagnetic field is a “blob of energy floating around.” It’s a structuring principle that manifests in predictable patterns, even if its “substance” is elusive.
    Wayfarer

    Which is precisely why Physics survives theory change, at least for ontic structural realists - for only the holistic inferential structure of theories is falsifiable and semantically relevant. I think you might be conflating Physics with Physicalism - the misconception that physics has determinate and atomic denotational semantics (i.e. Atomism).

    It is because "Physicality" is intersubjective, structural, and semantically indeterminate with respect to the subjectivities of the users of physical theories, that every possible world can be described "physically".

    Being "physical" isn't a property of the denoted, but refers to the fact that the entity concerned is being intersubjectively denoted, i.e. referred to only in the sense of abstract Lockean primary qualities that are intersubjectively translatable by leaving the Lockean secondary qualities undefined, whereby individual speakers are free to subjectively interpret physics as they see fit (or, as I call it, "The Hard Feature of Physics").
  • Evidence of Consciousness Surviving the Body
    If we agree that one case of NDE was real, then we are dealing with an anomaly that materialism cannot describe. I am wondering how you could explain the NDE experience when there is no brain activity.MoK

    For the record, I don't consider any such case to be real - a flat EEG reading isn't a sufficient measurement for defining brain death. Only quacks seriously entertain such theories. But if such cases were real in some sense of having intersubjective confirmation of anomalous phenomena, then it would at most imply a hole in our current physical theories, resulting in a new physical theory with regards to an extended notion of the body with additional senses, coupled with a new definition of personhood. Ultimately, all of this would amount to reducing our conception of such anomalous phenomena to a new physical normality that would ultimately leave religious followers and believers of the paranormal feeling as dissatisfied as they are presently.

    NDEs cannot in principle deliver the epistemic certainty and psychological security that their enthusiasts want, even if they are assumed to be veridical.
  • Evidence of Consciousness Surviving the Body
    Even if NDEs were veridical, that wouldn't be enough to challenge physicalism or mind-brain equivalence. The same goes for past life regression. At most, only a particular and narrow minded version of physicalism would be refuted. The same existential doubts, anxieties and disputes would eventually resurface exactly as before, with respect to a merely extended conception of the body and the senses, a conception that could even bring new forms of nihilism.
  • Idealism in Context
    That all events in the universe are causally inevitable is the thesis of Determinism. A thesis is an hypothesis, not an ontological commitment. As a thesis, it accepts that it may be proved wrong, in the same way that the equation s=0.5∗g∗t2 may be proved wrong. A thesis does not require a suspension of scepticism, which is why it is a thesis.RussellA

    Actually that's untrue, because without ontological commitment to universal quantification over absolute infinity, one cannot distinguish the hypothesis of determinism from its anti-thesis.

    What a hypothesis means is subject to as much uncertainty as its truth value. Unless one is already committed to the truth of determinism, one isn't in a position to know what the hypothesis of "determinism" refers to.
  • Referential opacity
    Leibniz's Law at the Post Office

    The postal system relies upon referential transparency, namely of knowing an immutable address that is associated with an intended recipient, as opposed to knowing the mutable personal details of the sender and the recipient which are kept hidden from the postal service ("information opacity").

    So here, the information space (that is hidden from the postal service) is comprised of vectors of information, where a vector is a possible list of attributes corresponding to a possible user. This information space is dual to the address space, namely the set of possible postal addresses for possible users.

    The information space is a vector field; the vector field indices are the address space.

    Address information can also be an attribute of information space, but this shouldn't be confused with the address space: the address information that you put on your resume isn't the address used by the postal system. Address information is mutable information that is considered to be an attribute of senders and recipients, whereas a postal address is part of the immutable structure of the postal system.

    What if user moves house?

    If a user moves house, this is represented by an arrow linking 'before' and 'after' vectors in information space (assuming the info is available there). But from the perspective of the postal service, users don't move house; rather, houses change their occupants - because the postal system uses postal addresses to designate rigidly.

    Leibniz's Law

    Assuming that Leibniz's Law holds with respect to a given postal service, then it holds internally in the sense of characterising the postal operations of that given postal system, but it does not hold externally in the sense of surviving a change to the postal service itself.

    The indiscernibility of identicals is a definitional criterion for the meaning of a pair of addresses:

    ∀x ∀y[ x = y → ∀F(Fx ↔ Fy)] (i.e. identical addresses imply identical occupants).

    Compare that to Frege's disastrous Basic Law V(b):

    ϵF = ϵG → ∀x(Fx ≡ Gx)

    Here, the difference is that ϵF and ϵG are extensions, namely vectors in information space rather than addresses. If these vectors are finite then they can be fully observed, meaning that if they are observed to be identical then they must be the same vector, in which case V(b) is applicable. But in the infinite case, the two lists cannot be exhaustively observed, in which case we have at most equality between two incomplete lists, which obviously cannot imply that they denote the same vector, due to the problem of induction.

    (Frege, and many logicians after him, conflated the notion of addresses, which can always designate rigidly by virtue of merely being indexicals devoid of information content, with observation vectors, which cannot rigidly designate the set of concepts that they fall under.)

    The identity of indiscernibles is postally invalid if multiple home ownership is allowed:

    ∀x∀y[∀F(Fx ↔ Fy) → x = y ] (which is true of a vector space, but generally false of a vector field).
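    The postal picture can be sketched as a tiny data structure (the addresses and occupant records are made-up illustrations): addresses are immutable keys that designate rigidly, while occupant attributes are mutable values hidden behind them. "Moving house" mutates the values, never the keys; and with multiple home ownership, identical occupant records can sit at distinct addresses, so the identity of indiscernibles fails.

```python
# Hypothetical sketch: address space = immutable dict keys (rigid designators);
# information space = mutable attribute records (hidden from the postal system).

postal = {
    "12 Elm St": {"name": "Alice", "job": "baker"},
    "7 Oak Ave": {"name": "Bob", "job": "smith"},
}

# "Moving house": from the postal system's view, houses swap occupants;
# the addresses themselves never change.
postal["12 Elm St"], postal["7 Oak Ave"] = postal["7 Oak Ave"], postal["12 Elm St"]

# Indiscernibility of identicals: identical addresses imply identical occupants.
x, y = "12 Elm St", "12 Elm St"
assert x == y and postal[x] == postal[y]

# Identity of indiscernibles fails under multiple home ownership:
# the same occupant record can sit at two distinct addresses.
postal["7 Oak Ave"] = postal["12 Elm St"]
x, y = "12 Elm St", "7 Oak Ave"
assert postal[x] == postal[y] and x != y
```

    The dict keys play the role of information-free indexicals; everything Frege-style about extensions lives in the values, which is why equality of values never licenses identity of keys.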
  • Idealism in Context
    The movement of the stone is determined by the force of gravity.

    It is part of the nature of language that many words are being used as figures of speech rather than literally, such as "determined". Also included are metaphor, simile, metonymy, synecdoche, hyperbole, irony and idiom.
    RussellA

    Yes, that is perfectly reasonable as an informal description of gravity when describing a particular case of motion in the concrete rather than in the abstract, and, as Russell observed, in such cases the concept of causality can be eliminated from the description. But determinism takes the causal "determination" of movement by gravity literally, universally and outside of the context of humans determining outcomes, in a way that requires suspension of Humean skepticism due to the determinist's apparent ontological commitment to universal quantification over generally infinite domains.

    Recall the game-semantic interpretation of the quantifiers, in which the meaning of a universal quantifier refers to a winning strategy for ensuring the truth of the quantified predicate P(x) whichever x is chosen. This interpretation is in line with the pragmatic sense of determination used in the language-game of engineering, where an engineer strategizes against nature to determine a product design that is correlated with generally favourable outcomes but that is never failure-proof. (The engineer's sense of "winning" is neither universal nor guaranteed, unlike the determinist's.)

    If a determinist wants to avoid being charged with ontological commitment to Berkeley's Spirits in another guise, then he certainly cannot appeal to a standard game-semantic interpretation of the quantifiers. But then what other options are available to him? Platonism? (Isn't that really the same as the spirit world?) He has no means of eliminating the quantifiers unless he believes the world to be finite. Perhaps he could argue that he is using "gravity" as a semantically ambiguous rigid designator, but in that case he is merely making determinism true by convention...
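    The game-semantic reading of ∀x P(x) can be sketched over a finite domain, where the quantifier really can be eliminated by exhaustive play (the domains and predicates below are made-up illustrations):

```python
# Toy sketch of the game-semantic reading of a universal quantifier:
# the Verifier wins iff she survives every challenge x the Falsifier picks.
# Over a finite domain this is decidable by exhaustive play; over an
# infinite domain a uniform winning strategy is needed, which is the
# determinist's problem.

def verifier_wins(domain, predicate) -> bool:
    """∀x P(x) holds iff no Falsifier challenge defeats the Verifier."""
    return all(predicate(x) for x in domain)

# Finite domain: the quantifier is eliminated by checking every challenge.
assert verifier_wins([0, 2, 4, 6], lambda x: x % 2 == 0)

# The engineer's weaker sense of "determine": a strategy correlated with
# success but not failure-proof (one challenge defeats it).
assert not verifier_wins([0, 2, 3], lambda x: x % 2 == 0)
```

    The determinist's universal claim corresponds to `verifier_wins` over an infinite (indeed, absolutely infinite) domain, where no exhaustive check terminates, which is exactly where the ontological commitment sneaks in.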
  • Idealism in Context
    Determinism can always survive on a theoretical level, in the sense that an ill-posed problem with more than one possible solution can always be converted into a well-posed problem with exactly one solution merely by adding additional premises.

    However, the ordinary English meaning of "determine" does not refer to a property but to a predicate verb relating an intended course of action to an outcome. Ironically, an absolute empirical interpretation of "intention" is ill-posed, and hence so is the empirical meaning of "determination"; this is why metaphysical definitions and defences of determinism are inherently circular.

    For this reason, I think materialism, i.e. a metaphysical commitment to objective substances, should be distanced from determinism - for if anything, a commitment to determinism looks like a metaphysical commitment to the objective existence of intentional forces of agency (i.e. spirits) that exist above and beyond the physically describable aspects of substances.