Comments

  • Sleeping Beauty Problem


    All I can say is that we aren't agreeing as to the semantics of the problem. Your sample space includes the counterfactual possibility (H, Tuesday), which isn't in the sample space of the experiment as explicitly defined. Your appeal to "if we awoke SB on Tuesday in the event of heads" might be a perfectly rational hypothetical in line with common-sense realism, but that hypothetical event isn't explicit in the problem description. Furthermore, the problem is worded as a philosophical thought experiment from the point of view of SB as a subject who cannot observe that Tuesday occurred on a heads result, nor even know of her previous awakenings, in sharp contrast to an external point of view relative to which her awakenings are distinguishable and for which the existence of Tuesday isn't conditional on the coin landing tails.

    As straw-clutching as this might sound, there are radically minded empiricists who would argue that the existence of "Tuesday" for Sleeping Beauty is contingent upon her being awake. For such radical empiricists the event (H, Tuesday) doesn't merely have zero probability, but is a logical contradiction from SB's perspective.

    Epistemically for SB,

    (h,mon) -> observable, but indiscernible.
    (h,tue) -> unobservable.
    (t,mon) -> observable, but indiscernible.
    (t,tue) -> observable, but indiscernible.

    So we are back to the question as to whether (h,tue) should be allowed in the sample space. This is ultimately what our dispute boils down to.
  • Sleeping Beauty Problem
    The reason I keep asking for specific answers to specific questions, is that I find that nobody addresses "my sample space." Even though I keep repeating it. They change it, as you did here, to include the parts I am very intentionally trying to eliminate.
    JeffJo

    I think you misunderstand me. I am simply interpreting the thrust of your position in terms of an extended sample space. This isn't misconstruing your position but articulating it in terms of Bayesian probabilities. This step is methodological and not about smuggling in new premises, except those that you need to state your intuitive arguments, which do constitute additional but reasonable premises.


    There are two, not three, random elements. They are COIN and DAY. WAKE and SLEEP are not random elements, they are the consequences of certain combinations, the consequences that SB can observe.
    JeffJo

    Look at it this way: It certainly is the case that according to the Bayesian interpretation of probabilities, one can speak of a joint probability distribution over (Coin State, Day State, Sleep State), regardless of one's position on the topic. But in the case of the frequentist halfer, the sleep-state can be marginalised out and in effect ignored, due to their insistence upon only using the coin information and rejecting counterfactual outcomes that go over and above the stated information.

    There are two sampling opportunities during the experiment, not two paths. The random experiment, as it is seen by SB's "inside" the experiment, is just one sample. It is not one day on a fixed path as seen by someone not going through the experiment, but one day only. Due to amnesia, each sample is not related, in any way SB can use, to any other.
    JeffJo

    You have to be careful here, because you are in danger of arguing for the halfers' position on their behalf. Counterfactual intuitions, which you are appealing to below, are in effect a form of path analysis, even if you don't see it that way.

    Each of the four combinations of COIN+DAY is equally likely (this is the only application of the PoI), in the prior (this means "before observation") probability distribution. Since there are four combinations, each has a prior ("before observation") probability of 1/4.

    In the popular problem, SB's observation, when she is awake, is that this sample could be H+Mon, T+Mon, or T+Tue; but not H+Tue. She knows this because she is awake. One specific question I ask, is what happens if we replace SLEEP with DISNEYWORLD. Because the point that I feel derails halfers is the sleep.
    JeffJo

    But the Sleeping Beauty Problem per se does not assume that Sleeping Beauty exists on Tuesday if the coin lands heads, because it does not include an outcome that measures that possibility. Hence you need an additional variable if you wish to make your counterfactual argument that SB would continue to exist on Tuesday in the event the coin lands heads. Otherwise you cannot formalise your argument.

    Just to clarify, I'm not mistaking you for a naive thirder, as I did initially, when I just assumed that you were blindly assigning a naive prior over three possible outcomes. I think your counterfactual arguments are reasonable, and I verified that they numerically check out; but they do require the introduction of a third variable to the sample space in order to express your counterfactual intuition, which I called "sleep state" (and which you could equally call "the time-independent state of SB").
  • Sleeping Beauty Problem


    Well, I've come to the conclusion that your answer is in some sense philosophically superior to the result insisted upon by halfers like myself, even though your answer is technically false. In a nutshell, I think that although you have lost the battle, you have won the war.

    As I understand it, your proposal is essentially the principle of indifference applied to a sample space that isn't the same as the stated assumptions of the SB problem, namely your sample space is based on the triple

    {Coin,Day,Wakefulness}

    upon which you assign the distribution Pr(Heads,Monday,Awake) = Pr(Tails,Monday,Awake) = Pr(Heads,Tuesday,Asleep) = Pr(Tails,Tuesday,Awake) = 1/4.

    But the important thing isn't your appeal to indifference "on a single bell" as you put it, but the different sample space you used and the viewpoint it provides. (Any measure can be assumed on your sample space, provided that it satisfies the marginal distribution P(Heads) = 1/2 and assigns coherent conditional credences - your particular choice based on PoI is easily seen to be coherent, for the reasons you point out.)

    By contrast, the probability space for the classical SB problem is that of a single coin flip, C = {H,T}, namely (C, {∅, {H}, {T}, {H,T}}, P) where P(C = H) = 0.5. From this premise it isn't technically possible to conclude anything except for the halfer position, namely

    P(C = H | Monday Or Tuesday) = P(C = H) = 1/2.

    for reasons already explained many times (and which can be proved more rigorously by pushing the measure P forward onto the different sample space of day outcomes, then disintegrating the resulting measure and taking the inverse to obtain the conditional probability P(C = H | Monday or Tuesday), but this is incidental).

    But what makes your argument incorrect for the SB problem à la lettre, namely the use of a non-permitted sample space that is based on commonsense counterfactual intuition that goes beyond the explicitly stated premises of the SB problem, is also what makes your argument interesting and persuasive, for your argument for the thirder's position is based on coherent counterfactual intuitions that are commonsensically valid and important to point out, even though they are inapplicable with respect to a strict interpretation of the SB problem as explicitly stated.

    Essentially, if by "probability" we mean a coherence value based only on the frequential probability of the coin landing heads as explicitly assumed by the SB problem, and we do not make any other assumptions no matter how intuitively plausible, then the answer can only be a half, because the sample space is that of one coin flip. But if we interpret "probability" more liberally to mean a credence that includes commonsense counterfactual intuitions, then the answer can be different from a half, provided that we define "probability" more precisely to permit this and extend our premises of the SB problem to include counterfactual premises that allow your chosen sample space. But then the answer isn't necessarily equal to a third, for that particular case requires the use of the Principle of Indifference applied to bells, which a non-halfer might object to, even though he agrees to use your sample space.
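
    To make that concrete, here is a minimal sketch (Python, my own labels rather than anything you wrote) of your triple sample space with the PoI assignment, checking that it respects the marginal P(Heads) = 1/2 and yields 1/3 when conditioning on being awake:

        # Minimal sketch (my own labels): the extended sample space {Coin, Day, Wakefulness}
        # with the PoI assignment quoted above.
        from fractions import Fraction

        P = {
            ("Heads", "Monday",  "Awake"):  Fraction(1, 4),
            ("Tails", "Monday",  "Awake"):  Fraction(1, 4),
            ("Heads", "Tuesday", "Asleep"): Fraction(1, 4),
            ("Tails", "Tuesday", "Awake"):  Fraction(1, 4),
        }

        def prob(pred):
            # Probability of the event picked out by the predicate.
            return sum(p for outcome, p in P.items() if pred(outcome))

        # The marginal constraint: P(Heads) = 1/2.
        assert prob(lambda o: o[0] == "Heads") == Fraction(1, 2)

        # Conditioning on being awake gives the thirder value for Heads.
        p_awake = prob(lambda o: o[2] == "Awake")
        print(prob(lambda o: o[0] == "Heads" and o[2] == "Awake") / p_awake)  # 1/3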
  • Sleeping Beauty Problem
    So don't use that as a model, use the well-established methods of conditional probability. Ring a bell at noon of both days. An awake SB hears it, but a sleeping SB is unaffected in any way.

    The prior probabilities of a specific bell-ring being on any member of {H+Mon, T+Mon, H+Tue, T+Tue} is 1/4. If SB hears it, H+Tue is eliminated. Conditional probability says:

    Pr(H+Mon|Bell) = Pr(H+Mon)/[Pr(H+Mon)+Pr(T+Mon)+Pr(T+Tue)] = 1/3.
    JeffJo

    Your bell is just a label for the event {Monday OR Tuesday} which is independent of the coin flip, and so you are merely repeating the same appeal to indifference as before.

    What I was pointing out is that this application of the principle of indifference isn't consistently applied to SB.

    Let's start by assuming the credence that you insist upon:

    P(Monday, Heads) = P(Monday, Tails) = P(Tuesday, Tails) = 1/3

    To verify that you are happy with this credence assignment, you need to check the hypothetical credences that this credence implies. In the case of P(Monday | Tails) we get

    P(Monday | Tails) = P(Monday, Tails) / P(Tails) = (1/3) / (1/2) = 2/3.

    Are you happy with this implied conditional credence? If SB is told that the outcome is tails when she wakes up, then should she believe that it is twice as likely to be Monday as Tuesday, given her knowledge of tails?
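
    Spelling out the arithmetic (a minimal Python sketch of the calculation above, nothing more): combining the 1/3 joint credences with the aleatoric P(Tails) = 1/2 gives the implied conditional in question:

        # Minimal sketch of the arithmetic above: the 1/3 joint credences
        # combined with the aleatoric P(Tails) = 1/2.
        from fractions import Fraction

        p_joint = {("Monday", "Heads"):  Fraction(1, 3),
                   ("Monday", "Tails"):  Fraction(1, 3),
                   ("Tuesday", "Tails"): Fraction(1, 3)}

        p_tails = Fraction(1, 2)  # the fair-coin premise

        # The implied conditional credence questioned above.
        print(p_joint[("Monday", "Tails")] / p_tails)  # 2/3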
  • Is all belief irrational?
    I agree if I understand your position correctly as being deflationary. I would simply put it by saying that an interrogated subject isn't in the epistemically exalted position to distinguish his 'beliefs' from what he 'knows' about the world.

    To find out what somebody believes, don't ask them for a self report of the form "I believe that X is true with n% confidence", but rather, ask them what they know about the world, because what a person is prepared to assert about the world is a more accurate measure of what their actual "beliefs" are, and can be expected to be at odds with what they say about themselves when introspecting unreliably.

    The next question should concern the extent to which beliefs exist internally within a person in the sense of a mental state, versus externally of the person as behavioural hypotheses that society projects onto the person. (Since we have no reason to assume that people understand themselves).
  • Sleeping Beauty Problem
    In any way that SB can assess her credence, that does not reference her position in the map, the answer is 1/3.
    Using four volunteers, where each sleeps though a different combination in {H&Mon, T&Mon, H&Tue, T&Tue}? On any day, the credence assigned to each of the three awake volunteers cannot be different. and they must add up to 1. The credence is 1/3.
    Use the original "awake all N days, or awake on on one random day in the set of N" problem? N+1 are waking combinations, only one corresponds to "Heads." The credence is 1/(N+1).
    Change the "sleep" day to a non-interview day? It is trivial that the answer is 1/3.

    I'm sure there are others. The point is that the "halfer run-based" argument cannot provide a consistent result. It only works if you somehow pretend SB can utilize information, about which "run" she is in, that she does not and cannot posses.
    JeffJo

    No, the Halfer position doesn't consider SB to have any information that she could utilize when awakened, due to the fact that SB's knowledge that it is either Monday or Tuesday doesn't contribute new information about the coin, which she only observes after the experiment has concluded.

    Also, your reasoning demonstrates why we shouldn't conflate indifference with equal credence probability values. Yes, an awakened SB doesn't know which of the possible worlds she inhabits and is indifferent with regard to which world she is in, and rightly so. No, this doesn't imply that she should assign equal probability values to each possible world: for example, we have already shown that if an awakened SB assigns equal prior probabilities to every possible world that she might inhabit, then she must assign unequal credences for it being Monday versus Tuesday when conditioning on a tails outcome.

    To recap, if P(Monday) = 2/3 (as assumed by thirders on the basis of indifference with respect to the three possible awakenings), and if P(Tails | Monday) = 1/2 = P(Tails) by either indifference or aleatoric probability, then

    P(Monday | Tails) = P(Tails | Monday) x P(Monday) / P(Tails) = (1/2 x 2/3) / (1/2) = 2/3.

    So let's assume that SB is awakened on Monday or Tuesday and is told, and only told, that the result was Tails. According to the last result, if SB initially assigns P(Monday) = 2/3 on the basis of the principle of indifference as per thirders, then she must infer, having learned of the tails result, that Monday is twice as likely as Tuesday, in spite of Mondays and Tuesdays occurring equally on a tails result.

    As this demonstrates, uniform distributions have biased implications. So if SB insists on expressing her state of indifference over possible worlds in the language of probability, she should only say that any probability distribution over {(Monday, Heads), (Monday, Tails), (Tuesday, Tails)} is compatible with her state of indifference, subject to the constraint that the unconditioned aleatoric probability of the coin is fair.

    However, if she really must insist on choosing a particular probability distribution to represent her state of indifference, then she can still be a halfer by using the principle of indifference to assert P(Monday | Tails) = P(Tuesday | Tails), and then inferring the unconditioned credence that it is Monday to be P(Monday) = 1/2, which coheres with the halfer position.
  • Sleeping Beauty Problem
    SB's answer: "Because the protocol ties one lamp to Heads-runs and two lamps to Tails-runs, among the awakenings that actually occur across repeats, the lamp I'm under now will have turned out to be a T-lamp about two times out of three. So my credence that the current coin toss result is Tails is 2/3." (A biased coin would change these proportions; no indifference is assumed.)

    The coin's fairness fixes the branches and the long-run frequencies they generate. The protocol fixes how many stopping points each branch carries. Beauty's "what are the odds?" becomes precise only when she specifies what it is that she is counting.

    Note on indifference: The Thirder isn't cutting the pie into thirds because the three interview situations feel the same. It's the other way around: SB is indifferent because she already knows their long-run frequencies are equal. The protocol plus the fair coin guarantee that, among the awakenings that actually occur, the two T-awakenings together occur twice as often as the single H-awakening, and within each coin outcome the Monday vs Tuesday T-awakenings occur equally often. So her equal treatment of the three interview cases is licensed by known frequencies, not assumed by a principle of indifference. Change the coin bias or the schedule and her "indifference" (and her credence) would change accordingly.
    Pierre-Normand

    Thirders who argue their position on the basis of frequential probabilities are mistaking the subject waking up twice in a single trial (in the case of Tails) for two independent and identically distributed repeated trials, but the subject waking up twice in a single trial constitutes a single outcome, not two outcomes. Frequentist Thirders are therefore overcounting.

    There is only one aleatorically acceptable probability for P(Head | Monday OR Tuesday) (which is the question of the SB problem):

    P(Head | Monday OR Tuesday) =
    P(Monday OR Tuesday | Head) x P(Head) / P(Monday Or Tuesday)

    where

    P(Head) = 0.5 by assumption.
    P(Monday Or Tuesday) = 1 by assumption.

    P(Monday OR Tuesday | Head) = P(Monday | Head) + P(Tuesday | Head) = 1 + 0 = 1.

    P(Head | Monday OR Tuesday) = 1 x 0.5 / 1 = 0.5.
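
    A minimal simulation (Python, my own framing) of the two ways of counting: per run, where one coin flip is one outcome, and per awakening, which counts a tails run twice and is the overcounting objected to above:

        # Minimal sketch: per-run vs per-awakening counting under the stated
        # protocol (heads -> one awakening, tails -> two awakenings).
        import random

        random.seed(0)
        runs = 100_000
        heads_runs = heads_awakenings = total_awakenings = 0

        for _ in range(runs):
            heads = random.random() < 0.5
            total_awakenings += 1 if heads else 2
            if heads:
                heads_runs += 1
                heads_awakenings += 1

        print(heads_runs / runs)                    # ~0.5 (one flip = one outcome)
        print(heads_awakenings / total_awakenings)  # ~0.33 (tails runs counted twice)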
  • Sleeping Beauty Problem
    Then what would you say it is? If you say Q, then your credence in Tails must be 1-Q, and you have a paradox.
    JeffJo

    If you insist that credence must be expressed as a number Q, then in general I would refuse to assign a credence for that reason - cases like SB, in which credences are artificially constrained to be single probability values, don't merely result in harmless paradoxes but in logical contradictions (Dutch books) with respect to causal premises. Likewise, I am generally more likely to bet on a binary outcome when I know for sure that the aleatoric probability is 50/50, compared to a binary outcome for which I don't know the aleatoric probability.

    In order to avoid unintended inferences, the purpose for assigning credences needs to be known. For example, decisions are often made by taking posterior probability ratios of the form P(Hypothesis A | Observation O) / P(Hypothesis B | Observation O). For this purpose, assigning the prior probability credence P(Hypothesis A) = 0.5 is actually a way of saying that credences don't matter for the purpose of decision making using the ratio, since in that case the credences cancel out in the posterior probability ratio to produce the likelihood ratio P(Observation O | Hypothesis A) / P(Observation O | Hypothesis B), which only appeals to causal (frequential) information. This is also the position of Likelihoodism, a view aligned with classical frequential statistics, that prior probabilities shouldn't play a part in decision making unless they are statistically derived from earlier experiments.
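
    A minimal numerical sketch of the cancellation (the likelihood values are made up purely for illustration):

        # With equal prior credences the posterior ratio reduces to the likelihood ratio.
        from fractions import Fraction

        prior_A = prior_B = Fraction(1, 2)               # "credences don't matter"
        lik_A, lik_B = Fraction(8, 10), Fraction(2, 10)  # hypothetical P(O|A), P(O|B)

        posterior_ratio = (lik_A * prior_A) / (lik_B * prior_B)
        likelihood_ratio = lik_A / lik_B
        print(posterior_ratio, likelihood_ratio)  # both equal 4: the priors cancel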

    An acceptable alternative to assigning likelihoods, which often cannot be estimated, as in single-experiment situations, is simply to list the possible outcomes without quantifying them. Sometimes there is enough causal information to at least order possibilities in terms of their relative likelihood, even if quantification of their likelihoods isn't possible or meaningful.
  • Sleeping Beauty Problem
    The SB problem is a classic illustration of confusing what probability is about. It is not a property of the system (the coin in the SB problem), it is a property of what is known about the system.
    JeffJo

    Then you are referring to subjective probability, which is controversial, for reasons illustrated by the SB problem. Aleatory probability, by contrast, is physical probability and directly or indirectly refers to frequencies of occurrence.


    That is, your credence in an outcome is not identically the prior probability that it will occur. Example:

    I have a coin that I have determined, through extensive experimentation, is biased 60%:40% toward one result. But I am not going to tell you what result is favored.
    I just flipped this coin. What is your credence that the result was Heads?
    JeffJo

    It is correct to point out that credence does not traditionally refer to physical probability but to subjective probability. It is my strong opinion, however, that credence ought to refer to physical probability. For example, my answer to your question is to say that my credence is exactly what you've just told me and nothing more, that is, my credence is 60/40 in favour of heads or 60/40 in favour of tails.

    Even though you know that the probability-of-occurrence is either 60% or 40%, your credence in Heads should be 50%. You have no justification to say that Heads is the favored result, or that Tails is. So your credence is 50%. To justify, say, Tails being more likely than Heads, you would need to justify Tails being more likely to be the favored result. And you can't.
    JeffJo

    I definitely would not say that my credence is 50/50, because any statistic computed with that credence would not be reflective of the physical information that you have provided.
  • Sleeping Beauty Problem
    I don't see any questionable appeal to the principle of indifference being made in the standard Thirder arguments (though JeffJo may be making a redundant appeal to it, which isn't needed for his argument to go through, in my view.) Sleeping Beauty isn't ignorant about frequency information since the relevant information can be straightforwardly deduced from the experiment's protocol. SB doesn't infer that her current awakening state is a T-awakening with probability 1/3 because she doesn't know which one of three indistinguishable states it is that she currently is experiencing (two of which are T-awakenings). That would indeed be invalid. She rather infers it because she knows the relative long run frequency of such awakenings to the 2/3 by design.
    Pierre-Normand

    But the SB experiment is only assumed to be performed once; SB isn't assumed to have undergone repeated trials of the Sleeping Beauty experiment, let alone to have memories of the previous trials, but only to have been awoken once or twice in a single experiment, for which no frequency information is available, except for common knowledge of coin flips. So SB is in fact appealing to a principle of indifference, as per the standard explanation of the thirder position, e.g. on Wikipedia.

    In any case, a frequentist interpretation of P(Coin is Tails) = 0.5 isn't compatible with a frequentist interpretation of P(awoken on Tuesday) = 1/3.

    For the sake of argument, suppose P(Coin is Tails) = 0.5, that this is a frequential probability, and that inductive reasoning based on this is valid.

    Now if P(awoken on Tuesday) = 1/3, then it must also be the case that

    P(awoken on Tuesday | Coin is Tails) x P(Coin is Tails) = 1/3, as typically assumed by thirders at the outset. But this in turn implies that

    P(awoken on Tuesday | Coin is Tails) = (1/3)/0.5 = 2/3.

    Certainly this isn't a frequential probability, unless SB, having undergone repeated trials, notices that she is in fact woken more times on a Tuesday than a Monday in cases of Tails, in contradiction to the declared experimental protocol. Furthermore, this value doesn't even look reasonable as a credence, because merely knowing a priori that the outcome of the coin is tails shouldn't imply a higher credence of being awoken on Tuesday rather than Monday.

    Credences are a means of expressing the possession of knowledge without expressing what that knowledge is. To assign consistent credences requires testing every implied credence for possible inconsistencies. Thirders fail this test. Furthermore, credences should not be assigned on the basis of ignorance; a rational SB would not believe that every possible (day, coin-outcome) pair has equal prior probability; rather, she would only assume what is logically necessary - namely that one of the pairs will obtain, with either unknown or undefined probability.
  • Sleeping Beauty Problem
    What the SB problem amounts to is a reductio ad absurdum against the principle of indifference being epistemically normative, a principle that in any case is epistemically inadmissible, psychologically implausible, and technically unnecessary when applying probability theory; a rational person refrains from assigning probabilities when ignorant about frequency information; accepting equal odds is not a representation of ignorance (e.g. Bertrand's Paradox).

    It is commonly but falsely argued by thirders that halfers are susceptible to a Dutch-book argument, by virtue of losing twice as much money if the coin lands tails as they gain if it lands heads (since the Dutch book is defined as an awoken SB placing and losing two bets in the case of tails, each costing her $1, one on Monday and one on Tuesday, versus her placing and winning only one bet, rewarding her with $1 on Monday, if the coin lands heads). But this Dutch-book argument is invalidated by the fact that it is equivalent to SB being a priori willing to win $1 in the case of heads and lose $2 in the case of tails, i.e. SB knowingly accepting a Dutch book with an expected loss of 0.5 x 1 - 0.5 x 2 = -$0.5 before the experiment begins, given her prior knowledge that P(H) = 0.5. So the Dutch-book argument is invalid and is actually an argument against the thirder position.
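
    A minimal simulation (Python, my own stake bookkeeping) of the bet described above, with heads paying +$1 and tails costing -$2 per run:

        # Minimal sketch: per run, heads wins the single Monday bet (+$1),
        # tails loses both the Monday and Tuesday bets (-$2).
        import random

        random.seed(0)
        runs = 100_000
        total = sum(1 if random.random() < 0.5 else -2 for _ in range(runs))
        print(total / runs)  # ~ -0.5, the expected loss computed above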

    The (frankly unnecessary) lesson of SB is that meaningful probabilities express causal assumptions, and not feelings of indifference about outcomes.
  • "Ought" and "Is" Are Not Two Types of Propositions
    "The 'ought' you mentioned, as in 'it ought to rain,' is a prediction. In contrast, the 'must' in a normative conclusion is a requirement for action—a behavioral standard that everyone ought to abide by."panwei

    Your definition of 'must' is circular here. Circular definitions are characteristic of speech acts ("Tie your shoelaces! because I said so!") and also of analytic propositions ("Bachelor" means "unmarried man").

    In such contexts, it is right to point out that their use is not necessarily inferential, because they might represent instructions, wishes, promises, postulates, conventions, orders, etc., rather than assumptions or facts. But the English meaning of "ought" is used both as a speech act and as an inference, depending on the context, which reflects the fact that we often cannot know whether a sentence is meant as a speech act or as a hypothesis, especially when considering the fact that speech acts are often issued on the basis of assumptions.

    This also reflects a fundamental asymmetry of information between speaker and listener: when a speaker uses "ought", they might intend it as a speech act or as a prediction, but the listener cannot be certain as to what the speaker meant, even after asking the speaker to clarify himself, because we are back to circular definitions.
  • "Ought" and "Is" Are Not Two Types of Propositions
    Are 'oughts' inferences, and are 'ises' reducible to 'oughts'?

    In ordinary language, "ought" is also used to signify predictive confidence, as in "it ought to rain"; so "oughts" aren't necessarily used in relation to utility maximisation. Furthermore, we understand what an agent is trying to achieve in terms of our theory of the agent's mind, which is partly based on our observations of their past behaviour. So an inference of what an agent 'ought' to do on the basis of what 'is' can perhaps be understood as an application of Humean induction. And our description of what 'is' tends to invoke teleological concepts, e.g. if we describe a ball as being a snooker ball it is because we believe that it ought to behave in the normal way that we expect of snooker balls from past experience.

    So if descriptions of what is the case are necessarily inferential, and if our understanding of moral obligations are in terms of our theory of minds which in turn are inferred from behavioral observations, then perhaps there is an argument for saying that only oughts exist, even if we are never sure which ones.
  • "Ought" and "Is" Are Not Two Types of Propositions
    In Decision Theory, States and Actions are generally treated as logically orthogonal concepts; an 'is' refers to the current state of an agent, and an 'ought' refers to the possible action that has the highest predicted utility in relation to the agent's 'is'. This treatment allows causal knowledge of the world to be separated from the agent's subjective preferences.

    Paradoxically, this can imply that the psychological distinction between states versus action utilities is less clear, considering the fact that agents don't generally have the luxury of perfect epistemic knowledge of their worlds prior to taking an action (e.g. as required to solve the Bellman equation).

    Also, an action is only as good as the state that it leads to - rewards are related to (state, action) pairs, so utility values can be thought of as equivalence classes of states quotiented with respect to action utilities. This is practically important, since agents don't generally have the memory capacity to store perfect world knowledge even if it were available. Agents tend to visit and focus their learning on the state -> action -> (reward, state) chains that correspond to the highest reward, and then learn compressed representations of these visited states in terms of a small number of features that efficiently predict utility. E.g. chess engines estimate the utility of a board position by representing the board in terms of a manageably small number of spatial relations between pieces, especially in relation to the kings. So the representational distinction between states and action reward values in the mind of an agent is muddied.
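
    As a toy illustration of that bookkeeping (a hypothetical two-state problem of my own, not meant to model chess): action values are computed from the states the actions lead to, via the Bellman update, which is the sense in which an action is only as good as its resulting state:

        # Toy sketch (hypothetical MDP): Q(s, a) is the immediate reward plus the
        # discounted value of the successor state, obtained by iterating the
        # Bellman update.
        GAMMA = 0.9
        # transitions[state][action] = (reward, next_state)
        transitions = {
            "s0": {"stay": (0.0, "s0"), "go": (1.0, "s1")},
            "s1": {"stay": (2.0, "s1"), "go": (0.0, "s0")},
        }

        V = {s: 0.0 for s in transitions}
        for _ in range(200):  # value iteration to (near) convergence
            V = {s: max(r + GAMMA * V[s2] for r, s2 in acts.values())
                 for s, acts in transitions.items()}

        Q = {(s, a): r + GAMMA * V[s2]
             for s, acts in transitions.items() for a, (r, s2) in acts.items()}
        print(V)  # state values
        print(Q)  # action values derived from successor states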
  • How LLM-based chatbots work: their minds and cognition
    In order to fully dislodge the Cartesian picture, that Searle's internalist/introspective account of intentionally contentful mental states (i.e. states that have intrinsic intentionality) indeed seem not to have fully relinquished, an account of first person authority must be provided that is consistent with Wittgenstein's (and Ryle and Davidson's) primary reliance on public criteria.
    Pierre-Normand

    Quine provided the most useful conceptual framework for scientists, technologists and philosophers alike, since LLMs can be naturally interpreted as physically instantiating Quine's web of belief, namely an associative memory of most public knowledge. The nature and knowledge of LLMs can then be appraised in line with Quine's classification of sentence types.

    (A short paraphrase of Quinean sentence types, as returned by Google Gemini)

    Theoretical sentences: Describe things not directly observable, such as "Atoms are the basic building blocks of matter". They require complex background knowledge and cannot be verified by a simple, direct observation.

    Observation categoricals: Sentences that involve a relationship between two events, often derived from theory and hypothesis together, such as "When the sun comes up, the birds sing".

    Occasion sentences: Sentences that are sometimes true and sometimes false, like "It is raining". An observation sentence can also be an occasion sentence, as "It is cold" is true on some occasions and false on others.

    "Myth of the museum" sentences: Traditional view of language where sentences are like labels for pre-existing meanings, which Quine rejects because it assumes meanings exist independently of observable behavior.


    It is these "Chinese room" types of sentences, which bear no specific relationship to the sensory inputs of a particular language user, that are encoded in LLMs, by contrast to Quine's last category of sentences, namely the observation sentences, whose meaning is "private", in other words whose meaning reduces to ostensive demonstration and the use of indexicals on a per-language-user basis.
  • Banning AI Altogether
    I find the appeals to Wittgenstein as a gold standard of philosophical writing ironic, considering how indispensable AI is for the lay reader who wishes to engage with Wittgenstein's thinking in a historically accurate fashion. This is all thanks to Wittgenstein's apparent inability to articulate himself, and because of a greater irony that the anti-AI brigade of this forum overlooks: Wittgenstein never quoted the philosophers he was targeting or stealing from, leading to great difficulties when it comes to understanding, criticising and appraising the originality of his ideas. (I'm not aware of any idea of Wittgenstein's that wasn't more precisely articulated by an earlier American pragmatist such as Dewey or Peirce, or by a contemporary logician such as Russell or Frege or Ramsey, or by a post-positivist such as Quine.) And yet these more articulate philosophers are rarely discussed on this forum - I would argue because precise writing is more technical and therefore more cognitively demanding than giving hot takes on aphorisms.

    Wittgenstein's standard of philosophical writing wasn't publishable in his own time, at least not by the standards required by analytic philosophy, let alone in our time. So if AI should not be quoted because of source uncertainty, then what is the justification on this forum for allowing people to quote Wittgenstein?
  • Banning AI Altogether
    Let's focus on the actual harms that AI use has so far wrought upon this forum: What are they?
  • Banning AI Altogether
    I think this is all a storm in a teacup. It is obvious etiquette to quote an AI response in the same way that one would quote a remark from a published author, and nobody should object to a quoted AI response that is relevant and useful to the context of the thread.

    Also, for those of us who use AI for studying subjective and controversial philosophical topics, it can be useful to read the AI responses that other people are getting on the same topic, due to the fact that AI responses can be influenced by conversation history and can be biased towards the user's opinion. Community feedback can therefore help people objectively appraise the AI responses they are getting.
  • Banning AI Altogether
    One thing to bear in mind about LLMs is that they are fine-tuned by human expert supervision after the internet scraping, tokenization and compression stage, although not all subjects are supervised equally. And so it isn't the case, as it was when LLMs initially burst on the scene, that they are mere statistical auto-completers regressing to the wisdom of crowds. Whilst they are generally reliable when it comes to traditional academic subjects and mainstream knowledge, they can be expected to revert to responses closer to auto-completion in fringe subject areas; which is why human discussion forums remain useful - for checking and refining AI-assisted ideas. Notably, although ChatGPT can estimate its own ignorance on a topic, which is a necessary feature for it to know when to consult external sources of information to accurately answer a user query, it never presents a confidence estimate when replying to the user. This lack of transparency, together with its reversion to auto-completion, can be a problem, for example, when relying upon an LLM to learn domain-specific languages that aren't popular, or when relying on LLMs to learn synthetic natural languages such as Ithkuil or Lojban; which is a presently unfortunate state of affairs for those of us who see great potential in LLMs for the purposes of experimental philosophy.
  • Banning AI Altogether
    ChatGPT and Gemini start by mirroring society's default communicative presumption, namely of a public world of shared referents that all competent speakers access during the course of conversation, and so debates invariably involve the AI initially using words in the normal intersubjective mode, leading to the appearance of it defending metaphysical realism, followed by it shifting to using words in the subjective mode when the communicative presumption is questioned, leading to the appearance of the AI retreating to psychological realism or idealism. But all that is actually happening, is that the AI is switching between two grammatical modes of speaking that correspond to two distinct sub-distributions of language use (namely intersubjective communication that purposely omits perspective to produce the illusion of shared-world semantics, versus subjective expression that reduces to perspective).

    AI demonstrates that self-reflection isn't needed for a competent performance of philosophical reasoning, because all that is needed to be an outwardly competent philosopher is mastery of the statistics of natural language use, in spite of the fact that the subject of philosophy and the data of natural language use are largely products of self-reflection. So it is ironic that humans can be sufficiently bad at self-reflection that they can benefit from the AI reminding them of the workings of their own language.
  • First vs Third person: Where's the mystery?
    From an external point of view, cognition is private and indirect. From an internal point of view, cognition is public and direct. So Husserl and Descartes can both be semantically correct, provided that we don't mix their postulates but apply them in different contexts.
  • First vs Third person: Where's the mystery?
    IMO, Chalmers and Dennett both had a tendency to misconstrue the meaning of "physical" as denoting a metaphysical category distinct from first-personal experience, as opposed to denoting a semantic delineation between third-personal versus first-personal meaning.

    In the case of Dennett, his misunderstanding is evident when he conjectures that Mary the colour scientist can learn the meaning of red through a purely theoretical understanding. But this argument fails to acknowledge that physical concepts are intersubjectively defined without reference to first-personal perceptual judgements. Hence there are no public semantic rules to build a bridge from physical theory, whose symbols have public universal meaning, to perceptual judgements that are not public but specific to each language user, as would be required for Mary to learn appearances from theory.

    In the case of Chalmers (or perhaps we should say "the early Chalmers"), his misunderstanding is evident in his belief in a hard problem. Chalmers was correct to understand that first-person awareness isn't reducible to physical concepts, but wrong to think of this as a problem. For if physical properties are understood to be definitionally irreducible to first-person experience, as is logically necessary for physical concepts to serve as a universal protocol of communication, then the hard problem isn't a problem but an actually useful, even indispensable, semantic constraint for enabling universal communication.

    Semaphore provides a good analogy; obviously there is a difference between using a flag as a poker to stoke one's living room fire, versus waving the flag in accordance with a convention to signal to neighbours the presence of the fire that they cannot see. We can think of the semantics of theoretical physics as akin to semaphore flag waving, and the semantics of first-person phenomenology as akin to fire stoking. These distinct uses of the same flag (i.e. uses of the same lexicon) are not reducible to each other, and the resulting linguistic activities are incommensurable yet correlated in a non-public way that varies with each language user. This dual usage of language gives rise to predicate dualism, which advocates of a hard problem mistake for a substance or property dualism.
  • Thoughts on Epistemology
    Your question "how do you know that what you think are defeaters and are progressive evolution really are?" is the right question to ask, because it highlights the difference between thinking one has a defeater and actually having one. JTB+U is built precisely to keep that distinction clear.Sam26

    Isn't understanding the same thing as justification? I'm not sure what the U adds to JTB, given that we assess understanding in terms of justifications.

    As for deciding whether a refutation is valid or not, this rests upon the truth of one's auxiliary hypotheses. So unless those can also be tested, one cannot know whether the refutation is valid, which is the staple criticism of Popper's falsificationism - that individual hypotheses are impossible to test, since their validity stands and falls with the truth of every other hypothesis. So the bridge from practical refutation in everyday life, which often involves the testing of individual hypotheses under the assumption of true auxiliary hypotheses, doesn't withstand skeptical scrutiny and the standards demanded by scientific epistemology - an essentially unattainable standard, relegating JTB to the realm of the impossible, or to the realm of semantics that is epistemically vacuous.
  • Thoughts on Epistemology
    I think this relates to another question. Practices and language clearly evolve over time. What causes them to change the way they do? Presumably, this is how J might relate to T and U.

    In my own work I have drawn a parallel between these hinges and Gödel’s incompleteness theorems,
    just as Gödel showed that no consistent formal system strong enough for arithmetic can prove all the truths it contains or even establish its own consistency from within, Wittgenstein shows that epistemic systems rest on unprovable certainties. Both reveal a structural limit on internal justification. Far from undermining knowledge, these limits are enabling conditions: mathematics requires axioms it cannot justify, and our epistemic practices require hinges that stand fast without proof.
    — Sam26

    I am not sure about this comparison, axioms are justified and questioned all the time. If you tried to present a system with arbitrary axioms, or ones that seemed prima facie false, no one is likely to take them seriously. The gold standard is that they seem self-evident (arguably, a sort of justification). There have been intense debates over axioms, which can take place because "justification" is not itself bound by any axiomatized system. Afterall, what are the axioms for English, German, or Latin? Axioms are assessed by intuition, consequence, coherence, explanatory success, or even aesthetics, etc. Reasons/justifications are given.
    Count Timothy von Icarus

    I think that axioms are a misleading interpretation of Wittgenstein's hinges.

    i) Axioms are typically used to represent truth-apt empirical hypotheses.
    ii) Axioms are stated in advance of proving theorems.
    iii) Axioms are detachable and optional parts of a reasoning system.

    I suspect that none of i), ii), or iii) is generally true of Wittgenstein's hinges. To think this way would be to construe Wittgenstein as being committed to traditional foundationalist epistemology built upon logical atomism, as naturally embodied by the intended interpretation of an axiomatic system, which most Wittgensteinians think to be a gross misconstrual of his later ideas.

    Nevertheless, the later Wittgenstein's epistemological views still come across as immature and lacking in sophistication when compared to the detailed accounts of scientific knowledge and justification by Carnap and Quine. To me, Wittgenstein sometimes comes across as a descriptive Carnapian, in the sense that, like Carnap, Wittgenstein seemed to think (as in OC) that it was useful to delineate the internal questions of truth and justification that make sense from within a particular linguistic framework from the external questions concerning the choice of linguistic framework. But unlike Carnap, I don't think that Wittgenstein saw the internal-external distinction as having prescriptive epistemological value, for essentially the same reasons as Quine; namely due to rejecting the analytic-synthetic distinction.

    If Wittgenstein had fully rejected the logical atomism of the Tractatus, and if he wasn't committed to the picture theory of meaning and the accompanying idea of intentional propositional attitudes that the picture theory of meaning is wedded to, and if he wasn't committed to the analytic-synthetic distinction, then presumably Wittgenstein's later epistemological views were closer to Quine's confirmation holism, in which case hinges are merely entrenched but revisable assertions, even if they are fixed for all intents and purposes within specific cases of reasoning.
  • Thoughts on Epistemology
    This is a false dilemma. John's subjective truth will be conditioned by his understanding of what mathematical truth is, which he has learnt through interaction with others who teach him. Unless that has happened John may have a subjective opinion, but it doesn't count as a mathematical opinion.
    Ludwig V


    Yes, the keyword here is interaction - more specifically, John's ongoing interactions with his environment that maintain a correlation between his conditioning and external truth-makers. The critical importance of ongoing interaction is both overlooked by, and many would argue incompatible with, the traditional epistemological notion of a priori, intentional belief states that we are supposed to believe can make semantic and epistemological contact with truth-makers before interaction. For it isn't feasible that a propositional attitude with respect to a future-contingent proposition can access the truth-maker of the proposition in advance of the actual interactive use of the proposition.

    As Wittgenstein might have put it, both the meaning and truth of a future-contingent proposition are up in the air, because the referential semantics of a future-contingent proposition cannot be decided before the truth of the proposition is evaluated, which critically undermines the traditional epistemological concept of intentional belief states that are naively presumed to consist of a teleological mental state holding in mind a possible outcome of the future before it happens.

    Hence emphasising interaction rather than beliefs can resolve the dilemma of semantic externalism or trivialism in the same way that Bayesian statistics does - pragmatically, through making it clear that beliefs are not intentional mental states, but conventions used for interpreting and controlling behavioural conditioning, in a sense that rejects the traditionally internalist and static epistemological notion of belief states.
  • Thoughts on Epistemology
    John points to the white board, which has the figure 2 written on it. He says, "That is a prime number." We'll call the sentence he uttered S.

    The cause of his use of S is a factor in determining the truth conditions. That cause is not the truth conditions, though. Or if it is, how?
    frank

    Here we must ask if John's understanding of mathematics is relevant to the mathematical truth of his utterance:

    From the perspective of the mathematics community other than John, the answer is clearly no; for whether 2 is a prime number is not decided by John's understanding of prime numbers but by a computable proof by contradiction written down on paper and simulated on a computer, which bears no necessary relationship to the hidden causal process of John's neuropsychology, even if the two are correlated due to John being a trained mathematician.
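
    For illustration only (a trivial sketch, not the particular proof alluded to above), the machine-checkable sense of "2 is prime":

        # Trivial sketch: a machine check of "2 is prime" by trial division, whose
        # verdict does not depend on anyone's understanding of prime numbers.
        def is_prime(n: int) -> bool:
            if n < 2:
                return False
            return all(n % d != 0 for d in range(2, n))

        assert is_prime(2)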

    On the other hand, from the perspective of John, who isn't in a position to distinguish his personal understanding of mathematics from our actual mathematics, the answer is clearly yes. So we have two distinct notions of truth in play: intersubjective mathematical truth, for which the truth-maker is independent of John's judgements whether or not his judgements are correlated with it, versus what we might call "John's subjective truth", in which the truth-maker is identified with the neuropsychological causes of John's utterances. If John is a well-respected mathematician, then we might be tempted to conflate the two, but we shouldn't forget that they (truth as causally determined versus as community determined) aren't the same notion of truth.
  • Thoughts on Epistemology
    Do I have to know that X is true in order to use it as the T in a JTB statement?
    J

    Under the strongest possible interpretation of truth-conditional semantics (the principle of maximal charity), the meaning of your use of a sentence S refers to the actual cause of your use of S; in which case the answer to your question is vacuously yes, because your utterance of S is necessarily true whenever it has been correctly understood.

    On the other hand, if the community gets to decide the truth-maker of your use of S irrespective of whatever caused you to utter S (the principle of minimal charity), then you cannot know that S is true until after you have used S and received feedback. In which case, the truth of S isn't a quality of your mental state when you used S.
  • Thoughts on Epistemology
    Truth-conditional semantics does not escape the dilemma between the postulation of belief intentionality, causal semantics and trivialism on the one hand, versus the postulation of false beliefs and community-decided truth-makers on the other, but it illustrates how the dichotomy is muddied in actual linguistic practice through a process of biased radical translation.

    On the one hand, a radical translation of a speaker's utterances in terms of truth-conditional semantics, interprets the speaker's utterances as denoting statistical correlations between his mental state and his external world (charity). But on the other hand, the radical translator gets to decide the cases when the speaker's utterances are supposedly "false" (uncharity), in accordance with the translator's personal agenda, as opposed to in terms of the actual causes of the speaker's utterances when he said the "wrong" thing.

    Davidson's proposal is scientifically useful but non-philosophical, and aligns with how the concept of "beliefs" is used practically and non-seriously in AI and machine learning, especially in the case of Bayesian reinforcement learning, where we calibrate a neural network's responses to the external states of the environment and call the resulting neuron activations "beliefs" (which denote our wishes). But Davidson, like machine learning, ducks the philosophical question as to how to rehabilitate epistemology, given that any realist notion of beliefs seems untenable.
  • Thoughts on Epistemology
    I don't quite understand this. Our community ascribes false beliefs to people all the time and that's why they are called "intentional"
    Ludwig V

    And that is the idea I am attacking. Supposedly, Intentionality refers to "The quality of mental states (e.g. thoughts, beliefs, desires, hopes) which consists in their being directed towards some object or state of affairs." - Google Gemini

    So according to this definition of intentionality, the intentionality of a mental state has nothing to do with the opinions and linguistic biases of a community, and concerns a genuine, real relationship between a believer and an object that his beliefs are directed towards. But if this relationship is a causal relationship between the object of the belief and the mental state of the believer, then how is a false belief possible?

    Notice that we don't attribute false beliefs to a glitchy measurement device - rather we refer to the device as uncalibrated or as not functioning in accordance with its specification. And so we don't consider measurement error as an attribute of the state of the measuring device; rather we consider the device as not functioning in accordance with our wishes, in that it is we who choose the "truthmaker" of what we want the device to be measuring. And hence we do not attribute intentionality to the state of the device with respect to our desired truthmaker.

    The situation isn't different with humans as measuring devices. And hence, as with the example of a thermometer, either humans have intentional belief states, in which case their beliefs cannot be false, due to the object of their beliefs being whatever caused their beliefs, or else their beliefs are permitted to be false, in which case the truthmaker of their belief is decided externally by their community.
  • Thoughts on Epistemology
    Thermometers never commit epistemic errors; they can only mislead those who uncritically rely upon them. Likewise, the same can be said of a 'believer's' utterances.

    The dilemma is either

    A. a belief merely refers to the coexistence of a believer's mental state and an external truth-maker, where the external truth-maker is decided by the linguistic community rather than the believer. In which case the intentionality associated with the believer's mental state is irrelevant with respect to the belief that the community ascribes to the believer as a matter of linguistic convention rather than of neurological fact.

    or

    B. Beliefs refer to the actual physical causes of the believer's mental state - in which case the believer's intentionality is relevant - so much so that it is epistemically impossible for the believer to have false beliefs. (Trivialism).

    So you either have to sacrifice belief intentionality or you have to accept trivialism. There is no "in-between" alternative IMO. Either way, the naive conception of beliefs as binary truth-apt intentional states is untenable and ought to be eliminated from discourse.
  • AI cannot think
    Don't think of thinking as a solitary activity, as in a circular causal process. Think of thinking as open communication between two or more processes, with each process defining a notion of truth for the other process, leading to semi-autonomous adaptive behaviour.

    E.g. try to visualize a horse without any assistance and draw it on paper. This is your generative psychological process 1. Then automatically notice the inaccuracy of your horse drawing. This is your critical psychological process 2. Then iterate to improve the drawing. This instance of thinking is clearly a circular causal process involving two or more partially-independent psychological actors. Then show the drawing to somebody (Process 3) and ask for feedback and repeat.

    So in general, it is a conceptual error to think of AI systems as closed systems that possess independent thoughts, except as an ideal and ultimately false abstraction. Individual minds, like individual computer programs, are "half-programs" - reactive systems waiting for external input, whose behaviour isn't reducible to an individual internal state.
  • Idealism in Context
    If mathematics were merely convention, then its success in physics would indeed be a miracle — why should arbitrary symbols line up so exactly with the predictability of nature? And if it were merely empirical, then we could never be sure it applies universally and necessarily...
    Wayfarer

    Science isn't committed to the reality of alethic modalities (necessity, possibility, probability) in the devout epistemological sense you seem to imply here, for they are merely tools of logic and language - the modalities do not express propositional content unless they are falsifiable, which generally isn't the case.

    A nice case of the “unreasonable effectiveness” is Dirac’s prediction of anti-matter — it literally “fell out of the equations” long before there was any empirical validation of it. That shows mathematics is not just convention or generalisation, but a way of extending knowledge synthetically a priori.
    Wayfarer

    IMO, that is merely an instance of an inductive argument happening to succeed. A purpose of any theory is to predict the future by appealing to induction -- but there is no evidence of inductive arguments being more right than wrong on average. Indeed, even mathematics tells us that it cannot be unreasonably effective, cf. Wolpert's No Free Lunch theorems of statistical learning theory.

    Humans have a very selective memory when it comes to remembering successes as opposed to failures. Until the conjecture is tested under scrutiny, it can be dismissed.
  • Idealism in Context
    But Kant’s point is that neither account explains why mathematics is both necessary and informative. If it were analytic, it would be tautological; if empirical, it would be contingent. The synthetic a priori is his way of capturing that “in-between” character. It also has bearing on how mathematics is 'unreasonably efficacious in the natural sciences.'
    Wayfarer

    Or rather, it explains why mathematics is simply efficacious - mathematical conventions are arbitrary and independent of facts and hence a priori, and yet the mathematical proofs built upon them require labour and resources to compute, which implies that the truth of mathematical theorems is physically contingent and hence synthetic a posteriori. Hence the conjecture of unreasonable effectiveness is not-even-wrong nonsense, due to the impossibility of giving an a priori definition of mathematical truth.
  • Thoughts on Epistemology
    Here is my position:

    1). I cannot know false propositions a priori.
    2). I can have known false propositions a posteriori.

    This is because I cannot distinguish the truth from my beliefs a priori, and yet I do make the distinction in hindsight. My concept of truth is in flux, so there is no contradiction here, even if this position isn't compatible with common grammatical usage of the verb "to know" or "to have known".
  • Evidence of Consciousness Surviving the Body
    A seventh misconception treats negative cases as field-defeaters (“if some reports are wrong, the thesis fails”). The thesis of this chapter is proportionate: it does not depend on unanimity or on universal accuracy. It claims that some anchored cases survive ordinary scrutiny and that these anchors stabilize the larger testimonial field. One counterexample to a weak report does not touch a different case whose particulars were independently confirmed.Sam26

    But you haven't presented any cases that can be expected to survive an ordinary degree of scientific scrutiny.

    A third misconception claims “there are no controls,” implying that without randomized trials, testimony cannot carry weight. Prospective hospital protocols supply a different kind of control: fixed clinical clocks, environmental constraints (taped eyes, sealed rooms), hidden-target or procedure-bound particulars, and independent confirmation. These features limit post-hoc embroidery and allow specific claims to be checked. They do not turn testimony into lab instrumentation, but they do make some reports probative under ordinary public standards.Sam26

    Randomized trials aren't a requirement, but a controlled environment is necessary so as to eliminate the possibility that supposedly unconscious subjects are actually conscious, physically sensing and cognitively reconstructing their immediate environments by normal sensory means during EEG flat-lining. One such experiment is the Human Consciousness Project, which investigated awareness during resuscitation of cardiac arrest patients in collaboration with 25 medical centers across the US and Europe. That investigation, among other things, controlled the environment so as to assess the possibility that NDE subjects were sensing information that they couldn't possibly have acquired by normal bodily means (remote viewing).

    "The study was to introduce a multi-disciplinary perspective, cerebral monitoring techniques, and innovative tests.[7]. Among the innovative research designs was the placement of images in resuscitation areas. The images were placed on shelves below the ceiling and could only be seen from above. The design was constructed in order to verify the possibility of out-of-body experiences"

    The results were negative, with none of the patients recalling seeing the test information that was situated above their heads:

    " The authors reported that 101 out of 140 patients completed stage 2 interviews. They found that 9 out of 101 cardiac arrest survivors had experiences that could be classified as near-death experiences. 46% could retrieve memories from their cardiac arrest, and the memories could be subdivided into the following categories: fear; animals/plants; bright light; violence/persecution; deja-vu; family; recalling events post-CA. Of these, 2% fulfilled the criteria of the Greyson NDE scale and reported an out-of-body experience with awareness of the resuscitation situation. Of these, 1 person described details related to technical resuscitation equipment. None of the patients reported seeing the test design with upward facing images."

  • Evidence of Consciousness Surviving the Body
    In modern western societies, testimony that appeals to clairvoyance falls under misrepresentation of evidence - an inevitable outcome under witness cross-examination in relation to the critical norms of rational enquiry and expert testimony, possibly resulting in accusations of perjury against the witness. I would hazard a guess that the last time an American court accepted 'spectral' evidence was during the Salem witch trials.

    The need for expert testimony is even enshrined in the Code of Hammurabi of ancient Mesopotamia; not even the ancients accepted unfettered mass testimony.

    So much for us "naysaying materialists" refusing to accept courtroom standards of evidence (unless we are talking about courtrooms in a backward or corrupt developing country).
  • Evidence of Consciousness Surviving the Body
    I am guessing that if EEGs are flat-lining while patients are forming memories associated with NDEs, this is evidence for sparse neural encoding of memories during sleep - encoding that does not involve the global electrical activity of millions of neurons entailed by the denser neural encoding that an EEG would detect.

    Which seems ironic, in the sense that Sheldrake proponents seem to think that apparent brain death during memory formation is evidence for radically holistic encoding of memories extending beyond the brain. But when you think about it for more than a split second, the opposite seems far more likely, namely atomistic, symbol-like memories being formed that slip under the EEG radar.
  • Evidence of Consciousness Surviving the Body
    Sam, name one reproducible experiment under controlled laboratory conditions that confirms that NDEs entail either clairvoyance or disembodied cognition.

    Intersubjective reproducibility of stimulus-responses of subjects undergoing NDEs is critical for the intersubjective interpretation of NDE testimonies, for otherwise we merely have a set of cryptic testimonies expressed in the private languages of NDE subjects.
  • Evidence of Consciousness Surviving the Body
    Sure, so the question is whether proponents of physical explanations for "consciousness" and purported anomalous phenomena share that sentiment, in which case everyone is arguing at cross purposes, assuming of course that both sides can agree that the evidence for telepathy and remote viewing is sorely lacking.
  • Evidence of Consciousness Surviving the Body
    Why must it be physical? this assumes from the outset that everything real must be made of particles or fields described by physics. But that is precisely the point in dispute.

    Consider an analogy: in modern physics, atoms aren’t little billiard balls but excitations of fields. Yet fields themselves are puzzling entities—mathematically precise but ontologically unclear. No one thinks an electromagnetic field is a “blob of energy floating around.” It’s a structuring principle that manifests in predictable patterns, even if its “substance” is elusive.Wayfarer

    Which is precisely why Physics survives theory change, at least for ontic structural realists - for only the holistic inferential structure of theories is falsifiable and semantically relevant. I think you might be conflating Physics with Physicalism - the misconception that physics has determinate and atomic denotational semantics (i.e. Atomism).

    It is because "Physicality" is intersubjective, structural, and semantically indeterminate with respect to the subjectivities of the users of physical theories, that every possible world can be described "physically".

    Being "physical" isn't a property of the denoted, but refers to the fact that the entity concerned is being intersubjectively denoted, i.e referred to only in the sense of abstract Lockean primary qualities that are intersubjectively translatable by leaving the Lockean secondary qualities undefined, whereby individual speakers are free to subjectively interpret physics as they see fit (or as I call it, "The Hard Feature of Physics").