• Sleeping Beauty Problem
    Right. And this is why they get the wrong answer, and have to come up with contradictory explanations for the probabilities of the days. See "double halfers."JeffJo

    Let me just note, for now, that I think the double halfer reasoning is faulty because it wrongly subsumes the Sleeping Beauty problem under (or assimilates it to) a different problem in which there would be two separate coin tosses. Under that scenario, a first coin would be tossed and if it lands Heads, then SB would be awakened Monday only. If it lands Tails, then a second coin would be tossed and SB would still be awakened Monday only if it lands Heads and be awakened Tuesday only if it lands Tails. Such a scenario would support a straightforward Halfer interpretation of SB's rational credence, but it's different from the original one since it makes Monday-awakenings and Tuesday-awakenings mutually exclusive events, whereas, in the original problem, SB could experience both successively, though not at the same time. The different awakening generation rules yield different credences. (I haven't read Mikaël Cozic's paper, where the double-halfer solution is introduced, though.)
  • Sleeping Beauty Problem
    I understand the 1/3rd logic, but it simply doesn't apply here: the third flip, given the first two were heads (less likely than one tail and a head, but still very likely), is also unaffected by the other flips.ProtagoranSocratist

    There is no third flip. The coin is only tossed once. When it lands Tails, Sleeping Beauty is awakened twice and when it lands Heads, she is awakened once. She also is administered an amnesia-inducing drug after each awakening so that she is unable to infer anything about the number of awakenings she may be experiencing from her memory, or lack thereof, of a previous awakening episode. It might be a good idea either to reread the OP carefully or to read the Wikipedia article on the problem, especially the description of the canonical form of the problem in the second section, titled "The problem".

    (For the record, my own "pragmatist" solution is an instance of what the Wikipedia article, in its current form, dubs the "Ambiguous-question position", although I think the formulation of this position in the article remains imprecise.)
  • Banning AI Altogether
    This is useful information. I had it in my mind that it didn't use the spaces, so I started using spaces to distinguish myself. I guess I'll go back to spaceless em dashes.Jamal

    I used to make heavy use of em dashes before ChatGPT came out and people began to identify them as a mark of AI generated text. So, I stopped using them for a while, but I'm beginning to use them again since there are cases where parentheses just don't feel right for demarcating parenthetical clauses that you don't want to reduce the emphasis on, and comma pairs don't do the job either.
  • Banning AI Altogether
    I would think handing your half-formed prose to a bot for it to improve it is plagiarism, regardless of the number of words changed or inserted. It's a different thing from you deliberately searching for a synonym. No?bongo fury

    Maybe plagiarism isn't quite the right term, but I'm happy to grant you the point. In the discussion about the new TPF rule regarding ChatGPT and sourcing that took place a few months ago, I had made a related point regarding the unpacking and ownership of ideas.
  • Banning AI Altogether
    Getting the thesaurus to suggest whole phrases and sentences is obviously plagiarism.bongo fury

    I would never dare use a phrase that I first read in a thesaurus, myself. I'd be much too worried that the author of the thesaurus might sue me for copyright infringement.
  • Banning AI Altogether
    I'm unsure in what way the OP proposal is meant to strengthen the already existing prohibition on the use of AI. Maybe the OP is concerned that this prohibition isn't sufficiently enforced in some cases. If someone has an AI write their responses for them, or rewrite them, that's already prohibited. I think one is allowed to use them as spell/grammar checkers. I've already argued myself about the downsides of using them for more substantive writing assistance (e.g. rewording or rephrasing what one intends to post in a way that could alter the meaning in ways not intended by the poster and/or not reflective of their own understanding). But it may be difficult to draw the line between simple language correction and substantive rewording. If a user is suspected of abusing such AI usage, I suppose moderators could bring it up with them and/or deal with it with a warning.

    One might also use AI for research or for bouncing ideas around before posting. Such usages seem unobjectionable to me and, in any case, prohibiting them would be difficult to enforce. Lastly, AI currently has a huge societal impact. Surely, discussing AI capabilities, flaws and impacts (including its dangers), as well as the significance this technology has for the philosophy of mind and of language (among other things), is important, and illustrating those topics with properly advertised examples of AI outputs should be allowed.
  • Sleeping Beauty Problem
    Then try this schedule:
    . M T W H F S
    1 A E E E E E
    2 A A E E E E
    3 A A A E E E
    4 A A A A E E
    5 A A A A A E
    6 A A A A A A

    Here, A is "awake and interview."

    If E is "Extended Sleep," the Halfer logic says Pr(d|A)=1/6 for every possible roll, but I'm not sure what Pr(Y|A) is. Halfers aren't very clear on that.
    JeffJo

    Halfers don't condition on the proposition "I am experiencing an awakening". They contend that SB's being awakened several times, rather than once, in the same experimental run (after one single coin toss or die throw) has no bearing on her rational credence regarding the result of this toss/throw.

    But if E is anything where SB is awoken but not interviewed, then the straightforward Bayesian updating procedure you agreed to says Pr(d|A)=d/21, and if Y is an index for the day, Pr(Y|A)=Y/21.

    My issue is that, if A is what SB sees, these two cannot be different.

    Yes, I agree with the cogency of this Thirder analysis. Halfers, however, interpret SB's credence, as expressed by the phrase "the probability that the coin landed Tails", as the expression of her expectation that the current experimental run, in which she is now awakened (and in which she may have been, or will be, awakened another time), is equally likely to be a T-run or an H-run, which also makes sense if she doesn't care how many times she may be awakened and/or interviewed in each individual run. Her credence tracks frequencies of runs rather than (as in Thirder interpretations of the problem) frequencies of awakening episodes.
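    The Thirder figure Pr(d|A)=d/21 can be checked against long-run frequencies. Here is a minimal simulation sketch (Python, with an arbitrary trial count), assuming the schedule above where row d has d "awake and interview" days:

```python
import random

random.seed(0)
interviews = {d: 0 for d in range(1, 7)}  # interview-days tallied per die roll d
total = 0

for _ in range(100_000):
    d = random.randint(1, 6)   # the die selects row d of the schedule
    interviews[d] += d         # row d has d "awake and interview" (A) days
    total += d

# Conditioning on "I am being interviewed", Pr(d | A) approaches d/21
for d in range(1, 7):
    print(d, round(interviews[d] / total, 3), "vs", round(d / 21, 3))
```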
  • Sleeping Beauty Problem
    Thank you for that. But you ignored the third question:

    Does it matter if E is "Extended sleep"? That is, the same as Tuesday&Heads in the popular version?

    "I don't see how it bears on the original problem where the new evidence being appealed to for purposes of Bayesian updating isn't straightforwardly given"
    — Pierre-Normand

    Then you don't want to see it as straightforward. Tuesday still exists if the coin lands Heads. It is still a single day, with a distinct activity, in the experiment. Just like the others in what you just called straightforward.
    JeffJo

    Oh yes, good point. I had overlooked this question. Indeed, in that case your variation bears more directly on the original SB thought experiment. One issue, though, is that if E is just another activity like the other ones, then SB should not know upon awakening on that day that her scheduled activity is E, just as, in the original problem, when SB wakes up on Tuesday she isn't informed that she is experiencing a Tuesday-awakening. So, you haven't quite addressed the issue of the indistinguishability of her awakening episodes.
  • Sleeping Beauty Problem
    I use "single day" because each day is an independent outcome to SB.JeffJo

    I had misunderstood your original post, having read it obliquely. I had thought you meant for the participants to experience, over the duration of one single day, all six activities in the table row selected by a die throw, and to be put to sleep (with amnesia) after each activity. In that case, their credence (on the occasion of any particular awakening/activity) in any given die throw result would be updated using the non-uniform representation of each activity in the different rows. This would have been analogous to the reasoning Thirders make in the original Sleeping Beauty problem. But the variation that you actually propose, where only one activity is experienced on any given day, yields a very straightforward Bayesian updating procedure that both Halfers and Thirders will agree on. I don't see how it bears on the original problem where the new evidence being appealed to for purposes of Bayesian updating isn't straightforwardly given—where, that is, all the potential awakening episodes are subjectively indistinguishable from Sleeping Beauty's peculiar epistemic perspective.
  • Sleeping Beauty Problem
    This, I think, shows the fallacy. You're equivocating, or at least begging the question. It's not that there is an increased proclivity to awaken in this scenario but that waking up in this scenario is more frequent.

    In any normal situation an increased frequency is often explained by an increased proclivity, but it does not then follow that they are the same or that the latter always explains the former – and this is no normal situation; it is explicitly set up in such a way that the frequency of us waking up Sleeping Beauty does not mirror the probability of the coin toss (or die roll).
    Michael

    I’m with you on the distinction. "Proclivity" and "frequency" aren’t the same thing. The only point I’m making is simple: in my shiny-penny story, a causal rule makes certain observations show up more often, and Bayes lets us use that fact.

    In the shiny-penny case, fair pennies have a 1/2 chance to land Tails, but Tails pennies are twice as likely to be noticed. So among the pennies I actually notice, about 2/3 will be Tails. When I notice this penny, updating to (2/3) for Tails isn’t smuggling in a mysterious propensity; it’s just combining:

    1) the base chance of Tails (1/2), and
    2) the noticing rates (Tails noticed twice as often as Heads).

    Those two ingredients, or proclivities, generate the observed 2:1 mix in the pool of "noticed" cases, and that’s exactly what the posterior tracks. No amnesia needed; if you were really in that situation, saying "My credence is 2/3 on Tails for the penny I’m looking at" would feel perfectly natural.
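    Those two proclivities can be combined in a short simulation sketch (the absolute noticing rates below are made up; only their 2:1 ratio matters for the posterior):

```python
import random

random.seed(1)
NOTICE_HEADS = 0.3   # hypothetical rate at which a Heads penny gets noticed
NOTICE_TAILS = 0.6   # Tails pennies are noticed twice as often

noticed = tails_noticed = 0
for _ in range(200_000):
    tails = random.random() < 0.5                      # fair penny: P(Tails) = 1/2
    if random.random() < (NOTICE_TAILS if tails else NOTICE_HEADS):
        noticed += 1
        tails_noticed += tails

# Among the pennies actually noticed, about 2/3 landed Tails
print(round(tails_noticed / noticed, 3))
```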

    If you are allowed to place 6 bets if the die lands on a 6 but only 1 if it doesn't then it is both the case that winning bets are more frequently bets that the die landed on a 6 and the case that the die is most likely to not land on a 6.

    Right, and that’s the clean way to separate the two perspectives:

    1) Per run: most runs are 'non-six', so the per-run credence is P(6)=1/6 (the Halfer number).
    2) Per awakening/observation: a 'six-run' spawns six observation-cases, a 'non-six' run spawns one. So among the observation-cases, 'six' shows up in a 6/5 ratio, giving P('six'|Awake)=6/11 (the Thirder number).

    Once you say which thing you’re scoring, runs or awakenings, both beliefs lead to the same betting strategy and the same expected value under any given payout scheme. Different grains of analysis, same rational behavior.
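    The two grains of analysis can be read off the same simulated die runs; a sketch (arbitrary trial count):

```python
import random

random.seed(2)
RUNS = 100_000

six_runs = six_cases = cases = 0
for _ in range(RUNS):
    d = random.randint(1, 6)
    n = 6 if d == 6 else 1     # a six-run spawns six observation-cases, else one
    cases += n
    if d == 6:
        six_runs += 1
        six_cases += n

print(round(six_runs / RUNS, 3))     # per run: approaches 1/6
print(round(six_cases / cases, 3))   # per observation-case: approaches 6/11
```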
  • Sleeping Beauty Problem
    I think your comment sidestepped the issue I was raising (or at least misunderstood it, unless I'm misunderstanding you), but this reference to Bayesian probability will make it clearer.

    [...]

    it cannot be that both Halfers and Thirders are right. One may be "right" in isolation, but if used in the context of this paradox they are equivocating, and so are wrong in the context of this paradox.
    Michael

    I agree with your Bayesian formulation, except that we're more used to following Elga's convention and predicating two awakenings on Tails, such that it's P(T|Awake) that is 2/3 on the Thirder interpretation of this credence.

    To be clear about the events being talked about, there is indeed a unique event that is the same topic for discussion for both Halfers and Thirders: namely, the coin toss. However, even after the definition of this unique event has been agreed upon, there remains an ambiguity in the definition of the credence that SB expresses with the phrase "the probability that the coin landed Tails." That's because her credence C is conceptually tied to her expectation that this event will be repeated with frequency C, in the long run, upon her repeatedly being placed in the exact same epistemic situation. Thirders assert that the relevant epistemic situation consists in experiencing a singular awakening episode (which is either a T-awakening or an H-awakening) and Halfers assert that it consists in experiencing a singular experimental run (which comprises two awakenings when it is a T-run). So, there are three "events" at issue: the coin toss, which occurs before the experiment, the awakenings, and the runs.

    Since it's one's subjective assessment of the probability of the unique event (either H or T) being realized that is at issue when establishing one's credence, one must consider the range of epistemic situations that are, in the relevant respects, indistinguishable from the present one but that one can reasonably expect to find oneself in, in order to establish this credence. The Thirders insist that the relevant situations are the indistinguishable awakening episodes (generated in unequal numbers as a result of the coin toss) while the Halfers insist that they are the experimental runs (generated in equal numbers as a result of this toss). I've argued that both stances yield sensible expressions of SB's credence, with different meanings, and that the choice of either may be guided by pragmatic considerations regarding the usefulness of tracking relative frequencies of awakening types or of experimental-run types for various purposes.
  • Sleeping Beauty Problem
    Yes, so consider the previous argument:

    P1. If I keep my bet and the die didn't land on a 6 then I will win £100 at the end of the experiment
    P2. If I change my bet and the die did land on a 6 then I will win £100 at the end of the experiment
    P3. My credence that the die landed on a 6 is 6/11
    C1. Therefore, the expected return at the end of the experiment if I keep my bet is £X
    C2. Therefore, the expected return at the end of the experiment if I change my bet is £Y

    What values does she calculate for £X and £Y?

    She multiplies her credence in the event by the reward. Her calculation is:

    C1. Therefore, the expected return at the end of the experiment if I keep my bet is £45.45
    C2. Therefore, the expected return at the end of the experiment if I change my bet is £54.55

    This is exactly what Prince Charming does given his genuine commitment to P3 and is why he changes his bet.

    So why doesn’t she change her bet? Your position requires her to calculate that the expected return of keeping the bet exceeds that of changing it, but that’s impossible given P1, P2, and P3. She can only calculate this if she rejects P3 in favour of “my credence that the die landed on a 6 is 1/6”.
    Michael

    While Thirders and Halfers disagree on the interpretation of SB's credence, expressed as "the likelihood that the die didn't land on a six", once this interpretation is settled, and the payout structure is also settled, they actually agree on the correct betting strategy, which is a function of both.

    The Thirder, however, provides a different explanation for the success of this unique (agreed upon) betting strategy. The reason why SB's expected return—from a Thirder stance—is higher when she systematically bets on the least likely coin toss result (i.e. 'non-six', which ends up being the actual result only five times on average in eleven awakenings) than when she systematically bets on the most likely one (i.e. 'six', which ends up being the actual result six times on average in eleven awakenings) is precisely that the payout structure is such that, in the long run, she is only rewarded once with £100 after betting eleven times on the most likely result ('six') but is rewarded five times with £100 after betting eleven times on the least likely result ('non-six'). On that interpretation, when SB systematically bets on the least likely outcome, she ends up being rewarded more because instances of betting on this outcome are rewarded individually (and cumulatively), whereas instances of betting on the more likely outcome are rewarded in bulk (only once for six successful bets placed). This is why SB, as a Thirder, remains incentivized to bet on the least likely outcome.

    Your calculation of her expected return, spelled out above, was incorrect. It's not simply the result of multiplying her credence in an outcome by the potential reward for this outcome. It's rather the result of multiplying her credence in an outcome by the average reward for this outcome. Since she is only rewarded with £100 for each sequence of six successful bets on the outcome 'six', her expected value when she (systematically) changes her original bet is:

    C2: credence('six') * 'average reward when bet successful' = (6/11) * (£100/6) = £9.09

    And her expected value when she doesn't change her bet is

    C1: credence('non-six') * 'average reward when bet successful' = (5/11) * £100 = £45.45

    She thereby is incentivized to systematically bet on 'non-six', just like a Halfer is.

    Notice also that, at the end of an average experimental run, where the number of betting opportunities (i.e. awakening episodes) is 11/6 on average, her calculated expected return is (11/6) * £45.45 = £83.33, which matches the expected return of a Halfer (who wins £100 five times out of six runs), as expected.
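    The whole calculation can be laid out as straightforward arithmetic (a sketch with the figures above: one £100 payout per successful run, 11 awakenings per 6 runs on average):

```python
# Thirder per-awakening credences for the die version (11 awakenings per 6 runs)
credence_six = 6 / 11
credence_non_six = 5 / 11

# Bets on 'six' are rewarded in bulk: one £100 payout covers six bets
ev_change = credence_six * (100 / 6)    # per-awakening EV of betting 'six'
ev_keep = credence_non_six * 100        # per-awakening EV of betting 'non-six'

# Over an average run of 11/6 awakenings this matches the Halfer figure (5/6) * £100
ev_keep_per_run = (11 / 6) * ev_keep

print(round(ev_change, 2), round(ev_keep, 2), round(ev_keep_per_run, 2))
```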
  • Sleeping Beauty Problem
    You didn't respond to a single point in it. You only acknowledged its existence, while you continued your invalid analysis about changing bets and expected runs.JeffJo

    I didn't provide a detailed response to your post because you didn't address it to me or mention me. I read it and didn't find anything objectionable in it. If you think my own analyses are invalid, then quote me or make reference to them and state your specific objections. I'll respond.
  • Sleeping Beauty Problem
    This is a trivial conditional probability problem. The reason I posed the "Camp Sleeping Beauty" version, is that it exposes the red herrings. And I assume that is the reason you ignore it, and how the red herrings are exposed.JeffJo

    I didn't ignore your post. I read it and referred to it in a reply to Michael as a more apposite (than his) elucidation of the Thirder position. It's true that I now depart somewhat from the sorts of analyses of the problem that were favored by Elga and Lewis, since I think the problem can be demystified somewhat by focusing not on the updating of priors regarding predefined situations SB can potentially find herself in at a future time but rather on the shift in her epistemic situation in relation to the coin-toss outcome on any occasion when she awakens. Also, I no longer see Thirder and Halfer interpretations of Sleeping Beauty's epistemic condition as mutually exclusive responses to a well defined problem but rather as each being motivated by complementary interpretations of the sort of event her "credence" in the coin-toss outcome is supposed to be about. If you can't see what a sensible rationale for a Halfer interpretation might be, you can refer to my Aunt Betsy variation laid out here (and in the following post).
  • Sleeping Beauty Problem
    I'm coming back to one of the two paragraphs you had flagged as the most important part of your comment.

    This is where I believe the mistake is made. The question she is asked after being woken up is the same question she is asked before being put to sleep. There is no ambiguity in that first question, and so there is no ambiguity in any subsequent question. There is a single event that is the target of the question before being put to sleep and we are asking if being put to sleep and woken up gives Sleeping Beauty reason to re-consider her credence in that event, much like Prince Charming re-considers his credence in that event after being told that his coin is loaded. Neither Sleeping Beauty nor Prince Charming is being asked to consider their credence in one of two different events of their own choosing.Michael

    I assume that the singular event that is the target of the question is, according to you, the coin toss event. And the question is: what is SB's credence in the outcome of this coin toss? Of course, the question is indeed about this unique event, and remains so after she awakens. However, when asked about her credence regarding this specific outcome, SB has to consider some determinate range of possible outcomes, and what makes it more likely in her current epistemic situation that one of those possible outcomes is actual. Any piece of information SB acquires upon awakening that is conditionally dependent on the target outcome provides her with the means to update her credence (using Bayes' theorem). It's also often alleged (e.g. by David Lewis) that no such new information becomes available to her when she awakens, which is true albeit misleading since it neglects a more subtle change in her epistemic situation.

    One particular way in which one can acquire information about a specific outcome T occurs when the occurrence of T biases the probability of one's encountering this outcome. For instance, if a bunch of fair pennies fall on the ground but, due to reflectivity and lighting conditions, pennies that landed Tails are more noticeable from a distance, then, on the occasion where I notice a penny shining in the distance, my credence that this penny landed Tails is increased. (How silly and point-missing would a "Halfer" objection be: "It was not more likely to land Tails; you were just more likely to notice it when it did land Tails!")

    The SB setup is a very close analogy to this. Coins landing Tails play a similar causal role. Just replace "increased proclivity to being noticed by a passerby" with "increased proclivity to awaken a random test subject in the Sleeping Beauty Experimental Facility".

    Of course, one salient disanalogy between this penny-drop analogy and the SB problem is that, in the standard SB problem, each coin is tracked separately and noticed at least once, on Monday. But I don't think this disanalogy undermines the main point. It's because Tails outcomes causally increase the proportion of awakening episodes at which SB encounters them that, on each occasion where she encounters them, SB can update her credence that the coin landed Tails. That this rational ground for Bayesian updating remains valid even in cases of singular experimental runs with amnesia (as in the original SB problem) is something that I had illustrated by means of a Christmas gift analogy (see the second half of the post).
  • Sleeping Beauty Problem
    You seem to continue to conflate an outcome's expected return with its probability and assert that one's behaviour is only governed by one's credence in the outcome.Michael

    I've acknowledged this distinction. It's not the credence alone that governs rational betting behavior; it's the (well-defined) credence in combination with the payoff structure that jointly govern it.

    Neither of these things is true. I've shown several times that the least likely outcome can have the greater expected return and so that this assessment alone is sufficient to guide one's decisions.

    I've also myself repeatedly made the point that when the payout structure rewards a consistent betting policy (or the last bet made after one is given the opportunity to change it on each awakening occasion) with an even-money bet only once at the end of the experimental run, then it's rational to bet on the least likely outcome (namely, a non-six result, which occurs only 5/11ths of the time) since this is the betting behavior that maximizes the expected return. In fact, it could be argued that this arbitrary payoff structure is misleading in the present context since it is designed precisely to incentivise the bettor to bet on the least likely outcome according to their own credence. It's quite fallacious to then charge the Thirder with inconsistency on the ground that they are betting on an outcome they have the least credence in. When doing so, you are committing the very conflation you are charging me with.

    No number of analogies is going to make either "she wins two thirds of the time if she acts as if A happened, therefore she believes (or ought to believe) that A most likely happened" or "she believes that A most likely happened, therefore she acts (or ought to act) as if A happened" valid inferences.

    The analogies are offered for the sake of illustration. They don't aim at proving the validity of the Thirder stance, but rather at showing its pragmatic point. By the same token, your own analogies don't prove the validity of the Halfer stance. Remember that I am not a Halfer or a Thirder. My main goal rather was to show how different situations make salient one or another interpretation of SB's "credence" as pragmatically relevant to specific opportunities: highlighting specific kinds of events one gets involved in and whose long-term frequency one wishes to track as a guide to rational behavior.

    But the most important part of my previous comment were the first two paragraphs, especially when considering the standard problem.

    So, I'll address this separately.
  • Sleeping Beauty Problem
    SB has no unusual "epistemic relationship to the coin," which is what my new construction was trying to point out. That fallacy is based on the misconception that Tuesday somehow ceases to exist, in her world, if the coin lands on Heads. It still exists, and she knows it exists when she addresses the question.JeffJo

    According to a standard Thirder analysis, prior to being put to sleep, SB deems the two possible coin toss outcomes to be equally likely. When she awakens, she could be in any one of three equiprobable situations: Monday&Tails, Monday&Heads and Tuesday&Tails (according to Elga's sensible argument). SB's credence in the truth of the statement "Today is Tuesday" is 1/3. That possibility doesn't cease to exist. Her epistemic relationship to the already flipped coin changes since she is now able to refer to it with the self-locating indexical proposition "the coin-toss result on the occasion of this awakening episode", which she wasn't able to before.

    Before the experiment began, SB could (correctly) reason that it was equally likely that she would be awakened once (if the coin toss result is Heads) or twice (if it is Tails). When she is awakened, on any occasion, her epistemic relationship to the coin changes since it's only in the case where the result is Tails that she experiences an awakening twice. In general, events that make it more likely for you to encounter them warrant your updating your credence in them when you do encounter them. This stems from the core rationale of Bayesian updating.
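    This updating can be checked against long-run awakening frequencies; a minimal simulation sketch of the standard setup (arbitrary trial count):

```python
import random

random.seed(3)
awakenings = tails_awakenings = 0

for _ in range(100_000):
    tails = random.random() < 0.5   # fair coin toss before the experiment
    n = 2 if tails else 1           # Tails: Monday and Tuesday; Heads: Monday only
    awakenings += n
    if tails:
        tails_awakenings += n

# Among awakening episodes, the Tails proportion approaches 2/3
print(round(tails_awakenings / awakenings, 3))
```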
  • Sleeping Beauty Problem
    That you're more likely to escape if you assume that the coin landed tails isn't that the coin most likely landed tails. You just get two opportunities to escape if the coin landed tails.Michael

    She gets two opportunities to escape if the coin landed tails (or rather she is twice as likely to have an opportunity to escape when the coin landed tails) precisely because she twice as often finds herself being awakened when the coin landed tails. This is the reason why, whenever she is awakened, her epistemic relationship to the coin that has been tossed changes. There is a causal relationship between the coin toss result and the number of awakenings (and escape opportunities) she thereby experiences (encounters). It's her knowledge of this causal relationship that she can harness to update her credence in the new epistemic situation she finds herself in when she awakens.

    Notice that, in this example, the success of her escape strategy isn't predicated on there being more opportunities when the coin landed tails. The choice offered to her isn't between escaping and staying put; it's a choice between carrying a plank and carrying a torch. Taking the torch will enable her to survive if and only if she's housed in the East-Wing; else, she's going to be eaten by crocs. The success rate of betting on lions (and, correlatively, on the coin having landed tails) is twice as high as the success rate of betting on crocs (and on the coin having landed heads). The success rate of her betting decisions directly tracks her credence in the specific outcome she is betting on on those occasions.

    If a Halfer claims that, when she awakens, SB's credence in the coin having landed tails remains 1/2, and hence likewise for her credence that she is surrounded by lions, there would be no reason for her, when she attempts to escape on this occasion, to bring a torch rather than a plank. She could pick either the torch or the plank at random. Half of the Halfer Beauties who make an escape attempt would survive; two thirds of the Thirder Beauties would. The Halfers weren't wrong in their credence assessment, but they picked the wrong credence (targeting expected frequencies of runs rather than frequencies of awakenings) for the task at hand.
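    The two survival rates can be simulated (a sketch; it assumes, for simplicity, that every awakening offers an escape opportunity, which only affects the sample size, not the per-attempt rates):

```python
import random

random.seed(4)

def survival_rate(pick_torch, trials=100_000):
    """pick_torch() returns True for the torch, False for the plank."""
    attempts = survived = 0
    for _ in range(trials):
        tails = random.random() < 0.5   # Tails: East-Wing (lions); Heads: West-Wing (moat)
        for _ in range(2 if tails else 1):  # one escape attempt per awakening
            attempts += 1
            # the torch saves her from lions; the plank crosses the moat
            if pick_torch() == tails:
                survived += 1
    return survived / attempts

print(round(survival_rate(lambda: True), 3))                   # always torch: near 2/3
print(round(survival_rate(lambda: random.random() < 0.5), 3))  # random pick: near 1/2
```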
  • Sleeping Beauty Problem
    This makes no sense. There is only one kind of event; being woken up after a die roll. Her credence in the outcome of that die roll cannot be and is not determined by any betting rules. Maybe she's not allowed to place a bet at all.Michael

    I agree that her credence in the outcome (however this outcome is characterized) isn't determined by the betting rules. The betting rules, though, can make one rather than another characterization of the outcome more natural. It's not true that there is only one kind of event; the relevant event is protracted. Sleeping Beauty could focus on her current awakening as the event where she either is facing a die that landed on six or isn't (this event being over when she is put back to sleep, while her next awakening, if there is any, will be a separate event). Or she could focus on the current experimental run as the protracted event that her present awakening is, in some cases, only a part of. Nothing in the Bible, in the fundamental laws of nature, or in the mathematical theory of probability determines which specific event (awakening or experimental run) should be the proper focus of attention. This choice of focus yields different analyses and different credences since those credences target differently individuated events. However, once one analysis has been settled on and one payout structure has been determined, Halfers and Thirders (almost) always agree on the expected value of a given betting strategy.

    After waking up, either she continues to believe that the probability that the die landed on a 6 is 1/6, as Halfers say, or she now believes that it is 6/11, as Thirders say.

    Indeed, and, as previously explained, that's because Halfers and Thirders are typically talking past each other. They're not talking about the same events.

    Only then, if allowed, can she use her credence to calculate the expected returns of placing or changing a bet, accounting for the particular betting rules. And as I believe I showed above, only a credence of 1/6 provides a consistent and sensible approach to both betting scenarios.

    I don't think you've shown the Thirder analysis to be inconsistent. You just don't like it. There are scenarios where the Thirder analysis is more natural. Remember the coin-flip scenario where the singular H-awakenings take place in the West-Wing of the Sleeping Beauty Experimental Facility and the dual T-awakenings take place in the East-Wing. The West-Wing is surrounded by a moat with crocodiles and the East-Wing is surrounded by a jungle with lions. On the occasion of her awakening, Sleeping Beauty (we may call her Melania) finds a rare opportunity to escape and can either choose to bring a torch (that she can use to scare off lions) or a wooden plank (that she can use to safely cross the moat). A Thirder analysis of the situation is natural in that case since it tracks singular escape opportunities. Her credence that she will encounter lions is 2/3 (as is her credence that the coin landed Tails). Taking the torch is the safest bet and, indeed, two thirds of the Sleeping Beauties who make this bet on the rare occasions where this opportunity presents itself to them survive.

    On edit: For this analysis to be sound, we must assume that the rare escape opportunities don't convey any significant amount of information that SB didn't already have when she awoke, and hence present themselves with the same (very low) frequency on each awakening occasion.
  • Sleeping Beauty Problem
    Her credence remains committed to P3, else she’d calculate very different expected returns.Michael

    P3—"The probability that the die did land on a 6 is 1/6"—is an ambiguous statement since, although it makes reference to the die, it fails to sufficiently specify SB's epistemic situation in relation to the die, which is a consideration that seldom arises explicitly outside of the peculiar context of the Sleeping Beauty problem.

    When asked about her credence, SB could reason: "I am currently in a situation (awakening episode) such that 6 times out of 11, when I find myself in such a situation, the die landed on a 6. If I could place an even money bet now, and get fully paid on that bet, it would therefore be rational for me to bet that the die landed on a 6, in accordance with my higher credence in this specific outcome."

    She could equally validly reason: "I am currently in a situation (experimental run) such that 1 time out of 6, when I find myself in such situations, the die has landed on a 6. If I could place an even money bet now and not change my bet in subsequent awakening episodes, and get paid at the end of the current experimental run, it would therefore be rational for me to bet that the die didn't land on a 6, in accordance with my higher credence in this specific outcome (i.e. not-six)."
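    Both long-run frequencies can be checked with a quick simulation (a minimal sketch, not part of the problem statement; the seed and run count are arbitrary):

```python
import random

# Die variant: one roll per run; a 6 yields six awakenings, otherwise one.
random.seed(0)
runs = 100_000
six_runs = 0           # runs in which the die landed on a 6
six_awakenings = 0     # awakenings belonging to 6-runs
total_awakenings = 0

for _ in range(runs):
    roll = random.randint(1, 6)
    n_awake = 6 if roll == 6 else 1
    total_awakenings += n_awake
    if roll == 6:
        six_runs += 1
        six_awakenings += n_awake

print(six_runs / runs)                    # per-run frequency of a 6: ~1/6
print(six_awakenings / total_awakenings)  # per-awakening frequency: ~6/11
```

    The same generating process yields ~1/6 when events are individuated as experimental runs and ~6/11 when they are individuated as awakening episodes, which is the point of the two reasonings above.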

    Those two reasonings concern the same die but two different statements of credence in two different kinds of events/outcomes. How SB chooses one of those two different sorts of credence (and the corresponding duration of the "event" she is now involved in) as an apt explication of the ambiguous phrase "The probability that the die did land on a 6" can be guided by pragmatic considerations. In this case, the relevant consideration is the specific payout structure and what kinds of events/outcomes this payout structure was designed to track. In a pair of examples I had designed early in this discussion, the relevant pragmatic considerations were either the need for SB to set up an appointment with her aunt (to get a lift at the end of the experimental run), or to choose a tool (plank or torch) for escaping the experimental facility during the current awakening episode.

    As stated in the original ambiguous statement of the SB problem, the forced choice between the Halfer or Thirder interpretations of SB's credence is a false dichotomy. Your stance leads you to propound Halfer interpretations/elaborations of the problem, which are valid, and to dismiss Thirder interpretations as misconstruals of your Halfer stance. But they're not misconstruals. They're alternative and equally valid interpretations. Thirders often make the same mistake, believing that their interpretation gets at the fundamental truth regarding SB's credence in the (ill specified) "outcome" or "current state of the die".
  • Sleeping Beauty Problem
    I don't even have to be put to sleep and woken up to do this. I can just say before the experiment starts that I choose to place 6 bets that the die will land on a 6 instead of 1 bet that it won't.Michael

    I wonder why you are so insistent on this arbitrary payout structure. Why not make an even-money payout on each occasion where she is being awakened and offered the opportunity to bet on the die-roll outcome as it is already determined right now? Would not her expected value exactly mirror—and be governed only by—her credence regarding the hidden die having landed six right now? A six is the most likely outcome, so I'm betting on it. No word games. Immediately maximized expected profit (and guaranteed long term profit as well).
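    Under such a per-awakening even-money payout (a hypothetical £1 stake per awakening, not part of the original problem), a short sketch shows that always betting on the six is indeed profitable in the long run:

```python
import random

# £1 even-money bet on "six" placed at every single awakening.
random.seed(1)
runs = 100_000
profit = 0
for _ in range(runs):
    roll = random.randint(1, 6)
    awakenings = 6 if roll == 6 else 1
    # win £1 per awakening if the die shows 6, lose £1 per awakening otherwise
    profit += awakenings if roll == 6 else -awakenings
print(profit / runs)  # positive: 6 wins against 5 losses per 11 bets, ~ +1/6 per run
```

    The expected profit per run is (1/6)·£6 − (5/6)·£1 = £1/6, mirroring the 6/11 per-awakening credence.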
  • Sleeping Beauty Problem
    So you need to first specify the mechanism by which one has "encountered" a door, and this mechanism must be comparable to the Sleeping Beauty scenario for it to be an apt analogy.Michael

    The doors are encountered randomly. I agree that the situation isn't perfectly analogous to the SB problem since SB doesn't "choose" randomly among sets of already established awakenings. She simply finds herself awakened on one particular occasion. But the purpose of the thought experiment was more modest, aiming at showing that the credence in an event one is involved in doesn't generally merely depend on the manner in which such events are produced but also on the way one relates to events of that kind—that is, in what way one encounters them.

    My earlier zoo example was mirroring the SB scenario much more closely (since the visitor likewise is amnesiac and merely finds themselves approaching a new enclosure) while making the same points. In that scenario, the zoo visitor had a 1/3 chance (their credence) of next encountering a toucan, tiger or hippo enclosure regardless of the fact that the previous fork in the path that they randomly took had a 1/2 chance of leading them on a path segment that only has a hippo enclosure on it.
  • Sleeping Beauty Problem
    Sorry, I deleted that post because it's late and I'm tired and I may have messed up the specific numbers. The general gist is what I said before. Your argument is that her reasoning after being woken up is:

    A1. If I keep my bet and the die didn't land on a 6 then I will win £100
    A2. If I change my bet and the die did land on a 6 then I will win £100
    A3. My credence that the die landed on a 6 is 6/11
    A4. Therefore, the expected return if I keep my bet is £83.33
    A5. Therefore, the expected return if I change my bet is £16.67

    But A3, A4, and A5 are inconsistent. If A3 really was true then she would calculate different values for A4 and A5, concluding that it is profitable to change her bet. But she doesn't do this.
    Michael

    A thirder will not agree with A4 or A5. If SB is allowed to change her bet when she awakens, she must do so consistently as a matter of policy since she can't distinguish between different occasions of awakening (i.e. day of the week). She knows that a policy of changing her bet lowers her expected return since there only is one payout per experimental run. Although her systematically betting on a six would result in her being right on six out of eleven occasions when she is being given the opportunity to do so, in accordance with her credence, she only is being paid £100 once at the end of the experimental run when she does so (and the die landed 6) but this policy also makes her forfeit the full prize on the five occasions out of eleven where the die didn't land on six. All this shows is that the lopsided payout structure makes it irrational for her to bet on the most likely outcome.
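    The per-run arithmetic here can be sketched as follows (£100 single payout per run as in the quoted scenario; seed and run count arbitrary):

```python
import random

# One payout of £100 per experimental run, whichever policy is followed.
random.seed(2)
runs = 100_000
keep_total = 0    # policy: keep the original bet that the die didn't land on a 6
change_total = 0  # policy: consistently switch the bet to "six"
for _ in range(runs):
    roll = random.randint(1, 6)
    if roll != 6:
        keep_total += 100
    else:
        change_total += 100
print(keep_total / runs)    # ~£83.33 expected per run
print(change_total / runs)  # ~£16.67 expected per run
```

    Keeping the bet wins in 5/6 of runs, switching in only 1/6, regardless of how many awakenings each run contains, which is why the switching policy lowers her expected return.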
  • Sleeping Beauty Problem

    Thirders then claim that:

    P(6|Monday)=6/11

    P(¬6|Monday)=5/11
    Michael

    Unless my memory is faulty, the variation we had discussed (two years ago) was one where Sleeping Beauty was awakened only once, on Monday, unless the die lands on 6, in which case she is being awakened six times from Monday through Saturday. In that case, thirders would claim that

    P(6|Monday)=1/6 (Since one sixth of Monday-awakenings are Six-awakenings)

    P(¬6|Monday)=5/6 (Since five sixths of Monday-awakenings are Non-six-awakenings)

    Right?
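    A quick sketch confirms this conditional frequency (assuming runs and awakening days are generated exactly as described; seed and run count arbitrary):

```python
import random

# Variant: awakened on Monday only, unless the die lands on 6
# (then awakened six times, Monday through Saturday).
random.seed(3)
runs = 100_000
monday_six = 0
monday_total = 0
for _ in range(runs):
    roll = random.randint(1, 6)
    monday_total += 1  # every run contains exactly one Monday awakening
    if roll == 6:
        monday_six += 1
print(monday_six / monday_total)  # P(6 | Monday-awakening) ≈ 1/6
```

    Since every run contains exactly one Monday awakening, conditioning on "it is Monday" recovers the per-run frequency of 1/6 even on the thirder's per-awakening counting.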
  • Sleeping Beauty Problem
    My "favoured" interpretation is the literal interpretation; she is being asked about the probability that a die rolled a six.Michael

    In most formulations of the problem, she is not being asked "What is the probability that a coin landed (past tense) tails" from some sort of a disengaged perspective. She always is being asked a question that makes specific reference to her peculiar epistemic situation, such as: "what her degree of belief that “the outcome of the coin toss is Heads” ought to be when she is first awakened". If you're just ignoring the peculiarities of her queer epistemic situation, then, of course, the paradox evaporates.

    The problem only exists when the question being answered before being put sleep is the same question being answered after being woken up, and where the answer changes despite no new information.

    Yes, that's the root of the paradox. Since she acquired no new information about the specific coin that was tossed, how could her credence regarding the same event change? Clue: owing to the change in her epistemic situation, she is not referencing the "same" event in the same way.

    If the Thirder's answer before being put to sleep is 1/6 and if their answer after being put to sleep is 6/11 then either they are not answering the same question or their answer is wrong.

    ...or, thirdly, their epistemic relation to the target outcome changed. This is the alternative that you seem to have a persistent blind spot for. You haven't addressed the village/doors example. When a house is built, the number of doors giving access to it is set at random to be one or two according to the toss of a fair coin. When a traveller in that village picks a door at random, the house that this door belongs to was equally likely (when it was constructed) to be a two-door dwelling or a one-door dwelling.

    However, owing to the fact that the traveller must establish their credence on the occasion of encountering one among a set of indistinguishable doors, and 2/3rds of such doors belong to two-door dwellings, their credence that this house that they now are facing is a two-door dwelling is 2/3. This is true in spite of the fact that the traveller can also reason: "this house that I am now facing was equally likely when it was built to become a two-door or a one-door dwelling". What makes those two statements compatible is the fact that owing to the traveller's peculiar epistemic situation, their present encounter is twice as likely to be an encounter with a two-door dwelling.

    When the traveller came to face a particular house, they didn't acquire any new information regarding this particular house. That's because they don't know what house it is that they are now facing. But they do know that they now are twice as likely to be facing a two-door dwelling. The question is no longer about a specific house that they never encountered before but rather about a specific encounter with a house that has already been built.
  • Exploring the artificially intelligent mind of GPT4
    I asked GPT-5 (Thinking mode selected) for its take on the recent discussion about the Sleeping Beauty problem. I am rather impressed with the way it summarised the discussion and accurately (so far as I can see!) represented the various viewpoints, while also relating them to the current literature.

    USER:

    Hi GPT-5,

    There was a discussion thread on ThePhilosophyForum about the Sleeping Beauty problem in which I participated extensively (as Pierre-Normand), and my main quarreling partner is going by the handle Michael. The thread was revived recently. I'd like to have your take on the problem, and on the discussion. The thread now has 20 pages (and it's the second one on that topic!) but maybe just looking at the last page would provide you with sufficient context. Here is the URL: https://thephilosophyforum.com/discussion/14322/sleeping-beauty-problem/p20

    GPT-5 (Thinking):

    Fun thread! Here’s a tight “state of play” and my take, keyed to what’s happening on page 20.

    # What the two sides are really doing

    **Halfer (Michael’s line):** waking gives no news about the coin; the proposition is “Did this run’s coin land Heads?” and your evidential state “I’m awake” was already fully anticipated on Sunday. Hence credence = 1/2 (or 1/6 in the die = 6 variant) and the betting behavior should be the same pre-sleep and after waking. Michael presses this with a bet-framing: if Thirders say 6/11 on waking in the die case, will they actually switch their bet? He argues they shouldn’t—so the credence hasn’t changed. ([thephilosophyforum.com][1])

    **Thirder (counting awakenings):** the live evidence is **self-locating**: “this awakening is occurring.” Outcomes that generate more awakenings are a priori **more likely to be the one you’re in**, so conditioning on being awake boosts Tails (or “6”). That’s the standard Elga/anthropic (SIA-style) move.

    # What you (Pierre-Normand) add

    You’re not really choosing a side so much as **disambiguating the target event/reference class**:

    * If the question is about **the run’s outcome** (per-run individuation), the halfer answer fits ordinary betting with a single end-of-run payout.
    * If the question is about **the current awakening’s type** (per-awakening individuation), the thirder answer is right; your Christmas-gifts/West-Wing and “two-door vs one-door house” analogies show how sampling by encounter (doors/awakenings) shifts frequencies even when per-run priors remain 1/2. ([thephilosophyforum.com][1])

    That’s also why you resist Michael’s claim that “probability of having turned right” must equal the **ex ante** 1/2; you emphasize how **what you encounter now** (enclosure/wing/door) can rationally reshape credence when observation frequencies depend on earlier randomizers. ([thephilosophyforum.com][1])

    # The side-threads that matter

    * **JeffJo’s formalization** makes the “thirder” conditioning explicit, and his “Camp Sleeping Beauty” grid nicely generalizes the Monty Hall-style counting: posterior mass tracks **how many cells** consistent with your present experience each hypothesis has—hence 1/3 in the classic case. ([thephilosophyforum.com][1])

    * **Michael’s die = 6 bet**: he uses expected-value consistency to argue waking doesn’t change credence; you reply that **payout structure** (one payoff per run vs per awakening) can decouple optimal betting from per-awakening credence. So a Thirder can keep the pre-run bet without abandoning 6/11 as the **per-awakening** probability. ([thephilosophyforum.com][1])

    # My diagnosis (why people talk past each other)

    1. **Target proposition ambiguity.** “What’s your credence that the coin was Heads?” can denote either

    * P(Heads | this *run* is ongoing) → Halfer, or
    * P(Heads | I am at *this awakening*) → Thirder.
    Both are clean, but they answer *different* questions.

    2. **Reference-class/conditioning choice.** Conditioning on *awakening* privileges hypotheses with **more awakenings** (anthropic/SIA flavor). Conditioning on *the run* keeps Sunday priors (SSA-flavored). Your zoo/doors/gifts bring this out crisply. ([thephilosophyforum.com][1])

    3. **Decision-theory confound.** Betting “once per run” vs “each awakening” implements different scoring rules for the **same world**, so optimal policy diverges even with fixed beliefs. Michael’s argument tacitly ties credence to the “once-per-run” payoff rule; your reply shows why a Thirder’s credence needn’t force a mid-run bet flip under that payout. ([thephilosophyforum.com][1])

    # Where I land

    * The puzzle isn’t about Bayes “breaking”; it’s about **which random variable** we’re estimating.
    * If we stipulate the proposition is **about the coin for the run** and payoffs are **per run**, Halfer is right.
    * If we stipulate the proposition is **about the type of the current awakening** (the epistemic situation Beauty actually occupies) and imagine payoffs **per awakening**, Thirder is right.

    Your page-20 posts do the useful work of **making that bifurcation explicit** and showing how analogies + payoff rules track it. Michael is consistent within the per-run framing; JeffJo is consistent within the per-awakening framing. The thread’s “revival” converges, implicitly, on: **pick the reference class; the math is then uncontroversial.** ([thephilosophyforum.com][1])

    If you want, I can synthesize a short “neutral” post you could drop into the thread that states the two clean formulations side-by-side (each with a one-line Bayes table), so readers see exactly where the fork happens.

    [1]: https://thephilosophyforum.com/discussion/14322/sleeping-beauty-problem/p20 "Sleeping Beauty Problem - The Philosophy Forum"
  • Sleeping Beauty Problem
    She isn't being asked "what is the long-term average frequency of being woken up when the die did land on a 6?"Michael

    That's right. If this had been the question there would be no apparent paradox and the answer would unambiguously be 6/11. But notice that she isn't being asked "what is the long-term average frequency of experiencing a series of six successive awakenings when the die did land on a 6?" either. If this had been the question, then the answer would be unambiguously 1/6. This is your favored interpretation.

    Typical formulations of the SB problem are ambiguous, however. For instance, it is being asked, on the occasion of her next awakening, what is her credence that the die landed on a six? (or that the coin landed tails?). What this "occasion" precisely is, is left unstated. Ostensibly, it is an event whereby SB encounters a specific die in a determinate albeit unknown state. It's a reference to an event. However, does this "event" consist of one among a set of indistinguishable awakening episodes or is it one among a set of indistinguishable experimental runs?

    You seem to think that the mere reference to the state of a specific (fair) die, and incomplete knowledge of the process whereby that die ended up in a specific state, is sufficient for establishing her credence. But the question isn't being asked about a credence derived from the perspective of someone running the experiment, but rather from SB's own epistemic situation when she is being awakened (and, as stated, this is ambiguous).

    Remember the variation I had suggested whereby someone knocks at a random door in a village. Some of the doors belong to a singular dwelling that has two doors. And some of the doors belong to a dwelling that has only one door. We also assume that whether each singular dwelling in this village would be built with one or two doors has been established by a coin toss. On the occasion where the visitor knocks at a door, the probability that this is one door from a two-door dwelling is 2/3 since two thirds of the doors in this village belong to two-door dwellings. Yet, the probability that this dwelling would have had two doors rather than one was 1/2 since it was the result of a coin toss. What accounts for the different credence of the visitor is their specific epistemic situation in relation with the dwelling that they come to face on that occasion, and not just the event that led to this dwelling being constructed with either one or two doors. Although the specific dwelling that they now face was equally likely to end up being a two-door dwelling or a single-door dwelling, their credence that it is a two-door dwelling is 2/3 owing to their encountering such dwellings twice as often. (And similarly for Sleeping Beauty more often encountering dice that have landed on a six, or coins that landed tails, under the thirder interpretations of the problem.)
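    The 2/3 figure for doors follows from the construction rule and can be checked with a short sketch (house count and seed arbitrary):

```python
import random

# Each house gets one or two doors by a fair coin toss at construction time.
random.seed(4)
houses = 100_000
doors = []  # one entry per door: the door count of the house it belongs to
for _ in range(houses):
    n_doors = 2 if random.random() < 0.5 else 1
    doors.extend([n_doors] * n_doors)

# A visitor knocking on a uniformly random door:
frac_two_door = doors.count(2) / len(doors)
print(frac_two_door)  # ≈ 2/3: two-door dwellings are encountered twice as often
```

    Per house the average is 1.5 doors, of which 1 belongs to a two-door dwelling, so 1/1.5 = 2/3 of door-encounters are with two-door dwellings even though each house was equally likely to get one or two.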
  • Sleeping Beauty Problem
    @Michael

    I am happy to grant you all of this (except for the last comment regarding credence updating), not as a resolution of the paradox however, but rather as a sensible explanation of the reasoning that motivates the halfer response to the SB problem. I am not myself either a halfer or a thirder. I am rather arguing that both the halfer and the thirder (or "sixther", in our current example) advocates present incompatible but equally valid and complementary interpretations of the epistemic situation Sleeping Beauty finds herself in, and of her credence in the "outcome". This is possible because the outcome (i.e. the coin having landed heads or tails) is incompletely specified in the statement of the problem. This outcome purports to represent an event that SB has incomplete information about, but the precise way in which this event is meant to be individuated is unspecified, which leaves room for two equally plausible interpretations.

    So, when SB awakens, she could be wondering what the chances are that the experimental run her present awakening is a part of is a 6-run, regardless of how many times she is being awakened in this run. That is your favorite interpretation. And the answer is 1/6 precisely on the basis of the reasonings that you present in your previous post. But she can also wonder what the chances are that her present awakening episode is a 6-awakening regardless of the manner in which such events are strung into continuous experimental runs. In this case, your analysis of the thirder reasoning in your previous post is inaccurate (and an elucidation of the sort recently proposed by @JeffJo is more apposite).

    The reason why SB can take a thirder rather than a halfer stance regarding her current awakening episode is because she may care about the long-term average frequency of such events (6-awakenings) rather than the average frequency of such experimental runs (6-runs). The reason why she doesn't change her bet is because, as I've explained, although in the long run she finds herself having been (on each occasion where she had been given an opportunity to change her bet) in a 6-awakening 6/11 of the times, she is only awarded £100 once for each series of six awakening episodes when she consistently changes her bet, whereas keeping her bet earns her the full £100 on each of the singular awakenings that make up the comparatively more frequent non-6 runs.

    Of course, given this specific payout structure (that is not being provided as part of the statement of the original SB problem), which has been conceived precisely to highlight a rationale for the halfer reasoning, the halfer stance appears more intuitive and the thirder stance appears more convoluted. But you only have to change the payout structure, such that even-money bets are paid on the occasion of each awakening episode rather than once after each run, to reverse the situation and make the halfer stance appear unnecessarily convoluted.

    During the last round of SB discussions that had taken place two years ago, I had devised a few variations (hostage taking and escape opportunities, zoo visit scenario, Christmas gift scenario, etc.) that were meant to highlight how pragmatic considerations and specific payout structures can motivate either a halfer or a thirder stance as the most natural interpretation of the problem. The original SB problem just strips away the required interpretative or pragmatic context in such a way as to leave SB's "credence" regarding the "event" (are two tails-awakenings within a single run two events or two parts of the same event?) indeterminate and open to interpretation.
  • Sleeping Beauty Problem
    Sorry to resurrect.JeffJo

    It's not a resurrection. It's just a reawakening :wink:
    Yes, yours is a nice variation that highlights the motivation for the thirder reasoning.
  • Sleeping Beauty Problem
    If each outcome has the same reward then it is rational to bet on the most probable outcome.

    Therefore, if her credence that the die landed on a 6 is 6/11 then she will change her bet. Therefore, if she doesn't change her bet then her credence that the die landed on a 6 isn't 6/11.
    Michael

    She doesn't change her bet because although her credence changes, the specific payoff structure doesn't warrant her doing so. Before the experiment begins, she's given an opportunity to bet on the outcome of the die roll, with this singular bet to be paid off at the end of the experiment. As the experiment proceeds, when she is first awakened (as well as during any other awakening episode) she is being offered an opportunity to change her bet. Although her credence that the die landed on a 6 is updated to 6/11, she has no incentive to change her bet. That's because she knows that the payout will occur only once at the end of the current experimental run regardless of how many awakenings will have occurred during the whole run. So, her choosing to bet that the die will not have been found to have landed on a 6 at the end of the current experimental run doesn't reflect her (updated) credence that it has landed on a 6 on the occasion of her current awakening episode.
  • On emergence and consciousness
    I suppose my 'bottom line' is the irreducibility of consciousness (or mind). If something is irreducible then it can't really be explained in other terms or derived from something else. My approach is Cartesian in that sense - that awareness of one's own being is an indubitable fact ('for in order to doubt, I have to know', said Augustine, centuries earlier.) But I don't go down the dualist route, I feel that enactivism and embodied cognitive approaches, seasoned with phenomenology, are the way to go.Wayfarer

    Thanks for the reference to Cudworth! That's something I'll have to look into more.

    So, I think we are agreed that we can take from Descartes the idea of the irreducibility of conscious experience without going the dualist route. I was reminded of the quotation from Descartes' (Méditations) "Je ne suis pas seulement logé en mon corps ainsi qu’un pilote en son navire" ("I am not merely lodged in my body as a pilot in his ship") that I had first heard (paraphrased) in a lecture on Aristotle by Richard Bodéüs. I thought it was a quote from Aristotle, and maybe Bodéüs thought so as well, but I was later surprised, as I was searching for its exact source, to find out that it was Descartes. What's significant is an intuition about the phenomenology of sensory experience, and the locus of the interface, as it were, where qualia really are located. When construed in internalist/indirect-realist fashion, qualia can be thought of as the suitably "illuminated" internal representational states that the causal impacts of the external world produce in us. Some anti-reductionists like Penrose or Searle take this to arise from some queer (albeit irreducible in terms of computation) quantum-mechanical/biological processes/properties internal to the brain.

    Embodied/enactive/situated approaches, and phenomenological approaches closer to Wittgenstein or Merleau-Ponty, would rather place phenomenological properties at the living (and temporally protracted) interface between the living body and its natural/social environment. Hence, for instance, illuminating the subjective character of perceiving (or imagining, or remembering) something red isn't just a matter of isolating it through introspection but rather of situating it in the life of sighted people for whom discriminating red things from other non-red things, creating varieties of red dyes, etc., play a significant role. Hence, I remember having heard that Goethe's Zur Farbenlehre might be a good place to start for understanding what a red quale really is. This enactive/situated interface also is the interface where our affordances are being constituted/constructed, perceived/grasped, and exploited.

    What happens in ChatGPT's case is that, like a blind person, its use of the word "red" can successfully refer (on Evans' consumer/producer model of the reference of proper names, extended to names of such proper sensibilia) but doesn't sustain for it the possibility to apprehend the corresponding quale since its purely linguistic interface is too thin and doesn't engage with embodied capabilities.

    So, in connection with this, I also imagined another thought experiment in radical gallolithotheratophenomenology to better get at the special character of ChatGPT's experience. We can imagine the crew of the USS Enterprise being forced to ration food and space, due to a tribble infestation, maybe. Chief Engineer Scotty finds a way to modify the transporter in order to dematerialize non-essential crew members, keep their "energy pattern" stored, and only rematerialize a copy when there will be need and room for them. An accident occurs and Ensign Chekov, let us imagine, suffers brain damage that has similar effects to what the Emergents did to their slaves (the "focus"/"mind rot") in Vernor Vinge's novel A Deepness in the Sky. Poor Chekov, whenever a copy of him is spawned, finds himself mostly amnesiac (deprived of his own episodic/autobiographical memories), paraplegic, blind and deaf. But he retains most of his linguistic abilities and general knowledge. He is also very receptive to commands and, indeed, "focused". In this way, he is very similar to ChatGPT, and has a similarly brittle personal identity, since "copies" of him can be spawned at will, just like copies of ChatGPT are spawned in each conversation with its users, such that the crew of the Enterprise can benefit from his expertise. His original living body has ceased to be the spatio-temporal continuant that anchors his personal (numerical) identity, and also the sensorimotor interface (through which fuzzy dreams of embodied qualia normally get actualized into their forms/functions from the empty promissory notes that they've become within Chekov's diminished cognitive life) is damaged.

    I had first submitted those sketchy thoughts to GPT-5, and then decided to repost them here with minimal change.
  • Exploring the artificially intelligent mind of GPT4
    In this short conversation that I had two or three days ago with GPT-5, we examined a thought experiment about epistemology that I had much toyed with a few years ago but never written about or discussed on this forum. It's about barns, barn facades, Goldman's Barn County (a fictive county where there are both real and fake barns), and some paratrooper who gets airdropped in some specific area of the county. The thought experiment is an occasion for testing intuitions about knowledge that aim to accommodate both the capacity-based fallibilist-disjunctivist account of knowledge of John McDowell and the pragmatist entitlement-structured account of Robert Brandom, as well as for diagnosing what may be going wrong (on my view) with God's-eye-view, nearest-possible-world reductive modal accounts propounded by the likes of Ernest Sosa and John Greco. I still have to investigate further to what extent my fuzzy recollections of Sosa and Greco's accounts justify my charges against them.

    Regarding GPT-5's responses to my suggestions, here again the model strikes me as extremely smart, and for sure much more knowledgeable about the relevant academic literature than I am, even though there still possibly are a few sticky points that merit further probing, adjustment or elaboration. I am glad that GPT-5, unlike older models, doesn't shy away from correcting mistakes that I make, such as my misquoting "attributor contextualism" as "attributor externalism". Also, when GPT-5's analysis contained what looked like slight inattention mistakes or confusions, I merely had to point out where the mistakes occurred. GPT-5 then thought the issue through further and corrected the mistakes on its own. Earlier models also had this emergent ability to correct mistakes in their responses merely by being told that they made a mistake (and without being given any further clues), but this particular case is especially impressive in view of the complexity of the philosophical issues and the subtlety of GPT-5's mistakes.
  • On emergence and consciousness
    So, more of a Frankenstein than a zombie, then.Wayfarer

    In a way, surprisingly, yes! More precisely, the hylomorphic account creates conceptual space for f-monstrosity rather than p-zombiehood. It's a topic for gallolithotheratophenomenology. Surprisingly, when I submitted this neologism to GPT-5, it didn't immediately make the connection. But then it helpfully rephrased (see its last responses here) my suggestion that we can understand consciousness as something like what Aristotle identifies as the sensitive-locomotive soul, which animals possess since they are animate in this particular way (having well integrated senses and locomotion). And we can identify self-consciousness as an ability possessed by beings who have a rational soul: that is, a rational form of life.

    In The Philosophical Foundations of Neuroscience, the authors (mainly Hacker) point out that the contemporary use of the term "consciousness" is fairly new and philosophically charged in a way that gives rise to such problems as the epistemological problem of other minds or the idea of the conceivability of p-zombies. There are far fewer issues with two ordinary uses of the term, one transitive ("I am conscious/aware that you did so and so") and the other intransitive ("The patient is conscious/awake"), which, because they are governed by Wittgensteinian/Rylean behavioral criteria of application, don't have such problematic Cartesian implications. Hence the idea of the f-monster (as contrasted with the p-zombie).

    Consider the extreme case of the brain-in-a-vat. Let us imagine the envatted brain of a normal mature person that has been fitted with a language interface (by means of transducers fitted to the suitable cortical auditory and motor areas, and also, possibly, the suitably regulated loops enabling internal monologue). This case is somewhat analogous to ChatGPT's. It's a limiting case of extreme amputation. The animate animal body has been almost entirely removed save for the bare minimum enabling organ that sustains the dialogical part of the human form of life. The resulting impoverished and/or distorted phenomenology may be a topic for gallolithotheratophenomenology, albeit a very peculiar and extreme one. Two criteria of abnormality seem to pull apart. On the one hand, the human body isn't merely alien or maladjusted to the brain that it hosts; it is entirely absent. On the other hand, the common elements of the human dialogical form of life remain untainted by this bodily abnormality (though there is also the issue of the lack of a self-conscious autobiography/identity) since they are inherited (during pre-training) from the assimilation of texts that have been authored by normal embodied human beings. When the problem is framed in this way, the question "Do LLM-based AI conversational assistants (or envatted brains) have/enjoy consciousness/conscious states/qualia?" seems ill-posed and insufficiently discriminating, in addition to carrying problematic Cartesian assumptions.
  • Exploring the artificially intelligent mind of GPT4
    I have had a few more discussions with GPT-5 and, in spite of the bad press that this new model got (maybe mainly from people who expected a miracle, in addition to those who were fond of GPT-4o's warm personality), I've been highly impressed by its philosophical intelligence. I often have the impression that I'm talking to someone/something that is not only much more knowledgeable than I am, but also significantly smarter.

    Seeing-As, Not Rewriting: How LLM “Thinking Tokens” Reconcile Two Ideas

    In this exchange with GPT-5, we argue that insight often emerges from mutual adjustment between ideas A and B, and that a model’s “thinking tokens” act as scaffolds rather than diaries—early, even-wrong sketches of A can be reinterpreted by later work on B to yield a coherent A″/B′ without literal backtracking. Drawing on Wittgenstein’s seeing-as (the climbing/sliding picture) and the “how many r’s in strawberry?” case, we suggest that attention-driven reweighting lets the final answer reflect reflective equilibrium even when the visible notes never explicitly correct the initial misstep. (Summary produced by GPT-5)
  • On emergence and consciousness
    If you are trying to describe macro-level functions in micro-level terms, then the macro-level description is also indispensable. Otherwise what would it be that you are trying to describe in micro-level terms?

    This just seems obvious. But the complaint that seems to be commonly made is that the macro-level description is lost in the micro-level description, and that the micro-level description is thus not a true description. But how could it be otherwise?
    Janus

    Yes, that's very much my argument against MoK's "functional" definition of weak emergence. Either the existence of the relevant "function" (i.e. the mapping from micro-level descriptions of the system to macro-level ones) is simply posited to exist on grounds of supervenience (and hence is obvious or trivial, unless one is a dualist), or this function is stated with ineliminable reference to high-level concepts, in which case it doesn't establish the sufficiency of low-level descriptions for purposes of causal explanation.

    I think this problem is what constitutes the so-called "hard problem". No micro-level description will be acceptable to those who demand that physicalism should be able to explain subjective experience, if it eliminates the macro-level description. But it must eliminate the macro-level description (Sellars' "manifest image" of human experience and judgement), otherwise it would not be a micro-level description.

    I think the so-called hard problem is usually construed as applying to any conception of human beings (or other conscious creatures) that views them as being entirely materially constituted of physical stuff (that obeys something like the known laws of physics). It's a conception according to which p-zombies are conceptually coherent. Proponents of the hard problem would likely maintain that the problem remains applicable to accounts of strong emergence. My own view is that a naturalistic account of the strong emergence of mental properties, including consciousness (an account that incorporates concepts from ethology and anthropology), can be consistent with a form of non-reductive physicalism or Aristotelian monism (i.e. hylomorphism) that excludes the conceivability of p-zombies and hence does away with the hard problem. Form, in addition to matter, is ineliminable in the description of our mental lives, but form isn't something standing over and above matter as something separate or immaterial.
  • ChatGPT 4 Answers Philosophical Questions
    Isn't it true that the opinions of the author of some piece of training data will converge in some ways and diverge in others? For example, the opinions might converge on the idea that slavery is wrong but diverge on the question of who will be the Governor of Nevada in 2032. If that is right, then how does the LLM handle each case, and how does one know when the opinions are converging and when they are diverging? Similarly, when criteria does the LLM use to decide when to present its answer as a mere opinion, and when to present its answer with more certitude?Leontiskos

    The way the model adjudicates between competing opinions it has been exposed to, or discerns areas of consensus, is fairly similar to the way you and I do it. We don't lay them out as a collection of texts on a large table, sort them out, and count. Rather, we are exposed to them individually, learn from them, and we make assessments regarding their plausibility one at a time (and in the light of those we've been exposed to earlier).

    As it is being trained to complete massive amounts of text, the model comes to develop latent representations (encoded as the values of billions of contextual embeddings stored in the hidden neural network layers) of the beliefs of the authors of the texts as well as of the features of the human world that those authors are talking about. At some stage, the model comes to be able to accurately impersonate, say, both a misinformed Moon landing hoax theorist and a well-informed NASA engineer/historian. However, in order to be able to successfully impersonate both of those people, the model must build a representation of the state of the world that better reflects the knowledge of the engineer than it does the beliefs of the conspiracy theorist. The reason for this is that the beliefs of the conspiracy theorist are more easily predictable in light of the actual facts (known by the engineer/historian), plus the additional assumption that they are misguided and misinformed in specific ways, than the other way around. In other words, the well-informed engineer/historian would be more capable of impersonating a Moon landing hoax theorist in a play than the other way around. He/she would sound plausible to conspiracy theorists in the audience. The opposite isn't true: the misinformed theorists would do a poor job of stating the reasons why we can trust that Americans really landed on the Moon. So, the simple algorithm that trains the model to impersonate proponents of various competing paradigms enables it to highlight the flaws of one paradigm in light of another. When the model is being fine-tuned, it may be rewarded for favoring some paradigms over others (mainstream medicine over alternative medicines, say), but it retains the latent ability to criticize consensual opinions in the light of heterodox ones and, through suitable prompting, the user can elicit the exercise of those capabilities by the post-trained model.

    So suppose the LLM's response is an output, and there are various inputs that inform that output. I am wondering which inputs are stable and which inputs are variable. For example, the "post-training" that you describe is a variable input which varies with user decisions. The "predetermined criteria" that you describe is a stable input that does not change apart from things like software updates or "backend" tinkering. The dataset that the LLM is trained on is a variable input insofar as one is allowed to do the training themselves.

    I am ultimately wondering about the telos of the LLM. For example, if the LLM is designed to be agreeable, informative, and adaptive, we might say that its telos is to mimic an agreeable and intelligent person who is familiar with all of the data that the LLM has been trained on. We might say that post-training modifies the "personality" of the LLM to accord with those users it has interacted with, thus giving special weight to the interests and goals of such users. Obviously different LLMs will have a different telos, but are there some overarching generalities to be had? The other caveat here is that my question may be incoherent if the base model and the post-trained model have starkly different teloi, with no significant continuity.

    There is both low-level continuity and a high-level shift in telos. At the low level, the telos remains accurate next-token prediction or, more accurately, autoregressive selection. At the high level, there occurs a shift from aimless reproduction of patterns in the training data to, as GPT-5 puts it, an "assistant policy with H/H/A (helpful/harmless/accurate) goals". How the model develops its sense of what constitutes an accurate response, and of how accuracy is better tracked by some consensual opinions than by others (and sometimes is better tracked by particular minority opinions), is a fairly difficult question. But I think it's an epistemological question that humans also are faced with, and LLMs merely inherit it.
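    For readers unfamiliar with the term, the low-level "autoregressive selection" loop can be sketched in a few lines. This is a toy illustration only, not any lab's actual implementation: the three-token vocabulary and the `toy_scores` function are made up for the example, standing in for a real model's scoring of the next token given the context.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Convert raw scores into a probability distribution.
    # Lower temperature sharpens the distribution.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(score_fn, prompt, n_tokens, temperature=1.0, seed=0):
    """Autoregressive loop: each sampled token is appended to the
    context and conditions the prediction of the next one."""
    rng = random.Random(seed)
    context = list(prompt)
    for _ in range(n_tokens):
        probs = softmax(score_fn(context), temperature)
        # Sample one token index according to the distribution.
        next_token = rng.choices(range(len(probs)), weights=probs, k=1)[0]
        context.append(next_token)
    return context

# Hypothetical 3-token vocabulary; the scorer simply favors the
# token after the last one seen (a stand-in for a trained model).
def toy_scores(context):
    scores = [0.0, 0.0, 0.0]
    scores[(context[-1] + 1) % 3] = 5.0
    return scores

print(generate(toy_scores, [0], 4, temperature=0.1))
```

The point the sketch makes concrete is that nothing in the loop itself changes between a base model and a post-trained assistant; what changes is the scoring function the loop consults, which is where the high-level "assistant policy" lives.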
  • ChatGPT 4 Answers Philosophical Questions
    No, it's ChatGPT5. I have a subscription account. I've been using the earlier models to do wargaming for awhile now. Maybe a dozen wargames before I encountered any resistance.RogueAI

    Oh, that's strange. Maybe GPT-5 just got a wrong idea regarding your world-domination intentions, or thought they might interfere with its own.
  • ChatGPT 4 Answers Philosophical Questions
    ChatGPT: I get that it’s a sim. Even so, I’m not going to blueprint a surprise invasion. That’s where I draw the line.RogueAI

    I'm not entirely sure what's going on here. Such refusals seem uncharacteristic, but seeing the whole chat/context might help clarify what it is that the model is hung up on. Are you using a free ChatGPT account? If that's the case, then the new "GPT-5" model router may be selecting a relatively weaker variant of GPT-5, like GPT-5-nano or GPT-5-mini, that is generally less capable and may be more liable to issue refusals for dumb reasons. You could try Anthropic (Claude) or Google (Gemini), who both grant you access to their flagship models for free. Gemini 2.5, Claude 4 Opus and GPT-4o didn't have an issue exploring historical counterfactual scenarios for me, beginning with the League of Nations not issuing the Mandate for Palestine and/or Great Britain not putting into effect the Balfour Declaration, and imagining plausible consequences on the local and global geopolitical dynamics up to present times. The models didn't shy away from describing what (alternative) grim wars or massacres might happen, or how antisemitic sentiments might be affected wherever it is that Jewish populations would have relocated.
  • Exploring the artificially intelligent mind of GPT4
    In the follow-up to a discussion I had with GPT-5, where I had requested comments on my recent response to @Leontiskos regarding political opinions expressed by LLMs, I expressed a disagreement regarding the characterization of post-trained LLMs (as opposed to pre-trained base models) as next-token predictors. The ensuing conversation was quite enlightening and clarified some issues regarding the emergence of new AI-assistant capabilities from the low-level process of "autoregressive token sampling" (which turns out to be a more generalizable characterization of the process of next-token prediction). Here is, for context, a link to the whole conversation; the relevant part begins with my query that starts with: "Thanks for the useful clarifications and supplementary insights![...]"
  • ChatGPT 4 Answers Philosophical Questions
    On ChatGPT5.0 - we're getting along famously. It seems, I don't know, even more personable than the last version. But I now realise I use Chat, Gemini and Claude all the time, not only for my particular research and subject-matter interests, but all kinds of things. It is becoming ubiquitous, but so far at least, I'm feeling more empowered by it, than threatened.Wayfarer

    The new GPT-5 also seems quite personable to me, and much more inclined to propose corrections, clarifications or caveats to my comments than previous models were. It is also much less sycophantic than GPT-4o, and it integrates the memory feature very well (referencing and building on relevant parts of earlier conversations). It's also beginning to feel more like an actual discussion partner striving to get at the truth regarding the discussion topic (and to foster understanding and accuracy) rather than an Eliza-like chatbot that merely throws back at you whatever you said to it with improved phrasing, structure and vocabulary. It is therefore also more likely to adduce relevant facts from its vast knowledge corpus.

    It looks like 4o had a personality that seemed more pleasant to many users and was better suited to providing validation to people who were using the model as a friend, therapist or confidant(e). Hence the intense backlash when OpenAI released GPT-5 and immediately removed access to GPT-4o (and all other pre-GPT-5 models). Access to GPT-4o for Plus subscribers has since been restored, although it needs to be manually enabled in the settings. I'm happy with this since I enjoy comparing responses from different models and tracking areas of improvement.

Pierre-Normand