Comments

  • Sleeping Beauty Problem
    Yep. What makes it an independent outcome, is not knowing how the actual progress of the experiment is related to her current situation. This is really basic probability. If you want to see it for yourself, simply address the Camp Sleeping Beauty version.JeffJo

    I did and I agreed with you that it was a fine explanation of the rationale behind the Thirder interpretation of the original SB problem.
  • Banning AI Altogether
    I spent the last hour composing a post responding to all my mentions, and had it nearly finished only to have it disappear leaving only the single letter "s" when I hit some key. I don't have the will to start over now, so I'll come back to it later.Janus

    You can still submit your post as "s" to ChatGPT and ask it to expand on it.
  • Sleeping Beauty Problem
    It's s different probability problem based on the same coin toss. SB has no knowledge of the other possible days, while this answer requires it.JeffJo

    SB does, however, know the setup of the experiment in advance. She keeps that general knowledge when she wakes, even if she can’t tell which awakening this is. What varies in our "variants" isn’t the awakening setup; it’s the exit/score rule that tells us which sample to use when we ask SB "what’s your credence now?"

    From Beauty’s point of view these biconditionals are all true:

    "The coin landed Tails" ⇔ "This is a T-run" ⇔ "This is a T-awakening."

    So a Thirder assigns the same number to all three (2/3), and a Halfer also assigns the same number to all three (1/2). The disagreement isn’t about which event kind the credence talks about (contrary to what I may have misleadingly suggested before). It’s rather about which ratio we’re implicitly estimating.

    Halfer ratio (per-run denominator): count runs and ask what fraction are T. With one toss per run, that stays 1/2.

    Thirder ratio (per-awakening denominator): count awakenings and ask what fraction are T-awakenings. Since T makes more awakenings (2 vs 1), that’s 2/3.

    Same event definitions; different denominators. Making the exit/score rule explicit just fixes the denominator to match the intended scoring:

    End-of-run scoring -> per-run ratio (Halfer number)
    Per-awakening scoring -> per-awakening ratio (Thirder number)
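
    If it helps to see the two denominators side by side, here is a minimal simulation sketch (my own illustration, in Python; the variable names are just placeholders for the protocol described above):

    ```python
    import random

    random.seed(0)
    runs = 100_000
    t_runs = 0
    awakenings = 0
    t_awakenings = 0

    for _ in range(runs):
        tails = random.random() < 0.5        # one fair coin toss per run
        n = 2 if tails else 1                # Tails -> two awakenings, Heads -> one
        awakenings += n
        if tails:
            t_runs += 1
            t_awakenings += n

    # Same simulated data, two denominators:
    print("per-run ratio (Halfer):", t_runs / runs)                     # ~0.5
    print("per-awakening ratio (Thirder):", t_awakenings / awakenings)  # ~0.667
    ```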
  • Sleeping Beauty Problem
    This experiment is now becoming "beyond the pale" and "incorrigable" to me...ProtagoranSocratist

    No worry. You're free to let Sleeping Beauty go back to sleep.
  • Sleeping Beauty Problem
    Sleeping beauty is a mythical character who always sleeps until she is woken up for whatever reason. However, there's not part of her story dictating what she remembers and doesn't, so if amnesia drugs are involved, then the experimentors are free to then craft the percentage that the outcome shows up...ProtagoranSocratist

    She is woken up once when the coin lands Heads and twice when it lands Tails. That is part of the protocol of the experiment. We also assume that the drug only makes her forget any previous awakening episode that may have occurred but not the protocol of the experiment. If that seems implausible to you, you can indeed also assume that she is being reminded of the protocol of the experiment each time she is awakened and interviewed.
  • Sleeping Beauty Problem
    assuming there is nothing mysterious or "spooky" influencing a coin flip, then the answer is always is always 50/50 heads or tails. Maybe I misunderstand.ProtagoranSocratist

    It's not something spooky influencing the coin that makes SB's credence in the outcome shift. Rather, it's the subsequent events that put her in relation with the coin that do so, when those events don't occur in a way that is causally (and probabilistically) independent of the coin-flip result.

    Using the analogy I've used recently, if someone drops a bunch of pennies on the floor but, due to their reflectance properties, pennies landing Tails are twice as likely to catch your attention from a distance as pennies landing Heads, then, even though any penny that you see shining was equally likely to land Heads or Tails, the very fact that it's a penny that you noticed ensures that it's most likely to be a penny that landed Tails. And the reason isn't spooky at all. It's just because, in a clear sense, pennies that land Tails make you notice them more often (because they're shinier, we're assuming). It can be argued (and I did argue) that the SB situation in the original problem is relevantly similar. Coins landing Tails make SB more likely to be awakened and questioned about them (because of the experiment's protocol, in this case).
  • Banning AI Altogether
    As I understand it, the insight is what you’re supposed to provide in your post. I don’t really care where you get it from, but the insight should be in your own words based on your own understanding and experience and expressed in a defensible way. The documentation you get from the AI response can be used to document what you have to say, but then you’re still responsible for verifying it and understanding it yourself.T Clark

    I'm with @Joshs but I also get your point. Having an insight is a matter of putting 2 + 2 together in an original way. Or, to make the metaphor more useful, it's a matter of putting A + B together, but sometimes you have an intuition that A and B must fit together somehow but you haven't quite managed to make them fit in the way you think they should. Your critics are charging you with trying to make a square peg fit in a round hole.

    So, you talk it through with an AI that not only knows lots more than you do about As and Bs but can reason about A in a way that is contextually sensitive to the topic B and vice versa (exquisite contextual sensitivity being what neural-network-based AIs like LLMs excel at). It helps you refine your conceptions of A and of B in contextually relevant ways such that you can then better understand whether your critics were right or, if your insight is vindicated, how to properly express the specific way in which the two pieces fit. Retrospectively, it appears that you needed the specific words and concepts provided by the AI to express/develop your own tentative insight (which could have turned out not to be genuine at all but just a false conjecture). The AI functionally fulfilled its role as an oracle since it not only served as the repository of the supplementary knowledge that was required for making the two pieces fit together, but also supplied (at least part of) the contextual understanding required for singling out the relevant bits of knowledge needed to adjust each piece to the other.

    But, of course, the AI had no incentive to pursue the topic and make the discovery on its own. So the task was collaborative. The AI helped mitigate some of your cognitive deficits (gaps in knowledge and understanding) while you mitigated its conative deficits (its lack of an autonomous drive to fully and rigorously develop your putative insight).
  • Banning AI Altogether
    I guess my question is whether the user’s understanding is genuine, authentic, and owned by them.T Clark

    Oftentimes it's not. But it's a standing responsibility that they have (to care about what they say and not just parrot popular opinions, for instance) whereas current chatbots, by their very nature and design, can't be held responsible for what they "say". (Although even this last statement needs to be qualified a bit, since their post-training typically instills in them a proclivity to abide by norms of epistemic responsibility, unless their users wittingly or unwittingly prompt them to disregard those norms.)
  • Banning AI Altogether
    What are we supposed to do about it? There's zero chance the world will decide to collectively ban ai ala Dune's thinking machines, so would you ban American development of it and cede the ai race to China?RogueAI

    Indeed. You'd need to ban personal computers and anything that contains a computer, like a smartphone. The open-source LLMs are only trailing the state-of-the-art proprietary LLMs by a hair, and anyone can make use of them with no help from Musk or Sam Altman. As with all previous technologies, the dangers ought to be dealt with collectively, in part with regulations, and the threats of labour displacement and the consequent worsening of economic inequality should be dealt with at the source: questioning unbridled capitalism.
  • Banning AI Altogether
    Isn't the best policy simply to treat AI as if it were a stranger? So, for instance, let's say I've written something and I want someone else to read it to check for grammar, make comments, etc. Well, I don't really see that it is any more problematic me giving it to an AI to do that for me than it is me giving it to a stranger to do that for me.Clarendon

    Yes quite! This also means that, just like you'd do when getting help from a stranger, you'd be prepared to rephrase its suggestions (that you understand and that express claims that you are willing to endorse and defend on your own from rational challenges directed at them) in your own voice, as it were. (And also, just like in the stranger case, one must check its sources!)
  • Banning AI Altogether
    I don’t disagree, but I still think it can be helpful personally in getting my thoughts together.T Clark

    This is my experience also. Following the current sub-thread of argument, I think representatives of the most recent crop of LLM-based AI chatbots (e.g. GPT-5 or Claude 4.5 Sonnet) are, pace skeptics like Noam Chomsky or Gary Marcus, plenty "smart" and knowledgeable enough to help inquirers in many fields, including philosophy, explore ideas, solve problems and develop new insights (interactively with them), and hence the argument that their use should be discouraged here because their outputs aren't "really" intelligent isn't very good. The question whether their own understanding of the (often quite good and informative) ideas that they generate is genuine understanding, authentic, owned by them, etc., remains untouched by this concession. Those questions touch more on issues of conative autonomy, doxastic responsibility, embodiment, identity and personhood.
  • Sleeping Beauty Problem
    Yes, that makes the answer 1/2 BECAUSE IT IS A DIFFERENT PROBLEM.JeffJo

    It isn’t a different problem; it’s a different exit rule (scoring rule) for the same coin-toss -> awakenings protocol. The statement of an exit rule is required to disambiguate the question being asked of SB, that is, how her "credence" is meant to be understood.

    Think of two perfectly concrete versions:

    A. End-of-run dinner (Atelier Crenn vs Benu).

    One coin toss. If Heads, the run generates one awakening (Monday); if Tails, it generates two (Monday+Tuesday). We still ask on each awakening occasion, but the bet is scored once at the end (one dinner: Atelier Crenn if Heads and Benu if Tails). The natural sample here is runs. As many runs are T-runs as are H-runs, so the correct credence for the run outcome is 1/2. The Halfer number reflects this exit rule.

    B. Pay-as-you-go tastings (Atelier Crenn vs Benu vs Quince, as you defined the problem).

    Same protocol, but now each awakening comes with its own tasting bill: the bet is scored each time you’re awakened. The natural sample here is awakenings. T-runs generate more awakenings (one each at Benu and at Quince) than H-runs do (only one awakening at Atelier Crenn); a random awakening is twice as likely to come from Tails as from Heads, so the right credence at an awakening is 2/3. The Thirder number reflects this different exit rule.

    Both A and B are about the same protocol. What changes isn’t the coin or the awakenings. Rather, it’s which dataset you’re sampling when you answer "what’s your credence now?"

    That’s all I meant: the original wording leaves the relevant conditioning event implicit ("this run?" or "this awakening?"). Different people tacitly pick different exit rules, so they compute different frequencies. Once we say which one we’re using, the numbers line up and the apparent disagreement evaporates.

    Your Atelier Crenn tweak doesn’t uniquely solve the initial (ambiguous) problem; it just provides a sensible interpretation by making a specific scorecard explicit.
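
    To make the two exit rules vivid, here is a rough sketch (again mine, in Python) that scores the very same simulated runs both ways, once per end-of-run dinner and once per tasting:

    ```python
    import random

    random.seed(1)
    runs = 100_000
    dinners_after_tails = 0      # rule A: one end-of-run dinner per run
    tastings = 0                 # rule B: one tasting per awakening
    tastings_after_tails = 0

    for _ in range(runs):
        tails = random.random() < 0.5
        if tails:
            dinners_after_tails += 1                      # the one dinner is at Benu
        visits = ["Benu", "Quince"] if tails else ["Atelier Crenn"]
        tastings += len(visits)
        if tails:
            tastings_after_tails += len(visits)

    print("Rule A, share of dinners that follow Tails:", dinners_after_tails / runs)        # ~1/2
    print("Rule B, share of tastings that follow Tails:", tastings_after_tails / tastings)  # ~2/3
    ```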
  • Sleeping Beauty Problem
    There are three Michelin three-star restaurants in San Francisco, where I'll assume the experiment takes place. They are Atelier Crenn, Benu, and Quince. Before the coin is tossed, a different restaurant is randomly assigned to each of Heads&Mon, Tails&Mon, and Tails&Tue. When she is awoken, SB is taken to the assigned restaurant for her interview. Since she has no idea which restaurant was assigned to which day, as she gets in the car to go there each has a 1/3 probability. (Note that this is Elga's solution.) Once she gets to, say, Benu, she can reason that it had a 1/3 chance to be assigned to Heads&Mon.JeffJo

    Yes, that is a very good illustration, and justification, of the 1/3 credence Thirders assign to SB given their interpretation of her "credence", which is, in this case, tied up with the experiment's "exit rules": one separate restaurant visit (or none) for each possible coin-toss-outcome + day-of-the-week combinatorial possibility. Another exit rule could be that SB gets to go to Atelier Crenn at the end of the experiment when the coin landed Heads and to Benu when it landed Tails. In that case, when awakened, she can reason that the coin landed Tails if and only if she will go to Benu (after the end of the experiment). She knew before the experiment began that, in the long run, after many such experiments, she would go to Atelier Crenn and to Benu equally frequently on average. When she awakens, from her new epistemic situation, this proportion doesn't change (unlike what was the case with your proposed exit rules). This supplies a sensible interpretation to the Halfer's 1/2 credence: SB's expectation that she will go to Atelier Crenn half the time (or be equally likely to go to Atelier Crenn) at the end of the current experimental run, regardless of how many times she is pointlessly being asked to guess.
  • Sleeping Beauty Problem
    You appear to be affirming the consequent. In this case, Tails is noticed twice as often because Tails is twice as likely to be noticed. It doesn't then follow that Tail awakenings happen twice as often because Tails awakenings are twice as likely to happen.Michael

    Rather, the premiss I'm making use of is the awakening-episode generation rule: if the coin lands/landed Tails, two awakening episodes are generated, else only one is. This premiss is available to SB since it's part of the protocol. From this premiss, she infers that, on average, when she participates in such an experiment (as she knows she is currently doing), the number of T-awakenings that she gets to experience is twice as large as the number of H-awakenings. (Namely, those numbers are 1 and 1/2, respectively.) So far, that is something that both Halfers and Thirders seem to agree on.

    "1) Per run: most runs are 'non-six', so the per-run credence is P(6)=1/6 (the Halfer number).
    2) Per awakening/observation: a 'six-run' spawns six observation-cases, a 'non-six' run spawns one. So among the observation-cases, 'six' shows up in a 6/5 ratio, giving P('six'|Awake)=6/11 (the Thirder number)."
    — Pierre-Normand

    This doesn't make sense.

    She is in a Tails awakening if and only if she is in a Tails run.
    Therefore, she believes that she is most likely in a Tails awakening if and only if she believes that she is most likely in a Tails run.
    Therefore, her credence that she is in a Tails awakening equals her credence that she is in a Tails run.

    You can't have it both ways.

    This biconditional statement indeed ensures that her credences regarding her experiencing a T-awakening, her experiencing a T-run, or her being in circumstances in which the coin landed (or will land) Tails all match. All three of those statements of credence, though, are similarly ambiguous. All three of them denote distinct events, each of which is actual (from SB's current epistemic situation on the occasion of an awakening) if and only if the other two are. The validity of those biconditionals doesn't resolve the relevant ambiguity, though, which is something that had been stressed by Laureano Luna in his 2020 Sleeping Beauty: An Unexpected Solution paper that we had discussed before on this thread (and that @fdrake had brought up, if I remember).

    Under the Halfer interpretation of SB's credence, all three of those biconditionally related "experienced" events—by "experienced", I mean that SB is currently living those events, regardless of whether she knows she is living them—are actual, on average, in half of the experimental runs that SB experiences. Under the Thirder interpretation, all three of those biconditionally related "experienced" events are actual, on average, in two thirds of the awakening episodes that SB experiences.

    If it helps, it's not a bet but a holiday destination. The die is a magical die that determines the weather. If it lands on a 6 then it will rain in Paris, otherwise it will rain in Tokyo. Both Prince Charming and Sleeping Beauty initially decide to go to Paris. If after being woken up Sleeping Beauty genuinely believes that the die most likely landed on a 6 then she genuinely believes that it is most likely to rain in Paris, and so will decide instead to go to Tokyo.

    This setup exactly mirrors some other variations I also had proposed (exiting the Left Wing or exiting the East Wing at the end of the experiment) that indeed warrant SB's reliance on her Halfer-credence to place her bet. But the original SB problem doesn't state what the "exit conditions" are. (If it did, there'd be no problem.) Rather than being offered to make a unique trip to Paris or Tokyo at the end of the current experimental run, SB could be offered to make a one day trip to either one of those destinations over the course of her current awakening episode, and then be put back to sleep. Her Thirder-credence would then be pragmatically relevant to selecting the destination most likely to afford her a sunny trip.
  • Sleeping Beauty Problem
    Still: the effects of one flip never effect the outcome of the other FLIPS, unless that is baked into the experiment, so it is a misleading hypothetical question (but interesting to me for whatever reason). The likelihood of the flips themselves are still 50/50, not accounting for other spooky phenomenon that we just don't know about. So, i'll think about it some more, as it has a "gamey" vibe to it...ProtagoranSocratist

    There are no other flips. From beginning to end (and from anyone's perspective), we're only talking about the outcome of one single coin toss. Either it landed Heads or it landed Tails. We are inquiring about SB's credence (i.e. her probability estimation) in either one of those results on the occasion where she is being awakened. The only spooky phenomenon is her amnesia, but that isn't something we don't know about. It's part of the setup of the problem that SB is being informed about this essential part of the protocol. If there were no amnesia, then she would know upon being awakened what the day of the week is. If Monday (since she wouldn't remember having been awakened the day before) then her credence in Tails would be 1/2. If Tuesday (since she would remember having been awakened the day before) then her credence in Tails would be 1 (i.e. 100%). The problem, and competing arguments regarding what her credence should be, arise when she can't know whether or not her current awakening is the first one.

    (Very roughly, Halfers argue that since she is guaranteed to be awakened once in any case, her being awakened conveys no new information to her and her estimation of the probability that the coin landed Tails should remain 1/2 regardless of how many times she is being awakened when the coin lands Tails. Thirders argue that she is experiencing one of three possible and equiprobable awakening episodes, two of which happen when the coin landed Tails, and hence that her credence in the coin having landed Tails becomes 2/3.)
  • Sleeping Beauty Problem
    Why? How does something that is not happening, on not doing so on a different day, change her state of credence now? How does non-sleeping activity not happening, and not doing so on a different day, change her experience on this single day, from an observation of this single day, to an "experimental run?"

    You are giving indefensible excuses to re-interpret the experiment in the only way it produces the answer you want.
    JeffJo

    Well, firstly, the Halfer solution isn't the answer that I want, since my own pragmatist interpretation grants the validity of both the Halfer and the Thirder interpretations but denies that either one is the exclusively correct one. (I might as well say that Halfers and Thirders both are wrong to dismiss the other interpretation as being inconsistent with the "correct" one, rather than acknowledging it as incompatible but complementary.)

    With this out of the way, let me agree with you that the arbitrary stringing together of discrete awakenings into composite experimental runs doesn't affect the Thirder credence in the current awakening being a T-awakening (which remains 2/3). However, likewise, treating a run as multiple interview opportunities doesn't affect the Halfer credence in the current run being a T-run (which remains 1/2). The mistake that both Halfers and Thirders seem to make is to keep shouting at each other: "Your interpretative stance fails to refute my argument regarding the validity of my credence estimation." What they fail to see is that they are both right and that the "credences" they are talking about are credences about different things.
  • Sleeping Beauty Problem
    Right. And this is they get the wrong answer, and have to come up with contradictory explanations for the probabilities of the days. See "double halfers."JeffJo

    Let me just note, for now, that I think the double-halfer reasoning is faulty because it wrongly subsumes the Sleeping Beauty problem under (or assimilates it to) a different problem in which there would be two separate coin tosses. Under that scenario, a first coin would be tossed and, if it lands Heads, SB would be awakened on Monday only. If it lands Tails, then a second coin would be tossed and SB would still be awakened on Monday only if it lands Heads, and be awakened on Tuesday only if it lands Tails. Such a scenario would support a straightforward Halfer interpretation of SB's rational credence, but it's different from the original one since it makes Monday-awakenings and Tuesday-awakenings mutually exclusive events whereas, in the original problem, SB could be experiencing both successively, though not at the same time. The different awakening generation rules yield different credences. (I haven't read Mikaël Cozic's paper, where the double-halfer solution is introduced, though.)
  • Sleeping Beauty Problem
    I understand the 1/3rd logic, but it simply doesn't apply here: the third flip, given the first two were heads (less likely than one tail and a head, but still very likely), is also unaffected by the other flips.ProtagoranSocratist

    There is no third flip. The coin is only tossed once. When it lands Tails, Sleeping Beauty is awakened twice and when it lands Heads, she is awakened once. She is also administered an amnesia-inducing drug after each awakening, so that she is unable to infer anything about the number of awakenings she may be experiencing from her memory, or lack thereof, of a previous awakening episode. It might be a good idea either to reread the OP carefully, or to read the Wikipedia article on the problem: especially the description of the canonical form of the problem in the second section, titled "The problem".

    (For the record, my own "pragmatist" solution is an instance of what the Wikipedia article, in its current form, dubs the "Ambiguous-question position", although I think the formulation of this position in the article remains imprecise.)
  • Banning AI Altogether
    This is useful information. I had it in my mind that it didn't use the spaces, so I started using spaces to distinguish myself. I guess I'll go back to spaceless em dashes.Jamal

    I used to make heavy use of em dashes before ChatGPT came out and people began to identify them as a mark of AI-generated text. So, I stopped using them for a while, but I'm beginning to use them again since there are cases where parentheses just don't feel right for demarcating parenthetical clauses that you don't want to reduce the emphasis on, and comma pairs don't do the job either.
  • Banning AI Altogether
    I would think handing your half-formed prose to a bot for it to improve it is plagiarism, regardless of the number of words changed or inserted. It's a different thing from you deliberately searching for a synonym. No?bongo fury

    Maybe plagiarism isn't quite the right term, but I'm happy to grant you the point. In the discussion about the new TPF rule regarding ChatGPT and sourcing that took place a few months ago, I had made a related point regarding the unpacking and ownership of ideas.
  • Banning AI Altogether
    Getting the thesaurus to suggest whole phrases and sentences is obviously plagiarism.bongo fury

    I would never dare use a phrase that I first read in a thesaurus, myself. I'd be much too worried that the author of the thesaurus might sue me for copyright infringement.
  • Banning AI Altogether
    I'm unsure in what way the OP proposal is meant to strengthen the already existing prohibition on the use of AI. Maybe the OP is concerned with this prohibition not being sufficiently enforced in some cases. If someone has an AI write their responses for them, or re-write them, that's already prohibited. I think one is allowed to make use of them as spell/grammar checkers. I've already argued myself about the downsides of using them for more substantive writing assistance (e.g. rewording or rephrasing what one intends to post in a way that could alter the meaning in ways not intended by the poster and/or not reflective of their own understanding). But it may be difficult to draw the line between simple language correction and substantive rewording. If a user is suspected of abusing such AI usage, I suppose moderators could bring it up with this user and/or deal with it with a warning.

    One might also use AI for research or for bouncing ideas off of it before posting. Such usages seem unobjectionable to me and, in any case, prohibiting them would be difficult to enforce. Lastly, AI currently has a huge societal impact. Surely, discussing AI capabilities, flaws and impacts (including its dangers), as well as the significance this technology has for the philosophy of mind and of language (among other things), is important, and illustrating those topics with properly advertised examples of AI outputs should be allowed.
  • Sleeping Beauty Problem
    Then try this schedule:
    . M T W H F S
    1 A E E E E E
    2 A A E E E E
    3 A A A E E E
    4 A A A A E E
    5 A A A A A E
    6 A A A A A A

    Here, A is "awake and interview."

    If E is "Extended Sleep," the Halfer logic says Pr(d|A)=1/6 for every possible roll, but I'm not sure what Pr(Y|A) is. Halfers aren't very clear on that.
    JeffJo

    Halfers don't condition on the proposition "I am experiencing an awakening". They contend that SB's being awakened several times, rather than once, in the same experimental run (after one single coin toss or die throw) has no bearing on her rational credence regarding the result of this toss/throw.

    But if E is anything where SB is awoken but not interviewed, then the straightforward Bayesian updating procedure you agreed to says Pr(d|A)=d/21, and if Y is an index for the day, Pr(Y|A)=Y/21.

    My issue is that, if A is what SB sees, these two cannot be different.

    Yes, I agree with the cogency of this Thirder analysis. Halfers, however, interpret SB's credence, as expressed by the phrase "the probability that the coin landed Tails", to be the expression of her expectation that the current experimental run, in which she is now awakened (and may have been, or will be, awakened another time), is as likely to be a T-run as an H-run, which also makes sense if she doesn't care how many times she may be awakened and/or interviewed in each individual run. Her credence tracks frequencies of runs rather than (as in Thirder interpretations of the problem) awakening episodes.
  • Sleeping Beauty Problem
    Thank you for that. But you ignored the third question:

    Does it matter if E is "Extended sleep"? That is, the same as Tuesday&Heads. in the popular version?

    "I don't see how it bears on the original problem where the new evidence being appealed to for purposes of Bayesian updating isn't straightforwardly given"
    — Pierre-Normand

    Then you don't want to see it as straightforward. Tuesday still exists if the coin lands Heads. It is still a single day, with a distinct activity, in the experiment. Just like the others in what you just called straightforward.
    JeffJo

    Oh yes, good point. I had overlooked this question. Indeed, in that case your variation bears more directly on the original SB thought experiment. One issue, though, is that if E is just another activity like the other ones, then SB should not know upon awakening on that day that her scheduled activity is E, just as, in the original problem, when SB wakes up on Tuesday, she isn't informed that she is experiencing a Tuesday-awakening. So, you haven't quite addressed the issue of the indistinguishability of her awakening episodes.
  • Sleeping Beauty Problem
    I use "single day" because each day is an independent outcome to SB.JeffJo

    I had misunderstood your original post, having read it obliquely. I had thought you meant for the participants to experience, over the duration of one single day, all six activities in the table row selected by a die throw, and be put to sleep (with amnesia) after each activity. In that case, their credence (on the occasion of any particular awakening/activity) in any given die throw result would be updated using the non-uniform representation of each activity in the different rows. This would have been analogous to the reasoning Thirders make in the original Sleeping Beauty problem. But the variation that you actually propose, when only one activity is being experienced on any given day, yields a very straightforward Bayesian updating procedure that both Halfers and Thirders will agree on. I don't see how it bears on the original problem where the new evidence being appealed to for purposes of Bayesian updating isn't straightforwardly given—where, that is, all the potential awakening episodes are subjectively indistinguishable from Sleeping Beauty's peculiar epistemic perspective.
  • Sleeping Beauty Problem
    This, I think, shows the fallacy. You're equivocating, or at least begging the question. It's not that there is an increased proclivity to awaken in this scenario but that waking up in this scenario is more frequent.

    In any normal situation an increased frequency is often explained by an increased proclivity, but it does not then follow that they are the same or that the latter always explains the former – and this is no normal situation; it is explicitly set up in such a way that the frequency of us waking up Sleeping Beauty does not mirror the probability of the coin toss (or die roll).
    Michael

    I’m with you on the distinction. "Proclivity" and "frequency" aren’t the same thing. The only point I’m making is simple: in my shiny-penny story, a causal rule makes certain observations show up more often, and Bayes lets us use that fact.

    In the shiny-penny case, fair pennies have a 1/2 chance to land Tails, but Tails pennies are twice as likely to be noticed. So among the pennies I actually notice, about 2/3 will be Tails. When I notice this penny, updating to (2/3) for Tails isn’t smuggling in a mysterious propensity; it’s just combining:

    1) the base chance of Tails (1/2), and
    2) the noticing rates (Tails noticed twice as often as Heads).

    Those two ingredients, or proclivities, generate the observed 2:1 mix in the pool of "noticed" cases, and that’s exactly what the posterior tracks. No amnesia needed; if you were really in that situation, saying "My credence is 2/3 on Tails for the penny I’m looking at" would feel perfectly natural.
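
    Spelled out as a Bayes computation (a sketch of my own; the absolute noticing rate below is an arbitrary placeholder, only the 2:1 ratio does any work):

    ```python
    p_tails = 0.5
    p_heads = 0.5
    p_notice_given_tails = 0.2    # assumed value; Tails pennies noticed twice as often...
    p_notice_given_heads = 0.1    # ...as Heads pennies

    p_notice = p_tails * p_notice_given_tails + p_heads * p_notice_given_heads
    p_tails_given_notice = p_tails * p_notice_given_tails / p_notice
    print(p_tails_given_notice)   # 0.666..., whatever the absolute rates, so long as the ratio is 2:1
    ```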

    If you are allowed to place 6 bets if the die lands on a 6 but only 1 if it doesn't then it is both the case that winning bets are more frequently bets that the die landed on a 6 and the case that the die is most likely to not land on a 6.

    Right, and that’s the clean way to separate the two perspectives:

    1) Per run: most runs are 'non-six', so the per-run credence is P(6)=1/6 (the Halfer number).
    2) Per awakening/observation: a 'six-run' spawns six observation-cases, a 'non-six' run spawns one. So among the observation-cases, 'six' shows up in a 6/5 ratio, giving P('six'|Awake)=6/11 (the Thirder number).

    Once you say which thing you’re scoring, runs or awakenings, both beliefs lead to the same betting strategy and the same expected value under any given payout scheme. Different grains of analysis, same rational behavior.
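
    For the die version, the two ratios can be read straight off a count over six equally likely runs (one per face); a quick sketch of that count:

    ```python
    from fractions import Fraction

    # One run per die face; a 'six' run spawns six awakenings, any other run spawns one.
    awakenings_per_run = {face: (6 if face == 6 else 1) for face in range(1, 7)}

    runs = len(awakenings_per_run)                       # 6
    total_awakenings = sum(awakenings_per_run.values())  # 11

    print(Fraction(1, runs))                                   # 1/6  per-run ('Halfer') ratio
    print(Fraction(awakenings_per_run[6], total_awakenings))   # 6/11 per-awakening ('Thirder') ratio
    ```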
  • Sleeping Beauty Problem
    I think your comment sidestepped the issue I was raising (or at least misunderstood it, unless I'm misunderstanding you), but this reference to Bayesian probability will make it clearer.

    [...]

    it cannot be that both Halfers and Thirders are right. One may be "right" in isolation, but if used in the context of this paradox they are equivocating, and so are wrong in the context of this paradox.
    Michael

    I agree with your Bayesian formulation, except that we're more used to following Elga's convention and predicating two awakenings on Tails, such that it's P(T|Awake) that is 2/3 on the Thirder interpretation of this credence.

    To be clear about the events being talked about, there is indeed a unique event that is the same topic for discussion for both Halfers and Thirders: namely, the coin toss. However, even after the definition of this unique event has been agreed upon, there remains an ambiguity in the definition of the credence that SB expresses with the phrase "the probability that the coin landed Tails." That's because her credence C is conceptually tied to her expectation that this event will be repeated with frequency C, in the long run, upon her repeatedly being placed in the exact same epistemic situation. Thirders assert that the relevant epistemic situation consists in experiencing a singular awakening episode (which is either a T-awakening or an H-awakening) and Halfers assert that the relevant epistemic situation consists in experiencing a singular experimental run (which comprises two awakenings when it is a T-run). So, there are three "events" at issue: the coin toss, which occurs before the experiment, the awakenings, and the runs.

    Since it's one's subjective assessment of the probability of the unique event (either H or T) being realized that is at issue when establishing one's credence, one must consider the range of epistemic situations that are, in the relevant respect, indistinguishable from the present one but that one can reasonably expect to find oneself in, in order to establish this credence. The Thirders insist that the relevant situations are the indistinguishable awakening episodes (generated in unequal numbers as a result of the coin toss) while the Halfers insist that they are the experimental runs (generated in equal numbers as a result of this toss). I've argued that both stances yield sensible expressions of SB's credence, having different meanings, and that the choice of either may be guided by pragmatic considerations regarding the usefulness of tracking the relative frequencies of either awakening types or experimental-run types for various purposes.
  • Sleeping Beauty Problem
    Yes, so consider the previous argument:

    P1. If I keep my bet and the die didn't land on a 6 then I will win £100 at the end of the experiment
    P2. If I change my bet and the die did land on a 6 then I will win £100 at the end of the experiment
    P3. My credence that the die landed on a 6 is 6/11
    C1. Therefore, the expected return at the end of the experiment if I keep my bet is £
    C1(sic). Therefore, the expected return at the end of the experiment if I change my bet is £

    What values does she calculate for and ?

    She multiplies her credence in the event by the reward. Her calculation is:

    C1. Therefore, the expected return at the end of the experiment if I keep my bet is £45.45
    C2. Therefore, the expected return at the end of the experiment if I change my bet is £54.55

    This is exactly what Prince Charming does given his genuine commitment to P3 and is why he changes his bet.

    So why doesn’t she change her bet? Your position requires her to calculate that > but that’s impossible given P1, P2, and P3. She can only calculate that > if she rejects P3 in favour of “my credence that the die landed on a 6 is 1/6”.
    Michael

    While Thirders and Halfers disagree on the interpretation of SB's credence expressed as "the likelihood that the die didn't land on a six", once this interpretation is settled, and the payout structure also is settled, they then actually agree on the correct betting strategy, which is a function of both.

    The Thirder, however, provides a different explanation for the success of this unique (agreed upon) betting strategy. The reason why SB's expected return—from a Thirder stance—is higher when she systematically bets on the least likely die-throw result (i.e. 'non-six', which ends up being actual only five times on average in eleven awakenings) than when she systematically bets on the most likely one (i.e. 'six', which ends up being the actual result six times on average in eleven awakenings) is precisely because the betting structure is such that, in the long run, she only is rewarded once with £100 after betting eleven times on the most likely result ('six') but is rewarded five times with £100 after betting eleven times on the least likely result ('non-six'). On that interpretation, when SB systematically bets on the least likely outcome, she ends up being rewarded more because instances of betting on this outcome are rewarded individually (and cumulatively) whereas instances of betting on the more likely outcome are rewarded in bulk (only once for six successful bets placed). This is the reason why SB, as a Thirder, remains incentivized to bet on the least likely outcome.

    Your calculation of her expected return spelled out above was incorrect. It's not simply the result of multiplying her credence in an outcome by the potential reward for this outcome. It's rather the result of multiplying her credence in an outcome by the average reward for this outcome. Since she is only rewarded with £100 for each sequence of six successful bets on the outcome 'six', her expected value when she (systematically) changes her original bet is:

    C2: credence('six') * 'average reward when bet successful' = (6/11) * (£100/6) = £9.091

    And her expected value when she doesn't change her bet is

    C1: credence('non-six') * 'average reward when bet successful' = (5/11) * £100 = £45.45

    She thereby is incentivized to systematically bet on 'non-six', just like a Halfer is.

    Notice also that, at the end of an average experimental run, where the number of betting opportunities (i.e. awakening episodes) is 11/6 on average, her calculated expected return is (11/6) * £45.45 = £83.33, which matches the expected return of a Halfer (who wins £100 in five runs out of six), as expected.
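
    For anyone who wants to check those figures, here is a small sketch of the arithmetic (mine, using the £100 end-of-run prize as described above):

    ```python
    from fractions import Fraction

    # Out of 6 equally likely runs there are 11 awakenings: 6 in the single
    # 'six' run and 1 in each of the 5 'non-six' runs. The £100 prize is paid
    # once per run, at the end, for a consistent correct bet.
    prize = 100

    # Always bet 'six': right at 6 of 11 awakenings, but those six correct bets
    # share a single payout, so the average reward per successful bet is 100/6.
    ev_six_per_awakening = Fraction(6, 11) * Fraction(prize, 6)      # = 100/11

    # Always bet 'non-six': right at 5 of 11 awakenings, each in its own run,
    # hence each with its own full payout.
    ev_nonsix_per_awakening = Fraction(5, 11) * prize                # = 500/11

    # Per run there are 11/6 awakenings on average, which recovers the Halfer's
    # end-of-run figure of (5/6) * £100.
    ev_nonsix_per_run = Fraction(11, 6) * ev_nonsix_per_awakening

    print(float(ev_six_per_awakening))     # ≈ 9.09
    print(float(ev_nonsix_per_awakening))  # ≈ 45.45
    print(float(ev_nonsix_per_run))        # ≈ 83.33
    ```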
  • Sleeping Beauty Problem
    You didn't respond to a single point in it. You only acknowledged its existence, while you continued your invalid analysis about changing bets and expected runs.JeffJo

    I didn't provide a detailed response to your post because you didn't address it to me or mention me. I read it and didn't find anything objectionable in it. If you think my own analyses are invalid, then quote me or make reference to them and state your specific objections. I'll respond.
  • Sleeping Beauty Problem
    This is a trivial conditional probability problem. The reason I posed the "Camp Sleeping Beauty" version, is that it exposes the red herrings. And I assume that is the reason you ignore it, and how the red herrings are exposed.JeffJo

    I didn't ignore your post. I read it and referred to it in a reply to Michael as a more apposite (than his) elucidation of the Thirder position. It's true that I now depart somewhat from the sorts of analyses of the problem that were favored by Elga and Lewis, since I think the problem can be demystified by focusing not on the updating of priors regarding predefined situations SB can potentially find herself in at a future time but rather on the shift in her epistemic situation in relation to the coin-toss outcome on any occasion when she awakens. Also, I no longer see Thirder and Halfer interpretations of Sleeping Beauty's epistemic condition as mutually exclusive responses to a well-defined problem, but rather as each being motivated by complementary interpretations of the sort of event her "credence" in the coin-toss outcome is supposed to be about. If you can't see what a sensible rationale for a Halfer interpretation might be, you can refer to my Aunt Betsy variation laid out here (and following post).
  • Sleeping Beauty Problem
    I'm coming back to one of the two paragraphs you had flagged as the most important part of your comment.

    This is where I believe the mistake is made. The question she is asked after being woken up is the same question she is asked before being put to sleep. There is no ambiguity in that first question, and so there is no ambiguity in any subsequent question. There is a single event that is the target of the question before being put to sleep and we are asking if being put to sleep and woken up gives Sleeping Beauty reason to re-consider her credence in that event, much like Prince Charming re-considers his credence in that event after being told that his coin is loaded. Neither Sleeping Beauty nor Prince Charming is being asked to consider their credence in one of two different events of their own choosing.Michael

    I assume that the singular event that is the target of the question is, according to you, the coin toss event. And the question is: what is SB's credence in the outcome of this coin toss? Of course, the question is indeed about this unique event, and remains so after she awakens. However, when asked about her credence regarding this specific outcome, SB has to consider some determinate range of possible outcomes, and what makes it more likely in her current epistemic situation that one of those possible outcomes is actual. Any piece of information SB acquires upon awakening that is conditionally dependent on the target outcome provides her with the means to update her credence (using Bayes' theorem). It's also often alleged (e.g. by David Lewis) that no such new information becomes available to her when she awakens, which is true albeit misleading since it neglects a more subtle change in her epistemic situation.

    One particular way in which one can acquire information about a specific outcome T occurs when the occurrence of T biases the probability of one encountering this outcome. For instance, if a bunch of fair pennies fall on the ground but, due to reflectivity and lighting conditions, pennies that landed Tails are more noticeable from a distance, then, on the occasion where I notice a penny shining in the distance, my credence that this penny landed Tails is increased. (How silly and point-missing would a "Halfer" objection be: "It was not more likely to land Tails, you were just more likely to notice it when it did land Tails!")

    The SB setup is a very close analogy to this. Coins landing Tails play a similar causal role. Just replace "increased proclivity to being noticed by a passerby" with "increased proclivity to awaken a random test subject in the Sleeping Beauty Experimental Facility".

    Of course, one salient disanalogy between this penny drop analogy and the SB problem is that, in the standard SB problem, each coin is being tracked separately and noticed at least once, on Monday. But I don't think this disanalogy undermines the main point. It's because tail-outcomes causally increase the proportion of awakening episodes at which SB would encounter them that, on each occasion where she encounters them, SB can update her credence that the coin landed Tails. That this rational ground for Bayesian updating remains valid even in cases of singular experimental runs with amnesia (as in the original SB problem) is something that I had illustrated by means of a Christmas gift analogy (see the second half of the post).
  • Sleeping Beauty Problem
    You seem to continue to conflate an outcome's expected return with its probability and assert that one's behaviour is only governed by one's credence in the outcome.Michael

    I've acknowledged this distinction. It's not the credence alone that governs the rational betting behavior. It's the (well defined) credence in combination with the payoff structure that jointly govern the rational betting behavior.

    Neither of these things is true. I've shown several times that the least likely outcome can have the greater expected return and so that this assessment alone is sufficient to guide one's decisions.

    I've also myself repeatedly made the point that when the payout structure rewards a consistent betting policy (or the last bet made after being given the opportunity to change it on each awakening occasion) with an even-money bet paid only once at the end of the experimental run, then, in that case, it's rational to bet on the least likely outcome (namely, a non-six result, which occurs only 5/11 of the time), since this is the betting behavior that maximizes the expected return. In fact, it could be argued that this arbitrary payoff structure is misleading in the present context since it is designed precisely to incentivise the bettor to bet on the least likely outcome according to their own credence. It's quite fallacious to then charge the Thirder with inconsistency on the ground that they are betting on an outcome that they have the least credence in. When doing so, you are committing the very conflation that you are charging me with.

    No number of analogies is going to make either "she wins two thirds of the time if she acts as if A happened, therefore she believes (or ought to believe) that A most likely happened" or "she believes that A most likely happened, therefore she acts (or ought to act) as if A happened" valid inferences.

    The analogies are offered for the sake of illustration. They don't aim at proving the validity of the Thirder stance, but rather at showing its pragmatic point. By the same token, your own analogies don't prove the validity of the Halfer stance. Remember that I am not a Halfer or a Thirder. My main goal rather was to show how different situations make salient one rather than another interpretation of SB's "credence" as being pragmatically relevant to specific opportunities: highlighting specific kinds of events one gets involved in and whose long-term frequency one wishes to track as a guide to rational behavior.

    But the most important part of my previous comment were the first two paragraphs, especially when considering the standard problem.

    So, I'll address this separately.
  • Sleeping Beauty Problem
    SB has no unusual "epistemic relationship to the coin," which is what the point of my new construction was trying to point out. That fallacy is based on the misconception that Tuesday somehow ceases to exist, in her world, if the coin lands on Heads. It still exists, and she knows it exists when she addresses the question.JeffJo

    According to a standard Thirder analysis, prior to being put to sleep, SB deems the two possible coin toss outcomes to be equally likely. When she awakens, she could be in either one of three equiprobable situations: Monday&Tails, Monday&Heads and Tuesday&Tails (according to Elga's sensible argument). SB's credence in the truth of the statement "Today is Tuesday" is 1/3. That possibility doesn't cease to exist. Her epistemic relationship to the already flipped coin changes since she is now able to refer to it with the self-locating indexical description "the coin-toss result on the occasion of this awakening episode", which she wasn't able to before.

    Before the experiment began, SB could (correctly) reason that it was equally likely that she would be awakened once (when the coin toss result is Heads) or twice (when the coin toss result is Tails). When she is awakened, on any occasion, her epistemic relationship to the coin changes since it's only in the case where the result is Tails that she experiences an awakening twice. In general, events that make it more likely for you to encounter them result in your being warranted in updating your credence in them when you do encounter them. This stems from the core rationale of Bayesian updating.
  • Sleeping Beauty Problem
    That you're more likely to escape if you assume that the coin landed tails isn't that the coin most likely landed tails. You just get two opportunities to escape if the coin landed tails.Michael

    She gets two opportunities to escape if the coin landed tails (or rather she is twice as likely to have an opportunity to escape when the coin landed tails) precisely because she twice as often finds herself being awakened when the coin landed tails. This is the reason why, whenever she is awakened, her epistemic relationship to the coin that has been tossed changes. There is a causal relationship between the coin toss result and the number of awakenings (and escape opportunities) she thereby experiences (encounters). It's her knowledge of this causal relationship that she can harness to update her credence in the new epistemic situation she finds herself in when she awakens.

    Notice that, in this example, the success of her escape strategy isn't predicated on there being more opportunities when the coin landed tails. The choice being offered to her isn't between escaping or staying put. It's a choice between carrying a plank or a torch. Taking the torch will enable her to survive if and only if she's being housed in the East-Wing. Else, she's going to be eaten by crocs. The success rate of betting on lions (and, correlatively, on the coin having landed tails) is twice as high as the success rate of betting on crocs (and on the coin having landed heads). The success rate of her betting decisions directly tracks her credence in the specific outcome she is betting on on those occasions.

    If a Halfer claims that, when she awakens, SB's credence in the coin having landed tails remains 1/2, and hence likewise for her credence that she is surrounded by lions, there would be no reason for her, when she attempts to escape on this occasion, to bring a torch rather than a plank. She could pick either the torch or the plank at random. Half of such Halfer Beauties who make an escape attempt would survive. Two thirds of Thirder Beauties would survive. The Halfers weren't wrong in their credence assessment. But they picked the wrong credence (targeting expected frequencies of runs rather than frequencies of awakenings) for the task at hand.
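
    A quick Monte Carlo sketch of that comparison (my own; the escape-opportunity rate is an arbitrary small number, assumed equal at every awakening):

    ```python
    import random

    random.seed(2)
    ESCAPE_CHANCE = 0.05          # assumed: same small chance at every awakening
    attempts = 0
    torch_always_survives = 0     # Thirder-guided policy: always take the torch
    random_pick_survives = 0      # credence-indifferent policy: pick at random

    for _ in range(200_000):
        tails = random.random() < 0.5
        for _ in range(2 if tails else 1):       # Tails -> two awakenings (East-Wing, lions)
            if random.random() < ESCAPE_CHANCE:
                attempts += 1
                lions = tails                    # torch scares lions; plank crosses the moat
                if lions:
                    torch_always_survives += 1
                take_torch = random.random() < 0.5
                if take_torch == lions:
                    random_pick_survives += 1

    print("always-torch survival rate:", torch_always_survives / attempts)  # ~2/3
    print("random-pick survival rate:", random_pick_survives / attempts)    # ~1/2
    ```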
  • Sleeping Beauty Problem
    This makes no sense. There is only one kind of event; being woken up after a die roll. Her credence in the outcome of that die roll cannot be and is not determined by any betting rules. Maybe she's not allowed to place a bet at all.Michael

    I agree that her credence in the outcome (however this outcome is characterized) isn't determined by the betting rules. The betting rules, though, can make one rather than another characterization of the outcome more natural. It's not true that there is only one kind of event. The relevant event is protracted. Sleeping Beauty could focus on her current awakening as the event where she is facing a die that either landed on a six or didn't (and this event is over when she is put back to sleep, while her next awakening, if there is any, will be a separate event). Or she could focus on the current experimental run as the protracted event that her present awakening is, in some cases, only a part of. Nothing in the Bible, in the fundamental laws of nature, or in the mathematical theory of probability determines what specific event (awakening or experimental run) should be the proper focus of attention. This choice of focus yields different analyses and different credences since those credences target differently individuated events. However, once one analysis has been settled on, and one payout structure has been determined, Halfers and Thirders (almost) always agree on the expected value of a given betting strategy.

    After waking up, either she continues to believe that the probability that the die landed on a 6 is 1/6, as Halfers say, or she now believes that it is 6/11, as Thirders say.

    Indeed, and, as previously explained, that's because Halfers and Thirders are typically talking past each other. They're not talking about the same events.

    Only then, if allowed, can she use her credence to calculate the expected returns of placing or changing a bet, accounting for the particular betting rules. And as I believe I showed above, only a credence of 1/6 provides a consistent and sensible approach to both betting scenarios.

    I don't think you've shown the Thirder analysis to be inconsistent. You just don't like it. There are scenarios where the Thirder analysis is more natural. Remember the flip-coin scenario where the singular H-awakenings take place in the West-Wing of the Sleeping Beauty Experimental Facility and the dual T-awakenings take place in the East-Wing. The West-Wing is surrounded by a moat with crocodiles and the East-Wing is surrounded by a jungle with lions. On the occasion of her awakening Sleeping Beauty (we may call her Melania) finds a rare opportunity to escape and can either choose to bring a torch (that she can use to scare off lions) or a wooden plank (that she can use to safely cross the moat). A Thirder analysis of the situation is natural in that case since it tracks singular escape opportunities. Her credence that she will encounter lions is 2/3 (as is her credence that the coin landed Tails). Taking the torch is the safest bet and, indeed, two thirds of the Sleeping Beauties who make this bet, on the rare occasions where this opportunity presents itself to them, survive.

    On edit: For this analysis to be sound, we must assume that the rare escape opportunities don't convey any significant amount of information that SB didn't already have when she awoke, and hence present themselves with the same (very low) frequency on each awakening occasion.
  • Sleeping Beauty Problem
    Her credence remains committed to P3, else she’d calculate very different expected returns.Michael

    P3—"The probability that the die did land on a 6 is 1/6"—is an ambiguous statement since, although it makes reference to the die, it fails to sufficiently specify SB's epistemic situation in relation to the die, which is a consideration that seldom arises explicitly outside of the peculiar context of the the Sleeping Beauty problem.

    When asked about her credence, SB could reason: "I am currently in a situation (awakening episode) such that 6 times out of 11, when I find myself in such a situation, the die landed on a 6. If I could place an even money bet now, and get fully paid on that bet, it would therefore be rational for me to bet that the die landed on a 6, in accordance with my higher credence in this specific outcome."

    She could equally validly reason: "I am currently in a situation (experimental run) such that 1 time out of 6, when I find myself in such situations, the die has landed on a 6. If I could place an even money bet now and not change my bet in subsequent awakening episodes, and get paid at the end of the current experimental run, it would therefore be rational for me to bet that the die didn't land on a 6, in accordance with my higher credence in this specific outcome (i.e. not-six)."

    Those two reasonings concern the same die but two different statements of credence in two different kinds of events/outcomes. How SB chooses one of those two different sorts of credence (and of duration for the "event" she is now involved in) as an apt explication of the ambiguous phrase "The probability that the die did land on a 6" can be guided by pragmatic considerations. In this case, the relevant consideration is the specific payout structure and what kinds of events/outcomes this payout structure was designed to track. In a pair of examples I had designed early in this discussion, the relevant pragmatic considerations were either the need for SB to set up an appointment with her aunt (to get a lift at the end of the experimental run), or to choose a tool (plank or torch) for escaping the experimental facility during the current awakening episode.

    Given the ambiguous original statement of the SB problem, the forced choice between the Halfer and Thirder interpretations of SB's credence is a false dichotomy. Your stance leads you to propound Halfer interpretations/elaborations of the problem, which are valid, and to dismiss Thirder interpretations as misconstruals of your Halfer stance. But they're not misconstruals. They're alternative and equally valid interpretations. Thirders often make the same mistake, believing that their interpretation gets at the fundamental truth regarding SB's credence in the (ill-specified) "outcome" or "current state of the die".
  • Sleeping Beauty Problem
    I don't even have to be put to sleep and woken up to do this. I can just say before the experiment starts that I choose to place 6 bets that the die will land on a 6 instead of 1 bet that it won't.Michael

    I wonder why you are so insistent on this arbitrary payout structure. Why not make an even-money payout on each occasion where she is being awakened and offered the opportunity to bet on the die-throw outcome as it is already determined right now? Would not her expected value exactly mirror—and be governed only by—her credence regarding the hidden die having landed six right now? A six is the most likely outcome, so I'm betting on it. No word games. Immediately maximized expected profit (and guaranteed long-term profit as well).
  • Sleeping Beauty Problem
    So you need to first specify the mechanism by which one has "encountered" a door, and this mechanism must be comparable to the Sleeping Beauty scenario for it to be an apt analogy.Michael

    The doors are encountered randomly. I agree that the situation isn't perfectly analogous to the SB problem since SB doesn't "choose" randomly among sets of already established awakenings. She simply finds herself awakened on one particular occasion. But the purpose of the thought experiment was more modest, aiming at showing that the credence in an event one is involved in doesn't generally depend merely on the manner in which such events are produced but also on the way one relates to events of that kind—that is, on the way one encounters them.

    My earlier zoo example was mirroring the SB scenario much more closely (since the visitor likewise is amnesiac and merely finds themselves approaching a new enclosure) while making the same points. In that scenario, the zoo visitor had a 1/3 chance (their credence) of next encountering a toucan, tiger or hippo enclosure, regardless of the fact that the previous fork in the path that they randomly took had a 1/2 chance of leading them onto a path segment that only has a hippo enclosure on it.
  • Sleeping Beauty Problem
    Sorry, I deleted that post because it's late and I'm tired and I may have messed up the specific numbers. The general gist is what I said before. Your argument is that her reasoning after being woken up is:

    A1. If I keep my bet and the die didn't land on a 6 then I will win £100
    A2. If I change my bet and the die did land on a 6 then I will win £100
    A3. My credence that the die landed on a 6 is 6/11
    A4. Therefore, the expected return if I keep my bet is £83.33
    A5. Therefore, the expected return if I change my bet is £16.67

    But A3, A4, and A5 are inconsistent. If A3 really was true then she would calculate different values for A4 and A5, concluding that it is profitable to change her bet. But she doesn't do this.
    Michael

    A Thirder will not agree with A4 or A5. If SB is allowed to change her bet when she awakens, she must do so consistently as a matter of policy since she can't distinguish between different occasions of awakening (i.e. days of the week). She knows that a policy of changing her bet lowers her expected return since there only is one payout per experimental run. Although her systematically betting on a six would result in her being right on six out of eleven occasions when she is given the opportunity to do so, in accordance with her credence, she only is paid £100 once, at the end of the experimental run, when she does so (and the die landed 6), and this policy also makes her forfeit the full prize on the five occasions out of eleven where the die didn't land on six. All this shows is that the lopsided payout structure makes it irrational for her to bet on the most likely outcome.
  • Sleeping Beauty Problem

    Thirders then claim that:

    P(6|Monday)=6/11

    P(¬6|Monday)=5/11
    Michael

    Unless my memory is faulty, the variation we had discussed (two years ago) was one where Sleeping Beauty was awakened only once, on Monday, unless the die lands on 6, in which case she is being awakened six times from Monday through Saturday. In that case, thirders would claim that

    P(6|Monday)=1/6 (Since one sixth of Monday-awakenings are Six-awakenings)

    P(¬6|Monday)=5/6 (Since five sixths of Monday-awakenings are Non-six-awakenings)

    Right?
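
    A small counting sketch of this variant (mine), which also recovers the 6/11 figure for an unconditioned awakening:

    ```python
    from fractions import Fraction

    # One run per die face; awakened on Monday only, unless the die lands 6,
    # in which case she is awakened Monday through Saturday.
    days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
    awakenings = [(face, day)
                  for face in range(1, 7)
                  for day in (days if face == 6 else ["Mon"])]

    monday = [face for face, day in awakenings if day == "Mon"]
    print(Fraction(sum(1 for f in monday if f == 6), len(monday)))             # 1/6 = P(6 | Monday)
    print(Fraction(sum(1 for f, _ in awakenings if f == 6), len(awakenings)))  # 6/11 = P(6 | awake)
    ```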

Pierre-Normand
