• Andrew M
    1.6k
    The probability of the awakenings is dependent on the coin flip (1st awakening is 1 if heads, 0.5 if tails), whereas the probability that a coin flip lands heads is independent.Michael

    True, but the quarterer would agree (first awakening is 1 if heads, 1/3 if tails). However, the probability of it being Monday or Tuesday is also independent, and the two days are equally probable. The quarterer would argue that those probabilities should carry through to the first and second awakening probabilities in the experiment, since no new information has been acquired on waking.

    So why not apply the same reasoning to Sleeping Beauty? The initial coin toss has 1/2 odds of heads, so it's 1/2 odds of heads.Michael

    The nature of the experiment means that the odds are different for the awakened Beauty. If she is told that it is Monday, the odds are different again and this is the case for both halfers and thirders. In this latter case, it seems that your analogies apply equally to halfers.

    This is why I suggested the alternative experiment where we don't talk about days at all and just say that if it's heads then we'll wake her once (and then end the experiment) and if it's tails then we'll wake her twice (and then end the experiment). There aren't four equally probable states in the experiment.

    So we either say that P(Awake) = 1 and/or we say that being awake doesn't provide Beauty with any information that allows her to alter the initial credence that P(Heads) = 0.5.
    Michael

    Saying that P(Awake) = 1 is fine since we can calculate the other probabilities accordingly. But I think the four equally probable states clarifies the mathematical relationship between the independent observer viewpoint (where the odds are familiar and intuitive) and Sleeping Beauty's viewpoint.
  • Michael
    15.6k
    Saying that P(Awake) = 1 is fine since we can calculate the other probabilities accordingly.Andrew M

    If it's 1 then P(Heads|Awake) = 0.5 * 1 / 1 = 0.5.

    But I think the four equally probable states clarifies the mathematical relationship between the independent observer viewpoint (where the odds are familiar and intuitive) and Sleeping Beauty's viewpoint.Andrew M

    Then perhaps you could explain how it works with my variation where Beauty is woken on either Monday or Tuesday if tails, but not both. Do we still consider it as four equally probable states and so come to the same conclusion that P(Heads|Awake) = 0.5 * 0.5 / 0.75 = 1/3?

    My own take on it is that our states would be set up with the probability of being woken on that day, like so:

      M    T
    H 1    0
    T 0.5  0.5
    

    And I know that in Sleeping Beauty's case the observer's states would be set up like so:

      M  T
    H 1  0
    T 1  1
    

    But although Sleeping Beauty knows that she will be woken on both Monday and Tuesday if tails, she doesn't know whether today is Monday or Tuesday, and so from her perspective she has to reason that if the coin flip was tails then the probability that today is Monday is 0.5, giving her the following:

          today is M  today is T
    if H       1           0
    if T       0.5         0.5
    

    And this is exactly Elga's reasoning:

    If (upon first awakening) you were to learn that the toss outcome is Tails, that would amount to your learning that you are in either T1 or T2. Since being in T1 is subjectively just like being in T2, and since exactly the same propositions are true whether you are in T1 or T2, even a highly restricted principle of indifference yields that you ought then to have equal credence in each. But your credence that you are in T1, after learning that the toss outcome is Tails, ought to be the same as the conditional credence P(T1| T1 or T2), and likewise for T2. So P(T1| T1 or T2) = P(T2 | T1 or T2 ), and hence P(T1) = P(T2).

    So her reasoning should be:

    1. P(Tails) = 0.5
    2. Tails → P(Monday) = 0.5
    3. P(Tails ∩ Monday) = 0.25

    This is correct when waking up just once, so why not also when possibly waking up twice?
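
    For concreteness, here is a minimal Monte Carlo sketch of that single-awakening case (the set-up assumed here, from the discussion above, is: heads means waking on Monday, tails means a second fair toss picks Monday or Tuesday; names and trial counts are just illustrative). The frequencies among awakenings come out at roughly 1/2, 1/4 and 1/4, matching the reasoning above.

    import random

    def single_awakening_frequencies(trials=100_000, seed=0):
        rng = random.Random(seed)
        counts = {("H", "Mon"): 0, ("T", "Mon"): 0, ("T", "Tue"): 0}
        for _ in range(trials):
            coin = "H" if rng.random() < 0.5 else "T"
            if coin == "H":
                day = "Mon"                                   # heads: woken on Monday
            else:
                day = "Mon" if rng.random() < 0.5 else "Tue"  # tails: second toss picks the day
            counts[(coin, day)] += 1
        return {cell: n / trials for cell, n in counts.items()}  # one awakening per trial

    print(single_awakening_frequencies())
    # roughly {('H', 'Mon'): 0.5, ('T', 'Mon'): 0.25, ('T', 'Tue'): 0.25}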
  • BlueBanana
    873
    The Mondays have to be equally likely. Let's say the coin is thrown on Monday evening instead. It's clear now that the coin flip can't affect that, since it happens after the test.
  • BlueBanana
    873
    This is correct when waking up just once, so why not also when possibly waking up twice?Michael

    Because when you can be woken up more than once, more awakenings are caused by throwing tails.
  • Michael
    15.6k
    Because when you can be woken up more than once, more awakenings are caused by throwing tails.BlueBanana

    In scenario one there are two awakenings for every tail thrown and one for every head thrown. It's a fair coin, and so the proportion of tail-awakenings to head-awakenings is 2:1.

    In scenario two there is one awakening for every tail thrown and one for every head thrown. It's a weighted coin that favours tails, and so the proportion of tail-awakenings to head-awakenings is 2:1.

    These are very different scenarios, despite the same result. I don't think it makes sense to simply use the proportion of tails-awakenings to head-awakenings to consider the likelihood that the toss was tails.
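
    A small sketch of both scenarios (the 2/3 weighting for the biased coin is my assumption, chosen so that it produces the stated 2:1 proportion; everything else is just illustrative). Each produces roughly a 2:1 ratio of tail-awakenings to head-awakenings, which is the point: the same ratio arises from very different set-ups.

    import random

    def awakening_ratio(p_tails, wakings_on_tails, trials=100_000, seed=0):
        # Ratio of tail-awakenings to head-awakenings over many tosses.
        rng = random.Random(seed)
        head_awakenings = tail_awakenings = 0
        for _ in range(trials):
            if rng.random() < p_tails:
                tail_awakenings += wakings_on_tails
            else:
                head_awakenings += 1
        return tail_awakenings / head_awakenings

    print(awakening_ratio(p_tails=0.5, wakings_on_tails=2))  # scenario one: fair coin, two wakings, about 2.0
    print(awakening_ratio(p_tails=2/3, wakings_on_tails=1))  # scenario two: weighted coin, one waking, about 2.0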

    Consider a scenario where if it's heads then Amy and either Bob or Charlie are asked, and if it's tails then all three are asked. How do we determine the likelihood that the toss was tails? Do we simply say it's 3:2 in favour of tails? I say no. We have to look at each individual and ask first "what is the likelihood of a coin toss landing heads?" and then "what is the likelihood that I'll be asked if the coin toss lands heads?". For Amy, Bob, and Charlie, the answer to the first question is "1/2", for Amy the answer to the second question is "1", and for Bob and Charlie the answer to the second question is "1/2". So for Amy it's a 1/2 chance of heads and for Bob and Charlie it's a 1/4 chance of heads.
  • Jeremiah
    1.5k
    Hey look at that. He saw the Wednesday argument and slipped in a defeater!Srap Tasmaner

    Speculation on Beauty's possible speculation was never a good foundation for reallocation.

    Elga's argument is:
    The answer is that you have gone from a situation in which you count your own temporal location as irrelevant to the truth of H, to one in which you count your own temporal location as relevant to the truth of H.

    If that is so, then it is when Beauty considers the actual question that temporal location becomes relevant, and at that time Wednesday is already off the table, as the asking of the question eliminates it. Speculation that Beauty was considering Wednesday as a possibility before the interview is just that, speculation, and it lies outside the relevant temporal location under consideration.

    If you don't buy that argument, then you still have not explained how this supposed new information is relevant to the uncertainty of temporal location when considering whether it is Tuesday or Monday. That uncertainty still remains, and probability is the measure of uncertainty. Wednesday is certainly off the table at this point.
  • Jeremiah
    1.5k
    When considering probability, which is the measurement of uncertainty, we consider the domain of uncertain possible outcomes. Certain outcomes bring no value to that domain unless they are confounding, and even then they don't show up in the domain themselves. Otherwise you skew your measurement by adding extra irrelevant information. If you like, consider this under the scope of Occam's Razor.
  • Srap Tasmaner
    5k

    Done a little more sniffing around, and thirders frequently argue there's information here. Elga doesn't. <shrug>

    As SB, you are asked for your degree of belief that a random event has occurred or will occur.

    If I flip a fair coin and ask you for your degree of belief that it landed heads, you'll answer 50%.

    Suppose instead I say I'm going to tell you how it landed. What is your degree of belief that I'm going to tell you it landed heads? It will again be 50%. They're usually identical.

    Now try this with SB: instead of asking for your degree of belief, I'm going to tell you how the coin toss landed. What is your degree of belief that I will tell you it landed heads? Is it 50%?

    We thirders think halfers are looking at the wrong event. Just because you're asked how the coin landed doesn't mean that's the event you have to look at to give the best answer.

    (I've also got a variation where I roll a fair die after the coin toss, and ask or tell you twice as frequently on tails. Same deal: what's the random event? Is it just the coin toss?)
  • Jeremiah
    1.5k
    We thirders think halfers are looking at the wrong eventSrap Tasmaner

    Why should I even consider the Bayesian approach in the first place?
  • Michael
    15.6k
    Now try this with SB: instead of asking for your degree of belief, I'm going to tell you how the coin toss landed. What is your degree of belief that I will tell you it landed heads? Is it 50%?Srap Tasmaner

    Yes?
  • Srap Tasmaner
    5k

    Is it?

    I think the halfer intuition is that a coin toss is a coin toss -- doesn't matter if you're asked once on heads and twice on tails.

    But consider this. What is your expectation that I'll tell you it was heads, given that it was heads? 100%. What's your expectation that I'll tell you it was tails, given that it was tails? 100%. Does that mean they're equally likely? To answer that question, you have to ask this question: if I randomly select an outcome-telling from all the heads-tellings and all the tails-tellings, are selecting a heads-telling and selecting a tails-telling equally likely? Not if there are twice as many tails-tellings.

    Both conditionals are certainties, but one is still more likely than the other, in this specific sense.
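
    As a sketch of that sampling (one telling per heads toss, two per tails toss, as described above; the function name and trial count are just illustrative):

    import random

    def heads_fraction_of_tellings(tosses=100_000, seed=0):
        rng = random.Random(seed)
        tellings = []
        for _ in range(tosses):
            if rng.random() < 0.5:
                tellings.append("heads")             # told once on heads
            else:
                tellings.extend(["tails", "tails"])  # told twice on tails
        return tellings.count("heads") / len(tellings)

    print(heads_fraction_of_tellings())
    # about 0.333: a randomly selected telling is twice as likely to be a tails-telling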
  • Srap Tasmaner
    5k
    the Bayesian approachJeremiah

    If that means the "subjective" interpretation of probability, it's just what the question is about.

    Maybe it ends up showing that "degree of belief" or "subjective probability" is an incoherent concept and we all become frequentists.
  • Michael
    15.6k
    if I randomly select an outcome-telling from all the heads-tellings and all the tails-tellings, are selecting a heads-telling and selecting a tails-telling equally likely? Not if there are twice as many tails-tellings.Srap Tasmaner

    Sure, but that's not how we'd actually consider the probabilities. We can try it right now. I'll flip a coin. If it's heads I'll tell you that it's heads. If it's tails I'll tell you twice that it's tails.

    Do you actually think that there's a 1/3 chance that I'll tell you it was heads (and a 2/3 chance that I'll tell you it was tails)? Or do you think there's a 1/2 chance that I'll tell you it was heads (and a 1/2 chance that I'll tell you twice that it's tails)? I say the latter.
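
    The two ways of counting can be put side by side in a quick sketch (names and trial counts are just illustrative): the fraction of runs in which you are told "heads" at all, versus the fraction of individual tellings that say "heads".

    import random

    def runs_vs_tellings(runs=100_000, seed=0):
        rng = random.Random(seed)
        heads_runs = heads_tellings = total_tellings = 0
        for _ in range(runs):
            if rng.random() < 0.5:
                heads_runs += 1        # told "heads" once this run
                heads_tellings += 1
                total_tellings += 1
            else:
                total_tellings += 2    # told "tails" twice this run
        return heads_runs / runs, heads_tellings / total_tellings

    print(runs_vs_tellings())  # about (0.5, 0.333)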
  • Michael
    15.6k
    This is the problem I have with the self-indication assumption, and is why I brought up the question of a single-world theory vs a many-world theory. Should we really prefer the many-world theory simply because there are more outcomes in which we're in one of many worlds than there are outcomes in which we're in the only world?
  • Srap Tasmaner
    5k

    There's a 50% chance you'll tell me "at all" that it's heads, and same for tails. But there's more than a 50% chance that a random selection from the tellings you've done will be a tails telling.
  • Michael
    15.6k
    There's a 50% chance you'll tell me "at all" that it's heads, and same for tails. But there's more than a 50% chance that a random selection from the tellings you've done will be a tails telling.Srap Tasmaner

    And I think the former is the proper way to talk about the probability that it was heads, not the latter. We can then distinguish between the case of telling someone twice that it's tails and the case of telling someone once but using a weighted coin that favours tails.

    I think this is just SSA vs SIA.
  • Srap Tasmaner
    5k
    the proper wayMichael

    Sleeping Beauty is a pretty unusual situation though.

    Some of us think it merits switching to counting occasions instead of counting classes of occasions. There are two ways to do it. YMMV.

    On our side there are confirming arguments from wagering and weighted expectations. On the halfer side I only see the "no new information" argument.
  • Jeremiah
    1.5k
    It is the notion that we start with a prior and then update that prior. You don't update priors in a Classical approach, as there are no priors. In a Classical frame they should be viewed as two different sample spaces, and the probability is dependent on the event, which is the randomly selected subset from the sample space. How they are selected would depend on the conditions of the subset.
  • Srap Tasmaner
    5k
    We can then distinguish between the case of telling someone twice that it's tails and the case of telling someone once but using a weighted coin that favours tails.Michael

    The problem with SB is that the outcomes are like a 2:1 biased coin, but the payouts (as @andrewk pointed out) are like a 3:1. If we ignore wagering, could SB tell the difference between the official rules and a variant with a single interview and a biased coin? If she can't, is that an argument in favor of one position or the other?

    From the other side, wagering will tell my SB that it's not a biased coin but a bizarre interview scheme. Will a halfer SB be able to tell the difference?
  • Andrew M
    1.6k
    If it's 1 then P(Heads|Awake) = 0.5 * 1 / 1 = 0.5.Michael

    P(Awake) = 1 is true when Beauty is awake in the experiment. So, under that condition, P(Heads|Awake) = P(Heads) = 1/3. (See argument below.)

    Then perhaps you could explain how it works with my variation where Beauty is woken on either Monday or Tuesday if tails, but not both. Do we still consider it as four equally probable states and so come to the same conclusion that P(Heads|Awake) = 0.5 * 0.5 / 0.75 = 1/3?Michael

    No. In your variation, P(Heads|Awake) = 1/2. Your variation requires a second coin toss on tails to determine which day Beauty is woken. The unconditional probabilities are:

                   Mon         Tue
    Heads          Awake:1/4   Asleep:1/4
    Tails-Heads2   Awake:1/8   Asleep:1/8
    Tails-Tails2   Asleep:1/8  Awake:1/8
    

    P(Heads|Awake) = P(Heads and Awake) / P(Awake) = 1/4 / 1/2 = 1/2
    P(Tails|Awake) = P(Tails and Awake) / P(Awake) = 1/4 / 1/2 = 1/2
    P(Tails-Heads2|Awake) = P(Tails-Heads2 and Awake) / P(Awake) = 1/8 / 1/2 = 1/4
    P(Tails-Tails2|Awake) = P(Tails-Tails2 and Awake) / P(Awake) = 1/8 / 1/2 = 1/4

    So conditionalizing on being awake:

                  Mon  Tue
    Heads         1/2  0
    Tails-Heads2  1/4  0
    Tails-Tails2  0    1/4
    

    P(Heads) = P(Tails) = 1/2
    P(Monday|Tails) = P(Monday and Tails) / P(Tails) = 1/4 / 1/2 = 1/2
    P(Tails and Monday) = 1/4

    Which is the same conclusion that you reached.

    This is correct when waking up just once, so why not also when possibly waking up twice?Michael

    Because the probability mass from the asleep states flows proportionally to more tails states. Here's the corresponding working for the original Sleeping Beauty problem. The unconditional probabilities are:

            Mon        Tue
    Heads   Awake:1/4  Asleep:1/4
    Tails   Awake:1/4  Awake:1/4
    

    P(Heads|Awake) = P(Heads and Awake) / P(Awake) = 1/4 / 3/4 = 1/3
    P(Tails|Awake) = P(Tails and Awake) / P(Awake) = 1/2 / 3/4 = 2/3
    P(Tails and Monday|Awake) = P(Tails and Monday and Awake) / P(Awake) = 1/4 / 3/4 = 1/3

    So conditionalizing on being awake:

           Mon  Tue
    Heads  1/3  0
    Tails  1/3  1/3
    
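
    Both calculations above can be checked with a short sketch (the table encoding is mine, purely for illustration): start from the unconditional cell probabilities and divide each awake cell by P(Awake).

    def condition_on_awake(table):
        # table maps (branch, day) -> (unconditional probability, awake?)
        p_awake = sum(p for p, awake in table.values() if awake)
        return {cell: p / p_awake for cell, (p, awake) in table.items() if awake}

    variation = {  # second toss on tails decides the single waking day
        ("Heads", "Mon"): (1/4, True),         ("Heads", "Tue"): (1/4, False),
        ("Tails-Heads2", "Mon"): (1/8, True),  ("Tails-Heads2", "Tue"): (1/8, False),
        ("Tails-Tails2", "Mon"): (1/8, False), ("Tails-Tails2", "Tue"): (1/8, True),
    }
    original = {  # two wakings on tails
        ("Heads", "Mon"): (1/4, True), ("Heads", "Tue"): (1/4, False),
        ("Tails", "Mon"): (1/4, True), ("Tails", "Tue"): (1/4, True),
    }

    print(condition_on_awake(variation))  # Heads-Mon 1/2, Tails-Heads2-Mon 1/4, Tails-Tails2-Tue 1/4
    print(condition_on_awake(original))   # Heads-Mon 1/3, Tails-Mon 1/3, Tails-Tue 1/3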
  • andrewk
    2.1k
    @Michael @Srap Tasmaner
    It occurred to me that there may be a parallel between this puzzle and Nick Bostrom's simulation hypothesis. We are analogous to Beauty, and the coin coming up tails is analogous to beings in some universe developing the ability to perform simulations so intricate that consciousness arises in the simulands; call the probability of that p.

    Say the average number of conscious simulands created in a universe in which those simulations are developed is N. Let the probability of conscious life arising in a universe be q and M be the number of conscious beings arising in such a universe.

    We then wonder what is the probability that we are in a simulation. I think Bostrom argues that it is

    pN / (pN + qM)

    which he thinks would be close to 1, because N would likely be much bigger than M: in such a world, using computers or their equivalent, it would be easy to create enormous numbers of simulands. That corresponds to the Thirder argument in the Beauty problem because it says that every consciousness, whether simulated or not, is equally likely to be the one I am experiencing, so each one has probability 1/(pN + qM).

    In contrast, the Halfer position says that the probability of being a particular simuland is p/N, and the probability of being a particular non-simulated consciousness is q/M. So the probability of being a simulated consciousness is

    N(p/N) / (N(p/N) + M(q/M)) = p / (p+q)

    That will be much lower than Bostrom's estimate because it is not affected by N being much bigger than M. It makes no difference how many simulands the simulators create.
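
    Plugging in some purely illustrative values (my assumptions, not Bostrom's figures) shows the contrast: the first formula is driven by N being huge, while the second ignores N and M entirely.

    p, q = 0.1, 0.1      # chance a universe develops such simulations / develops conscious life
    N, M = 10**9, 10**6  # average number of simulated / non-simulated conscious beings

    bostrom_style = p * N / (p * N + q * M)  # about 0.999, dominated by N >> M
    halfer_style = p / (p + q)               # exactly 0.5, unaffected by N and M

    print(bostrom_style, halfer_style)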

    Hence, the Halfer position provides support to those that don't like Bostrom's suggestion that they are probably a simuland.

    What do you think?
  • Michael
    15.6k
    Yes, that's like the example I gave earlier regarding whether we should favour a many-worlds theory over a single-world theory. From what I've read it depends on whether one accepts the self-sampling assumption or the self-indication assumption, two principles defined by Bostrom in one of his books.
  • Srap Tasmaner
    5k
    Yet another take 1. (More to come.)

    Ignore the coin toss completely. The intention of the problem is that Beauty cannot know whether this is her first or second interview. If we count that as a toss-up, then





    That is, Beauty would expect a wager at even money to pay out as if there were a single interview and the coin was biased 3:1 tails:heads. And it does.
  • Andrew M
    1.6k
    Ignore the coin toss completely. The intention of the problem is that Beauty cannot know whether this is her first or second interview. If we count that as a toss-up, then...Srap Tasmaner

    If I understand you, you are presenting a quarterer scenario where the probabilities conditioned on being awake are:

           Mon  Tue
    Heads  1/4  0
    Tails  1/4  1/2
    

    But, if so, how would the experiment or wagering be conducted to make it work?
  • Srap Tasmaner
    5k
    quartererAndrew M

    Mainly so we'd get to use that word.

    This is all stuff we've said before -- this comment summarizes the mechanism by which standard thirder wagering pays out 3:1, as @andrewk pointed out, instead of 2:1.

    You could also think of it as revenge against the halfer position, which draws the table this way:

    [image: the halfer version of the table]

    Halfers, reasoning from the coin toss, allow Monday-Heads to "swallow" Tuesday-Heads.

    Reasoning from the interview instead, why can't we do the same?

    [image: the same table redrawn from the interview side]
  • Srap Tasmaner
    5k

    The post I referenced had a mistake!

    ($2 bets below for simplicity, since the coin is fair.)
    Before, I gave the SB payoffs at even money as
             Bet
             H   T
    Toss H   1  -1
         T  -1   2
    
    and noted that heads will break even while tails makes a profit. That's wrong. The right table is obviously
             Bet
             H   T
    Toss H   1  -1
         T  -2   2
    
    because you bet heads incorrectly twice when the coin lands tails.

    Thus the SB 2:1 table would be:
             Bet
             H   T
    Toss H   2  -1
         T  -2   1
    
    and everyone breaks even. 2:1 are the true odds.

    As a reminder, here is the single toss for a 3:1 biased coin ($2 bet for consistency):
              Bet
              H    T
    Toss H    .5  -.5
         T  -1.5  1.5      
    
    Same as the SB results: heads loses $1, and tails earns $1.

    And, no, obviously SB doesn't break even on 3:1 bets:
             Bet
             H   T
    Toss H   3  -1
         T  -2   2/3
    
    At odds greater than 2:1, heads will be the better bet.

    Sleeping Beauty remains its own thing: the odds really are 2:1, but the payoffs are 3:1.

    (Disclaimer: I think this is the most natural way to imagine wagering, but you can come up with schemes that will support the halfer position too. They look tendentious to me, but it's arguable.)
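
    Those tables can be checked with a small sketch. The staking scheme ($2 per interview at the stated odds, one interview on heads, two on tails) is as above; the function name and odds convention are just for illustration.

    def expected_per_toss(bet, heads_odds=1.0, stake=2.0):
        # Expected dollar result of one toss for a bettor who always bets `bet`.
        # `heads_odds` is the payout ratio on a correct heads bet; a correct
        # tails bet pays the reciprocal (2.0 means heads pays 2:1, tails 1:2).
        interviews = {"H": 1, "T": 2}
        payout = {"H": heads_odds, "T": 1 / heads_odds}
        expectation = 0.0
        for toss, p in (("H", 0.5), ("T", 0.5)):
            if bet == toss:
                expectation += p * interviews[toss] * stake * payout[bet]  # win every interview
            else:
                expectation += p * interviews[toss] * -stake               # lose the stake each interview
        return expectation

    for odds in (1.0, 2.0, 3.0):
        print(odds, expected_per_toss("H", odds), expected_per_toss("T", odds))
    # 1.0: heads -1.0, tails +1.0    (even money: tails is the better bet)
    # 2.0: heads  0.0, tails  0.0    (2:1: everyone breaks even)
    # 3.0: heads +1.0, tails -0.33   (above 2:1: heads is the better bet)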
  • Andrew M
    1.6k
    This is all stuff we've said before -- this comment summarizes the mechanism by which standard thirder wagering pays out 3:1, as andrewk pointed out, instead of 2:1.Srap Tasmaner

    Yes, that all makes sense.

    You could also think of it as revenge against the halfer position, which draws the table this way:
    ...
    Halfers, reasoning from the coin toss, allow Monday-Heads to "swallow" Tuesday-Heads.

    Reasoning from the interview instead, why can't we do the same?
    Srap Tasmaner

    Indeed.

    So it's interesting that one can set up wagers for halfer, thirder or quarterer outcomes. But it seems to me that probability is not simply about what wagers one can set up and what their outcomes are since everyone should rationally agree about that. Instead, it's about what the probabilities of the states are when conditionalizing on being awake. And that just results in the thirder position.

    As to the question of what new relevant information (if any) arises when Beauty awakes that justifies an update on P(Heads), I think it's really just a change of context. When referring to the coin toss outcome from the perspective of an independent observer, P(Heads) = 1/2. However, what is relevant to Beauty is P(Heads|Awake) = 1/3. When she awakes, P(Awake) = 1 and so P(Heads) = 1/3.

    So seen this way, no new relevant information has been learned on awaking. Instead, P(Heads) is indexed to a context. Which context is relevant depends on the perspective one is taking - the perspective of the independent observer or the awakened Beauty.
  • Jeremiah
    1.5k
    Don't forget to include the possibility she is on the Moon when she awakes. I mean as long as we are adding unnecessary entities.
  • Jeremiah
    1.5k
    As long as Beauty is uncertain of her temporal location, she has only two relevant sample spaces to choose from: awakenings and the coin flip.