         Mon         Tue          Total
Heads    Awake: $1   Asleep: $2   $3
Tails    Awake: $4   Awake: $8    $12
I think the halfer reasoning should just be that it's a 50:50 chance that it's heads, whether unconditioned or conditioned on Monday. We shouldn't be applying some formula; we should just consider what we know about coin flips. — Michael
P(Heads) = 1/2

         Mon   Tue
Heads    1/2
Tails    1/4   1/4

P(Heads|Monday) = 1/2

         Mon   Tue
Heads    1/2
Tails    1/2
The thirder model relies explicitly on there being a single toss of a coin with heads and tails distributed 50:50. (And we've agreed you cannot construct an alternate model with a weighted coin.) How can Beauty take that as a premise and then be unable to reach the conclusion that the chances of heads were 1/2? — Srap Tasmaner
This is worked out not by some equation but just by knowing the rules and how coin flips work. — Michael
The thirder reasoning works against intuition as well, especially in the extreme version. It suggests that if we’re to be woken a thousand times in the case of tails then we should be almost certain that it’s tails upon waking, despite the fact that it’s an unbiased coin flip. And all because if we’re right then we’re right more often? That shouldn’t be the measure. — Michael
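To see the two frequencies being argued over, here is a minimal Monte Carlo sketch of the extreme version (the 1,000-awakening count, the trial count, and the names are illustrative choices, not anything from the thread):

```python
import random

# Extreme version: heads -> 1 awakening, tails -> 1000 awakenings.
def extreme_sleeping_beauty(trials=100_000, tails_wakings=1000):
    heads_tosses = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5          # unbiased coin toss
        heads_tosses += heads
        wakings = 1 if heads else tails_wakings
        total_awakenings += wakings
        heads_awakenings += wakings if heads else 0
    print("heads frequency per toss:     ", heads_tosses / trials)               # ~1/2
    print("heads frequency per awakening:", heads_awakenings / total_awakenings) # ~1/1001

extreme_sleeping_beauty()
```

Both numbers are real: roughly half of the tosses are heads, but only about one in a thousand awakenings follows a heads toss. The dispute is over which frequency Beauty's credence on waking should track.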
Indeed, I think it means that the odds here are not truly 2:1 at all. — Srap Tasmaner
I can't figure out how to make this into a normal wager of any kind. — Srap Tasmaner
Yes, there is something absurd about the 2/3, but it's a result of putting in twice as many blues per toss but then taking them out one at a time, as if they were the same as the reds. — Srap Tasmaner
         Mon      Tue      Wed      ...
Heads    1/2
Tails    1/2000   1/2000   1/2000   ...
The thing is, this 2:1 proportion of interviews is right, but remember that SB does not payout like a wager on a 2:1 biased coin. It pays out like a 3:1 coin. — Srap Tasmaner
What are your odds of getting a red marble? 1/2. — Srap Tasmaner
A randomly selected marble is now twice as likely to be blue — Srap Tasmaner
but each blue is discounted, and is only half the total available evidence of a tails flip, unlike the reds, each of which is all the evidence of a heads flip. — Srap Tasmaner
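One way to make the wagering point concrete is the sketch below, assuming an even-money £1 bet on heads placed at every interview of the standard problem; the stake and odds are assumptions for illustration, not terms anyone in the thread has fixed.

```python
import random

# Standard problem: heads -> one interview, tails -> two interviews.
# Beauty stakes £1 on heads at even money at every interview.
def per_interview_bet(trials=100_000, stake=1.0):
    total = 0.0
    for _ in range(trials):
        if random.random() < 0.5:
            total += stake           # heads: one winning bet
        else:
            total -= 2 * stake       # tails: two losing bets
    return total / trials            # average result per coin toss

print(per_interview_bet())           # ~ -0.5
```

Under this particular wager, even money on heads loses about £0.50 per toss, and a 2:1 payout would make it fair; the 3:1 figure discussed in the thread presumably rests on a different payout structure, which this sketch doesn't try to reproduce.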
Oh really? So you think they are the same thing. — Jeremiah
         Mon   Tue
Heads    1/3
Tails    1/3   1/3
You forgot Heads and Wednesday, Heads and Thursday, Heads and Friday... and so on forever.
Then do the same with Tails. — Jeremiah
And in the process don't forget that the days on which Beauty will be awakened AND interviewed (AKA the sample space) were defined before the experiment started. — Jeremiah
There are a couple of curiosities here:
in the first (and perhaps only) interview, heads is twice as likely as tails;
there aren't half as many second interviews as first interviews, but a third as many.
I find these proportions strange. Lewis ends up here and shrugs. I'm not sure what to make of it, but this is a far cry from the way I think halfers want to think of their position, that it's just a coin flip with some meaningless frosting on it. — Srap Tasmaner
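For reference, those two proportions fall straight out of the Lewis-style halfer table quoted earlier (Mon-Heads 1/2, Mon-Tails 1/4, Tue-Tails 1/4); a short check:

```python
from fractions import Fraction as F

# Lewis-style halfer table: Mon-Heads 1/2, Mon-Tails 1/4, Tue-Tails 1/4.
mon_heads, mon_tails, tue_tails = F(1, 2), F(1, 4), F(1, 4)

# In the first (and perhaps only) interview, heads vs tails:
print(mon_heads / (mon_heads + mon_tails))    # 2/3: heads twice as likely as tails

# Second interviews relative to first interviews:
print(tue_tails / (mon_heads + mon_tails))    # 1/3
```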
Whichever position we take, something about it is counterintuitive. — Srap Tasmaner
Beauty can either consider the probability based on the three possible awakenings that include an interview, or she could consider the coin flip, because regardless of any other considerations, a coin flip is still just a coin flip. Those are the only two relevant sample spaces. — Jeremiah
         Mon   Tue
Heads    1/4   1/4
Tails    1/4   1/4
         Mon   Tue
Heads    1/3   0
Tails    1/3   1/3
This is all stuff we've said before -- this comment summarizes the mechanism by which standard thirder wagering pays out 3:1, as andrewk pointed out, instead of 2:1. — Srap Tasmaner
You could also think of it as revenge against the halfer position, which draws the table this way:
...
Halfers, reasoning from the coin toss, allow Monday-Heads to "swallow" Tuesday-Heads.
Reasoning from the interview instead, why can't we do the same? — Srap Tasmaner
Ignore the coin toss completely. The intention of the problem is that Beauty cannot know whether this is her first or second interview. If we count that as a toss-up, then... — Srap Tasmaner
         Mon   Tue
Heads    1/4   0
Tails    1/4   1/2
If it's 1 then P(Heads|Awake) = 0.5 * 1 / 1 = 0.5. — Michael
Then perhaps you could explain how it works with my variation where Beauty is woken on either Monday or Tuesday if tails, but not both. Do we still consider it as four equally probable states and so come to the same conclusion that P(Heads|Awake) = 0.5 * 0.5 / 0.75 = 1/3? — Michael
                Mon           Tue
Heads           Awake: 1/4    Asleep: 1/4
Tails-Heads2    Awake: 1/8    Asleep: 1/8
Tails-Tails2    Asleep: 1/8   Awake: 1/8
                Mon   Tue
Heads           1/2   0
Tails-Heads2    1/4   0
Tails-Tails2    0     1/4
This is correct when waking up just once, so why not also when possibly waking up twice? — Michael
         Mon          Tue
Heads    Awake: 1/4   Asleep: 1/4
Tails    Awake: 1/4   Awake: 1/4
         Mon   Tue
Heads    1/3   0
Tails    1/3   1/3
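Spelling out the arithmetic behind those two pairs of tables (conditioning each "awake/asleep cell" table on being awake; the variable names are mine):

```python
from fractions import Fraction as F

# Michael's one-waking variant (Heads / Tails-Heads2 / Tails-Tails2 table):
heads_awake = F(1, 4)                 # Heads, Monday
tails_awake = F(1, 8) + F(1, 8)       # Tails-Heads2 Monday + Tails-Tails2 Tuesday
print(heads_awake / (heads_awake + tails_awake))    # 1/2

# Standard problem (second pair of tables):
heads_awake = F(1, 4)                 # Heads, Monday
tails_awake = F(1, 4) + F(1, 4)       # Tails, Monday + Tails, Tuesday
print(heads_awake / (heads_awake + tails_awake))    # 1/3
```

The disagreement in the posts that follow is over whether the standard problem's four cells really start out equally probable at 1/4 each.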
The probability of the awakenings is dependent on the coin flip (1st awakening is 1 if heads, 0.5 if tails), whereas the probability that a coin flip lands heads is independent. — Michael
So why not apply the same reasoning to Sleeping Beauty? The initial coin toss has 1/2 odds of heads, so it's 1/2 odds of heads. — Michael
This is why I suggested the alternative experiment where we don't talk about days at all and just say that if it's heads then we'll wake her once (and then end the experiment) and if it's tails then we'll wake her twice (and then end the experiment). There aren't four equally probable states in the experiment.
So we either say that P(Awake) = 1, or we say that being awake doesn't provide Beauty with any information that allows her to alter the initial credence that P(Heads) = 0.5 (or both). — Michael
It's the same problem, with identical results, just without the sleight of hand. — tom
I just gave you my response and I am not going to wade through your clear misunderstanding of Lewis's argument. — Jeremiah
Read the article to find out why it has a plus. — Jeremiah
Beauty conditionalizes on being awakened, so the values change to (1/4)/(3/4) = 1/3. — Srap Tasmaner
↪Andrew M She is never told it is Monday, there is no relevant self-locating information, and she knew there were only three possible awake periods before the experiment. Everything we know is everything she knows before the experiment; therefore 1/3 is a prior. We aren't privy to any extra information here. — Jeremiah
P(Heads|Awake) = P(Heads) * P(Awake|Heads) / P(Awake)
If she applies this before the experiment then she knows that P(Heads|Awake) = 0.5 * 1 / 1 = 0.5. — Michael
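Plugging the two assignments that have appeared in the thread into that formula (a sketch; the function name is illustrative):

```python
def p_heads_given_awake(p_awake_given_heads, p_awake):
    """P(Heads|Awake) = P(Heads) * P(Awake|Heads) / P(Awake), with P(Heads) = 1/2."""
    return 0.5 * p_awake_given_heads / p_awake

# Halfer assignment: being awakened was guaranteed either way.
print(p_heads_given_awake(1.0, 1.0))     # 0.5

# Per-day assignment from the earlier exchange: P(Awake|Heads) = 0.5, P(Awake) = 0.75.
print(p_heads_given_awake(0.5, 0.75))    # 0.333...
```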
You, as the contestant, know for certain that one of the other two doors is empty; once the door is opened, you still know for certain that one of the doors is empty. OK, you now know which one is empty, and that IS information of sorts, but is it relevant information? — tom
All she knows is that she is awake, and that is twice as likely to be associated with tails. — tom
She is never told it is Monday, each awakening is the same, and there is no hint as to which day it is; temporally she is uncertain of her location. — Jeremiah
In the Monty Hall problem, the host gives you information that changes the probabilities that you assign to each door. That information is new to you. — Andrew M
The host does not, that's the trick. — tom
Not as the problem was described at the top of the thread. No information is given to Sleeping Beauty beyond what she was told would happen. To her each awakening is identical, and there are three of them. — tom
Beauty doesn't gain relevant new information when awakening; she knew all this beforehand. If we do the experiment on you, then you have a prior belief that it is 1/3; what new information would then update that? Priors need relevant new information that would allow us to update them, not just any old information that you think happened. — Jeremiah
Then there are 6 states, not 7. You're counting the tails state twice, which you shouldn't do. The two tails days need to share the probability that it's the tails state (1/6), giving each 1/12, which is the correct figure you get when you apply the probability rule:
P(A and B) = P(A) * P(B|A)
P(Tails and Tuesday) = P(Tails) * P(Tuesday|Tails)
P(Tails and Tuesday) = 1/6 * 1/2 = 1/12 — Michael
         Mon    Tue
Heads    5/6    0
Tails    1/12   1/12
         Mon   Tue
Heads    5/7   0
Tails    1/7   1/7
         Mon    Tue
Heads    5/12   5/12
Tails    1/12   1/12
Or a weighted coin that has a 5/6 chance of heads. — Michael
            Mon   Tue
Roll 1-5    5/7   0
Roll 6      1/7   1/7
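The same per-roll versus per-awakening split can be simulated for the die version (a rough sketch under the setup described above: a roll of 1-5 means one awakening, a 6 means two; names and trial count are illustrative):

```python
import random

def die_variant(trials=100_000):
    low_rolls = 0
    low_roll_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        low = random.randint(1, 6) <= 5       # rolls 1-5
        low_rolls += low
        wakings = 1 if low else 2             # a 6 means two awakenings
        total_awakenings += wakings
        low_roll_awakenings += wakings if low else 0
    print("1-5 frequency per roll:     ", low_rolls / trials)                     # ~5/6
    print("1-5 frequency per awakening:", low_roll_awakenings / total_awakenings) # ~5/7

die_variant()
```

Both the 5/6 and the 5/7 tables describe real frequencies; as with the coin, the question is which one the interview should be conditioned on.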
Except in this, and in the Monty Hall problem, there is no new information. — tom
Assume she's woken on Monday if it's heads, or on Tuesday and Wednesday if it's tails.
Do you agree that P(Monday|Awake) = 1/2? — Michael
         Mon   Tue
Heads    1/2   0
Tails    1/4   1/4
         Mon   Tue
Heads    1/3   0
Tails    1/3   1/3
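A quick sketch of that variant (heads: woken on Monday only; tails: woken on Tuesday and Wednesday; names and trial count are illustrative) shows where the 1/2 and 1/3 answers each come from:

```python
import random

def monday_variant(trials=100_000):
    runs_with_monday = 0
    monday_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        if random.random() < 0.5:      # heads: Monday only
            runs_with_monday += 1
            monday_awakenings += 1
            total_awakenings += 1
        else:                          # tails: Tuesday and Wednesday
            total_awakenings += 2
    print("runs including a Monday awakening:", runs_with_monday / trials)            # ~1/2
    print("Monday frequency per awakening:   ", monday_awakenings / total_awakenings) # ~1/3

monday_variant()
```

Half of the runs include a Monday awakening, but only a third of all awakenings fall on a Monday.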
However, it is not. When awakened Beauty does not know if it is Monday or Tuesday. — Jeremiah
I say bet £1 on heads. — Michael
(and in the case that it's tails it's only her bet on the last day that's accepted). — Michael
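A sketch of that wager, under the assumption of an even-money £1 stake (the odds aren't specified in the excerpt), with the stated rule that only the last day's bet counts when the coin lands tails:

```python
import random

def last_bet_only(trials=100_000, stake=1.0):
    total = 0.0
    for _ in range(trials):
        if random.random() < 0.5:
            total += stake     # heads: the single Monday bet on heads wins
        else:
            total -= stake     # tails: only Tuesday's bet counts, and it loses
    return total / trials      # average result per coin toss

print(last_bet_only())         # ~0: fair at even money under this payout rule
```

Under that last-day-only rule, an even-money bet on heads comes out fair on average, which seems to be the point the wager is meant to bring out.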
↪Andrew M That is not new information; she knew she'd be awakened beforehand. New, relevant and significant information for reallocating credence would be if she were told what day it was on Monday. — Jeremiah
I prefer the Monty Hall problem. — tom
Am I being completely stupid about this? — Srap Tasmaner
I only presented this side because they led with the 1/2 argument; however, they are correct in pointing out Beauty has gained no additional information. Really all she knows is what she was told before the experiment. — Jeremiah
The skeptic of course doesn't claim to know whether I'm in state (1a) or (1b), but he claims that even if I'm lucky and I'm in fact in state (1a), the possibility of a mistake still exists, which I cannot rule out. But my point is that if one is in fact in state (1a) then there's no possibility of his having the same experience and being mistaken. — Fafner
I shall consider the following argument for skepticism:
(1) Either (a) I see that I have hands or (b) it merely seems to me that I have hands because I’m deceived by Descartes’ evil demon.
(2) According to the skeptic, whenever I seem to see that I have hands, it is always logically possible that I’m deceived by Descartes’ evil demon.
(3) Hence I can never really know for sure whether I really have hands.
...
(*) Whenever it seems to the subject that he's in state (a), it is always possible for him to actually be in state (b).
But (*) is incoherent. — Fafner
Thanks for the reference to Wallace on Everett's interpretation. I just looked up his book The Emergent Multiverse: Quantum Theory According to the Everett Interpretation. The second part of the book, entitled Probability in a Branching Universe is of much interest to me. — Pierre-Normand
Let me just note that Rovelli and Bitbol both endorse relational approaches that share some features with Everett's interpretation. But they don't reify the multiverse any more than they do its branches. — Pierre-Normand
I am quite sympathetic also with the main drift of Apokrisis's constraint-based approach. But I think it is quite congenial to the pragmatist (or relational) interpretation of QM that I also favor over the alternative metaphysically 'realist' interpretations. It is indeed thanks to thermodynamical constraints that the structured and controllable 'classical world' emerges at all from the chaos of the homogeneous gas of the early expanding universe. — Pierre-Normand
Advocates of the Everett interpretation among physicists (almost exclusively) and philosophers (for the most part) have returned to Everett’s original conception of the Everett interpretation as a pure interpretation: something which emerges simply from a realist attitude to the unitarily-evolving quantum state.
How is this possible? The crucial step occurred in physics: it was the development of decoherence theory.
...
For decoherence is by its nature an approximate process: the wave-packet states that it picks out are approximately defined; the division between system and environment cannot be taken as fundamental; interference processes may be suppressed far below the limit of experimental detection but they never quite vanish. The previous dilemma remains (it seems): either worlds are part of our fundamental ontology (in which case decoherence, being merely a dynamical process within unitary quantum mechanics, and an approximate one at that, seems incapable of defining them), or they do not really exist (in which case decoherence theory seems beside the point).
Outside philosophy of physics, though (notably in the philosophy of mind, and in the philosophy of the special sciences more broadly) it has long been recognised that this dilemma is mistaken, and that something need not be fundamental to be real. In the last decade, this insight was carried over to philosophy of physics. — The Everett Interpretation - David Wallace
There you go! That's what I am talking about - accepting actual cut-offs in principled fashion. I find it encouraging that Tegmark is blogging in a way that sounds like confessing his sins. :) — apokrisis
I'm in over my head here. But I've seen MWI described as superdeterministic. https://en.wikipedia.org/wiki/Superdeterminism — JupiterJess