But what determines the right question to ask isn't the statement of the Sleeping Beauty problem as such but rather your interest or goal in asking the question. I gave examples where either one is relevant. — Pierre-Normand
we are changing the structure of the problem and making it unintelligible that we should set the prior P(W) to 3/4. — Pierre-Normand
Even though the player is dismissed (as opposed to Sleeping Beauty, who is left asleep), a prior probability of P(Dismissed) = 1/4 can still be assigned to this state where he loses an opportunity to bet/guess. Upon observing the game master pulling out a ball, the player updates his prior for that state to zero, thus impacting the calculation of the posterior P(Red|Opp). If we assign P(Dismissed) = 1/4, it follows that P(Red|Opp) = 1/3. — Pierre-Normand
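The bookkeeping in this passage can be checked by enumeration. A minimal sketch, assuming the setup implied here and later in the thread: one bag holds a single red ball, the other holds two blue balls, a bag is chosen by a fair coin, and there are two rounds of ball-handing, so the four centered states (bag, round) are equiprobable, with the red bag empty on round two:

```python
from fractions import Fraction

# Four equiprobable centered states: (bag, round).
# The red bag holds only one ball, so its round-2 state is "dismissed".
states = {
    ("red_bag", 1): "red ball",
    ("red_bag", 2): "dismissed",
    ("blue_bag", 1): "blue ball",
    ("blue_bag", 2): "blue ball",
}
prior = Fraction(1, 4)  # each centered state equally likely a priori

p_dismissed = sum(prior for s in states.values() if s == "dismissed")
p_opp = 1 - p_dismissed                       # an opportunity to bet/guess
p_red_and_opp = sum(prior for s in states.values() if s == "red ball")
p_red_given_opp = p_red_and_opp / p_opp

assert p_dismissed == Fraction(1, 4)
assert p_red_given_opp == Fraction(1, 3)
```

Conditioning the 1/4 prior for the dismissed state away leaves exactly the 1/3 figure quoted above.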
Your revised scenario seems to neglect the existence of a state where the player is being dismissed. — Pierre-Normand
This scenario doesn't accurately reflect the Sleeping Beauty experiment. Instead, imagine that one bag is chosen at random. You are then given one ball from that bag, but you're not allowed to see it just yet. You then drink a shot of tequila that causes you to forget what just happened. Finally, you are given another ball from the same bag, unless the bag is now empty, in which case you're dismissed. The balls are wrapped in aluminum foil, so you can't see their color. Each time you're given a ball, you're invited to express your credence regarding its color (or to place a bet, if you wish) before unwrapping it. — Pierre-Normand
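A quick Monte Carlo sketch of this tequila variant, assuming the bag contents given later in the thread (one red ball in one bag, two blue balls in the other), shows that about a third of all ball-receiving occasions involve the red ball:

```python
import random

def simulate(runs: int, seed: int = 0) -> float:
    """Fraction of ball-receiving occasions on which the ball is red."""
    rng = random.Random(seed)
    red_receipts = 0
    total_receipts = 0
    for _ in range(runs):
        bag = rng.choice([["red"], ["blue", "blue"]])  # pick a bag at random
        for ball in bag:  # one receiving per ball until the bag is empty
            total_receipts += 1
            if ball == "red":
                red_receipts += 1
    return red_receipts / total_receipts

print(simulate(100_000))  # ≈ 1/3
```

Half the runs produce one receiving (red bag), half produce two (blue bag), so red receivings make up (1/2)/(1/2 + 1) = 1/3 of the total.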
Would not a halfer say that they are equally likely? — Pierre-Normand
Sleeping Beauty's inability to single out any one of those possible awakenings as more or less likely than another — Pierre-Normand
If she opts to track her awakenings (centered possible worlds), her credence in heads is 1/3. — Pierre-Normand
It seems quite counterintuitive that if my credence concerns the outcome of the experimental run I'm in, it is P(10) = 1/10, and if it's the outcome of the present awakening, it's P(10) = 10/19, and that both outcomes are perfectly correlated. — Pierre-Normand
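The two figures are consistent with a variant along the following lines (my reconstruction, not spelled out in the quote): a fair ten-sided die is rolled, and the subject is awakened ten times if it lands 10, once otherwise. Under that assumption, both frequencies can be sketched at once:

```python
import random

def frequencies(runs: int, seed: int = 0):
    """Return (per-run frequency of outcome 10, per-awakening frequency)."""
    rng = random.Random(seed)
    runs_with_10 = 0
    awakenings = 0
    awakenings_with_10 = 0
    for _ in range(runs):
        roll = rng.randint(1, 10)
        n_awakenings = 10 if roll == 10 else 1  # ten awakenings on a 10
        awakenings += n_awakenings
        if roll == 10:
            runs_with_10 += 1
            awakenings_with_10 += n_awakenings
    return runs_with_10 / runs, awakenings_with_10 / awakenings

per_run, per_awakening = frequencies(200_000)
print(per_run)        # ≈ 1/10
print(per_awakening)  # ≈ 10/19
```

Per awakening, the odds are (1/10 × 10) against (9/10 × 1), i.e. 10 : 9, giving the 10/19 figure, while per run the outcome remains a 1/10 event — the two numbers track the same perfectly correlated event counted in different units.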
For instance, suppose you offer me the opportunity to purchase a $100 lottery ticket that carries a one in a septillion chance of winning me $200 septillion. Despite the expected value being positive, it may not be reasonable for me to purchase the ticket. However, it would be a logical fallacy to extrapolate from this example and conclude that it would also be unreasonable for me to buy a $100 lottery ticket with a one in ten chance of winning me $2000. Given that I'm not in immediate need of this $100, it might actually be quite unreasonable for me to pass up such an opportunity, even though I stand to lose $100 nine times out of ten. — Pierre-Normand
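The arithmetic behind the two tickets checks out: each has the same expected net gain of $100, which is exactly the point — identical expected values, very different practical appeal. Exact rational arithmetic avoids floating-point noise at septillion scale:

```python
from fractions import Fraction

def expected_net_gain(cost, win_probability, prize):
    """Expected net gain of a lottery ticket: E[prize] minus the cost."""
    return win_probability * prize - cost

septillion = 10**24
# Ticket 1: $100 for a one-in-a-septillion shot at $200 septillion.
ev_long_shot = expected_net_gain(100, Fraction(1, septillion), 200 * septillion)
# Ticket 2: $100 for a one-in-ten shot at $2000.
ev_short_shot = expected_net_gain(100, Fraction(1, 10), 2000)

assert ev_long_shot == ev_short_shot == 100  # same expected value...
# ...but only the second offers any realistic chance of ever collecting.
```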
Rather, it pointed out that your calculation of P(Heads | Monday or Tuesday) = 1/2 simply restates the unconditional probability P(H) without taking into account Sleeping Beauty's epistemic situation. — Pierre-Normand
This belief change is unusual. It is not the result of your receiving new information — you were already certain that you would be awakened on Monday. (We may even suppose that you knew at the start of the experiment exactly what sensory experiences you would have upon being awakened on Monday.) Neither is this belief change the result of your suffering any cognitive mishaps during the intervening time — recall that the forgetting drug isn’t administered until well after you are first awakened. So what justifies it?
The answer is that you have gone from a situation in which you count your own temporal location as irrelevant to the truth of H, to one in which you count your own temporal location as relevant to the truth of H.
The argument you've put forward could be seen as suggesting that the vast body of literature debating the halfer, thirder, and double-halfer solutions has somehow missed the mark, treating a trivial problem as a complex one. This isn't an argument from authority. It's just something to ponder over. — Pierre-Normand
And I think it's even better to not consider days and just consider number of times wakened. So first she is woken up, then put to sleep, then a coin is tossed, and if tails she's woken again. Then we don't get distracted by arguing that her being asleep on Tuesday if Heads is part of the consideration. It doesn't make sense to say that she's asleep during her second waking if Heads.
With this reasoning I think Bayes' theorem is simple. The probability of being woken up is 1 and the probability of being woken up if Heads is 1. That she's woken up a second time if Tails is irrelevant. — Michael
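Michael's calculation can be written out: with P(Woken) = 1 and P(Woken|Heads) = 1, Bayes' theorem leaves the prior untouched. A minimal sketch of that halfer computation:

```python
from fractions import Fraction

# Halfer reading: being awakened is certain under either outcome,
# so conditioning on "I am awake" cannot move the prior.
p_heads = Fraction(1, 2)        # prior for the coin
p_woken = Fraction(1)           # she is woken at least once no matter what
p_woken_given_heads = Fraction(1)

# Bayes' theorem: P(H | Woken) = P(Woken | H) * P(H) / P(Woken)
p_heads_given_woken = p_woken_given_heads * p_heads / p_woken
assert p_heads_given_woken == Fraction(1, 2)
```

The thirder's objection, pressed elsewhere in this thread, is that "I am awake now" is a centered proposition and should be weighted by the number of awakenings, not treated as the uncentered "I am woken at least once".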
Since the setup of the experiment doesn't even require that anyone look at the result of the toss before Monday night, nothing changes if the toss is actually performed after Sleeping Beauty's awakening. In that case the credences expressed on Monday are about a future coin toss outcome rather than an already actualized one. — Pierre-Normand
If so, this would suggest a highly unusual implication: that one could acquire knowledge about future events based solely on the fact that someone else would be asleep at the time of those events. — Pierre-Normand
Before being put to sleep, your credence in H was 1/2. I’ve just argued that when you are awakened on Monday, that credence ought to change to 1/3. This belief change is unusual. It is not the result of your receiving new information — you were already certain that you would be awakened on Monday.
...
Thus the Sleeping Beauty example provides a new variety of counterexample to Bas Van Fraassen’s ‘Reflection Principle’ (1984:244, 1995:19), even an extremely qualified version of which entails the following:
"Any agent who is certain that she will tomorrow have credence x in proposition R (though she will neither receive new information nor suffer any cognitive mishaps in the intervening time) ought now to have credence x in R."
It looks like you may have misinterpreted Elga's paper. He doesn't define P as an unconditional probability. In fact, he expressly defines P as "the credence function you ought to have upon first awakening." Consequently, P(H1) and P(T1) are conditional on Sleeping Beauty being in a centered possible world where she is first awakened. The same applies to P(R) and P(B1), which are conditional on you being in a centered possible world where you are presented with a ball still wrapped in aluminum foil before being given a tequila shot. — Pierre-Normand
To understand what P(R) entails, let's look at the situation from the perspective of the game master. At the start of the game, there is one red ball in one bag and two blue balls in the other. The game master randomly selects a bag and takes out one ball (without feeling around to see if there is another one). They hand this ball to you. What's the probability that this ball is red? — Pierre-Normand
Here, P(R|R or B1) is the probability that the ball you've just received is red, conditioned on the information (revealed to you) that this is the first ball you've received in this run of the experiment. In other words, you now know you haven't taken a shot of tequila. Under these circumstances, P(R) = P(B1) = 1/2. — Pierre-Normand
But your credence that you are in T1, after learning that the toss outcome is Tails, ought to be the same as the conditional credence P(T1|T1 or T2), and likewise for T2. So P(T1|T1 or T2) = P(T2|T1 or T2), and hence P(T1) = P(T2).
...
But your credence that the coin will land Heads (after learning that it is Monday) ought to be the same as the conditional credence P(H1|H1 or T1). So P(H1|H1 or T1) = 1/2, and hence P(H1) = P(T1).
Combining results, we have that P(H1) = P(T1) = P(T2). Since these credences sum to 1, P(H1) = 1/3.
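Elga's conclusion that the three centered states are equiprobable can be checked against long-run awakening frequencies. A Monte Carlo sketch of the standard protocol (one awakening on heads, two on tails):

```python
import random

def awakening_frequencies(runs: int, seed: int = 0):
    """Long-run fraction of awakenings of each centered type."""
    rng = random.Random(seed)
    counts = {"H1": 0, "T1": 0, "T2": 0}
    for _ in range(runs):
        if rng.random() < 0.5:        # heads: a single Monday awakening
            counts["H1"] += 1
        else:                          # tails: Monday and Tuesday awakenings
            counts["T1"] += 1
            counts["T2"] += 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

freqs = awakening_frequencies(300_000)
print(freqs)  # each of H1, T1, T2 ≈ 1/3 of all awakenings
```

This only shows that the thirder credence matches per-awakening frequencies; whether frequencies of centered states are what credence should track is precisely what halfers dispute.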
I would like the halfer to explain why ruling out the Tuesday scenario doesn't affect their credence in the coin toss outcome at all. — Pierre-Normand
My current credence P(H) is 1/2, but if I were placed in this exact same situation repeatedly, I would expect the outcome H to occur one third of the time. — Pierre-Normand
Sleeping Beauty's calculation that P(H) = 1/3 doesn't hinge on her participation in the experiment being repeated. She's aware that if the coin lands heads, she will be awakened once, but if it lands tails, she will be awakened twice. If we run this experiment once with three participants, and all three of them bet on T every time they are awakened, they will be correct 2/3 of the time on average, which aligns with their credences. — Pierre-Normand
Most of her awakenings occur on the rare occasion when 100 tosses yield heads, which forms the basis for her credence P(100H) being greater than 1/2. — Pierre-Normand
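The arithmetic behind this claim can be made explicit. Assuming the variant gives her N awakenings if all 100 tosses land heads and a single awakening otherwise (N is not specified in the quote), the per-awakening (thirder) credence is:

```latex
P(\mathrm{100H} \mid \text{awake})
  = \frac{2^{-100}\,N}{2^{-100}\,N + \bigl(1 - 2^{-100}\bigr)},
```

which exceeds 1/2 exactly when N > 2^100 − 1, i.e. when the awakenings awarded in the all-heads case outnumber the improbability of producing it.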
However, the Sleeping Beauty problem specifically inquires about her credence, not about the rationality of her attempt to maximize her expected value, or her preference for some other strategy (like maximizing the number of wins per experimental run rather than average gain per individual bet).
Even if she were to endorse your perspective on the most rational course of action (which doesn't seem unreasonable to me either), this wouldn't influence her credence. It would simply justify her acting in a manner that doesn't prioritize maximizing expected value on the basis of this credence. — Pierre-Normand
The only significant divergence lies in the frequency of opportunities: the hostage can't be provided with frequent chances to escape without invalidating the analogy, whereas Sleeping Beauty can be given the chance to guess (or place a bet) every single day she awakens without undermining the experiment.
However, we can further refine the analogy by allowing the hostage to escape unharmed in all instances, but with the caveat that he will be recaptured unknowingly and re-administered the amnesia-inducing drug. This would align the scenarios more closely. — Pierre-Normand
However, consider a different scenario where the hostage has a small, constant probability ε of discovering the means of escape each day (case-2). In this scenario, stumbling upon this means of escape would provide the hostage with actionable evidence that he could use to update his credence. Now, he would believe with a probability of 6/11 that he's in safehouse #2, thereby justifying his decision to pick up the torch. Consequently, given that 6 out of 11 kidnapped victims who find the means to escape are surrounded by lions, 6 out of 11 would survive. — Pierre-Normand
Your credence in each possibility is based on the number of ways in which you could find yourself in your current situation given the possible outcomes of the specific coin toss. — Pierre-Normand
It's worth noting that your provided sequence converges on 1/3. If the captive is not keeping track of the date, their credence should indeed be exactly 1/3. — Pierre-Normand
