For instance, suppose you offer me the opportunity to purchase a $100 lottery ticket that carries a one in a septillion chance of winning me $200 septillion. Despite the expected value being positive, it may not be reasonable for me to purchase the ticket. However, it would be a logical fallacy to extrapolate from this example and conclude that it would also be unreasonable for me to buy a $100 lottery ticket with a one in ten chance of winning me $2000. Given that I'm not in immediate need of this $100, it might actually be quite unreasonable for me to pass up such an opportunity, even though I stand to lose $100 nine times out of ten. — Pierre-Normand
It would be unreasonable of you to believe that you are most likely to win. — Michael
Yes, if you repeat the game enough times then you will win more than you lose, but it is still irrational to believe that if you play the game once then you are most likely to win. — Michael
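As an aside, the arithmetic behind this exchange is easy to check. Here is a minimal Python sketch (the helper name is just illustrative) that computes the expected profit of each ticket alongside the chance of coming out ahead on a single play:

```python
from fractions import Fraction

def expected_profit(cost, prize, p_win):
    """Expected profit of buying one ticket: p_win * prize - cost."""
    return p_win * prize - cost

# A $100 ticket with a one in ten chance of winning $2000 (the second example).
p = Fraction(1, 10)
print(expected_profit(100, 2000, p))  # 100: positive expected profit
print(p)                              # 1/10: chance of coming out ahead on one play

# A $100 ticket with a one in a septillion chance of $200 septillion (the first example).
p = Fraction(1, 10**24)
print(expected_profit(100, 200 * 10**24, p))  # also 100: positive expected profit
print(p)                                      # but a vanishingly small chance of ever winning
```

Both tickets have the same positive expected profit; the disagreement above is about whether that alone settles what it is reasonable to believe or to do on a single play.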
It seems quite counterintuitive that if my credence concerns the outcome of the experimental run I'm in, it is P(10) = 1/10, and if it's the outcome of the present awakening, it's P(10) = 10/19, and that both outcomes are perfectly correlated. — Pierre-Normand
Perhaps it's a misnomer to call them correlated, because there's no meaningful notion of a joint event of both occurrences within the same sample space. — fdrake
What I mean is that whenever the coin landed heads during a particular awakening, then it also landed heads during the particular experimental run this awakening is a part of, and vice versa. — Pierre-Normand
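The figures being discussed suggest a variant in which a fair ten-sided die is rolled and the subject is awakened ten times if it lands on 10 and once otherwise; that reconstruction is an assumption, but it reproduces both numbers. A rough Monte Carlo sketch in Python shows the per-run frequency of '10' sitting near 1/10 and the per-awakening frequency near 10/19, even though within any single run the two propositions stand or fall together:

```python
import random

def simulate(runs=100_000, seed=0):
    """Assumed setup: fair ten-sided die; ten awakenings on a 10, one otherwise."""
    rng = random.Random(seed)
    runs_ten = 0        # experimental runs in which the die landed on 10
    awakenings = 0      # total awakenings across all runs
    awakenings_ten = 0  # awakenings that belong to a '10' run
    for _ in range(runs):
        roll = rng.randint(1, 10)
        wakings = 10 if roll == 10 else 1
        awakenings += wakings
        if roll == 10:
            runs_ten += 1
            awakenings_ten += wakings
    return runs_ten / runs, awakenings_ten / awakenings

per_run, per_awakening = simulate()
print(per_run)        # close to 0.100 (1/10)
print(per_awakening)  # close to 0.526 (10/19)
```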
Given that the experiment doesn’t work by randomly selecting an interview from the set of all interviews, I don’t think it rational to reason as if it does. The experiment works by rolling a die, and so it is only rational to reason as if we’re randomly selected from the set of all participants. — Michael
If she opts to track her awakenings (centered possible worlds), her credence in heads is 1/3. — Pierre-Normand
And how does one do this if not by reasoning as if one's interview is randomly selected from the set of possible interviews? — Michael
Sleeping Beauty's inability to single out any one of those possible awakenings as more or less likely than another — Pierre-Normand
Well, that’s the very thing being debated. A halfer might say that a Monday & Heads awakening is twice as likely as a Monday & Tails awakening, [...] — Michael
Would not a halfer say that they are equally likely? — Pierre-Normand
This scenario doesn't accurately reflect the Sleeping Beauty experiment. Instead, imagine that one bag is chosen at random. You are then given one ball from that bag, but you're not allowed to see it just yet. You then drink a shot of tequila that causes you to forget what just happened. Finally, you are given another ball from the same bag, unless the bag is now empty, in which case you're dismissed. The balls are wrapped in aluminum foil, so you can't see their color. Each time you're given a ball, you're invited to express your credence regarding its color (or to place a bet, if you wish) before unwrapping it. — Pierre-Normand
I didn't properly address this but I actually think it illustrates the point quite clearly. I'll amend it slightly such that the blue balls are numbered and will be pulled out in order, to better represent Monday and Tuesday.
If I'm told that this is my first ball (it's Monday) then P(R) = P(B1) = 1/2.
If I'm told that this is a blue ball (it's Tails) then P(B1) = P(B2) = 1/2.
If I don't know whether this is my first ball or a blue ball (I know neither that it's Monday nor that it's Tails), then I should assume that I randomly select a ball from a bag, and so P(R) = 1/2 and P(B1) = P(B2) = 1/4.
This contrasts with your reasoning where we randomly select a ball from a pile such that P(R) = P(B1) = P(B2) = 1/3. — Michael
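Taking the setup implied by this exchange (one bag holding a single red ball, the other holding the two numbered blue balls handed over in order, with the bag presumably chosen with equal probability), the long-run frequencies can be worked out exactly. The Python sketch below tallies both of them; which of the two a credence ought to track is, of course, the very point in dispute:

```python
from fractions import Fraction

# Assumed reconstruction: with probability 1/2 the "red" bag is used (one
# hand-over of R), with probability 1/2 the "blue" bag is used (B1 then B2,
# two hand-overs), with memory erased between hand-overs.
scripts = [
    (Fraction(1, 2), ["R"]),
    (Fraction(1, 2), ["B1", "B2"]),
]

per_run = {}                      # probability that each ball shows up in a run
expected_handovers = Fraction(0)  # expected number of hand-overs per run
for prob, balls in scripts:
    expected_handovers += prob * len(balls)
    for ball in balls:
        per_run[ball] = per_run.get(ball, Fraction(0)) + prob

# Long-run frequency of each ball among all hand-overs.
per_handover = {ball: p / expected_handovers for ball, p in per_run.items()}

print(per_run)       # R: 1/2, B1: 1/2, B2: 1/2 (per experimental run)
print(per_handover)  # R: 1/3, B1: 1/3, B2: 1/3 (per hand-over)
```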
Your revised scenario seems to neglect the existence of a state where the player is being dismissed. — Pierre-Normand
Even though the player is dismissed (as opposed to Sleeping Beauty, who is left asleep), a prior probability of P(Dismissed) = 1/4 can still be assigned to this state where he loses an opportunity to bet/guess. Upon observing the game master pulling out a ball, the player updates his prior for that state to zero, thus impacting the calculation of the posterior P(Red|Opp). If we assign P(Dismissed) = 1/4, it follows that P(Red|Opp) = 1/3. — Pierre-Normand
It doesn't. You're dismissed after red or the second blue. — Michael
we are changing the structure of the problem and making it unintelligible that we should set the prior P(W) to 3/4. — Pierre-Normand
If you're going to reason this way then you also need to account for the same with blue. You reach in after the second ball and pull out nothing. So really P(Dismissed) = 2/5. — Michael
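For what it's worth, if each list of centered states is weighted uniformly (an assumption made here for the sake of the sketch, not something either post commits to), the posterior on red given an actual hand-over comes out the same under both ways of counting the dismissals:

```python
from fractions import Fraction

def analyse(states):
    """Return (P(Dismissed), P(red bag | a ball is handed over)), weighting the
    listed centered states uniformly (an assumption made for this sketch)."""
    n = len(states)
    dismissed = sum(1 for s in states if not s["handed"])
    handed_red = sum(1 for s in states if s["handed"] and s["bag"] == "red")
    return Fraction(dismissed, n), Fraction(handed_red, n - dismissed)

# Four states, P(Dismissed) = 1/4: the empty reach-in counted only after red.
four = [
    {"bag": "red",  "handed": True},
    {"bag": "red",  "handed": False},  # reach in again, nothing there
    {"bag": "blue", "handed": True},   # first blue ball
    {"bag": "blue", "handed": True},   # second blue ball
]

# Five states, P(Dismissed) = 2/5: an empty reach-in after the second blue ball too.
five = four + [{"bag": "blue", "handed": False}]

print(analyse(four))  # P(Dismissed) = 1/4, P(Red | handed a ball) = 1/3
print(analyse(five))  # P(Dismissed) = 2/5, P(Red | handed a ball) = 1/3
```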
That's precisely the point, and why I suggested ignoring days and just saying that if heads then woken once and if tails then woken twice. P(W) = 1.
That (Heads & Tuesday (or second waking)) is a distraction that leads you to the wrong conclusion. — Michael
But what determines the right question to ask isn't the statement of the Sleeping Beauty problem as such but rather your interest or goal in asking the question. I gave examples where either one is relevant. — Pierre-Normand
Your safehouse and escape example is analogous to that first case, not the second. Your introduction of opportunities to escape changes the answer, so it isn't comparable to the Sleeping Beauty case, where there is nothing of the sort (and so, if anything, the second case above is closer to the Sleeping Beauty problem). — Michael
Introducing the concept of escape possibilities was intended to illustrate that what is at stake in maximizing the accuracy of the expressed credences can dictate the choice of the narrow versus the wide interpretation of the states that they are about.
In the safehouse and escape example: if the prisoner's goal is to maximize their chances of correctly predicting 'being-in-safehouse-#1' on any given awakening day, they should adopt the 'thirder' position (or a 5/11 position). If their goal is to maximize their chances of correctly predicting 'being-in-safehouse-#1' for any given kidnapping event (regardless of its duration), they should adopt the 'halfer' position (or a 6/11 position). — Pierre-Normand
If the agent's expressed credences are meant to target the narrow states, then they are trying to track the frequencies of those states as distributed over awakening episodes. If they are meant to target the wide states, then they are trying to track their frequencies as distributed over experimental runs (or kidnapping events). — Pierre-Normand
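A small helper makes that dependence on the choice of target explicit for any binary setup. The parameters below are placeholders: the 5/11 and 6/11 figures would presumably fall out of the particular safehouse schedule under discussion, which isn't reproduced in this excerpt, so the example is run with the standard Sleeping Beauty numbers instead:

```python
from fractions import Fraction

def credence_targets(p_event, days_if_event, days_otherwise):
    """Compare the two targets described above for a binary setup:
    per-run, the frequency of the event across experimental runs (or kidnappings),
    and per-day, its frequency across individual awakening days."""
    per_run = p_event
    expected_days = p_event * days_if_event + (1 - p_event) * days_otherwise
    per_day = (p_event * days_if_event) / expected_days
    return per_run, per_day

# Standard Sleeping Beauty numbers: heads (probability 1/2) means one awakening,
# tails means two.
print(credence_targets(Fraction(1, 2), 1, 2))  # 1/2 per run, 1/3 per awakening day
```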
It doesn't make sense to me to say that the answer in this scenario is relevant to a scenario where there are no opportunities to escape. — Michael
The opportunity to escape just enables the prisoner to put their credence to good use, and to choose how to most appropriately define the states that those credences are about. It doesn't change their epistemic situation. — Pierre-Normand
It does change the epistemic situation. It's exactly like the scenario with the coin tosses and the prizes, where in this case the prize is the opportunity to escape. — Michael