Comments

  • Sleeping Beauty Problem
    Yes. The probability of it being red if heads is 1%. The probability of it being red if tails and Monday is 1%. The probability of it being red if tails and Tuesday is 1%. The probability of it being red if tails is 1 - 0.99^2 = 0.199.

    The probability of it being red if tails is greater than the probability of it being red if heads, therefore if it's red then it is more likely tails.
    Michael

    I believe you meant 0.0199.

    That represents the probability of the tulip being red at least once across the two days of awakening in the event that the coin lands tails, correct? While it's true that Sleeping Beauty, upon observing the red tulip, can deduce the truth of this disjunctive proposition, the two events are probabilistically independent. Thus, although it's more probable that at least one of these events will occur if there are two opportunities for it to occur within a single experiment run rather than just one, the observation of one such event doesn't entail that it's more probable that you're in an experiment run wherein two such opportunities occur.

    Consider this analogy: you're entering a movie theater where there's an even chance of a double feature being shown (according to a coin flip). There's a one percent chance that any given film will feature Rex Harrison. Suppose you see Harrison featured in the first film. Does that increase your credence that there will be a subsequent feature?

    If you think about it, Lewis's notion—that Sleeping Beauty can conclude from knowing it's Monday that a future coin toss is more likely to yield heads with a 2/3 probability—is already rather puzzling. It would be even more extraordinary to believe that any random observation made by Sleeping Beauty could allow her to update her credence about an unrelated coin flip.
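    The independence point can be illustrated with a quick Monte Carlo sketch (parameters assumed from the exchange: heads yields one awakening, tails two, and each awakening independently shows a red tulip with probability 1%). Conditioning on seeing red leaves the per-awakening frequency of tails unchanged:

```python
import random

random.seed(0)
N = 200_000
awakenings = 0
tails_awakenings = 0
red_awakenings = 0
red_tails_awakenings = 0
for _ in range(N):
    tails = random.random() < 0.5
    for _day in range(2 if tails else 1):
        awakenings += 1
        tails_awakenings += tails
        if random.random() < 0.01:   # red tulip, independently each day
            red_awakenings += 1
            red_tails_awakenings += tails

print(tails_awakenings / awakenings)          # ~2/3: P(tails | awake)
print(red_tails_awakenings / red_awakenings)  # ~2/3: P(tails | awake & red)
```

    Whatever one's view of the per-awakening prior, the red observation shifts nothing: the two printed frequencies agree.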
  • Sleeping Beauty Problem
    It does.
    Michael

    I don't understand your argument at all. Finding an escape opportunity with probability P is a constant feature of the prisoner's environment. Sleeping Beauty's waking days (or the prisoner's captivity days) aren't distinguished from one another in point of betting or escape opportunities. Suppose there always is a tulip on Sleeping Beauty's nightstand. Would her seeing the tulip enable her to change her credence in H? What if there is a 1% chance that the tulip is red on any given awakening day? Would that make any difference?
  • Sleeping Beauty Problem
    Can you give actual numbers?
    Michael

    Suppose there is a 0.01% chance of finding an opportunity to escape on any given day held captive, regardless of whether that day is the only one or one among six in a kidnapping event. Finding such opportunities doesn't yield any updating of credence. (Strictly, it would yield a very small update, since the fact that you haven't yet escaped slightly reduces the chances that you are past your first day of captivity. But this is quite immaterial.) We can suppose that you always are recaptured but that your goal nevertheless is to maximize your number of escape stints (while warding off the crocodiles or lions). In Sleeping Beauty's case, opportunities to bet could occur with 100% probability, and provided the payoffs are paid after the experiment is over, that wouldn't impact her credence either.
  • Sleeping Beauty Problem
    It's quite straightforward that P(Dice roll 6|opportunity to escape) > P(Dice roll 1-5|opportunity to escape)
    Michael

    Indeed, which is basically the 'thirder' solution (in this case, the 5/11er solution).
  • Sleeping Beauty Problem
    It does change the epistemic situation. It's exactly like the scenario with the coin tosses and the prizes, where in this case the prize is the opportunity to escape.
    Michael

    Which is why I had included the proviso that the (rare) opportunities be proportional to the number of days the hostage is held captive. Under those conditions, they carry no information to the hostage.

    Likewise, enabling Sleeping Beauty to bet on H on each awakening provides no information to her, provided only the payouts are delivered after the experiment is over.
  • Sleeping Beauty Problem
    It doesn't make sense to me to say that the answer in this scenario is relevant to a scenario where there are no opportunities to escape.
    Michael

    The opportunity to escape just enables the prisoner to put their credences to good use, and to choose how to most appropriately define the states that those credences are about. It doesn't change their epistemic situation.

    I discussed this a little further with GPT-4:

    PN:

    Just to give you the idea (since your context window is narrower than Michael's): in this variation, a hostage is taken to one of two safehouses based on the throw of a die. If the die doesn't land on six, the hostage is taken to safehouse #1. If the die lands on six, the hostage is taken to safehouse #2 for six consecutive days. He also is administered an amnesia-inducing drug on each night. The hostage is informed of this protocol. The question posed is: "On any given day, what is your credence in the proposition 'I am at safehouse #1'?" The credence here refers to the hostage's subjective probability, or belief, in being at safehouse #1. The answer provided in the discussion suggests that the hostage's credence should be 5/11 rather than 5/6, despite the fact that there is a 5/6 chance of the die not landing on six. This reasoning is based on the fact that the hostage gets to guess six times regarding the same die throw when the result is six.

    GPT4:

    Thank you for providing the context, it's indeed a crucial piece of information for this discussion. I now see why the 'thirder' position, or the 5/11 in this case, comes into play. It's a fascinating way to illustrate the Sleeping Beauty problem. And it makes perfect sense to apply the reasoning here in a similar way, given the similar structure of the problems.

    When the hostage awakes, they have to consider not only the initial 5/6 chance of being in safehouse #1 but also that, should they be in safehouse #2, they could be guessing their location six different times on the basis of the same die roll. Hence, despite the initial die roll favoring safehouse #1, the multiplicity of questioning events when in safehouse #2 adjusts the credence to 5/11.

    Your explanation above about how the hostage's goals can dictate whether they adopt a 'thirder' or 'halfer' perspective is clear and logical, in light of this information.

    PN:

    In a further refinement I suggested that safehouse #1 is surrounded by a crocodile-infested moat, while safehouse #2 is surrounded by a forest filled with lions. While escaping, the hostage can carry either a large wooden plank (to bridge the moat) or a torch (to ward off the lions), but not both.

    I suggested that the prisoner should maximise their survival chances by carrying the torch, in line with their 6/11 credence of being in safehouse #2. But this is true only if their chances of finding such an escape opportunity are proportional to the number of days they are held captive, such that the finding doesn't yield an updating of their credence.

    Likewise, if they wish to direct the police to rescue them (by communicating the address of one safehouse), and know that they can only be afforded one opportunity to do so per kidnapping event, they should direct them to safehouse #1. One twist here is that under those conditions, finding the unique opportunity to communicate with the police would lead the prisoner to update their credence to 5/6 that they are in safehouse #1. But if they can ensure that their message will be received by sending it every day, and that the police will not arrive before the end of the "kidnapping event" (but before they are being moved overseas, say), then their credence will not be updated and it will be rational for them to direct the police to safehouse #1 despite their credence being that they are in safehouse #2 with 6/11 probability. Can you see why?

    GPT4:

    This scenario adds another layer of complexity to the problem, but your explanations are clear and make sense.

    When it comes to choosing between the plank and the torch, if the hostage's chances of finding an escape opportunity are proportional to the number of days they are held captive, and finding such an opportunity doesn't lead to an updating of their credence about which safehouse they're in, it's rational for them to choose the torch, aligning with their 6/11 credence of being in safehouse #2. This is because they will have more opportunities to use it.

    On the other hand, if they can only communicate with the police once per kidnapping event, it would be more rational to direct the police to safehouse #1, even though their credence is that they are in safehouse #2 with a 6/11 probability. If finding this unique opportunity to communicate leads the hostage to update their credence to 5/6 that they are in safehouse 1, then it makes more sense to direct the police to safehouse #1. This is because, after the update, the chances that they are in safehouse #1 is higher.

    However, if the hostage is able to send a message every day, and they know that the police will not arrive until the end of the kidnapping event, it would still be rational to direct the police to safehouse #1, despite the 6/11 credence of being in safehouse #2. The reason is that, in this scenario, the credence will not be updated every day because the opportunity to send a message doesn't provide new information about their location. In this case, the best strategy would be to guide the police to the location they're most likely to be in over the course of the whole kidnapping event, which would be safehouse #1 due to the higher probability of being taken there initially, despite having fewer chances to send a message from there.

    PN:

    Yes, indeed. It seems counterintuitive that the hostage thinks that they are more likely than not to presently be located in safehouse #2 and yet is rational in directing the police to safehouse #1. But that's because, even though they are thereby more likely to be sending a message that will direct the police to the wrong safehouse, those wrong messages will arrive in batches of six and will only misdirect the police once!

    GPT4:

    Yes, exactly! This counterintuitive scenario arises from the peculiar structure of the problem, which combines the probabilities of being in a particular safehouse on a given day with the probabilities of being there across the entirety of the kidnapping event. The messages, when sent in batches, will indeed only misdirect the police once, even though there are six of them. And even though on any given day the hostage thinks they're more likely in safehouse #2, across the entirety of the kidnapping event, they're more likely to be in safehouse #1. Therefore, it makes sense to direct the police to safehouse #1, which seems counterintuitive when considering daily probabilities but becomes logical when taking into account the entire event's probabilities.
  • Sleeping Beauty Problem
    Your safehouse and escape example is analogous to that first case, not the second. Your introduction of opportunities to escape changes the answer, and so it isn't comparable to the Sleeping Beauty case where there is nothing comparable to it (and so if anything the second case above is closer to the Sleeping Beauty problem).
    Michael

    My safehouse and escape example – before I introduce the idea of escape possibilities – is designed in such a way that the prisoner's epistemic situation exactly mirrors Sleeping Beauty's epistemic situation. Upon awakening on any given day, the prisoner's credence that they are in safehouse #1 is the same as Sleeping Beauty's credence that the coin landed heads. In both cases, 'H' or 'being-in-safehouse-#1' are states that admit of both narrow and wide centered-possible-worlds interpretations. If the agents' expressions of their credences are meant to target the narrow states, then they are trying to track their frequencies as distributed over awakening episodes. If they are meant to target the wide states, then they are trying to track their frequencies as distributed over experimental runs (or kidnapping events). Introducing the concept of escape possibilities was intended to illustrate that what is at stake in maximizing the accuracy of the expressed credences can dictate the choice of the narrow versus wide interpretation of the states that they are about.

    In the safehouse and escape example: if the prisoner's goal is to maximize their chances of correctly predicting 'being-in-safehouse-#1' on any given awakening day, they should adopt the 'thirder' position (or a 5/11 position). If their goal is to maximize their chances of correctly predicting 'being-in-safehouse-#1' for any given kidnapping event (regardless of its duration), they should adopt the 'halfer' position (or a 5/6 position).
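    Both frequencies can be checked with a small Monte Carlo simulation of the stipulated setup (5/6 chance of safehouse #1 for one day, 1/6 chance of safehouse #2 for six days); a minimal sketch, with the sample size chosen arbitrarily:

```python
import random

# Simulate many kidnapping events: die doesn't land six -> safehouse #1
# for one day; die lands six -> safehouse #2 for six days.
random.seed(1)
N = 100_000
days_total = 0
days_in_1 = 0
events_in_1 = 0
for _ in range(N):
    in_1 = random.randrange(6) != 5   # die doesn't land on six: prob 5/6
    days = 1 if in_1 else 6
    days_total += days
    days_in_1 += days * in_1
    events_in_1 += in_1

print(days_in_1 / days_total)   # ~5/11: per-awakening-day frequency of #1
print(events_in_1 / N)          # ~5/6: per-kidnapping-event frequency of #1
```

    The same run of events yields 5/11 when scored per captivity day and 5/6 when scored per event, which is the 'thirder'/'halfer' contrast described above.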
  • Sleeping Beauty Problem
    That's precisely the point, and why I suggested ignoring days and just saying that if heads then woken once and if tails then woken twice. P(W) = 1.

    That (Heads & Tuesday (or second waking)) is a distraction that leads you to the wrong conclusion.
    Michael

    There are two possible conclusions depending on the way you choose to define the temporal span of your centered possible worlds (the states in the problem space). The conclusions are different because the questions are different. Of course, if you are unclear about the manner in which you define your states, then you might get confused and answer the question about the narrowly defined states (possible awakenings) with an answer that is suitable for the widely defined states (possible experiment runs), or vice versa. But what determines the right question to ask isn't the statement of the Sleeping Beauty problem as such but rather your interest or goal in asking the question. I gave examples where either one is relevant.
  • Sleeping Beauty Problem
    If you're going to reason this way then you also need to account for the same with blue. You reach in after the second ball and pull out nothing. So really P(Dismissed) = 2/5.
    Michael

    Under the narrow centered worlds parameterization of the problem space, the possibility (indeed, certainty) of Sleeping Beauty being dismissed on the third day is irrelevant. The third day isn't a day of possible interviews. And likewise in the ball picking analogy, if we wish to make it mirror the structure of the Sleeping Beauty problem.
  • Sleeping Beauty Problem
    It doesn't. You're dismissed after red or the second blue.
    Michael

    That's cute but rather misses the point. Sleeping Beauty also is dismissed on the third day no matter what. But if we ignore the narrow centered case where the day is Tuesday and the coin landed Heads, we are changing the structure of the problem and making it unintelligible that we should set the prior P(W) to 3/4.
  • Sleeping Beauty Problem
    Here is an excerpt of a discussion I've had with GPT-4. I don't think its last suggestion about the error Lewis might be making regarding Beauty's updating her credence in H in light of the fact that today is Monday is plausible. Despite that, and other small mistakes, I'm nevertheless quite impressed with its ability to grasp formal arguments.

    PN:

    Thank you! I've now made use of the stipulations above, assuming a prior P(W) = 3/4 that Sleeping Beauty is awakened on any given day, and that W and H represent narrow possible worlds centered on Beauty's potential awakening events (or on the event of her remaining asleep on Tuesday). I've calculated P(H|W) using Bayes' theorem and arrived at the expected answer. Would you kindly do the same, step by step, explaining your reasoning as you go along, to see if our intuitions match?

    GPT-4:

    Sure! We can use Bayes' Theorem to calculate P(H|W) step by step. Let me first explain the process.

    Bayes' Theorem states that P(H|W) = [P(W|H) * P(H)] / P(W), where:

    • P(H|W) is the probability of hypothesis H (that the coin landed heads) given evidence W (that Sleeping Beauty is awake),
    • P(W|H) is the probability of evidence W given hypothesis H,
    • P(H) is the prior probability of hypothesis H, and
    • P(W) is the prior probability of evidence W.
    Now, let's start by establishing our priors:

    • P(H) = 1/2 (because the coin is fair),
    • P(W) = 3/4 (as given in the problem; Sleeping Beauty is awakened on any given day with this probability).

    Next, we need to figure out P(W|H):

    • If H is true (the coin landed heads), Sleeping Beauty is awakened only once (on Monday), so P(W|H) = 1.

    We can now plug these values into Bayes' theorem to get P(H|W):

    P(H|W) = [P(W|H) * P(H)] / P(W)
    = [1 * 1/2] / 3/4
    = 1/2 / 3/4
    = 2/3.

    So, P(H|W) = 2/3, which means that given that Sleeping Beauty is awake, there is a 2/3 probability that the coin landed heads. This answer contrasts with the common answer of 1/3 often provided in the literature, but our version of the problem has altered the prior probabilities, which has in turn influenced the outcome.

    PN:

    I myself arrived at 1/3, and the prior seems consistent with what thirders also assume. I think your interpretation of P(W|H) may be questionable. P(W|H) is the probability that Beauty is in a narrow centered world where she is awakened, conditioned on the outcome H. When the outcome is H, she is awakened in the world Monday&H, and not awakened in the world Tuesday&H. P(W|H) is therefore 1/2, and P(H|W) is calculated to be 1/3, as expected.

    GPT-4:

    You're right, and I apologize for the confusion. My interpretation of P(W|H) was incorrect. Let's correct that.

    P(W|H) should indeed be the probability of W given H, which is the probability that Sleeping Beauty is awake on a given day given that the coin landed heads. In the case where the coin lands heads, Sleeping Beauty is only awakened on Monday, not Tuesday. Therefore, P(W|H) is indeed 1/2.

    Given that, let's recalculate:

    P(H|W) = [P(W|H) * P(H)] / P(W)
    = [1/2 * 1/2] / 3/4
    = 1/4 / 3/4
    = 1/3.

    You're correct. Given that Sleeping Beauty is awake, there is a 1/3 probability that the coin landed heads, which is the commonly accepted solution among "thirders". Thank you for your patience and for catching that mistake.

    PN:

    No worry! Unlike you, I always have the opportunity to reread my drafts and scour for mistakes!

    GPT-4:

    Thank you for your understanding! It's true that as a machine learning model, I provide responses in real time without the ability to review or edit my responses after they are generated. I rely on the accuracy of the information I was trained on and the rules of inference I use to generate responses. I always strive for precision, so I appreciate your patience and understanding when occasional errors do occur. Let me know if you have any more questions or problems to solve - I'm here to help!

    PN:

    Interestingly, my approach also can warrant the halfer solution when suitable priors are chosen consistently with the focus being displaced onto wider centered worlds that span complete experiment runs. (This is the approach Michael favors, although he is inexplicit about his choice of centered worlds.) Lewis's approach, though, has the untoward consequence that upon learning that today is Monday, Beauty must update her credence in H to 2/3. Intuitively, it should remain 1/2. In the light of our previous discussion, can you surmise what mistake Lewis might be making?

    GPT-4:

    Lewis's approach is a prominent variant of the halfer position in the Sleeping Beauty problem. According to his view, Sleeping Beauty should assign a credence of 1/2 to the coin landing heads because the initial probability, before she went to sleep, was 1/2 and she doesn't get any new information upon waking up that would lead her to update this credence.

    However, as you've pointed out, upon learning that it is Monday, Lewis suggests that Sleeping Beauty should update her credence in the coin having landed heads to 2/3. This has been criticized as counterintuitive. From Sleeping Beauty's perspective, the fact that it's Monday doesn't seem to provide her with any information that would make the coin landing heads more probable.

    One potential error in Lewis's reasoning might be related to the point we've discussed earlier, namely the definition and interpretation of the events and the kind of possible world one chooses to center upon. When Sleeping Beauty wakes up, her situation is not symmetrical with respect to the possible worlds "Heads & Monday" and "Tails & Monday". While it's true that these two worlds are equally likely given that it's Monday, Sleeping Beauty's evidence upon waking up isn't just that it's Monday, but rather something like "it's an awakening day and I don't know which day it is". If one interprets the evidence in this way, it seems more reasonable to say that her credence in heads should remain at 1/2 upon learning that it's Monday.

    Additionally, an error in Lewis' argument could also arise from a misunderstanding of conditional probabilities, specifically the probability of it being Monday given that the coin landed heads, P(Monday|H). This would naturally be 1, because if the coin landed heads, she is only awakened on Monday. The halfer could potentially misinterpret this as saying the coin is more likely to have landed heads given that it's Monday, which isn't necessarily the case.

    As always in these sorts of debates, much depends on how exactly we interpret the notions of evidence, updating, and conditional credence. Different interpretations can lead to different solutions to the problem.
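    For what it's worth, the corrected Bayes computation from the exchange above (P(H) = 1/2, P(W) = 3/4, P(W|H) = 1/2, under the narrow centered-worlds parameterization) can be verified with exact fractions:

```python
from fractions import Fraction

# Priors as stipulated in the exchange (narrow centered worlds):
P_H = Fraction(1, 2)          # fair coin
P_W = Fraction(3, 4)          # awake in 3 of the 4 equiprobable day-states
P_W_given_H = Fraction(1, 2)  # heads: awake on Monday, asleep on Tuesday

# Bayes' theorem: P(H|W) = P(W|H) * P(H) / P(W)
P_H_given_W = (P_W_given_H * P_H) / P_W
print(P_H_given_W)  # 1/3
```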
  • Sleeping Beauty Problem
    I didn't properly address this but I actually think it illustrates the point quite clearly. I'll amend it slightly such that the blue balls are numbered and will be pulled out in order, to better represent Monday and Tuesday.

    If I'm told that this is my first ball (it's Monday) then P(R) = P(B1) = 1/2.

    If I'm told that this is a blue ball (it's Tails) then P(B1) = P(B2) = 1/2.

    If I don't know whether this is my first ball or is a blue ball (neither know it's Monday or tails) then I should assume I randomly select a ball from a bag, and so P(R) = 1/2 and P(B1) = P(B2) = 1/4.

    This contrasts with your reasoning where we randomly select a ball from a pile such that P(R) = P(B1) = P(B2) = 1/3.
    Michael

    Your revised scenario seems to neglect the existence of a state where the player is being dismissed. This isn't quite equivalent to Milano's 'blind state' in the Sleeping Beauty context – where the 'blind state' refers to the case in which Sleeping Beauty is not awakened on Tuesday and thus never experiences it – but it functions in a similar way for the purpose of this analysis. In this state, the time comes to pull a second ball from the bag, but there's no such ball because the randomly selected bag only contained one. Even though the player is dismissed (as opposed to Sleeping Beauty, who is left asleep), a prior probability of P(Dismissed) = 1/4 can still be assigned to this state where he loses an opportunity to bet/guess. Upon observing the game master pulling out a ball, the player updates his prior for that state to zero, thus impacting the calculation of the posterior P(Red|Opp). If we assign P(Dismissed) = 1/4, it follows that P(Red|Opp) = 1/3.

    I now concur with you that there are scenarios – contingent upon specific purposes or interests – where Sleeping Beauty would want her credence P(H|W) to track the possibility of her awakening during a Heads-experimental run. In such scenarios, since the 'blind state' Tuesday&asleep isn't an independent possibility but is considered merely a feature of the relevant centered state (i.e., the state where she is in a Heads experimental run), it can be assigned the same prior as the probability that she will be awakened (either once or twice) during a Heads-experimental-run. This prior is 1/2. In this case, P(H|W) is calculated to be 1/2, but the interpretation of states H and W differs. The space of possibilities is parameterized differently, corresponding to 'wider' centered possible worlds (i.e., centered states that span a whole experimental run).

    I've previously suggested cases where the participant's goals make them more interested in tracking narrower centered states. As revealed in the analysis by Laureano Luna (following Groisman), it can be misleading to assume that outcomes perfectly correlated with centered 'H & awake' states – regardless of how broadly or narrowly they are being defined – must also share the same probabilities. That doesn't mean the frequencies and probabilities don't track each other, a point Milano also makes. The (Bayesian) updated credences still track the frequencies of the relevantly defined centered states, which is why they can underpin rational betting behaviors, even though maximizing EV is just one among many goals an agent can have when making use of their credences.
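    Under that parameterization (a fair coin selects a bag holding either one red ball or two blue balls, with two pull opportunities per run), a simulation recovers both P(Dismissed) = 1/4 and P(Red|Opp) = 1/3; a minimal sketch:

```python
import random

random.seed(2)
N = 100_000
pulls = 0       # opportunities where a ball is actually pulled
red_pulls = 0
dismissed = 0
for _ in range(N):
    heads = random.random() < 0.5   # heads: bag with one red ball
    for draw in (1, 2):
        if heads and draw == 2:
            dismissed += 1          # no second ball in the red bag
        else:
            pulls += 1
            red_pulls += heads      # the only ball in the heads bag is red

print(dismissed / (2 * N))  # ~1/4: prior of the dismissal state
print(red_pulls / pulls)    # ~1/3: P(Red | a ball is pulled)
```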
  • Sleeping Beauty Problem
    Well, that’s the very thing being debated. A halfer might say that a Monday & Heads awakening is twice as likely as a Monday & Tails awakening, [...]
    Michael

    Would not a halfer say that they are equally as likely?
  • Sleeping Beauty Problem
    And how does one do this if not by reasoning as if one's interview is randomly selected from the set of possible interviews?
    Michael

    If by "reasoning as if one's interview is randomly selected from the set of possible interviews" you're referring to Sleeping Beauty's avowal of her inability to single out any one of those possible awakenings as more or less likely than another, then yes, she is reasoning in that way. However, this form of reasoning is guided by her need to assess her credence in a heads-awakening outcome (her centered possible world), not by the setup of the experiment.

    The design of the experiment allows participants to assess both their centered and non-centered credences. What they decide to track can be influenced by various factors. For instance, they might be trying to optimize their expected value or their chances of winning; they could be trying to maximize their chances of escaping alive versus maximizing their chances of being rescued. Their choice could also be guided by their interests, such as whether they want to track the frequency of heads-awakenings or the frequency of heads-experimental-runs. In other words, the choice is driven by the participants' individual needs and interests, not dictated by the structure of the experiment itself.
  • Sleeping Beauty Problem
    Given that the experiment doesn’t work by randomly selecting an interview from the set of all interviews, I don’t think it rational to reason as if it is. The experiment works by rolling a dice, and so it is only rational to reason as if we’re randomly selected from the set of all participants.
    Michael

    Your assertion that I'm reasoning "as if" someone selected an interview from all possible interviews isn't quite accurate. Sleeping Beauty is fully aware that the current experimental run comprises either one or multiple awakenings, depending on the outcome of a coin toss. She doesn't assume that someone arbitrarily selected an awakening of hers. Rather, she knows she's currently experiencing an awakening and isn't certain whether it's the only one in the present run, or one among many.

    She's asked about her credence regarding the outcome of the coin toss. Given the known setup of the experiment, she can choose to calculate her credence in either the centered or non-centered heads outcome. She could even calculate both. If she opts to track her awakenings (centered possible worlds), her credence in heads is 1/3. If she decides to track her experimental runs (non-centered), her credence in heads is 1/2. These are credences in distinct outcomes, and both can serve different purposes.

    For example, in my hostage scenario, one approach maximizes one's chance of survival during an escape, while the other maximizes one's chances of being rescued by the police when communicating probable location. (I believe I may have made a mistake in my earlier analysis of the police rescue situation).
  • Sleeping Beauty Problem
    Perhaps it's a misnomer to call them correlated, because there's no meaningful notion of a joint event of both occurrences within the same sample space.
    fdrake

    What I mean is that whenever the coin landed heads during a particular awakening, then it also landed heads during the particular experimental run this awakening is a part of, and vice versa. But Sleeping Beauty's credences in those two different outcomes (centered = 1/3 and uncentered = 1/2) are different, they are both right, and they both perfectly track the frequencies of the corresponding outcomes when the experiment is repeated.
  • Sleeping Beauty Problem
    Yes, if you repeat the game enough times then you will win more than you lose, but it is still irrational to believe that if you play the game once then you are most likely to win.
    Michael

    Indeed, it's true that if you play the game once, it's not most likely that you'll win during that particular experimental run. However, when I suggest it's more likely than not that the outcome is 10 during the current awakening, I'm not claiming that it's most likely that the current experimental run will result in a win for me. These are two distinct statements, despite their truth values being perfectly correlated. This seems to be the point that was made in the conclusion of the paper that @fdrake mentioned. Furthermore, these claims hold true irrespective of whether the experiment is run once or multiple times.

    It seems quite counterintuitive that if my credence concerns the outcome of the experimental run I'm in, it is P(10) = 1/10, and if it's the outcome of the present awakening, it's P(10) = 10/19, and that both outcomes are perfectly correlated. But that's just because the one gerrymanders the other.
  • Sleeping Beauty Problem
    It would be unreasonable of you to believe that you are most likely to win.
    Michael

    Consider this scenario in the context of the Sleeping Beauty problem where a ten-sided die is thrown. In this situation, it would be rational for me to believe that I am more likely to win an even-money bet placed on the outcome being a 10 during the current awakening. Despite the fact that in 9 out of 10 experimental runs I would lose my only bet, in the remaining run I would win ten times my bet by the time the experiment concludes. Thus, on average, I'd win 10 out of 19 times. And this is because my credence that the outcome is 10, P(10) = 10/19, would exactly match, in the long run, the proportion of times the outcome indeed was 10 when I awoke.
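    The 10/19 figure can be checked by simulating the ten-sided-die variant (assuming, per the setup above, ten awakenings on a roll of 10 and a single awakening otherwise):

```python
import random

# Each run: roll a ten-sided die; a 10 yields ten awakenings,
# anything else yields one awakening.
random.seed(3)
N = 100_000
awakenings = 0
ten_awakenings = 0
for _ in range(N):
    roll = random.randrange(1, 11)
    days = 10 if roll == 10 else 1
    awakenings += days
    if roll == 10:
        ten_awakenings += days

print(ten_awakenings / awakenings)  # ~10/19: per-awakening frequency of a 10
```

    Per run, a 10 occurs only 1/10 of the time; per awakening, it occurs roughly 10/19 of the time, which is the contrast driving the betting argument.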
  • Sleeping Beauty Problem
    I don’t think any reasonable person would believe this. I certainly wouldn’t.
    Michael

    Perhaps a rational individual might not believe in the plausibility of being woken up and put back to sleep 2^101 times. But even if this extreme scenario makes the consideration of such a highly unlikely albeit extraordinary event unreasonable, it does not follow that thirder-like solutions to a less extreme version of the Sleeping Beauty problem would be equally unreasonable.

    For instance, suppose you offer me the opportunity to purchase a $100 lottery ticket that carries a one in a septillion chance of winning me $200 septillion. Despite the expected value being positive, it may not be reasonable for me to purchase the ticket. However, it would be a logical fallacy to extrapolate from this example and conclude that it would also be unreasonable for me to buy a $100 lottery ticket with a one in ten chance of winning me $2000. Given I'm not in immediate need of this $100, it might actually be quite unreasonable for me to pass up such an opportunity, even though I stand to lose $100 nine times out of ten.
  • Sleeping Beauty Problem
    I ended up in a state of confusion in the calculations, having a few contradictions in reasoning, which this paper elevates into framings of the experiment (including SB's setting within it) having inconsistent sample spaces between the centred and non-centred accounts. Thus yielding a "dissolution" of the paradox of the form; it's only a paradox when centred and non-centred worlds are equated.fdrake

    Thanks! I'm going to read this paper. I like the conclusion: "Thus, we suggest that although it is true that Beauty is in a heads-awakening if and only if the coin landed heads, Beauty, upon awakening, should assign probability 1/3 to the former and probability 1/2 to the latter."

    Is that not closely analogous to my distinction of case-2 and case-1 (respectively) in my analysis of the prisoner scenario? My analysis also suggests a pragmatist interpretation of the choice between centered and non-centered accounts. The choice depends neither on metaphysical preferences nor on arbitrary stipulations regarding sampling mechanisms, but rather on the specification of the use one intends to make of one's credence in the outcome.
  • Sleeping Beauty Problem
    I've been reading along, I have a meta question for you both Pierre-Normand@Michael - why is it helpful to discuss variants which are allegedly the same as the original problem when you both don't seem to agree what the sampling mechanism in the original problem is?fdrake

    I introduced the prisoner variant to satisfy Michael's insistence that Sleeping Beauty's credence only be evaluated within a single-run experimental context. It aimed to show that it is rational for the prisoner to act under the assumption that their safehouse is surrounded by lions with probability 6/11 whenever they get an opportunity to act on the basis of their credence, and this in spite of the fact that they had initially been brought to a safehouse surrounded by crocodiles with probability 5/6. It also highlighted how the credence of the participant (prisoner) might interact with the different credence of an external agent (the police). I think my distinction of case-1 and case-2 also helped diagnose the source of Michael's halfer intuition. It stems from confusing the standard Sleeping Beauty problem with a different but closely related one (that indeed may be construed as stemming from a different sampling mechanism).

    The blue and red balls mechanism was introduced by Michael but I proposed to refine it in order to better mirror the Sleeping Beauty protocol.

    Regarding the sampling mechanism, I believe the centered possible world approach offers a compelling interpretation, although Milano suggests that there may still be residual disagreement about setting priors. This approach is compelling because it allows the participant to reason about her epistemic situation in a natural way without the need to incorporate any weird metaphysical conjectures about how she finds herself in her current situation. However, I'm not entirely confident about this. Milano's chosen values for the parameters α and β seem well-argued to me, grounded in both the stipulations of the problem and uncontroversial rationality assumptions. But there may be something I've overlooked.
  • Sleeping Beauty Problem
    So ChatGPT is saying that P(Heads | Monday or Tuesday) = 1/2 is trivially true. Doesn't that just prove my point?Michael

    GPT-4 wasn't endorsing your conclusion. Rather, it pointed out that your calculation of P(Heads | Monday or Tuesday) = 1/2 simply restates the unconditional probability P(H) without taking into account Sleeping Beauty's epistemic situation.

    The argument you've put forward could be seen as suggesting that the vast body of literature debating the halfer, thirder, and double-halfer solutions has somehow missed the mark, treating a trivial problem as a complex one. This isn't an argument from authority. It's just something to ponder over.
  • Sleeping Beauty Problem
    Just for fun, I asked GPT-4 to comment on your Bayesian analysis.

    Reveal
    What do you make of Michael's purported solution to the Sleeping Beauty problem:

    "In fact there's an even simpler way to phrase Bayes' theorem, even using days.

    P(Heads | Monday or Tuesday) = P(Monday or Tuesday | Heads) * P(Heads) / P(Monday or Tuesday)
    P(Heads | Monday or Tuesday) = 1 * 1/2 / 1
    P(Heads | Monday or Tuesday) = 1/2"

    GPT-4

    Michael's computation seems to be misguided. He's attempting to use Bayes' theorem to calculate P(Heads | Monday or Tuesday), i.e., the probability of Heads given that it is either Monday or Tuesday. Here's why this approach doesn't work:

    First, the event "Monday or Tuesday" is trivial - it will always be either Monday or Tuesday, hence P(Monday or Tuesday) = 1. This factor does not contribute any information to the analysis.

    Second, P(Monday or Tuesday | Heads) equals 1, which indicates that if the coin lands Heads, it will definitely be either Monday or Tuesday, which is trivially true.

    Third, P(Heads) is the unconditional or prior probability of the coin landing Heads, which equals 1/2 under the problem setup.

    Thus, Michael's computation ends up simply restating the original unconditional probability of Heads, P(Heads) = 1/2, instead of calculating the conditional probability of Heads given it's either Monday or Tuesday.

    The Sleeping Beauty problem is trickier than a standard conditional probability problem because Sleeping Beauty's state of knowledge changes throughout the problem. On waking, she doesn't know what day it is, and this affects her credences in ways that can't be captured by Michael's use of Bayes' theorem. To accurately solve the problem, one needs to account for Sleeping Beauty's changing state of knowledge and the conditions under which she forms her beliefs.
  • Sleeping Beauty Problem
    I think this is a better way to consider the issue. Then we don't talk about Heads & Monday or Tails & Monday. There is just a Monday interview and then possibly a Tuesday interview. It's not the case that two thirds of all interviews are Tails interviews; it's just the case that half of all experiments have Tuesday interviews. Which is why it's more rational to reason as if one is randomly selected from the set of possible participants rather than to reason as if one's interview is randomly selected from the set of possible interviews.Michael

    It appears that you are suggesting a false dichotomy. Logically, both (1) two-thirds of all interviews being Tails interviews, and (2) half of all experiments having Tuesday interviews can simultaneously hold true. Both (1) and (2) are in fact logical implications from the problem's stipulations. The terms 'heads', 'tails', 'Monday', and 'Tuesday' merely serve as convenient labels for the spectrum of possibilities. Solving a problem doesn't involve selectively ignoring one of its stipulations or dismissing its acknowledgement as irrational. If you intentionally disregard a stipulation of the problem (or a logical consequence thereof), you are effectively addressing a different problem.

    And I think it's even better to not consider days and just consider number of times wakened. So first she is woken up, then put to sleep, then a coin is tossed, and if tails she's woken again. Then we don't get distracted by arguing that her being asleep on Monday if Heads is part of the consideration. It doesn't make sense to say that she's asleep during her second waking if Heads.

    I don't think anyone said that she experiences anything on Tuesday when she sleeps. Milano addresses the related concern that it is not an epistemic possibility that Sleeping Beauty will experience the centered possible world in which it is Tuesday and she is asleep. That doesn't mean that this possibility can't be considered in Bayesian conditionalization. You can ascribe non-zero probabilities to states that you will not experience, which happens for instance when you buy life insurance.
  • Sleeping Beauty Problem
    That's exactly the implication of Elga's reasoning.Michael

    Lewis's treatment yields an incredible result. Elga's treatment yields an unsurprising result by means of a controversial method. This is why I prefer the treatments by Stalnaker and (especially) Milano. They both demystify the method and Milano, in addition, shows the self centered world method to be consistent with Bayesian conditionalization, and both to satisfy van Fraassen's principles of reflection.
  • Sleeping Beauty Problem
    My example proves that this doesn't follow where P is the credence function I ought to have after being explained the rules of my game. Elga doesn't explain why it follows where P is the credence function I ought to have upon first awakening.Michael

    I apologize. I misunderstood this part of Elga's argument. Although P(T1) and P(H1) are the credences Sleeping Beauty should have upon being first awakened, they are not conditional on her being in a centered world where she is first awakened. It is rather P(H1|H1 or T1) that is thus conditioned.

    I'm going to study the argument more closely and comment later.

    Incidentally, the fact that Sleeping Beauty's credence that the coin landed (or soon is going to land) heads, upon first awakening, is being increased by 1/6 after she learns that today is Monday, is a point of agreement between Lewis and Elga.

    Elga's interpretation that her credence P(H) gets updated from 1/3 to 1/2 is easily accounted for by the fact that, after she can rule out today being Tuesday, the possible worlds [H1, asleep] and [T1, T2] remain equiprobable. Since the setup of the experiment doesn't even require that anyone look at the result of the toss before Monday night, nothing changes if the toss is actually performed after Sleeping Beauty's awakening. In that case the credences expressed on Monday are about a future coin toss outcome rather than an already actualized one. My previous remark that Sleeping Beauty and the "game master" here are in the same centered possible world and share the same evidence applies.

    Lewis's halfer argument, though, commits him to claim that Sleeping Beauty's credence P(H) gets updated from 1/2 to 2/3. In other words, after she is first awakened and told that today is Monday, it becomes rationally warranted for Sleeping Beauty to believe that the probability that the coin landed heads (or will land heads when it is being tossed later tonight) is 2/3. Lewis bites this bullet and explains this as a peculiar form of knowledge about the future. It would have been intriguing to know if Lewis would have extended this line of reasoning to the game master's credence, given that they share the same epistemic situation as Sleeping Beauty. If so, this would suggest a highly unusual implication - that one could acquire knowledge about future events based solely on the fact that someone else would be asleep at the time of those events.
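    Both updates can be verified with a direct Bayesian computation over the three centered possibilities (Heads-Monday, Tails-Monday, Tails-Tuesday). In this minimal sketch (the dictionary keys and function name are mine), conditioning on "today is Monday" takes Elga's thirder priors from P(H) = 1/3 to 1/2, and Lewis's halfer priors from P(H) = 1/2 to 2/3. That is an increase of 1/6 in each case, as noted above.

```python
from fractions import Fraction as F

def update_on_monday(priors):
    """Condition centered-world priors over {HM, TM, TT} on the
    evidence 'today is Monday' and return the posterior P(Heads)."""
    hm, tm = priors['HM'], priors['TM']
    return hm / (hm + tm)

elga  = {'HM': F(1, 3), 'TM': F(1, 3), 'TT': F(1, 3)}  # thirder priors
lewis = {'HM': F(1, 2), 'TM': F(1, 4), 'TT': F(1, 4)}  # halfer priors

print(update_on_monday(elga))   # 1/2
print(update_on_monday(lewis))  # 2/3
```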
  • Sleeping Beauty Problem
    That’s not accurate. There is a difference between these two assertions:

    1. P(R|R or B1) = P(B1|R or B1)
    2. P(R) = P(B1)

    The first refers to conditional probabilities, the second to unconditional probabilities, and in my example the first is true but the second is false.
    Michael

    It looks like you may have misinterpreted Elga's paper. He doesn't define P as an unconditional probability. In fact, he expressly defines P as "the credence function you ought to have upon first awakening." Consequently, P(H1) and P(T1) are conditional on Sleeping Beauty being in a centered possible world where she is first awakened. The same applies to P(R) and P(B1), which are conditional on you being in a centered possible world where you are presented with a ball still wrapped in aluminum foil before being given a tequila shot.

    To understand what P(R) entails, let's look at the situation from the perspective of the game master. At the start of the game, there is one red ball in one bag and two blue balls in the other. The game master randomly selects a bag and takes out one ball (without feeling around to see if there is another one). They hand this ball to you. What's the probability that this ball is red? This conditional probability is represented as P(R). This also represents how you should update your credence about the color of the ball upon learning that this is the first (or the only) ball given to you before the tequila shot. In this scenario, both you and the game master occupy the same centered possible world and share the same evidence.

    Incidentally, I think Stalnaker and Milano's paper both produce arguments that are easier to follow and, it seems to me, more rigorous than Elga's valiant first attempt.
  • Sleeping Beauty Problem
    There is a red ball in one bag and two numbered blue balls in a second bag. You will be given a ball at random.Michael

    This scenario doesn't accurately reflect the Sleeping Beauty experiment. Instead, imagine that one bag is chosen at random. You are then given one ball from that bag, but you're not allowed to see it just yet. You then drink a shot of tequila that causes you to forget what just happened. Finally, you are given another ball from the same bag, unless the bag is now empty, in which case you're dismissed. The balls are wrapped in aluminum foil, so you can't see their color. Each time you're given a ball, you're invited to express your credence regarding its color (or to place a bet, if you wish) before unwrapping it.

    According to Elga's reasoning:

    1. P(B1|B1 or B2) = P(B2|B1 or B2), therefore P(B1) = P(B2)

    2. P(R|R or B1) = 1/2, therefore P(R) = P(B1)

    3. Therefore, P(R) = P(B1) = P(B2) = 1/3

    The second inference and so conclusion are evidently wrong, given that P(R) = 1/2 and P(B1) = P(B2) = 1/4.

    So his reasoning is a non sequitur.
    Michael

    Here, P(R|R or B1) is the probability that the ball you've just received is red, conditioned on the information (revealed to you) that this is the first ball you've received in this run of the experiment. In other words, you now know you haven't taken a shot of tequila. Under these circumstances, P(R) = P(B1) = 1/2.
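    To make the counting explicit, here is a small exact enumeration of the protocol as I described it (fair coin picks the bag; the red bag holds one ball, the blue bag two; the labels and weights are mine). Across all ball-handings a red ball accounts for 1/3 of them, but conditional on the handing being the first of its run, red and B1 are equally likely, which is the sense in which P(R) = P(B1) = 1/2.

```python
from fractions import Fraction as F

# Each run: a fair coin picks the bag. The red bag yields one handing
# (R, first); the blue bag yields two (B1 first, then B2 after the
# tequila shot). Each pair is (run probability, list of handings).
runs = [
    (F(1, 2), [('R', True)]),
    (F(1, 2), [('B1', True), ('B2', False)]),
]

# Long-run fraction of ALL handings in which the ball is red.
red = sum(p * sum(1 for ball, _ in hs if ball == 'R') for p, hs in runs)
total = sum(p * len(hs) for p, hs in runs)
print(red / total)  # 1/3

# Probability the ball is red GIVEN it is the first handing of its run.
red_first = sum(p for p, hs in runs if hs[0][0] == 'R')
first = sum(p for p, hs in runs)  # every run has exactly one first handing
print(red_first / first)  # 1/2
```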

    Incidentally, since we started this discussion, I've read Elga's and Lewis's papers. But I've also read most of Robert Stalnaker's "Another Attempt to Put Sleeping Beauty to Rest" (2013) and Silvia Milano's "Bayesian Beauty" (2020), both of which are illuminating. Stalnaker modifies Lewis' centred-world approach, and Milano reconciles such approaches (which interpret indexical content) with Bayesian principles.
  • Sleeping Beauty Problem
    And this is precisely why the betting examples that you and others use don’t prove your conclusion.Michael

    The betting examples serve to illustrate that the credences held by thirders, unlike those held by halfers, align with the frequencies of outcomes in such a way that if they were to place bets based on these credences, their expected values would satisfy the equation EV = Σ(P(i)*$i), where P(i) represents the probabilities of outcomes and $i the corresponding payouts. This illustration does not necessitate that Sleeping Beauty actually seeks to maximize her expected value. Rather, it reveals that if this were her goal, her credences would guide her actions effectively. Moreover, if numerous participants were placed in identical situations to Sleeping Beauty's, their credences would correspond to the frequencies of the outcomes they anticipate, which aligns closely with the intuitive definition of a credence.

    On the other hand, it's challenging to reconcile a statement such as: 'My current credence P(H) is 1/2, but if I were placed in this exact same situation repeatedly, I would expect the outcome H to occur one third of the time.' This perspective, typically associated with halfers, seems to introduce an incongruity between stated credence and expected frequency of outcomes.
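    For what it's worth, this alignment between thirder credences and outcome frequencies is easy to check numerically. Here is a minimal simulation sketch of the standard protocol (one awakening on heads, two on tails, with an even-money $1 bet on tails at each awakening; the function name and parameters are mine):

```python
import random

def simulate(runs=100_000, seed=1):
    """Standard protocol: one awakening on heads, two on tails; an
    even-money $1 bet on tails at each awakening."""
    rng = random.Random(seed)
    tails_awakenings = 0
    total_awakenings = 0
    profit = 0
    for _ in range(runs):
        tails = rng.random() < 0.5
        awakenings = 2 if tails else 1
        total_awakenings += awakenings
        if tails:
            tails_awakenings += awakenings
            profit += awakenings   # win $1 at each tails awakening
        else:
            profit -= 1            # lose $1 at the single heads awakening
    return tails_awakenings / total_awakenings, profit / total_awakenings

freq, ev = simulate()
print(freq)  # close to 2/3
print(ev)    # close to +1/3 per bet
```

    The per-bet expected value of +1/3 is exactly what Σ(P(i)*$i) predicts when P(T) = 2/3, whereas the halfer credence P(T) = 1/2 predicts a fair bet.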
  • Sleeping Beauty Problem
    Except the experiment is only conducted once.Michael

    I've also been working under the assumption that the experiment is conducted only once. Sleeping Beauty's calculation that P(H) = 1/3 doesn't hinge on her participation in the experiment being repeated. She's aware that if the coin lands heads, she will be awakened once, but if it lands tails, she will be awakened twice. If we run this experiment once with three participants, and all three of them bet on T every time they are awakened, they will be correct 2/3 of the time on average, which aligns with their credences.

    In the same vein, in your dice roll variation, Sleeping Beauty knows that if the die didn't land on six, she will be awakened once, but if it did land on six, she would be awakened six times. This information is sufficient to justify her credence P(6) = 6/11. If we have eleven participants each participating in this experiment once, and they all bet on 'six' every time they are awakened, they will be correct 6/11 of the time on average.
  • Sleeping Beauty Problem
    But even if the same participant were to repeat the experiment 2^100 times, they don't bet on 100 heads because they think it's more likely, they bet on 100 heads because they know that eventually they will win, and that the amount they will win is greater than the amount they will lose.Michael

    Sleeping Beauty isn't asked to place a bet on the outcome on day zero, before she's put to sleep for the first time, with payouts occurring on each subsequent awakening based on her initial prediction. Instead, she's asked about her credence in the outcome of the 100 tosses on each individual awakening (and given an opportunity to place a bet). Most of her awakenings occur on the rare occasion when 100 tosses yield heads, which forms the basis for her credence P(100H) being greater than 1/2 on the occasion of a particular awakening. This same reasoning would also be the ground for her betting on that outcome, assuming her primary goal is to maximize her expected value for that single bet.

    However, the Sleeping Beauty problem specifically inquires about her credence, not about the rationality of her attempt to maximize her expected value, or her preference for some other strategy (like maximizing the number of wins per experimental run rather than average gain per individual bet).

    Even if she were to endorse your perspective on the most rational course of action (which doesn't seem unreasonable to me either), this wouldn't influence her credence. It would simply justify her acting in a manner that doesn't prioritize maximizing expected value on the basis of her credence.
  • Sleeping Beauty Problem
    Then this is a different scenario entirely. If we consider the traditional problem, it would be that after the initial coin toss to determine which days she could be woken, a second coin toss determines if she will be woken (heads she will, tails she won't).Michael

    It is fundamentally the same scenario, except we're adding external threats like crocodiles or lions to the test environment. In the hostage situation, the captive finding himself still held hostage upon waking up is parallel to Sleeping Beauty finding herself still under experimentation when she wakes up, as opposed to being dismissed on the seventh day.

    The rational means that increase the captive's survival chances in our scenario (from 5/11 to 6/11) are comparable to those Sleeping Beauty would use to increase the frequency of her correct retrodictions of the die roll (from 5/11, were she to guess the prior-favored [1, 5] range, to 6/11, by guessing six).

    The only significant divergence lies in the frequency of opportunities: the hostage can't be provided with frequent chances to escape without invalidating the analogy, whereas Sleeping Beauty can be given the chance to guess (or place a bet) every single day she awakens without undermining the experiment.

    However, we can further refine the analogy by allowing the hostage to escape unharmed in all instances, but with the caveat that he will be recaptured unknowingly and re-administered the amnesia-inducing drug. This would align the scenarios more closely.
  • Sleeping Beauty Problem
    Also, as an aside, if you correctly reason that it's tails then you escape on the first day, and so you can rule out today being the second day (assuming you understand that you would also reason as if it's tails).Michael

    Yes, that's a valid point. This is precisely why I introduced the concept of a small, constant probability ε, representing the chance for the hostage to find the means of escape on any given day. By doing so, we can marginalize the potential inferences the hostage could make from the fact that he has not yet escaped (or perished in the attempt). The key aspect of the Sleeping Beauty problem, and indeed the comparison to our hostage scenario, is the existence of multiple actionable opportunities to express one's credence - be it through betting, verification, rational action, and so on. So, while your observation uncovers a minor discrepancy, I do not believe it fundamentally undermines my overall argument.
  • Sleeping Beauty Problem
    If you don't like to consider my extreme example because the numbers are too high then let's consider a less extreme version. Rather than a coin toss it's a dice roll. If 1 - 5 then woken once (or safehouse 1 with crocodiles for one day), if 6 then woken six times (or safehouse 2 with lions for six days). Any rational person would take the wooden plank, and 5 out of every 6 kidnapped victims would survive.Michael

    The analysis you provide would hold true if the hostage was guaranteed in advance to have exactly one opportunity to escape during the entirety of the experiment (case-1). I agree that in this context, upon being provided the means to escape, the prisoner's updated credence would match his initial credence, leading him to select the wooden plank and expect to survive five out of six times (or one out of two times in the original problem).

    However, consider a different scenario where the hostage has a small, constant probability ε of discovering the means of escape each day (case-2). In this scenario, stumbling upon this means of escape would provide the hostage with actionable evidence that he could use to update his credence. Now, he would believe with a probability of 6/11 that he's in safehouse #2, thereby justifying his decision to pick up the torch. Consequently, given that 6 out of 11 kidnapped victims who find the means to escape are surrounded by lions, 6 out of 11 would survive.

    In my previous post, I made a point to explain why the original Sleeping Beauty setup mirrors case-2 more closely than it does case-1. The crucial aspect in the Sleeping Beauty problem is the opportunity for her to express her credence, which, as in the case-2 scenario, is not a single guaranteed occurrence, but something that happens with different frequency depending on the result of the coin toss.
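    Case-2 is straightforward to simulate. In this sketch (the value of ε and the function name are illustrative), each captivity day carries a small constant probability ε of finding a means of escape, and we ask what fraction of such opportunities arise in the lion safehouse:

```python
import random

def lion_fraction(runs=200_000, eps=0.05, seed=2):
    """Die roll: 1-5 puts the captive in the crocodile safehouse for
    one day, 6 in the lion safehouse for six days. Each day there is
    a probability eps of finding a means of escape."""
    rng = random.Random(seed)
    lion_opps = 0
    total_opps = 0
    for _ in range(runs):
        lions = rng.randint(1, 6) == 6
        days = 6 if lions else 1
        for _ in range(days):
            if rng.random() < eps:
                total_opps += 1
                if lions:
                    lion_opps += 1
    # Fraction of escape opportunities that arise among the lions.
    return lion_opps / total_opps

print(lion_fraction())  # close to 6/11 ≈ 0.545
```

    The fraction converges on 6/11 regardless of the value of ε, since opportunities are proportional to the number of days spent in each safehouse.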
  • Sleeping Beauty Problem
    In this case you're in safehouse 1 if not tails yesterday and not tails today and you're in safehouse 2 if either tails yesterday or tails today. Obviously the latter is more likely. I think only talking about the preceding coin toss is a kind of deception.Michael

    The original Sleeping Beauty problem does indeed hinge on a single coin toss, but it's crucial to understand the unique nature of this coin toss within the context of the experiment. When we consider the result of 'the' coin toss, we're not considering a generalized coin toss in a vacuum; we're considering the specific toss that determined the course of the experiment that Sleeping Beauty currently finds herself in.

    Sleeping Beauty's question - 'What is the probability that the coin shows heads?' - is not a generalized question about the inherent probability of a coin toss result. Instead, it's a targeted question about the outcome of the specific coin toss that dictated the structure of her current experience. Given this context, I argue that Sleeping Beauty is justified in updating her prior credence P(T) from 1/2 to 2/3.

    Let's modify the safehouse example to more closely mirror the Sleeping Beauty problem. Imagine you are kidnapped and held hostage. The captor flips a coin only once. If the coin lands heads, you are blindfolded and transported to safehouse #1, held captive for one day, and then released. If the coin lands tails, you are taken to safehouse #2, where you're held for two days and then released. In the latter case, however, an amnesia-inducing drug is administered at the end of the first day, such that you forget the events of the day and wake up on the second day with no memory of the previous day.

    Just like in the Sleeping Beauty problem, you are unable to distinguish between the first and second day of captivity in safehouse #2. Now, when you find yourself in a safehouse and try to guess the outcome of the coin toss, your best bet would be to assign a 1/3 probability to being in safehouse #1 (and hence the coin toss resulting in heads) and a 2/3 probability to being in safehouse #2 (and hence the coin toss resulting in tails). This is not a reflection of the inherent probabilities of a coin toss, but rather an acknowledgment of your unique epistemic position in the situation, and the contextual information you have access to as a captive. Your credence in each possibility is based on the number of ways in which you could find yourself in your current situation given the possible outcomes of the specific coin toss.

    To further bolster the point, imagine that you find a hidden cellphone and have a brief window to send a text message to the police. However, due to character limitations, you can only provide the address of one of the two safehouses. Given your current epistemic position, would it not make sense to communicate the address of safehouse #2? This decision is based on your updated credence in the outcome of the coin toss: you believe there is a 2/3 chance you're in safehouse #2 and a 1/3 chance you're in safehouse #1. So, despite the inherent probabilities of a fair coin toss being 1/2 for each outcome, your unique context and the information you have access to lead you to favor safehouse #2 as the most likely location for your captivity.

    Let us now consider the perspective of the police and the information available to them. If they have knowledge of the captor's coin-flip protocol, then prior to receiving any communication, they would have a 1/2 credence of the hostage being in either safehouse.

    How does their epistemic position change after they receive your message? I would argue that we now need to distinguish two cases.

    In the first case, if the police have themselves provided the communication means in both safehouses, the message does not convey any new information about the coin toss nor about the hostage's location. Given that the hostage would have been equally likely to find and use the device in either location, the message itself doesn't change the police's prior beliefs.

    In the second case, however, if the hostage independently finds a way to communicate, the longer duration spent at safehouse #2 makes it twice as likely for the hostage to send a message from there. This makes a crucial difference. If a message is sent, the police now have evidence that is more likely to occur if the coin landed tails, which should update their credences accordingly, both for the location of the hostage and the result of the coin toss.

    The message, in this context, acts as evidence that bears differently on the hypotheses of interest (the coin toss and the location), given the known differences in the likelihoods of being able to send the message from the two locations.

    In case-1, since the hostage has an equal chance to communicate from either safehouse, the police are not able to update their credences based on the received message. Here, the police's prior beliefs remain the same, P(H) = P(T) = 1/2, and their strategy should reflect these priors.

    In case-2, however, the hostage's ability to communicate is influenced by the duration of captivity. If they get a message, the police can update their beliefs because the message is more likely to be sent if the coin landed tails and the hostage was in safehouse #2 for longer. Their updated beliefs should be P(H) = 1/3, P(T) = 2/3.

    Now, consider the question of whether the police acting based on the hostage's updated credence rather than their own increases their chances of success. It seems that this depends on the case. In case-1, if the police act based on their own beliefs, they're expected to be right half the time. If they act based on the hostage's belief, they'd be misled half the time (when the coin lands heads). So, in case-1, the police need not update their credence.

    In case-2, however, acting based on the hostage's beliefs (which coincide with the updated beliefs of the police if they treat the message as evidence) increases their chances of success. This is because the hostage's ability to communicate is correlated with the coin toss and their location, unlike in case-1. So, in case-2, the police are enabled to update their credence to align it with the hostage's.

    The ambiguity stems from the fact that whether the police should act based on the hostage's belief depends on the specific circumstances of each case. A complete answer should specify the case under consideration and fully specify the epistemic perspectives of all the agents.
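    The two cases can also be made concrete with exact arithmetic over the coin branches. In this sketch (the function name and the illustrative value of q are mine), a message's likelihood is the same under both branches in case-1, and twice as high under tails in case-2 (to first order, since there is one chance per captivity day):

```python
from fractions import Fraction as F

def posterior_tails(lik_heads, lik_tails, prior_tails=F(1, 2)):
    """Posterior P(tails | message received), from the likelihood of
    a message being sent under each branch of the coin toss."""
    prior_heads = 1 - prior_tails
    num = prior_tails * lik_tails
    return num / (num + prior_heads * lik_heads)

q = F(1, 10)  # illustrative chance of getting a message out

# Case-1: the police planted the device, so a message is equally
# likely from either safehouse.
print(posterior_tails(q, q))       # 1/2

# Case-2: one chance per captivity day, so a message is (to first
# order) twice as likely during the two-day tails captivity.
print(posterior_tails(q, 2 * q))   # 2/3
```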

    On edit: I foresee a possible objection to the previous discussion, particularly that it draws upon an analogy with a situation involving a communication device, which has no direct parallel in the original Sleeping Beauty problem. To address this, I propose another scenario that may mirror Sleeping Beauty's epistemic situation even more closely:

    Reveal
    Let's reconsider the safehouse scenario. Suppose the hostage finds an opportunity to escape. However, the nature of the obstacles he must overcome depends on the safehouse he's in: safehouse #1 is surrounded by a crocodile-infested moat, while safehouse #2 is surrounded by a forest filled with lions. While escaping, the hostage can carry either a large wooden plank (to bridge the moat) or a torch (to ward off the lions), but not both.

    In this scenario, we can distinguish between two cases. In case-1, the opportunity to escape is equally likely for both potential outcomes of the coin toss. In this instance, when planning an escape, the hostage has equal odds of encountering either crocodiles or lions. Therefore, his credence in the coin having landed heads should be updated to 1/2, in light of the fact that he has an opportunity to escape.

    On the other hand, in case-2, opportunities to escape are proportional to the time spent in a safehouse. In this situation, the hostage is twice as likely to encounter lions and should, therefore, choose to carry the torch.

    Considering which of these two cases more closely mirrors the Sleeping Beauty problem, I argue in favor of case-2. The opportunities for the hostage to escape can be seen as opportunities for him to indirectly learn about the coin toss result. The correlations between tails and lions, and heads and crocodiles, both stand at exactly 1. Similarly, in the Sleeping Beauty problem, her credence in the coin toss resulting in tails corresponds directly to her credence in the likelihood of this result being revealed to her if she had the opportunity to confirm it at that moment. Each time Sleeping Beauty is awakened (or given an opportunity to express her credence), she is aware that such opportunities are twice as frequent if the coin landed tails. Therefore, being afforded such an opportunity is directly analogous to case-2 in the hostage scenario.
  • Sleeping Beauty Problem
    Not sure what this is supposed to show?Michael

    It's worth noting that your provided sequence converges on 1/3. If the captive is not keeping track of the date, their credence should indeed be exactly 1/3. The crucial detail here is that the captive gets to guess twice regarding the same coin toss when the result is tails. This very fact is what explains why their credence in being presently in safehouse #1 (and thus, the preceding coin toss resulting in heads) is 1/3 rather than 1/2.

    I'd like to draw your attention to the use of the terms "presently" and "preceding" in the proposition's statement. These indexicals are vital as they give us insight into the captive's perspective on the previous coin toss. This subjective epistemic perspective must be distinguished from an external point of view that would consider the next coin toss.
  • Sleeping Beauty Problem
    Actually that’s not right. Need to think about this.Michael

    Take your time. I'm being moved to a new safehouse until tomorrow.
  • Sleeping Beauty Problem
    I don’t quite understand this example. There are multiple coin flips?Michael

    Yes, my first sentence was wrong. There is a new coin flip every day, when the captor must decide on a new (or the same) safehouse. In the case where the coin lands tails, the new decision happens only after two days have elapsed (which is analogous to waking up and interviewing Sleeping Beauty twice).
  • Sleeping Beauty Problem
    I’ve explained the error with betting examples before. Getting to bet twice if it’s tails doesn’t mean that tails is more likely.Michael

    Suppose you've been kidnapped. Each morning, your captor flips a coin. If it lands on heads, you're blindfolded and taken to safehouse #1 (or simply walked around and returned there). If it lands on tails, you're blindfolded and taken to safehouse #2 (or similarly walked around and returned) for two consecutive days. Both safehouses are indistinguishable, and you're informed of this procedure.

    On any given day, what would be your credence in the proposition "I am at safehouse #1"? Would it not be P(#1) = 1/3? You could argue that it's 1/2, with the rationale that "getting to guess twice doesn't mean that tails is more likely". But wouldn't this conflate the epistemic perspective of the person making the guess at the time of guessing with the epistemic perspective of the person flipping the coin at the time of flipping?
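    To make the arithmetic concrete, here is a short simulation sketch (the function name and counts are mine): each heads contributes one captivity day in safehouse #1, each tails contributes two days in safehouse #2, and we compute the fraction of all captivity days spent in safehouse #1.

    ```python
    import random

    def credence_in_safehouse_1(total_flips=100_000, seed=0):
        """Fraction of captivity days spent in safehouse #1.

        Each flip: heads -> 1 day in safehouse #1, tails -> 2 days in #2.
        A captive awakening on an arbitrary day of this process should
        match this long-run fraction with his credence in "I am at #1".
        """
        rng = random.Random(seed)
        days_in_1 = total_days = 0
        for _ in range(total_flips):
            if rng.random() < 0.5:   # heads
                days_in_1 += 1
                total_days += 1
            else:                    # tails
                total_days += 2
        return days_in_1 / total_days

    print(credence_in_safehouse_1())  # about 1/3
    ```

    Even though each coin flip is fair, only a third of the captive's days are days on which the most recent flip landed heads.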
  • Sleeping Beauty Problem
    I would say that both are true, but also contradictory. Which reasoning it is proper to apply depends on the manner in which one is involved.

    For the sitter, his involvement is determined by being randomly assigned an interview, and so I think the first reasoning is proper. For the participant, his involvement is determined by tossing a coin 100 times, and so I think the second reasoning is proper.
    Michael

    Let us stick with the normal Sleeping Beauty scenario for now, if you don't mind; I think the lessons drawn will generalize even to your extreme variation, and the analysis is simpler.

    You're arguing that Sleeping Beauty and the sitter can have different, yet warranted, credences of P(T) at 1/2 and 2/3 respectively. Let's consider this: they could both be regular participants in this experiment, occasionally meeting again by chance. Whenever they meet, they might agree to wager $1 on the outcome—with the sitter betting on tails and John Doe betting on heads.

    Under this arrangement, the sitter would expect to win two-thirds of the time, and John Doe would expect to lose two-thirds of the time, correct? However, if John Doe's credence P(T) truly is 1/2, should he not expect to break even in the long run? And if he still expects the outcome to be tails two-thirds of the time, despite his 1/2 credence, would he not refuse to place the wager? His actions would reveal his true expectations about the probability of the outcome.
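    A rough simulation of this wagering arrangement bears the point out (the function name is mine; I assume, as described, a $1 stake at every meeting and one meeting per awakening): John Doe's average winnings per wager come out at about minus a third of a dollar, not zero.

    ```python
    import random

    def johns_net_winnings(runs=100_000, seed=0):
        """John Doe bets $1 on heads at every awakening/meeting.

        Heads -> one awakening (one wager won); tails -> two awakenings
        (two wagers lost). Returns John's average winnings per wager.
        """
        rng = random.Random(seed)
        net = wagers = 0
        for _ in range(runs):
            if rng.random() < 0.5:   # heads: John wins his single wager
                net += 1
                wagers += 1
            else:                    # tails: John loses two wagers
                net -= 2
                wagers += 2
        return net / wagers

    print(johns_net_winnings())  # about -1/3: he loses two-thirds of the wagers
    ```

    Per run the coin is fair, yet per wager John loses twice as often as he wins, which is exactly the discrepancy between the halfer's per-run credence and the per-awakening betting odds.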

Pierre-Normand
