Comments

  • Sleeping Beauty Problem
    The question, then, is whether or not the participant should reason as if they are randomly selected from the set of all participants, and so their credence that they have been or will be woken twice is 1/2. I will continue to say that they should, given the propensities of the experimental setup to place them in the position to be woken twice.
    Michael

    Why should it be either one or the other? One of them expresses a credence that an awakening (the current one) is occurring as a result of a coin having landed tails (E1), while the other expresses a credence that an experimental run (the current one) is occurring as a result of a coin having landed tails (E2). They are credences about different kinds of events (E1 and E2) that have different temporal and spatial distributions. They can both be (and indeed are) true at the same time, and each can be appealed to in order to rationally justify different bets or other actions that depend on such events happening.

    There is a twist that comes from the fact that a biconditional holds between the two propositions "E1 is now occurring" and "E2 is now occurring". How can they therefore have different probabilities of occurrence? This puzzle is solved by attending to the practical implications of establishing effective procedures for verifying their truths, or to the means of exploiting what such truths afford.
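A minimal tally makes the difference between the two event classes concrete. It assumes the standard protocol (fair coin; one awakening on heads, two on tails) and takes the expected 50/50 split of runs as given:

```python
from fractions import Fraction

# Hypothetical tally over 2N runs, split evenly between heads and tails
# (the expected long-run proportions for a fair coin).
N = 1000
heads_runs, tails_runs = N, N
heads_awakenings = heads_runs * 1   # one awakening per heads run
tails_awakenings = tails_runs * 2   # two awakenings per tails run

# E2: the probability that an experimental run is a tails run.
p_run_tails = Fraction(tails_runs, heads_runs + tails_runs)

# E1: the probability that an awakening belongs to a tails run.
p_awakening_tails = Fraction(tails_awakenings,
                             heads_awakenings + tails_awakenings)

print(p_run_tails)        # prints 1/2
print(p_awakening_tails)  # prints 2/3
```

Both numbers are true of the same setup at once; they simply count different event classes (runs versus awakenings).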
  • Sleeping Beauty Problem
    Now suppose she doesn't make any assumptions regarding her room allocation. What then?

    For all she knows, she might have been hand-picked to babysit a "heads" room. In which case the probability that her room is of a heads coin is 1. Due to possibilities such as this, she should

    1) Refrain from committing to a probability that her room is of a heads coin, until further information is given.

    but nevertheless

    2) Deduce that the probability of her room coin landing heads is 1/2, given that it is known information.
    sime

    If that were the case, the problem would transform into a different version of the Sleeping Beauty problem. This altered version would have Sleeping Beauty awakening only once regardless of the coin's outcome, or not being informed about the number of awakenings in either case. Indeed, in such setups, there would be no grounds for adopting a 'thirder' stance. However, in the problem statement in the original post, the information regarding the number of awakenings following the heads and tails outcomes is provided to Sleeping Beauty. Hence, the thirder stance remains relevant.
  • Sleeping Beauty Problem
    Two "tail" rooms refer to the same coin outcome. So when computing the probability that the coin of her room landed heads, she needs to take into consideration the other room she might have occupied.
    sime

    Have you had a look at the setup I had described here to provide a pragmatist account of the thirder/halfer dispute?

    On a particular day, when Sue is assigned to a random room in her role as a sitter, all the rooms are occupied by different participants in the study. Each of their schedules was initially determined by a separate coin. Participants scheduled for two consecutive days of awakening might be there for their first or second day (with equal probabilities). In this case, there is a "tail outcome" either tomorrow or yesterday, which you might wish to label as "the same outcome". However, this is irrelevant to the coin currently hidden under the cardboard on the nightstand next to the participant's bed, about which Sue is expressing her credence today based on the available information.

    So assuming that her supervisors assign her room using a draw from a uniform distribution, her credence that the coin of her room landed heads should be 1/2.

    Why would that be? Two thirds of the rooms occupied by participants on any given day are occupied by individuals who have been scheduled for two awakenings (on two consecutive days) because a coin landed tails.
  • Sleeping Beauty Problem
    How do you suppose that we

    (1) relate the random variables, whose outcomes refer to events in the B series of time, to the indexical "Today" of the A series, whose relationship to B series and to the physics of coin-tosses isn't specified by the problem?
    sime

    This is actually straightforward. Imagine a spatial analogy. The 'B series of space' might be represented by Cartesian coordinates on a surface or by room numbers. The 'A series of space' could correspond to indexical names of locations like "here", or "this room". In the example I gave earlier, Sue Sitter is randomly assigned to a room in the Sleeping Beauty Experimental Facility. There are twice as many rooms allocated to participants who are scheduled for two awakenings (resulting from a coin landing tails), as opposed to one awakening (from a coin landing heads).

    How does Sue deduce that "here" indicates she is in a room occupied by a participant scheduled for two awakenings with a probability of 2/3? She draws this conclusion from the fact that twice as many such rooms exist in the facility compared to rooms allocated to participants with only one scheduled awakening. So, from Sue's perspective (based on the exact same evidence she shares with the participant), she concludes that the coin landed tails with a 2/3 probability, despite the coin having a 1/2 propensity to land tails. Sue's credence that the coin landed tails is a consequence of both the initial propensity of the coin to land tails and the propensities of the experimental setup to place her in a room that corresponds to a tails outcome.

    (2) How should we apply induction to "invert" the causal direction from "Today" to the state of the coin, assuming we have a satisfactory answer to (1) ?

    The causal flow goes from 'Coin toss results' -> 'Distribution of tails and heads assigned rooms' -> 'Evidence of being in a random room' -> 'Sue calculating her credence'. Therefore, there is no causal paradox.
  • Sleeping Beauty Problem
    I think a distinction needs to be made between the probability of making a correct prediction and the frequency of making a correct prediction. That a correct prediction of tails is twice as frequent isn't that a correct prediction of tails is twice as probable – at least according to Bayesian probability.

    Maybe thirders who use betting examples are simply frequentists?

    Perhaps there also needs to be a distinction made between the probability of making a correct prediction and the probability of the coin having landed tails. It could be that the answers are different. This might be especially true for frequentists, as the frequency of correct predictions is not the same as the frequency of coins landing tails (there can be two correct predictions for every one coin that lands tails).
    Michael

    While Bayesians and frequentists might disagree about the meaning of probability assignments to potential outcomes of experiments, with the former asserting that they depend on degrees of belief, and the latter contending that they're based on expected long-run frequencies of such outcomes, the dispute between thirders and halfers seems more closely related to the class of events being predicted. The Bayesian framework is equally well-suited for deriving the thirder result, as long as the probability in question is the chance of finding oneself within a 'tails' experimental run, given the evidence that one is currently awake.
  • Sleeping Beauty Problem
    They do indeed conflate them. Why would SB assign a credence of 1/3 to heads, if she knows

    1) That the physical probability of winning the lottery is 1/2

    2) That amnesia doesn't influence lottery outcomes.

    3) That had she not had amnesia, then she would either know the result of the lottery on waking up, or else refer to its probabilities as being 1/2?
    sime

    Because her credence wasn't meant to reflect the proportion of winning bets but rather the proportion of awakening episodes that occur during an experimental run that results in the (initial) bet being won. Those two aren't the same. If Sleeping Beauty had instead been afforded the opportunity to bet on each awakening episode, then her winning frequencies would exactly match her credence that she is winning that bet.

    Since thirders make no principled distinction between causes and effects, they do indeed encourage avoidance-coping. For by their logic, one should raise one's credences for one's preferred outcomes by avoiding information.

    Neither halfers nor thirders typically make any reference to the potential causal impacts (e.g. winning bets) of them expressing meaningful credences. If they would be more careful about that, they would stipulate what population samples they are talking about when they say "I have credence P that O is true when I am in situation S." In that case, the way in which they choose how to define what constitutes S and O would reflect their interests in predicting the frequency of such Os (e.g. Heads) in such Ss (e.g. awakenings or experimental runs). And then, thirders and halfers would simply stop talking past each other.
  • Sleeping Beauty Problem
    It boils down to the following question. Should she risk a single draw from

    P(Flip = Heads | Today = Monday) = 1/2 (no amnesia)

    Or should she risk a single draw from

    P(Flip = Heads | Today = Monday Or Tuesday) = x (amnesia)

    According to halfers, x = 1/2, implying that she gains nothing by choosing amnesia.

    According to thirders, x = 1/3, indicating that she gains from choosing amnesia.
    sime

    Your suggestion that a thirder expects to gain from choosing amnesia would depend on her misconstruing the nature of her predictions. A thirder, upon each awakening, assigns a credence of P(H) = 1/3 to the coin having landed heads, predicting that, over many awakenings within various runs of the experiment, the coin will land 'tails' 2/3 of the time. These are predictions about the frequency of outcomes across multiple awakenings, not about the specific outcome of the coin toss in a particular awakening.

    The initial bet, as you stipulated, is made prior to the start of the experiment, not at each waking event. As such, the payout of $1 is tied to the experimental run as a whole, not to each individual awakening. While a thirder may correctly predict the frequency of 'tails' outcomes across awakenings, these predictions do not increase the actual winnings from the initial bet.

    In sum, while the amnesia might increase the frequency of correct predictions, it does not lead to a higher payout. This is because only one payout occurs per experimental run, even when Sleeping Beauty correctly predicts a 'tails' outcome twice within a run. A thirder, fully informed about the conditions of the experiment and the nature of her predictions, would be aware of this.
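A short Monte Carlo sketch can illustrate the gap between prediction frequency and payout. The stake, the once-per-run even-money bet on tails, and the simulation parameters are all assumptions for illustration:

```python
import random

rng = random.Random(0)
runs = 100_000

awakenings = 0
correct_predictions = 0   # per-awakening tally (the thirder's frequency)
net_payout = 0            # per-run tally: one $1 even-money bet on tails

for _ in range(runs):
    tails = rng.random() < 0.5
    days = 2 if tails else 1
    awakenings += days
    # On every awakening, Beauty predicts 'tails'.
    if tails:
        correct_predictions += days
    # But the bet pays out once per run, win or lose.
    net_payout += 1 if tails else -1

freq_correct = correct_predictions / awakenings   # close to 2/3
avg_payout_per_run = net_payout / runs            # close to 0
```

Amnesia doubles the frequency of correct 'tails' predictions without moving the average payout per run away from zero, since only one payout occurs per experimental run.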
  • Philosophical game with ChatGPT
    I was hoping for an actual adventure world that I could explore that somehow incorporates philosophical puzzles, I'll try to coax that out next time, though it would be crazy if it could actually pull that off.
    hypericin

    Yes, your attempt turned out to be interactive but not very exploratory. I did an experiment with GPT-4 where I achieved the opposite result. I had intended to make it more interactive at a later stage. Some people have had success using GPT-3.5/4 as a Dungeon Master for playing solo D&D games.
  • Sleeping Beauty Problem
    Follow-up to my discussion with GPT-4 about the pragmatist interpretation of the Sleeping Beauty problem:

    PN:

    As I reflected on this thought experiment while taking a shower (always a propitious moment for philosophizing), I noticed a flaw in my initial conception of it. Due to the way Aunt Betsy and Sue's meeting is arranged, the mere occurrence of the meeting allows Sue to update her credence P('six') to match Aunt Betsy's. Can you see why?

    GPT-4:

    Yes, I see the point you're making. The scheduling of the meeting between Aunt Betsy and Sue is tied to a participant's awakening, which in turn is tied to the outcome of the die roll. This means that the occurrence of the meeting itself does provide Sue with information relevant to her credence P('six').

    Specifically, since the meeting occurs on a randomly chosen day when the participant is scheduled to awaken, the very fact that the meeting is taking place tells Sue that the participant has indeed awakened at least once. Given that the chance of the participant awakening at all is higher if the die landed on 'six', this information should increase Sue's credence that the die landed on 'six'. However, because the die is rolled before the participant's sleep schedule is set, and the meeting is scheduled after this, the information doesn't completely determine the outcome of the die roll. Sue's credence should therefore align more closely with Aunt Betsy's, but it won't necessarily match it exactly.

    PN:

    I think you are now (uncharacteristically) not entirely on track. The reason is that when Sue requests a meeting with (one of) Jane's assigned sitter(s): if the die didn't land on 'six', the unique sitter assigned to Jane will be booked for the meeting; otherwise, one of the six sitters assigned to Jane will be booked at random. Upon learning that she was booked, Sue's credence that she was the only sitter (rather than one among six) assigned to Jane increases.

    GPT-4:

    You're right, I misunderstood the key aspect of your thought experiment. When Aunt Betsy schedules a meeting, one of two scenarios will happen:

    If the die did not land on 'six,' one sitter was assigned to Jane, and that sitter will be booked for the meeting.

    If the die landed on 'six,' six sitters were assigned to Jane, and one of them will be randomly chosen for the meeting.

    Given this setup, when Sue is selected for the meeting, she gains new information: she now knows that she is either the sole sitter assigned to Jane (if the die did not land on 'six') or one among six (if the die landed on 'six').

    The fact that Sue was selected for the meeting therefore updates her credence in the proposition that she is the only sitter assigned to Jane. This is because being chosen for the meeting is more likely in the scenario where there is only one sitter (where the die did not land on 'six') compared to the scenario where there are six sitters (where the die landed on 'six').

    I apologize for the confusion, and I appreciate your patience in clarifying your thought experiment.

    PN:

    No trouble! I think we can easily use Bayesian updating to show that Sue's credence in P(not 'six') thereby increases from 5/11 to 5/6.

    GPT-4:

    Yes, you're correct. Sue can use Bayesian updating to adjust her beliefs based on the new information she has received.

    The Bayes' theorem formula is as follows:

    P(A|B) = [P(B|A) * P(A)] / P(B)

    In this case, we can interpret:

    A as the event "the die did not land on 'six'"
    B as the event "Sue is selected for the meeting"
    Before the meeting, Sue's prior belief, P(A), is 5/11, reflecting the proportion of occupied rooms whose participants wake up only once.

    P(B|A) is 1, because if the die did not land on 'six', Sue is the only sitter assigned to Jane and will definitely be selected for the meeting.

    P(B) is the total probability that Sue is selected for the meeting, which is the sum of the probabilities of her being selected under both possible outcomes of the die roll. This is (5/11 * 1) + (6/11 * 1/6) = 5/11 + 1/11 = 6/11.

    Plugging these values into the Bayes' theorem formula gives us:

    P(A|B) = [1 * (5/11)] / (6/11) = 5/6

    So, after learning that she has been selected for the meeting, Sue's updated belief in the proposition that the die did not land on 'six' increases to 5/6.

    PN:

    You got it right on the first try, congrats!

    GPT-4:

    Thank you! I'm glad the explanation met your expectations. Bayesian reasoning can be quite intricate but it's a powerful tool for understanding how to update our beliefs in light of new information. If you have any more questions or thought experiments, feel free to share!

    PN:

    I just want to note that this result does not invalidate the 'pragmatist' lesson from the thought experiment. It's still the case that if they were to bet on their credences prior to meeting, both Aunt Betsy and Sue would make money in the long run. Furthermore, after the meeting, Sue would alter her betting behavior and make even more money (winning five times out of six by betting on the opposite outcome). But in this case, she would only have the opportunity to make this more profitable bet when she is booked for a meeting, so her proportions of winnings (and hence, her credence) would reflect differently sampled populations.

    GPT-4:

    Absolutely, you've hit the nail on the head. While Sue updates her credence after the meeting, this doesn't undermine the pragmatist interpretation that emerged earlier in the thought experiment. The betting behavior of both Aunt Betsy and Sue is still rational given their respective information and they would both make money in the long run if they were to bet on their initial credences.

    Furthermore, you're correct in noting that Sue's updated credence and altered betting behavior would indeed lead to more profitable outcomes for her in the instances where she is booked for a meeting. However, these instances represent a different sampling of the population (only when she is booked for a meeting), reinforcing the point that different agents with different information and sampling conditions can rationally hold different credences about the same event.

    This is a very insightful point and underscores the richness and complexity of the pragmatist interpretation of the Sleeping Beauty problem and its variations. It's a good reminder that our beliefs and predictions about the world should take into account not only the available information but also the ways in which we are sampling from the world.
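    The Bayesian update worked through in the exchange above can be checked mechanically with exact fractions (the event labels follow the dialogue's own A and B):

```python
from fractions import Fraction

# A: the die did not land 'six'; B: Sue is booked for the meeting.
p_A = Fraction(5, 11)             # prior: proportion of single-awakening rooms
p_B_given_A = Fraction(1)         # the sole sitter is booked for certain
p_B_given_not_A = Fraction(1, 6)  # one of six sitters is picked at random

# Total probability of being booked, then Bayes' theorem.
p_B = p_B_given_A * p_A + p_B_given_not_A * (1 - p_A)
p_A_given_B = p_B_given_A * p_A / p_B

print(p_B)          # prints 6/11
print(p_A_given_B)  # prints 5/6
```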
  • Sleeping Beauty Problem
    I've refined further @Michael's 'less-extreme' variation of the Sleeping Beauty problem in a way that highlights a pragmatist interpretation of the halfer/thirder dispute. Here it is with some comments by GPT-4:

    Hi GPT-4,

    Here is a new variation on the Sleeping Beauty problem that I imagined with the goal of highlighting a pragmatist interpretation of it.

    Sue Sitter has recently joined the team of sitters at the Sleeping Beauty Experimental Facility. In her role, like other sitters, she's responsible for attending to a participant on one single day during one of their experimental sessions. The nature of these sessions depends on a roll of a die: if the die lands on 'six,' the participant will be woken and interviewed six times over six days; otherwise, they will only wake once for an interview on the first day. As a sitter, Sue is assigned to a room randomly, without knowing whether the participant inside is scheduled for a single awakening or one of six.

    In the experimental facility, on average, 6/11 of the occupied rooms hold participants waking six times (due to a die having landed 'six') and 5/11 hold participants waking only once (due to a die not having landed 'six'). Now suppose Sue had access to the participant registry where all active participants are listed. She spotted the name 'Jane Doe.' What are the chances that Jane will be woken up six times? Sue's credence is 1/6, and this would also be the case for Jane Doe's credence before she undergoes the first sleep session.

    Now, suppose that by a stroke of luck, Sue is assigned to Jane Doe on that particular day. Her job is to measure her vitals as she awakens and get her breakfast as she waits for the interview. Sue arrives in Jane's room and waits for her to wake up. What are the chances that the die landed on 'six', indicating that this would be one of six awakenings rather than the only one?

    Once Sue has been assigned to Jane Doe, her credence P('six') in this proposition should be updated from 1/6 to 6/11. This is because Sue was randomly assigned a room and her credence must now reflect the proportion of participants who wake up six times to the total number of participants who occupy the facility on that day. It can be argued that when Jane Doe awakens, her credence P('six') then aligns with Sue's because both their credences are targeting the exact same proposition, and both of them have the same epistemic access to it. But I'm not going to argue for this just now.

    From the point of view of Aunt Betsy (Jane Doe's aunt), who visits the facility and wants to see Jane, there is a 5/6 chance that Jane's die didn't land on 'six' and that she is being woken up only once, whereas from Sue's point of view there is a 6/11 chance that the die landed on 'six' and that Jane is today experiencing one in a sequence of six awakenings.

    Suppose that when a visitor like Aunt Betsy is allowed to come to the facility to inquire about a participant, they get to meet at the cafeteria with the participant's assigned sitter, who informs them of the participant's wellbeing. Aunt Betsy and Sue Sitter then get to discuss their respective (and inconsistent) credences regarding the die throw that determined Jane's number of awakenings. Will either Aunt Betsy or Sue find some ground to update their credences?

    (Note that the circumstances of Aunt Betsy's meeting with Sue Sitter are somewhat underspecified.)

    GPT-4:

    This is a fascinating variation on the Sleeping Beauty problem! It certainly introduces an additional layer of complexity with the inclusion of Sue Sitter and Aunt Betsy, each with their own perspectives and credences.

    The original Sleeping Beauty problem is interesting because it's a question about how to update beliefs when you learn no new ordinary information, but only learn that it is now. This is often called "self-locating belief." Here, you have a situation where both Sue Sitter and Aunt Betsy have different self-locating beliefs about the situation, but also have different access to information about the experiment.

    Let's first clarify their perspectives:

    Sue Sitter: She knows the overall proportions of the experiment, so her credence that Jane Doe will wake six times is 6/11, given that she was randomly assigned a room.

    Aunt Betsy: She doesn't know the overall proportions of the experiment. She just knows that Jane Doe has been put into the experiment and that a die roll of six means Jane will wake up six times. So, her credence that Jane Doe will wake six times is 1/6.

    When Sue Sitter and Aunt Betsy meet, they have an opportunity to share their perspectives and possibly update their credences.

    The key question here is: do they have a reason to trust each other's perspective? Is there a reason why Sue should trust Aunt Betsy's credence of 1/6, or why Aunt Betsy should trust Sue's credence of 6/11?

    In a sense, they're operating under different contexts and different sets of information. Sue's context is the whole experimental facility where she knows the average proportions, while Aunt Betsy's context is just about Jane Doe and the die roll that determines her awakenings.

    So, when they meet and share their perspectives, they might be able to adjust their credences. But how they do so depends on what they learn from each other, their trust in each other's knowledge, and how they interpret the information. This is an open question in the philosophy of probability and epistemology, and different theories might suggest different ways of updating their beliefs.

    The pragmatist interpretation you've proposed seems to suggest that Sue Sitter and Aunt Betsy should stick to their own credences, because those credences are based on their own perspectives and contexts. But they could also potentially learn from each other and update their credences in light of new information.

    The exact credence updates would depend on various factors, including their prior beliefs about the reliability of their own and each other's information sources, the clarity with which they understand each other's perspectives, and their understanding of the underlying mechanics of the experiment. The problem does not provide a clear-cut, definitive answer to this question, illustrating the complexity and ambiguity inherent in the philosophy of probability and belief updating.

    PN:

    Let us start with the assumption that both Sue and Aunt Betsy are fully informed about the experimental setup and have complete trust in each other, although they may interpret their respective credences differently. Visitors like Aunt Betsy can arrange a visit if they notify the facility before the participant begins the experiment and starts their sleep schedule. Once the participant's sleep schedule is set, the facility manager organizes a meeting between the visitor and an assigned sitter. This meeting is scheduled for a randomly chosen day when the participant is due to awaken under the watch of this sitter. Importantly, the circumstances surrounding this meeting are designed in such a way that neither Sue nor Aunt Betsy receives any new information that might lead them to adjust their beliefs about the outcome of the die roll.

    GPT-4:

    Under these additional clarifications, it seems that both Sue Sitter and Aunt Betsy have complete knowledge of the experimental setup and trust each other's report. They also have no reason to believe that their meeting provides additional information about Jane Doe's situation. This makes the question of whether they should update their credences even more interesting.

    In this case, they might consider the following:

    Aunt Betsy could reason that Sue Sitter, being an employee of the facility, has a more informed perspective on the overall proportions of the experiment. Therefore, she might consider updating her credence to align with Sue's, which is 6/11.

    Sue Sitter could consider Aunt Betsy's credence, but since Sue's perspective is based on the overall proportions of the experiment, she might not find a compelling reason to change her credence based on Aunt Betsy's belief.

    What's interesting about this situation is that even though they both have complete knowledge of the experimental setup, their credences are based on their specific roles and perspectives within the experiment.

    While it's tempting to think that they should align their credences, it's not necessarily clear that they should. This highlights how the concept of "credence" in probability theory can be subjective and context-dependent, even when all parties have full knowledge of the underlying facts.

    Sue's credence of 6/11 is based on her perspective as a sitter, who knows the overall proportions of the experiment. Aunt Betsy's credence of 1/6 is based on her perspective as an aunt, who knows the die roll determines Jane Doe's awakenings.

    The pragmatist interpretation might suggest that each should stick to their own credence, as it is the most rational belief given their specific perspective and role. But other interpretations might suggest that they should try to align their beliefs in light of their shared knowledge and trust in each other.

    This scenario serves as an interesting case study in the philosophy of probability, illustrating how different perspectives can lead to different credences even with full knowledge of the facts.

    PN:

    Suppose that whenever a sitter and a visitor meet under such circumstances, they are allowed to put their money where their mouth is and make an even money bet on the outcome (i.e. on 'six' or 'not six'; or, equivalently, on six awakenings or one single awakening). Aunt Betsy would bet on 'not six' and expect to win five out of six times. Sue Sitter would bet on 'six' and expect to win six out of eleven times. And so would they both! How is that possible? (I already know the answer, but that's a challenge for you).

    GPT-4:

    The situation you've described is certainly intriguing. It seems paradoxical because Sue and Aunt Betsy have different credences about the same event and yet, given their respective perspectives, they both expect to win more often than not. This paradox arises from the fact that the event being wagered on ("six" or "not six") occurs within different sampling populations for Sue and Aunt Betsy.

    Aunt Betsy is betting on the outcome of a single, specific experiment involving Jane Doe, where the die is rolled once. She knows that in this single experiment, the chance of getting a six is 1/6, and the chance of not getting a six is 5/6. So, she expects to win her bet 5 out of 6 times.

    On the other hand, Sue Sitter is not betting on a single, specific experiment. Instead, she's betting on the outcome of a random sampling from all the experimental sessions taking place in the facility. In this larger population of experimental sessions, 6/11 of the sessions will involve six awakenings (a "six" outcome), and 5/11 will involve one awakening (a "not six" outcome). So, she expects to win her bet 6 out of 11 times.

    This apparent paradox is resolved when we realize that Sue and Aunt Betsy are not actually betting on the same event, despite appearances. Aunt Betsy is betting on the outcome of a specific experiment (Jane Doe's), while Sue is betting on the outcome of a random sampling from all experiments. The event they're betting on has a different probability in these two different contexts.

    This example demonstrates an important principle in probability theory: the probability of an event can depend on the context or sampling population in which it's considered. It also illustrates how different perspectives can lead to different assessments of probability, even when all parties have full knowledge of the underlying facts.

    PN:

    Your explanation is generally accurate, but it might be helpful to clarify that Aunt Betsy's and Sue's chances of 'winning' more often, despite betting on complementary outcomes, are based on their own individual long-term frequencies of winning. If they repeat their respective 'experiments' many times, Sue will interact with different participants' acquaintances, and Aunt Betsy will meet different sitters. Consequently, they won't both win more often against each other – a scenario that would indeed constitute a logical impossibility.

    GPT-4:

    Absolutely, your clarification is spot on. The betting scenario, as you've described it, involves many repetitions of the experiment. In the long run, both Sue Sitter and Aunt Betsy are betting in multiple rounds, with different people, based on their respective beliefs about the outcome of the experiment.

    Aunt Betsy, when betting with different sitters across different experiments, will expect to win 5 out of 6 times because she is considering each experiment as a single event where a die is rolled once.

    On the other hand, Sue Sitter, when betting with different visitors across different experiments, will expect to win 6 out of 11 times because she is considering the overall proportions of the experiment and the distribution of awakenings in the facility.

    They are not directly betting against each other over and over again. If they were, then it would indeed be a logical impossibility for both to win more often than they lose, since they're betting on complementary outcomes.

    This scenario is an interesting demonstration of how different perspectives can lead to different probabilities, and how those probabilities are borne out in the long term through repeated trials. It also shows how the same event can be perceived differently depending on the context and the observer's perspective.
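    The dual betting claim discussed above lends itself to a quick numerical check. The sketch below is illustrative only: the participant counts, the random seed, and the sampling scheme are assumptions, with Sue's daily assignment modeled as a uniform draw over occupied room-days and each of Aunt Betsy's visits concerning a fresh participant:

```python
import random
from fractions import Fraction

# Steady state: a 'six' participant occupies a room for six days, anyone
# else for one day, so occupied room-days split 1 : 5/6, i.e. 6/11 vs 5/11.
p_room_six = (Fraction(1, 6) * 6) / (Fraction(1, 6) * 6 + Fraction(5, 6) * 1)
print(p_room_six)  # prints 6/11

rng = random.Random(1)

# Build a pool of occupied room-day slots from many independent participants.
room_days = []
for _ in range(200_000):
    six = rng.randrange(6) == 5              # die lands 'six' with chance 1/6
    room_days.extend([six] * (6 if six else 1))

trials = 100_000
# Sue bets 'six'; each day she is assigned a uniformly drawn occupied slot.
sue_rate = sum(rng.choice(room_days) for _ in range(trials)) / trials

# Aunt Betsy bets 'not six'; each of her visits concerns a fresh participant.
betsy_rate = sum(rng.randrange(6) != 5 for _ in range(trials)) / trials
```

Both win rates exceed 1/2 (about 6/11 for Sue and 5/6 for Aunt Betsy) because the two bettors sample different populations; they are not repeatedly betting against each other.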
  • Sleeping Beauty Problem
    According to thirders, if she has amnesia then

    P(Tails and Tuesday) = P(Tails and Monday) = P(Heads and Monday) = 1/3
    sime

    More precisely, the thirder view could be expressed as the claim that whenever Sleeping Beauty awakens, from her epistemic perspective, P(today is Tuesday and the coin landed tails) = P(today is Monday and the coin landed tails) = P(today is Monday and the coin landed heads) = 1/3

    So, according to thirders her probability of losing $1 when awakened on Monday in a state of amnesia is 1/3.

    1/3 would be her prior upon awakening and before being informed that the day is Monday. Upon being informed of this, Sleeping Beauty would update her credence to 1/2.
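
    These per-awakening frequencies, and the update to 1/2 upon learning that it is Monday, can be checked with a quick Monte Carlo sketch (illustrative code, not part of the original exchange; labels and counts are my own):

```python
import random

# Simulate many runs of the experiment and tally how often each
# (day, coin) pair occurs among Sleeping Beauty's awakenings.
random.seed(0)
counts = {"Mon&Heads": 0, "Mon&Tails": 0, "Tue&Tails": 0}
for _ in range(100_000):
    if random.random() < 0.5:          # heads: a single Monday awakening
        counts["Mon&Heads"] += 1
    else:                              # tails: awakened Monday and Tuesday
        counts["Mon&Tails"] += 1
        counts["Tue&Tails"] += 1

total = sum(counts.values())
for k, v in counts.items():
    print(k, round(v / total, 3))      # each ratio is close to 1/3

# Conditioning on "today is Monday" restores 1/2 for heads:
monday = counts["Mon&Heads"] + counts["Mon&Tails"]
print(round(counts["Mon&Heads"] / monday, 3))  # close to 1/2
```

    Each of the three cells occupies roughly a third of the awakenings, while among the Monday awakenings heads and tails are even, which is the update described above.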
  • Sleeping Beauty Problem
    Even the word "state" feels too coarse for Sleeping Beauty, since it could denote the situation a robust well-defined subject finds themselves in, or it could denote the very identity of that subject. --- At least, that's how the two main camps look to me. One wonders, where am I? how did I get here? One wonders, what am I? what has made me into this?

    As you say, it's all about individuation. Lacking a fixed point of individuation, you can push the lever however you like but you won't actually move anything.
    Srap Tasmaner

    I agree. I view the states at issue to be irreducibly epistemic states of agents. They are, in other words, relational states defined pragmatically. So, when expressing their credences, thirders and halfers had better properly specify the intended relata of this relation. If credences are explicitly stated as ratios, then one must likewise attend to the way in which the two things being divided are individuated. I'm working on a variation of the Sleeping Beauty problem in which all the options for individuating the relevant relata are laid out so that it becomes clear when halfers and thirders are talking past each other, and when it is that they are led astray through being inconsistent with their individuating assumptions.
  • Sleeping Beauty Problem
    So thanks for the enlightening discussion.Michael

    Thanks to you! I'll likely revisit some of your most recent objections just for the sake of clarifying my position (and refining it if need be). Of course, I'd be delighted if you'd chime in again whenever you feel like it.
  • Sleeping Beauty Problem
    Let P(Flip | Today ) represent the conditional probability of the state of coin, relative to the value of SB's indexical "Today". Then

    2/3 = P(Flip = Tails | Today = Mon Or Tue) = P(Flip = Tails) = 1/2

    What is wrong with this conclusion of inconsistency?
    sime

    From Sleeping Beauty's perspective, the conditional probability of the coin flip being tails upon her awakening isn't conditioned on 'today' being either Monday or Tuesday, but rather on her actual state of being awakened 'today'. In other words, she conditions this probability on her being in a narrow (day long rather than experimental-run long) centered possible world in which she is awakened. It's this conditioning that yields her updating her prior 1/2 to the posterior 2/3.
  • Sleeping Beauty Problem
    The issue with the SB problem, is whether to consider the two potential wakings as the same experiment, or different ones. This version resolves that. SB knows that when the researchers looked at the coins, there are four possible arrangements with probability 1/4 each: {HH, HT, TH, TT}. She also knows that, since she is awake, HH is eliminated. She can update her beliefs in the other three to 1/3 each.JeffJo

    I love your variation! However, your conclusion may be a bit premature as halfers can counter it using the same argument with which they counter the thirder position in Elga's setup. Allow me to break down the possible scenarios considering the initial flip results, the rearrangement of the second coin, and the resultant awakening sequence:

    TT -> TH -> two awakenings (ww)
    TH -> TT -> two awakenings (ww)
    HT -> HH -> waking then sleeping (ws)
    HH -> HT -> sleeping then waking (sw)

    Given these four possible experimental runs following the four possible initial coin flip results, we find that when Sleeping Beauty awakens, she can certainly rule out HH as the current state of the two coins during that specific awakening episode. However, this does not eliminate the possibility of being in either of the last two experimental runs (in addition to, of course, either of the first two).

    She could be waking up due to the initial coin flip (which is consistent with run-1, run-2, or run-3), or she could be waking up as a result of the rearrangement of the second coin (run-1, run-2, or run-4). As all these runs are still consistent with her experience, halfers might argue that Sleeping Beauty has gained no new information upon awakening and that these four possible runs remain equiprobable.
  • Sleeping Beauty Problem
    What is wrong with this conclusion of inconsistency?sime

    I'm going to respond tomorrow since I'm now nearing the end of my current period of awakening. Meanwhile, I want to post this answer GPT-4 gave—on behalf of the halfer this time—to a challenge I had imagined. GPT-4 still has the ability to knock my socks off!

    PN:

    Yesterday I was thinking about the halfer's stance that, upon awakening, Sleeping Beauty's credence in the coin having landed heads should be 1/2, even while acknowledging that when the coin lands tails she has two opportunities to guess "tails", and hence that a thirder Sleeping Beauty would guess right two thirds of the time when betting on tails (and would thereby be more willing to bet on tails in order to win more bets). This led me to the following challenge. We suppose that on each awakening a second fair coin also is tossed. The second coin toss is independent and, unlike the first, doesn't impact how the experiment is run. If Sleeping Beauty is more willing to bet on the first coin having landed tails than on the second one, does that not imply that the second fair coin now has a less than 1/2 chance of having landed tails?

    GPT-4:

    Your challenge indeed illustrates a fascinating aspect of this problem. Here is a possible way to respond:

    As a halfer, Sleeping Beauty would be more willing to bet on the first coin landing tails, not because she believes it's more likely to land tails, but because there are two opportunities to be correct when betting on tails as opposed to one opportunity when betting on heads. However, this should not influence her betting behavior regarding the second coin, which is independent of the first and does not affect the experiment's procedure.

    The crucial point here is that the additional awakenings under the tails outcome for the first coin change the betting odds in favor of tails, but they do not change the underlying physical probabilities of the coin flip, which remain at 1/2 for both heads and tails.

    For the second coin, which is not linked to the experiment's procedure and leads to only one awakening regardless of its outcome, there are no such changes in the betting odds, and the physical probabilities remain the only relevant factor for Sleeping Beauty's bets. Therefore, Sleeping Beauty would remain indifferent between betting on heads or tails for the second coin, considering both outcomes equally likely.

    In this scenario, the credence Sleeping Beauty assigns to the first coin's outcome upon awakening (1/2 for heads and tails according to the halfer perspective) reflects the physical probabilities of the coin flip, while her willingness to bet on tails rather than heads reflects the altered betting odds due to the additional awakenings under the tails outcome. Conversely, both her credence and her willingness to bet for the second coin directly reflect the physical probabilities, as there are no additional factors affecting the betting odds.
  • Sleeping Beauty Problem
    If a PoI advocate assigns the credence

    P(Day = mon, Flip = tails ) = P(Day = mon , Flip = heads) = P(Day = tue, Flip = tails) = 1/3

    Then his unconditioned credence that it is Monday is

    P(Day = mon) = 2/3

    which is clearly not the expression of indifference.
    sime

    This assignment is an expression of pairwise indifference between the three possible awakening circumstances. But rather than relying on the Principle of Indifference, I proposed to assign credences on the basis of the odds Sleeping Beauty should rationally be willing to accept when betting on those outcomes. The problem's original specification is already precise enough to ensure that if Sleeping Beauty accepts 2 to 1 odds on bets on any of these three outcomes upon awakening, she will break even in the long run.

    Then for sake of consistency he must ignore the actual information he is given about the coin, by using

    P(Flip = tails) = 2/3

    There's no inconsistency here. It's precisely because the coin is equally likely to land heads or tails when tossed, and because tails results in twice as many awakenings, that Sleeping Beauty deduces that she could accept 2 to 1 odds on the outcome 'Flip = heads' at each awakening and break even over the long run.
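
    This break-even claim is easy to verify numerically (a sketch of my own, with illustrative $1 stakes and a 2-to-1 payoff on heads):

```python
import random

# Bet $1 on heads at 2:1 odds on every awakening: win $2 if the run's
# coin landed heads, lose the $1 stake otherwise.
random.seed(1)
balance = 0
awakenings = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    n_awakenings = 1 if heads else 2   # tails doubles the awakenings
    for _ in range(n_awakenings):
        awakenings += 1
        balance += 2 if heads else -1

print(round(balance / awakenings, 3))  # close to 0: she breaks even
```

    The per-awakening expected payoff is (1/3)(+2) + (2/3)(-1) = 0, which is exactly what makes 2 to 1 the fair odds from her awakening-centered perspective.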
  • Sleeping Beauty Problem
    The halver's position is based on the common-sense postulate that the physical properties and propensities of objects remain unchanged when their states are conditioned upon irrelevant information. Given acceptance of that postulate plus the premise of a fair coin, why shouldn't the halver insist that they are talking about reality, as opposed to their mere mental state?

    The thirder's position is indeed a ratio of possible words, but there is scant evidence to support the idea that credences are accurately represented by taking ratios over possible worlds.
    sime

    Taking ratios over possible worlds is a vivid way to illustrate the issue at hand when it comes to Bayesian updating of credences in light of new information. Regarding credences, in well-defined problems like Sleeping Beauty, as Pradeep Mutalik suggested in this column (in the comments section), "if you have 'a degree of certainty' of 1/n then you should be willing to accept a bet that offers you n or more dollars for every dollar you bet."

    I would challenge the idea that Sleeping Beauty's judgement about the coin flip is merely a judgement about the propensities of that coin. Rather, it's a judgement about her own epistemic relation to the result of the flip. Or you could argue it's a judgement about the propensity of the whole experimental setup to put her in relationship with specific coin flip results. This echoes Werner Heisenberg's comment: "What we observe is not nature in itself but nature exposed to our method of questioning." I believe this to be true generally, not just in the context of quantum theory.

    Upon awakening, Sleeping Beauty does gain information about the coin flip result. She knows that (from her own perspective) "if today is Monday, then heads and tails are equally likely." She also learns that "if today is Tuesday, then the coin landed tails." Since she can't rule out "today" being Tuesday, her credence (and willingness to bet) on a tails outcome should be higher than 1/2. "Today" being an indexical, the knowledge expressed pertains to her own situated epistemic perspective (as well as her 'pragmatic' perspective, or affordances to make bets or predictions). But this perspective only is "subjective" in the sense that it is suitable for capturing probabilities, or frequencies, relevant to her personal involvement in the experiment.
  • Sleeping Beauty Problem
    Two Envelopes seems to encourage abuse of the principle of indifference in exactly this way. Maybe it's just something like this: rationality requires treating "It's one or the other but I've literally no idea which" as an uninformative 50:50 prior only when there's the real possibility of acquiring new information upon which to update that prior. I'd rather just say, no, don't do that, "I don't know" doesn't mean "It's 50:50", [...]Srap Tasmaner

    I think there are good grounds for Sleeping Beauty, upon awakening, to ascribe strictly equal probabilities P = 1/3 to the three cases: (1) today is Monday and the coin landed tails, (2) today is Monday and the coin landed heads, and (3) today is Tuesday and the coin landed tails, rather than simply claiming ignorance. This doesn't involve any abuse of the principle of indifference so far as I can see. Rather, Sleeping Beauty can deduce that those three outcomes have equal probabilities from the statement of the problem, or experimental setup. Suppose that upon awakening she would always bet on the same one of the three outcomes, chosen in advance. In that case, if the whole experiment is run many times, she'd be right on roughly 1/3 of the awakenings on which she makes this bet, and this holds regardless of which of the three outcomes she chooses to always bet on. Halfers have an obvious rejoinder to this line of thinking, but I just wanted to make this point for now.
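
    That the hit rate is 1/3 whichever fixed outcome she bets on can be checked directly (an illustrative sketch; the function and labels are mine):

```python
import random

# Always bet on one fixed (day, coin) outcome at every awakening and
# measure the fraction of awakenings on which the bet is correct.
random.seed(2)

def hit_rate(target, n_runs=100_000):
    hits = awakenings = 0
    for _ in range(n_runs):
        coin = "heads" if random.random() < 0.5 else "tails"
        days = ["Monday"] if coin == "heads" else ["Monday", "Tuesday"]
        for day in days:
            awakenings += 1
            hits += (day, coin) == target
    return hits / awakenings

for target in [("Monday", "heads"), ("Monday", "tails"), ("Tuesday", "tails")]:
    print(target, round(hit_rate(target), 3))   # each close to 1/3
```
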
  • Sleeping Beauty Problem
    Rather than getting back into the nitty-gritty, I'm thinking about the stuff I posted a while back, the possible self slices and all that.Srap Tasmaner

    I think this observation is very astute. As I am developing a pragmatist conception of credence, I am trying to move away from the idea that one's credence in the state H is entirely determined by the specification of the ways in which one can come to be in that state. The Sleeping Beauty problem completely specifies how (and in which circumstances) Sleeping Beauty can come to be in state H, however one wants to define what counts as "being in state H". But it fails to specify how one might want to individuate such states (or 'self slices') so as to determine the relevant frequency or probability ratios that we ordinarily express as credences.

    Whenever Sleeping Beauty is awakening during a H experimental run, she is awakening during a H day, and vice versa. But there isn't a one to one relationship between the two sets of 'events' (i.e. between the two sets of centered possible worlds) once they have been individuated.

    The relationship is one to one between types but not between tokens, we might say.

    In the Sleeping Beauty problem the equivalence relation is:

    H-run <--> H&Monday (H-day)
    T-run <--> T&Monday or T&Tuesday (T-day)

    The ratio considered by the thirders is the ratio of H-days per day long centered possible worlds (or individual awakenings). It is indeed 1/3. The ratio considered by the halfers is the ratio of H-runs per run-long centered possible worlds. It is indeed 1/2. The credences expressed by thirders and by halfers therefore have different topics.
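
    Both ratios can be computed from the very same simulated runs, which is a vivid way of seeing that the two credences have different topics rather than contradicting one another (a sketch of my own):

```python
import random

# From identical runs: H-days per awakening day (thirder's ratio)
# versus H-runs per experimental run (halfer's ratio).
random.seed(3)
h_runs = runs = h_days = days = 0
for _ in range(100_000):
    runs += 1
    heads = random.random() < 0.5
    h_runs += heads
    n_days = 1 if heads else 2   # one awakening day if heads, two if tails
    days += n_days
    if heads:
        h_days += 1              # a heads run contributes a single H-day

print(round(h_days / days, 3))   # close to 1/3
print(round(h_runs / runs, 3))   # close to 1/2
```
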
  • Sleeping Beauty Problem
    So then there are two different ways to reason with nothing to prove that one or the other is the "right" way?Michael

    That's correct, since events that happen in the world don't come flagged with signposts that say "the current event begins here" and "the current event terminates here." How credences in the probabilities of events are assessed depends on the way those events are individuated, and this can be dictated by pragmatic considerations.
  • Sleeping Beauty Problem
    And that first question is the premise of the problem. Sleeping Beauty is asked her credence that the coin landed heads. That's it. She's not being asked to consider the most profitable betting strategy for multiple games.Michael

    The state "the coin landed heads" is ambiguous with respect to the span of the centered possible world that it might be referring to. I think you would benefit from reading Groisman.
  • Sleeping Beauty Problem
    5 out of every 6 victims escape. I count by participants, not by escape attempts. I think it's more reasonable.Michael

    Since on my approach probabilities track frequencies, even if there is just one kidnapping event, the hostage's chances of survival are 5 in 11 whenever an escape attempt occurs.
  • Sleeping Beauty Problem
    She can and should use known priors to condition her credence, and one such prior is that she is more likely to win a prize/have the opportunity to escape if tails/a dice roll of 6. As such, if she wins a prize or has the opportunity to escape she should condition on this and her credence should favour tails/a dice roll of 6, otherwise her she should condition on not winning a prize or having the opportunity to escape and her credence should favour heads//a dice roll of 1-5.Michael

    She can't condition her credence on H by winning a prize if she only learns about her victory after the experiment ends. The same applies to her opportunity to bet — this fact is probabilistically independent of H and thus doesn't allow for an update in credence. Would you still argue that your belief that the movie you're watching is part of a double feature increases the moment you see Rex Harrison appearing on screen? Or that it increases even further when you notice he isn't wearing a top hat, or when he is?

    And if she's guaranteed the opportunity to escape each day her credence should favour a dice roll of 1-5.

    So, she should be carrying a plank and end up being eaten by lions on 6 out of 11 escape attempts?

    But honestly, all this talk of successes is irrelevant anyway. As I said before, these are two different things:

    1. Sleeping Beauty's credence that the coin tossed for the current, one-off, experiment landed heads
    2. Sleeping Beauty's most profitable strategy for guessing if being asked to guess on heads or tails over multiple games

    It's simply a non sequitur to argue that if "tails" is the answer to the second then "1/3" is the answer to the first.

    The manner in which (1) is stated suggests that Sleeping Beauty is referring to the wide centered possible world spanning the whole experimental run. In that case, her credence in H should be 1/2.

    The second one makes it rational for her to rely on her credence regarding narrow centered possible worlds spanning single awakening episodes. There indeed isn't any entailment from the suitability of one framing of the question to the suitability of the other, whether from (1) to (2) or vice versa. The two sentences concern themselves with different questions.

    By the way, I've just finished reading Groisman's The End of Sleeping Beauty’s Nightmare. Although I found it somewhat glib, it's undoubtedly the most readable paper on this topic that I've encountered so far. Groisman makes some very compelling points that are directly relevant to the discussion we are presently having. However, I prefer Laureano Luna's approach, which elaborates further on Groisman's work and brings more nuance.
  • Sleeping Beauty Problem
    Then forget the nature of prize. If I know that I’ve won a prize my credence is that the first coin toss landed tails.Michael

    In your prize scenario, knowing that you've won a prize indeed gives you new information, and this can rightly affect your credences. However, in the scenarios with Sleeping Beauty and the prisoner, merely being presented with an opportunity to bet or escape does not give them any new information about the outcome of the coin toss (or throw of the die). They must decide how to take advantage of this opportunity (by choosing to carry the torch or the plank, or choosing what safehouse address to communicate to the police) before gaining any knowledge about the success of the attempt. The offering of the opportunities carries no information and provides no ground for updating credences.
  • Sleeping Beauty Problem
    That's only because I walk into one film. If I'm given amnesia and walk into the second film (if there is a second film) then it affects my credence.

    It's exactly like my scenario with the coin toss and prizes. If heads then the car is the possible prize, otherwise the motorbike is the possible prize. If a car then a single coin toss determines if I get it (if heads), if a motorbike then two coin tosses determine if I get it (one head is enough to win).
    Michael

    In your scenario, the nature of the prize is conditioned on the coin toss results. In the film example, the number of features (and the coin toss result that determined this number) is probabilistically independent of the probability that Rex Harrison will be featured. And likewise with Sleeping Beauty and the betting opportunities, or the prisoner and their escape opportunities.
  • Sleeping Beauty Problem
    Sorry, misunderstand the movie example. It’s a different answer if I only get to walk into one film, which would be comparable to Sleeping Beauty only waking on Monday (or Tuesday) if tails.Michael

    The main point is that seeing Rex Harrison being featured (while knowing that 1% of the movies randomly being shown in this theater feature him) doesn't impact your credence in this movie being part of a double feature.
  • Sleeping Beauty Problem
    No, but if I walk in not knowing if it’s the first or second then my credence favours it being part of a double feature.Michael

    Yes, me too. But that's just because two out of three showings are parts of double features. Seeing Rex Harrison being featured doesn't change my credence. It remains 2/3.
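
    Since seeing Rex Harrison is probabilistically independent of the coin flip, conditioning on his appearing shouldn't move the 2/3 at all. A quick sketch confirms this (illustrative code and numbers of my own, using the 1% figure from the example):

```python
import random

# Among individual showings, measure the fraction belonging to a double
# feature, with and without conditioning on Harrison being featured.
random.seed(4)
double = harrison_double = showings = harrison = 0
for _ in range(300_000):
    n_films = 1 if random.random() < 0.5 else 2   # coin flip: single or double
    for _ in range(n_films):
        showings += 1
        double += (n_films == 2)
        if random.random() < 0.01:                # 1% chance Harrison features
            harrison += 1
            harrison_double += (n_films == 2)

print(round(double / showings, 3))            # close to 2/3
print(round(harrison_double / harrison, 2))   # still close to 2/3
```
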
  • Sleeping Beauty Problem
    Yes. The probability of it being red if heads is 1%. The probability of it being red if tails and Monday is 1%. The probability of it being red if tails and Tuesday is 1%. The probability of it being red if tails is 1 - 0.99^2 = 0.199.

    The probability of it being red if tails is greater than the probability of it being red if heads, therefore if it's red then it is more likely tails.
    Michael

    I believe you meant 0.0199.

    That represents the probability of the tulip being red at least once across the two days of awakening in the event that the coin lands tails, correct? While it's true that Sleeping Beauty, upon observing the red tulip, can deduce the truth of this disjunctive proposition, the two events are probabilistically independent. Thus, although it's more probable that at least one of these events will occur if there are two opportunities for it to occur within a single experiment run rather than just one, the observation of one such event doesn't entail that it's more probable that you're in an experiment run wherein two such opportunities occur.
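
    For the record, the corrected arithmetic is just this (a two-line sketch; the variable names are mine):

```python
# Probability the tulip is red on at least one of the two tails-awakening
# days, given an independent 1% chance of red on any given day.
p_red_day = 0.01
p_red_given_heads = p_red_day                    # a single awakening day
p_red_given_tails = 1 - (1 - p_red_day) ** 2     # at least one of two days

print(p_red_given_heads)                # 0.01
print(round(p_red_given_tails, 4))      # 0.0199
```
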

    Consider this analogy: you're entering a movie theater where there's an even chance of a double feature being shown (according to a coin flip). There's a one percent chance that any given film will feature Rex Harrison. Suppose you see Harrison featured in the first film. Does that increase your credence that there will be a subsequent feature?

    If you think about it, Lewis's notion—that Sleeping Beauty can conclude from knowing it's Monday that a future coin toss is more likely to yield heads with a 2/3 probability—is already rather puzzling. It would be even more extraordinary to believe that any random observation made by Sleeping Beauty could allow her to update her credence about an unrelated coin flip.
  • Sleeping Beauty Problem
    It does.Michael

    I don't understand your argument at all. Finding an escape opportunity with probability P is a constant feature of the prisoner's environment. Sleeping Beauty's waking days (or the prisoner's captivity days) aren't distinguished from one another in point of betting or escape opportunities. Suppose there is always a tulip on Sleeping Beauty's nightstand. Would her seeing the tulip enable her to change her credence in H? What if there is a 1% chance that the tulip is red on any given awakening day? Would that make any difference?
  • Sleeping Beauty Problem
    Can you give actual numbers?Michael

    Suppose there is a 0.01% chance of finding an opportunity to escape on any given day held captive, regardless of whether that day is the only one or one among six in a kidnapping event. Finding such an opportunity doesn't yield any updating of credence. Strictly speaking, it would yield a very small update, since the fact that you haven't yet escaped slightly increases the chances that you are experiencing your first day of captivity. But this is quite immaterial. We can suppose that you are always recaptured, but that your goal nevertheless is to maximize your number of escape stints (while warding off the crocodiles or lions). In Sleeping Beauty's case, opportunities to bet could arise on 100% of awakenings, and provided the payoffs are paid after the experiment is over, that wouldn't impact her credence either.
  • Sleeping Beauty Problem
    It's quite straightforward that P(Dice roll 6|opportunity to escape) > P(Dice roll 1-5|opportunity to escape)Michael

    Indeed, which is basically the 'thirder' solution (in this case, the 5/11er solution).
  • Sleeping Beauty Problem
    It does change the epistemic situation. It's exactly like the scenario with the coin tosses and the prizes, where in this case the prize is the opportunity to escape.Michael

    Which is why I had included the proviso that the (rare) opportunities be proportional to the number of days the hostage is held captive. Under those conditions, they carry no information to the hostage.

    Likewise, enabling Sleeping Beauty to bet on H on each awakening provides no information to her, provided the payouts are delivered only after the experiment is over.
  • Sleeping Beauty Problem
    It doesn't make sense to me to say that the answer in this scenario is relevant to a scenario where there are no opportunities to escape.Michael

    The opportunity to escape just enables the prisoner to put their credence to good use, and to choose how to most appropriately define the states that those credences are about. It doesn't change their epistemic situation.

    I discussed this a little further with GPT-4:
    Reveal

    PN:

    Just to give you the idea (since your context window is narrower than Michael's): in this variation, a hostage is taken to one of two safehouses based on the throw of a die. If the die doesn't land on six, the hostage is taken to safehouse #1. If the die lands on six, the hostage is taken to safehouse #2 for six consecutive days, and is administered an amnesia-inducing drug each night. The hostage is informed of this protocol. The question posed is: "On any given day, what is your credence in the proposition 'I am at safehouse #1'?" The credence here refers to the hostage's subjective probability, or belief, in being at safehouse #1. The answer provided in the discussion suggests that the hostage's credence should be 5/11 rather than 5/6, despite the fact that there is a 5/6 chance of the die not landing on six. This reasoning is based on the fact that the hostage gets to guess six times regarding the same die throw when the result is six.

    GPT4:

    Thank you for providing the context, it's indeed a crucial piece of information for this discussion. I now see why the 'thirder' position, or the 5/11 in this case, comes into play. It's a fascinating way to illustrate the Sleeping Beauty problem. And it makes perfect sense to apply the reasoning here in a similar way, given the similar structure of the problems.

    When the hostage awakes, they have to consider not only the initial 5/6 chance of being in safehouse #1 but also that, should they be in safehouse #2, they could be guessing their location six different times on the basis of the same die roll. Hence, despite the initial die roll favoring safehouse #1, the multiplicity of questioning events when in safehouse #2 adjusts the credence to 5/11.

    Your explanation above about how the hostage's goals can dictate whether they adopt a 'thirder' or 'halfer' perspective is clear and logical, in light of this information.

    PN:

    In a further refinement I suggested that safehouse #1 is surrounded by a crocodile-infested moat, while safehouse #2 is surrounded by a forest filled with lions. While escaping, the hostage can carry either a large wooden plank (to bridge the moat) or a torch (to ward off the lions), but not both.

    I suggested that the prisoner should maximise their survival chances by carrying the torch, in line with their 6/11 credence of being in safehouse #2. But this is true only if their chances of finding such an escape opportunity are proportional to the number of days they are held captive, such that the finding doesn't yield an updating of their credence.

    Likewise, if they wish to direct the police to rescue them (by communicating the address of one safehouse), and know that they will be afforded only one opportunity to do so per kidnapping event, they should direct them to safehouse #1. One twist here is that, under those conditions, finding the unique opportunity to communicate with the police would lead the prisoner to update their credence to 5/6 that they are in safehouse #1. But if they can ensure that their message will be received by sending it every day, and that the police will not arrive before the end of the "kidnapping event" (but before they are moved overseas, say), then their credence will not be updated and it will be rational for them to direct the police to safehouse #1 despite their credence being that they are in safehouse #2 with 6/11 probability. Can you see why?

    GPT4:

    This scenario adds another layer of complexity to the problem, but your explanations are clear and make sense.

    When it comes to choosing between the plank and the torch, if the hostage's chances of finding an escape opportunity are proportional to the number of days they are held captive, and finding such an opportunity doesn't lead to an updating of their credence about which safehouse they're in, it's rational for them to choose the torch, aligning with their 6/11 credence of being in safehouse #2. This is because they will have more opportunities to use it.

    On the other hand, if they can only communicate with the police once per kidnapping event, it would be more rational to direct the police to safehouse #1, even though their credence is that they are in safehouse #2 with a 6/11 probability. If finding this unique opportunity to communicate leads the hostage to update their credence to 5/6 that they are in safehouse 1, then it makes more sense to direct the police to safehouse #1. This is because, after the update, the chances that they are in safehouse #1 is higher.

    However, if the hostage is able to send a message every day, and they know that the police will not arrive until the end of the kidnapping event, it would still be rational to direct the police to safehouse #1, despite the 6/11 credence of being in safehouse #2. The reason is that, in this scenario, the credence will not be updated every day because the opportunity to send a message doesn't provide new information about their location. In this case, the best strategy would be to guide the police to the location they're most likely to be in over the course of the whole kidnapping event, which would be safehouse #1 due to the higher probability of being taken there initially, despite having fewer chances to send a message from there.

    PN:

    Yes, indeed. It seems counterintuitive that the hostage thinks they are more likely than not to presently be located in safehouse #2, yet is rational in directing the police to safehouse #1. But that's because, even though they are thereby more likely to be sending a message that will direct the police to the wrong safehouse, those wrong messages will arrive in batches of six and will only misdirect the police once!

    GPT4:

    Yes, exactly! This counterintuitive scenario arises from the peculiar structure of the problem, which combines the probabilities of being in a particular safehouse on a given day with the probabilities of being there across the entirety of the kidnapping event. The messages, when sent in batches, will indeed only misdirect the police once, even though there are six of them. And even though on any given day the hostage thinks they're more likely in safehouse #2, across the entirety of the kidnapping event, they're more likely to be in safehouse #1. Therefore, it makes sense to direct the police to safehouse #1, which seems counterintuitive when considering daily probabilities but becomes logical when taking into account the entire event's probabilities.
  • Sleeping Beauty Problem
    Your safehouse and escape example is analogous to that first case, not the second. Your introduction of opportunities to escape changes the answer, and so it isn't comparable to the Sleeping Beauty case where there is nothing comparable to it (and so if anything the second case above is closer to the Sleeping Beauty problem).Michael

    My safehouse and escape example – before I introduce the idea of escape possibilities – is designed in such a way that the prisoner's epistemic situation exactly mirrors Sleeping Beauty's epistemic situation. Upon awakening on any given day, the prisoner's credence that they are in safehouse #1 is the same as Sleeping Beauty's credence that the coin landed heads. In both cases, 'H' or 'being-in-safehouse-#1' are states that admit of both narrow and wide centered-possible-worlds interpretations. If the agents' expressions of their credences are meant to target the narrow states, then they are trying to track the frequencies of those states as distributed over awakening episodes. If they are meant to target the wide states, then they are trying to track their frequencies as distributed over experimental runs (or kidnapping events). Introducing the concept of escape possibilities was intended to illustrate that what is at stake in maximizing the accuracy of the expressed credences can dictate the choice between the narrow and wide interpretations of the states that they are about.

    In the safehouse and escape example: if the prisoner's goal is to maximize their chances of correctly predicting 'being-in-safehouse-#1' on any given awakening day, they should adopt the 'thirder' position (or a 5/11 position). If their goal is to maximize their chances of correctly predicting 'being-in-safehouse-#1' for any given kidnapping event (regardless of its duration), they should adopt the 'halfer' position (or a 5/6 position).
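
    For the record, these two target frequencies can be checked with a quick simulation. The sketch below assumes a setup consistent with the figures above: each kidnapping event sends the hostage to safehouse #1 for one day with probability 5/6, or to safehouse #2 for six days with probability 1/6. (These exact parameters are my reconstruction; any pair of parameters yielding the 5/11 per-day and 5/6 per-event figures would serve equally well.)

    ```python
    import random

    random.seed(0)
    events = 200_000

    days_in_1 = days_in_2 = 0   # per-awakening-day tally
    events_in_1 = 0             # per-kidnapping-event tally

    for _ in range(events):
        if random.random() < 5/6:   # taken to safehouse #1: held for one day
            events_in_1 += 1
            days_in_1 += 1
        else:                       # taken to safehouse #2: held for six days
            days_in_2 += 6

    per_day = days_in_1 / (days_in_1 + days_in_2)   # narrow states: ~ 5/11
    per_event = events_in_1 / events                # wide states:   ~ 5/6
    print(per_day, per_event)
    ```

    Both credences are accurate; they simply track different reference classes (awakening days versus kidnapping events).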
  • Sleeping Beauty Problem
    That's precisely the point, and why I suggested ignoring days and just saying that if heads then woken once and if tails then woken twice. P(W) = 1.

    That (Heads & Tuesday (or second waking)) is a distraction that leads you to the wrong conclusion.
    Michael

    There are two possible conclusions depending on the way you choose to define the temporal span of your centered possible worlds (the states in the problem space). The conclusions are different because the questions are different. Of course, if you are unclear about the manner in which you define your states, then you might get confused and answer the question about the narrowly defined states (possible awakenings) with an answer that is suitable for the widely defined states (possible experiment runs), or vice versa. But what determines the right question to ask isn't the statement of the Sleeping Beauty problem as such but rather your interest or goal in asking the question. I gave examples where either one is relevant.
  • Sleeping Beauty Problem
    If you're going to reason this way then you also need to account for the same with blue. You reach in after the second ball and pull out nothing. So really P(Dismissed) = 2/5.Michael

    Under the narrow centered worlds parameterization of the problem space, the possibility (indeed, certainty) of Sleeping Beauty being dismissed on the third day is irrelevant. The third day isn't a day of possible interviews. And likewise in the ball picking analogy, if we wish to make it mirror the structure of the Sleeping Beauty problem.
  • Sleeping Beauty Problem
    It doesn't. You're dismissed after red or the second blue.Michael

    That's cute but rather point missing. Sleeping Beauty also is dismissed on the third day no matter what. But if we ignore the narrow centered case where the day is Tuesday and the coin landed Heads, we are changing the structure of the problem and making it unintelligible that we should set the prior P(W) to 3/4.
  • Sleeping Beauty Problem
    Here is an excerpt of a discussion I've had with GPT-4. I don't think its last suggestion about the error Lewis might be making regarding Beauty's updating her credence in H in light of the fact that today is Monday is plausible. Despite that, and other small mistakes, I'm nevertheless quite impressed with its ability to grasp formal arguments.

    PN:

    Thank you! I've now made use of the stipulations above, assuming a prior P(W) = 3/4 that Sleeping Beauty is awakened on any given day, and that W and H represent narrow possible worlds centered on Beauty's potential awakening events (or on the event of her remaining asleep on Tuesday). I've calculated P(H|W) using Bayes' theorem and arrived at the expected answer. Would you kindly do the same, step by step, explaining your reasoning as you go along, to see if our intuitions match?

    GPT-4:

    Sure! We can use Bayes' Theorem to calculate P(H|W) step by step. Let me first explain the process.

    Bayes' Theorem states that P(H|W) = [P(W|H) * P(H)] / P(W), where:

    • P(H|W) is the probability of hypothesis H (that the coin landed heads) given evidence W (that Sleeping Beauty is awake),
    • P(W|H) is the probability of evidence W given hypothesis H,
    • P(H) is the prior probability of hypothesis H, and
    • P(W) is the prior probability of evidence W.
    Now, let's start by establishing our priors:

    • P(H) = 1/2 (because the coin is fair),
    • P(W) = 3/4 (as given in the problem; Sleeping Beauty is awakened on any given day with this probability).

    Next, we need to figure out P(W|H):

    • If H is true (the coin landed heads), Sleeping Beauty is awakened only once (on Monday), so P(W|H) = 1.

    We can now plug these values into Bayes' theorem to get P(H|W):

    P(H|W) = [P(W|H) * P(H)] / P(W)
    = [1 * 1/2] / 3/4
    = 1/2 / 3/4
    = 2/3.

    So, P(H|W) = 2/3, which means that given that Sleeping Beauty is awake, there is a 2/3 probability that the coin landed heads. This answer contrasts with the common answer of 1/3 often provided in the literature, but our version of the problem has altered the prior probabilities, which has in turn influenced the outcome.

    PN:

    I myself arrived at 1/3, and the prior seems consistent with what thirders also assume. I think your interpretation of P(W|H) may be questionable. P(W|H) is the probability that Beauty is in a narrow centered world where she is awakened, conditioned on the outcome H. When the outcome is H, she is awakened in the world Monday&H, and not awakened in the world Tuesday&H. P(W|H) is therefore 1/2, and P(H|W) is calculated to be 1/3, as expected.

    GPT-4:

    You're right, and I apologize for the confusion. My interpretation of P(W|H) was incorrect. Let's correct that.

    P(W|H) should indeed be the probability of W given H, which is the probability that Sleeping Beauty is awake on a given day given that the coin landed heads. In the case where the coin lands heads, Sleeping Beauty is only awakened on Monday, not Tuesday. Therefore, P(W|H) is indeed 1/2.

    Given that, let's recalculate:

    P(H|W) = [P(W|H) * P(H)] / P(W)
    = [1/2 * 1/2] / 3/4
    = 1/4 / 3/4
    = 1/3.

    You're correct. Given that Sleeping Beauty is awake, there is a 1/3 probability that the coin landed heads, which is the commonly accepted solution among "thirders". Thank you for your patience and for catching that mistake.

    PN:

    No worries! Unlike you, I always have the opportunity to reread my drafts and scour them for mistakes!

    GPT-4:

    Thank you for your understanding! It's true that as a machine learning model, I provide responses in real time without the ability to review or edit my responses after they are generated. I rely on the accuracy of the information I was trained on and the rules of inference I use to generate responses. I always strive for precision, so I appreciate your patience and understanding when occasional errors do occur. Let me know if you have any more questions or problems to solve - I'm here to help!

    PN:

    Interestingly, my approach can also warrant the halfer solution when suitable priors are chosen consistently with the focus being displaced onto wider centered worlds that span complete experiment runs. (This is the approach Michael favors, although he is not explicit about his choice of centered worlds.) Lewis's approach, though, has the untoward consequence that upon learning that today is Monday, Beauty must update her credence in H to 2/3. Intuitively, it should remain 1/2. In the light of our previous discussion, can you surmise what mistake Lewis might be making?

    GPT-4:

    Lewis's approach is a prominent variant of the halfer position in the Sleeping Beauty problem. According to his view, Sleeping Beauty should assign a credence of 1/2 to the coin landing heads because the initial probability, before she went to sleep, was 1/2 and she doesn't get any new information upon waking up that would lead her to update this credence.

    However, as you've pointed out, upon learning that it is Monday, Lewis suggests that Sleeping Beauty should update her credence in the coin having landed heads to 2/3. This has been criticized as counterintuitive. From Sleeping Beauty's perspective, the fact that it's Monday doesn't seem to provide her with any information that would make the coin landing heads more probable.

    One potential error in Lewis's reasoning might be related to the point we've discussed earlier, namely the definition and interpretation of the events and the kind of possible world one chooses to center upon. When Sleeping Beauty wakes up, her situation is not symmetrical with respect to the possible worlds "Heads & Monday" and "Tails & Monday". While it's true that these two worlds are equally likely given that it's Monday, Sleeping Beauty's evidence upon waking up isn't just that it's Monday, but rather something like "it's an awakening day and I don't know which day it is". If one interprets the evidence in this way, it seems more reasonable to say that her credence in heads should remain at 1/2 upon learning that it's Monday.

    Additionally, an error in Lewis' argument could also arise from a misunderstanding of conditional probabilities, specifically the probability of it being Monday given that the coin landed heads, P(Monday|H). This would naturally be 1, because if the coin landed heads, she is only awakened on Monday. The halfer could potentially misinterpret this as saying the coin is more likely to have landed heads given that it's Monday, which isn't necessarily the case.

    As always in these sorts of debates, much depends on how exactly we interpret the notions of evidence, updating, and conditional credence. Different interpretations can lead to different solutions to the problem.
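
    Incidentally, the corrected derivation from earlier in the dialogue is simple enough to check with exact rational arithmetic. A minimal sketch of the narrow centered-worlds calculation, using the priors stipulated above:

    ```python
    from fractions import Fraction

    p_h = Fraction(1, 2)          # fair coin
    p_w = Fraction(3, 4)          # prior that a given centered day-state is a waking
    p_w_given_h = Fraction(1, 2)  # under Heads: awake on Monday, asleep on Tuesday

    # Bayes' theorem: P(H|W) = P(W|H) * P(H) / P(W)
    p_h_given_w = p_w_given_h * p_h / p_w
    print(p_h_given_w)  # 1/3
    ```

    Exact fractions avoid any worry that the 1/3 result is a floating-point accident.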
  • Sleeping Beauty Problem
    I didn't properly address this but I actually think it illustrates the point quite clearly. I'll amend it slightly such that the blue balls are numbered and will be pulled out in order, to better represent Monday and Tuesday.

    If I'm told that this is my first ball (it's Monday) then P(R) = P(B1) = 1/2.

    If I'm told that this is a blue ball (it's Tails) then P(B1) = P(B2) = 1/2.

    If I don't know whether this is my first ball or is a blue ball (neither know it's Monday or tails) then I should assume I randomly select a ball from a bag, and so P(R) = 1/2 and P(B1) = P(B2) = 1/4.

    This contrasts with your reasoning where we randomly select a ball from a pile such that P(R) = P(B1) = P(B2) = 1/3.
    Michael

    Your revised scenario seems to neglect the existence of a state where the player is being dismissed. This isn't quite equivalent to Milano's 'blind state' in the Sleeping Beauty context – where the 'blind state' refers to the case in which Sleeping Beauty is not awakened on Tuesday and thus never experiences it – but it functions in a similar way for the purpose of this analysis. In this state, the time comes to pull a second ball from the bag, but there's no such ball because the randomly selected bag only contained one. Even though the player is dismissed (as opposed to Sleeping Beauty, who is left asleep), a prior probability of P(Dismissed) = 1/4 can still be assigned to this state where he loses an opportunity to bet/guess. Upon observing the game master pulling out a ball, the player updates his prior for that state to zero, thus impacting the calculation of the posterior P(Red|Opp). If we assign P(Dismissed) = 1/4, it follows that P(Red|Opp) = 1/3.
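
    In case it helps, the four-state calculation can be spelled out explicitly. This is only an illustrative sketch, and the state labels are my own, but it reflects the priors just described (each narrow draw-slot state weighted 1/4, with the dismissed state ruled out upon observing a pull):

    ```python
    from fractions import Fraction

    prior = Fraction(1, 4)  # equal prior over the four narrow draw-slot states
    # (bag, slot) -> what happens at that slot
    states = {
        ("red", 1): "red",        # red bag, first pull: the red ball
        ("red", 2): "dismissed",  # red bag, second pull: no ball, player dismissed
        ("blue", 1): "blue",      # blue bag, first pull: B1
        ("blue", 2): "blue",      # blue bag, second pull: B2
    }

    p_opp = sum(prior for outcome in states.values() if outcome != "dismissed")  # 3/4
    p_red = sum(prior for outcome in states.values() if outcome == "red")        # 1/4
    print(p_red / p_opp)  # 1/3
    ```

    Conditioning on having an opportunity to bet at all is what drives the posterior from 1/2 down to 1/3.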

    I now concur with you that there are scenarios – contingent upon specific purposes or interests – where Sleeping Beauty would want her credence P(H|W) to track the possibility of her awakening during a Heads-experimental run. In such scenarios, since the 'blind state' Tuesday&asleep isn't an independent possibility but is considered merely a feature of the relevant centered state (i.e., the state where she is in a Heads experimental run), it can be assigned the same prior as the probability that she will be awakened (either once or twice) during a Heads-experimental-run. This prior is 1/2. In this case, P(H|W) is calculated to be 1/2, but the interpretation of states H and W differs. The space of possibilities is parameterized differently, corresponding to 'wider' centered possible worlds (i.e., centered states that span a whole experimental run).

    I've previously suggested cases where the participant's goals make them more interested in tracking narrower centered states. As revealed in the analysis by Laureano Luna (following Groisman), it can be misleading to assume that outcomes perfectly correlated with centered 'H & awake' states – regardless of how broadly or narrowly they are being defined – must also share the same probabilities. That doesn't mean the frequencies and probabilities don't track each other, a point Milano also makes. The (Bayesian) updated credences still track the frequencies of the relevantly defined centered states, which is why they can underpin rational betting behaviors, even though maximizing EV is just one among many goals an agent can have when making use of their credences.
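
    That both credences track real frequencies can be made vivid with a short simulation of the standard protocol (one awakening on Heads, two on Tails):

    ```python
    import random

    random.seed(1)
    runs = 200_000

    heads_runs = 0
    heads_awakenings = 0
    total_awakenings = 0

    for _ in range(runs):
        heads = random.random() < 0.5   # fair coin toss for this experimental run
        wakings = 1 if heads else 2     # Heads: Monday only; Tails: Monday and Tuesday
        total_awakenings += wakings
        if heads:
            heads_runs += 1
            heads_awakenings += wakings

    per_run = heads_runs / runs                          # wide centered states:   ~ 1/2
    per_awakening = heads_awakenings / total_awakenings  # narrow centered states: ~ 1/3
    print(per_run, per_awakening)
    ```

    The halfer and thirder figures each emerge as the long-run frequency of Heads relative to a different reference class, which is the sense in which both can underpin rational betting behaviors.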

Pierre-Normand
