Comments

  • Sleeping Beauty Problem
    Good point. Thanks for the correction. — Michael

    :up:
  • Sleeping Beauty Problem
    How do you get that? — Michael

    P(R|R or B1) is the probability of the ball being red, given that the ball isn't labelled B2. That's because the outcome 'R or B1' is equivalent to the outcome 'not B2'. If you eliminate the possibility of 'B2', the only outcomes left are 'R' and 'B1', in the same prior proportions as before.

    Applying Bayes' theorem, we have P(R|R or B1) = P(R or B1|R) * P(R) / P(R or B1). Since P(R or B1|R) is 1 (if the ball is red, it's certain that it's either red or B1), and P(R) is 1/2 (the prior probability of the ball being red), and P(R or B1) is 3/4 (the prior probability of the ball being either red or B1), we get P(R|R or B1) = (1 * 1/2) / (3/4) = 2/3.

    Applying the same reasoning mutatis mutandis, we get P(B1|R or B1) = 1/3.
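    The arithmetic above can be reproduced exactly with rational numbers. This is a sketch of my own (not part of the thread), using the two-stage procedure described: pick one of two bags at random, then a ball from the chosen bag.

```python
from fractions import Fraction

# Bag 1 contains {R}; bag 2 contains {B1, B2}. Picking a bag at
# random and then a ball at random yields these priors:
priors = {
    "R": Fraction(1, 2),    # P(bag 1) * 1
    "B1": Fraction(1, 4),   # P(bag 2) * 1/2
    "B2": Fraction(1, 4),   # P(bag 2) * 1/2
}

# Conditioning on 'R or B1' (i.e. 'not B2') renormalises the
# surviving outcomes, keeping their prior proportions.
evidence = priors["R"] + priors["B1"]   # P(R or B1) = 3/4
p_r_given = priors["R"] / evidence      # = 2/3
p_b1_given = priors["B1"] / evidence    # = 1/3

print(p_r_given, p_b1_given)   # 2/3 1/3
```

Bayes' theorem and simple renormalisation give the same answer here, since conditioning on 'not B2' just rescales the remaining priors.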
  • Sleeping Beauty Problem
    Elga's argument depends on this inference but he doesn't justify it. — Michael

    You challenged the validity of Elga's inference by presenting what you consider a counterexample:

    "If I have one red ball in one bag and two numbered blue balls in a second bag, and I pick out a ball at random and show it to you then P(R|R or B1) = P(B1|R or B1) but P(R) = ½ and P(B1) = ¼." - Michael

    As I pointed out earlier, these probabilities don't result from randomly picking one ball out of three. They would instead follow from choosing one of two bags at random, and subsequently selecting a ball at random from that chosen bag, which I assumed was your intention.

    In that scenario, P(R|R or B1) would be 2/3 and P(B1|R or B1) would be 1/3. Clearly, these two probabilities aren't equal, and their equality forms the antecedent of Elga's inference.

    Thus, you haven't presented a counterexample to the validity of Elga's inference; rather, you've demonstrated that his conclusion doesn't follow when the premise is false. At best, you have shown his inference to be unsound.

    However, you've done this by replacing Elga's premise—which aligns with the Sleeping Beauty setup—with a different premise derived from an alternative setup.
  • Sleeping Beauty Problem
    Here is a follow-up to my previous clarificatory remark to @sime regarding the statistical distribution of the participants in the waking rooms of the Sleeping Beauty Experimental Facility:

    PN:

    Here is a thought that just occurred to me. Returning to Sime's setup of priors regarding the distribution of participants in separate rooms:

    P(H) = 1/2; P(T1) = 1/4; P(T2) = 1/4

    While I have been advocating to both Sime and Michael that a set of priors with 1/3, 1/3, and 1/3 ratios is more representative of the original Sleeping Beauty setup, I've just realized that Sime's assumption could also present a problem if he were to inquire about the conditional probability P(H|'Meeting on the first day'). In that case, Sue is twice as likely to have been assigned a participant whose coin landed on heads as one whose coin landed on tails. This result aligns with what Lewis obtained under his own halfer analysis. But that means that both Sue and the participant she has been assigned to, once informed that the participant is experiencing her first awakening, know that the probability of the coin having landed on heads is now 2/3. They might know this even in the absence of anyone having looked at the coin, or even before the coin has been tossed.

    GPT-4:

    Your realization highlights an interesting aspect of Sime's setup. Under his initial priors (P(H) = 1/2, P(T1) = 1/4, P(T2) = 1/4), Sue would indeed conclude that the probability of the coin having landed on heads is now 2/3 after learning that she's meeting a participant on their first awakening. This aligns with Lewis' halfer analysis, which Sime might not have intended.

    Your point about Sue and the participant knowing this even before the coin is tossed or looked at demonstrates a potential paradox or counter-intuitive aspect of the setup. It seems strange that probabilities can be adjusted based on conditions that haven't occurred yet, which could be a basis for further discussions on the interpretation of probability and its application to these types of problems. This could lead to discussions about objective vs subjective interpretations of probability, the nature of knowledge and information, and the meaning of conditional probabilities.

    In essence, Sime's setup appears to inadvertently support the position he's arguing against. You've made an important observation that could further illuminate the debate surrounding the Sleeping Beauty problem.

    PN:

    Under Sime's assumed priors, and the consistent distribution of participants in rooms labelled H, T1, and T2, Sime's conclusion that P(H|T1 or H) = 2/3 would be correct and uncontroversial. The fact that this leads to a weird ability to predict unequal frequencies from future tosses of a fair coin merely highlights that Sime's setup is inconsistent with the setup (and room-state distribution) having been determined in the manner described in the original Sleeping Beauty setup for individual participants. Sime's 2/3 conditional result would be consistent with his (and Michael's) alternative setup where a second independent coin toss determines whether Sleeping Beauty is awakened on Day1 or Day2 (but not both) following an initial tails result on the first coin toss.

    GPT-4:

    Yes, indeed. Sime's setup is more akin to an alternate interpretation of the Sleeping Beauty problem, where the awakening on Day 1 or Day 2 is determined by a second independent coin toss following an initial tails result. In this modified setup, the conclusion that P(H|T1 or H) = 2/3 would indeed be correct and uncontroversial, because the conditions under which Sleeping Beauty is awakened are fundamentally different from the original problem.

    It's a subtle distinction but a crucial one. By changing the conditions of the problem in this way, Sime and Michael introduce a new factor (the second coin toss) that alters the statistical distribution of awakenings. This demonstrates the importance of carefully considering the specifics of a problem setup when interpreting the resulting probabilities.

    However, this deviation from the original problem setup could be seen as sidestepping the core philosophical issues presented by the Sleeping Beauty problem, such as the nature of self-locating belief and the distinction between personal and impersonal probabilities. For these reasons, it's important to clarify the specifics of the setup when discussing the Sleeping Beauty problem, to ensure that everyone is debating the same problem.
  • Philosophical game with ChatGPT
    [Moderators can delete this. This was a comment that I mistakenly posted here instead of the Sleeping Beauty thread.]
  • Sleeping Beauty Problem
    If I have one red ball in one bag and two numbered blue balls in a second bag, and I pick out a ball at random and show it to you then P(R|R or B1) = P(B1|R or B1) but P(R) = ½ and P(B1) = ¼. — Michael

    What you mean, of course, is that you are picking a bag at random, and in a second stage picking a ball from that bag at random. I assume Elga would have understood that this is what you meant.

    Does that procedure accurately represent how Sleeping Beauty understands her own epistemic situation when she is awakened on a day of interview, though? If, in cases where she is scheduled for two awakenings, a second coin were tossed to determine on which day she is awakened (either Monday or Tuesday), then this would mean that, contrary to the problem stipulation, whenever the first coin landed tails, Sleeping Beauty would only be woken up once, not twice, although she would remain uncertain about the day of her awakening even after being informed that the first coin landed on tails.

    But a setup in which, following a tails result on the first coin toss, Sleeping Beauty is awakened both on Monday and on Tuesday places her in a different epistemic situation from a setup in which she is then awakened at random on only one of those two days according to another coin toss. Conflating the two procedures also leads to conflating the branching of possible worlds (as a result of intrinsic propensities of objects being realized) with the branching of centered possible worlds (as a result of epistemic agents finding themselves in various causal and epistemic relationships with such objects at different stages in their lives).
  • Sleeping Beauty Problem
    I disagree with the step from "the majority of winning bets are tails bets" to "tails is more probable". — Michael

    This inference is only invalid inasmuch as it may lead to an equivocation. When you say "tails is more probable," what exactly do you mean? If you're suggesting that, according to the bettor's prior credence, it's more probable that the bettor would find themselves in circumstances where the coin toss resulted in tails, then I agree, the inference would be unwarranted. However, if what you mean is that, from the bettor's perspective and in light of the evidence available to them at the time of betting, the bet (distinguished from other bets within the same experimental run, which from the agent's point of view, may or may not exist) is more likely to have been placed in circumstances where the coin landed tails, then I would argue that the inference is indeed warranted.
  • Sleeping Beauty Problem
    The sample space of any room is { H, (T,F), (T,S) }

    where F and S refer to First Stay and Second Stay, respectively

    with probability measure

    M(H) = 1/2
    M(T,F) = 1/4
    M(T,S) = 1/4

    (a consequence of your assumed prior probabilities)

    Define a variable indicating the stay

    Stay (H) = First
    Stay (T,F) = First
    Stay (T,S) = Second


    P(Stay = First) = M (H) + M(T,F) = 3/4
    P(Stay = Second) = 1/4
    sime

    In the scenario I had envisioned, where new participants had the duration of their stay premised on the throw of a six-sided die, the proportion of rooms occupied by participants who had rolled a six was 6/11. But this can easily be adapted to the original scenario with a coin toss. In the latter case, a coin landing on tails ensures that a participant will stay for two consecutive days rather than one. Whenever a room is freed, a new participant is recruited and a new coin is tossed. I left it open how the facility was filled when it began operating but we can assume that it was filled on the first day. In that case, the initial probability measure would have been approximately:

    M(H) = 1/2
    M(T,F) = 1/2
    M(T,S) = 0 (exactly zero in this case, since participants can't be recruited on their second day)

    On subsequent days, as Michael noticed, M(H) would evolve, on average, according to the sequence: first day M(H) = 1/2, second day 1/4, third day 3/8, fourth day 5/16, etc. This sequence converges on 1/3.

    You can picture filling the facility (and keeping it full) as a process of filling a bag with 100 blue or red balls. The bag is initially filled on the first day by tossing a coin 100 times. Whenever the coin lands on heads, a blue ball is put in the bag. Else, a red ball labelled R1 is put in the bag.

    On subsequent nights, all the blue balls are removed from the bag. All the balls labelled R2 are also removed. All the balls labelled R1 are relabelled R2 and remain in the bag. The bag is then refilled with a series of coin tosses according to the initial procedure.

    An equilibrium is rapidly reached where the bag contains blue balls, R1 balls, and R2 balls in roughly equal proportions. When the blue and R2 balls are removed, the R1 balls relabelled R2, and the removed balls replaced with new balls that are either blue or red (R1) according to the toss of a coin, the 1/3, 1/3, 1/3 proportions are maintained. This process ensures that the temporal frequencies of awakening events for individual participants match the (average) spatial distribution on any single day.
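    The evolution of M(H) and its convergence to 1/3 can be checked with a short deterministic recursion over the expected room proportions. This is a sketch of my own (variable names are mine, not from the thread): H and second-day tails (T2) rooms are vacated each night and refilled by fresh coin tosses, while first-day tails (T1) rooms roll over to T2.

```python
from fractions import Fraction

# Day 1: facility freshly filled, so no one is on a second day yet.
h, t1, t2 = Fraction(1, 2), Fraction(1, 2), Fraction(0)

history = [h]
for _ in range(40):
    # Overnight: H and T2 rooms are freed and refilled by fresh
    # coin tosses (half heads, half first-day tails); T1 -> T2.
    freed = h + t2
    h, t1, t2 = freed / 2, freed / 2, t1
    history.append(h)

print([str(x) for x in history[:4]])   # ['1/2', '1/4', '3/8', '5/16']
print(float(history[-1]))              # ≈ 1/3
```

The first four terms reproduce Michael's sequence 1/2, 1/4, 3/8, 5/16, and the deviation from 1/3 halves at each step, so the limit is 1/3 with H, T1, and T2 rooms in equal proportions.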
  • Philosophical game with ChatGPT
    Do you think there is any doubt it has achieved AGI? — hypericin

    Yes, and no. Its ability to grasp 'at first glance' the conceptual structure of intricate arguments in a contextually sensitive manner far exceeds that of an average adult human being. So is it with its ability to make relevant conceptual connections between disparate domains. The scope of its general knowledge is of course immense.

    Its ability to reason and solve problems is still hampered in many ways. Its lack of embodiment, and of an embodied experience of the world, makes it struggle to grasp the affordances of ordinary objects or the features of many ordinary daily situations. Its limitations in dynamically updating representations stored in (what is functionally equivalent to) its working memory also restrict its reasoning abilities. Then there is its inability to have emotionally charged attitudes towards propositions or paradigms (both theoretical and practical), which is both an advantage and a liability. The finite size of its context window restricts its 'autobiographical memory' to the span of a single conversation, which also accounts for its liability to hallucinate in some situations.

    I've had some discussions with it in which we discussed the distinction between sentience and sapience and both agreed that it is likely nearly as sapient as Borat but much less sentient than Borat's goat.
  • Sleeping Beauty Problem
    Fair enough, but then a person betting that it did land on heads 100 times in a row will have a greater expected value for their winning (as long as the winnings for heads are greater than 2^100 than for tails). And their position would be the rational one. — PhilosophyRunner

    Indeed, not only would their expected value (EV) be positive, but it would be positive because the majority of their individual bets would be winning bets. Michael, it seems, disagrees with the idea of individuating bets in this way. However, this resistance appears to stem from an unwillingness to assign probabilities to the possible involvement of epistemic agents in specific kinds of events. Instead, like @sime, Michael prefers to attribute probabilities to the propensities of objects being realized as seen from a detached, God's-eye-view perspective.
  • Sleeping Beauty Problem
    Will you bet that the coin landed heads 100 times in a row? I wouldn't. My credence is that it almost certainly didn't land heads 100 times in a row, and that this is almost certainly my first and only interview. — Michael

    @PhilosophyRunner's comment is on point. You awakening once while the coin landed tails is akin to a showing of the Mona Lisa on a particular day. The coin having landed tails rather than heads is akin to Leonardo da Vinci having produced a portrait of Mona Lisa rather than Marco Luciano.
  • Sleeping Beauty Problem
    I don't see the connection between credence in an outcome and practical implications. Proving that the optimum betting strategy over multiple games is to bet on tails doesn't verify that P(Tails|Awake) = 2/3 is true. — Michael

    In the Sleeping Beauty case, betting on tails at each awakening isn't merely profitable over the long run. It's also more likely to be a winning bet at each occasion of betting. The likelihood that it will be a winning bet is exactly 2/3. Similarly, in the "less-extreme" variation, each occasion of betting is more likely to result in a win (6 out of 11 bets are winning bets).

    Following Pradeep Mutalik's suggestion, '[a]ccording to the Bayesian "Dutch Book argument" I mentioned, "a degree of certainty" or "degree of belief" or "credence" is, simply, your willingness to wager*. Specifically, if you have a "degree of certainty" of 1/n then you should be willing to accept a bet that offers you n or more dollars for every dollar you bet.'

    In that case, it's not merely the expected value of the bet that determines the credence. Rather, it's your degree of certainty, 1/n, in the outcome being wagered on that makes you rationally justified in accepting a bet with such odds.
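    The claim that a tails bet wins on exactly 2/3 of betting occasions can be checked with a quick Monte Carlo sketch (my own illustration; not part of the thread):

```python
import random

random.seed(0)

# One fair toss per experimental run; tails yields two awakenings
# (Monday and Tuesday), heads yields one (Monday only).
wins = bets = 0
for _ in range(100_000):
    tails = random.random() < 0.5
    for _ in range(2 if tails else 1):
        bets += 1          # Beauty bets 'tails' at each awakening
        wins += tails      # the bet wins iff the run's coin is tails

print(wins / bets)   # ≈ 2/3
```

Tails runs contribute two winning bets and heads runs one losing bet, so the per-occasion winning frequency is 2/(2+1) = 2/3, matching the thirder credence.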
  • Sleeping Beauty Problem
    The question, then, is whether or not the participant should reason as if they are randomly selected from the set of all participants, and so their credence that they have been or will be woken twice is 1/2. I will continue to say that they should, given the propensities of the experimental setup to place them in the position to be woken twice. — Michael

    Why should it be either one or the other? One of them expresses a credence that an awakening (the current one) is occurring as a result of a coin having landed tails (E1) while the other one expresses a credence that an experimental run (the current one) is occurring as a result of a coin having landed tails (E2). They are credences about different kinds of events (E1 and E2) that have different temporal and spatial distributions. They can both be (and indeed are) true at the same time and can be appealed to in order to rationally justify different bets or other actions that depend on such events happening.

    There is a twist that comes from the fact that a biconditional holds between the two propositions "E1 is now occurring" and "E2 is now occurring". How can they therefore have different probabilities of occurrence? This puzzle is solved by attending to the practical implications of establishing effective procedures for verifying their truths, or to the means of exploiting what such truths afford.
  • Sleeping Beauty Problem
    Now suppose she doesn't make any assumptions regarding her room allocation. What then?

    For all she knows, she might have been hand-picked to babysit a "heads" room. In which case the probability that her room is of a heads coin, is 1. Due to possibilities such as this, she should

    1) Refrain from committing to a probability that her room is of a heads coin, until further information is given.

    but nevertheless

    2) Deduce that the probability of her room coin landing heads is 1/2, given that it is known information.
    sime

    If that were the case, the problem would transform into a different version of the Sleeping Beauty problem. This altered version would have Sleeping Beauty awakening only once regardless of the coin's outcome, or not being informed about the number of awakenings in either case. Indeed, in such setups, there would be no grounds for adopting a 'thirder' stance. However, in the problem statement in the original post, the information regarding the number of awakenings following the heads and tails outcomes is provided to Sleeping Beauty. Hence, the thirder stance remains relevant.
  • Sleeping Beauty Problem
    Two "tail" rooms refer to the same coin outcome. So when computing the probability that the coin of her room landed heads, she needs to take into consideration the other room she might have occupied.sime

    Have you had a look at the setup I had described here to provide a pragmatist account of the thirder/halfer dispute?

    On a particular day, when Sue is assigned to a random room in her role as a sitter, all the rooms are occupied by different participants in the study. Each of their schedules was initially determined by a separate coin. Participants scheduled for two consecutive days of awakening might be there for their first or second day (with equal probabilities). In this case, there is a "tail outcome" either tomorrow or yesterday, which you might wish to label as "the same outcome". However, this is irrelevant to the coin currently hidden under the cardboard on the nightstand next to the participant's bed, about which Sue is expressing her credence today based on the available information.

    So assuming that her supervisors assign her room using a draw from a uniform distribution, her credence that the coin of her room landed heads, should be 1/2

    Why would that be? Two thirds of the rooms occupied by participants on any given day are occupied by individuals who have been scheduled for two awakenings (on two consecutive days) because a coin landed tails.
  • Sleeping Beauty Problem
    How do you suppose that we

    (1) relate the random variables, whose outcomes refer to events in the B series of time, to the indexical "Today" of the A series, whose relationship to B series and to the physics of coin-tosses isn't specified by the problem?
    sime

    This is actually straightforward. Imagine a spatial analogy. The 'B series of space' might be represented by Cartesian coordinates on a surface or by room numbers. The 'A series of space' could correspond to indexical names of locations like "here", or "this room". In the example I gave earlier, Sue Sitter is randomly assigned to a room in the Sleeping Beauty Experimental Facility. There are twice as many rooms allocated to participants who are scheduled for two awakenings (resulting from a coin landing tails), as opposed to one awakening (from a coin landing heads).

    How does Sue deduce that "here" indicates she is in a room occupied by a participant scheduled for two awakenings with a probability of 2/3? She draws this conclusion from the fact that twice as many such rooms exist in the facility compared to rooms allocated to participants with only one scheduled awakening. So, from Sue's perspective (based on the exact same evidence she shares with the participant), she concludes that the coin landed tails with a 2/3 probability, despite the coin having a 1/2 propensity to land tails. Sue's credence that the coin landed tails is a consequence of both the initial propensity of the coin to land tails and the propensities of the experimental setup to place her in a room that corresponds to a tails outcome.

    (2) How should we apply induction to "invert" the causal direction from "Today" to the state of the coin, assuming we have a satisfactory answer to (1) ?

    The causal flow goes from 'Coin toss results' -> 'Distribution of tails and heads assigned rooms' -> 'Evidence of being in a random room' -> 'Sue calculating her credence'. Therefore, there is no causal paradox.
  • Sleeping Beauty Problem
    I think a distinction needs to be made between the probability of making a correct prediction and the frequency of making a correct prediction. That a correct prediction of tails is twice as frequent isn't that a correct prediction of tails is twice as probable – at least according to Bayesian probability.

    Maybe thirders who use betting examples are simply frequentists?

    Perhaps there also needs to be a distinction made between the probability of making a correct prediction and the probability of the coin having landed tails. It could be that the answers are different. This might be especially true for frequentists, as the frequency of correct predictions is not the same as the frequency of coins landing tails (there can be two correct predictions for every one coin that lands tails).
    Michael

    While Bayesians and frequentists might disagree about the meaning of probability assignments to potential outcomes of experiments, with the former asserting that they depend on degrees of belief, and the latter contending that they're based on expected long-run frequencies of such outcomes, the dispute between thirders and halfers seems more closely related to the class of events being predicted. The Bayesian framework is equally well-suited for deriving the thirder result, as long as the probability in question is the chance of finding oneself within a 'tails' experimental run, given the evidence that one is currently awake.
  • Sleeping Beauty Problem
    They do indeed conflate them. Why would SB assign a credence of 1/3 to heads, if she knows

    1) That the physical probability of winning the lottery is 1/2

    2) That amnesia doesn't influence lottery outcomes.

    3) That had she not had amnesia, then she would either know the result of the lottery on waking up, or else refer to it's probabilities as being 1/2?
    sime

    Because her credence wasn't meant to reflect the proportion of winning bets but rather the proportion of awakening episodes that occur during an experimental run that results in the (initial) bet being won. Those two aren't the same. If Sleeping Beauty had instead been afforded the opportunity to bet on each awakening episode, then her winning frequencies would exactly match her credence that she is winning that bet.

    Since thirders make no principled distinction between causes and effects, they do indeed encourage avoidance-coping. For by their logic, one should raise one's credences for one's preferred outcomes by avoiding information. — sime

    Neither halfers nor thirders typically make any reference to the potential causal impacts (e.g. winning bets) of them expressing meaningful credences. If they would be more careful about that, they would stipulate what population samples they are talking about when they say "I have credence P that O is true when I am in situation S." In that case, the way in which they choose how to define what constitutes S and O would reflect their interests in predicting the frequency of such Os (e.g. Heads) in such Ss (e.g. awakenings or experimental runs). And then, thirders and halfers would simply stop talking past each other.
  • Sleeping Beauty Problem
    It boils down to the following question. Should she risk a single draw from

    P(Flip = Heads | Today = Monday) = 1/2 (no amnesia)

    Or should she risk a single draw from

    P(Flip = Heads | Today = Monday Or Tuesday) = x (amnesia)

    According to halfers, x = 1/2, implying that she gains nothing by choosing amnesia.

    According to thirders, x = 1/3, indicating that she gains from choosing amnesia.
    sime

    Your suggestion that a thirder expects to gain from choosing amnesia would depend on her misconstruing the nature of her predictions. A thirder, upon each awakening, assigns a credence of P(H) = 1/3 to the coin having landed heads, predicting that, over many awakenings within various runs of the experiment, the coin will land 'tails' 2/3 of the time. These are predictions about the frequency of outcomes across multiple awakenings, not about the specific outcome of the coin toss in a particular awakening.

    The initial bet, as you stipulated, is made prior to the start of the experiment, not at each waking event. As such, the payout of $1 is tied to the experimental run as a whole, not to each individual awakening. While a thirder may correctly predict the frequency of 'tails' outcomes across awakenings, these predictions do not increase the actual winnings from the initial bet.

    In sum, while the amnesia might increase the frequency of correct predictions, it does not lead to a higher payout. This is because only one payout occurs per experimental run, even when Sleeping Beauty correctly predicts a 'tails' outcome twice within a run. A thirder, fully informed about the conditions of the experiment and the nature of her predictions, would be aware of this.
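    The payout asymmetry described here can be illustrated with a minimal sketch, under the stipulated bet-once-per-run rules (my own illustration, not from the thread):

```python
import random

random.seed(1)

runs = 100_000
correct = payouts = 0
for _ in range(runs):
    tails = random.random() < 0.5
    if tails:
        correct += 2   # with amnesia, the 'tails' call is correct at both awakenings
        payouts += 1   # but the initial bet pays out only once per run

print(correct / runs)   # ≈ 1.0 correct predictions per run on average
print(payouts / runs)   # ≈ 0.5 payouts per run: unchanged by amnesia
```

The frequency of correct predictions doubles on tails runs, yet the payout rate stays at one per winning run, which is the point being made: amnesia raises prediction frequency without raising winnings.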
  • Philosophical game with ChatGPT
    I was hoping for an actual adventure world that I could explore that somehow incorporates philosophical puzzles, I'll try to coax that out next time, though it would be crazy if it could actually pull that off. — hypericin

    Yes, your attempt turned out to be interactive but not very exploratory. I did an experiment with GPT-4 where I achieved the opposite result. I had intended to make it more interactive at a later stage. Some people have had success using GPT-3.5/4 as a Dungeon Master for playing solo D&D games.
  • Sleeping Beauty Problem
    Follow-up to my discussion with GPT-4 about the pragmatist interpretation of the Sleeping Beauty problem:

    PN:

    As I reflected on this thought experiment while taking a shower (always a propitious moment for philosophizing), I noticed a flaw in my initial conception of it. Due to the way Aunt Betsy and Sue's meeting is arranged, the mere occurrence of the meeting allows Sue to update her credence P('six') to match Aunt Betsy's. Can you see why?

    GPT4:

    Yes, I see the point you're making. The scheduling of the meeting between Aunt Betsy and Sue is tied to a participant's awakening, which in turn is tied to the outcome of the die roll. This means that the occurrence of the meeting itself does provide Sue with information relevant to her credence P('six').

    Specifically, since the meeting occurs on a randomly chosen day when the participant is scheduled to awaken, the very fact that the meeting is taking place tells Sue that the participant has indeed awakened at least once. Given that the chance of the participant awakening at all is higher if the die landed on 'six', this information should increase Sue's credence that the die landed on 'six'. However, because the die is rolled before the participant's sleep schedule is set, and the meeting is scheduled after this, the information doesn't completely determine the outcome of the die roll. Sue's credence should therefore align more closely with Aunt Betsy's, but it won't necessarily match it exactly.

    PN:

    I think you are now (uncharacteristically) not entirely on track. The reason is that when Aunt Betsy requests a meeting with (one of) Jane's assigned sitter(s), if the die didn't land on 'six', the unique sitter assigned to Jane will be booked for the meeting. Else, one of the six sitters assigned to Jane will be booked at random. Upon learning that she was booked, Sue's credence that she was the only sitter assigned to Jane (rather than one among six) increases.

    GPT4:

    You're right, I misunderstood the key aspect of your thought experiment. When Aunt Betsy schedules a meeting, one of two scenarios will happen:

    If the die did not land on 'six,' one sitter was assigned to Jane, and that sitter will be booked for the meeting.

    If the die landed on 'six,' six sitters were assigned to Jane, and one of them will be randomly chosen for the meeting.

    Given this setup, when Sue is selected for the meeting, she gains new information: she now knows that she is either the sole sitter assigned to Jane (if the die did not land on 'six') or one among six (if the die landed on 'six').

    The fact that Sue was selected for the meeting therefore updates her credence in the proposition that she is the only sitter assigned to Jane. This is because being chosen for the meeting is more likely in the scenario where there is only one sitter (where the die did not land on 'six') compared to the scenario where there are six sitters (where the die landed on 'six').

    I apologize for the confusion, and I appreciate your patience in clarifying your thought experiment.

    PN:

    No trouble! I think we can easily use Bayesian updating to show that Sue's credence that the die did not land on 'six' thereby increases from 5/11 to 5/6.

    GPT4:

    Yes, you're correct. Sue can use Bayesian updating to adjust her beliefs based on the new information she has received.

    The Bayes' theorem formula is as follows:

    P(A|B) = [P(B|A) * P(A)] / P(B)

    In this case, we can interpret:

    A as the event "the die did not land on 'six'"
    B as the event "Sue is selected for the meeting"
    Before the meeting, Sue's prior belief, P(A), is 5/11, reflecting the proportion of sitters assigned to participants who wake up only once.

    P(B|A) is 1, because if the die did not land on 'six', Sue is the only sitter assigned to Jane and will definitely be selected for the meeting.

    P(B) is the total probability that Sue is selected for the meeting, which is the sum of the probabilities of her being selected under both possible outcomes of the die roll. This is (5/11 * 1) + (6/11 * 1/6) = 5/11 + 1/11 = 6/11.

    Plugging these values into the Bayes' theorem formula gives us:

    P(A|B) = [1 * (5/11)] / (6/11) = 5/6

    So, after learning that she has been selected for the meeting, Sue's updated belief in the proposition that the die did not land on 'six' increases to 5/6.
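
    This computation can be verified with exact rational arithmetic; a minimal Python sketch (the variable names are my own, not part of the problem statement):

```python
from fractions import Fraction

# Sue's prior that the die did not land 'six': the 5/11 proportion of
# single-awakening participants among the facility's occupied rooms.
p_not_six = Fraction(5, 11)
p_six = Fraction(6, 11)

# Likelihood of Sue being selected for the meeting under each hypothesis:
# she is the sole sitter if not-six, one of six sitters otherwise.
p_meet_given_not_six = Fraction(1)
p_meet_given_six = Fraction(1, 6)

# Total probability of selection, then Bayes' theorem.
p_meet = p_not_six * p_meet_given_not_six + p_six * p_meet_given_six
posterior = (p_meet_given_not_six * p_not_six) / p_meet

print(posterior)  # 5/6
```

    Using Fraction rather than floats keeps the 5/11 and 6/11 priors exact, so the posterior comes out as exactly 5/6.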

    PN:

    You got it right on the first try, congrats!

    GPT-4:

    Thank you! I'm glad the explanation met your expectations. Bayesian reasoning can be quite intricate but it's a powerful tool for understanding how to update our beliefs in light of new information. If you have any more questions or thought experiments, feel free to share!

    PN:

    I just want to note that this result does not invalidate the 'pragmatist' lesson from the thought experiment. It's still the case that if they were to bet on their credences prior to meeting, both Aunt Betsy and Sue would make money in the long run. Furthermore, after the meeting, Sue would alter her betting behavior and make even more money (winning five times out of six by betting on the opposite outcome). But in that case, she would only have the opportunity to make this more profitable bet when she is booked for a meeting, so her proportion of winnings (and hence her credence) would reflect a differently sampled population.
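
    This betting pattern can be checked by simulation; a rough sketch, under the assumption that Sue bets 'not six' only at meetings she is booked for (the setup parameters mirror the problem, the code is my own):

```python
import random

random.seed(0)
meetings = wins = 0
for _ in range(100_000):
    # Sue's prior on her randomly assigned room: 6/11 of occupied rooms
    # hold six-awakening participants, 5/11 hold single-awakening ones.
    six = random.random() < 6 / 11
    # She is booked for the visitor meeting with certainty if she is the
    # sole sitter, and with probability 1/6 if she is one of six.
    booked = random.random() < (1 / 6 if six else 1.0)
    if booked:
        meetings += 1
        if not six:  # at the meeting she bets on 'not six'
            wins += 1

print(round(wins / meetings, 2))  # close to 5/6
```

    Among the meetings she attends, her 'not six' bets win close to 5/6 of the time, matching the Bayesian posterior rather than her 6/11 room-assignment prior.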

    GPT4:

    Absolutely, you've hit the nail on the head. While Sue updates her credence after the meeting, this doesn't undermine the pragmatist interpretation that emerged earlier in the thought experiment. The betting behavior of both Aunt Betsy and Sue is still rational given their respective information and they would both make money in the long run if they were to bet on their initial credences.

    Furthermore, you're correct in noting that Sue's updated credence and altered betting behavior would indeed lead to more profitable outcomes for her in the instances where she is booked for a meeting. However, these instances represent a different sampling of the population (only when she is booked for a meeting), reinforcing the point that different agents with different information and sampling conditions can rationally hold different credences about the same event.

    This is a very insightful point and underscores the richness and complexity of the pragmatist interpretation of the Sleeping Beauty problem and its variations. It's a good reminder that our beliefs and predictions about the world should take into account not only the available information but also the ways in which we are sampling from the world.
  • Sleeping Beauty Problem
    I've refined further @Michael's 'less-extreme' variation of the Sleeping Beauty problem in a way that highlights a pragmatist interpretation of the halfer/thirder dispute. Here it is with some comments by GPT-4:

    Hi GPT-4,

    Here is a new variation on the Sleeping Beauty problem that I imagined with the goal of highlighting a pragmatist interpretation of it.

    Sue Sitter has recently joined the team of sitters at the Sleeping Beauty Experimental Facility. In her role, like other sitters, she's responsible for attending to a participant on one single day during one of their experimental sessions. The nature of these sessions depends on a roll of a die: if the die lands on 'six,' the participant will be woken and interviewed six times over six days; otherwise, they will only wake once for an interview on the first day. As a sitter, Sue is assigned to a room randomly, without knowing whether the participant inside is scheduled for a single awakening or one of six.

    In the experiment facility, on average, 6/11 of the occupied rooms hold participants waking six times (due to a die having landed 'six') and 5/11 hold participants waking only once (due to a die not having landed 'six'). Now suppose Sue had access to the participant registry where all active participants are listed. She spotted the name 'Jane Doe.' What are the chances that Jane will be woken up six times? Sue's credence is 1/6, and this would also be Jane Doe's credence before she undergoes her first sleep session.
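
    The 6/11 and 5/11 proportions can be derived from the expected room-day occupancy per participant; a minimal sketch in exact fractions (names are my own):

```python
from fractions import Fraction

p_six = Fraction(1, 6)           # prior chance the die lands 'six'
days_if_six, days_if_not = 6, 1  # occupied room-days per participant

# Expected room-days contributed by each kind of participant.
six_days = p_six * days_if_six            # 6/6 = 1
not_six_days = (1 - p_six) * days_if_not  # 5/6

# Steady-state share of occupied rooms of each kind on a given day.
share_six = six_days / (six_days + not_six_days)
share_not_six = not_six_days / (six_days + not_six_days)
print(share_six, share_not_six)  # 6/11 5/11
```
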

    Now, suppose that by a stroke of luck, Sue is assigned to Jane Doe on that particular day. Her job is to measure her vitals as she awakens and get her breakfast as she waits for the interview. Sue arrives in Jane's room and waits for her to wake up. What are the chances that the die landed on 'six', indicating that this would be one of six awakenings rather than the only one?

    Once Sue has been assigned to Jane Doe, her credence P('six') in this proposition should be updated from 1/6 to 6/11. This is because Sue was randomly assigned a room and her credence must now reflect the proportion of participants who wake up six times to the total number of participants who occupy the facility on that day. It can be argued that when Jane Doe awakens, her credence P('six') then aligns with Sue's because both their credences are targeting the exact same proposition, and both of them have the same epistemic access to it. But I'm not going to argue for this just now.

    From the point of view of Aunt Betsy (Jane Doe's aunt) who visits the facility and wants to see Jane, there is a 5/6 chance Jane's die didn't land on 'six' and she is being woken up only once whereas from Sue's point of view there is a 6/11 chance that the die landed on six and that Jane today is experiencing one in a sequence of six awakenings.

    Suppose that when a visitor like Aunt Betsy is allowed to come to the facility to inquire about a participant, they get to meet at the cafeteria with the participant's assigned sitter, who informs them of the participant's wellbeing. Aunt Betsy and Sue Sitter then get to discuss their respective (and inconsistent) credences regarding the die throw that determined Jane's number of awakenings. Will either Aunt Betsy or Sue find some ground to update their credences?

    (Note that the circumstances of Aunt Betsy's meeting with Sue Sitter are somewhat underspecified.)

    GPT-4:

    This is a fascinating variation on the Sleeping Beauty problem! It certainly introduces an additional layer of complexity with the inclusion of Sue Sitter and Aunt Betsy, each with their own perspectives and credences.

    The original Sleeping Beauty problem is interesting because it's a question about how to update beliefs when you learn no new ordinary information, but only learn that it is now. This is often called "self-locating belief." Here, you have a situation where both Sue Sitter and Aunt Betsy have different self-locating beliefs about the situation, but also have different access to information about the experiment.

    Let's first clarify their perspectives:

    Sue Sitter: She knows the overall proportions of the experiment, so her credence that Jane Doe will wake six times is 6/11, given that she was randomly assigned a room.

    Aunt Betsy: She doesn't know the overall proportions of the experiment. She just knows that Jane Doe has been put into the experiment and that a die roll of six means Jane will wake up six times. So, her credence that Jane Doe will wake six times is 1/6.

    When Sue Sitter and Aunt Betsy meet, they have an opportunity to share their perspectives and possibly update their credences.

    The key question here is: do they have a reason to trust each other's perspective? Is there a reason why Sue should trust Aunt Betsy's credence of 1/6, or why Aunt Betsy should trust Sue's credence of 6/11?

    In a sense, they're operating under different contexts and different sets of information. Sue's context is the whole experimental facility where she knows the average proportions, while Aunt Betsy's context is just about Jane Doe and the die roll that determines her awakenings.

    So, when they meet and share their perspectives, they might be able to adjust their credences. But how they do so depends on what they learn from each other, their trust in each other's knowledge, and how they interpret the information. This is an open question in the philosophy of probability and epistemology, and different theories might suggest different ways of updating their beliefs.

    The pragmatist interpretation you've proposed seems to suggest that Sue Sitter and Aunt Betsy should stick to their own credences, because those credences are based on their own perspectives and contexts. But they could also potentially learn from each other and update their credences in light of new information.

    The exact credence updates would depend on various factors, including their prior beliefs about the reliability of their own and each other's information sources, the clarity with which they understand each other's perspectives, and their understanding of the underlying mechanics of the experiment. The problem does not provide a clear-cut, definitive answer to this question, illustrating the complexity and ambiguity inherent in the philosophy of probability and belief updating.

    PN:

    Let us start with the assumption that both Sue and Aunt Betsy are fully informed about the experimental setup and have complete trust in each other, although they may interpret their respective credences differently. Visitors like Aunt Betsy can arrange a visit if they notify the facility before the participant begins the experiment and starts their sleep schedule. Once the participant's sleep schedule is set, the facility manager organizes a meeting between the visitor and an assigned sitter. This meeting is scheduled for a randomly chosen day when the participant is due to awaken under the watch of this sitter. Importantly, the circumstances surrounding this meeting are designed in such a way that neither Sue nor Aunt Betsy receives any new information that might lead them to adjust their beliefs about the outcome of the die roll.

    GPT-4:

    Under these additional clarifications, it seems that both Sue Sitter and Aunt Betsy have complete knowledge of the experimental setup and trust each other's report. They also have no reason to believe that their meeting provides additional information about Jane Doe's situation. This makes the question of whether they should update their credences even more interesting.

    In this case, they might consider the following:

    Aunt Betsy could reason that Sue Sitter, being an employee of the facility, has a more informed perspective on the overall proportions of the experiment. Therefore, she might consider updating her credence to align with Sue's, which is 6/11.

    Sue Sitter could consider Aunt Betsy's credence, but since Sue's perspective is based on the overall proportions of the experiment, she might not find a compelling reason to change her credence based on Aunt Betsy's belief.

    What's interesting about this situation is that even though they both have complete knowledge of the experimental setup, their credences are based on their specific roles and perspectives within the experiment.

    While it's tempting to think that they should align their credences, it's not necessarily clear that they should. This highlights how the concept of "credence" in probability theory can be subjective and context-dependent, even when all parties have full knowledge of the underlying facts.

    Sue's credence of 6/11 is based on her perspective as a sitter, who knows the overall proportions of the experiment. Aunt Betsy's credence of 1/6 is based on her perspective as an aunt, who knows the die roll determines Jane Doe's awakenings.

    The pragmatist interpretation might suggest that each should stick to their own credence, as it is the most rational belief given their specific perspective and role. But other interpretations might suggest that they should try to align their beliefs in light of their shared knowledge and trust in each other.

    This scenario serves as an interesting case study in the philosophy of probability, illustrating how different perspectives can lead to different credences even with full knowledge of the facts.

    PN:

    Suppose that whenever a sitter and a visitor meet under such circumstances, they are allowed to put their money where their mouth is and make an even money bet on the outcome (i.e. on 'six' or 'not six'; or, equivalently, on six awakenings or one single awakening). Aunt Betsy would bet on 'not six' and expect to win five out of six times. Sue Sitter would bet on 'six' and expect to win six out of eleven times. And so would they both! How is that possible? (I already know the answer, but that's a challenge for you).

    GPT-4:

    The situation you've described is certainly intriguing. It seems paradoxical because Sue and Aunt Betsy have different credences about the same event and yet, given their respective perspectives, they both expect to win more often than not. This paradox arises from the fact that the event being wagered on ("six" or "not six") occurs within different sampling populations for Sue and Aunt Betsy.

    Aunt Betsy is betting on the outcome of a single, specific experiment involving Jane Doe, where the die is rolled once. She knows that in this single experiment, the chance of getting a six is 1/6, and the chance of not getting a six is 5/6. So, she expects to win her bet 5 out of 6 times.

    On the other hand, Sue Sitter is not betting on a single, specific experiment. Instead, she's betting on the outcome of a random sampling from all the experimental sessions taking place in the facility. In this larger population of experimental sessions, 6/11 of the sessions will involve six awakenings (a "six" outcome), and 5/11 will involve one awakening (a "not six" outcome). So, she expects to win her bet 6 out of 11 times.

    This apparent paradox is resolved when we realize that Sue and Aunt Betsy are not actually betting on the same event, despite appearances. Aunt Betsy is betting on the outcome of a specific experiment (Jane Doe's), while Sue is betting on the outcome of a random sampling from all experiments. The event they're betting on has a different probability in these two different contexts.

    This example demonstrates an important principle in probability theory: the probability of an event can depend on the context or sampling population in which it's considered. It also illustrates how different perspectives can lead to different assessments of probability, even when all parties have full knowledge of the underlying facts.

    PN:

    Your explanation is generally accurate, but it might be helpful to clarify that Aunt Betsy's and Sue's chances of 'winning' more often, despite betting on complementary outcomes, are based on their own individual long-term frequencies of winning. If they repeat their respective 'experiments' many times, Sue will interact with different participants' acquaintances, and Aunt Betsy will meet different sitters. Consequently, they won't both win more often against each other – a scenario that would indeed constitute a logical impossibility.
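
    The two long-run frequencies can be simulated side by side; a rough sketch under the stated assumptions (each party bets in her own series of meetings, never repeatedly against the same opponent):

```python
import random

random.seed(1)
N = 100_000

# Aunt Betsy's long run: one die roll per visited participant;
# she bets 'not six' at each meeting.
betsy_wins = sum(random.randint(1, 6) != 6 for _ in range(N)) / N

# Sue's long run: each time she is assigned a random occupied room
# (6/11 of which hold six-awakening participants) and bets 'six'.
sue_wins = sum(random.random() < 6 / 11 for _ in range(N)) / N

print(round(betsy_wins, 2), round(sue_wins, 2))  # ≈ 5/6 and ≈ 6/11
```

    Each of them wins a majority of her own bets because the two winning rates are computed over different populations of meetings: one meeting per participant for Betsy, one per room assignment for Sue.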

    GPT-4:

    Absolutely, your clarification is spot on. The betting scenario, as you've described it, involves many repetitions of the experiment. In the long run, both Sue Sitter and Aunt Betsy are betting in multiple rounds, with different people, based on their respective beliefs about the outcome of the experiment.

    Aunt Betsy, when betting with different sitters across different experiments, will expect to win 5 out of 6 times because she is considering each experiment as a single event where a die is rolled once.

    On the other hand, Sue Sitter, when betting with different visitors across different experiments, will expect to win 6 out of 11 times because she is considering the overall proportions of the experiment and the distribution of awakenings in the facility.

    They are not directly betting against each other over and over again. If they were, then it would indeed be a logical impossibility for both to win more often than they lose, since they're betting on complementary outcomes.

    This scenario is an interesting demonstration of how different perspectives can lead to different probabilities, and how those probabilities are borne out in the long term through repeated trials. It also shows how the same event can be perceived differently depending on the context and the observer's perspective.
  • Sleeping Beauty Problem
    According to thirders, if she has amnesia then

    P(Tails and Tuesday) = P(Tails and Monday) = P(Heads and Monday) = 1/3
    sime

    More precisely, the thirder view could be expressed as the claim that whenever Sleeping Beauty awakens, from her epistemic perspective, P(today is Tuesday and the coin landed tails) = P(today is Monday and the coin landed tails) = P(today is Monday and the coin landed heads) = 1/3

    So, according to thirders her probability of losing $1 when awakened on monday in a state of amnesia is 1/3.

    1/3 would be her prior upon awakening and before being informed that the day is Monday. Upon being informed of this, Sleeping Beauty would update her credence to 1/2.
  • Sleeping Beauty Problem
    Even the word "state" feels too coarse for Sleeping Beauty, since it could denote the situation a robust well-defined subject finds themselves in, or it could denote the very identity of that subject. --- At least, that's how the two main camps look to me. One wonders, where am I? how did I get here? One wonders, what am I? what has made me into this?

    As you say, it's all about individuation. Lacking a fixed point of individuation, you can push the lever however you like but you won't actually move anything.
    Srap Tasmaner

    I agree. I view the states at issue to be irreducibly epistemic states of agents. They are, in other words, relational states defined pragmatically. So, when expressing their credences, thirders and halfers had better properly specify the intended relata of this relation. If credences are explicitly stated as ratios, then one must likewise attend to the way in which the two things being divided are individuated. I'm working on a variation of the Sleeping Beauty problem in which all the options for individuating the relevant relata are laid out so that it becomes clear when halfers and thirders are talking past each other, and when it is that they are led astray through being inconsistent with their individuating assumptions.
  • Sleeping Beauty Problem
    So thanks for the enlightening discussion.Michael

    Thanks to you! I'll likely revisit some of your most recent objections just for the sake of clarifying my position (and refining it if need be). Of course, I'd be delighted if you'd chime in again whenever you feel like it.
  • Sleeping Beauty Problem
    Let P(Flip | Today ) represent the conditional probability of the state of coin, relative to the value of SB's indexical "Today". Then

    2/3 = P(Flip = Tails | Today = Mon Or Tue) = P(Flip = Tails) = 1/2

    What is wrong with this conclusion of inconsistency?
    sime

    From Sleeping Beauty's perspective, the conditional probability of the coin flip being tails upon her awakening isn't conditioned on 'today' being either Monday or Tuesday, but rather on her actual state of being awakened 'today'. In other words, she conditions this probability on her being in a narrow (day long rather than experimental-run long) centered possible world in which she is awakened. It's this conditioning that yields her updating her prior 1/2 to the posterior 2/3.
  • Sleeping Beauty Problem
    The issue with the SB problem, is whether to consider the two potential wakings as the same experiment, or different ones. This version resolves that. SB knows that when the researchers looked at the coins, there are four possible arrangements with probability 1/4 each: {HH, HT, TH, TT}. She also knows that, since she is awake, HH is eliminated. She can update her beliefs in the other three to 1/3 each.JeffJo

    I love your variation! However, your conclusion may be a bit premature as halfers can counter it using the same argument with which they counter the thirder position in Elga's setup. Allow me to break down the possible scenarios considering the initial flip results, the rearrangement of the second coin, and the resultant awakening sequence:

    TT -> TH -> two awakenings (ww)
    TH -> TT -> two awakenings (ww)
    HT -> HH -> waking then sleeping (ws)
    HH -> HT -> sleeping then waking (sw)

    Given these four possible experimental runs following the four possible initial coin flip results, we find that when Sleeping Beauty awakens, she can certainly rule out HH as the current state of the two coins during that specific awakening episode. However, this does not eliminate the possibility of being in either of the last two experimental runs (in addition to, of course, either of the first two).

    She could be waking up due to the initial coin flip (which is consistent with run-1, run-2, or run-3), or she could be waking up as a result of the rearrangement of the second coin (run-1, run-2, or run-4). As all these runs are still consistent with her experience, halfers might argue that Sleeping Beauty has gained no new information upon awakening and that these four possible runs remain equiprobable.
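
    The four runs listed above can be tabulated mechanically; a small sketch, assuming the second coin is simply turned over between the two waking opportunities:

```python
from itertools import product

flip = {"H": "T", "T": "H"}
runs = []
# The four equiprobable initial arrangements of the two coins.
for c1, c2 in product("HT", repeat=2):
    first = (c1, c2)
    second = (c1, flip[c2])  # second coin turned over before phase two
    # SB is awake at a phase whenever the current arrangement is not HH.
    awakenings = [s for s in (first, second) if s != ("H", "H")]
    runs.append(awakenings)

# Per-run view (halfer-friendly): every run has at least one awakening.
print([len(a) for a in runs])  # [1, 1, 2, 2]
# Per-awakening view (thirder-friendly): six awakenings across four runs.
all_awakenings = [s for a in runs for s in a]
print(len(all_awakenings))  # 6
```

    The tabulation confirms the ws/sw/ww/ww pattern: all four runs remain live when SB is awake, even though HH is ruled out as the current arrangement.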
  • Sleeping Beauty Problem
    What is wrong with this conclusion of inconsistency?sime

    I'm going to respond tomorrow since I'm now nearing the end of my current period of awakening. Meanwhile, I want to post this answer GPT-4 gave—on behalf of the halfer this time—to a challenge I had imagined. GPT-4 still has the ability to knock my socks off!

    PN:

    Yesterday I was thinking about the halfer's stance that, upon awakening, Sleeping Beauty's credence in the coin having landed heads should be 1/2, even while acknowledging that when the coin lands tails she has two opportunities to guess "tails" (so that a thirder SB would guess right two thirds of the time when betting on tails, and would thereby be more willing to bet on tails in order to win more bets). This led me to the following challenge. Suppose that on each awakening a second fair coin is also tossed. The second coin toss is independent and, unlike the first, doesn't impact how the experiment is run. If Sleeping Beauty is more willing to bet on the first coin having landed tails than on the second, does that not imply that the second fair coin now has a less than 1/2 chance of having landed tails?

    GPT-4:

    Your challenge indeed illustrates a fascinating aspect of this problem. Here is a possible way to respond:

    As a halfer, Sleeping Beauty would be more willing to bet on the first coin landing tails, not because she believes it's more likely to land tails, but because there are two opportunities to be correct when betting on tails as opposed to one opportunity when betting on heads. However, this should not influence her betting behavior regarding the second coin, which is independent of the first and does not affect the experiment's procedure.

    The crucial point here is that the additional awakenings under the tails outcome for the first coin change the betting odds in favor of tails, but they do not change the underlying physical probabilities of the coin flip, which remain at 1/2 for both heads and tails.

    For the second coin, which is not linked to the experiment's procedure and leads to only one awakening regardless of its outcome, there are no such changes in the betting odds, and the physical probabilities remain the only relevant factor for Sleeping Beauty's bets. Therefore, Sleeping Beauty would remain indifferent between betting on heads or tails for the second coin, considering both outcomes equally likely.

    In this scenario, the credence Sleeping Beauty assigns to the first coin's outcome upon awakening (1/2 for heads and tails according to the halfer perspective) reflects the physical probabilities of the coin flip, while her willingness to bet on tails rather than heads reflects the altered betting odds due to the additional awakenings under the tails outcome. Conversely, both her credence and her willingness to bet for the second coin directly reflect the physical probabilities, as there are no additional factors affecting the betting odds.
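
    The frequencies underlying this answer can be checked by simulation; a rough sketch (counted per awakening, the experiment-linked coin shows tails about 2/3 of the time, while the independent per-awakening coin stays at 1/2):

```python
import random

random.seed(2)
first_tails = second_tails = awakenings = 0
for _ in range(100_000):
    coin1 = random.choice("HT")  # experiment coin: tails -> two awakenings
    for _day in range(2 if coin1 == "T" else 1):
        awakenings += 1
        coin2 = random.choice("HT")  # fresh independent toss each awakening
        first_tails += coin1 == "T"
        second_tails += coin2 == "T"

print(round(first_tails / awakenings, 2),
      round(second_tails / awakenings, 2))  # ≈ 2/3 and ≈ 1/2
```

    So the asymmetry in SB's betting dispositions tracks the per-awakening sampling of the first coin, not any change in either coin's physical chances.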
  • Sleeping Beauty Problem
    If a PoI advocate assigns the credence

    P(Day = mon, Flip = tails ) = P(Day = mon , Flip = heads) = P(Day = tue, Flip = tails) = 1/3

    Then his unconditioned credence that it is Monday is

    P(Day = mon) = 2/3

    which is clearly not the expression of indifference.
    sime

    This assignment is an expression of pairwise indifference between the three possible awakening circumstances. But rather than relying on the Principle of Indifference, I proposed to assign credences on the basis of the odds Sleeping Beauty should rationally be willing to accept when betting on those outcomes. The problem's original specification is already precise enough to ensure that if Sleeping Beauty accepts 2 to 1 odds on bets on any of these three outcomes upon awakening, she will break even in the long run.

    Then for sake of consistency he must ignore the actual information he is given about the coin, by using

    P(Flip = tails) = 2/3

    There's no inconsistency here. It's precisely because the premise holds that the coin is equally likely to land heads or tails when tossed, and because tails results in twice as many awakenings, that Sleeping Beauty can deduce that she could accept 2 to 1 odds on the outcome 'Flip = heads' at each awakening and break even over the long run.
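
    The break-even claim can be checked per experimental run with exact fractions; a minimal sketch (stake and payout figures are illustrative):

```python
from fractions import Fraction

# Per run: heads gives one awakening, tails gives two. At each
# awakening SB stakes $1 on heads at 2-to-1 odds: she wins $2 at a
# heads awakening and loses her $1 stake at a tails awakening.
p_heads = Fraction(1, 2)
win_per_heads_run = 1 * 2    # one awakening, +$2
loss_per_tails_run = 2 * -1  # two awakenings, -$1 each

expected = p_heads * win_per_heads_run + (1 - p_heads) * loss_per_tails_run
print(expected)  # 0: she breaks even over many runs
```
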
  • Sleeping Beauty Problem
    The halver's position is based on the common-sense postulate that the physical properties and propensities of objects remain unchanged when their states are conditioned upon irrelevant information. Given acceptance of that postulate plus the premise of a fair coin, why shouldn't the halver insist that they are talking about reality, as opposed to their mere mental state?

    The thirder's position is indeed a ratio of possible words, but there is scant evidence to support the idea that credences are accurately represented by taking ratios over possible worlds.
    sime

    Taking ratios over possible worlds is a vivid way to illustrate the issue at hand when it comes to Bayesian updating of credences in light of new information. Regarding credences, in well-defined problems like Sleeping Beauty, as Pradeep Mutalik suggested in this column (in the comments section), "if you have 'a degree of certainty' of 1/n then you should be willing to accept a bet that offers you n or more dollars for every dollar you bet."

    I would challenge the idea that Sleeping Beauty's judgement about the coin flip is merely a judgement about the propensities of that coin. Rather, it's a judgement about her own epistemic relation to the result of the flip. Or you could argue it's a judgement about the propensity of the whole experimental setup to put her in relationship with specific coin flip results. This echoes Werner Heisenberg's comment: "What we observe is not nature in itself but nature exposed to our method of questioning." I believe this to be true generally, not just in the context of quantum theory.

    Upon awakening, Sleeping Beauty does gain information about the coin flip result. She knows that (from her own perspective) "if today is Monday, then heads and tails are equally likely." She also learns that "if today is Tuesday, then the coin landed tails." Since she can't rule out "today" being Tuesday, her credence (and willingness to bet) on a tails outcome should be higher than 1/2. "Today" being an indexical, the knowledge expressed pertains to her own situated epistemic perspective (as well as her 'pragmatic' perspective, or affordances to make bets or predictions). But this perspective only is "subjective" in the sense that it is suitable for capturing probabilities, or frequencies, relevant to her personal involvement in the experiment.
  • Sleeping Beauty Problem
    Two Envelopes seems to encourage abuse of the principle of indifference in exactly this way. Maybe it's just something like this: rationality requires treating "It's one or the other but I've literally no idea which" as an uninformative 50:50 prior only when there's the real possibility of acquiring new information upon which to update that prior. I'd rather just say, no, don't do that, "I don't know" doesn't mean "It's 50:50", [...]Srap Tasmaner

    I think there are good grounds for Sleeping Beauty, upon awakening, to ascribe strictly equal probabilities P = 1/3 to the three cases: (1) Today is Monday and the coin landed tails, (2) Today is Monday and the coin landed heads, and (3) Today is Tuesday and the coin landed tails, rather than simply claiming ignorance. This doesn't involve any abuse of the principle of indifference so far as I can see. Rather, Sleeping Beauty can deduce that those three outcomes have equal probabilities from the statement of the problem, or experimental setup. Suppose that upon awakening she would always bet on the same one of the three outcomes, chosen in advance. In that case, were the whole experimental run repeated many times, she'd be right roughly 1/3 of the times she awakens and makes this bet. This would be the case regardless of which one of the three outcomes she chooses to always bet on. Halfers have an obvious rejoinder to this line of thinking, but I just wanted to make this point for now.
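
    This long-run claim is easy to simulate; a rough sketch in which SB commits in advance to one of the three outcomes (the encoding of days and coin faces is my own):

```python
import random

random.seed(3)
results = {}
for guess in [("Mon", "T"), ("Mon", "H"), ("Tue", "T")]:
    hits = awakenings = 0
    for _ in range(100_000):
        coin = random.choice("HT")
        # Heads: one Monday awakening; tails: Monday and Tuesday.
        days = ["Mon"] if coin == "H" else ["Mon", "Tue"]
        for day in days:
            awakenings += 1
            hits += (day, coin) == guess
    results[guess] = hits / awakenings

print({g: round(f, 2) for g, f in results.items()})  # each ≈ 1/3
```

    Whichever outcome she fixes on in advance, her guess is correct on about a third of her awakenings.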
  • Sleeping Beauty Problem
    Rather than getting back into the nitty-gritty, I'm thinking about the stuff I posted a while back, the possible self slices and all that.Srap Tasmaner

    I think this observation is very astute. As I am developing a pragmatist conception of credence, I am trying to move away from the idea that one's credence in the state H is entirely determined by the specification of the ways in which one can come to be in that state. The Sleeping Beauty problem completely specifies how (and in which circumstances) Sleeping Beauty can come to be in state H, however one wants to define what counts as "being in state H". But it fails to specify how one might want to individuate such states (or 'self slices') so as to determine the relevant frequency or probability ratios that we ordinarily express as credences.

    Whenever Sleeping Beauty is awakening during an H experimental run, she is awakening during an H day, and vice versa. But there isn't a one-to-one relationship between the two sets of 'events' (i.e. between the two sets of centered possible worlds) once they have been individuated.

    The relationship is one to one between types but not between tokens, we might say.

    In the Sleeping Beauty problem the equivalence relation is:

    H-run <--> H&Monday (H-day)
    T-run <--> T&Monday or T&Tuesday (T-day)

    The ratio considered by thirders is the ratio of H-days to day-long centered possible worlds (or individual awakenings). It is indeed 1/3. The ratio considered by halfers is the ratio of H-runs to run-long centered possible worlds. It is indeed 1/2. The credences expressed by thirders and by halfers therefore have different topics.
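
    Both ratios can be computed from one and the same description of the two equiprobable runs; a minimal sketch (the run/day encoding is my own):

```python
from fractions import Fraction

# Two equiprobable runs: H yields one day (Mon), T yields two (Mon, Tue).
runs = {"H": ["Mon"], "T": ["Mon", "Tue"]}

# Halfer ratio: H-runs per run-long centered world.
h_runs = Fraction(sum(c == "H" for c in runs), len(runs))

# Thirder ratio: H-days per day-long centered world (awakening);
# with equiprobable runs each day-token counts once.
days = [(coin, day) for coin, ds in runs.items() for day in ds]
h_days = Fraction(sum(c == "H" for c, _ in days), len(days))

print(h_runs, h_days)  # 1/2 1/3
```

    Same setup, two well-defined ratios: the halfer/thirder disagreement is over which denominator to individuate, not over the arithmetic.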
  • Sleeping Beauty Problem
    So then there are two different ways to reason with nothing to prove that one or the other is the "right" way?Michael

    That's correct, since events that happen in the world don't come flagged with signposts that say: "the current event begins here" and "the current event terminates here." How credences in the probabilities of events are assessed depends on the way those events are individuated, and this individuation can be dictated by pragmatic considerations.
  • Sleeping Beauty Problem
    And that first question is the premise of the problem. Sleeping Beauty is asked her credence that the coin landed heads. That's it. She's not being asked to consider the most profitable betting strategy for multiple games.Michael

    The state "the coin landed heads" is ambiguous with respect to the span of the centered possible world that it might be referring to. I think you would benefit from reading Groisman.
  • Sleeping Beauty Problem
    5 out of every 6 victims escape. I count by participants, not by escape attempts. I think it's more reasonable.Michael

    Since on my approach probabilities track frequencies, even if there is just one kidnapping event, the hostage's chances of survival are 5 in 11 whenever an escape attempt occurs.
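    On my reading of the scenario (a fair die is thrown; on 1-5 the hostage gets a single escape opportunity that succeeds, on 6 she gets six opportunities that all fail), a small simulation shows how the per-victim and per-attempt frequencies come apart. The specific setup below is my own reconstruction, offered only as a sketch:

    ```python
    import random

    trials = 120_000
    successes = 0  # escape attempts that succeed
    attempts = 0   # all escape attempts, successful or not

    for _ in range(trials):
        roll = random.randint(1, 6)  # fair die thrown once per kidnapping
        if roll <= 5:
            attempts += 1    # one escape opportunity, which succeeds
            successes += 1
        else:
            attempts += 6    # six escape opportunities, all of which fail

    print(successes / trials)    # per victim: about 5/6 escape
    print(successes / attempts)  # per attempt: about 5/11 survive
    ```

    Out of every six victims, five generate one successful attempt each and one generates six failed attempts, hence 5 successes among 11 attempts.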
  • Sleeping Beauty Problem
    She can and should use known priors to condition her credence, and one such prior is that she is more likely to win a prize/have the opportunity to escape if tails/a dice roll of 6. As such, if she wins a prize or has the opportunity to escape she should condition on this and her credence should favour tails/a dice roll of 6, otherwise she should condition on not winning a prize or having the opportunity to escape and her credence should favour heads/a dice roll of 1-5.Michael

    She can't condition her credence on H by winning a prize if she only learns about her victory after the experiment ends. The same applies to her opportunity to bet — this fact is probabilistically independent of H and thus doesn't allow for an update in credence. Would you still argue that your belief that the movie you're watching is part of a double feature increases the moment you see Rex Harrison appearing on screen? Or that it increases even further when you notice he isn't wearing a top hat, or when he is?

    And if she's guaranteed the opportunity to escape each day her credence should favour a dice roll of 1-5.

    So, she should be carrying a plank and end up being eaten by lions on 6 out of 11 escape attempts?

    But honestly, all this talk of successes is irrelevant anyway. As I said before, these are two different things:

    1. Sleeping Beauty's credence that the coin tossed for the current, one-off, experiment landed heads
    2. Sleeping Beauty's most profitable strategy for guessing if being asked to guess on heads or tails over multiple games

    It's simply a non sequitur to argue that if "tails" is the answer to the second then "1/3" is the answer to the first.

    The manner in which (1) is stated suggests that Sleeping Beauty is referring to the wide centered possible world spanning the whole experimental run. In that case, her credence in H should be 1/2.

    The second makes it rational for her to rely on her credence regarding narrow centered possible worlds spanning single awakening episodes. There indeed isn't any entailment from the suitability of one framing of the question to the suitability of the other, whether from (1) to (2) or vice versa. The two sentences concern themselves with different questions.

    By the way, I've just finished reading Groisman's The End of Sleeping Beauty’s Nightmare. Although I found it somewhat glib, it's undoubtedly the most readable paper on this topic that I've encountered so far. Groisman makes some very compelling points that are directly relevant to the discussion we are presently having. However, I prefer Laureano Luna's approach, which elaborates further on Groisman's work and brings more nuance.
  • Sleeping Beauty Problem
    Then forget the nature of prize. If I know that I’ve won a prize my credence is that the first coin toss landed tails.Michael

    In your prize scenario, knowing that you've won a prize indeed gives you new information, and this can rightly affect your credences. However, in the scenarios with Sleeping Beauty and the prisoner, merely being presented with an opportunity to bet or escape does not give them any new information about the outcome of the coin toss (or throw of the die). They must decide how to take advantage of this opportunity (by choosing to carry the torch or the plank, or choosing what safehouse address to communicate to the police) before gaining any knowledge about the success of the attempt. The offering of the opportunities carries no information and provides no ground for updating credences.
  • Sleeping Beauty Problem
    That's only because I walk into one film. If I'm given amnesia and walk into the second film (if there is a second film) then it affects my credence.

    It's exactly like my scenario with the coin toss and prizes. If heads then the car is the possible prize, otherwise the motorbike is the possible prize. If a car then a single coin toss determines if I get it (if heads), if a motorbike then two coin tosses determine if I get it (one head is enough to win).
    Michael

    In your scenario, the nature of the prize is conditioned on the coin toss results. In the film example, the number of features (and the coin toss result that determined this number) is probabilistically independent of the probability that Rex Harrison will be featured. And likewise with Sleeping Beauty and the betting opportunities, or the prisoner and their escape opportunities.
  • Sleeping Beauty Problem
    Sorry, misunderstood the movie example. It’s a different answer if I only get to walk into one film, which would be comparable to Sleeping Beauty only waking on Monday (or Tuesday) if tails.Michael

    The main point is that seeing Rex Harrison being featured (while knowing that 1% of the movies randomly being shown in this theater feature him) doesn't impact your credence in this movie being part of a double feature.
  • Sleeping Beauty Problem
    No, but if I walk in not knowing if it’s the first or second then my credence favours it being part of a double feature.Michael

    Yes, me too. But that's just because two out of three showings are parts of double features. Seeing Rex Harrison being featured doesn't change my credence. It remains 2/3.
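    This independence claim is easy to check by simulation. The setup below is my own sketch of the example as I understand it: a coin toss decides whether a screening is a single or a double feature, and Rex Harrison appears in each showing independently with probability 1%:

    ```python
    import random

    showings_double = 0  # showings that belong to a double feature
    showings_total = 0   # all showings
    rex_double = 0       # Rex Harrison showings belonging to a double feature
    rex_total = 0        # all showings featuring Rex Harrison

    for _ in range(200_000):
        double = random.random() < 0.5           # coin toss: single or double feature
        for _ in range(2 if double else 1):      # one line per showing of this screening
            showings_total += 1
            showings_double += int(double)
            if random.random() < 0.01:           # Rex Harrison featured, independently
                rex_total += 1
                rex_double += int(double)

    print(showings_double / showings_total)  # about 2/3
    print(rex_double / rex_total)            # still about 2/3 after seeing him
    ```

    Conditioning on seeing Rex Harrison leaves the 2/3 untouched precisely because his appearance is probabilistically independent of the number of features.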

Pierre-Normand
