• Michael
    15.4k
    If you repeated the experiment a trillion times, and kept a note of whether your guess was correct each time, and I did the same, we would find that I got it correct more often than you. By the law of large numbers, that would mean the outcome I guessed was more probable than yours. — PhilosophyRunner

    More frequent but not more probable.

    If the game is played once I wouldn't argue that the coin most likely landed heads 100 times in a row and that my interview is most likely a 100-heads-in-a-row interview. I would argue that the coin most likely didn't land heads 100 times in a row and that this is most likely my first and only interview.

    I think using frequencies over multiple games to argue for the probability in a single game is a non sequitur.
  • Pierre-Normand
    2.4k
    Fair enough, but then a person betting that it did land on heads 100 times in a row will have a greater expected value for their winnings (as long as the winnings for heads are more than 2^100 times greater than for tails). And their position would be the rational one. — PhilosophyRunner

    Indeed, not only would their expected value (EV) be positive, but it would be positive because the majority of their individual bets would be winning bets. Michael, it seems, disagrees with the idea of individuating bets in this way. However, this resistance appears to stem from an unwillingness to assign probabilities to the possible involvement of epistemic agents in specific kinds of events. Instead, like @sime, Michael prefers to attribute probabilities to the propensities of objects being realized as seen from a detached, God's-eye-view perspective.
  • Michael
    15.4k
    Indeed, not only would their expected value (EV) be positive, but it would be positive because the majority of their individual bets would be winning bets. Michael, it seems, disagrees with the idea of individuating bets in this way. — Pierre-Normand

    I disagree with the step from "the majority of winning bets are tails bets" to "tails is more probable".

    It's either a non sequitur or affirming the consequent, where the implicit premise is "if tails is more probable then the majority of winning bets are tails bets".

    In this case the majority of winning bets are tails bets only because you get to place more bets if it's tails.

    This is why, as I have often said, betting examples just don't answer the question at all. They're a red herring. Betting on tails might be more profitable, but it is still the case that one's credence should be that P(Heads|Awake) = 1/2.
  • sime
    1.1k


    The sample space of any room is { H, (T,F), (T,S) }

    where F and S refer to First Stay and Second Stay, respectively

    with probability measure

    M(H) = 1/2
    M(T,F) = 1/4
    M(T,S) = 1/4

    (a consequence of your assumed prior probabilities)

    Define a variable indicating the stay

    Stay (H) = First
    Stay (T,F) = First
    Stay (T,S) = Second


    P(Stay = First) = M (H) + M(T,F) = 3/4
    P(Stay = Second) = 1/4

    That's all that can be said, unless I've overlooked an aspect of your problem. Which stay it is would give new information to the sitter about the coin, but alas she doesn't know this information. To get a hotel model for the purposes of obtaining a statistically interpretable result, simply take the product-space of the sample spaces for each room, and take the product of each room-specific measure M.
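The measure and the stay variable above can be tabulated mechanically; here is a quick Python sketch of that computation (the dictionaries just transcribe the definitions given in this post):

```python
# sime's probability measure over the per-room sample space {H, (T,F), (T,S)}
M = {"H": 0.5, ("T", "F"): 0.25, ("T", "S"): 0.25}

# The Stay variable defined above
stay = {"H": "First", ("T", "F"): "First", ("T", "S"): "Second"}

# P(Stay = s) is the sum of M over the outcomes mapped to s
p_stay = {}
for outcome, s in stay.items():
    p_stay[s] = p_stay.get(s, 0.0) + M[outcome]

print(p_stay)  # {'First': 0.75, 'Second': 0.25}
```

This reproduces P(Stay = First) = 3/4 and P(Stay = Second) = 1/4.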

    As before, the prior probability of the coin landing heads is given in the premises, and the sitter has no new information upon which to condition the state of the coin, meaning that its probabilities remain unchanged.

    I think the version of the hotel in which subjects are assigned to different rooms on each awakening is more interesting, because it reveals the importance of counterfactual reasoning when the sitter allocates her credences, which thirders blindly ignore in their "picture theory" of credence allocation.
  • Michael
    15.4k
    @Pierre-Normand

    Thought you might be interested in my short exchange with Elga:

    Dear Professor Elga,

    I've read your paper Self-locating belief and the Sleeping Beauty problem and hope you could answer a question I have regarding your argument. You state that "P(T1|T1 or T2) = P(T2|T1 or T2), and hence P(T1) = P(T2)" and by implication state that P(H1|H1 or T1) = P(T1|H1 or T1), and hence P(H1) = P(T1).

    However I cannot see in the paper where this inference is justified, as it is not valid a priori.

    If I have one red ball in one bag and two numbered blue balls in a second bag, and I pick out a ball at random and show it to you then P(R|R or B1) = P(B1|R or B1) but P(R) = ½ and P(B1) = ¼.

    So the (double-)halfer can accept that P(H1|H1 or T1) = P(T1|H1 or T1) but reject your assertion that P(H1) = P(T1) follows. Is there something in your paper that I missed to justify this inference?

    Thanks for your time.
    — Michael

    Dear Michael,

    Thanks for your interest in this stuff. The form of reasoning I had in mind was the following chain of entailments:

    P(X|X or Y) = P(Y|X or Y)
    P(X&(X or Y))/P(X or Y) = P(Y&(X or Y))/P(X or Y)
    P(X)/P(X or Y) = P(Y)/P(X or Y)
    P(X) = P(Y).

    I wish you the best with your research.
    — Elga

    Unfortunately I don't quite see how it addresses my counterexample, which seems to show that there must be a mistake with that chain of entailments, but I won't push him on it.
  • PhilosophyRunner
    302
    I think using frequencies over multiple games to argue for the probability in a single game is a non sequitur. — Michael

    I simply can't agree with this. Using frequencies over multiple games to argue for the probabilities in a single game is a fundamental way probabilities are calculated.

    If you ask me the probability that this die in my hand will roll a 6, I can roll it a million times, and that will give me a credence for the probability that a 6 will come up the next time I roll it. And so on.

    If 6 comes up 900,000 times out of the million, I am completely justified in having the credence that 6 is more likely to come up on the next single, one-off roll I will do.

    You seem to be suggesting that 6 having come up 900,000 times out of a million should not affect my view of the probability of 6 being rolled in the next one-off event at all. That makes no sense to me.
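The frequency-to-credence reasoning here is just the law of large numbers at work. A simulation with a die weighted to roll 6 about 90% of the time (the weights are hypothetical, chosen to mirror the 900,000-in-a-million figure) shows the observed frequency settling on the underlying probability:

```python
import random

random.seed(0)
# A die weighted so that 6 comes up ~90% of the time
# (hypothetical weights, matching the 900,000-in-a-million figure above).
faces = [1, 2, 3, 4, 5, 6]
weights = [0.02, 0.02, 0.02, 0.02, 0.02, 0.90]

rolls = random.choices(faces, weights, k=1_000_000)
freq_six = rolls.count(6) / len(rolls)
print(round(freq_six, 3))  # close to 0.9
```

The observed frequency is a justified estimate of the probability of a 6 on the next single roll, which is PhilosophyRunner's point.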
  • PhilosophyRunner
    302
    Indeed, not only would their expected value (EV) be positive, but it would be positive because the majority of their individual bets would be winning bets. Michael, it seems, disagrees with the idea of individuating bets in this way. However, this resistance appears to stem from an unwillingness to assign probabilities to the possible involvement of epistemic agents in specific kinds of events. Instead, like sime, Michael prefers to attribute probabilities to the propensities of objects being realized as seen from a detached, God's-eye-view perspective. — Pierre-Normand

    Exactly, the disagreement stems from the perspective from which the probability is being calculated.
  • Michael
    15.4k
    Using frequencies over multiple games to argue for the probabilities in a single game is a fundamental way probabilities are calculated. — PhilosophyRunner

    Only when it's appropriate to do so. It is in the case of rolling a die; it isn't in the case of counting the number of awakenings.

    Again, it doesn't matter that if the coin lands heads 100 times in a row then I will be woken 2^101 times. When I'm put to sleep, woken up, and asked my credence that the coin landed heads 100 times in a row – or my credence that my current interview is a 100-heads-in-a-row interview – the only thing that's relevant is the probability of a coin landing 100 heads in a row, which is 1/2^100. It simply doesn't matter that if the experiment were repeated 2^100 times then most interviews would be 100-heads-in-a-row interviews.

    If you want to say that it must still have to do with frequencies, then what matters is the frequency of a coin landing heads 100 times in a row, not the frequency of interviews that follow the coin landing heads 100 times in a row. You're using an irrelevant frequency to establish the probability.
  • PhilosophyRunner
    302

    Ok, let me try a different method. Will your stance change if the question asked of Sleeping Beauty is "What is the probability that you see the coin with its heads up when you look at it now?"

    Maybe it is clearer this way: the frequency with which she looks at each outcome affects the probability that she will see that outcome. If she looks more often on tails (twice as often), then she is more likely to see tails on a given look.
  • Michael
    15.4k
    That I get to see something twice doesn't mean that I'm twice as likely to see it. It just means I get to see it twice.
  • PhilosophyRunner
    302
    That I get to see something twice doesn't mean that I'm twice as likely to see it. It just means I get to see it twice. — Michael

    No. You are not twice as likely to see it. A given seeing of it is twice as likely to be tails. Those two are very different things.

    You are interested in "it." That is the event for which you are calculating your probabilities.

    I am interested in "seeing of it." That is the event for which I am calculating probabilities.

    Those are two very different events - "it" and "seeing of it" are different events.
  • Michael
    15.4k
    A given seeing of it is twice as likely to be tails. — PhilosophyRunner

    This is an ambiguous claim. It is true that if you randomly select a seeing from the set of all possible seeings then it is twice as likely to be a tails-seeing, but the experiment doesn't work by randomly selecting a seeing from the set of all possible seeings and then "giving" it to Sleeping Beauty. It works by tossing a coin, and then either she sees it once or she sees it twice.

    If we return to my example of tossing the coin 100 times, assume there are 2^100 participants. Each participant knows two things:

    1. Of the 2^101 + 2^100 − 1 interviews, 2^101 follow the coin landing heads 100 times

    2. Of the 2^100 participants, the coin landed heads 100 times for 1 of them

    You are suggesting that they should ignore 2 and use 1 to infer a credence of 2/3.

    I am saying that they should ignore 1 and use 2 to infer a credence of 1/2^100.

    Although it's true that most interviews follow the coin landing heads 100 times, every single one of those interviews belongs to a single participant, and for each participant the probability that they are that single participant is 1/2^100.

    So although it's true that "any given interview is twice as likely to have followed the coin landing heads 100 times" it is false that "my interview is twice as likely to have followed the coin landing heads 100 times".

    And by the exact same token, although it's true that "any given interview is twice as likely to be tails" it is false that "my interview is twice as likely to be tails".

    The likelihood of your interview being a tails interview is equal to the likelihood that the coin landed tails in your experiment, which is 1/2.
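The two individuations being contrasted here can be computed side by side in a scaled-down version of the example (n tosses instead of 100; the choice of n and the exact payoff structure are my illustration): counting by participants gives 1/2^n, while counting by interviews gives roughly 2/3.

```python
from fractions import Fraction

# Scaled-down version of the 100-heads example: n coin tosses instead of 100.
n = 3
p_all_heads = Fraction(1, 2) ** n    # probability a participant's coin lands all heads
interviews_if_heads = 2 ** (n + 1)   # an all-heads participant is interviewed 2^(n+1) times
interviews_if_not = 1                # everyone else is interviewed once

# Per-participant credence (counting participants):
credence_by_participant = p_all_heads

# Per-interview credence (counting interviews):
expected_heads_interviews = p_all_heads * interviews_if_heads
expected_total_interviews = (expected_heads_interviews
                             + (1 - p_all_heads) * interviews_if_not)
credence_by_interview = expected_heads_interviews / expected_total_interviews

print(credence_by_participant)  # 1/8
print(credence_by_interview)    # 16/23 (close to 2/3)
```

The disagreement in the thread is precisely over which of these two numbers deserves the name "my credence that the coin landed all heads".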
  • Pierre-Normand
    2.4k
    The sample space of any room is { H, (T,F), (T,S) }

    where F and S refer to First Stay and Second Stay, respectively

    with probability measure

    M(H) = 1/2
    M(T,F) = 1/4
    M(T,S) = 1/4

    (a consequence of your assumed prior probabilities )

    Define a variable indicating the stay

    Stay (H) = First
    Stay (T,F) = First
    Stay (T,S) = Second


    P(Stay = First) = M (H) + M(T,F) = 3/4
    P(Stay = Second) = 1/4
    sime

    In the scenario I had envisioned, where new participants had the duration of their stay premised on the throw of a six-sided die, there was a 6/11 proportion of rooms occupied by participants who had rolled a six. But this can easily be adapted to the original scenario with a coin toss. In the latter case, a coin landing on tails ensures that a participant will stay for two consecutive days rather than one. Whenever a room is freed, a new participant is recruited and a new coin is tossed. I left it open how the facility was filled when it began operating but we can assume that it was filled on the first day. In that case, the initial probability measure would have been approximately:

    M(H) = 1/2
    M(T,F) = 1/2
    M(T,S) = 0 (exactly zero in this case, since participants can't be recruited on their second day)

    On subsequent days, as Michael noticed, M(H) would evolve, on average, according to the sequence: first day M(H) = 1/2, second day 1/4, third day 3/8, fourth day 5/16, etc. This sequence converges on 1/3.

    You can picture filling the facility (and keeping it full) as a process of filling a bag with 100 blue or red balls. The bag is initially filled on the first day by tossing a coin 100 times: whenever the coin lands on heads, a blue ball is put in the bag; otherwise, a red ball labelled R1 is put in the bag.

    On subsequent nights, all the blue balls are removed from the bag, as are all the balls labelled R2. All the balls labelled R1 are relabelled R2 and remain in the bag. The bag is then refilled with a series of coin tosses according to the initial procedure.

    An equilibrium is rapidly reached where the bag contains blue balls, R1 balls, and R2 balls in roughly equal proportions. When the blue and R2 balls are removed, the R1 balls relabelled R2, and the removed balls replaced with new balls that are either blue or red (R1) according to the toss of a coin, the 1/3, 1/3, 1/3 proportions are maintained. This process ensures that the temporal frequencies of awakening events for individual participants match the (average) spatial distribution on any single day.
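The day-by-day sequence described above (1/2, 1/4, 3/8, 5/16, ... converging on 1/3) can be reproduced by iterating the refill rule directly. A short sketch, assuming the update rule as stated: H and (T,S) rooms are vacated and refilled by fresh coin tosses, while (T,F) rooms become (T,S) rooms.

```python
# Day-by-day room-state distribution under the vacate-and-refill rule.
h, tf, ts = 0.5, 0.5, 0.0   # day-one distribution (facility filled by coin tosses)
history = [h]
for _ in range(40):
    freed = h + ts           # H and (T,S) rooms are vacated today
    h, tf, ts = freed / 2, freed / 2, tf   # refill by coin toss; (T,F) ages into (T,S)
    history.append(h)

print(history[:4])           # [0.5, 0.25, 0.375, 0.3125], i.e. 1/2, 1/4, 3/8, 5/16
print(round(history[-1], 6)) # 0.333333 — converges toward 1/3
```

The fixed point of the rule is h = tf = ts = 1/3, matching the equilibrium proportions of blue, R1, and R2 balls.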
  • Pierre-Normand
    2.4k
    I disagree with the step from "the majority of winning bets are tails bets" to "tails is more probable". — Michael

    This inference is invalid only insofar as it may lead to an equivocation. When you say "tails is more probable," what exactly do you mean? If you're suggesting that, according to the bettor's prior credence, it's more probable that the bettor would find themselves in circumstances where the coin toss resulted in tails, then I agree, the inference would be unwarranted. However, if what you mean is that, from the bettor's perspective and in light of the evidence available to them at the time of betting, the bet (distinguished from other bets within the same experimental run, which, from the agent's point of view, may or may not exist) is more likely to have been placed in circumstances where the coin landed tails, then I would argue that the inference is indeed warranted.
  • Pierre-Normand
    2.4k
    If I have one red ball in one bag and two numbered blue balls in a second bag, and I pick out a ball at random and show it to you then P(R|R or B1) = P(B1|R or B1) but P(R) = ½ and P(B1) = ¼. — Michael

    What you mean, of course, is that you are picking a bag at random, and in a second stage picking a ball from that bag at random. I assume Elga would have understood that this is what you meant.

    Does that procedure accurately represent how Sleeping Beauty understands her own epistemic situation when she is being awakened on a day of interview, though? If, in cases where she is scheduled for two awakenings, a second coin would be tossed to determine on which day she is being awakened (either Monday or Tuesday), then this means that, contrary to the problem stipulation, whenever the first coin landed tails, Sleeping Beauty would only be woken up once, not twice, although she would remain uncertain about the day of her awakening even after being informed that the first coin landed on tails.

    But a setup in which, following a tails result on the first coin toss, Sleeping Beauty is awakened both on Monday and on Tuesday places her in a different epistemic situation from a setup in which she is awakened at random on only one of those two days according to another coin toss. Conflating the two procedures also yields a conflation of the branching of possible worlds (as a result of intrinsic propensities of objects being realized) with the branching of centered possible worlds (as a result of epistemic agents finding themselves in various causal and epistemic relationships with such objects at different stages in their lives).
  • Pierre-Normand
    2.4k
    Here is a follow-up to my previous clarificatory remark to @sime regarding the statistical distribution of the participants in the waking rooms of the Sleeping Beauty Experimental Facility:

    PN:

    Here is a thought that just occurred to me. Returning to Sime's setup of priors regarding the distribution of participants in separate rooms:

    P(H) = 1/2; P(T1) = 1/4; P(T2) = 1/4

    While I have been advocating to both Sime and Michael that a set of priors with 1/3, 1/3, and 1/3 ratios is more representative of the original Sleeping Beauty setup, I've just realized that Sime's assumption could also present a problem if he were to inquire about the conditional probability P(H|'Meeting on the first day'). In that case, Sue is twice as likely to have been assigned a participant whose first coin landed on heads. This result aligns with what Lewis obtained under his own halfer analysis. But that means that both Sue and the participant she has been assigned to, once informed that the participant is experiencing her first awakening, know that the probability of the coin having landed on heads is now 2/3. They might know this even in the absence of anyone having looked at the coin, or even before the coin has been tossed.

    GPT-4:

    Your realization highlights an interesting aspect of Sime's setup. Under his initial priors (P(H) = 1/2, P(T1) = 1/4, P(T2) = 1/4), Sue would indeed conclude that the probability of the coin having landed on heads is now 2/3 after learning that she's meeting a participant on their first awakening. This aligns with Lewis' halfer analysis, which Sime might not have intended.

    Your point about Sue and the participant knowing this even before the coin is tossed or looked at demonstrates a potential paradox or counter-intuitive aspect of the setup. It seems strange that probabilities can be adjusted based on conditions that haven't occurred yet, which could be a basis for further discussions on the interpretation of probability and its application to these types of problems. This could lead to discussions about objective vs subjective interpretations of probability, the nature of knowledge and information, and the meaning of conditional probabilities.

    In essence, Sime's setup appears to inadvertently support the position he's arguing against. You've made an important observation that could further illuminate the debate surrounding the Sleeping Beauty problem.

    PN:

    Under Sime's assumed priors, and the consistent distribution of participants in rooms labelled H, T1, and T2, Sime's conclusion that P(H|T1 or H) = 2/3 would be correct and uncontroversial. The fact that this leads to a weird ability to predict unequal frequencies from future tosses of a fair coin merely highlights that Sime's setup is inconsistent with the setup (and room-state distribution) having been determined in the manner described in the original Sleeping Beauty setup for individual participants. Sime's 2/3 conditional result would be consistent with his (and Michael's) alternative setup where a second independent coin toss determines whether Sleeping Beauty is awakened on Day1 or Day2 (but not both) following an initial tails result on the first coin toss.

    GPT-4:

    Yes, indeed. Sime's setup is more akin to an alternate interpretation of the Sleeping Beauty problem, where the awakening on Day 1 or Day 2 is determined by a second independent coin toss following an initial tails result. In this modified setup, the conclusion that P(H|T1 or H) = 2/3 would indeed be correct and uncontroversial, because the conditions under which Sleeping Beauty is awakened are fundamentally different from the original problem.

    It's a subtle distinction but a crucial one. By changing the conditions of the problem in this way, Sime and Michael introduce a new factor (the second coin toss) that alters the statistical distribution of awakenings. This demonstrates the importance of carefully considering the specifics of a problem setup when interpreting the resulting probabilities.

    However, this deviation from the original problem setup could be seen as sidestepping the core philosophical issues presented by the Sleeping Beauty problem, such as the nature of self-locating belief and the distinction between personal and impersonal probabilities. For these reasons, it's important to clarify the specifics of the setup when discussing the Sleeping Beauty problem, to ensure that everyone is debating the same problem.
  • Michael
    15.4k
    Does that procedure accurately represent how Sleeping Beauty understands her own epistemic situation when she is being awakened on a day of interview, though? — Pierre-Normand

    It's not intended to. It's intended to show that this inference is not valid a priori:

    P(A|A or B) = P(B|A or B)
    ∴ P(A) = P(B)

    Elga's argument depends on this inference but he doesn't justify it.

    His chain of entailments when applied to my counterexample leads to a false conclusion, and so it needs to be explained why this chain of entailments is valid for the Sleeping Beauty case.
  • Michael
    15.4k
    However, if what you mean is that, from the bettor's perspective and in light of the evidence available to them at the time of betting, the bet (distinguished from other bets within the same experimental run, which from the agent's point of view, may or may not exist) is more likely to have been placed in circumstances where the coin landed tails, then I would argue that the inference is indeed warranted. — Pierre-Normand

    I believe this response to PhilosophyRunner addresses this claim. Specifically:

    Although it's true that most interviews follow the coin landing heads 100 times, every single one of those interviews belongs to a single participant, and for each participant the probability that they are that single participant is 1/2^100.

    So although it's true that "any given interview is twice as likely to have followed the coin landing heads 100 times" it is false that "my interview is twice as likely to have followed the coin landing heads 100 times".
  • Pierre-Normand
    2.4k
    Elga's argument depends on this inference but he doesn't justify it. — Michael

    You challenged the validity of Elga's inference by presenting what you consider a counterexample:

    "If I have one red ball in one bag and two numbered blue balls in a second bag, and I pick out a ball at random and show it to you then P(R|R or B1) = P(B1|R or B1) but P(R) = ½ and P(B1) = ¼." - Michael

    As I pointed out earlier, these probabilities don't result from randomly picking one ball out of three. They would instead follow from choosing one of two bags at random, and subsequently selecting a ball at random from that chosen bag, which I assumed was your intention.

    In that scenario, P(R|R or B1) would be 2/3 and P(B1|R or B1) would be 1/3. Clearly, these two probabilities aren't equal, and their equality forms the antecedent of Elga's inference.

    Thus, you haven't presented a counterexample to the validity of Elga's inference; rather, you've demonstrated that his conclusion doesn't follow when the premise is false. At best, you have shown his inference to be unsound.

    However, you've done this by replacing Elga's premise—which aligns with the Sleeping Beauty setup—with a different premise derived from an alternative setup.
  • Michael
    15.4k
    In that scenario, P(R|R or B1) would be 2/3 and P(B1|R or B1) would be 1/3. — Pierre-Normand

    How do you get that?
  • Pierre-Normand
    2.4k
    How do you get that? — Michael

    P(R|R or B1) is the probability of the ball being red, given that the ball isn't labelled B2. That's because the outcome 'R or B1' is equivalent to the outcome 'not B2'. If you eliminate the possibility of 'B2', the only outcomes left are 'R' and 'B1', in the same prior proportions as before.

    Applying Bayes' theorem, we have P(R|R or B1) = P(R or B1|R) * P(R) / P(R or B1). Since P(R or B1|R) is 1 (if the ball is red, it's certain that it's either red or B1), and P(R) is 1/2 (the prior probability of the ball being red), and P(R or B1) is 3/4 (the prior probability of the ball being either red or B1), we get P(R|R or B1) = (1 * 1/2) / (3/4) = 2/3.

    Applying the same reasoning mutatis mutandis, we get P(B1|R or B1) = 1/3.
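The same result falls out of a direct enumeration of the two-stage procedure (pick a bag at random, then a ball at random from it), without invoking Bayes' theorem explicitly:

```python
from fractions import Fraction

# Two-stage procedure: pick a bag at random, then a ball at random from it.
# Prior probabilities of the three outcomes:
prior = {"R": Fraction(1, 2),    # red bag chosen; its only ball is red
         "B1": Fraction(1, 4),   # blue bag chosen, ball B1 drawn
         "B2": Fraction(1, 4)}   # blue bag chosen, ball B2 drawn

# Condition on the event 'R or B1' (i.e. the ball is not B2):
evidence = prior["R"] + prior["B1"]
p_R_given = prior["R"] / evidence
p_B1_given = prior["B1"] / evidence

print(p_R_given, p_B1_given)  # 2/3 1/3
```

Conditioning just renormalizes the surviving outcomes in their prior proportions, which is exactly the point made above about eliminating 'B2'.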
  • Michael
    15.4k
    Good point. Thanks for the correction.
  • Pierre-Normand
    2.4k
    Good point. Thanks for the correction. — Michael

    :up:
  • Pierre-Normand
    2.4k
    I believe this response to PhilosophyRunner addresses this claim. Specifically: [...] — Michael

    I'd be happy to revisit this objection but I'm going to stand back and watch a little how @PhilosophyRunner grapples with it. If I see that they are in trouble, I may jump back into the ring. I've heard that's how things are done at the WWE.
  • Michael
    15.4k
    P(Heads | Mon or Tue) = P(Mon or Tue | Heads) * P(Heads) / P(Mon or Tue)
    P(Heads | Mon or Tue) = 1 * 1/2 / 1
    P(Heads | Mon or Tue) = 1/2
    Michael

    Going back to this for a moment, I think a better way to write this would be:

    P(Heads|H1 or T1 or T2) = P(H1 or T1 or T2|Heads) * P(Heads) / P(H1 or T1 or T2)

    If Elga is right in saying that P(H1), P(T1), and P(T2) sum to 1 then P(H1 or T1 or T2) = 1.

    So P(Heads|H1 or T1 or T2) = 1/2.

    If he's right when he says that "[you] receiv[e no] new information [but] you have gone from a situation in which you count your own temporal location as irrelevant to the truth of H, to one in which you count your own temporal location as relevant to the truth of H" then it seems correct to say that Sleeping Beauty is just being asked about P(Heads|H1 or T1 or T2).
  • Pierre-Normand
    2.4k
    Going back to this for a moment, I think a better way to write this would be:

    P(Heads|H1 or T1 or T2) = P(H1 or T1 or T2|Heads) * P(Heads) / P(H1 or T1 or T2)

    If Elga is right in saying that P(H1), P(T1), and P(T2) sum to 1 then P(H1 or T1 or T2) = 1.

    So P(Heads|H1 or T1 or T2) = 1/2

    If he's right when he says that "[you] receiv[e no] new information [but] you have gone from a situation in which you count your own temporal location as irrelevant to the truth of H, to one in which you count your own temporal location as relevant to the truth of H" then it seems correct to say that Sleeping Beauty is just being asked about P(Heads|H1 or T1 or T2).
    Michael

    Your calculation seems correct, but it doesn't adequately account for the new capacity Jane gains to refer to her own temporal location using an indexical expression when updating her credence. Instead, you've translated her observation ("I am awake today") into an impersonal overview of the entire experiment ("I am scheduled to be awakened either under circumstances H1, T1, or T2"). The credence you've calculated reflects Sleeping Beauty's opinion on the ratio, over many iterations of the experiment, of (1) the number of runs resulting from a heads result, to (2) the total number of experimental runs. Indeed, this ratio is 1/2, but calculating it doesn't require her to consider the knowledge that today falls within the set {H1, T1, T2}.

    Let's reconsider the scenario you proposed yesterday with two Sleeping Beauties, Jane and Helen. If the coin lands heads, Jane is awakened once and Helen twice, and vice versa if it lands tails. They occupy the same waking room on three consecutive days (twice by either Jane or Helen, depending on the coin toss). On non-awakening days, they remain in a sleeping room. Sue Sitter is assigned randomly to attend to whoever is awakened in the waking room on one of three randomly selected days.

    When Sue finds Jane in the assigned room, and assuming she knows the participants and the experimental setup, her prior probabilities would be:

    P(Jane awake today) = P(JAT) = 1/2, and P(H) = 1/2

    Her updated credence for H is P(H|JAT) = P(JAT|H) * P(H) / P(JAT) = (1/3*1/2) / (1/2) = 1/3

    Jane's priors for any random day during the experiment would be exactly the same as Sue's. When Jane is awakened on a day when Sue is assigned to her, Jane has the same information Sue has about herself, and so she can update her credence for H in the same way. She concludes that the probability of this kind of awakening experience, resulting from a heads result, is half as probable, and thus half as frequent, as identical awakening experiences resulting from a tails result. This conclusion doesn't impact the ratio of the frequency of heads-result runs to the total number of experimental runs, which remains at 1/2 from anyone's perspective.
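Sue's update can be checked numerically. The Monte Carlo sketch below assumes one reading of the setup: Jane's awakening day(s) are placed at random among the three days (one day on heads, two on tails), and Sue is assigned one of the three days at random; we then condition on Sue finding Jane awake.

```python
import random

random.seed(1)
# Heads -> Jane awake on one of three days; tails -> on two of them.
# Sue is assigned one of the three days at random.
heads_given_jane = jane_found = 0
for _ in range(200_000):
    heads = random.random() < 0.5
    n_days = 1 if heads else 2
    jane_days = random.sample(range(3), n_days)  # which day(s) Jane is awake
    sue_day = random.randrange(3)                # Sue's randomly assigned day
    if sue_day in jane_days:                     # condition: Sue finds Jane awake
        jane_found += 1
        heads_given_jane += heads

print(round(heads_given_jane / jane_found, 2))   # close to 1/3
```

Analytically: P(Sue finds Jane) = (1/2)(1/3) + (1/2)(2/3) = 1/2, so P(H|JAT) = (1/6)/(1/2) = 1/3, matching the Bayesian update above.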
  • Michael
    15.4k
    Your calculation seems correct, but it doesn't adequately account for the new capacity Jane gains to refer to her own temporal location using an indexical expression when updating her credence. Instead, you've translated her observation ("I am awake today") into an impersonal overview of the entire experiment ("I am scheduled to be awakened either under circumstances H1, T1, or T2"). The credence you've calculated reflects Sleeping Beauty's opinion on the ratio, over many iterations of the experiment, of (1) the number of runs resulting from a heads result, to (2) the total number of experimental runs. Indeed, this ratio is 1/2, but calculating it doesn't require her to consider the knowledge that today falls within the set {H1, T1, T2}. — Pierre-Normand

    I've just taken what Elga said. He says:

    Combining results, we have that P(H1) = P(T1) = P(T2). Since these credences sum to 1, P(H1)=1/3.

    If P(H1), P(T1), and P(T2) sum to 1 then P(H1 or T1 or T2) = 1.

    Where H1 means "the coin landed heads and today is Monday", T1 means "the coin landed tails and today is Monday", and T2 means "the coin landed tails and today is Tuesday".
  • Michael
    15.4k
    When Sue finds Jane in the assigned room, and assuming she knows the participants and the experimental setup, her prior probabilities would be:

    P(Jane awake today) = P(JAT) = 1/2, and P(H) = 1/2

    Her updated credence for H is P(H|JAT) = P(JAT|H) * P(H) / P(JAT) = (1/3*1/2) / (1/2) = 1/3

    Jane's priors for any random day during the experiment would be exactly the same as Sue's. When Jane is awakened on a day when Sue is assigned to her, Jane has the same information Sue has about herself, and so she can update her credence for H in the same way. She concludes that the probability of this kind of awakening experience, resulting from a heads result, is half as probable, and thus half as frequent, as identical awakening experiences resulting from a tails result. This conclusion doesn't impact the ratio of the frequency of heads-result runs to the total number of experimental runs, which remains at 1/2 from anyone's perspective.
    Pierre-Normand

    I've already stated why I disagree with this. The manner in which the sitter is assigned a room isn't the manner in which Sleeping Beauty is assigned a room, and so their credences will differ.

    Sue should reason as if her room was randomly selected from the set of all rooms, because it was.
    Jane should reason as if she was randomly selected from the set of all participants, because she was (via the coin flip).

    This is clearer with my extreme example.

    2/3 of sitters will sit in on a 100 Heads interview, and so their credence should be P(100 Heads) = 2/3.

    1/2^100 of participants will have a 100 Heads interview, so their credence should be P(100 Heads) = 1/2^100.

    The fact that the one participant who has a 100 Heads interview will have 2^101 of them is irrelevant. It is only rational for each participant to reason that they are almost certainly not the participant who will have 2^101 interviews, and so that this is almost certainly their first and only interview, and so that the coin almost certainly didn't land heads 100 times in a row. This is, again, what I explained to PhilosophyRunner here.

    The claim that because most interviews are a 100 Heads interview then my interview is most likely a 100 Heads interview is a non sequitur. Only if most participants have a 100 Heads interview could it follow that my interview is most likely a 100 Heads interview.
  • Pierre-Normand
    2.4k
    Jane should reason as if she was randomly selected from the set of all participants, because she was (via the coin flip). — Michael

    Indeed, Jane reasons as if she was randomly selected from the set of all participants, as this forms the basis of her prior probabilities. What you seem to suggest is that she should not, or cannot, assign any credence or probability to specific features of her current awakening episode. However, if she is informed that today is Monday, then she should and does update her credence P(H) to 1/2. She could only do this if her initial prior was 1/3. Otherwise, she'd end up in a paradox, akin to Lewis's divination of the future.

    The introduction of Sue Sitter is designed to demonstrate that Jane can update her credence based on the same information available to Sue, even though the circumstances leading them to their shared situation are different. Specifically, the selection process for Sue ensures that, despite being fully aware of the experimental setup (like Jane), she shares Jane's state of ignorance (with no need for amnesia-inducing drugs) about whether her current awakening episode is unique, the first of two, or the last of two. Yet, concerning this awakening episode, they share the same priors and the same information necessary to update their credences.

    If you are uncomfortable with Jane reflecting specifically about her current awakening episode, consider this: Let Jane reflect on Sue's reasoning about the probability P(H). If she deems Sue's reasoning correct, she could adjust her own credence to match Sue's, since they both concern the exact same coin toss result. Surely, Jane cannot reasonably say: 'Yes, I see you are right to conclude that the probability of the coin having landed on heads is 1/3, based on the information we share. But my belief is that it's actually 1/2.'
  • Michael
    15.4k
    Surely, Jane cannot reasonably say: 'Yes, I see you are right to conclude that the probability of the coin having landed on heads is 1/3, based on the information we share. But my belief is that it's actually 1/2.' — Pierre-Normand

    Sue's reasoning is right for Sue but wrong for Jane (and vice versa) given that 2/3 of sitters will sit in on a 100 Heads interview but only 1/2^100 of participants will have a 100 Heads interview.