Comments

  • Sleeping Beauty Problem
    But what determines the right question to ask isn't the statement of the Sleeping Beauty problem as such but rather your interest or goal in asking the question. I gave examples where either one is relevant.Pierre-Normand

    Are you referring to the safehouse and escape? That's a different scenario entirely.

    I flip a coin. If heads then I flip again. If heads you win a car, otherwise you win nothing. If the first flip is tails then I flip again. If heads you win a motorbike, otherwise I flip again. If heads you win a motorbike, otherwise you win nothing.

    I do this in secret and then tell you that you've won a prize. Given that you're more likely to win a prize if it's tails, it's reasonable to believe that it was most likely tails.

    Now consider a similar scenario, but if heads then you win a car and if tails then you win two motorbikes. I do this in secret and tell you that you've won at least one prize. It is not reasonable to believe that it was most likely tails.
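
    To make the difference between the two games concrete, here's a rough simulation sketch (mine, purely illustrative; the trial count and function names are arbitrary):

    import random

    def game_one():
        # Heads then heads wins a car; tails then heads wins a motorbike;
        # tails, tails, heads also wins a motorbike; anything else wins nothing.
        first = random.choice(["heads", "tails"])
        if first == "heads":
            won = random.choice([True, False])
        else:
            won = random.choice([True, False]) or random.choice([True, False])
        return first, won

    def game_two():
        # Heads wins a car, tails wins two motorbikes, so a prize is won either way.
        return random.choice(["heads", "tails"]), True

    runs = 100_000
    g1_winners = [first for first, won in (game_one() for _ in range(runs)) if won]
    print(sum(f == "tails" for f in g1_winners) / len(g1_winners))  # ~0.6

    g2_winners = [first for first, won in (game_two() for _ in range(runs)) if won]
    print(sum(f == "tails" for f in g2_winners) / len(g2_winners))  # ~0.5

    In the first game, learning that you won shifts the odds towards tails; in the second, it tells you nothing.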

    Your safehouse and escape example is analogous to that first case. Your introduction of opportunities to escape changes the answer, and so it isn't comparable to the Sleeping Beauty case where there is nothing analogous to an escape opportunity with which to reassess the probability of the coin toss.
  • Sleeping Beauty Problem
    we are changing the structure of the problem and making it unintelligible that we should set the prior P(W) to 3/4.Pierre-Normand

    That's precisely the point, and why I suggested ignoring days and just saying that if heads then woken once and if tails then woken twice. P(W) = 1.

    That P(Heads & Tuesday (or second waking)) consideration is a distraction that leads you to the wrong conclusion.
  • Sleeping Beauty Problem
    Even though the player is dismissed (as opposed to Sleeping Beauty, who is left asleep), a prior probability of P(Dismissed) = 1/4 can still be assigned to this state where he loses an opportunity to bet/guess. Upon observing the game master pulling out a ball, the player updates his prior for that state to zero, thus impacting the calculation of the posterior P(Red|Opp). If we assign P(Dismissed) = 1/4, it follows that P(Red|Opp) = 1/3.Pierre-Normand

    If you're going to reason this way then you also need to account for the same with blue. You reach in after the second blue and pull out nothing. So really P(Dismissed) > 1/4.

    I just don't think it makes sense to reason this way.

    If it helps, consider a version with no tequila after the final ball. After being given a ball and asked your credence, you're then dismissed if it's either the red ball or the second blue ball.
  • Sleeping Beauty Problem
    Your revised scenario seems to neglect the existence of a state where the player is being dismissed.Pierre-Normand

    It doesn't. You're dismissed after red or the second blue.

    It is still the case that if I don't know whether this is Monday or Tails then I reason as if my ball is randomly selected from one of the two bags, such that P(R) = 1/2 and P(B1) = P(B2) = 1/4 (or just P(R) = P(B) = 1/2). It better reflects how the experiment is actually conducted.

    I don't reason as if my ball is randomly selected from a pile such that P(R) = P(B1) = P(B2) = 1/3.
  • Sleeping Beauty Problem
    This scenario doesn't accurately reflect the Sleeping Beauty experiment. Instead, imagine that one bag is chosen at random. You are then given one ball from that bag, but you're not allowed to see it just yet. You then drink a shot of tequila that causes you to forget what just happened. Finally, you are given another ball from the same bag, unless the bag is now empty, in which case you're dismissed. The balls are wrapped in aluminum foil, so you can't see their color. Each time you're given a ball, you're invited to express your credence regarding its color (or to place a bet, if you wish) before unwrapping it.Pierre-Normand

    I didn't properly address this but I actually think it illustrates the point quite clearly. I'll amend it slightly such that the blue balls are numbered and will be pulled out in order, to better represent Monday and Tuesday.

    If I'm told that this is my first ball (it's Monday) then P(R) = P(B1) = 1/2.

    If I'm told that this is a blue ball (it's Tails) then P(B1) = P(B2) = 1/2.

    If I don't know anything then I should reason as if my ball was randomly selected from one of the two bags, and so P(R) = 1/2 and P(B1) = P(B2) = 1/4 (or just P(R) = P(B) = 1/2).

    This contrasts with your reasoning as if my ball is randomly selected from a pile such that P(R) = P(B1) = P(B2) = 1/3.
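
    As a rough sketch (mine, purely illustrative), here are the two sampling models side by side, worked out exactly rather than simulated:

    from fractions import Fraction

    bags = {"red bag": ["R"], "blue bag": ["B1", "B2"]}

    # Model 1: pick a bag at random, then a ball at random from that bag.
    bag_model = {ball: Fraction(1, 2 * len(balls))
                 for balls in bags.values() for ball in balls}

    # Model 2: treat every ball in the combined pile as equally likely.
    all_balls = [ball for balls in bags.values() for ball in balls]
    pile_model = {ball: Fraction(1, len(all_balls)) for ball in all_balls}

    print({b: str(p) for b, p in bag_model.items()})   # R: 1/2, B1: 1/4, B2: 1/4
    print({b: str(p) for b, p in pile_model.items()})  # R: 1/3, B1: 1/3, B2: 1/3

    Which of these sampling models is the right one to reason with is, of course, the very thing in dispute.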

    At the very least this shows how halfers can be double halfers to avoid Lewis' P(Heads | Monday) = 2/3.
  • Sleeping Beauty Problem
    Would not a halfer say that they are equally as likely?Pierre-Normand

    Equally likely to happen, such that P(Monday & Heads) = P(Monday & Tails) = P(Tuesday & Tails) = 1/2, as per that earlier Venn diagram, but not equally likely that today is that interview. If P(Heads) = 1/2 then P(Monday & Heads) = 1/2, and so P(Monday & Tails) + P(Tuesday & Tails) = 1/2; therefore, if P(Monday & Tails) = P(Tuesday & Tails), then P(Monday & Tails) = P(Tuesday & Tails) = 1/4.

    This was Lewis' reasoning in his paper.
  • Sleeping Beauty Problem
    Sleeping Beauty's inability to single out any one of those possible awakenings as more or less likely than anotherPierre-Normand

    Well, that’s the very thing being debated. A halfer might say that a Monday & Heads awakening is twice as likely as a Monday & Tails awakening, and so it is a non sequitur to argue that because Tails awakenings are twice as frequent in the long run they are twice as likely.

    So how does the thirder argue that they are equally likely if not by first reasoning as if an interview is randomly selected from the set of possible interviews?
  • Sleeping Beauty Problem
    If she opts to track her awakenings (centered possible worlds), her credence in heads is 1/3.Pierre-Normand

    How does one do this if not by reasoning as if one's interview is randomly selected from the set of possible interviews?
  • Sleeping Beauty Problem
    It seems quite counterintuitive that if my credence concerns the outcome of the experimental run I'm in, it is P(10) = 1/10, and if it's the outcome of the present awakening, it's P(10) = 10/19, and that both outcomes are perfectly correlated.Pierre-Normand

    So this goes back to what I said before. Either we reason as if we’re randomly selected from the set of all participants, and so P(10) = 1/10, or we reason as if our interview is randomly selected from the set of all interviews, and so P(10) = 10/19.
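
    Assuming the variant here is that a ten-sided die is rolled and you are interviewed ten times on a 10 and once otherwise (my reading, inferred from the 10/19 figure), a rough sketch shows how each number falls out of its own reference class:

    import random

    runs = 100_000
    rolls = [random.randint(1, 10) for _ in range(runs)]

    # Reference class 1: sample a participant, i.e. an experimental run.
    per_run = sum(r == 10 for r in rolls) / runs
    print(per_run)  # ~0.1, i.e. 1/10

    # Reference class 2: sample an interview (ten interviews per run on a 10, one otherwise).
    interviews = [r for r in rolls for _ in range(10 if r == 10 else 1)]
    per_interview = sum(r == 10 for r in interviews) / len(interviews)
    print(per_interview)  # ~0.53, i.e. 10/19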

    Given that the experiment doesn't work by randomly selecting an interview from the set of all interviews, I don't think it rational to reason as if it does. The experiment works by rolling a die, and so it is only rational to reason as if we're randomly selected from the set of all participants.

    How we choose to bet just has no bearing on one's credence that one is likely to win. With your lottery example we play even if we know that we're most likely to lose (and in fact I play the lottery even though the expected value of winning is less than the cost). And with this example I might bet on 10 even if my credence is that it is less likely, simply because I know that I will win in the long run (or, if playing one game, I'm just willing to take the risk because of the greater expected value).
  • Sleeping Beauty Problem
    Yes, it is rational to believe that if you repeat the game enough times then you will win more than you lose, but it is still irrational to believe that if you play the game once then you are most likely to win.
  • Sleeping Beauty Problem
    For instance, suppose you offer me the opportunity to purchase a $100 lottery ticket that carries a one in a septillion chance of winning me $200 septillion. Despite the expected value being positive, it may not be reasonable for me to purchase the ticket. However, it would be a logical fallacy to extrapolate from this example and conclude that it would also be unreasonable for me to buy a $100 lottery ticket with a one in ten chance of winning me $2000. Given I'm not in immediate need of this $100, it might actually be quite unreasonable for me to pass up such an opportunity, even though I stand to lose $100 in nine times out of ten.Pierre-Normand

    It would be unreasonable of you to believe that you are most likely to win, even if it’s financially reasonable to play.
  • Sleeping Beauty Problem
    I did mention this. There are two ways to reason:

    1. I should reason as if I am randomly selected from the set of possible participants
    2. I should reason as if my interview is randomly selected from the set of possible interviews

    I do the former, he does the latter.

    My use of variants, such as that of tossing the coin 100 times, was to show that applying his reasoning leads to what I believe is an absurd conclusion (that even if the experiment is only done once it is rational to believe that P(100 Heads) = 2/3).

    Although he accepts this conclusion, so at least he’s consistent.

    But you’re right that this fundamental disagreement on how best to reason might make these arguments irresolvable. That’s why I’ve moved on to critiquing Elga’s argument, which is of a different sort, and to an application of Bayes’ theorem with what I believe are irrefutable terms (although we disagree over whether or not the result actually answers the problem).
  • Feature requests
    I can’t see it
  • Feature requests
    Or you could make it so that the Join discussion doesn't show if we're logged in, i.e. assigned to a category that only Guests can view?
  • Sleeping Beauty Problem
    I'm not sure what you mean by the sampling mechanism. There is one experiment with one coin toss. We both appear to agree on that.
  • Sleeping Beauty Problem
    Rather, it pointed out that your calculation of P(Heads | Monday or Tuesday) = 1/2 simply restates the unconditional probability P(H) without taking into account Sleeping Beauty's epistemic situation.Pierre-Normand

    As Elga says:

    This belief change is unusual. It is not the result of your receiving new information — you were already certain that you would be awakened on Monday. (We may even suppose that you knew at the start of the experiment exactly what sensory experiences you would have upon being awakened on Monday.) Neither is this belief change the result of your suffering any cognitive mishaps during the intervening time — recall that the forgetting drug isn’t administered until well after you are first awakened. So what justifies it?

    The answer is that you have gone from a situation in which you count your own temporal location as irrelevant to the truth of H, to one in which you count your own temporal location as relevant to the truth of H.

    Sleeping Beauty's "epistemic situation" is only that her temporal location is now relevant. She doesn't learn anything new. All she knows is that her temporal location is either Monday or Tuesday. Before the experiment began this wasn't relevant, and so she only considers P(H). After being woken up this is relevant, and so she considers P(H | Mon or Tue).

    That they both give the same answer (because Monday or Tuesday is trivially true) just suggests that Lewis was right. It really is as simple as (in his words) "Only new relevant evidence, centred or uncentered, produces a change in credence; and the evidence (H1 ∨ H2 ∨ H3) is not relevant to HEADS vs TAILS".

    The argument you've put forward could be seen as suggesting that the vast body of literature debating the halfer, thirder, and double-halfer solutions has somehow missed the mark, treating a trivial problem as a complex one. This isn't an argument from authority. It's just something to ponder over.Pierre-Normand

    Well, I would also think that my argument that P(A|A or B) = P(B|A or B) doesn't entail P(A) = P(B) is quite trivial. Maybe I've made a mistake (whether with this or my interpretation of Elga), or maybe Elga did. I'll admit that the former is most likely, but my reasoning appears sound.
  • Sleeping Beauty Problem
    So ChatGPT is saying that P(Heads | today is Monday or Tuesday) = 1/2 is trivially true. Doesn't that just prove my point?
  • Sleeping Beauty Problem
    And I think it's even better to not consider days and just consider number of times wakened. So first she is woken up, then put to sleep, then a coin is tossed, and if tails she's woken again. Then we don't get distracted by arguing that her being asleep on Tuesday if Heads is part of the consideration. It doesn't make sense to say that she's asleep during her second waking if Heads.

    With this reasoning I think Bayes' theorem is simple. The probability of being woken up is 1 and the probability of being woken up if Heads is 1. That she's woken up a second time if Tails is irrelevant.
    Michael

    In fact there's an even simpler way to phrase Bayes' theorem, even using days (where "Mon or Tue" means "today is Monday or Tuesday").

    P(Heads | Mon or Tue) = P(Mon or Tue | Heads) * P(Heads) / P(Mon or Tue)
    P(Heads | Mon or Tue) = 1 * 1/2 / 1
    P(Heads | Mon or Tue) = 1/2
  • Sleeping Beauty Problem
    Since the setup of the experiment doesn't even require that anyone look at the result of the toss before Monday night, nothing changes if the toss is actually performed after Sleeping Beauty's awakening. In that case the credences expressed on Monday are about a future coin toss outcome rather than an already actualized one.Pierre-Normand

    I think this is a better way to consider the issue. Then we don't talk about Heads & Monday or Tails & Monday. There is just a Monday interview and then possibly a Tuesday interview. It's not the case that two thirds of all interviews are Tails interviews; it's just the case that half of all experiments have Tuesday interviews. Which is why it's more rational to reason as if one is randomly selected from the set of possible participants rather than to reason as if one's interview is randomly selected from the set of possible interviews (where we distinguish between Heads & Monday and Tails & Monday).

    And I think it's even better to not consider days and just consider number of times wakened. So first she is woken up, then put to sleep, then a coin is tossed, and if tails she's woken again. Then we don't get distracted by arguing that her being asleep on Tuesday if Heads is part of the consideration. It doesn't make sense to say that she's asleep during her second waking if Heads.

    With this reasoning I think Bayes' theorem is simple. The probability of being woken up is 1 and the probability of being woken up if Heads is 1. That she's woken up a second time if Tails is irrelevant. As such:

    P(Heads | Woken) = P(Woken | Heads) * P(Heads) / P(Woken)
    P(Heads | Woken) = 1 * 1/2 / 1
    P(Heads | Woken) = 1/2



    This, incidentally, would be my answer to Milano's "Bayesian Beauty".

    I don't have access to Stalnaker's paper to comment on that.
  • Sleeping Beauty Problem
    If so, this would suggest a highly unusual implication - that one could acquire knowledge about future events based solely on the fact that someone else would be asleep at the time of those events.Pierre-Normand

    Elga's reasoning has its own unusual implication. In his own words:

    Before being put to sleep, your credence in H was 1/2. I’ve just argued that when you are awakened on Monday, that credence ought to change to 1/3. This belief change is unusual. It is not the result of your receiving new information — you were already certain that you would be awakened on Monday.

    ...

    Thus the Sleeping Beauty example provides a new variety of counterexample to Bas Van Fraassen’s ‘Reflection Principle’ (1984:244, 1995:19), even an extremely qualified version of which entails the following:

    "Any agent who is certain that she will tomorrow have credence x in proposition R (though she will neither receive new information nor suffer any cognitive mishaps in the intervening time) ought now to have credence x in R."

    I'm inclined towards double-halfer reasoning. P(Heads) = P(Heads | Monday) = 1/2, much like P(Red) = P(Red|Red or Blue 1) = 1/2. Even if the experiments are not exactly the same, I suspect something much like it is going on, again given the Venn diagram here. I just think the way the Sleeping Beauty problem is written makes this harder to see.
  • Sleeping Beauty Problem
    It looks like you may have misinterpreted Elga's paper. He doesn't define P as an unconditional probability. In fact, he expressly defines P as "the credence function you ought to have upon first awakening." Consequently, P(H1) and P(T1) are conditional on Sleeping Beauty being in a centered possible world where she is first awakened. The same applies to P(R) and P(B1), which are conditional on you being in a centered possible world where you are presented with a ball still wrapped in aluminum foil before being given a tequila shot.Pierre-Normand

    I don't see how this shows that P(A|A or B) = P(B|A or B) entails P(A) = P(B).

    My example proves that this doesn't follow where P is the credence function I ought to have after being explained the rules of my game. Elga doesn't explain why it follows where P is the credence function I ought to have upon first awakening.

    It certainly doesn't follow a priori, and so without any further explanation his argument fails.

    To understand what P(R) entails, let's look at the situation from the perspective of the game master. At the start of the game, there is one red ball in one bag and two blue balls in the other. The game master randomly selects a bag and takes out one ball (without feeling around to see if there is another one). They hand this ball to you. What's the probability that this ball is red?Pierre-Normand

    1/2.
  • Sleeping Beauty Problem
    Here, P(R|R or B1) is the probability that the ball you've just received is red, conditioned on the information (revealed to you) that this is the first ball you've received in this run of the experiment. In other words, you now know you haven't taken a shot of tequila. Under these circumstances, P(R) = P(B1) = 1/2.Pierre-Normand

    There is a difference between these two assertions:

    1. P(R|R or B1) = P(B1|R or B1)
    2. P(R) = P(B1)

    The first refers to conditional probabilities, the second to unconditional probabilities, and in my example the first is true but the second is false.

    This scenario doesn't accurately reflect the Sleeping Beauty experiment.Pierre-Normand

    Even if it doesn't, it does show that Elga's assertion that if P(A|A or B) = P(B|A or B) then P(A) = P(B) is not true a priori, and as he offers no defence of this assertion with respect to the Sleeping Beauty experiment his argument doesn't prove that P(H1) = 1/3.
  • Sleeping Beauty Problem
    I think the above in fact shows the error in Elga's paper:

    But your credence that you are in T1, after learning that the toss outcome is Tails, ought to be the same as the conditional credence P(T1|T1 or T2), and likewise for T2. So P(T1|T1 or T2) = P(T2|T1 or T2), and hence P(T1) = P(T2).

    ...

    But your credence that the coin will land Heads (after learning that it is Monday) ought to be the same as the conditional credence P(H1|H1 or T1). So P(H1|H1 or T1)=1/2, and hence P(H1) = P(T1).

    Combining results, we have that P(H1) = P(T1) = P(T2). Since these credences sum to 1, P(H1)=1/3.

    There is a red ball in one bag and two numbered blue balls in a second bag. A bag is chosen at random and you will be given a ball from it (the blue balls handed out in numbered order). According to Elga's reasoning:

    1. P(B1|B1 or B2) = P(B2|B1 or B2), therefore P(B1) = P(B2)

    2. P(R|R or B1) = 1/2, therefore P(R) = P(B1)

    3. Therefore, P(R) = P(B1) = P(B2) = 1/3

    The second inference, and so the conclusion, are evidently wrong, given that P(R) = 1/2 and P(B1) = P(B2) = 1/4.

    So his reasoning is a non sequitur.
  • Sleeping Beauty Problem
    I would like the halfer to explain why ruling out the Tuesday scenario doesn't affect their credence in the coin toss outcome at all.Pierre-Normand

    I've been thinking about this and I think there's a simple analogy to explain it.

    I have one red ball in one bag and two blue balls in a second bag. I will choose one bag at random and give you a ball from it. Your credence that the ball will be red should be 1/2.

    Being told that it's Monday is just like being told that the second bag only contains one blue ball. It does nothing to affect your credence that the ball you will be given is red.
  • Sleeping Beauty Problem
    My current credence P(H) is 1/2, but if I were placed in this exact same situation repeatedly, I would expect the outcome H to occur one third of the time.Pierre-Normand

    I wouldn't say that the outcome H occurs one third of the time. I would say that one third of interviews happen after H occurs, because two interviews happen after every tails.

    I think thirders commit a non sequitur when they claim that tails is twice as likely. Amnesia between interviews doesn't make it any less fallacious.
  • Sleeping Beauty Problem
    Sleeping Beauty's calculation that P(H) = 1/3 doesn't hinge on her participation in the experiment being repeated. She's aware that if the coin lands heads, she will be awakened once, but if it lands tails, she will be awakened twice. If we run this experiment once with three participants, and all three of them bet on T every time they are awakened, they will be correct 2/3 of the time on average, which aligns with their credences.Pierre-Normand

    This goes back to what I said before. There are two ways to reason:

    1. I should reason as if I am randomly selected from the set of all participants
    2. I should reason as if my interview is randomly selected from the set of all interviews

    Why would Sleeping Beauty reason as if the experiment were conducted multiple times and her current interview randomly selected from that set of all possible interviews, given that that's not how the experiment is conducted?

    The experiment is conducted by tossing a coin, and so it is only rational to reason as if she was randomly selected from the set of all possible participants.
  • Sleeping Beauty Problem
    If we run this experiment once with three participants, and all three of them bet on T every time they are awakened, they will be correct 2/3 of the time on average, which aligns with their credences.Pierre-Normand

    2/3 of bets are right, but that's because you get to bet twice if it's tails. That doesn't prove that tails is more likely. With 4 participants, 1/2 of participants are right whether betting heads or tails. You can frame bets to seemingly support either conclusion.
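
    A rough sketch of that point (mine, with an arbitrary number of simulated runs), counting correctness per bet and per participant when everyone bets tails:

    import random

    runs = 100_000
    tosses = [random.choice(["heads", "tails"]) for _ in range(runs)]

    # Everyone bets tails at every awakening: one awakening if heads, two if tails.
    bets = [toss for toss in tosses for _ in range(2 if toss == "tails" else 1)]
    correct_bets = sum(toss == "tails" for toss in bets) / len(bets)

    # Per participant: a participant is right iff their coin actually landed tails.
    correct_participants = sum(toss == "tails" for toss in tosses) / runs

    print(correct_bets)          # ~2/3 of bets are correct
    print(correct_participants)  # ~1/2 of participants are correct

    Both frequencies are right; they just answer different questions.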

    Although you literally said in your previous post that betting is irrelevant, so why go back to it?
  • Sleeping Beauty Problem
    Most of her awakenings occur on the rare occasion when 100 tosses yield heads, which forms the basis for her credence P(100H) being greater than 1/2.Pierre-Normand

    Except the experiment is only conducted once. Either all her interviews follow one hundred heads, or her only interview (just the one) follows something other than one hundred heads.

    The second is more likely. That’s really all there is to it. I would say it’s irrational for her to reason any other way.

    However, the Sleeping Beauty problem specifically inquires about her credence, not about the rationality of her attempt to maximize her expected value, or her preference for some other strategy (like maximizing the number of wins per experimental run rather than average gain per individual bet).

    Even if she were to endorse your perspective on the most rational course of action (which doesn't seem unreasonable to me either), this wouldn't influence her credence. It would simply justify her acting in a manner that doesn't prioritize maximizing expected value on the basis of this credence.
    Pierre-Normand

    And this is precisely why the betting examples that you and others use don’t prove your conclusion.
  • Sleeping Beauty Problem
    The only significant divergence lies in the frequency of opportunities: the hostage can't be provided with frequent chances to escape without invalidating the analogy, whereas Sleeping Beauty can be given the chance to guess (or place a bet) every single day she awakens without undermining the experiment.

    However, we can further refine the analogy by allowing the hostage to escape unharmed in all instances, but with the caveat that he will be recaptured unknowingly and re-administered the amnesia-inducing drug. This would align the scenarios more closely.
    Pierre-Normand

    This is heading towards a betting example, which as I've explained before is misleading. There are three different ways to approach it:

    1. The same participant plays the game 2^100 times. If they bet on 100 heads then eventually they will win more than they lose, and so it is rational to bet on 100 heads.

    2. 2^100 participants play the game once. If they bet on 100 heads then almost everybody will lose, and so it is rational to not bet on 100 heads (even though the one winner's winnings exceed all the losers' losses).

    3. One participant plays the game once. If they bet on 100 heads then they are almost certain to lose, and so it is rational to not bet on 100 heads.

    Given that the very premise of the experiment is that it is to only be run once, a rational person would only consider 3.

    And I really don’t see any counterexamples refuting 3. I will never reason or bet that 100 heads in a row is more likely.

    But even if the same participant were to repeat the experiment 2^100 times, they don't bet on 100 heads because they think it's more likely; they bet on 100 heads because they know that eventually they will win, and that the amount they will win is greater than the amount they will lose.
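
    To separate "likely to win" from "worth betting on", here's a scaled-down sketch (mine; 10 tosses instead of 100, and the stake and payout are made up): the chance of winning a single run stays tiny even when the expected value is positive.

    from fractions import Fraction

    n_tosses = 10                       # scaled down from 100 for illustration
    p_win = Fraction(1, 2) ** n_tosses  # probability that every toss lands heads
    stake = 1
    payout = 2 ** n_tosses + 1          # any payout above 2**n_tosses makes the EV positive

    expected_value = p_win * payout - (1 - p_win) * stake
    print(float(p_win))           # ~0.001: a single run is almost certainly a loss
    print(float(expected_value))  # > 0: the bet still pays off over enough repetitions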
  • Sleeping Beauty Problem


    And with this variation, do you not agree that the probability of it being heads is 3/8? Would you not also agree that the probability of it being heads in this scenario must be less than the probability of it being heads in the traditional scenario, where being woken up on your assigned day(s) is guaranteed? If so then it must be that the probability of it being heads in the traditional scenario is greater than 3/8, i.e. 1/2.
  • Sleeping Beauty Problem
    However, consider a different scenario where the hostage has a small, constant probability ε of discovering the means of escape each day (case-2). In this scenario, stumbling upon this means of escape would provide the hostage with actionable evidence that he could use to update his credence. Now, he would believe with a probability of 6/11 that he's in safehouse #2, thereby justifying his decision to pick up the torch. Consequently, given that 6 out of 11 kidnapped victims who find the means to escape are surrounded by lions, 6 out of 11 would survive.Pierre-Normand

    Then this is a different scenario entirely. If we consider the traditional problem, it would be that after the initial coin toss to determine which days she could be woken, another coin toss each day determines if she will be woken (heads she will, tails she won't).

    So the probability that the coin lands heads and she wakes on Monday is 1/4, the probability that the coin lands tails and she wakes on Monday is 1/4, and the probability that the coin lands tails and she wakes on Tuesday is 1/4. A simple application of Bayes' theorem is:



    As compared to the normal situation which would be:

  • Sleeping Beauty Problem
    Also, as an aside, if you correctly reason that it's tails then you escape on the first day, and so you can rule out today being the second day (assuming you understand that you would always reason as if it's tails).
  • Sleeping Beauty Problem
    Your credence in each possibility is based on the number of ways in which you could find yourself in your current situation given the possible outcomes of the specific coin toss.Pierre-Normand

    That's the very point I disagree with, and it is most evident with the example of tossing 100 heads in a row. The possible outcomes have no bearing on my credence that the coin landed heads 100 times in a row. The only thing I would consider is that the coin landing heads 100 times in a row is so unlikely that it almost certainly didn't happen, and I think any rational person would agree.

    This is less clear to see with the Sleeping Beauty problem given that heads and tails are equally likely, and so prima facie it doesn't matter which you pick, but given that there are two opportunities to win with tails there's no reason not to pick tails.

    If you don't like to consider my extreme example because the numbers are too high then let's consider a simpler version. Rather than a coin toss it's a die roll. If it lands 1-5 then it's safehouse 1 with crocodiles for one day; if it lands 6 then it's safehouse 2 with lions for six days. Any rational person would take the wooden plank, and 5 out of every 6 kidnapped victims would survive.
  • Sleeping Beauty Problem
    It's worth noting that your provided sequence converges on 1/3. If the captive is not keeping track of the date, their credence should indeed be exactly 1/3.Pierre-Normand

    I don't think this is relevant to the Sleeping Beauty problem. It's a different experiment with different reasoning.

    In this case you're in safehouse 1 if it wasn't tails yesterday and isn't tails today, and you're in safehouse 2 if it was tails either yesterday or today. Obviously the latter is more likely. I think only talking about the preceding coin toss is a kind of deception.

    Also it converges to 1/3 only as you repeat the coin tossing, whereas in the traditional problem the coin is only tossed once.
  • Sleeping Beauty Problem
    First day P = 1/2, second day 1/4, third day 3/8, fourth day 5/16, etc.

    Not sure what this is supposed to show?
  • Sleeping Beauty Problem
    Actually that’s not right (starting third day). Need to think about this. First two days are right though.

    Not sure how this is at all relevant though.
  • Sleeping Beauty Problem
    Then on the first day P = 1/2, the second day P = 1/4, the third day P = 1/8, etc.
  • Sleeping Beauty Problem
    I don’t quite understand this example. There are multiple coin flips and no amnesia?
  • Sleeping Beauty Problem
    I’ve explained the error with betting examples before. Getting to bet twice if it’s tails doesn’t mean that tails is more likely.
  • Sleeping Beauty Problem


    Consider my extreme example. There are two ways to reason:

    1. 2/3 of all interviews are 100 heads in a row interviews, therefore this is most likely a 100 heads in a row interview
    2. Only 1 in 2^100 of all participants are 100 heads in a row participants, therefore I am most likely not a 100 heads in a row participant

    I would say that both are true, but also contradictory. Which reasoning it is proper to apply depends on the manner in which one is involved.

    For the sitter, his involvement is determined by being randomly assigned an interview, and so I think the first reasoning is proper. For the participant, his involvement is determined by tossing a coin 100 times, and so I think the second reasoning is proper.

    We might want to measure which reasoning is proper by appeals to bets or expected values or success rate or whatever, but then there are two ways to reason on that:

    1. 2/3 of all guesses are correct if every guess is that the coin landed heads 100 times in a row
    2. Only 1 in 2^100 of all participants are correct if they all guess that the coin landed heads 100 times in a row

    How do we determine which of these it is proper to apply?

    So maybe there is no right answer as such, just more or less proper (or more or less compelling). And all I can say is that if the experiment is being run just once, and I am put to sleep and then woken up, my credence that the coin landed heads 100 times in a row would be 1/2^100.