• Michael
    15.4k
    Most of her awakenings occur on the rare occasion when 100 tosses yield heads, which forms the basis for her credence P(100H) being greater than 1/2.Pierre-Normand

    Except the experiment is only conducted once. Either all her interviews follow one hundred heads or all her interviews (one) follow not one hundred heads.

    The second is more likely. That’s really all there is to it. I would say it’s irrational for her to reason any other way.

    However, the Sleeping Beauty problem specifically inquires about her credence, not about the rationality of her attempt to maximize her expected value, or her preference for some other strategy (like maximizing the number of wins per experimental run rather than average gain per individual bet).

    Even if she were to endorse your perspective on the most rational course of action (which doesn't seem unreasonable to me either), this wouldn't influence her credence. It would simply justify her acting in a manner that doesn't prioritize maximizing expected value on the basis of this credence.
    Pierre-Normand

    And this is precisely why the betting examples that you and others use don’t prove your conclusion.
  • Pierre-Normand
    2.4k
    Except the experiment is only conducted once.Michael

    I've also been working under the assumption that the experiment is conducted only once. Sleeping Beauty's calculation that P(H) = 1/3 doesn't hinge on her participation in the experiment being repeated. She's aware that if the coin lands heads, she will be awakened once, but if it lands tails, she will be awakened twice. If we run this experiment once with three participants, and all three of them bet on T every time they are awakened, they will be correct 2/3 of the time on average, which aligns with their credences.

    In the same vein, in your dice roll variation, Sleeping Beauty knows that if the die didn't land on six, she will be awakened once, but if it did land on six, she would be awakened six times. This information is sufficient to justify her credence P(6) = 6/11. If we have eleven participants each participating in this experiment once, and they all bet on 'six' every time they are awakened, they will be correct 6/11 of the time on average.
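    Both frequency claims are easy to check with a short simulation. The sketch below is illustrative only (the function name and parameters are mine, not part of the problem statement): it counts what fraction of awakenings follow the designated outcome when that outcome triggers extra awakenings.

```python
import random

def hit_interview_fraction(p_hit, wakings_if_hit, wakings_otherwise=1,
                           runs=200_000, seed=0):
    """Fraction of all awakenings that follow the 'hit' outcome,
    when a hit triggers wakings_if_hit awakenings and any other
    outcome triggers wakings_otherwise."""
    rng = random.Random(seed)
    hit_wakings = total_wakings = 0
    for _ in range(runs):
        if rng.random() < p_hit:
            hit_wakings += wakings_if_hit
            total_wakings += wakings_if_hit
        else:
            total_wakings += wakings_otherwise
    return hit_wakings / total_wakings

# Coin version: tails (p = 1/2) yields two awakenings -> near 2/3.
print(hit_interview_fraction(0.5, 2))
# Die version: a six (p = 1/6) yields six awakenings -> near 6/11.
print(hit_interview_fraction(1/6, 6))
```

    Whether this per-awakening frequency is the right measure of credence is, of course, exactly what the thread disputes.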
  • Michael
    15.4k
    If we run this experiment once with three participants, and all three of them bet on T every time they are awakened, they will be correct 2/3 of the time on average, which aligns with their credences.Pierre-Normand

    2/3 of bets are right, but that’s because you get to bet twice if it’s tails. That doesn’t prove that tails is more likely. With 4 participants, 1/2 of participants are right whether betting heads or tails. You can frame bets to seemingly support either conclusion.

    Although you literally said in your previous post that betting is irrelevant, so why go back to it?
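    The two framings can be put side by side in a simulation (a sketch; the function name and parameters are mine): per-bet accuracy and per-participant accuracy come apart exactly as described.

```python
import random

def accuracies(bet_on_tails, participants=200_000, seed=1):
    """Each participant bets the same way at every interview:
    tails runs have two interviews, heads runs have one."""
    rng = random.Random(seed)
    correct_bets = total_bets = correct_participants = 0
    for _ in range(participants):
        tails = rng.random() < 0.5
        n_bets = 2 if tails else 1
        won = (tails == bet_on_tails)
        total_bets += n_bets
        if won:
            correct_bets += n_bets
            correct_participants += 1
    return correct_bets / total_bets, correct_participants / participants

per_bet, per_person = accuracies(bet_on_tails=True)
# per_bet lands near 2/3 while per_person lands near 1/2, so each
# framing supports a different answer, as the post argues.
print(per_bet, per_person)
```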
  • Michael
    15.4k
    Sleeping Beauty's calculation that P(H) = 1/3 doesn't hinge on her participation in the experiment being repeated. She's aware that if the coin lands heads, she will be awakened once, but if it lands tails, she will be awakened twice. If we run this experiment once with three participants, and all three of them bet on T every time they are awakened, they will be correct 2/3 of the time on average, which aligns with their credences.Pierre-Normand

    This goes back to what I said before. There are two ways to reason:

    1. I should reason as if I am randomly selected from the set of all participants
    2. I should reason as if my interview is randomly selected from the set of all interviews

    Why would Sleeping Beauty reason as if the experiment was conducted multiple times and that her current interview was randomly selected from that set of all possible interviews, given that that's not how the experiment is conducted?

    The experiment is conducted by tossing a coin, and so it is only rational to reason as if she was randomly selected from the set of all possible participants.
  • Pierre-Normand
    2.4k
    And this is precisely why the betting examples that you and others use don’t prove your conclusion.Michael

    The betting examples serve to illustrate that the credences held by thirders, unlike those held by halfers, align with the frequencies of outcomes in such a way that if they were to place bets based on these credences, their expected values would satisfy the equation EV = Σ(P(i)*$i), where P(i) represents the probabilities of outcomes and $i the corresponding payouts. This illustration does not necessitate that Sleeping Beauty actually seeks to maximize her expected value. Rather, it reveals that if this were her goal, her credences would guide her actions effectively. Moreover, if numerous participants were placed in identical situations to Sleeping Beauty's, their credences would correspond to the frequencies of the outcomes they anticipate, which aligns closely with the intuitive definition of a credence.

    On the other hand, it's challenging to reconcile a statement such as: 'My current credence P(H) is 1/2, but if I were placed in this exact same situation repeatedly, I would expect the outcome H to occur one third of the time.' This perspective, typically associated with halfers, seems to introduce an incongruity between stated credence and expected frequency of outcomes.
  • Michael
    15.4k
    My current credence P(H) is 1/2, but if I were placed in this exact same situation repeatedly, I would expect the outcome H to occur one third of the time.Pierre-Normand

    I wouldn't say that the outcome H occurs one third of the time. I would say that one third of interviews happen after H occurs, because two interviews happen after every tails.

    I think thirders commit a non sequitur when they claim that tails is twice as likely. Amnesia between interviews doesn't make it any less fallacious.
  • Michael
    15.4k
    I would like the halfer to explain why ruling out the Tuesday scenario doesn't affect their credence in the coin toss outcome at all.Pierre-Normand

    I've been thinking about this and I think there's a simple analogy to explain it.

    I have one red ball in one bag and two blue balls in a second bag. I am to give you a ball at random. Your credence that the ball will be red should be 1/2.

    Being told that it's Monday is just like being told that the second bag only contains one blue ball. It does nothing to affect your credence that the ball you will be given is red.
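    The irrelevance claim can be checked directly. In the sketch below (assuming, as the later figures P(R) = 1/2 and P(B1) = P(B2) = 1/4 suggest, that a bag is chosen at random first), the number of blue balls in the second bag never enters the calculation:

```python
import random

def p_red(blues_in_second_bag, trials=100_000, seed=2):
    """Choose a bag at random, then a ball from it. Bag one holds
    the single red ball; bag two holds only blue balls, so the
    blue-ball count (the analogue of learning it's Monday) cannot
    affect the red frequency and is deliberately unused."""
    rng = random.Random(seed)
    red = sum(rng.random() < 0.5 for _ in range(trials))
    return red / trials

# Near 1/2 with two blue balls in the second bag, and still near 1/2 with one.
print(p_red(2), p_red(1))
```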
  • Michael
    15.4k
    I think the above in fact shows the error in Elga's paper:

    But your credence that you are in T1, after learning that the toss outcome is Tails, ought to be the same as the conditional credence P(T1|T1 or T2), and likewise for T2. So P(T1|T1 or T2) = P(T2|T1 or T2), and hence P(T1) = P(T2).

    ...

    But your credence that the coin will land Heads (after learning that it is Monday) ought to be the same as the conditional credence P(H1|H1 or T1). So P(H1|H1 or T1)=1/2, and hence P(H1) = P(T1).

    Combining results, we have that P(H1) = P(T1) = P(T2). Since these credences sum to 1, P(H1)=1/3.

    There is a red ball in one bag and two numbered blue balls in a second bag. You will be given a ball at random. According to Elga's reasoning:

    1. P(B1|B1 or B2) = P(B2|B1 or B2), therefore P(B1) = P(B2)

    2. P(R|R or B1) = 1/2, therefore P(R) = P(B1)

    3. Therefore, P(R) = P(B1) = P(B2) = 1/3

    The second inference and so conclusion are evidently wrong, given that P(R) = 1/2 and P(B1) = P(B2) = 1/4.

    So his reasoning is a non sequitur.
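    The figures in the last step are easy to verify by simulation, on the bag-first reading of "given a ball at random" (a bag is chosen at random, then a ball uniformly from that bag; the function name is mine):

```python
import random
from collections import Counter

def ball_frequencies(trials=200_000, seed=3):
    """One red ball R in bag one; numbered blue balls B1, B2 in bag two.
    A bag is chosen at random, then a ball uniformly from that bag."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(trials):
        if rng.random() < 0.5:
            counts["R"] += 1                       # bag one: the red ball
        else:
            counts[rng.choice(["B1", "B2"])] += 1  # bag two: one of the blues
    return {k: v / trials for k, v in counts.items()}

freqs = ball_frequencies()
# freqs["R"] is near 1/2; freqs["B1"] and freqs["B2"] are each near 1/4.
print(freqs)
```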
  • Pierre-Normand
    2.4k
    There is a red ball in one bag and two numbered blue balls in a second bag. You will be given a ball at random.Michael

    This scenario doesn't accurately reflect the Sleeping Beauty experiment. Instead, imagine that one bag is chosen at random. You are then given one ball from that bag, but you're not allowed to see it just yet. You then drink a shot of tequila that causes you to forget what just happened. Finally, you are given another ball from the same bag, unless the bag is now empty, in which case you're dismissed. The balls are wrapped in aluminum foil, so you can't see their color. Each time you're given a ball, you're invited to express your credence regarding its color (or to place a bet, if you wish) before unwrapping it.

    According to Elga's reasoning:

    1. P(B1|B1 or B2) = P(B2|B1 or B2), therefore P(B1) = P(B2)

    2. P(R|R or B1) = 1/2, therefore P(R) = P(B1)

    3. Therefore, P(R) = P(B1) = P(B2) = 1/3

    The second inference and so conclusion are evidently wrong, given that P(R) = 1/2 and P(B1) = P(B2) = 1/4.

    So his reasoning is a non sequitur.
    Michael

    Here, P(R|R or B1) is the probability that the ball you've just received is red, conditioned on the information (revealed to you) that this is the first ball you've received in this run of the experiment. In other words, you now know you haven't taken a shot of tequila. Under these circumstances, P(R) = P(B1) = 1/2.
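    Under the modified tequila protocol described above, the per-presentation frequencies come out differently from the original bag game; a quick sketch (illustrative names, not from either post):

```python
import random

def red_presentation_fraction(trials=200_000, seed=4):
    """Choose a bag at random, then present its balls one at a time:
    the red bag yields one presentation, the blue bag two."""
    rng = random.Random(seed)
    red = total = 0
    for _ in range(trials):
        if rng.random() < 0.5:
            red += 1      # red bag: a single (red) presentation
            total += 1
        else:
            total += 2    # blue bag: two (blue) presentations
    return red / total

# Near 1/3: one presentation in three is of the red ball, mirroring
# the thirder's per-awakening frequency in the Sleeping Beauty setup.
print(red_presentation_fraction())
```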

    Incidentally, since we started this discussion, I've read Elga's and Lewis's papers. But I've also read most of Robert Stalnaker's "Another Attempt to Put Sleeping Beauty to Rest" (2013) and Silvia Milano's "Bayesian Beauty" (2020), both of which are illuminating. Stalnaker modifies Lewis' centred-world approach, and Milano reconciles such approaches (which interpret indexical content) with Bayesian principles.
  • Michael
    15.4k
    Here, P(R|R or B1) is the probability that the ball you've just received is red, conditioned on the information (revealed to you) that this is the first ball you've received in this run of the experiment. In other words, you now know you haven't taken a shot of tequila. Under these circumstances, P(R) = P(B1) = 1/2.Pierre-Normand

    There is a difference between these two assertions:

    1. P(R|R or B1) = P(B1|R or B1)
    2. P(R) = P(B1)

    The first refers to conditional probabilities, the second to unconditional probabilities, and in my example the first is true but the second is false.

    This scenario doesn't accurately reflect the Sleeping Beauty experiment.Pierre-Normand

    Even if it doesn't, it does show that Elga's assertion that if P(A|A or B) = P(B|A or B) then P(A) = P(B) is not true a priori, and as he offers no defence of this assertion with respect to the Sleeping Beauty experiment his argument doesn't prove that P(H1) = 1/3.
  • Pierre-Normand
    2.4k
    That’s not accurate. There is a difference between these two assertions:

    1. P(R|R or B1) = P(B1|R or B1)
    2. P(R) = P(B1)

    The first refers to conditional probabilities, the second to unconditional probabilities, and in my example the first is true but the second is false.
    Michael

    It looks like you may have misinterpreted Elga's paper. He doesn't define P as an unconditional probability. In fact, he expressly defines P as "the credence function you ought to have upon first awakening." Consequently, P(H1) and P(T1) are conditional on Sleeping Beauty being in a centered possible world where she is first awakened. The same applies to P(R) and P(B1), which are conditional on you being in a centered possible world where you are presented with a ball still wrapped in aluminum foil before being given a tequila shot.

    To understand what P(R) entails, let's look at the situation from the perspective of the game master. At the start of the game, there is one red ball in one bag and two blue balls in the other. The game master randomly selects a bag and takes out one ball (without feeling around to see if there is another one). They hand this ball to you. What's the probability that this ball is red? This conditional probability is represented as P(R). This also represents how you should update your credence about the color of the ball upon learning that this is the first (or the only) ball given to you before the tequila shot. In this scenario, both you and the game master occupy the same centered possible world and share the same evidence.

    Incidentally, I think Stalnaker's and Milano's papers both produce arguments that are easier to follow and, it seems to me, more rigorous than Elga's valiant first attempt.
  • Michael
    15.4k
    It looks like you may have misinterpreted Elga's paper. He doesn't define P as an unconditional probability. In fact, he expressly defines P as "the credence function you ought to have upon first awakening." Consequently, P(H1) and P(T1) are conditional on Sleeping Beauty being in a centered possible world where she is first awakened. The same applies to P(R) and P(B1), which are conditional on you being in a centered possible world where you are presented with a ball still wrapped in aluminum foil before being given a tequila shot.Pierre-Normand

    I don't see how this entails that P(A|A or B) = P(B|A or B) entails P(A) = P(B).

    My example proves that this doesn't follow where P is the credence function I ought to have after being explained the rules of my game. Elga doesn't explain why it follows where P is the credence function I ought to have upon first awakening.

    It certainly doesn't follow a priori, and so without any further explanation his argument fails.

    To understand what P(R) entails, let's look at the situation from the perspective of the game master. At the start of the game, there is one red ball in one bag and two blue balls in the other. The game master randomly selects a bag and takes out one ball (without feeling around to see if there is another one). They hand this ball to you. What's the probability that this ball is red?Pierre-Normand

    1/2.
  • Pierre-Normand
    2.4k
    My example proves that this doesn't follow where P is the credence function I ought to have after being explained the rules of my game. Elga doesn't explain why it follows where P is the credence function I ought to have upon first awakening.Michael

    I apologize. I misunderstood this part of Elga's argument. Although P(T1) and P(H1) are the credences Sleeping Beauty should have upon being first awakened, they are not conditional on her being in a centered world where she is first awakened. It is rather P(H1|H1 or T1) that is thus conditioned.

    I'm going to study the argument more closely and comment later.

    Incidentally, the fact that Sleeping Beauty's credence that the coin landed (or soon is going to land) heads, upon first awakening, is being increased by 1/6 after she learns that today is Monday, is a point of agreement between Lewis and Elga.

    Elga's interpretation that her credence P(H) gets updated from 1/3 to 1/2 is easily accounted for by the fact that, after she can rule out today being Tuesday, the possible worlds [H1, asleep] and [T1, T2] remain equiprobable. Since the setup of the experiment doesn't even require that anyone look at the result of the toss before Monday night, nothing changes if the toss is actually performed after Sleeping Beauty's awakening. In that case the credences expressed on Monday are about a future coin toss outcome rather than an already actualized one. My previous remark that Sleeping Beauty and the "game master" here are in the same centered possible world and share the same evidence applies.

    Lewis's halfer argument, though, commits him to claim that Sleeping Beauty's credence P(H) gets updated from 1/2 to 2/3. In other words, after she is first awakened and told that today is Monday, it becomes rationally warranted for Sleeping Beauty to believe that the probability that the coin landed heads (or will land heads when it is being tossed later tonight) is 2/3. Lewis bites this bullet and explains this as a peculiar form of knowledge about the future. It would have been intriguing to know if Lewis would have extended this line of reasoning to the game master's credence, given that they share the same epistemic situation as Sleeping Beauty. If so, this would suggest a highly unusual implication - that one could acquire knowledge about future events based solely on the fact that someone else would be asleep at the time of those events.
  • Michael
    15.4k
    If so, this would suggest a highly unusual implication - that one could acquire knowledge about future events based solely on the fact that someone else would be asleep at the time of those events.Pierre-Normand

    Elga's reasoning has its own unusual implication. In his own words:

    Before being put to sleep, your credence in H was 1/2. I’ve just argued that when you are awakened on Monday, that credence ought to change to 1/3. This belief change is unusual. It is not the result of your receiving new information — you were already certain that you would be awakened on Monday.

    ...

    Thus the Sleeping Beauty example provides a new variety of counterexample to Bas Van Fraassen’s ‘Reflection Principle’ (1984:244, 1995:19), even an extremely qualified version of which entails the following:

    "Any agent who is certain that she will tomorrow have credence x in proposition R (though she will neither receive new information nor suffer any cognitive mishaps in the intervening time) ought now to have credence x in R."

    I'm inclined towards double-halfer reasoning. P(Heads) = P(Heads | Monday) = 1/2, much like P(Red) = P(Red|Red or Blue 1) = 1/2. Even if the experiments are not exactly the same, I suspect something much like it is going on, again given the Venn diagram here. I just think the way the Sleeping Beauty problem is written makes this harder to see.
  • Michael
    15.4k
    Since the setup of the experiment doesn't even require that anyone look at the result of the toss before Monday night, nothing changes if the toss is actually performed after Sleeping Beauty's awakening. In that case the credences expressed on Monday are about a future coin toss outcome rather than an already actualized one.Pierre-Normand

    I think this is a better way to consider the issue. Then we don't talk about Heads & Monday or Tails & Monday. There is just a Monday interview and then possibly a Tuesday interview. It's not the case that two thirds of all interviews are Tails interviews; it's just the case that half of all experiments have Tuesday interviews. Which is why it's more rational to reason as if one is randomly selected from the set of possible participants rather than to reason as if one's interview is randomly selected from the set of possible interviews (where we distinguish between Heads & Monday and Tails & Monday).

    And I think it's even better to not consider days and just consider number of times wakened. So first she is woken up, then put to sleep, then a coin is tossed, and if tails she's woken again. Then we don't get distracted by arguing that her being asleep on Tuesday if Heads is part of the consideration. It doesn't make sense to say that she's asleep during her second waking if Heads.

    With this reasoning I think Bayes' theorem is simple. The probability of being woken up is 1 and the probability of being woken up if Heads is 1. That she's woken up a second time if Tails is irrelevant. As such:

    P(Heads | Woken) = P(Woken | Heads) * P(Heads) / P(Woken)
    P(Heads | Woken) = 1 * 1/2 / 1
    P(Heads | Woken) = 1/2
    This, incidentally, would be my answer to Milano's "Bayesian Beauty".

    I don't have access to Stalnaker's paper to comment on that.
  • Pierre-Normand
    2.4k
    That's exactly the implication of Elga's reasoning.Michael

    Lewis's treatment yields an incredible result. Elga's treatment yields an unsurprising result by means of a controversial method. This is why I prefer the treatments by Stalnaker and (especially) Milano. They both demystify the method and Milano, in addition, shows the self-centered world method to be consistent with Bayesian conditionalization, and both to satisfy van Fraassen's principles of reflection.
  • Michael
    15.4k
    And I think it's even better to not consider days and just consider number of times wakened. So first she is woken up, then put to sleep, then a coin is tossed, and if tails she's woken again. Then we don't get distracted by arguing that her being asleep on Tuesday if Heads is part of the consideration. It doesn't make sense to say that she's asleep during her second waking if Heads.

    With this reasoning I think Bayes' theorem is simple. The probability of being woken up is 1 and the probability of being woken up if Heads is 1. That she's woken up a second time if Tails is irrelevant.
    Michael

    In fact there's an even simpler way to phrase Bayes' theorem, even using days (where "Mon or Tue" means "today is Monday or Tuesday").

    P(Heads | Mon or Tue) = P(Mon or Tue | Heads) * P(Heads) / P(Mon or Tue)
    P(Heads | Mon or Tue) = 1 * 1/2 / 1
    P(Heads | Mon or Tue) = 1/2
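    For what it's worth, the arithmetic can be checked with exact fractions (a trivial sketch; the variable names are mine):

```python
from fractions import Fraction

p_heads = Fraction(1, 2)
p_mon_or_tue = Fraction(1)                # on waking, it is always Monday or Tuesday
p_mon_or_tue_given_heads = Fraction(1)

p_heads_given_mon_or_tue = (p_mon_or_tue_given_heads * p_heads) / p_mon_or_tue
print(p_heads_given_mon_or_tue)           # 1/2
```

    Whether this conditioning captures Sleeping Beauty's epistemic situation is, again, the disputed point.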
  • Pierre-Normand
    2.4k
    I think this is a better way to consider the issue. Then we don't talk about Heads & Monday or Tails & Monday. There is just a Monday interview and then possibly a Tuesday interview. It's not the case that two thirds of all interviews are Tails interviews; it's just the case that half of all experiments have Tuesday interviews. Which is why it's more rational to reason as if one is randomly selected from the set of possible participants rather than to reason as if one's interview is randomly selected from the set of possible interviews.Michael

    It appears that you are suggesting a false dichotomy. Logically, both (1) two-thirds of all interviews being Tails interviews, and (2) half of all experiments having Tuesday interviews can simultaneously hold true. Both (1) and (2) are in fact logical implications from the problem's stipulations. The terms 'heads', 'tails', 'Monday', and 'Tuesday' merely serve as convenient labels for the spectrum of possibilities. Solving a problem doesn't involve selectively ignoring one of its stipulations or dismissing its acknowledgement as irrational. If you intentionally disregard a stipulation of the problem (or a logical consequence thereof), you are effectively addressing a different problem.

    And I think it's even better to not consider days and just consider number of times wakened. So first she is woken up, then put to sleep, then a coin is tossed, and if tails she's woken again. Then we don't get distracted by arguing that her being asleep on Tuesday if Heads is part of the consideration. It doesn't make sense to say that she's asleep during her second waking if Heads.

    I don't think anyone said that she experiences anything on Tuesday when she sleeps. Milano addresses the related concern that it is not an epistemic possibility that Sleeping Beauty will experience the centered possible world in which it is Tuesday and she is asleep. That doesn't mean that this possibility can't be considered in Bayesian conditionalization. You can ascribe non-zero probabilities to states that you will not experience, which happens for instance when you buy life insurance.
  • Pierre-Normand
    2.4k
    Just for fun, I asked GPT-4 to comment on your Bayesian analysis.

    What do you make of Michael's purported solution to the Sleeping Beauty problem:

    "In fact there's an even simpler way to phrase Bayes' theorem, even using days.

    P(Heads | Monday or Tuesday) = P(Monday or Tuesday | Heads) * P(Heads) / P(Monday or Tuesday)
    P(Heads | Monday or Tuesday) = 1 * 1/2 / 1
    P(Heads | Monday or Tuesday) = 1/2"

    GPT-4

    Michael's computation seems to be misguided. He's attempting to use Bayes' theorem to calculate P(Heads | Monday or Tuesday), i.e., the probability of Heads given that it is either Monday or Tuesday. Here's why this approach doesn't work:

    First, the event "Monday or Tuesday" is trivial - it will always be either Monday or Tuesday, hence P(Monday or Tuesday) = 1. This factor does not contribute any information to the analysis.

    Second, P(Monday or Tuesday | Heads) equals 1, which indicates that if the coin lands Heads, it will definitely be either Monday or Tuesday, which is trivially true.

    Third, P(Heads) is the unconditional or prior probability of the coin landing Heads, which equals 1/2 under the problem setup.

    Thus, Michael's computation ends up simply restating the original unconditional probability of Heads, P(Heads) = 1/2, instead of calculating the conditional probability of Heads given it's either Monday or Tuesday.

    The Sleeping Beauty problem is trickier than a standard conditional probability problem because Sleeping Beauty's state of knowledge changes throughout the problem. On waking, she doesn't know what day it is, and this affects her credences in ways that can't be captured by Michael's use of Bayes' theorem. To accurately solve the problem, one needs to account for Sleeping Beauty's changing state of knowledge and the conditions under which she forms her beliefs.
  • Michael
    15.4k
    So ChatGPT is saying that P(Heads | today is Monday or Tuesday) = 1/2 is trivially true. Doesn't that just prove my point?
  • Pierre-Normand
    2.4k
    So ChatGPT is saying that P(Heads | Monday or Tuesday) = 1/2 is trivially true. Doesn't that just prove my point?Michael

    GPT-4 wasn't endorsing your conclusion. Rather, it pointed out that your calculation of P(Heads | Monday or Tuesday) = 1/2 simply restates the unconditional probability P(H) without taking into account Sleeping Beauty's epistemic situation.

    The argument you've put forward could be seen as suggesting that the vast body of literature debating the halfer, thirder, and double-halfer solutions has somehow missed the mark, treating a trivial problem as a complex one. This isn't an argument from authority. It's just something to ponder over.
  • Michael
    15.4k
    Rather, it pointed out that your calculation of P(Heads | Monday or Tuesday) = 1/2 simply restates the unconditional probability P(H) without taking into account Sleeping Beauty's epistemic situation.Pierre-Normand

    As Elga says:

    This belief change is unusual. It is not the result of your receiving new information — you were already certain that you would be awakened on Monday. (We may even suppose that you knew at the start of the experiment exactly what sensory experiences you would have upon being awakened on Monday.) Neither is this belief change the result of your suffering any cognitive mishaps during the intervening time — recall that the forgetting drug isn’t administered until well after you are first awakened. So what justifies it?

    The answer is that you have gone from a situation in which you count your own temporal location as irrelevant to the truth of H, to one in which you count your own temporal location as relevant to the truth of H.

    Sleeping Beauty's "epistemic situation" is only that her current situation is relevant. She doesn't learn anything new. All she knows is that her temporal location is either Monday or Tuesday. Before the experiment began this wasn't relevant, and so she only considers P(H). After being woken up this is relevant, and so she considers P(H | Mon or Tue).

    That they both give the same answer (because Monday or Tuesday is trivially true) just suggests that Lewis was right. It really is as simple as (in his words) "Only new relevant evidence, centred or uncentered, produces a change in credence; and the evidence (H1 ∨ H2 ∨ H3) is not relevant to HEADS vs TAILS".

    The argument you've put forward could be seen as suggesting that the vast body of literature debating the halfer, thirder, and double-halfer solutions has somehow missed the mark, treating a trivial problem as a complex one. This isn't an argument from authority. It's just something to ponder over.Pierre-Normand

    Well, I would also think that my argument that P(A|A or B) = P(B|A or B) doesn't entail P(A) = P(B) is quite trivial. Maybe I've made a mistake (whether with this or my interpretation of Elga), or maybe Elga did. I'll admit that the former is most likely, but my reasoning appears sound.
  • fdrake
    6.6k
    I've been reading along, I have a meta question for you both @Pierre-Normand @Michael - why is it helpful to discuss variants which are allegedly the same as the original problem when you both don't seem to agree what the sampling mechanism in the original problem is?
  • Michael
    15.4k
    I'm not sure what you mean by the sampling mechanism. There is one experiment with one coin toss. We both appear to agree on that.
  • Pierre-Normand
    2.4k
    I've been reading along, I have a meta question for you both @Pierre-Normand @Michael - why is it helpful to discuss variants which are allegedly the same as the original problem when you both don't seem to agree what the sampling mechanism in the original problem is?fdrake

    I introduced the prisoner variant to satisfy Michael's insistence that Sleeping Beauty's credence only be evaluated within a single-run experimental context. It aimed to show that it is rational for the prisoner to act under the assumption that their safehouse is surrounded by crocodiles with probability 6/11 whenever they get an opportunity to act on the basis of their credence, and this in spite of the fact that they had initially been brought to a safehouse surrounded by lions with probability 6/11. It also highlighted how the credence of the participant (prisoner) might interact with the different credence of an external agent (the police). I think my distinction of case-1 and case-2 also helped diagnose the source of Michael's halfer intuition. It stems from confusing the standard Sleeping Beauty problem with a different but closely related one (that indeed may be construed as stemming from a different sampling mechanism).

    The blue and red balls mechanism was introduced by Michael but I proposed to refine it in order to better mirror the Sleeping Beauty protocol.

    Regarding the sampling mechanism, I believe the centered possible world approach offers a compelling interpretation, although Milano suggests that there may still be residual disagreement about setting priors. This approach is compelling because it allows the participant to reason about her epistemic situation in a natural way without the need to incorporate any weird metaphysical conjectures about how she finds herself in her current situation. However, I'm not entirely confident about this. Milano's chosen values for the parameters α and β seem well-argued to me, grounded in both the stipulations of the problem and uncontroversial rationality assumptions. But there may be something I've overlooked.
  • fdrake
    6.6k
    @Pierre-Normand

    Aye. But you're disagreeing on whether the coin toss is the only random thing in the experiment (and is realised in pairs of days), whether it's appropriate to assign a random variable to model part of SB's credence (her "subjective probability") and the relationship between that random variable and the coin toss. Like the enduring disagreement you both had regarding whether the interviews can be seen as drawn individually from a bag and have probabilities assigned to that (which can be used to support a thirder position).

    The sampling mechanism determines what would create an item of data in the experiment. That could be "pairs of days" or "individual days" or "coinflip result day pair triples" or whatever. The description in the OP doesn't determine one sampling mechanism, it just suggests one. Given such a mechanism, the calculations are pretty trivial.

    As an example, when you (Michael) were speaking about seeing the interviews as drawn from a bag, because SB's interview-days "observe" coin toss results, that specifies a sampling mechanism on individual interview days which are conditionally related to coin tosses. Two of those interview days occur in tails, one in heads, so the probability of heads is 1/3 in that case.

    Inversely, when you were speaking about seeing the awake-day pairs as drawn from a bag; (awake, not awake) for heads, (awake, awake) for tails, that either assigns the random variable "am I awake?" to the days (with values awake/not awake) or assigns a coinflip random variable to the pairs (awake, not awake) for heads and (awake, awake) for tails. If the sampling mechanism is specified the first way, the probability of heads given "awake" turns out to be 1/3 as before. But if it's on the pairs alone (and not the elements of the pairs) it turns out to be 1/2.
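    The two mechanisms can be contrasted in code (a sketch; the parameter name is mine): one observation per run versus one observation per awake interview day.

```python
import random

def p_heads(per_interview_day, trials=200_000, seed=5):
    """Estimate the probability of heads under two sampling mechanisms:
    one observation per run (per coin toss), or one observation per
    awake interview day (heads: one day, tails: two days)."""
    rng = random.Random(seed)
    heads_weight = total_weight = 0
    for _ in range(trials):
        h = rng.random() < 0.5
        w = (1 if h else 2) if per_interview_day else 1
        if h:
            heads_weight += w
        total_weight += w
    return heads_weight / total_weight

# Per run: near 1/2.  Per awake interview day: near 1/3.
print(p_heads(False), p_heads(True))
```

    So, as the calculation shows, the answer is fixed as soon as the sampling mechanism is, which is the point about where the disagreement bottoms out.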

    As @sime was intimating, a lot of the disagreement comes from how you end up assigning random variables. E.g. using the principle of indifference over the days for SB's day credence isn't a "neutral" move with respect to the sampling mechanism, since it treats what day it is as a random variable which can be subject to a prior. That isn't an available move if the constitutive events of the sample space are (awake, awake) and (awake, asleep), since the elements of both pairs are stipulated to be nonrandom in that model. This is distinguished from the bivariate approach we spoke about earlier, which yields a thirder position.

    This bottoms out in not agreeing on what constitutes the space of events for the random variables, rather than in the calculations I think!

    It yields a natural interpretation because it enables the participant to reason about her epistemic situation in a natural way without the need to import some weird metaphysical baggage about the ways in which she is being "dropped" in her current situationPierre-Normand

    The "metaphysical baggage" about being "dropped" into a day in the centred world case, as I see it, is a three-sided equiprobable coin flip. It's only as mysterious as a coinflip. In the "non-centred" case, SB isn't "randomly dropped" into a day at all; she's instead dropped into a heads-awakening or a tails-awakening (I think).

    I ended up in a state of confusion in the calculations, having a few contradictions in reasoning, which this paper elevates into framings of the experiment (including SB's setting within it) that have inconsistent sample spaces between the centred and non-centred accounts. This yields a "dissolution" of the paradox of the form: it's only a paradox when centred and non-centred worlds are equated.
  • Michael
    15.4k
    I did mention this. There are two ways to reason:

    1. I should reason as if I am randomly selected from the set of possible participants
    2. I should reason as if my interview is randomly selected from the set of possible interviews

    I do the former, he does the latter.

    My use of variants, such as that of tossing the coin 100 times, was to show that applying his reasoning leads to what I believe is an absurd conclusion (that even if the experiment is only done once it is rational to believe that P(100 Heads) = 2/3).

    Although he accepts this conclusion, so at least he’s consistent.

    But you’re right that this fundamental disagreement on how best to reason might make these arguments irresolvable. That’s why I’ve moved on to critiquing Elga’s argument, which is of a different sort, and to an application of Bayes’ theorem with what I believe are irrefutable terms (although we disagree over whether or not the result actually answers the problem).
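    For concreteness, here is the arithmetic his reasoning commits him to, on my assumption of the awakening schedule: 2^101 awakenings if all 100 tosses land heads, a single awakening otherwise.

```python
from fractions import Fraction

# Interview-weighted ("thirder"-style) credence for the 100-toss variant.
# Assumed schedule: 2^101 awakenings if all 100 tosses are heads,
# a single awakening otherwise.
p_100h = Fraction(1, 2**100)          # prior probability of 100 heads
wakes_if_100h = 2**101
wakes_otherwise = 1

# Weight each outcome by its number of awakenings (interview-sampling).
posterior = (p_100h * wakes_if_100h) / (
    p_100h * wakes_if_100h + (1 - p_100h) * wakes_otherwise
)

print(float(posterior))  # just over 2/3, despite the 2^-100 prior
```

    So on interview-sampling, a once-off event with prior 2^-100 is assigned credence above 2/3, which is the conclusion I find absurd.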
  • Pierre-Normand
    2.4k
    I ended up in a state of confusion in the calculations, having a few contradictions in reasoning, which this paper elevates into framings of the experiment (including SB's setting within it) that have inconsistent sample spaces between the centred and non-centred accounts. This yields a "dissolution" of the paradox of the form: it's only a paradox when centred and non-centred worlds are equated.fdrake

    Thanks! I'm going to read this paper. I like the conclusion: "Thus, we suggest that although it is true that Beauty is in a heads-awakening if and only if the coin landed heads, Beauty, upon awakening, should assign probability 1/3 to the former and probability 1/2 to the latter."

    Is that not closely analogous to my distinction between case-2 and case-1 (respectively) in my analysis of the prisoner scenario? My analysis also suggests a pragmatist interpretation of the choice between centered and non-centered accounts. The choice depends not on metaphysical preferences, nor on arbitrary stipulations regarding sampling mechanisms, but rather on the use one intends to make of one's credence in the outcome.
  • fdrake
    6.6k
    ↪fdrake I did mention this. There are two ways to reason:

    1. I should reason as if I am randomly selected from the set of possible participants
    2. I should reason as if my interview is randomly selected from the set of possible interviews
    Michael

    My use of variants, such as that of tossing the coin 100 times, was to show that applying his reasoning leads to what I believe is an absurd conclusion (that even if the experiment is only done once it is rational to believe that P(100 Heads) = 2/3).Michael

    Aye! I remember these. It was a good point. This is what put me onto the idea that there's contradictions inherent in the framing. If you end up trying to use the tiny probability of P(100 Heads) to update SB's interview credence (combining the "awakening" process with the "day sampling=coinflip" process), I think you end up in clown logic land.

    Only I was wrong in saying that it was "the thirder's" position which was incoherent; it was my assumption that (roughly) the sampling mechanisms you intimated can be unproblematically combined. They appear to be talking about the same thing, but they do so so differently that you end up with contradictions.
  • Pierre-Normand
    2.4k
    I don’t think any reasonable person would believe this. I certainly wouldn’t.Michael

    Perhaps a rational individual might not believe in the plausibility of being woken up and put back to sleep 2^101 times. But even if this extreme scenario makes it unreasonable to take such a highly unlikely, albeit extraordinary, outcome into consideration, it does not follow that thirder-like solutions to a less extreme version of the Sleeping Beauty problem would be equally unreasonable.

    For instance, suppose you offer me the opportunity to purchase a $100 lottery ticket that carries a one in a septillion chance of winning me $200 septillion. Despite the expected value being positive, it may not be reasonable for me to purchase the ticket. However, it would be a logical fallacy to extrapolate from this example and conclude that it would also be unreasonable for me to buy a $100 lottery ticket with a one in ten chance of winning me $2000. Given I'm not in immediate need of this $100, it might actually be quite unreasonable for me to pass up such an opportunity, even though I stand to lose $100 nine times out of ten.
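    To make the comparison explicit, both tickets have exactly the same positive expectation; what differs is how often one actually wins. A quick check of the arithmetic, using the figures above:

```python
from fractions import Fraction

septillion = 10**24

# Ticket 1: $100 stake, one-in-a-septillion chance of $200 septillion.
ev1 = Fraction(1, septillion) * (200 * septillion) - 100

# Ticket 2: $100 stake, one-in-ten chance of $2000.
ev2 = Fraction(1, 10) * 2000 - 100

print(ev1, ev2)  # both +$100 in expectation
```

    Identical expected gains, wildly different win frequencies, which is why only the second ticket can be a reasonable buy.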