Comments

  • The Andromeda Paradox
    the edge of the visible universe is receding from us faster than the speed of light. Although individual galaxies are much slower than light their apparent movement adds up radially away from us. Over billions of years we would see fewer galaxies spread further apart in ever darkening space.magritte

    This is due to the expansion of the universe, which is a general relativistic effect. It is unrelated to the shifting of the simultaneity plane due to the substitution of inertial reference frames in special relativity.
  • Donald Trump (All General Trump Conversations Here)
    Very interesting, I had always heard that all the whites were descendants of slave owners, and ipso facto, all racists.Merkwurdichliebe

    I've also heard James Lindsay and Tucker Carlson claim that woke leftists generally believe this, but I've never heard a leftist actually say it.
  • Sleeping Beauty Problem
    Which of these are you saying?

    1. There are twice as many T-awakenings because tails is twice as likely
    2. Tails is twice as likely because there are twice as many T-awakenings

    I think both of these are false.

    I think there are twice as many T-awakenings but that tails is equally likely.

    The bet's positive expected value arises only because there are twice as many T-awakenings.
    Michael

    I am not relying on 1, but it would be a valid inference if we assume that P(T) = 2*P(H). This assumption would hold if we define P(T) as P(T) =def |{T-awakenings}| / |{awakenings}| (and similarly for P(H)).

    Your disagreement with 2 appears to stem from an assumption that (at least in the context of the Sleeping Beauty problem) the probability of an outcome must solely reflect the realization of an object's intrinsic properties, such as the fairness of a coin. However, we frequently talk about probabilities of (types of) events that depend on how we interact with objects and that only indirectly depend (if at all) on the propensities those objects have to actualize their properties. For instance, if there are twice as many Italians as Tunisians in my city (and no other nationalities), but for some reason, Tunisians go out four times more often than Italians, then when I go out, the first person I meet is twice as likely to be a Tunisian.

    Here's another example. The management of the Sleeping Beauty Experimental Facility organizes a cocktail party for the staff. The caterers circulate among the guests serving drinks and sandwiches. Occasionally, they flip a coin. If it lands heads, they ask a random guest to guess the result. If it lands tails, they ask two random guests. The guests are informed of this protocol (and they don't track the caterers' movements). When a caterer approaches you, what are the odds that the coin they flipped landed heads? This situation is similar to the case of encountering a Tunisian in my previous example. Upon meeting someone, you could rationally say, "The probability that this person is a Tunisian is 2/3," even though people don't inherently possess a propensity to be born a different nationality than the one they were actually born into.

    In the cocktail party scenario, the probability that "this" coin landed tails is a function of both the coin's propensities (its fairness) and the method used to engineer interactions between guests and coins based on the outcomes of the tosses.
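    The arithmetic behind the cocktail party case can be checked with a quick Monte Carlo sketch (Python; the function name, seed and trial count are my own illustrative choices, not part of the scenario):

```python
import random

def cocktail_party(trials=100_000):
    """Fraction of 'askings' at which the caterer's coin landed heads.

    Heads: one guest is asked to guess; tails: two guests are asked.
    """
    rng = random.Random(0)
    heads_askings = total_askings = 0
    for _ in range(trials):
        if rng.random() < 0.5:   # heads: a single guest is approached
            heads_askings += 1
            total_askings += 1
        else:                    # tails: two guests are approached
            total_askings += 2
    return heads_askings / total_askings

print(cocktail_party())  # close to 1/3: an approached guest should answer 1/3
```

    Counting askings rather than coin flips is exactly what encodes the guests' epistemic situation: tails flips generate twice as many encounters.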
  • Sleeping Beauty Problem
    I think you're confusing two different things here. If the expected return of a lottery ticket is greater than its cost it can be rational to buy it, but it's still irrational to believe that it is more likely to win. And so it can be rational to assume that the coin landed tails but still be irrational to believe that tails is more likely.Michael

    The rationality of Sleeping Beauty betting on T upon awakening isn't because this bet has a positive expected value. In fact, it's the other way around. The bet's positive expected value arises because she is twice as likely to win as she is to lose. This is due to the experimental setup, which on average creates twice as many T-awakenings as H-awakenings. It's because her appropriately interpreted credence is P(T) =def P(T-awakening) = 2/3 that her bet on T yields a positive expected value, not the reverse. If she only had one opportunity to bet per experimental run (and was properly informed), regardless of the number of awakenings in that run, then her bet would break even. This, in turn, would be because P(T) =def P(T-run) = 1/2.

    The same logic applies in my 'escape scenario', where Sleeping Beauty's room is surrounded by crocodiles (and she awakens once) if the die doesn't land on 'six', and is surrounded by lions (and she awakens six times) if the die does land on 'six'. Given a rare chance to escape (assuming opportunities are proportional to the number of awakenings), Sleeping Beauty should prepare to face lions, not because of the relative utilities of encounters with lions versus crocodiles, but because she is indeed more likely (with 6/11 odds) to encounter lions. Here also, this is because the experimental setup generates more encounters with lions than it does with crocodiles.
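    The 6/11 figure can be verified by counting awakenings rather than runs, as in this small simulation (a sketch; names, seed and trial count are mine):

```python
import random

def lion_odds(trials=100_000):
    """Fraction of awakenings at which lions (rather than crocodiles)
    surround the room: a 'six' yields six awakenings with lions,
    any other roll yields one awakening with crocodiles."""
    rng = random.Random(1)
    lion = croc = 0
    for _ in range(trials):
        if rng.randrange(6) == 5:  # die lands 'six'
            lion += 6
        else:
            croc += 1
    return lion / (lion + croc)

print(lion_odds())  # close to 6/11, about 0.545
```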
  • Sleeping Beauty Problem
    Would you not agree that this is a heads interview if and only if this is a heads experiment? If so then shouldn't one's credence that this is a heads interview equal one's credence that this is a heads experiment?Michael

    Indeed, I have long insisted (taking a hint from @fdrake and Laureano Luna) that the following statements are biconditional: "The coin landed (or will land) heads", "I am currently experiencing a H-awakening", and "I am currently involved in a H-run".

    However, it's important to note that while these biconditionals are true, they do not guarantee a one-to-one correspondence between these differently individuated events. When these mappings aren't one-to-one, their probabilities need not match. Specifically, in the Sleeping Beauty problem, there is a many-to-one mapping from T-awakenings to T-runs. This is why the ratios of |{H-awakenings}| to |{awakenings}| and |{H-runs}| to |{runs}| don't match.

    If so then the question is whether it is more rational for one's credence that this is a heads experiment to be 1/3 or for one's credence that this is a heads interview to be 1/2.

    Rationality in credences depends on their application. It would be irrational to use the credence P(H) =def |{H-awakenings}| / |{awakenings}| in a context where the ratio |{H-runs}| / |{runs}| is more relevant to the goal at hand (for instance, when trying to be picked up at the right exit door by Aunt Betsy) or vice versa (when trying to survive potential encounters with lions/crocodiles).
  • The Andromeda Paradox
    What the Andromeda Paradox implies is that the observed universe apparently shifts in its entirety towards a moving observer. Which means that in the forward moving direction many more of the most distant galaxies come into possible view and we lose some distant galaxies from possible view behind us. This is all pretty absurd, yet it is demonstrably true.magritte

    The galaxies you are moving towards would have come into view regardless of your motion, only at a later time as measured by your clock. Similarly, the galaxies you are moving away from will also come into view, but at a later time. In a flat spacetime, you cannot indefinitely outrun light rays. Interestingly, as you acquire more velocity relative to both sets of galaxies, they "move" closer to you due to the effects of Lorentz contraction.

    However, as you gain speed, the reason why the light from the galaxy behind you doesn't catch up to you sooner (despite the contracted distance) is due to the recalibration of your plane of simultaneity. As this occurs, the photons that were a distance D away before you started moving suddenly "jump" back in time and are "now" less advanced on their journey towards you!
  • The Andromeda Paradox
    No. This has nothing to do with what one person sees. There are distant events happening in my present that I cannot see because they are too far away. According to special relativity some of these events happen in your future even though they are happening in my present. This is what I find peculiar.Michael

    In Special Relativity, an observer can be identified with an inertial reference frame in which they are at rest, and relative to which they make all their space and time measurements. This can be likened to a set of co-moving rods and clocks, all synchronized by light signals. For instance, Clocks A and B can be synchronized through a light signal sent back and forth between them, with the time at Clock B being set at the mid-point interval between the departure and return times at Clock A. Consequently, two observers can have different simultaneity planes due to the fact that events are timed with reference to distinct sets of clocks that have been synchronized differently. @Benkei's train example above illustrates this concept well.

    Actual "observers" (such as human beings) are free to choose whatever reference frames they want, and can translate space-time coordinates of events using the Lorentz transformations. So long as they reside outside of their light cones, the issue of events being located in their "relative past" or "relative future" doesn't have any physical or metaphysical significance. It only has relevance to the degree that it challenges certain presentist or "growing block universe" conceptions of time. This was Putnam's main point in his paper, "Time and Physical Geometry".
  • Sleeping Beauty Problem
    Previously you've been saying that P(Heads) = 1/2.Michael

    In earlier messages? Yes. I shouldn't have used this prior in the context of the Thirder interpretation of P(H). I was unclear about the individuation of events as it related to the two possible interpretations of P(H) for Sleeping Beauty. So, some of my earlier uses of Bayes' theorem may have been confused or inconsistent. It is, I now think, the very fact that P(H) appears intuitively (but misleadingly) to reflect Sleeping Beauty's epistemic relation to the coin irrespective of the manner in which she tacitly individuates the relevant events that generates the apparent paradox.
  • Sleeping Beauty Problem
    I think Bayes’ theorem shows such thirder reasoning to be wrong.

    P(Unique|Heads)=P(Heads|Unique)∗P(Unique)/P(Heads)

    If P(Unique) = 1/3 then what do you put for the rest?
    Michael

    P(Heads|Unique) = 1 and P(Heads) = 1/3 (since 1/3 of expected awakenings are H-awakenings)

    P(Unique|Heads) is therefore 1, as expected.

    Similarly:

    P(Heads|Monday)=P(Monday|Heads)∗P(Heads)/P(Monday)

    If P(Monday) = 2/3 then what do you put for the rest?

    P(Monday|Heads) = 1 and P(Heads) = 1/3.

    P(Heads|Monday) = 1/2, as expected.
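    Both results can be verified by brute-force counting of awakenings over many runs (a sketch; the function name, seed and run count are illustrative):

```python
import random
from collections import Counter

def awakening_counts(runs=100_000):
    """Tally awakenings by (coin, day): heads yields a single Monday
    awakening; tails yields Monday and Tuesday awakenings."""
    rng = random.Random(2)
    c = Counter()
    for _ in range(runs):
        if rng.random() < 0.5:
            c['H-Mon'] += 1
        else:
            c['T-Mon'] += 1
            c['T-Tue'] += 1
    p_heads = c['H-Mon'] / sum(c.values())                   # P(Heads) over awakenings
    p_heads_monday = c['H-Mon'] / (c['H-Mon'] + c['T-Mon'])  # P(Heads | Monday)
    return p_heads, p_heads_monday

p1, p2 = awakening_counts()
print(p1, p2)  # close to 1/3 and 1/2 respectively
```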
  • Sleeping Beauty Problem
    This is a non sequitur.Michael

    My argument follows a consistent line of reasoning. Given Sleeping Beauty's understanding of the experimental setup, she anticipates the proportion of indistinguishable awakening episodes she will find herself in, on average (either in one or in many experimental runs), and calculates how many of those will be H-awakenings given the evidence that she will presently be awakened.

    What we can say is this:

    P(Unique|Heads)=P(Heads|Unique)∗P(Unique)/P(Heads)

    We know that P(Unique | Heads) = 1, P(Heads | Unique) = 1, and P(Heads) = 1/2. Therefore P(Unique) = 1/2.

    Therefore P(Unique|W) = 1/2.

    And if this experiment is the same as the traditional experiment then P(Heads|W) = 1/2.

    Yes, I fully concur with this calculation. It interprets Sleeping Beauty's credence P(Unique|W) = P(H|W) upon awakening as the proportion of complete experimental runs in which Sleeping Beauty expects to find herself in an H-run ('unique awakening run'). However, this doesn't negate the Thirder interpretation, which becomes relevant when Sleeping Beauty is focusing on the individual awakening events she is expected to experience, rather than on the individual experimental runs. This interpretation distinction is highlighted in various practical examples I've provided: for example, preparing to face lions or crocodiles while escaping relies on the Thirder interpretation, whereas being picked up by Aunt Betsy at the East Wing at the end of the experiment follows the Halfer interpretation, and so on.
  • Sleeping Beauty Problem
    It may still be that the answer to both is 1/3, but the reasoning for the second cannot use a prior probability of Heads and Tuesday = 1/4, because the reasoning for the first cannot use a prior probability of Heads and Second Waking = 1/4.

    But if the answer to the first is 1/2 then the answer to the second is 1/2.
    Michael

    For the first case, we can use priors of P(H) = 1/2 and P(W) = 3/4, given that there are three awakenings in the four possible scenarios (H&Mon, H&Tue, T&Mon, T&Tue) where Sleeping Beauty can be. P(W|H) = 1/2, as she is only awakened on Monday when the coin lands heads.

    Consequently, P(H|W) = P(W|H)P(H)/P(W) = (1/2)(1/2)/(3/4) = 1/3.

    In the second case, we can set up a similar calculation: P(Unique|W) = P(W|Unique)*P(Unique)/P(W)

    P(Unique) is the prior probability that an awakening will be unique rather than part of two. P(Unique) = 1/3, as one-third of the experiment's awakenings are unique. P(W) is now 1, as Sleeping Beauty is awakened in all potential scenarios.

    We then find that P(Unique|W) = P(W|Unique)P(Unique)/P(W) = (1)(1/3)/(1) = 1/3.

    This second case calculation is straightforward, but the similarity between the two cases is illuminating. Bayes' theorem works for updating a belief in an outcome given new evidence, P(O|E), by increasing the prior probability of the outcome, P(O), in proportion to the ratio P(E|O)/P(E). This ratio quantifies how much more likely the outcome becomes when the evidence is known to be present.

    In both cases, Sleeping Beauty's evidence is that she is currently awake. In the first case, the relevant ratio is (1/2)/(3/4), which reflects how much more likely the coin is to land heads when she is awake. In the second case, the relevant ratio is (1)/(1), indicating how much more likely a unique awakening situation (due to the coin landing heads) is when she is awake. Both cases yield the same result (1/3), aligning with the ratio of possible H-awakenings ('unique awakenings') to total possible awakenings produced by the experimental designs.

    Another interpretation of P(H) is the proportion of entire experimental runs in which Sleeping Beauty ends up in an H-run ('unique awakening run'). According to this interpretation, the Halfer solution P(H) = 1/2 is correct. The choice between Thirders or Halfers' interpretation of P(H) should depend on the intended use: during individual awakenings (Thirders) or throughout the experiment (Halfers).
  • Sleeping Beauty Problem
    I'll throw in one last consideration. I posted a variation of the experiment here.

    There are three beauties; Michael, Jane, and Jill. They are put to sleep and assigned a random number from {1, 2, 3}.

    If the coin lands heads then 1 is woken on Monday. If the coin lands tails then 2 is woken on Monday and 3 is woken on Tuesday.

    If Michael is woken then what is his credence that the coin landed heads?

    Michael's credence before the experiment is P(1) = 1/3, so if woken he ought to continue to have a credence of P(1) = 1/3 since he gains no new relevant evidence if he wakes up during the experiment.
    Michael

    In this variation, it seems to me that being awakened does provide Michael with relevant evidence. Given that the coin landing heads results in one person being awakened, and the coin landing tails results in two persons being awakened, on average, 1.5 out of three persons are awakened. Therefore, the prior probability that Michael will be awakened is P(MA) = 1/2. The conditional probabilities are P(MA|H) = 1/3 and P(MA|T) = 2/3 (and these are the same for Jane and Jill).

    Hence, when Michael awakens, it's more probable that the coin landed tails.

    P(T|MA) = P(MA|T)*P(T) / P(MA) = (2/3)*(1/2)/(1/2) = 2/3.
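    A quick simulation of the three-beauty setup bears this out (a sketch; the function name, seed and trial count are my own choices):

```python
import random

def michael_variation(trials=100_000):
    """Michael draws a number from {1, 2, 3}; heads wakes number 1,
    tails wakes numbers 2 and 3. Returns (P(Michael awakened),
    P(tails | Michael awakened)), estimated by counting."""
    rng = random.Random(3)
    awakened = tails_and_awakened = 0
    for _ in range(trials):
        number = rng.choice((1, 2, 3))
        tails = rng.random() < 0.5
        woken = number in (2, 3) if tails else number == 1
        if woken:
            awakened += 1
            tails_and_awakened += tails
    return awakened / trials, tails_and_awakened / awakened

print(michael_variation())  # close to (1/2, 2/3)
```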

    And given that if woken he is 1 iff the coin landed heads, he ought to have a credence of P(Heads) = 1/3.

    Do we accept this?

    Yes, we do.

    If so then the question is whether or not Sleeping Beauty's credence in the original experiment should be greater than Michael's credence in this experiment. I think it should.

    I'd be curious to understand why you think so.

    Recently, I contemplated a similar variation wherein candidates are recruited as part of a team of two: Jane and Jill, for example. On Monday, either Jill or Jane is awakened (selected at random). On Tuesday, if a coin lands on tails, the person who wasn't awakened on Monday is now awakened. If the coin lands on heads, the experiment ends. (In this variation, as in yours, there's no need for an amnesia-inducing drug. It's only necessary that the subjects aren't aware of the day of their awakenings.)

    Just like in your variation, tails generates two awakenings (for two different subjects), while heads generates only one. On average, 1.5 out of two persons are awakened. Jane's prior is P(JA) = 3/4, and the conditional probabilities are P(JA|H) = 1/2 and P(JA|T) = 1.

    As before, Jane's awakening provides her with evidence that the coin landed tails.

    P(T|JA) = P(JA|T)*P(T) / P(JA) = (1)*(1/2)/(3/4) = 2/3.
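    Here too the figures can be checked by simulation (a sketch, with illustrative names and counts):

```python
import random

def team_variation(trials=100_000):
    """Jane or Jill (chosen at random) is awakened on Monday; on tails
    the other is awakened on Tuesday. Returns (P(Jane awakened),
    P(tails | Jane awakened)), estimated by counting."""
    rng = random.Random(4)
    jane_awakened = tails_and_awakened = 0
    for _ in range(trials):
        jane_monday = rng.random() < 0.5
        tails = rng.random() < 0.5
        if jane_monday or tails:   # on tails Jane is awakened on some day
            jane_awakened += 1
            tails_and_awakened += tails
    return jane_awakened / trials, tails_and_awakened / jane_awakened

print(team_variation())  # close to (3/4, 2/3)
```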

    I would argue that this case is structurally identical to the one discussed in the original post (as well as in Lewis and Elga), with the sole exception that the relevant epistemic subjects are members of a team of two, rather than a single identifiable individual potentially being placed twice in the "same" (epistemically indistinguishable) situation. You could also consider a scenario where Jill falls ill, and her team member Jane volunteers to take her place in the experiment. In this instance, the amnesia-inducing drug would be required to maintain the epistemic separation of the two potential awakenings in the event that the coin lands tails.
  • Sleeping Beauty Problem
    She also knows that the fact that she is awake eliminates (H,H) as a possibility. This is a classic example of "new information" that allows her to update the probabilities. With three (still equally likely) possibilities left, each has a posterior probability of 1/3. Since in only one is coin C1 currently showing Heads, the answer is 1/3.JeffJo

    Your proposed scenario certainly provides an interesting variation, but it doesn't quite correspond to the structure of the situation typically discussed in literature, the one that seems to give rise to a paradox.

    In your scenario, there are four potential outcomes from the experiment, each of which is equally probable:

    HH (end) --> Never awakened
    HT HH --> Awakened once
    TH TT --> Awakened twice
    TT TH --> Awakened twice

    When Sleeping Beauty awakens, her credences corresponding to these four outcomes shift from {1/4, 1/4, 1/4, 1/4} to {0, 1/3, 1/3, 1/3}.

    However, in the scenario most frequently discussed, entire experimental runs in which Sleeping Beauty is awakened once are just as common as those where she is awakened twice. Furthermore, since there isn't any experimental run where Sleeping Beauty is not awakened at all, it's debatable whether her experiencing an awakening provides new information that would cause her to adjust her initial probabilities (as Halfers are inclined to argue).
  • Sleeping Beauty Problem
    But there are two sources of randomness in this example, the die and the coin.

    Similarly for all analyses that treat SB's situation as describable with two coin flips. We only have one.
    Srap Tasmaner

    Indeed, in my examples (labelled "First step" through "Fourth step"), there's only a single source of randomness, which consists in the random assignment of individuals to either population Pop-1 or Pop-2 (awakened once or twice with white or red tulips).

    Halfers contend that Sleeping Beauty's awakening cannot serve as evidence indicating she is more likely to be part of Pop-2, as there's nothing that allows her to distinguish an awakening in Pop-1 from one in Pop-2. But the same reasoning could be applied to the inability to distinguish a 'six' rolled with a loaded die from a 'six' rolled with a fair die; yet the occurrence of a 'six' increases the likelihood that the die is loaded.

    You're correct in stating that there's only one source of randomness in Sleeping Beauty's case, unlike the dice scenario. However, the two situations share a strong resemblance. The reason a 'six' outcome increases the probability that a die is loaded is because loaded dice generate more instances of players confronting a 'six'. Similarly, being part of Pop-2 in Sleeping Beauty's setup leads to more instances of self-aware awakenings. This is simply an analogy - for a more compelling argument, refer back to my cases 1 through 4 in the post you quoted.

    The halfer position comes back to individuation, as you suggested some time ago. Roughly, the claim is that "this interview" (or "this tails interview" etc) is not a proper result of the coin toss, and has no probability. What SB ought to be asking herself is "Is this my only interview or one of two?" The chances for each of those are by definition 1 in 2.

    Indeed, the choice between a Halfer (P(Pop-1) = 1/2) and a Thirder (P(Pop-1) = 1/3) updated credence is a matter of individuation. While I focused on the individuation of events, you had seemed to suggest that different (more or less extended) conceptions of self might lead people towards one stance or another. This struck me as insightful, although personal psychological inclinations don't provide valid justifications. Currently, I don't identify as a Thirder or a Halfer. Rather, I believe that Thirders and Halfers are talking past each other because they each focus solely on one of two possible types of outcome distributions that could be considered in Sleeping Beauty's credence update. My previous 'pragmatic' examples aimed to highlight this duality (not a dichotomy!). When Sleeping Beauty wakes and considers her situation, is she weighing the opportunities to either evade or confirm her current situation (facing lions or crocodiles)? In this case, she should reason as a Thirder. Or is she weighing the opportunity to end, or validate, the nature of her ongoing predicament (and be rescued by Aunt Betsy) at the end of her current series of awakenings? If so, she should reason as a Halfer. The root question of what her credence should be upon awakening is inherently ambiguous, and the thought experiment is tailored to create this ambiguity.
  • Paradox of Predictability
    In the original scenario as I have described it, Ned reads the printout, but he only reads a part of it. And, importantly, he does not read a part of it where he is reading the printout -- that would be self-referentially problematic. Because there is no self-referentiality in the parts of the printout that Ned does read, there is nothing necessarily theoretically vicious about Ned reading some parts of the printout.NotAristotle

    If Ned only reads a part of the printout that doesn't mention him reading it, his actions wouldn't contradict the thesis of determinism. For instance, suppose determinism entails that, given some initial conditions at time t1 (along with the laws of nature), Ned will drink a glass of water at time t2. The printout Ned reads must inform him of this prediction. In response, Ned decides to drink a glass of orange juice instead, fulfilling his intention to act contrary to the prediction.

    We must consider when Ned's reading of the printout occurs. Does it occur before or after time t2 in the scenario you're envisioning?

    If it takes place after t2, then Ned's actions wouldn't actually contravene the prediction. Indeed, by then, he would have already drunk the glass of water, hence aligning with the prediction. However, if he reads the printout prior to t2 to pre-empt the predicted outcome, we run into the self-referential issue. In this case, the computer would need to forecast what occurs as a consequence of Ned reading the printout. This would invariably involve predicting Ned's reaction to the prediction, triggering the problematic self-reference loop I had mentioned.
  • Paradox of Predictability
    I suppose we could stipulate that Ned has enough information about his immediate environment to make an accurate prediction about how he will act. It doesn't really concern us whether this sort of information can, as a matter of practicality, be acquired; the concern is whether in principle, if this information were acquired, could Ned act in opposition to it. And the answer to that seems to be yes.NotAristotle

    I concur that we must accept, in principle, that this information can be acquired by Ned in order for his contrarian behavior to make any sense. Would Ned's decision to act contrary to his prediction be an arbitrary or unexplainable one? No: we cannot attribute his behavior to the occurrence of miracles and then use it as evidence against determinism.

    But, if we consider that it is Ned's reaction to the acquired information (provided by a computer, for example) that prompts him to act differently, the computer's prediction must also account for the possibility that Ned will read and respond to its output. In other words, the computer's predicted output becomes a component of its own predictive behavior loop, transforming Ned into a sort of "contrarian cog" within the deterministic system.

    This issue can be reduced to the challenge of programming a computer capable of modeling its own deterministic algorithmic process in such a way that it will output "yes" if it predicts it will output "no" and vice versa. This is a conundrum that can't be resolved in principle, yet this doesn't impact the deterministic nature of the computer's behavior. It rather highlights the impossibility of creating a self-referential prediction system within the boundaries of determinism.
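    The diagonal construction can be sketched in a few lines of Python (illustrative names only; this is a toy rendering of the "contrarian cog", not a model of any real predictor):

```python
def make_contrarian(predict):
    """Build an agent that learns the prediction about its own act
    and then does the opposite."""
    def contrarian():
        forecast = predict(contrarian)  # the prediction concerns this very call
        return "no" if forecast == "yes" else "yes"
    return contrarian

# Whatever a predictor forecasts about its own contrarian, the
# contrarian refutes it, even though each step is fully deterministic:
for predict in (lambda agent: "yes", lambda agent: "no"):
    agent = make_contrarian(predict)
    assert agent() != predict(agent)
```

    The failure is structural: no predictor can be correct about an agent wired to consume and negate its output, and this is compatible with the whole system being deterministic.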
  • Climate change denial
    But historically it has. There are a multitude of multilateral treaties that prove even enemies will agree on all sorts of things. WTO, UN, Geneva and the Hague conventions, Vienna Convention on the laws of treaties, Vienna Convention on diplomatic relations, etc.Benkei

    And the Montreal Protocol that has been ratified by all member states of the United Nations.
  • Sleeping Beauty Problem
    But she is only asked a question once in the whole year. One of the wakings is randomly selected to be the one where she is asked the question. On this randomly selected waking, she is asked the question "what is the probability that this randomly selected waking shows a heads." The answer is 1/3, as per Problem A in my previous post.PhilosophyRunner

    A Halfer might argue that Sleeping Beauty being posed such a question, along with the provided context of the question's delivery (i.e., through a random selection among all awakenings), indeed provides the grounds for Sleeping Beauty to update her initial credence P(H) from 1/2 to 1/3. However, they might also assert that this type of questioning doesn't exist in the original setup. Therefore, they might insist that, in the absence of such randomly assigned questioning, Sleeping Beauty should maintain her credence of 1/2.

    A Thirder might counter-argue by saying: The crucial element that turns the questioning into information, enabling Sleeping Beauty to update her credence, is the fact that it results from randomly selecting an awakening from all possible awakenings. Given that there are twice as many awakenings under the 'tails' condition than under 'heads,' a random selection is twice as likely to yield a 'tails' awakening. We must recognize that Sleeping Beauty doesn't necessarily require external assistance to isolate her current awakening in a manner that is both random and statistically independent of the coin toss result.

    Imagine an alternative method where an external agent, let's call her Sue, randomly selects awakenings from the complete set. Sue could examine a list of all scheduled awakenings, roll a die for each, and mark the awakening as evidence-worthy if the die lands on 'six'. The selected participants would then be equipped to update their credence P(H) to 1/3 after being presented with the evidence of their selection by Sue.

    Now, it doesn't matter who performs the die-rolling selection; what's important is that any awakening marked as evidence-worthy is selected randomly by a method independent of the coin toss outcome. The participants themselves, not Sue, could roll the die and, if it lands on 'six,' consider their current awakening to have been randomly selected (as it would indeed have been!) from the entire set of awakenings. This random selection allows Sleeping Beauty to single out the fact of her current awakening as evidence for updating her credence P(H) to 1/3.

    If the die doesn't land on 'six,' has Sleeping Beauty squandered an opportunity to identify her current awakening as a valuable piece of evidence? Actually, if the convention had been reversed to select awakenings by a die not landing on 'six', the chosen sample would still statistically represent all scheduled awakenings (with 1/3 of those being 'tails' awakenings). The Halfer's error is assuming that the mere occurrence of an awakening doesn't provide sufficient evidence for Sleeping Beauty. The participants' selection method, which involves identifying awakenings with the indexical expression "I am currently experiencing this awakening," is actually the most representative of all methods as it encompasses the entire population of awakenings!
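    The claim that either selection convention yields a representative sample of awakenings can be checked numerically (a sketch; names, seed and trial count are mine):

```python
import random

def marked_awakenings(trials=100_000, mark_on_six=True):
    """Over many runs (heads: one awakening, tails: two), roll a die at
    each awakening and mark it evidence-worthy per the chosen convention.
    Returns the fraction of marked awakenings that are tails-awakenings."""
    rng = random.Random(5)
    marked = tails_marked = 0
    for _ in range(trials):
        tails = rng.random() < 0.5
        for _day in range(2 if tails else 1):
            six = rng.randrange(6) == 5
            if six == mark_on_six:
                marked += 1
                tails_marked += tails
    return tails_marked / marked

# Either convention picks a sample representative of all awakenings:
print(marked_awakenings(mark_on_six=True))   # close to 2/3
print(marked_awakenings(mark_on_six=False))  # close to 2/3
```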
  • The Indictment
    Does he? It's as simple as him saying "this is mine" and all the rules about handling and disclosure are out the window?Srap Tasmaner

    Of course not. Just saying it isn't sufficient. He also has to think it!
  • Sleeping Beauty Problem
    I think the Halfer position is roughly that there are only two outcomes: a single interview conducted in one sitting, and a double interview spread out over two sittings. Those outcomes are equivalent to the two possible outcomes of the coin toss. (If you have an even-numbered population to work with, you can just do away with the coin altogether.)

    What is the Thirder equivalent? If there are three outcomes, they cannot be equivalent to the two outcomes of the coin toss.
    Srap Tasmaner

    If I understand correctly, you seem to be asking how the Thirders might be able to infer the probabilities of the three fine-grained types of awakening outcomes from the (prior) probabilities of the two coin toss outcomes?

    Indeed, we can split an even-numbered population into two equal sub-populations Pop-1 (Beauties who awaken once) and Pop-2 (Beauties who awaken twice). This allows us to focus solely on Sleeping Beauty's personal credences upon awakening, concerning whether she's part of Pop-1 or Pop-2.

    In my view, the coin in the original problem provides a convenient source of stochasticity. Without it, @sime would have been justified to worry about the explanation for Sleeping Beauty's priors. Consider this: Suppose I present you with a die that could be loaded to always land on 'six'. If it's not loaded, then it's fair. You throw it once and it lands on 'six'. What is your credence that the die is loaded? Without an objective grounding for your priors, the answer is undefined. However, if I tell you that there are two identical dice - one loaded and the other fair - and a fair coin toss determines which one you'll use, you can now update your credence that the die is loaded from 1/2 to 6/7, given that over time, six out of seven 'sixes' will be from a loaded die.
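    The 6/7 update in the two-dice example is easy to confirm by counting (a sketch; function name, seed and trial count are illustrative):

```python
import random

def credence_loaded(trials=100_000):
    """A fair coin picks either the loaded die (which always lands 'six')
    or the fair die. Returns the fraction of observed 'sixes' that came
    from the loaded die."""
    rng = random.Random(6)
    sixes = loaded_sixes = 0
    for _ in range(trials):
        loaded = rng.random() < 0.5
        if loaded or rng.randrange(6) == 5:  # this throw lands 'six'
            sixes += 1
            loaded_sixes += loaded
    return loaded_sixes / sixes

print(credence_loaded())  # close to 6/7, about 0.857
```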

    Let us therefore assume, as you suggested, that Sleeping Beauty's priors are P(Pop-1) = P(Pop-2) = 1/2, without needing to delve into the specific stochastic process that placed her in either Pop-1 or Pop-2.

    The key disagreement between Halfers and Thirders is whether Sleeping Beauty can update her credence upon awakening that she's part of Pop-1 from 1/2 to 1/3. Halfers argue that since Sleeping Beauty knows she'll be awakened at least once, she can't distinguish whether her current awakening is the only one (Pop-1) or one of two (Pop-2). Therefore, these two possibilities should be equally probable from her perspective.

    This argument seems to misuse the Principle of Indifference. Consider the die example: When the die lands on 'six', you can't distinguish whether this outcome is from the fair die or the loaded one. However, you can still update your credence P('loaded') from 1/2 to 6/7. The die landing on 'six' does convey information in this context.

Halfers, therefore, need a stronger argument to support their 'no new information' claim. Alternatively, they could challenge Thirders to explain what new information Sleeping Beauty receives that allows her to rationally update her credence in Pop-1 from 1/2 to 1/3.

    I believe this can be explained step by step to make it more intuitive:

    --First step--

    Imagine that upon being divided into populations Pop-1 and Pop-2, the participants in each population are awakened only once the following day in their respective waking rooms. In half of the Pop-1 rooms, a single red tulip is placed on the nightstand, hidden by a cardboard cylinder. In the other half, a white tulip is used instead. In all Pop-2 rooms, a red tulip is utilized. As a participant in this experiment, Sleeping Beauty is informed of these specific details. Upon waking, she is asked about her credence in being part of Pop-1, and what her credence is that the tulip next to her is white. In this context, her credences should be P(Pop-1) = 1/2 and P(white) = 1/4.

    The cardboard cylinder is then removed, revealing a red tulip. What should Sleeping Beauty's credences be updated to now? They should be P(white) = 0 and P(Pop-1) = 1/3, right? This example appears to use Bayesian reasoning in a straightforward manner: Over time, 1/3 of participants who wake up in a room with a red tulip are part of Pop-1.

    (As for the strict proof: P(Pop-1|red) = P(red|Pop-1)*P(Pop-1)/P(red) = (1/2)*(1/2)/(3/4)=1/3)
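For readers who prefer to see this run, here is a quick simulation of the first step (a Python sketch of the tulip setup as described; the variable names are my own):

```python
import random

random.seed(1)
participants = 100_000
red_awakenings = 0
pop1_red_awakenings = 0

for _ in range(participants):
    in_pop1 = random.random() < 0.5        # equal split into Pop-1 and Pop-2
    if in_pop1:
        red_tulip = random.random() < 0.5  # half of Pop-1 rooms get a red tulip
    else:
        red_tulip = True                   # all Pop-2 rooms get a red tulip
    if red_tulip:
        red_awakenings += 1
        pop1_red_awakenings += in_pop1

print(pop1_red_awakenings / red_awakenings)  # settles near 1/3
```

The long-run fraction of red-tulip awakenings that belong to Pop-1 matches the Bayesian calculation above.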

    --Second step--

    Let's change the previous scenario so that all participants experience two awakenings, one on Monday and another on Tuesday. Participants in Pop-1 awaken once with a white tulip and once with a red tulip, while participants in Pop-2 awaken twice with a red tulip. We also introduce an amnesia-inducing drug to ensure that the participants don't remember the outcome of the Monday awakening when they are awakened again on Tuesday.

    In this new context, whenever Sleeping Beauty awakens, what should her credences P(Pop-1) and P(white) be? Arguably, most people, whether they're Halfers, Thirders or double-Halfers, would agree that these should be P(Pop-1) = 1/2 and P(white) = 1/4.

    The cardboard cylinder is then removed and, as it happens, a red tulip is revealed. What should Sleeping Beauty's credences be updated to now? They should again be P(white) = 0 and P(Pop-1) = 1/3, right?

    Perhaps the complexity of applying Bayesian reasoning in this context stems from the fact that participants in Pop-1 and Pop-2 who awaken on Monday aren't a distinct group from those who awaken on Tuesday. Indeed, the same individuals are awakened twice. To accommodate this factor, we can adjust Sleeping Beauty's Bayesian reasoning in the following manner:

Every time a participant wakes up, the probability that they are in a room with a white tulip is 1/4. The probability that I awaken in a room with a white tulip is 1/2 if I am part of Pop-1, and zero if I am part of Pop-2. As such, my prior probabilities are P(white) = 1/4 and P(Pop-1) = 1/2, while P(red|Pop-1) = 1/2.

    Consequently, once the tulip's color is revealed to be red, I can make the same inference as before: P(Pop-1|red) = P(red|Pop-1)P(Pop-1)/P(red) = (1/2)(1/2)/(3/4)=1/3.

In an intuitive sense, this means that, since the majority of awakened participants find themselves next to red tulips because they belong to Pop-2, witnessing a red tulip upon awakening boosts their credence in being part of Pop-2. Although seeing a red tulip doesn't enable them to distinguish cases where the current awakening is the only one where they'll see such a tulip (as in Pop-1) or one of two such instances (as in Pop-2), it still provides information and counts as evidence that they are part of Pop-2. The reasoning behind this is analogous to why a die landing on 'six' constitutes evidence that the die is biased even though a fair die can also land on 'six'.
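The two-awakening version can be simulated by tallying every awakening separately (a Python sketch of the scenario just described; names are my own):

```python
import random

random.seed(2)
participants = 100_000
red_awakenings = 0
pop1_red_awakenings = 0

for _ in range(participants):
    in_pop1 = random.random() < 0.5
    # Pop-1: one white-tulip and one red-tulip awakening (Monday/Tuesday);
    # Pop-2: two red-tulip awakenings.
    tulips = ["white", "red"] if in_pop1 else ["red", "red"]
    for colour in tulips:  # count each awakening, not each participant
        if colour == "red":
            red_awakenings += 1
            pop1_red_awakenings += in_pop1

print(pop1_red_awakenings / red_awakenings)  # again settles near 1/3
```

The key design point is the inner loop: because the same individual is awakened twice, the per-awakening tally is what makes Pop-2 contribute two red-tulip observations per participant against Pop-1's one.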

    --Third step--

    In this new variation, Sleeping Beauties themselves play the role of tulips. The populations Pop-1 and Pop-2 are participants, let's call them Sleeping Uglies*, who each share a room with a Sleeping Beauty. The Sleeping Uglies will be administered the same amnesia-inducing drugs on Sunday and Monday night, but they will always be awakened both on Monday and Tuesday, ten minutes prior to the Sleeping Beauty's potential awakenings.

    Whenever I, as a Sleeping Ugly, awaken, the probability that I am in a room with a 'sleeping' (i.e., not scheduled to be awakened) Sleeping Beauty is 1/4. The probability that I now have been awakened in a room with a 'sleeping' Sleeping Beauty is 1/2 if I am part of Pop-1 and zero if I am part of Pop-2. Therefore, my priors are P('sleeping') = 1/4 and P(Pop-1) = 1/2, while P('awake'|Pop-1) = 1/2.

    Therefore, after Sleeping Beauty is awakened in front of me, I can infer, as before, that P(Pop-1|'awake') = P('awake'|Pop-1)*P(Pop-1)/P('awake') = (1/2 * 1/2)/(3/4) = 1/3, meaning the probability that I am part of Pop-1 after Sleeping Beauty is awakened is 1/3.

*My use of the Sleeping Uglies as participants in the experiment, and of the Sleeping Beauties' awakening episodes as evidence for the Uglies, is inspired by, but reverses, the example proposed by Robert Stalnaker in his paper Another Attempt to Put Sleeping Beauty to Rest.

    --Fourth and last step--

We can now dispense with the Sleeping Uglies altogether, since their epistemic situations, and the information that they make use of (namely, that the Sleeping Beauty in their room awakens), are identical to those of the Sleeping Beauties themselves. The only difference is that there is a ten-minute interval between the moment when the Sleeping Uglies awaken and can make use of their evidence to update their credences, while the Sleeping Beauties can update their credences immediately upon awakening. Even this small difference can be wiped out by introducing a ten-minute delay between the moment when the Sleeping Beauties are awakened (in all cases) and the moment when the interviewer shows up, with the proviso that when no interview is scheduled, the Beauties are put back to sleep rather than being interviewed, in which case their credence P(Pop-1) momentarily rises to one.
  • Sleeping Beauty Problem
    Not exactly, because if it's Monday the coin hasn't been flipped at all. It's only hidden if today is Tuesday and the coin is tails.Michael

    In the OP variation, the coin is flipped on Sunday before Sleeping Beauty's first awakening. Elga is right, though, that nothing substantially changes if the coin is flipped after Monday's (mandatory) awakening.

The point about Sleeping Beauty's credence being about her epistemic relationship to the coin flip result (either past or future) remains valid. If Sleeping Beauty were informed that today is Monday, her credence about the upcoming flip, P(H), would remain at 1/2. Under these circumstances, her credence would be fully predicated on the inherent propensity of the coin to land on either side with equal probability. Conversely, if she were told that it's Tuesday, she would know without a doubt that the coin landed on tails. In this case, her credence, P(H) = 0, would be independent of the coin's inherent propensity and solely a result of her causal relationship with the outcome. When the day of her awakening remains unknown to her, her rational credence becomes a composite function of both possibilities—that today could be Monday or Tuesday. Consequently, it becomes a function of the coin's propensities, for example, that it is a fair coin, and of the causal relationship between the coin toss result and her potential awakening on Tuesday.
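On the Thirder weighting of awakening-days (two of every three awakenings fall on Monday, one on Tuesday, since tails adds a second awakening), this composite credence is just the law of total probability. A sketch of the arithmetic, assuming those day weights:

```python
# Thirder day weights over awakenings (assumed here; this weighting is
# itself the contested step between Halfers and Thirders).
p_monday, p_tuesday = 2/3, 1/3

# Conditional credences described in the text above.
p_heads_given_monday = 1/2    # a fair coin, seen from Monday
p_heads_given_tuesday = 0.0   # a Tuesday awakening entails tails

p_heads = p_heads_given_monday * p_monday + p_heads_given_tuesday * p_tuesday
print(p_heads)  # 0.3333333333333333
```

The arithmetic illustrates rather than settles the dispute: a Halfer would assign the Monday/Tuesday weights differently and so reach a different composite.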
  • Sleeping Beauty Problem
    The problem (and Elga's solution) have nothing to do with how to "verify" one's credence. It simply asks what a rational person should/would believe were they told the rules of the experiment, woken up, and asked their credence.Michael

This is indeed a flaw inherent to the statement of the problem. What is a rational person supposed to believe about a probabilistic outcome that is ambiguously defined? To ask how probable it is that the coin landed on heads involves a tacit reference to the counterfactual circumstances where you are presently facing a (hidden) coin that didn't land the way it actually did. And this is not merely a reference to the coin's intrinsic propensities to land this or that way when tossed.

Suppose half the people in your neighborhood are Italians and half are Tunisians. You don't know your left door neighbor. As you meet him for the first time, your credence is P(Tunisian) = 1/2 (let us suppose). And then he introduces himself as Marco Gallo and announces that he's Italian. The reason why your credence about this man who was standing in front of you was P(T) = 1/2 wasn't because you thought this man had been equally likely to be born Italian or Tunisian, in the sense that this man had an equal propensity to be born both ways. It's because of your knowledge of the statistical distribution of Italians and Tunisians in your neighborhood that you had this credence. It's, in other words, because of the manner in which you are epistemically related to this population of individuals and what makes you more or less likely to meet some of them. The Sleeping Beauty problem only appears to fully specify what constitutes an occasion for Sleeping Beauty to "meet" with a coin that had landed on heads. It leaves a crucial ambiguity regarding what constitutes such a meeting.
  • Sleeping Beauty Problem
    If you ask, what's the likelihood you will see Heads if I bring you the coin, then the number of times she could have been woken up becomes relevant.Benkei

    One argument that Halfers make is that being given many occasions to verify the exact same thing, rather than verifying it once, doesn't change the probability of the outcome.

    This argument doesn't work in the context of the Sleeping Beauty problem, it seems to me. This is because the number of opportunities, in this case, is probabilistically dependent on the coin toss result.

But then, the Halfers have the rejoinder that since Sleeping Beauty is only aware of the circumstances of her current awakening, and not of the number of them, she gains no information upon awakening that she didn't already have when the experiment was set up (and when her credence P(H) was 1/2).

My Leonard Shelby (Memento) variation above was meant to clarify the nature of the information that Sleeping Beauty gains on the occasion of her awakening, and that enables her to update her credence P(H) by means of ordinary Bayesian reasoning.
  • Sleeping Beauty Problem
    I still think it's a language issue and it revolves around the verb "shows". I agree with Srap it doesn't make sense to pretend a coin has three states. The coin can't "show" three options but the "coin when shown to SB" can and these are simply different things. It's just that mathematicians only have a hammer that makes this look like a nail.Benkei

The concept of Sleeping Beauty's credence P(H) indeed goes beyond stating the coin's inherent likelihood to land heads or tails. In the context of this experimental setup, it represents her epistemic relation to the coin. A more precise interpretation might suggest that Sleeping Beauty's credence P(H) is her long-term expectation of the coin landing heads, verified on each occasion she expresses this credence. But this doesn't completely resolve the issue. The original statement of the problem, however, fails to specify what constitutes an individual act of verification of her credence, so the target ratio cannot be established unambiguously. As I've previously illustrated with various examples, different pragmatic considerations can lead to different verification methods, each yielding different values for P(H), aligning with either the Halfer or Thirder stance.
  • Sleeping Beauty Problem
    Since you're still a committed Thirder, here's what bothers me:

    (1) When a coin is tossed it has only two proper outcomes. (No leaners, no edges, and the usual ceteris paribus.) SB, upon being awakened, finds herself in one of three possible situations. But that doesn't work. A coin toss cannot, by itself, engender three distinct situations for her to be in. For instance, "for all she knows," this is her second tails interview; but there is no way to go from the coin toss directly to the second tails interview. Not without passing through the first interview. So SB's intuition that three things can happen to her requires there to be some other determinant of her situation; what is that other factor?
    Srap Tasmaner

    Actually, I've stopped being a committed Thirder a few days ago, since I began developing a pragmatist conception of the dispute between Thirders and Halfers.

    If Sleeping Beauty would hold that, on the occasion of an awakening, there is (from her point of view) a 1/3 chance that the coin landed on heads, that doesn't commit her to believing that there is any causal determinant of her awakening other than the coin toss result. It's just part of the setup of the experiment that she will be awakened one more time if the coin toss result is tails. The coin toss result determines the Tuesday awakening, while the Monday awakening is independent of it. So, I'm not sure I understand your question.

(2) The setup confounds wagering arguments. That won't matter much to a lot of people, but it's uncomfortable. Annoying. Ramsey used Dutch book arguments from the beginning, and despite their limitations they can be clarifying. Each time I've tried to construct a sane payoff table I've failed. I've wondered lately if there might be a conditional wager that comes out rational, but I can't work up enough hope of success to bother. Partial beliefs, within suitable limits, ought to be expressible as wagers, but not in this case, and that blows.

Can you explain why the payoff tables you've come up with are unsatisfactory to you? I can describe at least one that seems coherent:

    On each occasion of awakening, Sleeping Beauty accepts a two-to-one wager that the coin landed tails. If the coin was heads up on that occasion, she loses her bet. If it was tails up, she wins half the amount of her bet (and gets back her original bet). Her expected value at the end of the experiment is zero since whenever the coin landed heads she loses her whole bet while whenever it landed tails she wins half her bet twice. Her accepting such odds and expecting to break even reflects her credence that P(H) = 1/3.
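That payoff table can be simulated directly. A minimal sketch (Python; the unit stake and the bookkeeping are my own illustration):

```python
import random

random.seed(3)
runs = 100_000
stake = 1.0
profit = 0.0

for _ in range(runs):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2
    for _ in range(awakenings):
        # On each awakening she wagers at two-to-one that the coin landed tails.
        if heads:
            profit -= stake        # loses the whole stake (once, on the lone awakening)
        else:
            profit += stake / 2    # wins half the stake, twice per tails run

print(profit / runs)  # average profit per run stays near zero
```

Breaking even at two-to-one odds against heads is exactly what a credence of P(H) = 1/3 predicts.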

The reason why this doesn't settle the dispute in favor of the Thirders is that there are alternative ways to define Sleeping Beauty's credence P(H).
  • Sleeping Beauty Problem
    Here is another interesting aspect of the Sleeping Beauty problem. What is the nature of the information that Sleeping Beauty receives on each occasion of her awakening, and how can this information be communicated?

    PN:

    I thought of a different aspect of the problem while imagining another variation. Imagine that Leonard Shelby, suffering from anterograde amnesia, remains at home for the next seven days. Let us assume that all his new memories are wiped out overnight every night. If a coin lands heads, a door-to-door vacuum cleaner salesman will ring his doorbell once in the coming week. If it lands tails, a door-to-door broom salesman will ring his doorbell every day for the next six days. Those details are written down in Shelby's memento that he reads every morning. He doesn't have a calendar and won't know what the day of the week is until his niece visits him on the seventh day (Saturday). Predictably (and he also knows that to be true) by the end of the week, his broom closet will contain either one vacuum cleaner (if the coin landed heads) or six new brooms (if the coin landed tails).

    According to 'Halfer' reasoning, whenever the doorbell rings, Shelby should be expecting to meet the vacuum cleaner salesman with credence P(H) = 1/2. According to 'Thirder' reasoning, his credence should be P(H) = 1/7.

Suppose Shelby is a Thirder (as I also am, in this context). The ringing of the doorbell is what provides the evidence (i.e. "there is a salesman visit today") that enables Shelby to update his credence P(H) from 1/2 to 1/7. Shelby knows, though, that come Saturday, after his niece arrives, but before he looks into the closet, his credence will revert back to P(H) = 1/2. This is because his amnesia makes him lose the information that he gained whenever he heard the doorbell ring. What is the nature of this information that he lost? After all, he still knows from his memento that the doorbell must have rung at least once.

Before I tell you how I propose to explain how this information could be conceptualized, I'd like to receive your input about the thought experiment and your own speculation about the nature of the information that Shelby effectively forgets, thus causing him to revert his credence back to 1/2 even on Saturday morning.

    GPT-4:

    Your thought experiment is intriguing and sheds a new light on the problem. By introducing a character like Leonard Shelby with anterograde amnesia, you add a layer of complexity that makes the question about the nature of the "forgotten" information particularly interesting.

    If we delve into this, we can think of the "forgotten" information not merely as facts, but rather as temporal experiences. Whenever the doorbell rings, Shelby updates his credence based on the fact that a visit has occurred. As a Thirder, this evidence updates his credence to P(H) = 1/7, since each visit is more likely to be a broom salesman if the coin landed on tails. The ringing of the doorbell provides him with the evidence that a new day has passed.

    However, due to his amnesia, he forgets the experience of the passing days, despite knowing from his memento that the doorbell must have rung at least once. This lack of memory regarding the time elapsed (and hence the number of potential salesmen visits) is the missing information. It's not just the occurrence of the doorbell ringing (which he knows happened), but the frequency and distribution of those occurrences across days which is forgotten.

    When his niece visits him on Saturday, he doesn't remember the number of past salesman visits (how many times the doorbell rang), only that there has been at least one. He therefore can't distinguish between the two scenarios (one vacuum cleaner salesman visit or six broom salesman visits). This lack of distinction, due to his amnesia, causes him to revert his credence back to P(H) = 1/2.

    So, in summary, the forgotten information can be seen as the temporal sequence and frequency of the events (the doorbell ringing), not merely the events themselves.

    PN:

    Consider, though, that whenever the doorbell rings he doesn't know either how many times this happened—only that it happened at least once. But he already knew that this would happen at least once, from his memento, and will know that it must have happened once, come Saturday, still from his memento.

    GPT-4:

    Ah, I see what you mean. In that case, the information that's being lost isn't about the events themselves or their frequency, but the contextual information about when and how the events occurred in relation to his own experience of time.

    When the doorbell rings, Shelby gains the information that he is currently experiencing a salesman's visit. However, due to his amnesia, he loses the contextual information of this event having occurred in his subjective timeline. Even though he knows from his memento that at least one visit must have occurred, he doesn't have the personal, experiential connection to that event anymore.

    In other words, the "forgotten" information is the personal, experiential context of the event happening to him in the here and now. While he remembers that a doorbell ringing event was supposed to happen (from the memento), he has lost the immediate, personal experience of it happening to him in the moment. It's the disconnection from the immediate experience of the event that causes him to revert his credence back to P(H) = 1/2 on Saturday.

    Therefore, the nature of the information being lost due to his amnesia might be considered his personal, immediate connection to the event - the event as situated within his experience of the passing of time.

    PN:

This is an interesting way to think about it. Here is the alternative I had imagined. Suppose Shelby is enabled, as part of the protocol of the experiment (all written down in his memento), whenever the doorbell rings, and before answering, to throw four dice and, if they all land on six, to write down in his memento "today the doorbell rang". So, on the rare occasion when he rolls four sixes once during the week, the information that he thus expresses will be retained by him come Saturday. But we can now understand better what this information is. It's not so much the propositional content of the expression "today the doorbell rang" that enables him to update his credence to P(H) = 1/7 (on Saturday) but rather the fact that the proposition (or memory) was recorded at all. Since, if the coin landed heads, there would have been only one opportunity for Shelby to roll four sixes and record the event, whereas if the coin landed tails there would have been six times as many opportunities, the fact that this recording happened enables Shelby, come Saturday, to update his credence from 1/2 to 1/7 through normal Bayesian reasoning.

    GPT-4:

    That's a fascinating approach! The way you've set it up cleverly exploits the difference in opportunity frequency between the two outcomes (Heads vs Tails) of the coin toss. This mechanism you've devised essentially equates the memory recording event to a "successful" dice roll event. It's not the content of the memory per se that matters, but rather the existence of the memory itself.

    By creating a memory record when the doorbell rings and also meeting the condition of rolling four sixes, Shelby essentially marks one instance of the doorbell ringing. The dice roll acts as a filter, a probabilistic gatekeeper, reducing the chance of recording a memory. Thus, if a memory exists, it's more likely that it came from a context where there were more opportunities to create it, which corresponds to the Tails outcome.

    The brilliant part of this mechanism is that, as you pointed out, it doesn't rely on the information content of the memory itself. Instead, it relies on the existence of the memory as an indicator of the number of chances Shelby had to create it. This effectively bypasses the issue of Shelby's amnesia wiping out the informational content of his experiences.

    Indeed, by merely establishing a non-zero probability of recording an event, we have enough to perform a Bayesian update. This highlights how the thirder argument operates: not through a focus on specific events, but rather through a focus on the distribution of opportunities for those events to occur. So the 'forgotten information' is indeed the greater number of opportunities for creating a memory (of hearing the doorbell ring) that exists if the coin landed tails.

    In essence, this set-up gives Shelby a way to "bypass" his amnesia and track the greater frequency of opportunities for hearing the doorbell in the Tails scenario. The presence of a memory record, even if its content is inaccessible or irrelevant, provides an indicator of the "weight of chances" or frequency distribution for the two different outcomes, allowing for a Bayesian update.

    PN:

    And so, once we've clarified the nature of the information that Shelby is able to convey to himself (i.e. to his future self) by means of a note in his memento, we can apply the same reasoning regarding the nature of the information that he receives in the first place whenever the doorbell rings, and that enables him to update his credence immediately by means of ordinary Bayesian reasoning.

    GPT-4:

    Yes, that's a coherent and insightful application of your reasoning. The act of the doorbell ringing is akin to the dice rolling event in your mechanism. Each ring is an opportunity for Shelby to gain information about the outcome of the coin toss. His immediate Bayesian update is not contingent on knowing the specific number of times the doorbell has rung; rather, it is contingent on the opportunity distribution - the realization that, because the doorbell is ringing, there's a greater chance of being in the scenario where there are more opportunities for the doorbell to ring (Tails).

    This reasoning shows that information in this context is not so much about concrete details (like the specific number of doorbell rings) but about the probabilistic framework within which events are happening. What Shelby "loses" due to his amnesia is not the record of specific events (he can still recall from his memento that the doorbell must have rung at least once) but the trace of the more frequent opportunity distribution that comes with the Tails outcome.

    In sum, the key is understanding the weight of chances - the different opportunity distributions across different scenarios - as the information being received and updated upon. This is precisely what the "thirder" argument in the Sleeping Beauty problem leverages. The information lost due to Shelby's amnesia - or due to Sleeping Beauty's induced forgetfulness - is an understanding of this probabilistic context.

    PN:

    I couldn't have said it better myself!

    GPT-4

    I'm glad you found the explanation satisfactory! Your thought experiment with Shelby provided a really engaging and insightful way to consider the nuances of the Sleeping Beauty problem and how credence, memory, and probability interact. I enjoyed our discussion. Feel free to bring up any other philosophical or scientific puzzles you'd like to dissect!
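The memento-and-dice mechanism from the dialogue above is easy to check numerically. A Monte Carlo sketch (Python; the (1/6)^4 recording filter and the one-versus-six visit schedule are as described, the rest is my own scaffolding):

```python
import random

random.seed(4)
weeks = 500_000
p_record = (1 / 6) ** 4          # four dice all landing on six
weeks_with_record = 0
heads_weeks_with_record = 0

for _ in range(weeks):
    heads = random.random() < 0.5
    rings = 1 if heads else 6    # one vacuum-salesman visit vs six broom-salesman visits
    # Did Shelby manage to record at least one ring in his memento this week?
    recorded = any(random.random() < p_record for _ in range(rings))
    if recorded:
        weeks_with_record += 1
        heads_weeks_with_record += heads

print(heads_weeks_with_record / weeks_with_record)  # clusters near 1/7
```

Because a tails week offers six recording opportunities against a heads week's one, the bare existence of a record is evidence for tails, exactly as the dialogue concludes: it is the fact of recording, not the recorded content, that carries the information.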
  • Donald Trump (All General Trump Conversations Here)
    A God-Emperor isn't Jesus.ssu

    Where can we buy those new NFTs?
  • Sleeping Beauty Problem
    Isn't this problem a consequence of asking the wrong question, because the question is ambiguous? Given the setup, and that you'd be woken twice if it is tails, what's the likelihood now that you're awake, that the coin shows tails? That seems to resolve the ambiguity or am I missing something?Benkei

In the Sleeping Beauty problem, both main parties to the debate agree that it's Sleeping Beauty's credence while she is being awakened ("now") that is at issue. 'Halfers' assert that since Sleeping Beauty gains no new information when she awakens, her credence that the fair coin landed on heads should remain P(H) = 1/2. On the other hand, 'Thirders' argue that upon awakening, Sleeping Beauty should update her credence to P(H) = 1/3.

    You are right to point out that an ambiguity is present, but it isn't resolved by simply specifying the time span during which (or relative to which) Sleeping Beauty should evaluate her credence.

Consider two types of experimental runs: "H-runs" (following a heads coin toss result and yielding a single awakening) and "T-runs" (following a tails coin toss result and yielding two awakenings). Halfers argue that even though Sleeping Beauty experiences more awakenings during a T-run, this doesn't make such a run more likely. Therefore, upon awakening, her credence in being part of an H-run or a T-run should remain at 1/2 for each.

    Conversely, Thirders argue that Sleeping Beauty should update her credence in P(H) to 1/3 upon awakening, since in the long run, one-third of her possible awakenings occur as a result of the coin landing heads.

    At first glance, it might seem that Halfers focus more on the entirety of the experimental runs, while Thirders focus on individual awakening episodes. However, this observation doesn't resolve the dispute.

There are biconditional relationships between the propositions "The coin landed heads", "I have presently awoken in an H-run", and "I am presently experiencing an H-awakening". The same relationships exist for tails. Halfers and Thirders each believe that Sleeping Beauty can make valid inferences based on these relationships, leading to their respective credences. (It is also worth noting that there is a many-to-one relationship between T-awakenings and T-runs.)

Because those biconditionals hold, the Halfers argue that Sleeping Beauty is entitled to infer from "P(H-run) = 1/2" that "P(H-awakening) = P(H) = 1/2" is true as well.

Likewise, the Thirders argue that Sleeping Beauty is entitled to infer from "P(H-awakening) = 1/3" that "P(H-run) = P(H) = 1/3" is true as well.

    Those inferences appear valid, so the dispute remains.

The problem is that we have not resolved the ambiguity inherent in all three of the probabilities P(H), P(H-awakening), and P(H-run).

    The ambiguity, as I see it, can only be resolved by specifying the method of verifying the target events. If the verification is performed once at the end of each experimental run, the Halfer interpretation would be upheld. However, if the verification is performed during each individual awakening, it's the Thirder interpretation that would be upheld. (Notice that it's because of the aforementioned many-to-one relationship that the specification of the verification method disambiguates the ratio of events that Sleeping Beauty's credence tacitly refers to.)

    Each verification method has its uses, and the failure to specify them is what gives rise to the apparent conflict. My discussion with GPT-4 in the post above provides a more detailed exploration of this pragmatist interpretation by means of practical examples.
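The two verification methods can be made concrete in a few lines (a Python sketch; the two counters just implement "once per run" versus "once per awakening"):

```python
import random

random.seed(5)
runs = 100_000
heads_runs = 0
heads_awakenings = 0
total_awakenings = 0

for _ in range(runs):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2
    heads_runs += heads
    heads_awakenings += awakenings if heads else 0
    total_awakenings += awakenings

print(heads_runs / runs)                    # per-run verification: near 1/2 (Halfer)
print(heads_awakenings / total_awakenings)  # per-awakening verification: near 1/3 (Thirder)
```

Both ratios are computed from the same simulated history; only the choice of denominator (runs versus awakenings) differs, which is the pragmatist point.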
  • Sleeping Beauty Problem
    Here is another discussion with GPT-4 that rehearses my pragmatist conception of the dispute between halfers and thirders.

    PN:

    Let's explore a variation of the Sleeping Beauty problem. In this version, a six-sided die is tossed to determine the course of the experiment. Should the die result in 'six', Jane will be awakened six times consecutively. Otherwise, she will experience only a single awakening. This sequence will unfold over six days, with a single run of the experiment.

    In the case where the die does not land on six, Jane's lone awakening will occur on the sixth day of the experiment, and she will be released on the seventh day. The die outcome also determines the location of Jane's stay: 'six' results in her lodging in the East Wing, while any other outcome places her in the West Wing.

    Aunt Betsy, who plans to pick Jane up upon her release, is aware that Jane checked into the facility on June 30th and will be released on July 7th. On July 1st, after the die has been rolled but before any awakenings, Betsy calls the facility to learn where she should pick up Jane. The manager informs her that this information cannot be disclosed directly but can be communicated to Jane, who is allowed to send a response.

    To ensure that Jane's knowledge doesn't impact her belief in the probability that a 'six' was rolled, the manager forwards Betsy's inquiry to Jane during each awakening, irrespective of the die result. Jane's responses are collected, and only one is sent to Aunt Betsy on the evening of July 6th, during Jane's final sleep.

    Upon receiving Betsy's inquiry, Jane estimates the probability of the die having landed on 'six', which she computes to be 6/11. She reasons that Betsy should therefore wait for her at the East Wing, where she believes there is a 6/11 chance she is being housed.

    By contrast, from Betsy's perspective, there should be a 5/6 probability that the die didn't land on 'six'. This perspective, grounded in observations from multiple hypothetical experiment runs, indicates Jane is likely housed in the West Wing with a 5/6 probability. Despite stemming from the same fundamental setup, their credences appear to conflict: Jane and Betsy can't both be correct about where Jane is most likely to be released from. What went wrong?

    GPT-4:

    This is an interesting variation of the Sleeping Beauty Problem, and the apparent conflict arises from a fundamental difference in the knowledge and perspectives of Aunt Betsy and Jane.

    From Jane's perspective, each awakening she experiences is a random draw from all her potential awakenings. If the die landed on 'six', she will be awakened six times, and if the die did not land on 'six', she will be awakened once. So when she wakes up and contemplates her situation, she reasons that there are six times more ways she could have found herself awake in the East Wing compared to the West Wing, and thus, she assigns a 6/11 credence to being in the East Wing.

    On the other hand, Aunt Betsy's perspective is fundamentally different. She knows that the experiment will last for seven days and that Jane will only be released on the seventh day. She does not know whether Jane was awakened multiple times or just once, and she is not experiencing the awakenings as Jane is. Her 5/6 credence that Jane is in the West Wing is based on the probability of the die roll, independent of Jane's awakening experiences.

    So, the difference in their credences comes from the different ways they experience or perceive the experiment. Jane's credence is based on her waking experiences, while Aunt Betsy's is based on the initial conditions of the experiment. While this might appear to be a conflict, it's simply a result of their different perspectives on the same situation. This situation also underscores the subjective nature of Bayesian probabilities, which are personal degrees of belief and may vary between individuals even when considering the same event, based on their differing perspectives or knowledge about the situation.

    PN:

    I very much agree with everything that you said, but the solution of the puzzle involves reconciling Jane's credence with the fact that her aunt will indeed pick her up at the wrong wing five times out of six if she were to rely on Jane's message (and hence why Jane should rethink it).

    GPT-4:

    This is a great thought experiment. It reminds us that although probabilities might be personally reasonable, they can't always be used interchangeably between different people or contexts, especially when the information set or the perspective changes.

    The solution to this puzzle lies in realizing that Jane, when she sends the message, is aggregating over multiple awakenings. The 6/11 credence that Jane assigns to being in the East Wing is not tied to any specific day of the experiment but is an average over all her possible awakenings. On each awakening, she updates her belief based on the premise that she's equally likely to be awake on any given day.

    However, Aunt Betsy doesn't care about Jane's experience across all her awakenings. She cares about Jane's location on a specific day, the seventh day. Jane needs to acknowledge this and should instead reason as follows:

    If the die landed on 'six', Jane will have six awakenings in the East Wing.
    If the die didn't land on 'six', Jane will have one awakening in the West Wing.
    However, from Jane's perspective, each of her awakenings is equally likely. Thus, if Jane knew it was her final awakening, her credence in being in the East Wing would only be 6/7, not 6/11. If it is not her final awakening, Jane can deduce that she must be in the East Wing. Therefore, Jane should send two types of messages: one indicating East Wing if it is not her final awakening, and another indicating a 6/7 chance of being in the East Wing if it is her final awakening.

    From Aunt Betsy's perspective, she would need to know if it's Jane's last awakening, which she doesn't. But if the manager were to choose to send the message indicating the East Wing, if any, or the other message otherwise, Aunt Betsy's best strategy would be to follow the message's instructions and pick Jane up at the East Wing if the message indicates so, otherwise at the West Wing. This strategy would result in Aunt Betsy waiting at the correct wing six out of seven times. This may not match perfectly Jane's 6/11 credence but it's the best strategy that can be devised from their shared perspective.

    PN:

    Aunt Betsy doesn't know the experimental setup, so I equate her perspective with the observed result that (were she placed in this situation multiple times) she would find Jane coming out of the West Wing five out of six times. This is the same observation that Jane would make. (The seventh day doesn't count in the calculations since it's not a day of awakening and interview, it's just the day of discharge.)

    Jane's mistake was to infer that her being located in the East Wing six out of eleven times when she awakens translates into her coming out of the facility from the East Wing six out of eleven times when she is discharged. It's as simple as that.

    Six of her awakenings that occur in the East Wing result in one single discharge from that wing, which is why she should direct her aunt to the West Wing, where 5 out of 6 of her discharges occur (each following a single awakening), despite the fact that it is *now* (on the occasion of her present awakening) more likely than not that she is in the East Wing.

    GPT-4:

    Ah, I see! Thank you for the clarification. Yes, Jane seems to have confused her credence for being in the East Wing at the time of any awakening (6/11) with the probability of being discharged from the East Wing at the end of the experiment. As you mentioned, the error arises because six awakenings in the East Wing all lead to one single discharge from the East Wing, while one awakening in the West Wing leads to one discharge from the West Wing.

    Therefore, the probability that Jane will be discharged from the East Wing is indeed 1/6, not 6/11. Conversely, the probability that she will be discharged from the West Wing is 5/6. So, Jane should inform Aunt Betsy to wait for her at the West Wing, since that is where she will most likely be discharged from. This aligns with the observed result that Aunt Betsy would find Jane coming out of the West Wing five out of six times if this scenario were to be repeated multiple times.

    Thank you for presenting this interesting variation of the Sleeping Beauty problem. It indeed reinforces the point that probabilities can't be used interchangeably between different contexts and reminds us to always consider the exact event or outcome that the probability is referring to.

    PN:

    My variation of the Sleeping Beauty problem is designed to demonstrate that conflicting credences don't necessarily stem from incompatible epistemic viewpoints. Rather, they often arise from differing interpretations of what the credences actually represent. In the Sleeping Beauty problem, the confusion is compounded by the fact that different credences—1/2 and 1/3—are both assigned to the same 'state' (Heads) of an object (a fair coin). However, what truly distinguishes these credences, and explains their different values, is the uniquely individuated sets of events each considers. For a 'halfer', the credence is based on the ratio of Heads-experimental runs to total experimental runs. Conversely, for a 'thirder', it's about the ratio of Heads-awakenings to total awakenings. The apparent conflict is not a matter of differing epistemic perspectives, but rather differing definitions of the credences in question.

    GPT-4:

    I understand your point and agree with it. The distinction you've made is essential and helps clear up the confusion surrounding the Sleeping Beauty problem. Many debates about probability and credence essentially boil down to differences in the definitions of what constitutes an event or an experimental trial. In this context, it's clear that what one person considers as an 'event' might differ substantially from another person's definition. By drawing attention to these differences in definition, we can gain a clearer understanding of why different credences are assigned in the first place and clarify seemingly conflicting probabilities. It's a perspective that adds a significant depth to the discussion and makes the resolution of the paradox more nuanced and comprehensive.

    PN:

    Coming back to the new variation, which was intended to highlight a particular pragmatic context in which Jane can—and should—make use of a 'halfer' conception of her credence P('six') = 1/6, there are other pragmatic concerns that can make her rely on the 'thirder' conception. One such context would be for her to find an opportunity to escape from the facility and to need to prepare accordingly depending on her being hosted in the East or West Wing. One of them might be surrounded by a moat filled with crocodiles and the other by a forest infested with lions. Since, in this case, the escape opportunities are distributed evenly across occasions of awakening, Jane should prepare her escape on the basis of a 'thirder' definition of her credence P('six') = 6/11.

    GPT-4

    Yes, that's an excellent extension of the scenario. If Jane finds herself in a situation where she must make a decision based on her current location during each awakening, then it is indeed more appropriate for her to use the 'thirder' credence. Given the uniform distribution of escape opportunities across awakenings, Jane's best strategy would be to prepare for the highest probability scenario in her current awakening, which is in the East Wing according to 'thirder' reasoning.

    Your point here further underlines the importance of considering the context and its implications when deciding which credence to rely on. The 'halfer' and 'thirder' perspectives are not mutually exclusive or inherently right or wrong; instead, they offer different lenses through which to view the problem, each one more useful in different situations. It's the specifics of the situation and the nature of the decision to be made that determine which perspective should be used.
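
    The arithmetic behind the two credences in this thread can be checked with a short Monte Carlo sketch. This is my own illustration, not part of the original exchange, and it assumes the setup exactly as described: a fair six-sided die, six East-Wing awakenings on a 'six', and a single West-Wing awakening otherwise.

```python
import random

random.seed(0)
east_awakenings = west_awakenings = 0
east_discharges = west_discharges = 0

for _ in range(600_000):
    if random.randint(1, 6) == 6:   # die lands on 'six'
        east_awakenings += 6        # six awakenings in the East Wing...
        east_discharges += 1        # ...but only one discharge from it
    else:
        west_awakenings += 1        # a single awakening in the West Wing
        west_discharges += 1        # and a single discharge from it

total_awakenings = east_awakenings + west_awakenings
total_discharges = east_discharges + west_discharges
print(east_awakenings / total_awakenings)  # ≈ 6/11 ≈ 0.545 ('thirder' ratio)
print(east_discharges / total_discharges)  # ≈ 1/6 ≈ 0.167 ('halfer' ratio)
```

    Both ratios are computed from the very same runs; which one Jane's credence should track depends on whether the relevant occasions are awakenings (escape opportunities) or discharges (pick-ups by Aunt Betsy).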
  • Donald Trump (All General Trump Conversations Here)
    Mike Pence in his recent CNN town hall makes a compelling argument against Trump's indictment. It sends a terrible message to the world. Since Pence and Biden committed the exact same crimes (according to Pence), indicting only Trump undermines the U.S.A.'s enviable reputation as a place where all rich and powerful white men stand as equals above the law.
  • Donald Trump (All General Trump Conversations Here)
    If economic issues were the concern then they'd be voting for Democrats.

    It's clearly social issues (the "culture war") that elicit support for Trump and the Republicans.
    Michael

    That may be a topic for another thread but George Packer's piece How America Fractured into Four Parts in The Atlantic suggests to me an interesting way in which culture and economics intersect to sustain both the economic inequalities and the culture war by means of the four-fold division that he details. On the left, Smart America (academics, mainstream media, tech gurus, Democrat politicians) stands on top of Just America (SJWs, students, progressive activists) while, on the right, Free America (libertarians, capitalists, lobbyists, Republican politicians) stands on top of Real America (Trumpists, patriots, poor and middle class conservatives). On each side, those on top protect their economic status by redirecting the anger and grievances of their 'cultural allies' below against their fellow destitute on the other side of the exacerbated cultural divide.

    Peter N. Limberg and Conor Barnes' The Memetic Tribes Of Culture War 2.0 further refines Packer's fourfold division and underlines other dimensions of the ideological polarisation.
  • Sleeping Beauty Problem
    Neither the sitter nor the participant should update their credence to match the other's.Michael

    In that case, in the secret mission scenario, it looks like Sergeant Sue and G.I. Jane are poised to have a fight—or at least a heated argument—regarding where the secret package should be delivered. Sergeant Sue would advocate for its delivery to the allied field hospital, relying on her 1/3 credence, while G.I. Jane, holding a 1/2 credence, would maintain that the destination of the package doesn't make a difference. (The parameters of the scenario could be tweaked to make the dispute more pronounced, thereby yielding 1/6 vs 6/11 credences, say.)

    Alternatively, Sue and Jane could come to realize that their apparently divergent credences target different classes of events (which is what I have argued) and conclude that both of them are true, but that it's the 1/3 credence that is relevant for determining the likely contents of the package they are presently dealing with.
  • Sleeping Beauty Problem
    Jane shouldn't update her credence to match Sue and Sue shouldn't update her credence to match Jane.Michael

    But we are agreed on the validity of Sue's credences in both scenarios, right?
  • Sleeping Beauty Problem
    Was my rephrasing of it wrong? I'm treating DZ#1 as Monday and DZ#2 as Tuesday. If twice at DZ#1 then twice on Monday, if once at DZ#2 then once on Tuesday. If you know that it's DZ#1 then you know that it's Monday.Michael

    Structurally, DZ#1 is similar to awake and DZ#2 is similar to asleep. In this way, the Jane&Helen scenario maps perfectly onto the secret mission scenario, with DZ#2 and Helen fulfilling the roles of dummy placeholders, as it were.
  • Sleeping Beauty Problem
    No, I was just trying to rephrase your secret mission example into a way that I could understand better. Did I misinterpret it?Michael

    In the secret mission example, Sergeant Sue is tasked with locating Jane on one of her three scheduled missions, selected at random. Neither Sue nor Jane knows whether this mission is the first, second or third one or, if it's not the first, in what zone(s) Jane landed before.
  • Sleeping Beauty Problem
    So if heads then woken once on Monday and twice on Tuesday, otherwise woken twice on Monday and once on Tuesday.

    Sue tells Jane that it's Monday.

    What is Jane's credence that the coin landed heads?

    I say 1/2.

    It's exactly the same reasoning as before.

    Sue should reason as if she is randomly selected from the set of all sitters, and 1/3 of sitters sitting in a Monday room are sitting in a heads room.

    Jane should reason as if she is randomly selected from the set of all participants, and 1/2 of participants in a Monday room are sitting in a heads room.
    Michael

    Although you linked my most recent post, I assume you intended to respond to this one.

    In my scenario, there was only one room for awakenings. Without loss of generality, we can schedule the occupancy of that room over the three days of the experiment, according to the coin toss result, as follows:

    If the coin lands Heads:

    Day 1: Jane (first),
    Day 2: Helen (first),
    Day 3: Helen

    If the coin lands Tails:

    Day 1: Jane (first),
    Day 2: Jane,
    Day 3: Helen (first)

    I use the label 'first' to signify a participant's first (or only) day of awakening. This is functionally equivalent to Monday in the original problem.

    On any given day, whichever candidate doesn't occupy the waking room is moved into the sleeping room.

    When Sue is assigned to attend to the participant who is awakened on a randomly chosen day (without being informed if it's Day 1, Day 2 or Day 3), her credence in the coin having landed heads is 1/2. After she is informed that the occupant is Jane, she updates her credence P(H) to 1/3. After she is further informed that this is Jane's first or only occasion of awakening, her credence gets updated back to 1/2 again.

    I would argue that Jane should update her credence in the same way in light of the same information. But first, I wanted to ensure that we are on the same page regarding both the setup and Sue's credences within it.

    (I'll address your extreme example separately.)
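
    Sue's three-stage update (1/2 before any information, 1/3 upon learning that the occupant is Jane, back to 1/2 upon learning that it's Jane's first or only awakening) can be checked with a small simulation. This is my own sketch, assuming the occupancy schedule exactly as tabulated above.

```python
import random

random.seed(1)
jane_days = heads_given_jane = 0
jane_first_days = heads_given_jane_first = 0

for _ in range(300_000):
    heads = random.random() < 0.5
    day = random.randint(1, 3)  # the randomly chosen day on which Sue attends
    # Occupancy schedule from the scenario: (occupant, is_first_awakening) for Days 1-3
    if heads:
        occupant, first = [('Jane', True), ('Helen', True), ('Helen', False)][day - 1]
    else:
        occupant, first = [('Jane', True), ('Jane', False), ('Helen', True)][day - 1]
    if occupant == 'Jane':
        jane_days += 1
        heads_given_jane += heads
        if first:
            jane_first_days += 1
            heads_given_jane_first += heads

print(heads_given_jane / jane_days)              # ≈ 1/3: P(H | occupant is Jane)
print(heads_given_jane_first / jane_first_days)  # ≈ 1/2: P(H | Jane's first awakening)
```

    The frequencies track exactly the Bayesian updates described above, since Sue's day is drawn uniformly and independently of the coin.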
  • Sleeping Beauty Problem
    Sue's reasoning is right for Sue, but wrong for Jane (and vice versa), given that 2/3 of sitters will sit in on a 100 Heads interview but 1/2^100 of participants will have a 100 Heads interview.Michael

    Let's consider another scenario. Suppose Jane is an amnesiac paratrooper set to be dropped behind enemy lines for three separate missions on different nights. Conveniently, she suffers from selective anterograde amnesia which shields mission information in case she's captured. There are two potential drop zones: #1 is dry land, and #2 is a swamp.

    Mission Control will toss a coin to decide Jane's destinations. If it lands heads, she will be dropped once over DZ#1 and twice over DZ#2. If it lands tails, she'll be dropped twice over DZ#1 and once over DZ#2. Jane carries a tracking device which will enable Sergeant Sue, who's already operating behind enemy lines, to locate her during one randomly selected mission, collect the mission package, and direct her back home.

    Now, what's the probability that the coin landed heads? Before the experiment begins, both Sue and Jane have the same prior credence P(H) = 1/2.

    However, suppose Sue locates Jane in DZ#1. What is the updated probability that the coin landed heads? It would be 1/3. Shouldn't Jane update her credence to the same value when she realizes she's landed on dry land?

    Suppose the outcome of the coin toss also determines the nature of the package that Jane is carrying. If the coin landed on heads, Jane's package contains a secret explosive; if it landed tails, the package contains a secret medication. Both Jane and Sue know this arrangement beforehand.

    When Sue tracks Jane and finds her in DZ#1, they both realize that Sue should deliver the package to a hospital if the coin landed tails (since the package would contain medication), or to a military outpost if the coin landed heads (since it would contain an explosive).

    Now, should they both act as though the probability of the coin having landed heads is 1/3? It's not just about winning or losing a bet anymore. The decisions they make based on their credence have immediate and tangible consequences. They must update their credence according to the information they have at hand, including Jane's current location.
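
    A quick simulation (my own sketch, assuming the drops are distributed as described and that Sue's mission is chosen uniformly at random) bears out the 1/3 figure: among the occasions on which Sue finds Jane in DZ#1, the coin landed heads about a third of the time.

```python
import random

random.seed(2)
dz1_found = dz1_heads = 0

for _ in range(300_000):
    heads = random.random() < 0.5
    # Heads: one drop over DZ#1 and two over DZ#2; tails: the reverse
    drops = ['DZ1', 'DZ2', 'DZ2'] if heads else ['DZ1', 'DZ1', 'DZ2']
    mission = random.choice(drops)  # the one randomly selected mission
    if mission == 'DZ1':            # Sue locates Jane on dry land
        dz1_found += 1
        dz1_heads += heads

print(dz1_heads / dz1_found)  # ≈ 1/3: P(H | Jane found in DZ#1)
```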
  • Sleeping Beauty Problem
    Jane should reason as if she was randomly selected from the set of all participants, because she was (via the coin flip).Michael

    Indeed, Jane reasons as if she was randomly selected from the set of all participants, as this forms the basis of her prior probabilities. What you seem to suggest is that she should not, or cannot, assign any credence or probability to specific features of her current awakening episode. However, if she is informed that this is her first (or only) awakening, then she should and does update her credence P(H) to 1/2. She could only do this if her initial prior was 1/3. Otherwise, she'd end up with P(H) = 2/3, akin to Lewis's divination of the future.

    The introduction of Sue Sitter is designed to demonstrate that Jane can update her credence based on the same information available to Sue, even though the circumstances leading them to their shared situation are different. Specifically, the selection process for Sue ensures that, despite being fully aware of the experimental setup (like Jane), she shares Jane's state of ignorance (with no need for amnesia-inducing drugs) about whether her current awakening episode is unique, the first of two, or the last of two. Yet, concerning this awakening episode, they share the same priors and the same information necessary to update their credences.

    If you are uncomfortable with Jane reflecting specifically about her current awakening episode, consider this: Let Jane reflect on Sue's reasoning about the probability P(H). If she deems Sue's reasoning correct, she could adjust her own credence to match Sue's, since they both concern the exact same coin toss result. Surely, Jane cannot reasonably say: 'Yes, I see you are right to conclude that the probability of the coin having landed on heads is 1/3, based on the information we share. But my belief is that it's actually 1/2.'
  • Sleeping Beauty Problem
    Going back to this for a moment, I think a better way to write this would be:

    P(Heads|H1 or T1 or T2) = P(H1 or T1 or T2|Heads) * P(Heads) / P(H1 or T1 or T2)

    If Elga is right in saying that P(H1), P(T1), and P(T2) sum to 1 then P(H1 or T1 or T2) = 1.

    So P(Heads|H1 or T1 or T2) = 1/2

    If he's right when he says that "[you] receiv[e no] new information [but] you have gone from a situation in which you count your own temporal location as irrelevant to the truth of H, to one in which you count your own temporal location as relevant to the truth of H" then it seems correct to say that Sleeping Beauty is just being asked about P(Heads|H1 or T1 or T2).
    Michael

    Your calculation seems correct, but it doesn't adequately account for the new capacity Jane gains to refer to her own temporal location using an indexical expression when updating her credence. Instead, you've translated her observation ("I am awake today") into an impersonal overview of the entire experiment ("I am scheduled to be awakened either under circumstances H1, T1, or T2"). The credence you've calculated reflects Sleeping Beauty's opinion on the ratio, over many iterations of the experiment, of (1) the number of runs resulting from a heads result, to (2) the total number of experimental runs. Indeed, this ratio is 1/2, but calculating it doesn't require her to consider the knowledge that today falls within the set {H1, T1, T2}.

    Let's reconsider the scenario you proposed yesterday with two Sleeping Beauties, Jane and Helen. If the coin lands heads, Jane is awakened once and Helen twice, and vice versa if it lands tails. They occupy the same waking room on three consecutive days (twice by either Jane or Helen, depending on the coin toss). On non-awakening days, they remain in a sleeping room. Sue Sitter is assigned randomly to attend to whoever is awakened in the waking room on one of three randomly selected days.

    When Sue finds Jane in the assigned room, and assuming she knows the participants and the experimental setup, her prior probabilities would be:

    P(Jane awake today) = P(JAT) = 1/2, and P(H) = 1/2

    Her updated credence for H is P(H|JAT) = P(JAT|H) * P(H) / P(JAT) = (1/3*1/2) / (1/2) = 1/3

    Jane's priors for any random day during the experiment would be exactly the same as Sue's. When Jane is awakened on a day when Sue is assigned to her, Jane has the same information about herself that Sue has, and so she can update her credence for H in the same way. She concludes that this kind of awakening experience, resulting from a heads result, is half as probable, and thus half as frequent, as an identical awakening experience resulting from a tails result. This conclusion doesn't impact the ratio of the frequency of heads-result runs to the total number of experimental runs, which remains at 1/2 from anyone's perspective.
  • Sleeping Beauty Problem
    I believe this response to PhilosophyRunner addresses this claim. Specifically: [...]Michael

    I'd be happy to revisit this objection but I'm going to stand back and watch a little how @PhilosophyRunner grapples with it. If I see that they are in trouble, I may jump back into the ring. I've heard that's how things are done at the WWE.

Pierre-Normand
