Comments

  • Sleeping Beauty Problem
    Beauty gets cloned with all her memories on a given flip, such that each Monday and Tuesday has a 50% chance of resulting in a new clone being created.
    Count Timothy von Icarus

    Are you suggesting that one new clone is always created, but the coin flip determines on which day? Furthermore, are Sleeping Beauty and her clones only awakened on Wednesdays? Regardless, I believe that your experiment could completely ignore the element of randomness introduced by the coin flip, and Sleeping Beauty's uncertainty on Wednesday would still exist, solely because she doesn't know if she's the original Beauty or a clone.

    Based on (an admittedly simple) Bayesian take, Beauty should be increasingly confident that she is the real Beauty with each passing week. The whole idea is that repeated trials should move the dial in our probability estimates. And yet, this doesn't seem right, no?

    It doesn't seem to me that Bayesian principles would allow Sleeping Beauty to grow increasingly confident, with each passing week, that she isn't a clone. That's because the design of the experiment ensures that she survives if she is the original Beauty. She can recall (or rather inherit as "fake" memories) that every previous week, upon awakening, she had (from her viewpoint) a 1/2 chance of being a clone. However, her past survivals weren't probabilistically independent events, given that the survival of the original is always guaranteed by the fact that she is the original. It remains true, however, that until she leaves the room and survives yet again, her credence that she is currently the original is only 1/2.

    On edit: After asking GPT-4 (always the sycophant) to check my English, I discussed another issue with it.

    PN: Rethinking about it, I've imagined another form of faulty reasoning that might influence Sleeping Beauty in thinking that her continued survival is an increasingly unlikely event. Can you imagine what it is?

    GPT-4: It might be related to the well-known but fallacious gambler's fallacy, which involves the mistaken belief that, if something happens more frequently than normal during a certain period, it will happen less frequently in the future, or that if something happens less frequently than normal during some period, it will happen more frequently in the future. [snip]

    Is that the kind of reasoning you were considering?

    PN: I was rather imagining that Sleeping Beauty wasn't thinking clearly about her own personal identity. She may have thought (or Count Timothy von Icarus may have thought on her behalf) that on each previous week there was a 1/2 chance that "she" (Sleeping Beauty) would survive. Hence the idea of multiplying independent odds. But whenever she awakens on Wednesday, Sleeping Beauty's uncertainty can't coherently be expressed as "what are the odds that I, Sleeping Beauty, am the original" but rather as "what are the odds that 'I', whoever 'I' am, am the original Sleeping Beauty rather than a clone."

    GPT-4: I see, that makes a lot of sense! In this case, Sleeping Beauty's potential misunderstanding arises from a confusion about personal identity and what it means for "her" to survive. You're right that the probability she should be considering is not "what are the odds that 'Sleeping Beauty' survives" but "what are the odds that 'I', this conscious observer that is experiencing this moment, am the original Sleeping Beauty versus a clone."

    In other words, each time she awakens, there is always a 'new' Sleeping Beauty (from the perspective of that conscious observer) asking this question, and for each of them, the odds are always 1/2 of being the original. There is no compounding effect from week to week because each instance of questioning is, in a sense, a unique and independent event from the perspective of the one experiencing it. Each conscious observer is asking about their own identity in that moment, not the abstract identity of 'Sleeping Beauty' across all time.

    This is a nuanced but important distinction, and getting it right is crucial for correctly understanding the Sleeping Beauty problem and its variations.

    PN: "This is a nuanced but important distinction, and getting it right is crucial for correctly understanding the Sleeping Beauty problem and its variations."

    I don't think it's crucial for understanding the original problem, where issues of personal identity (in the guise of questions about reidentification of the self over time) don't arise, but it may be relevant to some of the extensions regarding simulations, the multiverse, the anthropic principle, etc.

    GPT-4: You're absolutely right. In the original Sleeping Beauty problem, questions of personal identity indeed don't play a significant role, as the problem focuses on updating beliefs based on self-locating uncertainty.

    However, as you pointed out, these considerations become quite important when dealing with variations of the problem that involve cloning, simulations, or multiverse scenarios, where the 'identity' of the observer might be more difficult to determine or may split into multiple instances.
  • Sleeping Beauty Problem
    If a die rolls a 6 then Sleeping Beauty is woken six times otherwise she is woken once. When woken what is her credence that the die rolled a 6?

    Halfers have to say 1/6

    and thirders have to say 6/11.

    Before she is first put to sleep she is to bet on whether or not the die will roll a 6 – paid out at the end of the experiment – and each time she is woken she is allowed to change her bet.

    If she bets according to her credence then both halfers and thirders have to say that before she is first put to sleep she will bet that the die will not roll a 6.

    Thirders then have to say that when woken she will change her bet and bet that the die did roll a 6.

    Are thirders willing to commit to their position and change their bet?
    Michael

    Thirders wouldn't change their bet in this scenario. Although it's true that in this setup a bettor whose credence is P("6") = 6/11 (i.e., who takes the proportion of awakenings that follow a roll of 6, among all awakenings, to be 6/11) might be tempted to bet on the outcome "6", they're also aware that a successful bet will be rewarded only once at the end of the experimental run, no matter how many times they've made that assessment.

    Here's where the nuance comes in: over the course of six experimental runs, they would, on average, place a winning bet 6 times out of 11, if we imagine each individual bet as standing alone. However, due to the rules of this experiment, those six winning bets result in only a single even-money payout. The lack of profitability in this scenario doesn't impugn the credence in the outcome. Instead, it reveals a peculiar feature of this setup: multiple winning bets are not individually rewarded. Therefore, the bet one ought to make doesn't straightforwardly track one's credence in the outcome of the die roll; it must also take into account the payout rules of this specific experimental setup.
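    Here is a quick Monte Carlo sketch (a Python illustration of the setup just described, with the even-money bet settled once per run as I read Michael's stipulation) showing how the per-awakening frequency of "6"-awakenings can be 6/11 while the once-per-run bet is still better placed on "not 6":

        import random

        # Die variant: a roll of 6 yields six awakenings, any other roll yields one.
        # The bet is settled only once, at the end of each run, at even money.
        runs = 100_000
        six_awakenings = 0
        total_awakenings = 0
        net_bet_six = 0      # cumulative net payout of betting "6" once per run
        net_bet_not_six = 0  # cumulative net payout of betting "not 6" once per run

        for _ in range(runs):
            roll = random.randint(1, 6)
            awakenings = 6 if roll == 6 else 1
            total_awakenings += awakenings
            if roll == 6:
                six_awakenings += awakenings
                net_bet_six += 1
                net_bet_not_six -= 1
            else:
                net_bet_six -= 1
                net_bet_not_six += 1

        print(six_awakenings / total_awakenings)  # ≈ 6/11: per-awakening frequency of "6"
        print(net_bet_six / runs)                 # ≈ -2/3 per run: betting "6" loses
        print(net_bet_not_six / runs)             # ≈ +2/3 per run: betting "not 6" wins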
  • Sleeping Beauty Problem
    Both of these are true (note the tense):

    1. To reach the toucon enclosure I must first turn right at the fork and then pass the tiger enclosure
    2. The probability that I will turn right at the fork is 1/2

    When I wake and consider my credence that the next enclosure is the toucon enclosure I consider what must have happened (or not happened) for the next enclosure to be the toucon enclosure. I know that I must have first turned right at the fork (A) and then passed the tiger enclosure (B).

    P(A, B) = P(A) × P(B|A)

    My claim is that the probability of having turned right at the fork is equal to the probability of turning right at the fork, i.e. 1/2.

    Your claim is that the probability of having turned right at the fork is equal to the fraction of all encountered enclosures which are right-side enclosures, i.e. 2/3.

    I don't think your claim makes any sense. The probability of the first event having happened isn't determined by what could happen after that first event happens. The probability of the first event having happened is determined only by the probability of that first event happening.
    Michael

    It actually often makes sense that the probability of an event having happened is determined by what has been found to happen (as a consequence of it) after that first event happened. Take the simple example of a coin toss: the initial probability that a coin would land heads was 1/2. But suppose we have tossed the coin, and we now see it landed heads. Our updated probability that it landed heads is now 1. In this case, our current situation—our observing the fact of the coin having landed heads—has completely determined the probability of the previous event, the coin landing heads. This may seem trivial, but a similar principle is at play in our journey through the zoo, and it is also key to the Sleeping Beauty problem.

    The probability of one finding oneself in a specific situation is not only determined by the initial probabilities of different outcomes (or paths taken) but also by the subsequent encounters or observations that are stipulated to occur as a result of those outcomes. Importantly, it is precisely when the proportion of these subsequent observations (or encounters) depends on the earlier outcome that those observations warrant a Bayesian updating of our credence.

    Let's look back at our zoo journey. Right now, as we approach an enclosure, what would have had to happen for us to be approaching a hippo, tiger, or toucan enclosure? For a hippo enclosure, we must have taken a new fork after passing either a hippo or a toucan enclosure. For a toucan enclosure, we must have walked past a tiger enclosure. Every fork is equally likely to lead to a hippo or a tiger enclosure directly, so we can outline the possible scenarios as follows:

    Hippo -> Hippo (1/2) or Tiger (1/2)
    Toucan -> Hippo (1/2) or Tiger (1/2)
    Tiger -> Toucan (1)

    Now, let's consider (for the sake of argument) that we are equally likely to have just passed any of the three types of enclosures. This leads to the following scenarios with their probabilities:

    1/3 of the time we passed a Hippo -> we're now approaching a Hippo (1/6) or Tiger (1/6)
    1/3 of the time we passed a Toucan -> we're now approaching a Hippo (1/6) or Tiger (1/6)
    1/3 of the time we passed a Tiger -> we're now approaching a Toucan (1/3)

    This shows that, even if we start with equal chances of having just passed any kind of enclosure, and even if every new fork is equally likely to lead directly to an H-path or a T-path, the equal probabilities of approaching each kind of enclosure at each new step are maintained. This refutes your argument that we should have a 1/2 chance of approaching a hippo enclosure based on the equal chances of taking an H-path or a T-path at the previous fork. It is precisely because every new fork was equally likely to lead directly to an H-path or a T-path that, whenever we are approaching an enclosure, the probability that it is a hippo enclosure is 1/3.
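    For what it's worth, these frequencies are easy to check with a small simulation (a Python sketch of my own, assuming the layout just described: each fork's fair coin selects an H-path containing one hippo enclosure or a T-path containing a tiger enclosure followed by a toucan enclosure):

        import random

        # Walk many path segments; tally enclosure encounters by type.
        forks = 100_000
        encounters = {"hippo": 0, "tiger": 0, "toucan": 0}
        h_segments = 0

        for _ in range(forks):
            if random.random() < 0.5:    # H-path segment: one hippo enclosure
                h_segments += 1
                encounters["hippo"] += 1
            else:                        # T-path segment: tiger then toucan
                encounters["tiger"] += 1
                encounters["toucan"] += 1

        total = sum(encounters.values())
        print(h_segments / forks)                                       # ≈ 1/2 of segments are H-paths
        print({k: round(v / total, 3) for k, v in encounters.items()})  # each ≈ 1/3 of encounters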

    Let us now consider a closer analogy to the Sleeping Beauty problem where there only is one fork (and only one coin toss).

    It's Christmas morning and young Leonard Shelby is seated beside a gleaming Christmas tree, eyes wide with excitement. Two of his favorite aunts, Jane and Sue, have come bearing gifts. Each year, it's a tradition that one aunt gifts a singular, unique and expensive present, while the other aunt bestows upon Leonard two less expensive, yet equally fun presents. Who brings the unique gift is decided by a flip of a fair coin - heads for Jane and tails for Sue.

    This year, all three gifts are packaged identically, and their order of presentation to Leonard is randomized. Due to the condition Leonard has had since his childhood (unlike what happens in the Memento movie), he forgets about the gifts as soon as he unwraps them. This makes every unwrapping experience as exciting as the first one. A small note accompanies each gift, indicating which aunt has brought it. Given the symmetries of this setup, before unwrapping any gift, Leonard's initial, or prior, probabilities are quite straightforward: there's a 1/2 chance the gift is from Jane, a 1/2 chance the coin landed heads, and a 1/3 chance that the gift is the unique one.

    Now, let's consider a scenario where Leonard reads the attached card and learns that the gift is from Jane. What does this tell him about the coin toss? Well, if the coin landed heads, Jane would be the one to give the unique gift. But if it was tails, Jane would have two gifts to give. Knowing this, Leonard updates his belief about the coin toss. Now that he knows the gift is from Jane, the probability of the coin having landed heads, P(H|Jane), is reduced to 1/3.

    This key conclusion is supported by Bayes' theorem, allowing Leonard to update his beliefs in light of the new evidence. Bayes' theorem here shows that the probability of heads, given that the gift is from Jane, is equal to the probability that the gift is from Jane, given heads (1/3), times the prior probability of heads (1/2), divided by the prior probability of the gift being from Jane (1/2). This gives us a revised probability of 1/3 for heads, given that the gift is from Jane.

    In short: P(H|Jane) = P(Jane|H)P(H)/P(Jane) = (1/3)(1/2)/(1/2) = 1/3.
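    For readers who prefer to see the update checked by brute force, here is a small Python sketch of the Christmas scenario as described (heads: Jane brings the unique gift and Sue brings two; tails: the reverse), conditioning on the card reading "Jane":

        import random

        trials = 100_000
        jane_gifts = 0
        jane_gifts_heads = 0

        for _ in range(trials):
            heads = random.random() < 0.5
            # One unique gift from one aunt, two gifts from the other; order is random.
            givers = ["Jane", "Sue", "Sue"] if heads else ["Sue", "Jane", "Jane"]
            opened = random.choice(givers)   # the gift Leonard happens to be unwrapping
            if opened == "Jane":
                jane_gifts += 1
                if heads:
                    jane_gifts_heads += 1

        print(jane_gifts_heads / jane_gifts)   # ≈ 1/3, i.e. P(H|Jane)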

    Okay, so now imagine a similar scenario, but instead of gift-giving aunts and Christmas, it involves a scientific experiment Leonard is participating in. In this experiment, if a coin lands heads, Leonard is interviewed once in a room in the West Wing (let's call it Jane's Wing) and twice in a room in the East Wing (Sue's Wing). If the coin lands tails, the number of interviews in each wing is reversed. As in the Christmas scenario, Leonard is interviewed exactly three times, and his priors before any interview are: P(West Wing) = 1/2, P(H) = 1/2, and P(Unique Interview) = 1/3. (In more detail, his priors are: P(West-HU) = P(East-H1) = P(East-H2) = P(East-TU) = P(West-T1) = P(West-T2) = 1/6, where interviews ("awakenings") rather than gifts are labeled as unique ("U"), first ("1") or second ("2"), and H/T records the coin result.)

    But now, let's say that Leonard finds out he's being interviewed in the West Wing. This new information allows Leonard to update his belief about the coin toss, similar to what happened in the Christmas scenario. Using Bayes' theorem again, Leonard finds that P(H|West Wing) = 1/3. In other words, given that he's in the West Wing (where the structure of the experiment is identical to the original Sleeping Beauty problem), Leonard's credence in the coin having landed heads is 1/3.

    And there you have it! We've demonstrated that when considering the structure of the experiment and the new information that Leonard possesses, the probabilities he assigns to the coin having landed heads or tails should be updated. The key insight here is that the information Leonard gains in the revised scenario—namely that he's being interviewed in the West Wing—doesn't actually provide him with any new facts that he wasn't already aware of in the original Sleeping Beauty problem. Instead, learning he's in the West Wing simply makes explicit to Leonard the fact that he is in a situation which perfectly mirrors the original Sleeping Beauty setup. This underlines the fact that in the original scenario, Sleeping Beauty already has all the information necessary to update her beliefs in line with the Thirder position.
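    The same check can be run on the two-wing experiment (again a sketch under the setup as described: heads means one West Wing interview and two East Wing interviews, tails the reverse), tallying every interview and conditioning on its taking place in the West Wing:

        import random

        runs = 100_000
        west_interviews = 0
        west_heads_interviews = 0

        for _ in range(runs):
            heads = random.random() < 0.5
            west_count = 1 if heads else 2      # West Wing interviews in this run
            west_interviews += west_count
            if heads:
                west_heads_interviews += 1      # the lone heads-run West Wing interview

        print(west_heads_interviews / west_interviews)   # ≈ 1/3, i.e. P(H|West Wing)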
  • Sleeping Beauty Problem
    This is a fallacy:

    If Monday, P(Monday-Heads) = P(Monday-Tails)
    If Tails, P(Monday-Tails) = P(Tuesday-Tails)
    Therefore, P(Monday-Heads) = P(Monday-Tails) = P(Tuesday-Tails)

    The conclusion doesn't follow, because the first two equalities depend on the conditionals being true.

    You can see this by observing that

    P(Monday-Heads) = 1/2
    P(Monday-Tails) = 1/4
    P(Tuesday-Tails) = 1/4

    Also satisfies the two conditional statements, without satisfying the conclusion
    hypericin

    The reason the Double-halfer splits the probability P(Tails) = 1/2 between P(Monday-Tails) and P(Tuesday-Tails) is that they treat them as exclusive outcomes, as if a second coin were tossed to determine whether Sleeping Beauty would be awakened on Monday or Tuesday, but not both.

    Elsewhere, I made my argument more explicit. Let me rehearse it here by transposing it to the zoo variation:

    I must first justify the inferences from:

    1. P(Tiger|First) = P(Hippo|First)
    2. P(Tiger|T-path) = P(Toucan|T-path)

    to

    1b. P(Tiger) = P(Hippo)
    2b. P(Tiger) = P(Toucan)

    The first inference is justified by the fact that placing more enclosures with different animals in them on either path doesn't alter the relative probabilities of the Tiger or Hippo outcomes since I will be seeing those new animals (such as toucans) in addition to seeing the tigers and hippos and not to the exclusion of them.

    The second inference is justified by the fact that generating alternative timelines where I don't see either tigers or toucans doesn't alter their relative probabilities (but rather lowers both in equal proportion). The second inference is actually common ground between Thirders, Halfers and Double-halfers, which is presumably why Michael only challenged the first.

    See also my most recent reply to Michael regarding this example.
  • Sleeping Beauty Problem
    You need to prove this inference:

    P(Hippo|Hippo or Tiger) = P(Tiger|Hippo or Tiger)
    Therefore P(Hippo) = P(Tiger)
    Michael

    This inference follows if we consider what is excluded by the condition "Hippo or Tiger". The case where Leonard is seeing a second enclosure on his path (which always contains toucans) is excluded. Since this second encounter is guaranteed whenever Leonard sees tigers, adding this extra encounter doesn't affect the relative probabilities of P(Hippo) and P(Tiger). However, it does reduce the total probability of him facing either a hippo or a tiger enclosure, i.e., P(Hippo or Tiger).

    This reasoning becomes more intuitive if we consider Leonard's entire visit to the zoo and adopt a frequency approach. I understand that you dislike this method, but bear with me, as it may illuminate a blind spot in your understanding. Leonard can reason that since he traverses, on average, as many H-path segments as T-path segments, he encounters as many tiger enclosures as hippo enclosures, because each H-path segment contains exactly one hippo enclosure and each T-path segment exactly one tiger enclosure. The presence of toucan enclosures on T-path segments doesn't diminish the number of tiger enclosures relative to the hippo enclosures he encounters, but it does increase the total number of T-enclosures (or their average number, if we consider only a single fork) relative to the number of hippo enclosures he encounters.

    Just like in the Sleeping Beauty case, each fork (or coin toss) can be seen as a random generator of T- and H-events, producing twice as many of the former. Creating more than one T-awakening when the coin lands tails (or more than one encounter with an enclosure when a T-path is taken) dilutes the probability of all the individual events (since they are being experienced one at a time) but increases the total probability of the T-events. Lastly, since the additional T-events being generated aren't generated to the exclusion of the first one, but rather in addition to it, they don't alter the relative probabilities of an H-awakening and a T-first-awakening (or of Hippo relative to Tiger).

    I’m only considering one fork as only that is comparable to the Sleeping Beauty problem. What’s true of multiple forks isn’t true of one fork, as evidenced by (1);

    1. The next enclosure is the toucon [sic] enclosure iff I first turned right at the fork (P = 1/2) and then passed the tiger enclosure.

    This isn’t true if there are two forks.

    So what is wrong about my analysis of one fork?
    Michael

    The same reasoning about probabilities and frequencies that applies to multiple forks (or repeated Sleeping Beauty experiments) also holds for a single fork (or a single Sleeping Beauty experiment).

    Consider a brief visit to the zoo where Leonard only takes the first fork, with the intention of taking the exit-shortcut at the next fork. In such cases, half of the time he sees hippos, and the other half he sees both tigers and toucans. Given this, Leonard can reason that since his brief zoo visits put him in front of hippo, tiger, and toucan enclosures with equal probabilities (namely 1/2 each, since the tiger and toucan encounters occur together), the expected frequencies of the three kinds of encounters are equal, and so the probability that the enclosure he is currently facing is a toucan enclosure is 1/3.
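    A minimal sketch of this single-fork visit (assuming, as described, that an H-path yields one hippo encounter and a T-path a tiger encounter followed by a toucan encounter):

        import random

        visits = 100_000
        encounters = []            # one entry per enclosure Leonard faces
        visits_with_toucan = 0

        for _ in range(visits):
            if random.random() < 0.5:            # H-path: one hippo enclosure
                encounters.append("hippo")
            else:                                # T-path: tiger then toucan
                encounters.extend(["tiger", "toucan"])
                visits_with_toucan += 1

        print(visits_with_toucan / visits)                     # ≈ 1/2 of visits include a toucan
        print(encounters.count("toucan") / len(encounters))    # ≈ 1/3 of encounters are toucans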

    The Double-halfer reasoning errs by treating the Tiger and Toucan events as mutually exclusive—as if a second coin toss generates a second, probabilistically independent event—when in fact they both occur on the same timeline whenever either of them occurs.

    Your claim that "The next enclosure is the toucan enclosure iff I first turned right at the fork (P = 1/2) and then passed the tiger enclosure," is an assumption that can't be put forward without begging the question against the Thirder. You need to substantiate, rather than presuppose, that when you're nearing an enclosure, there's a 1/2 chance the path you're on is a T-path.

    We can, however, agree on the following: P(Hippo|H-path) = 1, and P(Toucan|T-path) = P(Tiger|T-path) = 1/2. The Thirder argument, though, posits that whenever Leonard faces an enclosure, P(H-path) = 1/3, and consequently,

    P(Hippo) = P(Hippo|H-path)P(H-path)+P(Hippo|T-path)P(T-path) = 1/3.

    Likewise,

    P(Toucan) = P(Toucan|T-path)P(T-path)+P(Toucan|H-path)P(H-path) = (1/2)(2/3) = 1/3.

    The justification for P(H-path) = P(Hippo) = 1/3 was provided above. The creation of additional T-encounters when a T-path is taken dilutes the probability of each individual T-encounter, but raises the cumulative probability of T-encounters. This doesn't change the relative probabilities of Hippo and the first Tiger encounter, since the extra encounters are not produced to the exclusion of the first one, but in addition to it. It's this crucial point that differentiates the Thirder and Double-halfer perspectives.
  • Sleeping Beauty Problem
    The next enclosure is the toucon enclosure iff I first turned right at the fork (P = 1/2) and then passed the tiger enclosure.

    2. My credence that the next enclosure is the toucon enclosure is equal to the probability that the first event happened multiplied by the probability that the second (dependent) event happened.
    Michael

    (I had assumed that the H-path was the left path at half of the forks, but this is inconsequential since Leonard always forgets which path he took.)

    Your reasoning is not entirely misguided. Let's consider a typical path Leonard might navigate through the zoo, guided by four pre-tossed coins:

    Tiger, Toucan -- Hippo -- Hippo -- Tiger, Toucan -- Exit

    On this path, Leonard will encounter six enclosures within four path segments. The frequencies of encounters with each type of enclosure are typically equal, regardless of how the enclosures are divided among the different path segments.

    First, let's revisit the Thirder argument regarding any enclosure Leonard might approach:

    1. Conditionally on its being a first encounter on a path segment, P(Tiger) = P(Hippo)
    2. Conditionally on Leonard being on a T-path segment, P(Tiger) = P(Toucan)
    3. The three possible outcomes are exhaustive and mutually exclusive
    4. Therefore, P(Tiger) = P(Hippo) = P(Toucan) = 1/3

    This makes sense because, on a typical journey through the zoo, like the one illustrated above, Leonard can expect to encounter each type of enclosure with the same frequency, variance notwithstanding.

    The Double-halfer analysis, on the other hand, posits Leonard on either an H-path or a T-path segment with equal probability. The justification for this comes from assuming that Leonard considers his current path segment as equally likely to be an H-path or a T-path from the time he took the previous fork. This reasoning is relevant and accurate until Leonard approaches an enclosure and has the opportunity to infer which path segment it is located on. If Leonard were to ignore the type of enclosure and only guess the path he's on, confirming his guess only when he reaches the next fork (or zoo exit), regardless of the number of times he might make this guess along the path, then he would find that he was on an H-path half of the time, supporting the Halfer thesis. (This, however, wouldn't vindicate the Double-halfer thesis since encountering tigers does not exclude, but rather guarantees, his also encountering toucans on the same path, and vice versa.)

    However, if we acknowledge that each encounter with an enclosure is an occasion for Leonard to be located on an H-path, and the layout of the zoo (and path navigation process) ensures that these occasions occur 1/3 of the time, we realize that, from this new episodic perspective, the probability that Leonard is on an H-path when he approaches an enclosure isn't independent of the number of such encounters. This is because the likelihood of the encountered enclosure being an H-enclosure (and thus of Leonard being on an H-path) isn't solely determined by the process that placed Leonard on this path (the fork and coin toss) but also by the proportion of occasions Leonard has to be on such a path.
  • Sleeping Beauty Problem
    This is not true. There are three possible awakenings, Monday-Heads, Tuesday-Heads, Tuesday-Tails, and SB's job on awakening is determine the probability that she is experiencing each of these. The coin has a 50% chance of landing heads, and if it does, the awakening will be on Monday 100% of the time. Therefore, P(Monday-Heads) = 50%. The coin has a 50% chance of landing tails, and if it does, the awakening will be on Monday 50% of the time, and Tuesday 50% of the time. Therefore, P(Tuesday-Heads) = P(Tuesday-Tails) = 25%. If this is true, and I don't see how it can be reasonably argued against, on each awakening the coin is equally likely to be heads and tails.
    hypericin

    Are you evaluating the probabilities of the three possible outcomes occurring from the point of view of an external observer or from Sleeping Beauty's own epistemic perspective whenever she awakens?

    From an external observer's perspective, each awakening—H-awakening, T-first-awakening, and T-second-awakening—has a 1/2 chance of occurring. Note that the sum of these probabilities exceeds 1 because the two T-awakenings aren't mutually exclusive; rather, they occur concurrently within the same timeline.

    If you are evaluating the probabilities from Sleeping Beauty's own epistemic perspective when she awakens, then a careful analysis of the situation shows that the probability of each is 1/3.

    Consider first the two possible outcomes conditional on today being Monday. Since Sleeping Beauty is always awakened on Monday regardless of the coin toss result, P(Monday-Heads) = P(Monday-Tails). Consider next the two possible outcomes conditional on the coin having landed tails. Since in that case Sleeping Beauty is awakened once on Monday and once on Tuesday, P(Monday-Tails) = P(Tuesday-Tails), which is something that Thirders, Halfers and Double-halfers all agree on. We therefore have P(Monday-Heads) = P(Monday-Tails) = P(Tuesday-Tails). Lastly, since Sleeping Beauty isn't inquiring about the probability that any of those three outcomes will occur at least once during her current experimental run, but rather about the probability that her current awakening episode is the realization of one of those three outcomes, the three possibilities are exclusive and exhaustive, and their probabilities must therefore sum to 1. All three are therefore 1/3, and P(Tails) = P(Monday-Tails) + P(Tuesday-Tails) = 2/3.
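    If per-awakening credence is read, as I'm suggesting, as the long-run frequency over awakening episodes, the numbers are easy to verify (a Python sketch of the standard protocol):

        import random

        runs = 100_000
        counts = {"Monday-Heads": 0, "Monday-Tails": 0, "Tuesday-Tails": 0}

        for _ in range(runs):
            if random.random() < 0.5:      # heads: a single Monday awakening
                counts["Monday-Heads"] += 1
            else:                          # tails: Monday and Tuesday awakenings in the same run
                counts["Monday-Tails"] += 1
                counts["Tuesday-Tails"] += 1

        total = sum(counts.values())
        print({k: round(v / total, 3) for k, v in counts.items()})          # each ≈ 1/3
        print((counts["Monday-Tails"] + counts["Tuesday-Tails"]) / total)   # P(Tails) ≈ 2/3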

    This argument is illustrated in a more picturesque way in my variation: Leonard Shelby Visits the Sleeping Beauty Zoo.
  • Sleeping Beauty Problem
    @Michael

    Consider Leonard Shelby's journey through the "Sleeping Beauty Zoo". In this zoo, each fork in the path presents two options - one path (H-path) leads to a hippo enclosure, while the other (T-path) leads to a tiger enclosure followed by a toucan enclosure. Each path ends with a new similar fork until the zoo exit is reached. Due to Leonard's condition of anterograde amnesia, he forgets his previous encounter (with an enclosure or fork) whenever he reaches a new enclosure. Despite his memory loss, Leonard knows that, due to the zoo's peculiar layout, he encounters each of the three animals with equal frequency, on average, during his visits.

    Now, let's look at a particular moment of Leonard's visit. As he walks, before reaching a new enclosure, he might reason this way: "Since each fork in the path gives an equal chance of leading to a T-path or an H-path, there is a 50% chance that the next enclosure I'll see will have a hippo." Thus, when he approaches an enclosure, he might conclude there is a 25% chance of it being a tiger enclosure, and a 25% chance of it being a toucan enclosure.

    Is this reasoning accurate? Not quite, because it neglects a crucial shift in perspective from "timeline" to "episodic".

    To clarify, let's imagine Leonard uses a series of coin tosses to predetermine his path at each fork and records the sequence in his notebook. On average, his predetermined path will lead him to an equal number of hippo, tiger, and toucan enclosures. So, whenever he approaches an enclosure during his visit (even if he exits the zoo after only one path segment), he can reasonably assume a 1/3 chance that it is a hippo, tiger, or toucan enclosure.

    One might ask: Isn't there a contradiction between Leonard's initial assumption of a 50% chance of being on an H-path and his subsequent conclusion of a 1/3 chance of seeing a hippo in the enclosure?

    The answer is no. Although Leonard travels an equal number of H-path and T-path segments, he encounters twice as many T-enclosures. Because of the way the T-enclosures map (two to one) to the T-path segments, Leonard encounters hippos and tigers with equal frequency after taking any fork (which makes P(hippo) = P(tiger)), and he encounters tigers and toucans with equal frequency while traveling any T-path (which makes P(tiger) = P(toucan)). Since all three possibilities are exclusive from the episodic perspective, he expects all three animals to appear with equal probability at any enclosure, even though T-paths exclude H-paths, and tiger enclosures and toucan enclosures occur concurrently.

    Keeping this in mind, let me now address your "flowchart" argument:

    1. Sleeping Beauty is given amnesia
    2. She is asked her credence that a coin has been tossed
    3. A coin is tossed
    4. If the coin lands tails then:
    4A. She is given amnesia
    4B. She is asked her credence that a coin has been tossed

    Thirder reasoning is that because step 2 is twice as likely to occur as step 4B then I am twice as likely to be in step 2 as step 4B.

    Halfer reasoning is that because step 2 is twice as likely to occur as step 4B and that because if 4B will occur then I am equally likely to be in step 2 as step 4B then I am three times as likely to be in step 2 as step 4B.
    Michael

    From the timeline perspective, the following scenarios are equiprobable:

    1. SB asked P(second-awakening), coin lands heads, exit
    2. SB asked P(second-awakening), coin lands tails, SB asked P(second-awakening), exit

    where P(second-awakening) is Sleeping Beauty's credence, on the occasion of an awakening, that it is her second awakening within the current experimental run.

    From the episodic perspective, Sleeping Beauty knows that, conditionally on her present awakening being the first, it is equally probable that it is an H-awakening (and that the coin will land heads) or a T-first-awakening (and that the coin will land tails). She also knows that in the event the coin will land (or has landed) tails, it is equiprobable that she is experiencing a T-first-awakening or a T-second-awakening. Since the three possible outcomes are exclusive from her episodic perspective, their probabilities must sum to 1, and since P(H-awakening) = P(T-first-awakening) and P(T-first-awakening) = P(T-second-awakening), all three possible outcomes must have probability 1/3.

    The Halfer error lies in incorrectly treating the T-first-awakening and T-second-awakening as equally probable and mutually exclusive alternatives within the T-timeline. This leads to an improper division of the T-timeline's probability between these two events. In reality, the T-first-awakening and T-second-awakening are part of the same timeline and their probabilities should not be divided, but understood as part of the cumulative likelihood of being on the T-timeline. This is similar to how the presence of as many tiger enclosures, and as many toucan enclosures, along Leonard's overall (or average) path as there are hippo enclosures raises, on each occasion he approaches an enclosure, the probability that it is a T-enclosure to 2/3 (i.e. P(T-enclosure) = P(tiger or toucan)/P(tiger or toucan or hippo) = 2/3).
  • Sleeping Beauty Problem
    These mean two different things:

    1. My credence favours tails awakenings
    2. There are more tails awakenings than heads awakenings

    I don’t think we can move forward if you insist that they mean the same thing.
    Michael

    It's precisely because they mean different things that I've provided detailed arguments for deducing 1 from 2 (alongside other premises). However, the truth of 2 certainly is relevant to the deduction of 1. Nobody would be a Thirder in a scenario where coins landing tails generated as many awakenings as coins landing heads.
  • Sleeping Beauty Problem
    My current interview being the first or the second T-awakening are exclusive events.
    Michael

    They indeed are. As you get involved in the experiment and your perspective shifts from the timeline perspective (before the experiment begins) to the episodic one (whenever you are awakened), relative to the current interview, the two possible outcomes T-first and T-second now are exclusive. However, this doesn't mean that their probabilities shift from 1/2 each (as they were in the timeline perspective) to 1/4 each (as they would if a second coin were tossed to choose only one of them to be actualized). Instead, finding yourself in a T-awakening episode doesn't exclude the other T-awakening from the current timeline; it merely locates it in the other, concurrent episode of that timeline.

    The difference between the Thirder and Double-halfer reasoning can be illustrated this way:

    If a coin lands heads, you are allowed to pick one ball from an unlabeled "H-bag" containing one blue ball. If the coin lands tails, you are allowed to pick two balls, one at a time, from a "T-bag" containing two red balls. Therefore, there are three possible ball picking episodes: B, R1, and R2. We assume, as usual, that when the opportunity arises to pick a ball from a bag, you forget if it's your first or second opportunity.

    A Double-halfer would reason that since you were equally likely to have been presented with an H-bag or a T-bag and, in the latter case, you are equally likely to be experiencing R1 or R2, P(R1) = P(R2) = 1/4.

    In contrast, a Thirder would point out that picking a red ball doesn't exclude the other one from the current timeline but rather guarantees it. The implication is that the additional opportunity to pick a second red ball from the T-bag does not reduce P(R1) relative to P(B) but rather increases P(R) = P(R1 or R2) by providing a second opportunity. This is especially apparent from the timeline perspective, where P(B) = P(R1) = 1/2 regardless of how many more red balls there might be in the T-bag for you to pick consecutively. The equiprobability of P(R1) and P(B) doesn't change when we shift from the timeline perspective to the episodic perspective, because on each picking occasion, although you can't know whether it's the first one, you do know that P(R1|first) = P(B|first). In other words, if you were to ask the experimenter presenting you with the bag whether this is your first pick, and receive a truthful positive answer, you would know that P(B) = P(R1) = 1/2. The next step in the argument is the straightforward inference from P(R1|first) = P(B|first) to P(R1) = P(B). Given that, on all accounts, P(R1) = P(R2), and that all three outcomes are exclusive from the episodic perspective, it follows that they all have a probability of 1/3.
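    The two ways of counting can be checked side by side (a sketch of the bag setup as described: heads gives one pick from the H-bag, tails two picks from the T-bag):

        import random

        runs = 100_000
        episodes = {"B": 0, "R1": 0, "R2": 0}

        for _ in range(runs):
            if random.random() < 0.5:    # heads: a single blue pick
                episodes["B"] += 1
            else:                        # tails: both red picks occur in the same run
                episodes["R1"] += 1
                episodes["R2"] += 1

        total = sum(episodes.values())
        print(episodes["B"] / runs)                                   # ≈ 1/2 of runs (timeline view)
        print({k: round(v / total, 3) for k, v in episodes.items()})  # each ≈ 1/3 of episodes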

    But I brought up something like this here:
    Michael

    Indeed, I intended to address this separately but haven't yet gotten round to doing it. This will be my next order of business.
  • Sleeping Beauty Problem
    There is only one meaning I'm using: "the degree to which I believe that the proposition is true".

    If I am certain that A is true if and only if B is true then the degree to which I believe that A is true is equal to the degree to which I believe that B is true. This is true for all As and Bs.
    Michael

    This overlooks the issue that your credence can change over time when your epistemic perspective changes. If your separate uses of the expression P(H) don't take into account the epistemic perspective within which they're intended to be evaluated, you risk equivocation.

    Given the above, as I said before, these cannot all be true:

    1. My current interview is a heads interview iff I have been assigned one heads interview
    2. The fraction of interviews which are heads interviews is 1/3
    3. The fraction of experiments which have one heads interview is 1/2
    4. My credence that my current interview is a heads interview is equal to the fraction of interviews which are heads interviews
    5. My credence that I have been assigned one heads interview is equal to the fraction of experiments which have one heads interview

    You seem to assert that 4 and 5 are true by definition, but they're not. Given the definition of the term "credence", and given the truth of 1, 2, and 3, it must be that one or both of 4 and 5 are false.
    Michael

    4 and 5 aren't true by definition; rather, they are definitions. Definitions specify how terms are to be used in a given context, and it's the subsequent argumentation that can be evaluated as true or false in light of those definitions. In this case, it appears that you intend for "My credence that my current interview is a heads interview" and "My credence that I have been assigned one heads interview" to both represent Sleeping Beauty's episodic perspective, yet the defining clause in 5 would be more appropriate for a timeline perspective. This potentially equivocal definition could lead to confusion. If, however, 5 is meant to convey a timeline perspective, then 4 and 5 are both reasonable, complementary definitions addressing distinct questions.

    So simply asserting that "the fraction of interviews which are heads interviews is 1/3, therefore my credence that my current interview is a heads interview" is a non sequitur.

    Granted, such an argument would be a bit quick and likely an enthymeme. However, I didn't present it in that way. I provided more explicit steps and premises in my previous post, explaining how attending to the distinction between the two epistemic perspectives (timeline and episodic) allows us to conclude that, in the episodic perspective, P(H)=P(T-first), P(T-first)=P(T-second), and since all three events are mutually exclusive in this perspective, they must each have a probability of 1/3.
  • Sleeping Beauty Problem
    I know I referred you to one of my previous posts, but I’ll respond to this directly too.

    We’re discussing credence.

    If I am certain that A is true if and only if B is true and if I am pretty sure that A is true then ipso facto I am pretty sure that B is true.
    Michael

    When I previously addressed this inference of yours, I conceded that it is generally valid, but I also pointed out that it involved a possible conflation of two meanings of the predicate P(). The problem I identified wasn't with the validity of the inference (within the context of probability calculus), but rather with the conflation that could occur when the expression P(A) appears twice in your demonstration.

    What makes you "pretty sure" that A is true is the expectation that A is much more likely to occur than not-A. As such, this probabilistic judgment is implicitly comparative. It is therefore dependent on how you individuate and count not only A events but also not-A events. As I've argued elsewhere, a shift in epistemic perspective can alter the way you count not-Heads events (i.e., Tails events), transforming them from non-exclusive to exclusive. For example, when you move from considering possible world timelines to specific awakening episodes, what were concurrent alternatives (not-H events) become exclusive possibilities. This change in perspective modifies the content of your comparative judgment "H is much more likely to occur than not-H," and consequently affects your credence.
  • Sleeping Beauty Problem
    I accept 6 and reject 5. My credence that my current interview is a heads interview isn't equal to the fraction of interviews which are heads interviews.

    My argument is:

    P1. My credence is the degree to which I believe that a proposition is true
    P2. My current interview is a heads interview iff I have been assigned one heads interview
    C1. Therefore my credence that my current interview is a heads interview is equal to my credence that I have been assigned one heads interview (from P1 and P2)
    P3. If I have been assigned at random by a fair coin toss either one heads interview or two tails interviews then the probability that I have been assigned one heads interview is 1/2
    P4. I have been assigned at random by a fair coin toss either one heads interview or two tails interviews
    C2. Therefore the probability that I have been assigned one heads interview is 1/2
    (from P3 and P4)
    P5. My credence that I have been assigned one heads interview is equal to the probability that I have been assigned one heads interview
    C3. Therefore my credence that I have been assigned one heads interview is 1/2
    (from C2 and P5)
    C4. Therefore my credence that my current interview is a heads interview is 1/2
    (from C1 and C3)
    Michael

    The issue arises from a conflation of two distinct ways of individuating events and counting probabilities. We can see this more clearly if we distinguish between the 'timeline perspective' and the 'episodic perspective'. Your propositions P1 through P4 and C1 through C4 above frequently shift between those two perspectives, which vitiates the validity of some inferences.

    Let's consider the shift from the 'timeline perspective' to the 'episodic perspective'. In the timeline perspective, there are two possibilities ("possible worlds"): an H-timeline and a T-timeline, each with an equal chance of 1/2. The T-timeline, however, comprises two distinct awakening episodes ("centered possible worlds"). This does not create more exclusive events sharing the probability space; rather, it articulates the unique structure of the T-timeline.

    Therefore, when shifting to the episodic perspective, it would be a mistake to divide the probability of the T-timeline (1/2) between the two T-awakenings, suggesting each has a probability of 1/4. This line of thinking presumes these awakenings to be exclusive events within the T-timeline (as they would be if a second coin were tossed to determine which one of the two would occur). However, that's not the case. In the T-timeline, the two awakenings are guaranteed to occur concurrently if they occur at all; the only unknown is which of them Sleeping Beauty currently finds herself in.

    The shift from the timeline perspective to the episodic perspective is not a straightforward Bayesian update on new information. Instead, it's a shift in how we count the alternatives. This shift happens automatically when Sleeping Beauty awakes, because she can't tell apart the two T-episodes and what were concurrent possibilities become exclusive possibilities.

    Once we've dealt with the faulty reasoning that made it appear as though the probabilities of the T-first-awakening and T-second-awakening were lowered from 1/2 to 1/4 when shifting to the episodic perspective, we can see how the equiprobability of the H-awakening and the T-first-awakening must also be retained when we shift perspectives. When Sleeping Beauty wakes up and doesn't know whether it's Monday or Tuesday, this doesn't change the equiprobability of an H-awakening and a T-first-awakening that she would express her credence in were she to know that it's Monday. Instead, her ignorance about the day of the week introduces an additional possibility—that of her being in a T-second-awakening—which in turn increases the total probability of her being in a T-awakening.

    So, since the shift to the episodic perspective preserves both the equiprobabilities P(T-first-awakening) = P(T-second awakening) and P(T-first-awakening) = P(H-awakening), and all three outcomes are exclusive from this perspective, the probabilities must sum up to 1 and therefore must shift from 1/2, 1/2, and 1/2 to 1/3, 1/3 and 1/3.
  • Sleeping Beauty Problem
    Then before the experiment starts the thirder will say "since I now know that I will soon rationally infer that the coin will have landed heads with probability 1/3 (on the basis of no new information), I can already infer this right now, before the coin is tossed."

    But I think this is wrong.
    Michael

    You are correct that this would be wrong. The entire aim of my variation (and the Leonard Shelby variation before it) was to highlight that there is indeed some new information available to Sleeping Beauty upon awakening, and that this information can be retained by her on Wednesday through a note. This information wasn't available to her before the experiment began, and isn't available to her on Wednesday when she doesn't receive a note.

    The objective of my discussion was also to highlight the unique nature of this information. It's not the sort of information that indicates a higher proportion of tails-worlds among the possible worlds consistent with it; indeed, the proportion of possible worlds remains exactly the same. It's rather information that is generated by placing Sleeping Beauty in a higher proportion of centered possible worlds (her distinct awakening episodes within a single experimental run) within the T-run timelines.

    This type of information is the same as the information that is transmitted from her to her own future self (on Wednesday) when she awakens, by selecting twice as many future recipients of the note in the long run when it is a T-note. This is akin to the information you gained that someone you met was a Tunisian with a probability of 2/3, not because there are twice as many Tunisians as there are Italians (there were actually as many of each in the city), but because Tunisians venture outside of their homes twice as often, doubling your opportunities of meeting them. Likewise, the setup of the Sleeping Beauty experiment makes the coins that land tails twice as likely to "meet" her on the occasion of her awakenings.

    They're not nearly identical. On Wednesday she knows that she only had the opportunity once. When she wrote the note she didn't know that it was her only opportunity. So contrary to the above, there is new information on Wednesday.
    Michael

    I would argue that the situations are nearly identical since this new knowledge is almost inconsequential. Suppose n = 100. On Wednesday, Sleeping Beauty knows that she only had a single opportunity and can thus rule out the minuscule 0.01% chance that she would have had two opportunities. The probability that the note she obtained was an H-note (produced on the occasion of an H-awakening) therefore is P(H-note|single)/(P(H-note|single)+P(T-note|single)) where

    P(H-note|single) = P(single|H-note)P(H-note)/P(single) = (1)(0.5%)/(0.0149) ≈ 0.3356
    (since P(single) = 0.5%+(1/2)*0.0198 = 0.0149, and we had calculated the 0.0198 before)

    and

    P(T-note|single) = P(single|T-note)P(T-note)/P(single) = (99%)(1%)/(0.0149) ≈ 0.6644

    As expected, P(H-note|single) and P(T-note|single) sum up to 1, and the probability that the note Sleeping Beauty obtained on Wednesday was an H-note rises to 0.3356. This figure is slightly larger than 1/3 only because the rare cases of T-runs where two opportunities to write a note were present are discounted when she only receives one.
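    The arithmetic can be laid out explicitly (a short Python check of the figures above, with n = 100, i.e. a 1/100 chance of a writing opportunity at each awakening):

        p = 1 / 100

        p_h_note = 0.5 * p                  # heads run yields one note: 0.005
        p_t_single = 0.5 * 2 * p * (1 - p)  # tails run yields exactly one note: 0.0099
        p_t_double = 0.5 * p * p            # tails run yields two notes: 0.00005 (ruled out by a single note)
        p_single = p_h_note + p_t_single    # exactly one note received on Wednesday: 0.0149

        print(p_h_note / p_single)    # ≈ 0.3356 = P(H-note|single)
        print(p_t_single / p_single)  # ≈ 0.6644 = P(T-note|single)
        print(p_t_double)             # the discounted double-note cases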

    I address this here.
    Michael

    Thank you! I'll respond to this separately.
  • Sleeping Beauty Problem
    And as it grows smaller, P(H) tends to 1. I don't understand the relevance of any of these three answers.

    Why is the correct answer given by any of these situations, let alone by the situation where n is arbitrarily large?
    Michael

    The issue with making n small is that it allows Sleeping Beauty on Wednesday to substantially revise her credence P(H) regarding the origin of the single note (raising it above 1/3). This is because (1) she did not receive two notes and (2) in a significant proportion of cases where a T-run occurs, two such notes are generated instead of one. This makes her epistemic situation dissimilar to her situation when she experiences a particular awakening episode. During such episodes, she can never know that there are two of them due to her amnesia. Making n large makes Sleeping Beauty's epistemic situation on Wednesday, when she receives a note, nearly identical to her situation when she wrote the note, since the Bayesian updating she can perform on the basis of the note being unique is negligible.

    When evaluating P(H) on Wednesday, assuming n >> 1, the question Sleeping Beauty is asking is:

    "What are the odds that this single note that I received was written by me during an H-awakening?"

    The answer is approximately 1/3. However, the note could only have been written during an H-awakening if the coin landed on H. Therefore, P(H) is 1/3.

    The second step in the reasoning is to consider that when Sleeping Beauty awakens and finds an opportunity to write a note, she knows that when she reads it on Wednesday (except on the very rare occasion when she finds two notes) she will be able to rationally infer that the odds that the note was written during an H-awakening are 1/3. Since it is now certain that she will read the note on Wednesday and will possess no more information regarding the circumstances of production of the current note than she currently has, she can already infer that this note is being written by her during an H-awakening with 1/3 odds.

    A streamlined version of Sleeping Beauty's inference is: "Since I now know that I will soon rationally infer that this note was written during an H-awakening with probability 1/3 (on the basis of no new information), I can already infer this right now." (Here, I am making use of van Fraassen's reflection principle.)

    The last step in the argument requires reflecting on the case where Sleeping Beauty doesn't find an opportunity to write a note. In that case, when she awakens, she can reason counterfactually:

    "If I had had an opportunity to write a note to myself, I would then have inferred on Wednesday that P(H) (regarding the current awakening episode that is the source of the note) is 1/3, and hence known now that P(H) is 1/3. But the odds that I am currently experiencing an H-awakening are probabilistically independent of my finding an opportunity to write a note. Therefore, they are 1/3 and the only reason why I will be unable to know this when I awaken on Wednesday (and rather infer that P(H) = 1/2) is because I will have lost the special causal connection that I currently have to my present awakening episode.

    Note that when Sleeping Beauty doesn't receive a note on Wednesday, her credence P(H) = 1/2 doesn't merely differ in value from her credence P(H) = 1/3 during awakenings; the predicates P() also have different meanings. During awakenings, P(H) refers to the odds that her current awakening episode is occurring during a coin toss that landed heads. On Wednesday, P(H) refers to the odds that the experimental run she is exiting from was an H-run. While in each case the biconditionals "I am now in an H-awakening iff I am now (and will be) in an H-run" or (on Wednesday) "I was in an H-awakening iff I am now in an H-run" hold, the probabilities don't necessarily match due to the two-to-one mapping between T-awakenings and T-runs.

    To emphasize this last point, suppose Sleeping Beauty writes a note on each awakening occasion and the experiment is run many times. She ends up with a collection of identical notes, approximately two-thirds of which were written during T-awakenings. She now has lost track of the pairing between the notes. Two things can now be true at the same time:

    (1) Since 1/3 of those notes are H-notes, Sleeping Beauty was right during the occasions where she wrote them to believe P(H-note) = 1/3 and hence that P(H) = 1/3.

    (2) Since at the end of each experimental run Sleeping Beauty received either one H-note or two T-notes with equal probabilities, the probability that the note(s) she received on any given Wednesday were T-notes was 1/2, and likewise for H-notes. In other words, in advance of counting how many notes there were on any given Wednesday, Sleeping Beauty could point at the note(s) and say that they were equally likely to be H-notes or T-notes.

    This analysis again highlights how Halfers and Thirders can both be right at the same time but talk past each other when they fail to attend precisely to their respective definitions of P(H), and especially how H and T outcomes are to be individuated and counted.
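    A quick tally shows how (1) and (2) can both hold (a sketch in which Sleeping Beauty writes a note at every awakening, as stipulated above):

        import random

        runs = 100_000
        h_notes = 0
        t_notes = 0
        wednesdays_with_h_notes = 0

        for _ in range(runs):
            if random.random() < 0.5:      # heads run: one H-note
                h_notes += 1
                wednesdays_with_h_notes += 1
            else:                          # tails run: two T-notes, received together
                t_notes += 2

        print(h_notes / (h_notes + t_notes))     # ≈ 1/3 of all notes are H-notes, as in (1)
        print(wednesdays_with_h_notes / runs)    # ≈ 1/2 of Wednesdays yield H-note(s), as in (2)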

    Here, I've asked GPT-4 to summarise the argument and highlight the main points:
    Summary

    1. Expanding the Protocol: You suggest a modification to the Sleeping Beauty experiment, where on rare occasions she gets the opportunity to write a note to her future self when she awakens. Importantly, the likelihood of being able to write a note does not depend on whether it is a T-awakening or an H-awakening.

    2. Receiving the Note: When she awakens on Wednesday and receives one note, the probability of the note having been written during a T-awakening is roughly 2/3, assuming the opportunities to write are rare. This is because there are twice as many opportunities to write a note during a T-run compared to an H-run.

    3. Retaining Credence: This enables Sleeping Beauty to maintain her updated credence of P(H) = 1/3 from her awakening into Wednesday. She can rationally infer from receiving the note that it's twice as likely it was written during a T-awakening.

    4. Understanding the Information: The fact that Sleeping Beauty can retain this information when reading the note on Wednesday suggests that the information available to her upon receiving the note is essentially the same information she had access to upon each awakening. Specifically, it's the information that two times out of three, an awakening corresponds to the coin landing Tails.

    5. Counterfactual Reasoning: When Sleeping Beauty doesn't get the opportunity to write a note, she can use counterfactual reasoning. She knows that had she been able to write a note, she would have concluded on Wednesday that P(H) was 1/3, so she can infer the same during the awakening.

    6. Different Meanings of P(H): When Sleeping Beauty doesn't receive a note on Wednesday, her credence P(H) = 1/2 isn't merely a different value than P(H) = 1/3 during awakenings, the predicates P() also have different meanings. During awakenings, P(H) refers to the odds of her current awakening being a H-awakening. On Wednesday, P(H) refers to the odds that the experimental run she is exiting from was an H-run.

    7. Individuation of H and T Outcomes: The argument concludes by stressing the importance of how H and T outcomes are to be individuated and counted. Misunderstandings or disagreements about this can lead to Halfers and Thirders talking past each other.
  • Sleeping Beauty Problem
    Though I don't see why I should accept your claim that if "she receives a single note on Wednesday, Sleeping Beauty comes to be causally and epistemically related to the coin result in the exact same manner as she was when she originally wrote the note."
    Michael

    This is because, when the experimental protocol is expanded to enable Sleeping Beauty to hand notes to her future self in such a manner, the episodes of her receiving a note on Wednesday are produced twice as often in the long run when the coin has landed tails. On the occasion where she awakens and is offered the opportunity to write a note, Sleeping Beauty therefore is enabled to reason thus:

    "When I will read again the note that I am currently writing, on Wednesday, I will be able to rationally infer that it is twice as likely that this note was written by me on the occasion of a T-awakening. But it can only be the case that it will have been written on the occasion of a T-awakening if I am now experiencing a T-awakening. Therefore, it is now twice as likely that I am experiencing a T-awakening."

    Notice also that, since the probability that Sleeping Beauty would be offered an opportunity at any given awakening to write a note is the same regardless of whether it is an H-awakening or a T-awakening, being offered such an opportunity gives her no independent ground to update her credence.

    Note that, as you said yourself, if the probability of her writing a note is 1/2 then if she finds exactly one note then her credence in Heads is 1/2.

    That was only in the specific case where n = 2. As n grows larger, P(H) tends towards 1/3.
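
    To make the dependence on n explicit: conditional on finding exactly one note, P(H) works out to (1/n) / (1/n + 2·(1/n)·(1 − 1/n)) = 1/(3 − 2/n), which is 1/2 when n = 2 and tends to 1/3 as n grows. A minimal Python sketch of this calculation (assuming a fair coin and a 1/n writing chance at each awakening; the function name is mine):

        from fractions import Fraction

        def p_heads_given_one_note(n):
            # Fair coin; a note is written with probability 1/n at each awakening.
            p = Fraction(1, n)
            one_note_if_heads = p                # the single H-awakening produced the note
            one_note_if_tails = 2 * p * (1 - p)  # exactly one of the two T-awakenings did
            return one_note_if_heads / (one_note_if_heads + one_note_if_tails)

        print(p_heads_given_one_note(2))      # 1/2
        print(p_heads_given_one_note(100))    # 50/149, about 0.336
        print(p_heads_given_one_note(10**6))  # approaches 1/3 as n grows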
  • Sleeping Beauty Problem
    If tails then:

    The probability of her writing on Monday is 1/100
    The probability of her writing on Tuesday is 1/100
    Michael

    Yes.

    The probability of her writing on both Monday and Tuesday is 1/100 * 1/100 = 1/10,000
    The probability of her writing on neither Monday or Tuesday is 1 - (1/100 * 1/100) = 9,999/10,000

    The latter is actually 99/100*99/100 = 0.9801 ("both" and "neither" aren't complementary cases.)

    The probability of her writing on Monday or Tuesday but not both is (1/100 + 1/100) / 2 = 1/100

    It's actually 1 - "both" - "neither" = 1 - 0.0001 - 0.9801 = 0.0198 ≈ 2%, which is roughly twice the probability of writing a note in the H case.
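
    The corrected figures can be checked in a few lines (illustrative only; the variable names are mine):

        p = 1 / 100                         # chance of writing a note at any given awakening
        both = p * p                        # 0.0001
        neither = (1 - p) * (1 - p)         # 0.9801
        exactly_one = 1 - both - neither    # 0.0198, i.e. roughly 2%
        print(both, neither, exactly_one)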
  • Sleeping Beauty Problem
    If heads and n = 100 then the probability of writing a note is 1/100

    If tails and n = 100 then the probability of writing exactly one note is 1/100.

    So if she finds exactly one note her credence in heads is 1/2.
    Michael

    Her probability of writing a note is 1/100 on each occasion she awakens. Since she awakens twice when the coin lands tails, her probability of writing a note is 2/100 when a T-experimental run occurs (discounting the 1/10000 cases where she writes two notes).
  • Sleeping Beauty Problem
    [...]But notice that as the probability of writing a note each time approaches 1 the "greater likelihood" of it having been tails gets smaller, approaching 1.[...]Michael

    Yes, indeed, which is why I hedged my specification by stipulating that the occasions to write a note were rare.

    If Sleeping Beauty would receive two notes on Wednesday, she'd be able to infer that there were two awakenings and hence that the coin didn't land heads. On the earlier occasions when she was writing those notes, by contrast, she wasn't able to know this. When the probability that she would be able to write a note on each awakening occasion is exactly 1/2, the overlapping cases are just numerous enough to enable her to infer on Wednesday, when she receives one single note, that P(H) = 1/2.

    As the occasions to write a note become rarer (e.g. 1/n with n >> 1), the frequency of those overlapping cases becomes negligible (single notes are received on the order of n times as often as double notes) and Sleeping Beauty's epistemic state (i.e. the value of her credence) approaches asymptotically her epistemic state as she was writing the note. And, as I had suggested in my previous post, this is because when she receives a single note on Wednesday, Sleeping Beauty comes to be causally and epistemically related to the coin result in the exact same manner as she was when she originally wrote the note.

    I'll address the other cases and analyses you have presented separately.
  • Sleeping Beauty Problem
    Nothing is ruled out when woken or asked her credence that wasn’t already ruled out before the experiment started.

    Even Elga understood this:
    Michael

    I believe Elga was mistaken about this. There actually is some information that becomes available to Sleeping Beauty when she awakens, though the nature of this information is rather peculiar. I discussed the nature of this information with GPT-4 in this earlier post.

    What informs Sleeping Beauty about the likelihood that the coin landed (or will land) tails, allowing her to update her credence from 1/2 to 2/3, is the fact that she awakens and that, whenever she awakens, the coin landed (or will land) tails two times out of three. After the experiment is over, and she is awoken on Wednesday (assuming she always receives the amnesia-inducing drug after each interview), this information is lost to her, and her credence reverts back to 1/2. The reason why she can't retain the information available to her during each awakening is that this information pertains specifically to the state of the coin in relation to her current episode of awakening. Upon awakening on Wednesday, she loses this information because she loses the ability to refer deictically to her past episodes of awakening (not even knowing how many of them there were).

    This loss of information can be emphasized further by modifying the experiment in such a way that the information is not lost by her on Wednesday. Suppose we update the protocol so that on rare occasions, which present themselves with equal probability on each awakening episode, Sleeping Beauty is able to write down a note saying "I have now been awakened and interviewed." She can retain this note and read it again on Wednesday. Upon rereading the note on Wednesday, she can reason that it is twice as likely that such a note was produced if the coin landed tails since she would have been twice as likely to write it during such an experimental run. Therefore, she can retain her updated credence P(T) = 2/3 through ordinary Bayesian reasoning.

    The key realization is that the same information that allows Sleeping Beauty to update her credence P(T) from 1/2 to 2/3 upon receiving the note is inherent in every awakening she experiences due to the causal structure of the experiment. Each awakening serves as an implicit notification of her being in one of the two potential kinds of awakening episodes, which are twice as likely to occur if the coin landed tails. This causal relationship between coin toss results and awakenings, established by the experimental setup, provides information that is available to her in every awakening, even when she doesn't have the opportunity to physically write it down. Essentially, the note merely serves to extend this causal relationship to her Wednesday state, providing her with twice as many opportunities to receive the note if the coin landed tails.
  • Sleeping Beauty Problem
    Pierre-Normand is saying that P(X) refers to the ratio of Xs to non-Xs in some given reference class.

    I'm saying that P(X) refers to the degree to which I believe X to be true.

    If P(X) refers to the degree to which I believe X to be true, and if I believe that A iff B, then P(A) = P(B).
    Michael

    Actually, I suggested that P(X) could be understood as referring to the ratio of |{X}| to (|{X}| + |{not-X}|) in epistemically identical situations with respect to X. There is some flexibility in defining what the relevant situations are.

    In the case where Sleeping Beauty can say "I am experiencing an H-awakening iff I am experiencing an H-run", and there is a one-to-one mapping between H-awakenings and H-runs, we still can't logically infer that P(H-awakening) = P(H-run). This is because one can define P(H-awakening) as |{H-awakening}|/|{awakenings}| and similarly define P(H-run) as |{H-run}|/|{runs}|, where {awakenings} and {runs} are representative sets (and |x| denotes cardinality). For the inference to hold, you would also need a one-to-one mapping between the sets of T-runs and T-awakenings.

    So, the grounds for a Thirder's credence P(H-awakening) being 1/3 (where it is defined as |{H-awakening}|/|{awakenings}|) simply comes from the propensity of the experimental setup to generate twice as many T-awakenings as H-awakenings.

    You argue that the number of T-awakenings is irrelevant to the determination of Sleeping Beauty's credence P(H-awakening) because her having multiple opportunities to guess the coin toss result when it lands tails doesn't impact the proportion of tails outcomes. However, while it doesn't impact |{H-run}|/|{runs}|, it does impact |{H-awakening}|/|{awakenings}|, which is why I argue that Halfers and Thirders are talking past each other.

    Consider an alternative experiment setup where Sleeping Beauty awakens less often rather than more often when the coin lands tails. For instance, we could eliminate Tuesday awakenings altogether and ensure that Sleeping Beauty awakens once on Monday when the coin lands heads, and only half the time when it lands tails (by tossing a second coin, say). This setup would, in the long run, reverse the ratio of H-awakenings to T-awakenings compared to the original setup. In that case, when Sleeping Beauty awakens, would you still argue that her credence P(H-awakening) remains 1/2? A Thirder would argue that her credence should now increase to 2/3, based on the same frequency-ratio reasoning.
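
    Here is a minimal sketch of that frequency-ratio reasoning (the helper name is mine), computing |{H-awakening}|/|{awakenings}| from the expected number of awakenings per run in each setup:

        from fractions import Fraction

        def p_h_awakening(awakenings_if_heads, awakenings_if_tails):
            # Expected H- and T-awakenings per run, assuming a fair coin.
            h = Fraction(1, 2) * awakenings_if_heads
            t = Fraction(1, 2) * awakenings_if_tails
            return h / (h + t)   # |{H-awakening}| / |{awakenings}| in the long run

        print(p_h_awakening(1, 2))                # original setup: 1/3
        print(p_h_awakening(1, Fraction(1, 2)))   # alternative setup: 2/3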
  • Sleeping Beauty Problem
    In your scenario there are a bunch of flashes going off in a forest and me, a passer-by, randomly sees one of them. This is comparable to a sitter being assigned a room.Michael

    The passer-by sees all of the flashes and does not know the genetic status of the fireflies producing them. This is analogous to Sleeping Beauty experiencing all of her awakenings but not knowing if they're unique (generated by a coin having landed heads) or one of a series of two (generated by a coin having landed tails). The witness's credence P(XYZ) = 2/3 applies to all of the flashes they are witnessing, just like Sleeping Beauty's credence P(T) = 2/3 applies to all the awakenings she is experiencing, not merely to random samplings of them.

    What is true of a random sampling of these awakening episodes (or flash sightings), due to the fact that the sampling would represent the relevant frequencies, is even more applicable to the total population of awakening episodes. However, in the latter case, no additional sampling method (nor the presence of a randomly assigned sitter) is required.
  • Sleeping Beauty Problem
    @Michael

    Let me adjust my previous firefly case to meet your objection.

    We can assume that half of the fireflies have gene XYZ, which causes them to flash twice every five minutes. The other half, lacking gene XYZ, flash once every five minutes.

    A witness can see every flash and thus is guaranteed to see the first flash of every firefly. The second flash, however, is optional as it depends on the firefly having gene XYZ. This mimics the guaranteed and optional awakenings in the Sleeping Beauty problem.

    When the witness sees a flash, they know it could either be a first flash (which is guaranteed from every firefly) or a second flash (which is optional and only comes from the fireflies with gene XYZ).

    Just like in the Sleeping Beauty problem, every flash is an 'awakening' for the witness. The presence of gene XYZ is akin to a coin landing tails (T), leading to an optional second flash (analogous to the T-Tuesday awakening).

    Upon witnessing a flash, the observer's credence that they're seeing a firefly with gene XYZ should be 2/3, and in any case more than 1/2, since the witness cannot conclusively rule out that it's a second, optional flash. This aligns with the reasoning that P(T) > 1/2 for Sleeping Beauty when she cannot rule out the possibility of T-Tuesday.

    This analogy illustrates how an increased frequency of a particular event (the witnessing of a second flash, or T-Tuesday) can impact overall credence.
  • Sleeping Beauty Problem
    That's not what I said.

    In the Sleeping Beauty problem I am guaranteed to wake up at least once if tails and guaranteed to wake up at least once if heads. The coin toss does not determine the likelihood of me waking up. It only determines the number of times I'm woken up. But the frequency is irrelevant. The only thing that matters is the guarantee.
    Michael

    When Sleeping Beauty awakens, she could potentially be experiencing either a guaranteed awakening (i.e. T-Monday or H-Monday) or an optional awakening (i.e. T-Tuesday). Since she cannot definitively rule out the possibility of experiencing an optional awakening, this uncertainty should affect her credence P(T), as P(T) = P(T-Monday) + P(T-Tuesday), and P(T-Monday) is always equal to P(H-Monday) regardless of the value of P(T-Tuesday). Therefore, P(T) should be greater than 1/2 whenever Sleeping Beauty cannot conclusively rule out the possibility of it being T-Tuesday.
  • Sleeping Beauty Problem
    This has nothing to do with credence.

    I am asked to place two bets on a single coin toss. If the coin lands heads then only the first bet is counted. What is it rational to do? Obviously to bet on tails. Even though my credence isn't that tails is more likely. The same principle holds in the Sleeping Beauty experiment where I'm put to sleep and woken up either once or twice depending on a coin toss. That it's rational to bet on tails isn't that my credence is that it's most likely tails; it's that I know that if it is tails I get to bet twice.

    The same principle holds with the dice roll and the escape attempts.
    Michael

    I find it unusual that you maintain that when faced with a potential outcome O in a situation S, your credence P(O) should only reflect the intrinsic propensity of an object to generate O, disregarding how O affects the likelihood of you being in this situation.

    Consider a scenario where fireflies are equally likely to have gene XYZ, which makes them brighter and thus twice as likely to be seen from the same distance. If you happen to be in proximity to such a firefly, it is twice as likely to catch your attention when it has the XYZ gene. Therefore, from a population where half of the fireflies have this gene, you witness twice as many flashes from the ones carrying XYZ. According to your logic, your credence about any given firefly flash should remain P(XYZ) = 1/2 (because the firefly generating it had a 50% chance of inheriting this gene), despite the fact that you would have twice as many betting opportunities on fireflies with the XYZ gene. You seem to consider this increase in betting opportunities irrelevant to your credence P(XYZ), even though your encounters with such fireflies are twice as frequent.

    This line of reasoning appears to be an ad hoc restriction on the common understanding of credence, primarily designed to disqualify the Thirder interpretation of the Sleeping Beauty problem from the outset. This restriction seems to have limited applicability outside of this specific problem. In most cases, we focus more on the overall frequency of the outcomes in proportion to the relevantly similar situations, rather than on the intrinsic propensities of the objects generating these outcomes.
  • Sleeping Beauty Problem
    It just doesn't make sense to say that A iff B but that P(A) != P(B). And Bayes' theorem shows that P(A) = P(B).

    I have indeed conceded that the inference is valid (as are the applications of Bayes' theorem predicated on it) as long as we avoid equivocating on the meaning of P(). To prevent this equivocation, we must be mindful of the specific ratio implicitly referred to when we discuss Sleeping Beauty's credence P(H). It's important to ensure that, when you lay out your demonstrations, you do not switch between two inconsistent definitions of P(), even within the same formula.

    Consider again the pragmatic dice scenario where Sleeping Beauty is awakened six times in the East Wing if the die lands on 'six', and awakened once in the West Wing otherwise. It's rational for her to instruct her Aunt Betsy to wait for her at the West Wing exit, because once the experimental run concludes, the odds of her exiting there are P(not-'six') = 5/6. This also implies that P(not-'six'-awakening) is 5/6, if we understand it to mean that in five out of six potential runs of awakenings she awakens into, she finds herself in not-'six' runs (regardless of the number of times she awakens in that run). However, what does not logically follow is that P'(not-'six') = 5/6, if we interpret this to mean that in five out of six potential awakening episodes, she finds herself in not-'six' episodes. The relevant ratio in this context is P'(not-'six') = 5/11, since, in the long run, 'six' runs contribute six awakenings for every five contributed by not-'six' runs. This per-awakening ratio is also the relevant one for predicting which wing she would likely exit from if she had a chance to escape during any given awakening episode.
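
    The two ratios can be laid out side by side (a small illustrative sketch; the variable names are mine):

        from fractions import Fraction

        # Per six equally likely die outcomes: five not-'six' runs with one West Wing
        # awakening each, and one 'six' run with six East Wing awakenings.
        p_not_six_per_run = Fraction(5, 6)
        west_awakenings, east_awakenings = 5, 6
        p_not_six_per_awakening = Fraction(west_awakenings, west_awakenings + east_awakenings)

        print(p_not_six_per_run)        # 5/6  -- where she exits once the run concludes
        print(p_not_six_per_awakening)  # 5/11 -- the relevant ratio during any given awakening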
  • Sleeping Beauty Problem
    That doesn't mean that the credence isn’t transitive. My premises "fail" to account for it because it's irrelevant.

    A iff B
    P(B) = 1/2
    Therefore, P(A) = 1/2
    Michael

    While this kind of inference is often valid, it doesn't apply in the Sleeping Beauty problem.

    Credences, or probabilities, can be thought of as ratios. My credence at a given time for an outcome O reflects the proportion of cases where O occurs in a similar situation S. In other words, it is the "ideal" or "long run" ratio O/S. For instance, my credence that a randomly drawn card from a shuffled deck is a spade, P(Spade) = 1/4, reflects my belief that the ratio of spade outcomes to card-drawing situations is 1/4.

    The general validity of the inference you propose is based on the assumption that the mapping between O and S is constant. However, this mapping is contentious in the Sleeping Beauty problem, with Halfers and Thirders disagreeing, resulting in conflicting interpretations of P(Heads).

    As long as Halfers and Thirders stick to their own definitions, this isn't a problem—though it can lead to miscommunication. Being aware of these divergent definitions also helps avoid invalid inferences.

    Let's take A as Sleeping Beauty being in a H-awakening episode and B as her being in a H-run. While A iff B holds true, note that:

    P(B) = |{H-runs}| / |O| = 1/2, where O represents a representative set of experimental runs.

    P(A) = |{H-awakenings}| / |O'| = 1/3, where O' represents a representative set of awakening episodes.

    Equating P(B) and P(A), or inferring one from the other, is only valid if the same reference class is used in both cases; in other words, only if the mapping from potential outcomes to potential situations doesn't change.
  • Sleeping Beauty Problem
    But haven't you lost Sleeping Beauty's other constraint, that the chances of encountering one Italian or two Tunisians are equal?Srap Tasmaner

    In my original cosmopolitan analogy, the equal Italian and Tunisian populations mirror the even likelihood of the coin landing on either side in the Sleeping Beauty problem. What makes it more likely to encounter a Tunisian—despite the equal population—is that Tunisians go for walks twice as often on average, increasing the odds of an encounter. This mirrors how Sleeping Beauty is woken up twice when the coin lands tails.

    To fine-tune the analogy and preserve the feature of the Sleeping Beauty problem you've pointed out, we can assume that initially, you're equally likely to encounter an Italian or a Tunisian—perhaps because Tunisians walk in hidden pairs. When you meet a member of a Tunisian pair for the first time, their sibling ensures they are the next one you meet. Thus, when you have met an Italian, or a Tunisian for the second time in a row, your next encounter is equally likely to be with an Italian or a Tunisian, analogous to the Sleeping Beauty problem where a new coin toss (and a new Monday awakening) occurs after each heads or second tails awakening. Despite this, two-thirds of your encounters are with Tunisians, so the odds that any given encounter is with a Tunisian remain 2/3. (We can assume that the experiment begins with a small number of random "dummy" encounters to ensure that you lose track of the first "experimental" encounter.)
  • Sleeping Beauty Problem
    If you want a closer analogy with pedestrians, it's Tunisians walking around in pairs. If the chances of meeting an Italian or a pair of Tunisians are equal, then the chances of meeting *a* Tunisian are either nil, since you can't meet just one, or the same as meeting a pair.

    Look at how hang-around times affect the pedestrian-encountering odds. Roughly, if you miss a short walker, you've missed him, but if you miss a long walker you get another chance. That's not how Sleeping Beauty works at all. There's no way to miss your first tails interview but still catch the second one.
    Srap Tasmaner

    If I were to adjust the analogy, suppose that meeting a Tunisian pedestrian guarantees that you have met or will meet their sibling either in the previous or next encounter. In this scenario, would your credence that the pedestrian you're encountering is a Tunisian change? As long as you meet Tunisians twice as often as Italians, your credence P(Tunisian encounter) should remain 2/3 at the time of each individual encounter, regardless of the pairing situation.
  • Sleeping Beauty Problem
    This is an ambiguous claim. If there are half as many Tunisians but they go out four times as often but are only out for 10 mins, whereas Italians are out for 20 mins, then it would be that Tunisians are around equally as often as measured by time out. The only way you could get this to work is if the argument is set out exactly as I have done above:

    A1. there are twice as many Tunisian walkers as Italian walkers (out right now)
    A2. if (right now) I meet a walker at random from a random distribution of all walkers (out right now) then I am twice as likely to meet a Tunisian walker

    But there's nothing comparable to "if (right now) I meet a walker at random from a random distribution of all walkers (out right now)" that has as a consequent "then my interview is twice as likely to be a T-interview".
    Michael

    To fine-tune the analogy, let's assume that there are an equal number of Tunisians and Italians, that they are out for the same duration, and that Tunisians go out twice as frequently. Importantly, there's no need for an extraneous process of random selection to generate an encounter with a citizen, or a tossed coin, in either example. In the cosmopolitan encounter case, the random distributions of citizens in the street at any given time (with, on average, twice as many Tunisians out) directly result in twice as many encounters with Tunisians. It is a straightforward causal relationship between the distribution of wanderers and the distribution of encounters. Similarly, in the Sleeping Beauty case, the setup guarantees that Sleeping Beauty will encounter twice as many coins having landed tails when she awakens, simply by ensuring that she is awakened twice as often when the coins land tails.
  • Sleeping Beauty Problem
    P1. If I am assigned at random either a T-interview set or a H-interview set then my interview set is equally likely to be a T-interview set
    P2. I am assigned at random either a T-interview set or a H-interview set
    P3. My interview is a T-interview iff my interview set is a T-interview set
    C1. My interview is equally likely to be a T-interview

    The premises are true and the conclusion follows, therefore the conclusion is true.
    Michael

    The conclusion doesn't follow because, while the biconditional expressed in P3 is true, this biconditional does not guarantee a one-to-one correspondence between the set of T-interviews and the set of T-runs (or "T-interview sets"). Instead, the correspondence is two-to-one, as each T-run includes two T-interviews. This is a central defining feature of the Sleeping Beauty problem that your premises fail to account for.
  • Sleeping Beauty Problem
    In the case of the meetings we have:

    *P1) there are twice as many Tunisian walkers
    *P2) if I meet a walker at random then I am twice as likely to meet a Tunisian walker (from *P1)
    *P3) I meet a walker at random
    *C) I am twice as likely to have met a Tunisian walker (from *P2 and *P3)

    In Sleeping Beauty's case we have:

    P1) there are twice as many tails interviews
    P2) ?
    P3) I am in an interview
    C) I am twice as likely to be in a tails interview

    What is your (P2) that allows you to derive (C)? It doesn't follow from (P1) and (P3) alone.
    Michael

    You are introducing premises *P2 and *P3 in an attempt to emphasize a perceived disanalogy between the cosmopolitan meeting scenario and the Sleeping Beauty problem. Both *P1 and *P2 seem to imply that there exists a pre-determined set of potential encounters (many Tunisians and half as many Italians strolling around), from which a random selection process subsequently generates an encounter. There indeed is no analogous situation in the Sleeping Beauty problem, as there isn't a pre-determined set of pre-tossed coins from which Sleeping Beauty randomly encounters one upon awakening. However, I would argue that this misrepresents the cosmopolitan meeting scenario.

    In the cosmopolitan situation, the probability of meeting a Tunisian doubles because Tunisians are around twice as often. There is no need for a random selection from a pre-existing set of potential encounters. Similarly, in the Sleeping Beauty problem, coins that have landed on tails "hang around" twice as long (i.e., for two sequential awakenings instead of one), which makes it twice as likely for Sleeping Beauty to encounter this outcome each time she is awakened and interviewed throughout the experiment.

    The random process is fully specified by the equal distribution of coin toss outcomes (over the long run) and the longer "hang around" times of tails outcomes. Likewise, in the cosmopolitan meeting case, the process is fully specified by the equal distribution of Italians and Tunisians in the city and the increased frequency of encounters generated by Tunisians due to their longer "hang around" times. In neither case are additional random selection processes from a pre-determined set of possible encounters necessary.
  • Sleeping Beauty Problem
    Your argument is that: if 1) there are twice as many T-awakenings and if 2) I randomly select one of the awakenings then 3) it is twice as likely to be a T-awakening.

    This is correct. But the manner in which the experiment is conducted is such that 2) is false.
    Michael

    Your point (2) doesn't factor into my argument. I've consistently held to the premise, as dictated by the problem statement, that Sleeping Beauty awakens once when the coin lands heads and twice when it lands tails. There's no necessity for an external agent to select an awakening, just as there's no need for someone to choose a street encounter. Instead, Sleeping Beauty, upon each awakening (or encounter), should consider the long-term distribution of these awakenings (or encounters) to formulate a rational belief about the current situation.
  • Sleeping Beauty Problem
    If we were to use the meetings example then:

    1. A coin is tossed
    2. If heads then 1 Italian walks the streets
    3. If tails then 2 Tunisians walk the streets
    4. Sleeping Beauty is sent out into the streets

    What is the probability that she will meet a Tunisian?
    Michael

    In the Sleeping Beauty problem, she isn't asked to estimate the probability of being awakened in the future with the coin having landed heads. Instead, she's awakened and then questioned about her current belief regarding the coin's outcome. To maintain this structure in the street encounter example, we should consider Sleeping Beauty meeting a wanderer and then being asked to consider the probability that this wanderer is an Italian. If, over time, the setup leads to twice as many Tunisian encounters (perhaps because Tunisians wander about twice as long as Italians), then Sleeping Beauty's rational credence should be P(Italian) = 1/3.
  • Sleeping Beauty Problem
    "there are twice as many Tunisian-meetings" isn't biconditional with "there are half as many Tunisians and Tunisians go out four times more often" and so A doesn't use circular reasoning.Michael

    However, you seem to agree that in this scenario, one is twice as likely to encounter a Tunisian. The conclusion that there are twice as many Tunisian-meetings emerges from the premises: (1) there are half as many Tunisians and (2) Tunisians venture out four times more often. This inference is simply an intermediate step in the argumentation, providing an explanation for why there are twice as many Tunisian-meetings. Analogously, the Sleeping Beauty setup explains why there are twice as many T-awakenings. If the reason for twice as many Tunisian-meetings is that Tunisians venture out twice as often (assuming there are an equal number of Tunisians and Italians), then the analogy with the Sleeping Beauty scenario is precise. The attribute of being Tunisian can be compared to a coin landing tails, and encountering them on the street can be paralleled to Sleeping Beauty encountering such coins upon awakening. In the Sleeping Beauty setup, coins that land tails are 'venturing out' more often.
  • Sleeping Beauty Problem
    This is just repeating the same thing in a different way. That there are twice as many T-awakenings just is that Sleeping Beauty is awakened twice as often if tails.Michael

    You accepted the validity of the reasoning when probability was deduced from frequencies in the Tunisian-meetings scenario. Why is this reasoning acceptable for people who were born Tunisian but questionable for coins that landed tails?
  • Sleeping Beauty Problem
    In this case:

    1. there are twice as many Tunisian-meetings because Tunisian-meetings are twice as likely
    2. Tunisian-meetings are twice as likely because there are half as many Tunisians and Tunisians go out four times more often

    This makes sense.

    So:

    1. there are twice as many T-awakenings because T-awakenings are twice as likely
    2. T-awakenings are twice as likely because ...

    How do you finish 2? It's circular reasoning to finish it with "there are twice as many T-awakenings".
    Michael

    To fill in your number 2 with no circularity, we can draw a parallel to the first example:

    2a. Tunisian-meetings are twice as likely because there are half as many Tunisians and Tunisians go out four times more often (and thus, Tunisian-meetings are twice as frequent as Italian-meetings)

    Likewise:

    2b. T-awakenings are twice as likely because, based on the experiment's design, Sleeping Beauty is awakened twice as often when the coin lands tails (and thus, T-awakenings are twice as frequent as H-awakenings)
  • Sleeping Beauty Problem
    Starting here you argued that P(Heads) = 1/3.

    So, what do you fill in here for the example of one person woken if heads, two if tails?
    Michael

    I see. I was filling in a template that you had provided where P(Monday) = 2/3, thus making it clear that we were quantifying over awakening episodes.

    In that case P(Monday|Heads) = 1, and P(Heads) = 1/3 since one third of the awakenings are H-awakenings.

    Therefore P(Heads|Monday) = P(Monday|Heads)∗P(Heads)/P(Monday) = (1)*(1/3)/(2/3) = 1/2.

    Likewise, P(Heads|Awake) = P(Awake|Heads)∗P(Heads)/P(Awake) = (1)*(1/3)/(1) = 1/3

    Note that when we quantify over awakening episodes, P(Awake|Heads) = 1 since all H-awakenings are awakenings.
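
    These conditional probabilities can also be read off directly from the long-run episode frequencies (an illustrative sketch; the dictionary of episode types is mine):

        from fractions import Fraction

        # Quantifying over awakening episodes: H-Monday, T-Monday and T-Tuesday
        # episodes occur equally often in the long run.
        episodes = {("H", "Mon"): Fraction(1, 3),
                    ("T", "Mon"): Fraction(1, 3),
                    ("T", "Tue"): Fraction(1, 3)}

        p_heads = sum(p for (coin, _), p in episodes.items() if coin == "H")    # 1/3
        p_monday = sum(p for (_, day), p in episodes.items() if day == "Mon")   # 2/3

        print(episodes[("H", "Mon")] / p_monday)  # P(Heads|Monday) = 1/2
        print(p_heads)                            # P(Heads|Awake) = 1/3, since P(Awake) = 1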
  • Sleeping Beauty Problem
    What wouldn't make sense is just to say that Tunisian-meetings are twice as likely because there are twice as many Tunisian-meetings. That is a non sequitur.Michael

    But why wouldn't it make sense? For example, if you're an immigration lawyer and your secretary has arranged for you to meet with twice as many Tunisians as Italians in the upcoming week, when you walk into a meeting without knowing the client's nationality, isn't it logical to say that it's twice as likely to be with a Tunisian?

    To make this comparable to the Sleeping Beauty problem; there are two Sleeping Beauties, one will be woken if heads, two will be woken if tails. When woken, what is their credence in heads? In such a situation the answer would be 1/3. Bayes' theorem for this is:

    P(Heads|Awake) = P(Awake|Heads)∗P(Heads)/P(Awake) = (1/2)∗(1/2)/(3/4) = 1/3

    This isn't comparable to the traditional problem.
    Michael

    Why isn't it comparable? I had proposed an identical version earlier. One effective way to erase Sleeping Beauty's memory without any side effects from an amnesia-inducing drug might be to switch her with her identical twin for the second awakening. They would each only experience one awakening at most as part of a team. Their epistemic perspectives regarding the coin toss would remain the same, and therefore so should their rational credences.

    Incidentally, what is your version of Bayes' theorem for this where P(Heads) = 1/3?

    I am unsure what it is that you are asking here.
  • The Andromeda Paradox
    If you want to be very precise with the terminology, the Andromeda Paradox shows that some spacelike separated event in my present is some spacelike separated event in some other person's causal future even though that person is also a spacelike separated event in my present. I find that peculiar.Michael

    In essence, you're saying that even though a distant event currently lies beyond your ability to influence it (due to the fact that any influence you exert cannot travel faster than light), someone else, presently positioned closer to the event, can influence it.

    Some event (A1) in my (A0) future is spacelike separated from some event (B0) in someone else's (B1) past, even though this person is spacelike separated from my present. It might be impossible for me to interact with B1 (or for B1 to interact with A1), but Special Relativity suggests that A1 is inevitable, hence why this is an argument for a four-dimensional block universe, which may have implications for free will and truth.Michael

    If we let c approach infinity, Lorentzian spacetime converges to Galilean spacetime. In this case, the "absolute elsewhere" of an event (the region outside of the light cone) shrinks into a unique simultaneity hyperplane. In Galilean spacetime, an observer at a given time views any event in its (absolute) past as "inevitable." In Lorentzian spacetime, an observer deems "inevitable" any event that resides either in its (absolute) past light cone or in its (also absolute) elsewhere region. The "inevitability" relation between observers-at-a-time (events) and other observers-at-a-time becomes intransitive.

    This intransitivity means that even if

    1. A1 is inevitable by B1, and
    2. B1 is inevitable by A0,
    it does not follow that (3) A1 is inevitable by A0.

    This inference is invalid because the inability of A0 to affect A1 indirectly by influencing B1 does not mean that A0 can't influence A1 directly.
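
    The intransitivity is easy to exhibit with concrete numbers (a minimal sketch with made-up coordinates, in units where c = 1; nothing here goes beyond the definitions above):

        # A0 and A1 lie on my worldline; B1 is the distant observer-at-a-time.
        # "Inevitable by X" is used as above: the event lies in X's past light cone
        # or in X's absolute elsewhere.
        def relation(origin, event):
            (t1, x1), (t2, x2) = origin, event
            dt, dx = t2 - t1, x2 - x1
            if abs(dx) > abs(dt):
                return "spacelike (elsewhere)"
            return "causal future" if dt > 0 else "causal past"

        A0, A1, B1 = (0.0, 0.0), (2.0, 0.0), (0.5, 10.0)

        print("A1 relative to B1:", relation(B1, A1))  # spacelike -> inevitable by B1
        print("B1 relative to A0:", relation(A0, B1))  # spacelike -> inevitable by A0
        print("A1 relative to A0:", relation(A0, A1))  # causal future -> not inevitable by A0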

Pierre-Normand
