• Pierre-Normand
    2.4k
    Sue's reasoning is right for Sue, but wrong for Jane (and vice versa), given that 2/3 of sitters will sit in on a 100 Heads interview but 1/2^100 of participants will have a 100 Heads interview. (Michael)

    Let's consider another scenario. Suppose Jane is an amnesiac paratrooper set to be dropped behind enemy lines for three separate missions on different nights. Conveniently, she suffers from selective anterograde amnesia which shields mission information in case she's captured. There are two potential drop zones: #1 is dry land, and #2 is a swamp.

    Mission Control will toss a coin to decide Jane's destinations. If it lands heads, she will be dropped once over DZ#1 and twice over DZ#2. If it lands tails, she'll be dropped twice over DZ#1 and once over DZ#2. Jane carries a tracking device which will enable Sergeant Sue, who's already operating behind enemy lines, to locate her during one randomly selected mission, collect the mission package, and direct her back home.

    Now, what's the probability that the coin landed heads? Before the experiment begins, both Sue and Jane have the same prior credence P(H) = 1/2.

    However, suppose Sue locates Jane in DZ#1. What is the updated probability that the coin landed heads? It would be 1/3. Shouldn't Jane update her credence to the same value when she realizes she's landed on dry land?

    Suppose the outcome of the coin toss also determines the nature of the package that Jane is carrying. If the coin landed on heads, Jane's package contains a secret explosive; if it landed tails, the package contains a secret medication. Both Jane and Sue know this arrangement beforehand.

    When Sue tracks Jane and finds her in DZ#1, they both realize that Sue should deliver the package to a hospital if the coin landed tails (since the package would contain medication), or to a military outpost if the coin landed heads (since it would contain an explosive).

    Now, should they both act as though the probability of the coin having landed heads is 1/3? It's not just about winning or losing a bet anymore. The decisions they make based on their credence have immediate and tangible consequences. They must update their credence according to the information they have at hand, including Jane's current location.
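    For readers who want to check the arithmetic, Sue's 1/3 figure can be verified with a short Monte Carlo sketch (the trial count and variable names are illustrative):

```python
import random

# Monte Carlo check of Sue's posterior in the paratrooper scenario.
# Heads: Jane is dropped once over DZ#1 and twice over DZ#2.
# Tails: Jane is dropped twice over DZ#1 and once over DZ#2.
# Sue locates Jane on one of the three missions, chosen at random.
random.seed(0)
heads_given_dz1 = []
for _ in range(200_000):
    heads = random.random() < 0.5
    drops = ["DZ1", "DZ2", "DZ2"] if heads else ["DZ1", "DZ1", "DZ2"]
    observed = random.choice(drops)   # Sue's randomly selected mission
    if observed == "DZ1":
        heads_given_dz1.append(heads)

p = sum(heads_given_dz1) / len(heads_given_dz1)
print(round(p, 3))   # ~0.333: P(heads | Sue finds Jane in DZ#1) = 1/3
```

    By Bayes' theorem this is (1/2 x 1/3) / (1/2 x 1/3 + 1/2 x 2/3) = 1/3, matching the simulation.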
  • Michael
    15.6k


    So if heads then woken once on Monday and twice on Tuesday, otherwise woken twice on Monday and once on Tuesday.

    Sue tells Jane that it's Monday.

    What is Jane's credence that the coin landed heads?

    I say 1/2.

    It's exactly the same reasoning as before.

    Sue should reason as if she is randomly selected from the set of all sitters, and 1/3 of sitters sitting in a Monday room are sitting in a heads room.

    Jane should reason as if she is randomly selected from the set of all participants, and 1/2 of participants in a Monday room are sitting in a heads room.

    This reasoning is much clearer in the 100 Heads example, and I don't see how any counterexample is going to change my mind about 100 Heads. If I'm one of the participants I will only ever reason that P(100 Heads) = 1/2^100. I am almost certainly not the one participant who will have 2^101 interviews.

    I should no more update my credence to match my sitter’s than she should update hers to match mine.
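    The two reference classes invoked here can be counted exactly. A minimal sketch of the Monday variant above (the schedule dictionary simply transcribes "once on Monday and twice on Tuesday" vs "twice on Monday and once on Tuesday"):

```python
from fractions import Fraction

# Heads: one Monday awakening and two Tuesday awakenings.
# Tails: two Monday awakenings and one Tuesday awakening.
# Each awakening has its own sitter; Jane is the lone participant per run.
schedule = {"heads": {"Mon": 1, "Tue": 2},
            "tails": {"Mon": 2, "Tue": 1}}

# Sitter reference class: all Monday sitter-slots, both outcomes equally likely.
monday_slots_heads = Fraction(1, 2) * schedule["heads"]["Mon"]
monday_slots_total = sum(Fraction(1, 2) * schedule[c]["Mon"] for c in schedule)
print(monday_slots_heads / monday_slots_total)    # 1/3

# Participant reference class: runs in which Jane has a Monday awakening
# (every run does, on either outcome).
runs_with_monday_heads = Fraction(1, 2)
runs_with_monday_total = Fraction(1, 2) + Fraction(1, 2)
print(runs_with_monday_heads / runs_with_monday_total)   # 1/2
```

    Both numbers come out as stated: 1/3 of Monday sitter-slots are heads-slots, while 1/2 of Monday-awakened participants are in a heads run.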
  • Pierre-Normand
    2.4k
    So if heads then woken once on Monday and twice on Tuesday, otherwise woken twice on Monday and once on Tuesday.

    Sue tells Jane that it's Monday.

    What is Jane's credence that the coin landed heads?

    I say 1/2.

    It's exactly the same reasoning as before.

    Sue should reason as if she is randomly selected from the set of all sitters, and 1/3 of sitters sitting in a Monday room are sitting in a heads room.

    Jane should reason as if she is randomly selected from the set of all participants, and 1/2 of participants in a Monday room are sitting in a heads room.
    Michael

    Although you linked to my most recent post, I assume you intended to respond to this one.

    In my scenario, there was only one room for awakenings. Without loss of generality, we can schedule the occupancy of that room over the three days of the experiment, according to the coin toss result, as follows:

    If the coin lands Heads:

    Day 1: Jane (first),
    Day 2: Helen (first),
    Day 3: Helen

    If the coin lands Tails:

    Day 1: Jane (first),
    Day 2: Jane,
    Day 3: Helen (first)

    I use the label 'first' to signify a participant's first (or only) day of awakening. This is functionally equivalent to Monday in the original problem.

    On any given day, whichever candidate doesn't occupy the waking room is moved into the sleeping room.

    When Sue is assigned to attend to the participant who is awakened on a randomly chosen day (without being informed whether it's Day 1, Day 2, or Day 3), her credence in the coin having landed heads is 1/2. After she is informed that the occupant is Jane, she updates her credence P(H) to 1/3. After she is further informed that this is Jane's first or only occasion of awakening, her credence gets updated back to 1/2 again.

    I would argue that Jane should update her credence in the same way in light of the same information. But first, I wanted to ensure that we are on the same page regarding both the setup and Sue's credences within it.
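    Sue's three successive credences (1/2, then 1/3, then 1/2 again) fall out of a straightforward Bayesian calculation over this schedule. A minimal sketch (the helper function is illustrative, not part of the scenario):

```python
from fractions import Fraction

# One-room schedule, with (occupant, is_first_awakening) per day.
# Heads: Day 1 Jane (first), Day 2 Helen (first), Day 3 Helen.
# Tails: Day 1 Jane (first), Day 2 Jane, Day 3 Helen (first).
schedules = {
    "heads": [("Jane", True), ("Helen", True), ("Helen", False)],
    "tails": [("Jane", True), ("Jane", False), ("Helen", True)],
}

def posterior_heads(evidence):
    # evidence: predicate on (occupant, is_first); the day is chosen uniformly
    like = {c: Fraction(sum(evidence(*slot) for slot in s), 3)
            for c, s in schedules.items()}
    prior = Fraction(1, 2)
    return prior * like["heads"] / (prior * like["heads"] + prior * like["tails"])

print(posterior_heads(lambda who, first: True))                     # 1/2
print(posterior_heads(lambda who, first: who == "Jane"))            # 1/3
print(posterior_heads(lambda who, first: who == "Jane" and first))  # 1/2
```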

    (I'll address your extreme example separately.)
  • Michael
    15.6k
    Although you linked to my most recent post, I assume you intended to respond to this one. (Pierre-Normand)

    No, I was just trying to rephrase your secret mission example into a way that I could understand better. Did I misinterpret it?

    If not then it appears to be saying the same thing as the above?
  • Pierre-Normand
    2.4k
    No, I was just trying to rephrase your secret mission example into a way that I could understand better. Did I misinterpret it? (Michael)

    In the secret mission example, Sergeant Sue is tasked with locating Jane on one of her three scheduled missions, selected at random. Neither Sue nor Jane knows whether this mission is the first, second, or third one or, if it's not the first, in what zone(s) Jane landed before.
  • Michael
    15.6k
    Was my rephrasing of it wrong? I'm treating DZ#1 as Monday and DZ#2 as Tuesday. If twice at DZ#1 then twice on Monday, if once at DZ#2 then once on Tuesday. If you know that it's DZ#1 then you know that it's Monday.
  • Michael
    15.6k
    I would argue that Jane should update her credence in the same way in light of the same information. (Pierre-Normand)

    Jane should reason as if she is randomly selected from the set of all participants, because she is.
    Sue should reason as if she is randomly selected from the set of all sitters, because she is.

    Jane shouldn't update her credence to match Sue and Sue shouldn't update her credence to match Jane.

    I think my extreme example shows why. It's just not rational for each participant to reason that the coin most likely landed heads 100 times but it is rational for each sitter to reason that they are sitting in on a 100 heads experiment.
  • Pierre-Normand
    2.4k
    Was my rephrasing of it wrong? I'm treating DZ#1 as Monday and DZ#2 as Tuesday. If twice at DZ#1 then twice on Monday, if once at DZ#2 then once on Tuesday. If you know that it's DZ#1 then you know that it's Monday. (Michael)

    Structurally, DZ#1 is similar to awake and DZ#2 is similar to asleep. In this way, the Jane&Helen scenario maps perfectly onto the secret mission scenario, with DZ#2 and Helen fulfilling the roles of dummy placeholders, as it were.
  • Pierre-Normand
    2.4k
    Jane shouldn't update her credence to match Sue and Sue shouldn't update her credence to match Jane. (Michael)

    But we are agreed on the validity of Sue's credences in both scenarios, right?
  • Michael
    15.6k
    But we are agreed on the validity of Sue's credences in both scenarios, right? (Pierre-Normand)

    Yes, I said as much with my extreme example.

    Given that 2/3 of sitters will sit in on a 100 Heads interview, it is rational for each sitter to reason that they are likely sitting in on a 100 Heads interview.

    But given that only 1/2^100 of participants will have a 100 Heads interview, it is rational for each participant to reason that their interview is likely not a 100 Heads interview.

    Neither the sitter nor the participant should update their credence to match the other's. They each belong to a different reference class.

    And if they reason this way then 2^101 sitters will be right (once) and 2^100 - 1 will be wrong (once), and 2^100 - 1 participants will be right (once) and 1 will be wrong (2^101 times).

    I'd say that's all the evidence I need to justify my credence.
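    These counts can be checked exactly at a smaller scale. The sketch below assumes the setup implicit in the figures quoted in this thread: 2^N participants, one per possible sequence of N tosses, with the all-heads participant interviewed 2(2^N - 1) times and every other participant interviewed once. N = 10 stands in for 100:

```python
from fractions import Fraction

# Assumed extreme-example setup (reconstructed from the 2/3 and 1/2^100
# figures quoted earlier in the thread, not spelled out in this excerpt).
N = 10
total = 2 ** N                          # number of participants
all_heads_interviews = 2 * (total - 1)  # interviews for the all-heads participant
other_interviews = total - 1            # one interview for each other participant

# Fraction of sitter-slots that belong to an all-heads interview:
sitter_fraction = Fraction(all_heads_interviews,
                           all_heads_interviews + other_interviews)
print(sitter_fraction)            # 2/3

# Fraction of participants who are the all-heads participant:
participant_fraction = Fraction(1, total)
print(participant_fraction)       # 1/1024, i.e. 1/2^N
```

    The 2/3 sitter-fraction is independent of N, while the participant-fraction shrinks as 1/2^N, which is the asymmetry the post appeals to.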
  • Pierre-Normand
    2.4k
    Neither the sitter nor the participant should update their credence to match the other's. (Michael)

    In that case, in the secret mission scenario, it looks like Sergeant Sue and G.I. Jane are poised to have a fight, or at least a heated argument, regarding where the secret package should be delivered. Sergeant Sue would advocate for its delivery to the allied field hospital, relying on her 1/3 credence, while G.I. Jane, holding a 1/2 credence, would maintain that the destination of the package makes no difference. (The parameters of the scenario could be tweaked to make the dispute more pronounced, yielding 1/6 vs 6/11 credences, say.)

    Alternatively, Sue and Jane could come to realize that their apparently divergent credences target different classes of events (which is what I have argued) and conclude that both of them are true, but that it's the 1/3 credence that is relevant for determining the likely contents of the package they are presently dealing with.
  • Pierre-Normand
    2.4k
    Here is another discussion with GPT-4 that rehearses my pragmatist conception of the dispute between halfers and thirders.

    PN:

    Let's explore a variation of the Sleeping Beauty problem. In this version, a six-sided die is tossed to determine the course of the experiment. Should the die result in 'six', Jane will be awakened six times consecutively. Otherwise, she will experience only a single awakening. This sequence will unfold over six days, with a single run of the experiment.

    In the case where the die does not land on six, Jane's lone awakening will occur on the sixth day of the experiment, and she will be released on the seventh day. The die outcome also determines the location of Jane's stay: 'six' results in her lodging in the East Wing, while any other outcome places her in the West Wing.

    Aunt Betsy, who plans to pick Jane up upon her release, is aware that Jane checked into the facility on June 30th and will be released on July 7th. On July 1st, after the die has been rolled but before any awakenings, Betsy calls the facility to learn where she should pick up Jane. The manager informs her that this information cannot be disclosed directly but can be communicated to Jane, who is allowed to send a response.

    To ensure that Jane's knowledge doesn't impact her belief in the probability that a 'six' was rolled, the manager forwards Betsy's inquiry to Jane during each awakening, irrespective of the die result. Jane's responses are collected, and only one is sent to Aunt Betsy on the evening of July 6th, during Jane's final sleep.

    Upon receiving Betsy's inquiry, Jane estimates her belief in the probability of the die having landed on 'six', which she deduces to be 6/11. She reasons that Betsy should therefore wait for her at the East Wing, where she believes there is a 6/11 chance she is being housed.

    By contrast, from Betsy's perspective, there should be a 5/6 probability that the die didn't land on 'six'. This perspective, grounded in observations from multiple hypothetical experiment runs, indicates Jane is likely housed in the West Wing with a 5/6 probability. Despite stemming from the same fundamental setup, their credences appear to conflict: Jane and Betsy can't both be correct about where Jane is most likely to be released from. What went wrong?

    GPT-4:

    This is an interesting variation of the Sleeping Beauty Problem, and the apparent conflict arises from a fundamental difference in the knowledge and perspectives of Aunt Betsy and Jane.

    From Jane's perspective, each awakening she experiences is a random draw from all her potential awakenings. If the die landed on 'six', she will be awakened six times, and if the die did not land on 'six', she will be awakened once. So when she wakes up and contemplates her situation, she reasons that there are six times more ways she could have found herself awake in the East Wing compared to the West Wing, and thus, she assigns a 6/11 credence to being in the East Wing.

    On the other hand, Aunt Betsy's perspective is fundamentally different. She knows that the experiment will last for seven days and that Jane will only be released on the seventh day. She does not know whether Jane was awakened multiple times or just once, and she is not experiencing the awakenings as Jane is. Her 5/6 credence that Jane is in the West Wing is based on the probability of the die roll, independent of Jane's awakening experiences.

    So, the difference in their credences comes from the different ways they experience or perceive the experiment. Jane's credence is based on her waking experiences, while Aunt Betsy's is based on the initial conditions of the experiment. While this might appear to be a conflict, it's simply a result of their different perspectives on the same situation. This situation also underscores the subjective nature of Bayesian probabilities, which are personal degrees of belief and may vary between individuals even when considering the same event, based on their differing perspectives or knowledge about the situation.

    PN:

    I very much agree with everything that you said, but the solution of the puzzle involves reconciling Jane's credence with the fact that her aunt would indeed pick her up at the wrong wing five times out of six if she were to rely on Jane's message (and hence why Jane should rethink it).

    GPT-4:

    This is a great thought experiment. It reminds us that although probabilities might be personally reasonable, they can't always be used interchangeably between different people or contexts, especially when the information set or the perspective changes.

    The solution to this puzzle lies in realizing that Jane, when she sends the message, is aggregating over multiple awakenings. The 6/11 credence that Jane assigns to being in the East Wing is not tied to any specific day of the experiment but is an average over all her possible awakenings. On each awakening, she updates her belief based on the premise that she's equally likely to be awake on any given day.

    However, Aunt Betsy doesn't care about Jane's experience across all her awakenings. She cares about Jane's location on a specific day, the seventh day. Jane needs to acknowledge this and should instead reason as follows:

    If the die landed on 'six', Jane will have six awakenings in the East Wing.
    If the die didn't land on 'six', Jane will have one awakening in the West Wing.
    However, from Jane's perspective, each of her awakenings is equally likely. Thus, if Jane knew it was her final awakening, her credence in being in the East Wing would only be 6/7, not 6/11. If it is not her final awakening, Jane can deduce that she must be in the East Wing. Therefore, Jane should send two types of messages: one indicating East Wing if it is not her final awakening, and another indicating a 6/7 chance of being in the East Wing if it is her final awakening.

    From Aunt Betsy's perspective, she would need to know if it's Jane's last awakening, which she doesn't. But if the manager were to choose to send the message indicating the East Wing, if any, or the other message otherwise, Aunt Betsy's best strategy would be to follow the message's instructions and pick Jane up at the East Wing if the message indicates so, otherwise at the West Wing. This strategy would result in Aunt Betsy waiting at the correct wing six out of seven times. This may not match perfectly Jane's 6/11 credence but it's the best strategy that can be devised from their shared perspective.

    PN:

    Aunt Betsy doesn't know the experimental setup, so I equate her perspective with the observed result that (were she placed in this situation multiple times) she would find Jane coming out of the West Wing five out of six times. This is the same observation that Jane would make. (The seventh day doesn't count in the calculations since it's not a day of awakening and interview, it's just the day of discharge.)

    Jane's mistake was to infer, from the fact that she is located in the East Wing six out of eleven times when she awakens, that she comes out of the facility from the East Wing six out of eleven times when she is discharged. It's as simple as that.

    Six of her awakenings that occur in the East Wing result in one single discharge from that wing, which is why she should direct her aunt to the West Wing, where 5 out of 6 of her discharges occur (each following a single awakening), despite the fact that it is *now* (on the occasion of her present awakening) more likely than not that she is in the East Wing.

    GPT-4:

    Ah, I see! Thank you for the clarification. Yes, Jane seems to have confused her credence for being in the East Wing at the time of any awakening (6/11) with the probability of being discharged from the East Wing at the end of the experiment. As you mentioned, the error arises because six awakenings in the East Wing all lead to one single discharge from the East Wing, while one awakening in the West Wing leads to one discharge from the West Wing.

    Therefore, the probability that Jane will be discharged from the East Wing is indeed 1/6, not 6/11. Conversely, the probability that she will be discharged from the West Wing is 5/6. So, Jane should inform Aunt Betsy to wait for her at the West Wing, since that is where she will most likely be discharged from. This aligns with the observed result that Aunt Betsy would find Jane coming out of the West Wing five out of six times if this scenario were to be repeated multiple times.

    Thank you for presenting this interesting variation of the Sleeping Beauty problem. It indeed reinforces the point that probabilities can't be used interchangeably between different contexts and reminds us to always consider the exact event or outcome that the probability is referring to.

    PN:

    My variation of the Sleeping Beauty problem is designed to demonstrate that conflicting credences don't necessarily stem from incompatible epistemic viewpoints. Rather, they often arise from differing interpretations of what the credences actually represent. In the Sleeping Beauty problem, the confusion is compounded by the fact that different credences—1/2 and 1/3—are both assigned to the same 'state' (Heads) of an object (a fair coin). However, what truly distinguishes these credences, and explains their different values, is the uniquely individuated sets of events each considers. For a 'halfer', the credence is based on the ratio of Heads-experimental runs to total experimental runs. Conversely, for a 'thirder', it's about the ratio of Heads-awakenings to total awakenings. The apparent conflict is not a matter of differing epistemic perspectives, but rather differing definitions of the credences in question.

    GPT-4:

    I understand your point and agree with it. The distinction you've made is essential and helps clear up the confusion surrounding the Sleeping Beauty problem. Many debates about probability and credence essentially boil down to differences in the definitions of what constitutes an event or an experimental trial. In this context, it's clear that what one person considers as an 'event' might differ substantially from another person's definition. By drawing attention to these differences in definition, we can gain a clearer understanding of why different credences are assigned in the first place and clarify seemingly conflicting probabilities. It's a perspective that adds a significant depth to the discussion and makes the resolution of the paradox more nuanced and comprehensive.

    PN:

    Coming back to the new variation, which was intended to highlight a particular pragmatic context in which Jane can—and should—make use of a 'halfer' conception of her credence P('six') = 1/6, there are other pragmatic concerns that can make her rely on the 'thirder' conception. One such context would be for her to find an opportunity to escape from the facility and needing to prepare accordingly depending on her being hosted in the East or West wing. One of them might be surrounded by a moat filled with crocodiles and the other by a forest infested with lions. Since, in this case, the escape opportunities are distributed evenly across occasions of awakening, Jane should prepare her escape on the basis of a 'thirder' definition of her credence P('six') = 6/11.

    GPT-4

    Yes, that's an excellent extension of the scenario. If Jane finds herself in a situation where she must make a decision based on her current location during each awakening, then it is indeed more appropriate for her to use the 'thirder' credence. Given the uniform distribution of escape opportunities across awakenings, Jane's best strategy would be to prepare for the highest probability scenario in her current awakening, which is in the East Wing according to 'thirder' reasoning.

    Your point here further underlines the importance of considering the context and its implications when deciding which credence to rely on. The 'halfer' and 'thirder' perspectives are not mutually exclusive or inherently right or wrong; instead, they offer different lenses through which to view the problem, each one more useful in different situations. It's the specifics of the situation and the nature of the decision to be made that determine which perspective should be used.
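    The two credences at issue in this exchange, the per-awakening 6/11 and the per-discharge 1/6, can be checked with a short exact calculation over the setup as described:

```python
from fractions import Fraction

# Die-roll variant: 'six' -> six awakenings in the East Wing;
# any other face -> one awakening in the West Wing.
# Either way there is exactly one discharge per run.
p_six = Fraction(1, 6)

# Per-awakening (thirder) credence that the die landed on six:
six_awakenings = p_six * 6            # expected East Wing awakenings per run
other_awakenings = (1 - p_six) * 1    # expected West Wing awakenings per run
print(six_awakenings / (six_awakenings + other_awakenings))   # 6/11

# Per-discharge (halfer-style) probability of coming out of the East Wing:
print(p_six)                                                  # 1/6
```

    Both figures are correct about their own reference class: 6/11 of awakenings happen in the East Wing, while only 1/6 of discharges happen from it.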
  • Benkei
    7.7k
    Isn't this problem a consequence of asking the wrong question, because the question is ambiguous? Given the setup, and that you'd be woken twice if it is tails, what's the likelihood now that you're awake, that the coin shows tails? That seems to resolve the ambiguity or am I missing something?
  • Pierre-Normand
    2.4k
    Isn't this problem a consequence of asking the wrong question, because the question is ambiguous? Given the setup, and that you'd be woken twice if it is tails, what's the likelihood now that you're awake, that the coin shows tails? That seems to resolve the ambiguity or am I missing something? (Benkei)

    In the Sleeping Beauty problem, both main parties to the debate agree that it's Sleeping Beauty's credence while she is being awakened ("now") that is at issue. 'Halfers' assert that since Sleeping Beauty gains no new information when she awakens, her credence that the fair coin landed on heads should remain P(H) = 1/2. On the other hand, 'Thirders' argue that upon awakening, Sleeping Beauty should update her credence to P(H) = 1/3.

    You are right to point out that an ambiguity is present, but it isn't resolved by simply specifying the time span during which (or relative to which) Sleeping Beauty should evaluate her credence.

    Consider two types of experimental runs: "H-runs" (following a heads coin toss result and yielding a single awakening) and "T-runs" (following a tails coin toss result and yielding two awakenings). Halfers argue that even though Sleeping Beauty experiences more awakenings during a T-run, this doesn't make such a run more likely. Therefore, upon awakening, her credence in being part of a H-run or a T-run should remain at 1/2 for each.

    Conversely, Thirders argue that Sleeping Beauty should update her credence in P(H) to 1/3 upon awakening, since in the long run, one-third of her possible awakenings occur as a result of the coin landing heads.

    At first glance, it might seem that Halfers focus more on the entirety of the experimental runs, while Thirders focus on individual awakening episodes. However, this observation doesn't resolve the dispute.

    There are biconditional relationships between the propositions "The coin landed heads", "I have presently awoken in a H-run", and "I am presently experiencing a H-awakening". The same relationships exist for tails. Halfers and Thirders each believe that Sleeping Beauty can make valid inferences based on these relationships, leading to their respective credences. (It is also worth noticing that there is a many-to-one relationship between T-awakenings and T-runs.)

    Because those biconditionals hold, the Halfers argue that Sleeping Beauty is entitled to infer from "P(H-run) = 1/2" that "P(H-awakening) = P(H) = 1/2" is true as well.

    Likewise, the Thirders argue that Sleeping Beauty is entitled to infer from "P(H-awakening) = 1/3" that "P(H-run) = P(H) = 1/3" is true as well.

    Those inferences appear valid, so the dispute remains.

    The problem is that we have not resolved the ambiguity inherent in all three of the probabilities P(H), P(H-awakening), and P(H-run).

    The ambiguity, as I see it, can only be resolved by specifying the method of verifying the target events. If the verification is performed once at the end of each experimental run, the Halfer interpretation would be upheld. However, if the verification is performed during each individual awakening, it's the Thirder interpretation that would be upheld. (Notice that it's because of the aforementioned many-to-one relationship that the specification of the verification method disambiguates the ratio of events that Sleeping Beauty's credence tacitly refers to.)
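    To illustrate, a minimal simulation of the original setup (heads: one awakening; tails: two) shows how the two verification methods yield the two credences:

```python
import random

# Heads-frequency under the two verification methods described above:
# verify once per experimental run, vs verify at each individual awakening.
random.seed(1)
runs = 200_000
per_run, per_awakening = [], []
for _ in range(runs):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2
    per_run.append(heads)                       # one verification per run
    per_awakening.extend([heads] * awakenings)  # one verification per awakening

halfer = sum(per_run) / len(per_run)
thirder = sum(per_awakening) / len(per_awakening)
print(round(halfer, 3))    # ~0.5   (fraction of runs that are H-runs)
print(round(thirder, 3))   # ~0.333 (fraction of awakenings that are H-awakenings)
```

    The same coin tosses, tallied against two different denominators, produce both answers, which is the many-to-one relationship doing its work.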

    Each verification method has its uses, and the failure to specify them is what gives rise to the apparent conflict. My discussion with GPT-4 in the post above provides a more detailed exploration of this pragmatist interpretation by means of practical examples.
  • Pierre-Normand
    2.4k
    Here is another interesting aspect of the Sleeping Beauty problem. What is the nature of the information that Sleeping Beauty receives on each occasion of her awakening, and how can this information be communicated?

    PN:

    I thought of a different aspect of the problem while imagining another variation. Imagine that Leonard Shelby, suffering from anterograde amnesia, remains at home for the next seven days. Let us assume that all his new memories are wiped out overnight every night. If a coin lands heads, a door-to-door vacuum cleaner salesman will ring his doorbell once in the coming week. If it lands tails, a door-to-door broom salesman will ring his doorbell every day for the next six days. Those details are written down in Shelby's memento that he reads every morning. He doesn't have a calendar and won't know what the day of the week is until his niece visits him on the seventh day (Saturday). Predictably (and he also knows that to be true) by the end of the week, his broom closet will contain either one vacuum cleaner (if the coin landed heads) or six new brooms (if the coin landed tails).

    According to 'Halfer' reasoning, whenever the doorbell rings, Shelby should be expecting to meet the vacuum cleaner salesman with credence P(H) = 1/2. According to 'Thirder' reasoning, his credence should be P(H) = 1/7.

    Suppose Shelby is a Thirder (as I also am, in this context). The ringing of the doorbell is what provides the evidence (i.e. "there is a salesman visit today") that enables Shelby to update his credence P(H) from 1/2 to 1/7. Shelby knows, though, that come Saturday, after his niece arrives, but before he looks into the closet, his credence will revert back to P(H) = 1/2. This is because his amnesia makes him lose the information that he had gained whenever he heard the doorbell ring. What is the nature of this information that he lost? After all, he still knows from his memento that the doorbell must have rung at least once.

    Before I tell you how I propose to explain how this information could be conceptualized, I'd like to receive your input about the thought experiment and your own speculation about the nature of the information that Shelby effectively forgets, thus causing him to revert his credence back to 1/2 even on Saturday morning.

    GPT-4:

    Your thought experiment is intriguing and sheds a new light on the problem. By introducing a character like Leonard Shelby with anterograde amnesia, you add a layer of complexity that makes the question about the nature of the "forgotten" information particularly interesting.

    If we delve into this, we can think of the "forgotten" information not merely as facts, but rather as temporal experiences. Whenever the doorbell rings, Shelby updates his credence based on the fact that a visit has occurred. As a Thirder, this evidence updates his credence to P(H) = 1/7, since each visit is more likely to be a broom salesman if the coin landed on tails. The ringing of the doorbell provides him with the evidence that a new day has passed.

    However, due to his amnesia, he forgets the experience of the passing days, despite knowing from his memento that the doorbell must have rung at least once. This lack of memory regarding the time elapsed (and hence the number of potential salesmen visits) is the missing information. It's not just the occurrence of the doorbell ringing (which he knows happened), but the frequency and distribution of those occurrences across days which is forgotten.

    When his niece visits him on Saturday, he doesn't remember the number of past salesman visits (how many times the doorbell rang), only that there has been at least one. He therefore can't distinguish between the two scenarios (one vacuum cleaner salesman visit or six broom salesman visits). This lack of distinction, due to his amnesia, causes him to revert his credence back to P(H) = 1/2.

    So, in summary, the forgotten information can be seen as the temporal sequence and frequency of the events (the doorbell ringing), not merely the events themselves.

    PN:

    Consider, though, that whenever the doorbell rings he doesn't know either how many times this happened—only that it happened at least once. But he already knew that this would happen at least once, from his memento, and will know that it must have happened once, come Saturday, still from his memento.

    GPT-4:

    Ah, I see what you mean. In that case, the information that's being lost isn't about the events themselves or their frequency, but the contextual information about when and how the events occurred in relation to his own experience of time.

    When the doorbell rings, Shelby gains the information that he is currently experiencing a salesman's visit. However, due to his amnesia, he loses the contextual information of this event having occurred in his subjective timeline. Even though he knows from his memento that at least one visit must have occurred, he doesn't have the personal, experiential connection to that event anymore.

    In other words, the "forgotten" information is the personal, experiential context of the event happening to him in the here and now. While he remembers that a doorbell ringing event was supposed to happen (from the memento), he has lost the immediate, personal experience of it happening to him in the moment. It's the disconnection from the immediate experience of the event that causes him to revert his credence back to P(H) = 1/2 on Saturday.

    Therefore, the nature of the information being lost due to his amnesia might be considered his personal, immediate connection to the event - the event as situated within his experience of the passing of time.

    PN:

    This is an interesting way to think about it. Here is the alternative I had imagined. Suppose Shelby is enabled, as part of the protocol of the experiment (all written down in his memento), whenever the doorbell rings, and before answering, to throw four dice and, if they all land on six, to write down in his memento "today the doorbell rang". So, on the rare occasion when he lands four sixes once during the week, the information that he thus expresses will be retained by him come Saturday. But we now can understand better what this information is. It's not so much the propositional content of the expression "today the doorbell rang" that enables him to update his credence to P(H) = 1/7 (on Saturday) but rather the fact that the proposition (or memory) was recorded at all. Since if the coin landed heads, there would have been only one opportunity for Shelby to roll four sixes and record the event, whereas if the coin landed tails there would have been six times as many opportunities, the fact that this recording happened enables Shelby, come Saturday, to update his credence from 1/2 to 1/7 through normal Bayesian reasoning.

    GPT-4:

    That's a fascinating approach! The way you've set it up cleverly exploits the difference in opportunity frequency between the two outcomes (Heads vs Tails) of the coin toss. This mechanism you've devised essentially equates the memory recording event to a "successful" dice roll event. It's not the content of the memory per se that matters, but rather the existence of the memory itself.

    By creating a memory record when the doorbell rings and also meeting the condition of rolling four sixes, Shelby essentially marks one instance of the doorbell ringing. The dice roll acts as a filter, a probabilistic gatekeeper, reducing the chance of recording a memory. Thus, if a memory exists, it's more likely that it came from a context where there were more opportunities to create it, which corresponds to the Tails outcome.

    The brilliant part of this mechanism is that, as you pointed out, it doesn't rely on the information content of the memory itself. Instead, it relies on the existence of the memory as an indicator of the number of chances Shelby had to create it. This effectively bypasses the issue of Shelby's amnesia wiping out the informational content of his experiences.

    Indeed, by merely establishing a non-zero probability of recording an event, we have enough to perform a Bayesian update. This highlights how the thirder argument operates: not through a focus on specific events, but rather through a focus on the distribution of opportunities for those events to occur. So the 'forgotten information' is indeed the greater number of opportunities for creating a memory (of hearing the doorbell ring) that exists if the coin landed tails.

    In essence, this set-up gives Shelby a way to "bypass" his amnesia and track the greater frequency of opportunities for hearing the doorbell in the Tails scenario. The presence of a memory record, even if its content is inaccessible or irrelevant, provides an indicator of the "weight of chances" or frequency distribution for the two different outcomes, allowing for a Bayesian update.

    PN:

    And so, once we've clarified the nature of the information that Shelby is able to convey to himself (i.e. to his future self) by means of a note in his memento, we can apply the same reasoning regarding the nature of the information that he receives in the first place whenever the doorbell rings, and that enables him to update his credence immediately by means of ordinary Bayesian reasoning.

    GPT-4:

    Yes, that's a coherent and insightful application of your reasoning. The act of the doorbell ringing is akin to the dice rolling event in your mechanism. Each ring is an opportunity for Shelby to gain information about the outcome of the coin toss. His immediate Bayesian update is not contingent on knowing the specific number of times the doorbell has rung; rather, it is contingent on the opportunity distribution - the realization that, because the doorbell is ringing, there's a greater chance of being in the scenario where there are more opportunities for the doorbell to ring (Tails).

    This reasoning shows that information in this context is not so much about concrete details (like the specific number of doorbell rings) but about the probabilistic framework within which events are happening. What Shelby "loses" due to his amnesia is not the record of specific events (he can still recall from his memento that the doorbell must have rung at least once) but the trace of the more frequent opportunity distribution that comes with the Tails outcome.

    In sum, the key is understanding the weight of chances - the different opportunity distributions across different scenarios - as the information being received and updated upon. This is precisely what the "thirder" argument in the Sleeping Beauty problem leverages. The information lost due to Shelby's amnesia - or due to Sleeping Beauty's induced forgetfulness - is an understanding of this probabilistic context.

    PN:

    I couldn't have said it better myself!

    GPT-4

    I'm glad you found the explanation satisfactory! Your thought experiment with Shelby provided a really engaging and insightful way to consider the nuances of the Sleeping Beauty problem and how credence, memory, and probability interact. I enjoyed our discussion. Feel free to bring up any other philosophical or scientific puzzles you'd like to dissect!
  • Srap Tasmaner
    4.9k


    Since you're still a committed Thirder, here's what bothers me:

    (1) When a coin is tossed it has only two proper outcomes. (No leaners, no edges, and the usual ceteris paribus.) SB, upon being awakened, finds herself in one of three possible situations. But that doesn't work. A coin toss cannot, by itself, engender three distinct situations for her to be in. For instance, "for all she knows," this is her second tails interview; but there is no way to go from the coin toss directly to the second tails interview. Not without passing through the first interview. So SB's intuition that three things can happen to her requires there to be some other determinant of her situation; what is that other factor?

    (2) The setup confounds wagering arguments. That won't matter much to a lot of people, but it's uncomfortable. Annoying. Ramsey used Dutch book arguments from the beginning, and despite their limitations they can be clarifying. Each time I've tried to construct a sane payoff table I've failed. I've wondered lately if there might be a conditional wager that comes out rational, but I can't work up enough hope of success to bother. Partial beliefs, within suitable limits, ought to be expressible as wagers, but not in this case, and that blows.
  • jgill
    3.8k
    They ask her one question after each time she awakens, however: What is the probability that the coin shows heads?

    Meaning, it comes up heads. 1/2
  • Benkei
    7.7k
    I still think it's a language issue and it revolves around the verb "shows". I agree with Srap it doesn't make sense to pretend a coin has three states. The coin can't "show" three options but the "coin when shown to SB" can and these are simply different things. It's just that mathematicians only have a hammer that makes this look like a nail.
  • Michael
    15.6k
    In Elga's paper the question is "to what degree ought you believe that the outcome of the coin toss is Heads?"
  • Benkei
    7.7k
    What does she mean? If the questioner clarifies what she means, you can actually give an answer, instead of trying to guess what it means and ending up with two competing interpretations that are at odds with each other. This "problem" is really of a totally different type, to me, than, for instance, the two envelope problem, where I don't detect any ambiguity.

    In this question, it revolves around the word "believe". If you just straight up asked, "what was the likelihood of the coin toss being heads?", then the number of times she wakes up is obviously irrelevant. If you ask, "what's the likelihood you will see heads if I bring you the coin?", then the number of times she could have been woken up becomes relevant. It seems more about semantics than probability to me. But then I have a very strong aversion to probability calculations because I sucked at them as a kid. So there's that. :wink:
  • Pierre-Normand
    2.4k
    Since you're still a committed Thirder, here's what bothers me:

    (1) When a coin is tossed it has only two proper outcomes. (No leaners, no edges, and the usual ceteris paribus.) SB, upon being awakened, finds herself in one of three possible situations. But that doesn't work. A coin toss cannot, by itself, engender three distinct situations for her to be in. For instance, "for all she knows," this is her second tails interview; but there is no way to go from the coin toss directly to the second tails interview. Not without passing through the first interview. So SB's intuition that three things can happen to her requires there to be some other determinant of her situation; what is that other factor?
    Srap Tasmaner

    Actually, I stopped being a committed Thirder a few days ago, when I began developing a pragmatist conception of the dispute between Thirders and Halfers.

    If Sleeping Beauty would hold that, on the occasion of an awakening, there is (from her point of view) a 1/3 chance that the coin landed on heads, that doesn't commit her to believing that there is any causal determinant of her awakening other than the coin toss result. It's just part of the setup of the experiment that she will be awakened one more time if the coin toss result is tails. The coin toss result determines the Tuesday awakening, while the Monday awakening is independent of it. So, I'm not sure I understand your question.

    (2) The setup confounds wagering arguments. That won't matter much to a lot of people, but it's uncomfortable. Annoying. Ramsey used Dutch book arguments from the beginning, and despite their limitations they can be clarifying. Each time I've tried to construct a sane payoff table I've failed. I've wondered lately if there might be a conditional wager that comes out rational, but I can't work up enough hope of success to bother. Partial beliefs, within suitable limits, ought to be expressible as wagers, but not in this case, and that blows.

    Can you explain why the payoff tables you've come up with are unsatisfactory to you? I can describe at least one that seems coherent:

    On each occasion of awakening, Sleeping Beauty accepts a two-to-one wager that the coin landed tails. If the coin was heads up on that occasion, she loses her bet. If it was tails up, she wins half the amount of her bet (and gets back her original bet). Her expected value at the end of the experiment is zero since whenever the coin landed heads she loses her whole bet while whenever it landed tails she wins half her bet twice. Her accepting such odds and expecting to break even reflects her credence that P(H) = 1/3.
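
    For what it's worth, this payoff scheme is easy to check numerically. A minimal sketch (the 1-unit stake is arbitrary; the 2:1 odds and the one-versus-two awakenings are as described above):

    ```python
    import random

    # On every awakening Sleeping Beauty stakes 1 unit on tails at 2:1 odds.
    # Heads: one awakening, she loses her stake once.
    # Tails: two awakenings, she wins half her stake twice.
    random.seed(0)
    runs = 1_000_000
    total = 0.0
    for _ in range(runs):
        if random.random() < 0.5:    # heads
            total -= 1.0
        else:                        # tails
            total += 2 * 0.5
    avg = total / runs
    print(f"average payoff per experiment: {avg:+.4f}")  # ~ 0
    ```

    The average payoff per run of the experiment hovers around zero, which is what her accepting those odds at credence P(H) = 1/3 predicts.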

    The reason why this doesn't settle the dispute in favor of the Thirders is that there are alternative ways to define Sleeping Beauty's credence P(H).
  • Pierre-Normand
    2.4k
    I still think it's a language issue and it revolves around the verb "shows". I agree with Srap it doesn't make sense to pretend a coin has three states. The coin can't "show" three options but the "coin when shown to SB" can and these are simply different things. It's just that mathematicians only have a hammer that makes this look like a nail.Benkei

    The concept of Sleeping Beauty's credence P(H) indeed goes beyond stating the coin's inherent likelihood to land heads or tails. In the context of this experimental setup, it represents her epistemic relation to the coin. A more precise interpretation might suggest that Sleeping Beauty's credence P(H) is her long-term expectation of the coin landing heads, verified on each occasion she expresses this credence. But this doesn't completely resolve the issue: the original statement of the problem fails to specify what constitutes an individual act of verification of her credence, such that we could establish the target ratio unambiguously. As I've previously illustrated with various examples, different pragmatic considerations can lead to different verification methods, each yielding different values for P(H), aligning with either the Halfer or the Thirder stance.
  • Michael
    15.6k
    The original statement of the problem fails to specify what constitutes an individual act of verification of her credence, though, such that we can establish the target ratio unambiguously. As I've previously illustrated with various examples, different pragmatic considerations can lead to different verification methods, each yielding different values for P(H), aligning with either the Halfer or Thirder stance.Pierre-Normand

    It sort of addresses this in a footnote:

    The precise effect of the drug is to reset your belief-state to what it was just before you were put to sleep at the beginning of the experiment. If the existence of such a drug seems fanciful, note that it is possible to pose the problem without it — all that matters is that the person put to sleep believes that the setup is as I have described it.

    The problem (and Elga's solution) have nothing to do with how to "verify" one's credence. It simply asks what a rational person should/would believe were they told the rules of the experiment, woken up, and asked their credence.

    So for the sake of the problem we can assert that, unknown to Sleeping Beauty, she's only actually woken up once.
  • Pierre-Normand
    2.4k
    If you ask, what's the likelihood you will see Heads if I bring you the coin, then the number of times she could have been woken up becomes relevant.Benkei

    One argument that Halfers make is that being given many occasions to verify the exact same thing, rather than verifying it once, doesn't change the probability of the outcome.

    This argument doesn't work in the context of the Sleeping Beauty problem, it seems to me. This is because the number of opportunities, in this case, is probabilistically dependent on the coin toss result.

    But then, the Halfers have the rejoinder that since Sleeping Beauty is only aware of the circumstances of her current awakening, and not of the total number of awakenings, she gains no information upon awakening that she didn't already have when the experiment was set up (and when her credence P(H) was 1/2).

    My Leonard Shelby (Memento) variation above was meant to clarify the nature of the information that Sleeping Beauty gains on the occasion of her awakening, and that enables her to update her credence P(H) by means of ordinary Bayesian reasoning.
  • Michael
    15.6k
    (2) The setup confounds wagering arguments. That won't matter much to a lot of people, but it's uncomfortable. Annoying. Ramsey used Dutch book arguments from the beginning, and despite their limitations they can be clarifying. Each time I've tried to construct a sane payoff table I've failed. I've wondered lately if there might be a conditional wager that comes out rational, but I can't work up enough hope of success to bother. Partial beliefs, within suitable limits, ought to be expressible as wagers, but not in this case, and that blows.Srap Tasmaner

    You might be interested in When betting odds and credences come apart: More worries for Dutch book arguments.
  • Pierre-Normand
    2.4k
    The problem (and Elga's solution) have nothing to do with how to "verify" one's credence. It simply asks what a rational person should/would believe were they told the rules of the experiment, woken up, and asked their credence.Michael

    This is indeed a flaw inherent to the statement of the problem. What is a rational person supposed to believe about a probabilistic outcome that is ambiguously defined? To ask how probable it is that the coin landed on heads involves a tacit reference to the counterfactual circumstances where you are presently facing a (hidden) coin that didn't land the way it actually did. And this is not merely a reference to the coin's intrinsic propensities to land this or that way when tossed.

    Suppose half the people in your neighborhood are Italians and half are Tunisians. You don't know your left door neighbor. As you meet him for the first time, your credence P(Tunisian) = 1/2 (let us suppose). And then he introduces himself as Marco Gallo and announces that he's Italian. The reason why your credence about this man who was standing in front of you was P(T) = 1/2 wasn't because you thought this man had been equally likely to be born Italian or Tunisian, in the sense that this man had an equal propensity to be born both ways. It's because of your knowledge of the statistical distribution of Italians and Tunisians in your neighborhood that you had this credence. It's, in other words, because of the manner in which you are epistemically related to this population of individuals and what makes you more or less likely to meet some of them. The Sleeping Beauty problem only appears to fully specify what constitutes an occasion for Sleeping Beauty to "meet" a coin that landed on heads. It leaves a crucial ambiguity regarding what constitutes such a meeting.
  • Michael
    15.6k
    To ask how probable it is that the coin landed on heads involves a tacit reference to the counterfactual circumstances where you are presently facing a (hidden) coin that didn't land the way it actually did.Pierre-Normand

    Not exactly, because if it's Monday the coin hasn't been flipped at all. It's only hidden if today is Tuesday and the coin is tails.
  • Michael
    15.6k
    I think it's also worth paying particular attention to the way Elga phrased the problem and the solution:

    When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

    ...

    I've just argued that when you are awakened on Monday, that credence ought to change to 1/3.

    ...

    But you were also certain that upon being awakened on Monday you would have credence 1/3 in H.

    The Tuesday interview is actually irrelevant to Elga's argument (which is why he says in that footnote that Sleeping Beauty only needs to believe that the experiment will be conducted in this way).

    So Elga argues that on Monday, before the coin has been flipped, Sleeping Beauty's credence (not knowing that it is Monday) should be P(Heads) = 1/3.
  • Pierre-Normand
    2.4k
    Not exactly, because if it's Monday the coin hasn't been flipped at all. It's only hidden if today is Tuesday and the coin is tails.Michael

    In the OP variation, the coin is flipped on Sunday before Sleeping Beauty's first awakening. Elga is right, though, that nothing substantially changes if the coin is flipped after Monday's (mandatory) awakening.

    The point about Sleeping Beauty's credence being about her epistemic relationship to the coin flip result (either past or future) remains valid. If Sleeping Beauty were informed that today is Monday, her credence about the upcoming flip, P(H), would remain at 1/2. Under these circumstances, her credence would be fully predicated on the inherent propensity of the coin to land on either side with equal probabilities. Conversely, if she were told that it's Tuesday, she would know without a doubt that the coin landed on tails. In this case, her credence, P(H) = 0, would be independent of the coin's inherent propensity and solely a result of her causal relationship with the outcome. When the day of her awakening remains unknown to her, her rational credence becomes a composite function of both possibilities—that today could be Monday or Tuesday. Consequently, it becomes a function of the coin's propensities, for example, that it is a fair coin, and of the causal relationship between the coin toss result and her potential awakening on Tuesday.
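
    If it helps, that composite credence can be written out explicitly. Here it is under the thirder's assignment of day-credences on an awakening (these day-probabilities are the thirder's assumption; a halfer would assign them differently):

    ```python
    # Thirder day-credences on any given awakening (an assumption, not
    # something the problem statement fixes): P(Monday) = 2/3, P(Tuesday) = 1/3.
    p_monday, p_tuesday = 2 / 3, 1 / 3
    # Conditional credences from the reasoning above:
    p_h_given_monday = 1 / 2   # fair coin, flip independent of the Monday awakening
    p_h_given_tuesday = 0.0    # a Tuesday awakening entails tails
    p_heads = p_monday * p_h_given_monday + p_tuesday * p_h_given_tuesday
    print(p_heads)  # -> 0.333... (= 1/3)
    ```

    The composite value is a weighted mix of the two cases, which is why it depends both on the coin's propensities and on the causal link between tails and the Tuesday awakening.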
  • PhilosophyRunner
    302
    This is an ambiguous claim. It is true that if you randomly select a seeing from the set of all possible seeings then it is twice as likely to be a tails-seeing, but the experiment doesn't work by randomly selecting a seeing from the set of all possible seeings and then "giving" it to Sleepy Beauty. It works by tossing a coin, and then either she sees it once or she sees it twice.Michael

    That is the difference in interpretation I am talking about. For you the question is asking about how a fair coin toss will land. For me it is about the seeings of the coin, not the coin toss.

    Let's say the SB experiment is carried out every week for a year - 52 times in total. On a given occasion that SB wakes up, what should her credence be that the coin at that given wake-up event is heads? I'll come back to this - let's call it problem A.

    Well let me do this experiment again with one small change - call it problem B. The SB experiment is done every week for a year - 52 times. But instead of asking SB her credence at each wake up event, we are going to take a picture at each wake up event, then select one for her to answer about at the end. At the end of the year we will have on average 26 pictures of heads and 52 pictures of tails, each representing a unique wake up event. They are all put in a bag and one is picked out, then SB is asked "at this particular wake up event, what is the probability that the coin was showing heads, as seen in the picture?" Of course the answer is 1/3.

    The answer to problem B is clearly 1/3 and I think we will both agree here. Problem A asks SB the same question: on a given wake-up event, she is asked in the moment about the probability of the coin showing heads. So the answer in problem A is also 1/3.
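
    The picture-bag version (problem B) is straightforward to simulate. A quick sketch, scaled up from 52 weeks to many more repetitions for stability:

    ```python
    import random

    # One picture per awakening: a heads week contributes one picture,
    # a tails week two. All pictures go in the bag; one is drawn at random,
    # so the chance the drawn picture shows heads is the heads fraction.
    random.seed(0)
    pictures = []
    for _ in range(100_000):                     # many repetitions of the experiment
        if random.random() < 0.5:
            pictures.append("heads")             # one awakening -> one picture
        else:
            pictures.extend(["tails", "tails"])  # two awakenings -> two pictures

    frac_heads = pictures.count("heads") / len(pictures)
    print(f"chance a randomly drawn picture shows heads: {frac_heads:.3f}")  # ~ 1/3
    ```

    The bag ends up with roughly twice as many tails pictures as heads pictures, so the drawn picture shows heads about a third of the time.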