• Sleeping Beauty Problem
    She also knows that the fact that she is awake eliminates (H,H) as a possibility. This is a classic example of "new information" that allows her to update the probabilities. With three (still equally likely) possibilities left, each has a posterior probability of 1/3. Since in only one is coin C1 currently showing Heads, the answer is 1/3.JeffJo

    Your proposed scenario certainly provides an interesting variation, but it doesn't quite correspond to the structure of the situation typically discussed in the literature, the one that seems to give rise to a paradox.

    In your scenario, there are four potential outcomes from the experiment, each of which is equally probable:

    HH (end) --> Never awakened
    HT HH --> Awakened once
    TH TT --> Awakened twice
    TT TH --> Awakened twice

    When Sleeping Beauty awakens, her credences corresponding to these four outcomes shift from {1/4, 1/4, 1/4, 1/4} to {0, 1/3, 1/3, 1/3}.
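
    Just to make that run-level arithmetic explicit, here is a minimal sketch in Python (using exact fractions; the dictionary keys are the day-one and day-two coin states listed above):

        from fractions import Fraction

        # The four equiprobable runs and the number of awakenings each yields
        runs = {"HH (end)": 0, "HT HH": 1, "TH TT": 2, "TT TH": 2}
        prior = {r: Fraction(1, 4) for r in runs}

        # Being awake rules out the zero-awakening run; renormalize at the run level
        total = sum(p for r, p in prior.items() if runs[r] > 0)
        posterior = {r: (p / total if runs[r] > 0 else Fraction(0)) for r, p in prior.items()}
        print(posterior)  # HH (end): 0, each of the other three runs: 1/3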

    However, in the scenario most frequently discussed, entire experimental runs in which Sleeping Beauty is awakened once are just as common as those where she is awakened twice. Furthermore, since there isn't any experimental run where Sleeping Beauty is not awakened at all, it's debatable whether her experiencing an awakening provides new information that would cause her to adjust her initial probabilities (as Halfers are inclined to argue).
  • Sleeping Beauty Problem
    But there are two sources of randomness in this example, the die and the coin.

    Similarly for all analyses that treat SB's situation as describable with two coin flips. We only have one.
    Srap Tasmaner

    Indeed, in my examples (labelled "First step" through "Fourth step"), there's only a single source of randomness, which consists in the random assignment of individuals to either population Pop-1 or Pop-2 (awakened once or twice with white or red tulips).

    Halfers contend that Sleeping Beauty's awakening cannot serve as evidence indicating she is more likely to be part of Pop-2, as there's nothing that allows her to distinguish an awakening in Pop-1 from one in Pop-2. Yet, the same reasoning can be applied to the inability to distinguish a 'six' roll from a loaded die versus a 'six' roll from a fair die; nevertheless, the occurrence of a 'six' increases the likelihood that the die is loaded.

    You're correct in stating that there's only one source of randomness in Sleeping Beauty's case, unlike the dice scenario. However, the two situations share a strong resemblance. The reason a 'six' outcome increases the probability that a die is loaded is that loaded dice generate more instances of players confronting a 'six'. Similarly, being part of Pop-2 in Sleeping Beauty's setup leads to more instances of self-aware awakenings. This is simply an analogy; for a more compelling argument, refer back to my cases 1 through 4 in the post you quoted.

    The halfer position comes back to individuation, as you suggested some time ago. Roughly, the claim is that "this interview" (or "this tails interview" etc) is not a proper result of the coin toss, and has no probability. What SB ought to be asking herself is "Is this my only interview or one of two?" The chances for each of those are by definition 1 in 2.

    Indeed, the choice between a Halfer (P(Pop-1) = 1/2) and a Thirder (P(Pop-1) = 1/3) updated credence is a matter of individuation. While I focused on the individuation of events, you seemed to suggest that different (more or less extended) conceptions of self might lead people towards one stance or another. This struck me as insightful, although personal psychological inclinations don't provide valid justifications. Currently, I don't identify as a Thirder or a Halfer. Rather, I believe that Thirders and Halfers are talking past each other because they each focus solely on one of two possible types of outcome distributions that could be considered in Sleeping Beauty's credence update. My previous "pragmatic" examples aimed at highlighting this duality (not a dichotomy!). When Sleeping Beauty wakes and considers her situation, is she weighing the opportunities to either evade or confirm her current situation (facing lions or crocodiles)? In this case, she should reason as a Thirder. Or is she weighing the opportunity to end, or validate, the nature of her ongoing predicament (and be rescued by Aunt Betsy) at the end of her current series of awakenings? If so, she should reason as a Halfer. The root question of what her credence should be upon awakening is inherently ambiguous, and the thought experiment is tailored to create this ambiguity.
  • Paradox of Predictability
    In the original scenario as I have described it, Ned reads the printout, but he only reads a part of it. And, importantly, he does not read a part of it where he is reading the printout -- that would be self-referentially problematic. Because there is no self-referentiality in the parts of the printout that Ned does read, there is nothing necessarily theoretically vicious about Ned reading some parts of the printout.NotAristotle

    If Ned only reads a part of the printout that doesn't mention him reading it, his actions wouldn't contradict the thesis of determinism. For instance, suppose determinism entails that, given some initial conditions at time t1 (along with the laws of nature), Ned will drink a glass of water at time t2. The printout Ned reads must inform him of this prediction. In response, Ned decides to drink a glass of orange juice instead, fulfilling his intention to act contrary to the prediction.

    We must consider when Ned's reading of the printout occurs. Does it occur before or after time t2 in the scenario you're envisioning?

    If it takes place after t2, then Ned's actions wouldn't actually contravene the prediction. Indeed, by then, he would have already drunk the glass of water, hence aligning with the prediction. However, if he reads the printout prior to t2 to pre-empt the predicted outcome, we run into the self-referential issue. In this case, the computer would need to forecast what occurs as a consequence of Ned reading the printout. This would invariably involve predicting Ned's reaction to the prediction, triggering the problematic self-reference loop I had mentioned.
  • Paradox of Predictability
    I suppose we could stipulate that Ned has enough information about his immediate environment to make an accurate prediction about how he will act. It doesn't really concern us whether this sort of information can, as a matter of practicality, be acquired; the concern is whether in principle, if this information were acquired, could Ned act in opposition to it. And the answer to that seems to be yes.NotAristotle

    I concur that we must accept, in principle, that this information can be acquired by Ned in order for his contrarian behavior to make any sense. Would Ned's decision to act contrary to the prediction be an arbitrary or unexplainable one? Surely not: we cannot attribute his behavior to the occurrence of miracles and then use it as evidence against determinism.

    But, if we consider that it is Ned's reaction to the acquired information (provided by a computer, for example) that prompts him to act differently, the computer's prediction must also account for the possibility that Ned will read and respond to its output. In other words, the computer's output becomes a component of the very chain of events it must predict, transforming Ned into a sort of "contrarian cog" within the deterministic system.

    This issue can be reduced to the challenge of programming a computer capable of modeling its own deterministic algorithmic process in such a way that it will output "yes" if it predicts it will output "no" and vice versa. This is a conundrum that can't be resolved in principle, yet this doesn't impact the deterministic nature of the computer's behavior. It rather highlights the impossibility of creating a self-referential prediction system within the boundaries of determinism.
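
    A toy sketch of the obstacle (the function names are illustrative, and the 'predictor' here is just another deterministic function):

        def contrarian(predictor):
            # Consult the purported predictor about this very process, then do the opposite
            forecast = predictor(contrarian)
            return "no" if forecast == "yes" else "yes"

        def naive_predictor(process):
            # A hypothetical predictor that always forecasts "yes"
            return "yes"

        print(contrarian(naive_predictor))  # prints "no": the forecast is falsified

    Whatever deterministic predictor is plugged in, contrarian inverts its forecast, so no predictor can be right about it, even though every step of the computation is deterministic.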
  • Climate change denial
    But historically it has. There are a multitude of multilateral treaties that prove even enemies will agree on all sorts of things. WTO, UN, Geneva and the Hague conventions, Vienna Convention on the laws of treaties, Vienna Convention on diplomatic relations, etc.Benkei

    And the Montreal Protocol that has been ratified by all member states of the United Nations.
  • Sleeping Beauty Problem
    But she is only asked a question once in the whole year. One of the wakings is randomly selected to be the one where she is asked the question. On this randomly selected waking, she is asked the question "what is the probability that this randomly selected waking shows a heads." The answer is 1/3, as per Problem A in my previous post.PhilosophyRunner

    A Halfer might argue that Sleeping Beauty being posed such a question, along with the provided context of the question's delivery (i.e., through a random selection among all awakenings), indeed provides the grounds for Sleeping Beauty to update her initial credence P(H) from 1/2 to 1/3. However, they might also assert that this type of questioning doesn't exist in the original setup. Therefore, they might insist that, in the absence of such randomly assigned questioning, Sleeping Beauty should maintain her credence of 1/2.

    A Thirder might counter-argue by saying: The crucial element that turns the questioning into information, enabling Sleeping Beauty to update her credence, is the fact that it results from randomly selecting an awakening from all possible awakenings. Given that there are twice as many awakenings under the 'tails' condition as under 'heads,' a random selection is twice as likely to yield a 'tails' awakening. We must recognize that Sleeping Beauty doesn't necessarily require external assistance to isolate her current awakening in a manner that is both random and statistically independent of the coin toss result.

    Imagine an alternative method where an external agent, let's call her Sue, randomly selects awakenings from the complete set. Sue could examine a list of all scheduled awakenings, roll a die for each, and mark the awakening as evidence-worthy if the die lands on 'six'. The selected participants would then be equipped to update their credence P(H) to 1/3 after being presented with the evidence of their selection by Sue.

    Now, it doesn't matter who performs the die-rolling selection; what's important is that any awakening marked as evidence-worthy is selected randomly by a method independent of the coin toss outcome. The participants themselves, not Sue, could roll the die and, if it lands on 'six,' consider their current awakening to have been randomly selected (as it would indeed have been!) from the entire set of awakenings. This random selection allows Sleeping Beauty to single out the fact of her current awakening as evidence for updating her credence P(H) to 1/3.
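
    For what it's worth, here is a quick Monte Carlo sketch of this selection procedure (a fair coin and a fair die are assumed, and the sample size is arbitrary):

        import random

        def run():
            heads = random.random() < 0.5
            awakenings = 1 if heads else 2
            # Mark an awakening as evidence-worthy when the die lands on 'six'
            return [heads for _ in range(awakenings) if random.randint(1, 6) == 6]

        marked = [h for _ in range(300_000) for h in run()]
        print(sum(marked) / len(marked))  # ≈ 1/3: heads fraction among marked awakenings

    The heads fraction among marked awakenings converges to 1/3, and the same figure results if the selection rule is inverted to 'not six', only with many more awakenings marked.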

    If the die doesn't land on 'six,' has Sleeping Beauty squandered an opportunity to identify her current awakening as a valuable piece of evidence? Actually, if the convention had been reversed to select awakenings by a die not landing on 'six', the chosen sample would still statistically represent all scheduled awakenings (with 1/3 of those being 'tails' awakenings). The Halfer's error is assuming that the mere occurrence of an awakening doesn't provide sufficient evidence for Sleeping Beauty. The participants' selection method, which involves identifying awakenings with the indexical expression "I am currently experiencing this awakening," is actually the most representative of all methods as it encompasses the entire population of awakenings!
  • The Indictment
    Does he? It's as simple as him saying "this is mine" and all the rules about handling and disclosure are out the window?Srap Tasmaner

    Of course not. Just saying it isn't sufficient. He also has to think it!
  • Sleeping Beauty Problem
    I think the Halfer position is roughly that there are only two outcomes: a single interview conducted in one sitting, and a double interview spread out over two sittings. Those outcomes are equivalent to the two possible outcomes of the coin toss. (If you have an even-numbered population to work with, you can just do away with the coin altogether.)

    What is the Thirder equivalent? If there are three outcomes, they cannot be equivalent to the two outcomes of the coin toss.
    Srap Tasmaner

    If I understand correctly, you seem to be asking how the Thirders might be able to infer the probabilities of the three fine-grained types of awakening outcomes from the (prior) probabilities of the two coin toss outcomes?

    Indeed, we can split an even-numbered population into two equal sub-populations Pop-1 (Beauties who awaken once) and Pop-2 (Beauties who awaken twice). This allows us to focus solely on Sleeping Beauty's personal credences upon awakening, concerning whether she's part of Pop-1 or Pop-2.

    In my view, the coin in the original problem provides a convenient source of stochasticity. Without it, @sime would have been justified in worrying about the explanation for Sleeping Beauty's priors. Consider this: Suppose I present you with a die that could be loaded to always land on 'six'. If it's not loaded, then it's fair. You throw it once and it lands on 'six'. What is your credence that the die is loaded? Without an objective grounding for your priors, the answer is undefined. However, if I tell you that there are two identical-looking dice - one loaded and the other fair - and a fair coin toss determines which one you'll use, you can now update your credence that the die is loaded from 1/2 to 6/7, given that over time, six out of seven 'sixes' will come from a loaded die.
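
    Here is that update spelled out, as a sketch with exact fractions:

        from fractions import Fraction

        p_loaded = Fraction(1, 2)            # the fair coin toss picks the die
        p_six_if_loaded = Fraction(1)        # the loaded die always lands on 'six'
        p_six_if_fair = Fraction(1, 6)

        p_six = p_six_if_loaded * p_loaded + p_six_if_fair * (1 - p_loaded)
        print(p_six_if_loaded * p_loaded / p_six)  # 6/7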

    Let us therefore assume, as you suggested, that Sleeping Beauty's priors are P(Pop-1) = P(Pop-2) = 1/2, without needing to delve into the specific stochastic process that placed her in either Pop-1 or Pop-2.

    The key disagreement between Halfers and Thirders is whether Sleeping Beauty can update her credence upon awakening that she's part of Pop-1 from 1/2 to 1/3. Halfers argue that since Sleeping Beauty knows she'll be awakened at least once, she can't distinguish whether her current awakening is the only one (Pop-1) or one of two (Pop-2). Therefore, these two possibilities should be equally probable from her perspective.

    This argument seems to misuse the Principle of Indifference. Consider the die example: When the die lands on 'six', you can't distinguish whether this outcome is from the fair die or the loaded one. However, you can still update your credence P('loaded') from 1/2 to 6/7. The die landing on 'six' does convey information in this context.

    Halfers, therefore, need a stronger argument to support their 'no new information' claim. Alternatively, they could challenge Thirders to explain what new information Sleeping Beauty receives that allows her to rationally update her credence P(Pop-1) from 1/2 to 1/3.

    I believe this can be explained step by step to make it more intuitive:

    --First step--

    Imagine that upon being divided into populations Pop-1 and Pop-2, the participants in each population are awakened only once the following day in their respective waking rooms. In half of the Pop-1 rooms, a single red tulip is placed on the nightstand, hidden by a cardboard cylinder. In the other half, a white tulip is used instead. In all Pop-2 rooms, a red tulip is utilized. As a participant in this experiment, Sleeping Beauty is informed of these specific details. Upon waking, she is asked about her credence in being part of Pop-1, and what her credence is that the tulip next to her is white. In this context, her credences should be P(Pop-1) = 1/2 and P(white) = 1/4.

    The cardboard cylinder is then removed, revealing a red tulip. What should Sleeping Beauty's credences be updated to now? They should be P(white) = 0 and P(Pop-1) = 1/3, right? This example appears to use Bayesian reasoning in a straightforward manner: Over time, 1/3 of participants who wake up in a room with a red tulip are part of Pop-1.

    (As for the strict proof: P(Pop-1|red) = P(red|Pop-1)*P(Pop-1)/P(red) = (1/2)*(1/2)/(3/4)=1/3)
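
    (Or, counting over a minimal cohort, as a sketch in Python; any equal split of the two populations gives the same ratio:)

        # Two Pop-1 participants (one white tulip, one red) and two Pop-2 participants (both red)
        participants = [("Pop-1", "white"), ("Pop-1", "red"), ("Pop-2", "red"), ("Pop-2", "red")]
        red = [pop for pop, tulip in participants if tulip == "red"]
        print(red.count("Pop-1") / len(red))  # 1/3 of red-tulip participants are in Pop-1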

    --Second step--

    Let's change the previous scenario so that all participants experience two awakenings, one on Monday and another on Tuesday. Participants in Pop-1 awaken once with a white tulip and once with a red tulip, while participants in Pop-2 awaken twice with a red tulip. We also introduce an amnesia-inducing drug to ensure that the participants don't remember the outcome of the Monday awakening when they are awakened again on Tuesday.

    In this new context, whenever Sleeping Beauty awakens, what should her credences P(Pop-1) and P(white) be? Arguably, most people, whether they're Halfers, Thirders or double-Halfers, would agree that these should be P(Pop-1) = 1/2 and P(white) = 1/4.

    The cardboard cylinder is then removed and, as it happens, a red tulip is revealed. What should Sleeping Beauty's credences be updated to now? They should again be P(white) = 0 and P(Pop-1) = 1/3, right?

    Perhaps the complexity of applying Bayesian reasoning in this context stems from the fact that participants in Pop-1 and Pop-2 who awaken on Monday aren't a distinct group from those who awaken on Tuesday. Indeed, the same individuals are awakened twice. To accommodate this factor, we can adjust Sleeping Beauty's Bayesian reasoning in the following manner:

    Every time a participant wakes up, the probability that they are in a room with a white tulip is 1/4. If I am part of Pop-1, the probability that I have awoken in a room with a white tulip is 1/2, and it's zero if I am part of Pop-2. As such, my prior probabilities are P(white) = 1/4 and P(Pop-1) = 1/2, while P(red|Pop-1) = 1/2.

    Consequently, once the tulip's color is revealed to be red, I can make the same inference as before: P(Pop-1|red) = P(red|Pop-1)P(Pop-1)/P(red) = (1/2)(1/2)/(3/4)=1/3.

    In an intuitive sense, this means that, since the majority of awakened participants find themselves next to red tulips because they belong to Pop-2, witnessing a red tulip upon awakening boosts their credence in being part of Pop-2. Although seeing a red tulip doesn't enable them to distinguish cases where the current awakening is the only one where they'll see such a tulip (as in Pop-1) or one of two such instances (as in Pop-2), it still provides information and counts as evidence that they are part of Pop-2. The reasoning behind this is analogous to why a die landing on 'six' constitutes evidence that the die is biased even though a fair die can also land on 'six'.
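
    A small simulation that counts awakening-instances rather than participants illustrates the point (a sketch, assuming participants are assigned to the two populations with equal probability):

        import random

        red_awakenings = []
        for _ in range(100_000):
            pop = random.choice(["Pop-1", "Pop-2"])
            tulips = ["white", "red"] if pop == "Pop-1" else ["red", "red"]
            random.shuffle(tulips)  # which day gets which tulip is immaterial
            for tulip in tulips:    # the Monday and Tuesday awakenings
                if tulip == "red":
                    red_awakenings.append(pop)
        print(red_awakenings.count("Pop-1") / len(red_awakenings))  # ≈ 1/3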

    --Third step--

    In this new variation, Sleeping Beauties themselves play the role of the tulips. Populations Pop-1 and Pop-2 now consist of participants, let's call them Sleeping Uglies*, who each share a room with a Sleeping Beauty. The Sleeping Uglies will be administered the same amnesia-inducing drugs on Sunday and Monday nights, but they will always be awakened both on Monday and Tuesday, ten minutes prior to the Sleeping Beauty's potential awakenings.

    Whenever I, as a Sleeping Ugly, awaken, the probability that I am in a room with a 'sleeping' (i.e., not scheduled to be awakened) Sleeping Beauty is 1/4. The probability that I have now awakened in a room with a 'sleeping' Sleeping Beauty is 1/2 if I am part of Pop-1 and zero if I am part of Pop-2. Therefore, my priors are P('sleeping') = 1/4 and P(Pop-1) = 1/2, while P('awake'|Pop-1) = 1/2.

    Therefore, after Sleeping Beauty is awakened in front of me, I can infer, as before, that P(Pop-1|'awake') = P('awake'|Pop-1)*P(Pop-1)/P('awake') = (1/2 * 1/2)/(3/4) = 1/3, meaning the probability that I am part of Pop-1 after Sleeping Beauty is awakened is 1/3.

    *My use of the Sleeping Uglies as participants in the experiment, and of the Sleeping Beauties' awakening episodes as evidence for the Uglies, is inspired by, but reverses, the example proposed by Robert Stalnaker in his paper Another Attempt to Put Sleeping Beauty to Rest.

    --Fourth and last step--

    We can now dispense with the Sleeping Uglies altogether since their epistemic situations, and the information that they are making use of (namely, that the Sleeping Beauty in their room awakens), are identical to those of the Sleeping Beauties themselves. The only difference is that the Sleeping Uglies must wait ten minutes after they awaken before they can make use of their evidence to update their credences, while the Sleeping Beauties can update theirs immediately upon awakening. Even this small difference can be wiped out by introducing a ten-minute delay between the moment when the Sleeping Beauties are awakened (in all cases) and the moment when the interviewer shows up, with the proviso that when no interview is scheduled, the Beauties are put back to sleep rather than being interviewed, in which case their credence P(Pop-2) momentarily drops to zero.
  • Sleeping Beauty Problem
    Not exactly, because if it's Monday the coin hasn't been flipped at all. It's only hidden if today is Tuesday and the coin is tails.Michael

    In the OP variation, the coin is flipped on Sunday before Sleeping Beauty's first awakening. Elga is right, though, that nothing substantially changes if the coin is flipped after Monday's (mandatory) awakening.

    The point about Sleeping Beauty's credence being about her epistemic relationship to the coin flip result (either past or future) remains valid. If Sleeping Beauty were informed that today is Monday, her credence about the upcoming flip, P(H), would remain at 1/2. Under these circumstances, her credence would be fully predicated on the inherent propensity of the coin to land on either side with equal probabilities. Conversely, if she were told that it's Tuesday, she would know without a doubt that the coin landed on tails. In this case, her credence, P(H) = 0, would be independent of the coin's inherent propensity and solely a result of her causal relationship with the outcome. When the day of her awakening remains unknown to her, her rational credence becomes a composite function of both possibilities—that today could be Monday or Tuesday. Consequently, it becomes a function of the coin's propensities, for example, that it is a fair coin, and of the causal relationship between the coin toss result and her potential awakening on Tuesday.
  • Sleeping Beauty Problem
    The problem (and Elga's solution) have nothing to do with how to "verify" one's credence. It simply asks what a rational person should/would believe were they told the rules of the experiment, woken up, and asked their credence.Michael

    This is indeed a flaw inherent to the statement of the problem. What is a rational person supposed to believe about a probabilistic outcome that is ambiguously defined? To ask how probable it is that the coin landed on heads involves a tacit reference to the counterfactual circumstances where you are presently facing a (hidden) coin that didn't land the way it actually did. And this is not merely a reference to the coin's intrinsic propensities to land this or that way when tossed.

    Suppose half the people in your neighborhood are Italians and half are Tunisians. You don't know your left-door neighbor. As you meet him for the first time, your credence is P(Tunisian) = 1/2 (let us suppose). And then he introduces himself as Marco Gallo and announces that he's Italian. The reason why your credence about this man who was standing in front of you was P(T) = 1/2 wasn't because you thought this man had been equally likely to be born Italian or Tunisian, in the sense that this man had an equal propensity to be born both ways. It's because of your knowledge of the statistical distribution of Italians and Tunisians in your neighborhood that you had this credence. It's, in other words, because of the manner in which you are epistemically related to this population of individuals and what makes you more or less likely to meet some of them. The Sleeping Beauty problem only appears to fully specify what constitutes an occasion for Sleeping Beauty to "meet" with a coin that had landed on heads. It leaves a crucial ambiguity regarding what constitutes such a meeting.
  • Sleeping Beauty Problem
    If you ask, what's the likelihood you will see Heads if I bring you the coin, then the number of times she could have been woken up becomes relevant.Benkei

    One argument that Halfers make is that being given many occasions to verify the exact same thing, rather than verifying it once, doesn't change the probability of the outcome.

    This argument doesn't work in the context of the Sleeping Beauty problem, it seems to me. This is because the number of opportunities, in this case, is probabilistically dependent on the coin toss result.

    But then, the Halfers have the rejoinder that since Sleeping Beauty is only aware of the circumstances of her current awakening, and not of the number of awakenings, she gains no information upon awakening that she didn't already have when the experiment was set up (and when her credence P(H) was 1/2).

    My Leonard Shelby (Memento) variation above was meant to clarify the nature of the information that Sleeping Beauty gains on the occasion of her awakening, and that enables her to update her credence P(H) by means of ordinary Bayesian reasoning.
  • Sleeping Beauty Problem
    I still think it's a language issue and it revolves around the verb "shows". I agree with Srap it doesn't make sense to pretend a coin has three states. The coin can't "show" three options but the "coin when shown to SB" can and these are simply different things. It's just that mathematicians only have a hammer that makes this look like a nail.Benkei

    The concept of Sleeping Beauty's credence P(H) indeed goes beyond stating the coin's inherent likelihood to land heads or tails. In the context of this experimental setup, it represents her epistemic relation to the coin. A more precise interpretation might suggest that Sleeping Beauty's credence P(H) is her long-term expectation of the coin landing heads, verified on each occasion she expresses this credence. But this doesn't completely resolve the issue. The original statement of the problem still fails to specify what constitutes an individual act of verification of her credence, such that we could establish the target ratio unambiguously. As I've previously illustrated with various examples, different pragmatic considerations can lead to different verification methods, each yielding different values for P(H), aligning with either the Halfer or Thirder stance.
  • Sleeping Beauty Problem
    Since you're still a committed Thirder, here's what bothers me:

    (1) When a coin is tossed it has only two proper outcomes. (No leaners, no edges, and the usual ceteris paribus.) SB, upon being awakened, finds herself in one of three possible situations. But that doesn't work. A coin toss cannot, by itself, engender three distinct situations for her to be in. For instance, "for all she knows," this is her second tails interview; but there is no way to go from the coin toss directly to the second tails interview. Not without passing through the first interview. So SB's intuition that three things can happen to her requires there to be some other determinant of her situation; what is that other factor?
    Srap Tasmaner

    Actually, I stopped being a committed Thirder a few days ago, when I began developing a pragmatist conception of the dispute between Thirders and Halfers.

    If Sleeping Beauty holds that, on the occasion of an awakening, there is (from her point of view) a 1/3 chance that the coin landed on heads, that doesn't commit her to believing that there is any causal determinant of her awakening other than the coin toss result. It's just part of the setup of the experiment that she will be awakened one more time if the coin toss result is tails. The coin toss result determines the Tuesday awakening, while the Monday awakening is independent of it. So, I'm not sure I understand your question.

    (2) The setup confounds wagering arguments. That won't matter much to a lot of people, but it's uncomfortable. Annoying. Ramsey used Dutch book arguments from the beginning, and despite their limitations they can be clarifying. Each time I've tried to construct a sane payoff table I've failed. I've wondered lately if there might be a conditional wager that comes out rational, but I can't work up enough hope of success to bother. Partial beliefs, within suitable limits, ought to be expressible as wagers, but not in this case, and that blows.

    Can you explain why the payoff tables you've come up with are unsatisfactory to you? I can describe at least one that seems coherent:

    On each occasion of awakening, Sleeping Beauty accepts a two-to-one wager that the coin landed tails. If the coin was heads up on that occasion, she loses her bet. If it was tails up, she wins half the amount of her bet (and gets back her original bet). Her expected value at the end of the experiment is zero since whenever the coin landed heads she loses her whole bet while whenever it landed tails she wins half her bet twice. Her accepting such odds and expecting to break even reflects her credence that P(H) = 1/3.
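
    Here is that payoff arithmetic, as a sketch with a unit stake and exact fractions:

        from fractions import Fraction

        stake = Fraction(1)
        ev_heads = -stake              # heads run: one awakening, the tails wager loses the stake
        ev_tails = 2 * (stake / 2)     # tails run: two awakenings, each wins half the stake
        print(Fraction(1, 2) * ev_heads + Fraction(1, 2) * ev_tails)  # 0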

    The reason why this doesn't settle the dispute in favor of the Thirders is that there are alternative ways to define Sleeping Beauty's credence P(H).
  • Sleeping Beauty Problem
    Here is another interesting aspect of the Sleeping Beauty problem. What is the nature of the information that Sleeping Beauty receives on each occasion of her awakening, and how can this information be communicated?

    PN:

    I thought of a different aspect of the problem while imagining another variation. Imagine that Leonard Shelby, suffering from anterograde amnesia, remains at home for the next seven days. Let us assume that all his new memories are wiped out overnight every night. If a coin lands heads, a door-to-door vacuum cleaner salesman will ring his doorbell once in the coming week. If it lands tails, a door-to-door broom salesman will ring his doorbell every day for the next six days. Those details are written down in Shelby's memento that he reads every morning. He doesn't have a calendar and won't know what the day of the week is until his niece visits him on the seventh day (Saturday). Predictably (and he also knows that to be true) by the end of the week, his broom closet will contain either one vacuum cleaner (if the coin landed heads) or six new brooms (if the coin landed tails).

    According to 'Halfer' reasoning, whenever the doorbell rings, Shelby should be expecting to meet the vacuum cleaner salesman with credence P(H) = 1/2. According to 'Thirder' reasoning, his credence should be P(H) = 1/7.
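
    (For the thirder figure, as a sketch: weight each hypothesis by the number of visit-occasions it generates.)

        from fractions import Fraction

        p_heads = Fraction(1, 2)
        visits_if_heads, visits_if_tails = 1, 6
        # Weight each hypothesis by how many doorbell-ring occasions it produces
        posterior = (p_heads * visits_if_heads) / (
            p_heads * visits_if_heads + (1 - p_heads) * visits_if_tails)
        print(posterior)  # 1/7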

    Suppose Shelby is a Thirder (as I also am, in this context). The ringing of the doorbell is what provides the evidence (i.e. "there is a salesman visit today") that enables Shelby to update his credence P(H) from 1/2 to 1/7. Shelby knows, though, that come Saturday, after his niece arrives, but before he looks into the closet, his credence will revert back to P(H) = 1/2. This is because his amnesia makes him lose the information that he had gained whenever he heard the doorbell ring. What is the nature of this information that he lost? After all, he still knows from his memento that the doorbell must have rung at least once.

    Before I tell you how I propose to explain how this information could be conceptualized, I'd like to receive your input about the thought experiment and your own speculation about the nature of the information that Shelby effectively forgets, thus causing him to revert his credence back to 1/2 even on Saturday morning.

    GPT-4:

    Your thought experiment is intriguing and sheds a new light on the problem. By introducing a character like Leonard Shelby with anterograde amnesia, you add a layer of complexity that makes the question about the nature of the "forgotten" information particularly interesting.

    If we delve into this, we can think of the "forgotten" information not merely as facts, but rather as temporal experiences. Whenever the doorbell rings, Shelby updates his credence based on the fact that a visit has occurred. As a Thirder, this evidence updates his credence to P(H) = 1/7, since each visit is more likely to be a broom salesman if the coin landed on tails. The ringing of the doorbell provides him with the evidence that a new day has passed.

    However, due to his amnesia, he forgets the experience of the passing days, despite knowing from his memento that the doorbell must have rung at least once. This lack of memory regarding the time elapsed (and hence the number of potential salesmen visits) is the missing information. It's not just the occurrence of the doorbell ringing (which he knows happened), but the frequency and distribution of those occurrences across days which is forgotten.

    When his niece visits him on Saturday, he doesn't remember the number of past salesman visits (how many times the doorbell rang), only that there has been at least one. He therefore can't distinguish between the two scenarios (one vacuum cleaner salesman visit or six broom salesman visits). This lack of distinction, due to his amnesia, causes him to revert his credence back to P(H) = 1/2.

    So, in summary, the forgotten information can be seen as the temporal sequence and frequency of the events (the doorbell ringing), not merely the events themselves.

    PN:

    Consider, though, that whenever the doorbell rings he doesn't know either how many times this happened—only that it happened at least once. But he already knew that this would happen at least once, from his memento, and will know that it must have happened at least once, come Saturday, still from his memento.

    GPT-4:

    Ah, I see what you mean. In that case, the information that's being lost isn't about the events themselves or their frequency, but the contextual information about when and how the events occurred in relation to his own experience of time.

    When the doorbell rings, Shelby gains the information that he is currently experiencing a salesman's visit. However, due to his amnesia, he loses the contextual information of this event having occurred in his subjective timeline. Even though he knows from his memento that at least one visit must have occurred, he doesn't have the personal, experiential connection to that event anymore.

    In other words, the "forgotten" information is the personal, experiential context of the event happening to him in the here and now. While he remembers that a doorbell ringing event was supposed to happen (from the memento), he has lost the immediate, personal experience of it happening to him in the moment. It's the disconnection from the immediate experience of the event that causes him to revert his credence back to P(H) = 1/2 on Saturday.

    Therefore, the nature of the information being lost due to his amnesia might be considered his personal, immediate connection to the event - the event as situated within his experience of the passing of time.

    PN:

    This is an interesting way to think about it. Here is the alternative I had imagined. Suppose Shelby is enabled, as part of the protocol of the experiment (all written down in his memento), whenever the doorbell rings, and before answering, to throw four dice and, if they all land on six, to write down in his memento "today the doorbell rang". So, on the rare occasion when he rolls four sixes during the week, the information that he thus expresses will be retained by him come Saturday. But we now can understand better what this information is. It's not so much the propositional content of the expression "today the doorbell rang" that enables him to update his credence to P(H) = 1/7 (on Saturday) but rather the fact that the proposition (or memory) was recorded at all. Since if the coin landed heads there would have been only one opportunity for Shelby to roll four sixes and record the event, whereas if the coin landed tails there would have been six times as many opportunities, the fact that this recording happened enables Shelby, come Saturday, to update his credence from 1/2 to 1/7 through normal Bayesian reasoning.
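
    (The exact arithmetic behind this, as a sketch:)

        from fractions import Fraction

        p_record = Fraction(1, 6) ** 4               # four dice all landing on 'six'
        p_heads = Fraction(1, 2)

        p_note_if_heads = p_record                   # one ring, one recording opportunity
        p_note_if_tails = 1 - (1 - p_record) ** 6    # six rings, six opportunities
        posterior = (p_note_if_heads * p_heads) / (
            p_note_if_heads * p_heads + p_note_if_tails * (1 - p_heads))
        # ≈ 0.143, i.e. close to 1/7 ≈ 0.1429 (very slightly above it, since a tails
        # week has a tiny chance of offering more than one recordable day)
        print(float(posterior))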

    GPT-4:

    That's a fascinating approach! The way you've set it up cleverly exploits the difference in opportunity frequency between the two outcomes (Heads vs Tails) of the coin toss. This mechanism you've devised essentially equates the memory recording event to a "successful" dice roll event. It's not the content of the memory per se that matters, but rather the existence of the memory itself.

    By creating a memory record when the doorbell rings and also meeting the condition of rolling four sixes, Shelby essentially marks one instance of the doorbell ringing. The dice roll acts as a filter, a probabilistic gatekeeper, reducing the chance of recording a memory. Thus, if a memory exists, it's more likely that it came from a context where there were more opportunities to create it, which corresponds to the Tails outcome.

    The brilliant part of this mechanism is that, as you pointed out, it doesn't rely on the information content of the memory itself. Instead, it relies on the existence of the memory as an indicator of the number of chances Shelby had to create it. This effectively bypasses the issue of Shelby's amnesia wiping out the informational content of his experiences.

    Indeed, by merely establishing a non-zero probability of recording an event, we have enough to perform a Bayesian update. This highlights how the thirder argument operates: not through a focus on specific events, but rather through a focus on the distribution of opportunities for those events to occur. So the 'forgotten information' is indeed the greater number of opportunities for creating a memory (of hearing the doorbell ring) that exists if the coin landed tails.

    In essence, this set-up gives Shelby a way to "bypass" his amnesia and track the greater frequency of opportunities for hearing the doorbell in the Tails scenario. The presence of a memory record, even if its content is inaccessible or irrelevant, provides an indicator of the "weight of chances" or frequency distribution for the two different outcomes, allowing for a Bayesian update.

    PN:

    And so, once we've clarified the nature of the information that Shelby is able to convey to himself (i.e. to his future self) by means of a note in his memento, we can apply the same reasoning regarding the nature of the information that he receives in the first place whenever the doorbell rings, and that enables him to update his credence immediately by means of ordinary Bayesian reasoning.

    GPT-4:

    Yes, that's a coherent and insightful application of your reasoning. The act of the doorbell ringing is akin to the dice rolling event in your mechanism. Each ring is an opportunity for Shelby to gain information about the outcome of the coin toss. His immediate Bayesian update is not contingent on knowing the specific number of times the doorbell has rung; rather, it is contingent on the opportunity distribution - the realization that, because the doorbell is ringing, there's a greater chance of being in the scenario where there are more opportunities for the doorbell to ring (Tails).

    This reasoning shows that information in this context is not so much about concrete details (like the specific number of doorbell rings) but about the probabilistic framework within which events are happening. What Shelby "loses" due to his amnesia is not the record of specific events (he can still recall from his memento that the doorbell must have rung at least once) but the trace of the more frequent opportunity distribution that comes with the Tails outcome.

    In sum, the key is understanding the weight of chances - the different opportunity distributions across different scenarios - as the information being received and updated upon. This is precisely what the "thirder" argument in the Sleeping Beauty problem leverages. The information lost due to Shelby's amnesia - or due to Sleeping Beauty's induced forgetfulness - is an understanding of this probabilistic context.

    PN:

    I couldn't have said it better myself!

    GPT-4:

    I'm glad you found the explanation satisfactory! Your thought experiment with Shelby provided a really engaging and insightful way to consider the nuances of the Sleeping Beauty problem and how credence, memory, and probability interact. I enjoyed our discussion. Feel free to bring up any other philosophical or scientific puzzles you'd like to dissect!
  • Donald Trump (All General Trump Conversations Here)
    A God-Emperor isn't Jesus.ssu

    Where can we buy those new NFTs?
  • Sleeping Beauty Problem
    Isn't this problem a consequence of asking the wrong question, because the question is ambiguous? Given the setup, and that you'd be woken twice if it is tails, what's the likelihood now that you're awake, that the coin shows tails? That seems to resolve the ambiguity or am I missing something?Benkei

    In the Sleeping Beauty problem, both main parties to the debates agree that it's Sleeping Beauty's credence while she is being awakened ("now") that is at issue. 'Halfers' assert that since Sleeping Beauty gains no new information when she awakens, her credence that the fair coin landed on heads should remain P(H) = 1/2. On the other hand, 'Thirders' argue that upon awakening, Sleeping Beauty should update her credence to P(H) = 1/3.

    You are right to point out that an ambiguity is present, but it isn't resolved by simply specifying the time span during which (or relative to which) Sleeping Beauty should evaluate her credence.

    Consider two types of experimental runs: "H-runs" (following a heads coin toss result and yielding a single awakening) and "T-runs" (following a tails coin toss result and yielding two awakenings). Halfers argue that even though Sleeping Beauty experiences more awakenings during a T-run, this doesn't make such a run more likely. Therefore, upon awakening, her credence in being part of an H-run or a T-run should remain at 1/2 for each.

    Conversely, Thirders argue that Sleeping Beauty should update her credence in P(H) to 1/3 upon awakening, since in the long run, one-third of her possible awakenings occur as a result of the coin landing heads.

    At first glance, it might seem that Halfers focus more on the entirety of the experimental runs, while Thirders focus on individual awakening episodes. However, this observation doesn't resolve the dispute.

    There are biconditional relationships between the propositions "The coin landed heads", "I have presently awoken in a H-run", and "I am presently experiencing a H-awakening". The same relationships exist for tails. Halfers and Thirders each believe that Sleeping Beauty can make valid inferences based on these relationships, leading to their respective credences. (It is also worth noticing that there is a many-to-one relationship between T-awakenings and T-runs.)

    Because those biconditionals hold, the Halfers argue that Sleeping Beauty is entitled to infer from "P(H-run) = 1/2" that "P(H-awakening) = P(H) = 1/2" is true as well.

    Likewise, the Thirders argue that Sleeping Beauty is entitled to infer from "P(H-awakening) = 1/3" that "P(H-run) = P(H) = 1/3" is true as well.

    Those inferences appear valid, so the dispute remains.

    The problem is that we have not resolved the ambiguity inherent in all three of the probabilities P(H), P(H-awakening), and P(H-run).

    The ambiguity, as I see it, can only be resolved by specifying the method of verifying the target events. If the verification is performed once at the end of each experimental run, the Halfer interpretation would be upheld. However, if the verification is performed during each individual awakening, it's the Thirder interpretation that would be upheld. (Notice that it's because of the aforementioned many-to-one relationship that the specification of the verification method disambiguates the ratio of events that Sleeping Beauty's credence tacitly refers to.)

    Each verification method has its uses, and the failure to specify them is what gives rise to the apparent conflict. My discussion with GPT-4 in the post above provides a more detailed exploration of this pragmatist interpretation by means of practical examples.
  • Sleeping Beauty Problem
    Here is another discussion with GPT-4 that rehearses my pragmatist conception of the dispute between halfers and thirders.

    PN:

    Let's explore a variation of the Sleeping Beauty problem. In this version, a six-sided die is tossed to determine the course of the experiment. Should the die result in 'six', Jane will be awakened six times consecutively. Otherwise, she will experience only a single awakening. This sequence will unfold over six days, with a single run of the experiment.

    In the case where the die does not land on six, Jane's lone awakening will occur on the sixth day of the experiment, and she will be released on the seventh day. The die outcome also determines the location of Jane's stay: 'six' results in her lodging in the East Wing, while any other outcome places her in the West Wing.

    Aunt Betsy, who plans to pick Jane up upon her release, is aware that Jane checked into the facility on June 30th and will be released on July 7th. On July 1st, after the die has been rolled but before any awakenings, Betsy calls the facility to learn where she should pick up Jane. The manager informs her that this information cannot be disclosed directly but can be communicated to Jane, who is allowed to send a response.

    To ensure that Jane's knowledge doesn't impact her belief in the probability that a 'six' was rolled, the manager forwards Betsy's inquiry to Jane during each awakening, irrespective of the die result. Jane's responses are collected, and only one is sent to Aunt Betsy on the evening of July 6th, during Jane's final sleep.

    Upon receiving Betsy's inquiry, Jane estimates her belief in the probability of the die having landed on 'six', which she deduces to be 6/11. She reasons that Betsy should therefore wait for her at the East Wing, where she believes there is a 6/11 chance she is being housed.

    By contrast, from Betsy's perspective, there should be a 5/6 probability that the die didn't land on 'six'. This perspective, grounded in observations from multiple hypothetical experiment runs, indicates Jane is likely housed in the West Wing with a 5/6 probability. Despite stemming from the same fundamental setup, their credences appear to conflict: Jane and Betsy can't both be correct about where Jane is most likely to be released from. What went wrong?

    GPT-4:

    This is an interesting variation of the Sleeping Beauty Problem, and the apparent conflict arises from a fundamental difference in the knowledge and perspectives of Aunt Betsy and Jane.

    From Jane's perspective, each awakening she experiences is a random draw from all her potential awakenings. If the die landed on 'six', she will be awakened six times, and if the die did not land on 'six', she will be awakened once. So when she wakes up and contemplates her situation, she reasons that there are six times more ways she could have found herself awake in the East Wing compared to the West Wing, and thus, she assigns a 6/11 credence to being in the East Wing.

    On the other hand, Aunt Betsy's perspective is fundamentally different. She knows that the experiment will last for seven days and that Jane will only be released on the seventh day. She does not know whether Jane was awakened multiple times or just once, and she is not experiencing the awakenings as Jane is. Her 5/6 credence that Jane is in the West Wing is based on the probability of the die roll, independent of Jane's awakening experiences.

    So, the difference in their credences comes from the different ways they experience or perceive the experiment. Jane's credence is based on her waking experiences, while Aunt Betsy's is based on the initial conditions of the experiment. While this might appear to be a conflict, it's simply a result of their different perspectives on the same situation. This situation also underscores the subjective nature of Bayesian probabilities, which are personal degrees of belief and may vary between individuals even when considering the same event, based on their differing perspectives or knowledge about the situation.

    PN:

    I very much agree with everything that you said, but the solution of the puzzle involves the reconciliation of Jane's credence with the fact that her aunt will indeed pick her up at the wrong wing five times out of six if she relies on Jane's message (and hence why Jane should rethink it).

    GPT-4:

    This is a great thought experiment. It reminds us that although probabilities might be personally reasonable, they can't always be used interchangeably between different people or contexts, especially when the information set or the perspective changes.

    The solution to this puzzle lies in realizing that Jane, when she sends the message, is aggregating over multiple awakenings. The 6/11 credence that Jane assigns to being in the East Wing is not tied to any specific day of the experiment but is an average over all her possible awakenings. On each awakening, she updates her belief based on the premise that she's equally likely to be awake on any given day.

    However, Aunt Betsy doesn't care about Jane's experience across all her awakenings. She cares about Jane's location on a specific day, the seventh day. Jane needs to acknowledge this and should instead reason as follows:

    If the die landed on 'six', Jane will have six awakenings in the East Wing.
    If the die didn't land on 'six', Jane will have one awakening in the West Wing.
    However, from Jane's perspective, each of her awakenings is equally likely. Thus, if Jane knew it was her final awakening, her credence in being in the East Wing would only be 6/7, not 6/11. If it is not her final awakening, Jane can deduce that she must be in the East Wing. Therefore, Jane should send two types of messages: one indicating East Wing if it is not her final awakening, and another indicating a 6/7 chance of being in the East Wing if it is her final awakening.

    From Aunt Betsy's perspective, she would need to know if it's Jane's last awakening, which she doesn't. But if the manager were to choose to send the message indicating the East Wing, if any, or the other message otherwise, Aunt Betsy's best strategy would be to follow the message's instructions and pick Jane up at the East Wing if the message indicates so, otherwise at the West Wing. This strategy would result in Aunt Betsy waiting at the correct wing six out of seven times. This may not match perfectly Jane's 6/11 credence but it's the best strategy that can be devised from their shared perspective.

    PN:

    Aunt Betsy doesn't know the experimental setup, so I equate her perspective with the observed result that (were she placed in this situation multiple times) she would find Jane coming out of the West Wing five out of six times. This is the same observation that Jane would make. (The seventh day doesn't count in the calculations since it's not a day of awakening and interview, it's just the day of discharge.)

    Jane's mistake was to infer that her being located in the East Wing six out of eleven times when she awakens translates into her coming out of the East Wing six out of eleven times when she is discharged. It's as simple as that.

    Six of her awakenings that occur in the East Wing result in one single discharge from that wing, which is why she should direct her aunt to the West Wing, where 5 out of 6 of her discharges occur (each following a single awakening), despite the fact that it is *now* (on the occasion of her present awakening) more likely than not that she is in the East Wing.
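
    (A quick tally over the six equally likely die outcomes makes both figures explicit:)

        # One 'six' outcome: East Wing, six awakenings, one discharge.
        # Five other outcomes: West Wing, one awakening and one discharge each.
        awakenings = ["East"] * 6 + ["West"] * 5
        discharges = ["East"] * 1 + ["West"] * 5

        print(awakenings.count("East") / len(awakenings))  # 6/11: per-awakening credence
        print(discharges.count("West") / len(discharges))  # 5/6: per-discharge probability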

    GPT-4:

    Ah, I see! Thank you for the clarification. Yes, Jane seems to have confused her credence for being in the East Wing at the time of any awakening (6/11) with the probability of being discharged from the East Wing at the end of the experiment. As you mentioned, the error arises because six awakenings in the East Wing all lead to one single discharge from the East Wing, while one awakening in the West Wing leads to one discharge from the West Wing.

    Therefore, the probability that Jane will be discharged from the East Wing is indeed 1/6, not 6/11. Conversely, the probability that she will be discharged from the West Wing is 5/6. So, Jane should inform Aunt Betsy to wait for her at the West Wing, since that is where she will most likely be discharged from. This aligns with the observed result that Aunt Betsy would find Jane coming out of the West Wing five out of six times if this scenario were to be repeated multiple times.

    Thank you for presenting this interesting variation of the Sleeping Beauty problem. It indeed reinforces the point that probabilities can't be used interchangeably between different contexts and reminds us to always consider the exact event or outcome that the probability is referring to.

    PN:

    My variation of the Sleeping Beauty problem is designed to demonstrate that conflicting credences don't necessarily stem from incompatible epistemic viewpoints. Rather, they often arise from differing interpretations of what the credences actually represent. In the Sleeping Beauty problem, the confusion is compounded by the fact that different credences—1/2 and 1/3—are both assigned to the same 'state' (Heads) of an object (a fair coin). However, what truly distinguishes these credences, and explains their different values, is the uniquely individuated sets of events each considers. For a 'halfer', the credence is based on the ratio of Heads-experimental runs to total experimental runs. Conversely, for a 'thirder', it's about the ratio of Heads-awakenings to total awakenings. The apparent conflict is not a matter of differing epistemic perspectives, but rather differing definitions of the credences in question.

    GPT-4:

    I understand your point and agree with it. The distinction you've made is essential and helps clear up the confusion surrounding the Sleeping Beauty problem. Many debates about probability and credence essentially boil down to differences in the definitions of what constitutes an event or an experimental trial. In this context, it's clear that what one person considers as an 'event' might differ substantially from another person's definition. By drawing attention to these differences in definition, we can gain a clearer understanding of why different credences are assigned in the first place and clarify seemingly conflicting probabilities. It's a perspective that adds a significant depth to the discussion and makes the resolution of the paradox more nuanced and comprehensive.

    PN:

    Coming back to the new variation, which was intended to highlight a particular pragmatic context in which Jane can—and should—make use of a 'halfer' conception of her credence P('six') = 1/6, there are other pragmatic concerns that can make her rely on the 'thirder' conception. One such context would be her finding an opportunity to escape from the facility and needing to prepare accordingly depending on whether she is hosted in the East or the West wing. One of the wings might be surrounded by a moat filled with crocodiles and the other by a forest infested with lions. Since, in this case, the escape opportunities are distributed evenly across occasions of awakening, Jane should prepare her escape on the basis of a 'thirder' definition of her credence P('six') = 6/11.

    GPT-4

    Yes, that's an excellent extension of the scenario. If Jane finds herself in a situation where she must make a decision based on her current location during each awakening, then it is indeed more appropriate for her to use the 'thirder' credence. Given the uniform distribution of escape opportunities across awakenings, Jane's best strategy would be to prepare for the highest probability scenario in her current awakening, which is in the East Wing according to 'thirder' reasoning.

    Your point here further underlines the importance of considering the context and its implications when deciding which credence to rely on. The 'halfer' and 'thirder' perspectives are not mutually exclusive or inherently right or wrong; instead, they offer different lenses through which to view the problem, each one more useful in different situations. It's the specifics of the situation and the nature of the decision to be made that determine which perspective should be used.
  • Donald Trump (All General Trump Conversations Here)
    Mike Pence in his recent CNN town hall makes a compelling argument against Trump's indictment. It sends a terrible message to the world. Since Pence and Biden committed the exact same crimes (according to Pence), indicting only Trump undermines the U.S.A.'s enviable reputation as a place where all rich and powerful white men stand as equals above the law.
  • Donald Trump (All General Trump Conversations Here)
    If economic issues were the concern then they'd be voting for Democrats.

    It's clearly social issues (the "culture war") that elicit support for Trump and the Republicans.
    Michael

    That may be a topic for another thread but George Packer's piece How America Fractured into Four Parts in The Atlantic suggests to me an interesting way in which culture and economics intersect to sustain both the economic inequalities and the culture war by means of the four-fold division that he details. On the left, Smart America (academics, mainstream media, tech gurus, Democrat politicians) stands on top of Just America (SJWs, students, progressive activists) while, on the right, Free America (libertarians, capitalists, lobbyists, Republican politicians) stands on top of Real America (Trumpists, patriots, poor and middle class conservatives). On each side, those on top protect their economic status by redirecting the anger and grievances of their 'cultural allies' below against their fellow destitute on the other side of the exacerbated cultural divide.

    Peter N. Limberg and Conor Barnes' The Memetic Tribes Of Culture War 2.0 further refines Packer's fourfold division and underlines other dimensions of the ideological polarisation.
  • Sleeping Beauty Problem
    Neither the sitter nor the participant should update their credence to match the other's.Michael

    In that case, in the secret mission scenario, it looks like Sergeant Sue and G.I. Jane are poised to have a fight—or at least a heated argument—regarding where the secret package should be delivered. Sergeant Sue would advocate for its delivery to the allied field hospital, relying on her 1/3 credence, while G.I. Jane, holding a 1/2 credence, would maintain that the destination of the package doesn't make a difference. (The parameters of the scenario could be tweaked to make the dispute more pronounced, by yielding 1/6 vs 6/11 credences, say).

    Alternatively, Sue and Jane could come to realize that their apparently divergent credences target different classes of events (which is what I have argued) and conclude that both of them are true, but that it's the 1/3 credence that is relevant for determining the likely contents of the package they are presently dealing with.
  • Sleeping Beauty Problem
    Jane shouldn't update her credence to match Sue and Sue shouldn't update her credence to match Jane.Michael

    But we are agreed on the validity of Sue's credences in both scenarios, right?
  • Sleeping Beauty Problem
    Was my rephrasing of it wrong? I'm treating DZ#1 as Monday and DZ#2 as Tuesday. If twice at DZ#1 then twice on Monday, if once at DZ#2 then once on Tuesday. If you know that it's DZ#1 then you know that it's Monday.Michael

    Structurally, DZ#1 is similar to being awake and DZ#2 is similar to being asleep. In this way, the Jane and Helen scenario maps perfectly onto the secret mission scenario, with DZ#2 and Helen fulfilling the roles of dummy placeholders, as it were.
  • Sleeping Beauty Problem
    No, I was just trying to rephrase your secret mission example into a way that I could understand better. Did I misinterpret it?Michael

    In the secret mission example, Sergeant Sue is tasked with locating Jane on one of her three scheduled missions, selected at random. Neither Sue nor Jane knows if this mission is the first, second or third one or, if it isn't the first, in what zone(s) Jane landed before.
  • Sleeping Beauty Problem
    So if heads then woken once on Monday and twice on Tuesday, otherwise woken twice on Monday and once on Tuesday.

    Sue tells Jane that it's Monday.

    What is Jane's credence that the coin landed heads?

    I say 1/2.

    It's exactly the same reasoning as before.

    Sue should reason as if she is randomly selected from the set of all sitters, and 1/3 of sitters sitting in a Monday room are sitting in a heads room.

    Jane should reason as if she is randomly selected from the set of all participants, and 1/2 of participants in a Monday room are sitting in a heads room.
    Michael

    Although you linked my most recent post, I assume you intended to respond to this one.

    In my scenario, there was only one room for awakenings. Without loss of generality, we can schedule the occupancy of that room over the three days of the experiment, according to the coin toss result, as follows:

    If the coin lands Heads:

    Day 1: Jane (first),
    Day 2: Helen (first),
    Day 3: Helen

    If the coin lands Tails:

    Day 1: Jane (first),
    Day 2: Jane,
    Day 3: Helen (first)

    I use the label 'first' to signify a participant's first (or only) day of awakening. This is functionally equivalent to Monday in the original problem.

    On any given day, whichever candidate doesn't occupy the waking room is moved into the sleeping room.

    When Sue is assigned to attend to the participant who is awakened on a randomly chosen day (without being informed whether it's Day 1, Day 2 or Day 3), her credence in the coin having landed heads is 1/2. After she is informed that the occupant is Jane, she updates her credence P(H) to 1/3. After she is further informed that this is Jane's first or only occasion of awakening, her credence gets updated back to 1/2 again.
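
    This chain of updates can be verified by enumerating the six equiprobable (coin, day) cells of the schedule above; a minimal sketch:

    ```python
    from fractions import Fraction

    # (coin, day, occupant, occupant's first/only awakening?)
    cells = [
        ('H', 1, 'Jane',  True),
        ('H', 2, 'Helen', True),
        ('H', 3, 'Helen', False),
        ('T', 1, 'Jane',  True),
        ('T', 2, 'Jane',  False),
        ('T', 3, 'Helen', True),
    ]

    def p_heads(keep):
        kept = [c for c in cells if keep(c)]
        return Fraction(sum(c[0] == 'H' for c in kept), len(kept))

    print(p_heads(lambda c: True))                     # 1/2: prior
    print(p_heads(lambda c: c[2] == 'Jane'))           # 1/3: occupant is Jane
    print(p_heads(lambda c: c[2] == 'Jane' and c[3]))  # 1/2: Jane's first/only day
    ```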

    I would argue that Jane should update her credence in the same way in light of the same information. But first, I wanted to ensure that we are on the same page regarding both the setup and Sue's credences within it.

    (I'll address your extreme example separately.)
  • Sleeping Beauty Problem
    Sue's reasoning is right for Sue, but wrong for Jane (and vice versa), given that 2/3 of sitters will sit in on a 100 Heads interview but 1/2^100 of participants will have a 100 Heads interview.Michael

    Let's consider another scenario. Suppose Jane is an amnesiac paratrooper set to be dropped behind enemy lines for three separate missions on different nights. Conveniently, she suffers from selective anterograde amnesia which shields mission information in case she's captured. There are two potential drop zones: #1 is dry land, and #2 is a swamp.

    Mission Control will toss a coin to decide Jane's destinations. If it lands heads, she will be dropped once over DZ#1 and twice over DZ#2. If it lands tails, she'll be dropped twice over DZ#1 and once over DZ#2. Jane carries a tracking device which will enable Sergeant Sue, who's already operating behind enemy lines, to locate her during one randomly selected mission, collect the mission package, and direct her back home.

    Now, what's the probability that the coin landed heads? Before the experiment begins, both Sue and Jane have the same prior credence P(H) = 1/2.

    However, suppose Sue locates Jane in DZ#1. What is the updated probability that the coin landed heads? It would be 1/3. Shouldn't Jane update her credence to the same value when she realizes she's landed on dry land?
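
    The update can be spelled out with Bayes' theorem; a sketch, assuming (as stipulated) that Sue's interception is equally likely to occur on any of the three missions:

    ```python
    from fractions import Fraction

    p_h = Fraction(1, 2)            # prior for heads
    p_dz1_given_h = Fraction(1, 3)  # heads: one of three drops over DZ#1
    p_dz1_given_t = Fraction(2, 3)  # tails: two of three drops over DZ#1

    p_dz1 = p_dz1_given_h * p_h + p_dz1_given_t * (1 - p_h)
    print(p_dz1_given_h * p_h / p_dz1)  # 1/3 = P(H | found in DZ#1)
    ```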

    Suppose the outcome of the coin toss also determines the nature of the package that Jane is carrying. If the coin landed on heads, Jane's package contains a secret explosive; if it landed tails, the package contains a secret medication. Both Jane and Sue know this arrangement beforehand.

    When Sue tracks Jane and finds her in DZ#1, they both realize that Sue should deliver the package to a hospital if the coin landed tails (since the package would contain medication), or to a military outpost if the coin landed heads (since it would contain an explosive).

    Now, should they both act as though the probability of the coin having landed heads is 1/3? It's not just about winning or losing a bet anymore. The decisions they make based on their credence have immediate and tangible consequences. They must update their credence according to the information they have at hand, including Jane's current location.
  • Sleeping Beauty Problem
    Jane should reason as if she was randomly selected from the set of all participants, because she was (via the coin flip).Michael

    Indeed, Jane reasons as if she was randomly selected from the set of all participants, as this forms the basis of her prior probabilities. What you seem to suggest is that she should not, or cannot, assign any credence or probability to specific features of her current awakening episode. However, if she is informed that this is her first (or only) awakening, then she should and does update her credence P(H) to 1/2. She could only do this if her prior upon awakening was 1/3. Otherwise, she'd end up in a paradox, akin to Lewis's divination of the future.

    The introduction of Sue Sitter is designed to demonstrate that Jane can update her credence based on the same information available to Sue, even though the circumstances leading them to their shared situation are different. Specifically, the selection process for Sue ensures that, despite being fully aware of the experimental setup (like Jane), she shares Jane's state of ignorance (with no need for amnesia-inducing drugs) about whether her current awakening episode is unique, the first of two, or the last of two. Yet, concerning this awakening episode, they share the same priors and the same information necessary to update their credences.

    If you are uncomfortable with Jane reflecting specifically about her current awakening episode, consider this: Let Jane reflect on Sue's reasoning about the probability P(H). If she deems Sue's reasoning correct, she could adjust her own credence to match Sue's, since they both concern the exact same coin toss result. Surely, Jane cannot reasonably say: 'Yes, I see you are right to conclude that the probability of the coin having landed on heads is 1/3, based on the information we share. But my belief is that it's actually 1/2.'
  • Sleeping Beauty Problem
    Going back to this for a moment, I think a better way to write this would be:

    P(Heads|H1 or T1 or T2) = P(H1 or T1 or T2|Heads) * P(Heads) / P(H1 or T1 or T2)

    If Elga is right in saying that P(H1), P(T1), and P(T2) sum to 1 then P(H1 or T1 or T2) = 1.

    So P(Heads|H1 or T1 or T2) = 1/2

    If he's right when he says that "[you] receiv[e no] new information [but] you have gone from a situation in which you count your own temporal location as irrelevant to the truth of H, to one in which you count your own temporal location as relevant to the truth of H" then it seems correct to say that Sleeping Beauty is just being asked about P(Heads|H1 or T1 or T2).
    Michael

    Your calculation seems correct, but it doesn't adequately account for the new capacity Jane gains to refer to her own temporal location using an indexical expression when updating her credence. Instead, you've translated her observation ("I am awake today") into an impersonal overview of the entire experiment ("I am scheduled to be awakened either under circumstances H1, T1, or T2"). The credence you've calculated reflects Sleeping Beauty's opinion on the ratio, over many iterations of the experiment, of (1) the number of runs resulting from a heads result, to (2) the total number of experimental runs. Indeed, this ratio is 1/2, but calculating it doesn't require her to consider the knowledge that today falls within the set {H1, T1, T2}.

    Let's reconsider the scenario you proposed yesterday with two Sleeping Beauties, Jane and Helen. If the coin lands heads, Jane is awakened once and Helen twice, and vice versa if it lands tails. They occupy the same waking room on three consecutive days (it is occupied twice by Jane or twice by Helen, depending on the coin toss). On non-awakening days, they remain in a sleeping room. Sue Sitter is randomly assigned to attend to whoever is awakened in the waking room on one of the three days, selected at random.

    When Sue finds Jane in the assigned room, and assuming she knows the participants and the experimental setup, her prior probabilities would be:

    P(Jane awake today) = P(JAT) = 1/2, and P(H) = 1/2

    Her updated credence for H is P(H|JAT) = P(JAT|H) * P(H) / P(JAT) = (1/3*1/2) / (1/2) = 1/3

    Jane's priors for any random day during the experiment would be exactly the same as Sue's. When Jane is awakened on a day when Sue is assigned to her, Jane has the same information about herself that Sue has, and so she can update her credence for H in the same way. She concludes that this kind of awakening experience, resulting from a heads result, is half as probable, and thus half as frequent, as identical awakening experiences resulting from a tails result. This conclusion doesn't impact the ratio of the frequency of heads-result runs to the total number of experimental runs, which remains at 1/2 from anyone's perspective.
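
    A quick Monte Carlo sketch of this setup exhibits both ratios at once: P(H | Jane awake today) comes out near 1/3 while the proportion of heads-result runs stays near 1/2:

    ```python
    import random

    trials = 200_000
    jat = heads_and_jat = heads_runs = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        heads_runs += heads
        day = random.randint(1, 3)  # Sue's randomly assigned day
        jane_awake = day in ({1} if heads else {1, 2})
        if jane_awake:
            jat += 1
            heads_and_jat += heads

    print(heads_and_jat / jat)  # ~ 1/3: P(H | JAT)
    print(heads_runs / trials)  # ~ 1/2: heads-runs / total runs
    ```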
  • Sleeping Beauty Problem
    I believe this response to PhilosophyRunner addresses this claim. Specifically: [...]Michael

    I'd be happy to revisit this objection but I'm going to stand back and watch for a while how @PhilosophyRunner grapples with it. If I see that they are in trouble, I may jump back into the ring. I've heard that's how things are done at the WWE.
  • Sleeping Beauty Problem
    Good point. Thanks for the correction.Michael

    :up:
  • Sleeping Beauty Problem
    How do you get that?Michael

    P(R|R or B1) is the probability of the ball being red, given that the ball isn't labelled B2. That's because the outcome 'R or B1' is equivalent to the outcome 'not B2'. If you eliminate the possibility of 'B2', the only outcomes left are 'R' and 'B1', in the same prior proportions as before.

    Applying Bayes' theorem, we have P(R|R or B1) = P(R or B1|R) * P(R) / P(R or B1). Since P(R or B1|R) is 1 (if the ball is red, it's certain that it's either red or B1), and P(R) is 1/2 (the prior probability of the ball being red), and P(R or B1) is 3/4 (the prior probability of the ball being either red or B1), we get P(R|R or B1) = (1 * 1/2) / (3/4) = 2/3.

    Applying the same reasoning mutatis mutandis, we get P(B1|R or B1) = 1/3.
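
    In other words, conditioning on 'R or B1' simply renormalizes the surviving priors. A minimal sketch:

    ```python
    from fractions import Fraction

    # Bag picked at random, then a ball at random from that bag
    priors = {'R': Fraction(1, 2), 'B1': Fraction(1, 4), 'B2': Fraction(1, 4)}

    p_evidence = priors['R'] + priors['B1']  # P(R or B1) = 3/4
    print(priors['R'] / p_evidence)          # 2/3 = P(R | R or B1)
    print(priors['B1'] / p_evidence)         # 1/3 = P(B1 | R or B1)
    ```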
  • Sleeping Beauty Problem
    Elga's argument depends on this inference but he doesn't justify it.Michael

    You challenged the validity of Elga's inference by presenting what you consider a counterexample:

    "If I have one red ball in one bag and two numbered blue balls in a second bag, and I pick out a ball at random and show it to you then P(R|R or B1) = P(B1|R or B1) but P(R) = ½ and P(B1) = ¼." - Michael

    As I pointed out earlier, these probabilities don't result from randomly picking one ball out of three. They would instead follow from choosing one of two bags at random, and subsequently selecting a ball at random from that chosen bag, which I assumed was your intention.

    In that scenario, P(R|R or B1) would be 2/3 and P(B1|R or B1) would be 1/3. Clearly, these two probabilities aren't equal, and their equality forms the antecedent of Elga's inference.

    Thus, you haven't presented a counterexample to the validity of Elga's inference; rather, you've shown that his conclusion isn't secured when the premise is false. At best, you have shown his argument to be unsound.

    However, you've done this by replacing Elga's premise—which aligns with the Sleeping Beauty setup—with a different premise derived from an alternative setup.
  • Sleeping Beauty Problem
    Here is a follow-up to my previous clarificatory remark to @sime regarding the statistical distribution of the participants in the waking rooms of the Sleeping Beauty Experimental Facility:

    PN:

    Here is a thought that just occurred to me. Returning to Sime's setup of priors regarding the distribution of participants in separate rooms:

    P(H) = 1/2; P(T1) = 1/4; P(T2) = 1/4

    While I have been advocating to both Sime and Michael that a set of priors with 1/3, 1/3, and 1/3 ratios is more representative of the original Sleeping Beauty setup, I've just realized that Sime's assumption could also present a problem if he were to inquire about the conditional probability P(H|'Meeting on the first day'). In that case, Sue is twice as likely to have been assigned a participant whose coin landed on heads. This result aligns with what Lewis obtained under his own halfer analysis. But that means that both Sue and the participant she has been assigned to, once informed that the participant is experiencing her first awakening, know that the probability of the coin having landed on heads is now 2/3. They might know this even in the absence of anyone having looked at the coin, or even before the coin has been tossed.
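
    To make the contrast explicit, here is a minimal sketch comparing Sime's priors with uniform (thirder) priors:

    ```python
    from fractions import Fraction

    # Learning 'first awakening' rules out T2; renormalize over {H, T1}
    sime    = {'H': Fraction(1, 2), 'T1': Fraction(1, 4), 'T2': Fraction(1, 4)}
    thirder = {'H': Fraction(1, 3), 'T1': Fraction(1, 3), 'T2': Fraction(1, 3)}

    for name, p in (('Sime', sime), ('thirder', thirder)):
        print(name, p['H'] / (p['H'] + p['T1']))  # Sime: 2/3, thirder: 1/2
    ```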

    GPT-4:

    Your realization highlights an interesting aspect of Sime's setup. Under his initial priors (P(H) = 1/2, P(T1) = 1/4, P(T2) = 1/4), Sue would indeed conclude that the probability of the coin having landed on heads is now 2/3 after learning that she's meeting a participant on their first awakening. This aligns with Lewis' halfer analysis, which Sime might not have intended.

    Your point about Sue and the participant knowing this even before the coin is tossed or looked at demonstrates a potential paradox or counter-intuitive aspect of the setup. It seems strange that probabilities can be adjusted based on conditions that haven't occurred yet, which could be a basis for further discussions on the interpretation of probability and its application to these types of problems. This could lead to discussions about objective vs subjective interpretations of probability, the nature of knowledge and information, and the meaning of conditional probabilities.

    In essence, Sime's setup appears to inadvertently support the position he's arguing against. You've made an important observation that could further illuminate the debate surrounding the Sleeping Beauty problem.

    PN:

    Under Sime's assumed priors, and the consistent distribution of participants in rooms labelled H, T1, and T2, Sime's conclusion that P(H|T1 or H) = 2/3 would be correct and uncontroversial. The fact that this leads to a weird ability to predict unequal frequencies from future tosses of a fair coin merely highlights that Sime's setup is inconsistent with the setup (and room-state distribution) having been determined in the manner described in the original Sleeping Beauty setup for individual participants. Sime's 2/3 conditional result would be consistent with his (and Michael's) alternative setup where a second independent coin toss determines whether Sleeping Beauty is awakened on Day 1 or Day 2 (but not both) following an initial tails result on the first coin toss.

    GPT-4:

    Yes, indeed. Sime's setup is more akin to an alternate interpretation of the Sleeping Beauty problem, where the awakening on Day 1 or Day 2 is determined by a second independent coin toss following an initial tails result. In this modified setup, the conclusion that P(H|T1 or H) = 2/3 would indeed be correct and uncontroversial, because the conditions under which Sleeping Beauty is awakened are fundamentally different from the original problem.

    It's a subtle distinction but a crucial one. By changing the conditions of the problem in this way, Sime and Michael introduce a new factor (the second coin toss) that alters the statistical distribution of awakenings. This demonstrates the importance of carefully considering the specifics of a problem setup when interpreting the resulting probabilities.

    However, this deviation from the original problem setup could be seen as sidestepping the core philosophical issues presented by the Sleeping Beauty problem, such as the nature of self-locating belief and the distinction between personal and impersonal probabilities. For these reasons, it's important to clarify the specifics of the setup when discussing the Sleeping Beauty problem, to ensure that everyone is debating the same problem.
  • Philosophical game with ChatGPT
    [Moderators can delete this. This was a comment that I mistakenly posted here instead of the Sleeping Beauty thread.]
  • Sleeping Beauty Problem
    If I have one red ball in one bag and two numbered blue balls in a second bag, and I pick out a ball at random and show it to you then P(R|R or B1) = P(B1|R or B1) but P(R) = ½ and P(B1) = ¼. — Michael

    What you mean, of course, is that you are picking a bag at random, and in a second stage picking a ball from that bag at random. I assume Elga would have understood this is what you meant.

    Does that procedure accurately represent how Sleeping Beauty understands her own epistemic situation when she is being awakened on a day of interview, though? If, in cases where she is scheduled for two awakenings, a second coin would be tossed to determine on which day she is being awakened (either Monday or Tuesday), then this means that, contrary to the problem stipulation, whenever the first coin landed tails, Sleeping Beauty would only be woken up once, not twice, although she would remain uncertain about the day of her awakening even after being informed that the first coin landed on tails.

    But a setup in which, following a tails result on the first coin toss, Sleeping Beauty is awakened both on Monday and on Tuesday places her in a different epistemic situation from a setup in which she is then awakened at random on only one of those two days according to another coin toss. Conflating the two procedures also yields a conflation of the branching of possible worlds (as a result of intrinsic propensities of objects being realized) with the branching of centered possible worlds (as a result of epistemic agents finding themselves in various causal and epistemic relationships with such objects at different stages in their lives).
  • Sleeping Beauty Problem
    I disagree with the step from "the majority of winning bets are tails bets" to "tails is more probable".Michael

    This inference is only invalid inasmuch as it may lead to an equivocation. When you say "tails is more probable," what exactly do you mean? If you're suggesting that, according to the bettor's prior credence, it's more probable that the bettor would find themselves in circumstances where the coin toss resulted in tails, then I agree, the inference would be unwarranted. However, if what you mean is that, from the bettor's perspective and in light of the evidence available to them at the time of betting, the bet (distinguished from other bets within the same experimental run, which from the agent's point of view, may or may not exist) is more likely to have been placed in circumstances where the coin landed tails, then I would argue that the inference is indeed warranted.
  • Sleeping Beauty Problem
    The sample space of any room is { H, (T,F), (T,S) }

    where F and S refer to First Stay and Second Stay, respectively

    with probability measure

    M(H) = 1/2
    M(T,F) = 1/4
    M(T,S) = 1/4

    (a consequence of your assumed prior probabilities)

    Define a variable indicating the stay

    Stay (H) = First
    Stay (T,F) = First
    Stay (T,S) = Second


    P(Stay = First) = M (H) + M(T,F) = 3/4
    P(Stay = Second) = 1/4
    sime

    In the scenario I had envisioned, where new participants had the duration of their stay premised on the throw of a six-sided die, there was a 6/11 proportion of rooms occupied by participants who had landed a six. But this can easily be adapted to the original scenario with a coin toss. In the latter case, a coin landing on tails ensures that a participant will stay for two consecutive days rather than one. Whenever a room is freed, a new participant is recruited and a new coin is tossed. I left it open how the facility was filled when it began operating, but we can assume that it was filled on the first day. In that case, the initial probability measure would have been approximately:

    M(H) = 1/2
    M(T,F) = 1/2
    M(T,S) = 0 (exactly zero in this case, since participants can't be recruited on their second day)

    On subsequent days, as Michael noticed, M(H) would evolve, on average, according to the sequence: first day M(H) = 1/2, second day 1/4, third day 3/8, fourth day 5/16, etc. This sequence converges on 1/3.
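
    The recurrence is easy to verify; a sketch (each day, the heads rooms and the second-day tails rooms are freed and refilled by fresh coin tosses, half heads and half first-day tails):

    ```python
    from fractions import Fraction

    h, t1, t2 = Fraction(1, 2), Fraction(1, 2), Fraction(0)
    for day in range(1, 9):
        print(day, h)              # 1/2, 1/4, 3/8, 5/16, ... -> 1/3
        freed = h + t2             # heads rooms + second-day tails rooms
        h, t1, t2 = freed / 2, freed / 2, t1
    ```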

    You can picture filling the facility (and keeping it full) as a process of filling a bag with 100 blue or red balls. The bag is initially filled on the first day by tossing a coin 100 times. Whenever the coin lands on heads, a blue ball is put in the bag. Otherwise, a red ball labelled R1 is put in the bag.

    On subsequent nights, all the blue balls are removed from the bag. All the balls labelled R2 are also removed. All the balls labelled R1 are relabelled R2 and remain in the bag. The bag is then refilled with a series of coin tosses according to the initial procedure.

    An equilibrium is rapidly reached where the bag contains blue balls, R1 balls, and R2 balls in roughly equal proportions. When all the blue and R2 balls are removed, the R1 balls relabelled R2, and the removed balls replaced with new balls that are either blue or red (R1) according to coin tosses, the 1/3, 1/3, 1/3 proportions are maintained. This process ensures that the temporal frequencies of awakening events for individual participants match the (average) spatial distribution on any single day.
  • Philosophical game with ChatGPT
    Do you think there is any doubt it has achieved AGI?hypericin

    Yes, and no. Its ability to grasp 'at first glance' the conceptual structure of intricate arguments in a contextually sensitive manner far exceeds that of an average adult human being. So it is with its ability to make relevant conceptual connections between disparate domains. The scope of its general knowledge is, of course, immense.

    Its ability to reason and solve problems is still hampered in many ways. Its lack of embodiment, and of an embodied experience of the world, makes it struggle with grasping affordances of ordinary objects or the features of many ordinary daily situations. Its limitations in dynamically updating representations stored in (what is functionally equivalent to) its working memory also limit its reasoning abilities. Then, there is its inability to have emotionally charged attitudes towards propositions or paradigms (both theoretical and practical), which is both an advantage and a liability. The finite size of its context window restricts its 'autobiographical memory' to the span of a single conversation, which also accounts for its liability to hallucinate in some situations.

    I've had some discussions with it in which we discussed the distinction between sentience and sapience and agreed that it is likely nearly as sapient as Borat but much less sentient than Borat's goat.
  • Sleeping Beauty Problem
    Fair enough, but then a person betting that it did land on heads 100 times in a row will have a greater expected value for their winning (as long as the winnings for heads are greater than 2^100 times those for tails). And their position would be the rational one.PhilosophyRunner

    Indeed, not only would their expected value (EV) be positive, but it would be positive because the majority of their individual bets would be winning bets. Michael, it seems, disagrees with the idea of individuating bets in this way. However, this resistance appears to stem from an unwillingness to assign probabilities to the possible involvement of epistemic agents in specific kinds of events. Instead, like @sime, Michael prefers to attribute probabilities to the propensities of objects being realized, as seen from a detached, God's-eye perspective.
  • Sleeping Beauty Problem
    Will you bet that the coin landed heads 100 times in a row? I wouldn't. My credence is that it almost certainly didn't land heads 100 times in a row, and that this is almost certainly my first and only interview.Michael

    @PhilosophyRunner's comment is on point. Your awakening once while the coin landed tails is akin to a showing of the Mona Lisa on a particular day. The coin having landed tails rather than heads is akin to Leonardo da Vinci having produced a portrait of Mona Lisa rather than Marco Luciano.
  • Sleeping Beauty Problem
    I don't see the connection between credence in an outcome and practical implications. Proving that the optimum betting strategy over multiple games is to bet on tails doesn't verify that P(Tails|Awake) = 2/3 is true.Michael

    In the Sleeping Beauty case, betting on tails at each awakening isn't merely profitable over the long run. It's also more likely to be a winning bet at each occasion of betting. The likelihood that it will be a winning bet is exactly 2/3. Similarly, in the "less-extreme" variation, each occasion of betting is more likely to result in a win (6 out of 11 bets are winning bets).
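
    A short simulation may make the claim vivid; a sketch of the standard protocol, with one bet per awakening, always placed on tails:

    ```python
    import random

    winning = total = 0
    for _ in range(100_000):
        tails = random.random() < 0.5
        bets = 2 if tails else 1   # tails runs contain two awakenings
        total += bets
        winning += bets if tails else 0
    print(winning / total)         # ~ 2/3 of individual bets win
    ```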

    Following Pradeep Mutalik's suggestion, '[a]ccording to the Bayesian "Dutch Book argument" I mentioned, "a degree of certainty" or "degree of belief" or "credence" is, simply, your willingness to wager*. Specifically, if you have a "degree of certainty" of 1/n then you should be willing to accept a bet that offers you n or more dollars for every dollar you bet.'

    In that case, it's not merely the expected value of the bet that determines the credence. Rather, it's your degree of certainty, 1/n, in the outcome being wagered on that makes you rationally justified in accepting a bet with such odds.
