Comments

  • Sleeping Beauty Problem
The answer to problem B is clearly 1/3 and I think we both will agree here. The problem A is the same question that is asked to SB - on a given wake up event, she is asked in the moment about the probability of the coin showing heads. So the answer in problem A is also 1/3.PhilosophyRunner

    It’s not the same because she isn’t given a randomly selected waking after 52 weeks. She’s given either one waking or two, determined by a coin toss.

    The manner in which the experiment is conducted matters.
  • Sleeping Beauty Problem
    I think it's also worth paying particular attention to the way Elga phrased the problem and the solution:

    When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

    ...

    I've just argued that when you are awakened on Monday, that credence ought to change to 1/3.

    ...

    But you were also certain that upon being awakened on Monday you would have credence 1/3 in H.

    The Tuesday interview is actually irrelevant to Elga's argument (which is why he says in that footnote that Sleeping Beauty only needs to believe that the experiment will be conducted in this way).

    So Elga argues that on Monday, before the coin has been flipped, Sleeping Beauty's credence (not knowing that it is Monday) should be P(Heads) = 1/3.
  • Sleeping Beauty Problem
    To ask how probable it is that the coin landed on heads involves a tacit reference to the counterfactual circumstances where you are presently facing a (hidden) coin that didn't land the way it actually did.Pierre-Normand

    Not exactly, because if it's Monday the coin hasn't been flipped at all. It's only hidden if today is Tuesday and the coin is tails.
  • Sleeping Beauty Problem
    (2) The setup confounds wagering arguments. That won't matter much to a lot of people, but it's uncomfortable. Annoying. Ramsey used Dutch book arguments from the beginning, and despite their limitations they can be clarifying. Each time I've tried to construct a sane payoff table I've failed. I've wondered lately if there might be a conditional wager that comes out rational, but I can't work up enough hope of success to bother. Partial beliefs, within suitable limits, ought to be expressible as wagers, but not in this case, and that blows.Srap Tasmaner

    You might be interested in When betting odds and credences come apart: More worries for Dutch book arguments.
  • Sleeping Beauty Problem
    The original statement of the problem fails to specify what constitutes an individual act of verification of her credence, though, such that we can establish the target ratio unambiguously. As I've previously illustrated with various examples, different pragmatic considerations can lead to different verification methods, each yielding different values for P(H), aligning with either the Halfer or Thirder stance.Pierre-Normand

    It sort of addresses this in a footnote:

    The precise effect of the drug is to reset your belief-state to what it was just before you were put to sleep at the beginning of the experiment. If the existence of such a drug seems fanciful, note that it is possible to pose the problem without it — all that matters is that the person put to sleep believes that the setup is as I have described it.

    The problem (and Elga's solution) have nothing to do with how to "verify" one's credence. It simply asks what a rational person should/would believe were they told the rules of the experiment, woken up, and asked their credence.

    So for the sake of the problem we can assert that, unknown to Sleeping Beauty, she's only actually woken up once.
  • Sleeping Beauty Problem
    In Elga's paper the question is "to what degree ought you believe that the outcome of the coin toss is Heads?"
  • Donald Trump (All General Trump Conversations Here)
    I don’t think he broke the law nor do I care if he did.NOS4A2

    Well that just says everything.
  • Donald Trump (All General Trump Conversations Here)
    He’s the one being persecuted.NOS4A2

    It's spelled "prosecuted".
  • Donald Trump (All General Trump Conversations Here)
    https://storage.courtlistener.com/recap/gov.uscourts.flsd.648653/gov.uscourts.flsd.648653.3.0.pdf

    37 counts.

    The classified documents Trump stored in his boxes included information regarding defense and weapons capabilities of both the United States and foreign countries; United States nuclear programs; potential vulnerabilities of the United States and its allies to military attack, and plans for possible retaliation in response to a foreign attack



    a. In July 2021, at Trump National Golf Club in Bedminster, New Jersey ("The Bedminster Club"), during an audio-recorded meeting with a writer, a publisher, and two members of his staff, none of whom possessed a security clearance, TRUMP showed and described a "plan of attack" that TRUMP said was prepared for him by the Department of Defense and a senior military official. TRUMP told the individuals that the plan was "highly confidential" and "secret". TRUMP also said "as president I could have declassified it," and, "Now I can't, you know, but this is still a secret."
  • Donald Trump (All General Trump Conversations Here)
    Trump lawyers quit classified documents case

    Two lawyers who represented Donald Trump in the months before the former president was indicted on federal charges over his handling of classified documents quit working for him Friday morning.

    The attorneys, Jim Trusty and John Rowley, did not explain in detail why they had resigned, other than to say that “this is a logical moment” to do so given his indictment Thursday in U.S. District Court in Miami.

    Trusty and Rowley also said they will no longer represent Trump in a pending federal criminal probe into his efforts to overturn his loss in the 2020 election to President Joe Biden.
  • Donald Trump (All General Trump Conversations Here)
    Biden and the deep-state going after their political opponents once again.NOS4A2

    Special Counsel going after a criminal.

    I’m sure none of it is to distract from Biden’s bribery scandal.NOS4A2

    Me too. It's not a new investigation.
  • Sleeping Beauty Problem
    But we are agreed on the validity of Sue's credences in both scenarios, right?Pierre-Normand

    Yes, I said as much with my extreme example.

    Given that roughly 2/3 of sitters will sit in on a 100 Heads interview it is rational for each sitter to reason that they are likely sitting in on a 100 Heads interview.

    But given that only 1/2^100 of participants will have a 100 Heads interview it is rational for each participant to reason that their interview is likely not a 100 Heads interview.

    Neither the sitter nor the participant should update their credence to match the other's. They each belong to a different reference class.

    And if they reason this way then 2^101 sitters will be right (once) and 2^100 - 1 will be wrong (once), and 2^100 - 1 participants will be right (once) and 1 will be wrong (2^101 times).

    I'd say that's all the evidence I need to justify my credence.
  • Sleeping Beauty Problem
    I would argue that Jane should update her credence in the same way in light of the same information.Pierre-Normand

    Jane should reason as if she is randomly selected from the set of all participants, because she is.
    Sue should reason as if she is randomly selected from the set of all sitters, because she is.

    Jane shouldn't update her credence to match Sue and Sue shouldn't update her credence to match Jane.

    I think my extreme example shows why. It's just not rational for each participant to reason that the coin most likely landed heads 100 times but it is rational for each sitter to reason that they are sitting in on a 100 heads experiment.
  • Sleeping Beauty Problem
    Was my rephrasing of it wrong? I'm treating DZ#1 as Monday and DZ#2 as Tuesday. If twice at DZ#1 then twice on Monday, if once at DZ#2 then once on Tuesday. If you know that it's DZ#1 then you know that it's Monday.
  • Sleeping Beauty Problem
    Although you linked to my most recent post, I assume you intended to respond to this one.Pierre-Normand

    No, I was just trying to rephrase your secret mission example in a way that I could understand better. Did I misinterpret it?

    If not then it appears to be saying the same thing as the above?
  • UFOs
    Do you wish that UFOs, Alien Abductions, and Alien Visits were, in fact, REAL, meaning our planet has been visited by aliens from another star system, and that aliens may be present on our planet right now?BC

    No. I reckon they’d more likely be a threat to us than a help.
  • Sleeping Beauty Problem


    So if heads then woken once on Monday and twice on Tuesday, otherwise woken twice on Monday and once on Tuesday.

    Sue tells Jane that it's Monday.

    What is Jane's credence that the coin landed heads?

    I say 1/2.

    It's exactly the same reasoning as before.

    Sue should reason as if she is randomly selected from the set of all sitters, and 1/3 of sitters sitting in a Monday room are sitting in a heads room.

    Jane should reason as if she is randomly selected from the set of all participants, and 1/2 of participants in a Monday room are sitting in a heads room.

    This reasoning is much clearer to see in the 100 Heads example, and I don't see how any counterexample is going to change my mind about 100 Heads. If I'm one of the participants I will only ever reason that P(100 Heads) = 1/2^100. I am almost certainly not the one participant who will have 2^101 interviews.

    I should no more update my credence to match my sitter’s than she should update hers to match mine.
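    The two reference classes described above can be checked with a quick simulation (a sketch; the counting and variable names are my own). It tallies how often a Monday interview belongs to a heads run (the sitter's reference class) against how often a run containing a Monday interview is a heads run (the participant's reference class).

```python
import random

random.seed(0)
RUNS = 100_000

monday_sitter_heads = 0   # Monday interviews whose run was heads
monday_sitters = 0        # total Monday interviews (one sitter each)
monday_runs_heads = 0     # heads runs (every run has a Monday interview)
monday_runs = 0           # all runs

for _ in range(RUNS):
    heads = random.random() < 0.5
    # per the setup: heads gives one Monday waking, tails gives two
    monday_interviews = 1 if heads else 2
    monday_sitters += monday_interviews
    if heads:
        monday_sitter_heads += monday_interviews
        monday_runs_heads += 1
    monday_runs += 1

print(monday_sitter_heads / monday_sitters)   # ≈ 1/3 of Monday sitters are in heads rooms
print(monday_runs_heads / monday_runs)        # ≈ 1/2 of participants' Monday rooms are heads rooms
```

    Both frequencies are real; the dispute is over which one each person should use.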
  • Sleeping Beauty Problem
    Surely, Jane cannot reasonably say: 'Yes, I see you are right to conclude that the probability of the coin having landed on heads is 1/3, based on the information we share. But my belief is that it's actually 1/2.'"Pierre-Normand

    Sue's reasoning is right for Sue but wrong for Jane (and vice versa) given that roughly 2/3 of sitters will sit in on a 100 Heads interview but only 1/2^100 of participants will have a 100 Heads interview.
  • Sleeping Beauty Problem
    When Sue finds Jane in the assigned room, and assuming she knows the participants and the experimental setup, her prior probabilities would be:

    P(Jane awake today) = P(JAT) = 1/2, and P(H) = 1/2

    Her updated credence for H is P(H|JAT) = P(JAT|H) * P(H) / P(JAT) = (1/3*1/2) / (1/2) = 1/3

    Jane's priors for any random day during the experiment would be exactly the same as Sue's. When Jane is awakened on a day when Sue is assigned to her, Jane has the same information Sue has about herself, and so she can update her credence for H in the same way. She concludes that the probability of this kind of awakening experience, resulting from a heads result, is half as probable, and thus half as frequent, as identical awakening experiences resulting from a tails result. This conclusion doesn't impact the ratio of the frequency of heads-result runs to the total number of experimental runs, which remains at 1/2 from anyone's perspective.
    Pierre-Normand

    I've already stated why I disagree with this. The manner in which the sitter is assigned a room isn't the manner in which Sleeping Beauty is assigned a room, and so their credences will differ.

    Sue should reason as if her room was randomly selected from the set of all rooms, because it was.
    Jane should reason as if she was randomly selected from the set of all participants, because she was (via the coin flip).

    This is clearer with my extreme example.

    Roughly 2/3 of sitters (2^101 of the 2^101 + 2^100 - 1) will sit in on a 100 Heads interview, and so their credence should be P(100 Heads) ≈ 2/3.

    Only 1/2^100 of participants will have a 100 Heads interview, so their credence should be P(100 Heads) = 1/2^100.

    The fact that the one participant who has a 100 Heads interview will have 2^101 of them is irrelevant. It is only rational for each participant to reason that they are almost certainly not the participant who will have 2^101 interviews, and so that this is almost certainly their first and only interview, and so that the coin almost certainly didn't land heads 100 times. This is, again, what I explained to PhilosophyRunner here.

    The claim that because most interviews are a 100 Heads interview then my interview is most likely a 100 Heads interview is a non sequitur. Only if most participants have a 100 Heads interview could it follow that my interview is most likely a 100 Heads interview.
  • Sleeping Beauty Problem
    Your calculation seems correct, but it doesn't adequately account for the new capacity Jane gains to refer to her own temporal location using an indexical expression when updating her credence. Instead, you've translated her observation ("I am awake today") into an impersonal overview of the entire experiment ("I am scheduled to be awakened either under circumstances H1, T1, or T2"). The credence you've calculated reflects Sleeping Beauty's opinion on the ratio, over many iterations of the experiment, of (1) the number of runs resulting from a heads result, to (2) the total number of experimental runs. Indeed, this ratio is 1/2, but calculating it doesn't require her to consider the knowledge that today falls within the set {H1, T1, T2}.Pierre-Normand

    I've just taken what Elga said. He says:

    Combining results, we have that P(H1) = P(T1) = P(T2). Since these credences sum to 1, P(H1)=1/3.

    If P(H1), P(T1), and P(T2) sum to 1 then P(H1 or T1 or T2) = 1.

    Where P(H1) means "the coin landed heads and today is Monday", P(T1) means "the coin landed tails and today is Monday", and P(T2) means "the coin landed tails and today is Tuesday".
  • Sleeping Beauty Problem
    P(Heads | Mon or Tue) = P(Mon or Tue | Heads) * P(Heads) / P(Mon or Tue)
    P(Heads | Mon or Tue) = 1 * 1/2 / 1
    P(Heads | Mon or Tue) = 1/2
    Michael

    Going back to this for a moment, I think a better way to write this would be:

    P(Heads|H1 or T1 or T2) = P(H1 or T1 or T2|Heads) * P(Heads) / P(H1 or T1 or T2)

    If Elga is right in saying that P(H1), P(T1), and P(T2) sum to 1 then P(H1 or T1 or T2) = 1.

    So P(Heads|H1 or T1 or T2) = 1/2.

    If he's right when he says that "[you] receiv[e no] new information [but] you have gone from a situation in which you count your own temporal location as irrelevant to the truth of H, to one in which you count your own temporal location as relevant to the truth of H" then it seems correct to say that Sleeping Beauty is just being asked about P(Heads|H1 or T1 or T2).
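    For concreteness, the arithmetic in that calculation can be written out (a sketch that simply restates the numbers above, granting Elga's premise):

```python
# Bayes computation, granting Elga's premise that P(H1 or T1 or T2) = 1
p_heads = 1 / 2                 # fair coin
p_awake_given_heads = 1         # heads guarantees an awakening (H1)
p_awake = 1                     # Elga: P(H1) + P(T1) + P(T2) = 1

posterior = p_awake_given_heads * p_heads / p_awake
print(posterior)  # 0.5
```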
  • Sleeping Beauty Problem
    Good point. Thanks for the correction.
  • Sleeping Beauty Problem
    In that scenario, P(R|R or B1) would be 2/3 and P(B1|R or B1) would be 1/3.Pierre-Normand

    How do you get that?
  • Sleeping Beauty Problem
    However, if what you mean is that, from the bettor's perspective and in light of the evidence available to them at the time of betting, the bet (distinguished from other bets within the same experimental run, which from the agent's point of view, may or may not exist) is more likely to have been placed in circumstances where the coin landed tails, then I would argue that the inference is indeed warranted.Pierre-Normand

    I believe this response to PhilosophyRunner addresses this claim. Specifically:

    Although it's true that most interviews follow the coin landing heads 100 times, every single one of those interviews belongs to a single participant, and for each participant the probability that they are that single participant is 1/2^100.

    So although it's true that "any given interview is twice as likely to have followed the coin landing heads 100 times" it is false that "my interview is twice as likely to have followed the coin landing heads 100 times".
  • Sleeping Beauty Problem
    Does that procedure accurately represent how Sleeping Beauty understands her own epistemic situation when she is being awakened on a day of interview, though?Pierre-Normand

    It's not intended to. It's intended to show that this inference is not valid a priori:

    P(A|A or B) = P(B|A or B)
    ∴ P(A) = P(B)

    Elga's argument depends on this inference but he doesn't justify it.

    His chain of entailments when applied to my counterexample leads to a false conclusion, and so it needs to be explained why this chain of entailments is valid for the Sleeping Beauty case.
  • UFOs
    Not necessarily so, just because we would do it does not mean that they would have the same motivations we do.Sir2u

    Certainly not necessarily so, but unless we're something special it stands to reason that at least one would.

    First of all, why would they have to be more advanced than we are? True, there are many older galaxies out there that could have developed highly intelligent life forms a long time ago, but there is also evidence that many galaxies have already died out. Any one of the many galaxies could have life similar to our own with the same level of technology, thus unable to come visiting.Sir2u

    Of course it's possible, and one explanation for the Fermi paradox is that we are one of the first intelligent species in the galaxy. But given that the oldest planet in the Milky Way is 12.7 billion years old and the Earth is only 4.5 billion years old, it would appear reasonable to infer that there were advanced civilisations long before us.

    Second point, a million years ago when they set out it would have been impossible for them to even guess that we might appear on this planet. So why would they head in this direction instead of one of the other millions of possibilities in all of the other galaxies?Sir2u

    Just considering species born in the Milky Way, as I said before, the conjecture is that a species would explore all of it. Assuming the resources are available and they don't die out first, it's unclear why they wouldn't.

    Last point, no one said that intelligent life is common.Sir2u

    Actually, lots of people do. It's called the mediocrity principle. Of course others also propose the Rare Earth hypothesis in opposition.
  • Sleeping Beauty Problem
    A given seeing of it is twice as likely to be tails.PhilosophyRunner

    This is an ambiguous claim. It is true that if you randomly select a seeing from the set of all possible seeings then it is twice as likely to be a tails-seeing, but the experiment doesn't work by randomly selecting a seeing from the set of all possible seeings and then "giving" it to Sleeping Beauty. It works by tossing a coin, and then either she sees it once or she sees it twice.

    If we return to my example of tossing the coin 100 times, assume there are 2^100 participants. Each participant knows two things:

    1. Of the 2^101 + 2^100 - 1 interviews, 2^101 follow the coin landing heads 100 times

    2. Of the 2^100 participants, the coin landed heads 100 times for 1 of them

    You are suggesting that they should ignore 2 and use 1 to infer a credence of roughly 2/3.

    I am saying that they should ignore 1 and use 2 to infer a credence of 1/2^100.

    Although it's true that most interviews follow the coin landing heads 100 times, every single one of those interviews belongs to a single participant, and for each participant the probability that they are that single participant is 1/2^100.

    So although it's true that "any given interview is twice as likely to have followed the coin landing heads 100 times" it is false that "my interview is twice as likely to have followed the coin landing heads 100 times".

    And by the exact same token, although it's true that "any given interview is twice as likely to be tails" it is false that "my interview is twice as likely to be tails".

    The likelihood of your interview being a tails interview is equal to the likelihood that the coin landed tails in your experiment, which is 1/2.
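    Both of the numbered facts above show up in a scaled-down simulation (3 tosses instead of 100, and the winner woken 2^4 times instead of 2^101; the scaling and all names are my own):

```python
import random

random.seed(1)
N = 3                              # scaled down from 100 tosses
WINNER_INTERVIEWS = 2 ** (N + 1)   # scaled analogue of the winner's 2^101 interviews
PARTICIPANTS = 100_000

all_heads_interviews = 0
total_interviews = 0
winners = 0

for _ in range(PARTICIPANTS):
    won = all(random.random() < 0.5 for _ in range(N))  # all N tosses land heads
    interviews = WINNER_INTERVIEWS if won else 1
    total_interviews += interviews
    if won:
        all_heads_interviews += interviews
        winners += 1

print(all_heads_interviews / total_interviews)  # most interviews are all-heads interviews...
print(winners / PARTICIPANTS)                   # ...but very few participants are winners (≈ 1/2^N)
```

    The first ratio is high and the second is tiny, which is exactly the gap between fact 1 and fact 2.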
  • Sleeping Beauty Problem
    That I get to see something twice doesn't mean that I'm twice as likely to see it. It just means I get to see it twice.
  • Sleeping Beauty Problem
    Using frequencies over multiple games to argue for the probabilities in a single game is a fundamental way probabilities are calculated.PhilosophyRunner

    Only when it's appropriate to do so. It is in the case of rolling a die; it isn't in the case of counting the number of awakenings.

    Again, it doesn't matter that if the coin lands heads 100 times in a row then I will be woken 2^101 times. When I'm put to sleep, woken up, and asked my credence that the coin landed heads 100 times in a row – or my credence that my current interview is a 100-heads-in-a-row interview – the only thing that's relevant is the probability of a coin landing 100 heads in a row, which is 1/2^100. It simply doesn't matter that if the experiment were repeated 2^100 times then roughly 2/3 of interviews are 100-heads-in-a-row interviews.

    If you want to say that it must still have to do with frequencies, then what matters is the frequency of a coin landing heads 100 times in a row, not the frequency of interviews that follow the coin landing heads 100 times in a row. You're using an irrelevant frequency to establish the probability.
  • Sleeping Beauty Problem
    @Pierre-Normand

    Thought you might be interested in my short exchange with Elga:

    Dear Professor Elga,

    I've read your paper Self-locating belief and the Sleeping Beauty problem and hope you could answer a question I have regarding your argument. You state that "P(T1|T1 or T2) = P(T2|T1 or T2), and hence P(T1) = P(T2)" and by implication state that P(H1|H1 or T1) = P(T1|H1 or T1), and hence P(H1) = P(T1).

    However I cannot see in the paper where this inference is justified, as it is not valid a priori.

    If I have one red ball in one bag and two numbered blue balls in a second bag, and I pick out a ball at random and show it to you then P(R|R or B1) = P(B1|R or B1) but P(R) = ½ and P(B1) = ¼.

    So the (double-)halfer can accept that P(H1|H1 or T1) = P(T1|H1 or T1) but reject your assertion that P(H1) = P(T1) follows. Is there something in your paper that I missed to justify this inference?

    Thanks for your time.
    — Michael

    Dear Michael,

    Thanks for your interest in this stuff. The form of reasoning I had in mind was the following chain of entailments:

    P(X|X or Y) = P(Y|X or Y)
    P(X&(X or Y))/P(X or Y) = P(Y&(X or Y))/P(X or Y)
    P(X)/P(X or Y) = P(Y)/P(X or Y)
    P(X) = P(Y).

    I wish you the best with your research.
    — Elga

    Unfortunately I don't quite see how it addresses my counterexample, which seems to show that there must be a mistake with that chain of entailments, but I won't push him on it.
  • UFOs
    The main argument I see is not if aliens exist, but why would the come here? Any ideas about that?Sir2u

    I don’t think it’s specifically about coming here. One of the arguments is just that a sufficiently advanced civilisation would colonise their entire galaxy, even if just with unmanned probes, whether for research or to find resources.

    At 10% the speed of light it would take a million years to cross the Milky Way. If intelligent life is common you’d have expected someone to have done it in the last few billion years.
  • Sleeping Beauty Problem
    Indeed, not only would their expected value (EV) be positive, but it would be positive because the majority of their individual bets would be winning bets. Michael, it seems, disagrees with the idea of individuating bets in this way.Pierre-Normand

    I disagree with the step from "the majority of winning bets are tails bets" to "tails is more probable".

    It's either a non sequitur or affirming the consequent, where the implicit premise is "if tails is more probable then the majority of winning bets are tails bets".

    In this case the majority of winning bets are tails bets only because you get to place more bets if it's tails.

    This is why, as I have often said, betting examples just don't answer the question at all. They're a red herring. Betting on tails might be more profitable, but it is still the case that one's credence should be that P(Heads|Awake) = 1/2.
  • Sleeping Beauty Problem
    If you repeated the experiment a trillion times, and kept a note of whether your guess was correct or not each time, and I did the same, we would find that I got it correct more than you. By the law of large numbers that would mean the outcome I guessed for was more probable than yours.PhilosophyRunner

    More frequent but not more probable.

    If the game is played once I wouldn't argue that the coin most likely landed heads 100 times in a row and that my interview is most likely a 100-heads-in-a-row interview. I would argue that the coin most likely didn't land heads 100 times in a row and that this is most likely my first and only interview.

    I think using frequencies over multiple games to argue for the probability in a single game is a non sequitur.
  • Sleeping Beauty Problem
    Fair enough, but then a person betting that it did land on heads 100 times in a row will have a greater expected value for their winnings (as long as the winnings for heads are more than 2^100 times greater than for tails). And their position would be the rational one.PhilosophyRunner

    It can be rational in the sense that it can be profitable to bet when the expected value is greater than the cost, much like a lottery that costs £1 with a prize of £2,000,000 and a probability of winning of 1/1,000,000.

    But it's not rational to believe (especially when playing once) that I am most likely to win betting that it landed heads 100 times. You're not most likely to win. The odds of winning are 1/2^100.
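    The lottery analogy above can be made concrete (same numbers as the example: £1 ticket, £2,000,000 prize, 1-in-1,000,000 odds):

```python
# Lottery from the analogy: positive expected value, yet almost certain loss on any single play
cost = 1
prize = 2_000_000
p_win = 1 / 1_000_000

expected_value = p_win * prize   # about £2 per £1 ticket
print(expected_value > cost)     # the bet is profitable in expectation...
print(1 - p_win)                 # ...yet any single play almost certainly loses
```

    This is the sense in which a bet can be rational to take while one's credence in winning stays tiny.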
  • Sleeping Beauty Problem
    Following Pradeep Mutalik's argument, according to the Bayesian "Dutch Book argument", "a degree of certainty" or "degree of belief" or "credence" is essentially your willingness to wager. Specifically, if you have a "degree of certainty" of 1/n, then you should be willing to accept a bet that offers you n or more dollars for every dollar you bet.

    In that case, it's not merely the expected value of the bet that determines the credence. Rather, it's your degree of certainty, 1/n, in the outcome being wagered on that makes you rationally justified in accepting a bet with such odds.
    Pierre-Normand

    Then apply this to my case of tossing the coin one hundred times, and where the experiment is only run once.

    Will you bet that the coin landed heads 100 times in a row? I wouldn't. My credence is that it almost certainly didn't land heads 100 times in a row, and that this is almost certainly my first and only interview.

    That I would be woken up a large number of times if it did land heads 100 times in a row just doesn't affect my credence or willingness to bet that it did at all.

    And the same if I was one of 2^100 participants taking part. One person might win, but I will almost certainly not be the winner.

    Only if I got to repeat the experiment 2^100 times would I bet that it did. But not because my credence for any particular experiment has increased; it's because my credence is that I'm likely to win at least once in 2^100 attempts, and that the winnings for that one time will exceed the sum of all my losses.
  • Sleeping Beauty Problem
    Yes, an individual tails interview event is twice as probable. A tails interview where Monday and Tuesday interviews are grouped together is equally likely as a heads interview. It comes back to the language of the question and interpretation.PhilosophyRunner

    That, I believe, is a bad interpretation of probability.

    The probability of the coin landing heads is 1/2, leading to one interview.
    The probability of the coin landing tails is 1/2, leading to two interviews.

    The probability that there will be a heads interview is 1/2.
    The probability that there will be a tails interview is 1/2.

    This is the correct interpretation.
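    Both interpretations correspond to frequencies that a simulation will produce (a sketch; variable names are mine, and which frequency answers the question is precisely what is in dispute):

```python
import random

random.seed(2)
RUNS = 100_000

heads_runs = 0
tails_interviews = 0
total_interviews = 0

for _ in range(RUNS):
    heads = random.random() < 0.5
    interviews = 1 if heads else 2   # heads: one interview; tails: two
    total_interviews += interviews
    if heads:
        heads_runs += 1
    else:
        tails_interviews += interviews

print(heads_runs / RUNS)                    # ≈ 1/2: runs with a heads interview
print(tails_interviews / total_interviews)  # ≈ 2/3: interviews that are tails interviews
```

    Per run, heads and tails interviews are equally likely; per interview, tails dominates only because tails produces two of them.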
  • Sleeping Beauty Problem
    Take away the amnesia. Does it follow that because there are two interviews after every tails that a tails interview is twice as probable?

    Throwing in amnesia doesn't convert the increased frequency into an increased probability.