The answer to problem B is clearly 1/3, and I think we both will agree here. Problem A is the same question that is asked of SB: on a given wake-up event, she is asked in the moment about the probability of the coin showing heads. So the answer in problem A is also 1/3. — PhilosophyRunner
When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?
...
I've just argued that when you are awakened on Monday, that credence ought to change to 1/3.
...
But you were also certain that upon being awakened on Monday you would have credence 1/3 in H.
To ask how probable it is that the coin landed on heads involves a tacit reference to the counterfactual circumstances where you are presently facing a (hidden) coin that didn't land the way it actually did. — Pierre-Normand
(2) The setup confounds wagering arguments. That won't matter much to a lot of people, but it's uncomfortable. Annoying. Ramsey used Dutch book arguments from the beginning, and despite their limitations they can be clarifying. Each time I've tried to construct a sane payoff table I've failed. I've wondered lately if there might be a conditional wager that comes out rational, but I can't work up enough hope of success to bother. Partial beliefs, within suitable limits, ought to be expressible as wagers, but not in this case, and that blows. — Srap Tasmaner
The original statement of the problem fails to specify what constitutes an individual act of verification of her credence, though, such that we can establish the target ratio unambiguously. As I've previously illustrated with various examples, different pragmatic considerations can lead to different verification methods, each yielding different values for P(H), aligning with either the Halfer or Thirder stance. — Pierre-Normand
The precise effect of the drug is to reset your belief-state to what it was just before you were put to sleep at the beginning of the experiment. If the existence of such a drug seems fanciful, note that it is possible to pose the problem without it — all that matters is that the person put to sleep believes that the setup is as I have described it.
I don’t think he broke the law nor do I care if he did. — NOS4A2
He’s the one being persecuted. — NOS4A2
The classified documents Trump stored in his boxes included information regarding defense and weapons capabilities of both the United States and foreign countries; United States nuclear programs; potential vulnerabilities of the United States and its allies to military attack; and plans for possible retaliation in response to a foreign attack.
…
a. In July 2021, at Trump National Golf Club in Bedminster, New Jersey ("The Bedminster Club"), during an audio-recorded meeting with a writer, a publisher, and two members of his staff, none of whom possessed a security clearance, TRUMP showed and described a "plan of attack" that TRUMP said was prepared for him by the Department of Defense and a senior military official. TRUMP told the individuals that the plan was "highly confidential" and "secret." TRUMP also said, "as president I could have declassified it," and, "Now I can't, you know, but this is still a secret."
Two lawyers who represented Donald Trump in the months before the former president was indicted on federal charges over his handling of classified documents quit working for him Friday morning.
The attorneys, Jim Trusty and John Rowley, did not explain in detail why they had resigned, other than to say that “this is a logical moment” to do so given his indictment Thursday in U.S. District Court in Miami.
Trusty and Rowley also said they will no longer represent Trump in a pending federal criminal probe into his efforts to overturn his loss in the 2020 election to President Joe Biden.
But we are agreed on the validity of Sue's credences in both scenarios, right? — Pierre-Normand
I would argue that Jane should update her credence in the same way in light of the same information. — Pierre-Normand
Although you linked to my most recent post, I assume you intended to respond to this one. — Pierre-Normand
Surely, Jane cannot reasonably say: 'Yes, I see you are right to conclude that the probability of the coin having landed on heads is 1/3, based on the information we share. But my belief is that it's actually 1/2.' — Pierre-Normand
When Sue finds Jane in the assigned room, and assuming she knows the participants and the experimental setup, her prior probabilities would be:
P(Jane awake today | H) = P(JAT|H) = 1/2, P(JAT|T) = 1, so P(JAT) = 3/4, and P(H) = 1/2
Her updated credence for H is P(H|JAT) = P(JAT|H) * P(H) / P(JAT) = (1/2 * 1/2) / (3/4) = 1/3
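Anyone who wants to check Sue's update numerically can do so with a minimal Monte Carlo sketch. This assumes Sue's visit day is chosen uniformly between Monday and Tuesday, and that on heads Jane is awake only on Monday:

```python
import random

N = 1_000_000
jat = 0            # visits on which Jane is awake
heads_and_jat = 0  # of those, how many followed a heads toss

for _ in range(N):
    heads = random.random() < 0.5           # the fair coin
    day = random.choice(["Mon", "Tue"])     # Sue's randomly assigned visit day
    awake = True if not heads else day == "Mon"  # heads: awake Monday only
    if awake:
        jat += 1
        if heads:
            heads_and_jat += 1

print(f"P(JAT)   ~ {jat / N:.3f}")              # ~ 0.75
print(f"P(H|JAT) ~ {heads_and_jat / jat:.3f}")  # ~ 0.333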
Jane's priors for any random day during the experiment would be exactly the same as Sue's. When Jane is awakened on a day when Sue is assigned to her, Jane has the same information Sue has about herself, and so she can update her credence for H in the same way. She concludes that the probability of this kind of awakening experience, resulting from a heads result, is half as probable, and thus half as frequent, as identical awakening experiences resulting from a tails result. This conclusion doesn't impact the ratio of the frequency of heads-result runs to the total number of experimental runs, which remains at 1/2 from anyone's perspective. — Pierre-Normand
Your calculation seems correct, but it doesn't adequately account for the new capacity Jane gains to refer to her own temporal location using an indexical expression when updating her credence. Instead, you've translated her observation ("I am awake today") into an impersonal overview of the entire experiment ("I am scheduled to be awakened either under circumstances H1, T1, or T2"). The credence you've calculated reflects Sleeping Beauty's opinion on the ratio, over many iterations of the experiment, of (1) the number of runs resulting from a heads result, to (2) the total number of experimental runs. Indeed, this ratio is 1/2, but calculating it doesn't require her to consider the knowledge that today falls within the set {H1, T1, T2}. — Pierre-Normand
Combining results, we have that P(H1) = P(T1) = P(T2). Since these credences sum to 1, P(H1)=1/3.
P(Heads | Mon or Tue) = P(Mon or Tue | Heads) * P(Heads) / P(Mon or Tue)
P(Heads | Mon or Tue) = 1 * 1/2 / 1
P(Heads | Mon or Tue) = 1/2 — Michael
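Both calculations are arithmetically sound; the disagreement is over which ratio the question asks for. A minimal simulation of the standard setup (one awakening on heads, two on tails) shows the two ratios side by side:

import random

N = 1_000_000
heads_runs = 0    # experimental runs in which the coin landed heads
heads_wakes = 0   # awakenings that follow a heads result
total_wakes = 0   # all awakenings

for _ in range(N):
    heads = random.random() < 0.5
    wakes = 1 if heads else 2    # heads: Monday only; tails: Monday and Tuesday
    total_wakes += wakes
    if heads:
        heads_runs += 1
        heads_wakes += wakes

print(f"heads per run:       {heads_runs / N:.3f}")             # ~ 0.5  (Michael's ratio)
print(f"heads per awakening: {heads_wakes / total_wakes:.3f}")  # ~ 0.333 (Elga's ratio)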
In that scenario, P(R|R or B1) would be 2/3 and P(B1|R or B1) would be 1/3. — Pierre-Normand
However, if what you mean is that, from the bettor's perspective and in light of the evidence available to them at the time of betting, the bet (distinguished from other bets within the same experimental run, which, from the agent's point of view, may or may not exist) is more likely to have been placed in circumstances where the coin landed tails, then I would argue that the inference is indeed warranted. — Pierre-Normand
Although it's true that most interviews follow the coin landing heads 100 times, every single one of those interviews belongs to a single participant, and for each participant the probability that they are that single participant is 1/2^100.
So although it's true that "any given interview is twice as likely to have followed the coin landing heads 100 times", it is false that "my interview is twice as likely to have followed the coin landing heads 100 times".
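Both halves of that claim can be made precise with exact arithmetic. In the sketch below the interview count after 100 heads (2^101) is a hypothetical choice on my part, picked so that a given interview comes out exactly twice as likely to follow 100 heads:

```python
from fractions import Fraction

p_h100 = Fraction(1, 2) ** 100   # P(a participant's coin lands heads 100 times)
n_h100 = 2 ** 101                # hypothetical interview count after 100 heads
n_other = 1                      # interviews otherwise

# Expected share of all interviews (across many participants) that follow 100 heads:
share = (p_h100 * n_h100) / (p_h100 * n_h100 + (1 - p_h100) * n_other)
print(float(share))   # ~ 0.667: a given interview is twice as likely to follow 100 heads

# Probability that any particular participant is the one with the 100-heads run:
print(float(p_h100))  # ~ 7.9e-31: astronomically unlikely for each individual
```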
Does that procedure accurately represent how Sleeping Beauty understands her own epistemic situation when she is being awakened on a day of interview, though? — Pierre-Normand
Not necessarily so; just because we would do it does not mean that they would have the same motivations we do. — Sir2u
First of all, why would they have to be more advanced than we are? True, there are many older galaxies out there that could have developed highly intelligent life forms a long time ago, but there is also evidence that many galaxies have already died out. Any one of the many galaxies could have life similar to our own with the same level of technology, and thus be unable to come visiting. — Sir2u
Second point: a million years ago, when they set out, it would have been impossible for them to even guess that we might appear on this planet. So why would they head in this direction instead of toward one of the millions of other possibilities in all of the other galaxies? — Sir2u
Last point, no one said that intelligent life is common. — Sir2u
A given seeing of it is twice as likely to be tails. — PhilosophyRunner
Using frequencies over multiple games to argue for the probabilities in a single game is a fundamental way probabilities are calculated. — PhilosophyRunner
Dear Professor Elga,
I've read your paper Self-locating belief and the Sleeping Beauty problem and hope you could answer a question I have regarding your argument. You state that "P(T1|T1 or T2) = P(T2|T1 or T2), and hence P(T1) = P(T2)" and by implication state that P(H1|H1 or T1) = P(T1|H1 or T1), and hence P(H1) = P(T1).
However I cannot see in the paper where this inference is justified, as it is not valid a priori.
If I have one red ball in one bag and two numbered blue balls in a second bag, and I pick out a ball at random and show it to you then P(R|R or B1) = P(B1|R or B1) but P(R) = ½ and P(B1) = ¼.
So the (double-)halfer can accept that P(H1|H1 or T1) = P(T1|H1 or T1) but reject your assertion that P(H1) = P(T1) follows. Is there something in your paper that I missed to justify this inference?
Thanks for your time. — Michael
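For what it's worth, Michael's bag example can be simulated under the pick-a-bag-then-a-ball reading (my assumption about the intended procedure), and it reproduces the 2/3 and 1/3 values Pierre-Normand cites above, rather than the equality the letter asserts:

```python
import random

N = 1_000_000
r_or_b1 = 0
red = 0

for _ in range(N):
    bag = random.choice([["R"], ["B1", "B2"]])  # pick a bag, then a ball from it
    ball = random.choice(bag)
    if ball in ("R", "B1"):
        r_or_b1 += 1
        if ball == "R":
            red += 1

print(f"P(R |R or B1) ~ {red / r_or_b1:.3f}")              # ~ 0.667
print(f"P(B1|R or B1) ~ {(r_or_b1 - red) / r_or_b1:.3f}")  # ~ 0.333
```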
Dear Michael,
Thanks for your interest in this stuff. The form of reasoning I had in mind was the following chain of entailments:
P(X|X or Y) = P(Y|X or Y)
P(X&(X or Y))/P(X or Y) = P(Y&(X or Y))/P(X or Y)
P(X)/P(X or Y) = P(Y)/P(X or Y)
P(X) = P(Y).
I wish you the best with your research. — Elga
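The one implicit step in Elga's chain is the absorption identity X ∧ (X ∨ Y) ≡ X, and the whole derivation assumes P(X ∨ Y) > 0. Spelled out:

```latex
\begin{align*}
P(X \mid X \lor Y) &= P(Y \mid X \lor Y) \\
\frac{P(X \land (X \lor Y))}{P(X \lor Y)} &= \frac{P(Y \land (X \lor Y))}{P(X \lor Y)}
  && \text{definition of conditional probability}\\
\frac{P(X)}{P(X \lor Y)} &= \frac{P(Y)}{P(X \lor Y)}
  && X \land (X \lor Y) \equiv X \text{ (absorption)}\\
P(X) &= P(Y) && \text{multiply through by } P(X \lor Y)
\end{align*}
```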
The main argument I see is not whether aliens exist, but why they would come here. Any ideas about that? — Sir2u
Indeed, not only would their expected value (EV) be positive, but it would be positive because the majority of their individual bets would be winning bets. Michael, it seems, disagrees with the idea of individuating bets in this way. — Pierre-Normand
If you repeated the experiment a trillion times and kept a note of whether your guess was correct each time, and I did the same, we would find that I got it correct more often than you. By the law of large numbers, that would mean the outcome I guessed was more probable than yours. — PhilosophyRunner
Fair enough, but then a person betting that it did land on heads 100 times in a row will have a greater expected value for their winnings (as long as the winnings for heads are more than 2^100 times those for tails). And their position would be the rational one. — PhilosophyRunner
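On the one-bet-per-run reading (my assumption about the betting structure here), the break-even point is easy to compute exactly:

```python
from fractions import Fraction

p = Fraction(1, 2) ** 100   # P(heads 100 times in a row)
stake = 1                   # amount lost on a wrong guess

def expected_value(payout):
    """EV of one bet per run that the coin landed heads 100 times."""
    return p * payout - (1 - p) * stake

print(expected_value(2 ** 100 - 1))   # 0: exact break-even payout
print(expected_value(2 ** 100) > 0)   # True: anything above 2^100 is profitable
```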
Following Pradeep Mutalik's argument, according to the Bayesian "Dutch Book argument", "a degree of certainty" or "degree of belief" or "credence" is essentially your willingness to wager. Specifically, if you have a "degree of certainty" of 1/n, then you should be willing to accept a bet that offers you n or more dollars for every dollar you bet.
In that case, it's not merely the expected value of the bet that determines the credence. Rather, it's your degree of certainty, 1/n, in the outcome being wagered on that makes you rationally justified in accepting a bet with such odds. — Pierre-Normand
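Reading "n dollars for every dollar you bet" as the total return including the stake (my assumption about the convention), the wagering rule comes down to a one-line expected-value check:

```python
def expected_value(credence, payout):
    """EV per dollar staked, where `payout` is the total return on a win
    (stake included) and the stake is forfeited on a loss."""
    return credence * payout - 1

print(expected_value(1/3, 3))   # 0.0: a thirder breaks even at 3-for-1
print(expected_value(1/3, 4))   # > 0: better odds are worth taking
print(expected_value(1/2, 3))   # 0.5: a halfer finds the same odds favorable
```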
Yes, an individual tails interview event is twice as probable. A tails interview where Monday and Tuesday interviews are grouped together is as likely as a heads interview. It comes back to the language of the question and its interpretation. — PhilosophyRunner
