Most of her awakenings occur on the rare occasions when all 100 tosses land heads, which is the basis for her credence P(100H) being greater than 1/2. — Pierre-Normand
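[A minimal sketch of the frequency arithmetic behind this claim. The excerpt doesn't restate the variant's awakening schedule, so the assumption here is that an all-heads run yields N awakenings and any other run yields one; the choice N = 2**101 is illustrative only, picked because it reproduces the 2/3 figure mentioned later in the thread.]

```python
from fractions import Fraction

# Assumption (not fixed by the excerpt): an all-heads run yields N
# awakenings, any other run yields exactly one.
p_100h = Fraction(1, 2) ** 100   # probability that all 100 tosses land heads
N = 2 ** 101                     # assumed awakenings in the all-heads branch

heads_awakenings = p_100h * N          # expected awakenings from all-heads runs
other_awakenings = (1 - p_100h) * 1    # expected awakenings from all other runs
share_100h = heads_awakenings / (heads_awakenings + other_awakenings)

print(float(share_100h))  # ≈ 2/3; exceeds 1/2 whenever N > (1 - p_100h) / p_100h
```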
However, the Sleeping Beauty problem specifically inquires about her credence, not about the rationality of her attempt to maximize her expected value, or her preference for some other strategy (like maximizing the number of wins per experimental run rather than average gain per individual bet).
Even if she were to endorse your perspective on the most rational course of action (which doesn't seem unreasonable to me either), this wouldn't influence her credence. It would simply justify her acting in a manner that doesn't prioritize maximizing expected value on the basis of this credence. — Pierre-Normand
Except the experiment is only conducted once. — Michael
Sleeping Beauty's calculation that P(H) = 1/3 doesn't hinge on her participation in the experiment being repeated. She's aware that if the coin lands heads, she will be awakened once, but if it lands tails, she will be awakened twice. If we run this experiment once with three participants, and all three of them bet on T every time they are awakened, they will be correct 2/3 of the time on average, which aligns with their credences. — Pierre-Normand
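[A minimal simulation of the frequency claim in this post: per awakening, a constant bet on Tails wins about two thirds of the time, whether we picture one participant repeating the experiment or many participants each doing it once.]

```python
import random

# Standard setup: one fair toss per run, one awakening on Heads, two on
# Tails. Per awakening, how often does a constant bet on Tails win?
random.seed(0)
runs = 100_000
awakenings = 0
tails_wins = 0
for _ in range(runs):
    coin = random.choice("HT")
    wakes = 1 if coin == "H" else 2
    awakenings += wakes
    if coin == "T":
        tails_wins += wakes

print(tails_wins / awakenings)  # ≈ 2/3
```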
And this is precisely why the betting examples that you and others use don’t prove your conclusion. — Michael
My current credence P(H) is 1/2, but if I were placed in this exact same situation repeatedly, I would expect the outcome H to occur one third of the time. — Pierre-Normand
I would like the halfer to explain why ruling out the Tuesday scenario doesn't affect their credence in the coin toss outcome at all. — Pierre-Normand
But your credence that you are in T1, after learning that the toss outcome is Tails, ought to be the same as the conditional credence P(T1|T1 or T2), and likewise for T2. Since being in T1 is subjectively indistinguishable from being in T2, you ought then to have equal credence in each: P(T1|T1 or T2) = P(T2|T1 or T2), and hence P(T1) = P(T2).
...
But your credence that the coin will land Heads (after learning that it is Monday) ought to be the same as the conditional credence P(H1|H1 or T1). So P(H1|H1 or T1) = 1/2, and hence P(H1) = P(T1).
Combining results, we have that P(H1) = P(T1) = P(T2). Since these credences sum to 1, P(H1) = 1/3.
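[A quick mechanical check that Elga's three constraints, taken at face value, pin the credences down to 1/3 each; a sketch using sympy, nothing in it goes beyond the quoted argument.]

```python
from sympy import Eq, Rational, solve, symbols

# Elga's three constraints, exactly as quoted above.
h1, t1, t2 = symbols("h1 t1 t2", positive=True)
constraints = [
    Eq(t1, t2),                          # P(T1) = P(T2)
    Eq(h1 / (h1 + t1), Rational(1, 2)),  # P(H1 | H1 or T1) = 1/2
    Eq(h1 + t1 + t2, 1),                 # the three cases exhaust the space
]
print(solve(constraints, [h1, t1, t2]))  # h1 = t1 = t2 = 1/3
```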
There is a red ball (R) in one bag and two numbered blue balls (B1 and B2) in a second bag. You will be given a ball at random. — Michael
According to Elga's reasoning:
1. P(B1|B1 or B2) = P(B2|B1 or B2), therefore P(B1) = P(B2)
2. P(R|R or B1) = 1/2, therefore P(R) = P(B1)
3. Therefore, P(R) = P(B1) = P(B2) = 1/3
The second inference, and hence the conclusion, is evidently wrong, given that P(R) = 1/2 and P(B1) = P(B2) = 1/4.
So his reasoning is a non sequitur. — Michael
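[For reference, the numbers of the one-shot reading of Michael's game, computed exactly: a bag is chosen fairly, then a ball is drawn fairly from it. Note that in this model the conditional P(R | R or B1) comes out to 2/3 rather than 1/2, so the exchange that follows is really about which protocol, and hence which probability space, mirrors the Sleeping Beauty setup.]

```python
from fractions import Fraction

# One-shot reading: fair bag choice, then a fair draw from the chosen bag.
P = {
    "R":  Fraction(1, 2),                    # red bag chosen (its only ball is red)
    "B1": Fraction(1, 2) * Fraction(1, 2),   # blue bag chosen, B1 drawn
    "B2": Fraction(1, 2) * Fraction(1, 2),   # blue bag chosen, B2 drawn
}
print(P["R"], P["B1"], P["B2"])     # 1/2 1/4 1/4
print(P["R"] / (P["R"] + P["B1"]))  # P(R | R or B1) = 2/3 in this model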
Here, P(R|R or B1) is the probability that the ball you've just received is red, conditioned on the information (revealed to you) that this is the first ball you've received in this run of the experiment. In other words, you now know you haven't taken a shot of tequila. Under these circumstances, P(R) = P(B1) = 1/2. — Pierre-Normand
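[A simulation of this sequential reading, which is what mirrors the Sleeping Beauty protocol: a fair coin picks the bag, the red bag yields one ball-receiving event, the blue bag yields two, with the tequila shot standing in for the amnesia drug between receipts. Protocol details beyond what the posts state are assumptions for illustration.]

```python
import random

# Sequential reading: the red bag yields one receipt per run, the blue
# bag two (B1 then B2), with amnesia between receipts.
random.seed(0)
receipts = []   # (ball, is_first_receipt_of_this_run)
for _ in range(100_000):
    if random.random() < 0.5:
        receipts.append(("R", True))
    else:
        receipts.append(("B1", True))
        receipts.append(("B2", False))

red_overall = sum(ball == "R" for ball, _ in receipts) / len(receipts)
first_balls = [ball for ball, first in receipts if first]
red_given_first = sum(ball == "R" for ball in first_balls) / len(first_balls)

print(red_overall)      # ≈ 1/3: blue-bag runs produce two receipts each
print(red_given_first)  # ≈ 1/2: conditional on "this is my first ball"
```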
This scenario doesn't accurately reflect the Sleeping Beauty experiment. — Pierre-Normand
That’s not accurate. There is a difference between these two assertions:
1. P(R|R or B1) = P(B1|R or B1)
2. P(R) = P(B1)
The first refers to conditional probabilities, the second to unconditional probabilities, and in my example the first is true but the second is false. — Michael
It looks like you may have misinterpreted Elga's paper. He doesn't define P as an unconditional probability. In fact, he expressly defines P as "the credence function you ought to have upon first awakening." Consequently, P(H1) and P(T1) are conditional on Sleeping Beauty being in a centered possible world where she is first awakened. The same applies to P(R) and P(B1), which are conditional on you being in a centered possible world where you are presented with a ball still wrapped in aluminum foil before being given a tequila shot. — Pierre-Normand
To understand what P(R) entails, let's look at the situation from the perspective of the game master. At the start of the game, there is one red ball in one bag and two blue balls in the other. The game master randomly selects a bag and takes out one ball (without feeling around to see if there is another one). They hand this ball to you. What's the probability that this ball is red? — Pierre-Normand
My example proves that this doesn't follow where P is the credence function I ought to have after the rules of my game have been explained to me. Elga doesn't explain why it follows where P is the credence function I ought to have upon first awakening. — Michael
If so, this would suggest a highly unusual implication - that one could acquire knowledge about future events based solely on the fact that someone else would be asleep at the time of those events. — Pierre-Normand
Before being put to sleep, your credence in H was 1/2. I’ve just argued that when you are awakened on Monday, that credence ought to change to 1/3. This belief change is unusual. It is not the result of your receiving new information — you were already certain that you would be awakened on Monday.
...
Thus the Sleeping Beauty example provides a new variety of counterexample to Bas van Fraassen’s ‘Reflection Principle’ (1984:244, 1995:19), even an extremely qualified version of which entails the following:
"Any agent who is certain that she will tomorrow have credence x in proposition R (though she will neither receive new information nor suffer any cognitive mishaps in the intervening time) ought now to have credence x in R."
Since the setup of the experiment doesn't even require that anyone look at the result of the toss before Monday night, nothing changes if the toss is actually performed after Sleeping Beauty's awakening. In that case the credences expressed on Monday are about a future coin toss outcome rather than an already actualized one. — Pierre-Normand
That's exactly the implication of Elga's reasoning. — Michael
And I think it's even better not to consider days and just consider the number of times she is woken. So first she is woken up, then put to sleep, then a coin is tossed, and if tails she's woken again. Then we don't get distracted by arguing over whether her being asleep on Tuesday under Heads is part of the consideration. It doesn't make sense to say that she's asleep during her second waking if Heads.
With this reasoning I think Bayes' theorem is simple. The probability of being woken up is 1 and the probability of being woken up if Heads is 1. That she's woken up a second time if Tails is irrelevant. — Michael
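[Michael's Bayes step, written out under his own framing; the substantive question the thirder presses is whether "I am woken at all" is the right event to condition on.]

```python
from fractions import Fraction

# The conditioning event is simply "I am woken up (at all)", which has
# probability 1 under either outcome, so the prior survives unchanged.
p_heads = Fraction(1, 2)
p_woken_given_heads = Fraction(1)
p_woken = Fraction(1)

print(p_woken_given_heads * p_heads / p_woken)  # 1/2
```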
I think this is a better way to consider the issue. Then we don't talk about Heads & Monday or Tails & Monday. There is just a Monday interview and then possibly a Tuesday interview. It's not the case that two thirds of all interviews are Tails interviews; it's just the case that half of all experiments have Tuesday interviews. Which is why it's more rational to reason as if one is randomly selected from the set of possible participants rather than to reason as if one's interview is randomly selected from the set of possible interviews. — Michael
So ChatGPT is saying that P(Heads | Monday or Tuesday) = 1/2 is trivially true. Doesn't that just prove my point? — Michael
Rather, it pointed out that your calculation of P(Heads | Monday or Tuesday) = 1/2 simply restates the unconditional probability P(H) without taking into account Sleeping Beauty's epistemic situation. — Pierre-Normand
This belief change is unusual. It is not the result of your receiving new information — you were already certain that you would be awakened on Monday. (We may even suppose that you knew at the start of the experiment exactly what sensory experiences you would have upon being awakened on Monday.) Neither is this belief change the result of your suffering any cognitive mishaps during the intervening time — recall that the forgetting drug isn’t administered until well after you are first awakened. So what justifies it?
The answer is that you have gone from a situation in which you count your own temporal location as irrelevant to the truth of H, to one in which you count your own temporal location as relevant to the truth of H.
The argument you've put forward could be seen as suggesting that the vast body of literature debating the halfer, thirder, and double-halfer solutions has somehow missed the mark, treating a trivial problem as a complex one. This isn't an argument from authority. It's just something to ponder over. — Pierre-Normand
I've been reading along, and I have a meta question for you both, @Pierre-Normand and @Michael - why is it helpful to discuss variants which are allegedly the same as the original problem when you both don't seem to agree on what the sampling mechanism in the original problem is? — fdrake
It yields a natural interpretation because it enables the participant to reason about her epistemic situation without needing to import some weird metaphysical baggage about the ways in which she is being "dropped" into her current situation. — Pierre-Normand
I ended up in a state of confusion in the calculations, having a few contradictions in reasoning, which this paper elevates into framings of the experiment (including SB's setting within it) that have inconsistent sample spaces between the centred and non-centred accounts. This yields a "dissolution" of the paradox of the form: it's only a paradox when centred and non-centred worlds are equated. — fdrake
↪fdrake I did mention this. There are two ways to reason:
1. I should reason as if I am randomly selected from the set of possible participants
2. I should reason as if my interview is randomly selected from the set of possible interviews — Michael
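[The two rules, simulated side by side over the same set of runs; a minimal sketch, with nothing assumed beyond the two numbered options above.]

```python
import random

random.seed(0)
runs = [random.choice("HT") for _ in range(100_000)]

# Rule 1: reason as a randomly selected participant (one vote per run).
p_tails_participant = runs.count("T") / len(runs)

# Rule 2: reason as a randomly selected interview (one vote per interview;
# a Heads run has one interview, a Tails run has two).
interviews = [c for c in runs for _ in range(1 if c == "H" else 2)]
p_tails_interview = interviews.count("T") / len(interviews)

print(p_tails_participant)  # ≈ 1/2
print(p_tails_interview)    # ≈ 2/3
```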
My use of variants, such as that of tossing the coin 100 times, was to show that applying his reasoning leads to what I believe is an absurd conclusion (that even if the experiment is only done once it is rational to believe that P(100 Heads) = 2/3). — Michael
I don’t think any reasonable person would believe this. I certainly wouldn’t. — Michael