[...]But notice that as the probability of writing a note each time approaches 1 the "greater likelihood" of it having been tails gets smaller, approaching 1.[...] — Michael
Nothing is ruled out when woken or asked her credence that wasn’t already ruled out before the experiment started.
Even Elga understood this: — Michael
Pierre-Normand is saying that P(X) refers to the ratio of Xs to non-Xs in some given reference class.
I'm saying that P(X) refers to the degree to which I believe X to be true.
If P(X) refers to the degree to which I believe X to be true, and if I believe that A iff B, then P(A) = P(B). — Michael
In your scenario there are a bunch of flashes going off in a forest and me, a passer-by, randomly sees one of them. This is comparable to a sitter being assigned a room. — Michael
That's not what I said.
In the Sleeping Beauty problem I am guaranteed to wake up at least once if tails and guaranteed to wake up at least once if heads. The coin toss does not determine the likelihood of me waking up. It only determines the number of times I'm woken up. But the frequency is irrelevant. The only thing that matters is the guarantee. — Michael
This has nothing to do with credence.
I am asked to place two bets on a single coin toss. If the coin lands heads then only the first bet is counted. What is it rational to do? Obviously to bet on tails, even though my credence isn't that tails is more likely. The same principle holds in the Sleeping Beauty experiment, where I'm put to sleep and woken up either once or twice depending on a coin toss. That it's rational to bet on tails isn't because my credence is that tails is more likely; it's that I know that if it is tails I get to bet twice.
The same principle holds with the dice roll and the escape attempts. — Michael
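The two-bets point above can be checked with a quick simulation sketch (my own illustration, not from the thread): always betting $1 on tails per counted bet is profitable even though tails comes up only half the time.

```python
import random

random.seed(0)
trials = 100_000
profit = 0        # net winnings from always betting $1 on tails
tails_count = 0

for _ in range(trials):
    tails = random.random() < 0.5
    tails_count += tails
    # heads: only the first bet is counted; tails: both bets count
    if tails:
        profit += 2   # two winning $1 bets
    else:
        profit -= 1   # one losing $1 bet

print(tails_count / trials)   # ~0.5: the frequency of tails is unchanged
print(profit / trials)        # ~+0.5: betting tails has positive expected value
```

The positive expected value comes entirely from getting to bet twice on tails, not from tails being more probable.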
It just doesn't make sense to say that A iff B but that P(A) != P(B). And Bayes' theorem shows that P(A) = P(B).
That doesn't mean that the credence isn’t transitive. My premises "fail" to account for it because it's irrelevant.
A iff B
P(B) = 1/2
Therefore, P(A) = 1/2 — Michael
But haven't you lost Sleeping Beauty's other constraint, that the chances of encountering one Italian or two Tunisians are equal? — Srap Tasmaner
If you want a closer analogy with pedestrians, it's Tunisians walking around in pairs. If the chances of meeting an Italian or a pair of Tunisians are equal, then the chances of meeting *a* Tunisian are either nil, since you can't meet just one, or the same as meeting a pair.
Look at how hang-around times affect the pedestrian-encountering odds. Roughly, if you miss a short walker, you've missed him, but if you miss a long walker you get another chance. That's not how Sleeping Beauty works at all. There's no way to miss your first tails interview but still catch the second one. — Srap Tasmaner
This is an ambiguous claim. If there are half as many Tunisians but they go out four times as often, and are only out for 10 minutes whereas Italians are out for 20 minutes, then Tunisians are around equally often as measured by time out. The only way you could get this to work is if the argument is set out exactly as I have done above:
A1. there are twice as many Tunisian walkers as Italian walkers (out right now)
A2. if (right now) I meet a walker at random from a random distribution of all walkers (out right now) then I am twice as likely to meet a Tunisian walker
But there's nothing comparable to "if (right now) I meet a walker at random from a random distribution of all walkers (out right now)" that has as a consequent "then my interview is twice as likely to be a T-interview". — Michael
P1. If I am assigned at random either a T-interview set or a H-interview set then my interview set is equally likely to be a T-interview set
P2. I am assigned at random either a T-interview set or a H-interview set
P3. My interview is a T-interview iff my interview set is a T-interview set
C1. My interview is equally likely to be a T-interview
The premises are true and the conclusion follows, therefore the conclusion is true. — Michael
In the case of the meetings we have:
*P1) there are twice as many Tunisian walkers
*P2) if I meet a walker at random then I am twice as likely to meet a Tunisian walker (from *P1)
*P3) I meet a walker at random
*C) I am twice as likely to have met a Tunisian walker (from *P2 and *P3)
In Sleeping Beauty's case we have:
P1) there are twice as many tails interviews
P2) ?
P3) I am in an interview
C) I am twice as likely to be in a tails interview
What is your (P2) that allows you to derive (C)? It doesn't follow from (P1) and (P3) alone. — Michael
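The walker syllogism (*P1)-(*C) can be illustrated with a simulation sketch (my own, under the stated assumptions: two Tunisian walkers and one Italian walker are out, and I meet one uniformly at random):

```python
import random

random.seed(1)
# Assumption from *P1 and *P3: 2 Tunisian walkers and 1 Italian walker
# are out right now, and I meet one of them uniformly at random.
walkers = ["Tunisian", "Tunisian", "Italian"]
trials = 100_000
met_tunisian = sum(random.choice(walkers) == "Tunisian" for _ in range(trials))
print(met_tunisian / trials)   # ~2/3: twice as likely as meeting the Italian
```

The conclusion (*C) depends essentially on the random-selection premise (*P2)/(*P3); without a comparable premise, the count of interviews alone does not deliver (C).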
Your argument is that: if 1) there are twice as many T-awakenings and if 2) I randomly select one of the awakenings then 3) it is twice as likely to be a T-awakening.
This is correct. But the manner in which the experiment is conducted is such that 2) is false. — Michael
If we were to use the meetings example then:
1. A coin is tossed
2. If heads then 1 Italian walks the streets
3. If tails then 2 Tunisians walk the streets
4. Sleeping Beauty is sent out into the streets
What is the probability that she will meet a Tunisian? — Michael
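One way to read the question above (my own hedged sketch, assuming Sleeping Beauty meets one walker at random from whoever is actually out after the toss) is the following simulation, which gives 1/2: the coin toss, not the number of walkers out, decides which nationality she meets.

```python
import random

random.seed(2)
trials = 100_000
met_tunisian = 0
for _ in range(trials):
    heads = random.random() < 0.5
    # Assumption: heads -> only 1 Italian is out; tails -> only 2 Tunisians are out
    walkers = ["Italian"] if heads else ["Tunisian", "Tunisian"]
    met_tunisian += random.choice(walkers) == "Tunisian"
print(met_tunisian / trials)   # ~0.5: the coin, not the walker count, decides
```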
"there are twice as many Tunisian-meetings" isn't biconditional with "there are half as many Tunisians and Tunisians go out four times more often" and so A doesn't use circular reasoning. — Michael
This is just repeating the same thing in a different way. That there are twice as many T-awakenings just is that Sleeping Beauty is woken twice as often if tails — Michael
In this case:
1. there are twice as many Tunisian-meetings because Tunisian-meetings are twice as likely
2. Tunisian-meetings are twice as likely because there are half as many Tunisians and Tunisians go out four times more often
This makes sense.
So:
1. there are twice as many T-awakenings because T-awakenings are twice as likely
2. T-awakenings are twice as likely because ...
How do you finish 2? It's circular reasoning to finish it with "there are twice as many T-awakenings". — Michael
Starting here you argued that P(Heads) = 1/3.
So, what do you fill in here for the example of one person woken if heads, two if tails? — Michael
What wouldn't make sense is just to say that Tunisian-meetings are twice as likely because there are twice as many Tunisian-meetings. That is a non sequitur. — Michael
To make this comparable to the Sleeping Beauty problem: there are two Sleeping Beauties, one will be woken if heads, two will be woken if tails. When woken, what is their credence in heads? In such a situation the answer would be 1/3. Bayes' theorem for this is:
P(Heads|Awake) = P(Awake|Heads) ∗ P(Heads) / P(Awake)
= (1/2 ∗ 1/2) / (3/4)
= 1/3
This isn't comparable to the traditional problem. — Michael
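The two-beauties Bayes computation above can be verified exactly (my own check, using the values stated in the post: P(Awake|Heads) = 1/2 since only one of the two is woken on heads, P(Awake|Tails) = 1, P(Heads) = 1/2):

```python
from fractions import Fraction

p_heads = Fraction(1, 2)
p_awake_given_heads = Fraction(1, 2)   # on heads, only one of the two beauties wakes
p_awake_given_tails = Fraction(1, 1)   # on tails, both wake

# Law of total probability, then Bayes' theorem
p_awake = p_awake_given_heads * p_heads + p_awake_given_tails * (1 - p_heads)
p_heads_given_awake = p_awake_given_heads * p_heads / p_awake

print(p_heads_given_awake)   # 1/3
```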
Incidentally, what is your version of Bayes' theorem for this where P(Heads) = 1/3?
If you want to be very precise with the terminology, the Andromeda Paradox shows that some spacelike separated event in my present is some spacelike separated event in some other person's causal future even though that person is also a spacelike separated event in my present. I find that peculiar. — Michael
Some event (A1) in my (A0) future is spacelike separated from some event (B0) in someone else's (B1) past, even though this person is spacelike separated from my present. It might be impossible for me to interact with B1 (or for B1 to interact with A1), but Special Relativity suggests that A1 is inevitable, hence why this is an argument for a four-dimensional block universe, which may have implications for free will and truth. — Michael
the edge of the visible universe is receding from us faster than the speed of light. Although individual galaxies are much slower than light their apparent movement adds up radially away from us. Over billions of years we would see fewer galaxies spread further apart in ever darkening space. — magritte
Very interesting, I had always heard that all the whites were descendants of slave owners, and ipso facto, all racists. — Merkwurdichliebe
Which of these are you saying?
1. There are twice as many T-awakenings because tails is twice as likely
2. Tails is twice as likely because there are twice as many T-awakenings
I think both of these are false.
I think there are twice as many T-awakenings but that tails is equally likely.
The bet's positive expected value arises only because there are twice as many T-awakenings. — Michael
I think you're confusing two different things here. If the expected return of a lottery ticket is greater than its cost it can be rational to buy it, but it's still irrational to believe that it is more likely to win. And so it can be rational to assume that the coin landed tails but still be irrational to believe that tails is more likely. — Michael
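The lottery analogy above can be made concrete with a hypothetical example (the numbers here are my own illustration, not from the thread): a ticket whose expected return exceeds its cost while the probability of winning remains tiny.

```python
from fractions import Fraction

# Hypothetical lottery: 1-in-1000 chance of a $2000 prize, $1 ticket
p_win = Fraction(1, 1000)
prize = 2000
cost = 1

expected_return = p_win * prize   # $2 per ticket

print(expected_return > cost)     # True: rational to buy the ticket
print(p_win > Fraction(1, 2))     # False: still irrational to believe it will win
```

Rational action (buy, or bet on tails) and rational credence (probably lose, coin is fair) come apart in exactly the way the post describes.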
Would you not agree that this is a heads interview if and only if this is a heads experiment? If so then shouldn't one's credence that this is a heads interview equal one's credence that this is a heads experiment? — Michael
If so then the question is whether it is more rational for one's credence that this is a heads experiment to be 1/3 or for one's credence that this is a heads interview to be 1/2.
What the Andromeda Paradox implies is that the observed universe apparently shifts in its entirety towards a moving observer. Which means that in the forward moving direction many more of the most distant galaxies come into possible view and we lose some distant galaxies from possible view behind us. This is all pretty absurd, yet it is demonstrably true. — magritte
No. This has nothing to do with what one person sees. There are distant events happening in my present that I cannot see because they are too far away. According to special relativity some of these events happen in your future even though they are happening in my present. This is what I find peculiar. — Michael
Previously you've been saying that P(Heads) = 1/2. — Michael
I think Bayes’ theorem shows such thirder reasoning to be wrong.
P(Unique|Heads)=P(Heads|Unique)∗P(Unique)/P(Heads)
If P(Unique) = 1/3 then what do you put for the rest? — Michael
Similarly:
P(Heads|Monday)=P(Monday|Heads)∗P(Heads)/P(Monday)
If P(Monday) = 2/3 then what do you put for the rest?
This is a non sequitur. — Michael
What we can say is this:
P(Unique|Heads)=P(Heads|Unique)∗P(Unique)/P(Heads)
We know that P(Unique | Heads) = 1, P(Heads | Unique) = 1, and P(Heads) = 1/2. Therefore P(Unique) = 1/2.
Therefore P(Unique|W) = 1/2.
And if this experiment is the same as the traditional experiment then P(Heads|W) = 1/2.
It may still be that the answer to both is 1/3, but the reasoning for the second cannot use a prior probability of Heads and Tuesday = 1/4, because the reasoning for the first cannot use a prior probability of Heads and Second Waking = 1/4.
But if the answer to the first is 1/2 then the answer to the second is 1/2. — Michael
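The derivation of P(Unique) = 1/2 above is a simple rearrangement of Bayes' theorem, which can be checked exactly (my own check, using the three values stated in the post):

```python
from fractions import Fraction

p_unique_given_heads = Fraction(1)   # heads -> exactly one waking
p_heads_given_unique = Fraction(1)   # a unique waking -> heads
p_heads = Fraction(1, 2)

# Rearranging Bayes: P(Unique) = P(Unique|Heads) * P(Heads) / P(Heads|Unique)
p_unique = p_unique_given_heads * p_heads / p_heads_given_unique

print(p_unique)   # 1/2
```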
I'll throw in one last consideration. I posted a variation of the experiment here.
There are three beauties; Michael, Jane, and Jill. They are put to sleep and assigned a random number from {1, 2, 3}.
If the coin lands heads then 1 is woken on Monday. If the coin lands tails then 2 is woken on Monday and 3 is woken on Tuesday.
If Michael is woken then what is his credence that the coin landed heads?
Michael's credence before the experiment is P(1) = 1/3, so if woken he ought to continue to have a credence of P(1) = 1/3 since he gains no new relevant evidence if he wakes up during the experiment. — Michael
And given that if woken he is 1 iff the coin landed heads, he ought to have a credence of P(Heads) = 1/3.
Do we accept this?
If so then the question is whether or not Sleeping Beauty's credence in the original experiment should be greater than Michael's credence in this experiment. I think it should.
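The three-beauties variant can be simulated directly (my own sketch of the setup described above: numbers 1-3 assigned at random, number 1 woken on heads, numbers 2 and 3 woken on tails), and the long-run frequency of heads among Michael's wakings comes out at about 1/3:

```python
import random

random.seed(3)
trials = 200_000
michael_woken = 0
heads_when_woken = 0

for _ in range(trials):
    numbers = [1, 2, 3]
    random.shuffle(numbers)
    michael = numbers[0]              # Michael's randomly assigned number
    heads = random.random() < 0.5
    woken = {1} if heads else {2, 3}  # heads: 1 woken; tails: 2 and 3 woken
    if michael in woken:
        michael_woken += 1
        heads_when_woken += heads

print(heads_when_woken / michael_woken)   # ~1/3
```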
She also knows that the fact that she is awake eliminates (H,H) as a possibility. This is a classic example of "new information" that allows her to update the probabilities. With three (still equally likely) possibilities left, each has a posterior probability of 1/3. Since in only one is coin C1 currently showing Heads, the answer is 1/3. — JeffJo
But there are two sources of randomness in this example, the die and the coin.
Similarly for all analyses that treat SB's situation as describable with two coin flips. We only have one. — Srap Tasmaner
The halfer position comes back to individuation, as you suggested some time ago. Roughly, the claim is that "this interview" (or "this tails interview" etc) is not a proper result of the coin toss, and has no probability. What SB ought to be asking herself is "Is this my only interview or one of two?" The chances for each of those are by definition 1 in 2.
In the original scenario as I have described it, Ned reads the printout, but he only reads a part of it. And, importantly, he does not read a part of it where he is reading the printout -- that would be self-referentially problematic. Because there is no self-reference in the parts of the printout that Ned does read, there is nothing necessarily theoretically vicious about Ned reading some parts of the printout. — NotAristotle
I suppose we could stipulate that Ned has enough information about his immediate environment to make an accurate prediction about how he will act. It doesn't really concern us whether this sort of information can, as a matter of practicality, be acquired; the concern is whether in principle, if this information were acquired, could Ned act in opposition to it. And the answer to that seems to be yes. — NotAristotle
But historically it has. There are a multitude of multilateral treaties that prove even enemies will agree on all sorts of things. WTO, UN, Geneva and the Hague conventions, Vienna Convention on the laws of treaties, Vienna Convention on diplomatic relations, etc. — Benkei
But she is only asked a question once in the whole year. One of the wakings is randomly selected to be the one where she is asked the question. On this randomly selected waking, she is asked the question "what is the probability that this randomly selected waking shows a heads." The answer is 1/3, as per Problem A in my previous post. — PhilosophyRunner