They ruled that out before the experiment began. You might as well say that they can rule out it being the case that the coin landed heads and that this is day 3. — Michael
Yes. — Michael
The question is whether or not it is rational for the participant to reason this way. Given that the experiment doesn't work by randomly assigning an interview to them from the set of all interviews, I don't think it is. The experiment works by randomly assigning an interview set from the set of all interview sets (which is either the head set or the tail set), and so I believe it is more rational to reason in this way. — Michael
Why? — Michael
A more comparable example would be if there are four doors, two containing a goat and two containing a car. You pick a door (say 1) and then Monty opens one of the two doors that contain a goat (say 2). What is the probability that your chosen door contains a car? What is the probability that the car is behind door 3, or door 4? — Michael
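A quick Monte Carlo sketch of that four-door variant (a sketch only; the zero-based door indices and the assumption that Monty opens a uniformly random goat door from among the unchosen doors are my own illustrative choices):

```python
import random

# Four-door variant: 2 cars, 2 goats. Player picks door 0 ("door 1" in the
# post); Monty then opens a random goat door among the unchosen doors.
trials = 200_000
conditioned = 0      # trials where Monty opened index 1 ("door 2" in the post)
chosen_has_car = 0
door3_has_car = 0    # index 2 = "door 3" in the post

for _ in range(trials):
    doors = ["car", "car", "goat", "goat"]
    random.shuffle(doors)
    goat_doors = [i for i in (1, 2, 3) if doors[i] == "goat"]  # never empty
    opened = random.choice(goat_doors)
    if opened != 1:
        continue  # condition on Monty opening "door 2"
    conditioned += 1
    chosen_has_car += doors[0] == "car"
    door3_has_car += doors[2] == "car"

print("P(chosen door has car | Monty opened door 2) ~", chosen_has_car / conditioned)  # ~ 1/2
print("P(door 3 has car | Monty opened door 2) ~", door3_has_car / conditioned)        # ~ 3/4
```

If the sketch is right, the chosen door stays at 1/2 while doors 3 and 4 each rise to about 3/4.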
To apply this to the traditional problem: there are two participants; one will be woken on Monday, one on both Monday and Tuesday, determined by a coin toss.
I am one of the participants. What is the probability that I am woken twice?
Do I reason as if I am randomly selected from the set of all participants, and so that I am equally likely to be woken twice, or do I reason as if my interview is randomly selected from the set of all interviews, and so that I am more likely to be woken twice?
Halfers do the former, thirders the latter.
Which is more rational?
Given the way the experiment is conducted, I think the former (halfer) reasoning is the more rational. — Michael
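The two sampling models can be put side by side in a short simulation (a sketch; the "once"/"twice" tags are my own labels, not part of the problem statement):

```python
import random

# Two-participant variant: a coin decides which participant is woken once
# (Monday only) and which is woken both Monday and Tuesday.
trials = 200_000

# Halfer model: I am a randomly selected participant.
twice = sum(random.random() < 0.5 for _ in range(trials))
print("P(woken twice | random participant) ~", twice / trials)  # ~ 1/2

# Thirder model: my current interview is randomly selected from all
# interviews. One participant contributes 1 interview, the other 2.
interviews = ["once", "twice", "twice"]
hits = sum(random.choice(interviews) == "twice" for _ in range(trials))
print("P(woken twice | random interview) ~", hits / trials)  # ~ 2/3
```

The halfer's model gives about 1/2 and the thirder's about 2/3; which sampling process matches how the experiment actually puts me in my situation is the question at issue.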
I think this is basically a Monty Hall problem. I would say that the probability that I will be put to sleep is 1/2, that the probability that the person to my left will be put to sleep is 1/2, that the probability that the person to my right will be put to sleep is 1/2, and that the probability that one of the other two will be put to sleep is 1/2. — Michael
The double halfer approach does entail:
P(Heads & Monday) = P(Tails & Monday) = P(Tails & Tuesday) = 1/2 — Michael
Finding out that today is Monday just removes the blue circle. — Michael
This isn't comparable to the Sleeping Beauty problem because being a participant isn't guaranteed. That makes all the difference. — Michael
Regarding betting, expected values, and probability:
Rather than one person repeating the experiment 2^100 times, the experiment is done on 2^100 people, with each person betting that the coin will land heads 100 times in a row. 2^100 - 1 people lose and 1 person wins, with the winner's winnings exceeding the sum of the losers' losses. The expected value of betting that the coin will land heads 100 times in a row is greater than the cost, but the probability of winning is still 1/2^100.
Even though I could win big, it is more rational to believe that I will lose. — Michael
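Spelling out the expected-value arithmetic with illustrative stakes (the £1 stake and payout W are my own numbers, not from the thread): if each of the 2^100 participants stakes £1 and the single winner is paid £W with W > 2^100, then the expected value of entering is W × 1/2^100 - 1 > 0, even though the probability of winning remains 1/2^100.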
She's certainly able to update it on the basis of her knowledge that she might be awoken an even more absurdly large number of times as a consequence of this very unlikely event. I'm saying that it's irrational of her to.
The only rational approach, upon waking, is to recognize that it landing heads 100 times in a row is so unlikely that it almost certainly didn't, and that this is her first and only interview. — Michael
The difference is that the unconditional probability of being called up is very low, and so just being called up at all affects one's credence. In the Sleeping Beauty case (both the normal and my extreme version), she's guaranteed to be awoken either way. — Michael
There are actually two spaces. See here. — Michael
Then you have to say the same about my extreme example. Even when she knows that the experiment is only being run once, Sleeping Beauty's credence that the coin landed heads 100 times in a row is greater than her credence that it didn't.
And I think that's an absurd conclusion, showing that your reasoning is false. — Michael
I never buy betting arguments unless the random variables are set up! — fdrake
They describe completely different approaches to modelling the problem. That doesn't immediately tell us which one SB ought to model the situation with, or whether they're internally coherent. — fdrake
1. If the experiment is run once, what is Sleeping Beauty's credence that the coin landed heads?
2. If the experiment is repeated several times, what is the probability that a randomly selected interview from the set of all interviews followed the coin landing heads?
Thirders answer the second question, which I believe is the wrong answer to the first question. The experiment doesn't work by randomly selecting an interview from a set of interviews after repeating the experiment several times and then dropping Sleeping Beauty into it. — Michael
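A sketch of why the two questions come apart numerically (the "heads"/"tails" tags on interviews are my own labels):

```python
import random

# Each run of the experiment produces 1 interview on heads, 2 on tails.
runs = 200_000
heads_runs = 0
interviews = []  # one entry per interview, tagged with the run's coin result

for _ in range(runs):
    heads = random.random() < 0.5
    heads_runs += heads
    interviews.extend(["heads"] if heads else ["tails", "tails"])

print("fraction of runs where the coin landed heads ~", heads_runs / runs)  # ~ 1/2
frac = sum(i == "heads" for i in interviews) / len(interviews)
print("fraction of all interviews that followed heads ~", frac)  # ~ 1/3
```

About half the runs land heads, but only about a third of the interviews follow heads; the dispute is over which of these two frequencies Sleeping Beauty's credence should track.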
My reasoning is that P(Awake) = 0.5 given that there are 6 possible outcomes and I will be awake if one of these is true:
1. Heads and I am 1
2. Tails and I am 2
3. Tails and I am 3 — Michael
I don't think it makes sense to say P(Awake) = 3/4. P(Awake) is just the probability that she will be woken up, which is 1. — Michael
The question which has been eating me is "What is the probability of the day being Tuesday?". I think it's necessary to be able to answer that question for the thirder position. But I've not found a way of doing it yet that makes much sense. Though I'm sure there is a way! — fdrake
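For what it's worth, one way to run the numbers under the standard thirder assignment (my own working, not fdrake's): thirders set P(Mon & Heads) = P(Mon & Tails) = P(Tue & Tails) = 1/3 across the awakenings, which would give P(Tuesday) = 1/3 upon waking. The awkward cell is Tue & Heads, when she is asleep: it receives no probability at all, and whether that omission is coherent seems to be exactly the difficulty being raised.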
I think your numbers there are wrong. See this. — Michael
Also this makes no sense. You can't have a probability of 2. — Michael
Being able to bet twice if it lands tails, and so make more money, doesn’t make it more likely that it landed tails; it just means you get to bet twice.
You might as well just say: you can place a £1 bet on a coin toss. If you correctly guess heads you win £1; if you correctly guess tails you win £2.
Obviously it’s better to bet on tails, but not because tails is more probable. — Michael
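Making the arithmetic explicit: EV(heads) = 1/2 × £1 = £0.50 and EV(tails) = 1/2 × £2 = £1.00, so tails is the better bet even though P(Heads) = P(Tails) = 1/2.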
How do you condition on such a thing? What values do you place into Bayes' theorem?
P(Heads|Questioned)=P(Questioned|Heads)∗P(Heads) / P(Questioned) — Michael
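On the halfer reading the values seem straightforward: she is questioned whichever way the coin lands, so P(Questioned) = 1 and P(Questioned|Heads) = 1, giving P(Heads|Questioned) = 1 × 1/2 / 1 = 1/2. A thirder presumably has to read "Questioned" indexically (as "this particular awakening occurs") to get anything else out of the formula.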
The simplest "experiment" is just to imagine yourself in Sleeping Beauty's shoes. — Michael
There's the question of whether the "Bivariate Distribution Specification" reflects the envelope problem. It doesn't reflect the one on the Wiki, the reason being that the one on the Wiki generates the deviate (A, A/2) OR (A, 2A) exclusively when allocating the envelope, which isn't reflected in the agent's state of uncertainty surrounding the "other envelope" being (A/2, 2A).
It only resembles the one on the Wiki if you introduce the following extra deviate, another "flip" corresponding to the subject's state of uncertainty when pondering "the other envelope": — fdrake
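Whatever the intended extra deviate was, the Wiki-style generation itself is easy to simulate (a sketch; the uniform base draw for A is my own illustrative choice):

```python
import random

# Wiki-style allocation: draw a base amount A, form the pair (A, 2A)
# exclusively, then hand the player one of the two envelopes uniformly.
trials = 200_000
gain = 0.0

for _ in range(trials):
    A = random.uniform(1, 100)
    mine, other = random.sample((A, 2 * A), 2)
    gain += other - mine  # gain from always switching

print("average gain from always switching ~", gain / trials)  # ~ 0
```

Under this process the pair is (A, 2A) exclusively and always-switching gains nothing on average, in contrast with what the agent's (A/2, 2A) uncertainty about "the other envelope" would suggest.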
You can conclude either strategy is optimal if you can vary the odds (Bayes or nonconstant probability) or the loss function (not expected value). Like if you don't care about amounts under 20 pounds, the optimal strategy is switching. Thus, I'm only really interested in the version where "all results are equally likely", since that seems essential to the ambiguity to me. — fdrake
As I wrote, the prior probabilities wouldn't be assigned to the numbers (5, 10, 20); they'd be assigned to the pairs (5,10) and (10,20). If your prior probability that the gameshow host would award someone a tiny amount like 5 is much lower than your prior for the gigantic amount 20, you'd switch if you observed 10. But if there's no difference in prior probabilities between (5,10) and (10,20), you gain nothing from seeing the event ("my envelope is 10"), because that's equivalent to the disjunctive event (the pair is (5,10) or (10,20)) and each constituent event is equally likely. — fdrake
Edit: then you've got to calculate the expectation of switching within the case (5,10) or (10,20). If you specify your envelope is 10 within a case... that makes the other envelope nonrandom. If you specify it as 10 here and think that specification impacts which case you're in (informing whether you're in (5,10) or (10,20)), that's close to a category error. Specifically, that error tells you the other envelope could have been assigned 5 or 20, even though you're conditioning upon 10 within an already fixed sub-case: (5,10) or (10,20).
The conflation in the edit, I believe, is where the paradox arises from. Natural language phrasing doesn't distinguish between conditioning "at the start" (your conditioning influencing the assignment of the pair (5,10) or (10,20), which it doesn't) and "at the end" (your conditioning influencing which of (5,10) you have, or which of (10,20) you have, which is totally deterministic once the case is fixed).
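A quick numerical check of that claim, under equal priors on the two pairs (a sketch using the thread's (5,10)/(10,20) values):

```python
import random

# The pair is (5, 10) or (10, 20) with equal prior probability;
# the player opens one envelope of the pair uniformly at random.
trials = 400_000
saw_10 = 0
low_pair_given_10 = 0
overall_gain = 0.0

for _ in range(trials):
    pair = random.choice([(5, 10), (10, 20)])
    mine, other = random.sample(pair, 2)
    overall_gain += other - mine
    if mine == 10:
        saw_10 += 1
        low_pair_given_10 += pair == (5, 10)

print("P(pair is (5,10) | I see 10) ~", low_pair_given_10 / saw_10)  # ~ 1/2, same as the prior
print("average gain from always switching ~", overall_gain / trials)  # ~ 0
```

Seeing 10 leaves the posterior over the two pairs at 1/2 each, the same as the prior, and always switching gains nothing on average.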
[...] This battle you define is therefore one over authority, meaning it is a political battle between the progressives and the orthodox (lower case), but it is not, as you claim, just a foolish error by the transsexuals in not appreciating the old rule that sex and gender correlate. They wish to overthrow that old rule. — Hanover
Seems to me that one of the big players who's completely failed to catch this train is Amazon. I've been using Alexa devices for about eighteen months, and they're pretty lame - glorified alarm clocks, as someone said. — Wayfarer
Nevertheless, if they observe n=10 in the first envelope, I still think there's a problem with assigning a probability distribution to the values (5, 20) in the other envelope. This is because that stipulates three possible values across the envelopes combined, (5, 10, 20), whereas the agent knows only two are possible. [...] — fdrake
And given that the larger number is twice the value of the smaller number, the probability that the other side is half the value is 1/2 and the probability that the other side is twice the value is 1/2.
Which step in this line of reasoning do you disagree with? — Michael
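Spelling out where that reasoning leads: if the visible number is n, it licenses E[other side] = 1/2 × (n/2) + 1/2 × (2n) = 1.25n, which is precisely the paradoxical "always switch" expectation in dispute.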
Thanks! Actually as far as I know, it’s still ChatGPT - I’m signing in via OpenAI although whether the engine is the same as GPT-4, I know not. Also appreciate the ref to Haugeland. — Wayfarer
It might by chance find a correct reference. But equally it might make up a new reference. — Banno
A Bayesian analysis reveals that the culprit of the paradox is the assignment of a non-informative prior to the distribution that generates the envelopes' contents. — sime
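As I understand the usual way of cashing that out: a "non-informative" prior that makes every pair (x, 2x) equally likely is improper, and only such a prior can make E[other | mine = x] = 1.25x hold for every observed x; under any proper prior, some observed amounts must favour being in the larger pair and others the smaller.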
Maybe Heidegger got it from there. — Jamal
Imagine feeling obliged to defend this degenerate. — Mikie