She's certainly able to update it on the basis of her knowledge that she might be awoken an even more absurdly large number of times as a consequence of this very unlikely event. I'm saying that it's irrational of her to do so.
The only rational approach, upon waking, is to recognize that the coin landing heads 100 times in a row is so unlikely that it almost certainly didn't, and that this is her first and only interview. — Michael
The difference is that the unconditional probability of being called up is very low, and so just being called up at all affects one's credence. In the Sleeping Beauty case (both the normal and my extreme version), she's guaranteed to be awoken either way. — Michael
There are actually two spaces. See here. — Michael
Then you have to say the same about my extreme example. Even when she knows that the experiment is only being run once, Sleeping Beauty's credence that the coin landed heads 100 times in a row is greater than her credence that it didn't.
And I think that's an absurd conclusion, showing that your reasoning is false. — Michael
I never buy betting arguments unless the random variables are set up! — fdrake
They describe completely different approaches to modelling the problem. That doesn't immediately tell us which one SB ought to use to model the situation, or whether they're internally coherent. — fdrake
1. If the experiment is run once, what is Sleeping Beauty's credence that the coin landed heads?
2. If the experiment is repeated several times, what is the probability that a randomly selected interview from the set of all interviews followed the coin landing heads?
Thirders answer the second question, which I believe is the wrong answer to the first question. The experiment doesn't work by randomly selecting an interview from a set of interviews after repeating the experiment several times and then dropping Sleeping Beauty into it. — Michael
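A quick way to see the two questions coming apart is to simulate the standard setup (heads: one interview; tails: two). A minimal sketch, with the run count an arbitrary choice:

```python
# Contrast the per-run frequency of heads (question 1) with the
# per-interview frequency of heads (question 2).
import random

runs = 100_000
heads_runs = 0
interviews = 0
heads_interviews = 0

for _ in range(runs):
    heads = random.random() < 0.5
    n_interviews = 1 if heads else 2   # heads: one waking; tails: two
    interviews += n_interviews
    if heads:
        heads_runs += 1
        heads_interviews += n_interviews

print("P(heads) per run:      ", heads_runs / runs)              # ~1/2
print("P(heads) per interview:", heads_interviews / interviews)  # ~1/3
```

The per-run frequency answers the first question and comes out at 1/2; the per-interview frequency answers the second and comes out at 1/3.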
My reasoning is that P(Awake) = 0.5 given that there are 6 possible outcomes and I will be awake if one of these is true:
1. Heads and I am 1
2. Tails and I am 2
3. Tails and I am 3 — Michael
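Michael's six-outcome count can be checked by brute enumeration; a small sketch, assuming the three-participant variant he describes:

```python
# Enumerate the six outcomes: a coin flip crossed with being
# participant 1, 2, or 3; awake in exactly three of them.
from itertools import product

outcomes = list(product(["Heads", "Tails"], [1, 2, 3]))
awake = [(c, i) for (c, i) in outcomes
         if (c == "Heads" and i == 1) or (c == "Tails" and i in (2, 3))]

print(len(awake) / len(outcomes))  # 3/6 = 0.5, matching P(Awake) = 0.5
```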
I don't think it makes sense to say P(Awake) = 3/4. P(Awake) is just the probability that she will be woken up, which is 1. — Michael
The question which has been eating me is "What is the probability of the day being Tuesday?". I think it's necessary to be able to answer that question for the thirder position. But I've not found a way of doing it yet that makes much sense. Though I'm sure there is a way! — fdrake
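One candidate thirder answer, sketched under the standard schedule (heads: Monday interview only; tails: Monday and Tuesday): treat every interview across repeated runs as equally likely and count the Tuesday ones. Whether that per-interview model is the right one is exactly what's in dispute.

```python
# Fraction of interviews that fall on Tuesday, on the per-interview model.
import random

tuesday = total = 0
for _ in range(100_000):
    if random.random() < 0.5:   # heads
        total += 1              # Monday only
    else:                       # tails
        total += 2              # Monday and Tuesday
        tuesday += 1

print(tuesday / total)  # ~1/3: P(Tuesday) on the per-interview model
```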
I think your numbers there are wrong. See this. — Michael
Also this makes no sense. You can't have a probability of 2. — Michael
Being able to bet twice if it lands tails, and so make more money, doesn’t make it more likely that it landed tails; it just means you get to bet twice.
You might as well just say: you can place a £1 bet on a coin toss. If you correctly guess heads you win £1; if you correctly guess tails you win £2.
Obviously it’s better to bet on tails, but not because tails is more probable. — Michael
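The arithmetic behind that point, as a sketch using the payouts from Michael's example:

```python
# Expected value of the £1 coin-toss side bet: tails pays more per win,
# but the coin is still fair.
p_heads = p_tails = 0.5
stake = 1

ev_heads = p_heads * 1 - p_tails * stake   # win £1 on heads
ev_tails = p_tails * 2 - p_heads * stake   # win £2 on tails

print(ev_heads, ev_tails)  # 0.0 vs 0.5: bet tails, though P(tails) is still 0.5
```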
How do you condition on such a thing? What values do you place into Bayes' theorem?
P(Heads|Questioned) = P(Questioned|Heads) × P(Heads) / P(Questioned) — Michael
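One way to fill in those values, on the halfer reading where being questioned is certain in every run, so conditioning on it changes nothing:

```python
# Halfer's values: she is questioned with certainty whichever way the
# coin lands, so the posterior equals the prior.
p_heads = 0.5
p_questioned_given_heads = 1.0
p_questioned = 1.0  # guaranteed to be questioned in every run

p_heads_given_questioned = p_questioned_given_heads * p_heads / p_questioned
print(p_heads_given_questioned)  # 0.5
```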
The simplest "experiment" is just to imagine yourself in Sleeping Beauty's shoes. — Michael
There's the question of whether the "Bivariate Distribution Specification" reflects the envelope problem. It doesn't reflect the one on the Wiki, because the one on the Wiki generates the deviate (A, A/2) OR (A, 2A) exclusively when allocating the envelopes, which isn't reflected in the agent's state of uncertainty surrounding the "other envelope" being (A/2, 2A).
It only resembles the one on the Wiki if you introduce the following extra deviate, another "flip" corresponding to the subject's state of uncertainty when pondering "the other envelope": — fdrake
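The two generative models fdrake is distinguishing can be put side by side; a sketch with a hypothetical smaller amount of 10:

```python
# Contrast the Wiki's generative model with the "bivariate" one that
# adds an extra flip for the agent's uncertainty.
import random

def wiki_model(x=10):
    """Pair is fixed as (x, 2x); you are handed one of the two at random."""
    mine = random.choice([x, 2 * x])
    other = 3 * x - mine
    return mine, other

def bivariate_model(a=10):
    """Your envelope is a; an extra 'flip' makes the other a/2 or 2a."""
    other = random.choice([a / 2, 2 * a])
    return a, other

totals_wiki = {sum(wiki_model()) for _ in range(1000)}
totals_biv = {sum(bivariate_model()) for _ in range(1000)}
print(totals_wiki)  # always {30}: the pair's total is fixed
print(totals_biv)   # {15.0, 30}: the extra flip invents a third value
```

The Wiki model's total is fixed at 30, while the extra flip invents a third possible value, which is the discrepancy flagged below about (5, 10, 20).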
You can conclude either strategy is optimal if you can vary the odds (Bayes or nonconstant probability) or the loss function (not expected value). Like if you don't care about amounts under 20 pounds, the optimal strategy is switching. Thus, I'm only really interested in the version where "all results are equally likely", since that seems essential to the ambiguity to me. — fdrake
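A sketch of the loss-function variation, using the thread's (5, 10, 20) numbers and a hypothetical "amounts under £20 are worthless" utility:

```python
# If amounts under £20 have zero utility, switching from an observed £10
# is strictly better, even with equal prior weight on the two pairs.
def utility(x):
    return x if x >= 20 else 0

# Holding 10: the other envelope is 5 in case (5,10), 20 in case (10,20).
keep = utility(10)
switch = 0.5 * utility(5) + 0.5 * utility(20)
print(keep, switch)  # 0 vs 10.0: switching is optimal under this utility
```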
As I wrote, the prior probabilities wouldn't be assigned to the numbers (5,10,20), they'd be assigned to the pairs (5,10) and (10,20). If your prior probability that the gameshow host would award someone a tiny amount like 5 is much lower than the gigantic amount 20, you'd switch if you observed 10. But if there's no difference in prior probabilities between (5,10) and (10,20), you gain nothing from seeing the event ("my envelope is 10"), because that's equivalent to the disjunctive event (the pair is (5,10) or (10,20)) and each constituent event is equally likely — fdrake
Edit: then you've got to calculate the expectation of switching within the case (5,10) or (10,20). If you specify your envelope is 10 within a case... that makes the other envelope nonrandom. If you specify it as 10 here and think that specification impacts which case you're in (informing whether you're in (5,10) or (10,20)), that's close to a category error. Specifically, that error tells you the other envelope could have been assigned 5 or 20, even though you're conditioning upon 10 within an already fixed sub-case: (5,10) or (10,20).
The conflation in the edit, I believe, is where the paradox arises from. Natural language phrasing doesn't distinguish between conditioning "at the start" (your conditioning influencing the assignment of the pair (5,10) or (10,20), which it doesn't) and "at the end" (your conditioning influencing which of (5,10) you have, or which of (10,20) you have, which is totally deterministic once you've determined the case you're in).
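Both claims, that always-switching gains nothing overall and that seeing 10 leaves the pair posterior at 50/50, can be checked by simulation; a minimal sketch with equal priors on the pairs:

```python
# Simulate the two-pair setup with equal priors on (5,10) and (10,20).
import random

def trial():
    pair = random.choice([(5, 10), (10, 20)])  # equal prior on the pairs
    mine = random.choice(pair)                  # handed one envelope at random
    return mine, sum(pair) - mine

n = 200_000
results = [trial() for _ in range(n)]
print(round(sum(o - m for m, o in results) / n, 2))  # ~0.0: no overall gain

seen_10 = [o for m, o in results if m == 10]
print(round(sum(o == 5 for o in seen_10) / len(seen_10), 2))  # ~0.5: pair still 50/50
```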
[...]This battle you define is therefore one over authority, meaning it is a political battle between the progressives and the orthodox (lower case), but it is not, as you claim, just a foolish error by the transsexuals in not appreciating the old rule that sex and gender correlate. They wish to overthrow that old rule — Hanover
Seems to me that one of the big players who's completely failed to catch this train is Amazon. I’ve been using Alexa devices for about eighteen months, and they’re pretty lame - glorified alarm clocks, as someone said. — Wayfarer
Nevertheless, if they observe n=10 in the first envelope, I still think there's a problem with assigning a probability distribution on the values (5, 20) in the other envelope. This is because it stipulates three possible values across the two envelopes, (5, 10, 20), whereas the agent knows only two are possible. [...] — fdrake
And given that the larger number is twice the value of the smaller number, the probability that the other side is half the value is 1/2 and the probability that the other side is twice the value is 1/2.
Which step in this line of reasoning do you disagree with? — Michael
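For concreteness, the expected-value arithmetic those two 1/2 steps license, with a hypothetical observed amount of 10 (the dispute upthread is over whether those are the right probabilities):

```python
# The naive switching expectation: half the value with probability 1/2,
# double with probability 1/2.
n = 10  # hypothetical observed amount
ev_switch = 0.5 * (n / 2) + 0.5 * (2 * n)
print(ev_switch, ev_switch / n)  # 12.5, 1.25: an apparent 25% gain, for any n
```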
Thanks! Actually as far as I know, it’s still ChatGPT - I’m signing in via OpenAI although whether the engine is the same as GPT-4, I know not. Also appreciate the ref to Haugeland. — Wayfarer
It might by chance find a correct reference. But equally, it might make up a new reference. — Banno
A Bayesian analysis reveals that the culprit of the paradox is the assignment of a non-informative prior to the distribution that generates the envelopes contents. — sime
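That diagnosis can be illustrated with a proper prior; a sketch using a hypothetical geometric prior over smaller amounts 1, 2, 4, 8, ...:

```python
# With a proper prior on the smaller amount, the posterior that your
# observed envelope is the smaller one depends on what you see.
prior = {2 ** k: 0.5 ** (k + 1) for k in range(30)}  # hypothetical geometric prior

def p_mine_is_smaller(a):
    """Posterior that observed amount a is the smaller of the pair."""
    num = prior.get(a, 0.0)            # pair is (a, 2a)
    den = num + prior.get(a / 2, 0.0)  # or pair is (a/2, a)
    return num / den if den else 0.0

for a in (1, 4, 16):
    print(a, round(p_mine_is_smaller(a), 3))  # 1.0, 0.333, 0.333 — never a flat 0.5
```

With any proper prior, the posterior varies with the amount observed, so the flat "half or double with probability 1/2 each" step fails.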
Maybe Heidegger got it from there. — Jamal
Imagine feeling obliged to defend this degenerate. — Mikie
My point here is that this is not some sort of performance/act - this is genuine. — EricH
So trans folks can stand on the universal stage, with the rest of us, as fellow actors of equal status and value. — universeness
Oh come on? Do you really think trans folks would go through the absolute trauma of surgery-based transition as an 'act'... of sorts? — universeness
But much the same architecture. It's still just picking the next word from a list of expected words. — Banno
There are some things I don't get. I ran some jokes by it, and it consistently ranked the trash jokes as bad, and the hilarious jokes as hilarious. And it would give a good analysis of why the joke worked (or didn't). How can a random process produce those results? — RogueAI
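A toy sketch of how weighted sampling can look consistent rather than arbitrary; all the words and probabilities here are made up for illustration:

```python
# Next-word choice is random, but weighted by learned probabilities,
# so a strongly favoured continuation appears reliably.
import random

next_word_probs = {"hilarious": 0.72, "funny": 0.20, "bad": 0.05, "trash": 0.03}

words = list(next_word_probs)
weights = list(next_word_probs.values())
print(random.choices(words, weights=weights, k=5))  # mostly "hilarious"
```

The process is random only in the sense that it samples from a learned distribution; if that distribution strongly favours one continuation, the output is reliably that continuation.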
I tested the Bing AI in the following way: I have a low-priority mathematics page on Wikipedia, so I asked Bing what is known of this particular subject? Now, there are a smattering of papers on the internet on this subject; what Bing supplied was the first introductory paragraphs of my webpage, word for word. That's all. — jgill
I don't see any consistency between these two statements. If following the laws of nature is a requirement for determinism, and "stochastic" refers to actions describable by probability rather than law, then it would definitely be true that the stochasticity of quantum indeterminacies supports the rejection of determinism. — Metaphysician Undercover
But until then, what do you make of unconscious determinants of free decisions in the human brain? — Michael
Does determinism allow for stochastic quantum mechanics? — Michael
Until anyone can show that an action is not self-generated — NOS4A2
Is this a difference that contradicts determinism?
If someone asks me how I beat some opponent at some computer game, I can describe it in such terms as predicting their moves, using attacks that they’re weak against, etc., or I can describe it as pressing the right buttons at the right times. Your approach to free will seems similar to the first kind of explanation and the determinist’s approach seems similar to the second kind of explanation. But they’re not at odds. They’re just different ways of talking.
So I would think that if you accept the underlying determinism then your position is compatibilist, not libertarian. — Michael
I know little about computers, but on the face of it seems to me that, even if the CPU maps inputs to outputs in the same way whatever program it is running, the actual inputs and outputs themselves are not the same. — Janus
