I like this explanation. And I thought of a possibly better way to explain how "unknown" is used in the TEP, by analogy:
In the case of the two envelopes paradox, the case is similar. The player never has the opportunity to choose which branch to take at the first node. So, the player must treat this bifurcation as occurring within a black box, as it were, and assign each branch some probability. But, unlike my example with two equally biased dice, those probabilities are unknown. — Pierre-Normand
The puzzling part is about our understanding the mathematics, not how we use it to solve the problem. But that still makes it a probability problem. People who know only a little have difficulty understanding why the simple 5v/4 answer isn't right, and people who know more tend to over-think it, trying to derive more information from it than is there.
My current, and I think "final", position is that this isn't really a probability puzzle at all. Here are my arguments for my view and against yours. — Srap Tasmaner
That's because the higher/lower question is the only one we can assign a probability to. There is only one kind of probability that you can place on values. That's "valid," meaning there is a set of possibilities, and their probabilities sum to 1. Any other kind - frequentist, bayesian, subjective, objective, informative, non-informative, or any other adjective you can think of - is outside the scope of the problem.
1. The only probability anyone has ever managed to assign any value to is the probability of choosing the larger (or the smaller) envelope -- and even that is only the simplest noninformative prior.
Correct.
2. All other probabilities used in solutions such as yours are introduced only to immediately say that we do not and cannot know what their values are.
4. Much less the PDF on that space.
Careful. "PDF" usually refers to a "Probability Density Function," which means the sample space is continuous. We have a distribution for a discrete sample space.
The only thing we can say about it (or the sample space) is that it still must be valid. A valid sample space has a maximum value. A valid distribution implies there are values in the sample space where there is an expected loss.
This is a red herring. It only has meaning if we know the distribution, and we don't. So it has no meaning.
5. By the time the player chooses, a value for X has been determined.
I assume you mean the amounts the benefactor puts in the envelopes (this isn't presented as a game show). That's why I usually speak generically about values. That can apply to the minimum value (which is usually what x refers to in this thread), the difference d (which turns out to be the same thing as x but can be more intuitive), the value v in your envelope (which can be x or 2x), and the total t (so x = t/3).
6. We might also describe that as the host's choice of a value for X.
Then I'm not sure what you mean - it appears in some of mine. If you are given v, and so have two x's, you have to consider the relative probabilities of those two x's.
7. That choice is the very first step of the game and yet it appears nowhere in the probabilistic solutions, which in effect treat X as a function of the player's choices and what the player observes.
Please, get "updating" out of your mind here.
10. The probabilistic model can safely be abandoned once it's determined that there will never be any evidence upon which to base a prior much less update one.
The point is that I'm saying both. You need to understand the various kinds of "variables."
what is the advantage of saying that the variable X takes a value from an unknown and unknowable sample space, with an unknown and unknowable PDF, rather than saying X is not a variable but simply an unknown?
In the now canonical example of Michael's £10, he could say either:
(a) the other envelope must contain £20 or £5, but he doesn't know which; or
(b) there's a "50:50 chance" the other envelope contains £20 or £5, and thus the other envelope is worth £12.50.
I say (a) is true and (b) is false.
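For reference, the arithmetic behind (b) is just the naive 50:50 expectation applied to the two candidate amounts:

E(other) = 0.5 × £20 + 0.5 × £5 = £12.50

The whole dispute is over whether those two 0.5s are justified once the £10 has been observed.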
The fact that you use an expectation formula.
What compels us to say that it is probabilistic ...
And it is even more obvious you want to use statistics anywhere you can, no matter how inappropriate. The lexicon of both probability and statistics is the same, since statistics uses probability. It applies it to the experimental data you keep talking about, and of which we have none.
I already figured out that your field was not statistics, — Jeremiah
You can't just enumerate a set of cases, and claim each is equally likely. If you could, there would be a 50% chance of winning, or losing, the lottery. — JeffJo
Since my sample space was a perfectly valid sample space, and I never mentioned events at all, it demonstrates your "very bad understanding" of those terms. It was a very bad choice of a sample space for this problem, for the reasons I was trying to point out and stated quite clearly. But you apparently didn't read that.
That is a very bad understanding of what a sample space and an event is. — Jeremiah
Actually, I was applying it. Improperly, with the intent to demonstrate why its restriction is important:
You are not applying your Principle of Indifference there,
The Principle of Indifference places a restriction on the possibilities that it applies to: they have to be indistinguishable except for their names. You can't just enumerate a set of cases, and claim each is equally likely. — JeffJo
I didn't say we should (A) use a probability (B) density (C) curve. I stated correctly that there (A) must be a probability (B) distribution for the (C) set of possible values, and that any expectation formula must take this distribution into account. Even if you don't know it.
Furthermore, it makes no sense to use a probability density curve on this problem, — Jeremiah
The other way to phrase the difference is that my solution uses the same value for the chosen envelope — Michael
And you are ignoring my comparison of two different ways we can know something about the values.
It doesn’t make sense to consider those situations where the chosen envelope doesn’t contain 10.
True. But when that variable is a random variable, we must consider the probability that the variable has the value we are using, at each point where we use one.
We should treat what we know as a constant and what we don’t know as a variable.
Statistics uses repeated observations of outcomes from a defined sample space, to make inference about the probability space associated with that sample space. — JeffJo
I just said that. That is exactly what I said. — Jeremiah
You may well have. If you did, I accepted it as correct and have forgotten it. If you want to debate what it means, and why that isn't what you said above, "refer it to me over" again. Whatever that means.
I already posted the definition of an event from one of my books, which I will refer to you over. I will always go with my training over you.
Maybe true in some cases. But "event" is not one of them. Look it up again, and compare it to what I said.
One thing I was taught in my first stats class was that the lexicon was not standardized.
Statistics uses repeated observations of outcomes from a defined sample space, to make inference about the probability space associated with that sample space.
Statistics is a data science and uses repeated random events to make inference about an unknown distribution. We don't have repeated random events, we have one event. Seems like a clear divide to me. You can't learn much of anything about an unknown distribution with just one event. — Jeremiah
When all you consider is the relative chances of "low" compared to "high," this is true. When you also consider a value v, you need to use the relative chances of "v is the lower value" compared to "v is the higher value." This requires you to know the distribution of possible values in the envelopes. Since the OP doesn't provide this information, you can't use your solution. And no matter how strongly you feel that there must be a way to get around this, you can't.
I'm not really sure that this addresses my main question. ... There's a 50% chance of picking the lower-value envelope, and so after having picked an envelope it's in an "unknown state" that has a 50% chance of being either the lower- or the higher-value envelope? — Michael
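To spell out the point in the reply above (using v for the amount seen), the expectation for the other envelope has to be written with the conditional probabilities left in:

E(other ∣ V=v) = 2v·P(v is the lower value ∣ V=v) + (v/2)·P(v is the higher value ∣ V=v)

Those two conditional probabilities depend on how likely the pair (v, 2v) is compared to the pair (v/2, v), which is exactly the distribution the OP never supplies.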
Why would you think that?
The objective Bayesian will say that an already-flipped coin has a 50% probability of being heads, even if it's actually tails, and that my £10 envelope has a 50% probability of being the smaller amount, even if it's actually the larger amount, whereas the frequentist would deny both of these (as far as I'm aware). — Michael
The solution has always been what I posted on the first page of this thread in post number 6, which has also been my stance this entire thread. A statistical solution has never been a viable option, which has also been my stance this entire thread. The truth is this problem has always been really simple to solve; the hard part is untangling all the speculations and assumptions that have confounded it. — Jeremiah
You could have X or 2X. If you have X and you switch then you get 2X but lose X so you gain X; so you get a +1 X. However, if you have 2X and switch then you gain X and lose 2X; so you get a -1 X. — Jeremiah
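Completing that bookkeeping with the 50:50 chance of having picked either envelope gives the standard resolution, measured against the fixed (but unknown) X of the pair rather than against the amount seen:

E(gain from switching) = 0.5 × (+X) + 0.5 × (-X) = 0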
The purpose is to show why the formula (v/2)/2 + (2v)/2 = 5v/4 is wrong. The approach behind the formulation is indeed correct; it just makes a mistake that doesn't show up in the formula. And can't, if you accept the assertion "it is pointless to consider the conditional probability."
The limit does not need to be specified, as the envelopes will never step outside the limit. Mathematically you cannot determine if you have the smaller amounts or larger amounts as you can never rule out which case you are in. You can speculate on such things, but you can't quantify them. It is pointless to consider the conditional probability since both cases are subjectively equal in probability, it would still boil down to a coin flip. You can do it for completeness, but it really makes no difference. — Jeremiah
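As a concrete check on the claim that the 5v/4 formula is wrong, here is a minimal simulation sketch. The prior below (the smaller amount drawn uniformly from 1 to 100) is an assumption of my own; the OP specifies none, and the particular range is purely illustrative. The point it shows: always switching earns the same average as never switching, not the 25% premium that (v/2)/2 + (2v)/2 = 5v/4 would imply.

```python
import random

def play(n_trials=200_000, prior=range(1, 101)):
    """Fill envelope pairs from an assumed prior, pick one at random,
    and compare 'always keep' with 'always switch'."""
    keep_total = 0.0
    switch_total = 0.0
    for _ in range(n_trials):
        x = random.choice(prior)        # smaller amount, drawn from the assumed prior
        envelopes = [x, 2 * x]
        chosen = random.randrange(2)    # player picks an envelope at random
        keep_total += envelopes[chosen]
        switch_total += envelopes[1 - chosen]
    return keep_total / n_trials, switch_total / n_trials

if __name__ == "__main__":
    keep, switch = play()
    print(f"average if you keep:   {keep:.2f}")
    print(f"average if you switch: {switch:.2f}")  # roughly equal, not 25% higher
```

The conditional question (what to do after seeing a particular amount) is the part this does not settle; that is where the unknown distribution matters.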
How could anyone who has read this thread have possibly concluded that I ever made this conclusion? When all I said was that any use of statistics - which you did advocate repeatedly - was inappropriate for a probability problem or a thought problem?
How could anyone who has read this thread have possibly concluded I was ever advocating for a statistical solution. I have been very clear that a statistical approach is incorrect. — Jeremiah
So that "observational data set" is the "experimental data set," isn't it? With each sample being an instance of the experiment "how does a single member of population X behave in circumstances Y?""Statistics is used on an experimental data set from repeated trials." — JeffJo
Yes, and it is also used on observational data sets to make generalized inferences about a population. — Jeremiah
So read the statement in its context, where I said exactly that. You are removing it from its context to make it look bad:
Then go ahead and switch envelopes in the OP. — JeffJo
There is not enough information to calculate expected gain. — Jeremiah
But if you don't care about chances, only the possibility of gain? ... Then go ahead and switch envelopes in the OP. Just don't expect a gain. That can't be determined from the information.
So read the statement in its context, where I said exactly that. You are removing it from its context to make it look bad:
it gives you a strategy that works on your assumed prior — JeffJo
Assuming your prior is correct, that is. — Jeremiah
Even in that more general case, the Bayesian approach can give a switching strategy with a positive expected net gain. — andrewk
No, it gives you a strategy that works on your assumed prior, not necessarily on reality.
Yes, as I have said repeatedly. And if you read the entire thread, you will see that this has been my point all along. Even though you don't know what the distribution is, you still have to treat whatever value you are using as a random variable with a probability distribution, and not simply "as an unknown." Which is what you have advocated.
The point is that there must be a prior distribution for how the envelopes were filled — JeffJo
True, but you will have no knowledge of what that may be. — Jeremiah
And what people did I criticize this way? I simply pointed out that this problem is controversial because of an error that is routinely made everywhere the controversy exists.
And yet you didn't read the posts, did you? Not then, maybe you read a few more after I pushed you. I may be an ass, but at least I read a thread before criticizing people. — Jeremiah
The simple truth is that you have been misinterpreting me since you joined the conversation. I saw it from your first response to me. I looked at your post and realized you were making false assumptions based on viewing posts out of context of the thread. I knew if I engaged you on that level the conversation would consist of me untangling all of your misconceptions. — Jeremiah
You can't just enumerate a set of cases, and claim each is equally likely. If you could, there would be a 50% chance of winning, or losing, the lottery. — JeffJo
That is a very bad understanding of what a sample space and an event is. You are not applying your Principle of Indifference there — Jeremiah
Ya, great math there. — Jeremiah
Yes, my point was that the lottery example is a very bad description of a sample space. — JeffJo
It makes no sense to use a probability density curve[1] on this problem, considering X would only be selected ONCE[2], which means X<2X ALWAYS[3] (given that X is positive and not 0). That means no matter what X is the expected value will always be 1/2X+X[4], in every single case.
If you try to fit X to a statistical[5] distribution you are just piling assumptions on top of assumptions[6]. You are making assumptions about the sampling[7] distribution and the variance[8]. Assumptions in which you do not have the data to justify. You are also making assumptions about how X was even selected.[9] — Jeremiah
Then, despite the fact that I tried to address only those posts that had a smidgen of relevancy, or ones you pointed out as significant (and later claimed were not), you continued to insist you wouldn't read what I wrote. And it's quite clear you didn't; or at least that you didn't understand any of it.
I am not doing this, not until you actually read all of my posts in this thread. — Jeremiah
I'm still confused. This makes it sound like the switching argument isn't fallacious -- it just makes an unwarranted assumption. — Srap Tasmaner
Do we need to assume that X is not continuous? If it is, all these probabilities are just 0, aren't they? — Srap Tasmaner
Jeremiah only got into the sims & distributions business because everyone was talking about these things and it was his intention to put an end to all the speculation and get the discussion back on track. It seemed to me he did this reluctantly with the intention of showing that even if you made some assumptions you shouldn't -- this has always been his view -- it might not help in the way you think it does. — Srap Tasmaner
I was being terse. A longer version of what I said is "So '2X' is meaningless if you try to use it as a value." This thread has gone on too long, and I didn't want to have to explain the mathematics of probability theory any more than I already have.
Isn't 2X just a transformation of X that doubles the possible values in X? — Andrew M
The amount you have is $x. The other envelope contains either $2x or $x/2. If it's $2x then you gain $x by switching. If it's $x/2 then you lose $x/2 by switching. — Michael
My simulations were there to display the inherent ambiguity in defining a prior distribution. X is an unknown, treat it like an unknown. — Jeremiah
And my response to these sentiments has always been that you can't define/calculate the prior distribution, and that it was a misguided effort to even try (as you did).
I have also already shown that trying to calculate expected returns is a misguided effort. — Jeremiah
Choosing any explicit distribution for the OP is indeed misguided, which is why your simulations were misguided. That, and the fact that your conclusions could be proven without such modeling.
I think in terms of modeling the actual OP the inclusion of distributions or switching strategies is misguided. — Jeremiah
However, these models do provide a useful platform to investigate properties that may be applied to other probabilistic aspects.
And that's because of what I explained here. — Michael
We've already established that the expected gain is
E(B ∣ A=a) = P(X=a ∣ A=a)·2a + P(2X=a ∣ A=a)·(a/2) — Michael
The objective probabilities of X=a∣A=a and 2X=a∣A=a depend on how the host selects the value of X.
If he selects it at random from a distribution that includes a/2 and a then the objective probability of X=a∣A=a is 0.5 …
So there would be an objective expected gain.
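For completeness, the gain being claimed there: if both of those conditional probabilities really were 0.5, the expectation above would give

E(B ∣ A=a) = 0.5·2a + 0.5·(a/2) = 5a/4

i.e. an expected gain of a/4 from switching. The open question in this thread is whether the host's selection method ever licenses those 0.5s for every a.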
What if we just say that, having observed the value of our envelope to be a, then the expected value of the other is 3X - a for some unknown X? That formula, unlike the expected value formula, doesn't require any probabilities to be filled in. It's uninformative, but so what? — Srap Tasmaner
On the other hand, I think the right way to look at it is what I've been saying lately:
there are two choices;
the host's choice determines how much can be gained or lost by switching;
the player's choice determines whether they gain or lose. — Srap Tasmaner
Certainly. It isn't "objective." I thought I made that pretty clear.
So if I pick a number at random from {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} you disagree that the objective probability that I will pick 5 is 0.1? — Michael
And that's the point. You cannot know this in the two envelope problem, when you know what value you are switching from. Unless, of course, you know how the amounts were determined. Do you?
If I know that the odds are even then I will play.
I'm assuming, based on the first sentence here, that you left a "not" out of the second?
If I know that the odds aren't even then I might be willing to play, depending on the odds. If I don't know the odds then I will play.
Yep. Now, do you know how the odds of having only a $5 and a $10 compare to only having a $10 and a $20? No? Then you can't say that the chances of gaining are the same as the chances of losing.
And if all he had was a $10 bill and a $20 bill then my chances of picking High = $10 is nil.
Say "I don't know how to compare the chances of gaining to the chance of losing."So what am I supposed to do if I don't know how the values are selected?
Nope. The PoI applies only if you can establish that there is a causal equivalence between every member of the set to which you apply it. It is specifically inapplicable here, because there cannot be a strategy for filling the envelopes where it is true for an arbitrary value you see.
As I said here, if I don't have any reason to believe that X = 5 is more likely than X = 10 then why wouldn't I switch? I think the principle of indifference is entirely appropriate in this circumstance.
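A toy example of that restriction (the amounts are mine, purely for illustration): suppose the smaller amount X can only be 5 or 10, each equally likely, so the possible pairs are (5, 10) and (10, 20).

If you see 5: the other envelope holds 10 with certainty (switching gains 5).
If you see 20: the other envelope holds 10 with certainty (switching loses 10).
If you see 10: P(other = 20) = P(other = 5) = 0.5.

The 50:50 indifference holds for one observed value but not for an arbitrary one, and any valid way of filling the envelopes has at least one value (the largest) where switching is a sure loss.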
That is a different question. The point is that the risk is unknowable, and probably not 50%. Whether you think that a £5 loss is acceptable regardless of the risk is a subjective decision only you can make.
There's more to gain than there is to lose, and a loss of £5 is an acceptable risk.
There is never any case, no matter what envelope you picked from whatever pair, in which you have a non-zero chance of gaining and a non-zero chance of losing — Srap Tasmaner
Well, I can't address your disagreement unless you explain why you feel that way. That characterization is correct. There may be different ways people express their uncertainty, but it still boils down to the same concept.
In broad terms I do not disagree with that characterisation. — andrewk
What kind of differences are you talking about? There is no single way to express a sample space, and in fact what constitutes an "outcome" is undefined. We've experienced that here: some will use a random variable V that means the value in your envelope, while others represent the same property of the process by the difference D (which is the low value as well).
But there is often more than one way to represent uncertainty, and these lead to different probability spaces. I have referred previously to the observation that in finance many different, mutually incompatible probability spaces can be used to assign a value to a portfolio of derivatives.
Maybe I was mixing Andrews up. I apologize.
And what you seem to be avoiding with that attitude, is that the expectation formula (v/2)/2 + (2v)/2 is already assuming: — JeffJo
I am not an advocate for that expectation formula, so I don't see why you'd think I am avoiding those objections to it.
The highlighted assertion is incorrect. First off, "objective probability" means the "likelihood of a specific occurrence, based on repeated random experiments and measurements (subject to verification by independent experts), instead of on subjective evaluations." We have no such repeated measurements, so any assessment of Pr(X=a∣A=a) is subjective.
If he selects it at random from a distribution that includes a/2 and a then the objective probability of X=a∣A=a is 0.5 and the objective probability of 2X=a∣A=a is 0.5. So there would be an objective expected gain. — Michael
So, you are willing to risk losing £5 for the chance to gain £10? Regardless of the odds behind that risk?
If there's £10 in my envelope and I know that the other envelope contains either £5 or £20 because I know that one envelope contains twice as much as the other then I have a reason to switch; I want an extra £10 and am willing to risk losing £5 to get it. — Michael
Where I differ from that perspective is that I reject the notion that there is such a thing as a 'real' probability (aka 'true', 'raw', 'correct', 'absolute' or 'observer independent' probability). — andrewk
I did read the thread. You did not read my replies. Like this one, where I said "you have no information that would let you calculate [the expected gain or loss]" and you replied with "you can't really calculate the expected value" as if I hadn't just said the same thing.
If you do look, THERE IS PROBABLY AN EXPECTED GAIN OR LOSS, but you have no information that would let you calculate it. This is different from knowing it is 0. — JeffJo
So since you don't know which case you are in after seeing Y and they are not equal you can't really calculate the expected value. Now if you never opened A and never saw Y, that is a different story. — Jeremiah
You did not read the thread. — Jeremiah
I think the confusion comes when you switch from
E(other) = (larger)P(picked smaller) + (smaller)P(picked larger)
where the probabilities of picking smaller and larger are equal, to
E(other | picked = a) = (2a)P(picked smaller | picked = a) + (a/2)P(picked larger | picked = a)
because it's tempting to think these conditional probabilities are equal, just like the unconditional probabilities above, but this we do not know. — Srap Tasmaner
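For what it's worth, those conditional probabilities can be written out with Bayes' theorem, which makes the dependence on the prior explicit (using X for the smaller amount, as elsewhere in the thread):

P(picked smaller | picked = a) = P(X = a) / (P(X = a) + P(X = a/2))
P(picked larger | picked = a) = P(X = a/2) / (P(X = a) + P(X = a/2))

They are equal only when P(X = a) = P(X = a/2), which is exactly the information the OP withholds.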
You are misinterpreting what I said, what your link says and your source is Wikipedia. — Jeremiah
Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available.
...
H stands for any hypothesis whose probability may be affected by data (called evidence below). Often there are competing hypotheses, and the task is to determine which is the most probable.
Pr(H), the prior probability, is the estimate of the probability of the hypothesis H.
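For reference, the theorem the quoted passage is describing, written in the same notation:

Pr(H ∣ E) = Pr(E ∣ H) · Pr(H) / Pr(E)

where E is the observed evidence and Pr(H ∣ E) is the updated (posterior) probability of the hypothesis.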
↪Jeremiah
Why is equiprobability simple but other priors aren't? — fdrake
And if I see £10 then I stand to gain £10 and I stand to lose £5. — Michael
You don't "guess" a prior. Priors have to be justified. If you don't know you use an uninformative prior. — Jeremiah
And that unknown value in his pocket has a distribution. We don't need to "check it," as long as the symbolic probability space we use satisfies the requirements of being a probability space.
I agree in the strictest sense of the definition there is an unknown distribution, but as far as we know it was whatever was in his pocket when he filled the envelopes. We can't select a distribution to use, as we have no way to check it. — Jeremiah
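To be explicit about what "satisfies the requirements of being a probability space" means in the reply above, nothing beyond what was already said about valid distributions is needed: some set of possible values x with probabilities p(x) such that

p(x) ≥ 0 for every possible x, and the sum of p(x) over all possible x equals 1.

We never have to identify which such distribution the benefactor actually used.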
Jeremiah oversimplifies.
If you use a distribution you are making assumptions not included in the OP. I pointed this out before. — Jeremiah