• Jeremiah
    1.5k
    It should be noted that a so-called "switching strategy" can only work if it has time to learn the distribution. It works because in the long run it can gather enough information to approximate the distribution, but at the start its predictions will be very unreliable.


    Consider the simulation @Michael posted: Here

    He ran it 10,000 times, which means it was able to gather a lot of information about the distribution. But what if we try it with 5 runs? What is our expected gain then? I will use his code but change the number of iterations.

    Let's give it a go at 5 iterations:

    [1] "Gain: -0.112966601178782"

    At 10 iterations:

    [1] "Gain: 0.190537084398977"

    At 20 iterations:

    [1] "Gain: 0.468846391116595"

    At 30 iterations:

    [1] "Gain: 0.331561140647656"

    At 50 iterations:

    [1] "Gain: 0.146130465279402"

    At 100 iterations:

    [1] "Gain: 0.246130667031913"

    Only at 100 iterations do we get Michael's predicted expected returns. A "switching strategy" depends on repeated occurrences to work, as it has to gather that information. So just how many envelopes do you think you'll get to open?
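
    To make the dependence on run count concrete, here is a minimal sketch of a threshold-switching simulation in R (not Michael's exact code, which is only linked above; the envelope range, the cutoff distribution, and the gain metric are all assumptions for illustration):

    simulate_gain <- function(n_games) {
      stick    <- numeric(n_games)
      switched <- numeric(n_games)
      for (i in seq_len(n_games)) {
        x      <- runif(1, 1, 100)     # smaller amount, unknown to the player
        pair   <- c(x, 2 * x)
        pick   <- sample(2, 1)         # choose an envelope at random
        cutoff <- runif(1, 1, 200)     # independent random cutoff
        stick[i]    <- pair[pick]      # value if we always stick
        switched[i] <- if (pair[pick] < cutoff) pair[3 - pick] else pair[pick]
      }
      print(paste("Gain:", mean(switched) / mean(stick) - 1))
    }

    set.seed(1)
    for (n in c(5, 10, 20, 30, 50, 100)) simulate_gain(n)

    With only a handful of games the reported gain swings wildly around its long-run value, which is exactly the instability the numbers above display.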
  • Jeremiah
    1.5k
    Bayesian inference is subject to the same fault: it needs several occurrences to correct the errors of its baseless initial assumptions before becoming accurate.
  • Jeremiah
    1.5k
    The truth is, the OP never mentioned that you'll get to open 100 envelopes. In fact, in the OP there are only two envelopes and you only get to pick one of them. These "switching strategies" are not applicable, as you don't have a chance to learn the distribution through repeatedly opening envelopes.
  • Michael
    15.4k
    Then you reject 1, because those are two different values of X.Srap Tasmaner

    I don't reject 1. Either a) the envelopes are valued at £10 and £20 (so X = 10) or b) the envelopes are valued at £5 and £10 (so X = 5). Since the value of X is unknown, I don't know which it is.

    What you (and others) are saying is that if a) is the case then I gain X and if b) is the case then I lose X, which is true, but not symmetric, because if a) is the case then I gain £10 and if b) is the case then I lose £5.
  • Srap Tasmaner
    4.9k

    If X = 10 and your envelope is worth 10, you have the X envelope. By trading, you gain X. This is the X that matters. For any pair of envelopes, there is a single value of X. (If your envelope was worth 20, you would have the 2X envelope and would lose X by trading.)

    If X = 5 and your envelope is worth 10, you have the 2X envelope. By trading, you lose X. (If your envelope was worth 5, you would gain X by trading.)

    Every pair of envelopes has one larger and one smaller. You have an even chance of picking the larger or the smaller. If you picked the larger, you can only lose by trading. If you picked the smaller, you can only gain by trading. There is never any case, no matter what envelope you picked from whatever pair, in which you have a non-zero chance of gaining and a non-zero chance of losing. It's always one or the other and never both.
  • Michael
    15.4k
    If X = 10 and your envelope is worth 10, you have the X envelope. By trading, you gain X. This is the X that matters. For any pair of envelopes, there is a single value of X. (If your envelope was worth 20, you would have the 2X envelope and would lose X by trading.)

    If X = 5 and your envelope is worth 10, you have the 2X envelope. By trading, you lose X. (If your envelope was worth 5, you would gain X by trading.)

    Every pair of envelopes has one larger and one smaller. You have an even chance of picking the larger or the smaller. If you picked the larger, you can only lose by trading. If you picked the smaller, you can only gain by trading. There is never any case, no matter what envelope you picked from whatever pair, in which you have a non-zero chance of gaining and a non-zero chance of losing. It's always one or the other and never both.
    Srap Tasmaner

    I know this. If X = 10 then I gain £10 by switching. If X = 5 then I lose £5 by switching. I'm betting on the value of X being 10. It's a £5 bet with a 2:1 payout. If the value of X was selected at random from a distribution which includes both 5 and 10 then there's a 50% chance of winning. And even if it wasn't, given the principle of indifference I will assign an objective Bayesian probability of 50% to each event, which is all I have to work with. If I don't have any reason to believe that X = 5 is more likely than X = 10 then why wouldn't I switch? Not switching is an implicit bet that X = 5, which doesn't pay out (or cost) anything.
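
    For concreteness, here is the arithmetic behind that bet in R, with the 50/50 split taken as an assumption rather than a given:

    p <- 0.5
    p * 10 + (1 - p) * (-5)   # expected gain from switching: £2.50, i.e. a/4 for a = £10

    Not switching keeps £10 for certain; on this assumption, switching has an expectation of £12.50.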
  • Srap Tasmaner
    4.9k

    Switching is not objectively worse than sticking. It's also not objectively better. Half the time switching is a mistake. Half the time sticking is a mistake. But that's because of your choice, not because of the values in the envelopes.

    But it is still false that you have an expected gain of 1/4 the value of your envelope. You really don't. All these justifications for assigning 50% to more possibilities than two envelopes can hold are mistaken. You picked from one pair of envelopes. This is the only pair that matters. You either have the bigger or the smaller. Trading the bigger is a loss, trading the smaller is a gain, and it's the same amount each way.

    (Maybe one day I'll figure out how to make this into a proper wager -- something with paying each other the value of the envelopes each ends up with. As it is, you make money either way.)
  • Pierre-Normand
    2.4k
    But it is still false that you have an expected gain of 1/4 the value of your envelope. You really don't. All these justifications for assigning 50% to more possibilities than two envelopes can hold are mistaken. You picked from one pair of envelopes. This is the only pair that matters. You either have the bigger or the smaller. Trading the bigger is a loss, trading the smaller is a gain, and it's the same amount each way.Srap Tasmaner

    I agree that in the context of any real world instantiation of this problem, what you say is true (because there is no real world instantiation of this problem, with finite stocks of money, that satisfies the condition of equiprobability tacitly assumed in the original formulation). The challenge that @Michael would have to answer is this: Why is it, on his view, that it isn't rationally mandatory for him to switch his choice even before he has looked into his envelope? And if it is rationally mandatory for him to switch without even looking -- because whatever the content X of his envelope might be, the expected gain from switching is 0.25X -- then why isn't it also rationally mandatory for him to switch yet again, if given the opportunity, without even looking?
  • Andrew M
    1.6k
    *** You might have (3) and (4) a little wrong but I can't judge. The McDonnell & Abbott paper makes noises about the player using Cover's strategy having no knowledge of the PDF of X.Srap Tasmaner

    There is an expected gain of X from strategic switching if the algorithm generates a random number between the X and 2X envelope amounts. In this case, the player will switch only when the chosen amount is less than the random number (thus ending up with the other envelope with 2X) and stick otherwise (thus keeping her chosen envelope with 2X).

    However there needs to be a distribution to generate the random number from. If that distribution is too wide or too dissimilar to the envelope distribution, the probability of the above situation occurring could be vanishingly small such that it is never realizable in practice. So for the strategy to be useful, at least a ballpark estimate of the maximum possible amount would be needed.
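
    A rough way to see that "vanishingly small" worry in R (the uniform cutoff range is an assumption for illustration): the strategy only improves on a coin flip when the random number lands strictly between X and 2X.

    x <- 10
    p_hit <- function(upper) x / upper   # cutoff ~ Uniform(0, upper); P(X < cutoff < 2X) = (2X - X)/upper
    p_hit(40)       # well-matched cutoff range: 0.25
    p_hit(1e6)      # far too wide: 1e-05, so the edge almost never materialises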
  • Srap Tasmaner
    4.9k
    between the X and 2X envelope amountsAndrew M

    Right. The paper Jeremiah linked talks about this too. I was thinking about this on a 6-hour drive a few days ago, and I agree that in general we're talking vanishing smallness. However -- one neat thing about how the game works is that the numbers, and thus the intervals, get bigger and bigger, faster and faster. Or rather: all of these strategies are mainly designed to avoid big losses, so we don't really care if small intervals are hard to hit; we only care about the really big intervals, and those are, if not easy, at least easier to hit. Any big loss avoided will end up pushing you over breaking even.

    This practical reasoning stuff I find endlessly cool -- but it's only barely related to the "puzzle" aspect here.
  • Srap Tasmaner
    4.9k

    I agree completely and have so argued. All you really have to do to get the ball rolling is designate the value in the envelope. It's the innocent "Let Y = ..." This is what I love about that paper Jeremiah linked. I have repeatedly voiced my bafflement that just assigning variables a certain way can lead to so much trouble, and the paper addresses that directly.
  • andrewk
    2.1k
    Right. The paper Jeremiah linked talks about this too.
    That paper appears to put forward the same position as mine: that always-switching delivers no expected gain, even if the envelope has been opened, but that a strategy based on switching only if the observed amount is less than some pre-selected value delivers a positive expected gain.
  • Srap Tasmaner
    4.9k

    Yes, and the cutoff can be entirely arbitrary, but the effect will often be tiny. (I spent a few minutes trying to get a feel for how this works and was seeing lots of 1.000417.... sorts of numbers. The argument is sound, so I probably won't spend any more time trying to figure out how to simulate knowing nothing about the sample space and its PDF.)
  • Andrew M
    1.6k
    But suppose you don't do this. Suppose you just select some X at random from {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, put it in one envelope, and then put 2X in the other envelope. I select one at random and open it to see £10. Am I right in saying that {£5, £10} and {£10, £20} are equally likely?Michael

    Yes. That is because you know what the distribution is.

    If we don't have this information then we should apply the principle of indifference, which will have us say that {£5, £10} and {£10, £20} are equally likely.Michael

    Without knowing the distribution you don't know whether both the envelope pairs were possible and, if they were, whether they were equally weighted.

    Given that knowledge you can apply the principle of indifference (based on the envelope pair weightings being identical). And you can also calculate the expected gain. Otherwise all bets are off, so to speak.
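
    A quick enumeration of Michael's {1, ..., 10} example shows why knowing the distribution licenses the 50/50 split (a sketch in R; the uniform prior is taken from his description):

    xs <- 1:10
    cases <- expand.grid(x = xs, picked_low = c(TRUE, FALSE))   # 20 equally likely cases
    cases$seen <- ifelse(cases$picked_low, cases$x, 2 * cases$x)
    seen10 <- subset(cases, seen == 10)
    table(seen10$picked_low) / nrow(seen10)   # pair {£10, £20} vs {£5, £10}: 0.5 each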
  • Pierre-Normand
    2.4k
    The argument is sound, so I probably won't spend any more time trying to figure out how to simulate knowing nothing about the sample space and its PDF.Srap Tasmaner

    One way to represent "knowing nothing" might be to take the limit of an initial distribution that is uniform between 0 and M, where M tends towards infinity. In that case, the cutoff value that you can rely on to maximise your gain also tends towards infinity. In practice, that means always switching; and the expected added value from switching tends towards 0.25X, where X is the value of the envelope that you were initially dealt. This still appears to give rise to the same paradox whereby switching multiplies your expectation by 1.25 even though switching doesn't change the probability distribution of the value of the envelope that you end up holding. But the paradox is only apparent, since your expectation from the non-switching strategy tends towards infinity, and 1.25 times infinity still is infinity. It is thus both true that your expectation from switching remains the same and that it is increased by a factor of 1.25. In this limiting case, though, it is infinitely unlikely that you will be dealt an envelope containing an amount smaller than any given bound, however large that bound might be. This also explains why, in this limiting case, you should always switch.
  • Srap Tasmaner
    4.9k

    Thanks. (There is lots I have yet to learn, so some of this is going right by me -- for now.)

    I did wonder -- maybe a week ago? it's somewhere in the thread -- if there isn't an inherent bias in the problem toward switching because of the space being bounded to the left, where the potential losses are also getting smaller and smaller, but unbounded to the right, where the potential gains keep getting bigger and bigger.

    It's just that in the single instance, this is all an illusion. There is this decreasing-left-increasing-right image projected onto the single pair of envelopes in front of you, but you can't trust it. It even occurred to me that, since there are not one but two bits of folk wisdom warning of this error -- "the grass is always greener ..." and "a bird in the hand ..." -- this might be a widely recognized (but of course easily forgotten) cognitive bias.
  • Pierre-Normand
    2.4k
    I did wonder -- maybe a week ago? it's somewhere in the thread -- if there isn't an inherent bias in the problem toward switching because of the space being bounded to the left, where the potential losses are also getting smaller and smaller, but unbounded to the right, where the potential gains keep getting bigger and bigger.Srap Tasmaner

    Yes, you can have a uniform discrete distribution that is bounded between 0 and M such that the possible values are M, M/2, M/4, ... In that limit case also, if the player takes M as the cutoff for not-switching (that is, she only switches when she sees a value lower than M) her expectation is 1.25X whenever she switches and her expectation is X=M when she is dealt M (and therefore doesn't switch). Her overall expectation if she were rather to always switch would only be X. The limit case where M tends towards infinity also yields an always switching strategy (with M=infinity being the cutoff) with an expectation that is both X (=infinity) and 1.25X (=infinity).
  • Jeremiah
    1.5k


    You modeled a different scenario than the OP, then argued that it was impossible for you to be wrong, as you claimed there is no such thing as being correct. However, this is not art class, and it is possible to be incorrect.
  • JeffJo
    130
    If there's £10 in my envelope and I know that the other envelope contains either £5 or £20 because I know that one envelope contains twice as much as the other then I have a reason to switch; I want an extra £10 and am willing to risk losing £5 to get it.Michael
    So, you are willing to risk losing $5 for the chance to gain $10? Regardless of the odds behind that risk?

    Say I offer you the chance to play a game. It costs you $5 to play, but if you win I will return your $5, and give you $10. The game is to flip a fair coin, and if you get Heads N times in a row, you win.
    1. Say I tell you that N=1. Are you willing to play? I hope so, as the chances of winning are 1 in 2 and your expectation is (-$5)/2 + ($10)/2 = $2.50.
    2. Say I tell you that N=10. Are you willing to play? (The chances of winning are now 1 in 1024.)
    3. Say we determine N by rolling a fair die. Are you willing to play? You could get odds of 1 in 2 again, but you could also get odds of 1 in 64. (The overall chances are just under 1 in 6; see the sketch after this list.)
    4. Say I tell you that there is an integer written on every card in a deck of cards, and you determine N by drawing a card at random. Are you willing to play if I don't tell you what the numbers could be?
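
    As a quick check of those numbers (a sketch in R; the payoff structure is as described above):

    win_prob    <- function(n) 0.5 ^ n
    expectation <- function(p) -5 * (1 - p) + 10 * p

    expectation(win_prob(1))    # game 1: $2.50
    win_prob(10)                # game 2: about 0.00098, i.e. 1 in 1024
    p3 <- mean(win_prob(1:6))   # game 3: a fair die chooses N
    p3                          # about 0.164, just under 1 in 6
    expectation(p3)             # about -$2.54, a losing game on average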

    You seem to resist accepting that the OP is most similar to game #4, and not at all similar to game #1. You may be able to calculate the chances of drawing a particular card, but if you don't know what is written on any particular card then this does not translate into a 1/52 chance of drawing a "1".

    In the OP, you have no way of knowing whether your benefactor was willing to part with more than $10. If all he had was a $5 bill and a $10 bill, then he can't. Your chances of picking Low=$5 or High=$10 were indeed 50% each, but your chances of picking Low=$10 were nil.

    But if you don't care about chances, only the possibility of gain? And so answered "yes" to all four games? Then go ahead and switch envelopes in the OP. Just don't expect a gain. That can't be determined from the information.
  • JeffJo
    130
    If he selects it at random from a distribution that includes a/2 and a then the objective probability of X=a∣A=a is 0.5 and the objective probability of 2X=a∣A=a is 0.5. So there would be an objective expected gain.Michael
    The highlighted assertion is incorrect. First off, "objective probability" means the "likelihood of a specific occurrence, based on repeated random experiments and measurements (subject to verification by independent experts), instead of on subjective evaluations." We have no such repeated measurements, so any assessment of Pr(X=a∣A=a) is subjective.

    The Principle of Indifference itself is a subjective assessment. But to apply it, you first must determine some kind of equivalence between the origins of the cases to which you apply it. You can't do that for the values here, only for the chance of picking High or Low.
  • Michael
    15.4k
    The highlighted assertion is incorrect. First off, "objective probability" means the "likelihood of a specific occurrence, based on repeated random experiments and measurements (subject to verification by independent experts), instead of on subjective evaluations." We have no such repeated measurements, so any assessment of Pr(X=a∣A=a) is subjective.JeffJo

    So if I pick a number at random from {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} you disagree that the objective probability that I will pick 5 is 0.1?

    So, you are willing to risk losing $5 for the chance to gain $10? Regardless of the odds behind that risk?JeffJo

    If I know that the odds are even then I will play. If I know that the odds aren't even then I might be willing to play, depending on the odds. If I don't know the odds then I will play.

    In the OP, you have no way of knowing whether your benefactor was willing to part with more than $10. If all he had was a $5 bill and a $10 bill, then he can't. Your chances of picking Low=$5 or High=$10 were indeed 50% each, but your chances of picking Low=$10 were nil.JeffJo

    And if all he had was a $10 bill and a $20 bill then my chances of picking High = $10 are nil.

    So what am I supposed to do if I don't know how the values are selected? As I said here, if I don't have any reason to believe that X = 5 is more likely than X = 10 then why wouldn't I switch? I think the principle of indifference is entirely appropriate in this circumstance. There's more to gain than there is to lose, and a loss of £5 is an acceptable risk.
  • JeffJo
    130
    In broad terms I do not disagree with that characterisation.andrewk
    Well, I can't address your disagreement unless you explain why you feel that way. That characterization is correct. There may be different ways people express their uncertainty, but it still boils down to the same concept.

    But there is often more than one way to represent uncertainty, and these lead to different probability spaces. I have referred previously to the observation that in finance many different, mutually incompatible probability spaces can be used to assign a value to a portfolio of derivatives.
    What kind of differences are you talking about? There is no single way to express a sample space, and in fact what constitutes an "outcome" is undefined. We've experienced that here: some will use a random variable V that means the value in your envelope, while others represent the same property of the process by the difference D (which is the low value as well).

    But the significant difference in those portfolio analyses is the distribution function Pr(). Even though one's subjectivity may be based on past experience, there is no guarantee that the underlying process will be the same in the future, or in the cases being compared. The factors determining the actual values are not as consistent as is required for analysis.

    So any assessment is, by necessity, subjective to some degree. Different sample spaces (or probability spaces) simply allow the analysts to apply their subjectivity in different ways.

    But we really are getting off the point, which is that they do use a probability space; whether it is "real," "absolute," or even "correct" is irrelevant. The point here is that, over the set of all possible probability spaces for the OP, there will either be some values of v where the expected change is non-zero, or the expected value over V is infinite. And in fact, the only way the expectation after switching is 5v/4 is if the probability distribution says the two possible pairs are equally likely. Which also implies you are assuming a probability space. The only way the expected change is identically zero is if you don't know v, or you know that Pr(v/2, v) = 2*Pr(v, 2v).
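
    That last condition is easy to check numerically (a sketch in R; v = 10 is an arbitrary observed value):

    expected_change <- function(v, p_low_pair) {
      # p_low_pair = Pr(pair is {v/2, v}); switching then loses v/2.
      # Otherwise the pair is {v, 2v} and switching gains v.
      p_low_pair * (-v / 2) + (1 - p_low_pair) * v
    }
    expected_change(10, 2/3)   # Pr(v/2, v) = 2*Pr(v, 2v): expected change is 0
    expected_change(10, 1/2)   # equal weighting: expected change is v/4 = 2.5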

    And what you seem to be avoiding with that attitude, is that the expectation formula (v/2)/2 + (2v)/2 is already assuming: — JeffJo

    I am not an advocate for that expectation formula, so I don't see why you'd think I am avoiding those objections to it.
    Maybe I was mixing Andrews up. I apologize.
  • JeffJo
    130
    There is never any case, no matter what envelope you picked from whatever pair, in which you have a non-zero chance of gaining and a non-zero chance of losingSrap Tasmaner

    This emphasizes how a probability space changes based on your knowledge, or rather, on what knowledge is missing.

    I just flipped a coin on my desk. I can see whether it is Heads or Tails. So "there is never a case where there is a non-zero chance of Heads, and a non-zero chance of Tails."

    But if I ask you to assess the two cases, you should say that each has a 50% chance. The fact that the outcome is determined does not change how probability works for those who do not know the determination.

    Srap Tasmaner is saying that, to someone who knows what is in *both* envelopes, the possibility of gaining or losing is determined. Michael is saying that, to someone who doesn't see both, the two cases should be treated with probabilities that are >=0, and that add up to 1.

    The error is thinking that both must be 50%. Your chance of High or Low is 50% if you don't know the value in the one you chose, but it can't be determined if you do.
  • Srap Tasmaner
    4.9k
    If I know that the odds are even then I will play. If I know that the odds aren't even then I might be willing to play, depending on the odds. If I don't know the odds then I will play.Michael

    This particular quandary isn't supposed to arise in real life. A bookmaker first estimates the odds, and then the payouts are simply those odds minus the vigorish (his profit). If you know that a wager is paying off at 2:1, then you know the odds of it paying off are around 2:1 against.

    If this were not true then (a) bookmaking would not be a thing; more to the point (b) you should not be gambling. Placing a wager based only on the payout without considering the odds of winning is crazy. If the principle of indifference tells you to assume there's even odds whenever you don't know the odds, then the principle of indifference shouldn't be gambling either.
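
    To put rough numbers on that (the 2:1 figures come from the paragraph above; the size of the vig is an assumption):

    p_win <- 1/3                                        # odds of 2:1 against
    ev <- function(payout) -1 * (1 - p_win) + payout * p_win
    ev(2.0)     # fair 2:1 payout: expectation 0 per unit staked
    ev(1.85)    # payout shaded by the vig: negative expectation for the bettor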

    As you've pointed out, in essence what happens here is that on finding $10, you pocket $5, and then place a $5 wager at 2:1 that the other envelope has $20. So the bookmaker in this case is paying you to play, which is no way to run a book. Suppose instead you had to pay that $5 for the opportunity to switch. That is, you give back the $10 plus pay $5 out of pocket. Would you still play? If the other envelope has $5, you've lost nothing, but if it has $20, you've made $15. That seems alright. A free chance to get $15 with no risk. But then you could have walked away with $10 for no risk.
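
    Spelled out in R (values taken from the paragraph above):

    found <- 10
    cost  <- found + 5        # give back the $10 and pay $5 out of pocket
    other <- c(5, 20)
    found + other - cost      # net versus never playing: $0 or $15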

    Still not sure how to make this into a proper wager. We need to zero in on the expectation of $12.50 somehow.
  • JeffJo
    130
    So if I pick a number at random from {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} you disagree that the objective probability that I will pick 5 is 0.1?Michael
    Certainly. It isn't "objective." I thought I made that pretty clear.

    Before I say whether I agree with the value, you should understand that "pick at random" does not mean "pick with uniform randomness." These two are often confused. Had you said "uniform," of course I would agree with that probability. But it would be neither objective nor subjective; it would be explicit.

    The best guess for a subjective probability is that you would pick uniformly. But if you want an objective probability, you need to find a test subject and get them to repeat the experiment many times. That's what objective probability means. And it has been shown over and over that people can't do that with uniformity.

    If I know that the odds are even then I will play.
    And that's the point. You cannot know this in the two envelope problem, when you know what value you are switching from. Unless, of course, you know how the amounts were determined. Do you?

    If I know that the odds aren't even then I might be willing to play, depending on the odds. If I don't know the odds then I will play.
    I'm assuming, based on the first sentence here, that you left a "not" out of the second?

    You can't know the odds when you look in an envelope and see a value. You can choose to play, not knowing the odds, but your calculation of the expectation is wrong.

    And if all he had was a $10 bill and a $20 bill then my chances of picking High = $10 are nil.
    Yep. Now, do you know how the odds of having only a $5 and a $10 compare to only having a $10 and a $20? No? Then you can't say that the chances of gaining are the same as the chances of losing.

    So what am I supposed to do if I don't know how the values are selected?
    Say "I don't know how to compare the chances of gaining to the chance of losing."

    As I said here, if I don't have any reason to believe that X = 5 is more likely than X = 10 then why wouldn't I switch? I think the principle of indifference is entirely appropriate in this circumstance.
    Nope. The PoI applies only if you can establish a causal equivalence among the members of the set to which you apply it. It is specifically inapplicable here, because there cannot be a strategy for filling the envelopes under which it holds for an arbitrary value you see.

    There's more to gain than there is to lose, and a loss of £5 is an acceptable risk.
    That is a different question. The point is that the risk is unknowable, and probably not 50%. Whether you think that a $5 loss is acceptable regardless of the risk is a subjective decision only you can make.
  • Srap Tasmaner
    4.9k
    Srap Tasmaner is saying that, to someone who knows what is in *both* envelopes, the possibility of gaining or losing is determined. Michael is saying that, to someone who doesn't see both, the two cases should be treated with probabilities that are >=0, and that add up to 1.

    The error is thinking that both must be 50%. Your chance of High or Low is 50% if you don't know the value in the one you chose, but it can't be determined if you do.
    JeffJo

    I do see that. From the player's point of view her uncertainty might as well be modeled as the outcome not yet having been determined and still subject to chance.

    On the other hand, I think the right way to look at it is what I've been saying lately:

    1. there are two choices;
    2. the host's choice determines how much can be gained or lost by switching;
    3. the player's choice determines whether they gain or lose.

    The player's choice actually happens in two steps, picking and then switching or not, but the effect is the same. You could pick A and switch to B, or you could pick B and stick. The player gets to determine whether they end up with A or B by whatever method they enjoy, but that's all they get to do. More reason to think switching is pointless.

    What's frustrating about the whole expected value calculation is that the point of doing it at all is not to tinker with the chances of getting the bigger of the two envelopes the host has offered -- it has to include tinkering with the amounts in those envelopes. There's nothing to be done with the choice because whether you choose in one step or two, it's just a coin flip. (Thus different from Monty Hall, where the key is understanding how your choice works, and the odds of having chosen correctly.)

    So I've been a bit strident about this because to bother with the expected value calculation here means including in your calculation events known to be counterfactual. Is this actually unusual, or am I being stupid? Events that haven't happened yet may become counterfactual by not happening and of course we set odds for those. But I keep thinking that the player cannot choose until the host chooses -- that's the nature of the game -- and thus before there is an opportunity for the player to choose, some events have definitely become counterfactual already. I've thought that one way to describe this is to say that "the other envelope" is not well-defined until both choices have been made, and they are always made in order, (1) host, (2) player.

    So I agree with your point about the player modeling their uncertainty, the same point @andrewk and @Michael have made. But there are nuances here that puzzle me. If it turns out my concerns are misplaced, that's cool. The whole point of these sorts of puzzles is to sharpen your understanding of probability, which I am eager to do.

    ** ADDENDUM **

    I think the "counterfactual" stuff is wrong.

    It's perfectly intelligible to talk about the chances of something having happened or not happened in the past, and that's putting odds on a counterfactual. It's intelligible both in cases where you know what happened ("He just made a one in a hundred shot!") and in cases where you don't ("What's more likely? That I forgot my own name or that you typed it wrong?").

    So that's not it.

    That leaves two acceptable options:

    1. Ignore the values and only consider the odds that you picked the larger of the two offered; those odds are even.
    2. Consider the values but recognize that you do not know the probability of any particular value being in the other envelope -- in which case your calculation cannot be completed.
  • Srap Tasmaner
    4.9k
    You can't know the odds when you look in an envelope and see a value. You can choose to play, not knowing the odds, but your calculation of the expectation is wrong.JeffJo

    This is the point of the odds calculation I posted before, right? The observed value of the envelope provides no information that could help you decide whether you're in [a/2, a] or [a, 2a], because your choice is always a coin flip.

    (Which raises puzzles about how switching strategies work, and I'd have to study more to be clear on that. If there is an upper bound, then you'd like to be able to look at a and determine that X=a/2 is more likely than X=a -- that is, that you're getting close to the upper bound and ought not switch. But that's all to one side.)
  • JeffJo
    130
    On the other hand, I think the right way to look at it is what I've been saying lately:

    there are two choices;
    the host's choice determines how much can be gained or lost by switching;
    the player's choice determines whether they gain or lose.
    Srap Tasmaner

    This is only true if we do not look in an envelope. That *is* the OP, but the other case has also been discussed.

    It is true because we only need to consider one value for the host's choice, and so it divides out. If we look, we need to consider two. And there is no information about the relative probabilities of those two host-choices.

    These are the same two conclusions I have been "harping on" all along, and it is still true that they are the only possible conclusions. If you don't look, the two envelopes have the same expected value. If you do, there is not enough information to say how the probabilities split between having the higher, or lower, value.
  • Srap Tasmaner
    4.9k
    If you don't look, the two envelopes have the same expected value. If you do, there is not enough information to say how the probabilities split between having the higher, or lower, value.JeffJo

    Okay -- this is what I keep forgetting.

    Before you look, you could say both envelopes have an expected value of m=3X/2 for some X. Once you've looked in your envelope, its expected value is no longer m and therefore the expected value of the other is no longer m.

    So we are, contrary to my wishes, forced to consider the expected value of the other envelope, and that leads directly to considering more than one possible value for the other envelope, but with no knowledge of the probabilities that attach to those values.

    Thanks for repeating yourself until I get it. I will ask that less of you as time marches on.
  • Srap Tasmaner
    4.9k

    One more question:

    What if we just say that, having observed the value of our envelope to be a, the expected value of the other is 3X - a for some unknown X? That formula, unlike the expected value formula, doesn't require any probabilities to be filled in. It's uninformative, but so what?