• Srap Tasmaner
    4.9k
    There is a 50% chance that you observed a, because you chose and opened the X envelope; there is a 50% chance that you observed b, because you chose and opened the 2X envelope. The expected value of switching is:
    E = a/2 - b/4
    
    and since 2a = b
    E = a/2 - a/2 = b/4 - b/4 = 0.
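For what it's worth, this can be checked by simulation; a quick sketch (Python rather than the thread's R, with an arbitrary x and seed of my choosing):

```python
import random

def switching_gain(x=10, trials=100_000, seed=1):
    """Simulate one round repeatedly: the pair is always {x, 2x} and
    each envelope is picked with probability 1/2.  Returns the average
    gain from always switching."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        picked, other = rng.sample([x, 2 * x], 2)
        total += other - picked  # +x when you held x, -x when you held 2x
    return total / trials

# Hovers near zero, matching E = a/2 - b/4 = 0 when b = 2a.
print(switching_gain())
```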
    
  • Pierre-Normand
    2.4k
    This still looks like you're considering what would happen if we always stick or always switch over a number of repeated games. I'm just talking about playing one game. There's £10 in my envelope. If it's the lower bound then I'm guaranteed to gain £10 by switching. If it's the upper bound then I'm guaranteed to lose £5 by switching. If it's in the middle then there's an expected gain of £2.50 for switching. I don't know the distribution and so I treat each case as equally likely, as per the principle of indifference. There's an expected gain of £2.50 for switching, and so it is rational to switch.Michael

    This works if you are treating all the possible lower and upper bounds of the initial distribution as being equally likely, which is effectively the same as assuming a game where the distribution is uniform and unbounded. In that case, your expected value for switching is indeed 1.25 * v, conditionally on whatever value v you have found in your envelope, because there is no upper bound to the distribution. The paradox arises.

    If we then play repeated games then I can use the information from each subsequent game and switch conditionally, as per this strategy (or in R), to realize the .25 gain.

    If there is an upper bound M to the distribution, and you are allowed to play the game repeatedly, then you will eventually realize that the losses incurred whenever you switch after being initially dealt the maximum value M tend to wipe out your cumulative gains from the other situations. If you play the game x times, then your cumulative gain will tend, as x grows larger, towards x times the average expected gain from switching in a single game; since the cumulative gain tends towards zero, this average expected gain must also be zero. To repeat: conditionally on where v is situated in the bounded distribution, the expected value of the other envelope could be any one of 2*v, 1.25*v or 0.5*v. On average, it will be v.
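A simulation along these lines, assuming purely for illustration that the lower amount x is drawn uniformly from {1, ..., 10}, so the maximum possible value is M = 20 (the prior and the names are my choices, nothing specified in the game):

```python
import random

def repeated_games(trials=200_000, m=10, seed=2):
    """The lower amount x is drawn uniformly from {1, ..., m}, so the
    pair is {x, 2x} and the maximum possible value is M = 2m.

    Returns the average per-game gain of (a) always switching and
    (b) switching only when the observed value v is below M."""
    rng = random.Random(seed)
    always = informed = 0
    for _ in range(trials):
        x = rng.randint(1, m)
        v, other = rng.sample([x, 2 * x], 2)
        always += other - v       # always switch
        if v < 2 * m:             # skip switching at the known maximum
            informed += other - v
    return always / trials, informed / trials

avg_always, avg_informed = repeated_games()
print(round(avg_always, 2), round(avg_informed, 2))
# always-switching averages out to roughly zero; the informed switcher
# keeps a positive edge by never switching away from the known maximum
```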
  • Srap Tasmaner
    4.9k
    The fallacious premise of the switching argument is that you could observe a given value, whichever envelope you chose and opened. If the envelopes are {5, 10}, you cannot observe 10 by selecting the smaller envelope; if the envelopes are {10, 20}, you cannot observe 10 by selecting the larger envelope. For each round of the game: X has a single value; there is a single pair of envelopes offered; they are valued at X and 2X; when you select an envelope, you select one valued at X or at 2X.
  • Andrew M
    1.6k
    This still looks like you're considering what would happen if we always stick or always switch over a number of repeated games. I'm just talking about playing one game.Michael

    You seem to be saying that with an unknown distribution, there is an expected gain from switching for one game even though over repeated games (with an unknown distribution) there isn't.

    For one game, would you be willing to pay up to 1.25 times the amount in the chosen envelope to switch (say, 1.2 times the amount)?

    If it's in the middle then there's an expected gain of £2.50 for switchingMichael

    There is with that distribution but not the {{5,10},{5,10},{5,10},{5,10},{10,20}} distribution. In this case, there is an expected loss of $2 for switching from $10.

    Without knowing the distribution, the player only knows that the unconditional expected gain is zero (i.e., the sum of the expected gains for each possible amount weighted against their probability of being observed). With the above distribution, that is (4 * $5 + 5 * -$2 + 1 * -$10) / 10 = ($20 - $10 - $10) / 10 = $0.
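The arithmetic above can be checked mechanically; a sketch (in Python rather than the thread's R, with names of my own choosing) enumerating the ten equally likely deals:

```python
from fractions import Fraction

# Andrew M's example prior: four {5,10} pairs and one {10,20} pair,
# each pair equally likely, each envelope within a pair 50:50.
pairs = [(5, 10)] * 4 + [(10, 20)]
deals = [(picked, other)
         for lo, hi in pairs
         for picked, other in ((lo, hi), (hi, lo))]  # 10 equally likely deals

def expected_gain(observed):
    """Expected gain from switching, conditional on the observed amount."""
    gains = [other - picked for picked, other in deals if picked == observed]
    return Fraction(sum(gains), len(gains))

uncond = Fraction(sum(other - picked for picked, other in deals), len(deals))

print(expected_gain(10))  # -2, the expected loss from switching from $10
print(uncond)             # 0, the unconditional expected gain
```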

    If we then play repeated games then I can use the information from each game to switch conditionally, as per this strategy (or in R), to realize the .25 gain over the never-switcher.Michael

    The first game does not have any information from previous games so the player should not expect a .25 gain without that information. The player should only expect a gain (or loss) from switching when they know the distribution which is what the information built up over repeated games would provide.
  • JeffJo
    130
    Let's just look at our first interaction here:

    You can't just enumerate a set of cases, and claim each is equally likely. If you could, there would be a 50% chance of winning, or losing, the lottery.JeffJo

    By this, I was clearly referring to the valid discrete sample space {"Win", "Lose"}. An event space is a sigma-algebra on this set, and a valid example is {{}, {"Win"}, {"Lose"}, {"Win","Lose"}}. A probability space is the 3-tuple of those two sets and a probability measure defined on the event space. By Kolmogorov's axioms, it is valid if the measure assigns those events the probabilities {0, Q, 1-Q, 1}, where 0<=Q<=1.

    My statement above said that you can't simply apportion equal probability to the members of a valid sample space. It isn't that such a probability space is invalid in any way; it is that it is impractical.

    Yet you replied:
    That is a very bad understanding of what a sample space and an event is.Jeremiah
    Since my sample space was a perfectly valid sample space, and I never mentioned events at all, it demonstrates your "very bad understanding" of those terms. It was a very bad choice of a sample space for this problem, for the reasons I was trying to point out and stated quite clearly. But you apparently didn't read that.

    You are not applying your Principle of Indifference there,
    Actually, I was applying it, improperly and with the intent to demonstrate why its restriction is important:

    The Principle of Indifference places a restriction on the possibilities that it applies to: they have to be indistinguishable except for their names. You can't just enumerate a set of cases, and claim each is equally likely.JeffJo

    You went on:
    Furthermore, it makes no sense to use a probability density curve on this problem,Jeremiah
    I didn't say we should (A) use a probability (B) density (C) curve. I stated correctly that there (A) must be a probability (B) distribution for the (C) set of possible values, and that any expectation formula must take this distribution into account. Even if you don't know it.

    The reason your solution in post #6 turns out to be correct, is that this probability turns out to have no effect on the calculation: it divides out. That doesn't make it incorrect to use it - in fact, it is technically incorrect to not use it.
  • Jeremiah
    1.5k


    There are some interesting aspects to note if you diagram the 1.25X argument in comparison to the other two, which clearly show why it is a faulty argument.

    When considering the unselected envelope under the {x,2x} sample space, objectively you have one true value and one false.

    Likewise, when considering the unselected envelope under the {x/2,x} sample space, objectively you have one true value and one false.

    However, it gets different when you consider the sample space {{x/2,x},{x,2x}}. Here you have objectively one true set and one false set. If I end up in the false set then I have two statements that are false. If I end up in the true set then I have one statement that is true, which only has a 1/4 weight in my consideration when it should have a 1/2 weight. So I reduce my chance of getting the true value, meaning I inflate my possibility of error.

    Here I sketched it out by hand to show it.

    https://ibb.co/dB7BUT

    I have also been thinking about the [0,M] argument.

    One of my books makes this distinction.

    The probabilist reasons from a known population to the outcome of a single experiment, the sample. In contrast, the statistician utilizes the theory of probability to calculate the probability of an observed sample and to infer from this the characteristics of an unknown population.
    Mathematical Statistics with Applications, Wackerly, Mendenhall, Scheaffer

    I would argue that the known population is the amounts in the envelopes, x and 2x, and the unknown population is the distribution that x was selected from.
  • Baden
    16.3k
    Mod note: Personal comments will continue to be removed from this discussion. Please don't give us more work to do by responding to them but report anyone who gets personal with you and we will delete the comment or part thereof.
  • JeffJo
    130
    I already figured out that your field was not statistics,Jeremiah
    And it is even more obvious you want to use statistics anywhere you can, no matter how inappropriate. The lexicon of both probability and statistics is the same, since statistics uses probability. It applies it to the experimental data you keep talking about, and of which we have none.

    But if you feel I've mis-used terminology in any way, please point it out. I've pointed out plenty of yours, and you haven't refuted them.
  • Srap Tasmaner
    4.9k
    @Jeremiah, @JeffJo
    This pissing contest is detracting from the thread. Both of you quit it.
  • Jeremiah
    1.5k
    What is the difference between experimental data and observational data?
  • Jeremiah
    1.5k
    you haven't refuted them.JeffJo

    That's because I accept that you use a different vernacular.
  • Srap Tasmaner
    4.9k
    What if we did say that all of the player's choices are conditional on the host's choice? That is, suppose we had X = k, where k is some unknown constant. Then, using 'c' for our observed value,

    p = P(Y=X | Y=c, X=k) = P(c=k)

    Now whatever the value k is, the only permissible values for p are 0 and 1.

    The expectation for the unpicked envelope is then

    E(U) = P(c=k)2c + P(c=2k)c/2

    Once you've observed c, you know that either c=k or c=2k, but you don't know which. That is, observing c tells you one of c=k and c=2k is true (and one false), which is more specific than just

    P(X=c | Y=c) + P(X=c/2 | Y=c) = 1
  • Srap Tasmaner
    4.9k
    Here's a straightforward revision of the decision tree:
    envelope_tree_d.png
    Opening an envelope "breaks the symmetry," as the kids say, so that's the point of no return. Colors represent paths you can be on for each case.

    This one also uses 'c' for both branches, which the previous tree deliberately avoided. This time the intention is that either the red or the blue branch is probability 1 and the other 0. They conflict by design.
  • Jeremiah
    1.5k
    Something else I would like to point out is that assuming that any given x must come from a discrete distribution is not necessarily true. In fact, I used a selection method where the actual chance mechanism was applied to a continuous distribution in this very thread.


    two.envelopes <- function() {
      # Randomly select a number from a continuous uniform distribution,
      # limit it to 3 decimal places, and multiply by 10 to simulate
      # realistic dollar values.
      x <- round(runif(1), 3) * 10
      # Create a vector with x and 2x, then randomly select one for A.
      p <- c(x, 2 * x)
      A <- sample(p, 1, replace = FALSE)
      # B is whichever of the two amounts A is not.
      if (A == x) {
        B <- 2 * x
      } else {
        B <- x
      }
      return(c(A, B))
    }
    

    I used a continuous uniform distribution to randomly select an x, then formatted it into real dollar values. Now who is to say that such a chance mechanism was not used to fill the envelopes?

    If a probabilistic approach is to reason from a known population then the known population is x or 2x. Where x came from and how it was chosen is something we don't know.
  • Jeremiah
    1.5k
    Also. . . ,

    Since I have already displayed in this thread that x can come from any number of continuous distributions, this means we have no clue how and from where x was selected, and if we don't know then we should apply the principle of indifference, right? Of course, there are infinitely many possible distributions, and one over infinity is 0.
  • JeffJo
    130
    I apologize to this forum for allowing myself to be taken off topic by a troll.
    +++++

    The difficulty with the field of probability, is that there can be different ways to correctly address the same problem. Because there is no single sample space that describes a problem. Example:
    • Rolling two six-sided dice can use a set of 11 outcomes for the sum, a set of 21 unordered pairs of values, or a set of 36 ordered pairs.
    • Any of those can be used, but the last one allows you to easily apply the Principle of Indifference to get reasonable probabilities. This is because the PoI requires that we know the causes of the outcomes are all equivalent.

    That example doesn't mean there can't be vastly different solution methods that both get the same answer. There can. You can use a different method than I do, and get the same correct answer.

    The issue comes when two methods get different answers. If Jack says "I use method A and get answer X," while Jill says "I use method B and get answer Y," all we know for sure is that at least one is wrong. Bickering about why A is right does nothing to prove that it is, or that B is wrong.

    To resolve the issue, Jill would need to do two things: find a flaw in A, and identify how B does not make the same mistake. The Two Envelope Problem is trivial once you understand, and apply, these points.

    What is wrong with the 5/4 expectation: Any term in an expectation calculation has to be multiplied by a probability for the entirety of the event it represents.
    • A term that represents your envelope containing a value v, and the other containing v/2, must be multiplied by probability that represents your envelope containing v, and the other containing v/2.
    • A term that represents your envelope containing a value v, and the other containing 2v, must be multiplied by probability that represents your envelope containing v, and the other containing 2v.
    • If you are considering v to be a fixed value, even if that value is unknown, then those two possibilities are different outcomes, and may have different probabilities.
    • Even though the chance of picking the lower or the higher is 50%, once you include a fixed value in the entirety of the event, it can change.
    • Specifically, the unconditional probability that v is in your envelope and v/2 is in the other, is Pr(v/2,v)/2, where Pr(x,y) is the probability that the pair of envelopes were filled with x and y.
    • Similarly, the unconditional probability that v is in your envelope and 2v is in the other, is Pr(v,2v)/2.
    • To make these conditional probabilities, you divide each by their sum.
    • This gives Pr(v/2,v)/[Pr(v/2,v)+Pr(v,2v)] and Pr(v,2v)/[Pr(v/2,v)+Pr(v,2v)], respectively.
    • Check: they add up to 1 and, if Pr(v/2,v)=Pr(v,2v), each is 50%.
    • The correct expectation by this method is v*[Pr(v/2,v)/2+2*Pr(v,2v)]/[Pr(v/2,v)+Pr(v,2v)].
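Spelled out in code, the bullet points above come to this; a sketch (the function and parameter names are mine, and the priors are whatever you believe them to be):

```python
def expected_other(v, pr_lower_pair, pr_higher_pair):
    """Expected value of the unpicked envelope, given yours contains v.

    pr_lower_pair  = Pr(v/2, v): prior probability the pair is (v/2, v)
    pr_higher_pair = Pr(v, 2v):  prior probability the pair is (v, 2v)
    """
    s = pr_lower_pair + pr_higher_pair        # normalizer for conditioning
    return v * (pr_lower_pair / 2 + 2 * pr_higher_pair) / s

# With equal priors the familiar 5v/4 reappears:
print(expected_other(10, 0.5, 0.5))   # 12.5
# But unequal priors change the answer; e.g. with Pr(5,10) = 4/5 and
# Pr(10,20) = 1/5, as in Andrew M's distribution for an observed 10:
print(expected_other(10, 0.8, 0.2))   # 8.0, an expected loss of 2
```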

    This is a correct solution, *if* you know the value in your envelope (even a fixed unknown value v), *and* you know the two probabilities involved (which is much more difficult if v is unknown). For most conceivable distributions, there are even values of v where this correct solution produces a gain.

    But the sticky issue of what the probabilities are is a big one. We can't use the PoI because the requirements I mentioned above are not met. If the supply of money is finite, then there must be some values of v where there is an expected loss, and the expected gain over the entire distribution of v will turn out to be 0.

    The 5v/4 expectation applies this method, but ignores the required terms Pr(v/2,v) and Pr(v,2v). That is its mistake. It would be right if you know, or could assume, these terms are equal. In the OP, we can't assume anything about the distribution, rendering this method useless.

    What is right with my calculation: Say the total in the two envelopes is t. Then one contains t/3, and the other contains 2t/3.
    • The unconditional probability that your envelope contains t/3 is Pr(t/3,2t/3)/2. Notice that this is the exact same formula as before, with the modified values.
    • The unconditional probability that your envelope contains 2t/3 is Pr(t/3,2t/3)/2.
    • To make these conditional probabilities, you divide each by their sum.
    • Since they are the same, this gives 50% each.
    • The expectation is (t/3)/2 + (-t/3)/2 = 0.
    • Even though we wound up not needing Pr(t/3,2t/3), including it made this solution more robust.

    This method is robustly correct. Even though it uses a similar outline to the previous one, it applies to the general problem when you don't know t. Because the unknown probability divides out. And, it gives the intuitive result that switching shouldn't matter. Its only flaw, if you want to call it that, is that it does not apply if you know what is in your envelope - then you need to consider different t's.
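That robustness is easy to see in code; a sketch (names mine) in which the prior weight visibly divides out:

```python
def expected_switch_gain(t, pr_pair):
    """JeffJo's second method: condition on the total t in the envelopes.

    pr_pair = Pr(t/3, 2t/3), the prior probability of this pair.
    It appears in both the numerator and the denominator and cancels."""
    p = (pr_pair / 2) / (pr_pair / 2 + pr_pair / 2)   # = 1/2 for any pr_pair > 0
    return p * (t / 3) + (1 - p) * (-t / 3)           # gain if lower, loss if higher

# Zero for every total and every positive prior weight:
print(expected_switch_gain(30, 0.01), expected_switch_gain(30, 0.7))  # 0.0 0.0
```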
  • Jeremiah
    1.5k
    Calling me a troll is a personal attack.
  • Baden
    16.3k


    You've called yourself a troll several times (posts which I've had to waste time deleting and furthermore trolling is against the rules). Stop this now.
  • JeffJo
    130
    There is an interesting distribution proposed at https://en.wikipedia.org/wiki/Two_envelopes_problem#Second_mathematical_variant . Note that, like all distributions discussed so far in this thread, it is a discrete distribution and not a continuous one. Continuous distributions tend to be messy, and not very realistic.

    The envelopes are filled with ($1,$2) with probability 1/3, ($2,$4) with probability 2/9, ($4,$8) with probability 4/27, etc. Unless your envelope has $1, in which case your gain is $1, the expected value of the other envelope is always 10% more than yours. But before you get too excited:

    • You can't apply my first method above to switching back.
    • Even though the expected value is at least (remember the $1 envelope?) 110% of yours, method #2 above is still correct. If you don't consider the value v in yours, the expected value of the two envelopes is the same.
    • It is left as an exercise for the reader to determine how the expected value of the other is 110% your value, but the two expected values are the same.
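The exercise can be sketched in exact rational arithmetic; this follows the Wikipedia variant's prior, not any posted solution, and the names are mine:

```python
from fractions import Fraction

def conditional_expectation(k):
    """Expected value of the other envelope, given yours contains 2**k,
    under the prior Pr(2**n, 2**(n+1)) = (1/3)*(2/3)**n for n = 0, 1, 2, ...
    """
    if k == 0:                  # $1 can only be the smaller of the (1, 2) pair
        return Fraction(2)
    # Relative weights of the two pairs consistent with observing 2**k
    # (the common factors 1/3 and the 1/2 per envelope cancel):
    w_lower = Fraction(2, 3) ** (k - 1)   # you hold the larger of pair k-1
    w_upper = Fraction(2, 3) ** k         # you hold the smaller of pair k
    v = Fraction(2) ** k
    return (w_lower * v / 2 + w_upper * 2 * v) / (w_lower + w_upper)

# For every k >= 1 the other envelope is worth 11/10 of yours:
print(conditional_expectation(3) / 2**3)   # 11/10
# Yet unconditionally both envelopes have the same (in fact infinite)
# expectation, which is how this distribution evades method #2.
```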
  • Srap Tasmaner
    4.9k

    My current, and I think "final", position is that this isn't really a probability puzzle at all. Here are my arguments for my view and against yours.

    1. The only probability anyone has ever managed to assign any value to is the probability of choosing the larger (or the smaller) envelope -- and even that is only the simplest noninformative prior.
    2. All other probabilities used in solutions such as yours are introduced only to immediately say that we do not and cannot know what their values are.
    3. The same is true for the sample space for X. Many have been used in examples and solutions but always with the caveat that we do not and cannot know what the sample space for X is.
    4. Much less the PDF on that space.
    5. By the time the player chooses, a value for X has been determined. Whatever happened before that is unknown, unknowable, and therefore irrelevant. As far the player is concerned, the PDF that matters assigns a value of 1 to some unknown value of X and 0 to everything else.
    6. We might also describe that as the host's choice of a value for X. Whatever the host had to choose from (for instance in the real-world example of cash found in my wallet), and whatever issues of probability and gamesmanship were involved, the host makes a choice before the player makes a choice. (In your multiple-and-duplicate envelopes analysis, which I found very helpful, you allow the player to choose the envelope pair and then choose an envelope from that pair. We needn't.)
    7. That choice is the very first step of the game and yet it appears nowhere in the probabilistic solutions, which in effect treat X as a function of the player's choices and what the player observes.
    8. The exact opposite is the case. The player makes choices but the consequences of those choices, what she will observe, is determined by the choice beforehand of the value of X.
    9. The values of the envelopes any player will see are fixed and unknown. We have only chosen to model them as variables as a way to express our uncertainty.
    10. The probabilistic model can safely be abandoned once it's determined that there will never be any evidence upon which to base a prior much less update one.

    Here's my question for you: what is the advantage of saying that the variable X takes a value from an unknown and unknowable sample space, with an unknown and unknowable PDF, rather than saying X is not a variable but simply an unknown?

    In the now canonical example of @Michael's £10, he could say either:

    (a) the other envelope must contain £20 or £5, but he doesn't know which; or
    (b) there's a "50:50 chance" the other envelope contains £20 or £5, and thus the other envelope is worth £12.50.

    I say (a) is true and (b) is false. The other envelope is worth £20 or £5, and he will gain or lose by switching, but we have no reason to think there is anything probabilistic about it, no reason to think that over many rounds Michael would see £20 about half the time and £5 about half the time, or even £20 some other fraction of the time and £5 the rest. What compels us to say that it is probabilistic but Michael assumes a probability he oughtn't, if we're only going to say that the actual probability is unknown and unknowable? Why not just say (a)?
  • Srap Tasmaner
    4.9k
    Here's my decision tree again, fleshed out in terms of @Michael's £10.
    envelope_tree_d.png
    The value of k is either 5 or 10, and you have no way of knowing which.

    k = 5

    If you observe 10, you are in the blue branch: you have picked the higher amount and only stand to lose k by switching with no chance of gaining. You could not have observed 10 in the red branch. At the end, the choice you face between sticking (10) and switching (5) is the exact same choice you faced at the beginning.

    For you, the blue branch is true and the red branch, in which you have the lower amount, and face a choice between sticking (10) and switching (20), is false.

    k = 10

    If you observe 10, you are in the red branch: you have picked the lower amount and only stand to gain k by switching with no chance of losing. You could not have observed 10 in the blue branch. At the end, the choice you face between sticking (10) and switching (20) is the exact same choice you faced at the beginning.

    For you, the red branch is true and the blue branch, in which you have the higher amount, and face a choice between sticking (10) and switching (5), is false.


    There are always only two envelopes with fixed values in play. Any choice you make is always between those same two envelopes with those same two values.


    ((The case descriptions sound like the lamest D&D campaign ever.))
  • JeffJo
    130
    My current, and I think "final", position is that this isn't really a probability puzzle at all. Here are my arguments for my view and against yours.Srap Tasmaner
    The puzzling part is about our understanding of the mathematics, not how we use it to solve the problem. But that still makes it a probability problem. People who know only a little have difficulty understanding why the simple 5v/4 answer isn't right, and people who know more tend to over-think it, trying to derive more information from it than is there.

    1. The only probability anyone has ever managed to assign any value to is the probability of choosing the larger (or the smaller) envelope -- and even that is only the simplest noninformative prior.
    That's because the higher/lower question is the only one we can assign a probability to. There is only one kind of probability that you can place on values. That's a "valid" one, meaning there is a set of possibilities, and their probabilities sum to 1. Any other kind - frequentist, bayesian, subjective, objective, informative, non-informative, or any other adjective you can think of - is outside the scope of the problem.

    2. All other probabilities used in solutions such as yours are introduced only to immediately say that we do not and cannot know what their values are.
    Correct.

    4. Much less the PDF on that space.
    Careful. "PDF" usually refers to a "Probability Density Function," which means the sample space is continuous. We have a distribution for a discrete sample space.

    The only thing we can say about it (or the sample space) is that it still must be valid. A valid sample space has a maximum value. A valid distribution implies there are values in the sample space where there is an expected loss.
    5. By the time the player chooses, a value for X has been determined.
    This is a red herring. It only has meaning if we know the distribution, and we don't. So it has no meaning.

    6. We might also describe that as the host's choice of a value for X.
    I assume you mean the amounts the benefactor puts in the envelopes (this isn't presented as a game show). That's why I usually speak generically about values. That can apply to the minimum value, which is usually what x refers to in this thread, the difference d which turns out to be the same thing as x but can be more intuitive, the value v in your envelope which can be x or 2x, and the total t (so x=t/3).

    The point is that you do need to recognize how they act differently. Assuming you are "given" x means that there are two v's possible, and assuming that you are "given" v means there are two x's.

    7. That choice is the very first step of the game and yet it appears nowhere in the probabilistic solutions, which in effect treat X as a function of the player's choices and what the player observes.
    Then I'm not sure what you mean - it appears in some of mine. If you are given v, and so have two x's, you have to consider the relative probabilities of those two x's.

    10. The probabilistic model can safely be abandoned once it's determined that there will never be any evidence upon which to base a prior much less update one.
    Please, get "updating" out of your mind here.

    what is the advantage of saying that the variable X takes a value from an unknown and unknowable sample space, with an unknown and unknowable PDF, rather than saying X is not a variable but simply an unknown?
    The point is that I'm saying both. You need to understand the various kinds of "variables."

    • In probability (but not necessarily statistics - this one place where terminology can vary) an experiment is a procedure where the result is not predictable. Not an instance where you perform it. (In statistics, it can refer to repeating the procedure multiple times.)
    • A random variable is an abstract concept only, for a measure you can apply to, and get a value from, every possible instance of the procedure. I represent it with an upper case letter like X.
    • "Random Variable" and "X" do not technically refer to any actual result, although this definition gets blurred in practice. They represent potential only.
    • So a random variable never strictly "has" a specific value. For a given random experiment, the possibilities are listed in a set called its range. So the range for X in our game could be something like {$5,$10,$20}.
    • An unknown is a placeholder for a specific value of an instance of the procedure. I use lower case letters, like x.
    • When we say X=x, what we mean is the event where the measure represented by X has value x.
    • Since X never really has a value, we can use this expression only as the argument for a probability function. We sometimes use the shorthand Pr(x) instead of Pr(X=x), since with the upper/lower case convention it is implied that the unknown x is a value taken from the range of X.

    In the now canonical example of Michael's £10, he could say either:

    (a) the other envelope must contain £20 or £5, but he doesn't know which; or
    (b) there's a "50:50 chance" the other envelope contains £20 or £5, and thus the other envelope is worth £12.50.

    I say (a) is true and (b) is false.

    (A) is true, and (B) cannot be determined as true or false without more information. We can say that there is an unknown probability 0<=q<=1 where the expectation is E=($5)*q + ($20)*(1-q) = $20-$15*q. Or in general, E(v,q(v)) = 2v-3*v*q(v)/2. (Note that q(v) means a function.)

    This is not worthless information, because we can make some deductions about how q varies over the range of V. Specifically, we can say that there must be some values where E(v,q(v)) is less than v, others where it must be greater than v, and that the sum of (E(v,q(v)) - v)*Pr(v) is zero.

    What compels us to say that it is probabilistic ...
    The fact that you use an expectation formula.
  • Pierre-Normand
    2.4k
    Here's my decision tree again (...)Srap Tasmaner

    Yours isn't really a decision tree that the player must make use of, since there is no decision for the player to make at the first node. Imagine a game where there are two dice, one of which is biased towards six (and hence against one) while the other is equally biased towards one (and hence against six). Neither die is biased against any of the other possible results, 2, 3, 4 and 5, which therefore still have a 1/6 chance of occurring. Suppose the game involves two steps. In the first step, the player is dealt one of the two dice randomly. In the second step the player rolls this die and is awarded the result in dollars. What is the player's expected reward? It is $3.50, of course, and upon playing the game increasingly many times, the player can expect the exact same uniform random distribution of rewards ($1,$2,$3,$4,$5,$6) as she would expect from repeatedly throwing one single unbiased die. Indeed, so long as the two dice look the same, and thus can't be reidentified from one iteration of the game to the next, she would have no way to know that the dice aren't unbiased. There just isn't any point in distinguishing two steps of the "decision" procedure, since the first "step" isn't acted upon, yields no information to the player, and can thus be included in a black box, as it were. Either this game, played with two dice biased in opposite directions, or the same game played with one single unbiased die, can be simulated with the very same pseudo-random number generator. Those two games really are just two different implementations of the very same game, and both call for the exact same strategies for achieving a given goal.
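The dice example is easy to simulate; a sketch (Python, with a bias of 0.1 chosen arbitrarily):

```python
import random

def mixed_biased_dice(trials=300_000, bias=0.1, seed=3):
    """One die favours six at one's expense; the other is the mirror image.
    Being dealt a die at random and rolling it is indistinguishable
    (in distribution) from rolling one fair die."""
    rng = random.Random(seed)
    six_heavy = [1/6 - bias, 1/6, 1/6, 1/6, 1/6, 1/6 + bias]
    one_heavy = list(reversed(six_heavy))
    total = 0
    for _ in range(trials):
        die = rng.choice([six_heavy, one_heavy])           # step 1: dealt a die
        total += rng.choices(range(1, 7), weights=die)[0]  # step 2: roll it
    return total / trials

print(round(mixed_biased_dice(), 2))   # close to 3.5, same as a fair die
```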

    The two envelopes paradox is similar. The player never has the opportunity to choose which branch to take at the first node. So, the player must treat this bifurcation as occurring within a black box, as it were, and assign each branch some probability. But, unlike my example with two oppositely biased dice, those probabilities are unknown. @JeffJo indeed treats them as unknown, but he demonstrates that, whatever they are, over the whole range of possible dealings of two envelopes that may occur at the first step of the game, they simply divide out in the calculation of the expected gain of the switching strategy, which is zero for all possible (bounded, or at least convergent) initial distributions. Where @JeffJo's approach seems to me to be superior to yours is that it doesn't yield an incorrect verdict for the specific cases where the prior distribution is such as to yield envelope pairs where, conditionally on being dealt either the smaller or the larger amount from this pair, the expected gain from switching isn't zero. Your own approach, it seems to me, yields an incorrect result in that case.
  • Srap Tasmaner
    4.9k
    If you are given v, and so have two x's, you have to consider the relative probabilities of those two x's.JeffJo

    Except that you cannot, and you know that you cannot.

    Suppose the sample space for X is simply {5}, one sole value. All the probabilities of assignments of values to X must add up to 1, so the assignment of the value 5 gets 1. Suppose the sample space for X is {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} and the probability of each assignment is {0, 0, 0, 0, 1, 0, 0, 0, 0, 0}. What is the difference? Could I ever tell which of these was the case and would it matter to me if I could?

    I appreciate the rest of your comments, and may address some of them.
  • Janus
    16.2k
    It's truly remarkable that a question which is of no philosophical significance or interest could generate so many responses on a philosophy forum!
  • Srap Tasmaner
    4.9k
    Where JeffJo's approach seems to me to be superior to yours is that it doesn't yield an incorrect verdict for the specific cases where the prior distribution is such as to yield envelope pairs where, conditionally on being dealt either the smaller or the larger amount from this pair, the expected gain from switching isn't zero. Your own approach seems to yield an incorrect result, in that case, it seems to me.Pierre-Normand

    Sorry, I'm not following this. This sounds like you think I said your expected gain when you have the smaller envelope is zero, which is insane.

    Yours isn't really a decision tree that the player must make use of since there is no decision for the player to make at the first node.Pierre-Normand

    Well now that's an interesting thing.

    Is it a decision? You may not immediately know the consequences of your decision, and you may have no rational basis for choosing one way or the other, but which way you decide will have consequences, and you will have the opportunity to know what they are.

    I've always thought the odd thing about the Always Switch argument is precisely that the game could well begin "You are handed an envelope ..." because the analysis takes hold whether their decision puts them in possession of the smaller or the larger envelope. That strikes me as fallacious. Your primary interest should be in picking the larger envelope, and having picked, figuring out whether you have the larger envelope. In my little real world example, it turns out gamesmanship played a far larger role than any probability.
  • Srap Tasmaner
    4.9k
    no philosophical significance or interestJanus

    Smile when you say that.
  • Srap Tasmaner
    4.9k

    https://www.urbandictionary.com/define.php?term=smile%20when%20you%20say%20that

    ((Evidently nearly coined by Owen Wister, author of The Virginian, the basis for one of my favorite TV shows when I was a kid.))
  • Srap Tasmaner
    4.9k
    Once again, @Jeremiah, @JeffJo, @Pierre-Normand, and @andrewk, I'm terribly grateful for the patience you've shown me as I try to learn something about probability.