• Srap Tasmaner
    4.9k
    If we skipped all the preliminaries and just offered for sale, at the price of £10, envelopes advertised as containing "either £5 or £20" -- well, I'm guessing that would be illegal in lots of places. There might be no envelope on offer containing £20, which would make this a simple scam.

    If you add a guarantee that there's at least one envelope worth more than the price of the envelope, you're on your way to reinventing either the lottery or the raffle.
  • Srap Tasmaner
    4.9k
    Bingo. -- JeffJo

    Saw what you did there.
  • Jeremiah
    1.5k
    The OP says x and 2x. If the player wants to think that means 5 or 20, then that is not the fault of the game master.
  • Jeremiah
    1.5k
    Don't confuse subjective expectations with objective reality.
  • Srap Tasmaner
    4.9k
    If the player wants to think that means 5 or 20, then that is not the fault of the game master. -- Jeremiah

    Of course. Someone about to buy such an envelope on the street ought to hope a friendly and helpful philosopher would be walking by to point out that

    p ⊻ q ↔ P(p) + P(q) = 1

    but that you cannot further infer any of these:

    • P(p) = P(q)
    • P(p) > 0
    • P(q) > 0

    edit: meant "exclusive or"
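
    A minimal sketch of that last point, with an invented seller whose envelopes are 99% £5 -- the numbers are arbitrary stand-ins, but the exclusive-or holds in every sale while P(p) and P(q) come out wildly unequal:

    ```python
    from random import random

    # Invented seller: advertises "either £5 or £20", but only 1% of
    # envelopes actually contain £20.  Exactly one of p ("it's £5") and
    # q ("it's £20") holds each time, so P(p) + P(q) = 1 -- but the two
    # probabilities are nowhere near equal.
    N = 100_000
    p_count = q_count = 0
    for _ in range(N):
        amount = 20 if random() < 0.01 else 5
        if amount == 5:
            p_count += 1
        else:
            q_count += 1

    print(f"P(p) ~ {p_count / N:.3f}")          # ~0.990
    print(f"P(q) ~ {q_count / N:.3f}")          # ~0.010
    print(f"sum  ~ {(p_count + q_count) / N}")  # exactly 1.0
    ```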
  • Jeremiah
    1.5k
    My equality is an objective truth, and the math built on that equality is correct. So perhaps the problem is elsewhere.
  • Jeremiah
    1.5k
    It is not the job of the objective to conform to your subjective expectations.
  • Jeremiah
    1.5k
    It is the coin flip. There is a probability associated with the objective process and a probability associated with your chance of correctly guessing the outcome. If you did it right, these two should match, but you can't just sweep aside the objective because it does not match your subjective expectations.

    *On my phone sorry for any typos.
  • Srap Tasmaner
    4.9k
    I still look at the problem this way:

    First you're presented with two sealed envelopes; you can't tell which is bigger.

    You open one and observe its value; you still can't tell which is bigger.

    If you had an estimate for how much money would be in play, observing a given value may or may not lead you to revise that estimate. Depends.

    The conditional probabilities also change when you know the value of one of the envelopes, but you don't know what they are anyway.

    The value you observe is no help making the one decision you have to make, so it's reasonable to treat the second choice you face as a repeat of the first, meaning it's just a coin flip: you're either going to gain or lose, relative to the first choice, but you don't have any idea which, not even probabilistically, and the amount you might gain is the same as the amount you might lose.

    If it would be your own money at stake here, you shouldn't be playing at all.
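
    To check the coin-flip intuition numerically -- the prior on x below is an arbitrary stand-in, since the whole point is that you don't know it -- always-keeping and always-switching come out the same:

    ```python
    import random

    # Blind switching vs blind keeping: with no usable information from
    # the observed value, the two policies have the same expected payoff.
    random.seed(1)
    N = 200_000
    keep_total = switch_total = 0.0
    for _ in range(N):
        x = random.uniform(1, 100)     # assumed prior, illustrative only
        envelopes = [x, 2 * x]
        pick = random.randrange(2)     # your blind first choice
        keep_total += envelopes[pick]
        switch_total += envelopes[1 - pick]

    print(f"average if you keep:   {keep_total / N:.2f}")
    print(f"average if you switch: {switch_total / N:.2f}")  # same, up to noise
    ```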
  • Srap Tasmaner
    4.9k
    The conditional probabilities also change when you know the value of one of the envelopes, but you don't know what they are anyway. -- Srap Tasmaner

    Clarifying a little. Suppose X=5. This is the sort of thing you don't know, but since your pick was random, you can still be confident that P(Y=X | X=?) = P(Y=2X | X=?).

    P(Y=X | X=5, Y=10) = 0, but for you that's P(Y=X | X=?, Y=10) = ?? -- it's either 0 or 1, and you don't know which. Expanding that into a pair of terms weighted by P(X=...) just moves your ignorance around, though it's better formally.

    You can recognize that conditioning on the value of your pick changes the probabilities, even though knowing exactly how they change amounts to knowing the value of X. And you don't.
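
    A small Bayes computation makes that dependence on X explicit; the prior below is invented purely for illustration:

    ```python
    # P(Y=X | Y=10) under an invented prior on X.  Observing Y=10 can
    # happen two ways: X=10 and you hold the smaller envelope (Y=X), or
    # X=5 and you hold the larger (Y=2X); each pick has probability 1/2.
    prior = {5: 0.5, 10: 0.3, 20: 0.2}   # assumed prior, illustrative only

    p_smaller = 0.5 * prior[10]   # Y=10 and you hold X itself
    p_larger  = 0.5 * prior[5]    # Y=10 and you hold 2X

    print(p_smaller / (p_smaller + p_larger))  # 0.375 here, not 0.5
    ```

    Knowing how the probabilities change after conditioning on Y=10 requires exactly this kind of knowledge of the prior -- which is what you don't have.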
  • Jeremiah
    1.5k
    If people just treated x as undefined and ignored Y, these conflicts would not exist and the model would be objectively congruent. People keep working on the assumption of one limit, but objectively there are actually three sets of bounds.
  • Pierre-Normand
    2.4k
    If it would be your own money at stake here, you shouldn't be playing at all. -- Srap Tasmaner

    One way to adjust the game so that your own money is at stake would be to merely write down the two amounts in the envelopes. The cost of playing is the value v written in your envelope. If you choose to play, and switch, you must pay this amount v upfront, and the game master must give you back the amount written in the second envelope. On the assumption that you gain no knowledge at all (not even probabilistic knowledge) about the probability that your cost is smaller than the potential reward, the paradox ensues: if we make no assumption about the prior distribution being either bounded, or unbounded and uniform, then the argument that the expected value of switching is v and the argument that it is 1.25v seem equally valid.
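
    One way to see where the 1.25v argument breaks: fix any concrete, bounded prior (which is precisely the assumption the paradox withholds) and simulate the pay-to-switch variant; the net value of switching comes out to zero. The prior below is an arbitrary choice:

    ```python
    import random

    # Pay-to-switch variant: you pay the value v written in your envelope
    # and receive the amount written in the other.  Under any actual
    # (proper) prior on x, switching is worth 0 on average, not 0.25v.
    random.seed(2)
    N = 200_000
    net = 0.0
    for _ in range(N):
        x = random.choice([5, 10, 20, 40])   # assumed prior, illustrative
        envelopes = [x, 2 * x]
        pick = random.randrange(2)
        v = envelopes[pick]                  # your cost, paid upfront
        net += envelopes[1 - pick] - v       # reward minus cost

    print(f"average net from switching: {net / N:.3f}")  # ~0
    ```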
  • Pierre-Normand
    2.4k
    What I'm saying is that there is no real-world component present in the OP. You can use real-world examples to illustrate some properties, but that is all you can do. Illustrate. The OP itself is purely theoretical. -- JeffJo

    That's what I'm saying too.
  • Jeremiah
    1.5k
    Theoretical or not, you still shouldn't approach it without consideration of objective processes.
  • Srap Tasmaner
    4.9k
    I keep thinking about how the two rounds of the game compare.

    The general problem would be something like this: can you improve your performance even in situations where you are unable to evaluate your past performance?

    I think the answer to this turns out to be yes, and I would consider that a result of some importance.

    ***

    There is more information in the second round -- or is there? It's unclear.
  • Pierre-Normand
    2.4k
    The general problem would be something like this: can you improve your performance even in situations where you are unable to evaluate your past performance? -- Srap Tasmaner

    I don't quite understand what you mean. What are you referring to as one's "past performance"? Is that the amount of money in one's envelope before one has been offered the opportunity to switch?
  • Srap Tasmaner
    4.9k

    Right, that was the idea. Whether it might be possible to make a better second (or later) choice without knowing how good the previous choice (or choices) was.

    I accidentally addressed this before, I think, when I noted that any arbitrary cutoff that helps you by falling in the [x, 2x] interval works as a criterion of success (for having chosen 2x).

    I'm going to mull it over some more. The natural answer is 'of course not!' but I'm not so sure. Any information you might use to improve future efforts is also information you could use to evaluate past ones... [That's an argument against the idea.]

    I'm just wondering if there's an alternative to the usual predict-test-revise cycle in cases where you can't get confirmation or disconfirmation, or at least don't get it straight off. In the iterated case, that might mean using a function or a cutoff value you revise as you go, without ever finding out how well your earlier picks went. Within a single game, finding out the value of one envelope is not enough to confirm your guess, but you might still use it to make your next guess better.
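
    As a sketch of that single-game cutoff idea -- with x fixed at an arbitrary 5, so the envelopes are 5 and 10 -- a cutoff only helps when it lands in the (x, 2x] interval:

    ```python
    import random

    # Threshold strategy: switch only when the observed value is below a
    # fixed cutoff c.  With envelopes 5 and 10, a cutoff inside (5, 10]
    # always leaves you holding the larger envelope; a cutoff outside it
    # does nothing relative to blind play.
    random.seed(3)

    def average_winnings(cutoff, x=5, trials=100_000):
        total = 0.0
        for _ in range(trials):
            envelopes = [x, 2 * x]
            pick = random.randrange(2)
            if envelopes[pick] < cutoff:   # switch on small-looking values
                pick = 1 - pick
            total += envelopes[pick]
        return total / trials

    for c in (3, 7, 50):
        print(f"cutoff {c}: {average_winnings(c):.2f}")
    # cutoff 3: ~7.50 (never switch), cutoff 50: ~7.50 (always switch),
    # cutoff 7: 10.00 (you always end up with the larger envelope)
    ```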

    Is this any clearer? Apo and I have talked about the desirability of getting binary answers. I'm just thinking about how you might proceed without them. I expect such issues have been thoroughly chewed over already by people who actually do science!

    [ small clarification ]
  • Srap Tasmaner
    4.9k

    I suppose I'll have to have another look at the McDonnell & Abbott paper, because I think that's kind of what I'm talking about.

    To put it relatively starkly: are there cases in which you can know your performance is improving not because you can test that directly but because you know the procedure you're following to improve is theoretically sound? Imagine an iterated version of our game in which your winnings each round go into an account you don't have access to. You do want to increase your winnings, but you'll only know how much they've grown when you stick. In the meantime, you gather data from each round. You could say that it's just a different perspective on what the experiment is, and maybe that's all there is to it.

    I thought of one analogy: suppose you're firing artillery at an enemy position but you have no spotters. (Sometime in the past, no aerial surveillance or anything.) You could use your knowledge of how the enemy troops are usually disposed on this type of terrain, and also how they're likely to respond to bombardment. You could, for instance, attempt to fire at the near side of their line first -- using only physics and an estimate of how far away they are -- and then fire slightly farther, if you expect them to retreat a bit to get out of range. If you had some such general knowledge, you might be able to systematically increase the effectiveness of your strikes without ever seeing where they hit or how effective they were.

    It strikes me as a deep problem because so much of what we do is guided by faith in a process. Many of those processes have been tested and honed over time, but induction is still a leap of faith. Some things we do get to test directly, but some of the very big things we don't. I'm thinking not just of science here, but also of our moral, social, and political decisions. I'd like to think certain acts of kindness or courage, small though they may be, might improve a society over time, but I don't expect to know that by experiment. Anyhow, that's part of what's been in the back of my mind here.
  • andrewk
    2.1k
    I read the McDonnell and Abbott paper and was impressed by Cover's switching strategy, which is better than my proposed strategy of picking a threshold and then switching when the observed amount is below it (my strategy is covered as case (iii) on p. 3316).

    Cover's very clever (IMHO) strategy is to randomise the decision to switch, as a Bernoulli RV whose probability p is a monotonically decreasing function of the observed amount Y. Whereas the threshold strategy will deliver positive expected gains for some distributions of X and zero expected gains otherwise, the Cover strategy delivers a positive expected gain for every possible distribution of X.

    I do enjoy a clever idea, and that idea of making the decision to switch itself a random variable strikes me as very clever indeed. I especially like the fact that it delivers expected gains even with no knowledge whatsoever about the possible distribution of X, and even if X is a fixed amount (a 'point distribution').

    It sounds like the initial idea was from TM Cover at Stanford, in this 1996 paper, but I've only read the later paper, by McDonnell and Abbott.
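
    For concreteness, here's a minimal simulation of the randomised-switching idea; the function p(y) = exp(-y/10) and the point value x = 5 are arbitrary choices of mine, not from either paper:

    ```python
    import math
    import random

    # Cover-style randomised switching: after observing y, switch with
    # probability p(y) for a strictly decreasing p.  Even when X is a
    # point distribution (x = 5, envelopes 5 and 10), the expected gain
    # over a blind 50/50 choice (worth 7.5) is strictly positive.
    random.seed(4)
    N = 500_000
    x = 5
    gain = 0.0
    for _ in range(N):
        envelopes = [x, 2 * x]
        pick = random.randrange(2)
        y = envelopes[pick]
        if random.random() < math.exp(-y / 10):   # randomised switch
            pick = 1 - pick
        gain += envelopes[pick] - 7.5

    print(f"expected gain over blind choice: {gain / N:.3f}")  # ~0.6 > 0
    ```

    The gain works out to (x/2)·(p(x) − p(2x)), which is positive whenever p is strictly decreasing -- that's the whole trick.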