Comments

  • Mathematical Conundrum or Not? Number Six
    Even in that more general case, the Bayesian approach can give a switching strategy with a positive expected net gain.andrewk

    No, it gives you a strategy that works on your assumed prior, not necessarily on reality. The point of a Bayesian analysis is to guess at a prior, and refine it from evidence in the real world.
  • Mathematical Conundrum or Not? Number Six
    As an aside, I think we're saying the same thing from different angles.fdrake

    I agree. I think it is most convincing to present multiple angles. But it isn't (just) the sample space that is the issue. It is the probability space which includes:

    • A sample space S which lists every indivisible outcome.
    • An event space F which lists all of the combinations of outcomes from S that are of interest (with some restrictions we needn't go into).
    • A probability distribution function Pr(*) that associates a probability with every member of F.

    It is Pr(*) and its relationship to F that is the source of confusion.

    +++++

    I see the situation as this:

    Assume there's £10 in my envelope and that one envelope contains twice as much as the other. The other envelope must contain either £5 or £20. Each is equally likely and so the expected value of the other envelope is £12.50. I can then take advantage of this fact by employing this switching strategy to achieve this .25 gain.
    Michael

    The error is "Each is equally likely," and it is wrong because we need to work with F, not S. My very first post gave an example. Suppose I fill 9 pairs of envelopes with ($5,$10) and one pair with ($10,$20). I choose a pair at random, and present it to you saying "one contains $X and the other $2X, [but you do not know which envelope is which or what the number X is]."

      Aside: Since I am way too familiar with various formulations of the problem, I haven't been looking at the original here. It does use "X" where I used "V", and almost everybody except Michael has been using X to mean something different. This is likely one cause of confusion. So please note that in the future, I will only use V for the value in your envelope. To avoid confusion, I will not use X at all anymore, but use D for the absolute difference between the two envelopes; it is happy coincidence that this is also the lower value of the pair.


    Do you really believe, if you see $10, that there is a 50% chance that this is the smaller of the two envelopes presented to you?

    No. There is a 10% chance that this is the smaller value, and a 90% chance it is the larger value. Because there was a 90% chance I chose the pair ($5,$10), and a 10% chance I chose the pair ($10,$20). The 50% chances you want to use don't even seem to play a role in this determination.

    The point is that, in the correct version of your calculation E=($V/2)*P1 + ($2V)*P2, the probability P1 is not the probability of picking the larger value. It is the probability of picking the larger value, given that your envelope contains $10. In my example, that is 90%. In the OP, you do not have information that will allow you to say what it is.

    Note that this doesn't mean that the expectation is that there is no change. In my example, it is ($5)*(90%)+($20)*(10%) = $6.50, and you shouldn't switch. In the OP, you still do not have information that will allow you to say what it is. But averaged over all possible values of V, there will be no expected gain.
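
    The 9-to-1 example can be checked by direct enumeration. A minimal sketch (the pair counts are from the example above, not the OP):

```python
from fractions import Fraction

# Setup from the example: 9 pairs of ($5,$10) and 1 pair of ($10,$20).
pairs = [(5, 10)] * 9 + [(10, 20)]

# Joint probability of (seen value, other value): a pair is chosen uniformly,
# then either envelope of that pair with probability 1/2.
joint = {}
for lo, hi in pairs:
    for seen, other in [(lo, hi), (hi, lo)]:
        key = (seen, other)
        joint[key] = joint.get(key, Fraction(0)) + Fraction(1, len(pairs) * 2)

# Condition on seeing $10 in your envelope.
total_10 = sum(p for (seen, _), p in joint.items() if seen == 10)
p_smaller = joint[(10, 20)] / total_10  # chance $10 is the smaller value
exp_other = sum(other * p for (seen, other), p in joint.items() if seen == 10) / total_10

print(p_smaller, exp_other)  # 1/10 13/2, i.e. 10% and $6.50
```

The exact fractions reproduce the 10%/90% split and the $6.50 expectation stated above.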
  • Mathematical Conundrum or Not? Number Six
    I will not further debate such specifics.Jeremiah
    Of course you won't. You don't like facts that disagree with your beliefs.

    You have made various claims in this thread; I've told you which of them are right, and which are wrong. All you can say is "I believe my solution is correct," without saying which it is that you think is correct. And I'm done trying to educate you.
  • Mathematical Conundrum or Not? Number Six

    You're doing it again. Is X the value of my envelope or the value of the smallest envelope? You're switching between both and it doesn't make any sense, especially where you say "if you have the lower value envelope X/2".
    I hope I'm not talking down, that isn't my point. But probability really needs great care with terminology, and it has been noticeably lacking in this thread. This question is a result of that lack.

    Let's say that the possible values for envelopes are $5, $10, $20, and $40. This is sometimes called the range of a random variable that I will call V, for the value of your envelope.

    A conventional notation for a random variable is that we use the upper case V only to mean the entire set {$5,$10,$20,$40}. If we want to represent just one value, we use the lower case v or an expression like V=$10, or V=v if we want to treat it as unknown.

    Each random variable has a probability distribution that goes with it. The problem is, we don't know it for V.

    The conditions in the OP mean that there are three possibilities for the pair of envelopes: ($5,$10), ($10,$20), and ($20,$40). Various people have used the random variable X to mean the lower value of the pair; I think you are the only one who has used it to mean what I just called V. So X means the set {$5, $10, $20}. Notice how "$40" is not there - it can't be the low value.

    The most obvious random variable is what I have called R (this may be backwards from what I did before). So R=2 means that the other envelope has twice what yours does. Its range is {1/2,2}. It is the most obvious, since the probability for each value is clearly 50%.

    From this, we can derive the random variable W, which I use to mean the other envelope. (Others may have confusingly called it Y, or used that for the higher value; they were unclear). Its specific value is w=v*r, so its derived range is {$2.50,$5,$10,$20,$40,$80}. But we know that two of those are impossible. So what we do, is use a probability distribution where Pr(V=$5 & R=1/2)=0, and Pr(V=$40 & R=2)=0. Essentially, we include the impossible values that may come up in calculations in the range, and make them impossible in the probability distribution.

    Now, if we don't see a value in an envelope, we know that v-w must be either +x, or -x, with a 50% chance for either. So switching can't help. The point to note is that we don't know what value we might end up with; it could be anything in the full range of V.

    Your expectation calculation uses every value of W that is in its range, including the impossible ones. It has to, and that is why they are included in the range. Because of that, you can't use the probabilities for R alone - you need to do something to make $2.50 and $80 impossible.

    So you use conditional probabilities for R, not the unconditional ones:

    Exp(W|V=v) = (v/2)*Pr(R=1/2|V=v) + (2v)*Pr(R=2|V=v)
    = [(v/2)*Pr(R=1/2 & V=v) + (2v)*Pr(R=2 & V=v)] / [Pr(R=1/2 & V=v) + Pr(R=2 & V=v)]

    We can't separate R and V in those probability expressions, because they are not independent. But we can if we transform V into X:

    = [(v/2)*Pr(R=1/2 & X=v/2) + (2v)*Pr(R=2 & X=v)] / [Pr(R=1/2 & X=v/2) + Pr(R=2 & X=v)]
    = [(v/2)*Pr(R=1/2)*Pr(X=v/2) + (2v)*Pr(R=2)*Pr(X=v)] / [Pr(R=1/2)*Pr(X=v/2) + Pr(R=2)*Pr(X=v)]

    And since Pr(R=1/2)=Pr(R=2)=1/2, they divide out:

    = [(v/2)*Pr(X=v/2) + (2v)*Pr(X=v)] / [Pr(X=v/2) + Pr(X=v)]

    Unfortunately, we don't (and can't) know the probabilities that remain. For some values of v, it may be that you gain by switching; but then for some others, you must lose. The average over all possible values of v is no gain or loss.

    What you did was assume Pr(X=v/2) = Pr(X=v) for every value of v. That can never be true; it fails at the endpoints of any bounded range, and an unbounded range would require equal probabilities for infinitely many values.
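
    To illustrate, here is a sketch that evaluates Exp(W|V=v) over the four-value range above, under an assumed prior on X (the prior is an invented assumption; the OP supplies none), checked against brute-force enumeration:

```python
from fractions import Fraction

# Invented prior on the low value X of the pair; the OP gives no such prior.
pX = {5: Fraction(5, 10), 10: Fraction(3, 10), 20: Fraction(2, 10)}

def exp_other(v):
    """Exp(W|V=v) = [(v/2)*Pr(X=v/2) + (2v)*Pr(X=v)] / [Pr(X=v/2) + Pr(X=v)]."""
    p_hi = pX.get(v // 2, Fraction(0))  # you hold the high value, so X = v/2
    p_lo = pX.get(v, Fraction(0))       # you hold the low value,  so X = v
    return (Fraction(v, 2) * p_hi + 2 * v * p_lo) / (p_hi + p_lo)

def exp_other_brute(v):
    # Enumerate (low value, which envelope you picked) and condition on V=v.
    num = den = Fraction(0)
    for x, p in pX.items():
        for mine, other in [(x, 2 * x), (2 * x, x)]:  # 50% chance of each
            if mine == v:
                num += p / 2 * other
                den += p / 2
    return num / den

for v in [5, 10, 20, 40]:
    assert exp_other(v) == exp_other_brute(v)
print(exp_other(10))  # 85/8 = $10.625 here, not the naive 5*10/4 = $12.50
```

With this made-up prior, seeing $10 gives a small expected gain from switching, while seeing $40 guarantees a $20 loss; only the naive uniform assumption gives 5v/4 everywhere.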
  • Mathematical Conundrum or Not? Number Six

    I specifically said that my simulation is not about finding expected results as that entire argument is flawed.
    If you did, that statement was completely obfuscated by your sloppy techniques. And it isn't the argument that is flawed, it is the assumption that you know the distribution. As shown by the points you have ignored in my posts.

    But if that is what you meant, why do you keep pointing to your simulations, which could prove something but, as you used them, do not?


    The thing to notice here is that in all cases the absolute value of the difference between column one and column two is always equal to the lesser of the two (save rounding errors). The lesser of the two is X.
    The thing to notice here is that you can't use your "X"- the lower value - as a condition unless you see both envelopes, which renders analysis pointless.


    I pointed out a few times that the sample space of event R and the sample space of event S are equal subsets of each other, which means mathematically we can treat them the same.

    • A sample space is a set of all possible outcomes. An event is a subset of the sample space. Events don't have sample spaces.
    • It is not possible for two sets to be unequal subsets of each other.
    • We are less concerned with the sample spaces, than with the corresponding probability distributions.
    So the message of this post seems to be "I really don't know what I am talking about, but will put words together that I hope will make it sound like I do."

    Please, correct me if I am wrong. I have tried to make sense of what you said, but much of it defies all attempts.


    R and S is the event you have X or 2X when you get A. By definition of A and B, if A=X then B =2X, or if A =2X then B=X. So by definition if A equals X then B cannot also equal X.
    This is unintelligible.

    Maybe you meant "R and S are events that are conditioned on you having chosen envelope A and it having the low value or the high value, respectively. If you associate either with a value X from the sample space of possible values, which has nothing to do with R or S, then R means that B must have 2X, and S means that B must have X/2." But you were trying to dissuade Dawnstorm from what appears to have been a valid interpretation of another of your fractured statements; that X was simultaneously possible for both A and B. Even though he realized you could not have meant that, and said as much.

    But this is still pointless. If you aren't given a value in X, all you need to do is point out that the potential loss is the same as the potential gain. If you are given a value, you do need the distributions. Which you would know, if you read my posts as well as I have (tried to) read yours.

    I also pointed out that as soon as you introduce Y they are no longer equal subsets and therefore mathematically cannot be treated the same.
    This may be what you were referring to when talking about "Y." It still refers to ill-defined sets, not distributions.
  • Mathematical Conundrum or Not? Number Six

    The point of this demonstration is to show that the possible distribution of A is the same as the possible distribution of B. ... So we see with a D test statistics of 0.0077 and a 0.92 p-value we don't have strong enough evidence to support the alternative hypothesis that the two distributions are reasonably different.
    It is provable without simulation that the two distributions are the same, so this is pointless. We can accept that the distributions are the same. And it is obvious you didn't read my posts describing why the simulation is pointless. In short, the "data" you apply "data science" to pertains only to how well your simulation addresses the provable fact.

    It is also provable that the distributions are not independent. Since you technically need to use a joint probability distribution for any analyses using two random variables, and you can only separate a joint distribution into individual distributions when they are independent, this conclusion can have no bearing on the original problem. It is also obvious that you did not read my posts that explain this.

    You conclude that I did not read your posts, because I didn't comment on them. By not reading mine, you missed the fact that I don't need to comment. Conclusions drawn from evidence that has already been discredited do not need to be addressed.
  • Mathematical Conundrum or Not? Number Six

    I am happy with my correct results.
    And again, you won't say what results you mean.

    Your solution from page 1, that ...
    If you have X and you switch then you get 2X but lose X so you gain X; so you get a +1 X. However, if you have 2X and switch then you gain X and lose 2X; so you get a -1 X.
    ... is a correct solution to the original problem when you don't look in the envelope. The problem with it is that it doesn't explain why his program doesn't model the OP. That is something you never did correctly, and you refuse to accept that I did.

    Your conclusion from page 26, that...
    the possible distribution of A is the same as the possible distribution of B
    ... is also correct, although it is easier to prove it directly. But it is still irrelevant unless you determine that the two distributions are independent. AND THEY ARE NOT.

    It is this conclusion from page 26:
    the distribution in which X was selected from is not significant when assessing the possible outcome of envelope A and B concerning X or 2X.
    ... that is incorrect, as I just showed in my last post. The probability that A has the smaller value depends on the relative values of two probabilities in that distribution, so it is significant to the question you address here.

    Averaged over the entire distribution, there is no expected gain. Which you can deduce from your page-1 conclusion. For specific values, there can be expected gains or losses, and that depends on the distribution.
  • Mathematical Conundrum or Not? Number Six
    The point is that there must be a prior distribution for how the envelopes were filled, but the participant in the game has no knowledge of it. I express it as the probability of a pair, like Pr($5,$10) which means there is $15 split between the two. There is also a trivial prior distribution for whether you picked high, or low; it is 50% each.

    The common error is only recognizing the latter.

    If you try to calculate the expected value of the other envelope, based on an unknown value X in yours, then you need to know two probabilities from the unknown distribution. The probability that the other envelope contains X/2 is not 1/2, it is Pr(X/2,X)/[Pr(X/2,X)+Pr(X,2X)]. The 50% values from the second distribution are used to get this formula, but they divide out.

    The problem with the OP, is that we do not know these values, and can't make any sort of reasonable guess for them. But it turns out that the "you should switch" argument can be true:
      In Jeremiah's half-normal simulation:
    • The probability that X=$5 is 0.704%, and that X=$10 is 0.484%.
    • The probability that A=$10 is (0.704%+0.484%)/2=0.594%.
    • Given that A has $10, the probability that B has $5 is (.704%/2)/0.594% = 59.3%
    • Given that A has $10, the probability that B has $20 is (.484%/2)/0.594% = 40.7%
    • Given that A has $10, the expected value of B is (59.3%)*$5 + (40.7%)*$20 = $11.11.
    • So it seems that if you look in your envelope, and see $10, you should switch.
    • In fact, you should switch if you know that you have less than $13.60. And there is a 66.7% chance of that. (Suggestion to Jeremiah: run your simulation three times: Always switch from A to B, always keep A, and switch only if A<$13.60. The first two will average - this is a guess - about $12, and the third will average about $13.60.)
    • It is the expected gain over all values of X that is $0, not an instance where X is given.
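
    A quick arithmetic check of those figures (the Pr(X=$5) and Pr(X=$10) values are taken as given from the simulation described above):

```python
# Probabilities for the low value X, as quoted from the half-normal simulation.
p_x5, p_x10 = 0.00704, 0.00484   # Pr(X=$5), Pr(X=$10)

p_a10 = (p_x5 + p_x10) / 2       # Pr(A=$10): half of each pair's probability
p_b5 = (p_x5 / 2) / p_a10        # Pr(B=$5  | A=$10)
p_b20 = (p_x10 / 2) / p_a10      # Pr(B=$20 | A=$10)
exp_b = 5 * p_b5 + 20 * p_b20    # Exp(B | A=$10)

print(round(p_b5, 3), round(p_b20, 3), round(exp_b, 2))  # 0.593 0.407 11.11
```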

      The naive part of Jeremiah's analysis is that knowing that A and B have the same distribution is not enough to use those distributions in the OP. He implicitly assumes they are independent, which is not true.

    So the OP is not solvable by this method. You can, however, solve it by another. You can calculate the expected gain by switching, based on an unknown difference D. Then you only need to use one probability from the unknown distribution, and it divides out.

    Conclusions:
    1. If you don't look in your envelope, there can be no benefit from switching.
      • This is not because the distributions for the two envelopes are the same, ...
      • ... even though it is trivial to prove that they are the same, without simulation.
      • The two distributions are not independent, so their equivalence is irrelevant.
      • It is because the expected gain by switching from the low value to the high, is the same as the expected loss from switching from high to low.
    2. If you do look in your envelope, you need to know the prior distribution of the values to determine the benefit from switching.
      • With such knowledge, it is quite possible (even likely) that switching will gain something. If Jeremiah could be bothered to use his simulation, he could prove this to himself.
      • But it is also possible that you could lose, and in the cases where you do, the amounts are greater. The average over all cases will always be no change.
  • Mathematical Conundrum or Not? Number Six

    If you want to address the original problem, it matters that by your methods, ignoring their other faults, the two envelopes can contain the same amount. I understand that you are more interested in appearing to be right, than in actually addressing that problem. But I prefer to address it.


    Exactly what do you think "[your] correct solution" solves? I told you what it addresses, but you decided you were done before hearing it.

    I've tried to explain to you why the issue you addressed is not the OP, but you have chosen not to accept that. If you don't want to "debate in circles," then I suggest you accept the possibility that the contributions of others may have more applicability than yours.
  • Mathematical Conundrum or Not? Number Six

    I am well aware 0 was a possible outcome, the code just runs better without the loops, and it was not significant enough to care.

    Use a "ceiling" function instead of a "round" function. Or just add 0.0005 before you round.

    Sorry my simulation proves you wrong.

    Any statistical analysis addresses a question. Yours addresses "Is the distribution of A the same as the distribution of B?" And it can only show what the answer most likely is, not prove it.

    But, the answer to that question is pretty easy to prove. As I did. So there wasn't much point in the statistical analysis, was there?

    What you didn't address, but could, is whether the two random variables are independent. Which they are not. Since having two identical distributions is meaningless if they are not independent, your simulation did not address anything of significance to the OP.

    Or what the expectation of the other envelope is relative to yours, and why the naive apparent gain can't be realized. Answer: With any bounded distribution, there is a smallest possible value where you will always gain a small amount by switching, and a largest value where you will always lose a large amount by switching. In between, you typically gain about 25% by switching once, and lose about 20% by switching back (remember, the baseline is now the switched-to value, so the 25% gain and the 20% loss cancel). But over the entire distribution, the large potential loss if you had the largest value makes the overall expectation 0.
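
    A sketch of that behavior with an invented bounded prior (three possible pairs; any bounded prior shows the same pattern):

```python
from fractions import Fraction

# Invented bounded prior on the low value of the pair.
pX = {5: Fraction(1, 2), 10: Fraction(1, 3), 20: Fraction(1, 6)}

# Joint distribution of (your value, other value): pick a pair, then an envelope.
joint = {}
for x, p in pX.items():
    for v, w in [(x, 2 * x), (2 * x, x)]:
        joint[(v, w)] = joint.get((v, w), Fraction(0)) + p / 2

# Expected gain from switching, conditioned on each value you could hold.
for v in sorted({a for a, _ in joint}):
    pv = sum(p for (a, _), p in joint.items() if a == v)
    gain = sum((w - a) * p for (a, w), p in joint.items() if a == v) / pv
    print(v, gain)  # the smallest value always gains, the largest always loses

# Averaged over the whole distribution, switching changes nothing.
total_gain = sum((w - v) * p for (v, w), p in joint.items())
assert total_gain == 0
```

The per-value expectations vary with the prior, but the unconditional expected gain is exactly zero for any valid prior.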

    Those are the issues that are important to the OP, and your simulation doesn't provide any useful information.
  • Mathematical Conundrum or Not? Number Six

    Right. That was after my first post. I read it before the rest of the thread, so I didn't think that was what you were referring to. I didn't want to go into this amount of detail about it (hoping that my explanations would suffice), but you insist.

    Let X be a set of positive dollar values, and x0 be the lowest value in X.

    Let F(x) be a probability distribution function defined for every x in X, and for x0/2 as well. But F(x0/2)=0 since x0/2 isn't in X.

    Let Y be chosen uniformly from {1,2}.

    Define your random variable A = X*Y. So X=A/Y, where the denominator can be 1 or 2. This makes the probability distribution for A equal to F(A/1)/2+F(A/2)/2 = [F(A)+F(A/2)]/2. Call this F1(A).

    Define your random variable B = X*(3-Y). So X=B/(3-Y), where the denominator can be (3-1) or (3-2). This makes the probability distribution for B equal to F(B/(3-1))/2 + F(B/(3-2))/2. But simple rearrangement shows that this is F1(B).

    The point is that we can prove that the distributions for A and B are the same. And all your simulation shows, is that if you can correctly derive the result of a process, then a correct simulation of that process statistically approximates the result you derived. It says nothing about the OP; certainly not that the distribution is unimportant.
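
    This equality can be confirmed by exact enumeration instead of simulation. A sketch, assuming an arbitrary discrete F:

```python
from fractions import Fraction

# Arbitrary discrete F on the low value X (any F makes the same point).
F = {5: Fraction(1, 2), 10: Fraction(1, 4), 20: Fraction(1, 4)}

# Y is uniform on {1, 2}; A = X*Y and B = X*(3-Y).
distA, distB = {}, {}
for x, p in F.items():
    for y in (1, 2):
        a, b = x * y, x * (3 - y)
        distA[a] = distA.get(a, Fraction(0)) + p / 2
        distB[b] = distB.get(b, Fraction(0)) + p / 2

# A and B have exactly the same distribution: [F(a) + F(a/2)]/2 for each a.
assert distA == distB
print(distA)
```

No D statistic or p-value is needed; the two distributions are identical by construction.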

    And btw, the distribution you used in the version you posted is a discretization of the Half-normal distribution. And it is possible that it could put $0 in both envelopes.

    +++++

    And your simulation does nothing to explain why the expectation formula E = (X/2)/2 + (2X)/2 = 5X/4 is wrong. It is because the correct formulation, if someone picks your envelope A and it contains X, is:

      E = [(X/2) * F(X/2) + (2*X) * F(X)] / [F(X/2) + F(X)]

    Your simulation does nothing to show this, but you could show it in various ways. Choose any of your distributions, after fixing that "zero" problem.

    One way to test it is to choose a range of values, like between $10 and $11. If you have an amount in that range (ignore values outside it), your average gain should be about $2.50 if you switch. And this will be true whether you start with envelope A, or B. I can't say the exact amounts, because it depends on how F(10) differs from F(20). The exact amounts are the kind of thing you can find with a statistical simulation like yours.

    Or, just calculate the average percentage gain if you switch. You will find that it is about 25%, whether you switch from A to B, or from B to A.
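
    The second check can be sketched as a simulation. The prior on the low value here is arbitrary; the ~25% average percentage gain comes from gaining +100% half the time and losing 50% the other half, not from the prior:

```python
import random

random.seed(1)

lows = [5, 10, 20, 40]  # arbitrary prior on the low value of the pair
n = 200_000

pct_a_to_b = pct_b_to_a = 0.0
for _ in range(n):
    x = random.choice(lows)
    # Envelope A gets the low or high value with equal chance; B gets the other.
    a, b = (x, 2 * x) if random.random() < 0.5 else (2 * x, x)
    pct_a_to_b += (b - a) / a  # percentage gain from switching A -> B
    pct_b_to_a += (a - b) / b  # percentage gain from switching B -> A

print(pct_a_to_b / n, pct_b_to_a / n)  # both hover around 0.25
```

Both directions show the same ~25% average percentage gain, which is exactly why a percentage argument cannot tell you which envelope to hold.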
  • Mathematical Conundrum or Not? Number Six

    I did look. Maybe you didn't read mine: "I mean examples where actual envelopes have been filled with actual money, and presented to an actual person. That's the only way statistics matter."

    If you simulate the problem, then all you are testing is your interpretation of the problem, warts and all. And applying statistics to the results may show how statistics work, but nothing about whether your interpretation is right or about the original problem. Michael also simulated the problem.

    Whether or not you want to address it, the distribution of the possible values in the envelopes is required for the problem. Ignoring it is making a tacit assumption that includes an impossible distribution.
  • Mathematical Conundrum or Not? Number Six
    Sorry, I don't know MathJax. With having broken my foot on Friday (did you notice a gap in my responses?), it is taking all the effort I can spare to reply here. If you can point out an on-line tutorial, I'll look at it in my copious "free time."


    I'm just trying to understand the "need to" in that sentence.
    In short, the "need to" include the probability distribution is because the formulation of the "simple algebraic solution" is incorrect if you exclude it.

    I'm assuming that you understand the difference between conditional and unconditional probabilities. The expected value of the other envelope, given a certain value in your envelope, requires that you use probabilities conditioned on your envelope having that value. So even if X is unknown:

    1. The unconditional probability that the other envelope contains $X/2 is not 1/2, it is Pr(L=$X/2)/2.
    2. The unconditional probability that the other envelope contains $2X is not 1/2, it is Pr(L=$X)/2.
    3. If you have $X, those are the only possible cases.
    4. To get conditional probabilities, you need to divide the probability of each possible case by the sum of them all.

    So the expected value of the other envelope, given that you have $X, is:

      [($X/2) * Pr(L=$X/2) + (2*$X) * Pr(L=$X)] / [Pr(L=$X/2) + Pr(L=$X)]

    Note how this can be different, for different values of X, if Pr(*) varies. I'm not trying to "be that guy," but apparently nobody read my first post either. I pointed out examples of how that can happen: https://thephilosophyforum.com/discussion/comment/196299

    The "simple algebraic solution" assumes that the probability of any two possible values of L is the same, which makes this reduce to:

      [($X/2) + (2*$X)] / 2

    Note that this assumption means that the probability of getting $10 is the same as getting $5, which is the same as getting $2.50, or $1.25, or $0.625, or $0.3125, etc. And it also implies that any two possible high values are equally likely, which means that getting $10 is just as likely as $10,240, or $10,485,760, or $10,737,418,240, etc. THERE IS NO UPPER BOUND TO THE VALUES IMPLIED BY THE "SIMPLE ALGEBRAIC SOLUTION."

    This is another reason (besides failing the requirement that the cases be equivalent except in name) that the Principle of Indifference can't be applied.
  • Mathematical Conundrum or Not? Number Six

    If, at that point, you postulate a value in an envelope, you need to postulate a probability distribution that covers all possible ways that value could be in an envelope. Even if it is unknown. Did you miss the part where I said that, if you don't know the value, you technically have to integrate over all possible sets? But that you can prove both envelopes have the same expectation with any legitimate distribution?

    And if, at that point, you postulate a value for any of these functionally equivalent quantities:

    • The lower of the two envelopes.
    • The higher of the two envelopes.
    • The absolute value of the difference.
    • The sum.
    ... then your solution only uses a probability for one, and it divides out. But if you look in your envelope, you need two.

    Honestly, my conclusions are much the same as yours. I just explain them correctly. Why are you arguing? Do you even know what you are arguing about?
  • Mathematical Conundrum or Not? Number Six

    I've looked at them all, and seen no actual data. By which I mean examples where actual envelopes have been filled with actual money, and presented to an actual person. That's the only way statistics ("data science") matter.
  • Mathematical Conundrum or Not? Number Six
    Mixing random variables and unknowns can be very un-intuitive, especially to those who are out of practice.

    Doesn't this amount to saying that the loading of the envelopes and the selection of an envelope are independent events,
    That isn't as useful as you might think. You can assert that this independence is obvious, since the random variable R (where R is 1/2 or 2 if you picked low or high) is chosen without knowledge of the random variable L (the low value of the pair). But all this independence means is, for any values of the unknowns L1 and R1, that:

      Pr(L=L1&R=R1) = Pr(L=L1)*Pr(R=R1).

    What independence doesn't tell you, is how Pr(L=L1/2&R=2) = Pr(L=L1/2)/2 compares to Pr(L=L1&R=1/2) = Pr(L=L1)/2. If you need to compare them, independence does nothing for you.

    If you consider a value - like you do whenever you calculate an expectation - you have to condition on that value. Here, V is the random variable for your envelope's value:

      E = Exp(Other Envelope|V=V1) = (V1/2)*Pr(R=2|V=V1) + (2V1)*Pr(R=1/2|V=V1)
      E = (V1/2)*Pr(R=2&V=V1)/Pr(V=V1) + (2V1)*Pr(R=1/2&V=V1)/Pr(V=V1)
      E = V1*[Pr(R=2&V=V1)/2 + 2*Pr(R=1/2&V=V1)]/Pr(V=V1)

    Note that it is L that is independent of R, not V. So you can't separate Pr(R=1/2&V=V1) into Pr(R=1/2)*Pr(V=V1) = Pr(V=V1)/2. As a trivial example, if V=$1, then R can't be 2 and Pr(R=1/2&V=$1)=Pr(V=$1).

    Changing the random variable in this formulation (that is, once you started this way) from V to L, so you can use independence, doesn't help:

      E = V1*[Pr(R=2&V=V1)/2 + 2*Pr(R=1/2&V=V1)]/Pr(V=V1)
      E = V1*[Pr(R=2&L=V1/2)/2 + 2*Pr(R=1/2&L=V1)]/[Pr(R=2&L=V1/2) + Pr(R=1/2&L=V1)]
      E = 2*V1*[Pr(L=V1/2)/4 + Pr(L=V1)]/[Pr(L=V1/2) + Pr(L=V1)]

    This approach to an expectation inherently compares the probabilities of two possible combinations of envelopes. This result applies whether you do know V1, or don't know V1. If you do, you need to know the relative probabilities of those two possible sets of envelopes, which you don't have. If you don't know V1, then you get the expectation by integrating over all possible values of V, which requires even more knowledge you don't have. With a lot of work (too much), you could prove that this will always be equal to the expectation of your envelope.

    There is an easier way, but it applies only if you don't know what is in your envelope. Use L instead of V from the beginning and condition on L=L1. Then you can use independence, you only need to use Pr(L=L1), and it divides out. The disadvantage is, that you can't answer if you look and see $10 in your envelope. WHICH IS CORRECT.
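
    The contrast can be sketched with a made-up prior on L. Conditioned on L=L1, the prior never enters and switching has zero expected gain; conditioned on your value V, the prior is unavoidable:

```python
from fractions import Fraction

# Made-up prior on the low value L; the OP gives us no way to know it.
pL = {5: Fraction(2, 3), 10: Fraction(1, 3)}

# Conditioned on L=L1: by independence of R and L, you hold L1 or 2*L1 with
# probability 1/2 each, so the expected gain from switching is 0, and
# Pr(L=L1) never appears in the calculation.
for L1 in pL:
    gain = Fraction(1, 2) * (2 * L1 - L1) + Fraction(1, 2) * (L1 - 2 * L1)
    assert gain == 0

# Conditioned on V=$10 instead, the prior does not divide out:
# Pr(holding the low value | V=10) = Pr(L=10) / [Pr(L=10) + Pr(L=5)].
p_low_given_10 = pL[10] / (pL[10] + pL[5])
print(p_low_given_10)  # 1/3 here, not the naive 1/2
```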

    ↪Jeremiah

    We don't have data in the OP; we have a theoretical problem only.
  • Mathematical Conundrum or Not? Number Six

    But I have read it now - the majority is either name calling, or the debating of INCORRECT interpretations (on almost all sides) of various concepts in probability. Regardless, what I have said so far, and will continue to say, applies to all of it, whether or not I cite how in every instance. Some of these concepts are:

    1. This problem is not about statistics. What you meant when you called it "data science" is that it tries to apply the concepts of theoretical probability to real-world situations.

    2. In theoretical probability, "probability" is an undefined term. Given a sample space S, any corresponding set of numbers that satisfies the Kolmogorov axioms (all are >=0; if A and B are disjoint events then Pr(A)+Pr(B)=Pr(A or B); and Pr(S)=1) is a set of probabilities for that space. This does not interpret the meaning of the values.

    3. A random variable is a measurable quantity in the result of a random process that, within your knowledge, can have any value in some set of values. As a random variable, it has a probability distribution. This means that the lower value in the pair of envelopes is a random variable, that has a probability distribution. A "probability density curve" does not have to be "used to select X," whatever it is you think that means. Before you look in an envelope, does your knowledge allow L to have one value in some set, whether or not that set is known to you? Then it is a random variable that has a probability distribution. (Distributions apply to discrete random variables, as with amounts of money; densities need a continuous random variable.)

    A corrected version of your solution, if you don't look in an envelope: Let L be the random variable representing the low value in the envelopes. Say the "given event" is that the (unknown) low-value is L1; that is, one value in the set of possibilities is realized. (Note that L1 is an unknown, and not a random variable. Ask about the difference if it is unclear to you why I point this out.) There is a probability Pr(L=L1) that we can't possibly know, but it will turn out that we don't need to know it.

    Similarly, let R be a random variable representing the relative value of the envelope you pick. It can be 1/2, or 2. That is, if R=1/2, you picked the smaller envelope. Note that we can say Pr(R=1/2) = Pr(R=2) = 0.5 by the Principle of Indifference.

    The prior probability that L=L1 AND R=1/2 is Pr(L=L1)*Pr(R=1/2) = Pr(L=L1)/2.
    The prior probability that L=L1 AND R=2 is Pr(L=L1)*Pr(R=2) = Pr(L=L1)/2.

    The definition of the conditional probability for event A, given event B, is Pr(A|B)=Pr(A&B)/Pr(B):

    Pr(R=1/2|L=L1) = (Pr(L=L1)/2)/Pr(L=L1) = 1/2.
    Pr(R=2|L=L1) = (Pr(L=L1)/2)/Pr(L=L1) = 1/2.

    These are the probabilities you should use in your expectation calculation. Since Pr(L=L1) divides out, we don't need to know it. You are confusing the fact that you can take a shortcut to get this result with that shortcut being logically correct.

    Note how the value of your envelope differs in these two cases: L1 in the first, and 2*L1 in the second. Michael's error is treating the "given" event as a fixed value for your envelope, but using the same shortcut. It no longer gets the right result.

    Let V be the value of your envelope. If we treat the "given event" as V=V1, then

    Pr(R=1/2|V=V1) = Pr(R=1/2&V=V1)/Pr(V=V1) = Pr(R=1/2&L=V1)/Pr(V=V1) = Pr(L=V1)/Pr(V=V1)/2
    Pr(R=2|V=V1) = Pr(R=2&V=V1)/Pr(V=V1) = Pr(R=2&L=V1/2)/Pr(V=V1) = Pr(L=V1/2)/Pr(V=V1)/2
    Finally,
    Pr(V=V1) = Pr(L=V1&R=1/2)+Pr(L=V1/2&R=2) = [Pr(L=V1)+Pr(L=V1/2)]/2

    So the numbers to use in the probability calculation are

    Pr(R=1/2|V=V1) = Pr(L=V1)/[Pr(L=V1)+Pr(L=V1/2)]
    Pr(R=2|V=V1) = Pr(L=V1/2)/[Pr(L=V1)+Pr(L=V1/2)]

    Now the distribution of the values matters. Michael implicitly assumed Pr(L=V1)=Pr(L=V1/2).
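
    A quick Monte Carlo sketch (Python; the 9:1 mix of pairs is my own earlier example, not anything in the OP) shows that Pr(R=1/2|V=V1) follows the formula above rather than defaulting to 1/2:

    ```python
    import random

    random.seed(1)

    # Hypothetical prior, echoing my earlier example: 90% of pairs are ($5,$10),
    # 10% are ($10,$20). So L (the low value) is $5 or $10.
    def trial():
        low = 5 if random.random() < 0.9 else 10
        r = random.choice([0.5, 2])       # r = 1/2 means you picked the smaller envelope
        v = low if r == 0.5 else 2 * low  # V, the value of your envelope
        return v, r

    picked_smaller = total = 0
    for _ in range(200_000):
        v, r = trial()
        if v == 10:                       # condition on V = $10
            total += 1
            picked_smaller += (r == 0.5)

    print(picked_smaller / total)  # near Pr(L=10)/[Pr(L=10)+Pr(L=5)] = 0.1, not 0.5
    ```

    With this prior, seeing $10 makes it nine times more likely that you hold the larger envelope than the smaller one, exactly as Pr(R=1/2|V=V1) = Pr(L=V1)/[Pr(L=V1)+Pr(L=V1/2)] predicts.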
  • Mathematical Conundrum or Not? Number Six
    I am not doing this, not until you actually read all of my posts in this thread.Jeremiah
    What? You don't want to address a correct analysis until I wade through pages of debate that appear to be inconclusive? Because I can guarantee you that applying my correct analysis can resolve them.

    How about I read until I can point out three errors? But some of them will be repeats of what I already have said. In fact, I'll start by stating it another way. Here are two proposed solutions:

    1. Your envelope has $X. The other has $X/2 or $2X, 50% chance each. So the expected value is ($X/2)/2+($2X)/2=$5X/4 and the expected gain by switching is $X/4.
    2. (You seem to have suggested this one once) The magnitude of the difference between the two envelopes is $D; so there is a 50% chance it is +$D, and a 50% chance it is -$D. The expected gain is 0.
    This is a paradox, so one must be wrong. Yet there is only one difference in the theoretical approach taken: #1 uses two possibilities for the contents of the two envelopes, while #2 uses only one. The error must be there.

    +++++

    You said: "You have to understand that X is a variable." This is incomplete. It's a random variable, representing (above) the (known or unknown) value of the envelope you picked. But some of those pages you want me to read confuse it with D. And even if you don't accept that you need to use it to describe your sample space, you still can represent it as a random variable.

    That means it has a range, and a prior distribution. The range has to include $X, $X/2, and $2X. But if the probability of $X/2 (or $2X) is non-zero, the range must include $X/4 (or $4X). And then it must include $X/8 (or $8X). If you don't want this to continue indefinitely - which implies arbitrarily small and large amounts are possible - then there has to be a zero probability in X's distribution for some powers of 2. The first solution above implicitly assumes that all powers of 2 have the same probability.

    +++++

    Michael said: "By switching there's a 50% chance of gaining an extra £10 and a 50% chance of losing £5." This is wrong. The chance of picking the larger envelope is indeed 50% when you don't consider an amount, but you have to include the probability of having the amount you consider.

    His program says that there was a 50% chance that the envelopes contained ($5,$10), but ignored the 25% possibility of picking the $5 envelope from that pair. There was another 50% chance that the envelopes contained ($10,$20), and he ignored the possibility of picking the $20 envelope. He considered only the possibility of picking the $10 envelope.

    His error is assuming a distribution for the values. The OP does not provide that information, even if you look and see $10. He calculated the expectation given the 50:50 split between the two sets, and given that you picked $10. That conditional expectation does indeed show a gain from switching. It just isn't the problem posed in the OP.
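
    Here is a sketch (Python) of what I believe his program effectively computes, contrasted with the unconditional result, under his assumed 50:50 split between ($5,$10) and ($10,$20):

    ```python
    import random
    random.seed(2)

    # Michael's assumed prior: pairs are 50:50 between ($5,$10) and ($10,$20).
    gains_all, gains_seen10, n10 = 0.0, 0.0, 0

    N = 200_000
    for _ in range(N):
        low = random.choice([5, 10])
        pair = (low, 2 * low)
        mine = random.choice(pair)           # pick either envelope
        other = pair[0] + pair[1] - mine
        gains_all += other - mine
        if mine == 10:                       # his conditioning: you saw $10
            gains_seen10 += other - mine
            n10 += 1

    print(gains_all / N)        # ~0: unconditionally, switching gains nothing
    print(gains_seen10 / n10)   # ~+2.50: conditional on $10 AND his prior, it gains
    ```

    The conditional gain is real, but it depends entirely on the assumed 50:50 prior over the pairs, which the OP never supplies.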

    +++++
    NoAxioms said: "My solution is to switch only if the amount in the envelope is an odd number." This recognizes that the probability of ($X/2,$X) must be zero if X is odd, but doesn't recognize that the probabilities of ($X/2,$X) and ($X,$2X) do not have to be the same if X is even.

    +++++
    You said: "What we have establish[ed] is that -X and X are equally likely to occur." You are misinterpreting how Michael is using X. Yours is the value in the lower envelope, which makes it the difference. His is the value in your envelope, which makes the difference -X/2 or +X. He gets an invalid answer, because he needs to consider the relative probabilities of the two sets of envelopes. You consider only one set, so you don't.

    I read more, and it was all thrashing. If you have a specific post you want me to read, point it out.
  • Mathematical Conundrum or Not? Number Six
    That is a very bad understanding of what a sample space and an event is. You are not applying your Principle of Indifference there, which states from your link: "The principle of indifference states that if the n possibilities are indistinguishable except for their names, then each possibility should be assigned a probability equal to 1/n." n in this case would be the total possible combinations of the lottery numbers.
    Yes, my point was that the lottery example is a very bad description of a sample space. In fact, it is the archetype for just that. But so is ignoring that you are assuming a distribution of amounts as well as whether you picked high or low. Maybe if you looked at my examples, you'd understand this. That was also a point I made.

    When you look in an envelope and see $10, it means that one of two possible events has occurred. The envelopes were filled with ($5,$10) AND you picked the higher, or the envelopes were filled with ($10,$20) AND you picked the lower. The PoI applies to whether you picked high or low, since those outcomes are equivalent except in name. It does not apply to whether the envelopes were filled with ($5,$10) or ($10,$20), yet you treat them as equally likely.

    The Two Envelope Problem cannot be solved without a distribution for the amounts, which is why you get a paradox when you ignore it.
  • Mathematical Conundrum or Not? Number Six
    You are playing a game for money. There are two envelopes on a table.
    You know that one contains $X and the other $2X, [but you do not
    know which envelope is which or what the number X is]. Initially you
    are allowed to pick one of the envelopes, to open it, and see that it
    contains $Y . You then have a choice: walk away with the $Y or return
    the envelope to the table and walk away with whatever is in the other
    envelope. What should you do?
    Imagine three variations of this game:
    • Two pairs of envelopes are prepared. One pair contains ($5,$10), and the other pair contains ($10,$20). You pick a pair at random, and then pick an envelope from that pair. You open it, and find $10. Should you switch? Definitely. The expected value of the other envelope is ($5+$20)/2=$12.50.
    • Ten pairs of envelopes are prepared. Nine pairs contain ($5,$10), and the tenth pair contains ($10,$20). You pick a pair at random, and then pick an envelope from that pair. You open it, and find $10. Should you switch? Definitely not. The expected value of the other envelope is (9*$5+$20)/10=$6.50.
    • You are presented with ten pairs of envelopes. You are told that some pairs contain ($5,$10), and the others contain ($10,$20). You pick a pair at random, and then pick an envelope from that pair. You open it, and find $10. Should you switch? You can't tell.

    (And note that in any of them, you know you will get $10 if you switch from either $5 or $20.)
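
    The second variation can be checked with a short simulation (Python), confirming the $6.50 conditional expectation:

    ```python
    import random
    random.seed(3)

    # Variation 2: nine pairs of ($5,$10) and one pair of ($10,$20).
    pairs = [(5, 10)] * 9 + [(10, 20)]

    others, n = 0.0, 0
    for _ in range(200_000):
        pair = random.choice(pairs)           # pick a pair at random
        mine = random.choice(pair)            # pick an envelope from that pair
        if mine == 10:                        # you opened it and found $10
            others += pair[0] + pair[1] - mine
            n += 1

    print(others / n)   # ~$6.50 = (9*$5+$20)/10, so switching loses on average
    ```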

    The error in most analyses of the Two Envelope Problem, is that they try to use only one random variable (you had a 50% chance to pick an envelope with $X or $2X) when the problem requires two (what are the possible values of X, and what are the probabilities of each?).

    The Principle of Indifference places a restriction on the possibilities that it applies to: they have to be indistinguishable except for their names. You can't just enumerate a set of cases, and claim each is equally likely. If you could, there would be a 50% chance of winning, or losing, the lottery. In the Two Envelope Problem, you need to know the distribution of the possible values of $X to answer the question.
  • Mathematical Conundrum or Not? Number Five
    In standard SB, on (H & Tue) our Beauty receives no information at all.Srap Tasmaner
    But she isn't asked for her confidence in that situation, so this argument is a red herring. A very appealing red herring, as you go on to describe, but irrelevant nonetheless.

    The issue is, does she receive information on ~(H & Tue)? And the answer can't depend on how, or even if, she would receive information on (H & Tue).

    So, instead of the (H & Tue) protocol being "let her sleep," make it "take her to DisneyWorld." Just before she finds out where she is going, the Law of Total Probability says:

      Pr(H) = Pr(H|Interview)*Pr(Interview) + Pr(H|DisneyWorld)*Pr(DisneyWorld)

    Here,

      Pr(H) = 1/2 is the probability of Heads.
      Pr(Interview) = X is the probability she will be taken to an interview.
      Pr(DisneyWorld) = 1-X is the probability she will be taken to DisneyWorld.
      Pr(H|Interview) = Y is the probability of Heads in an interview (what the original problem asks for).
      Pr(H|DisneyWorld) = 1 is the probability of Heads at DisneyWorld.

    So we can now say that:
      1/2 = Y*X + (1-X)
      (1-Y)*X = 1/2

    If Y=1/2, as you believe, that means X=1. This is a contradiction, since it means she cannot be taken to DisneyWorld. Whether or not you accept that Y=1/3, the fact that there is a chance of going to DisneyWorld proves that Y<1/2.
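
    The consistency of Y=1/3 with the Law of Total Probability can be verified numerically (Python; the model here is my assumption that each coin-and-day combination is equally likely a priori, so X=3/4):

    ```python
    # Assumed model: the four combinations {H&Mon, H&Tue, T&Mon, T&Tue} are each
    # 1/4 a priori; H&Tue means DisneyWorld, the other three mean an interview.
    X = 3 / 4   # Pr(Interview)
    Y = 1 / 3   # Pr(H | Interview), the thirder answer

    # Law of Total Probability: Pr(H) = Y*X + 1*(1-X) must equal 1/2.
    assert abs(Y * X + (1 - X) - 1 / 2) < 1e-12
    # Equivalently, (1-Y)*X = 1/2.
    assert abs((1 - Y) * X - 1 / 2) < 1e-12
    print("Y=1/3, X=3/4 satisfy both identities")
    ```

    Note that Y=1/2 forces X=1 in the second identity, which is the contradiction described above; Y=1/3 with X=3/4 satisfies it exactly.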
  • Mathematical Conundrum or Not? Number Five
    Probability is about expectations.
    No, it isn't.

    Statistics is about expectations. Statistics uses Probability Theory to calculate expectations. In conventional experiments, we find that the value of the expectation is the value of the probability. The concepts of "probability" and "expectation" are very different.

    The Sleeping Beauty Problem bends conventionality, and makes statistics inapplicable, by making the number of trials depend on the result for which you are trying to assess a probability. So arguments about frequency and expectation are meaningless until the "bent" issues are resolved.

    Specifically: people disagree about whether there are two, three, or four disjoint events that comprise Beauty's sample space. On Sunday Night, the answer is clearly "two" since how the concept "today" applies at that time is different than when she is awakened during the experiment.

    Halfers want the number to be "three" inside the experiment. They treat it like Tuesday doesn't happen if Heads is flipped. The problem with the rationale they use is that it only supports "two," which nobody thinks applies within the experiment. The inconsistency in their argument is that Monday is different than Tuesday if TAILS was flipped, but not if HEADS was flipped. In that case, Tuesday does not exist.

    You reject the existence of Tuesday yourself, when you say things like:
    it sure does look like the design of the experiment involves conditioning heads on ~Tuesday,
    That is not what the condition is. It is {AWAKE}, which can also be written as ~{HEADS&Tuesday} or {HEADS&Monday, TAILS&Monday, TAILS&Tuesday}.

    I've presented an alternative problem that unequivocally demonstrates the solution. You give no reason for "not getting .. its equivalence to SB." It is my opinion that you "do not get it" because doing so would make you change your answer. To "get it," all you need do is let Beauty be wakened on HEADS+Tuesday, but do SOTAI.

    One thing I'm generally uncertain about is how strongly to lean on "what day today is" being random.
    "Random" is not a property of what you are looking at in an experiment. It is a property of what you know about it, but can't see. Either because the experiment hasn't happened yet, or it has but you can't see what happened.

    Here's an example that I've used in this thread, that directly addresses your uncertainty: Forget the coin, and the two possible awakenings. Put Beauty to sleep, and roll a six-sided die. Call the result of the roll R. Leave her asleep on five days during the following week, waking her only after R nights (so R=1 means Monday, R=2 means Tuesday, etc.). When she is awake, ask her for her confidence that today is Wednesday.

    Is the answer 1, meaning "what day today is, is not random," or is the answer 1/6, meaning "what day today is, is random" ?

    The reason it is random, is because Beauty has no evidence of what day it could be. Does this sound anything like the Sleeping Beauty Problem to you?
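
    The die-roll example can be sketched in a few lines (Python), treating the question from inside a random run of the experiment:

    ```python
    import random
    random.seed(4)

    days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
    wed = 0
    N = 120_000
    for _ in range(N):
        r = random.randrange(6)        # fair die picks the single waking day
        wed += (days[r] == "Wed")      # her one awakening: is it Wednesday?
    print(wed / N)   # ~1/6: from inside, "what day today is" behaves as random
    ```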
  • Mathematical Conundrum or Not? Number Five
    What happens on Tuesday&HEADS is a part of the HEADS protocol, so you excluded part of it. — JeffJo

    (a) No it isn't. From the OP:

    A fair coin will be tossed to determine which experimental procedure to undertake: if the coin comes up heads, Beauty will be awakened and interviewed on Monday only. If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday. In either case, she will be awakened on Wednesday without interview and the experiment ends.
    Srap Tasmaner
    Yes, it is. The bolded text tells the lab techs what to do - or more accurately, what not to do - on both days. It defines two protocols for TAILS: interview Monday, interview Tuesday. It defines two protocols for HEADS: interview Monday, sleep Tuesday. Even if they send her home that day, that would still be SOTAI.

    My point is that it can't matter what SOTAI is. There is a protocol for Tuesday&HEADS, and in an interview Beauty knows that it is not the protocol that is currently in progress.

    The only thing that matters is one for heads and two for tails
    What matters is that there is a protocol on both days for both HEADS and TAILS, and that one of these four protocols is inconsistent with Beauty being interviewed. You keep treating the fact that she sleeps through a day as if that makes the day nonexistent, or as if it is not something the lab techs have to have included in their protocol.

    (b) If it were part of the heads protocol, by eliminating it, you would be eliminating heads as an outcome. Simply being interviewed would tell you the coin landed tails.
    ?????
    In general, I dislike the use of the word "eliminate." People forget that it means "An outcome which was possible has been shown to be incompatible with the current information state."

    On Sunday Night, Beauty knows that her information state during the experiment will be limited to a single day's experiences. She knows that there are four possible such states:

    1. The single day is Monday and HEADS flipped.
    2. The single day is Monday and TAILS flipped.
    3. The single day is Tuesday and HEADS flipped.
    4. The single day is Tuesday and TAILS flipped.
    If she is interviewed, she knows that one has been shown to be incompatible with her current information state.

    If that seems like a tendentious interpretation, consider what happens as you increase the number of tails interviews: whatever the ratio, that's your odds it was tails. Do a thousand tails interviews, and it's a near certainty -- according to thirders -- that a fair coin lands tails.
    Yep. Get two thousand volunteers. Order them randomly from #1 to #2,000. House #1 thru #1,000 in the HEADS wing of your lab, and #1,001 thru #2,000 in the TAILS wing. Then flip your fair coin.

    On each of the next 1,000 days, wake all of the volunteers in the wing that corresponds to the coin result, and one - the one whose number corresponds to the day - from the other wing. Ask each of the 1,001 awake volunteers for her confidence that her wing is not the one that corresponds to the coin that was flipped.

    Each of these women is in an experiment that is identical - except for the labels you put on coins and days - to what you just described. Each knows that 1,000 awake women came from one wing, and only 1 from the other.

    Yes, it is a fair coin flip. But Beauty is not asked about its flip in an information vacuum. She knows that there is a 1/N chance that she would have been interviewed today under one result, but a 100% chance under the other. Her confidence in the first result must be 1/(1+N).
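
    The 1/(1+N) result can be sketched for a single volunteer (Python; N is scaled down from 1,000 for speed, and "today" is modeled as a uniformly random day of the run, which is my assumption about the per-awakening question):

    ```python
    import random
    random.seed(5)

    N = 4                              # days per wing (small stand-in for 1,000)
    mismatch = awake = 0
    for _ in range(200_000):
        coin = random.choice(["HEADS", "TAILS"])
        wing = random.choice(["HEADS", "TAILS"])  # which wing our volunteer is in
        my_day = random.randrange(N)              # her assigned number in her wing
        day = random.randrange(N)                 # which day of the run it is
        # Awake if her wing matches the coin (every day), or on her numbered day.
        if wing == coin or day == my_day:
            awake += 1
            mismatch += (wing != coin)
    print(mismatch / awake)   # ~1/(1+N) = 0.2 for N=4
    ```

    At each awakening, her credence that her wing does NOT match the coin is 1/(1+N), matching the argument above.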
  • Mathematical Conundrum or Not? Number Five
    If I condition on ~(Tuesday & HEADS), I exclude neither the heads protocol nor the tails protocol, as neither included it.
    This is incorrect. What happens on Tuesday&HEADS is a part of the HEADS protocol, so you excluded part of it. And you treat the various possibilities inconsistently.

    What you are trying to do is like trying to get a sum of 10 or more on two six-sided dice. If you look at only one, and see that it is not a 5 or a 6, can you conclude that the sum can't be 10 or more? After all, you have to have a 5 or a 6 to get that large of a sum, and you don't see one.

    Just like my example excluded the possibility that the unseen die is a 6, you excluded part of the HEADS protocol when you conditioned on ~(Tuesday & HEADS). Specifically, the part where SOTAI happens. Whether or not Beauty is awake to see it, it is still a part of the HEADS protocol and you are treating it as if it is not. You are inconsistent because you insist you can't separate the two parts of the TAILS protocol the same way.

    This helps me not at all.
    The "help" I am trying to offer, is to get you to see that you have to separate both protocols into individual days. And you are right, it will be of no help to you if you refuse to see this, just like you won't address my "four volunteers" proof that the answer is 1/3.

    +++++

    In probability, an outcome is a description of a result. A set of such descriptions with the property that every possible result of an experiment fits exactly one of the descriptions is called a sample space of the experiment.

    There can be more than one sample space, depending on what you are interested in describing. Possible sample spaces for rolling my two dice include 36 outcomes (if every ordered combination is considered), or 11 (if just the sum is considered), or 2 (if all you care about is whether the sum is, or isn't, 10 or more). But note that it is never wrong to use a larger sample space than the minimum required to distinguish what is of interest to you: the 36-element sample space describes the experiment where you are trying to get 10 or more, and in fact is easier to use. The most common mistake made by beginners in probability may be using the wrong sample space, and assuming the outcomes are equiprobable just because it is a sample space. ("I have two children, and there is a 1/3 chance that I have two boys because the sample space is 0, 1, or 2 boys!")

    An event is not the same as an outcome, it is a set of outcomes. The two are easily confused. The difference is that your schema for providing descriptions can, depending on the event, separate it into subsets that are also events. By definition, an outcome can't be separated that way unless you change the schema.

    So if your schema is to look at the sum, a 10 is a 10 whether the combination is (4,6) or (5,5). But that schema isn't useful if all you see is one die: "I see a 4" doesn't tell you anything about which "sum" description is appropriate. You have to change to a schema that describes the possible companions of the 4 you see.

    In your approach to Sleeping Beauty, you are considering Monday&TAILS and Tuesday&TAILS to be inseparable parts of the same outcome. Probably because they are both part of what you call "the TAILS protocol." That is, you consider Monday&TAILS, Tuesday&TAILS, and just TAILS to be different names for the same outcome. This is a point of view that is only valid from outside the experiment; the lab techs, or Beauty on Sunday night.

    What you are ignoring, is that when she is inside the experiment, even though she doesn't know which day it is, she does know that it is not currently both days. So TAILS (or TAILS protocol) is not an outcome. She can separate it into the distinct outcomes Monday&TAILS and Tuesday&TAILS, and know that only one applies to the current moment. And the fact that you do make this distinction for HEADS requires you to do it with TAILS. (And even if you think it should not be necessary to do so, you can't be wrong by doing it.)

    The two-day protocol is irrelevant to Beauty, because she is inside the experiment and so participating in only one day. Her sample space is the set of four possible single-day protocols: {Monday&TAILS, Tuesday&TAILS, Monday&HEADS, Tuesday&HEADS}. Each has a prior probability of 1/4 of applying to a random single day in the experiment, which is all that Beauty knows is happening. But because she knows SOTAI is not happening on that single day (she is being interviewed), she can rule out one of those outcomes.
  • Mathematical Conundrum or Not? Number Five
    When Beauty is asked, "What is your credence that the coin landed heads?" she knows there's a chance the experiment is using the heads protocol, in which case this is her one and only interview, and a chance that it is using the tails protocol, in which case this may be her first interview, last, or one of many, depending. By stipulation, there is no evidence she can use to distinguish one interview from another; all she has to go on is her knowledge of the experiment's design.
    It is indeed true that Beauty has no evidence that she can use to distinguish Monday from Tuesday. This does not mean that such evidence does not exist, only that she does not have it.

    Monday is a different day than Tuesday. You are treating them as the same event from Beauty's point of view, and they are not. They are within the overall experiment, but Beauty sees only half - one day of two - of what the overall experiment encompasses.

    I provided an un-refuted (and irrefutable) demonstration that the answer must be 1/3 here. But you apparently need another.

    So, do everything as in the original experiment, except don't tell Beauty that she might sleep through a day. Tell her that she will be interviewed on Monday, but on Tuesday only if Tails was flipped. If it was Heads, Something Other Than An Interview (SOTAI) will happen (you can even give an example: say that last week's volunteer was taken to DisneyWorld). BUT, the stipulation that she cannot distinguish the interviews still applies; only Interview/SOTAI are distinguishable.

    You were right before that some "discounting" needs to be done. You just did it incorrectly. Beauty does not see the entire experiment, she sees just one day of it. Using SOTAI demonstrates how she should do this discounting: To Beauty, there aren't just two protocols, there are four:

    Monday+Tails = I am to be Interviewed,
    Monday+Heads = I am to be Interviewed,
    Tuesday+Tails = I am to be Interviewed, and
    Tuesday+Heads = SOTAI.

    On Sunday Night, she knows that each of these is equally likely to occur. But from the point of view of the overall experiment, the events that happen on different days are not disjoint; each has a probability of 1/2 that it will occur in the future. There is no inconsistency, since two will happen.

    But when she is wakened, she sees only one day and not the entire experiment. She has evidence only of the present, not the future or past. So Monday+Tails and Tuesday+Tails are just as much different presents, as are Monday+Heads and Tuesday+Heads.

    If SOTAI happens, she knows the sub-protocol was based on Tuesday+Heads. So the probability of Heads is 100%. This should be a big clue to Halfers, since an increased probability in some circumstances requires that it decrease in others.

    If she is interviewed, she can deduce that one of these equiprobable sub-protocols can't be the protocol responsible for today. The other three remain equiprobable, and only one includes Heads. So the probability of Heads is 1/3.

    Finally, note that it does not matter what SOTAI is, just that when she is in an interview she knows that SOTAI is not happening. Her logic is based entirely on the fact that the three possible interviews are indistinguishable from each other, but distinguishable from SOTAI. So, let SOTAI be "don't wake her up."
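
    The four-sub-protocol argument can be sketched directly (Python), sampling a random coin-and-day combination and conditioning on an interview:

    ```python
    import random
    random.seed(6)

    heads = interviews = 0
    for _ in range(200_000):
        coin = random.choice(["Heads", "Tails"])
        day = random.choice(["Monday", "Tuesday"])
        # The only sub-protocol without an interview is Tuesday+Heads (SOTAI).
        if not (coin == "Heads" and day == "Tuesday"):
            interviews += 1
            heads += (coin == "Heads")
    print(heads / interviews)   # ~1/3
    ```

    Ruling out the one equiprobable sub-protocol incompatible with an interview leaves three, of which exactly one includes Heads.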
  • Mathematical Conundrum or Not? Number Five
    Demonstration of concept #1: Beauty is put to sleep on Sunday, as in the original problem. Then a single, fair six-sided die is rolled. Based on its result, she is wakened once during the ensuing week: 1=Monday, 2=Tuesday, ... 6=Saturday. While awake, she is asked for two beliefs: That a 3 rolled, and that it is Wednesday.

    I don't think anybody will argue that her belief that a 3 rolled should be 1/6. She has no evidence that could make it anything else. But "3 rolled" and "it is Wednesday" represent the same concept: if a 3 rolled, it must be Wednesday, and if anything except a 3 rolled, it can't be Wednesday.

    The point is, that "it is Wednesday" is a perfectly valid proposition to evaluate. Like "a 3 rolled," its probability is 1/6.

    Demonstration of concept #2: Same as #1, but two dice are rolled until the result is not doubles. Beauty is wakened twice, and the same amnesia drug is used in between wakings. She is asked about the same two beliefs.

    Note that "a 3 rolled" can refer to either die, so her belief on Sunday is 10/30=1/3 (remember, the six possible ways doubles could roll are eliminated). When awake during the week, she gains no information that can affect that, so her belief remains 1/3.

    "It is Wednesday" is still a valid proposition. Jeremiah, this does not mean that Beauty has "temporal awareness while asleep." Only the period of time from when she woke, until when is put to sleep again, exists in her awareness. But that entire period exists within a single day; a day that has a constant name, even if she does not know it. So she can represent that awareness with a probability for each possible name. And since she has no evidence that any name is more or less likely than "Wednesday," her belief must be 1/6 in Wednesday.

    It is not surprising that this is half of her belief in a 3, since she wakes twice. In the combinations where a 3 rolled, she is awake on Wednesday on half of the days.
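
    Demonstration #2 can be checked by simulation (Python), viewing things from one of her two awakenings chosen at random:

    ```python
    import random
    random.seed(7)

    wed = three = 0
    N = 120_000
    for _ in range(N):
        # Roll two dice until the result is not doubles.
        while True:
            a, b = random.randrange(1, 7), random.randrange(1, 7)
            if a != b:
                break
        three += (a == 3 or b == 3)       # "a 3 rolled" refers to either die
        today = random.choice([a, b])     # one of her two awakenings, at random
        wed += (today == 3)               # day 3 = Wednesday
    print(three / N)   # ~1/3: her belief that a 3 rolled
    print(wed / N)     # ~1/6: "today is Wednesday" at a random awakening
    ```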

    Demonstration of concept #3: Same as #2, but roll only once and accept doubles. In that case, she will be wakened only once.

    Her temporal awareness is still that it can be only one day, within a set of six equally-likely days. Her belief that it is Wednesday is still 1/6. (Whether a 3 rolled is not quite the same issue as "Heads" in the original problem, so I won't obfuscate the point by discussing it.)

    +++++

    The halfer argument is based entirely on treating Monday and Tuesday as the same day in Beauty's awareness. They are not; Beauty cannot distinguish them through her senses, but she knows that one name has applied since she woke up, and the other name has not. She can treat that name with probability.

    Each of the four combinations "Heads+Monday, Heads+Tuesday, Tails+Monday, Tails+Tuesday" is equally likely to represent a random point in time during the experiment. The probability is 1/4 each.

    Yes, Tails+Tuesday still can happen, even if Beauty has no awareness of it when it does. The point is that it can exist, she knows it can exist, and she knows she won't be awake to see it. From that, she can update her belief in Heads to 1/3.
  • Mathematical Conundrum or Not? Number Five
    Four Beauties volunteer to undergo the following experiment and are told all of the following details: Four cards have been prepared; one says "You will sleep through Tuesday if the coin lands Heads." Another says "You will sleep through Tuesday if the coin lands Tails." The other two change "Tuesday" to "Monday." The cards are dealt to the four volunteers. Each can look at hers, but cannot share its information with the others.

    On Sunday all four will be put to sleep, and a fair coin will be flipped. Three of the Beauties will be wakened on Monday, according to the coin result and their cards. They will be interviewed, and put back to sleep with an amnesia-inducing drug that makes them forget that awakening.

    This will be repeated on Tuesday.

    In the interview, each awake volunteer will be asked for her belief that the coin result is the one named on her card.


    Each volunteer knows that three volunteers are awake. Each knows that exactly one of these three has a card that names the coin result. None have any information that could make her believe that she is more, or less, likely to be the one. So what should her belief be?

    One of these volunteers is experiencing the exact same problem that started this thread. The others are experiencing equivalent problems that must have the same answer.
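
    The four-volunteer setup can be sketched as a simulation (Python), from the viewpoint of a randomly chosen awake volunteer:

    ```python
    import random
    random.seed(8)

    # Each card is (day she sleeps through, coin result that triggers it).
    cards = [("Tuesday", "H"), ("Tuesday", "T"), ("Monday", "H"), ("Monday", "T")]
    match = awake = 0
    for _ in range(120_000):
        coin = random.choice(["H", "T"])
        day = random.choice(["Monday", "Tuesday"])
        sleep_day, named = random.choice(cards)   # a random volunteer's card
        # She sleeps only when today is her card's day AND the coin is its result.
        if not (day == sleep_day and coin == named):
            awake += 1
            match += (coin == named)              # coin is the one on her card?
    print(match / awake)   # ~1/3
    ```

    Each awake volunteer's belief that the coin matches her card is 1/3, and one of the four volunteers is in a problem identical to the original.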