• Sleeping Beauty Problem
    She's certainly able to update it on the basis of her knowledge that she might be awoken an even more absurdly large number of times as a consequence of this very unlikely event. I'm saying that it's irrational of her to.

    The only rational approach, upon waking, is to recognize that it landing heads 100 times in a row is so unlikely that it almost certainly didn't, and that this is her first and only interview.
    Michael

    She does recognize that for the coin to land heads 100 times in a row is unconditionally unlikely. But why would it not be rational for her to condition her credence in the truth of this proposition on her knowledge that her awakening isn't an event that she can distinguish in respect of its cause, and that a very large number of such indistinguishable awakening events can stem from such an unlikely cause?
  • Sleeping Beauty Problem
    The difference is that the unconditional probability of being called up is very low, and so just being called up at all affects one's credence. In the Sleeping Beauty case (both the normal and my extreme version), she's guaranteed to be awoken either way.Michael

    I can easily adjust my lottery study example such that I am guaranteed to be selected but, once selected, the very (unconditionally) unlikely event that led to my selection (alongside a very large number of people) still is more likely than not to have been the cause of my selection. All that is needed is to shorten the waitlist by about 98%.
  • Sleeping Beauty Problem
    There's actually two spaces. See here.Michael

    Yes, I agree with your representation.
  • Sleeping Beauty Problem
    Then you have to say the same about my extreme example. Even when she knows that the experiment is only being run once, Sleeping Beauty's credence that the coin landed heads 100 times in a row is greater than her credence that it didn't.

    And I think that's an absurd conclusion, showing that your reasoning is false.
    Michael

    I'm not sure why you think this is absurd. Compare again my lottery study example. Suppose there are one billion people on the waiting list. If a coin lands heads 20 times in a row, then 100 million people get pulled from the list. Else, one single person gets pulled from the list. I am then informed that I got pulled from the list (but not whether I am alone or one from 100 million). Is it absurd for me to believe that the coin landed heads 20 times in a row? My credence in this proposition should be roughly 99%.
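
    Here is a minimal sketch of the Bayesian update behind that 99% figure (the waitlist size, streak length, and numbers of people pulled are just the ones stipulated in my example):

    ```python
    from fractions import Fraction

    # Stipulated numbers from the example
    waitlist = 1_000_000_000          # one billion people on the waiting list
    p_streak = Fraction(1, 2**20)     # probability of 20 heads in a row
    pulled_if_streak = 100_000_000    # people pulled if the streak occurs
    pulled_otherwise = 1              # people pulled otherwise

    # Probability that I, in particular, get pulled under each hypothesis
    p_pulled_given_streak = Fraction(pulled_if_streak, waitlist)
    p_pulled_given_no_streak = Fraction(pulled_otherwise, waitlist)

    # Bayes' theorem: P(20 heads in a row | I got pulled)
    p_pulled = p_streak * p_pulled_given_streak + (1 - p_streak) * p_pulled_given_no_streak
    posterior = p_streak * p_pulled_given_streak / p_pulled

    print(float(posterior))  # ~0.99: being pulled is strong evidence of the unlikely streak
    ```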

    In Sleeping Beauty's case, your intuition that her high credence in the unlikely sequence of heads is absurd apparently stems from your unwillingness to contemplate the possibility of her updating it on the basis of her knowledge that she might be awoken an even more absurdly large number of times as a consequence of this very unlikely event. The need for her to update her credence upon being awakened stems from her lacking the power to distinguish those awakening events in respect of their causes (i.e. the coin flip results). Another reason why you may be neglecting her need to update her credence is your characterizing the experiment that is being run once as starting with a sequence of coin flips. But from Sleeping Beauty's perspective, it really begins with a random awakening.
  • Sleeping Beauty Problem
    There is a space of possible awakening/interview events A that are characterised by the day on which they occur ((M)onday or (T)uesday) and by the state of a coin that has been flipped prior to their occurring ((H)eads or (T)ails). P(H) = P(T) = 0.5. The rest of the conditional dependencies are part of the stipulation of the problem or can be inferred from it.
  • Sleeping Beauty Problem
    I never buy betting arguments unless the random variables are set up!fdrake

    They are!
  • Sleeping Beauty Problem
    They describe completely different approaches to modelling the problem. That doesn't immediately tell us which SB ought to model the situation as, or whether they're internally coherent.fdrake

    One clue to this is to let SB bet on the outcome that her credence is about and see if her betting behavior leads her to realize the EV she is anticipating.
  • Sleeping Beauty Problem
    1. If the experiment is run once, what is Sleeping Beauty's credence that the coin landed heads?

    2. If the experiment is repeated several times, what is the probability that a randomly selected interview from the set of all interviews followed the coin landing heads?

    Thirders answer the second question, which I believe is the wrong answer to the first question. The experiment doesn't work by randomly selecting an interview from a set of interviews after repeating the experiment several times and then dropping Sleeping Beauty into it.
    Michael

    I would rather say that the experiment works by ensuring that Sleeping Beauty finds herself being awoken in circumstances that she knows to be twice as likely to occur (because twice as frequent) as a result of the coin having landed heads than as a result of its having landed tails. This formulation entails no metaphysical queerness.
  • Sleeping Beauty Problem
    My reasoning is that P(Awake) = 0.5 given that there are 6 possible outcomes and I will be awake if one of these is true:

    1. Heads and I am 1
    2. Tails and I am 2
    3. Tails and I am 3
    Michael

    If you mean P(Awake) to refer to the probability of your being awakened at all (on at least one day) then P(Awake) is indeed 0.5. But in that case P(Awake|Heads) should, consistently with this interpretation, refer to your being awakened at all conditioned on the case where the coin landed heads. This is (1/3+2/3)/2 = 0.5
  • Sleeping Beauty Problem
    I don't think it makes sense to say P(Awake) = 3/4. P(Awake) is just the probability that she will be woken up, which is 1.Michael

    But I'm not saying that. What I'm saying is that she is awoken every Monday and she is awoken half the time on Tuesdays. So, on average, on a random day, she is awoken 3/4 of the time. I am then using Bayes' theorem to deduce the probability of a random awakening having occurred on a Tuesday. But the result was rather trivial.
  • Sleeping Beauty Problem
    (I woke up early)

    The question which has been eating me is "What is the probability of the day being Tuesday?". I think it's necessary to be able to answer that question for the thirder position. But I've not found a way of doing it yet that makes much sense. Though I'm sure there is a way!fdrake

    P(Tuesday|Awoken) = (P(Awoken|Tuesday) / P(Awoken)) * P(Tuesday)

    Sleeping Beauty is awoken with probability 3/4 on an average day (Monday or Tuesday). On Tuesdays, she is awoken with P = 1/2. Therefore, P(Awoken|Tuesday) / P(Awoken) = (1/2)/(3/4) = 2/3.

    This (2/3) is the Bayesian updating factor. The unconditioned probability of the day being Tuesday is 1/2. The updated probability therefore is P(Tuesday|Awoken) = (2/3)*(1/2) = 1/3, as expected.
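
    A quick check of this arithmetic (a sketch only, using the per-day awakening frequencies stipulated above):

    ```python
    from fractions import Fraction

    p_tuesday = Fraction(1, 2)               # unconditionally, a random day is a Tuesday half the time
    p_awoken_given_tuesday = Fraction(1, 2)  # she is awoken on half of all Tuesdays
    p_awoken = Fraction(3, 4)                # she is awoken on 3/4 of all (Monday or Tuesday) days

    updating_factor = p_awoken_given_tuesday / p_awoken   # 2/3
    print(updating_factor * p_tuesday)                    # P(Tuesday|Awoken) = 1/3
    ```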
  • Sleeping Beauty Problem
    I think your numbers there are wrong. See this.Michael

    In the quoted post you say: "P(Awake|Heads) is just the prior probability that she will be woken up if the coin lands heads"

    I think my lottery study analogy suggests a better interpretation of the P(Awoken|Heads)/P(Awoken) Bayesian updating factor. But I must go to sleep now. Thanks for engaging! I'll be back with equal probability on one of my next two awakenings.
  • Sleeping Beauty Problem
    Also this makes no sense. You can't have a probability of 2.Michael

    This is not a probability. It's a ratio of probabilities that I have expressed as a ratio of corresponding frequencies. The updated probability P(Heads|Awoken) is 2/3. The quoted ratio being larger than one just reflects the fact that Bayesian updating results in a probability increase in this case.
  • Sleeping Beauty Problem
    Being able to bet twice if it lands tails, and so make more money, doesn’t make it more likely that it landed tails; it just means you get to bet twice.

    You might as well just say: you can place a £1 bet on a coin toss. If you correctly guess heads you win £1; if you correctly guess tails you win £2.

    Obviously it’s better to bet on tails, but not because tails is more probable.
    Michael

    It makes it twice as likely that individual bets are winning bets. Right? Likewise, in Sleeping Beauty's problem, the fact that she is being awoken twice when the coin lands heads makes it more likely that a randomly selected awakening is the result of a coin having landed heads. When she is afforded the opportunity to make one single bet on any given awakening, her expected value when making this bet is conditioned on the probability that this particular awakening is the result of the coin having landed heads. Do you agree that her expected value for this single bet (in my scenario) is $120? If she would rather judge that the probability of the coin having landed heads is 1/2, she should expect the expected value of her bet to be $90 and would be rationally justified in declining to wager $100.
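
    To make the two expected values explicit (a sketch; the $100 stake and $180 payout on heads are the figures from my scenario):

    ```python
    from fractions import Fraction

    stake, payout_if_heads = 100, 180

    ev_thirder = Fraction(2, 3) * payout_if_heads   # credence 2/3 in heads -> EV = $120
    ev_halfer = Fraction(1, 2) * payout_if_heads    # credence 1/2 in heads -> EV = $90

    print(ev_thirder > stake, ev_halfer > stake)    # True False: only the thirder should wager $100
    ```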
  • Sleeping Beauty Problem
    This is a follow-up to my previous post.

    How do you condition on such a thing? What values do you place into Bayes' theorem?

    P(Heads|Questioned)=P(Questioned|Heads)∗P(Heads) / P(Questioned)
    Michael

    In the case of Sue's selection to participate in the lottery study, we have

    P(Heads|Selected)=P(Selected|Heads)∗P(Heads) / P(Selected)

    Since 1.5 participants are selected on average per fair coin toss, and 2 participants are selected when the coin lands on heads, P(Selected|Heads) / P(Selected) is 2/1.5 = 4/3.

    P(Heads|Selected) therefore is 4/3 * P(Heads) = (4/3)*(1/2) = 2/3

    Likewise, in the case of Sleeping Beauty

    P(Heads|Awoken) =(P(Awoken|Heads) / P(Awoken)) * P(Heads) = 2/1.5 * 1/2 = 2/3
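
    Spelled out numerically (a sketch; the '2 versus 1.5 on average' frequencies are the ones stipulated in my scenarios):

    ```python
    from fractions import Fraction

    p_heads = Fraction(1, 2)

    # Lottery study: 2 participants selected on a heads toss, 1 on a tails toss -> 1.5 on average
    updating_factor_sue = Fraction(2) / Fraction(3, 2)   # P(Selected|Heads)/P(Selected) = 4/3
    print(updating_factor_sue * p_heads)                 # P(Heads|Selected) = 2/3

    # Sleeping Beauty: 2 awakenings when the coin lands heads, 1 otherwise -> 1.5 on average
    updating_factor_sb = Fraction(2) / Fraction(3, 2)    # P(Awoken|Heads)/P(Awoken) = 4/3
    print(updating_factor_sb * p_heads)                  # P(Heads|Awoken) = 2/3
    ```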
  • Sleeping Beauty Problem
    The simplest "experiment" is just to imagine yourself in Sleeping Beauty's shoes.Michael

    Wasn't that rather the Cinderella problem?

    You're inviting us to imagine ourselves in Sleeping Beauty's shoes to support the halfer position. However, there are other ways to imagine her situation that can be argued to support the thirder position. Consider the following thought experiment:

    Suppose we introduce a character, Sue, who signs up for a lottery study. She joins a waitlist where participants are selected one or two at a time based on a coin toss: two participants when it lands heads and one when it lands tails. Upon being selected, they're given the option to buy a ticket for $100 that could potentially be worth $180 if the coin had landed heads (or else is worth nothing).

    The expected value of the ticket, and whether Sue should purchase it, depends on her credence about the coin toss. If Sue holds the halfer position, her credence is 1/2, and the expected value of the ticket is $90. Hence, she shouldn't buy it. However, if Sue holds the thirder position, her credence in the proposition that the coin landed heads is 2/3, making the ticket's expected value $120. Hence, she should buy it.

    Sue could argue for the thirder position as follows: if she has been selected from the waiting list, it is twice as likely that she has been chosen (together with another participant) as a result of the coin landing heads. As a frequent participant in the study, Sue would find, over time, that she profits if she always buys the ticket (an average net gain of $20 per participation), which corroborates the thirder position.

    To make this scenario more directly analogous to the original problem, let's imagine that Sleeping Beauty, upon each awakening, can not only express her belief about the coin toss but also place a bet on it. In the long run, she would profit from taking the bet as a thirder, further reinforcing this position.
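
    For those who prefer long-run frequencies, here is a minimal simulation sketch of Sue's study (the $100 ticket price and $180 payout are as stipulated above; every selected participant is assumed to buy the ticket):

    ```python
    import random

    def simulate(tosses=100_000):
        profits = []           # net outcome for each selected participant
        heads_selections = 0   # selections that followed a heads toss
        for _ in range(tosses):
            heads = random.random() < 0.5
            n_selected = 2 if heads else 1
            if heads:
                heads_selections += n_selected
            for _ in range(n_selected):
                profits.append((180 if heads else 0) - 100)
        return heads_selections / len(profits), sum(profits) / len(profits)

    frac_heads, avg_profit = simulate()
    print(frac_heads)    # ~0.67: two thirds of selections follow a heads toss
    print(avg_profit)    # ~+20: average net gain per participation when always buying
    ```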

    The pitfall of the 'imagine-yourself-in-her-shoes' argument lies in conflating Sue's perspective with the experimenter's by focusing only on Sue's situation before the coin toss. Eventually, everyone on the waitlist will be selected, just as Sleeping Beauty is guaranteed to be awoken at least once. Her credence that the coin will land heads is 1/2 before being put to sleep, and the same is true for the study participants before they're selected. However, once the coin has been tossed and they've been chosen, their credence that their tickets are worth $180 (and that the coin landed on heads) should be updated to 2/3. The same applies to Sleeping Beauty's credence that her current awakening was due to the coin landing heads.
  • Two envelopes problem
    There's the question of whether the "Bivariate Distribution Specification" reflects the envelope problem. It doesn't reflect the one on Wiki. The reason being the one on the wiki generates the deviate (A,A/2) OR (A,2A) exclusively when allocating the envelope, which isn't reflected in the agent's state of uncertainty surrounding the "other envelope" being (A/2, 2A).

    It only resembles the one on the Wiki if you introduce the following extra deviate, another "flip" coinciding to the subject's state of uncertainty when pondering "the other envelope":
    fdrake

    In the Wikipedia article, the problem is set up thus: "Imagine you are given two identical envelopes, each containing money. One contains twice as much as the other. You may pick one envelope and keep the money it contains. Having chosen an envelope at will, but before inspecting it, you are given the chance to switch envelopes. Should you switch?"

    Your setup for the bivariate distribution specification is a special case of the problem statement and is perfectly in line with it. Let's call our participant Sue. Sue could be informed of this specific distribution, and it would represent her prior credence regarding the contents of the initially chosen envelope. If she were then to condition the Expected Value (EV) of switching on the hypothetical situation where her initially chosen envelope contains $10, the EV for switching, in that particular case, would be positive. This doesn't require an additional coin flip. She either is in the (5, 10) case or the (10, 20) case, with equal prior (and equal posterior) probabilities in this scenario. However, this is just one hypothetical situation.

    There are other scenarios to consider. For instance, if Sue initially picked an envelope containing $5, she stands to gain $5 with certainty by switching. Conversely, if she initially picked an envelope with $20, she stands to lose $10 with certainty by switching.

    Taking into account all three possibilities regarding the contents of her initially chosen envelope, her EV for switching is the weighted sum of the updated (i.e. conditioned) EVs for each case, where the weights are the prior probabilities for the three potential contents of the initial envelope. Regardless of the initial bivariate distribution, this calculation invariably results in an overall EV of zero for switching.

    This approach also underlines the flaw in the popular argument that, if sound, would generate the paradox. If we consider an initial bivariate distribution where the potential contents of the larger envelope range from $2 to $(2^m) (with m being very large) and are evenly distributed, it appears that the Expected Value (EV) of switching, conditioned on the content of the envelope being n, is positive in all cases except for the special case where n = 2^m. This would suggest switching is the optimal strategy. However, this strategy still yields an overall EV of zero because in the infrequent situations where a loss is guaranteed, the amount lost nullifies all the gains from the other scenarios. Generalizing the problem in the way I suggested illustrates that this holds true even with non-uniform and unbounded (though normalizable) bivariate distributions.
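
    This is easy to check numerically. Here is a sketch, assuming a uniform prior over the pairs (2^k, 2^(k+1)) for k = 1, ..., m-1, with either member of the dealt pair equally likely to be the chosen envelope:

    ```python
    from fractions import Fraction

    def switching_evs(m=20):
        pairs = [(2**k, 2**(k + 1)) for k in range(1, m)]
        p_pair = Fraction(1, len(pairs))

        # P(chosen envelope contains n), for every possible content n
        p_n = {}
        for small, large in pairs:
            p_n[small] = p_n.get(small, 0) + p_pair * Fraction(1, 2)
            p_n[large] = p_n.get(large, 0) + p_pair * Fraction(1, 2)

        # EV of switching conditional on the chosen envelope containing n
        ev_given_n = {}
        for n in p_n:
            joint_gain = 0
            for small, large in pairs:
                if n == small:    # other envelope holds 2n: gain +n
                    joint_gain += p_pair * Fraction(1, 2) * n
                elif n == large:  # other envelope holds n/2: gain -n/2
                    joint_gain -= p_pair * Fraction(1, 2) * Fraction(n, 2)
            ev_given_n[n] = joint_gain / p_n[n]

        overall_ev = sum(p_n[n] * ev_given_n[n] for n in p_n)
        return ev_given_n, overall_ev

    ev_given_n, overall_ev = switching_evs()
    print(all(ev > 0 for n, ev in ev_given_n.items() if n != 2**20))  # True: positive except at the top
    print(ev_given_n[2**20])  # -524288: the guaranteed loss when n = 2^m
    print(overall_ev)         # 0: the conditional EVs average out to zero
    ```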

    The normalizability of any suitably chosen prior distribution specification (which represents Sue's credence) is essentially a reflection of her belief that there isn't an infinite amount of money in the universe. The fallacy in the 'always switch' strategy is somewhat akin to the flaw in Martingale roulette strategies.
  • Two envelopes problem
    You can conclude either strategy is optimal if you can vary the odds (Bayes or nonconstant probability) or the loss function (not expected value). Like if you don't care about amounts under 20 pounds, the optimal strategy is switching. Thus, I'm only really interested in the version where "all results are equally likely", since that seems essential to the ambiguity to me.fdrake

    If we assume that all results are equally likely, the EV of switching given that the chosen envelope was seen to contain n is (2n + n/2)/2 - n = n/4, which is positive. Hence, whatever value n might be seen in the initially chosen envelope, it is irrational not to switch (assuming our only goal is to maximize EV). This gives rise to the paradox since if, after the initial dealing, the other envelope had been chosen and its content seen, switching would still be +EV.

    As I wrote, the prior probabilities wouldn't be assigned to the numbers (5,10,20), they'd be assigned to the pairs (5,10) and (10,20). If your prior probability that the gameshow host would award someone a tiny amount like 5 is much lower than the gigantic amount 20, you'd switch if you observed 10. But if there's no difference in prior probabilities between (5,10) and (10,20), you gain nothing from seeing the event ("my envelope is 10"), because that's equivalent to the disjunctive event ( the pair is (5,10) or (10,20) ) and each constituent event is equally likelyfdrake

    I did indeed first assign priors to the two cases—(5, 10) and (10, 20)—and only derivatively calculated priors regarding the possible contents of the first chosen envelope (or of the other one).

    Edit: then you've got to calculate the expectation of switching within the case (5,10) or (10,20). If you specify your envelope is 10 within case... that makes the other envelope nonrandom. If you specify it as 10 here and think that specification impacts which case you're in - (informing whether you're in (5,10) or (10,20), that's close to a category error. Specifically, that error tells you the other envelope could have been assigned 5 or 20, even though you're conditioning upon 10 within an already fixed sub-case; (5,10) or (10,20).

    The conflation in the edit, I believe, is where the paradox arises from. Natural language phrasing doesn't distinguish between conditioning "at the start" (your conditioning influencing the assignment of the pair (5,10) or (10,20) - no influence) or "at the end" (your conditioning influencing which of (5,10) you have, or which of (10,20) you have, which is totally deterministic given you've determined the case you're in).

    I am not conditioning within an already fixed sub-case. Seeing that there is $10 in one envelope still leaves it open that there might be $5 or $20 in the other one. To be clear, you seem to be emphasizing the role of conditioning and timing, arguing that my observation of the $10 should not affect my decision about whether I'm in the (5,10) or (10,20) case. You appear to argue that, by the time I've observed the $10, the case is already determined, and I can't switch between cases based on my observation.

    However, it seems that you've misunderstood my use of Bayesian updating. I am not arguing that observing the $10 allows me to switch between cases. Rather, I'm saying that, given an observation of $10, I can update my beliefs about the probability of being in the (5,10) case or the (10,20) case.

    (Some further discussion with GPT-4)
    GPT-4

    The only suggestion I would make is to add a sentence or two at the end summarizing your main point, to ensure it is clear to fdrake and others following the discussion. Something like:

    "Essentially, I'm arguing that upon observing a value in the chosen envelope, we can revise our beliefs about the probabilities of the different cases, which subsequently alters the expected value of switching envelopes. This can resolve the two-envelope paradox without assuming a non-uniform prior or altering the objective of maximizing expected value."

    Other than that, your reply seems to cover all necessary points.

    PN:

    I think what ultimately resolves the paradox is that when we don't actually observe the content of the initially chosen envelope, then the EV of switching is the weighted average of the EVs of switching for all the suitably conditioned (updated) cases regarding the possible contents of the other envelope. And this EV is zero. Is that clear?

    GPT-4

    Yes, your point is clear.

    When we don't observe the content of the initially chosen envelope, the Expected Value (EV) of switching indeed comes down to the weighted average of the EV for each possible content of the other envelope. This, however, requires an important assumption: the weights used in this average should represent our prior beliefs about the probabilities of each case (i.e., each possible content of the other envelope). The probabilities have to be updated (conditioned) based on the information available, which in this case is none as we don't observe the content of the initially chosen envelope.

    If these conditions are met, and given that one envelope has exactly double (or half) the money of the other, the overall EV of switching is indeed zero, which aligns with our intuition that without any extra information, both envelopes are equally likely to contain the larger amount.

    PN:

    About your remark that "the probabilities have to be updated (conditioned) based on the information available, which in this case is none as we don't observe the content of the initially chosen envelope," what I am rather assuming is that, for whatever content n of the initially chosen envelope, the EV of switching in that particular case, conditionally on our being in that case, is equal to the EV of switching calculated by means of Bayesian updating if we were to gain knowledge that we are in that case. Hence, it is not being assumed that we actually have the information.

    GPT-4

    Ah, I see what you're getting at now. You're right. I apologize for misunderstanding your point.

    In the case of the Two Envelopes Problem, you are considering the expected value of switching under each possible value of the initially chosen envelope (i.e., you're taking a "case by case" approach) and then averaging these values according to their prior probabilities. In other words, you are considering hypothetical situations in which you would know the value of the initially chosen envelope, even though you do not actually observe its content.

    This approach effectively evaluates the average EV of switching without actually knowing the content of the initially chosen envelope, but considering all possible outcomes.

    To rephrase and clarify: The expected value (EV) of switching needs to be calculated conditionally for each possible outcome. Then, these conditional EVs are averaged over, using the prior probabilities associated with each outcome (not necessarily uniform). As long as the expected values of switching are calculated properly, taking into account the specific conditions of each case, the overall expected value of switching will turn out to be zero, regardless of the specific shape of the prior distribution. This reflects the intuitive idea that without additional information, switching or not switching should yield the same overall expected outcome, hence resolving the paradox.
  • Gender is a social construct, transgender is a social construct, biology is not
    [...]This battle you define is therefore one over authority, meaning it is a political battle between the progressives and the orthodox (lower case), but it is not, as you claim, just a foolish error by the transexuals in not appreciating the old rule that sex and gender correlate. They wish to overthrow that old ruleHanover

    This is a very enlightening analogy.
  • Exploring the artificially intelligent mind of GPT4
    Seems to me that one of the big players who’s completely failed to catch this train, is Amazon. I’ve been using Alexa devices for about eighteen months, and they’re pretty lame - glorified alarm clocks, as someone said.Wayfarer

    They are in hot pursuit: Amazon plans to rework Alexa in the age of ChatGPT
  • Two envelopes problem
    Nevertheless, if they observe n=10 in the first envelope, I still think there's a problem with assigning a probability distribution on the values (5, 20) in the other envelope. This is because that stipulates there being three possible values in the envelopes combined; (5, 10, 20); whereas the agent knows only two are possible. [...]fdrake

    Your assertion that 'only two values are possible' for the contents of the envelopes in the two-envelope paradox deserves further exploration. If we consider that the potential amounts are $(5, 10, 20), we might postulate some prior probabilities as follows:

    P_1 = P(a) = P(($5, $10)) = 3/4,
    P_2 = P(b) = P(($10, $20)) = 1/4,

    which translates into priors for the unopened envelope:

    P_3 = P(A) = P(($5)) = 3/8,
    P_4 = P(B) = P(($10)) = 1/2,
    P_5 = P(C) = P(($20)) = 1/8.

    This distribution could reflect an informed guess about Joe, the envelope-filler, who is more likely to risk a smaller rather than a larger amount.

    Suppose Ann chooses an envelope. If it contains either $5 or $20, she can unambiguously update her priors to 1 and 0, or 0 and 1, respectively. The decision to switch or not becomes trivial. If, however, her envelope contains $10, she must update her beliefs about the contents of the other envelope using Bayes' theorem:

    P_updated(A) = P_updated((unseen=$5)) = P((unseen=$5) | (seen=$10)) = (1 * 3/8) / (1/2) = 3/4.

    Given this posterior, if Ann sees $10 in her envelope, the expected value (EV) for switching is negative:

    (3/4)$5 + (1/4)$20 - $10 = -$1.25.

    Therefore, she should retain her $10, as her prior for Joe having included $20 is sufficiently low. Regardless, before she inspects the second envelope, both outcomes ($5 or $20) remain possible.

    If we return to the original problem scenario (addressing @Michael's concern), where the first envelope remains sealed, the initial value probabilities become (3/8, 1/2, 1/8) for $5, $10, and $20 respectively. This gives an initial expected value of:

    3/8 * $5 + 1/2 * $10 + 1/8 * $20 = $9.375.

    The expected value if Ann switches relies on the weighted sum of the expected values for the unopened envelope, conditional on the potential contents of the chosen envelope. As choices of $5 and $20 guarantee $10 in the other envelope, while a choice of $10 leads to an expected value of $8.75 for the other envelope, this calculates to:

    3/8 * $10 + 1/2 * $8.75 + 1/8 * $10 = $9.375. (Lo and behold!)
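
    Since there are quite a few numbers above, here is a short sketch that reproduces them (using the 3/4 and 1/4 priors stipulated for the two pairs Joe might have prepared):

    ```python
    from fractions import Fraction

    # Priors over the two possible pairs
    p_pair = {(5, 10): Fraction(3, 4), (10, 20): Fraction(1, 4)}

    # Derived priors over the contents of either envelope
    p_value = {5: Fraction(3, 8), 10: Fraction(1, 2), 20: Fraction(1, 8)}

    # Posterior that the unseen envelope holds $5, given that the chosen one holds $10:
    # P(pair=(5,10) and chosen=10) / P(chosen=10)
    p_unseen_5 = p_pair[(5, 10)] * Fraction(1, 2) / p_value[10]
    print(p_unseen_5)  # 3/4

    # EV of switching when $10 is seen
    ev_switch_given_10 = p_unseen_5 * 5 + (1 - p_unseen_5) * 20 - 10
    print(ev_switch_given_10)  # -5/4, i.e. -$1.25

    # Sealed-envelope case: EV of keeping the chosen envelope vs. EV of switching
    ev_keep = sum(p * v for v, p in p_value.items())
    ev_other_given_10 = p_unseen_5 * 5 + (1 - p_unseen_5) * 20   # $8.75
    ev_switch = p_value[5] * 10 + p_value[10] * ev_other_given_10 + p_value[20] * 10
    print(ev_keep, ev_switch)  # 75/8 and 75/8, i.e. $9.375 either way
    ```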
  • Two envelopes problem
    And given that the larger number is twice the value of the smaller number, the probability that the other side is half the value is 1/2 and the probability that the other side is twice the value is 1/2.

    Which step in this line of reasoning do you disagree with?
    Michael

    There is nothing there that I disagree with. But I don't think the paradox arises if the values of the two envelopes are stipulated in advance ($10 and $20, say). The paradox arises when we stipulate that the probability of the second envelope containing the larger amount is 1/2 and is not conditionally dependent on the value of the initially chosen envelope. In the example that you give, the probability of the second envelope containing $20 conditionally on the first envelope containing $10 is 1 (and vice versa).
  • Two envelopes problem


    Indeed, I concur that the paradox can also manifest when the option to switch envelopes is offered prior to opening the initially chosen one. The resolution I (and @sime also, I think) proposed also applies in this scenario. The apparent rationality of switching, in this case, is predicated on the concept that the Expected Value (EV) of the decision is expressed as (10n + n/10)/2 - n, a value that remains positive irrespective of n. This line of thought, however, is based on the assumption that the probabilities for the second envelope containing either 10n or n/10 are independent of the value of n.

    If we adjust this assumption to reflect that these probabilities are indeed conditional upon n (in the manner that Bayesian updating with an informative prior would suggest), then it becomes plausible to hypothesize — and likely not too challenging to demonstrate — that the EV of switching remains zero.
  • Exploring the artificially intelligent mind of GPT4
    Thanks! Actually as far as I know, it’s still ChatGPT - I’m signing in via OpenAI although whether the engine is the same as GPT-4, I know not. Also appreciate the ref to Haugeland.Wayfarer

    Unless you are subscribing to ChatGPT Plus (for $20 per month), it's GPT-3.5 you have access to. When you subscribe to ChatGPT Plus, you can then select the GPT-4 model when you start a new conversation. You can also interact with another version of GPT-4 for free by using Microsoft's new Bing through the Edge browser.
  • Exploring the artificially intelligent mind of GPT4
    It might by chance find a correct reference. But Equally it might make up a new reference.Banno

    In my experience, GPT-3.5 is much more liable to make up references whenever there is any sort of gap in its knowledge. GPT-4 very seldom does so when the topic under discussion isn't very arcane and there is a sufficient amount of material in its training data for it to have been able to memorise it and/or extract the gist of it. GPT-4 is much more likely to spontaneously acknowledge that it doesn't know something. The big liability of LLMs is that, in those cases where (1) their knowledge and understanding of a topic is tenuous or nebulous, and (2) they end up making stuff up about it, they are quite unable to become aware on their own that the opinion they expressed isn't derived from external sources. They don't know what it is that they know and what it is that they don't know. Their training data isn't directly accessible to them and they don't have meta-cognitive strategies that might enable them to distinguish recall from confabulation.
  • Two envelopes problem
    A Bayesian analysis reveals that the culprit of the paradox is the assignment of a non-informative prior to the distribution that generates the envelopes contents.sime

    My understanding and resolution of the paradox is somewhat aligned with this perspective. The paradox was first introduced to me about 30 years ago by a friend who was a professor in statistics at UQAM (Université du Québec à Montréal). After further thought (and only after I was introduced to Bayes' theorem) I realized that the situation where it appears beneficial to switch the initially chosen envelope arises when we make an unrealistic assumption: that our belief about the possible envelope contents is represented by a prior distribution that is both uniform and unbounded.

    However, given any reasonably well-defined (and bounded) prior, opening one envelope may indeed inform our decision to either switch or stick to the original choice. This decision would be guided by the Expected Value (EV) of switching, which in turn is dictated by the revised probabilities concerning the potential contents of both envelopes. Notably, there's only one unique amount in the initially chosen envelope that would result in a zero EV for switching, rendering the choice indifferent.

    The paradox seems to emerge from the assumption that opening the initial envelope yields equal probabilities for the second envelope containing either 10n or n/10, where n is the amount in the first one, irrespective of the value of n. This is where I believe the core misunderstanding lies.
  • Exploring the artificially intelligent mind of GPT4
    Maybe Heidegger got it from there.Jamal

    In his paper, 'Truth and Rule Following', John Haugeland artfully fuses Kant's concept of 'synthetic a priori' and the phenomenological/existential idea of 'always already there'. Although he does not extensively use these specific terminologies – I would need to revisit the paper for precise references – it's evident that both concepts heavily influence his ideas about 'constitutive rules' and 'existential commitments to constituted domains'. Haugeland acknowledges his indebtedness to both Kant (via Sellars) and Heidegger. I enjoyed how 'Wayfarer' engaged 'ChatGPT' (presumably GPT-4) to elaborate on this intricate connection. In future conversations, I might be inclined to explore with GPT-4 to what extent those notions highlight features of its own artificial mind.
  • Donald Trump (All General Trump Conversations Here)
    Imagine feeling obliged to defend this degenerate.Mikie

    Some people find his performances merely laughable, others find them merely repugnant. Another false dichotomy.
  • Transgenderism and identity
    My point here is that this is not some sort of performance/act - this is genuine.EricH

    The nativist view posits that individuals are born either biological women or men, with the expectation that their gender expressions naturally align with their biological sexes. While some nativists acknowledge that gender dysphoria may be innate, they often label it as an anomaly.

    On the other hand, the social constructivist perspective suggests that gender expressions merely reflect societal norms relating to sex. Some even argue that biological sexes, not just gender expressions, are socially constructed.

    However, I believe that both views are rooted in shared assumptions that generate a false dichotomy.

    Indeed, the modes of gender expression available within a society or social group are socially constituted, representing prevailing norms. The arbitrariness or justification of these norms can be as varied as those of other societal norms, such as laws, ethical principles, customs, and etiquette.

    Judith Butler's performative view, as I understand it, is not necessarily a radical social constructivist stance. Rather, it can be reconciled with the idea that innate biological tendencies guide individuals towards certain gender expressions during their "normal" development. However, this does not imply that the specific modes of these expressions are innate or biologically predetermined. The modes themselves still are provided and shaped by the individual's culture and society.

    When an individual's subjectivity or spontaneous gender expression conflicts with societal norms, labeling this as an anomaly can be seen as a form of naturalistic fallacy. The fact that a majority of individuals in a society naturally align with or enact a particular social norm does not, on its own, provide broad justification for that norm. This majority alignment, however, does underscore that maintaining the status quo or conformity can often be more convenient and less disruptive. Yet this is a limited form of justification, one that frequently neglects the needs or rights of individuals and minority groups.

    Such broad justifications can easily veer into oppressive territory, particularly when they are justified through appeals to biological essentialism, another manifestation of the naturalistic fallacy.
  • Transgenderism and identity
    So trans folks can stand on the universal stage, with the rest of us, as fellow actors of equal status and value.universeness

    Well put!
  • Transgenderism and identity
    Oh come on? Do you really think trans folks would go through the absolute trauma of surgery based transition as an 'act ........ of sorts?universeness

    To be fair, if Judith Butler is right—and I think she broadly is—the gender expression of trans people indeed is an act of sorts. But then, so is the gender expression of cisgender people.
  • How ChatGPT works.
    But much the same architecture. It's still just picking the next word from a list of expected words.Banno

    It is the exact same underlying architecture. But most of the model's cognitive abilities are emergent features that only arise when the model is sufficiently scaled up. Saying that large language models are "merely" picking the next word from a list just ignores all of those high-level emergent features. It pays no attention to the spontaneous functional organization achieved by the neural network as a consequence of its picking up and recombining, in a contextually appropriate and goal-oriented manner, the abstract patterns of significance and of reasoning that had originally been expressed in the training data (which is, by the way, strikingly similar to the way human beings learn how to speak and reason through exposure and reinforcement).
  • How ChatGPT works.
    There are some things I don't get. I ran some jokes by it, and it consistently ranked the trash jokes as bad, and the hilarious jokes as hilarious. And it would give a good analysis of why the joke worked (or didn't). How can a random process produce those results?RogueAI

    @Hanover may have used GPT-3.5 rather than GPT-4. There is a significant difference in cognitive abilities between them.

    @Banno Thanks for linking to this fantastic article! I'll read it as soon as I can.
  • Exploring the artificially intelligent mind of GPT4
    I tested the Bing AI in the following way: I have a low-priority mathematics page on Wikipedia, so I asked Bing what is known of this particular subject? Now, there are a smattering of papers on the internet on this subject; what Bing supplied was the first introductory paragraphs of my webpage, word for word. That's all.jgill

    So, it appears that your page is indexed, Bing did a search using relevant keywords from your prompt, and this was the only hit. You could try to ask it to tell you what it knows about the subject without doing a web search and see what happens. You can also probe its understanding of the content of the article by asking questions about it.
  • What is a good definition of libertarian free will?
    I don't see any consistency between these two statements. If, following the laws of nature is a requirement for determinism, and "stochastic" refers to actions describable by probability rather than law, then it would definitely be true that the stochasticity of quantum indeterminacies supports the rejection of determinism.Metaphysician Undercover

    For sure, but libertarianism isn't the mere rejection of determinism. Libertarianism is the conjunction of two claims: (1) free will isn't consistent with determinism, and (2) human beings have free will. It is not sufficient that determinism be false for free will to be possible, according to libertarians; it is merely a necessary condition. The libertarian philosopher Robert Kane distinguishes two tasks that he calls the ascent problem (proving incompatibilism) and the descent problem (making sense of libertarian free will), and stresses that the second one is the more difficult:

    "Abstract arguments for incompatibilism that seem to get us to the top of the mountain are not good enough if we can’t get down the other side by making intelligible the incompatibilist freedom these arguments require. The air is cold and thin up there on Incompatibilist Mountain, and if one stays up there for any length of time without getting down the other side, one’s mind becomes clouded in mist and is visited by visions of noumenal selves, nonoccurrent causes, transempirical egos, and other fantasies." pp.13-14 in The Significance of Free Will
  • What is a good definition of libertarian free will?
    But until then, what do you make of unconscious determinants of free decisions in the human brain?Michael

    Most of the discussions that stem from Libet's experiments seem flawed to me for reasons that I had spelled out here.
  • What is a good definition of libertarian free will?
    Does determinism allow for stochastic quantum mechanics?Michael

    It doesn't, but quantum indeterminacies are often seen as providing no help to libertarians. That is also my view: my focus is on agent causation, and having our decisions rendered simply stochastic and unpredictable hardly restores our responsibility for them qua agents.
  • What is a good definition of libertarian free will?
    Until anyone can show that an action is not self-generatedNOS4A2

    Lots of philosophers, and an even larger number of scientists, believe that they have shown exactly that (or that it is obvious and that denying it can only amount to a form of pre-scientific mysterianism). I don't believe anyone has actually shown that, but that is indeed the root of the disagreement.
  • What is a good definition of libertarian free will?
    Is this a difference that contradicts determinism?

    If someone asks me how I beat some opponent at some computer game, I can describe it in such terms as predicting their moves, using attacks that they’re weak against, etc., or I can describe it as pressing the right buttons at the right times. Your approach to free will seems similar to the first kind of explanation and the determinist’s approach seems similar to the second kind of explanation. But they’re not at odds. They’re just different ways of talking.

    So I would think that if you accept the underlying determinism then your position is compatibilist, not libertarian.
    Michael

    I accept the low-level determinism but deny that it, together with some thesis of supervenience, entails high-level determinism. Broadly, we may say that the doctrine of determinism entails that all the facts about the past together with the laws of nature uniquely determine the future. But I think that whenever we determine our own actions on the basis of our reasons for doing them (and likewise for the beliefs that we endorse), then, in those cases, the facts about the past and the laws of nature are irrelevant to the determination of our actions and beliefs as characterized in high-level terms.

    In order to make sense of this, it is necessary to delve a little deeper into the arguments that make the contrary thesis seem compelling (and that Jaegwon Kim has formalized as a causal exclusion argument). And it is also necessary to elucidate with some care the notion of possibility that is at issue in Harry Frankfurt's principle of alternative possibilities (PAP). When both of those tasks have been accomplished, it becomes easier to see how an agent-causal libertarianism can be reconciled with merely physical determinism. As I said to SophistiCat, I intend to recruit GPT-4's assistance for rewriting my paper on this topic in order to improve its readability.
  • What is a good definition of libertarian free will?
    I know little about computers, but on the face of it seems to me that, even if the CPU maps inputs to outputs in the same way whatever program it is running, the actual inputs and outputs themselves are not the same.Janus

    The mapping being the same means that the process is deterministic and insensitive to the high-level requirements of the word processing task. It is, we may say, the specific goal-oriented structure of the word-processing program (i.e. its design and functionality) that ensures that, when this program is loaded in memory, the user's input command to change the column width causes the words to redistribute themselves appropriately. The input-to-output mapping effected by the CPU on discrete chunks of 64 bytes doesn't explain this high-level behavior of the word processor.

    And likewise with our high-level acculturated proclivity to organize our behaviours in a goal-oriented fashion, in relation to the low-level functioning of our brains and neurons. The main difference, of course, is that, as a tool, the word-processor's function is pre-programmed by us and remains fixed over time. We, on the other hand, are able to assess our own ultimate goals in accomplishing any task and revise them when appropriate. This ability that we have to reassess and modify our own goals is an essential part of the explanation (and justification) of our behaviours.

Pierre-Normand
