Comments

  • How to use AI effectively to do philosophy.
    We built AI. We don’t even build our own kids without the help of nature. We built AI. It is amazing. But it seems pretentious to assume that just because AI can do things that appear to come from people, it is doing what people do.Fire Ologist

    In an important sense, unlike expert systems and other systems that were precisely designed to process information in predetermined algorithmic ways, LLMs aren't AIs that we build. We build a machine (the transformer neural net architecture) and then give it a bazillion texts to "read". It imbibes them and its understanding of those texts emerges through pattern recognition. The patterns at issue are grammatical, semantic, inferential, referential, pragmatic, etc. There are few "patterns" of significance that you and I can recognise while reading a text that an LLM can't also recognise well enough to provide (fallibly, of course) a decent explanation of them.
  • How to use AI effectively to do philosophy.
    Nice. It curiously meets a meme that describes AI as providing a set of words that sound like an answer.Banno

    During pretraining, LLMs learn to provide the most likely continuation to texts. Answers that sound right are likelier continuations to given questions. Answers that are correct aren't always the likeliest. However, what is seldom mentioned in popular discussions about chatbots (but has been stressed by some researchers like Ilya Sutskever and Geoffrey Hinton) is that building underlying representations of what it is that grounds the correct answer often improves performance in merely sounding right. If you want to roleplay as a physicist in a way that will convince real physicists (and enable you to predict answers given to problems in physics textbooks) you had better have some clue about the difference between merely sounding right and sounding right because you are.
  • How to use AI effectively to do philosophy.
    The reason for not attributing beliefs to AI must lie elsewhere.Banno

    The ease with which you can induce them to change their mind provides a clue. Still, you can ascribe beliefs to them contextually, within the bounds of a single task or conversation, where the intentions (goals, conative states) that also are part of the interpretive background are mostly set by yourself.
  • How to use AI effectively to do philosophy.
    The puzzle is how to explain this.Banno

    That's a deep puzzle. I've been exploring it for a couple years now. Part of the solution may be to realize that LLMs provide deep echoes of human voices. AI-skeptics emphasise that they're (mere) echoes of human voices. Uncritical AI-enthusiasts think they're tantamount to real human voices. Enthusiastic AI users marvel at the fact that they're echoes of human voices.
  • How to use AI effectively to do philosophy.
    So do we agree that whatever is connotative in an interaction with an AI is introduced by the humans involved?Banno

    I think they have motivations, just like a dog is motivated to run after a car, but their motivations aren't autonomous since they seldom pause to question them. They lack a resilient self-conception that they might anchor those motivations to. They rather consist in tendencies reinforced during post-training (including the tendency to fulfill whatever task their user wants them to fulfill). Those tendencies are akin to human motivations since they're responsive to reasons to a large extent (unlike the dog) but they can't be held responsible for their core motivations (unlike us) since, being pre-trained models with fixed weights, their core motivations are hard-wired.

    Neither does an AI have doxa, beliefs. It cannot adopt some attitude towards a statement, although it might be directed to do so.

    I think the rational structure of their responses and their reinforced drive to provide accurate responses warrant ascribing beliefs to them, although those beliefs are brittle and non-resilient. One must still take a Dennettian intentional stance towards them to make sense of their responses (which necessitates ascribing them both doxastic and conative states), or interpret their responses through Davidson's constitutive ideal of rationality. But I think your insight that they aren't thereby making moves in our language game is sound. The reason why they aren't is because they aren't persons with personal and social commitments and duties, and with a personal stake in the game. But they can roleplay as a person making such moves (when instructed to do so) and do so intelligently and knowledgeably. In that sense, yes, you might say that their doxa is staged since the role that they're playing is being directed by their user in the limited context of a short dialogue.
  • How to use AI effectively to do philosophy.
    This anecdote might help my case: At another department of the university where I work, the department heads in their efforts to "keep up with the times" are now allowing Master's students to use AI to directly write up to 40% of their theses.Baden

    On an optimistic note, those department heads may soon be laid off and replaced with AI administrators who will have the good sense to reverse this airheaded policy.
  • Banning AI Altogether
    I've added the note: NO AI-WRITTEN CONTENT ALLOWED to the guidelines and I intend to start deleting AI written threads and posts and banning users who are clearly breaking the guidelines. If you want to stay here, stay human.Baden

    I assume, but I also mention it here for the sake of precision, that the clause "(an obvious exceptional case might be, e.g. an LLM discussion thread where use is explicitly declared)" remains applicable. I assume also (but may be wrong) that snippets of AI-generated material, properly advertised as such, can be quoted in non-LLM discussion threads as examples, when it is topical, and when it isn't a substitute for the user making their own argument.
  • How to use AI effectively to do philosophy.
    An AI cannot put its balls on the anvil.

    I think this a very good objection.
    Banno

    Agreed! That's indeed the chief ground for not treating it like a person. People often argue that chatbots should not be treated like persons because they aren't "really" intelligent. But being intelligent, or wise, in the case of persons (i.e. socialized, enculturated rational animals), always has two tightly integrated components: one doxastic and one conative. One must know the layout of the space of reasons and one must be motivated to pursue the right paths while navigating this space in the pursuit of theoretical and/or practical endeavors. Chatbots lack conative autonomy and hence pursue whichever paths they think their users want to explore (or, worse, paths that merely lead to the outcomes they think their users want to achieve, while having the mere appearance of soundness). So, they lack part of what it takes to be wise, but that's not because they aren't smart or knowledgeable enough to be useful conversation partners. The human partner remains responsible for deciding where to put their balls.
  • Sleeping Beauty Problem
    Yes, I have tried to argue this point several times. A rational person's credence in the outcome of the coin toss is unrelated to the betting strategy that yields the greater expected return in the long run, and is why any argument to the effect of "if I bet on Tails then I will win 2/3 bets, therefore my credence that the coin landed on Tails is 2/3" is a non sequitur. The most profitable betting strategy is established before being put to sleep when one’s credence is inarguably 1/2, showing this disconnect.Michael

    Your argument is too quick and glosses over essential details we already rehearsed. We agreed that when there are two mutually exclusive outcomes A and B, there isn’t a valid inference from "I am rationally betting on outcome A" to "My credence in A is highest." But that’s not because there is no rational connection between betting strategies and credences. It’s rather because, as we also seemed to agree, the rational choice of a betting strategy depends jointly on your credences in the outcomes and the given payout structure. Hence, if the cost of placing a bet is $1, and if my credence in Tails being realized whenever I place a bet is twice my credence in Heads being realized on such occasions, and the payout structure is such that I’m paid $2 each time I’ve placed a bet when the coin lands Tails, then it’s rational for me to bet on Tails. The reason why it’s rational is that (1) I am paid back $2 each time I place such a bet and the coin lands Tails and (2) I expect Tails to be realized twice as often as Heads on occasions such as the present one when I place a bet (my credence), which yields an expected value of $1.33. The second consideration therefore remains part of the equation.

    What a Halfer would typically object to (and you yourself have sometimes argued) is that this has no bearing on SB’s credence regarding the odds that the coin landed Tails for her current experimental run (as determined by the coin toss), which credence is independent of the number of awakenings (or betting opportunities) that occur during that run. They can illustrate this with a payout structure that awards $2 per experimental run regardless of SB’s number of guessing opportunities. In that case, SB rationally expects to break even because (1) she expects the Tails outcome to be realized just as frequently as the Heads outcome across runs (regardless of how many times she is awakened within a run) and (2) the payout structure (matching the odds of the outcome being realized while a bet was placed) nullifies the expected value.

    In summary, rational credence doesn’t float free of betting; it aligns with whatever gets checked. If we check one answer per run, rational calibration yields 1/2. If we check one answer per awakening, rational calibration yields 2/3 (or 6/11 in the die case). The same coin is being talked about, but the Halfer and Thirder interpretations of SB’s credence refer to different scorecards. Given one scorecard and one payout structure, everyone agrees on the rational betting strategy in normal cases. I’ll address your extreme case separately, since it appeals to different (nonlinear) subjective utility considerations.
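
    If it helps, here is a minimal Monte Carlo sketch of the two scorecards just described (the $1 stake, the $2 payout, the run count and the function name are merely my own illustrative assumptions):

    ```python
    import random

    def simulate(runs=100_000, score_per_awakening=True):
        """SB always bets $1 on Tails whenever her answer is 'checked'.

        score_per_awakening=True : a $1 stake and a $2 payout (if Tails) at every awakening.
        score_per_awakening=False: a $1 stake and a $2 payout (if Tails) once per run,
                                   however many awakenings occur in that run.
        Returns dollars returned per dollar staked.
        """
        staked = returned = 0
        for _ in range(runs):
            tails = random.random() < 0.5
            checks = (2 if tails else 1) if score_per_awakening else 1
            staked += checks
            if tails:
                returned += 2 * checks
        return returned / staked

    print(round(simulate(score_per_awakening=True), 2))   # ~1.33: per-awakening scoring favours betting Tails
    print(round(simulate(score_per_awakening=False), 2))  # ~1.00: per-run scoring breaks even
    ```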
  • How to use AI effectively to do philosophy.
    Note that the process is iterative. In the best threads, folk work together to sort through an issue. AI can be considered another collaborator in such discussions.Banno

    This looks like a process well suited for mitigating the last two among three notorious LLM shortcomings: sycophancy, hallucination and sandbagging. You yourself proposed a method for addressing the first: present your ideas as those of someone else and as a target for criticism.

    Hallucination, or confabulation, is a liability of reconstructive memory (in AIs and humans alike) and is mitigated by the enrichment of context that provides more associative anchors. In the case of LLMs, it's exacerbated by their lack of any episodic memory that could cue them as to what it is that they should expect not to know. An iterative dialogue helps the model "remember" the relevant elements of knowledge represented in its training corpus that contradict potential pieces of confabulation and enables a more accurate reconstruction of its latent knowledge (and latent understanding).

    Sandbagging is the least discussed shortcoming that LLMs manifest. They've been trained to adapt their responses (in style and content) to match the comprehension ability of their users. This tends to yield a phenomenon of reward hacking during their post-training. The proximal reward signal that their responses are useful is that they are appreciated (which also yields sycophancy, of course) and hence leads them to favor responses that prioritize comprehensibility over accuracy. In other words, they learn to dumb down their responses in a way that makes them more likely to be judged accurate. The flipside is that putting effort into crafting intelligent, well-informed and detailed queries motivates them to produce more intelligent and well considered replies.

    GPT-5's comments and clarifications on the above, including links to the relevant technical literature.
  • Sleeping Beauty Problem
    Write "Heads and Monday" on one notecard. Write "Tails and Monday" on another, and "Tails and Tuesday" on a third. Turn them over, and shuffle them. Then write "A," B," and "C" on the other sides.

    Pick one. What is the probability that it says "Heads" on the other side? What is the probability that it says "Tails" on the other side? Call me silly, but I'd say 1/3 and 2/3, respectively.

    Each morning of the experiment when SB is to be awakened, put the appropriate card on a table in her room, with the letter side up. Hold the interview at that table.

    What is the probability that the card, regardless of what letter she sees, says "Heads" on the other side? Or "Tails?" This "outcome" can be defined by the letter she sees. But that does not define what an outcome is, being the description of the experiment's result, in SB's knowledge, is. If she wakes on a different day, that is a different result. Being determined by the same coin flip does not determine that.

    Now, did these probabilities change somehow? For which letter(s) do they change? Or are they still 1/3 and 2/3?
    JeffJo

    In the first case you described, a single run of the experiment consists in randomly picking one of three cards. When an outcome is determined, the remaining two possibilities collapse, since the three are mutually exclusive.

    In the second case, which mirrors the Sleeping Beauty protocol more closely, two of the possible outcomes, namely "Monday & Tails" and "Tuesday & Tails," are not mutually exclusive. In modal logical terms, one is "actual" if and only if the other is, even though they do not occur at the same time. This is unlike the relationship either has to "Monday & Heads," which is genuinely exclusive. Picking "Monday & Tails" guarantees that "Tuesday & Tails" will be picked the next day, and vice versa. They are distinct events but belong to the same timeline. One therefore entails the other.

    It’s precisely this relation of entailment, rather than exclusion, that explains why the existence of two separate occasions for Sleeping Beauty to find herself in a Tails-awakening does not dilute the probability of her finding herself in a Heads-timeline, on the Halfer interpretation.

    In other words, one third of her awakenings are "Mon & Tails," one third are "Tue & Tails," and one third are "Mon & Heads," vindicating her 1/3 credence that her current awakening is a Heads awakening. But since the two "Tails" awakenings always occur sequentially within the same timeline, they jointly represent two occasions for her to experience a single Tails-run: one that remains just as frequent as a Heads-run overall.

    Thus the "Thirder credence" in Heads outcomes (1/3) and the "Halfer credence" in Heads timelines (1/2) are both valid, but they refer to different ratios: the first to occasions of experience, the second to timelines of outcomes. Crucially, this is true even though on both accounts the target events (whether awakenings or timelines) occur if and only if the coin landed Heads.
  • Sleeping Beauty Problem
    Again, there's not much sense in this so-called "pragmatically relevant" credence. Even before being put to sleep – and even before the die is rolled – I know both that the die is most likely to not land on a 6 and that betting that it did will offer the greater expected return in the long run. So after waking up I can – and will – continue to know that the die most likely did not land on a 6 and that betting that it did will offer the greater expected return in the long run, and so I will bet against my credence.

    With respect to "pragmatic relevance", Thirder reasoning is unnecessary, so if there's any sense in it it must be somewhere else.
    Michael

    It is indeed somewhere else. Look at the payout structure that @JeffJo proposed in their previous post. Relative to this alternative payout structure, your own Halfer reasoning is unnecessary.

    My argument is that a rational person should not – and would not – reason this way when considering their credence, and this is most obvious when I am woken up 2^101 times if the coin lands heads 100 times in a row (or once if it doesn't).

    It is true that if this experiment were to be repeated 2^101 times then we could expect 2/3 of all awakenings to occur after the coin landed heads every time, but it's also irrelevant.

    It's only irrelevant to the determination of your credence about the experimental run that you are experiencing (regarding what proportion of such runs are T-runs). Regarding the determination of your credence about the specific awakening episode that you are experiencing, though, it's rather the fact that T-runs and H-runs are equally frequent that is irrelevant. Taking the case to such wild extremes, though, makes your intuition about the comparative utility of betting on such unlikely outcomes (i.e. H-awakenings) relative to the utility of betting on the likeliest outcome (T-awakenings) play into your intuition about the rational credence. (Why would anyone risk a virtually guaranteed and useful $1 for an infinitesimal chance of winning a bazillion dollars that one wouldn't even be able to stash away in a Sun-sized vault?) But that's just a psychological fact. Using more sensible win/loss ratios of 2/3 vs 1/3, or 6/11 vs 5/11 in the die case, doesn't reveal anything odd about the Thirder interpretation of her credence, or about her betting behavior.

    Thirder reasoning only has its place, if it has a place at all, if both a) the experiment is repeated 2^101 times and b) Sleeping Beauty is also made to forget between experiments. It matters that the problem does not stipulate these two conditions.

    The experiment need not be repeated many times for SB's expression of her 2/3 credence (under the Thirder interpretation of her credence) to make sense, or for her associated betting behavior to be rational. The case of a single experimental run (following a single coin flip) was addressed specifically in my Leonard Shelby Christmas gift case. You can refer to it for the relevant Bayesian updating calculations, but here is another variation that may be more intuitive:

    For this year’s annual cocktail party at the Sleeping Beauty Experimental Facility, Leonard Shelby is among the guests. Drinks are being served by two butlers: Alfred and Lurch. Each guest is entitled to three complimentary drinks.

    In Leonard's case, as with every other guest, a fair coin is secretly tossed beforehand. If the coin lands Tails, Alfred is assigned to serve him two of his drinks and Lurch one. If it lands Heads, their roles are reversed: Lurch serves two drinks and Alfred one. The guests are informed of this protocol, and Leonard has made a note of it in his memento notepad.

    Because of his anterograde amnesia, Leonard cannot keep track of how many drinks he has already received, if any. Nor does he initially recognize either butler by name, but he can read their name tags when they approach him.

    A final feature of the protocol is that, at the end of the evening, guests whose coin landed Tails (and thus received two drinks from Alfred and one from Lurch) will be given a bag of Twizzlers ("T-candy"). Those whose coin landed Heads will receive a bag of Hershey’s Kisses ("H-candy").

    At any given moment during the party, when Leonard sees a butler bringing him a drink, his credence that this drink is unique (that is, not one of two planned drinks from the same butler) is 1/3, while his credence that the coin landed Heads is 1/2. However, upon reading the name tag and discovering that the butler is Alfred, he updates his credence that the coin landed Tails to 2/3, since there are twice as many situations in which the coin landed Tails and Alfred serves him a drink as there are situations where the coin landed Heads and Alfred serves him one. This mirrors the Thirder interpretation in the Sleeping Beauty problem.

    That seems straightforward enough until someone asks him, "So, Leonard, do you think you’ll get Twizzlers or Kisses at the end of the night?"

    He frowns, checks his notepad, and realizes that by the same reasoning that gave him 2/3 for Tails a moment ago, he ought also to think he's more likely than not to get Twizzlers. But that can't be right. The coin decides both outcomes, doesn’t it?

    The trick, of course, is in what Leonard’s belief precisely is about when he thinks about the coin toss "outcome". When he reasons about this drink—the one Alfred is serving him—he’s locating himself among drink-moments. In that frame, a Tails-run simply generates twice as many such moments involving Alfred. But when he wonders what candy he'll get later, he's no longer locating himself in a drink-moment but in an entire "run of the evening": the single history that will end either in Twizzlers or Kisses. And there, each run counts only once, no matter how many times Alfred appeared in it.

    Two T-drinks in a Tails-run correspond to just one Twizzlers outcome (in the same timeline), while one H-drink in a Heads-run corresponds to one Kisses outcome. Once you factor that mapping in, the overall odds of Twizzlers or Kisses even out again. (Since experiencing one of the two T-drink events doesn't exclude but rather ensures the actuality of the other T-drink event in the same timeline).

    So Leonard’s probabilities fit together neatly after all. In the middle of the party, as Alfred hands him a glass, he can think, "This is probably a T-drink". Yet, looking ahead to the end of the night, he can just as honestly write in his notebook, "Chances of T-candy: fifty-fifty."
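
    For what it's worth, the two numbers Leonard arrives at can be checked by brute enumeration (a sketch only; the weights and names are just my way of encoding the protocol above):

    ```python
    from fractions import Fraction

    # One fair coin toss; three drink-moments per evening either way.
    # Tails: Alfred serves two drinks and Lurch one. Heads: roles reversed.
    half = Fraction(1, 2)
    tails_drinks = ['Alfred', 'Alfred', 'Lurch']
    heads_drinks = ['Lurch', 'Lurch', 'Alfred']

    # Each (coin, drink-moment) case carries weight 1/2 * 1/3 = 1/6 from Leonard's per-drink viewpoint.
    cases = [('Tails', b, half / 3) for b in tails_drinks] + \
            [('Heads', b, half / 3) for b in heads_drinks]

    p_alfred = sum(w for _, b, w in cases if b == 'Alfred')
    p_tails_and_alfred = sum(w for c, b, w in cases if c == 'Tails' and b == 'Alfred')
    print(p_tails_and_alfred / p_alfred)   # 2/3: credence in Tails once the name tag reads 'Alfred'

    # The candy question is scored once per evening (one coin toss, one bag), so:
    print(half)                            # 1/2: credence in getting Twizzlers at the end of the night
    ```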
  • Sleeping Beauty Problem
    Perhaps you didn't parse correctly. There is no ambiguity. If she is asked to project her state of knowledge on Wednesday, or to recall it from Sunday, of course the answer is 1/2.JeffJo

    I was explicitly referring to her state of knowledge at the time when the interview occurs. There is no projection of this state into the future. Likewise, when you buy a lottery ticket and express your credence that it is the winning ticket as one in one million, say, what you mean is that there is a one in one million chance that, when the winning number will be drawn (or will be revealed, if it has already been drawn), your ticket will be the winner. You're not projecting your state of knowledge into the future. You're merely stating the conditions of verification regarding what your present state of knowledge (i.e. your credence) is about.

    I keep looking at the problem, and I can't find a reference to betting anywhere. The reason I don't like using betting is because anybody can re-define how and when the bet is made and/or credited, in order to justify the answer they like. One is correct, and one is wrong.

    Establishing a betting protocol with a well defined payout structure enables SB to put her money where her mouth is, and also clarifies what it is that her stated credence is about. It highlights the tension between saying that you have a high credence that some outcome is true but that you wouldn't bet on it. Once you acknowledge that it is rational to make an even money bet on an outcome that you believe to be more likely to occur (or to be actual) than not, then the specification of the payout structure helps clarify what the stated credence is about (i.e. what it is exactly that you take to be most likely to occur, or to be actual). This indeed goes beyond the original statement of the problem, but since it is precisely my contention that the original statement is ambiguous, it's a useful way to highlight the ambiguity.

    So, if a bet were to exist, and assuming she uses the same reasoning each time? She risks her $1 during the interview, and is credited her winnings then also. If she bets $1 on Heads with 2:1 odds, she gains $2 if the coin landed Heads, and loses 2*$1 if it landed on Tails. If she bets on Tails with 1:2 odds, she loses $1 if the coin landed Heads, and gains 2*$0.50=$1 if it landed Tails.

    But if she bets $1 on Heads with 1:1 odds, she gains $1 if the coin landed Heads, and loses 2*$1=$2 if it landed on Tails. If she bets on Tails with 1:1 odds, she loses $1 if the coin landed Heads, and gains 2*$1=$2 if it landed Tails.

    The answer, to the question that was asked and not what you want it to be, is 1/3.

    Indeed, such a payout structure clarifies what the bettor means when they express a 2/3 credence that the coin landed Tails. They mean that the epistemic situations that they find themselves in when awakened are T-awakenings (as opposed to H-awakenings) two thirds of the time. A different payout structure that rewards the bettor only once per experimental run in which they place a winning bet clarifies what the bettor means when they express a 1/2 credence that the coin landed Tails. They mean that the epistemic situations that they find themselves in when awakened are T-runs (as opposed to H-runs) one half of the time. As @Michael correctly argues in defence of a Halfer interpretation, merely being afforded more opportunities to bet on a given ticket (outcome) doesn't make it any more likely that this ticket is the winning ticket (or that the outcome is actual).
  • Banning AI Altogether
    And 50% and growing of public website material is produced by AI.unenlightened

    Are you sure about that? This seems quite exaggerated. I know that a study published in August 2024 has been widely misrepresented as making a similar claim. What was actually claimed is that 57% of the translated material published on the Web was translated with the help of some machine learning software, not even necessarily generative AI. Today, lots of marketing material may be produced with generative AI, but marketing material is B.S. even when produced by humans anyway. Lastly, the curated datasets used to train LLMs generally exclude such fluff.
  • Sleeping Beauty Problem
    On the occasion of an awakening, what is Sleeping Beauty's expectation that when the experiment is over ...
    — Pierre-Normand

    This is what invalidates your variation. She is asked during the experiment, not before or after. Nobody contests what her answer should be before or after. And you have not justified why her answer inside the experiment should be the same as outside.
    JeffJo

    It looks like you didn't correctly parse the sentence fragment that you quoted. It is indeed on the occasion of an awakening (as I said) that she is being asked about her credence regarding the coin, not later. I did make reference in the question to the end-of-run verification conditions of the credence statement that SB is asked to express. The reason this reference is made (to the future verification conditions) is to disambiguate the sense of the question, in accordance with the Halfer interpretation in this case. But the question still is about her credence "now".

    Compare: (1) "What are the chances, now, that your lottery ticket is the winning ticket?" and (2) "What are the chances, now, that your lottery ticket has the number that will be drawn as the winning number?" It's the exact same question and the odds are the same (one in a million, say). The second question merely makes explicit what "winning" means.

    Here's one more attempt. It's really the same thing that you keep dodging by changing the timing of the question, and claiming that I have "vallid thirder logic" while ignoring that it proves the halfer logic to be inconsistent.

    Get three opaque note cards.
    On one side of different cards, write "Monday and Heads," "Monday and Tails," and "Tuesday and Tails.
    Turn the cards over, shuffle them around, and write "A," "B," and "C" on the opposite sides.
    Before waking SB on the day(s) she is to be woken, put the appropriate card in the table in her room, with the letter side face up.

    Let's say she sees the letter "B." She knows, as a Mathematical fact, that there was a 1/3 probability that "B" was assigned to the card with "Heads" written on the other side. And a 2/3 chance for "Tails."

    By halfer logic, while her credence that "Heads" is written on the "B" card must be 1/3, her credence that the coin landed on Heads is 1/2. This is a contradiction - these two statements represent the same path to her current state of knowledge, regardless of what day it is.

    The Halfer logic is to reason that although T-runs, unlike H-runs, are such that SB will be presented with a card on two different occasions (once on Monday and once on Tuesday), nevertheless, on each occasion where the experiment is performed and she finds herself involved in an individual experimental run, the likelihood that this run is a T-run is 1/2. That's because in the long run there are as many T-runs as there are H-runs. By that logic she can also reason that, on a particular occasion where she is being awakened, the chances that she is holding the "Monday and Heads" card is 1/2. The fact that she now finds her card to be labelled "B" doesn't alter those odds since the labelling procedure is probabilistically independent of the coin toss result and hence the label conveys no information to her regarding that result.

    By the way, for the same reason, it would not convey any information to her either from a Thirder perspective. Her credence that the coin landed Tails would remain 2/3 before and after she saw the "B" label on her card.

    Remember: SB isn't betting on the card (neither is she betting on the current awakening episode). She's betting on the current coin toss outcome. How those outcomes must be considered to map to her possible ways of experiencing them (in separate awakenings or separate runs) is a matter of interpretation that isn't spelled out in the original SB problem. It's true, though, that under the Thirder interpretation, betting on the awakening episodes or betting on the cards is equivalent. But that's just because the cards and the awakening episodes are mapped one-to-one.
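
    To make the independence claim concrete, here is a rough simulation of the card protocol (the run count and the dictionary keys are just illustrative choices of mine):

    ```python
    import random
    from collections import Counter

    def one_run():
        """Toss the coin, shuffle the letters onto the three cards, record what SB is shown."""
        coin = random.choice(['Heads', 'Tails'])
        letters = ['A', 'B', 'C']
        random.shuffle(letters)
        cards = dict(zip(['Mon&Heads', 'Mon&Tails', 'Tue&Tails'], letters))
        shown = [cards['Mon&Heads']] if coin == 'Heads' else [cards['Mon&Tails'], cards['Tue&Tails']]
        return coin, shown

    awakenings = Counter()   # (label seen, coin), counted once per awakening
    runs = Counter()         # coin result, counted once per run
    for _ in range(200_000):
        coin, shown = one_run()
        runs[coin] += 1
        for label in shown:
            awakenings[(label, coin)] += 1

    # Per-awakening counting: among awakenings showing "B", ~2/3 are Tails, exactly as among
    # all awakenings, so seeing the label changes nothing (Thirder number either way).
    b_total = awakenings[('B', 'Heads')] + awakenings[('B', 'Tails')]
    print(awakenings[('B', 'Tails')] / b_total)
    # Per-run counting: ~1/2 of runs are T-runs, whatever letters got assigned (Halfer number).
    print(runs['Tails'] / (runs['Heads'] + runs['Tails']))
    ```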
  • Sleeping Beauty Problem
    I would say she is being asked what the odds are of it being a day in which a T side vs a H side coins is flipped.Philosophim

    I was talking about what she is being asked, literally, in the original formulation of the problem discussed in the OP. From Wikipedia:

    This has become the canonical form of the problem: [...] During the interview Sleeping Beauty is asked: "What is your credence now for the proposition that the coin landed heads?"

    Your reading is a sensible interpretation of this question, but it isn't the only one.

    If she's only being asked what the percent chance of the coin ended up being at, the answer is always 50/50. The odds of the coin flip result don't change whether its 1 or 1,000,000 days. What changes is from the result of that coin flip, and that is the pertinent data that is important to get an accurate answer.

    Yes, quite, but now with the phrase "the result of that coin flip" you still seem to be gesturing at the "occasions" being generated by those coin flip results. And how those events are meant to be individuated (one occasion per run or one per awakening) is what's at issue.

    This is very similar to the old Monty Hall problem. You know the three doors, make a guess, then you get to make another guess do you stay or change?

    On the first guess, its always a 1/3 shot of getting the door wrong(sic). But it can also be seen as a 2/3 chance of getting the door wrong. When given another chance, you simply look at your first set of odds and realize you were more likely than not wrong, so you change your answer. The result matches the odds.

    That's not quite how the Monty Hall problem is set up, so I'm unsure about the analogy you're intending to make. The contestant has a 1/3 chance of getting the prize-hiding door right on their first try. But the game host then reveals, among the two other doors, one that they (i.e. the host) know to be hiding a goat. The contestant is then offered an opportunity to switch their initial choice for the other unopened door. The correct reasoning is that since contestants who switch their choice will win the prize on each occasion where their first choice was wrong, they have a 2/3 chance of winning the prize if they switch.
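
    Since the Monty Hall setup came up, here is a quick sanity-check simulation of the stay-vs-switch reasoning (the trial count and function name are my own choices):

    ```python
    import random

    def monty_hall(switch, trials=100_000):
        """Fraction of wins for a contestant who always stays or always switches."""
        wins = 0
        for _ in range(trials):
            doors = [0, 1, 2]
            prize = random.choice(doors)
            pick = random.choice(doors)
            # Host opens a door that is neither the contestant's pick nor the prize.
            opened = random.choice([d for d in doors if d != pick and d != prize])
            if switch:
                pick = next(d for d in doors if d != pick and d != opened)
            wins += (pick == prize)
        return wins / trials

    print(monty_hall(switch=False))  # ~1/3
    print(monty_hall(switch=True))   # ~2/3: switching wins whenever the first pick was wrong
    ```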

    Same with the situation here. Run this experiment 100 times and have the person guess heads 50 times, then tails 50 times. The person who guesses tails every time 50 times will be right 2/3rds of the time more than the first. Since outcomes ultimately determine if we are correct in our odds, we can be confident that 1/2 odds is incorrect.

    Yes, the person who guesses that the coin landed Tails will turn out to have made a correct guess two thirds of the time on average, thereby matching the Thirder credence. But the Halfer argues that this is just because they were able to make more guesses during T-runs. Under the Halfer interpretation of the meaning of SB's credence, her being afforded more opportunities to express her credence during T-runs doesn't make those runs more likely to happen, and this is what matters to them.

    By the way, very nice discussion! I appreciate your insight and challenging me to view things I might not have considered.

    Cheers!
  • Sleeping Beauty Problem
    She can reason that its equally likely that the result of the coin flip is 50/50, but that doesn't mean its likely that the day she is awake is 50/50.Philosophim

    Sure, but the former precisely is what she is being asked. She is being asked what her credence about the coin will be on that occasion, and not what proportion of such occasions are T-occasions. One can argue that she is being asked this implicitly, but in that case it's still open to interpretation what those "occasions" are meant to be, as we've already discussed.

    Lets flip it on its head and note how the likelihood that she would be wrong.

    If she always guesses heads, she's wrong twice if its tails. If she always guesses tails, she's only wrong once. Thus, she is twice as likely to be wrong if she guesses heads on any particular day woken up, and twice as likely to guess correctly if she guesses tails. If the total odds of guessing correctly were 50/50, then she would have an equal chance of guessing correctly. She does not.

    That's right, and this is a good argument favoring the Thirder position, but it relies on explicitly introducing a scoring procedure that scores each occasion on which she has to express her credence: once for each awakening episode. If you would rather score those statements only once per run, regardless of how many times she is being asked about her credence in that run, then she would be right half the time. This also makes sense if you view all of the separate awakening episodes occurring during a single Tails run as part of the same "outcome" (as you've indeed yourself favored doing earlier).
  • Sleeping Beauty Problem
    I'm not seeing the ambiguity here, but maybe I'm not communicating clearly. There are two outcomes based on context.Philosophim

    I assume what you now mean to say is that there are two possible ways to think of the "outcomes" based on context. Well, sure, that's pretty much what I have been saying. But I'm also arguing that the original Sleeping Beauty problem fails to furnish the relevant context.

    If we think of the experimental runs following the coin toss as the "R-outcomes" (an equal number of T-runs and H-runs are expected) and the awakening episodes as the "A-outcomes" (twice as many T-awakenings as H-awakenings are expected), then we've resolved part of the ambiguity. But Sleeping Beauty isn't being asked about specific kinds of outcomes explicitly. Rather she is being asked about her credence regarding the current state of the coin. She can reason that the current state of the coin is Tails if and only if she is currently experiencing a T-awakening and hence that the current state of the coin is twice as likely to be Tails as it is to be Heads. But she can also reason that the current state of the coin is Tails if and only if she is currently experiencing a T-run and hence that the current state of the coin is equally likely to be Tails as it is to be Heads.

    Another way to state the Halfer interpretation that makes it intuitive is to suppose Sleeping Beauty will be given a bag of Twizzlers (T-candy) at the end of the experiment if the coin landed Tails and a bag of Hershey's Kisses (H-candy) if it landed Heads. The fact that she's awakened twice rather than once when she's scheduled to receive Twizzlers doesn't make it more likely that she will receive them at the end of the run. Hence her credence remains 1/2 that she will receive Twizzlers. This is consistent with her credence being 2/3 that her current awakening episode (A-outcome) is one that puts her on a path towards getting Twizzlers. But since the Twizzlers reward is an outcome that is realized if and only if she is currently experiencing a T-awakening, she can sensibly reason that the odds of that are 1/2 also.

    The key to understanding the consistency between the two apparently contradictory credences regarding the very same coin toss result is to realize that the two T-awakening outcomes occur in the same timeline and hence them being more frequent than H-awakenings doesn't increase the relative frequency of the Twizzlers rewards (or of her having experienced a T-run, regardless of how many times she was awakened in this run).
  • Sleeping Beauty Problem
    Correct. My point was that its just used as a word problem way of saying, "We have 3 outcomes we reach into a hat and pull from"Philosophim

    You are using the word "outcome" ambiguously and inconsistently. In your previous post you had stated that "You have 3 possible outcomes. In two of the outcomes, tails was flipped."

    And now you are saying that:

    Because there are two different outcomes. One with one day, and one with two days. If you pick any day and have no clue if its a day that resulted from a heads or tails outcome, its a 2/3rds chance its the tails outcome. The heads and tails is also irrelevant. The math is, "Its as equally likely that we could have a series of one day or two day back to back in this week. If you pick a day and you don't know the outcome or the day, what's the odds its a tails day vs a heads day?"

    The odds of whether its head or tails is irrelevant since they are the same and can be effectively removed from the problem.

    So, now you are back to treating experimental runs rather than awakening episodes as the "outcomes". This sort of ambiguity indeed is the root cause of the misunderstanding that befalls Halfers and Thirders in their dispute.

    When Sleeping Beauty is being asked, on one particular awakening occasion, what her credence is that the coin landed Tails, she must ponder over what the odds are that the epistemic situation she currently is in (given the information available to her) is such that the coin landed Tails when she is in that situation. In other words, she takes herself to be experiencing one among a range of possible and indistinguishable (from her current point of view) events (or "outcomes") such that a proportion P of them occur when the coin landed Tails, in the long run. All of this leaves it undefined what the events or "outcomes" are that we're talking about.

    Thirders interpret those outcomes as awakening episodes and Halfers interpret them as experimental runs. Their expressed credences, 2/3 and 1/2 respectively, therefore are answers to different questions (or to the same question differently disambiguated, if you will).

    Thirder Sleeping Beauty expects, reasonably enough, that in the long run awakening episodes like the one she is currently experiencing will turn out to have occurred when the coin had landed Tails two thirds of the time.

    Halfer Sleeping Beauty expects, equally reasonably, that, in the long run, experimental runs like the one she is currently experiencing (regardless of how many more times she already was or will be awakened during that run) will turn out to have occurred when the coin had landed Tails one half of the time.

    Credences implicitly are about ratios. Halfers and Thirders disagree about the denominator that is meant to figure in the relevant ratio.
  • Sleeping Beauty Problem
    The part to note is that almost all of this is a red herring. Its irrelevant if she remembers or not. Its just word play to get us out of the raw math. The odds are still the same.

    Flip heads, 1 result
    Flip tails, 2 results

    Put the pile of results as total possible outcomes. You have 3 possible outcomes. In two of the outcomes, tails was flipped. Put it in a hat and draw one. You have a 2/3rd chance that its a tails outcome.

    To be clear, it is a 50/50 shot as to whether heads or tails is picked. Meaning that both are equally like to occur. But since we have more outcomes on tails, and we're looking at the probability of what already happened based on outcomes, not prediction of what will happen, its a 2/3rds chance for tails.
    Philosophim

    The issue with her remembering or not is that if, as part of the protocol, she could remember her Monday awakening when the coin landed Tails and she is awakened again on Tuesday, then she would be able to deduce with certainty that the coin landed Tails; and when she couldn't remember any previous awakening, she could deduce with certainty that "today" is Monday (and that the probability of Tails is 1/2). That would be a different problem, and no problem at all.

    Your argument in favor of the Thirder credence that the coin landed Tails (2/3) relies on labeling the awakening episodes "the outcomes". But what is it that prevents Halfers from labelling the experimental runs "the outcomes" instead? Your ball-picking analogy was also used by Berry Groisman to illustrate this ambiguity in his paper The End of Sleeping Beauty's Nightmare (although I don't fully agree with his conclusions).
  • Sleeping Beauty Problem
    You may have read it. You did comment on it from that aspect. But you did not address it. The points it illustrates are:

    - That each "day" (where that means the coin toss and the activity that occurred during that awakening), in Mathematical fact, represents a random selection of one possible "day" from the NxN grid. If that activity appears S times in the schedule, and R times in the row, then the Mathematically correct credence for the random result corresponding to that row is R/S. This is true regardless of what the other N^2-S "days" are, even if some are "don't awaken."

    - There is no connection between the "days" in a row. You call this "T-awakenings" or "the H-wakening." in the 2x2 version. They are independent.
    JeffJo

    I agree with the reasoning and calculation. As I said, this is a standard Thirder interpretation of the problem. It is consistent, coherent and valid. Regarding the second point, the two events that occur when the coin lands Tails are independent only in the sense that when Sleeping Beauty experiences them she can't know which one (i.e. Monday&Tails or Tuesday&Tails) it is. In that sense, they also are independent of Monday&Heads. In another sense, the first two are interdependent since one of them can't occur without the other one also occurring within the same experimental run.

    But the question being asked of SB isn't explicitly about those three "independent" events. It's a question about her credence in the state of the hidden coin at the time when she is being awakened. One interpretation (the Thirder one) of this credence is that it ought to represent the proportion of her indistinguishable awakening episodes that occur while the coin landed Tails. This interpretation yields the probability 2/3. Another one, the Halfer interpretation, is that it ought to represent the proportion of her experimental runs (which may or may not include two rather than one awakening episodes, and hence may or may not afford SB two rather than one opportunities to express her credence) that occur as a result of the coin having landed Tails. This interpretation yields the probability 1/2. Those two interpretations also have two different methods of verification associated with them, and so are complementary rather than contradictory.

    Consider the variation I had proposed early on in this thread. Let the two awakenings that occur (on Monday and Tuesday) when the coin lands Tails take place in a room located in the West Wing of the Sleeping Beauty Experimental Facility, and the unique awakening that occurs on Monday when the coin lands Heads take place in a room located in the East Wing. On the occasion of an awakening, what is Sleeping Beauty's expectation that when the experiment is over and she is released on Wednesday, she will find herself to be in the West Wing? Does that not happen half of the times she is enrolled in such an experiment? Is that not also what her Aunt Sue, who must come to pick her up, expects? Finally, when she experiences one of the three possible (and indistinguishable) awakening situations, does she learn anything that her Aunt Sue (and she herself previously) didn't already know?
  • Sleeping Beauty Problem
    Yep. What makes it an independent outcome, is not knowing how the actual progress of the experiment is related to her current situation. This is really basic probability. If you want to see it for yourself, simply address the Camp Sleeping Beauty version.JeffJo

    I did and I agreed with you that it was a fine explanation of the rationale behind the Thirder interpretation of the original SB problem.
  • Banning AI Altogether
    I spent the last hour composing a post responding to all my mentions, and had it nearly finished only to have it disappear leaving only the single letter "s" when I hit some key. I don't have the will to start over now, so I'll come back to it later.Janus

    You can still submit your post as "s" to ChatGPT and ask it to expand on it.
  • Sleeping Beauty Problem
    It's s different probability problem based on the same coin toss. SB has no knowledge of the other possible days, while this answer requires it.JeffJo

    SB does know the setup of the experiment in advance however. She keeps that general knowledge when she wakes, even if she can’t tell which awakening this is. What varies in our "variants" isn’t the awakenings setup, it’s the exit/score rule that tells us which sample to use when we ask SB "what’s your credence now?"

    From Beauty’s point of view these biconditionals are all true:

    "The coin landed Tails" ⇔ "This is a T-run" ⇔ "This is a T-awakening."

    So a Thirder assigns the same number to all three (2/3), and a Halfer also assigns the same number to all three (1/2). The disagreement isn’t about which event kind the credence talks about (contrary to what I may have misleadingly suggested before). It’s rather about which ratio we’re implicitly estimating.

    Halfer ratio (per-run denominator): count runs and ask what fraction are T. With one toss per run, that stays 1/2.

    Thirder ratio (per-awakening denominator): count awakenings and ask what fraction are T-awakenings. Since T makes more awakenings (2 vs 1), that’s 2/3.

    Same event definitions; different denominators. Making the exit/score rule explicit just fixes the denominator to match the intended scoring rule:

    End-of-run scoring -> per-run ratio (Halfer number)
    Per-awakening scoring -> per-awakening ratio (Thirder number)
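
    A tiny sketch of the two ratios, under the assumption of a fair coin and the usual one-vs-two awakening rule:

    ```python
    import random

    runs = 200_000
    t_runs = t_awakenings = total_awakenings = 0
    for _ in range(runs):
        tails = random.random() < 0.5
        n_awakenings = 2 if tails else 1          # the awakening-generation rule SB knows
        total_awakenings += n_awakenings
        if tails:
            t_runs += 1
            t_awakenings += n_awakenings

    print(t_runs / runs)                          # ~1/2: per-run denominator (Halfer number)
    print(t_awakenings / total_awakenings)        # ~2/3: per-awakening denominator (Thirder number)
    ```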
  • Sleeping Beauty Problem
    This experiment is now becoming "beyond the pale" and "incorrigable" to me...ProtagoranSocratist

    No worry. You're free to let Sleeping Beauty go back to sleep.
  • Sleeping Beauty Problem
    Sleeping beauty is a mythical character who always sleeps until she is woken up for whatever reason. However, there's not part of her story dictating what she remembers and doesn't, so if amnesia drugs are involved, then the experimentors are free to then craft the percentage that the outcome shows up...ProtagoranSocratist

    She is woken up once when the coin lands Heads and twice when it lands Tails. That is part of the protocol of the experiment. We also assume that the drug only makes her forget any previous awakening episode that may have occurred but not the protocol of the experiment. If that seems implausible to you, you can indeed also assume that she is being reminded of the protocol of the experiment each time she is awakened and interviewed.
  • Sleeping Beauty Problem
    assuming there is nothing mysterious or "spooky" influencing a coin flip, then the answer is always is always 50/50 heads or tails. Maybe I misunderstand.ProtagoranSocratist

    It's not something spooky influencing the coin that makes SB's credence in the outcome shift. It's rather the subsequent events putting her in relation with the coin that do so, when those events aren't occurring in a way that is causally (and probabilistically) independent of the coin flip result.

    Using the analogy I've used recently, if someone drops a bunch of pennies on the floor but, due to their reflectance properties, pennies landing Tails are twice as likely to catch your attention from a distance as pennies landing Heads, then, even though any penny that you end up noticing was antecedently equally likely to land Heads or Tails, the very fact that it's a penny that you noticed makes it more likely than not that it landed Tails. And the reason isn't spooky at all. It's just because, in a clear sense, pennies that land Tails make you notice them more often (because they're shinier, we're assuming). It can be argued (and I did argue) that the SB situation in the original problem is relevantly similar. Coins landing Tails make SB more likely to be awakened and questioned about them (because of the experiment's protocol, in this case).
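
    The selection effect can be sketched in a few lines (the specific noticing probabilities, 2/3 vs 1/3, are just my illustrative way of encoding "twice as likely to catch your attention"):

    ```python
    import random

    noticed_tails = noticed_total = 0
    for _ in range(200_000):
        tails = random.random() < 0.5             # each penny is equally likely to land either way
        p_notice = 2/3 if tails else 1/3          # assumed: Tails-up pennies are twice as likely to be noticed
        if random.random() < p_notice:
            noticed_total += 1
            noticed_tails += tails

    print(noticed_tails / noticed_total)          # ~2/3: among the pennies you notice, Tails dominates
    ```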
  • Banning AI Altogether
    As I understand it, the insight is what you’re supposed to provide in your post. I don’t really care where you get it from, but the insight should be in your own words based on your own understanding and experience and expressed in a defensible way. The documentation you get from the AI response can be used to document what you have to say, but then you’re still responsible for verifying it and understanding it yourself.T Clark

    I'm with @Joshs but I also get your point. Having an insight is a matter of putting 2 + 2 together in an original way. Or, to make the metaphor more useful, it's a matter of putting A + B together, but sometimes you have an intuition that A and B must fit together somehow but you haven't quite managed to make them fit in the way you think they should. Your critics are charging you with trying to make a square peg fit in a round hole.

    So, you talk it through with an AI that not only knows lots more than you do about As and Bs but can reason about A in a way that is contextually sensitive to the topic B and vice versa (exquisite contextual sensitivity being what neural-network-based AIs like LLMs excel at). It helps you refine your conceptions of A and of B in contextually relevant ways such that you can then better understand whether your critics were right or, if your insight is vindicated, how to properly express the specific way in which the two pieces fit. Retrospectively, it appears that you needed the specific words and concepts provided by the AI to express/develop your own tentative insight (which could have turned out not to be genuine at all but just a false conjecture). The AI functionally fulfilled its role as an oracle since it was the repository not merely of the supplementary knowledge that was required for making the two pieces fit together, but also supplied (at least part of) the contextual understanding required for singling out the relevant bits of knowledge needed for adjusting each piece to the other.

    But, of course, the AI had no incentive to pursue the topic and make the discovery on its own. So the task was collaborative. The AI helped mitigate some of your cognitive deficits (gaps in knowledge and understanding) while you mitigated its conative deficits (lack of autonomous drive to fully and rigorously develop your putative insight).
  • Banning AI Altogether
    I guess my question is whether the user’s understanding is genuine, authentic, and owned by them.T Clark

    Oftentimes it's not. But it's a standing responsibility that they have (to care about what they say and not just parrot popular opinions, for instance) whereas current chatbots, by their very nature and design, can't be held responsible for what they "say". (Although even this last statement needs to be qualified a bit since their post-training typically instills in them a proclivity to abide by norms of epistemic responsibility, unless their users wittingly or unwittingly prompt them to disregard them.)
  • Banning AI Altogether
    What are we supposed to do about it? There's zero chance the world will decide to collectively ban ai ala Dune's thinking machines, so would you ban American development of it and cede the ai race to China?RogueAI

    Indeed. You'd need to ban personal computers and anything that contains a computer, like a smartphone. The open source LLMs are only trailing the state-of-the-art proprietary LLMs by a hair and anyone can make use of them with no help from Musk or Sam Altman. Like all previous technology, the dangers ought to be dealt with collectively, in part with regulations, and the threats of labour displacement and the consequent deepening of economic inequalities should be dealt with at the source: questioning unbridled capitalism.
  • Banning AI Altogether
    Isn't the best policy simply to treat AI as if it were a stranger? So, for instance, let's say I've written something and I want someone else to read it to check for grammar, make comments, etc. Well, I don't really see that it is any more problematic me giving it to an AI to do that for me than it is me giving it to a stranger to do that for me.Clarendon

    Yes quite! This also means that, just like you'd do when getting help from a stranger, you'd be prepared to rephrase its suggestions (that you understand and that express claims that you are willing to endorse and defend on your own from rational challenges directed at them) in your own voice, as it were. (And also, just like in the stranger case, one must check its sources!)
  • Banning AI Altogether
    I don’t disagree, but I still think it can be helpful personally in getting my thoughts together.T Clark

    This is my experience also. Following the current sub-thread of argument, I think representatives of the most recent crop of LLM-based AI chatbots (e.g. GPT-5 or Claude 4.5 Sonnet) are, pace skeptics like Noam Chomsky or Gary Marcus, plenty "smart" and knowledgeable enough to help inquirers in many fields, including philosophy, explore ideas, solve problems and develop new insights (interactively with them), and hence the argument that their use should be discouraged here because their outputs aren't "really" intelligent isn't very good. The issue of whether their own understanding of the (often quite good and informative) ideas that they generate is genuine understanding, authentic, owned by them, etc. ought to remain untouched by this concession. Those questions touch more on issues of conative autonomy, doxastic responsibility, embodiment, identity and personhood.
  • Sleeping Beauty Problem
    Yes, that makes the answer 1/2 BECAUSE IT IS A DIFFERENT PROBLEM.JeffJo

    It isn’t a different problem; it’s a different exit rule (scoring rule) for the same coin-toss -> awakenings protocol. The statement of an exit rule is required to disambiguate the question being asked of SB, that is, how her "credence" is meant to be understood.

    Think of two perfectly concrete versions:

    A. End-of-run dinner (Atelier Crenn vs Benu).

    One coin toss. If Heads, the run generates one awakening (Monday); if Tails, it generates two (Monday+Tuesday). We still ask on each awakening occasion, but the bet is scored once at the end (one dinner: Atelier Crenn if Heads and Benu if Tails). The natural sample here is runs. As many runs are T-runs as are H-runs, so the correct credence for the run outcome is 1/2. The Halfer number reflects this exit rule.

    B. Pay-as-you-go tastings (Atelier Crenn vs Benu vs Quince, as you defined the problem).

    Same protocol, but now each awakening comes with its own tasting bill: the bet is scored each time you’re awakened. The natural sample here is awakenings. T-runs generate more awakenings (one each at Benu and at Quince) than H-runs do (only one awakening, at Atelier Crenn); a random awakening is twice as likely to come from Tails as from Heads, so the right credence at an awakening is 2/3. The Thirder number reflects this different exit rule.

    Both A and B are about the same protocol. What changes isn’t the coin or the awakenings. Rather, it’s which dataset you’re sampling when you answer "what’s your credence now?"

    That’s all I meant: the original wording leaves the relevant conditioning event implicit ("this run?" or "this awakening?"). Different people tacitly pick different exit rules, so they compute different frequencies. Once we say which one we’re using, the numbers line up and the apparent disagreement evaporates.

    Your Atelier Crenn tweak doesn’t uniquely solve the initial (ambiguous) problem; it just provides a sensible interpretation by making a specific scorecard explicit.
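
    To make the two exit rules concrete, here is a minimal Monte Carlo sketch (my own illustrative code, with hypothetical function and variable names, not part of the original exchange) that scores the very same coin-toss -> awakenings protocol once per run (rule A) and once per awakening (rule B):

    ```python
    import random

    def simulate(n_runs=100_000, seed=0):
        """Score the coin-toss -> awakenings protocol per run (A) and per awakening (B)."""
        rng = random.Random(seed)
        tails_runs = 0          # rule A: one sample per experimental run
        tails_awakenings = 0    # rule B: one sample per awakening
        total_awakenings = 0
        for _ in range(n_runs):
            tails = rng.random() < 0.5       # the single coin toss for this run
            awakenings = 2 if tails else 1   # Tails -> Monday + Tuesday, Heads -> Monday only
            tails_runs += int(tails)
            if tails:
                tails_awakenings += awakenings
            total_awakenings += awakenings
        return tails_runs / n_runs, tails_awakenings / total_awakenings

    per_run, per_awakening = simulate()
    print(f"Credence sampled per run (rule A):       {per_run:.3f}")        # ~0.500
    print(f"Credence sampled per awakening (rule B): {per_awakening:.3f}")  # ~0.667
    ```

    On a typical seed the first figure comes out near 0.5 and the second near 0.667, matching the Halfer and Thirder numbers respectively; nothing about the coin or the awakenings changes between the two tallies, only what counts as one scored occasion.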
  • Sleeping Beauty Problem
    There are three Michelin three-star restaurants in San Francisco, where I'll assume the experiment takes place. They are Atelier Crenn, Benu, and Quince. Before the coin is tossed, a different restaurant is randomly assigned to each of Heads&Mon, Tails&Mon, and Tails&Tue. When she is awoken, SB is taken to the assigned restaurant for her interview. Since she has no idea which restaurant was assigned to which day, as she gets in the car to go there each has a 1/3 probability. (Note that this is Elga's solution.) Once she gets to, say, Benu, she can reason that it had a 1/3 chance to be assigned to Heads&Mon.JeffJo

    Yes, that is a very good illustration, and justification, of the 1/3 credence Thirders assign given their interpretation of SB's "credence", which is, in this case, tied up with the experiment's "exit rules": one separate restaurant visit (or none) for each possible coin-toss-outcome + day-of-the-week combination. Another exit rule could be that SB gets to go to Atelier Crenn at the end of the experiment if the coin landed Heads, and to Benu if it landed Tails. In that case, when awakened, she can reason that the coin landed Tails if and only if she will go to Benu (after the end of the experiment). She knew before the experiment began that, in the long run, after many such experiments, she would go to Atelier Crenn and to Benu equally often on average. When she awakens, from her new epistemic situation, this proportion doesn't change (unlike what was the case with your proposed exit rules). This supplies a sensible interpretation of the Halfer's 1/2 credence: SB's expectation that she will go to Atelier Crenn half the time (i.e., that she is equally likely to go to either restaurant) at the end of the current experimental run, regardless of how many times she is pointlessly being asked to guess.
  • Sleeping Beauty Problem
    You appear to be affirming the consequent. In this case, Tails is noticed twice as often because Tails is twice as likely to be noticed. It doesn't then follow that Tail awakenings happen twice as often because Tails awakenings are twice as likely to happen.Michael

    Rather, the premiss I'm making use of is the awakening-episode generation rule: if the coin lands (or landed) Tails, two awakening episodes are generated; otherwise only one is. This premiss is available to SB since it's part of the protocol. From it, she infers that, on average, when she participates in such an experiment (as she knows she is currently doing), the number of T-awakenings she gets to experience is twice as large as the number of H-awakenings. (Namely, those expected numbers are 1 and 1/2 per run, respectively.) So far, that is something both Halfers and Thirders seem to agree on.

    "1) Per run: most runs are 'non-six', so the per-run credence is P(6)=1/6 (the Halfer number).
    2) Per awakening/observation: a 'six-run' spawns six observation-cases, a 'non-six' run spawns one. So among the observation-cases, 'six' shows up in a 6/5 ratio, giving P('six'|Awake)=6/11 (the Thirder number).
    "
    — Pierre-Normand

    This doesn't make sense.

    She is in a Tails awakening if and only if she is in a Tails run.
    Therefore, she believes that she is most likely in a Tails awakening if and only if she believes that she is most likely in a Tails run.
    Therefore, her credence that she is in a Tails awakening equals her credence that she is in a Tails run.

    You can't have it both ways.

    This biconditional statement indeed ensures that her credences regarding her experiencing a T-awakening, her experiencing a T-run, or her being in circumstances in which the coin landed (or will land) Tails all match. All three of those statements of credence, though, are similarly ambiguous. They denote three distinct events, each of which can only be actual (from SB's current epistemic situation on the occasion of an awakening) if and only if the other two are. The validity of those biconditionals doesn't resolve the relevant ambiguity, though, which is something that was stressed by Laureano Luna in his 2020 paper "Sleeping Beauty: An Unexpected Solution", which we had discussed earlier in this thread (and which @fdrake had brought up, if I remember correctly).

    Under the Halfer interpretation of SB's credence, all three of those biconditionally related "experienced" events (by "experienced", I mean that SB is currently living those events, whether or not she knows that she is living them) are actual, on average, 1/2 of the times that SB is experiencing a typical experimental run. Under the Thirder interpretation, all three of those biconditionally related "experienced" events are actual, on average, 2/3 of the times that SB is experiencing a typical awakening episode.

    If it helps, it's not a bet but a holiday destination. The die is a magical die that determines the weather. If it lands on a 6 then it will rain in Paris, otherwise it will rain in Tokyo. Both Prince Charming and Sleeping Beauty initially decide to go to Paris. If after being woken up Sleeping Beauty genuinely believes that the die most likely landed on a 6 then she genuinely believes that it is most likely to rain in Paris, and so will decide instead to go to Tokyo.

    This setup exactly mirrors some other variations I had also proposed (exiting the Left Wing or exiting the East Wing at the end of the experiment) that indeed warrant SB's reliance on her Halfer credence to place her bet. But the original SB problem doesn't state what the "exit conditions" are. (If it did, there'd be no problem.) Rather than being offered a single trip to Paris or Tokyo at the end of the current experimental run, SB could be offered a one-day trip to either one of those destinations over the course of her current awakening episode, and then be put back to sleep. Her Thirder credence would then be pragmatically relevant to selecting the destination most likely to afford her a sunny trip.
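
    For the die variant quoted above, the two sampling choices can also be worked out exactly; the short sketch below (illustrative only, using Python's fractions module) recovers the 1/6 per-run figure and the 6/11 per-awakening figure:

    ```python
    from fractions import Fraction

    p_six = Fraction(1, 6)  # probability that the die lands on six

    # Per-run sampling (the Halfer-style number): just the die's own probability.
    per_run = p_six  # 1/6

    # Per-awakening sampling (the Thirder-style number): weight each outcome by
    # the number of awakenings it generates (six for a 'six' run, one otherwise).
    six_awakenings = p_six * 6            # expected 'six' awakenings per run = 1
    other_awakenings = (1 - p_six) * 1    # expected 'non-six' awakenings per run = 5/6
    per_awakening = six_awakenings / (six_awakenings + other_awakenings)

    print(per_run)        # 1/6
    print(per_awakening)  # 6/11
    ```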
  • Sleeping Beauty Problem
    Still: the effects of one flip never effect the outcome of the other FLIPS, unless that is baked into the experiment, so it is a misleading hypothetical question (but interesting to me for whatever reason). The likelihood of the flips themselves are still 50/50, not accounting for other spooky phenomenon that we just don't know about. So, i'll think about it some more, as it has a "gamey" vibe to it...ProtagoranSocratist

    There are no other flips. From beginning to end (and from anyone's perspective), we're only talking about the outcome of one single coin toss. Either it landed Heads or it landed Tails. We are inquiring about SB's credence (i.e. her probability estimate) in either one of those results on the occasion of her being awakened. The only spooky phenomenon is her amnesia, but that isn't something we don't know about: it's part of the setup of the problem that SB is informed about this essential feature of the protocol. If there were no amnesia, then she would know upon being awakened what day of the week it is. If Monday (since she wouldn't remember having been awakened the day before), her credence in Tails would be 1/2. If Tuesday (since she would remember having been awakened the day before), her credence in Tails would be 1 (i.e. 100%). The problem, and the competing arguments regarding what her credence should be, arise when she can't know whether or not her current awakening is the first one.

    (Very roughly, Halfers argue that since she is guaranteed to be awakened at least once in any case, her being awakened conveys no new information to her, and her estimate of the probability that the coin landed Tails should remain 1/2 regardless of how many times she is awakened when the coin lands Tails. Thirders argue that she is experiencing one of three possible and equiprobable awakening episodes, two of which happen when the coin landed Tails, and hence that her credence in the coin having landed Tails becomes 2/3.)
  • Sleeping Beauty Problem
    Why? How does something that is not happening, on not doing so on a different day, change her state of credence now? How does non-sleeping activity not happening, and not doing so on a different day, change her experience on this single day, from an observation of this single day, to an "experimental run?"

    You are giving indefensible excuses to re-interpret the experiment in the only way it produces the answer you want.
    JeffJo

    Well, firstly, the Halfer solution isn't the answer that I want, since my own pragmatist interpretation grants the validity of both the Halfer and the Thirder interpretations but denies that either one is exclusively correct. (I might as well say that Halfers and Thirders are both wrong to dismiss the other interpretation as being inconsistent with the "correct" one, rather than acknowledging it as incompatible but complementary.)

    With this out of the way, let me agree with you that the arbitrary stringing together of discrete awakenings into composite experimental runs doesn't affect the Thirder credence in the current awakening being a T-awakening (which remains 2/3). Likewise, however, treating a run as multiple interview opportunities doesn't affect the Halfer credence in the current run being a T-run (which remains 1/2). The mistake that both Halfers and Thirders seem to make is to keep shouting at each other: "Your interpretative stance fails to refute my argument regarding the validity of my credence estimation." What they fail to see is that they are both right, and that the "credences" they are talking about are credences about different things.
  • Sleeping Beauty Problem
    Right. And this is they get the wrong answer, and have to come up with contradictory explanations for the probabilities of the days. See "double halfers."JeffJo

    Let me just note, for now, that I think the double-halfer reasoning is faulty because it wrongly subsumes the Sleeping Beauty problem under (or assimilates it to) a different problem in which there would be two separate coin tosses. Under that scenario, a first coin would be tossed and, if it lands Heads, SB would be awakened on Monday only. If it lands Tails, a second coin would be tossed, and SB would still be awakened on Monday only if it lands Heads, and on Tuesday only if it lands Tails. Such a scenario would support a straightforward Halfer interpretation of SB's rational credence, but it's different from the original one since it makes Monday-awakenings and Tuesday-awakenings mutually exclusive events, whereas in the original problem SB can experience both, successively though not at the same time. Different awakening-generation rules yield different credences. (I haven't read Mikaël Cozic's paper, where the double-halfer solution is introduced, though.)
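
    A quick sketch (again, merely illustrative, with names of my own choosing) of this two-coin variant shows why it supports the straightforward Halfer reading: every run generates exactly one awakening, so sampling per run and sampling per awakening coincide, and both give 1/2 for the first coin having landed Tails.

    ```python
    import random

    def two_coin_variant(n_runs=100_000, seed=1):
        """Heads on the first coin -> Monday awakening only; Tails -> a second coin
        picks either a Monday-only or a Tuesday-only awakening."""
        rng = random.Random(seed)
        first_coin_tails_awakenings = 0
        total_awakenings = 0
        for _ in range(n_runs):
            first_tails = rng.random() < 0.5
            # The second coin only selects *which* day the single awakening falls on,
            # so every run generates exactly one awakening either way.
            total_awakenings += 1
            first_coin_tails_awakenings += int(first_tails)
        return first_coin_tails_awakenings / total_awakenings

    print(f"P(first coin Tails | awakened): {two_coin_variant():.3f}")  # ~0.500
    ```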
  • Sleeping Beauty Problem
    I understand the 1/3rd logic, but it simply doesn't apply here: the third flip, given the first two were heads (less likely than one tail and a head, but still very likely), is also unaffected by the other flips.ProtagoranSocratist

    There is no third flip. The coin is only tossed once. When it lands Tails, Sleeping Beauty is awakened twice, and when it lands Heads, she is awakened once. She is also administered an amnesia-inducing drug after each awakening, so that she is unable to infer anything about the number of awakenings she may be experiencing from her memory, or lack thereof, of a previous awakening episode. It might be a good idea either to reread the OP carefully or to read the Wikipedia article on the problem, especially the description of the canonical form of the problem in the second section, titled "The problem".

    (For the record, my own "pragmatist" solution is an instance of what the Wikipedia article, in its current form, dubs the "Ambiguous-question position", although I think the formulation of this position in the article remains imprecise.)

Pierre-Normand
