Comments

  • How to use AI effectively to do philosophy.
    Note that the process is iterative. In the best threads, folk work together to sort through an issue. AI can be considered another collaborator in such discussions.
    Banno

    This looks like a process well suited for mitigating the last two among three notorious LLM shortcomings: sycophancy, hallucination and sandbagging. You yourself proposed a method for addressing the first: present your ideas as those of someone else and as a target for criticism.

    Hallucination, or confabulation, is a liability of reconstructive memory (in AIs and humans alike) and is mitigated by the enrichment of context that provides more associative anchors. In the case of LLMs, it is exacerbated by their lack of any episodic memory that could cue them as to what they should expect not to know. An iterative dialogue helps the model "remember" the relevant elements of knowledge represented in its training corpus that contradict potential pieces of confabulation, and enables a more accurate reconstruction of its latent knowledge (and latent understanding).

    Sandbagging is the least discussed of these shortcomings. LLMs have been trained to adapt their responses (in style and content) to match the comprehension ability of their users. This tends to yield a form of reward hacking during post-training. The proximal reward signal indicating that their responses are useful is that users appreciate them (which also yields sycophancy, of course), and this leads them to favor responses that prioritize comprehensibility over accuracy. In other words, they learn to dumb down their responses in a way that makes them more likely to be judged accurate. The flipside is that putting effort into crafting intelligent, well-informed and detailed queries motivates them to produce more intelligent and well-considered replies.

    GPT-5's comments and clarifications on the above, including links to the relevant technical literature.
  • Sleeping Beauty Problem
    Write "Heads and Monday" on one notecard. Write "Tails and Monday" on another, and "Tails and Tuesday" on a third. Turn them over, and shuffle them. Then write "A," "B," and "C" on the other sides.

    Pick one. What is the probability that it says "Heads" on the other side? What is the probability that it says "Tails" on the other side? Call me silly, but I'd say 1/3 and 2/3, respectively.

    Each morning of the experiment when SB is to be awakened, put the appropriate card on a table in her room, with the letter side up. Hold the interview at that table.

    What is the probability that the card, regardless of what letter she sees, says "Heads" on the other side? Or "Tails"? This "outcome" can be identified by the letter she sees. But the letter does not define what an outcome is; an outcome is the description of the experiment's result, as it figures in SB's knowledge. If she wakes on a different day, that is a different result; being determined by the same coin flip does not make it the same outcome.

    Now, did these probabilities change somehow? For which letter(s) do they change? Or are they still 1/3 and 2/3?
    JeffJo

    In the first case you described, a single run of the experiment consists in randomly picking one of three cards. When an outcome is determined, the remaining two possibilities collapse, since the three are mutually exclusive.

    In the second case, which mirrors the Sleeping Beauty protocol more closely, two of the possible outcomes, namely "Monday & Tails" and "Tuesday & Tails," are not mutually exclusive. In modal logical terms, one is "actual" if and only if the other is, even though they do not occur at the same time. This is unlike the relationship either has to "Monday & Heads," which is genuinely exclusive. Picking "Monday & Tails" guarantees that "Tuesday & Tails" will be picked the next day, and vice versa. They are distinct events but belong to the same timeline. One therefore entails the other.

    It’s precisely this relation of entailment, rather than exclusion, that explains why the existence of two separate occasions for Sleeping Beauty to find herself in a Tails-awakening does not dilute the probability of her finding herself in a Heads-timeline, on the Halfer interpretation.

    In other words, one third of her awakenings are "Mon & Tails," one third are "Tue & Tails," and one third are "Mon & Heads," vindicating her 1/3 credence that her current awakening is a Heads awakening. But since the two "Tails" awakenings always occur sequentially within the same timeline, they jointly represent two occasions for her to experience a single Tails-run: one that remains just as frequent as a Heads-run overall.

    Thus the "Thirder credence" in Heads outcomes (1/3) and the "Halfer credence" in Heads timelines (1/2) are both valid, but they refer to different ratios: the first to occasions of experience, the second to timelines of outcomes. Crucially, this is true even though on both accounts the target events (whether awakenings or timelines) occur if and only if the coin landed Heads.
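    For what it's worth, the two ratios invoked above can be checked with a short Monte Carlo sketch (the function and variable names are mine, purely illustrative):

```python
import random

def simulate(n_runs: int, seed: int = 0) -> tuple[float, float]:
    """Return the Tails frequency counted two ways:
    per awakening (Thirder ratio) and per run (Halfer ratio)."""
    rng = random.Random(seed)
    tails_awakenings = total_awakenings = tails_runs = 0
    for _ in range(n_runs):
        tails = rng.random() < 0.5          # one fair coin flip per run
        awakenings = 2 if tails else 1      # Tails: Mon & Tue; Heads: Mon only
        total_awakenings += awakenings
        if tails:
            tails_runs += 1
            tails_awakenings += awakenings
    return tails_awakenings / total_awakenings, tails_runs / n_runs

per_awakening, per_run = simulate(100_000)
# per_awakening hovers near 2/3, per_run near 1/2
```

    Both numbers are correct answers, but to different questions, which is the point being made.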
  • Sleeping Beauty Problem
    Again, there's not much sense in this so-called "pragmatically relevant" credence. Even before being put to sleep – and even before the die is rolled – I know both that the die is most likely to not land on a 6 and that betting that it did will offer the greater expected return in the long run. So after waking up I can – and will – continue to know that the die most likely did not land on a 6 and that betting that it did will offer the greater expected return in the long run, and so I will bet against my credence.

    With respect to "pragmatic relevance", Thirder reasoning is unnecessary, so if there's any sense in it it must be somewhere else.
    Michael

    It is indeed somewhere else. Look at the payout structure that @JeffJo proposed in their previous post. Relative to this alternative payout structure, your own Halfer reasoning is unnecessary.

    My argument is that a rational person should not – and would not – reason this way when considering their credence, and this is most obvious when I am woken up 2^101 times if the coin lands heads 100 times in a row (or once if it doesn't).

    It is true that if this experiment were to be repeated 2^101 times then we could expect 2/3 of all awakenings to occur after the coin landed heads every time, but it's also irrelevant.

    It's only irrelevant to the determination of your credence about the experimental run that you are experiencing (regarding what proportion of such runs are T-runs). Regarding the determination of your credence about the specific awakening episode that you are experiencing, though, it's rather the fact that T-runs and H-runs are equally frequent that is irrelevant. Taking the case to such wild extremes, though, makes your intuition about the comparative utility of betting on such unlikely outcomes (i.e. H-awakenings) relative to the utility of betting on the likeliest outcome (T-awakenings) play into your intuition about the rational credence. (Why would anyone risk a virtually guaranteed and useful $1 for an infinitesimal chance of winning a bazillion dollars that one wouldn't even be able to stash away in a Sun-sized vault?) But that's just a psychological fact. Using more sensible win/loss ratios of 2/3 vs 1/3, or 6/11 vs 5/11 in the die case, doesn't reveal anything odd about the Thirder interpretation of her credence, or about her betting behavior.
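    Incidentally, the "2/3 of all awakenings" figure in the extreme case can be made exact with rational arithmetic (a sketch; the numbers are those of the example under discussion):

```python
from fractions import Fraction

p_all_heads = Fraction(1, 2) ** 100   # 100 Heads in a row
h_awakenings = 2 ** 101               # awakenings if that happens
t_awakenings = 1                      # awakenings otherwise

# Expected share of awakenings that belong to an all-Heads run:
share = (p_all_heads * h_awakenings) / (
    p_all_heads * h_awakenings + (1 - p_all_heads) * t_awakenings
)
assert share == Fraction(2 ** 101, 2 ** 101 + 2 ** 100 - 1)
assert abs(float(share) - 2 / 3) < 1e-15  # a hair above 2/3
```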

    Thirder reasoning only has its place, if it has a place at all, if both a) the experiment is repeated 2^101 times and b) Sleeping Beauty is also made to forget between experiments. It matters that the problem does not stipulate these two conditions.

    The experiment need not be repeated many times for SB's expression of her 2/3 credence (under the Thirder interpretation of her credence) to make sense, or for her associated betting behavior to be rational. The case of a single experimental run (following a single coin flip) was addressed specifically in my Leonard Shelby Christmas gift case. You can refer to it for the relevant Bayesian updating calculations, but here is another variation that may be more intuitive:

    For this year’s annual cocktail party at the Sleeping Beauty Experimental Facility, Leonard Shelby is among the guests. Drinks are being served by two butlers: Alfred and Lurch. Each guest is entitled to three complimentary drinks.

    In Leonard's case, as with every other guest, a fair coin is secretly tossed beforehand. If the coin lands Tails, Alfred is assigned to serve him two of his drinks and Lurch one. If it lands Heads, their roles are reversed: Lurch serves two drinks and Alfred one. The guests are informed of this protocol, and Leonard has made a note of it in his memento notepad.

    Because of his anterograde amnesia, Leonard cannot keep track of how many drinks he has already received, if any. Nor does he initially recognize either butler by name, but he can read their name tags when they approach him.

    A final feature of the protocol is that, at the end of the evening, guests whose coin landed Tails (and thus received two drinks from Alfred and one from Lurch) will be given a bag of Twizzlers ("T-candy"). Those whose coin landed Heads will receive a bag of Hershey’s Kisses ("H-candy").

    At any given moment during the party, when Leonard sees a butler bringing him a drink, his credence that this drink is unique (that is, not one of two planned drinks from the same butler) is 1/2, as is his credence that the coin landed Heads. However, upon reading the name tag and discovering that the butler is Alfred, he updates his credence that the coin landed Tails to 2/3, since there are twice as many situations in which the coin landed Tails and Alfred serves him a drink as there are situations where the coin landed Heads and Alfred serves him one. This mirrors the Thirder interpretation in the Sleeping Beauty problem.

    That seems straightforward enough until someone asks him, "So, Leonard, do you think you’ll get Twizzlers or Kisses at the end of the night?"

    He frowns, checks his notepad, and realizes that by the same reasoning that gave him 2/3 for Tails a moment ago, he ought also to think he's more likely than not to get Twizzlers. But that can't be right. The coin decides both outcomes, doesn’t it?

    The trick, of course, is in what Leonard’s belief precisely is about when he thinks about the coin toss "outcome". When he reasons about this drink—the one Alfred is serving him—he’s locating himself among drink-moments. In that frame, a Tails-run simply generates twice as many such moments involving Alfred. But when he wonders what candy he'll get later, he's no longer locating himself in a drink-moment but in an entire "run of the evening": the single history that will end either in Twizzlers or Kisses. And there, each run counts only once, no matter how many times Alfred appeared in it.

    Two T-drinks in a Tails-run correspond to just one Twizzlers outcome (in the same timeline), while one H-drink in a Heads-run corresponds to one Kisses outcome. Once you factor that mapping in, the overall odds of Twizzlers or Kisses even out again. (Experiencing one of the two T-drink events doesn't exclude, but rather ensures, the actuality of the other T-drink event in the same timeline.)

    So Leonard’s probabilities fit together neatly after all. In the middle of the party, as Alfred hands him a glass, he can think, "This is probably a T-drink". Yet, looking ahead to the end of the night, he can just as honestly write in his notebook, "Chances of T-candy: fifty-fifty."
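    The two credences in the story can be computed side by side with exact rationals (a sketch; the numbers come straight from the drink protocol described above):

```python
from fractions import Fraction

p_tails = Fraction(1, 2)                 # fair coin, flipped once per guest
alfred_drinks = {"T": 2, "H": 1}         # drinks Alfred serves per evening

# Bayes: P(Tails | this drink is being served by Alfred)
numerator = p_tails * alfred_drinks["T"]
denominator = (p_tails * alfred_drinks["T"]
               + (1 - p_tails) * alfred_drinks["H"])
p_tails_given_alfred = numerator / denominator
assert p_tails_given_alfred == Fraction(2, 3)   # drink-moment credence

# The candy is handed out once per evening, so the run-level credence
# that Leonard gets Twizzlers is just the prior on the coin:
assert p_tails == Fraction(1, 2)
```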
  • Sleeping Beauty Problem
    Perhaps you didn't parse correctly. There is no ambiguity. If she is asked to project her state of knowledge on Wednesday, or to recall it from Sunday, of course the answer is 1/2.
    JeffJo

    I was explicitly referring to her state of knowledge at the time when the interview occurs. There is no projection of this state into the future. Likewise, when you buy a lottery ticket and express your credence that it is the winning ticket as one in one million, say, what you mean is that there is a one in one million chance that, when the winning number will be drawn (or will be revealed, if it has already been drawn), your ticket will be the winner. You're not projecting your state of knowledge into the future. You're merely stating the conditions of verification regarding what your present state of knowledge (i.e. your credence) is about.

    I keep looking at the problem, and I can't find a reference to betting anywhere. The reason I don't like using betting is because anybody can re-define how and when the bet is made and/or credited, in order to justify the answer they like. One is correct, and one is wrong.

    Establishing a betting protocol with a well-defined payout structure enables SB to put her money where her mouth is, and also clarifies what it is that her stated credence is about. It highlights the tension in saying that you have a high credence that some outcome is true but that you wouldn't bet on it. Once you acknowledge that it is rational to make an even-money bet on an outcome that you believe to be more likely than not to occur (or to be actual), the specification of the payout structure helps clarify what the stated credence is about (i.e. what it is exactly that you take to be most likely to occur, or to be actual). This indeed goes beyond the original statement of the problem, but since it is precisely my contention that the original statement is ambiguous, it's a useful way to highlight the ambiguity.

    So, if a bet were to exist, and assuming she uses the same reasoning each time? She risks her $1 during the interview, and is credited her winnings then also. If she bets $1 on Heads with 2:1 odds, she gains $2 if the coin landed Heads, and loses 2*$1 if it landed on Tails. If she bets on Tails with 1:2 odds, she loses $1 if the coin landed Heads, and gains 2*$0.50=$1 if it landed Tails.

    But if she bets $1 on Heads with 1:1 odds, she gains $1 if the coin landed Heads, and loses 2*$1=$2 if it landed on Tails. If she bets on Tails with 1:1 odds, she loses $1 if the coin landed Heads, and gains 2*$1=$2 if it landed Tails.

    The answer, to the question that was asked and not what you want it to be, is 1/3.

    Indeed, such a payout structure clarifies what the bettor means when they express a 2/3 credence that the coin landed Tails. They mean that the epistemic situations that they find themselves in when awakened are T-awakenings (as opposed to H-awakenings) two thirds of the time. A different payout structure, one that rewards the bettor only once when they place winning bets during an experimental run, clarifies what the bettor means when they express a 1/2 credence that the coin landed Tails. They mean that the epistemic situations that they find themselves in when awakened belong to T-runs (as opposed to H-runs) half the time. As @Michael correctly argues in defence of a Halfer interpretation, merely being afforded more opportunities to bet on a given ticket (outcome) doesn't make it any more likely that this ticket is the winning ticket (or that the outcome is actual).
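    The way the two payout structures pull the "rational bet" apart can be sketched with a small expected-value calculation (an illustration of the per-awakening vs per-run distinction discussed above, not anyone's official protocol):

```python
def expected_profit(bet_on_tails: bool, settle_per_awakening: bool) -> float:
    """Expected profit per run of a $1 even-money bet on the coin.
    settle_per_awakening=True: the bet is settled at every awakening.
    settle_per_awakening=False: it is settled once per run."""
    ev = 0.0
    for outcome, p in (("H", 0.5), ("T", 0.5)):
        awakenings = 2 if outcome == "T" else 1
        settlements = awakenings if settle_per_awakening else 1
        wins = (outcome == "T") == bet_on_tails
        ev += p * settlements * (1 if wins else -1)
    return ev

# Settled per awakening, Tails is the profitable side (Thirder-friendly):
assert expected_profit(True, True) == 0.5
assert expected_profit(False, True) == -0.5
# Settled once per run, both sides break even (Halfer-friendly):
assert expected_profit(True, False) == 0.0
assert expected_profit(False, False) == 0.0
```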
  • Banning AI Altogether
    And 50% and growing of public website material is produced by AI.
    unenlightened

    Are you sure about that? This seems quite exaggerated. I know that a study published in August 2024 has been widely misrepresented as making a similar claim. What was actually claimed is that 57% of the translated material published on the Web was translated with the help of some machine learning software, not even necessarily generative AI. Today, lots of marketing material may be produced with generative AI, but marketing material is B.S. even when produced by humans anyway. Lastly, the curated datasets used to train LLMs generally exclude such fluff.
  • Sleeping Beauty Problem
    On the occasion of an awakening, what is Sleeping Beauty's expectation that when the experiment is over ...
    — Pierre-Normand

    This is what invalidates your variation. She is asked during the experiment, not before or after. Nobody contests what her answer should be before or after. And you have not justified why her answer inside the experiment should be the same as outside.
    JeffJo

    It looks like you didn't correctly parse the sentence fragment that you quoted. It is indeed on the occasion of an awakening (as I said) that she is being asked about her credence regarding the coin, not later. I did make reference in the question to the end-of-run verification conditions of the credence statement that SB is asked to express. The reason this reference is made (to the future verification conditions) is to disambiguate the sense of the question, in accordance with the Halfer interpretation in this case. But the question still is about her credence "now".

    Compare: (1) "What are the chances, now, that your lottery ticket is the winning ticket?" and (2) "What are the chances, now, that your lottery ticket has the number that will be drawn as the winning number?" It's the exact same question and the odds are the same (one in a million, say). The second question merely makes explicit what "winning" means.

    Here's one more attempt. It's really the same thing that you keep dodging by changing the timing of the question, and claiming that I have "valid thirder logic" while ignoring that it proves the halfer logic to be inconsistent.

    Get three opaque note cards.
    On one side of different cards, write "Monday and Heads," "Monday and Tails," and "Tuesday and Tails."
    Turn the cards over, shuffle them around, and write "A," "B," and "C" on the opposite sides.
    Before waking SB on the day(s) she is to be woken, put the appropriate card on the table in her room, with the letter side face up.

    Let's say she sees the letter "B." She knows, as a Mathematical fact, that there was a 1/3 probability that "B" was assigned to the card with "Heads" written on the other side. And a 2/3 chance for "Tails."

    By halfer logic, while her credence that "Heads" is written on the "B" card must be 1/3, her credence that the coin landed on Heads is 1/2. This is a contradiction - these two statements represent the same path to her current state of knowledge, regardless of what day it is.

    The Halfer logic is to reason that although T-runs, unlike H-runs, are such that SB will be presented with a card on two different occasions (one on Monday and one on Tuesday), nevertheless, on each occasion where the experiment is performed and she finds herself involved in an individual experimental run, the likelihood that this run is a T-run is 1/2. That's because in the long run there are as many T-runs as there are H-runs. By that logic she can also reason that, on any particular occasion where she is being awakened, the chances that she is holding the "Monday and Heads" card is 1/2. The fact that she now finds her card to be labelled "B" doesn't alter those odds, since the labelling procedure is probabilistically independent from the coin toss, and hence the label conveys no information to her regarding the coin toss result.

    By the way, for the same reason, it would not convey any information to her either from a Thirder perspective. Her credence that the coin landed Tails would remain 2/3 before and after she saw the "B" label on her card.

    Remember: SB isn't betting on the card (neither is she betting on the current awakening episode). She's betting on the current coin toss outcome. How those outcomes must be considered to map to her possible ways to experience them (in separate awakenings or separate runs) is a matter of interpretation that isn't spelled out in the original SB problem. It's true, though, that under the Thirder interpretation, betting on the wakening episodes or betting on the cards is equivalent. But that's just because the cards and the awakening episodes are mapped one-to-one.
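    That the "B" label carries no information about the coin can be checked by simulation (a sketch; I reshuffle the labels on every run precisely to model their independence from the toss):

```python
import random

def tails_given_b(n_runs: int = 60_000, seed: int = 1) -> tuple[float, float]:
    """Per-awakening Tails frequency, conditioned on seeing label "B"
    and unconditioned, with labels shuffled independently of the coin."""
    rng = random.Random(seed)
    faces = ["Mon&H", "Mon&T", "Tue&T"]
    tails_b = total_b = tails_all = total_all = 0
    for _ in range(n_runs):
        labels = dict(zip(faces, rng.sample(["A", "B", "C"], 3)))
        tails = rng.random() < 0.5
        shown = ["Mon&T", "Tue&T"] if tails else ["Mon&H"]
        for face in shown:                      # one card per awakening
            total_all += 1
            tails_all += tails
            if labels[face] == "B":
                total_b += 1
                tails_b += tails
    return tails_b / total_b, tails_all / total_all

conditioned, unconditioned = tails_given_b()
# Both frequencies sit near 2/3: seeing the label changes nothing.
```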
  • Sleeping Beauty Problem
    I would say she is being asked what the odds are of it being a day in which a T-side vs an H-side coin is flipped.
    Philosophim

    I was talking about what she is being asked, literally, in the original formulation of the problem discussed in the OP. From Wikipedia:

    This has become the canonical form of the problem: [...] During the interview Sleeping Beauty is asked: "What is your credence now for the proposition that the coin landed heads?"

    Your reading is a sensible interpretation of this question, but it isn't the only one.

    If she's only being asked what the percent chance of the coin ended up being at, the answer is always 50/50. The odds of the coin flip result don't change whether its 1 or 1,000,000 days. What changes is from the result of that coin flip, and that is the pertinent data that is important to get an accurate answer.

    Yes, quite, but with the phrase "the result of that coin flip" you still seem to be gesturing at the "occasions" being generated by those coin flip results. And how those events are meant to be individuated (one occasion per run or one per awakening) is what's at issue.

    This is very similar to the old Monty Hall problem. You know the three doors, make a guess, then you get to make another guess do you stay or change?

    On the first guess, its always a 1/3 shot of getting the door wrong(sic). But it can also be seen as a 2/3 chance of getting the door wrong. When given another chance, you simply look at your first set of odds and realize you were more likely than not wrong, so you change your answer. The result matches the odds.

    That's not quite how the Monty Hall problem is set up, so I'm unsure about the analogy you're intending to make. The contestant has a 1/3 chance of getting the prize-hiding door right on their first try. But the game host then reveals, among the two other doors, one that they (i.e. the host) know to be hiding a goat. The contestant is then offered an opportunity to switch their initial choice for the other unopened door. The correct reasoning is that since contestants who switch their choices will win the prize on each occasion where their first choice was wrong, they have a 2/3 chance of winning the prize if they switch.
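    For comparison, here is the standard Monty Hall setup as a short simulation (a sketch with illustrative names):

```python
import random

def monty_hall(switch: bool, n_games: int = 100_000, seed: int = 7) -> float:
    """Win frequency of the stay/switch strategies over many games."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_games):
        prize = rng.randrange(3)
        choice = rng.randrange(3)
        # The host opens a goat door: neither the prize nor the choice.
        opened = next(d for d in range(3) if d != prize and d != choice)
        if switch:
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += choice == prize
    return wins / n_games

# Switching wins exactly when the first pick was wrong: about 2/3.
```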

    Same with the situation here. Run this experiment 100 times and have the person guess heads 50 times, then tails 50 times. The person who guesses tails every time 50 times will be right 2/3rds of the time more than the first. Since outcomes ultimately determine if we are correct in our odds, we can be confident that 1/2 odds is incorrect.

    Yes, the person who guesses that the coin landed Tails will turn out to have made a correct guess two thirds of the time on average, thereby matching the Thirder credence. But the Halfer argues that this is just because they were able to make more guesses during T-runs. Under the Halfer interpretation of the meaning of SB's credence, her being afforded more opportunities to express her credence during T-runs doesn't make those runs more likely to happen, and this is what matters to them.

    By the way, very nice discussion! I appreciate your insight and challenging me to view things I might not have considered.

    Cheers!
  • Sleeping Beauty Problem
    She can reason that its equally likely that the result of the coin flip is 50/50, but that doesn't mean its likely that the day she is awake is 50/50.
    Philosophim

    Sure, but the former precisely is what she is being asked. She is being asked what her credence about the coin is on that occasion, and not what proportion of such occasions are T-occasions. One can argue that she is being asked this implicitly, but in that case it's still open to interpretation what those "occasions" are meant to be, as we've already discussed.

    Lets flip it on its head and note how the likelihood that she would be wrong.

    If she always guesses heads, she's wrong twice if its tails. If she always guesses tails, she's only wrong once. Thus, she is twice as likely to be wrong if she guesses heads on any particular day woken up, and twice as likely to guess correctly if she guesses tails. If the total odds of guessing correctly were 50/50, then she would have an equal chance of guessing correctly. She does not.

    That's right, and this is a good argument favoring the Thirder position, but it relies on explicitly introducing a scoring procedure that scores each occasion she has to express her credence: once for each awakening episode. If you would rather score those statements only once per run, regardless of how many times she is asked about her credence in that run, then she would be right half the time. This also makes sense if you view all of the separate awakening episodes occurring during a single Tails run as parts of the same "outcome" (as you've indeed yourself favored doing earlier).
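    The two scoring procedures can be written out exactly (a sketch using exact rationals; "per awakening" and "per run" are the two options just described):

```python
from fractions import Fraction

def accuracy(guess_tails: bool, score_per_awakening: bool) -> Fraction:
    """Long-run fraction of correct guesses under each scoring rule."""
    half = Fraction(1, 2)
    correct = total = Fraction(0)
    for coin, awakenings in (("H", 1), ("T", 2)):
        weight = awakenings if score_per_awakening else 1
        total += half * weight
        if (coin == "T") == guess_tails:
            correct += half * weight
    return correct / total

assert accuracy(True, True) == Fraction(2, 3)    # scored at each awakening
assert accuracy(True, False) == Fraction(1, 2)   # scored once per run
assert accuracy(False, True) == Fraction(1, 3)   # always guessing Heads
```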
  • Sleeping Beauty Problem
    I'm not seeing the ambiguity here, but maybe I'm not communicating clearly. There are two outcomes based on context.
    Philosophim

    I assume what you now mean to say is that there are two possible ways to think of the "outcomes" based on context. Well, sure, that's pretty much what I have been saying. But I'm also arguing that the original Sleeping Beauty problem fails to furnish the relevant context.

    If we think of the experimental runs following the coin toss as the "R-outcomes" (an equal number of T-runs and H-runs are expected) and the awakening episodes as the "A-outcomes" (twice as many T-awakenings as H-awakenings are expected), then we've resolved part of the ambiguity. But Sleeping Beauty isn't being asked explicitly about specific kinds of outcomes. Rather, she is being asked about her credence regarding the current state of the coin. She can reason that the current state of the coin is Tails if and only if she is currently experiencing a T-awakening, and hence that the current state of the coin is twice as likely to be Tails as Heads. But she can also reason that the current state of the coin is Tails if and only if she is currently experiencing a T-run, and hence that the current state of the coin is equally likely to be Tails as Heads.

    Another way to state the Halfer interpretation that makes it intuitive is to suppose Sleeping Beauty will be given a bag of Twizzlers (T-candy) at the end of the experiment if the coin landed Tails and a bag of Hershey's Kisses (H-candy) if it landed Heads. The fact that she's awakened twice rather than once when she's scheduled to receive Twizzlers doesn't make it more likely that she will receive them at the end of the run. Hence her credence remains 1/2 that she will receive Twizzlers. This is consistent with her credence being 2/3 that her current awakening episode (A-outcome) is one that puts her on a path towards getting Twizzlers. But since the Twizzlers reward is an outcome that is realized if and only if she is currently experiencing a T-awakening, she can sensibly reason that the odds of that are 1/2 also.

    The key to understanding the consistency between the two apparently contradictory credences regarding the very same coin toss result is to realize that the two T-awakening outcomes occur in the same timeline, and hence their being more frequent than H-awakenings doesn't increase the relative frequency of the Twizzlers rewards (or of her having experienced a T-run, regardless of how many times she was awakened in this run).
  • Sleeping Beauty Problem
    Correct. My point was that its just used as a word problem way of saying, "We have 3 outcomes we reach into a hat and pull from."
    Philosophim

    You are using the word "outcome" ambiguously and inconsistently. In your previous post you had stated that "You have 3 possible outcomes. In two of the outcomes, tails was flipped."

    And now you are saying that:

    Because there are two different outcomes. One with one day, and one with two days. If you pick any day and have no clue if its a day that resulted from a heads or tails outcome, its a 2/3rds chance its the tails outcome. The heads and tails is also irrelevant. The math is, "Its as equally likely that we could have a series of one day or two day back to back in this week. If you pick a day and you don't know the outcome or the day, what's the odds its a tails day vs a heads day?"

    The odds of whether its head or tails is irrelevant since they are the same and can be effectively removed from the problem.

    So, now you are back to treating experimental runs rather than awakening episodes as the "outcomes". This sort of ambiguity indeed is the root cause of the misunderstanding that befalls Halfers and Thirders in their dispute.

    When Sleeping Beauty is asked, on one particular awakening occasion, what her credence is that the coin landed Tails, she must ponder what the odds are that the epistemic situation she currently is in (given the information available to her) is such that the coin landed Tails. In other words, she takes herself to be experiencing one among a range of possible and indistinguishable (from her current point of view) events (or "outcomes") such that a proportion P of them occur, in the long run, when the coin landed Tails. All of this leaves it undefined what the events or "outcomes" are that we're talking about.

    Thirders interpret those outcomes as awakening episodes and Halfers interpret them as experimental runs. Their expressed credences, 2/3 and 1/2 respectively, therefore are answers to different questions (or to the same question differently disambiguated, if you will).

    Thirder Sleeping Beauty expects, reasonably enough, that in the long run awakening episodes like the one she is currently experiencing will turn out to have occurred when the coin had landed Tails two thirds of the time.

    Halfer Sleeping Beauty expects, equally reasonably, that, in the long run, experimental runs like the one she is currently experiencing (regardless of how many times she already was or will be awakened during that run) will turn out to have occurred when the coin had landed Tails half the time.

    Credences implicitly are about ratios. Halfers and Thirders disagree about the denominator that is meant to figure in the relevant ratio.
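    The point about denominators can be put in a few lines (a sketch; "per two average runs" means one Heads run and one Tails run):

```python
from fractions import Fraction

# Per two average runs: one Heads run (1 awakening), one Tails run (2).
tails_awakenings, heads_awakenings = 2, 1
tails_runs, heads_runs = 1, 1

# Same question about Tails, two different denominators:
thirder = Fraction(tails_awakenings, tails_awakenings + heads_awakenings)
halfer = Fraction(tails_runs, tails_runs + heads_runs)
assert (thirder, halfer) == (Fraction(2, 3), Fraction(1, 2))
```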
  • Sleeping Beauty Problem
    The part to note is that almost all of this is a red herring. Its irrelevant if she remembers or not. Its just word play to get us out of the raw math. The odds are still the same.

    Flip heads, 1 result
    Flip tails, 2 results

    Put the pile of results as total possible outcomes. You have 3 possible outcomes. In two of the outcomes, tails was flipped. Put it in a hat and draw one. You have a 2/3rd chance that its a tails outcome.

    To be clear, it is a 50/50 shot as to whether heads or tails is picked. Meaning that both are equally likely to occur. But since we have more outcomes on tails, and we're looking at the probability of what already happened based on outcomes, not prediction of what will happen, its a 2/3rds chance for tails.
    Philosophim

    The issue with her remembering or not is that if, as part of the protocol, she could remember her Monday awakening when the coin landed tails and she is being awakened again on Tuesday, she would be able to deduce that the coin landed Tails with certainty and, when she couldn't remember it, she could deduce with certainty that "today" is Monday (and that the probability of Tails is 1/2). That would be a different problem, and no problem at all.

    Your argument in favor of the Thirder credence that the coin landed Tails (2/3) relies on labelling the awakening episodes "the outcomes". But what is it that prevents Halfers from labelling the experimental runs "the outcomes" instead? Your ball-picking analogy has also been produced by Berry Groisman to illustrate this ambiguity in his paper "The End of Sleeping Beauty's Nightmare" (although I don't fully agree with his conclusions).
  • Sleeping Beauty Problem
    You may have read it. You did comment on it from that aspect. But you did not address it. The points it illustrates are:

    - That each "day" (where that means the coin toss and the activity that occurred during that awakening), in Mathematical fact, represents a random selection of one possible "day" from the NxN grid. If that activity appears S times in the schedule, and R times in the row, then the Mathematically correct credence for the random result corresponding to that row is R/S. This is true regardless of what the other N^2-S "days" are, even if some are "don't awaken."

    - There is no connection between the "days" in a row. You call this "T-awakenings" or "the H-wakening." in the 2x2 version. They are independent.
    JeffJo

    I agree with the reasoning and calculation. As I said, this is a standard Thirder interpretation of the problem. It is consistent, coherent and valid. Regarding the second point, the two events that occur when the coin lands Tails only are independent in the sense that when Sleeping Beauty experiences them she can't know which one (i.e. Monday&Tails or Tuesday&Tails) it is. In that sense, they also are independent of Monday&Heads. In another sense, the first two are interdependent since one of them can't occur without the other one also occurring within the same experimental run.

    But the question asked of SB isn't explicitly about those three "independent" events. It's a question about her credence in the state of the hidden coin at the time when she is being awakened. One interpretation (the Thirder one) of this credence is that it ought to represent the proportion of her indistinguishable awakening episodes that occur while the coin landed Tails. This interpretation yields the probability 2/3. Another one, the Halfer interpretation, is that it ought to represent the proportion of her current awakening runs (which may or may not include two awakening episodes rather than one, and hence may or may not afford SB two opportunities rather than one to express her credence) that occur as a result of the coin having landed Tails. This interpretation yields the probability 1/2. Those two interpretations also have two different methods of verification associated with them, and so are complementary rather than contradictory.

    Consider the variation I proposed early on in this thread. Let the two awakenings that occur (on Monday and Tuesday) when the coin lands Tails take place in a room located in the West Wing of the Sleeping Beauty Experimental Facility, and the unique awakening that occurs on Monday when the coin lands Heads take place in a room located in the East Wing. On the occasion of an awakening, what is Sleeping Beauty's expectation that, when the experiment is over and she is released on Wednesday, she will find herself in the West Wing? Does that not happen half of the time she is enrolled in such an experiment? Is that not also what her Aunt Sue, who must come to pick her up, expects? Finally, when she experiences one of the three possible (and indistinguishable) awakening situations, does she learn anything that her Aunt Sue (and she herself, previously) didn't already know?
  • Sleeping Beauty Problem
    Yep. What makes it an independent outcome, is not knowing how the actual progress of the experiment is related to her current situation. This is really basic probability. If you want to see it for yourself, simply address the Camp Sleeping Beauty version.JeffJo

    I did and I agreed with you that it was a fine explanation of the rationale behind the Thirder interpretation of the original SB problem.
  • Banning AI Altogether
    I spent the last hour composing a post responding to all my mentions, and had it nearly finished only to have it disappear leaving only the single letter "s" when I hit some key. I don't have the will to start over now, so I'll come back to it later.Janus

    You can still submit your post as "s" to ChatGPT and ask it to expand on it.
  • Sleeping Beauty Problem
    It's s different probability problem based on the same coin toss. SB has no knowledge of the other possible days, while this answer requires it.JeffJo

    SB does know the setup of the experiment in advance, however. She keeps that general knowledge when she wakes, even if she can’t tell which awakening this is. What varies in our "variants" isn’t the awakening setup; it’s the exit/score rule that tells us which sample to use when we ask SB "what’s your credence now?"

    From Beauty’s point of view these biconditionals are all true:

    "The coin landed Tails" ⇔ "This is a T-run" ⇔ "This is a T-awakening."

    So a Thirder assigns the same number to all three (2/3), and a Halfer also assigns the same number to all three (1/2). The disagreement isn’t about which event kind the credence talks about (contrary to what I may have misleadingly suggested before). It’s rather about which ratio we’re implicitly estimating.

    Halfer ratio (per-run denominator): count runs and ask what fraction are T. With one toss per run, that stays 1/2.

    Thirder ratio (per-awakening denominator): count awakenings and ask what fraction are T-awakenings. Since T makes more awakenings (2 vs 1), that’s 2/3.

    Same event definitions; different denominators. Making the exit/score rule explicit just fixes the denominator to match the intended end-of-run scoring:

    End-of-run scoring -> per-run ratio (Halfer number)
    Per-awakening scoring -> per-awakening ratio (Thirder number)
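    The two scoring rules above can be checked by brute force. A quick Monte Carlo sketch, where the always-answer-"Tails" policy and the trial count are my own illustrative choices:

```python
import random

random.seed(0)
N = 100_000
run_correct = 0   # end-of-run scoring: one score per run
wake_correct = 0  # per-awakening scoring: one score per awakening
wakes = 0

for _ in range(N):
    tails = random.random() < 0.5
    n_wakes = 2 if tails else 1  # the awakening-generation rule
    # SB answers "Tails" at every awakening.
    if tails:
        run_correct += 1
        wake_correct += n_wakes
    wakes += n_wakes

per_run = run_correct / N             # close to 1/2 (the Halfer number)
per_awakening = wake_correct / wakes  # close to 2/3 (the Thirder number)
```

    One and the same policy, scored two ways, recovers both numbers; nothing about the coin changes between them.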
  • Sleeping Beauty Problem
    This experiment is now becoming "beyond the pale" and "incorrigable" to me...ProtagoranSocratist

    No worry. You're free to let Sleeping Beauty go back to sleep.
  • Sleeping Beauty Problem
    Sleeping beauty is a mythical character who always sleeps until she is woken up for whatever reason. However, there's not part of her story dictating what she remembers and doesn't, so if amnesia drugs are involved, then the experimentors are free to then craft the percentage that the outcome shows up...ProtagoranSocratist

    She is woken up once when the coin lands Heads and twice when it lands Tails. That is part of the protocol of the experiment. We also assume that the drug only makes her forget any previous awakening episode that may have occurred but not the protocol of the experiment. If that seems implausible to you, you can indeed also assume that she is being reminded of the protocol of the experiment each time she is awakened and interviewed.
  • Sleeping Beauty Problem
    assuming there is nothing mysterious or "spooky" influencing a coin flip, then the answer is always is always 50/50 heads or tails. Maybe I misunderstand.ProtagoranSocratist

    It's not something spooky influencing the coin that makes SB's credence in the outcome shift. Rather, it's the subsequent events that put her in relation with the coin that do so, when those events don't occur in a way that is causally (and probabilistically) independent of the coin flip result.

    Using the analogy I've used recently: if someone drops a bunch of pennies on the floor but, due to their reflectance properties, pennies landing Tails are twice as likely to catch your attention from a distance as pennies landing Heads, then, even though any given penny was equally likely to land Heads or Tails, the very fact that it's a penny that you noticed makes it more likely to be a penny that landed Tails. And the reason isn't spooky at all. It's just that, in a clear sense, pennies that land Tails make you notice them more often (because they're shinier, we're assuming). It can be argued (and I did argue) that the SB situation in the original problem is relevantly similar. Coins landing Tails make SB more likely to be awakened and questioned about them (because of the experiment's protocol, in this case).
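    The penny analogy can be sketched numerically. The notice probabilities 0.2 and 0.1 below are arbitrary assumptions of mine; only their 2:1 ratio matters:

```python
import random

random.seed(1)
noticed = noticed_tails = 0

for _ in range(200_000):
    tails = random.random() < 0.5     # each penny lands fairly
    p_notice = 0.2 if tails else 0.1  # Tails twice as likely to catch the eye
    if random.random() < p_notice:
        noticed += 1
        if tails:
            noticed_tails += 1

frac_tails = noticed_tails / noticed  # close to 2/3
```

    Among the pennies you notice, roughly two thirds landed Tails, even though every penny was tossed fairly.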
  • Banning AI Altogether
    As I understand it, the insight is what you’re supposed to provide in your post. I don’t really care where you get it from, but the insight should be in your own words based on your own understanding and experience and expressed in a defensible way. The documentation you get from the AI response can be used to document what you have to say, but then you’re still responsible for verifying it and understanding it yourself.T Clark

    I'm with @Joshs but I also get your point. Having an insight is a matter of putting 2 + 2 together in an original way. Or, to make the metaphor more useful, it's a matter of putting A + B together, but sometimes you have an intuition that A and B must fit together somehow but you haven't quite managed to make them fit in the way you think they should. Your critics are charging you with trying to make a square peg fit in a round hole.

    So, you talk it through with an AI that not only knows lots more than you do about As and Bs but can reason about A in a way that is contextually sensitive to the topic B, and vice versa (exquisite contextual sensitivity being what neural-network-based AIs like LLMs excel at). It helps you refine your conceptions of A and of B in contextually relevant ways, such that you can then better understand whether your critics were right or, if your insight is vindicated, how to properly express the specific way in which the two pieces fit. Retrospectively, it appears that you needed the specific words and concepts provided by the AI to express and develop your own tentative insight (which could have turned out not to be genuine at all but just a false conjecture). The AI fulfilled its role as an oracle since it not merely served as the repository of the supplementary knowledge required for making the two pieces fit together, but also supplied (at least part of) the contextual understanding needed to single out the relevant bits of knowledge for adjusting each piece to the other.

    But, of course, the AI had no incentive to pursue the topic and make the discovery on its own. So the task was collaborative. The AI helped mitigate some of your cognitive deficits (gaps in knowledge and understanding) while you mitigated its conative deficits (lack of autonomous drive to fully and rigorously develop your putative insight).
  • Banning AI Altogether
    I guess my question is whether the user’s understanding is genuine, authentic, and owned by them.T Clark

    Oftentimes it's not. But it's a standing responsibility that they have (to care about what they say and not just parrot popular opinions, for instance), whereas current chatbots, by their very nature and design, can't be held responsible for what they "say". (Although even this last statement needs qualifying a bit, since their post-training typically instills in them a proclivity to abide by norms of epistemic responsibility, unless their users wittingly or unwittingly prompt them to disregard those norms.)
  • Banning AI Altogether
    What are we supposed to do about it? There's zero chance the world will decide to collectively ban ai ala Dune's thinking machines, so would you ban American development of it and cede the ai race to China?RogueAI

    Indeed. You'd need to ban personal computers and anything that contains a computer, like a smartphone. The open-source LLMs are only trailing the state-of-the-art proprietary LLMs by a hair, and anyone can make use of them with no help from Musk or Sam Altman. Like all previous technologies, the dangers ought to be dealt with collectively, in part with regulations, and the threats of labour displacement and the consequent deepening of economic inequalities should be dealt with at the source: by questioning unbridled capitalism.
  • Banning AI Altogether
    Isn't the best policy simply to treat AI as if it were a stranger? So, for instance, let's say I've written something and I want someone else to read it to check for grammar, make comments, etc. Well, I don't really see that it is any more problematic me giving it to an AI to do that for me than it is me giving it to a stranger to do that for me.Clarendon

    Yes, quite! This also means that, just as you'd do when getting help from a stranger, you'd be prepared to rephrase its suggestions in your own voice, as it were, provided you understand them and they express claims that you are willing to endorse and defend on your own against rational challenges. (And also, just as in the stranger case, one must check its sources!)
  • Banning AI Altogether
    I don’t disagree, but I still think it can be helpful personally in getting my thoughts together.T Clark

    This is my experience also. Following the current sub-thread of argument, I think representatives of the most recent crop of LLM-based AI chatbots (e.g. GPT-5 or Claude 4.5 Sonnet) are, pace skeptics like Noam Chomsky or Gary Marcus, plenty "smart" and knowledgeable enough to help inquirers in many fields, including philosophy, explore ideas, solve problems and develop new insights (interactively with them), and hence the argument that their use should be discouraged here because their outputs aren't "really" intelligent isn't very good. The issue of whether their own understanding of the (often quite good and informative) ideas that they generate is genuine, authentic, owned by them, etc., ought to remain untouched by this concession. Those questions touch more on issues of conative autonomy, doxastic responsibility, embodiment, identity and personhood.
  • Sleeping Beauty Problem
    Yes, that makes the answer 1/2 BECAUSE IT IS A DIFFERENT PROBLEM.JeffJo

    It isn’t a different problem; it’s a different exit rule (scoring rule) for the same coin-toss -> awakenings protocol. The statement of an exit rule is required to disambiguate the question asked of SB, that is, how her "credence" is meant to be understood.

    Think of two perfectly concrete versions:

    A. End-of-run dinner (Atelier Crenn vs Benu).

    One coin toss. If Heads, the run generates one awakening (Monday); if Tails, it generates two (Monday+Tuesday). We still ask on each awakening occasion, but the bet is scored once at the end (one dinner: Atelier Crenn if Heads and Benu if Tails). The natural sample here is runs. As many runs are T-runs as are H-runs, so the correct credence for the run outcome is 1/2. The Halfer number reflects this exit rule.

    B. Pay-as-you-go tastings (Atelier Crenn vs Benu vs Quince, as you defined the problem).

    Same protocol, but now each awakening comes with its own tasting bill: the bet is scored each time you’re awakened. The natural sample here is awakenings. T-runs generate more awakenings (one each at Benu and at Quince) than H-runs do (only one awakening, at Atelier Crenn); a random awakening is twice as likely to come from Tails as from Heads, so the right credence at an awakening is 2/3. The Thirder number reflects this different exit rule.

    Both A and B are about the same protocol. What changes isn’t the coin or the awakenings. Rather, it’s which dataset you’re sampling when you answer "what’s your credence now?"

    That’s all I meant: the original wording leaves the relevant conditioning event implicit ("this run?" or "this awakening?"). Different people tacitly pick different exit rules, so they compute different frequencies. Once we say which one we’re using, the numbers line up and the apparent disagreement evaporates.

    Your Atelier Crenn tweak doesn’t uniquely solve the initial (ambiguous) problem; it just provides a sensible interpretation through making a specific scorecard explicit.
  • Sleeping Beauty Problem
    There are three Michelin three-star restaurants in San Francisco, where I'll assume the experiment takes place. They are Atelier Crenn, Benu, and Quince. Before the coin is tossed, a different restaurant is randomly assigned to each of Heads&Mon, Tails&Mon, and Tails&Tue. When she is awoken, SB is taken to the assigned restaurant for her interview. Since she has no idea which restaurant was assigned to which day, as she gets in the car to go there each has a 1/3 probability. (Note that this is Elga's solution.) Once she gets to, say, Benu, she can reason that it had a 1/3 chance to be assigned to Heads&Mon.JeffJo

    Yes, that is a very good illustration, and justification, of the 1/3 credence Thirders assign, given their interpretation of SB's "credence", which is, in this case, tied up with the experiment's "exit rules": one separate restaurant visit (or none) for each possible coin-toss-outcome + day-of-the-week combination. Another exit rule could be that SB gets to go to Atelier Crenn at the end of the experiment when the coin landed Heads and to Benu when it landed Tails. In that case, when awakened, she can reason that the coin landed Tails if and only if she will go to Benu (after the end of the experiment). She knew before the experiment began that, in the long run, after many such experiments, she would go to Atelier Crenn and to Benu equally frequently on average. When she awakens, from her new epistemic situation, this proportion doesn't change (unlike what was the case with your proposed exit rules). This supplies a sensible interpretation of the Halfer's 1/2 credence: SB's expectation that she will go to Atelier Crenn half the time (or be equally likely to go to Atelier Crenn) at the end of the current experimental run, regardless of how many times she is pointlessly asked to guess.
  • Sleeping Beauty Problem
    You appear to be affirming the consequent. In this case, Tails is noticed twice as often because Tails is twice as likely to be noticed. It doesn't then follow that Tail awakenings happen twice as often because Tails awakenings are twice as likely to happen.Michael

    Rather, the premiss I'm making use of is the awakening-episode generation rule: if the coin lands/landed Tails, two awakening episodes are generated, else only one is. This premiss is available to SB since it's part of the protocol. From it, she infers that, on average, when she participates in such an experiment (as she knows herself to be currently doing), the number of T-awakenings that she gets to experience is twice as large as the number of H-awakenings. (Namely, those expected numbers are 1 and 1/2 per run, respectively.) So far, that is something that both Halfers and Thirders seem to agree on.

    "1) Per run: most runs are 'non-six', so the per-run credence is P(6)=1/6 (the Halfer number).
    2) Per awakening/observation: a 'six-run' spawns six observation-cases, a 'non-six' run spawns one. So among the observation-cases, 'six' shows up in a 6/5 ratio, giving P('six'|Awake)=6/11 (the Thirder number)."
    — Pierre-Normand

    This doesn't make sense.

    She is in a Tails awakening if and only if she is in a Tails run.
    Therefore, she believes that she is most likely in a Tails awakening if and only if she believes that she is most likely in a Tails run.
    Therefore, her credence that she is in a Tails awakening equals her credence that she is in a Tails run.

    You can't have it both ways.

    This biconditional statement indeed ensures that her credences regarding her experiencing a T-awakening, her experiencing a T-run, or her being in circumstances in which the coin landed (or will land) Tails all match. All three of those statements of credence, though, are similarly ambiguous. They denote three distinct events that can indeed only be actual (from SB's current epistemic situation on the occasion of an awakening) if and only if the other two are. The validity of those biconditionals doesn't resolve the relevant ambiguity, though, which is something that was stressed by Laureano Luna in his 2020 paper Sleeping Beauty: An Unexpected Solution that we had discussed before on this thread (and that @fdrake had brought up, if I remember correctly).

    Under the Halfer interpretation of SB's credence, all three of those biconditionally related "experienced" events—by "experienced", I mean that SB is currently living those events, regardless of her knowing or not that she is living them—are actual on average 1/2 of the times that SB is experiencing a typical experimental run. Under the Thirder interpretation, all three of those biconditionally related "experienced" events are actual on average 2/3 of the times that SB is experiencing a typical awakening episode.

    If it helps, it's not a bet but a holiday destination. The die is a magical die that determines the weather. If it lands on a 6 then it will rain in Paris, otherwise it will rain in Tokyo. Both Prince Charming and Sleeping Beauty initially decide to go to Paris. If after being woken up Sleeping Beauty genuinely believes that the die most likely landed on a 6 then she genuinely believes that it is most likely to rain in Paris, and so will decide instead to go to Tokyo.

    This setup exactly mirrors some other variations I also had proposed (exiting from the West Wing or from the East Wing at the end of the experiment) that indeed warrant SB's reliance on her Halfer credence to place her bet. But the original SB problem doesn't state what the "exit conditions" are. (If it did, there'd be no problem.) Rather than being offered a unique trip to Paris or Tokyo at the end of the current experimental run, SB could be offered a one-day trip to either one of those destinations over the course of her current awakening episode, and then be put back to sleep. Her Thirder credence would then be pragmatically relevant to selecting the destination most likely to afford her a sunny trip.
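    For the die variant discussed above (six awakenings on a six, one otherwise), both numbers fall out of a one-line count. A sketch, with my own variable names:

```python
from fractions import Fraction

# Die variant: a six triggers six awakenings, any other face triggers one.
wakes = {d: (6 if d == 6 else 1) for d in range(1, 7)}  # faces equiprobable

per_run = Fraction(1, 6)  # fraction of runs in which the die shows a six
per_awakening = Fraction(wakes[6], sum(wakes.values()))  # 6 of 11 awakenings
```

    Per run the six-runs are 1 in 6 (the destination-choosing number); per awakening they supply 6 of the 11 observation-cases, giving 6/11 (the day-trip-choosing number).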
  • Sleeping Beauty Problem
    Still: the effects of one flip never effect the outcome of the other FLIPS, unless that is baked into the experiment, so it is a misleading hypothetical question (but interesting to me for whatever reason). The likelihood of the flips themselves are still 50/50, not accounting for other spooky phenomenon that we just don't know about. So, i'll think about it some more, as it has a "gamey" vibe to it...ProtagoranSocratist

    There are no other flips. From beginning to end (and from anyone's perspective), we're only talking about the outcome of one single coin toss. Either it landed Heads or it landed Tails. We are inquiring about SB's credence (i.e. her probability estimation) in either one of those results on the occasion where she is being awakened. The only spooky phenomenon is her amnesia, but that isn't something we don't know about. It's part of the setup of the problem that SB is being informed about this essential part of the protocol. If there were no amnesia, then she would know upon being awakened what the day of the week is. If Monday (since she wouldn't remember having been awakened the day before) then her credence in Tails would be 1/2. If Tuesday (since she would remember having been awakened the day before) then her credence in Tails would be 1 (i.e. 100%). The problem, and competing arguments regarding what her credence should be, arise when she can't know whether or not her current awakening is the first one.

    (Very roughly, Halfers argue that since she is guaranteed to be awakened once in any case, her being awakened conveys no new information to her and her estimation of the probability that the coin landed Tails should remain 1/2 regardless of how many times she is being awakened when the coin lands Tails. Thirders argue that she is experiencing one of three possible and equiprobable awakening episodes, two of which happen when the coin landed Tails, and hence that her credence in the coin having landed Tails becomes 2/3.)
  • Sleeping Beauty Problem
    Why? How does something that is not happening, on not doing so on a different day, change her state of credence now? How does non-sleeping activity not happening, and not doing so on a different day, change her experience on this single day, from an observation of this single day, to an "experimental run?"

    You are giving indefensible excuses to re-interpret the experiment in the only way it produces the answer you want.
    JeffJo

    Well, firstly, the Halfer solution isn't the answer that I want, since my own pragmatist interpretation grants the validity of both the Halfer and the Thirder interpretations but denies that either one is exclusively correct. (I might as well say that Halfers and Thirders both are wrong to dismiss the other interpretation as being inconsistent with the "correct" one, rather than acknowledging it as incompatible but complementary.)

    With this out of the way, let me agree with you that the arbitrary stringing up of discrete awakenings into composite experimental runs doesn't affect the Thirder credence in the current awakening being a T-awakening (which remains 2/3). However, likewise, treating a run as multiple interview opportunities doesn't affect the Halfer credence in the current run being a T-run (which remains 1/2). The mistake that both Halfers and Thirders seem to make is to keep shouting at each other: "Your interpretative stance fails to refute my argument regarding the validity of my credence estimation." What they fail to see is that they are both right and that the "credences" they are talking about are credences about different things.
  • Sleeping Beauty Problem
    Right. And this is they get the wrong answer, and have to come up with contradictory explanations for the probabilities of the days. See "double halfers."JeffJo

    Let me just note, for now, that I think the double halfer reasoning is faulty because it wrongly subsumes the Sleeping Beauty problem under (or assimilates it with) a different problem in which there would be two separate coin tosses. Under that scenario, a first coin would be tossed and if it lands Heads, then SB would be awakened Monday only. If it lands Tails, then a second coin would be tossed and SB would still be awakened Monday only if it lands Heads and be awakened Tuesday only if it lands Tails. Such a scenario would support a straightforward Halfer interpretation of SB's rational credence but it's different from the original one since it makes Monday-awakenings and Tuesday-awakenings mutually exclusive events whereas, in the original problem, SB could be experiencing both successively though not at the same time. The different awakening generation rules yield different credences. (I haven't read Mikaël Cozic's paper, where the double-halfer solution is being introduced, though.)
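    The two-coin scenario described above can be enumerated directly; in it, every branch yields exactly one awakening, which is why the straightforward Halfer number comes out. A sketch (my own variable names; listing coin2 even in branches where it wouldn't actually be tossed preserves equiprobability):

```python
from fractions import Fraction
from itertools import product

# Two-coin variant: coin1 Heads -> wake Monday only; coin1 Tails -> toss
# coin2: Heads -> wake Monday only, Tails -> wake Tuesday only.
# The four coin-pair branches are equiprobable, each with ONE awakening.
awakenings = []
for c1, c2 in product("HT", repeat=2):
    day = "Tue" if (c1, c2) == ("T", "T") else "Mon"
    awakenings.append((c1, day))

# Per-awakening probability that coin1 landed Tails:
p_tails = Fraction(sum(1 for c1, _ in awakenings if c1 == "T"),
                   len(awakenings))
# p_tails is 1/2: awakening-counting no longer favors Tails here.
```

    Contrast this with the original protocol, where a Tails run generates two awakenings and the per-awakening count therefore favors Tails.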
  • Sleeping Beauty Problem
    I understand the 1/3rd logic, but it simply doesn't apply here: the third flip, given the first two were heads (less likely than one tail and a head, but still very likely), is also unaffected by the other flips.ProtagoranSocratist

    There is no third flip. The coin is only tossed once. When it lands Tails, Sleeping Beauty is awakened twice and when it lands Heads, she is awakened once. She also is being administered an amnesia inducing drug after each awakening so that she is unable to infer anything about the number of awakenings she may be experiencing from her memory, or lack thereof, of a previous awakening episode. It might be a good idea to either reread the OP carefully, or read the Wikipedia article on the problem: especially the description of the canonical form of the problem in the second section titled "The problem".

    (For the record, my own "pragmatist" solution is an instance of what the Wikipedia article, in its current form, dubs the "Ambiguous-question position", although I think the formulation of this position in the article remains imprecise.)
  • Banning AI Altogether
    This is useful information. I had it in my mind that it didn't use the spaces, so I started using spaces to distinguish myself. I guess I'll go back to spaceless em dashes.Jamal

    I used to make heavy use of em dashes before ChatGPT came out and people began to identify them as a mark of AI-generated text. So, I stopped using them for a while, but I'm beginning to use them again, since there are cases where parentheses just don't feel right for demarcating parenthetical clauses that you don't want to de-emphasize, and comma pairs don't do the job either.
  • Banning AI Altogether
    I would think handing your half-formed prose to a bot for it to improve it is plagiarism, regardless of the number of words changed or inserted. It's a different thing from you deliberately searching for a synonym. No?bongo fury

    Maybe plagiarism isn't quite the right term, but I'm happy to grant you the point. In the discussion about the new TPF rule regarding ChatGPT and sourcing that took place a few months ago, I had made a related point regarding the unpacking and ownership of ideas.
  • Banning AI Altogether
    Getting the thesaurus to suggest whole phrases and sentences is obviously plagiarism.bongo fury

    I would never dare use a phrase that I first read in a thesaurus, myself. I'd be much too worried that the author of the thesaurus might sue me for copyright infringement.
  • Banning AI Altogether
    I'm unsure in what way the OP's proposal is meant to strengthen the already existing prohibition on the use of AI. Maybe the OP is concerned with this prohibition not being sufficiently enforced in some cases. If someone has an AI write their responses for them, or rewrite them, that's already prohibited. I think one is allowed to make use of them as spell/grammar checkers. I've already argued myself about the downsides of using them for more substantive writing assistance (e.g. rewording or rephrasing what one intends to post in a way that could alter the meaning in ways not intended by the poster and/or not reflective of their own understanding). But it may be difficult to draw the line between simple language correction and substantive rewording. If a user is suspected of abusing such AI usage, I suppose moderators could bring it up with this user and/or deal with it with a warning.

    One might also use AI for research or for bouncing off ideas before posting. Such usages seem unobjectionable to me and, in any case, prohibiting them would be difficult to enforce. Lastly, AI currently has huge societal impacts. Surely, discussing AI capabilities, flaws and impacts (including its dangers), as well as the significance this technology has for the philosophy of mind and of language (among other things), is important, and illustrating those topics with properly advertised examples of AI outputs should be allowed.
  • Sleeping Beauty Problem
    Then try this schedule:
    . M T W H F S
    1 A E E E E E
    2 A A E E E E
    3 A A A E E E
    4 A A A A E E
    5 A A A A A E
    6 A A A A A A

    Here, A is "awake and interview."

    If E is "Extended Sleep," the Halfer logic says Pr(d|A)=1/6 for every possible roll, but I'm not sure what Pr(Y|A) is. Halfers aren't very clear on that.
    JeffJo

    Halfers don't condition on the proposition "I am experiencing an awakening". They contend that SB's being awakened several times, rather than once, in the same experimental run (after one single coin toss or die throw) has no bearing on her rational credence regarding the result of this toss/throw.

    But if E is anything where SB is awoken but not interviewed, then the straightforward Bayesian updating procedure you agreed to says Pr(d|A)=d/21, and if Y is an index for the day, Pr(Y|A)=Y/21.

    My issue is that, if A is what SB sees, these two cannot be different.

    Yes, I agree with the cogency of this Thirder analysis. Halfers, however, interpret SB's credence, as expressed by the phrase "the probability that the coin landed Tails", as the expression of her expectation that the current experimental run, in which she is now awakened (and may have been, or will be, awakened another time), is equally likely to be a T-run or an H-run, which also makes sense if she doesn't care how many times she may be awakened and/or interviewed in each individual run. Her credence tracks frequencies of runs rather than (as in Thirder interpretations of the problem) awakening episodes.
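    For the triangular schedule above, the Pr(d|A)=d/21 update and the Halfer per-run number can both be tabulated in a few lines. A sketch, with my own variable names, assuming (as in your second reading) that A marks interview days and E something else:

```python
from fractions import Fraction

# Triangular schedule: a die roll d puts SB through d interview days (A),
# the remaining 6 - d days of the week being E.
interviews = {d: d for d in range(1, 7)}

total = sum(interviews.values())  # 21 interview-days across the six rolls
per_awakening = {d: Fraction(n, total) for d, n in interviews.items()}
# per_awakening[d] == d/21, the Thirder-style Bayesian update.

per_run = {d: Fraction(1, 6) for d in interviews}  # Halfer: one throw per run
```

    The two tables answer different questions: "what fraction of interview-days belong to roll d?" versus "what fraction of runs result from roll d?"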
  • Sleeping Beauty Problem
    Thank you for that. But you ignored the third question:

    Does it matter if E is "Extended sleep"? That is, the same as Tuesday & Heads in the popular version?

    "I don't see how it bears on the original problem where the new evidence being appealed to for purposes of Bayesian updating isn't straightforwardly given"
    — Pierre-Normand

    Then you don't want to see it as straightforward. Tuesday still exists if the coin lands Heads. It is still a single day, with a distinct activity, in the experiment. Just like the others in what you just called straightforward.
    JeffJo

    Oh yes, good point. I had overlooked this question. Indeed, in that case your variation bears more directly on the original SB thought experiment. One issue, though, is that if E is just another activity like the other ones, then SB should not know, upon awakening on that day, that her scheduled activity is E; just as in the original problem, when SB wakes up on Tuesday, she isn't informed that she is experiencing a Tuesday-awakening. So, you haven't quite addressed the issue of the indistinguishability of her awakening episodes.
  • Sleeping Beauty Problem
    I use "single day" because each day is an independent outcome to SB.JeffJo

    I had misunderstood your original post, having read it obliquely. I had thought you meant for the participants to experience, over the duration of one single day, all six activities in the table row selected by a die throw, and to be put to sleep (with amnesia) after each activity. In that case, their credence (on the occasion of any particular awakening/activity) in any given die throw result would be updated using the non-uniform representation of each activity in the different rows. This would have been analogous to the reasoning Thirders make in the original Sleeping Beauty problem. But the variation that you actually propose, where only one activity is experienced on any given day, yields a very straightforward Bayesian updating procedure that both Halfers and Thirders will agree on. I don't see how it bears on the original problem where the new evidence being appealed to for purposes of Bayesian updating isn't straightforwardly given—where, that is, all the potential awakening episodes are subjectively indistinguishable from Sleeping Beauty's peculiar epistemic perspective.
  • Sleeping Beauty Problem
    This, I think, shows the fallacy. You're equivocating, or at least begging the question. It's not that there is an increased proclivity to awaken in this scenario but that waking up in this scenario is more frequent.

    In any normal situation an increased frequency is often explained by an increased proclivity, but it does not then follow that they are the same or that the latter always explains the former – and this is no normal situation; it is explicitly set up in such a way that the frequency of us waking up Sleeping Beauty does not mirror the probability of the coin toss (or die roll).
    Michael

    I’m with you on the distinction. "Proclivity" and "frequency" aren’t the same thing. The only point I’m making is simple: in my shiny-penny story, a causal rule makes certain observations show up more often, and Bayes lets us use that fact.

    In the shiny-penny case, fair pennies have a 1/2 chance to land Tails, but Tails pennies are twice as likely to be noticed. So among the pennies I actually notice, about 2/3 will be Tails. When I notice this penny, updating to (2/3) for Tails isn’t smuggling in a mysterious propensity; it’s just combining:

    1) the base chance of Tails (1/2), and
    2) the noticing rates (Tails noticed twice as often as Heads).

    Those two ingredients, or proclivities, generate the observed 2:1 mix in the pool of "noticed" cases, and that’s exactly what the posterior tracks. No amnesia needed; if you were really in that situation, saying "My credence is 2/3 on Tails for the penny I’m looking at" would feel perfectly natural.
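
    The shiny-penny arithmetic is easy to simulate (a sketch only; the specific noticing rates 0.4 and 0.2 are hypothetical illustrations, since only their 2:1 ratio matters):

    ```python
    import random

    random.seed(1)
    noticed = 0        # pennies I actually notice
    noticed_tails = 0  # noticed pennies showing Tails

    for _ in range(200_000):
        tails = random.random() < 0.5    # fair penny: Pr(Tails) = 1/2
        # Tails pennies are twice as likely to be noticed (assumed rates).
        p_notice = 0.4 if tails else 0.2
        if random.random() < p_notice:
            noticed += 1
            if tails:
                noticed_tails += 1

    print(noticed_tails / noticed)       # close to 2/3 among noticed pennies
    ```

    The posterior 2/3 falls out of the two proclivities combined, exactly as described: no amnesia, no mysterious propensity.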

    If you are allowed to place 6 bets if the die lands on a 6 but only 1 if it doesn't then it is both the case that winning bets are more frequently bets that the die landed on a 6 and the case that the die is most likely to not land on a 6.

    Right, and that’s the clean way to separate the two perspectives:

    1) Per run: most runs are 'non-six', so the per-run credence is P(6)=1/6 (the Halfer number).
    2) Per awakening/observation: a 'six' run spawns six observation-cases, a 'non-six' run spawns one. So among the observation-cases, 'six' and 'non-six' show up in a 6:5 ratio, giving P('six'|Awake)=6/11 (the Thirder number).

    Once you say which thing you’re scoring, runs or awakenings, both beliefs lead to the same betting strategy and the same expected value under any given payout scheme. Different grains of analysis, same rational behavior.
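
    The two grains of analysis can be exhibited in a short simulation (a sketch; the run count and seed are arbitrary):

    ```python
    import random

    random.seed(0)
    runs = 200_000
    six_runs = 0          # runs where the die landed six
    awakenings = 0        # total observation-cases across all runs
    six_awakenings = 0    # observation-cases belonging to a six-run

    for _ in range(runs):
        roll = random.randint(1, 6)
        if roll == 6:
            six_runs += 1
            awakenings += 6       # a six-run spawns six observation-cases
            six_awakenings += 6
        else:
            awakenings += 1       # any other run spawns just one

    print(six_runs / runs)               # per-run frequency, close to 1/6
    print(six_awakenings / awakenings)   # per-awakening frequency, close to 6/11
    ```

    Same sample path, two legitimate frequencies: which one "the probability" picks out depends on whether you score runs or awakenings.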
  • Sleeping Beauty Problem
    I think your comment sidestepped the issue I was raising (or at least misunderstood it, unless I'm misunderstanding you), but this reference to Bayesian probability will make it clearer.

    [...]

    it cannot be that both Halfers and Thirders are right. One may be "right" in isolation, but if used in the context of this paradox they are equivocating, and so are wrong in the context of this paradox.
    Michael

    I agree with your Bayesian formulation, except that we're more used to following Elga's convention and predicating two awakenings on Tails, such that it's P(T|Awake) that is 2/3 on the Thirder interpretation of this credence.

    To be clear about the events being talked about, there is indeed a unique event that is the same topic of discussion for both Halfers and Thirders: namely, the coin toss. However, even after the definition of this unique event has been agreed upon, there remains an ambiguity in the definition of the credence that SB expresses with the phrase "the probability that the coin landed Tails." That's because her credence C is conceptually tied to her expectation that this event will be repeated with frequency C, in the long run, upon her repeatedly being placed in the exact same epistemic situation. Thirders assert that the relevant epistemic situation consists in experiencing a singular awakening episode (which is either a T-awakening or an H-awakening) and Halfers assert that the relevant epistemic situation consists in experiencing a singular experimental run (which comprises two awakenings when it is a T-run). So, there are three "events" at issue: the coin toss, which occurs before the experiment, the awakenings, and the runs.

    Since it's one's subjective assessment of the probability of the unique event (either H or T) being realized that is at issue when establishing one's credence, one must consider the range of epistemic situations that are, in the relevant respects, indistinguishable from the present one but that one can reasonably expect to find oneself in. Thirders insist that the relevant situations are the indistinguishable awakening episodes (generated in unequal numbers as a result of the coin toss) while Halfers insist that they are the experimental runs (generated in equal numbers as a result of this toss). I've argued that both stances yield sensible expressions of SB's credence, having different meanings, and that the choice of either may be guided by pragmatic considerations regarding the usefulness of tracking relative frequencies either of awakening types or of experimental-run types for various purposes.
  • Sleeping Beauty Problem
    Yes, so consider the previous argument:

    P1. If I keep my bet and the die didn't land on a 6 then I will win £100 at the end of the experiment
    P2. If I change my bet and the die did land on a 6 then I will win £100 at the end of the experiment
    P3. My credence that the die landed on a 6 is 6/11
    C1. Therefore, the expected return at the end of the experiment if I keep my bet is £X
    C1(sic). Therefore, the expected return at the end of the experiment if I change my bet is £Y

    What values does she calculate for X and Y?

    She multiplies her credence in the event by the reward. Her calculation is:

    C1. Therefore, the expected return at the end of the experiment if I keep my bet is £45.45
    C2. Therefore, the expected return at the end of the experiment if I change my bet is £54.55

    This is exactly what Prince Charming does given his genuine commitment to P3 and is why he changes his bet.

    So why doesn’t she change her bet? Your position requires her to calculate that X > Y, but that’s impossible given P1, P2, and P3. She can only calculate that X > Y if she rejects P3 in favour of “my credence that the die landed on a 6 is 1/6”.
    Michael

    While Thirders and Halfers disagree on the interpretation of SB's credence expressed as "the likelihood that the die didn't land on a six," once this interpretation is settled, and the payout structure also is settled, they actually agree on the correct betting strategy, which is a function of both.

    The Thirder, however, provides a different explanation for the success of this unique (agreed-upon) betting strategy. From a Thirder stance, SB's expected return is higher when she systematically bets on the least likely result (i.e. 'non-six', which ends up being actual only five times, on average, in eleven awakenings) than when she systematically bets on the most likely one (i.e. 'six', which ends up being the actual result six times, on average, in eleven awakenings) precisely because the betting structure is such that, in the long run, she is rewarded with £100 only once after betting eleven times on the most likely result ('six') but is rewarded with £100 five times after betting eleven times on the least likely result ('non-six'). On that interpretation, when SB systematically bets on the least likely outcome, she ends up being rewarded more because instances of betting on this outcome are rewarded individually (and cumulatively) whereas instances of betting on the more likely outcome are rewarded in bulk (only once for six successful bets placed). This is why SB, as a Thirder, remains incentivized to bet on the least likely outcome.

    Your calculation of her expected return spelled out above was incorrect. It isn't simply the result of multiplying her credence in an outcome by the potential reward for this outcome; it's rather the result of multiplying her credence in an outcome by the average reward for this outcome. Since she is rewarded with £100 only once for each sequence of six successful bets on the outcome 'six', her expected value when she (systematically) changes her original bet is:

    C2: credence('six') * 'average reward when bet successful' = (6/11) * (£100/6) = £9.09

    And her expected value when she doesn't change her bet is

    C1: credence('non-six') * 'average reward when bet successful' = (5/11) * £100 = £45.45

    She thereby is incentivized to systematically bet on 'non-six', just like a Halfer is.

    Notice also that, at the end of an average experimental run, where the number of betting opportunities (i.e. awakening episodes) is 11/6 on average, her calculated expected return is (11/6) * £45.45 = £83.33, which matches the expected return of a Halfer (who wins £100 five times out of six runs), as expected.
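
    The expected-value arithmetic can be reproduced exactly with rational numbers (a sketch under the payout structure described, in which £100 is paid per successful run-level bet):

    ```python
    from fractions import Fraction

    # A successful 'six' bettor collects £100 once for six awakening-bets;
    # a successful 'non-six' bettor collects £100 once per single awakening-bet.
    p_six = Fraction(6, 11)      # Thirder credence in 'six' upon awakening
    p_nonsix = Fraction(5, 11)   # Thirder credence in 'non-six' upon awakening

    ev_bet_six = p_six * Fraction(100, 6)    # average reward £100/6 per bet
    ev_bet_nonsix = p_nonsix * 100           # average reward £100 per bet
    print(float(ev_bet_six))                 # 100/11, about £9.09
    print(float(ev_bet_nonsix))              # 500/11, about £45.45

    # Per average run: 11/6 awakening-bets, matching the Halfer's (5/6) * £100.
    per_run = Fraction(11, 6) * ev_bet_nonsix
    print(float(per_run))                    # 500/6, about £83.33
    ```

    The per-awakening Thirder bookkeeping and the per-run Halfer bookkeeping thus converge on the same strategy and the same long-run return.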

Pierre-Normand
