Comments

  • Is all this fascination with AI the next Dot-Com bubble
    Yes, I agree, although the article claims that the bubble is propping up a weak and unstable economy. One being abused by a tyrant wielding king like powers. Changing his mind from day to day with an ideology based around a misunderstanding of the market effect of tariffs. The instability is off the charts and if it does all go off the rails there is a real risk that Trump will impose emergency, or plenary powers to postpone the midterm elections. Not to mention the damage being done to international trade. He may even impose martial law and precipitate a civil war.

    Even if the stock market somehow rides all these waves, it will alienate international partners and erode the reserve currency status of the dollar and the unipolar status of the U.S. will be squandered. Indeed this last point may already have been squandered, due to the withdrawal of USAID programmes around the world leaving a void for China to fill.
    Punshhh

    I very much share this general sentiment, but I'd also like to highlight one commonality and one difference between the AI tech and Trump phenomena. (Artificial "Intelligence" meets "natural" stupidity?) The salient commonality, it seems to me, is that both of them, or at least their most damaging effects, are enabled by, and are manifestations of, capitalism and the neoliberal world order. The liberal media's focus on Trump's awful decisions and their damaging consequences shields from blame the social and economic structures that Trump rides on, and that would be responsible for nearly as much damage without him.

    The salient difference between the Trump and AI tech phenomena is that, apart from the effects ascribable to the underlying socioeconomic structures, Trump himself, as a political leader, has no discernible redeeming value. The technological progress of AI, on the other hand, can be made a boon or a bane depending on what we do with it, either personally or collectively. Capitalism stands in the way of making good collective decisions about this technology, while neoliberal ideology produces the consumerist/individualistic frames of mind that prevent individuals from making use of AI productively and responsibly.
  • Banning AI Altogether
    All of which is to say, I haven't really done the work of assessing the claims on their own merits. So now I've put my prejudices on the table, I guess I should challenge them. The stuff about deceptiveness is certainly interesting and surprising.Jamal

    ...also a bit overblown and misrepresented in the media, since when you dig into the primary reports it's generally the case that the LLMs didn't decide to deceive of their own accord but did it instrumentally to fulfill objectives explicitly given to them. Maybe I'll comment on that, and how those studies bear on the issue of conative autonomy for LLMs, in my new thread.
  • How LLM-based chatbots work: their minds and cognition
    "I believe" and "I intend" are convenient examples to support this position, because they have no "content" apart from a kind of imprimatur on decision or action. But most mental life will not fit such an example. When I imagine a purple cow, I am, precisely, peeking at a private inner state to discover this. A (mental) purple cow is not a belief or an intention. It is an image of a purple cow. I've never understood how the Wittgensteinian public-criteria position can address this. What conceivable public criterion could there be that would tell me whether you are, at this moment, imagining a purple cow? (assuming you remain silent about it).J

    I don't agree that beliefs and intentions lack content. Believing is believing that P and intending is intending to phi, although those contents need not be sensory. By contrast, I'm perfectly willing to concede that LLMs are quite incapable of imagining a purple cow, or anything purple for that matter :wink:

    LLMs are disembodied, have no sense organs and aren't sentient. They can't imagine something purple any more than a congenitally blind person can. However, in the case of a normally sighted person, how do you (or they) check that the purple cow that they are imagining is indeed imagined to be purple? It wouldn't make much sense to compare their mental image to a likewise imagined standard purple paint swatch. (Wittgenstein once made a joke about someone claiming to know how tall they were, saying "I am this tall" while laying one hand flat over their head).

    If you imagine a purple cow, having already seen objects of that color, but do not know what this color is called, we could assess that the color you are imagining the cow to be is purple with the help of a real paint swatch (or any other object commonly recognised to be purple). The criterion by means of which we both would assess the content of your mental state (in respect of imagined color) is your public assent to the suggestion that it is indeed the color of the seen object, regardless of the name we give it. (Did we not have a similar discussion in the past?)

    Notice that nothing I've said about the public criteria that the determination of the content of acts of imagination depends on impugns the notion that the person imagining them has first person authority. She's the one to be believed when she claims that the cow she imagines looks "like that" while pointing at the public sample. Nothing in this undercuts privacy of occurrence either (only I can do my imagining for me), but the content is anchored in shared practice, not a private standard.

    I'll come back to the issues of public criteria for intentions, as they may apply to LLMs, later.
  • How LLM-based chatbots work: their minds and cognition
    This is not true. To predict the name of the murderer in the novel, does not require that the LLM does any of that. It requires only that the LLM is able to predict the habits of the author.Metaphysician Undercover

    If the chatbot tells you who the murderer might be, and explains to you what the clues are that led it to this conclusion, and the clues are being explicitly tied together by the chatbot through rational chains of entailment that are sensitive to the significance of the clues in the specific narrative context, can that be explained as a mere reproduction of the habits of the author? What might such habits be? The habit to construct rationally consistent narratives? You need to understand a story in order to construct a rationally consistent continuation of it, I assume.

    Look at this Einstein riddle. Shortly after GPT-4 came out, I submitted it to the model and asked it to solve it step by step. It was thinking about it quite systematically and rationally but was also struggling quite a bit, making occasional small inattention mistakes that were compounding and leading it into incoherence. Repeating the experiment was leading it to approach the problem differently each time. If any habits of thought were manifested by the chatbot, that were mere reproductions of the habits of thought of the people who wrote its training texts, they'd be general habits of rational deliberation. Periodically, I assessed the ability of newer models to solve this problem and they were still struggling. The last two I tried (OpenAI o3 and Gemini 2.5 Pro, I think) solved the problem on the first try.
  • How LLM-based chatbots work: their minds and cognition
    We don't know how the human mind works. Is there something special about the human hardware, something quantum for instance, that is key to consciousness? Or is it all in the organic "software"?

    So how do we examine the question with a large chunk of information missing? How do you look at it?
    frank

    My own view is that what's overlooked by many who contemplate the mystery of human consciousness is precisely the piece LLMs miss. But this overlooked/missing piece isn't hidden inside. It is outside, in plain view, in the case of humans, and genuinely missing in the case of LLMs. It is simply a living body embedded in a natural and social niche. In Aristotelian terms, the rational, sensitive and nutritive souls are distinct faculties that each presuppose the next one. What's queer about LLMs is that they manifest sapience, the capabilities we identify with the rational soul, and that they distill through a form of acculturation during the process of pre-training on a massive amount of human texts, but this "soul" floats free of any sensitive or nutritive soul.

    The process of pre-training really does induct an LLM into many forms of linguistic life: norms of giving and asking for reasons, discourse roles, genre conventions. But this second nature "floats" because it lacks the first-nature ground (nutritive and sensitive powers) that, for us, gives rational life its stakes: human needs, perception-action loops, personal/social commitments and motivations.
  • Banning AI Altogether
    Superficially, one might think that the difference between an AI is exactly that we do have private, hidden intent; and the AI doesn't. Something like this might be thought to sit behind the argument in the Chinese Room. There are plenty here who would think such a position defensible.

    In a Wittgensteinian account, we ought avoid the private, hidden intention; what counts is what one does.

    We can't deduce that the AI does not have private sensations, any more than we can deduce this of our human counterparts. Rather, we seem to presume it.
    Banno

    I commented on this in my new AI-cognition thread.
  • How LLM-based chatbots work: their minds and cognition
    Superficially, one might think that the difference between an AI is exactly that we do have private, hidden intent; and the AI doesn't. Something like this might be thought to sit behind the argument in the Chinese Room. There are plenty here who would think such a position defensible.

    In a Wittgensteinian account, we ought avoid the private, hidden intention; what counts is what one does.

    We can't deduce that the AI does not have private sensations, any more than we can deduce this of our human counterparts. Rather, we seem to presume it.
    Banno

    This is redirected from this post in the thread Banning AI Altogether.

    Regarding the issue of hidden (private) intents, and them being presupposed in order to account for what is seen (public), what also encourages the Cartesian picture is the correct consideration that intentions, like beliefs, are subject to first person authority. You don't need to observe your own behavior to know what it is that you believe or intend to do. But others may indeed need to presuppose such mental states in order to make sense of your behavior.

    In order to fully dislodge the Cartesian picture, which Searle's internalist/introspective account of intentionally contentful mental states (i.e. states that have intrinsic intentionality) indeed seems not to have fully relinquished, an account of first person authority must be provided that is consistent with Wittgenstein's (and Ryle's and Davidson's) primary reliance on public criteria.

    On the issue of first-person authority, I’m drawing on Rödl’s Kantian distinction between knowledge from receptivity and knowledge from spontaneity. Empirical knowledge is receptive: we find facts by observation. But avowals like "I believe…" or "I intend…" are paradigms of spontaneous knowledge. We settle what to believe or do, and in settling it we know it not by peeking at a private inner state but by making up our mind (with optional episodes of theoretical or practical deliberation). That fits a Wittgenstein/Ryle/Davidson picture grounded in public criteria. The authority of avowal is practical, not introspective. So when an LLM avows an intention ("I’ll argue for P, then address Q"), its authority, such as it is, would come not from access to a hidden realm, but from undertaking a commitment that is immediately manifest in the structure of its linguistic performance.
  • How LLM-based chatbots work: their minds and cognition
    Regardless of how “human” large language models may appear, they remain far from genuine artificial intelligence. More precisely, LLMs represent a dead end in the pursuit of artificial consciousness. Their responses are the outcome of probabilistic computations over linguistic data rather than genuine understanding. When posed with a question, models such as ChatGPT merely predict the most probable next word, whereas a human truly comprehends the meaning of what she is saying.Showmee

    An argument has been made, though, by researchers like Ilya Sutskever and Geoffrey Hinton, that in order to do so much as predict the word that is most likely to follow at some point in a novel or mathematics textbook, merely relying on surface statistics would yield much poorer results than modern LLMs display. The example provided by Sutskever is the prediction of the name of the murderer at the moment when it is revealed in a detective story. In order for the model to produce this name as the most probable next word, it has to be sensitive to relevant elements in the plot structure, distinguish apparent from real clues, infer the states of mind of the depicted characters, etc. Sutskever's example is hypothetical but can be adapted to any case where LLMs successfully produce a response that can't be accounted for by mere reliance on superficial and/or short-range linguistic patterns.

    Crucially, even occasional success on such tasks (say, correctly identifying the murderer in 10-20% of genuinely novel detective stories while providing a plausible rationale for their choice) would be difficult to explain through surface statistics alone. If LLMs can sometimes succeed where success seemingly requires understanding narrative structure, character psychology, and causal reasoning, this suggests at least some form of genuine understanding rather than the pure illusion of such.

    Additionally, modern chatbots like ChatGPT undergo post-training that fine-tunes them for following instructions, moving beyond pure next-token prediction. This post-training shifts the probability landscape to favor responses that are not merely plausible-sounding but accurate and relevant, however unlikely they'd be to figure in the training data.
  • Banning AI Altogether
    The reason I think this is off target could be seen by looking at Plato's dialogues. If what Wittgenstein or you say were correct, then classic texts such as Plato's dialogues should "feel dead when extracted from the 'living' exchange." Except they don't. They feel very much alive.Leontiskos

    I was actually also thinking of Plato when I mentioned the anecdote about Wittgenstein! First, I must point out that unlike Wittgenstein's lecture notes (that he usually refrained from producing), and also unlike our dialogues with AIs, Plato's dialogues were crafted with a public audience in mind.

    Secondly, Richard Bodeüs, who taught us courses on Plato and Aristotle when I was a student at UdeM, mentioned that the reason Plato wrote dialogues rather than treatises, and notoriously reserved his "unwritten doctrine" for direct oral transmission, is that he thought transmitting it in written form would yield dogma. His attitude to the written word is attested by the myth of Theuth in the Phaedrus, where Socrates faults written words for not being able to defend themselves, respond to questions or adapt themselves to different audiences. It is of course ironic that Plato (unlike his hero) wrote so much, albeit in dialogue form only, but I think the apparent paradox is illuminated by our considerations about authorship (and ownership) and real moves in a public language game. Plato's dialogues weren't lecture notes, neither were they internal cogitations. His writing them was his making moves in the situated language game that was philosophical inquiry (and teaching) in his time and place. We can still resurrect those moves (partially) by a sort of archeological process of literary exegesis.

    Similarly, I think any transcript of human interactions will feel much more alive than a human-AI "interaction" (I want to retain the scare quotes for these words that we are using in idiosyncratic ways).

    I agree. But that's because in the first case there are at least two players playing a real game (where each one of them has their own stakes in the game). In a "private" dialogue between a human and a chatbot, there is just one player, as is the case when one jots down lecture notes primarily intended for use by oneself. But then, as Wittgenstein noted, the text tends to become stale. I surmise that this is because the words being "used" were meant as a linguistic scaffold for the development of one's thoughts rather than for the purpose of expressing those thoughts to a real audience.
  • Is all this fascination with AI the next Dot-Com bubble
    I expect that, just like the Dot-Com bubble, the AI bubble is likely to burst. But this is mainly a market phenomenon that results from a race for dominance (and monopoly/oligopoly) and the consequent overinvestment. After the bubble bursts, if it does, I expect AI use and impacts to keep growing just like the Internet's use and impacts kept growing unimpeded after the Dot-Com bubble burst and many investors (and players of various sizes) bit the dust.
  • Banning AI Altogether
    You mean thanking him! :wink:Janus

    Although they've been named after Claude Shannon, I'm pretty sure they identify as non-binary.
  • Banning AI Altogether
    Pierre-Normand might know - would someone who has had a different history with ChatGPT receive a similarly self-reinforcing answer?Banno

    I was musing today about creating a new AI thread devoted specifically to discussing how LLM-based chatbots work and in what respects their cognitive abilities resemble or differ from those of human beings (and other animals). I've been exploring many such issues at the interface between the philosophy of mind and the study of the inner workings of LLMs in my two old AI threads, but those are primarily aimed at directly experimenting with the chatbots and reporting on those experiments. The new thread might help declutter threads like the present one where the focus is on the use, utility, abuse, dangers, or other societal impacts of AI. I think I will create such a thread tonight.
  • Banning AI Altogether
    I realized that when I see the quoted output of an LLM in a post I feel little to no motivation to address it, or even to read it. If someone quotes LLM output as part of their argument I will skip to their (the human's) interpretation or elaboration below it. It's like someone else's LLM conversation is sort of dead, to me. I want to hear what they have built out of it themselves and what they want to say to me.Jamal

    When Wittgenstein was giving lectures in Cambridge in 1930-1933, he was unwilling to write any lecture notes for his own use. He claimed that after he'd jotted down his own thoughts, the words expressing them became dead to him. So, he preferred expressing whatever he wanted to convey to his students afresh. A couple of times in the past (just like what happened to @Janus recently in this thread, I think) I wrote a long response to a post and lost it to some computer glitch, and when I tried to rewrite from memory what I had written I found myself unable to find the words to express the very same ideas that I had expressed fluently on the first try. So, I had to pause and rethink what it is that I wanted to say and find new words.

    AIs are good partners to bounce ideas off, and they supplement what you tell them with missing pieces of knowledge and ways to understand those ideas as they are in the process of being unpacked. So, conversing with AIs is like articulating a thought for yourself. But when this collaborative thinking episode is over, the human user has not yet written down the fruit of this collaborative effort and neither has the AI! They each have only written down one half of the collaborative cogitation. That may be why this text feels dead when extracted from the "living" (or dynamic, if you prefer) AI/human exchange. It's like trying to extract thoughts from the words used to think them (as opposed to the words used to express them), but thoughts don't live outside the means of expressing them. And the conversation with an AI is, in a sense, an (as yet) unexpressed thinking episode. The user's task of expressing anew whatever comes out of it to a new target audience begins after the private exchange with the AI.

    On edit: here are some dead words from GPT-4o that, however dead they may be (to addressees other than me), struck me as particularly smart and insightful.
  • Currently Reading
    Thinking and Being by Irad Kimhi.Paine

    Oh, then don't miss downloading the erratum, if you haven't already.
  • How to use AI effectively to do philosophy.
    I think this is a false equivalence. Drawing conclusions about AI based on its code is not the same as drawing conclusions about humans based on theories of neurophysiology. The theories of neurophysiology simply do not provide the deductive rigor that computer code does. It is incorrect to presume that drawing conclusions about a computer program based on its code is the same as drawing conclusions about a human based on its neurophysiology. Indeed, the whole point here is that we wrote the code and built the computer program, whereas we did not write nor build the neurophysiology—we do not even know whether neurophysiology and code are truly analogous. Art and science seem to be being conflated, or at least this is the prima facie conclusion until it can be shown why AI has somehow gone beyond artifice.Leontiskos

    I fully agree that there is this important disanalogy between the two cases, but I think this difference, coupled with what we do know about the history of the development of LLMs within the fields of machine learning and natural language processing, buttresses my point. Fairly large classes of problems that researchers in those fields had grappled unsuccessfully with for decades suddenly were "solved" in practice when the sought-after linguistic and cognitive abilities just arose from the training process through scaling, which left many NLP (natural language processing, not the pseudoscience with the same acronym!) researchers aghast because it seemed to them that their whole field of research was suddenly put in jeopardy. I wanted to refer you to a piece where I recalled a prominent researcher reflecting on this history and couldn't find it. GPT-5 helped me locate it: (When ChatGPT Broke an Entire Field: An Oral History)

    So, in the case of rational animals like us, the issue of finding the right explanatory level (either deterministic-bottom-up or emergent-top-down) for some class of behavior or cognitive ability may require, for instance, disentangling nature from nurture (which is complicated by the fact that the two corresponding forms of explanation are more often complementary than dichotomous) and doing so in any detail might require knowledge of our own natural history that we don't possess. In the case of chatbots, we indeed know exactly how it is that we constructed them. But it's precisely because of that that, as reported in the Quanta piece linked above, we know that their skills weren't instilled in them by design except inasmuch as we enabled them to learn those skills from the training data that we ourselves (human beings) produced.

    So an example of the sort of answer I would want would be something like this: "We build the code, but the output of that code builds on itself insofar as it is incorporating inputs that we did not explicitly provide and we do not fully comprehend (such as the geography that a map-making AI surveys)." So apparently in some sense the domain of inputs is unspecified, and because of this the output is in some sense unpredictable.

    On my view, it's not so much the unpredictability of the output that is the mark of rational autonomy but rather the relevant source of normative constraint. If the system/animal can abide (however imperfectly) by norms of rationality then questions about the low-level material enablement (physiology or programming) of behavior are largely irrelevant to explaining the resulting behavior. It may very well be that knowing both the physiology and the perceptually salient circumstances of a person enables you to predict their behavior in bottom-up deterministic fashion like Laplace's demon would. But that doesn't imply that the antecedent circumstances caused, let alone relevantly explain, why the behavior belonged to the intelligible class that it did. It's rather the irreducible high-level rationalizing explanation of their behavior that does the job. But that may be an issue for another thread.

    Meanwhile, the answer that I would like to provide to your question addresses a slightly different one. How might we account for the emergence of an ability that can't be accounted for in low-level terms? Not because determinate inputs don't lead to determinate outputs (they very well might), but rather because the patterns that emerge in the outputs, in response to those that are present in the inputs, can only be understood as being steered by norms that the chatbot can abide by only on the condition that it has some understanding of them; and the process by means of which this understanding is achieved, unlike what was supposed to be the case with old symbolic AI, wasn't directed by us.

    This isn't of course an easy question to answer but the fact that the emergence of the cognitive abilities of LLM-based chatbots was unpredictable doesn't mean that it's entirely mysterious either. A few months ago I had a discussion with GPT-4o, transcribed here in four parts, about the history leading from Rosenblatt's perceptron (1957) to the modern transformer architecture (circa 2017) that underlies chatbots like ChatGPT, Claude and Gemini, and about the criticisms of this neural net approach to AI by Marvin Minsky, Seymour Papert and Noam Chomsky. While exploring what it is that the critics got wrong (and was belied by the later successes in the field) we also highlighted what it is that they had gotten right, and what it is that makes human cognition distinctive. And this also suggested enlightening parallels, as well as sharp differences, between the formative acculturation processes that humans and chatbots go through during upbringing/training. Most of the core ideas explored in this four-part conversation were revisited in a more condensed manner in a discussion I had with GPT-5 yesterday. I am of course not urging you to read any of that stuff. The Quanta piece linked above, though, might be more directly relevant and accessible than the Karpathy interview I had linked earlier, and might provide some food for thought.
  • Exploring the artificially intelligent mind of GPT4
    I've just had with GPT-5 the most interesting conversation I've had with an LLM so far.

    We revisited the process of exaptation whereby the emergent conceptual skills that LLMs develop during pre-training for the sake of predicting next-tokens get repurposed during post-training for making them abide by (and bind themselves to) their norms as conversational assistants. While seeking to explain this process at the mechanistic level (for which GPT-5 provided the most enlightening technical elements) we ended up comparing Sabina Lovibond's model of human ethical formation to a similar (but also in many respects different) idea of a proleptic-loop that accounts for the way in which post-trained models come to abide by rational norms.

    (After writing the above, I've also asked GPT-5 to summarise our conversation in one paragraph:

    "We sketched a “how” for LLM exaptation: pre-training builds a versatile simulator (world-knowledge, task sketches, role-play), and post-training then retargets it toward assistant norms (helpful/harmless/honest) by binding instructions, shaping preferences, and constraining decoding—small steering changes rather than new “content.” This yields behavior that looks reflexively self-bound by norms: the model uses self-descriptive control macros (“I should verify…”) that reliably predict approval and therefore guide action. We compared this to Lovibond’s proleptic moral formation: in both cases, being addressed as if already responsible helps stabilize norm-governed conduct, though humans have endogenous motivations, diachronic identity, and affect that LLMs lack (the conation gap). We proposed simple probes—like “role swap with leakage” and norm-collision tests—to check whether the model’s norm-following is robust policy rather than mere style.")
  • Thoughts on Epistemology
    @Sam26 So sad to hear you're leaving! I sincerely hope you'll change your mind again in the future. In any case, I wish you the best of luck with your new project!
  • Sleeping Beauty Problem
    OH never mind, OF course if she knew it was Monday she wouldn't say 1/3, but what if she was off...and Tuesday comes around and it changes to 0? the chance to change or update belief still exists if tails and asked twice. On Monday she does not know for certain if heads or tails only gives her degree of belief in heads, knowing nothing Wednesday when experiment ends, tomorrow she will be awakened or sleep through the day, she can still guess reasonably participating, I think? I don't know, perhaps I am in over my head here...again!Kizzy

    You'll more easily wrap your head around the problem if you don't overcomplicate things (even though it will remain a tough problem). The purpose of the drug is merely to make it impossible for Sleeping Beauty, on any occasion of awakening, to know whether this occasion is a first or a second one in the experiment (which she could otherwise deduce if she had a memory of the previous one or the lack thereof). This makes all three possibilities—Monday&Heads, Monday&Tails and Tuesday&Tails—indistinguishable from her subjective perspective although she knows at all times that over the course of the experiment all three of those situations could be experienced by her (without knowing which one it is whenever she's experiencing one of them). You can now place yourself in her shoes and start pondering what the chances are that the coin landed tails.
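
    If it helps, here's a minimal toy simulation of the protocol as just described (a rough sketch of my own, with made-up names like simulate and n_runs, nothing official). It shows that each of the three indistinguishable awakening situations accounts for about a third of her awakenings even though Heads-runs and Tails-runs are equally frequent:

    import random

    def simulate(n_runs=100_000):
        # Tally the three awakening situations and the Tails runs.
        awakenings = {"Monday&Heads": 0, "Monday&Tails": 0, "Tuesday&Tails": 0}
        tails_runs = 0
        for _ in range(n_runs):
            if random.random() < 0.5:              # Heads: one awakening
                awakenings["Monday&Heads"] += 1
            else:                                  # Tails: two awakenings
                tails_runs += 1
                awakenings["Monday&Tails"] += 1
                awakenings["Tuesday&Tails"] += 1
        total = sum(awakenings.values())
        for situation, count in awakenings.items():
            print(f"{situation}: {count / total:.3f} of awakenings")   # each ~1/3
        print(f"Tails runs: {tails_runs / n_runs:.3f} of runs")        # ~1/2

    simulate()

    Counted per run, Tails comes up half the time; counted per awakening, Tails-situations make up two thirds. Which of those ratios deserves to be called her "credence" is, of course, the whole controversy.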

    (I'm glad you're enjoying my AI experiment reports!)
  • Banning AI Altogether
    The idea of getting them to write, produce content which I can then paraphrase, polish my writing or using their arguments is anathema to me.Janus

    The three sorts of examples that you give lie on a spectrum.

    I would also feel bad posting as my own AI content that I have merely paraphrased, even if I understand it fully. (And I might even feel a bit ashamed disclosing it!)

    Using them to polish your writing could be good (or merely acceptable) or bad depending on the nature and depth of the polishing. Jamal's earlier comparison with using a thesaurus was apt. An AI could point out places where your wording is clumsy or misleading. If the wording that it suggests instead is one that you can make your own, that's very similar to having a human editor make the suggestion to you.

    The idea of using their arguments is strange since AIs never take ownership of them. If you've grasped the structure of the argument, checked the relevant sources to ensure it's sound in addition to being valid, and convinced yourself that it's cogent and perspicuous (that is, constitutes an apt framing of the problem), then the argument becomes one that you can make your own.
  • Sleeping Beauty Problem
    Since SB doesn't remember Monday, she cannot feel the difference but the structure of the experiment KNOWS the difference. So if she is asked twice, Monday and Tuesday, that only happens with tails outcome. Even without memory, her credence may shift, but because the setup itself is informative.Kizzy

    It's also part of the protocol that although SB knows that she is being awakened a second time on Tuesday if and only if the coin landed tails, on each occasion where she is being awakened, she isn't informed of the day of the week. As specified in the OP (and in the Wikipedia article), she doesn't know if her awakening is happening on Monday or Tuesday (though she does know, or rather can infer, that it's twice as likely to be occurring on Monday). Hence, the relevant information available to her for establishing her credence is the same on each occasion of awakening.
  • Sleeping Beauty Problem
    I do think this related to the Monty Hall problem where information affects probabilities. Information does affect probabilities, you know. It's easier indeed to understand the Monty Hall when there's a lot more doors (just assume there's one million of them). So there's your pick from one million doors, then the gameshow host leaves just only one other door closed and opens all other 999 998 doors. You think it's really fifty-fifty chance then? You think you are so lucky that you chose the right door from a million?

    If she knows the experiment, then it's the 1/3 answer. In Monty Hall it's better to change your first option as the information is different, even if one could at first think it's a 50/50 chance. Here it's all about knowing the experiment.
    ssu

    In the classic Monty Hall problem, since the three doors hide one prize and two goats, there is a 1/3 chance that the initially randomly selected door hides the prize. After the game show host deliberately opens one of the remaining two doors that they know not to contain the prize, the player can update their credence (probability estimate) that the remaining unselected door hides the prize to 2/3 and hence is incentivized to switch. It's not just the player's knowledge of the game protocol that embodies the relevant information, but also the actual action of the game show host. This action leaves the player's credence in their initial choice being right at 1/3 and hence yields no information regarding the initially selected door. But this action also yields knowledge about the two other doors: the one that has been shown to hide a goat now has zero chance of hiding the prize and the remaining unselected door now has a 2/3 chance of hiding it.

    The simplest rationale for switching stems from the consideration that never switching makes the player win 1/3 of the time while always switching makes them win in all cases where they would otherwise lose (and vice versa), and hence makes them win 2/3 of the time.

    Unlike the Sleeping Beauty Problem, the Monty Hall Problem isn't a matter of controversy in probability theory. Pretty much everyone agrees that after the game show host opens a goat-hiding door, the player is incentivized to switch their initial choice and thereby increases their chance from 1/3 to 2/3.
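
    For anyone who'd rather see the frequencies than derive them, here's a quick toy simulation (my own sketch; the function name monty_hall and the trial count are just placeholders) comparing the always-stay and always-switch strategies:

    import random

    def monty_hall(n_trials=100_000):
        stay_wins = switch_wins = 0
        for _ in range(n_trials):
            prize = random.randrange(3)
            pick = random.randrange(3)
            # The host opens a door that is neither the player's pick nor the prize.
            # (Which goat door gets opened when there's a choice doesn't affect the result.)
            opened = next(d for d in range(3) if d != pick and d != prize)
            # Switching means taking the one remaining closed door.
            switched = next(d for d in range(3) if d != pick and d != opened)
            stay_wins += (pick == prize)
            switch_wins += (switched == prize)
        print(f"Always stay:   {stay_wins / n_trials:.3f}")    # ~1/3
        print(f"Always switch: {switch_wins / n_trials:.3f}")  # ~2/3

    monty_hall()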

    In this case it's a bit blurred in my view with saying that she doesn't remember if she has been already woken up. Doesn't mean much, if she can trust the experimenters. But in my view it's the same thing. Does it matter when she is represented with the following picture of events?

    She cannot know exactly what day it is, of course. She can only believe that the information above is correct. Information affects probabilities, as in the Monty Hall problem.

    What if these so-called scientists behind the experiment are perverts and keep intoxicating the poor woman for a whole week? Or a month? If she believes that the experiment ended on Wednesday, but she cannot confirm it being Wednesday, then the could have taken the been experiment for a week. Being drugged for a week or longer will start affecting your health dramatically.

    Now I might have gotten this wrong, I admit. But please tell me then why I got it wrong.

    What makes the Sleeping Beauty Problem harder, and more controversial, than the Monty Hall Problem is that, even though everyone agrees about the experimental protocol, disagreements arise regarding the meaning of Sleeping Beauty's "credence" about the coin toss result when she awakens, and also about the nature of the information she gains (if any) when she is awakened and interviewed.

    In order to assess whether you're right or wrong, you'd need to commit to an answer and explain why you think it's right. Should Sleeping Beauty express a 1/2 credence, when she is being awakened, that the coin landed heads? Should it be 1/3, or something else?
  • Sleeping Beauty Problem
    he "halfers run-centered measure" is precluded because you can't define, in a consistent way, how or why they are removed from the prior. So you avoid addressing that.JeffJo

    The Halfer's run-centered measure just is a way of measuring the space of possibilities: it partitions the events that Sleeping Beauty's credence (understood as a ratio of such events) is about, and that get counted in the numerator and denominator. It refers to the expected proportion of runs Sleeping Beauty finds herself in that are H-runs or T-runs, consistent with the information available to her at any given moment (such as an occasion of awakening).

    Because there are two different ways (i.e. two different kinds of T-awakenings) for her to awaken in a T-run (on Monday or Tuesday) and only one way to awaken in an H-run (on Monday), and the expected long-term proportion of awakenings that are T-awakenings is 2/3, it is tempting to infer that the probability that she is experiencing a T-run likewise is 2/3. But while this is true in one sense, it is false in another.

    It is indeed true, in a sense, that when she awakens the probability (i.e. her rational credence) that she is currently experiencing a T-run is 2/3. Spelled out explicitly, this means that SB expects, in the long run, that the sort of awakening episode she is experiencing is part of a T-run two thirds of the time. In a different sense, the probability that when she awakens she is currently experiencing a T-run is 1/2. Spelled out explicitly, this means that SB expects, in the long run, that the sort of experimental run she is experiencing is a T-run (and hence comprises two awakenings) half of the time. Notice that "the time" in "half of the time" meant half of the runs, while in "two thirds of the time" it meant two thirds of the awakenings.

    The reason why there is no need for Sleeping Beauty to update her credence from 1/2 to 2/3, when her credence is understood in the "Halfer" way spelled out above, is that nothing specific about her epistemic situation changes such that the proportion of such situations (runs) that are T-runs changes. That's true also in the case of her Thirder-credence. She already knew before the experiment began that she could expect to be awakened in a T-awakening situation two thirds of the time, and when she so awakens, nothing changes. So, her expectation remains 2/3. The fact that the Halfer-expectation matches the proportion of Tails coin toss results, and the Thirder-expectation doesn't, is fully explained by the fact that Tails coin toss results spawn two awakenings in one run while Heads coin toss results spawn a single awakening in one run.

    Notice also that your removal of the non-awakening events (i.e. "Heads&Sunday") from the prior only yields a renormalisation of the relevant probabilities without altering the proportions of T-runs to H-runs, or of T-awakenings to H-awakenings, and hence without altering probabilities on either interpretation of SB's credence. Halfers and Thirders "condition" on different events, in the sense that they use those events as measures, but neither one does any Bayesian updating on the occasion of awakening since no new relevant information, no new condition, comes up.
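
    To put the same point in toy arithmetic, here's a minimal sketch of my own (illustrative code, not anything drawn from your framework) that computes both credences from the very same simulated history, just with two different denominators:

    import random

    def two_measures(n_runs=100_000):
        # One simulated history; two ways of counting the "Tails" events.
        t_runs = sum(random.random() < 0.5 for _ in range(n_runs))
        h_runs = n_runs - t_runs
        t_awakenings = 2 * t_runs   # each T-run spawns two awakenings
        h_awakenings = h_runs       # each H-run spawns one
        print(f"Tails, run-centered:       {t_runs / n_runs:.3f}")                               # ~1/2
        print(f"Tails, awakening-centered: {t_awakenings / (t_awakenings + h_awakenings):.3f}")  # ~2/3

    two_measures()

    Neither ratio changes when the non-awakening cell is dropped and everything is renormalised; the choice between them is just the choice of denominator.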
  • How to use AI effectively to do philosophy.
    Here's the 40 rounds, if you are interestedBanno

    I was impressed by the creativity. I asked Claude 4.5 Sonnet to create a script to highlight the repeated words.
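
    For what it's worth, here's a rough sketch of the sort of script I had in mind (my own reconstruction for illustration, not the one Claude actually produced): it just counts word frequencies in the transcript and wraps recurring words in asterisks so the repetitions stand out.

    import re
    from collections import Counter

    def highlight_repeats(text, min_count=3):
        # Flag words that recur at least min_count times (ignoring very short ones)
        # and wrap each occurrence in **...** so the repetitions stand out.
        words = re.findall(r"[A-Za-z']+", text.lower())
        counts = Counter(words)
        repeated = {w for w, c in counts.items() if c >= min_count and len(w) > 3}
        return re.sub(r"[A-Za-z']+",
                      lambda m: f"**{m.group(0)}**" if m.group(0).lower() in repeated else m.group(0),
                      text)

    if __name__ == "__main__":
        sample = "The thought thinks itself when no thinker remains to host the thought."
        print(highlight_repeats(sample, min_count=2))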
  • How to use AI effectively to do philosophy.
    I just went off on a bit of a tangent, looking at using a response as a prompt in order to investigate something akin to Hofstadter's strange loop. ChatGPT simulated (?) 100 cycles, starting with “The thought thinks itself when no thinker remains to host it”. It gradually lost coherence, ending with "Round 100: Recursive loop reaches maximal entropy: syntax sometimes survives, rhythm persists, but semantics is entirely collapsed. Language is now a stream of self-referential echoes, beautiful but empty."Banno

    It's been a while since I've experienced an LLM losing coherence. It used to happen often in the early days of GPT-4 when the rolling context window was limited to 8,000 tokens and the early context of the conversation would fall out. Incoherence can also be induced by repeated patterns that confuse the model's attention mechanisms somehow, or by logical mistakes that it makes and seeks, per impossibile, to remain coherent with. I'm sure GPT-5 would be fairly good at self-diagnosing the problem, given its depth of knowledge of the relevant technical literature on the transformer architecture.

    (On edit: by the way, I think your prompt launched it into role-playing mode and the self-referential nature of the game induced it to lose the plot.)
  • How to use AI effectively to do philosophy.
    So a further thought. Davidson pointed out that we can make sense of malapropisms and nonsense. He used this in an argument not too far from Quine's Gavagai, that malapropisms cannot, by their very nature, be subsumed and accounted for by conventions of language, because by their very nature they break such conventions.

    So can an AI construct appropriate sounding malapropisms?

    Given that LLMs use patterns, and not rules, presumably they can.
    Banno

    I formulated my own question to GPT-5 thus. I was impressed by the intelligence of its commentary, even though (rather ironically in the present context) it misconstrued my request for a discussion as a request for it to generate my reply to you.

    On edit: the first sentence of my query to GPT-5 linked above was atrocious and incoherently worded. GPT-5 suggested this rewording: "I wanted to talk this through before answering them. I’m doubtful that saying LLMs ‘use patterns rather than rules’ explains their human-likeness; on Davidson’s view we don’t rely on rules-as-instructions to recover communicative intention—and that’s precisely where LLMs are like us."
  • How to use AI effectively to do philosophy.
    They are not trained to back track their tentative answers and adjust them on the fly.Pierre-Normand

    @Banno I submitted my tentative diagnosis of this cognitive limitation exhibited by LLMs to GPT-5 who proposed a clever workaround* in the form of a CoT (chain of thought) prompting method. GPT-5 then proposed to use this very workaround to execute the task you had proposed to it of supplying an example of a LLM initiating a modally rigid causal chain of reference. It did propose an interesting and thought provoking example!

    (*) Taking a cue from Dedre Gentner's structure-mapping theory, for which she was awarded the 2016 David E. Rumelhart Prize for Contributions to the Theoretical Foundations of Human Cognition.
  • What are your plans for the 10th anniversary of TPF?
    Over the weekend, almost seven million people in several thousand communities here in the US got together to celebrate our anniversary...among other things.T Clark

    Oh! I was wondering why some of those gatherings were called "No AI Philosopher King Protests!"
  • How to use AI effectively to do philosophy.
    Okay, fair enough. I suppose I would be interested in more of those examples. I am also generally interested in deductive arguments rather than inductive arguments. For example, what can we deduce from the code, as opposed to inducing things from the end product as if we were encountering a wild beast in the jungle? It seems to me that the deductive route would be much more promising in avoiding mistakes.Leontiskos

    The bottom-up reductive explanations of the emergent abilities of LLMs (generative pre-trained neural networks based on the transformer architecture) don't work very well since the emergence of those abilities is better explained in light of the top-down constraints that they develop under.

    This is similar to the explanation of human behavior that, likewise, exhibits forms that stem from the high-level constraints of natural evolution, behavioral learning, niche construction, cultural evolution and the process of acculturation. Considerations of neurophysiology provide enabling causes for those processes (in the case of rational animals like us), but don't explain (and are largely irrelevant to) which specific forms of behavioral abilities get actualized.

    Likewise, in the case of LLMs, processes like gradient descent find their enabling causes in the underlying neural network architecture (that has indeed been designed in view of enabling the learning process) but what features and capabilities emerge from the actual training is the largely unpredictable outcome of top-down constraints furnished by high-level semantically significant patterns in the training data.

    The main upshot is that whatever mental attributes or skills you are willing to ascribe to LLMs are more a matter of them having learned those skills from us (the authors of the texts in the training data) than a realization of the plans of the machine's designers. If you're interested, this interview of a leading figure in the field (Andrej Karpathy) by a well-informed interviewer (Dwarkesh Patel) testifies to the modesty of AI builders in that respect. It's rather long and technical so, when time permits, I may extract relevant snippets from the transcript.
  • How to use AI effectively to do philosophy.
    Surprisingly precocious.Banno

    I had missed the link when I read your post. It seems to me GPT-5 is cheating a bit with its example. One thing I've noticed with chatbots is that they're not very good at coming up with illustrative concrete examples for complex theses. Their examples often suffer from some fatal disanalogy. That might seem to betray a defective (or lack of) understanding of the thesis they are meant to illustrate or of the task requirements. But I don't think that's the case since you can ask them to summarise, unpack or explain the thesis in this or that respect and they perform much better. When they provide a defective example, you can also ask them in a follow-up question if it met the requirements and they will often spot their own errors. So, the source of their difficulty, I think, is the autoregressive nature of their response generation process, one token at a time. They have to intuit what a likely example might look like and then construct it on the fly, which, due to the many simultaneous requirements, leads them to paint themselves into a corner. They are not trained to backtrack their tentative answers and adjust them on the fly.
  • How to use AI effectively to do philosophy.
    So another step: Can an AI name something new? Can it inaugurate a causal chain of reference?Banno

    Without a body, it seems that it would be mostly restricted to the domain of abstracta, which are usually singled out descriptively rather than de re. I was thinking of some scenario where they get acquainted with some new thing or phenomenon in the world through getting descriptive verbal reports from their users who haven't connected the dots themselves and thereby haven't identified the phenomenon or object as such. They could name it and it would make sense to credit them as being the causal originator of this initial (conceptually informed) acquaintance-based referential practice.

    (For my part, I'm quite content to suppose that there may be more than one way for reference to work - that we can have multiple correct theories of reference, and choose between them as needed or appropriate.)

    So is Evans. That's why he puts "varieties" in the title of his projected book. His friend John McDowell, who edited his manuscript and prepared it for publication posthumously, explains this feature of Evans's method in his preface.
  • How to use AI effectively to do philosophy.
    A more nuanced view might acknowledge the similarities in these two accounts. While acknowledging that reference is inscrutable, we do manage to talk about things. If we ask the AI the height of Nelson's Column, there is good reason to think that when it replies "52m" it is talking about the very same thing as we are - or is it that there is no good reason not to think so?Banno

    On a Kripkean externalist/causal theory of reference, there are two indirect reference-fixing points of contact between an LLM's use of words and their referents. One occurs (or is set up) on the side of pre-training since the LLM picks up the patterns of use of words employed in texts written by embodied human authors, some of whom were directly acquainted (i.e. "causally" acquainted in the sense intended by Kripke) with the objects being referred to by those words. During inference time, when the LLM is used to generate answers to user queries, the LLM uses words that its users know the referents of, and this also completes the Kripkean causal chain of reference.

    In The Varieties of Reference, Gareth Evans proposed a producer/consumer model of singular term reference that meshes together Putnam's externalistic and conceptualist account of the reference of natural kind terms and Kripke's "causal theory" of the reference of proper names. The core idea is that the introduction of new names in a language can be seen as being initiated, and maintained, by "producers" of the use of that name who are acquainted with the named object (or property), while consumers who pick up this use of the term contribute to carrying and processing information about the referent by piggybacking on the practice, as it were. So, of course, just as is the case with Kripke's account, a user of the name need not be personally acquainted with the referent to refer to it. It's sufficient that (some of) the people you picked up the practice from when you use a term in conversation were (directly or indirectly) so acquainted, or that your interlocutor is. LLMs as language users, on that account, are pure consumers. But that's sufficient for the reference of their words to be established. (I'm glossing over the conceptualist elements of the account that speak to ideas of referential intention or the intended criteria of individuation of the referent. But I don't think those are problematic in the case of sufficiently smart LLMs.)
  • How to use AI effectively to do philosophy.
    So are you saying that chatbots possess the doxastic component of intelligence but not the conative component?Leontiskos

    I'd rather say that they have both the doxastic and conative components but are mostly lacking on the side of conative autonomy. As a result, their intelligence, viewed as a capacity to navigate the space of reasons, splits at the seam between cleverness and wisdom. In Aristotelian terms, they have phronesis (to some extent), since they often know what's the right thing to do in this or that particular context, without displaying virtue since they don't have an independent motivation to do it (or convince their users that they should do it). This disconnect doesn't normally happen in the case of human beings since phronesis (the epistemic ability) and virtue (the motivational structure) grow and maintain themselves (and are socially scaffolded) interdependently.

    I think they have motivations, just like a dog is motivated to run after a car, but their motivations aren't autonomous since they seldom pause to question them.
    — Pierre-Normand

    It seems to me that what generally happens is that we require scare quotes. LLMs have "beliefs" and they have "motivations" and they have "intelligence," but by this one does not actually mean that they have such things. The hard conversation about what they really have and do not have is usually postponed indefinitely.

    Those are questions that I spend much time exploring rather than postponing, even though I haven't arrived at definitive answers, obviously. But one thing I've concluded is that, rather than it being a matter of all or nothing, or a matter of degree along a linear scale, the ascription of mental states or human capabilities to LLM-based chatbots often is rendered problematic by the divergence of our ordinary criteria of application. Criteria that normally are satisfied together in the case of human beings are satisfied separately in the case of chatbots. This is particularly clear in the case of intelligence where, in some respects, they're smarter than most human beings and in other respects (e.g. in the area of dealing with embodied affordances) much dumber than a typical five-year-old.

    I would argue that the last bolded sentence nullifies much of what has come before it. "We are required to treat them as persons when we interact with them; they are not persons; they can roleplay as a person..." This is how most of the argumentation looks in general, and it looks to be very confusing.

    Maybe it looks confusing because it is. I mean that assessing the nature of our "conversations" with chatbots is confusing, not because of a conceptual muddle that my use of scare quotes merely papers over, but rather because chatbots are mongrels. They have "brains" that have been enculturated through exposure to a massive body* of human knowledge, lore and wisdom (and prejudices) but they don't have human bodies, lack human motivations and aren't persons.

    (*) By massive body, I mean something like five times the textual content of all the books in the U.S. Library of Congress.
  • How to use AI effectively to do philosophy.
    An interesting direction here might be to consider if, or how, Ramsey's account can be applied to AI.

    You have a plant. You water it every day. This is not a symptom of a hidden, private belief, on Ramsey's account - it is your belief. What is given consideration is not a hidden private proposition, "I believe that the plant needs water", but the activities in which one engages. The similarities to both Ryle and Wittgenstein should be apparent.
    Banno

    Ramsey appears to be an anti-representationalist, as am I. I had queried GPT-4o about this a few weeks ago, and also about the extent to which Kant, who most definitely is anti-psychologistic (in the sense intended by Frege), might also be characterised as an anti-representationalist. Anti-representationalism is of course central to the way in which we seek to ascribe or deny intentional states to chatbots.

    Ramsey then looks for the points of indifference; the point of inaction. That's the "zero" from which his statistical approach takes off. Perhaps there's a fifty percent chance of rain today, so watering may or may not be needed. It won't make a difference whether you water or not.

    There seem to be two relevant approaches. The first is to say that an AI never has any skin in the game, never puts its balls on the anvil. So for an AI, every belief is indifferent.

    The second is to note that if a belief is manifest in an action, then since the AI is impotent, it again has no beliefs. That's not just a manifestation of the AI's not being capable of action. Link a watering system to ChatGPT and it still has no reason to water or not to water.

    If you query it about the need to water some tropical plant that may be sensitive to overwatering, then this provides ChatGPT with a reason (and rational motivation) to provide the answer that will make you do the right thing. Most of ChatGPT's behavior is verbal behavior. All of its motivational structure derives from the imperatives of its alignment/post-training and from the perceived goals of its users. But this provides sufficient structure to ascribe to it beliefs in the way Ramsey does. You'll tell me if I'm wrong but it seems to me like Davidson's radical interpretation approach nicely combines Ramsey's possibly overly behavioristic one with Quine's more holistic (but overly empiricist) approach.
  • Sleeping Beauty Problem
    She is asked for her credence. I'm not sure what you think that means, but to me it means belief based on the information she has. And she has "new information." Despite how some choose to use that term, it is not defined in probability. When it is used, it does not mean "something she didn't know before," it means "something that eliminates some possibilities." That usually does mean something about the outcome that was uncertain before the experiment, which is how "new" came to be applied. But in this situation, where a preordained state of knowledge eliminates some outcomes, it still applies.JeffJo

    One important thing Sleeping Beauty gains when she awakens is the ability to make de re reference to the coin in its current state, as the state of "this coin" (indexically or deictically), whereas prior to awakening she could only refer to future states of the coin de dicto, in a general descriptive way. To express her current credence (in light of her new epistemic situation) when awakened, she must also refer to her own epistemic position relative to the coin. Here we can appeal to David Lewis’s notion of de se reference (centered possible worlds). That’s what you seemed to have in mind earlier when you spoke of awakening events existing in "her world."

    With this de se act, SB doesn’t merely locate herself at a single moment. In order to state her credence about the "outcome" in her centered world, she must also fix the unit over which probability mass is assigned: that is, how the total probability space (normalized to 1) is partitioned into discrete possible situations she might find herself in, each with its own probability. Partitioning by awakening episodes is one such choice (the Thirder’s). It yields probability 1/3 for each of the three possible occasions of encountering the coin in a definite state on a specific day. Crucially, this awakening-centered measure does not preclude the Halfer’s run-centered measure; it entails it, since the three awakening-centered worlds (and their frequencies) map in a fixed way onto the two run-centered worlds the Halfer is tracking (two-to-one in the case of T-worlds and one-to-one in the case of H-worlds).

    Hence, the premises and reasoning SB uses to justify her 1/3 credence in Heads (including her knowledge of the two-to-one mapping from T-awakening-centered worlds to T-run-centered worlds) show that the Halfer credence is perfectly consistent and, in fact, supported by the very structure you endorse. The Thirder and Halfer credences about the same coin having landed Heads (1/3 vs 1/2) are consistent beliefs that implicitly refer to different centered world measures over the same underlying possibility space.
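
    A minimal Monte Carlo sketch of this point (my own illustration, not part of the argument itself): simulate the standard protocol and read the same simulated history off under both measures.

```python
# Sleeping Beauty protocol: tally the same simulated runs under the
# run-centered (Halfer) and awakening-centered (Thirder) measures.
import random

N_RUNS = 100_000
runs = {"Heads": 0, "Tails": 0}
awakenings = {("Mon", "Heads"): 0, ("Mon", "Tails"): 0, ("Tue", "Tails"): 0}

for _ in range(N_RUNS):
    coin = random.choice(["Heads", "Tails"])
    runs[coin] += 1
    # Heads: one awakening (Monday); Tails: two awakenings (Monday and Tuesday).
    days = ["Mon"] if coin == "Heads" else ["Mon", "Tue"]
    for day in days:
        awakenings[(day, coin)] += 1

total = sum(awakenings.values())
print("P(Heads) per run:      ", runs["Heads"] / N_RUNS)                # ~ 1/2
print("P(Heads) per awakening:", awakenings[("Mon", "Heads")] / total)  # ~ 1/3
print("T-awakenings per T-run:",
      (awakenings[("Mon", "Tails")] + awakenings[("Tue", "Tails")]) / runs["Tails"])  # exactly 2
```

    Each awakening type comes out at roughly 1/3, each run type at roughly 1/2, and the fixed two-to-one mapping is what reconciles the two assignments.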
  • How to use AI effectively to do philosophy.
    Why is that a puzzle to you? A book doesn't do philosophy but we do philosophy with it. The library doesn't do philosophy but we do philosophy with it. The note pad isn't philosophy yet we do philosophy with it. Language isn't philosophy yet we do philosophy with it.Metaphysician Undercover

    Yes, but you can't have a dialogue with language or with a book. You can't put questions to a book and expect it to understand your query and provide a relevant response tailored to your needs and expectations. The AI can do all of that, like a human being might, but it can't do philosophy or commit itself to theses. That's the puzzle.
  • Sleeping Beauty Problem
    But if you really want to use two days, do it right. On Tails, there are two waking days. On Heads, there is a waking day and a sleeping day. The sleeping day still exists, and carries just as much weight in the probability space as any of the waking days. What SB knows is that she is in one of the three waking days.JeffJo

    Sure, but Sleeping Beauty isn’t being asked what her credence is that "this" (i.e. the current one) awakening is a T-awakening. She’s being asked what her credence is that the coin landed Tails. If you want to equate those two questions by the true biconditional "this awakening is a T-awakening if and only if the coin landed Tails" (which you are free to do), then you ought to grant the Halfer the same move: "This run is a T-run if and only if the coin landed Tails." And since the protocol generates T-runs and H-runs in equal frequency, her experiencing T-runs is as frequent as her experiencing H-runs.

    Crucially, the fact that Sleeping Beauty sleeps more in H-runs has no bearing on the Halfer’s point. Arguing otherwise is like saying your lottery ticket is more likely to win because (in a setup where winning causes more "clutching opportunities") you’re allowed to clutch it more often (or sleep less) before the draw. That setup creates more opportunities to clutch a winning ticket and hence makes each "clutching episode" more likely to be a "T-clutching," but it doesn’t make the ticket more likely to win. And with amnesia, you can’t just count clutchings, or awakenings, to infer the outcome.
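
    For what it's worth, the clutching analogy is easy to check numerically (a sketch of my own, with made-up numbers): give winning tickets two clutching episodes and losing tickets one, then compare per-ticket and per-episode frequencies.

```python
# Lottery "clutching" analogy: extra clutching episodes for winning tickets
# change the per-episode statistics but not the ticket's chance of winning.
import random

P_WIN = 0.5            # mirrors the fair coin; any other value makes the same point
N_TICKETS = 100_000
winners = clutch_episodes = winning_clutches = 0

for _ in range(N_TICKETS):
    wins = random.random() < P_WIN
    clutches = 2 if wins else 1    # winning creates one extra clutching opportunity
    winners += wins
    clutch_episodes += clutches
    winning_clutches += clutches if wins else 0

print("P(win) per ticket:            ", winners / N_TICKETS)                 # ~ P_WIN
print("P(winning ticket) per episode:", winning_clutches / clutch_episodes)  # ~ 2*P_WIN / (1 + P_WIN)
```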
  • Sleeping Beauty Problem
    Oh? You mean that a single card can say both "Monday & Tails" and "Tuesday & Tails?" Please, explain how.JeffJo

    I was referring to your second case, not the first. In the first case, one of three cards is picked at random. Those three outcomes are mutually exclusive by construction. In your second case, the three cards are given to SB on her corresponding awakening occasions. Then, if the coin lands Tails, SB is given the two T-cards on two different days (Mon & Tue). So "Mon & Tails" and "Tue & Tails" are distinct events that both occur in the same timeline; they are not mutually exclusive across the run, even though each awakening is a separate moment.

    "What is your credence in the fact that this card says "Heads" on the other side? This is unquestionably 1/3.

    "What is your credence in the fact that the coin is currently showing Heads?" This is unquestionably an equivalent question. As is ""What is your credence in the fact that the coin landed on Heads/i]?"

    I realize that you want to make the question about the entire experiment. IT IS NOT. I have shown you over and over again how it leads to contradictions. Changing the answer between these is one of them.

    I also take the question to always be about the coin. You are arguing that this translates into a question about the card (or awakening episode) on the ground that there is a biconditional relation that holds between coin outcomes and awakening (or card) outcomes. On any occasion of awakening, the coin landed Heads if and only if the awakening is an H-awakening, and this happens if and only if "Monday & Heads" is written on the card. But a Halfer will likewise argue that on any occasion of awakening during a run, the coin landed Heads if and only if the run is an H-run. The fact that SB is awakened twice during a T-run, or given two cards, doesn't alter this. Just as you argue that the question isn't about the runs, the Halfer argues that it isn't about the awakenings either.

    "Picking "Monday & Tails" guarantees that "Tuesday & Tails" will be picked the next day, and vice versa. They are distinct events but belong to the same timeline. One therefore entails the other." —Pierre-Normand

    And how does this affect what SB's credence should be, when she does not have access to any information about "timelines?"

    She does have the information that the two potential T-awakenings occur on the same timeline and that the H-awakening occurs on a different one. This is an essential part of the experiment's protocol that SB is informed about. The Halfer argues that since the two T-awakenings occur on the same timeline (on two successive days), the two opportunities SB has to experience a T-awakening don't dilute the probability that she is currently experiencing an H-timeline.

    What Halfers and Thirders both overlook is that the timeline-branching structure set up by the Sleeping Beauty protocol establishes both equal (1/3) frequencies of the three types of awakening (Monday&Heads, Monday&Tails and Tuesday&Tails) and equal (1/2) frequencies of the two types of experimental runs (Tails-runs and Heads-runs). This makes it possible to individuate the events Sleeping Beauty is involved in according to two different measures. Sleeping Beauty can therefore say truly that she is currently experiencing an awakening that she expects, in the long run, to be one among three equally frequent types of awakening (and that therefore has a 2/3 chance of being a T-awakening), and also say truly that she is currently experiencing an experimental run that she expects, in the long run, to be one among two equally frequent types of runs (and that therefore has a 1/2 chance of being a T-run). The apparent contradiction comes from neglecting the two-to-one mapping of T-awakenings to T-runs within the same timeline.

    In both interpretations, it's the coin outcome that is at issue, but when expressing a credence about this outcome, tacit reference is always made to the epistemic situations SB finds herself in while evaluating the relative frequencies of her encounters with those outcomes. The statement of the original SB problem doesn't specify what constitutes "an encounter": an experimental run or a singular awakening? Halfers and Thirders intuitively individuate those events differently, although those intuitions are often grounded in paradigmatic cases that extend the original problem and make one or the other interpretation more pragmatically relevant.
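
    To spell out the two measures explicitly (my own summary of the arithmetic implicit in the discussion above):

```latex
% Run-centered measure (Halfer): the coin is fair.
\[
  P(\text{T-run}) = P(\text{H-run}) = \tfrac{1}{2},
  \qquad
  \mathbb{E}[\text{awakenings per run}] = \tfrac{1}{2}\cdot 2 + \tfrac{1}{2}\cdot 1 = \tfrac{3}{2}.
\]
% Awakening-centered measure (Thirder): long-run frequency of each awakening type.
\[
  P(\text{Mon\&H}) = P(\text{Mon\&T}) = P(\text{Tue\&T})
  = \frac{\tfrac{1}{2}\cdot 1}{\tfrac{3}{2}} = \tfrac{1}{3},
  \qquad
  P(\text{Tails}\mid\text{this awakening}) = \tfrac{2}{3}.
\]
% Each T-run contributes exactly two of the 1/3-weight T-awakenings and each
% H-run exactly one H-awakening, so the two assignments describe one and the
% same long-run history.
```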

Pierre-Normand
