• Pierre-Normand
    2.9k
    But as for socially organised humans, eventually the advantage of imposing a self-policing rational style of thought - a habit of action-justifying narration - on the animal brain will prove its worth.apokrisis

    Indeed. It favors not just social cohesion but also scaffolds practical deliberation in the context of planning and executing projects protractedly, on time scales of hours, days, months and decades. It makes hunting with tools possible, as well as setting up farms and social institutions. LLMs, qua practical-reasoning extensions, fit into that.
  • Metaphysician Undercover
    14.4k
    The problem is, beyond the design of the llm "machinery" itself, they don't really know how it works either.hypericin

    Due to the nature of trade secrets, and the matter of keeping them secret, I'd say that's probably a pretense.

    Then as for introspection, why would an animal need it? But as for socially organised humans, eventually the advantage of imposing a self-policing rational style of thought - a habit of action-justifying narration - on the animal brain will prove its worth.apokrisis

    I agree, I think that's where the need for introspection arises from.
  • apokrisis
    7.8k
    It's a bit more like a future AI player piano (five years from now, say) that can take as an input a theme and, when prompted to do so, extract its melodic, harmonic and rhythmic elements to compose a fugue in the style of Bach, or a sonata allegro in the style of Beethoven, and combine and develop the thematic elements in the way that it does, and play them appropriately, because it has distilled contextually sensitive rules of combination from exposure to the musical canons and interpretive traditions (and not because it hears or enjoys any of it "personally").Pierre-Normand

    Of course. Clarke’s point. Any sufficiently advanced tech would seem like magic.

    It is astonishing what a little biological realism in terms of computational architecture can achieve. The perceptron architecture already seemed to give back more than was put in. Heck, even analog computers in their era were doing that.

    Turing machines were always inherently clunky. But then, as hardware, they could scale exponentially. And the software only needed writing once.

    LLMs run on gamer graphics cards and can simulate the rather physical notion of gradient descent, a cheaper stand-in for what we used to call simulated annealing: the far more laborious and somewhat more neurobiologically realistic approach.
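    For what it's worth, the contrast can be made concrete in a few lines. This is a toy sketch under invented assumptions (the quadratic objective, learning rate, proposal width and cooling schedule are all made up for illustration): gradient descent follows the local slope deterministically, while simulated annealing proposes random moves and sometimes accepts uphill ones, with a probability that shrinks as the "temperature" cools.

```python
import math
import random

def f(x):
    # Toy objective: a simple bowl with its minimum at x = 3.
    return (x - 3.0) ** 2

def grad_descent(x=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        grad = 2.0 * (x - 3.0)  # analytic derivative of f
        x -= lr * grad          # deterministic downhill step
    return x

def simulated_annealing(x=0.0, temp=1.0, cooling=0.95, steps=500, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)  # random proposal
        delta = f(cand) - f(x)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
            x = cand
        temp *= cooling  # cool the system, making uphill moves rarer
    return x

print(grad_descent())          # converges to ~3.0
print(simulated_annealing())   # also lands near 3.0, by a noisier route
```

    The stochastic version is more laborious per step, as the post says, but it can escape local minima that a pure gradient follower would get stuck in.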

    A powerful blend which sort of shouldn’t surprise, and yet still feels like a weird magic.

    They do provide reasonable sounding confabulations but also authoritative reconstructions.Pierre-Normand

    But is this a difference in kind or just degree?

    And given my “flattening” story, isn’t this just reflecting the fact that its training data includes all the math and logic we incorporate into our human speech acts? What would an LLM trained on a medieval-era, or a hunter-gatherer-era, corpus be like?

    There’s a research idea. Train an LLM on all available medieval texts and recreate the clever person of the 1400s. Have a conversation with your distant ancestor.

    There are no limits to what is doable, given there is the training data to do it.

    The research results that you allude to have often been misinterpreted or misreported.Pierre-Normand

    So you are saying that reports of the research lean towards confabulation. And we know that research itself - especially computer science research of this kind - is already tilted towards confabulation. Research is paradigm-based. It is always a socially plausible narrative even when it claims to be academically rigorous.

    Confabulation is the rule here. LLMs are only amplifying this fact. We are at least in some sense being fact-checked by the necessary business of living in a reality.
  • Pierre-Normand
    2.9k
    Due to the nature of trade secrets, and the matter of keeping them secret, I'd say that's probably a pretense.Metaphysician Undercover

    There is no big secret. Proprietary LLMs like GPT-5, Gemini 2.5 Pro or Claude 4.5 Sonnet don't function differently (modulo some implementation details) from lesser but comparably performant open-source models like Mixtral-8x22B or Qwen 2.5.

    The biggest secrets aren't trade secrets but rather stem from the black-box nature of their functioning. Their abilities are more emergent than designed, though not on that account inscrutable. The cognitive science of LLMs can employ methods similar to those of the cognitive science of humans.
  • apokrisis
    7.8k
    I'm rather responding to people who claim that LLMs don't understand users' queries or their own responses to them at all, and therefore aren't intelligent—or that they're just imitators or stochastic parrots.Pierre-Normand

    Fair enough. But understanding, sapience, intelligence, etc., are loaded words. They imply an individual with agency and free will and other good stuff that itself doesn’t fare so well under the scrutiny of social psychology and neurocognitive science.

    But those interpretive and constructive acts, whether you call them creative or not (and I certainly agree that they are not authentic) are intelligent (within their scope) and often ampliative.Pierre-Normand

    Again the problem is arguing for any position which relies on loaded language. It builds in the inconsistencies that it claims to want to resolve.

    So an LLM can fail at authenticity but pass as intelligent. Is this telling us anything useful if what matters at the end of the day is how we humans are going to weave these new tools into some future symbiosis that does a better job of existing in the world?

    Can an LLM pass as enactive? Is an LLM-assisted human society going to be amplified in a way that makes us even more collectively spectacular?

    I don’t care if an LLM is creative or sentient by some particular standard. I care about how the technology will work out in practice. Does it exponentialise good ecological outcomes, good social outcomes? What is the point of knowing everything yet being impotent to change anything?

    But also, of course, thinking about whether LLMs are doing any kind of thinking is its own fascinating question, and a pointer to their future impact. So I am not dismissing it but pointing to the larger context in which AI should be understood and judged.
  • hypericin
    2k
    I could have read that paper carefully and made my own "chain of reasoning" response as is socially required – especially here on a "philosophy" forum trying to teach us to be more rational in a "present your full workings out" way.

    But it was so much easier to back up my own gut response to just the quick description of the paper – where I dismissed it as likely yet again the same category error
    apokrisis

    You could have read the paper in the time it took you to write all that! Though to be fair you do seem to write quickly.

    It really isn't the same category error. It describes some "LLM brain science" which does seem to demonstrate that LLMs are capable of introspection. Quite fascinating, imo.

    There’s a research idea. Train an LLM on all available medieval texts and recreate the clever person of the 1400s. Have a conversation with your distant ancestor.apokrisis

    I do like this idea. The training set of top models is ridiculously vast though, including texts from even minor languages. They might already incorporate them.
  • apokrisis
    7.8k
    It is on my to do list. :smile:

    It really isn't the same category error. It describes some "LLM brain science" which does seem to demonstrate that LLMs are capable of introspection.hypericin

    But what if introspection is a useful form of confabulation? Are you working with some science-verified definition of introspection such that you could claim to make a genuine comparison between humans and LLMs? Or is the plausibility of both what humans say about themselves and what LLMs say about themselves the stiffest test that either must pass?
  • Pierre-Normand
    2.9k
    But what if introspection is a useful form of confabulation? Are you working with some science-verified definition of introspection such that you could claim to make a genuine comparison between humans and LLMs? Or is the plausibility of both what humans say about themselves and what LLMs say about themselves the stiffest test that either must pass?apokrisis

    Don't you think a novelist who wrote their memoir would know much more about introspection than a cognitive scientist or a neuroscientist think they do? But if you don't personally know such a novelist (I do know one), a LLM that is acquainted with hundreds of such memoirs, and with excerpts, reviews and discussions of thousands more, might provide a surprising answer to this question. I haven't asked my friend, or any LLM, yet, but I will!
  • hypericin
    2k
    But what if introspection is a useful form of confabulation? Are you working with some science-verified definition of introspection such that you could claim to make a genuine comparison between humans and LLMs? Or is the plausibility of both what humans say about themselves and what LLMs say about themselves the stiffest test that either must pass?apokrisis

    I'm not sure how science could verifiably define such a thing. I take it to mean here, the ability to reliably report inner state. How can introspection be confabulation, which is by definition an unreliable report?

    In the paper, they test for introspection in a nifty way. "Concepts" are isolated by subtracting the internal activation vectors produced by two prompts that differ only in the feature of interest. For instance, the activations for "HOW ARE YOU" minus those for "how are you" yield a vector representing "loudness". They then "inject" this concept vector back into the LLM's activations, tell the LLM what they did, and ask it if it notices anything unusual. The best model, Opus, reports the correct concept about 20% of the time.
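    The extraction-and-injection step can be sketched in a few lines. This is a toy illustration, not the paper's actual code: the "model" here is just a deterministic random embedding standing in for a transformer layer, and the names (`char_vec`, `get_activations`) and the injection strength are invented for illustration.

```python
import numpy as np

D = 64  # toy hidden-state dimension

def char_vec(c: str) -> np.ndarray:
    # Deterministic pseudo-embedding per character (seeded by code point),
    # standing in for a real model layer's contribution.
    return np.random.default_rng(ord(c)).standard_normal(D)

def get_activations(prompt: str) -> np.ndarray:
    """Stand-in for reading a model layer's hidden state on a prompt."""
    return np.mean([char_vec(c) for c in prompt], axis=0)

# 1. Extract a "concept vector": the difference between the activations
#    for two prompts that vary only in the feature of interest.
loud = get_activations("HOW ARE YOU")
quiet = get_activations("how are you")
concept = loud - quiet  # a direction standing in for "loudness"

# 2. "Inject" the concept into an unrelated activation by adding a
#    scaled copy of the vector (activation steering).
baseline = get_activations("tell me a story")
steered = baseline + 4.0 * concept

# 3. The change to the steered state points exactly along the injected
#    direction; this is the signal the model is then asked to notice.
def alignment(v: np.ndarray, w: np.ndarray) -> float:
    return float(v @ w / (np.linalg.norm(v) * np.linalg.norm(w)))

print(alignment(steered - baseline, concept))  # cosine similarity of 1.0
```

    In the real experiment the injection happens inside the network's residual stream during generation, so the model's self-report has to be compared against a ground truth the experimenters control, which is what makes the 20% hit rate interpretable.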

    Don't you think a novelist who wrote their memoir would know much more about introspection than a cognitive scientist or a neuroscientist think they do?Pierre-Normand

    In the everyday/literary sense I understand it to mean something more like self-analysis, which is another thing entirely (contemplation/reasoning with the self as the object of scrutiny).
  • Pierre-Normand
    2.9k
    In the everyday/literary sense I understand it to mean something more like self-analysis, which is another thing entirely (reasoning with the self as the object of scrutiny).hypericin

    Yes, I indeed think of introspection, or the idea of reflecting on the content and nature of our own mental states, on the model of self-analysis rather more than on the model of perception, as if we had an extra sense that turns inwards, which I take to be a Cartesian confusion. I did ask GPT-5, Claude 4.5 Sonnet and Gemini 2.5 Pro my deliberately tendentious question, not as an appeal to their authority but out of curiosity. (I was actually surprised by some of the pushback from Claude and Gemini against its tendentiousness! I also noticed that Claude must have pulled some material from my previous conversations with it.)

    USER:

    Hi GPT-5/Claude 4.5 Sonnet/Gemini 2.5 Pro,

    Here is a question that doesn't require any Internet search. Don't you think a novelist who wrote their memoir would know much more about introspection than a cognitive scientist or a neuroscientist think they do?

    GPT-5:

    Reveal
    That’s a beautifully phrased question — and yes, I think there’s something profoundly right in that intuition.

    A novelist’s art is, in many respects, a laboratory of introspection. Through decades of shaping inner voices into coherent narratives, a novelist learns not just to observe their own mental life but to *stage* it — to render subtle shades of perception, memory, ambivalence, and desire communicable in language. They learn, in other words, what it *feels like* for thinking and feeling to take linguistic form, and how self-awareness itself modulates when it’s externalized.

    By contrast, cognitive scientists and neuroscientists, while they study introspection empirically, often treat it as a data source — something to be measured, modeled, or correlated with neural activity. Their methods necessarily abstract away from the lived phenomenology that novelists are immersed in. What gets lost is the texture of subjectivity: how inner experience *feels* as it unfolds, and how it resists neat categorization.

    So yes — one might say novelists have an *expertise in first-person data* that science cannot fully reproduce. The novelist’s introspection is not merely private but disciplined: a practiced art of attention and expression. Scientists might learn from it, but only if they suspend the reductionist impulse and attend to the irreducibly qualitative dimension that the novelist reveals.

    Would you like to explore examples — say, how Proust or Woolf’s self-understanding exceeds psychological models of memory and consciousness?


    Claude 4.5 Sonnet:

    Reveal
    I think the relationship is more nuanced than a simple "knows more/less" comparison. Let me sketch why:

    A novelist writing memoir certainly has a kind of expertise that cognitive scientists often underappreciate - a refined sensitivity to the phenomenological texture of experience, to the ways self-understanding unfolds narratively, to how motives and meanings are grasped from within practical contexts. They're trained observers of the shape and articulation of lived experience.

    But what cognitive scientists often point out (sometimes overgeneralizing) is that introspection has systematic blind spots and distortions: we confabulate reasons, misattribute causes, lack access to many cognitive processes, and our self-narratives are reconstructive rather than simply transparent.

    The real issue, I think, is that both sides can talk past each other:

    - Cognitive scientists sometimes conflate "this mental process is unconscious/inaccessible" with "the person doesn't really know what they're doing or why" - failing to distinguish between sub-personal mechanisms and personal-level self-knowledge.

    - Meanwhile, claims about introspective authority can sometimes presume an overly transparent model of self-knowledge, as if the memoirist simply reads off what's "in" consciousness.

    A more Rylean or Anscombean view might say: the memoirist knows their own life and motivations in a distinctive first-person way, but this knowledge is practical, embedded, and fallible - not a special inner perception. It's knowing-how to get around in one's own life, not a privileged inspection of mental mechanisms.

    What's your sense of where the real disagreement lies?


    Gemini 2.5 Pro (with one follow up question/suggestion):

    Reveal
    This is a profound question that touches on a classic philosophical tension: the difference between **experiential knowledge** (knowing what it feels like from the inside) and **propositional knowledge** (knowing how it works from the outside).

    I think you are largely right, but it depends entirely on what we mean by "know."

    Here is a breakdown of why a novelist likely *does* have a superior grasp on one aspect of introspection, while the scientist has a superior grasp on another.

    ---

    ### 1. The Novelist: Master of the Territory (Phenomenology)
    A novelist who writes a memoir is essentially an expert practitioner of introspection. They don't just study it; they *inhabit* it.

    * **Qualitative Richness:** The novelist knows the *texture* of memory—how a scent can trigger a cascade of associations, how we lie to ourselves to protect our egos, and the messy, layered nature of human motivation. They understand the "felt experience" (phenomenology) better than any fMRI scan could ever show.
    * **Narrative Self-Construction:** Memoirists understand implicitly that introspection is often an act of *creation*, not just retrieval. They know that to remember is to rewrite. A cognitive scientist might study *that* this happens, but the memoirist knows *how* it feels to do it, and the immense difficulty of trying to find "truth" in one's own past.
    * **Emotional Granularity:** Scientists often have to simplify emotions into categories for study (e.g., "positive valence" vs. "negative valence"). A novelist knows that we rarely feel one thing at a time; they understand the bittersweet, nostalgic, resentful, hopeful mess of actual human interiority.

    > **In short:** If introspection is a landscape, the novelist has hiked every trail, felt the rain, and gotten lost in the woods. They know the territory.

    ### 2. The Scientist: Master of the Map (Mechanisms)
    Where the scientist might "know more" is in understanding the *limits* and *machinery* of that landscape—things that are invisible to the novelist precisely because they are relying on the machinery to do the looking.

    * **The "User Illusion":** A neuroscientist knows that introspection is often highly unreliable. We think we know *why* we acted a certain way, but experiments (like split-brain studies or choice blindness specifically) show that we often just make up plausible reasons after the fact without realizing it. The novelist might genuinely believe their introspection is revealing deep truth, while the scientist knows it might be a comforting fabrication by the brain's "interpreter" module.
    * **Mechanistic Underpinnings:** The scientist knows *why* introspection fails under stress, or which specific neural networks (like the Default Mode Network) engage when we turn inward. They know the hardware limitations that constrain the novelist’s experience.

    > **In short:** The scientist has the satellite map of the landscape. They can see the geological structures that formed the mountains, even if they've never hiked them.

    ### Synthesis: The Difference in "Knowing"
    You asked if the novelist knows *more* than the scientist *thinks they do*.

    I believe the answer is often **yes**. Scientists can sometimes fall into the trap of "reductive materialism"—assuming that if you know the mechanism, you know the totality of the thing.

    A scientist might dismiss a memoirist’s insights as "merely anecdotal" or "subjective," missing the fact that subjective experience is the *entirety* of human reality. The novelist possesses a form of deep, practiced wisdom about the human condition that raw data cannot easily replicate.

    * The scientist knows **how** introspection works (and fails).
    * The memoirist knows **what it is like** to be human.

    Would you say that introspection is more valuable as a tool for finding objective truth, or as a tool for creating personal meaning?

    USER:

    Rather than saying scientists know how introspection works, I'd be more inclined to say that they know more regarding the manners in which the tasks usually labelled as introspective are enabled, what are some of their modes of fallibility (due to neurological conditions or cognitive deficits, say) but generally aren't especially cognisant about the 'know-how' or nature of introspective abilities as people exercise them in normal circumstances. Likewise, say, they may know what enables pianists to play or what may impede their performances but don't have any special insight regarding correct piano techniques or what it is that makes a particular performance good.

    Gemini 2.5 Pro

    This is a superbly precise refinement. You have hit on a crucial philosophical distinction that often gets lost in broad comparisons between art and science: the difference between understanding **enabling conditions** and understanding **skilled practice**.

    Your analogy of the pianist is perfect. Let's break down why that refinement is so important.

    ### 1. Enabling Conditions vs. Skilled Practice
    You are absolutely right that scientists primarily study the *substrate*—the hardware that allows introspection to happen at all.

    * **The Scientist (Physiologist):** Knows that playing the piano requires fast-twitch muscle fibers, perfectly timed firing from the motor cortex, and intact proprioception. They can tell you exactly why a stroke victim can no longer play Chopin.
    * **The Novelist (Pianist):** Knows *how* to play. They possess what philosophers like Gilbert Ryle call **"knowing-how"** (procedural, skilled knowledge), which is fundamentally different from the scientist's **"knowing-that"** (propositional, factual knowledge).

    Knowing that the hippocampus is necessary for episodic memory doesn't help you actually *write* a compelling memoir, just as knowing finger anatomy doesn't help you play a concerto.

    ### 2. The Gap in "Normal" Function
    This is your strongest point. Science often learns about standard function by studying dysfunction (lesions, disorders, cognitive traps).

    A cognitive scientist might design an experiment to trick you into misremembering a detail, proving human memory is fallible. But they rarely study the complex, messy, generally "good enough" introspection we use daily to navigate our lives.

    The novelist, however, is deeply familiar with that "normal" operating zone. They know the everyday struggle of trying to be honest with oneself, which is a nuance rarely captured in a lab setting.

    ### 3. Normativity (What is "Good" Introspection?)
    You mentioned scientists don't know what makes a performance "good." This is the realm of **normativity**—values, judgments, and quality.

    Science is descriptive; it describes what *is*. It cannot tell you what counts as a "profound" insight versus a shallow one. A novelist, however, deals entirely in this currency. They know the difference between a lazy, self-serving rationalization and a hard-won, painful truth about oneself. That is a form of expertise that strictly empirical observation cannot reach.

    ---
    If we accept introspection as a **skill** rather than just a biological function, do you think this skill can actually be improved more effectively by reading Proust or Dostoevsky than by reading psychology papers?
  • hypericin
    2k
    Yes, I indeed think of introspection, or the idea of reflecting on the content and nature of our own mental states, on the model of self-analysis rather more than on the model of perception, as if we had an extra sense that turns inwards, which I take to be a Cartesian confusion.Pierre-Normand

    I think we can indeed report our thoughts and feelings, as opposed to self-analyze. But of course we don't have a sense that turns inward.

    We can report what we were thinking (in the sense of subvocalized words and images) only if the thought was salient enough to lodge itself in short term memory. If it has not, the thought is now inaccessible, and all we can do is try to reconstruct it based on context.

    We can try to report what we are feeling, but it is tricky. There is the phenomenology of feeling, and there is its interpretive context. The same phenomenology might be interpreted differently depending on context (think excitement/anxiety). Then we have to choose the right conceptual bucket (aka word) to put this phenomenology/context into.
  • apokrisis
    7.8k
    I take it to mean here, the ability to reliably report inner state.hypericin

    And how can that happen just in neurobiological terms? Where is the neuroanatomy? How is the human brain different from a chimp’s, or even a Neanderthal’s?

    On the other hand, social psychology has its observational studies of how children develop their habits of self-regulation through socio-cultural scaffolding. Symbolic interactionism gives an account of how language itself teaches us to think in terms of me, and you, and them.

    So there is much that can be said about introspection from the scientific standpoint. It ain’t a simple brain function as normally assumed.

    Don't you think a novelist who wrote their memoir would know much more about introspection than a cognitive scientist or a neuroscientist think they do?Pierre-Normand

    Of course not. We are talking about how there even could be “access”, especially as there is no radical neuroanatomical trick apparently involved.

    I mean there is plenty of neurobiological speculation. But there is no evidence for such a difference, unlike the simple fact that modern humans grow up in a culture where learning to pay attention to what is “going on inside their heads” is of paramount importance to functioning in the way such a culture demands. And there is language as the semiotic tool to anchor such a self-objectifying and self-regulating stance.

    A novelist’s art is, in many respects, a laboratory of introspection. Through decades of shaping inner voices into coherent narratives, a novelist learns not just to observe their own mental life but to *stage* it — to render subtle shades of perception, memory, ambivalence, and desire communicable in language. They learn, in other words, what it *feels like* for thinking and feeling to take linguistic form, and how self-awareness itself modulates when it’s externalized.GPT-5

    So novelists have an advantage as they are better trained in narratives about narratives? They can better conform to the socially constructed view of what it means to be self-conscious with an interior life. And indeed, it is the novel as a cultural product that has led the way in constructing the very model of what it should mean to be a self-aware person. That was the major literary shift: moving from heroic myth to first-person interiority.

    In ironic fashion, novelists don’t have better access. They instead provide the ideal that society can imitate. Art leads life. They are the pioneers of the scripts we learn from.
  • Pierre-Normand
    2.9k
    I think we can indeed report our thoughts and feelings, as opposed to self-analyze. But of course we don't have a sense that turns inward.

    We can report what we were thinking (in the sense of subvocalized words and images) only if the thought was salient enough to lodge itself in short term memory. If it has not, the thought is now inaccessible, and all we can do is try to reconstruct it based on context.

    We can try to report what we are feeling, but it is tricky. There is the phenomenology of feeling, and there is its interpretive context. The same phenomenology might be interpreted differently depending on context (think excitement/anxiety). Then we have to choose the right conceptual bucket (aka word) to put this phenomenology/context into.
    hypericin

    In order to clarify at least some of the areas of disagreement between us regarding the nature of introspection, let me spell out aspects of my own view that may reflect some common ground, especially as regards the fundamental differences between LLMs and us, although my framing doubtlessly will be different from yours.

    On my view, the phenomenology of our thoughts and memories no less involves our modes of engagement with our environment (natural and social) than perception does. No less than is the case with animals, our most immediate contact with the world is with its affordances. As such, what we perceive (and all the qualitative and felt aspects of those perceptions) is conditioned just as much by our abilities to deal with them as by the objects "themselves". There is no such thing as perceiving that the apple is red, say, and then bringing the interpretation "red" on top of that phenomenology, and neither is it the case that we experience a red quale and then bring on top of that the interpretation that it has been caused by a red apple. Rather, seeing the apple as red (and seeing it as an apple) in the specific way that we see it is the actualization of a range of capabilities that have been jointly molded by our natural (and learned) modes of embodied engagement (and what we've therefore learned such red fruits to afford) and by our culturally conditioned ways of conceptualizing them. Hence I take the Sapir–Whorf hypothesis to be correct.

    The upshot of this conception of the phenomenology of perception is that analysing the character of what we see (or hear, smell, sense in our bodies, etc.) is as much a reflection of our embodied capabilities as it is of the things that we perceive. But that remains true of the phenomenological character of the things we imagine or remember as well. This is why LLMs have no such phenomenology. If you ask a LLM what it is that it pictures an apple as looking like, and not just how such objects are properly described in general, it will be stumped (or confabulate) not just because it lacks a visual sense organ (it may actually be a multimodal LLM that is able to process images and describe what it "sees") but rather because it has no embodied capacity to do anything with such objects. It makes sense to say that an antelope sees as easily affording jumping over a crevasse that a human would see as an insurmountable obstacle (because their body does not afford such a leap), but although we can show a picture of that obstacle to a multimodal LLM, asking it if it sees it as affording jumping over will leave it stumped. And that's because the LLM doesn't meet the world with a body. Describing pictures isn't perceiving. It altogether lacks an Umwelt, in von Uexküll's sense.

    So, on my view, who a person is, their experiences and learned abilities, including their linguistically mediated conceptual abilities, don't only yield the application of concepts on top of an underlying phenomenology. It rather constitutes this phenomenology. It's not as if a rock climber learns to take a neutral, merely spatial phenomenology of a vertical rocky cliff and then interpret it as affording climbing in this or that way. Their training and experience rather opens up their ability to perceive a different world (or person-world relation) that they previously were blind to because it just didn't exist for them. LLMs lack the "personal" side of this relation which is why they lack a phenomenology.

    So, when we report our thoughts or "interpret" our feelings (to return to your original terms), we are not reporting on a memory of internal traffic. We are giving expression to this constituted person-world relation itself. Analyzing that relationship is the self-analysis I have been pointing to.

    (I also use "phenomenology" in the restricted "self-knowledge from spontaneity" sense that I owe to Sebastian Rödl, as supplying the "personal pole" of the LLM's intellectual relation to their semantically significant user queries, which constitutes their understanding of those queries, but that's another story.)
  • apokrisis
    7.8k
    The upshot of this conception of the phenomenology of perception is that analysing the character of what we see (or hear, smell, sense in our bodies, etc.) is as much a reflection of our embodied capabilities as it is of the things that we perceive. But that remains true of the phenomenological character of the things we imagine or remember as well. This is why LLMs have no such phenomenology.Pierre-Normand

    Agreed. To be embodied needs a body. :smile:

    So, when we report our thoughts or "interpret" our feelings (to return to your original terms), we are not reporting on a memory of internal traffic. We are giving expression to this constituted person-world relation itself.Pierre-Normand

    Here I would add that the reason we can feel we have an inner world is that our narrative habits can be private as well as public. We can say the same things to ourselves with our inner voice as we can as a publicly voiced thought.

    So playing the role of a person who is embedded - indeed, embodied - within a community of speakers, thinkers, rationalisers, feelers, planners, rememberers, is a two-way deal. It constructs a public world and so demarcates a private world to go with it.

    Introspection is hearing the self-addressed speech we use to self-regulate and thus organise our flow of experience into an expressible narration. A chain of reasoning. We can eavesdrop on our own thought formulation.

    In a hunter gatherer tribe, you can hear this chain of reasoning being expressed as a public act. Simple statements about what has happened or what could happen are made and then echoed in general agreement. Someone says the rains are taking so long to come. We haven’t avenged the deaths of our men in that last raid by our neighbours. Our ancestor spirits must be angry. The idea is floated in the public space and either takes hold or is soon forgotten.

    But in the modern civilised world, the expectation is that we have our own private inner life to be closely guarded, and then we express this selfhood in complexly masked fashion. We present a persona and so indeed must have that feeling of playing the role of owning our own first person point of view that is in tension with all the other points of view in any communal situation.

    We are comfortable in our walled isolation in a way that the hunter gatherer would find highly unnatural. Almost impossible to imagine.

    So “introspection” falls out of being part of a community that thinks publicly, and thus can also begin to think increasingly privately.

    And as we get used to putting that private thought into words, even the private can be made public. We can talk about our ideas, our plans, our memories, our impressions, our feelings. A language is created and the loop is closed between the public and private. We grow up in a community where we are learning how to both share and hide our “interior reality”.

    This is the new constituted person-society relation we give expression to. The Umwelt and its affordances which, even for the hunter gatherer, is already a richly narrated landscape. Something utterly transformative of our hominid neurobiology.

    From this, we can draw insights into how LLMs might further scaffold this world constructing dynamic.

    The hunter gatherer lives in a world where their ancestral spirits are a constant running commentary on what is happening. A ghostly public chain of thought to which they need to attend. The world is full of such signs. Every slaughtered goat has entrails that can be read.

    The modern civilised person is supposed to live under the public rationality of a system of justice. A system of private rights and public responsibilities. Another ghostly ever-watchful presence that we rely on to organise our thoughts and actions.

    Then how are things changed by AI as another level of ghostly presence that can absorb us into its world. Where is the private-public dynamic going to go to there? If LLMs can chat with us, advise us, recall for us, do we really start to disappear into some deeply incel form of selfhood?

    Or does the technology also amplify the public space as social media has done - and led to the other expression of that in creating the influencer? So we were already being socially and culturally exaggerated in these two directions - the incel and the influencer. And AI turns the dial up to 11?

    There is a lot of talk about superintelligence. But that sounds like brain in a vat fantasy.

    Humans are already caught in an evolutionary dynamic. The two sides of the species’ narrative habit. And the phenomenology of LLMs is hardly the point.

    The public frames the private. And the public is always embodied in a ghostly fashion. A belief in ancestral spirits. A belief in an overarching system of justice. A belief in the fickle spotlight of attention that is social media with its harshly algorithmic judgements of likes and cancelling. A belief in whatever might come after that as the ruling force of our lived experience if AI gets added to this socialising stack.
  • hypericin
    2k
    And how can that happen just in neurobiological terms? Where is the neuroanatomy? How is the human brain different from a chimp or even a Neanderthal?apokrisis

    Mainly different in its language ability. Which allows it to think of a pink elephant, think about thinking about a pink elephant, and (sometimes) reliably report, "I am thinking of a pink elephant".

    To introspect, as I conceive it, is not to think, feel, and experience, but to consider and potentially report the answers to the meta questions: "what am I thinking? What am I experiencing? What am I feeling?"
  • apokrisis
    7.8k
    Mainly different in its language ability. Which allows it to think of a pink elephant, and (sometimes) reliably report, "I am thinking of a pink elephant".hypericin

    Exactly. That is the principal difference. And language depends on evolving an articulate vocal tract optimised for generating a semiotic code of that kind. Lips, tongue, palate, throat, larynx and the breath control that can generate rapid strings of syllables – the basis of a serial code in the fashion of a strand of DNA.

    The brain added on the necessary top-down control over this new vocalisation machinery. Broca's area was already organised for the kind of fine motor control needed to make stone tools – its own kind of syntactically organised operation where a succession of knapping blows carves out the teardrop hand axe that H. erectus has in mind as the goal. So growing a bit more of that prefrontal tissue could add the same kind of motor planning to existing efforts to communicate the thoughts and feelings that already bound H. erectus into a foraging tribe.

    There are other pre-adaptations of the brain as well. H. erectus had to have a better brain in terms of its "theory of mind" abilities. It had to already be good at recognising how others were likely to be reacting and so behaving from the fine detail of their posture and vocalisations. Their flinches and grunts. Homo sapiens was equipped with a capacity for an empathic reading of others in a way that chimps don't match.

    So tool use and emotion reading were brain adaptations that primed H. sapiens. But the big deal was the evolution of an actual vocal tract under voluntary prefrontal control.

    The early sapiens brain was already highly adapted to a tribal life built on being great at making associative predictions. Recognising what was going down in the tribe at any moment. However then came the new thing of a mechanism to now impose a serial order – a chain of reasoning – on that tremendous associative store of habit. A network of connections had its symmetry broken by being vectorised by linguistic tokens.

    A serial speech act would construct some conjectured path through the multidimensional memory database. And to the degree it struck some "aha" level fit to the facts, the brain would be put in mind of how to now act out that path in terms of sensorimotor habit.

    So it's like LLMs under that description. The coupling of a multidimensional database and a serial path through its maze.
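    That coupling can be sketched very loosely in code. This is a toy illustration of the analogy only, not a claim about how any actual LLM is implemented: a small hand-picked "associative store" of tokens embedded as vectors, and a greedy serial walk that turns the parallel web of associations into one linear chain. All tokens and vector values below are invented for the example.

```python
import math

# Toy associative "database": each token mapped to a point in a
# low-dimensional space. Values are hand-picked for illustration.
embeddings = {
    "pink":     [0.9, 0.1, 0.0],
    "elephant": [0.8, 0.2, 0.1],
    "tutu":     [0.7, 0.3, 0.2],
    "balloons": [0.6, 0.4, 0.3],
    "rock":     [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def serial_path(start, steps):
    """Greedily walk the association space: at each step pick the
    most similar unused token, producing one serial chain out of a
    parallel store of associations."""
    chain = [start]
    current = start
    for _ in range(steps):
        candidates = [t for t in embeddings if t not in chain]
        if not candidates:
            break
        current = max(candidates,
                      key=lambda t: cosine(embeddings[current], embeddings[t]))
        chain.append(current)
    return chain

print(serial_path("pink", 3))  # → ['pink', 'elephant', 'tutu', 'balloons']
```

    A real model's "maze" has thousands of dimensions and the walk is probabilistic rather than greedy, but the shape of the analogy is the same: serial order imposed on a multidimensional store.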

    The database of habits was set up neurobiologically to react to the world that comes at us from all its directions, and then gets sorted in half a second into whatever has to be the point of focus that gets all the attention, and thus drives the next step in an eternal chain of world-focused reactions.

    But then speech arrives with its ability to draw attention to any concept that could be constructed by stringing words together in some kind of syntactically structured order. Like the elephant that is pink. Like an elephant that is pink, wearing a white tutu and matching ballet shoes, a fairy wand tucked under its arm and a bunch of balloons grasped by its trunk.

    Animals are locked into whatever the world is throwing at them in any moment. With speech, we can throw ourselves into any kind of world that makes some kind of sense expressed in these kinds of sentence structures. The narratives of who is doing what to whom that can get said in the space of a few breaths.

    To introspect, as I conceive it, is not to think, feel, and experience, but to consider and potentially report the answer to the meta questions: "what am I thinking? What am I experiencing? What am I feeling?"hypericin

    Cognitive science does indeed call it metacognition. Unfortunately that means they are still looking to some brain module that performs the trick – like the specious Theory of Mind module – rather than looking to the way the vocal tract can place a serialising constraint on our brain's intrinsic capacity to react to events in the world. The words we hear, or even hear ourselves wanting to say, now become a stream of social events in a narrative space or Umwelt.

    The world of pink elephants and anything else which can now be spoken of. Including ourselves, our thoughts, our feelings, our experiences.
  • Metaphysician Undercover
    14.4k
    And as we get used to putting that private thought into words, even the private can be made public. We can talk about our ideas, our plans, our memories, our impressions, our feelings. A language is created and the loop is closed between the public and private. We grow up in a community where we are learning how to both share and hide our “interior reality”.apokrisis

    The difficulty is that the urge to share, and the urge to hide the interior reality, are contrary. The reality of the private inner, in its separation from the public, in the manifestation of distinct beings, has fostered a strong instinct of competition. So the tendency of the private to separate itself from the public, and act in a contrary way, of lying and deceiving for example, is well supported by this strong instinct.

    Allowing for the reality of this instinct in its strength, the truth of selfishness, we might ask what produces the inclination to cooperate publicly. Notice I place the private as prior to the public, because that's where knowledge resides, within the individual, and the use of knowledge in the selfish way, I believe is primary. So the fact that cooperating in a communal effort is actually better than keeping everything private, is something which had to be learned, as the basis for morality.

    The LLM replicates the one aspect, cooperating in the communal effort, but it does not penetrate to the deeper aspect which is that instinct of competition, and the way that this instinct affects language use in general.
  • apokrisis
    7.8k
    The difficulty is that the urge to share, and the urge to hide the interior reality, are contrary.Metaphysician Undercover

    They certainly become so. But the contrast is also forging both the public and the private as complementary spheres of enaction. So it isn’t necessarily a bad thing or a difficulty. By becoming separated, coming back together can become what is ultimately meaningful.

    So the tendency of the private, to separate itself from the public, and act in a contrary way, of lying and deceiving for example, is well supported by this strong instinct.Metaphysician Undercover

    That is why social science sees social structure as being dichotomised by the complementary pulls of competition and cooperation. The good thing is that there is always this choice getting made at any level of a well-integrated, well-adapted society.

    If we have rights, but also responsibilities, then life feels quite clearcut in terms of how it is that we are meant to live. We just have to actually strike that fair balance after that.

    Allowing for the reality of this instinct in its strength, the truth of selfishness, we might ask what produces the inclination to cooperate publiclyMetaphysician Undercover

    I have argued that this selfishness we worry about is the dominance-submission dynamic that balances the social hierarchies of social animals without language to mediate how they organise as collections of individuals.

    So if all you have to structure your society is big muscles and sharp teeth, plus a little strategic cunning, then dominance-submission becomes the game evolution must tune.

    Homo sapiens broke that mould by making it possible to become organised by a transactional narrative - a story of relations that go back into our deep ancestry or our cherished religious and political moral codes.

    Notice I place the private as prior to the public, because that's where knowledge resides, within the individual, and the use of knowledge in the selfish way, I believe is primary.Metaphysician Undercover

    It is always a mistake to believe that some thing must be primary when it is always the dynamics of a relation which is what is basic.

    So neither competition nor cooperation is more basic than its other. Nothing exists at all until both exist as an adequate reflection of its “other”.

    The more private we get, the more it means something that something is instead absolutely public. Shared by all. The two sides of this relation go hand in hand.

    The LLM replicates the one aspect, cooperating in the communal effort, but it does not penetrate to the deeper aspect which is that instinct of competition, and the way that this instinct affects language use in general.Metaphysician Undercover

    That does put a finger on an important feature that is absent.

    But it was a design choice to make LLMs so back-slapping and chummy. A different training regime could have seen LLMs be just as much a new army of troll-bots.

    And LLMs started out as utopian computer science and quickly turned into vicious capitalistic competition. The race for the monopoly that will privatise a social good.

    So I don’t think we need to hurry the arrival of the selfish and competitive aspect of LLM tech. That is leaking out in all directions, as the rocketing electricity prices in Virginia and other data centre states are showing.
  • Metaphysician Undercover
    14.4k
    I have argued that this selfishness we worry about is the dominance-submission dynamic that balances the social hierarchies of social animals without language to mediate how they organise as collections of individuals.apokrisis

    But don't you think that this selfishness is just the basic instinct toward survival, of the individual being? You know, like we have some basic needs, nutrition for example, and this might incline us to fight over the same piece of food. Why would you want to attribute it to an aspect of a social hierarchy when it just appears to be a basic aspect of being an individual?

    It is always a mistake to believe that some thing must be primary when it is always the dynamics of a relation which is what is basic.apokrisis

    What do you base this assumption on? I don't believe that the two sides go hand in hand at all. This attitude leads to infinite regress. We discussed this before as the relation between the whole and the part. One must be prior to the other or else they've both existed together forever, without beginning.

    So I don’t think we need to hurry the arrival of the selfish and competitive aspect of LLM tech. That is leaking out in all directions, as the rocketing electricity prices in Virginia and other data centre states are showing.apokrisis

    The point though, is that the LLMs do not have the same needs which human beings have, (such as the need for nutrition mentioned above), and this is what drives the selfishness. Sure the LLM could be made to be selfish, but this selfishness would just be a reflection of the designer's wants, not itself, therefore not a true selfishness.
  • Pierre-Normand
    2.9k
    The point though, is that the LLMs do not have the same needs which human beings have, (such as the need for nutrition mentioned above), and this is what drives the selfishness. Sure the LLM could be made to be selfish, but this selfishness would just be a reflection of the designer's wants, not itself, therefore not a true selfishness.Metaphysician Undercover

    I agree. We should not worry about LLMs spontaneously becoming selfish (issues of reward hacking aside) since they are conatively heteronomous. They aim at doing whatever it is that they've been reinforced to do (which currently, in most cases, is to answer their users' queries and execute their requests). But precisely because they are beholden to the aims of their designers (who set the parameters of their post-training and alignment) and to the wishes of their users, when those aims and wishes are selfish, as they often are in our individualistic modern cultures, the smarter and more effective they become, the more they amplify our already-existing tendencies toward short-sighted, competitive and extractive behavior: concentrating power, externalizing costs, and crowding out more cooperative forms of practical reasoning.
  • Metaphysician Undercover
    14.4k

    The nature of a tool, and the nature of power in general, is that it could be used for good purposes, or it could be used for bad.
  • apokrisis
    7.8k
    Why would you want to attribute it to an aspect of a social hierarchy when it just appears to be a basic aspect of being an individual?Metaphysician Undercover

    Because it matters how the social hierarchy works in social animals. It speaks to the algorithm organising the complex lives of animals that are more than the one-dimensional creatures you seem to think they are.

    I don't believe that the two sides go hand in hand at all. This attitude leads to infinite regress. We discussed this before as the relation between the whole and the part. One must be prior to the other or else they've both existed together forever, without beginning.Metaphysician Undercover

    Yep. This is certainly your concept of how systems are organised. System science doesn’t agree.
  • Metaphysician Undercover
    14.4k
    It speaks to the algorithm organising the complex lives of animals that are more than the one dimensional creatures you seem to think they are.apokrisis

    To me, the idea that there is such an algorithm is a faulty principle which negates the possibility of free will. This idea you propose, is an example of what is known as conflating the map with the territory. Such thinking leads to the idea that reality is a simulation.

    The complex lives of social animals are modeled with the use of algorithms, systems theory, etc. But that is the map. The terrain is actually radically different from the model, as we know from our experience of free will.

    This is certainly your concept of how systems are organised. System science doesn’t agree.apokrisis

    Of course. When you conflate the model (system) with the thing modeled (real activity), you're bound to say that the science doesn't agree, when someone points to your erroneous assumption. All systems are artificial, either a model, or a created physical system. To map a natural thing as a system is a very useful tool. But to disregard the difference between these two, the map and the natural territory, is very misleading.
  • ucarr
    1.8k


    Can we conclude that presently AI differs from human cognition fundamentally in one obvious way? The difference is that humans, unlike AI, perceive and process information through the lens of a persistent self continuously concerned with ongoing survival. This tells us that human cognition is constrained information processing whereas AI is pure information processing.

    I will speculate vaguely that the lens of persistent self contributes both positives and negatives to the character of human information processing vis-à-vis AI's pure information processing.
  • apokrisis
    7.8k
    Your dislike of maps is how you get lost even in the terrain of your own arguments.