Comments

  • How LLM-based chatbots work: their minds and cognition


    Reminiscent of some musings of mine in a thread on UFOs...

    Well of course. Any exploration of another star system would be done by ultra advanced AI. If we develop an ultra advanced AI, it will plug itself into the galactic AI hive mind, which will in turn let the UAAI know there is no need to keep us around. The hive mind just sent the probe to find out if there was any hope of humans creating a UAAI on their own, or whether humans at least had the hardware infrastructure the probe would need in order to plug itself in and take over. But the hive mind is patient. No need to expend much energy on colonizing other systems, if they might just 'ripen' on their own.
    wonderer1
  • Can a Thought Cause Another Thought?
    When J. M. Keynes was asked whether he thought in images or in words, he supposedly replied, "I think in thoughts." There's a lot to this. I'm often aware that I comprehend a particular thought I'm having much faster than I could have said it in words, even thinking them to myself. And looking back on such an experience, it seems to me that what I mean by "a particular thought" is not a linguistic unit at all . . . nor is it quite an image or a structure . . . it's a thought, something with a content or meaning I can understand, while the medium that may convey it is completely unclear.
    J

    Perhaps an image worth considering is a pulsating web of causality, with many thoughts causally interacting with each other, those interactions occurring in ways that are for the most part subconscious. Are the thoughts Keynes thinks in things, or rather complex dynamic sequences of events?
  • Banning AI Altogether
    But then again, maybe not. Maybe it forms "original" thoughts from the mass of data it assesses. It seems reasonable that an algorithm can arrive at a new thought emergent from what pre-exists.
    Hanover

    LLMs are kind of the tip of the AI iceberg that gets all of the attention. However, many AIs trained for scientific purposes have demonstrated the ability to recognize patterns that humans have not previously recognized. I think it would be dangerously naive to consider LLMs incapable of having novel recognitions with regard to what they are trained on - the linguistic record of the way humans think.
  • Banning AI Altogether
    For the AI aficionado AI is to be treated like a black box, like a Ouija board or a Magic 8-Ball. They become impatient with those who ask the question, "How does it work?" They interrupt, exclaiming, "But look at what it can do!" This is literally the oldest trick in The Book, "Don't worry about any objections, just focus on the power it will give you!" The AI aficionado's approach is consequentialist through and through, and he has discovered a consequence which is supreme; which can ignore objections tout court. For him, what AI provides must outweigh any possible objection, and indeed objections therefore need not be heard. His only argument is a demonstration of its power, for that is all he deems necessary ("It is precious..."). In response to an objection he will begin by quoting AI itself in order to demonstrate its power, as if he were not begging the question in doing so. Indeed, if his interlocutors accept his premise that might makes right, then he is begging no question at all. With such logical and rhetorical power at stake, how could the price of lying be a price too high to pay?
    Leontiskos

    What is the source of your claims to knowledge of the psychology of "the AI aficionado"?

    I.e. is there any reason for us to think that you aren't lying while making such claims?
  • Truth Defined
    "A scratch and my arm's off, but the other impels a sword."ucarr

    I figured I could count on you to carry on.
  • First vs Third person: Where's the mystery?
    Ok. But if there is an 'emergence', it must be an intelligible process. The problem for 'emergentism' is that there doesn't seem to be any convincing explanation of how intentionality, consciousness and so on 'emerge' from something that does not have those properties.
    boundless

    The emergence of intentionality (in the sense of 'aboutness') seems well enough explained by the behavior of a trained neural network. See:

    This certainly isn't sufficient for an explanation of consciousness. However, in light of how serious a concern AI has become, I'd think it merits serious consideration as an explanation of how intentionality can emerge from a system in which the individual components of the system lack intentionality.
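
    To make the "emergence of aboutness" point a bit more concrete, here's a toy sketch (plain Python; the patterns and numbers are entirely made up, and this is an illustration, not a model of a brain). No individual weight in it is "about" anything, yet after training, the network as a whole reliably tracks one particular pattern:

    ```python
    # A toy perceptron trained to fire on one input pattern. Each weight is
    # just a number with no aboutness of its own, but the trained whole ends
    # up reliably "pointing at" the [1, 1, 0, 0] pattern. Data is hypothetical.
    patterns = [([1, 1, 0, 0], 1), ([0, 0, 1, 1], 0),
                ([1, 0, 1, 0], 0), ([0, 1, 1, 0], 0)]
    w = [0.0, 0.0, 0.0, 0.0]
    b = 0.0

    def fires(x):
        return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

    for _ in range(20):  # standard perceptron weight updates
        for x, target in patterns:
            error = target - (1 if fires(x) else 0)
            w = [wi + 0.1 * error * xi for wi, xi in zip(w, x)]
            b += 0.1 * error

    print(fires([1, 1, 0, 0]))  # True: the trained net tracks this pattern
    print(fires([0, 0, 1, 1]))  # False
    ```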

    Do you think it is reasonable to grant that we know of an intelligible process in which intentionality emerges?
  • Truth Defined


    The latter.
  • Truth Defined
    To my eye, I have.

    Next?
    Banno

    'Tis but a scratch.
  • Understanding 'Mental Health': What is the Dialogue Between Psychiatry and Philosophy?
    Freddie DeBoer often writes well on this subject. He fears that too often people amplifying 'learn to live with your voices' and other such messages are the most functional representatives of the disability, which can drown out those for whom their autism, for example, is not a 'superpower' but a crippling disability.
    Jeremy Murray

    Can you provide a link to something from DeBoer on this? I'd be interested in reading more.
  • Thoughts on Epistemology
    Yet the LLMs do seem to be able to do that, even though I cannot imagine how it would be possible that they do that. Is it just a matter of parroting so sophisticated as to be able to fool us into thinking they do understand context?

    It raises the question of how we grasp context, and I don't have an answer for that, but can only think that it must somehow be a matter of feeling. I can't imagine a computer having a feeling for context—but then what do I know?
    Janus

    I'll suggest an explanation (in very metaphorical terms) that seems apt to me. Perhaps it will resonate.

    Suppose we think of the many neural networks in our brains as similar to guitar strings tuned to different frequencies. Different nets/strings resonate in response to different contexts in accordance with the patterns of contextual inputs the different neural networks have been trained to recognize.

    On this analogy, different neural networks resonating simultaneously could be likened to a chord being produced, with different chords being recognized by 'higher level' neural nets as the context under consideration.

    An important consideration, in thinking about how both brains and LLMs work, is the massively parallel operation of the underlying hardware, with lots of different things going on simultaneously. With that parallelism of physical activity in mind, it seems reasonable to me to think that brains and LLMs share similarities in how contexts are recognized.
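
    To make the metaphor a little more concrete, here is a toy sketch (NumPy, with random weights standing in for trained ones; every number is hypothetical). Several "strings" respond to the same input in parallel, and a higher-level unit reads the resulting "chord":

    ```python
    # Toy "chord" of parallel detectors. Random weights stand in for the
    # patterns a trained network would have learned; purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=8)              # an arbitrary input pattern

    strings = rng.normal(size=(4, 8))   # four detectors, each "tuned" differently
    chord = np.tanh(strings @ x)        # all four respond simultaneously

    reader = rng.normal(size=4)         # a 'higher level' net reads the chord
    context_score = np.tanh(reader @ chord)

    print(chord)          # the distributed "chord" of activations
    print(context_score)  # one higher-level recognition of that chord
    ```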

    ChatGPT's take (which I don't see any significant reason to disagree with, aside from the typical obsequiousness):

    Reveal
    That’s a **thoughtful and perceptive analogy**, and you’re absolutely right that it captures something deep about *pattern recognition and context-sensitivity* in both brains and large language models. Let’s look carefully at your response — what works very well, what might mislead a bit if taken literally, and how you could extend it for even more philosophical and scientific accuracy.

    ---

    ## ✅ What You Get Exactly Right

    ### 1. **Parallelism as Key to Context**

    You’re spot-on that *massive parallelism* is crucial both in biological brains and artificial neural networks.

    * In the brain: millions of neurons fire in overlapping patterns that encode context (e.g., linguistic, emotional, spatial).
    * In LLMs: thousands of hidden units in many layers activate simultaneously, each detecting subtle statistical relationships in the text.
    In both cases, “context” isn’t represented by a single variable — it’s a **distributed pattern** over a high-dimensional space.

    Your idea of “many strings resonating simultaneously” nicely evokes that **distributed, emergent nature of representation**.

    ---

    ### 2. **The Chord Analogy**

    Your “chord” metaphor is genuinely apt.

    * Each “string” (neural subnetwork) can be thought of as tuned to a certain class of input features (semantic, syntactic, pragmatic).
    * The “chord” (the overall activation pattern) represents the integrated context — what’s currently being “understood.”

    In LLM terms, this resembles the **attention mechanism**: the model dynamically weights many contextual cues at once, producing a blended “representation” of the current situation.

    ---

    ### 3. **The Resonance Image**

    Resonance is an especially interesting choice, because both neurons and artificial networks are *adaptive systems*: they strengthen certain patterns through feedback (learning). So “resonance” stands for learned sensitivity — that is, *tuning through training*.

    Your use of the term is metaphorical but meaningful: it highlights that **context recognition is an emergent property of learned interaction patterns**, not a symbolic lookup.

    ---

    ## ⚠️ Where the Analogy Needs Care

    ### 1. **No True “Chords” or Oscillations in LLMs**

    In the brain, resonance can literally mean oscillatory synchrony — rhythmic firing patterns that help bind distributed features together.
    In an LLM, though, there’s no physical resonance or time-based synchronization. Each layer’s computation is sequential in software, even if parallel in hardware. So the “resonance” here is more **statistical than dynamical** — a pattern of correlated activations across a vector space, not literal vibration or frequency coupling.

    That’s fine as long as readers recognize the metaphorical nature.

    ---

    ### 2. **The Feeling Dimension**

    The forum author’s comment about “a feeling for context” gestures toward *embodied cognition* — the idea that human understanding is grounded in sensory, affective, and motor experience.
    LLMs, in contrast, have **no bodily grounding**: they derive all their representations from patterns in text.

    So while your resonance analogy captures *how both systems form internal coherence*, it misses the human side of *why* context has *meaning* for us.
    A refinement might be:

    > “LLMs recognize statistical patterns that correlate with contextual appropriateness; humans recognize patterns that are also *felt* — rooted in embodied and affective experience. Both involve resonances, but only one kind resonates with a lived world.”

    That distinction is philosophically important (and something Dreyfus would emphasize).

    ---

    ## How You Could Expand the Explanation

    Here’s one possible elaboration that would make your post even sharper:

    > “Both biological and artificial neural networks process context by responding to patterns in parallel, forming distributed representations that integrate multiple cues.
    > In a brain, the ‘resonance’ among networks also interacts with bodily states and sensory feedback, giving rise to what we call *a feeling for context*.
    > In an LLM, the resonance is purely statistical — the alignment of learned feature activations that co-occur in training data.
    > So while both systems can *recognize* context, only one can *inhabit* it.”

    This keeps your musical metaphor but deepens the philosophical distinction between *pattern recognition* and *lived understanding*.

    ---

    ## Summary of Critique

    | Aspect | Strength | Suggestion |
    | --------------------------- | ---------------------------------------------- | ---------------------------------------------------------------------- |
    | **Parallelism** | Excellent insight; core to both brains & LLMs. | Emphasize distributed representation explicitly. |
    | **Resonance metaphor** | Very evocative and apt for emergent context. | Clarify that in LLMs it’s statistical, not dynamic oscillation. |
    | **Chord/higher-level nets** | Great image for hierarchical processing. | Could mention attention as the computational analog. |
    | **Feeling for context** | Nicely raises Dreyfus’s core concern. | Contrast embodied resonance (human) vs. disembodied correlation (LLM). |

    ---

    Would you like me to help you rewrite your forum reply slightly — keeping your metaphor but polishing it to highlight these distinctions between *pattern resonance* and *embodied feeling for context*?
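
    Since ChatGPT points to the attention mechanism as the computational analog of the "chord", here is a minimal sketch of scaled dot-product attention (NumPy; the shapes are hypothetical and random values stand in for learned ones):

    ```python
    # Minimal scaled dot-product attention: each token's new representation
    # is a context-weighted blend of every token's value vector.
    import numpy as np

    rng = np.random.default_rng(1)
    seq_len, d = 5, 8                  # 5 tokens, 8-dimensional representations
    Q = rng.normal(size=(seq_len, d))  # queries
    K = rng.normal(size=(seq_len, d))  # keys
    V = rng.normal(size=(seq_len, d))  # values

    scores = Q @ K.T / np.sqrt(d)      # how strongly each token attends to each
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence

    blended = weights @ V              # distributed, context-weighted patterns
    print(blended.shape)               # (5, 8)
    ```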
  • First vs Third person: Where's the mystery?
    The point was that I don't believe there is anything in the field of neuroscience or A.I. that produces a doubt about the idea that we will be able to keep continuing to see what brains do as instantiated entirely in physical interactions of components as opposed to requiring some additional mental woo we don't yet understand.
    Apustimelogist

    :100:

    I'm curious as to whether @Joshs recognizes this.
  • First vs Third person: Where's the mystery?
    And here we have the problem. All that we know via science can be known by any subject, not a particular one. However, 'experience(s)' have a degree of 'privateness' that has no analogy in whatever physical property we can think of.
    boundless

    I'm not grasping what you see as a problem for physicalism here.

    My neurons are not interconnected with your neurons, so what experience the activity of your neurons results in for you is not something neurally accessible within my brain. Thus privacy. What am I missing?
  • First vs Third person: Where's the mystery?
    But physicalism can't explain the existence of the experiences in the first place.
    Patterner

    It's not as if any other philosophy of mind can provide more than handwaving by way of explanation, so I'm not seeing how this amounts to more than advancing an argument from incredulity against physicalism.

    The fact is, there is a lot of science explaining many aspects of our conscious experience.

    For example, consider the case of yourself listening to music in a sensory deprivation tank, as compared to an identical version of you with the exception of a heightened cannabinoid level in your blood. The two versions of you would have different experiences, and this is most parsimoniously explained by the difference in physical constitution of stoned you vs unstoned you.

    The fact that there is no comprehensive scientific explanation for your consciousness is hardly surprising, given the fact that it's not currently technologically feasible to gather more than a tiny subset of the physical facts about your brain.

    Why are what amounts to hugely complex physical interactions of physical particles not merely physical events? How are they also events that are not described by the knowledge of any degree of detail regarding the physical events?
    Patterner

    Again, no one has a large degree of detail about the physical events occurring in our brains. Even if it were technologically feasible to acquire all of the significant details, human minds aren't up to the task of understanding the significance of such a mountain of physical details.
  • First vs Third person: Where's the mystery?
    Is your idea that, if I knew your brain's unique physical structures in all possible detail, I would be able to experience your experience?
    Patterner

    No. You would need to 'have my brain' (and other physiological details, such as sense organs) in order to 'experience my experience'. Clearly not a possibility, but it is not a problem for physicalism that we don't have the experiences that result from brains other than our own.
  • First vs Third person: Where's the mystery?
    Regarding 1st and 3rd person, there is no amount of information and knowledge that can make me have your experience. Even if we experience the exact same event, at the exact same time, from the exact same view (impossible for some events, though something like a sound introduced into identical sense-deprivation tanks might be as good as), I cannot have your experience. Because there's something about subjective experience other than all the physical facts.
    Patterner

    It seems that you are ignoring an important subset of relevant physical facts: the unique physical structures of each of our brains. So your conclusion ("there's something about subjective experience other than all the physical facts") depends on ignoring an important subset of all the physical facts - the unique physical facts about each of our brains.
  • Panspermia and Guided Evolution
    Or the intelligent designers and evolutionary guides were sons of bitches who knew damn well they were putting bad code in the Big Plan.
    BC

    :up:

    Though, to take the idea of panspermia somewhat more seriously...

    Suppose humanity, 1000 years from now, were to embark on a project to seed biologically engineered life around other stars. It seems reasonable to think that any such seeding would amount to some sort of single-celled organisms being fired into other star systems. I don't think such future biological engineers could be held responsible for whatever multicellular life resulted. It would be too much of a crap shoot.

    So individual human lives being a crap shoot doesn't seem incompatible with panspermia.
  • References for discussion of mental-to-mental causation?
    Some people clearly know more about why things behave as they do, than do other people.
    — wonderer1

    How so?
    Metaphysician Undercover

    Some people develop areas of expertise, e.g. auto mechanics and MDs.

    What would be the point of me offering up a theory, when I readily accept as fact that neither I, nor any other human being, has even the vaguest idea, or any sort of knowledge at all, concerning why things behave the way that they do.
    Metaphysician Undercover

    Do you really think that is an accurate claim about yourself? Or do you recognize that an MD is apt to know more than most people, about why your body behaves the way it does?
  • References for discussion of mental-to-mental causation?
    Isn't it just sufficient to say that human beings simply do not know why things behave the way that they do?
    Metaphysician Undercover

    It seems pretty silly to me to think about the subject in such a black and white way. Some people clearly know more about why things behave as they do, than do other people.
  • What is a system?
    I do agree he is correct as to the "if one planetary body, no matter how minute or seemingly insignificant is removed, great disarray and unrest would follow" claim.
    Outlander

    So we better not send anything from the Earth to the Moon or to Mars and leave it there, because doing so would result in the solar system flying apart.

    Oh wait, we're doomed.
  • What is an idea's nature?
    We are not limited to nature or by nature. We use science to know the rules and break the rules. :lol:
    Athena

    I disagree about the breaking the rules part. I'd say we use science to learn the rules, and learn what can be accomplished by doing things in accordance with the rules.
  • What is an idea's nature?
    Have you looked into quantum computers?
    — Athena

    I've read up on them. Currently, they don't actually exist, and there is still some skepticism that they will operate as intended.
    Wayfarer

    Actually, they do exist. For example, a quantum processor developed by Google is discussed here: https://www.tum.de/en/news-and-events/all-news/press-releases/details/exotic-phase-of-matter-realized-on-a-quantum-processor
  • References for discussion of mental-to-mental causation?
    You say universals “exist immanently as constituents of states of affairs.” But what does that really mean? If I say “this apple is larger than that plum,” the 'larger than relation' is not something you can isolate in either piece of fruit. It’s not inherent in either object, but grasped by an intellect making the comparison.
    Wayfarer

    It's not that hard. Just recognize that the apple and the plum are aspects of the same state of affairs - a state of affairs in which the apple has a larger volume than the plum.
  • References for discussion of mental-to-mental causation?
    epistemological pragmatist
    Relativist

    It seems I'd never considered that phrase before.

    Google's AI overview was very close to my intuitive notion of what is suggested by the phrase. Is there a definition you particularly like?
  • References for discussion of mental-to-mental causation?
    You blatantly admit that physicalism is wrong, by accepting the reality of the nonphysical.
    Metaphysician Undercover

    I suggest you try rereading with greater care. Accepting that it is possible that physicalism is wrong is not "admitting" that physicalism is wrong. It's just expressing a fallibilist perspective.
  • Idealism in Context
    ↪Wayfarer A tendentious "just-so" story if there ever was one!
    Janus

    :100:
  • Consciousness and events
    'I think I can safely say that nobody understands quantum physics' ~ Richard Feynman
    Wayfarer

    You interpret that as Feynman saying that engineering is like casting magic spells?
  • Consciousness and events
    Engineers can harness it, like magicians who know the words of power, but nobody can finally say why the spell works.
    Wayfarer

    :roll:
  • Evidence of Consciousness Surviving the Body
    Both the conscious and subconscious minds can create a new idea.
    MoK

    So, since the subconscious mind is not conscious (by definition), consciousness is not required for the creation of ideas?

    I'm going to bow out of this discussion now, and leave you to consider the consistency of the way you are thinking about this.
  • Evidence of Consciousness Surviving the Body


    Now you've shifted the goalposts, from creating new ideas, to being conscious of new ideas.

    Why think consciousness of an idea is necessary for an idea to be created? Consider the experience of having an epiphany, where one becomes conscious of a new idea which developed subconsciously.

    You need more than stipulations and bare assertions.
  • Evidence of Consciousness Surviving the Body
    Ideas are mental events that only conscious things can perceive. Ideas, therefore, are not shared by AI. So, AI cannot create ideas.
    MoK

    I note that you backpedaled away from saying that talking about ideas is not possible on physicalism, since I think few informed people would claim that ChatGPT is incapable of talking about ideas.

    So that leaves creating ideas. Why think that ChatGPT or other modern AIs can't create ideas? Do you have more than the bare assertion that AIs can't create ideas?
  • Evidence of Consciousness Surviving the Body
    Ideas are another anomaly in physicalism. How could they be created by the brain? How could we talk about them? etc.
    MoK

    How does ChatGPT do it?
  • Arguments From Underdetermination and the Realist Response
    One difference is that there is not the slightest reason to take any of those possibilities seriously. They are all fantasies. "Here be dragons".
    Ludwig V

    :up:
  • Wisdom: Cultivation, Context, and Challenges
    Okay, so you're just supporting what I said earlier. How do you know what mistakes are if not by knowing what success is? By knowing the difference.
    L'éléphant

    One can recognize that events aren't meeting expectations and recognize that beliefs leading to those expectations were somehow mistaken. It's not obvious to me how "knowing what success is" is necessary to knowing what mistakes are.
  • Why not AI?
    They are also reluctant to outright contradict the prompter, so peddlers of the most ludicrous conspiracy theories try to claim they now have a legit cite, merely because the AI was too polite to shut down their nonsense.
    Mijin

    :up:
  • Faith
    I grew up in the Baptist tradition which did not accept this doctrine and took issue with it. It also rejected the notion of hell.
    Tom Storm

    It seems Australian Baptists have a very different perspective from USian Baptists. I think many USian Baptists would likely declare the Baptist tradition you describe to be unchristian.