• RogueAI
    3.3k
    You seem to be digressing into books, away from the original topic of the conscious mind. But think again: if there were nothing in the world, i.e. no paper, no ink, no humans, no physical objects whatsoever (imagine a place like Mars - a field with just rocks and hills), could a story of Sherlock Holmes exist? Whatever idea or story it might be, it needs to be in the form of physical media - a DVD, an ebook or a physical book - for it to exist. With no physical objects to contain ideas or books or music, nothing can exist.Corvus

    Before a story can be put down on some physical media, it exists in a mind first. Now, you may say that a story exists in a brain, but then we're back to the knowledge issue: if the story exists in a brain, and I have complete knowledge of the brain, then I will have knowledge of all the mental content of that brain, including whatever stories it may be thinking of. You tried to respond to this with a hard drive analogy, but you seemed to abandon it when I pointed out you were creating a distinction between data and the physical components of the hard drive which heavily implied dualism.

    In that sense, they are all some form of physical objects. Ideas, minds and consciousness, or whatever abstract objects you might be thinking of, talking about or imagining - they are in some form of physical existence; they need to be read, spoken or played by physical beings and instruments. They might be a different category of physical objects which are invisible, odourless and silent. But they are all some form of physical existence in nature and origin.Corvus

    Then we're back to an earlier objection. The IEP frames it nicely: "A more serious objection to Mind-Brain Type Identity, one that to this day has not been satisfactorily resolved, concerns various non-intensional properties of mental states (on the one hand), and physical states (on the other). After-images, for example, may be green or purple in color, but nobody could reasonably claim that states of the brain are green or purple. And conversely, while brain states may be spatially located with a fair degree of accuracy, it has traditionally been assumed that mental states are non-spatial."
    https://iep.utm.edu/identity/

    There is no such thing called pain. You have your biological body, which feels the sensation of pain when hit by some hard object. You call it "pain" when no such thing exists in the whole universe. It is just the state of your body cells, whose neurons send some electrical signals into your brain, and from your education and upbringing and customs, habits and cultural influence, you scream "ouch" and utter the sentence "I have pain." or "It is bloody painful."Corvus

    This is incoherent. People scream "ouch" because pain hurts. The salient feature of pain is that it feels bad. Any definition of pain which does not reference the subjective experience of hurting is incomplete. Imagine two old people from thousands of years ago talking about their various aches and pains. They know nothing about what the brain does or is. Are you saying then that their statements about their pains are nonsensical? Obviously, they can converse intelligently on the subject because when people talk of pains, they're almost always referring to the mental state of "being in pain" and not neurons and c-fibers.
  • flannel jesus
    2.9k
    ETA2: If physicalism is right, then a book is just ink on paper; patterns of squiggles. So a person with total physical knowledge of a book (ink chemistry, paper fibers, locations of atoms, etc.) should, in theory, know everything about the book.RogueAI

    This one is rather trivial. Of course someone with that knowledge could in principle learn anything about the book someone who physically had the book could. They'd have to do more work than someone who just had the book in front of them, but... so?
  • RogueAI
    3.3k
    This one is rather trivial. Of course someone with that knowledge could in principle learn anything about the book someone who physically had the book could. They'd have to do more work than someone who just had the book in front of them, but... so?flannel jesus

    So let's suppose John and Alice have complete knowledge of all the physical facts of a copy of Orwell's 1984 in English. Neither of them has ever heard of the story. John doesn't know English, but Alice does, and Alice now knows the story contained in the pages. Who has more knowledge of the book? Isn't it clear Alice does?
  • flannel jesus
    2.9k
    Ah, I didn't realise you were including language issues in that. The way you phrased the question made it sound like you thought there was a difference between knowing everything about the book vs. physically having it, whereas now the problem is really knowing the language of the book vs. not knowing the language.
  • flannel jesus
    2.9k
    So anyway, the claim now from you is, if physicalism is true then knowing everything about the physical arrangement of the book should allow you to understand the meaning of the book, even if you don't understand the language it was written in.

    I just don't think that follows.

    I mean, let's take LLMs as an example. They're a good example because they're explicitly physical. They are implemented 100% in the physical world - the computer scientists who invented them didn't learn how to imbue them with souls or anything, they work on the same physical principles as any normal computer.

    Now if you give one of these LLMs a bunch of text in a language they're trained on, they can summarise it for you pretty well.

    And if you give them a bunch of text in a language they haven't been trained on, they can't.

    So we have a fully physical system which can, loosely speaking, "understand" some stuff and not "understand" other stuff, despite having the same access to the visual characters of each text. So... no I don't think it holds that, if physicalism is true, a person should be able to understand text he hasn't been trained to understand.

    Obviously LLMs aren't the same as human beings and a summary from the LLM isn't the same as human understanding. BUT the ability to summarise and paraphrase a text is a human test for understanding, so I think the comparison is honestly robust enough.
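
    To make that trained/untrained asymmetry concrete, here is a minimal sketch in Python (my own illustrative assumption: it uses the Hugging Face transformers library and its default English summarisation model; nothing in it comes from this thread):

        # Minimal sketch: the same fully physical pipeline summarises text in a
        # language it was trained on, and degrades to noise on one it wasn't.
        from transformers import pipeline

        summarizer = pipeline("summarization")  # downloads an English-trained model

        english = ("Winston Smith works at the Ministry of Truth, rewriting old news "
                   "articles so the record always supports the Party line. He secretly "
                   "hates the Party and begins keeping a diary.")

        print(summarizer(english, max_length=30, min_length=5)[0]["summary_text"])
        # -> a reasonable short summary

        # Feed the same model text in a script it was never trained on and the
        # output is near-gibberish: same physics, same code path, no grasp of
        # the unfamiliar symbols.

    The point of the sketch is only that "can summarise" vs "can't summarise" is a difference in training, not a difference in physics.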
  • Ulthien
    34
    ↪RogueAI So anyway, the claim now from you is, if physicalism is true then knowing everything about the physical arrangement of the book should allow you to understand the meaning of the book, even if you don't understand the language it was written in.

    I just don't think that follows.
    flannel jesus

    There are 2 very good answers that confirm this:

    A) Searle and his Chinese Room argument
    i.e. special architecture is needed to sustain consciousness and it cannot arise from bit-waggling.

    B) Penrose and the implications he draws from Gödel's theorem for consciousness
    i.e. consciousness cannot be a computational or iterative result.
  • Ulthien
    34
    For any unit to be conscious as a unit, it must be a unit processing energy. Arrangements of particles must mean something other than the arrangements of particles that they are, and they must be processing that information. So DNA, the beginning of life, is also the beginning of groups of particles that are conscious as a unit.Patterner

    close, but no cigar.

    [Found your discussion online, guys, and joined the forum to contribute, as this subject has been haunting me since 2006 and the atheist forums back then ;)]

    My background is as an EE of old, which means applied physics, so I hope you do not mind me introducing more physics language into the discussion, as this offshoot of philosophy IS the language by which we understand and describe "the reality out there" (and "in there"?) nowadays.

    Me thinks we need to distinguish between:
    a) information representation - the complexity of bits and tidbits that are describing the contents, and
    b) the mechanism that senses the contents, i.e. gives them qualia as a subjective, aware, cognizant experience - the one we might name "the basis of consciousness".
  • Ulthien
    34
    Also, me thinks we need to distinguish in detail "what is searched for", i.e. the distinction between the easy and hard problems of consciousness as defined by Chalmers back in 1996.

    The core of the basis of consciousness is the ability to awarely perceive and feel the qualia (the instantaneous, non-computational, direct, subjective, private SENSING).

    Citta vritti, per Patanjali: when qualia appear in succession, we perceive a thought; when the brain rests for a few seconds (like it does after every thought), we evaluate and feel the qualia, which stumbles our cybernetics-meandering regulatory organ (the brain!) into the next perturbations.
  • Corvus
    4.6k
    This is incoherent. People scream "ouch" because pain hurts. The salient feature of pain is that it feels bad. Any definition of pain which does not reference the subjective experience of hurting is incomplete. Imagine two old people from thousands of years ago talking about their various aches and pains. They know nothing about what the brain does or is. Are you saying then that their statements about their pains are nonsensical? Obviously, they can converse intelligently on the subject because when people talk of pains, they're almost always referring to the mental state of "being in pain" and not neurons and c-fibers.RogueAI

    You utter the word "ouch" for the pain in your body, but you don't know what the state of the neurons and electrons inside your brain is for your utterance of the word. What is clear is that it is a physical state in your brain and body, not that something called "pain" exists as some object. That's what I meant.

    To find out what the conscious mind is, we need to trace how it comes into existence. Is mind posited by something or someone in your brain? Is it emergent, or generated? Or embedded into your brain when you were born?

    To me, mind is just the physical state of the brain, which is perceptual, evolutionary and also intelligent. Because of this fact, AI is coming into the world. AI and computers are 100% physical, from their bodies to the intelligence and capabilities they present. There is nothing mental about them.

    If mind is not physical, then it should survive physical death of the body it resides in. No mind has ever done so. Mind always dies when body dies, and the death is eternal.
  • flannel jesus
    2.9k
    I don't think the Chinese room argument is very good, to be honest. I think it misses the point entirely.

    I'll check out the second one.
  • RogueAI
    3.3k
    So anyway, the claim now from you is, if physicalism is true then knowing everything about the physical arrangement of the book should allow you to understand the meaning of the book, even if you don't understand the language it was written in.

    I just don't think that follows.
    flannel jesus

    If physicalism is true (more specifically, if minds are physical), how do we even begin to discuss how something like the meaning of a book is possible? My point was simply that, if physicalism is true, then knowledge of all the physical facts about a book should entail complete knowledge of the book, but obviously that's not true, so strict physicalism isn't true. My point is just a rehash of Mary's Room. I personally am an idealist.
  • RogueAI
    3.3k
    Where do you land on the issue of consciousness? What's your favored theory?
  • RogueAI
    3.3k
    a) information representation - the complexity of bits and tidbits that are describing the contentsUlthien

    If all minds in the universe suddenly disappeared, would it still be true that Sherlock Holmes lives at 221B Baker Street? Truth is supposed to be what corresponds to reality, but what reality does the Holmes story correspond to if there are no minds left to comprehend it? And yet, before the disappearance of minds, it was clearly true that Holmes lived at 221B. So how can a truth just vanish the moment consciousness does? If that’s the case, then some truths depend on minds to exist—which challenges the idea that all truths are purely objective or physical.
  • Ulthien
    34
    So how can a truth just vanish the moment consciousness does? If that’s the case, then some truths depend on minds to exist—which challenges the idea that all truths are purely objective or physical.RogueAI

    Emm... we have a quite developed information theory as a science (in EE & IT). Suffice it to say that info has relevance within a context, so information is not information in all cases, i.e. there is no objective info.

    Which has nothing to do with the fact that it is physical.
    The MEANING is not, or at least is not directly readable as, physical: you need to access & interpret it.
  • Ulthien
    34
    "Where do you land on the issue of consciousness? What's your favored theory?"

    Well, since 2006, after speaking with an anesthesiologist and getting acquainted with the TIQM of prof. Cramer, it dawned on me that aware perceiving, aka sensing of the EM situation of the brain, is quite a simple "inner" feature of a stroboscopically pulsating brain EEG EM field: it tastes or collects the situation, akin to a "weather radar", albeit here it integrates into an ever-present moment of *now* due to the instantaneous collapse of the photon wave of the brain field.

    In other words, the emitter is the observer, and the expenditure of 20% of bodily energy for the field in the brain serves the purpose of information collection and presentation as qualia.

    Here are some AI videos of my theory, which I call RFOC (Resonant Field Overlap Collapse), as an extension/explanation of prof. McFadden's CEMI (Conscious ElectroMagnetic Information) field theory... (I still work on making the theory understandable to everyone, also from different walks of life, so any input is much appreciated :)

    I presented it first at my MMC computer club annual meeting 2 years ago.

    short intro: https://youtu.be/6dA2xgdhSsw?si=yGYkBe_OIE_WW924

    AI intro: https://www.youtube.com/watch?v=-gFcgHYPlOo&list=PLTJJU-mQ_nDb-sPTq4tjMLImbhj7cceRU&index=9

    part of my lecture, AI enhanced: https://youtu.be/u3KkhQy7k_E?si=VHAHkG26oH9-6xEV

    .pdf slides: https://docs.google.com/presentation/d/1z9NZumOJKCfflgNdQOWttTmLHWUIeOU-TgHj4sGm0MA/edit?usp=sharing

    elaboration points: https://docs.google.com/document/d/1Gy0FRQHsWAG_5E7q_WmlpFCEK8i8FHRl/edit?usp=drive_link&ouid=105114585402487734057&rtpof=true&sd=true
  • Patterner
    1.6k
    For any unit to be conscious as a unit, it must be a unit processing energy. Arrangements of particles must mean something other than the arrangements of particles that they are, and they must be processing that information. So DNA, the beginning of life, is also the beginning of groups of particles that are conscious as a unit.
    — Patterner

    close, but no cigar.
    Ulthien
    Argh! Reading your quote of me, I see a mistake. I don't know how I made such an obvious mistake, but "energy" should be "information".

    Now I have yours and a bunch of other posts to read. Welcome aboard!
  • wonderer1
    2.3k


    As an EE myself, I have to say that sounds to me like pseudoscience.

    Welcome to the forum.
  • RogueAI
    3.3k
    As an EE myself, I have to say that sounds to me like pseudoscience.wonderer1

    I was reminded of another pseudoscience, IIT.
  • Ulthien
    34
    I was reminded of another pseudoscience, IIT.RogueAI

    Well, I agree on that - Integrated Information Theory from the IT colleagues DOES lack a mechanism that physically accounts for the sensing of that information, which CEMI RFOC does offer.
  • Ulthien
    34
    As an EE myself, I have to say that sounds to me like pseudoscience.wonderer1

    Well, dear colleague, have a go at the TIQM seminal paper (in the hope that you are not too young to have had a quantum mechanics curriculum in your study years): it opens the eyes directly :)

    https://drive.google.com/file/d/1M6tTbR_rt0sWjlrlKEXAcg0xzZK2QRSb/view?usp=drive_link
  • Ulthien
    34
    For any unit to be conscious as a unit, it must be a unit processing energyUlthien

    ..but exactly this "lapsus" made me join here, as it stands true for the binding of the info to sentiency: only the EM quantum field can accomplish this transposition :)
  • Patterner
    1.6k
    For any unit to be conscious as a unit, it must be a unit processing energy
    — Ulthien (should be Patterner)

    ..but exactly this "lapsus" made me join here, as it stands true for the binding of the info to sentiency: only the EM quantum field can accomplish this transposition :)
    Ulthien
    But I meant to say:

    For any unit to be conscious as a unit, it must be a unit processing information.

    A ping-pong ball is not a unit in regards to consciousness. It's just a physical arrangement of particles.

    A Rube Goldberg Machine is not a unit in regards to consciousness. It's just a bunch of physical arrangements of particles knocking into each other. There is no information anywhere in the system. No part of it means anything.

    Dominos set up to reveal whether or not a given number is prime are not a unit in regards to consciousness. There is no information being processed. Dominos are falling in a way that demonstrates something mathematical, but because they were specifically arranged to do that, not because they mean that.

    When protein is synthesized, information is processed. The structure of DNA is encoded information. The codons mean amino acids, and the order of the codons means proteins. Proteins are literally assembled. They are stuck together, molecule by molecule, in the specified order. This is the beginning of consciousness of more than individual particles.
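
    To make "the codons mean amino acids, and the order of the codons means proteins" concrete, here is a minimal sketch in Python (the tiny codon table is deliberately incomplete; the real genetic code has 64 entries):

        # Read a DNA string three bases at a time and map each codon to an
        # amino acid; the ordered result is the (toy) protein.
        CODON_TABLE = {
            "ATG": "Met",   # start codon
            "AAA": "Lys",
            "GAA": "Glu",
            "TTT": "Phe",
            "TAA": "STOP",
        }

        def translate(dna):
            protein = []
            for i in range(0, len(dna) - 2, 3):
                amino = CODON_TABLE.get(dna[i:i + 3], "?")
                if amino == "STOP":
                    break
                protein.append(amino)
            return protein

        # Same four letters, different order, different protein:
        print(translate("ATGAAAGAATTTTAA"))  # ['Met', 'Lys', 'Glu', 'Phe']
        print(translate("ATGTTTGAAAAATAA"))  # ['Met', 'Phe', 'Glu', 'Lys']

    It is the mapping plus the ordering, not the molecules as mere arrangements, that carries the information.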
  • Patterner
    1.6k
    ↪RogueAI So anyway, the claim now from you is, if physicalism is true then knowing everything about the physical arrangement of the book should allow you to understand the meaning of the book, even if you don't understand the language it was written in.

    I just don't think that follows.

    I mean, let's take LLMs as an example. They're a good example because they're explicitly physical. They are implemented 100% in the physical world - the computer scientists who invented them didn't learn how to imbue them with souls or anything, they work on the same physical principles as any normal computer.

    Now if you give one of these LLMs a bunch of text in a language they're trained on, they can summarise it for you pretty well.

    And if you give them a bunch of text in a language they haven't been trained on, they can't.

    So we have a fully physical system which can, loosely speaking, "understand" some stuff and not "understand" other stuff, despite having the same access to the visual characters of each text. So... no I don't think it holds that, if physicalism is true, a person should be able to understand text he hasn't been trained to understand.

    Obviously LLMs aren't the same as human beings and a summary from the LLM isn't the same as human understanding. BUT the ability to summarise and paraphrase a text is a human test for understanding, so I think the comparison is honestly robust enough.
    flannel jesus
    Do you think LLMs understand text? I don't think they have the slightest understanding that the marks on paper, or the binary code that the marks on paper are converted to, mean other things. I don't think they understand what meaning is, even when they are programmed to say they are. I think the binary code reacts in different ways to different binary code that is input, entirely determined by how they are programmed. I think it's very complex dominos.
  • RogueAI
    3.3k
    Do you think LLMs understand text? I don't think they have the slightest understanding that the marks on paper, or the binary code that the marks on paper are converted to, mean other things. I don't think they understand what meaning is, even when they are programmed to say they are. I think the binary code reacts in different ways to different binary code that is input, entirely determined by how they are programmed. I think it's very complex dominos.Patterner

    I agree, but...when you look under the hood at how we process meaning and produce intelligible output from inputs, it's just a bunch of neurons firing. Wouldn't a machine intelligence coming across us for the first time also be amazed we have the slightest understanding of anything?
  • Patterner
    1.6k

    Could be. Unless they have definitively figured out all about consciousness, no longer debating it the way we do, and would know for sure.
  • RogueAI
    3.3k
    Could be. Unless they have definitively figured out all about consciousness, no longer debating it the way we do, and would know for sure.Patterner

    But since we're as ignorant as we are, could we be wrong that ChatGPT doesn't understand and isn't conscious?
  • wonderer1
    2.3k
    Well, dear colleague, have a go at the TIQM seminal paper (in the hope that you are not too young to have had a quantum mechanics curriculum in your study years): it opens the eyes directly :)

    https://drive.google.com/file/d/1M6tTbR_rt0sWjlrlKEXAcg0xzZK2QRSb/view?usp=drive_link
    Ulthien

    It seems I can't access the file without giving out identifying information I don't want to give out.
  • Ulthien
    34
    It seems I can't access the file without giving out identifying information I don't want to give out.wonderer1

    i found it online :)
    https://arxiv.org/pdf/1503.00039
  • Ulthien
    34

    When protein is synthesized, information is processed. The structure of DNA is encoded information. The codons mean amino acids, and the order of the codons means proteins. Proteins are literally assembled. They are stuck together, molecule by molecule, in the specified order. This is the beginning of consciousness of more than individual particles.
    Patterner

    Prof. McFadden, the author of the CEMI theory, went along similar lines back in 2001 when, via molecular biology, which is his area of knowledge, he posited that complex biological molecules like proteins unfold from a quantum undetermined state BASED UPON their info surroundings, i.e. they communicate and adapt to assume one of 5-6 possible fold-forms based on local needs.

    Here is an AI-summary of his further thinking:

    ⚛️ 1. The Binding Problem
    McFadden was intrigued by how the brain integrates disparate sensory inputs — color, shape, motion — into a unified conscious experience. He argued that molecular mechanisms, like neurotransmitter release and ion channel activity, are temporally integrated (i.e., they process information over time), but not spatially integrated — meaning they don’t physically unify information in space.

    He proposed that only energy fields, like electromagnetic (EM) fields, can integrate information across space simultaneously.

    2. Synchronous Neuronal Firing
    Studies showed that synchronous firing of neurons correlates strongly with conscious awareness. McFadden noted that when neurons fire in sync, their EM fields reinforce each other, creating a coherent global EM field. This field, he argued, could serve as the physical substrate of consciousness.

    3. Feedback Loop & Causality
    Unlike passive molecular processes, McFadden proposed that the brain’s EM field is causally active — it doesn’t just reflect neural activity, but can influence neuron firing via voltage-gated ion channels. This creates a feedback loop: neurons generate the EM field, and the field in turn modulates neuronal behavior.

    4. Consciousness as Field-Level Computation
    He suggested that consciousness is not just computation in time (like in digital circuits), but algorithmic processing in space — within the EM field itself. This wave-based computation could handle holistic concepts like identity, meaning, and self, which are difficult to reduce to molecular interactions.
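
    A toy numerical illustration of the feedback loop in point 3 (neurons generate a global field, the field nudges the neurons back), in Python. To be clear, this is my own sketch of the general idea of field-mediated coupling, with made-up parameters, not McFadden's actual model:

        import numpy as np

        rng = np.random.default_rng(1)
        n_neurons, steps = 100, 200
        coupling = 0.15                      # strength of field -> neuron feedback
        voltage = rng.uniform(-1.0, 1.0, n_neurons)

        for _ in range(steps):
            field = voltage.mean()                     # "global field" = pooled activity
            drive = rng.normal(0.0, 0.3, n_neurons)    # each neuron's private input
            # Each neuron is pushed by its own input AND by the shared field,
            # so the field both reflects and shapes the population's behaviour.
            voltage = np.tanh(0.9 * voltage + drive + coupling * field)

        print("population mean after coupling:", round(float(voltage.mean()), 3))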
  • flannel jesus
    2.9k
    Do you think LLMs understand text? I don't think they have the slightest understanding that the marks on paper, or the binary code that the marks on paper are converted to, mean other things. I don't think they understand what meaning is, even when they are programmed to say they are. I think the binary code reacts in different ways to different binary code that is input, entirely determined by how they are programmed. I think it's very complex dominos.Patterner

    I think it's the only tangible comparison we can make at the present moment. Whether they "truly understand" or not is... kinda inaccessible to us. They pass the Turing test, they give us all the signs we would expect of understanding, and so... as far as I'm concerned, it's the most valid existing comparison point to human understanding.

    They do this:
    https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gpt-linear-emergent-world-representation
    They internally represent what "they think" the "world" looks like. If that's not some attempt at "understanding" I don't know what is.

    And so we can say, "Maybe we don't know if human minds are physical or not, but we know for sure LLMs are physical, and they display all the signs of understanding, including internal representations of what they 'think' the state of the world is, so... you can't just blanket say 'if human minds are physical, then they would understand every language in the world if it were written down'" - all the stuff Rogue was saying. What he's saying is pure speculation (probably worse than speculation, tbh, it sounds like gibberish to me); LLMs are the closest thing we have to non-speculation about the topic of understanding. Obviously it's speculative to some degree, but it's decidedly less speculative.
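
    For anyone curious what "internally represent" means operationally in that linked post, here is a minimal sketch of a linear probe in Python (the names, shapes and random stand-in data are my own assumptions; the real work trains probes on actual model activations):

        # Fit a linear classifier that predicts the contents of one board square
        # from hidden activations recorded while a model reads chess moves.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_positions, d_model = 1000, 512

        activations = rng.normal(size=(n_positions, d_model))    # stand-in activations
        square_e4 = rng.integers(0, 3, size=n_positions)         # 0 empty, 1 white, 2 black

        probe = LogisticRegression(max_iter=1000).fit(activations[:800], square_e4[:800])
        print("held-out accuracy:", round(probe.score(activations[800:], square_e4[800:]), 2))
        # Chance-level here (the data is random); high accuracy on real activations
        # is the evidence that the model encodes the board state linearly.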