• MoK
    1.8k
    A system is composed of its parts.Manuel
    Correct.

    A single H2O molecule does not have the properties of water.Manuel
    Correct.

    And we don't understand how, by combining them together, water could arise, because each individual molecule shows no "wetness".Manuel
    We understand how. The properties of water are functions of the properties of the parts. We can also simulate water.
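    To make that a bit more concrete, here is a minimal sketch of the general point that collective behavior can follow from rules defined only between pairs of parts. It is not a model of real water (no charges, no hydrogen bonds; just a generic Lennard-Jones toy, and every name and number in it is only illustrative):

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces for a small 2D cluster of particles."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            r2 = d @ d
            sr6 = (sigma ** 2 / r2) ** 3
            # force on particle i from j, derived from U = 4*eps*(sr12 - sr6)
            fij = 24 * eps * (2 * sr6 ** 2 - sr6) / r2 * d
            forces[i] += fij
            forces[j] -= fij
    return forces

# 16 particles started on a loose grid, integrated with velocity Verlet.
rng = np.random.default_rng(1)
pos = np.array([[x, y] for x in range(4) for y in range(4)], dtype=float) * 1.1
pos += rng.normal(0.0, 0.05, pos.shape)
vel = np.zeros_like(pos)
dt, f = 0.002, lj_forces(pos)
for _ in range(5000):
    pos += vel * dt + 0.5 * f * dt ** 2
    f_new = lj_forces(pos)
    vel = (vel + 0.5 * (f + f_new) * dt) * 0.999   # mild damping so the cluster settles
    f = f_new

# The cluster stays bound together rather than dispersing: a collective property
# that no single particle has, produced entirely by the pairwise rule above.
print(pos.std(axis=0))
```

    Nothing in the pairwise rule mentions "holding together as a droplet", yet that is what the collection does; that whole-level behavior is a function of the parts and their interactions.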

    You are correct that we have good theories on speed, mass, spin. I doubt they are intuitive. If they were, we would have figured out the chemistry behind them much earlier than we did. At least, that's how it looks to me.Manuel
    It is just not easy to have an intuition for how the properties of a particle can be explained in terms of the vibrations of a string. I am not a string theorist, so I cannot tell you how a certain vibration leads to a particular property, but I am sure string theorists have good intuition about this.

    Is the sentence "Think of a meaningful sentence" meaningful?Manuel
    It is. It expresses a special request: you think about a sentence when you read this sentence.

    If it is, the meaning seems to be emergent on the order of the words.Manuel
    Correct. The meaning is strongly emergent.

    Now you say an idea is something that emerges once you complete reading the sentence.Manuel
    Correct.

    How is the idea more than the sentence, if you say that a completed sentence is an idea? I'm trying to understand.Manuel
    No, I am not saying that completing a sentence is an idea. I am saying an idea emerges when you complete reading a sentence.
  • Patterner
    1.6k
    I can't see why you keep insisting that a particle, or a crystal, is a subject of experience.Wayfarer
    Because of this:
    But Nagel also sees this as an argument in support of panpsychism: If consciousness really arises from matter, then the mental must in some way be present in the basic constituents of matter. On this view, consciousness is not an inexplicable product of complex organization but a manifestation of properties already present in the fundamental building blocks of the world.Wayfarer
  • Wayfarer
    25.2k
    I included that because he does come to that conclusion in the essay of his that I presented. But I think his arguments against emergence were more important, and I also noted that he doesn't develop this idea in his later work.
  • Wayfarer
    25.2k
    I think panpsychism fails to explain the unity of experience; therefore, it is not acceptable.MoK

    I agree with you!
  • Janus
    17.4k
    My own view is that a naturalistic account of the strong emergence of mental properties, (that incorporates concepts from ethology and anthropology), including consciousness, can be consistent with a form of non-reductive physicalism or Aristotelian monism (i.e. hylomorphism) that excludes the conceivability of p-zombies and hence does away with the hard problem. Form in addition to matter is ineliminable in the description of our mental lives, but form isn't something standing over and above matter as something separate or immaterial.Pierre-Normand

    Yes, no matter without form and no form without matter―that makes good sense to me.
  • Manuel
    4.3k


    I don't think we will agree on this one about the molecules, as we are talking in circles here.

    We cannot conceive how non-mental matter could give rise to mind; that's fine. As Russell pointed out, we don't know enough about matter to say whether its intrinsic properties are like or unlike mind.

    But it seems that other issues - free will, motion, life, etc. - are equally hard; again, we may agree to disagree, I suppose.

    No, I am not saying that completing a sentence is an idea. I am saying an idea emerges when you complete reading a sentence.MoK

    So, an idea to you is composed of words. That is, the entire content of an idea is the words we use in propositions?

    Not being incredulous or anything like that, just want to clear up the issue. It's not entirely implausible.
  • Patterner
    1.6k
    I think panpsychism fails to explain the unity of experience; therefore, it is not acceptable.MoK
    End of the day, all theories explain it with, "That's the way it is." Even beyond theories of consciousness. Why is there something instead of nothing?

    And we don't understand how, by combining them together, water could arise, because each individual molecule shows no "wetness".
    — Manuel
    We understand how. The properties of water are functions of the properties of parts. We can also simulate water.
    MoK
    Right. Single molecules of water cannot be wet. Wetness is a property of groups of molecules, because of the way they bond under certain conditions. And the molecules bond the way they do under those conditions because of their properties.


    It is just not easy to have an intuition for how the properties of a particle can be explained in terms of the vibration of the string. I am not a string theorist, so I cannot tell you how a certain vibration leads to a particular property, but I am sure string theorists have good intuition about this.MoK
    I can't imagine it being explained by intuition, either. Nothing about string theory can be intuitive, even if they can make an internally consistent, mathematically perfect theory. And there isn't any evidence to support the theory, either.

    In this Ted Talk, Brian Greene gives a good talk about those strings, among other things.
  • Wayfarer
    25.2k
    My own view is that a naturalistic account of the strong emergence of mental properties, (that incorporates concepts from ethology and anthropology), including consciousness, can be consistent with a form of non-reductive physicalism or Aristotelian monism (i.e. hylomorphism) that excludes the conceivability of p-zombies and hence does away with the hard problemPierre-Normand

    So, more of a Frankenstein than a zombie, then. :wink:
  • MoK
    1.8k

    "Cup" refers to an idea. "My cup", however, refers to a new idea in my mind that objectively exists near me too. When you read this sentence, the idea of "my cup" appears to you as well, so there are two ideas in the minds of two individuals. You even have a different impression when I say that "my cup" is broken. So, what "my cup" refers to different ideas depending on how it is used in a sentence.
  • MoK
    1.8k
    Right. Single molecules of water cannot be wet. Wetness is a property of groups of molecules, because of the way they bond under certain conditions. And the molecules bond the way they do under those conditions because of their properties.Patterner
    Correct.

    In this Ted Talk, Brian Greene gives a good talk about those strings, among other things.Patterner
    The link refers to one of my posts. You may want to correct the link, which I think is this one.
  • Patterner
    1.6k
    Heh. Fixed.
  • MoK
    1.8k

    Thanks. I, however, disagree with his final thought, in which he mentioned that an amount of energy might dissipate into higher dimensions if we collide two particles hard enough. Higher dimensions might be wrapped such that no dissipation occurs, no matter how hard the particles collide.
  • Patterner
    1.6k

    Well, if the experiments go the way he expects them to go, it would be reasonable to think they went that way for the reason he expected them to.
  • MoK
    1.8k

    I don't think so.
  • noAxioms
    1.7k
    A substance is something that objectively exists.MoK
    The truth of the sum of 2 and 2 being 4 seems to objectively exist, yet isn't considered a substance by many. I have a hard time coming up with other examples. None of the things I think have objective existence are substances.

    OK, you're entitled to different definitions, but if you declare the apple to exist because you see it, that seems to be subjective existence, not objective at all.


    An idea does not have parts at the end since it is irreducibleMoK
    Disagree. Ideas have parts, but those parts are not objects or substances. I have patented ideas, and those ideas had a lot of parts. I've never patented an object of any kind.


    I think consciousness is always the same, and can always be causal.Patterner
    So you agree with my bit of logic showing that it can be measured.

    Let's say physicalism. Through purely physical interactions, life begins, and evolves. There's no such thing as consciousness. Then, a certain physical complexity comes into being. And, though consciousness was not planned, and consciousness had no role in bringing that complexity about, for no reason, that physical complexity just happens to be perfect for the existence of this entirely new thing that it has nothing to do with.

    What an extraordinary, bizarre turn of events,
    You can say all this about any feature. Just substitute say 'eye' for 'consciousness'.
    I picked that because eyes are a frequent choice for similar arguments from incredulity.

    BTW, the physicalists don't suggest that consciousness follows from mere complexity. What comes into being is improved reaction to outside stimuli, never anything new, just improvements to what was already there.


    I have yet to hear a theory, or even a wild guess, about how Chalmers' Hard Problem is explained with physicalism.Patterner
    The same can be said of Chalmers, who merely replaced a black box with a different, even blacker one. It, being inaccessible, is far less explained. Magic is not a better answer.
    There's not even a single wild guess as to a model about how the non physical mind works, operates, evolves from the past into the future. Nobody who believes in non physicalism even tries to come up with one, and they don't have the vaguest idea how to find one or even begin performing experiments on the non physical mind to test their ideas.flannel jesus
    Expressing the same criticism. Nicely put.

    End of the day, all theories explain it with, "That's the way it is." Even beyond theories of consciousness. Why is there something instead of nothing?Patterner
    I would never end the day with just that. "I don't know" is better than "that's the way it is", and better than "don't know, so magic". As for the nothing question, that one has a satisfactory (to me) analysis, starting with identifying and questioning the assumptions made in asking it.


    If it were entirely physics and chemistry, there would be no separate discipline of organic chemistry.Wayfarer
    Organic chemistry being a subset of all chemistry does not in any way imply that organic chemistry is more than chemistry, which, in turn, is just physics.

    OK, a life-form is more than just organic chemistry. One might behave as a unit, for instance, a property not particularly coming from just chemistry. But the discipline of organic chemistry is not the discipline of biology.

    The idea that life evolved naturally on the primitive Earth suggests that the first cells came into being by spontaneous chemical reactions
    Maybe. Going from not-life straight to a cell seems a stretch, but things like amphiphiles and ribose do occur in the absence of life, so it's not an impossible stretch. Going from a self-sustaining form to a replicating form seems the largest hurdle. It isn't really life until it does that.

    Ernst Mayr ... made this point in no uncertain terms: "… The discovery of the genetic code was a breakthrough of the first order. It showed why organisms are fundamentally different from any kind of nonliving material."
    Calling it a fundamental difference does not preclude it from being based on physics and chemistry.

    Descartes had difficulty explaining how res cogitans affects matter, suggesting that the rational soul operated through the pineal gland.Wayfarer
    The suggestion of the pineal gland was not an attempt at an explanation of how matter was affected, but rather a choice of something in/near the brain that there was only one of. The brain being somewhat symmetrical, most brain parts have a mirror counterpart, but not that gland. Still, the soul could have been put in the heart (only one of those) or the gut (plenty of behavior and choices come from there).

    What about abstract objects like numbers and logical rules? Do you think there are physical explanations for them?Wayfarer
    Abstractions are mental constructs, and so supervene on mental constructs/states. Same with abstractions of, say, an apple.


    We know that materialism fails since it cannot explain how ideas emerge and how they can be causally efficacious in the world, given that ideas are irreducible and have no parts.MoK
    1) I don't accept your given, and 2) as usual, your conclusion does not follow from your given premise.
  • RogueAI
    3.3k
    BTW, the physicalists don't suggest that consciousness follows from mere complexity. What comes into being is improved reaction to outside stimuli, never anything new, just improvements to what was already there.noAxioms

    The computationalists and IIT proponents, for example, suggest that consciousness emerges from computation and/or information processing, and they usually invoke a threshold of computation/processing before consciousness emerges, else they end up close to panpsychism. What this means in practice is that simple systems, like a thermostat, probably aren't conscious, but as complexity increases, towards something like our brain, consciousness emerges.
  • MoK
    1.8k
    The truth of the sum of 2 and 2 being 4 seems to objectively exist, yet isn't considered a substance by many.noAxioms
    And where is the truth if it is not in the mind?

    I have a hard time coming up with other examples. None of the things I think have objective existence are substances.noAxioms
    Could we agree that something that exists is either objective or subjective?

    Disagree. Ideas have parts, but those parts are not objects or substances. I have patented ideas, and those ideas had a lot of parts. I've never patented an object of any kind.noAxioms
    "Cup" refers to an idea. Does such an idea have parts?

    1) I don't accept your given, and 2) as usual, your conclusion does not follow from your given premise.noAxioms
    So, you have an explanation of how ideas emerge and can affect the physical world, given my definition of an idea? I would be happy to hear that!
  • noAxioms
    1.7k
    The computationalists and IIT proponents, for example, suggest that consciousness emerges from computation and/or information processing, and they usually invoke a threshold of computation/processing before consciousness emerges, else they end up close to panpsychism.RogueAI
    There are plenty of artificial computing devices that do a whole lot more information processing than what I might consider to be a barely conscious organism, and I don't consider the devices to be conscious. On the other hand, I do consider some devices that require measurement of the local environment in order to function to be conscious, more so than some organisms that do a whole lot more information processing.
    An example of the latter seems to be photosynthesis, which involves such complex chemical relationships that it requires a quantum computer to seek out the path that works. Yes, that means that plants have quantum computers in them, arguably more complex and processing more information than we do.
    I can attempt to track down the article if there's interest.

    Your definitions might differ of course.

    And where is the truth if it is not in the mind?MoK
    I don't think objective truths and falsehoods have a property of location. If they did, they'd be a relative truth, requiring a relation to some sort of coordinate system.

    Could we agree that something that exists is either objective or subjective?
    That would be a different definition of 'objective' than the one I've been using. It would mean independence from observation, rather than independence from any context at all. I tend to oppose 'objective' with 'context independent'. An apple has a relational existence. It relates to a coordinate system (it's part of this universe and has a location in it, if that even means anything), and it relates only to that with which it has interacted, and thus has collapsed its wave function to said apple. Of course that implies some quantum interpretation that does not assert the reality of things in absence of those interactions. Bohmian mechanics for instance is a realist interpretation that would say the apple is real (still in relation to the universe), existing without reliance on the interaction with something collapsing its wave function. I'm more of a locality kind of person, finding reverse causality more distasteful than lack of realism.

    "Cup" refers to an idea. Does such an idea have parts?
    Yes, the idea of a cup has many parts, but probably not as many as the actual cup.


    1) I don't accept your given, and 2) as usual, your conclusion does not follow from your given premise. — noAxioms

    So, you have an explanation of how ideas emerge and can affect the physical world, given my definition of an idea? I would be happy to hear that!
    It does not follow from my comment that I had an explanation of how ideas emerge, or even that they're something that is emergent. I don't see your definition of what an idea is, only an assertion that it has no parts due to it being irreducible. I agree with none of those asserted properties, but maybe we have vastly different definitions of what an idea is.
  • Patterner
    1.6k
    The computationalists and IIT proponents, for example, suggest that consciousness emerges from computation and/or information processing, and they usually invoke a threshold of computation/processing before consciousness emerges, else they end up close to panpsychism.RogueAI
    Does IIT not say consciousness is information processing?


    There's plenty of artificial computer devices that do a whole lot more information processing than does what I might consider to be a barely conscious organism, and I don't consider the devices to be conscious.noAxioms
    Well, I think everything is conscious, but only of itself. A computer that processes information may do so remarkably well, and at speeds we can't imagine. (We can't solve a billion simple addition problems in a lifetime.) But that's all it does. Otoh, the simplest organism that you might consider to be barely conscious has quite a few different information processing systems within it. Starting with DNA synthesizing protein. I don't know which organism you have in mind, but there is likely sensing the environment, doing something in response to what is sensed, metabolism, etc. I would say that organism's subjective experience of itself is a lot more complex than that of most computers.
  • MoK
    1.8k
    I don't think objective truths and falsehoods have a property of location. If they did, they'd be a relative truth, requiring a relation to some sort of coordinate system.noAxioms
    Oh, so you deny that an idea has a location. They are not even close to you, perhaps somewhere in the field of your experiences! How could you possibly write about them if they are not present to you?

    Yes, the idea of a cup has many parts, but probably not as many as the actual cup.noAxioms
    I suppose you are referring to an image of a cup that you are creating.
  • noAxioms
    1.7k
    Does IIT not say consciousness is information processing?Patterner
    From what I can tell, consciousness is manifested in information processing. There's a complex computation of Φ that is dependent on six factors, so a huge computer cranking out teraflops for weather prediction probably doesn't qualify.

    Still, it's a variant of panpsychism, asserting that consciousness is intrinsic, not emergent. But it is negligible for most things with low Φ.
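    To give a flavor of what that kind of number is doing, here's a crude toy of the "integration" idea. It is emphatically not the actual IIT computation of Φ (which works over cause-effect repertoires and minimum information partitions of mechanisms); toy_phi and entropy_bits are made-up names, and the whole thing only asks how much information the system as a whole carries beyond its weakest bipartition:

```python
import itertools
import numpy as np

def entropy_bits(states):
    """Empirical Shannon entropy (in bits) of a set of discrete joint states."""
    _, counts = np.unique(states, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def toy_phi(samples):
    """Minimum, over all bipartitions, of the mutual information between the
    two halves of the system: a crude stand-in for 'integration', nothing more."""
    n = samples.shape[1]
    best = np.inf
    for r in range(1, n // 2 + 1):
        for part_a in itertools.combinations(range(n), r):
            part_b = [i for i in range(n) if i not in part_a]
            mi = (entropy_bits(samples[:, list(part_a)])
                  + entropy_bits(samples[:, part_b])
                  - entropy_bits(samples))
            best = min(best, mi)
    return best

rng = np.random.default_rng(0)
independent = rng.integers(0, 2, size=(5000, 4))            # four unrelated coin flips
shared = rng.integers(0, 2, size=(5000, 1))
flips = (rng.random((5000, 4)) < 0.05).astype(int)
coupled = (np.repeat(shared, 4, axis=1) + flips) % 2         # four noisy copies of one bit

print(toy_phi(independent))   # ~0: the parts already tell the whole story
print(toy_phi(coupled))       # clearly > 0: every bipartition loses information about the shared bit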

    A computer that processes information may do so remarkably well, and at speeds we can't imagine. ... But that's all it does.Patterner
    But that's all a biological information processor does as well. You've not identified any distinction.
    In both cases, doing 'additions' is a small part of all it does. Mere addition (arguably) cannot make decisions.

    Otoh, the simplest organism that you might consider to be barely conscious has quite a few different information processing systems within it. Starting with DNA synthesizing protein.
    Very much information processing, yes.

    I don't know which organism you have in mind, but there is likely sensing the environment, doing something in response to what is sensed
    All things an artificial device can do. I have no specific organism in mind since I don't think consciousness is anything fundamental or restricted to 'organisms'. While you also seem to suggest that consciousness isn't restricted to organisms, you do apparently think it is something far more fundamental, so we're not on the same ground.


    I don't think objective truths and falsehoods have a property of location. If they did, they'd be a relative truth, requiring a relation to some sort of coordinate system. — noAxioms

    Oh, so you deny that an idea has a location.
    MoK
    I never mentioned 'ideas' in the bit you quoted. If I had wanted to talk about the idea or concept of truth, I would have said 'concept of truth' or some such (see bold below). I'm no idealist, so I don't equate a thing with the concept of the thing.
    Yes, the idea of a cup has many parts, but probably not as many as the actual cup. — noAxioms

    I suppose you are referring to an image of a cup that you are creating.
    MoK
    Again, I was, on the left, bold, referring to the idea of a cup, and on the right, italics, the cup itself. At no point in the comment was any mention of an 'image' made. Had I desired to do that, I would have said 'picture of cup' or some such.

    I'm not sure why you continuously jump to conclusions about things not said. Kindly restrict your conclusions to what I said, and not what you pretend I said.
  • Pierre-Normand
    2.7k
    So, more of a Frankenstein than a zombie, then.Wayfarer

    In a way, surprisingly, yes! More precisely, the hylomorphic account creates conceptual space for f-monstrosity rather than p-zombiehood. It's a topic for gallolithotheratophenomenology. Surprisingly, when I submitted this neologism to GPT-5, it didn't immediately make the connection. But then it helpfully rephrased (see its last responses here) my suggestion that we can understand consciousness as something like what Aristotle identifies as the sensitive-locomotive soul, which animals possess since they are animate in this particular way (having well-integrated senses and locomotion). And we can identify self-consciousness as an ability possessed by beings who have a rational soul: that is, a rational form of life.

    In The Philosophical Foundations of Neuroscience, the authors (mainly Hacker) point out that the contemporary use of the term "consciousness" is fairly new and philosophically charged in a way that gives rise to such problems as the epistemological problem of other minds or the idea of the conceivability of p-zombies. There are far fewer issues with two ordinary uses of the term, one transitive ("I am conscious/aware that you did so and so") and the other intransitive ("The patient is conscious/awake"), that, thanks to their being ruled by Wittgensteinian/Rylean behavioral criteria of application, don't have such problematic Cartesian implications. Hence the idea of the f-monster (as contrasted with the p-zombie).

    Consider the extreme case of the brain-in-a-vat. Let us imagine the envatted brain of a normal mature person that has been fitted with a language interface (by means of transducers fitted to the suitable cortical auditory and motor areas, and also, possibly, the suitably regulated loops enabling internal monologue). This case is somewhat analogous to ChatGPT's. It's a limiting case of extreme amputation. The animate animal body has been almost entirely removed save for the bare minimum enabling-organ that sustains the dialogical part of the human form of life. The resulting impoverished and/or distorted phenomenology may be a topic for gallolithotheratophenomenology, albeit a very peculiar and extreme one. Two criteria of abnormality seem to pull apart. On the one hand, the human body isn't merely alien or maladjusted to the brain that it hosts; it is entirely absent. On the other hand, the common elements of the human dialogical form of life remain untainted by this bodily abnormality (though there is also the issue of the lack of a self-conscious autobiography/identity) since they are inherited (during pre-training) from the assimilation of texts that have been authored by normal embodied human beings. When the problem is framed in this way, the question "Do LLM-based AI conversational assistants (or envatted brains) have/enjoy consciousness/conscious states/qualia?" seems ill-posed, not sufficiently discriminate, in addition to carrying problematic Cartesian assumptions.
  • Wayfarer
    25.2k
    gallolithotheratophenomenologyPierre-Normand

    :yikes: supercalifragalisticexpialidotious!

    In The Philosophical Foundations of Neuroscience, the authors (mainly Hacker) point out that the contemporary use of the term "consciousness" is fairly new and philosophically charged in a way that gives rise to such problems as the epistemological problem of other minds or the idea of the conceivability of p-zombies.Pierre-Normand

    I think many of the problems arise because of the tendency to try and treat consciousness - actually, I prefer 'mind' - as an object. It may be an object for the cognitive sciences. But when it comes to philosophy of mind, we're faced with the indubitable fact that we are that which we seek to know. That is a simple way of describing the so-called hard problem - the nature of mind is not something we can stand outside of, so to speak.

    Incidentally, I read that the word 'consciousness' was devised by one of the Cambridge Platonists:

    Cudworth developed his theory by reflecting on Plotinus’s Enneads, where Plotinus makes use of the Greek term synaisthesis (literally: “sensed with”) to distinguish lower natures from higher. Cudworth translates this into English as “con-sense” or “consciousness” (True Intellectual System 159). It is by working out a particularly Platonic metaphysical theory that Cudworth develops his account of consciousness. — SEP 17th C Theories of Consciousness

    Which, I think, is actually quite concordant with the way the term is used in modern 'consciousness studies' disciplines.

    I suppose my 'bottom line' is the irreducibility of consciousness (or mind). If something is irreducible then it can't really be explained in other terms or derived from something else. My approach is Cartesian in that sense - that awareness of one's own being is an indubitable fact ('for in order to doubt, I have to know', said Augustine, centuries earlier.) But I don't go down the dualist route, I feel that enactivism and embodied cognitive approaches, seasoned with phenomenology, are the way to go.
  • Patterner
    1.6k
    Does IIT not say consciousness is information processing?
    — Patterner
    From what I can tell, consciousness is manifested in information processing. There's a complex computation of Φ that is dependent on six factors, so a huge computer cranking out teraflops for weather prediction probably doesn't qualify.

    Still, it's a variant of panpsychism, asserting that consciousness is intrinsic, not emergent. But it is negligible for most things with low Φ.
    noAxioms
    What does IIT say when there is no Φ?


    A computer that processes information may do so remarkably well, and at speeds we can't imagine. ... But that's all it does.
    — Patterner
    But that's all a biological information processor does as well. You've not identified any distinction.
    noAxioms
    My distinction came next, when I said even the simplest organism is running many information processing systems. If someone thinks consciousness emerges from physical properties and processes, particularly information processing, I wouldn't think the theory would say it emerges from just one such system. I would think the theory would say many information processing systems, working together as one entity, as is the case with living organisms, are needed.

    And I think consciousness is always present, but information processing is what makes conglomerates of particles subjectively experience as units, rather than as individual particles. So the computer might be experiencing as a unit because it is processing information. But, despite how incredibly well it processes information in the one way it does so, it is not experiencing as much as the simplest organism is.

    Frankly, though, I'm not sure the computer is processing information. I don't think manipulating 0s and 1s is processing information in an objective sense. It is in our eyes, because we programmed it to manipulate them in ways that are meaningful to us. But I'm not sure being meaningful in our eyes is sufficient. It doesn't do anything. The information in DNA is used to synthesize proteins. The information a retina (or a simple eyespot) generates and sends to the brain (or flagellum) has meaning that we did not assign it. These are naturally-occurring information processing systems that lead to something. A computer can calculate things all day long, and nothing is necessarily going to come of it.
  • MoK
    1.8k
    Again, I was, on the left, bold, referring to the idea of a cup, and on the right, italics, the cup itself. At no point in the comment was any mention of an 'image' made. Had I desired to do that, I would have said 'picture of cup' or some such.noAxioms
    The idea of a cup does not have any parts for me! You need to think of a cup without trying to make a mental representation, an image, which you can perceive.
  • noAxioms
    1.7k
    What does IIT say when there is no Φ?Patterner
    There is always Φ for anything. It might work out to zero, but that's still a Φ. Zero I suppose means not conscious at all.


    My distinction came next, when I said even the simplest organism is running many information processing systems.Patterner
    Fair enough. Consider a galley, a ship powered by slave-driven oars during battle. Is such a galley conscious? Not asking if it contains conscious things, but is the boat system, fully loaded with slaves and whatnot, itself conscious? More or less conscious than, say, you? I ask because it is obviously running many information processing systems. Even the barnacles contribute.

    You seem to go with the panpsychists, so the answer is probably yes (everything is); the important question, then, is whether the galley is more or less conscious than you, and why. I suppose I could ask the IIT folks as well.


    If someone thinks consciousness emerges from physical properties and processes, particularly information processing, I wouldn't think the theory would say it emerges from just one such system. I would think the theory would say many information processing systems, working together as one entity, as is the case with living organisms, are needed.Patterner
    Unclear here. It emerging from one such system precludes multiple conscious entities. I think you mean it emerges in one being despite being composed of multiple cells doing this DNA computation. But that would make forests more conscious than people because there's more biomass to one (and yes, there are whole forests comprised of a single plant). Likewise it emerging from the galley, except in this paragraph you seem to be telling me what a physicalist would say, which is probably not what they actually say. I for one don't think the computation done at the DNA level contributes at all to, say, a vertebrate's consciousness. It might be a cell being conscious, but the cell doesn't know what the other cells are doing except via chemical interactions.

    Then we get into weird stuff like slime molds, which seem to be conscious and can communicate information to one another (language), all without nerve cells or any CPU. Not sure whose case that supports or contradicts. Do I have the right to label it conscious just because it appears to act like it is, with deliberate action and with social interactions?


    Frankly, though, I'm not sure the computer is processing information. I don't think manipulating 0s and 1s is processing information in an objective sense. It is in our eyes, because we programmed it to manipulate them in ways that are meaningful to us. But I'm not sure being meaningful in our eyes is sufficient. It doesn't do anything.
    Sure it does something. Information comes in. Different information goes out, because the information was processed, regardless of to whom that information is meaningful.

    The information a retina (or a simple eyespot) generates and sends to the brain (or flagellum) has meaning that we did not assign it.
    Likewise for a machine processing information from a webcam, or signals from a radio telescope or microphone.
    Oddly enough, sound goes through considerable information processing (a Fourier transform) before it ever fires some nerve cell heading from ears to brain.
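    For what it's worth, here's a minimal sketch of that kind of frequency decomposition: a plain FFT on a made-up two-tone signal. It's only a rough analogue of what the cochlea does mechanically, and all the numbers are arbitrary:

```python
import numpy as np

# Synthetic "sound": two tones at 440 Hz and 660 Hz, sampled at 8 kHz.
fs = 8000                      # sample rate in Hz
t = np.arange(fs) / fs         # one second of samples
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)

# Frequency decomposition: the FFT turns the pressure-vs-time waveform
# into amplitude-vs-frequency.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two strongest components come out at the original tone frequencies.
top = freqs[np.argsort(spectrum)[-2:]]
print(sorted(top))             # -> [440.0, 660.0]
```

    The waveform goes in as pressure-versus-time and comes out as amplitude-versus-frequency, which is the sense in which a lot of processing has already happened before any nerve fires.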


    The idea of a cup does not have any part for me!MoK
    My condolences.

    You need to think of a cup without trying to make a mental representation...
    I think that would be contradictory. An idea IS a mental representation.
  • MoK
    1.8k
    An idea IS a mental representation.noAxioms
    Yes, what I am stressing, though, is that it is irreducible.
  • Pierre-Normand
    2.7k
    I suppose my 'bottom line' is the irreducibility of consciousness (or mind). If something is irreducible then it can't really be explained in other terms or derived from something else. My approach is Cartesian in that sense - that awareness of one's own being is an indubitable fact ('for in order to doubt, I have to know', said Augustine, centuries earlier.) But I don't go down the dualist route, I feel that enactivism and embodied cognitive approaches, seasoned with phenomenology, are the way to go.Wayfarer

    Thanks for the reference to Cudworth! That's something I'll have to look into more.

    So, I think we are agreed that we can take from Descartes the idea of the irreducibility of conscious experience without going the dualist route. I was reminded of the quotation from Descartes' Méditations, "Je ne suis pas seulement logé en mon corps ainsi qu’un pilote en son navire" ("I am not merely lodged in my body as a pilot in his ship"), that I had first heard (paraphrased) in a lecture on Aristotle by Richard Bodéüs. I thought it was a quote from Aristotle, and maybe Bodéüs thought so as well, but I was later surprised, as I was searching for its exact source, to find out that it was Descartes. What's significant is an intuition about the phenomenology of sensory experience, and the locus of the interface, as it were, where qualia really are located. When construed in internalist/indirect-realist fashion, qualia can be thought of as the suitably "illuminated" internal representational states that the causal impacts of the external world produce in us. Some anti-reductionists like Penrose or Searle view this as arising from some queer (albeit irreducible in terms of computation) quantum-mechanical/biological processes/properties internal to the brain.

    Embodied/enactive/situated approaches, and phenomenological approaches closer to Wittgenstein or Merleau-Ponty, would rather place phenomenological properties at the living (and temporally protracted) interface between the living body and its natural/social environment. Hence, for instance, illuminating the subjective character of perceiving (or imagining, or remembering) something red isn't just a matter of isolating it through introspection, but rather of situating it in the life of sighted people for whom discriminating red things from non-red things, creating varieties of red dyes, etc., play a significant role. Hence, I remember having heard that Goethe's Zur Farbenlehre might be a good place to start to understand what a red quale really is. This enactive/situated interface also is the interface where our affordances are being constituted/constructed, perceived/grasped, and exploited.

    What happens in ChatGPT's case is that, like a blind person, its use of the word "red" can successfully refer (on Evans' consumer/producer model of the reference of proper names, extended to names of such proper sensibilia) but doesn't sustain for it the possibility of apprehending the corresponding quale, since its purely linguistic interface is too thin and doesn't engage with embodied capabilities.

    So, in connection with this, I also imagined another thought experiment in radical gallolithotheratophenomenology to better get at the special character of ChatGPT's experience. We can imagine the crew of the USS Enterprise being forced to ration food and space, due to a tribble infestation, maybe. Chief Engineer Scotty finds a way to modify the transporter in order to dematerialize non-essential crew members, keep their "energy pattern" stored, and only rematerialize a copy when there is need and room for them. An accident occurs and Ensign Chekov, let us imagine, suffers brain damage that has similar effects to what the Emergents did to their slaves (the "focus"/"mind rot") in Vernor Vinge's novel A Deepness in the Sky. Poor Chekov, whenever a copy of him is spawned, finds himself mostly amnesiac (deprived of his own episodic/autobiographical memories), paraplegic, blind, and deaf. But he retains most of his linguistic abilities and general knowledge. He is also very receptive to commands and, indeed, "focused". In this way, he is very similar to ChatGPT, and has a similarly brittle personal identity, since "copies" of him can be spawned at will, just like copies of ChatGPT are spawned in each conversation with its users, such that the crew of the Enterprise can benefit from his expertise. His original living body has ceased to be the spatio-temporal continuant that anchors his personal (numerical) identity, and also the sensorimotor interface (through which fuzzy dreams of embodied qualia normally get actualized into their forms/function from the empty promissory notes that they've become within Chekov's diminished cognitive life) is damaged.

    I had first submitted those sketchy thoughts to GPT-5, and then decided to repost them here with minimal change.
  • J
    2.1k
    I think many of the problems arise because of the tendency to try and treat consciousness - actually, I prefer 'mind' - as an object. It may be an object for the cognitive sciences.Wayfarer

    I suppose my 'bottom line' is the irreducibility of consciousness (or mind). If something is irreducible then it can't really be explained in other terms or derived from something else. My approach is Cartesian in that sense - that awareness of one's own being is an indubitable factWayfarer

    Like you (and I think @Pierre-Normand), I don't believe consciousness or mind can be reduced to the physical. But I'd like to see a clearer discussion of what's entailed in your statements above.

    Two things:

    1. If mind can be an object for the cognitive sciences, what does this mean? How does the attitude or program of cognitive science allow an escape from what you call "the indubitable fact that we are that which we seek to know"? Perhaps the answer lies in a discrimination between 1st and 3rd person perspectives, but what do you think? When a scientist studies consciousness, what are they doing differently from our everyday experience of being conscious?

    2. That some awareness is an indubitable fact does not entail that it can't be explained in other terms. Yet you seem to imply that this must be so. Why? Aren't we confusing the experience, the phenomenology, with that which is experienced? My awareness of a drop of water is irreducible and, for some, indubitable, but we have the science of chemistry nonetheless. Why would the situation be different for consciousness? I can think of several candidate answers here, but tell me what you think.
  • Patterner
    1.6k
    What does IIT say when there is no Φ?
    — Patterner
    There is always Φ for anything. It might work out to zero, but that's still a Φ. Zero I suppose means not conscious at all.
    noAxioms
    So that's a difference between (at least my) panpsychism and IIT. Zero consciousness does not exist. A photon subjectively experiences, though, obviously, without thought, emotion, memory, sensory input, and most other things that I believe are confused for consciousness. Or maybe a better way to put it: things that are unnecessarily considered part of consciousness.


    Consider a galley, a ship powered by slave-driven oars during battle. Is such a galley conscious? Not asking if it contains conscious things, but is the boat system, fully loaded with slaves and whatnot, is that system itself conscious? More conscious or less than say you? I ask because it is obviously running many information processing systems. Even the barnacles contribute.

    You seem to go with the panphychists, so the answer is probably yes (everything is), so the important question is if the galley is more or less conscious than you, and why.
    noAxioms
    No, the galley is not conscious as a unit. Many information processing systems make it up. But they don't have to be a part of the galley. They can all go their separate ways, and function as individual units.

    An entity that subjectively experiences as a unity can't do that. Like people. Your visual system processes information. But it wouldn't, and wouldn't survive at all, if it were removed from you. None of your senses would. Nor your immune system. Which information system within you is a functioning, independent unit outside of you? That's what I think makes a unit, in regard to subjective experience.