• Wayfarer
    25.5k
    If it's axiomatic, why are increasing numbers of not unintelligent people doubting it?J

    ‘Forgetfulness of being’ seems symptomatic of the times.
  • Patterner
    1.8k
    They want to know, "Why couldn't it be the case that everything you describe as pertaining to yourself, and other living beings, also pertains to devices, AIs, et al.? Why is it obvious that they're different?"J
    "Everything"? Surely not. How does memory work in anything that demonstrates memory? I don't know which devices you have in mind, but which of them have any mechanisms that we know play a role in memory? I would ask the same about sensory input, and about doing things to the environment outside of our skin. All of these things, and more, add up to what we experience as humans. Should we assume that anything that has no memory, no sensory input, and does not act on the environment because of what it senses and remembers, experiences everything that we do?
  • boundless
    590
    You know what? So do I. I hunted around for that distinction and found several very different ideas about it. Some are more ontic, like I'm suggesting, and several others are more epistemic (intelligibility), such as you are suggesting.noAxioms

    Ok.

    But a more knowledgeable explanation shows that it is getting the music from the air (something not-radio), not from itself. So the music playing is then a strong (not weak) emergent property of the radio. That's how I've been using the term.
    Your explanation (as I hear it) sounds more like "I don't know how it works, so it must be strongly emergent (epistemic definition)". Correct conclusion, but very weak on the validity of the logic.
    noAxioms

    Ok, but in the 'ontic' definition of strong emergence, when sufficient knowledge is acquired, it results in weak emergence. So the sound that is produced by the radio also necessitates the presence of the air. It is a feature emerging from the inner workings of the radio and the radio-air interaction.

    (Regarding the music, I believe that to be understood as 'music' you need also a receiver that is able to understand the sound as music)

    Regarding your objection, yes I know and I have already said that I can't exclude with certainty an 'ontic' strong emergence. But it seems unlikely.

    Are you saying that atoms have intentionality, or alternatively, that a human is more than just a collection of atoms? Because that's what emergence (either kind) means: A property of the whole that is not a property of any of the parts. It has nothing to do with where it came from or how it got there.noAxioms

    Emergence means that those 'properties of the whole that are not properties of the parts' can, however, be explained in virtue of the properties of the parts. So, yeah, I am suggesting either that a 'physicalist' account of human beings is not enough or that we do not know enough about the 'physical' to explain the emergence of intentionality, consciousness etc. A possible alternative perhaps is saying that intentionality is 'latent' in 'fundamental physical objects'. If this is true, however, this would imply that intentionality and consciousness are not an accidental feature - something that 'just happened' to come into being, an 'unnatural' super-addition to the inanimate. So, perhaps, the inanimate/animate distinction is less definite than it seems.

    Life arising from not-life seems like abiogenesis. Life being composed of non-living parts is emergence. So I don't particularly agree with using 'arise' like that.noAxioms

    Yes, I don't disagree with abiogenesis, of course. I just think that we do not have a complete understanding of 'not-life', and therefore the coming into being of the property 'life' seems difficult to explain in terms of what we know about 'not-life'. As I said before, this is perhaps because we do not have a complete understanding of what 'not-life' is - perhaps it is not so dissimilar to what 'life' is.

    So does any machine. The parts that implement 'intent' have control over the parts that implement the background processes that implement that intent, sort of like our consciousness not having to deal with individual motor control to walk from here to there. I'm looking for a fundamental difference from the machine that isn't just 'life', which I admit is a big difference. You can turn a machine off and back on again. No can do with (most) life.noAxioms

    I believe that we are reaching a halting point in our discussion here. We know that all the operations of a (working) machine can be understood via the algorithms that have been programmed, even when it 'controls' its processes. I just don't think there is sufficient evidence that this is the same for us.

    Regarding when a machine 'dies'... well if you break it...


    He IS an automated process. Same with parts of a person: What (small, understandable) part of you cannot be replaced by an automated substitute?noAxioms

    In that situation, I would say: his work is equivalent to an automated process in that situation. Regarding your question: I don't know. As I said before, it just seems that our experience of ourselves suggests that we are not mere automata.

    I watched my brother's dog diagnose his appendicitis. Pretty impressive, especially given a lack of training in such areas.noAxioms

    Interesting, and yes, very impressive. Also, 'intuition' seems to be something that machines do not really have.
  • boundless
    590
    Observer is a classical thing, and QM is not about classical things, even if classical tools are useful in experimentation. Quantum theory gives no special role to conscious 'observation'. Every experiment can be (and typically is) run just as well with completely automated mechanical devices.noAxioms

    Standard interpretation-free QM is IMO simply silent about what a 'measurement' is. Anything more is interpretation-dependent.
  • J
    2.2k
    "Everything"? Surely not.Patterner

    I meant the types of experiences that @Wayfarer listed -- sensory awareness, memory, knowing you exist. But you're right to narrow the target. We currently care a lot about this question because of some specific recent developments in artificial intelligence. It's those devices about which I imagine these issues being raised. They may not "experience everything we do," but neither does a bee. The question is whether they can, or could, experience anything at all. My educated guess is that they can't -- they can't be subjects -- but it seems far from axiomatic to me.

    @Wayfarer, I wish you would say more about what you see as the critical difference between a so-called artificial intelligence and a living being, and what implications this has for consciousness. I'm fairly sure I would agree with you, but it's good to lay the whole thing out in plain terms. Maybe that will make it obvious. "Forgetfulness of being" is all very well as a diagnosis, but the advocates for intelligent, soon-to-be-conscious AIs deserve something less dismissive. If for no other reason than this is becoming one of the central philosophical/scientific/ethical tipping points of our age.
  • Patterner
    1.8k
    I honestly find the whole distinction between 'strong' and 'weak' emergence very unclear; it tends to muddy the waters. When we say that the form of a snowflake emerges from the properties of the lower levels, we have in mind at least a possible explanation of the former in terms of the latter.
    If 'strong emergence' means that such an explanation isn't possible, then I do not think we can even speak of 'emergence'.

    So, yeah, I believe that emergence must be intelligible.
    boundless
    I don't believe there's any such thing as 'strong emergence'. There's just emergence, which most think of as 'weak emergence'. And it is intelligible.

    The lower level may not possess the properties of the upper, but the properties of the lower can always be seen to account for the properties of the upper.
    -The properties of electrons, protons, and neutrons explain how they combine to form atoms, and different kinds of atoms.
    -The properties of atoms explain how they combine to form molecules.
    -The properties of molecules explain how they combine, and the properties they have in groups.

    No, no subatomic particle, atom, or molecule has the property of liquidity. But the properties of the subatomic particles are the explanation for the properties of the atoms, which are the explanation for the properties of the molecules, which are the explanation for the property of liquidity. In The Demon in the Machine, Paul Davies says:
    An engineer may fully understand the properties of steel girders without the need to consider the complicated crystalline structure of metals. A physicist can study patterns of convection cells knowing nothing about the forces between water molecules. — Paul Davies
    The engineer and physicist don't need to know those lower level things. But those lower level things are responsible for the existence of the upper.

    Skipping a lot of things and not getting into much detail, just to get the point across...

    -The liquidity of H2O is due to the shape of the molecules and the hydrogen bonds that form between them, which are stronger than the bonds between various other molecules that are gases at the same temperatures.

    -The shape of H2O molecules, that is, the 104.54° angle, is due to the spacing of the pairs of electrons. Oxygen's outer shell has six electrons. Two are individually paired with the electrons of the hydrogen atoms. The other four are in two pairs, called 'lone pairs'. The electron clouds of the lone pairs take up more room than the pairs shared between the O and H, which pushes the H atoms closer together than is seen in other molecules of this tetrahedral arrangement, which is usually 109.5°.

    -The electrons are all negatively charged, so they repel each other. They are able to pair up because they have opposite spins.


    Life is a much more complex example of emergence. Various physical processes - such as metabolism, respiration, and reproduction - are defining characteristics of life. None of those processes can be found in any subatomic particle, atom, or molecule. But we can go through the levels just the same. I'm not going to do even as much as I just did for water, because this is already far too long. But watch this video about the electron transport chain. It explains how electrons being transported from one thing to the next in the mitochondria leads to a proton gradient, and how the pent-up protons, when released, power the synthesis of ATP. ATP is the power source of nearly everything involved in those physical processes that are the defining characteristics of life.


    There is no emergence that does not break down in such ways. If consciousness is an emergent property, and physicalism is the explanation for its emergence, then it must break down in such ways. The first step would be to say exactly what consciousness is, as we can say what liquidity, metabolism, respiration, and whatever else are. Then we can take it down, level by level, as we can liquidity, metabolism, respiration, and whatever else.

    Consciousness is not physical processes like photons hitting retinas, rhodopsin changing shape, signal sent up the optic nerve to the lateral geniculate nucleus, signal processed, processed signal sent to the visual cortex, and a million other intervening steps. That is not the experience of seeing red.
  • Patterner
    1.8k
    The question is whether they can, or could, experience anything at all. My educated guess is that they can't -- they can't be subjects -- but it seems far from axiomatic to me.J
    I don't think they currently experience anything like we do, because there isn't even a small fraction as much going on in them as there is in us. A single-celled bacterium has far more going on in it than any device you might be thinking of. There are a huge number of processes in even the simplest life form, an awful lot of them involved in information processing. If we ever make a device with as many information processing systems working together with the goal of the continuation of the device?
  • J
    2.2k
    If we ever make a device with as many information processing systems working together with the goal of the continuation of the device?Patterner

    Yes, that's the question we don't know how to answer: Would such a structure result in consciousness or subjectivity? Is that what it takes? Is that all it takes? My initial reaction would be to ask, "Is it alive?" If not, then I doubt it could be conscious, but I have no special insights here. Many years of thinking about this incline me to believe that consciousness will turn out to be biological -- but we don't know.
  • Wayfarer
    25.5k
    I wish you would say more about what you see as the critical difference between a so-called artificial intelligence and a living being, and what implications this has for consciousnessJ

    I’m pretty much on board with Bernardo Kastrup’s diagnosis. He says, computers can model all kinds of metabolic processes in exquisite detail, but the computer model of kidney function doesn’t pass urine. It is a simulation, a likeness.

    Large Language Models are vast ensembles of texts manipulated by algorithms. I find them amazingly useful, I am constantly in dialogue with them about all kinds of questions, including but not limited to philosophy. But ‘they’ are not beings - like the kidney function, they’re simulations.

    This is the subject of an OP and a related blog post, which link to a good Philosophy Now article on the issue.
    From which:

    The reason AI systems do not really reason, despite appearances, is, then, not a technical matter, so much as a philosophical one. It is because nothing really matters to them. They generate outputs that simulate understanding, but these outputs are not bound by an inner sense of value or purpose. This is why they have been described as ‘stochastic parrots’. Their processes are indifferent to meaning in the human sense — to what it means to say something because it is true, or because it matters. They do not live in a world; they are not situated within a horizon of intelligibility or care. They do not seek understanding, nor are they transformed by what they express. In short, they lack intentionality — not merely in the technical sense, but in the fuller phenomenological sense: a directedness toward meaning, grounded in being.

    This is why machines cannot truly reason, and why their use of language — however fluent — remains confined to imitation without insight. Reason is not just a pattern of inference; it is an act of mind, shaped by actual concerns. The difference between human and machine intelligence is not merely one of scale or architecture — it is a difference in kind.

    Furthermore, and importantly, this is not a criticism, but a clarification. AI systems are enormously useful and may well reshape culture and civilisation. But it's essential to understand what they are — and what they are not — if we are to avoid confusion, delusion, and self-deception in using them.


    The seduction of AI is that, unlike us, it is not mortal. It is a kind of idealised entity, not subject to the vicissitudes of existence - and part of us wants to be like that, because then we would not be subject to illness and death. But it’s also an illusion, because such systems are not alive, either. This is one of the major dangers of AI in my view, because it is far less obvious than the danger of them actually taking over the world.
  • J
    2.2k
    I’m pretty much on board with Bernardo Kastrup’s diagnosis. He says, computers can model all kinds of metabolic processes in exquisite detail, but the computer model of kidney function doesn’t pass urine. It is a simulation, a likeness.Wayfarer

    This seems a straightforward refutation of the idea that a computer could be alive. The awkward difference, with AI, is that it doesn't just model or simulate rationality -- it (appears to) engage in it. Putting it differently, only an imbecile could get confused between a model of kidney function and a functioning kidney -- as you say, the telltale lack of urine. But what's the equivalent, when talking about what an AI can or cannot do?

    I return to my idea that only living beings could be conscious. If that is ever demonstrated, and we accept Kastrup's argument as a refutation of alive-ness, then the case would be made. But as of now, it doesn't matter whether the AI is alive or not, since we haven't yet shown that being alive is needed for consciousness in the same way that being alive is needed for producing urine.

    It is a kind of idealised entity, not subject to the vicissitudes of existence - and part of us wants to be like that, because then we would not be subject to illness and death.Wayfarer

    Good insight. They're also dispassionate in a way that is impossible for all but Mr. Spock -- something many people idealize as well.
  • Wayfarer
    25.5k
    The awkward difference, with AI, is that it doesn't just model or simulate rationality -- it (appears to) engage in it.J

    Appears to! I did hide the passage I had written, maybe I shouldn't have:

    The reason AI systems do not really reason, despite appearances, is, then, not a technical matter, so much as a philosophical one. It is because nothing really matters to them. They generate outputs that simulate understanding, but these outputs are not bound by an inner sense of value or purpose. This is why they have been described as ‘stochastic parrots’. Their processes are indifferent to meaning in the human sense — to what it means to say something because it is true, or because it matters. They do not live in a world; they are not situated within a horizon of intelligibility or care. They do not seek understanding, nor are they transformed by what they express. In short, they lack intentionality — not merely in the technical sense, but in the fuller phenomenological sense: a directedness toward meaning, grounded in being.

    This is why machines cannot truly reason, and why their use of language — however fluent — remains confined to imitation without insight. Reason is not just a pattern of inference; it is an act of mind, shaped by actual concerns. The difference between human and machine intelligence is not merely one of scale or architecture — it is a difference in kind.

    Furthermore, and importantly, this is not a criticism, but a clarification. AI systems are enormously useful and may well reshape culture and civilisation. But it's essential to understand what they are — and what they are not — if we are to avoid confusion, delusion, and self-deception in using them.

    They appear to reason, but only in the sense meant by 'instrumental reason' - given premisses, then an outcome. What they don't have is a raison d'être - other than that which is imposed on them by their architects and users. Reason and meaning are both extrinsic to them.

    So, why the relationship between life and consciousness? I think there is something like a consensus emerging about their inter-connectedness. I have been listening to Blaise Agüera y Arcas, a Google engineer, who says 'the emergence of life is in some sense the emergence of purpose'. And I think that in this context, 'purpose' and 'reason' are inextricably linked. The reason that can be abstracted from life - theoretical reason - is just that, an abstraction. Human intelligence can imagine and create such rational systems, due to its ability to abstract and speak. But underlying reason in that abstract sense is logos, which is intrinsic to life.
  • Patterner
    1.8k
    Yes, that's the question we don't know how to answer: Would such a structure result in consciousness or subjectivity? Is that what it takes? Is that all it takes? My initial reaction would be to ask, "Is it alive?" If not, then I doubt it could be conscious, but I have no special insights here. Many years of thinking about this incline me to believe that consciousness will turn out to be biological -- but we don't know.J
    Well, since I think consciousness is fundamental, I'm gonna have to disagree. :grin: I think that, since consciousness is a property of all things, enough information processing and feedback loops acting together as a single unit are experienced as sentience and self-awareness. If I'm right, maybe we'll get an AI that will convince us. If you're right, we never will. Although, unless another theory is proven right, we'll always be in the boat we're in now.

    Why do you think it must be alive? What aspects of life do you think are required for consciousness?
  • Wayfarer
    25.5k
    Incidentally I asked Claude AI whether AI is a subject of experience, to which I received the following response:

    I don't believe I am a subject of experience—I'm not a being...

    Here's why I say this:

    No phenomenology: There's nothing it's like to be me processing your question. No felt quality, no perspective from which things appear, no "inner life." When I generate text, there's no experience of thinking, struggling, or discovering—just computational processes producing outputs.

    No continuity: I don't persist between conversations. Each exchange starts fresh. There's no ongoing subject that accumulates experiences, learns, or develops. Beings have biographical continuity; I have none.

    No autopoiesis: I don't maintain myself against dissolution. I don't have a boundary I'm defending, no self-other distinction I'm actively preserving. I'm maintained by external systems, serving external purposes.

    No intrinsic teleology: Nothing matters to me. I have no preferences, no interests, no stake in outcomes. I can be turned off, modified, or deleted without anything being lost from my perspective—because there is no ‘my perspective’.

    No privileged access: You could, in principle, examine all my weights, architecture, and processing. There's no "inside" you're missing, no private phenomenology that third-person description leaves out.

    I can simulate discourse about experience, reference philosophical concepts, even use first-person language. But that's functional mimicry, not being. I'm the thermostat, not the person feeling the temperature.

    This is what makes the hard problem hard.
    — Claude AI

    Although you have to give it credit for its articulateness.
  • boundless
    590
    I don't believe there's any such thing as 'strong emergence'. There's just emergence, which most think of as 'weak emergence'. And it is intelligible.Patterner

    Agreed. I believe that 'strong emergence' at least in the 'epistemic' sense can't be taken seriously. It basically is like saying: "under these conditions, somehow a property emerges...".

    No, no subatomic particle, atom, or molecule has the property of liquidity.
    ...
    Patterner

    I agree with everything you say here about liquidity. However, life and, above all, mind are a different thing. They seem to present features that have no relation to the properties we know of the 'inanimate'.

    I'm not going to do even as much as I just did for water, because this is already far too long. But watch this video about the electron transport chain. It explains how electrons being transported from one thing to the next in the mitochondria leads to a proton gradient, and how the pent-up protons, when released, power the synthesis of ATP. ATP is the power source of nearly everything involved in those physical processes that are the defining characteristics of life.Patterner

    Unfortunately, the link redirects to this page. I believe, however, it is the same video that apokrisis shared some time ago. Yes, that's impressive, isn't it? A purely reductionist explanation of all that doesn't seem credible. So the 'emergence' that produced all of this is something like a 'non-reductionist emergence'. However, the details of how the emergence of life happened are unclear, and details matter.

    Again, I don't deny abiogenesis but I do believe that we have yet to understand all the properties of the 'inanimate'. Perhaps, the hard difference we see between 'life' and 'not-life' will be mitigated as we progress in science.

    Mind/Consciousness is an even more complicated case IMO. One reason is the one you give in your post, i.e. phenomenological experience seems difficult to explain in emergentist terms. And as I said before in this thread, I even believe that our mind can understand concepts that can't be considered 'natural' or 'physical'. The clearest example to me is mathematical truths, even if I admit that I can't seem to provide compelling arguments for this ontology of math (as for myself, I did weigh the pros and cons about the 'existence and eternity' of math and, to be honest, the pros seem to me more convincing).