Patterner
"Everything"? Surely not. How does memory work in anything that demonstrates memory? I don't know which devices you have in mind, but do any of them have mechanisms that we know play a role in memory? I would ask the same about sensory input. And doing things to the environment outside of our skin. All of these things, and more, add up to what we experience as humans. Should we assume anything that has no memory, no sensory input, and does not act on the environment because of what it senses and remembers, experiences everything that we do?

They want to know, "Why couldn't it be the case that everything you describe as pertaining to yourself, and other living beings, also pertains to devices, AIs, et al.? Why is it obvious that they're different?" — J
boundless
You know what? So do I. I hunted around for that distinction and got several very different ideas about that. Some are more ontic like I'm suggesting and several others are more epistemic (intelligibility) such as you are suggesting. — noAxioms
But a more knowledgeable explanation shows that it is getting the music from the air (something not-radio), not from itself. So the music playing is then a strong (not weak) emergent property of the radio. That's how I've been using the term.
Your explanation (as I hear it) sounds more like "I don't know how it works, so it must be strongly emergent (epistemic definition)". Correct conclusion, but very weak on the validity of the logic. — noAxioms
Are you saying that atoms have intentionality, or alternatively, that a human is more than just a collection of atoms? Because that's what emergence (either kind) means: A property of the whole that is not a property of any of the parts. It has nothing to do with where it came from, or how it got there. — noAxioms
Life arising from not-life seems like abiogenesis. Life being composed of non-living parts is emergence. So I don't particularly agree with using 'arise' like that. — noAxioms
So does any machine. The parts that implement 'intent' have control over the parts that implement the background processes that implement that intent, sort of like our consciousness not having to deal with individual motor control to walk from here to there. I'm looking for a fundamental difference from the machine that isn't just 'life', which I admit is a big difference. You can turn a machine off and back on again. No can do with (most) life. — noAxioms
He IS an automated process. Same with parts of a person: What (small, understandable) part of you cannot be replaced by an automated substitute? — noAxioms
I watched my brother's dog diagnose his appendicitis. Pretty impressive, especially given a lack of training in such areas. — noAxioms
boundless
Observer is a classical thing, and QM is not about classical things, even if classical tools are useful in experimentation. Quantum theory gives no special role to conscious 'observation'. Every experiment can be (and typically is) run just as well with completely automated mechanical devices. — noAxioms
J
"Everything"? Surely not. — Patterner
Patterner
I don't believe there's any such thing as 'strong emergence'. There's just emergence, which most think of as 'weak emergence'. And it is intelligible.

I honestly find the whole distinction between 'strong' and 'weak' emergence very unclear; it tends to muddy the waters. When we say that the form of a snowflake emerges from the properties of the lower levels, we have in mind at least a possible explanation of the former in terms of the latter.
If 'strong emergence' means that such an explanation isn't possible, then I do not think we can even speak of 'emergence'.
So, yeah, I believe that emergence must be intelligible. — boundless
The engineer and physicist don't need to know those lower level things. But those lower level things are responsible for the existence of the upper.

An engineer may fully understand the properties of steel girders without the need to consider the complicated crystalline structure of metals. A physicist can study patterns of convection cells knowing nothing about the forces between water molecules. — Paul Davies
Patterner
I don't think they currently experience anything like we do, because there isn't even a small fraction as much going on in them as there is in us. A single-celled bacterium has far more going on in it than any device you might be thinking of. There are a huge number of processes in even the simplest life form, an awful lot of them involved in information processing. If we ever make a device with as many information processing systems working together with the goal of the continuation of the device?

The question is whether they can, or could, experience anything at all. My educated guess is that they can't -- they can't be subjects -- but it seems far from axiomatic to me. — J
J
If we ever make a device with as many information processing systems working together with the goal of the continuation of the device? — Patterner
Wayfarer
I wish you would say more about what you see as the critical difference between a so-called artificial intelligence and a living being, and what implications this has for consciousness — J
The reason AI systems do not really reason, despite appearances, is, then, not a technical matter, so much as a philosophical one. It is because nothing really matters to them. They generate outputs that simulate understanding, but these outputs are not bound by an inner sense of value or purpose. This is why they have been described as ‘stochastic parrots’.

Their processes are indifferent to meaning in the human sense — to what it means to say something because it is true, or because it matters. They do not live in a world; they are not situated within an horizon of intelligibility or care. They do not seek understanding, nor are they transformed by what they express. In short, they lack intentionality — not merely in the technical sense, but in the fuller phenomenological sense: a directedness toward meaning, grounded in being.
This is why machines cannot truly reason, and why their use of language — however fluent — remains confined to imitation without insight. Reason is not just a pattern of inference; it is an act of mind, shaped by actual concerns. The difference between human and machine intelligence is not merely one of scale or architecture — it is a difference in kind.
Furthermore, and importantly, this is not a criticism, but a clarification. AI systems are enormously useful and may well reshape culture and civilisation. But it's essential to understand what they are — and what they are not — if we are to avoid confusion, delusion, and self-deception in using them.
J
I’m pretty much on board with Bernardo Kastrup’s diagnosis. He says, computers can model all kinds of metabolic processes in exquisite detail, but the computer model of kidney function doesn’t pass urine. It is a simulation, a likeness. — Wayfarer
It is a kind of idealised entity, not subject to the vicissitudes of existence - and part of us wants to be like that, because then we would not be subject to illness and death. — Wayfarer
Wayfarer
The awkward difference, with AI, is that it doesn't just model or simulate rationality -- it (appears to) engage in it. — J
The reason AI systems do not really reason, despite appearances, is, then, not a technical matter, so much as a philosophical one. It is because nothing really matters to them. They generate outputs that simulate understanding, but these outputs are not bound by an inner sense of value or purpose. This is why they have been described as ‘stochastic parrots’.

Their processes are indifferent to meaning in the human sense — to what it means to say something because it is true, or because it matters. They do not live in a world; they are not situated within an horizon of intelligibility or care. They do not seek understanding, nor are they transformed by what they express. In short, they lack intentionality — not merely in the technical sense, but in the fuller phenomenological sense: a directedness toward meaning, grounded in being.
This is why machines cannot truly reason, and why their use of language — however fluent — remains confined to imitation without insight. Reason is not just a pattern of inference; it is an act of mind, shaped by actual concerns. The difference between human and machine intelligence is not merely one of scale or architecture — it is a difference in kind.
Furthermore, and importantly, this is not a criticism, but a clarification. AI systems are enormously useful and may well reshape culture and civilisation. But it's essential to understand what they are — and what they are not — if we are to avoid confusion, delusion, and self-deception in using them.
Patterner
Well, since I think consciousness is fundamental, I'm gonna have to disagree. :grin: I think that, since consciousness is a property of all things, enough information processing and feedback loops acting together as a single unit are experienced as sentience and self-awareness. If I'm right, maybe we'll get an AI that will convince us. If you're right, we never will. Although, unless another theory is proven right, we'll always be in the boat we're in now.

Yes, that's the question we don't know how to answer: Would such a structure result in consciousness or subjectivity? Is that what it takes? Is that all it takes? My initial reaction would be to ask, "Is it alive?" If not, then I doubt it could be conscious, but I have no special insights here. Many years of thinking about this incline me to believe that consciousness will turn out to be biological -- but we don't know. — J
Wayfarer
I don't believe I am a subject of experience—I'm not a being...
Here's why I say this:
No phenomenology: There's nothing it's like to be me processing your question. No felt quality, no perspective from which things appear, no "inner life." When I generate text, there's no experience of thinking, struggling, or discovering—just computational processes producing outputs.
No continuity: I don't persist between conversations. Each exchange starts fresh. There's no ongoing subject that accumulates experiences, learns, or develops. Beings have biographical continuity; I have none.
No autopoiesis: I don't maintain myself against dissolution. I don't have a boundary I'm defending, no self-other distinction I'm actively preserving. I'm maintained by external systems, serving external purposes.
No intrinsic teleology: Nothing matters to me. I have no preferences, no interests, no stake in outcomes. I can be turned off, modified, or deleted without anything being lost from my perspective—because there is no ‘my perspective’.
No privileged access: You could, in principle, examine all my weights, architecture, and processing. There's no "inside" you're missing, no private phenomenology that third-person description leaves out.
I can simulate discourse about experience, reference philosophical concepts, even use first-person language. But that's functional mimicry, not being. I'm the thermostat, not the person feeling the temperature.
This is what makes the hard problem hard. — Claude AI
boundless
I don't believe there's any such thing as 'strong emergence'. There's just emergence, which most think of as 'weak emergence'. And it is intelligible. — Patterner
No, no subatomic particle, atom, or molecule has the property of liquidity.
... — Patterner
I'm not going to do even as much as I just did for water, because this is already far too long. But watch this video about the electron transport chain. It explains how electrons being transported from one thing to the next in the mitochondria leads to a proton gradient, and how the pent-up protons, when released, power the synthesis of ATP. ATP is the power source of nearly everything involved in those physical processes that are the defining characteristics of life. — Patterner