• Logical proof that the hard problem of consciousness is impossible to “solve”
    It's all so tiresome.Baden

    Well, true, but I still think calling it out serves a purpose. I’m trying to get the point across that the reason it seems so obvious that only the physical can be real is the way the problem has been set up in our culture. That is why, when the question is asked ‘what alternative is there?’, the expectation is that the answer must necessarily entail something spooky. That’s why the task is to show how we got here. And actually I think that is very much what Chalmers’ essay was about in the first place.

    Splendid unicorn analogy, by the way. :up:
  • Logical proof that the hard problem of consciousness is impossible to “solve”
    I posted a response yesterday:

    There is no soul, or other essence as neuroscience has shown repeatedly.Philosophim

    To say that mind is not reducible to physical constituents, is not to posit some ethereal substance or 'ghost in the machine' (if that is what 'soul' means to you). That view is grounded in Cartesian dualism, which posited body as extended but mindless substance and mind (res cogitans) as non-extended pure intelligence. Cartesian dualism is written deeply into the fabric of modern philosophy and science. In general terms, in the following centuries, science tended to regard res cogitans as an incoherent idea, and to concentrate on material causes, res extensa, as the ground of explanation in natural science. And I think that is in the back of your mind whenever we get into this topic. That is why for you it is axiomatic that the mind has to be understood in terms of physical (or neurological) causation. It's because the alternative seems to be a 'thinking substance', which to you makes no sense. When physicalism is questioned, this is what you think is being proposed. I think your approach is very much influenced by that.

    The physical world is matter and energy. To have something non-physical, you would need something that does not fit in the category of matter and energy.Philosophim

    Information would be a good candidate in our scientific age. 'Information is information, not matter or energy', said one of the founders of computer science. 'No materialism which does not admit this can survive at the present day.' And information is clearly separate from matter. Why? Because the same information can be encoded in completely different material forms, and yet still retain its meaning.
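
    To make that concrete (a small sketch of my own in Python, not drawn from the source of that quote): the same short message can be realised in entirely different physical encodings - charge states in memory, ink-printed hexadecimal, holes punched in tape - and still decode back to exactly the same content:

        # Illustrative only: the same information in three different 'material' forms.
        message = "middle C"

        ascii_bytes = message.encode("ascii")                     # charge states in memory
        hex_form = ascii_bytes.hex()                              # could be printed as ink on paper
        binary_form = " ".join(f"{b:08b}" for b in ascii_bytes)   # could be holes in punched tape

        # The physical realisations have nothing in common, yet each decodes to the same message.
        assert bytes.fromhex(hex_form).decode("ascii") == message
        assert bytes(int(b, 2) for b in binary_form.split()).decode("ascii") == message

    What the paper, the tape and the silicon share is not any physical property, but the information they encode.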

    When you say living organisms display attributes and characteristics that cannot be extracted from the laws of chemistry and physics alone, could you give some examples?Philosophim

    When I say that living organisms display attributes and characteristics that cannot be extracted from the laws of chemistry and physics alone, I'm pointing to the fact that organisms are fundamentally different from machines. Unlike machines, which serve purposes imposed on them from the outside, living organisms exhibit intrinsic agency and functional autonomy. They actively maintain themselves through processes like homeostasis, which enable them to differentiate themselves from their surroundings, unlike minerals or other non-organic materials. This self-maintenance and self-regulation give organisms an internal purpose—a drive to persist, adapt, and flourish—that is entirely absent in the purely extrinsic purposes of machines. This fundamental distinction between intrinsic and extrinsic purpose is key to understanding why organisms cannot be reduced to mere physical or chemical mechanisms. The whole system and its environment are deeply intertwined, making living systems more than just the sum of their parts.
  • Logical proof that the hard problem of consciousness is impossible to “solve”
    I don't think I follow any conventional dualism.Mark Nyquist

    Sure you do. It's implied in everything you write. Whatever is being forced on you is forced on you by virtue of your prior commitment to the sole reality of the physical.
  • Logical proof that the hard problem of consciousness is impossible to “solve”
    the question is in what form do non-physical things exist? If physical matter isn't involved there is no physical form.Mark Nyquist

    Forms are not just the shapes of physical things; they are the essential principles that particular things must conform to in order to exist. For instance, the concept of 'wings'—a structure for flight—has emerged independently across insects, birds, reptiles, and mammals. The form of a wing is determined by the function of flight. This form, understood as an abstract principle, pre-exists physical wings. It represents the necessary conditions that must be realized for flight, rather than being derived from physical matter.

    a concept a non-physical always is mental content so is physically contained.Mark Nyquist

    While concepts such as wings or circles are grasped by the mind, they are not merely products of the mind. They exist as forms independently of their physical manifestations. The mind may indeed correlate with brain activity, but the claim that the mind 'is the product of' the brain is precisely what the hard problem of consciousness calls into question. The relationship between brain and mind remains a mystery, and it is worth considering that the conscious acts the brain enables might actually drive evolutionary processes. Greater intelligence provides greater possibilities for the organism, suggesting that the brain is a product of the mind’s ability to conceptualize and act within the world, as much as a cause.

    This does not posit 'non-material' things or forces, but constraints, which are top-down rather than simply bottom-up. Living things, generally, are shaped by both of those factors, not simply by physical (bottom-up) causation. Most of what you and @Philosophim are saying is a consequence of the 'Cartesian duality', with its artificial model of matter and 'non-material substance', as explained in this earlier post. It seems natural to you, because it is deeply embedded in our way of seeing things.
  • Where is AI heading?
    I looked at the link provided, and he comes across more as an idealist,noAxioms

    If you mean, Bernardo Kastrup, then yes, he describes himself as an 'analytical idealist'. (I suspect he includes 'analytical' to align his approach with analytical philosophy more widely.) But note the reason I introduced him, and Federico Faggin, is because they are both computer scientists who have deep experience with early AI, but they are also philosophers.

    I have issue with not using 'understanding' since it would seem impossible to pass a high school exam on a subject without any understanding of the subject, and yet gemini could do so.noAxioms

    But it wouldn't understand that it had! It wouldn't, then, get ready for college, decide on a subject, move out of home, and, you know, get a life :-)
  • Logical proof that the hard problem of consciousness is impossible to “solve”
    Even if we can study our brain and associate phenomena with consciousness, our understanding of it is made through consciousness, through this subjective notion in our mind. And breaking down consciousness is impossible: it's always there as a whole, at least if we consider the whole to be the experience of the subject (you could study altered states of consciousness to learn more about the missing elements in these experiences).Skalidris

    Phenomenology is grounded in just that awareness. The phenomenological method attends to the nature of first-person experience, not from an objective or 'outside' stance, but through attention to the quality of experience on a moment-by-moment basis. It is related to ancient philosophical skepticism, which 'withholds judgement about that which is not evident'.

    From a phenomenological perspective, in everyday life, we see the objects of our experience such as physical objects, other people, and even ideas as simply real and straightforwardly existent. In other words, they are “just there.” We don’t question their existence; we view them as facts.

    When we leave our house in the morning, we take the objects we see around us as simply real, factual things—this tree, neighboring buildings, cars, etcetera. This attitude or perspective, which is usually unrecognized as a perspective, Edmund Husserl terms the “natural attitude” or the “natural theoretical attitude.”

    When Husserl uses the word “natural” to describe this attitude, he doesn’t mean that it is “good” (or bad), he means simply that this way of seeing reflects an “everyday” or “ordinary” way of being-in-the-world. When I see the world within this natural attitude, I am solely aware of what is factually present to me. My surrounding world, viewed naturally, is the familiar world, the domain of my everyday life. Why is this a problem?

    From a phenomenological perspective, this naturalizing attitude conceals a profound naïveté. Husserl claimed that “being” can never be collapsed entirely into being in the empirical world: any instance of actual being, he argued, is necessarily encountered upon a horizon that encompasses facticity but is larger than facticity. Indeed, the very sense of facts of consciousness as such, from a phenomenological perspective, depends on a wider horizon of consciousness that usually remains unexamined.
    Key Ideas in Phenomenology

    David Chalmers recognises that phenomenology must be 'absolutely central' to a properly-constituted science of consciousness.
  • Logical proof that the hard problem of consciousness is impossible to “solve”
    As a footnote to the above, what really is 'physical'? Is the brain physical? Living organisms? I question these assumptions, because living organisms generally display attributes and characteristics that can't be extracted from the laws of physics or chemistry alone. They don't violate physical laws, but adapt to them in ways that physical things like minerals or gases do not. As for the brain, it can be considered as a physical object, but in its context, embodied in a living organism, it is certainly much more than that.

    What I think is meant by 'physical' is simply 'objective' - what can be sensed, measured, and analysed by objective methods and instruments. Yet at the bottom of 'physical matter' we nowadays find abstractions, and indeed the whole model of particle physics is grounded in mathematical abstractions.
  • Logical proof that the hard problem of consciousness is impossible to “solve”
    The noting of the current limitations of science being able to objectively capture personal experience are just that, a limit.Philosophim

    They're not that. There are limitations to the scientific method in this respect as a matter of principle, which you're not seeing. It requires a different kind of approach from what has until now been understood as scientific method.

    is philosophy trying to find something that isn't there, disguising wishes and fantasy as word play to keep some hope alive of a mortal shell that isn't shackled to physical reality? The former is what propels civilizations, while the latter keeps us in the dark ages.Philosophim

    :roll:
  • Where is AI heading?
    Well it’s a big part of it for sure. But it’s well known in philosophy that the more general a term, the harder it is to define.
  • Where is AI heading?
    They have some level of intelligenceCarlo Roosen

    I think that can be questioned. 'Intelligence' is the term we use but whether artificial intelligence is actually intelligent is questionable. I put it to ChatGPT, which you can review below:

    Question: Is artificial intelligence actually intelligent?

    ChatGPT: The question of whether artificial intelligence (AI) is "actually intelligent" hinges on how we define "intelligence." AI systems, particularly those based on machine learning and neural networks, can perform tasks that require sophisticated pattern recognition, problem-solving, and decision-making. They can simulate many aspects of what humans would call intelligent behavior, such as language generation, playing complex games like chess or Go, and analyzing large datasets.

    However, this simulation of intelligence doesn't necessarily imply the same kind of intelligence humans possess. AI operates based on algorithms, data processing, and statistical correlations, not conscious understanding or awareness. In contrast, human intelligence includes subjective experience, creativity, emotional insight, and the ability to reflect on one's own thoughts and surroundings—qualities that AI, as it exists today, lacks.

    While AI can appear to be intelligent by some practical measures, it is not intelligent in the sense of possessing conscious awareness or genuine understanding. Many consider its intelligence to be an advanced form of calculation, not equivalent to human intelligence.


    I don't think you understand the implications of the question you pose in the OP:

    To me, it’s clear that if computers had their own language, one they could develop for themselves, they would form a worldview independent from ours. Once we implement this language in a neural network, it will be flexible enough to learn things we can’t even imagine.Carlo Roosen

    I think that what you're proposing is actually creating a novel form of being, to all intents and purposes. After all, 'a worldview' of the kind that humans have has literally been thousands of years in the making (or millions, if you include our whole evolutionary history). Humans are biological beings with existential concerns, which culture and society have developed ways to address over the millennia. So what would be the overall aim of the network you're proposing? What would imbue it with the will to exist or evolve? Of course, many people are asking these questions now, seeing as AI has suddenly exploded in power, but they're big questions. I don't expect any AI systems to have the answers.

    To make (intelligence) truely defined and measurable, yes there is a challenge, but I don't see why it would be impossible. We've done it for humans and animals.Carlo Roosen

    But have we? I believe that humans are more intelligent than animals, although there are a lot of people here who call that into question. What is intelligence? The word is actually derived from 'inter-legere', meaning 'to read between'. So in essence, it implies judgement. And judgement requires knowledge of context, a grasp of meaning, and an overall sense of what matters. Those too are very deep questions.
  • Where is AI heading?
    But isn't the burden of proof on you, to prove that intelligence and consciousness are connected, as you say?Carlo Roosen

    But aren’t they always connected? Can you provide an example of where they’re not?

    And can intelligence really be defined and measured? I suppose it can be in some respects, but there are different modes of intelligence. A subject may have high intelligence in a particular skill and be deficient in other areas.

    So what ‘human level intelligence’ means is still an open question (let alone ‘superhuman’).

    To me, it’s clear that if computers had their own language, one they could develop for themselves, they would form a worldview independent from ours. Once we implement this language in a neural network, it will be flexible enough to learn things we can’t even imagine.Carlo Roosen

    You’re assuming a lot there! Have a look at this dialogue from a few days back.
  • Logical proof that the hard problem of consciousness is impossible to “solve”
    Yes, and that extra ingredient is the inability to objectively grasp other subjective experiences. Again, this does not mean there is some actual essence we're missing. It means we are at a limitation of what we can evaluate objectively: the personal subjective experience. This does not mean subjective experiences aren't physical. We can evaluate a brain objectively and state, "According to what we know of behavior, this brain is in pain." We just can't objectively state 'how that brain is personally experiencing pain'.Philosophim

    You're still not seeing the point, though. There is some ability to infer obvious physiological correlates like pain or epilepsy from neuroscience, but you still fall back on the assumption that subjective experiences are ultimately physical, without addressing the real crux of the issue: the first-person, qualitative aspect of experience that resists explanation by objective, third-person descriptions. Even if we can identify some neural correlates—like specific brain states that accompany pain, color perception, or emotions—these correlations don’t explain why or how those states are accompanied by conscious experience, and there's no actual theory that does so. The subjective feel of pain, or what it is like to experience red, is not part of the objective description. That is the explanatory gap that you're explaining away by equating the subjective with the merely personal.

    If we want philosophy to stay relevant, we need to follow the discoveries that are being made today, or find some way to push science into areas we want to explore like 'personal experiences'.Philosophim

    So to paraphrase, your response is, 'It's true that objective science can't capture personal experience, but it doesn't really matter. From brain science, we know that experience is basically physical in nature, let's hope philosophy catches up with that one day.'
  • Logical proof that the hard problem of consciousness is impossible to “solve”
    There is no soul, or other essence as neuroscience has shown repeatedly.Philosophim

    I might as well try and spell this out, as it's been a bone of contention between us in many debates. There is a deep philosophical problem here. To say that mind is not reducible to physical constituents is not to posit some ethereal substance or 'ghost in the machine' (if that is what 'soul' means to you). That view is grounded in Cartesian dualism, which posited body as extended but mindless substance and mind (res cogitans) as non-extended pure intelligence. Cartesian dualism is written deeply into the fabric of modern philosophy and science. In general terms, in the following centuries, science tended to regard res cogitans as an incoherent idea, and to concentrate on material causes, res extensa, as the ground of explanation in natural science. And I think that is in the back of your mind whenever we get into this topic. That is why for you, and for many others, it is axiomatic that the mind has to be understood in terms of physical (or neurological) causation.

    I don't think Chalmers is trying to suggest that there is a soul or essence in that sense. I'm certainly not trying to resurrect a Cartesian soul! But I also think that the physicalist picture that arises from denying the reality of consciousness (in effect) is also mistaken, because it's grounded in faulty premisses from the outset, on an artificial distinction between abstractions. Rather, the whole picture of Cartesian dualism, and the physical reductionism that descended from it, has to be called into question. That is the philosophical background as I see it.
  • Logical proof that the hard problem of consciousness is impossible to “solve”
    he's not implying that subjective consciousness isn't physicalPhilosophim

    He jolly well is!

    Chalmers asks:

    We have seen that there are systematic reasons why the usual methods of cognitive science and neuroscience fail to account for conscious experience. These are simply the wrong sort of methods: nothing that they give to us can yield an explanation. To account for conscious experience, we need an extra ingredient in the explanation.

    That 'extra ingredient' is missing from physical explanations:

    It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does. ...

    For any physical process we specify there will be an unanswered question: Why should this process give rise to experience? Given any such process, it is conceptually coherent that it could be instantiated in the absence of experience. It follows that no mere account of the physical process will tell us why experience arises. The emergence of experience goes beyond what can be derived from physical theory.

    So he's explicitly rejecting physical reductionism.
  • Where is AI heading?
    My suggestion was to ignore the topic of consciousness here, but maybe that doesn't work. Especially not if one, like Wayfarer, equates consciousness with intelligence.Carlo Roosen

    You have yet to explain how intelligence can be dissociated from consciousness. You might say that AI does this, but as noted above, AI systems don't actually know anything, so the question must be asked whether you think they are capable of 'human-level intelligence' in light of that lack. So my objection may appear off topic to you, but maybe that's because you're not seeing the problem. It might be that you have a false picture of what AI is and can do.

    // I was reading yesterday that IBM's Deep Blue, which famously beat Garry Kasparov at chess in 1997, doesn't actually know what 'chess' is, doesn't know what 'a game' is, and doesn't know what 'winning' means. It simply performs calculations so as to derive an outcome.//
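
    To give a sense of what 'simply performs calculations' means in practice, here is a toy game-tree search in Python (an illustration of my own, nothing to do with Deep Blue's actual program): the engine just compares numbers attached to positions, and 'winning' appears nowhere in it except as a larger score.

        # A toy minimax search, illustrative only - not Deep Blue's algorithm or code.
        def minimax(position, depth, maximizing, evaluate, successors):
            """Return the best score reachable from `position`, looking `depth` moves ahead."""
            children = successors(position)
            if depth == 0 or not children:
                return evaluate(position)   # just a number, nothing more
            scores = (minimax(c, depth - 1, not maximizing, evaluate, successors) for c in children)
            return max(scores) if maximizing else min(scores)

        # An abstract 'game': positions are integers, a move adds one or doubles,
        # and the 'evaluation' is simply the number itself.
        best = minimax(1, 3, True, evaluate=lambda p: p, successors=lambda p: [p + 1, p * 2])
        print(best)   # prints 6 - arithmetic over a tree of numbers, with no notion of a 'game'

    The point isn't the algorithm, of course, but that nothing in it refers to chess, games or winning; those are descriptions we supply from the outside.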
  • All Causation is Indirect
    I struggle with the idea that causation is always indirect. As I type this, my keystrokes cause the appropriate symbols to appear on the screen. I understand this is mediated by thousands of sub-processes on my computer and the servers, but nonetheless if I were to press Z and see X, I would have to think there was some cause for that. (Although even as I write that, I recall an old friend who had a near-fatal cerebral aneurysm, the first symptom of which was that he couldn't type, and he thought he had a faulty keyboard.....so I guess I've just shot myself in the foot....)
  • Logical proof that the hard problem of consciousness is impossible to “solve”
    First, lets clarify what 'the hard problem is'. Is it that we're conscious? No. Is it that the brain causes consciousness? No. The idea that consciousness is caused by our physical brains is the easy problem. The hard problem is, "Will we ever know what it is like to BE a conscious individual that isn't ourselves".Philosophim

    That is your particular interpretation of the problem. David Chalmers’ original paper doesn’t say that. He says that understanding the specific functional aspects of consciousness and their correlation with neural processes is comparatively easy:

    The easy problems of consciousness include those of explaining the following phenomena:

    * the ability to discriminate, categorize, and react to environmental stimuli;
    * the integration of information by a cognitive system;
    * the reportability of mental states;
    * the ability of a system to access its own internal states;
    * the focus of attention;
    * the deliberate control of behavior;
    * the difference between wakefulness and sleep. ...

    There is no real issue about whether these phenomena can be explained scientifically.... If these phenomena were all there was to consciousness, then consciousness would not be much of a problem.
    Chalmers

    Compare with:

    The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

    He surveys a number of proposed causal links between brain and conscious experience and finds them wanting. Further on he says:

    I suggest that a theory of consciousness should take experience as fundamental. We know that a theory of consciousness requires the addition of something fundamental to our ontology, as everything in physical theory is compatible with the absence of consciousness. We might add some entirely new nonphysical feature, from which experience can be derived, but it is hard to see what such a feature would be like. More likely, we will take experience itself as a fundamental feature of the world, alongside mass, charge, and space-time. If we take experience as fundamental, then we can go about the business of constructing a theory of experience.

    Where there is a fundamental property, there are fundamental laws. A nonreductive theory of experience will add new principles to the furniture of the basic laws of nature. These basic principles will ultimately carry the explanatory burden in a theory of consciousness. Just as we explain familiar high-level phenomena involving mass in terms of more basic principles involving mass and other entities, we might explain familiar phenomena involving experience in terms of more basic principles involving experience and other entities.

    In particular, a nonreductive theory of experience will specify basic principles telling us how experience depends on physical features of the world. These psychophysical principles will not interfere with physical laws, as it seems that physical laws already form a closed system. Rather, they will be a supplement to a physical theory. A physical theory gives a theory of physical processes, and a psychophysical theory tells us how those processes give rise to experience. We know that experience depends on physical processes, but we also know that this dependence cannot be derived from physical laws alone. The new basic principles postulated by a nonreductive theory give us the extra ingredient that we need to build an explanatory bridge.
    Chalmers

    Which he proposes as a 'naturalistic dualism'. He never states that the problem is what it is like to be a conscious individual that isn’t ourselves. His key point is the emphasis on 'experience', which is by nature first-person. That could be interpreted as saying that 'we can't directly know the experience of another person', but he doesn't directly state it.

    The stumbling block for the objective sciences - the actual problem that has to be faced up to - is that experience is not objective, as the OP kind of says. Consciousness is the property of the subject to whom the experience occurs, so the exclusive emphasis on objective, third-party measurement, which is the backbone of modern scientific method, can't accommodate it. Which is why eliminativism wants to eliminate it.

    The idea that consciousness is caused by our physical brains is the easy problem.Philosophim

    But the nature of that causal relationship is the very heart of the issue. Physicalism assumes that it possesses an in-principle explanation, but that is what is being called into question.
  • Where is AI heading?
    His critique of materialism isn't hard to agree with. Materialism does posit, ultimately, mathematical abstractions at the bottom of everything and ignores consciousness. But Kastrup's idealism--as expressed in that article--fares no better in that it posits consciousness as fundamental as a solution to ignoring it, but with no real insight into how it interacts with or why it's necessary to interact with matter in order to produce human experience. Or why human experience, which is the origin of the concept of "consciousness", is so special such that this concept turns out to the most fundamental map of the big picture. So, we're left without the only pieces of the puzzle that actually matter.Baden

    Hey, thanks for that feedback! As has been pointed out already, that abstract that you're reacting to was AI generated, for the purpose of criticism of one of the claims in the OP, namely, that we will soon produce 'human-level intelligence' (or even superhuman, whatever that's supposed to mean.) So it's very cursory. Kastrup does address those points you raise in great detail in his various books, articles and lectures. He has found that his idealist philosophy is convergent in many respects with Schopenhauer's (hence his book on that), and from my reading, he has produced a comprehensive idealist metaphysics, although I won't try and respond to all of your points in a single post. If you're interested, he has a free course on it.

    Take them or leave them, materialism and idealism boil down to the same thing, fruitless stories aimed at elevating their storytellers into something they're not nor ever can be, i.e. vessels of wisdom that point to anything of actual significance beyond scientific progress and lived human experience.Baden

    I have more confidence in philosophy as a vehicle for truth.
  • Rational thinking: animals and humans
    I assume every species has thoughts...Vera Mont

    Don't you think there might be just a smidgen of anthropomorphic projection there?
  • The Biggest Problem for Indirect Realists
    I think we have good reasons to believe, e.g., that electrons exist.Bob Ross

    They only have a tendency to exist. We know they don't have any determinate existence until they're measured. That is an implication of Heisenberg's uncertainty principle. We also know that sub-atomic phenomena can behave as both waves and particles, and so are not really either one or the other, as those two forms of existence are incommensurable.
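
    For reference, the relation I have in mind, in its standard form for position and momentum (where Δx and Δp are the respective uncertainties):

        \Delta x \, \Delta p \geq \hbar / 2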
  • The Biggest Problem for Indirect Realists
    Thank you Banno, means a lot. I don't claim that it conveys anything particularly earth-shattering but I was pleased with the writing style and with what I think is the novel idea in the title.
  • Rational thinking: animals and humans
    I have no idea why other people think this is remarkable, when we all not only have a sense of time, but can witness every living thing around us respond to the passage of time.Vera Mont

    There's a well-known - some might say notorious - case which was recounted in a popular book of the 1970s, Supernature, and again in a more recent work, The Human Cosmos, by Jo Marchant.

    In February 1954 , a US biologist named Frank Brown discovered something so remarkable, so inexplicable, that his peers essentially wrote it out of history. Brown had dredged a batch of Atlantic oysters from the seabed off New Haven, Connecticut, and shipped them hundreds of miles inland to Northwestern University in Evanston, Illinois. Then he put them into pans of brine inside a sealed darkroom, shielded from any changes in temperature, pressure, water currents, or light. Normally, these oysters feed with the tides. They open their shells to filter plankton and algae from the seawater, with rest periods in between when their shells are closed. Brown had already established that they are most active at high tide, which arrives roughly twice a day. He was interested in how the mollusks time this behavior, so he devised the experiment to test what they would do when kept far from the sea and deprived of any information about the tides. Would their normal feeding rhythm persist?

    For the first two weeks, it did. Their feeding activity continued to peak 50 minutes later each day, in time with the tides on the oysters’ home beach in New Haven. That in itself was an impressive result, suggesting that the shellfish could keep accurate time. But then something unexpected happened, which changed Brown’s life forever.

    The oysters gradually shifted their feeding times later and later. After two more weeks, a stable cycle reappeared, but it now lagged three hours behind the New Haven tides. Brown was mystified, until he checked an astronomical almanac. High tides occur each day when the moon is highest in the sky or lowest below the horizon. Brown realized that the oysters had corrected their activity according to the local state of the moon; they were feeding when Evanston—if it had been by the sea—would experience high tide. He had isolated these organisms from every obvious environmental cue. And yet, somehow, they were following the moon.

    The second of those two books recounts how Frank Brown was essentially ostracized by the scientific mainstream for the claim that the oysters somehow responded to changes in lunar gravitation. Nevertheless his findings still stand as far as I know.

    //I should add, I don't think molluscs are conscious, and these actions are not rational, but it is very interesting that this behaviour can be regulated in this manner.//
  • The Biggest Problem for Indirect Realists
    Perfectly clear to me, but apparently something others do not see.
  • Philosophy Proper
    Perhaps it would have better to say something like "In the early 20th century a split in methods and interests occurred within philosophy, and Husserl was a bellwether."J

    Have a look at How the premature death of Collingwood changed philosophy
  • The Biggest Problem for Indirect Realists
    Paul Kneirem had a rather good little essay on the topic on the old forum which I think I saved somewhere. Also God does not exist, although you’ll say you knew that already.
  • The Biggest Problem for Indirect Realists
    It sounds like, then, you believe that numbers are real a priori? Either way, they exist and are real. That's confused and muddied language to make a distinction between what is real and what exists.Bob Ross

    I can understand why you would say that, as it seems a strange distinction to make, but the distinction between what is real and what exists is nevertheless a valid one. But I won't take it further at this point.
  • Where is AI heading?
    Do you agree that his statement is contradictory? He stated that consciousness is a fundamental aspect of the universe yet he claims that computer is not conscious.MoK

    Read up on Bernardo Kastrup. I can’t break it down for you in a forum post. Try this https://besharamagazine.org/science-technology/mind-over-matter/
  • Where is AI heading?
    Intelligence can be definedCarlo Roosen

    Well, go ahead, define it. You say human level intelligence ‘can be achieved’ and superhuman intelligence some time after that. Show some evidence you’re not just making it up.

    Do some research - google Bernardo Kastrup and read or listen. I’m not going to try and explain what he says, but I'm happy to answer any questions it throws up if I’m able.
  • Where is AI heading?
    Do you equate human-level intelligence with consciousness?Carlo Roosen

    Of course human-level intelligence is an aspect of human consciousness. Where else can it be found? What else could it be?

    To me, the path is clear: superhuman intelligence is where we're headed.Carlo Roosen

    But not associated with consciousness?

    What do you mean by 'human level intelligence' and 'superhuman intelligence'?
  • What is 'innocence'?
    Now you mention that, I have noticed it, although I haven't looked at it. I've just bought the Kindle edition of his The World is Full of Gods, which I quite like, although as I basically agree with every word in it, I am finding it hard to actually enjoy. :rofl:
  • Rational thinking: animals and humans
    The parts of the universe that become aware of themselves and other parts only do so by distinguishing themselves and other parts from everything else.Janus

    Indeed. That is one of the unique attributes of living beings. The hallmark of organic life is that it has to maintain itself rather than being subsumed into whatever chemical or energetic process is going on around it, as non-organic matter does. This is one of the distinctions that Evan Thompson makes in Mind in Life. But humans go a step beyond that. They're aware that they're aware.
  • What is 'innocence'?
    The last few weeks I've been looking after a poodle-spaniel cross ('cavoodle') about a year old. She's never been neglected or hit, and as a consequence she has a kind of child-like innocence about her - everything is new, everything is a fun game, other dogs and people are all potentially friends. So she has that kind of quality of innocence, in a dog kind of way.

    So I guess the attraction is that innocence represents purity, unspoiledness, and spontaneity. I think that's why many modern societies express reverence for nature and the environment. It's the opposite of jadedness, of cynicism, of remembered or forgotten pain and guilt.
  • Where is AI heading?
    What you call consciousness is not fundamental since any mechanical device is equally capable of gleaning the workings of the world through such means, and many refuse to call that consciousnessnoAxioms

    If 'gleaning' means 'understanding', then AI systems glean nothing. In fact a computer system knows nothing.

    I put that to gemini too, which responded:

    I agree with your provocative claim that LLMs don't actually know anything. While they can process information and generate text that may seem intelligent, they do not possess true understanding or consciousness.

    Here's why:

    1. Lack of subjective experience: LLMs do not have personal experiences or feelings. They cannot understand the world in the same way that a human does, as they lack subjective consciousness.

    2. Pattern recognition: LLMs are essentially pattern recognition machines. They identify patterns in vast amounts of data and use those patterns to generate text. However, they do not comprehend the meaning behind the information they process.

    3. Manipulation of language: LLMs can manipulate language in impressive ways, but this does not equate to true understanding. They can generate text that is coherent and informative, but they do not have a deep understanding of the concepts they discuss.

    In essence, LLMs are powerful tools that can be used for various purposes, but they should not be mistaken for sentient beings. They are simply machines that can process and generate information based on the data they are trained on.
    — gemini.google.com

    OK, I don't understand Kastrup's argument, since all I had was that one summary not even written by him.noAxioms

    I provided it in the context of Carlo Roosen's claim that AI will soon give rise to 'superhuman intelligence', by pointing out the objections of Kastrup and Faggin, both computer scientists and philosophers. It was meant as a suggestion for looking into the philosophical issues concerning AI, not as a complete wrap-up of Kastrup's philosophy. As for Kastrup's books, here's a list if you're interested (and he also has many hours of YouTube media).
  • Where is AI heading?
    If you disagree with an argument it follows that you must not understand it. QEDJanus

    Perhaps then you can parse this sentence for me:

    a human body is nowt but a complex physical system, and if that physical system can interact with this non-physical fundamental property of the universe,noAxioms

    (I take it 'nowt but' means 'nothing but'.) So the objection appears to be that body is wholly physical, and mind a non-physical fundamental property - which is something very close to Cartesian dualism. But Kastrup's argument is not based on such a model. Hence my remark.
  • Where is AI heading?
    You don't say how long you've been following AI, but the breathless hype has been going since the 1960s. Just a few years ago we were told that radiologists would become obsolete as AI would read x-rays. Hasn't happened. Back in the 1980s it was "expert systems." The idea was to teach computers about the world. Failed. The story of AI is one breathless hype cycle after another, followed by failure.fishfry

    The story is well-told by now [written 2005 about the 70's] how the cocksure dreams of AI researchers crashed during the subsequent years — crashed above all against the solid rock of common sense. Computers could outstrip any philosopher or mathematician in marching mechanically through a programmed set of logical maneuvers, but this was only because philosophers and mathematicians — and the smallest child — were too smart for their intelligence to be invested in such maneuvers. The same goes for a dog. “It is much easier,” observed AI pioneer Terry Winograd, “to write a program to carry out abstruse formal operations than to capture the common sense of a dog.”

    A dog knows, through its own sort of common sense, that it cannot leap over a house in order to reach its master. It presumably knows this as the directly given meaning of houses and leaps — a meaning it experiences all the way down into its muscles and bones. As for you and me, we know, perhaps without ever having thought about it, that a person cannot be in two places at once. We know (to extract a few examples from the literature of cognitive science) that there is no football stadium on the train to Seattle, that giraffes do not wear hats and underwear, and that a book can aid us in propping up a slide projector but a sirloin steak probably isn’t appropriate.
    Steve Talbott, Logic, DNA and Poetry
  • Where is AI heading?
    But a human body is nowt but a complex physical system, and if that physical system can interact with this non-physical fundamental property of the universe, then so can some other complex physical system such as say an AI.noAxioms

    But the inability to describe, explain or account for how physically describable systems are related to the mind is precisely what 'facing up to the problem of consciousness' points to. Our understanding of 'the physical world' is itself reliant on and conditioned by our conscious experience. We perceive and interpret physical phenomena through an experiential lens, which means that consciousness, in that sense, is prior to any understanding of the physical. Trying to explain consciousness in terms of physical processes ultimately involves using concepts that are themselves products of consciousness. Of course it is true that physicalism on the whole won't recognise that, precisely because it supposes that it has excluded the subject from its reckonings, so as to concentrate on what is really there. But that only works up to a point, and that point is well short of explaining the nature of mind. So it's not true that the human body is nothing but a 'complex physical system'; that is lumpen materialism.

    That argument wasn't a very good one,noAxioms

    I don't think you demonstrate an understanding of it.
  • Where is AI heading?
    But one day, I’m certain, we’ll realize there's more to learn from the human mind than just neurons. We can gain insights from observing our minds—how we remember, reason, and use language. Essentially, the kinds of discussions we have here on the forum.Carlo Roosen

    That is meta-cognitive awareness - knowing about knowing, understanding through insight how the mind operates. That might seem obvious, but since 20th-century psychology came along with its understanding of the sub- and unconscious aspects of the mind, it is clear that this is not at all simple.

    But overall I find the casual way in which you assume that human-level, and then superhuman, intelligence can or will be achieved to be hubristic. Many are saying that AI systems will reach the threshold of consciousness or sentience, if they haven't already. ChatGPT and other LLMs obviously display human-like conversational and knowledge-management abilities and can sail through the Turing Test. But I agree with those who say they are not conscious beings, and never will be, in principle.

    I asked Google Gemini to summarize why Bernardo Kastrup says that the idea of 'conscious AI' is an illusion:

    Bernardo Kastrup's argument against conscious AI is rooted in his philosophical perspective on consciousness and the nature of reality. He primarily argues that:

    1. Consciousness is fundamental: Kastrup believes that consciousness is a fundamental property of the universe, not a product of complex physical systems like the human brain. This means that AI, which is a product of human design and operates on physical principles, cannot inherently possess consciousness.

    2. AI as a simulation: He views AI as a simulation of consciousness, rather than a genuine manifestation of it. While AI can exhibit intelligent behavior and even mimic certain aspects of human consciousness, it does so based on programmed rules and algorithms, not on subjective experience.
     
    3. The hard problem of consciousness: Kastrup emphasizes the "hard problem" of consciousness, which is the question of how physical processes can give rise to subjective experience. He argues that current scientific understanding cannot adequately explain this phenomenon, and therefore, it's unlikely that AI, which operates on known physical principles, can achieve it.  

    Essentially, Kastrup's position is that while AI can be incredibly sophisticated and capable, it is fundamentally limited by its physical nature and cannot truly possess the subjective experience that we associate with consciousness.

    See also this blog post.

    I don't submit this just as an appeal to authority, but because Kastrup is a well-known critic of the idea of conscious AI, and because he has doctorates in both philosophy and computer science and created and sold an IT company in the early stages of his career. He has summarized and articulated the reasons why he says AI consciousness is not on the horizon from an informed perspective.

    It might also be of interest that he's nowadays associated with Federico Faggin, an Italian-American computer scientist whose claim to fame is having built the first commercially-produced microprocessor. Faggin's autobiography was published a couple of years ago as Silicon (website here.) He also described an epiphany about consciousness that he underwent, which eventually caused him to retire from IT and concentrate full-time on 'consciousness studies', the subject of his later book, Irreducible.

    Noteworthy that both Kastrup and Faggin came to forms of idealist metaphysics because of the realisation that there was an essential quality of consciousness that could never be replicated in silicon.

    There's a lot of philosophical background to this which is often overlooked in the understandable excitement about LLMs. And I've been using ChatGPT every single day since it launched in November 2022, mainly for questions about philosophy and science, but also for all kinds of other things (see this Medium article it helped me draft). So I'm not an AI sceptic in any sense, but I am pretty adamant that AI is not and won't ever be conscious in the sense that living beings are. Which is not to say it isn't a major factor in life and technology going forward.