But one day, I’m certain, we’ll realize there's more to learn from the human mind than just neurons. We can gain insights from observing our minds—how we remember, reason, and use language. Essentially, the kinds of discussions we have here on the forum. — Carlo Roosen
Bernardo Kastrup's argument against conscious AI is rooted in his philosophical perspective on consciousness and the nature of reality. He primarily argues that:
1. Consciousness is fundamental: Kastrup believes that consciousness is a fundamental property of the universe, not a product of complex physical systems like the human brain. This means that AI, which is a product of human design and operates on physical principles, cannot inherently possess consciousness.
2. AI as a simulation: He views AI as a simulation of consciousness, rather than a genuine manifestation of it. While AI can exhibit intelligent behavior and even mimic certain aspects of human consciousness, it does so based on programmed rules and algorithms, not on subjective experience.
3. The hard problem of consciousness: Kastrup emphasizes the "hard problem" of consciousness, which is the question of how physical processes can give rise to subjective experience. He argues that current scientific understanding cannot adequately explain this phenomenon, and therefore, it's unlikely that AI, which operates on known physical principles, can achieve it.
Essentially, Kastrup's position is that while AI can be incredibly sophisticated and capable, it is fundamentally limited by its physical nature and cannot truly possess the subjective experience that we associate with consciousness.
I don't submit this just as an appeal to authority, but because Kastrup is a well-known critic of the idea of conscious AI. — Wayfarer
Not sure if Gemini accurately summarized the argument, but there seems to be an obvious hole.
1. Consciousness is fundamental: Kastrup believes that consciousness is a fundamental property of the universe, not a product of complex physical systems like the human brain. This means that AI, which is a product of human design and operates on physical principles, cannot inherently possess consciousness. — GoogleGemini
But a human body is nowt but a complex physical system, and if that physical system can interact with this non-physical fundamental property of the universe, then so can some other complex physical system, such as, say, an AI. So the argument seems to be not only probably unsound, but invalid, and not just probably. It simply falls flat.
Many are saying that AI systems will reach the threshold of consciousness or sentience if they haven't already. ChatGPT and other LLMs obviously display human-like conversational and knowledge-management abilities and can sail through the Turing Test. — Wayfarer
No chatbot has passed the test, but some dedicated systems specifically designed to pass it have formally done so. And no, I don't suggest that either a chatbot or whatever it was that passed the test would be considered 'conscious' by even my low standards. It wasn't a test for that. Not sure how such a test would be designed.
All predictions about AI's future are based on refining this model: adding more rules, improving training materials, and using various tricks to approach human-level intelligence. — Carlo Roosen
As you seem to realize, that only works for a while. Humans cannot surpass squirrel intelligence by using only squirrels as our training. And no, a human cannot yet pass a squirrel Turing test.
Software developers are also making their own lives easier. It's no secret you can ask a chatbot to generate code for a website—and it works pretty well. — Carlo Roosen
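To make that claim concrete, here is a minimal sketch of the sort of single-file site such a request might yield; the page text and port number are invented placeholders, and only Python's standard library is used.

```python
# A minimal one-file web server of the sort a chatbot might generate
# on request. Uses only Python's standard library.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder page content, invented for the example.
PAGE = b"<html><body><h1>Hello from a generated site</h1></body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the same static page for every request path.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    # Port 8000 is an arbitrary choice for the example.
    HTTPServer(("", 8000), Handler).serve_forever()
```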
The dot product is a straightforward calculation whose result increases as the vectors align: the more two vectors point in the same direction, the larger the product, and the more likely they are to combine. — Carlo Roosen
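For reference, a minimal sketch in plain Python (the vector values are made up for illustration): the dot product sums the element-wise products, so for vectors of comparable magnitude it is largest when they point the same way; normalizing by the magnitudes (cosine similarity) isolates the alignment part.

```python
import math

def dot(a, b):
    """Sum of element-wise products of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    """Dot product normalized by magnitudes: 1.0 means perfectly aligned."""
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

aligned = dot([1.0, 2.0], [2.0, 4.0])    # same direction -> large positive
opposed = dot([1.0, 2.0], [-2.0, -4.0])  # opposite direction -> large negative
print(aligned, opposed)                          # 10.0 -10.0
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # 1.0 (perfectly aligned)
```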
But a human body is nowt but a complex physical system, and if that physical system can interact with this non-physical fundamental property of the universe, then so can some other complex physical system, such as, say, an AI. — noAxioms
That argument wasn't a very good one, — noAxioms
You don't say how long you've been following AI, but the breathless hype has been going on since the 1960s. Just a few years ago we were told that radiologists would become obsolete as AI would read X-rays. Hasn't happened. Back in the 1980s it was "expert systems." The idea was to teach computers about the world. Failed. The story of AI is one breathless hype cycle after another, followed by failure. — fishfry
The story is well-told by now [written in 2005, about the 1970s] how the cocksure dreams of AI researchers crashed during the subsequent years — crashed above all against the solid rock of common sense. Computers could outstrip any philosopher or mathematician in marching mechanically through a programmed set of logical maneuvers, but this was only because philosophers and mathematicians — and the smallest child — were too smart for their intelligence to be invested in such maneuvers. The same goes for a dog. “It is much easier,” observed AI pioneer Terry Winograd, “to write a program to carry out abstruse formal operations than to capture the common sense of a dog.”
A dog knows, through its own sort of common sense, that it cannot leap over a house in order to reach its master. It presumably knows this as the directly given meaning of houses and leaps — a meaning it experiences all the way down into its muscles and bones. As for you and me, we know, perhaps without ever having thought about it, that a person cannot be in two places at once. We know (to extract a few examples from the literature of cognitive science) that there is no football stadium on the train to Seattle, that giraffes do not wear hats and underwear, and that a book can aid us in propping up a slide projector but a sirloin steak probably isn’t appropriate. — Steve Talbott, Logic, DNA and Poetry
If you disagree with an argument it follows that you must not understand it. QED — Janus
a human body is nowt but a complex physical system, and if that physical system can interact with this non-physical fundamental property of the universe, — noAxioms
Our understanding of 'the physical world' is itself reliant on and conditioned by our conscious experience. We perceive and interpret physical phenomena through an experiential lens, which means that consciousness, in that sense, is prior to any understanding of the physical. — Wayfarer
Well, from an epistemological standpoint, yeah, the whole hierarchy is turned more or less around. Data acquisition and information processing become fundamental. What you call consciousness is not fundamental, since any mechanical device is equally capable of gleaning the workings of the world through such means, and many refuse to call that consciousness. They probably also forbid the term 'understanding' for whatever occurs when the machine figures it all out.
But it is the inability to describe, explain or account for how physically describable systems are related to the mind — Wayfarer
For a long time they couldn't explain how the sun didn't fall out of the sky, except by inventing something fundamental. Inability to explain is a poor excuse for denying that something is physical, especially when the physical alternative makes empirically verifiable predictions.
Hello, nice to see a computer scientist on the forum — Shawn
Ditto greeting from me. I'm one myself, but my latest installation of Cygwin for some reason lacks a development environment, which stresses me out to no end. It's like I've been stripped of the ability to speak.
What you call consciousness is not fundamental since any mechanical device is equally capable of gleaning the workings of the world through such means, and many refuse to call that consciousness — noAxioms
I agree with your provocative claim that LLMs don't actually know anything. While they can process information and generate text that may seem intelligent, they do not possess true understanding or consciousness.
Here's why:
1. Lack of subjective experience: LLMs do not have personal experiences or feelings. They cannot understand the world in the same way that a human does, as they lack subjective consciousness.
2. Pattern recognition: LLMs are essentially pattern recognition machines. They identify patterns in vast amounts of data and use those patterns to generate text. However, they do not comprehend the meaning behind the information they process (see the sketch after this post).
3. Manipulation of language: LLMs can manipulate language in impressive ways, but this does not equate to true understanding. They can generate text that is coherent and informative, but they do not have a deep understanding of the concepts they discuss.
In essence, LLMs are powerful tools that can be used for various purposes, but they should not be mistaken for sentient beings. They are simply machines that can process and generate information based on the data they are trained on. — gemini.google.com
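As a toy illustration of the pattern-recognition point above (the corpus and all names here are invented for the example): even a first-order Markov chain reproduces the word-to-word patterns of its training text and emits fluent-looking output with no comprehension at all. LLMs are vastly more sophisticated, but the contrast is instructive.

```python
import random
from collections import defaultdict

# Tiny training corpus, invented for the example.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Learn first-order transition patterns: which word follows which.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start="the", length=8):
    """Emit text by sampling the learned patterns; no meaning involved."""
    word, out = start, [start]
    for _ in range(length - 1):
        choices = transitions.get(word)
        if not choices:
            break
        word = random.choice(choices)
        out.append(word)
    return " ".join(out)

print(generate())  # e.g. "the cat sat on the rug" -- fluent, understood by nothing
```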
OK, I don't understand Kastrup's argument, since all I had was that one summary not even written by him. — noAxioms
I find the casual way in which you assume that human-level and then super-human intelligence can or will be achieved to be hubristic — Wayfarer
Many are saying that AI systems will reach the threshold of consciousness or sentience if they haven't already. — Wayfarer
AI has long since reached the point where its developers don't know how it works and cannot predict what it will do. It really wouldn't be AI at all if it were otherwise, but rather just an automaton executing very defined and predictable steps. Sure, they might program the ability to learn, but not what it will learn or what it will do with its training materials. And the best AIs I've seen, with limited applicability, did all their learning from scratch without training material at all. — noAxioms
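A minimal sketch of that last idea, learning from scratch with no training material: the following toy Q-learning loop (environment, rewards, and parameter values all invented for the example) starts with an all-zero table and improves purely through its own trial and error. Real systems of this kind are far more elaborate, but the flavor is the same.

```python
import random

# Toy environment, invented for the example: states 0..4 in a line;
# stepping onto state 4 earns reward 1.0 and ends the episode.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # move left or right

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Q-table starts at all zeros: no prior knowledge, no training data.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.5  # illustrative settings

for episode in range(200):
    s, done, steps = 0, False, 0
    while not done and steps < 1000:
        # Epsilon-greedy: mostly exploit what's been learned, sometimes explore.
        a = random.randrange(2) if random.random() < epsilon else max((0, 1), key=lambda i: Q[s][i])
        s2, r, done = step(s, ACTIONS[a])
        # Standard Q-learning update, driven only by experienced reward.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s, steps = s2, steps + 1

# The learned policy should prefer action 1 (move right) in states 0..3.
print([max((0, 1), key=lambda i: Q[s][i]) for s in range(GOAL)])
```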
Recently debunked. Marginal increase in productivity — fishfry
Hardly a new idea. Search engines use that technique by dot-producting — fishfry
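A minimal sketch of that retrieval idea, with invented document vectors: embed the query and the documents as vectors, score each document by its dot product with the query, and rank by score.

```python
# Toy vector-space retrieval: rank documents by dot product with the query.
# The 3-dimensional vectors here are made up purely for illustration.
docs = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.2, 0.8, 0.1],
    "doc_c": [0.4, 0.4, 0.4],
}
query = [1.0, 0.2, 0.0]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

ranked = sorted(docs, key=lambda name: dot(docs[name], query), reverse=True)
print(ranked)  # ['doc_a', 'doc_c', 'doc_b'] -- most aligned first
```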
Hello, nice to see a computer scientist on the forum. Would you care to comment on some of my thoughts about computing in this thread? — Shawn
Ditto greeting from me. I'm one myself — noAxioms
Do you equate human-level intelligence with consciousness? — Carlo Roosen
To me, the path is clear: superhuman intelligence is where we're headed. — Carlo Roosen
Consciousness is fundamental: Kastrup believes that consciousness is a fundamental property of the universe, not a product of complex physical systems like the human brain. This means that AI, which is a product of human design and operates on physical principles, cannot inherently possess consciousness. — Wayfarer
Well, that seems contradictory to me. If consciousness is a fundamental property of the universe, everything should be conscious, so a computer that simulates intelligence is also conscious. What its subjective experience is, however, is the subject of discussion. Its subjective experience could be of a simple, low-level kind that merely allows the computer to run its code. I highly doubt that its subjective experience is high-level, such as thoughts, even if its behavior indicates that it is intelligent.
Intelligence can be defined — Carlo Roosen
Do you agree that his statement is contradictory? He stated that consciousness is a fundamental aspect of the universe, yet he claims that a computer is not conscious. — MoK
Read up on Bernardo Kastrup. I can’t break it down for you in a forum post. Try this: https://besharamagazine.org/science-technology/mind-over-matter/ — Wayfarer
I read the article. It does not explain what he means by the claim that consciousness is a fundamental aspect of the universe.
You say human level intelligence ‘can be achieved’ and superhuman intelligence some time after that. Show some evidence you’re not just making it up. — Wayfarer
His critique of materialism isn't hard to agree with. Materialism does posit, ultimately, mathematical abstractions at the bottom of everything and ignores consciousness. But Kastrup's idealism, as expressed in that article, fares no better in that it posits consciousness as fundamental as a solution to ignoring it, but with no real insight into how it interacts with or why it's necessary to interact with matter in order to produce human experience. Or why human experience, which is the origin of the concept of "consciousness", is so special such that this concept turns out to be the most fundamental map of the big picture. So, we're left without the only pieces of the puzzle that actually matter. — Baden
Take them or leave them, materialism and idealism boil down to the same thing, fruitless stories aimed at elevating their storytellers into something they're not nor ever can be, i.e. vessels of wisdom that point to anything of actual significance beyond scientific progress and lived human experience. — Baden
My suggestion was to ignore the topic of consciousness here, but maybe that doesn't work. Especially not if one, like Wayfarer, equates consciousness with intelligence. — Carlo Roosen
But isn't the burden of proof on you, to prove that intelligence and consciousness are connected, as you say? — Carlo Roosen
To me, it’s clear that if computers had their own language, one they could develop for themselves, they would form a worldview independent of ours. Once we implement this language in a neural network, it will be flexible enough to learn things we can’t even imagine. — Carlo Roosen
But aren’t they always connected? Can you provide an example of where they’re not? — Wayfarer
I already did: chess programs and ChatGPT. They have some level of intelligence; that is why we call it AI. And they have no consciousness, I agree with you on that.
You’re assuming a lot there! — Wayfarer
Yes, my challenge is that currently everybody sticks to one type of architecture: a neural net surrounded by human-written code, forcing that neural net to find answers in line with our worldview. Nobody even has time to look at alternatives. Or rather, it takes a free view of the matter to see that an alternative is possible. I hope to find a few open minds here on the forum.