And can intelligence really be defined and measured? I suppose it can be in some respects, but there are different modes of intelligence. A subject may have high intelligence in a particular skill and be deficient in other areas. — Wayfarer
They have some level of intelligence — Carlo Roosen
Question: Is artificial intelligence actually intelligent?
ChatGPT: The question of whether artificial intelligence (AI) is "actually intelligent" hinges on how we define "intelligence." AI systems, particularly those based on machine learning and neural networks, can perform tasks that require sophisticated pattern recognition, problem-solving, and decision-making. They can simulate many aspects of what humans would call intelligent behavior, such as language generation, playing complex games like chess or Go, and analyzing large datasets.
However, this simulation of intelligence doesn't necessarily imply the same kind of intelligence humans possess. AI operates based on algorithms, data processing, and statistical correlations, not conscious understanding or awareness. In contrast, human intelligence includes subjective experience, creativity, emotional insight, and the ability to reflect on one's own thoughts and surroundings—qualities that AI, as it exists today, lacks.
While AI can appear to be intelligent by some practical measures, it is not intelligent in the sense of possessing conscious awareness or genuine understanding. Many consider its intelligence to be an advanced form of calculation, not equivalent to human intelligence.
To me, it’s clear that if computers had their own language, one they could develop for themselves, they would form a worldview independent from ours. Once we implement this language in a neural network, it will be flexible enough to learn things we can’t even imagine. — Carlo Roosen
To make (intelligence) truly defined and measurable, yes, there is a challenge, but I don't see why it would be impossible. We've done it for humans and animals. — Carlo Roosen
My hypothesis is that language plays a key role in thinking. With "I love sushi" I have some debate about that, there are people without language abilities that still show intelligence. So many sides to the topic... — Carlo Roosen
I don't think that the process of thinking requires language. The thinking process is nothing but a neural process in which neurons fire until a pattern is recognized. The process terminates when further processing no longer changes the recognized pattern. You are not aware of the neural processes that occur in your brain when you think. You only become aware of the outcome of the neural process, the so-called idea, once a pattern is recognized and the process of thinking terminates.
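MoK's settle-until-stable description maps onto a classic recurrent-network idea. Below is a minimal sketch in Python of a toy Hopfield-style network; the network size, the single stored pattern, and the amount of input corruption are illustrative assumptions, not anything from the thread:

    import numpy as np

    # Toy Hopfield-style network: one pattern is stored in Hebbian weights,
    # and "thinking" is updating the neurons until the state stops changing.
    rng = np.random.default_rng(0)
    pattern = rng.choice([-1, 1], size=16)    # the stored pattern
    W = np.outer(pattern, pattern)            # Hebbian weight matrix
    np.fill_diagonal(W, 0)                    # no self-connections

    state = pattern.copy()
    state[:4] *= -1                           # corrupt the input slightly

    while True:
        new_state = np.sign(W @ state)        # all neurons "fire" at once
        new_state[new_state == 0] = 1         # break ties consistently
        if np.array_equal(new_state, state):  # further updates change nothing:
            break                             # the process "terminates"
        state = new_state

    print("stored pattern recovered:", np.array_equal(state, pattern))

The loop halts exactly when a further update no longer changes the recognized pattern, which is the termination condition described above.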
Rephrased: "Today, AI developers know how AI works and can predict what it will do." "If they wouldn't know, it wouldn't be AI" - you are saying that it would no longer be artificial? But then: "automaton doing very defined and predictable steps." — Carlo Roosen
That all is pretty much the opposite of what I said, so I guess you don't agree with those quotes.
Do you equate human-level intelligence with consciousness? — Carlo Roosen
It's not my topic, so your definitions of these things (not particularly given) matter more than how others define them.
I agree with your provocative claim that LLMs don't actually know anything. While they can process information and generate text that may seem intelligent, they do not possess true understanding or consciousness. — gemini.google.com
I more or less agree with that regarding LLMs, but not regarding AI (especially future AI) in general. That 'superhuman level' is probably further off than the researchers suspect.
Here's why:
1. Lack of subjective experience: — gemini.google.com
Well, they do have subjective experience, but it is mostly in the form of text. It has none of the senses that animals have, and especially none that might clue it in as to, for instance, where exactly it resides, except to believe what it gets from the training data, which might be outdated. But input is input, which is subjective experience of a sort (unless that, of course, is another forbidden word).
They cannot understand the world in the same way that a human does — gemini.google.com
Of course not. Only a human can do that. Nobody here is asking if AI will ever experience like a human.
2. Pattern recognition: LLMs are essentially pattern recognition machines. — gemini.google.com
As is any intelligence like us. But I pretty much agree with item 2, and with point 3, which seemed to be just more of 2, except this:
current scientific understanding cannot adequately explain this phenomenon
His model explains it even less. It's a complete black box. He argues against the white-box model because it's actually still a grey box, but that's better than what everyone else proposes.
So, the objection appears to be, that body is wholly physical, and mind a non-physical fundamental property - which is something very close to Cartesian dualism. But Kastrup's argument is not based on such a model. Hence my remark. — Wayfarer
I was basing it off of "consciousness is a fundamental property of the universe, not a product of complex physical systems". That makes it sound very much like a non-physical property.
What would imbue it with the will to exist or evolve? — Wayfarer
That's sort of the rub. We can give them such goals. They do what they're told, after all, but then it's our goal, not its own. Ours comes from natural selection. We have no will to evolve, but the will to exist and endure is the product of hundreds of millions of years of elimination of things that lacked this instinct, and it's very strong. Evolution is something nothing seems to actively pursue, except perhaps humans, who sometimes strive to build a better one and sometimes vehemently resist it. But it's not something a biological individual can do, at least not anything descended from eukaryotes. Oddly enough, it is something a machine can do, but only due to the fuzzy line defining 'individual'.
If we know how humans think, we can simulate thinking using a neural network — MoK
It can be simulated even if one doesn't know how it works.
I looked at the link provided, and he comes across more as an idealist, — noAxioms
I have issue with not using 'understanding' since it would seem impossible to pass a high school exam on a subject without any understanding of the subject, and yet gemini could do so. — noAxioms
That all is pretty much the opposite of what I said — noAxioms
AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do. It really wouldn't be AI at all if it were otherwise, — noAxioms
Today, AI developers know how AI works and can predict what it will do" "If they wouldn't know, it wouldn't be AI" — Carlo Roosen
This means that you didn't simulate any system in your life. Did you? — MoK
He most likely means that we can try out different ideas until it starts thinking in a way that we like. — Carlo Roosen
Trying different ideas means that you have something in your mind about how the simulation should work. Also, what do you mean by "it starts thinking"?
If we know how humans think, we can simulate thinking using a neural network — MoK
It can be simulated even if one doesn't know how it works — noAxioms
All he is saying is that there exists an exploratory approach to these kinds of problems. You start with an idea, try it out in simulation, and continue from there, until you have something you like.
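The exploratory loop described here fits in a few lines of search code. A minimal sketch in Python, where the idea encoding, the mutation step, and the scoring function are all hypothetical stand-ins:

    import random

    def try_in_simulation(idea):
        # Hypothetical stand-in for "try it out in simulation": score
        # how much we like the resulting behavior (higher is better).
        return -abs(idea - 3.0)

    best_idea = random.uniform(-10, 10)              # start with an idea
    best_score = try_in_simulation(best_idea)

    for _ in range(1000):                            # continue from there...
        candidate = best_idea + random.gauss(0, 1)   # vary the idea a little
        score = try_in_simulation(candidate)
        if score > best_score:                       # ...until you have
            best_idea, best_score = candidate, score # something you like

    print(best_idea, best_score)

Note that the loop only ever observes simulation outcomes; it never needs a theory of how the simulated system works internally, which is the point of the quote it responds to.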
If you mean, Bernardo Kastrup — Wayfarer
The section on quantum mechanics has the sadness/tears analogy, which is pretty appropriate concerning the relationship between the empirical world (tears) and the way things actually are (sadness). But the same analogy can be applied to what he calls the materialistic view. The two opposing views are just different viewpoints on the same thing, with different definitions being used, not a fundamentally different view.
But [the high school test-taking chatbot] wouldn't understand that it had [passed the test]!
Neither would the human student until he saw it graded. I notice you deflected the comment and didn't actually deny that passing the test could be done without any understanding of the subject matter. It acquires this understanding the same way a human does: by studying training materials, materials to which it has no access while taking the test.
It wouldn't, then, get ready for college, decide on a subject, move out of home, and, you know, get a life :-)
No, but nobody claimed a chatbot has the goal of leading a human life. None of those steps is a requirement for 'understanding'.
This means that you didn't simulate any system in your life. Did you? — MoK
Many, actually. Professionally, chip simulations (large-scale gate arrays) and traffic simulations, where you find out what effect various programmings of a traffic light, or the addition of an extra lane, have on the flow of traffic. No, I've not simulated a biological system at the neurological level, only at the environmental level, and that not professionally.
All he is saying is that there exists an exploratory approach to these kinds of problems. — Carlo Roosen
No, not saying that. I mean that if you know the physical arrangement of matter that makes up a being and its environment, that model can be simulated by just running the physics. Presuming a monist philosophy, that should work, and they've done it with smaller creatures, but not a human, since a human requires more data than a computer can currently handle, and we haven't a model of a human down to the molecular level. I'm not sure it can be done at the neuro-chemical level, since it is hard to model growth and change at that level. But at a molecular level one doesn't need a scan of a human at all. You can just grow one from scratch and let it develop the way a real human does.
don't wait for science to tell you how thinking works before you start building a (super-)human AI.
Agree, since AI thinking has no reason to do it the human way.
All he is saying is that there exists an exploratory approach to these kinds of problems. You start with an idea, try it out in simulation, and continue from there, until you have something you like. — Carlo Roosen
So you start with an idea! Don't you?
Please also answer my question. — Carlo Roosen
I scanned every reply to me from you, and the only question ever asked was in the first reply:
"If they (the writers of the AI code) wouldn't know, it wouldn't be AI" - you are saying that it would no longer be artificial? — Carlo Roosen
Quick answer: No, it was made by humans, so it is artificial. But I was trying to convey that a machine doing predictable things, say a series of explicitly programmed (and thus predictable) steps, is not really doing any innovating, not being intelligent. Hence it is not a real AI.
AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do. It really wouldn't be AI at all if it were otherwise, but rather just an automaton doing very defined and predictable steps. Sure, they might program the ability to learn, but not what it will learn or what it will do with its training materials. And the best AIs I've seen, with limited applicability, did all their learning from scratch, without training material at all. — noAxioms
Here is my question: The "rephrase" I gave seems to me identical to what you say, — Carlo Roosen
The rephrase seems to me to say the opposite, and you link to me saying that.