Do you think artificial consciousness/sentience is possible without understanding exactly how consciousness works? — Benj96

Not only possible; it has been here for quite some time already, unless you presume a definition of 'consciousness/sentience' that amounts to 'is human', like so many others do, in which case AI can surpass us all it wants, but it will never be conscious/sentient by that definition.
Computer scientists say that if consciousness is simply an emergent property of complexity and information processing, then it stands to reason that artificial neural networks with millions of neurons and processing units will naturally become aware when fed large volumes of data and allowed to learn, evolve, and refine their circuitry.

That sounds like a quantity-over-quality definition. I think there have been artificial networks with more switching events per second than a human has neuron firings. On a complexity scale, a single cell of, say, a worm is arguably more complex than the network of such cells serving as a human brain, which is actually pretty simple: a scaled-up quantity of fairly simple primitives. It certainly took far longer to evolve the worm cell than it took to evolve the human-scale neural network from the earliest creatures with neurons.
...something that acts perfectly like a humanoid being would, without an actual internal experience or any feelings of its own.

Ah, there's that 'is a human' definition. Pesky thing. Why would something not human be expected to act like a human? I'd hope it would be far better. We don't seem capable of any self-improvement as a species. The AI might do better. Bring it on.
Lastly, do you think AI has more chance of being beneficial or of being detrimental to humanity?

Depends what its goals are. Sure, I'd worry, especially if 'make the world a better place' is one of its goals. One of the main items on that list is perhaps to eliminate the cause of the Holocene extinction event. But maybe it would have a different goal, like 'preserve the cause of the Holocene extinction event, at whatever cost', which would probably put us in something akin to a zoo.
Do you think artificial consciousness/sentience is possible without understanding exactly how consciousness works? — Benj96

AI / AGI does not need to be "conscious" (whatever that means) to function, and would probably be more controllable by us / by themselves, as well as better off, without "consciousness" as a (phenomenological? affective?) bottleneck.
Lastly, do you think AI has more chance of being beneficial or of being detrimental to humanity? What do you think AI would offer us - huge advancement, or sinister manipulation and slavery? — Benj96

Both, as I've pointed out. :nerd: And anyway, aren't cripples in some sense "slaves" to the crutches which make / keep them crippled?
Do you think artificial consciousness/sentience is possible without understanding exactly how consciousness works? — Benj96

Possible? Maybe. Probable? Who knows? Advisable? Pioneers are seldom deterred by lack of understanding. Dangerous? A shot in the dark is always perilous.