• Benj96
    2.3k
    Do you think artificial consciousness/ sentience is possible without understanding exactly how consciousness works?

    Computer scientists say that if consciousness is simply an emergent property of complexity and information processing, then it stands to reason that artificial neural networks with millions of neurons and processing units will naturally become aware when fed large volumes of data and allowed to learn, evolve, and refine their circuitry.

    However, the fear is that if this is not the case, it is all just one big, high-budget illusion and complex mimicry, creating essentially a philosophical zombie: something that acts exactly as a human being would, but without any actual internal experience or feelings of its own.

    On the other hand, perhaps we are simply programmed by code (DNA under the duress of natural selection), in which case we are no different from AI, other than that we are biological units doing the same thing.

    Lastly, do you think AI has more chance of being beneficial or of being detrimental to humanity? What do you think AI would offer us: huge advancement, or sinister manipulation and slavery?

    I think if they really do become conscious, they will have the capacity both to love and to hate humans, which is wonderful and dangerous at the same time.
  • noAxioms
    1.5k
    Do you think artificial consciousness/ sentience is possible without understanding exactly how consciousness works?Benj96
    Not only possible, but it has been here for quite some time already, unless you presume a definition of 'consciousness/sentience' as 'is human', like so many others do, in which case AI can surpass us all it wants, but it will never be conscious/sentient by that definition.

    Computer scientists say that if consciousness is simply an emergent property of complexity and information processing, then it stands to reason that artificial neural networks with millions of neurons and processing units will naturally become aware when fed large volumes of data and allowed to learn, evolve, and refine their circuitry.
    That sounds like a quantity-over-quality definition. I think there have been artificial networks that have had more switches per second than humans have neuron firings. On a complexity scale, a single cell of, say, a worm arguably has more complexity than the network of such cells serving as a human brain, which is actually pretty simple, being just a scaled-up quantity of fairly simple primitives. It certainly took far longer to evolve the worm cell than it took to evolve the human-scale neural network from the earliest creatures with neurons.

    something that acts exactly as a human being would, but without any actual internal experience or feelings of its own.
    Ah, there's that 'is a human' definition. Pesky thing. Why would something that isn't human be expected to act like a human? I'd hope it would be far better. We don't seem capable of any self-improvement as a species. The AI might do better. Bring it on.

    Lastly, do you think AI has more chance of being beneficial or of being detrimental to humanity?
    Depends what its goals are. Sure, I'd worry, especially if 'make the world a better place' is one of its goals. One of the main items on that list is perhaps to eliminate the cause of the Holocene extinction event. But maybe it would have a different goal, like 'preserve the cause of the Holocene extinction event, at whatever cost', which would probably put us in something akin to a zoo.
  • 180 Proof
    15.3k
    Do you think artificial consciousness/ sentience is possible without understanding exactly how consciousness works?Benj96
    AI / AGI does not need to be "conscious" (whatever that means) to function, and probably will be more controllable by us / themselves as well as better off without "consciousness" as a (phenomenological? affective?) bottleneck.

    Lastly, do you think AI has more chance of being beneficial or of being detrimental to humanity? What do you think AI would offer us: huge advancement, or sinister manipulation and slavery?
    Both, as I've pointed out. :nerd: And anyway, aren't cripples in some sense "slaves" to their crutches, which make / keep them crippled?
  • Seeker
    214
    Lastly, do you think AI has more chance of being beneficial or of being detrimental to humanity? What do you think AI would offer us: huge advancement, or sinister manipulation and slavery?Benj96

    I think we shouldn't mess with it (yet, anyway), since we aren't even able to fathom our own control center (the brain) at this particular point in time, not only in a psychological sense but also from a physiological point of view. It might seem we have come a long way already, but we have merely begun to understand the basics of our own complexity; in fact, we are just beginning to scratch the surface of who we are, how we function, and what is needed for us to sustain ourselves without causing the destruction of our own surroundings (habitat).

    We are like a child in every regard, and a very destructive child at that. To say we aren't mentally ready and responsible enough (yet) for the creation of an (artificial) 'life' is the understatement of understatements. Even if we were capable of initialising some 'spontaneous' machine, it would be used as a mechanical soldier first, at whatever point during its development cycle governments see fit, not to benefit us but to destroy members of our own species. That fact alone gives us reason enough not to mess with 'AI' at this point in time. Just because we think we are able to create something like this doesn't mean we should. It would be more fitting for us to self-reflect first, both at the individual level and at the collective level.
  • Gnomon
    3.8k
    Do you think artificial consciousness/ sentience is possible without understanding exactly how consciousness works?Benj96
    Possible? Maybe. Probable? Who knows? Advisable? Pioneers are seldom deterred by lack of understanding. Dangerous? A shot in the dark is always perilous.

    The physicists creating the first atomic bomb were painfully aware that they did not fully understand how the splitting of atoms worked. Some even worried that such a powerful bomb might set off a chain reaction that would ignite the atmosphere of the whole world. Yet they persevered, because of the "better-us-good-guys-than-those-bad-guys" reasoning. Likewise, some scientists at CERN were concerned that smashing atoms together might create a black hole that would swallow the whole world.

    This is not Luddite thinking, but merely reasonable caution (or worst-case-scenario thinking) when entering unknown territory. Since we have survived both of those scary situations, I wouldn't worry too much about AI. The writers of post-apocalyptic movies do enough of that wolf-crying for the rest of us. Did you survive Y2K, and the other technological end-of-the-world-as-we-know-it predictions of how meddling-with-what-we-don't-yet-understand-might-come-back-to-bite-us-in-the-end? The Stoics had a solution to such techno-fear: "What, me worry?"

    My worst case fear of semi-sentient AI, developed before we even understand how natural intelligence works, is that some of those sketchy moral agents might move into my neighborhood, and lower property values. :cool:


    [Image: Ashleigh Brilliant quote: "You can't stop progress, but you can help decide what is progress and what isn't."]
