• Nemo2124
    29
    We can basically fully map bacterial behavior with traditional computing algorithms that don't require advanced neural networks. And we've been able to scale up the cognition to certain insects using these neural models. But as soon as the emotional realm of our consciousness starts to emerge in larger animals and mammals, we start to hit a wall in which we can only simulate complex reasoning on the level of a multifunctional super-advanced calculator.
    Christoffer

    This is all very well, but the question that remains is the one concerning subjectivity. At what point does the machine suddenly become self-aware? Do we have to wait for additional complexity and advancement, or do we already have enough understanding to begin formulating our stance on machine subjectivity and our recognition of it? To my mind, and this is the key, we have already begun to recognise subjectivity in the AI, even if the programming is relatively basic for now. This recognition of the AI is what fuels its development. AI-robots may also think differently from the way we do: more logically, with faster processing, and relatively emotion-free.

    So we are dealing with a supercomputer AI that is constantly rebounding off our intelligence and ultimately advancing to our level, although with noticeably different characteristics (silicon-based transistor technology). What's perhaps more important is how this throws our own subjectivity into question as well. So we, too, are on a track to advancement. How do we end up becoming subjects or selves? What caps off our intelligence, and what are the limits of our self-awareness? Is our consciousness having to expand to account for more of the Universe, perhaps? Technology seems to be key here, the obvious development being how our minds are adapting to work in tandem with the smartphone, for example.

    That said, I think we are reaching some sort of philosophical end-point. The great philosophers of the past have provided us with some sort of foundation to build on through commentary. We can't expect the emergence of a systematiser like Hegel nowadays, but his master-slave dialectic seems instructive for how we relate to AI-robots. Finally, perhaps we could move forward via a metaphysical turn and consider the 'Deus ex Machina' argument: what is animating, fundamentally driving, this futuristic expansion in AI? I think AI is one of the biggest shifts I have encountered in my lifetime, and I am excited to see what will happen in the future.
  • ssu
    8.6k
    This is all very well, but the question that remains is the one concerning subjectivity. At what point does the machine suddenly become self-aware?
    Nemo2124
    You should first answer: when do people or animals become self-aware? Is there a universal agreement on how this happens?
  • Nemo2124
    29
    That's the point entirely. Are we not just as susceptible to this argument, our thoughts merely elaborate computations, with emotions that simply shift our reference frame? Nothing apart from language sets us apart from the animals, and even that seems a merely basic formulation. The question about human consciousness follows immediately when we consider machine self-awareness.