“I believe consciousness is simply what it feels like to have a neocortex.” — praxis
Neuroreductionism.
The better answer is that consciousness is simply what it is like to be a self living in its world.
So the neocortex, and the rest of the brain, are a necessary part of the hardware equation. But being "a mind" is how the neocortex, the rest of the brain, and even the entire body - and with humans, the whole damn sociocultural edifice - get to pay for their biological existence.
Consciousness is the modelling relation an organism has with its environment. An engorged neocortex is what you can afford only if it pays its way in a nutrition and survival dividend.
Brains burn through energy like working muscle, even when idling - the human brain is roughly 2 per cent of body mass yet claims around 20 per cent of resting metabolic energy. So metabolic cost is something we have to consider when it comes to AI. An artificial mind would also be one that is paying close attention to its own organismic existence. It would have to be smart in the sense of earning its entropic keep.
Of course, here in the real world, humans build machines to amplify their own power to exist. They are an investment meant to serve our entropic existence. We want AI in the form of extensions to our reach, not as some rival class of organisms living in the same world, equipped with the kind of minds - or modelling relations - that might allow them that level of mental independence.
If we build actual AI, then we are just proving ourselves stupid.
Animals have consciousness but not reasoning like we do. — Gregory
Animals have reason. They have genetic- and neural-level models of the world they live in that work because they are "reasonable" in the pragmatic sense.
So what humans have got is the extra semiotic modelling capacity that comes with having developed speech and maths - codes based on words and numbers, layered on top of the codes based on genes and neurons.
Words allow humans to organise in a properly organismic fashion - as one shared mind - at the scale of the social organism.
Then maths/logic became the even more abstracted and universalised symbol system that led to a civilised and technological version of this social order - one that amplified its entropic reach through machinery like steam engines and Turing computation.
So "consciousness" is an unhelpful term here. It presumes that the mind is some kind of special Cartesian substance which has properties like "an introspective glow of awareness".
Neuroscientists avoid using it. Computer scientists are not so bashful, but even they started to limit themselves to artificial "intelligence" once they were asked to put up or shut up.
Neuroscience has now got quite used to understanding consciousness and reasoning in terms of embodied semiosis - the enactive turn and the Bayesian brain. So it ain't about having a neocortex. It is about there being some level of reality modelling that an organism can pragmatically afford.
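To make that modelling idea concrete, here is a toy sketch - my own illustration, not a model taken from the neuroscience literature - of the Bayesian brain at its simplest. A creature holds a belief about some hidden state of its world, and each noisy sensation shifts that belief in proportion to the prediction error, weighted by how uncertain the creature currently is. All names and numbers are made up for the example.

```python
def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Fold one noisy observation into a Gaussian belief."""
    k = prior_var / (prior_var + obs_var)            # how much to trust the senses
    post_mean = prior_mean + k * (obs - prior_mean)  # shift belief by prediction error
    post_var = (1 - k) * prior_var                   # uncertainty shrinks with evidence
    return post_mean, post_var

# Start vague about, say, where the food is, then sample the world.
mean, var = 0.0, 10.0
for obs in [2.1, 1.8, 2.3, 2.0]:   # noisy sensory readings
    mean, var = bayes_update(mean, var, obs, obs_var=1.0)
    print(f"belief: {mean:.2f} +/- {var:.2f}")
```

The mechanism itself is cheap; the pragmatic question being pressed here is how many layers of such modelling an organism can afford to stack.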
Humans stumbled into language and technology - fire, spears, shelters, baskets - as a new sociocultural way of life. They could then afford a much bigger brain because this new level of semiosis filled their bellies with a much more calorie-dense diet.
Reason for me is the ability to grasp an idea with the necessary means of consciousness. ... If AI can do complex thoughts it's possible it does so by another means than by way of consciousness. — Gregory
You are describing how one level of semiosis gets stacked on another.
So the brain does the neurosemiosis. It gives you an animal level of intelligence, insight, habit learning, recognition memory, etc.
Then language and logic are further levels of world modelling where we humans learn to stand outside our animal or biological level of ideation to take an objective - or rather, social and technical - view of the deal.
We learn the habit of thinking about what we are doing first from the point of view of society, which looks at our rather animalistic desires and reactions and passes some kind of more rational collective judgement.
And then we up it even more by living in a society that has learnt to stand back even from the embodied social point of view to consider the problems of existence from the point of view of a world ruled by the abstractions of numbers and logic. We become part of a civilisation that wants society to run itself in a technocratic and enlightened fashion.
Again, where does AI fit into this natural arc of mental development? In what way does it pave the way to some even higher level of semiotic intelligence?
Even for a computer scientist, this is the kind of question that needs to be answered.
IBM might self-advertise by cranking out gadgets that can win at chess, or even Go and bridge. But chucking lumps of circuitry - even biologically-inspired circuitry like neural nets - at the public is a big fake.
Replicating what brains do is just rehashing neurosemiosis. Where is AI's sociosemiosis, or technosemiosis? What social world would make sense of these neural machines?
Anyone can talk about making conscious machines as some kind of sci-fi engineering project. But actual AI ain't even a thing until we see the social engineering - the blueprint of the world in which this hardware even makes sense, pragmatically speaking.