1. The computational approach appears to have the greatest explanatory power of the various alternatives out there. — Malcolm Lett
Having read more of it in detail, my criticism remains. Even as a computational approach, it is the wrong computational approach.
You are thinking of the brain as something that takes sensory input, crunches that data and then outputs a "state model" - a conscious representation.
So a simple input/output story that results in a "Cartesian theatre" where awareness involves a display. But then a display witnessed by whom?
And an input/output story that gives this state model top billing as "the place where all data would want to be" as that is the only place it gets properly appreciated and experienced.
But biology-inspired computation - the kind that Stephen Grossberg in particular pioneered - flips this around. The brain is instead an input-filtering device. It is set up to predict its inputs with the intent of being able to ignore as much of the world as it can. So the goal is to be able to handle all the challenges the world can throw at it in an automatic, unthinking and involuntary fashion. When that fails, attentional responses have to kick in and do their best.
So it is an output/input story. And a whole brain story.
The challenge in every moment is to already know what is going to happen and so have a plan already happening. The self is then felt as that kind of prepared stability. We know we are going to push the door open and exactly how that is going to feel. We feel that embodied state of being.
And then the door turns out to be covered in slime, made of super-heavy lead, or it's a projected hologram. In that moment, we won't know what the fuck is going on - briefly. The world is suddenly all wrong and we are all weird. Then attention gets to work and hopefully clicks things back into place - generating a fresh state of sensorimotor predictions that now do mesh with the world (and with our selves as beings in that world).
But this attentional level awareness is not "consciousness" clicking in. It is just the catch-up, the whole brain state update, required by a failure to proceed through the door in the smooth habitual way we had already built as our own body image.
Consciousness is founded on all the things we don't expect to trouble us in the next moment as much as on the discovery that there is almost always something unexpected, novel, significant, etc, within what we had generally anticipated.
That is why I say it is holistic. What you managed to ignore or deal with without examination - which is pretty much everything most of the time - is the iceberg of the story. It is the context within which the unexpected can be further dealt to.
As I say, this is a well developed field of computational science now - forward modelling or generative neural networks. So even if you want to be computational, you haven't focused on the actually relevant area of computer science - the one that founded itself on claims of greater biological realism.
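To be concrete about what that field actually computes, here is a minimal sketch of the output/input logic - my own toy illustration in Python, with made-up numbers, not anything from Lett's paper. The prediction comes first, inputs that match it get ignored, and attention only kicks in as the catch-up when the error is too big to absorb.

```python
import numpy as np

# A toy "generative model": the door is expected to swing open with this force profile.
model = np.array([1.0, 0.2])   # hypothetical internal parameters
threshold = 0.5                # error tolerance before attention has to kick in
learning_rate = 0.2

def step(model, sensory_input):
    """One predict-first cycle: the output (prediction) comes before the input."""
    prediction = model                      # forward model generates the expectation
    error = sensory_input - prediction      # mismatch with what actually arrives
    if np.linalg.norm(error) < threshold:
        return model, "handled habitually"  # world safely ignored, plan already running
    # Surprise (slimy / leaden / holographic door): the attentional catch-up,
    # i.e. revise the model until predictions mesh with the world again.
    return model + learning_rate * error, "attentional update"

# Expected door: nothing to notice.
print(step(model, np.array([1.05, 0.18])))
# Unexpected door: the whole brain state gets updated.
print(step(model, np.array([5.0, 0.2])))
```

The point of the toy is only that the input matters to the system just to the degree it fails to be predicted away.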
furthermore it offers predictions about what we'll discover as neuroscience develops. It provides explicit mechanisms behind why we are aware of certain things, and not aware of others. — Malcolm Lett
Errm, no. You would have to show why you are offering a sharper account than that offered by a Bayesian Brain model of attentional processing for instance.
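And to show what that standard account looks like in outline - again just my own toy illustration, a single Gaussian variable rather than anything neurally realistic - attention falls out as the precision weighting in an ordinary Bayesian update:

```python
def bayes_update(prior_mean, prior_precision, obs, obs_precision):
    """Precision-weighted Gaussian update: the 'Bayesian brain' in two lines.

    Attention, on this story, is just the gain (obs_precision) the system
    puts on a sensory channel - how much it is currently set to care.
    """
    posterior_precision = prior_precision + obs_precision
    posterior_mean = (prior_precision * prior_mean + obs_precision * obs) / posterior_precision
    return posterior_mean, posterior_precision

# Low sensory gain: the prior (habitual expectation) dominates.
print(bayes_update(prior_mean=1.0, prior_precision=10.0, obs=5.0, obs_precision=0.5))
# High sensory gain (attention turned up): the observation dominates.
print(bayes_update(prior_mean=1.0, prior_precision=10.0, obs=5.0, obs_precision=100.0))
```

Any rival account of why we are aware of some things and not others has to beat that level of explicit mechanism.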
2. I have not seen a non-computational theory provide this level of detail. — Malcolm Lett
And you've looked?
Besides....
We appear to perceive certain external and internal senses and data sources, while not perceiving others. For example, we don't have direct access to arbitrary long term memories and it seems quite reasonable to assume that access to long term memory requires some sort of background lookup
....is an example of the sketchiness of any "level of detail".
Basic "psychology of memory" would ask questions like are we talking about recognition or recollection here? I could go on for hours about the number of wrong directions this paragraph is already headed in from a neurobiological point of view.
Perhaps it is more accurate to say that a computational theory of the brain and consciousness has the best ability to "model" the observed behaviours (internal and external), enabling us to do useful things with that modelling capability; however it may not form a "complete" theory. — Malcolm Lett
Sure. But I've seen countless cogsci flow chart stories of this kind - back in the 1980s, before folk thankfully returned to biological realism.
That statement assumes that semiosis only applies to language. — Malcolm Lett
I said language was a "new level" of semiosis. So I definitely was making the point that life itself is rooted in semiosis - biosemiosis - and mind, in turn, is rooted in neurosemiosis, with human speech as yet a further refinement of all this semiotic regulation of the physical world.
I would suggest that the mechanisms I have proposed are another example of semiotics. I probably haven't got the split quite right, but as an attempt at the style of Pattee:
* sense neurons produce a codified state having observed an object
* other neurons interpret that codified state and use them for control
* the codified state has no meaning apart from what the system interprets of it — Malcolm Lett
That's not it.
But look, if your interest is genuine, then stick with Pattee.
I think I said that I did all the neurobiology, human evolution, and philosophy of mind stuff first. I was even across all the computer science and complexity theory.
But hooking up with Pattee and his circle of theoretical biologists was when everything fully clicked into place. They had a mathematical understanding of biology as an information system. A clarity.
I'm not actually sure where the problem is here. I see the two as complementary. As I have stated in my paper, the overall problem can be seen from multiple complementary views: mechanistic/computational view ("logical" in my paper), and biological ("physical" in my paper). — Malcolm Lett
In some sense, the machinery is complementary to the physics. But that is what biosemiosis is about - the exact nature of that fundamental relationship.
So it is not just about having "two views" of the phenomenon - "pick whichever, and two is better than one, right?"
My claim here is that the only foundationally correct approach would be - broadly - biosemiotic. Both life and mind are about informational constraints imposed on dynamical instability. Organisms exist because they can regulate the physics of their environment in ways that produce "a stable self".
And (Turing) computation starts out on the wrong foot because it doesn't begin with any of that.