This paper deserves serious replies. — JerseyFlight
It provides the foundation for an up-to-date discussion. — JerseyFlight
What one's State Machine is becomes that way through a concrete material social process; there is no way around this, and it makes a huge difference when it comes to the way we view humans and approach education. — JerseyFlight
The theory presents a mechanistic account of consciousness. — Malcolm Lett
The existence of the feedback path is the explanation for why we have awareness of our thoughts. — Malcolm Lett
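To make the feedback-path idea concrete, here is a minimal sketch (my own illustration, not code or terminology from the paper) of a state machine that receives a summary of its previous internal state as part of its next input, so that later processing can operate on, and in that sense be "aware of", earlier processing:

```python
# A minimal sketch (my illustration, not Malcolm Lett's actual model) of a
# state machine with a feedback path: each step's input includes a summary
# of the machine's own previous internal state, so later processing can
# operate on ("be aware of") earlier processing.

class FeedbackMachine:
    def __init__(self):
        self.state = {"last_thought": None}

    def step(self, external_input):
        # The feedback path: the previous internal state is fed back in
        # alongside the external input.
        combined = (external_input, self.state["last_thought"])
        thought = f"processed {combined}"
        self.state["last_thought"] = thought  # becomes next step's input
        return thought

m = FeedbackMachine()
print(m.step("red apple"))  # first-order processing of the world
print(m.step("red apple"))  # now also "sees" its own prior processing
```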
Would it be possible to add a short summary in the form of bullet points? Oh, that's right. Yes. It is. Could you please do so? Thanks! — Outlander
I don't think so, though I shouldn't answer for someone else. I'm almost finished with this intricate paper. It would be very hard to reduce what he has here to bullet points because the argument builds on itself stage by stage. — JerseyFlight
A computational approach builds in this basic problem. A neurobiological approach never starts with it. — apokrisis
We are very different as we evolved a capacity for syntactic speech. And that new level of semiosis is what allows a socially-constructed sense of self-awareness. — apokrisis
So the question is why you would even pursue a mechanistic theory in this day and age? Why would you not root your theory in biology? — apokrisis
1. The computational approach appears to have the greatest explanatory power of the various alternatives out there. — Malcolm Lett
Furthermore, it offers predictions about what we'll discover as neuroscience develops. It provides explicit mechanisms behind why we are aware of certain things, and not aware of others. — Malcolm Lett
2. I have not seen a non-computational theory provide this level of detail. — Malcolm Lett
We appear to perceive certain external and internal senses and data sources, while not perceiving others. For example, we don't have direct access to arbitrary long-term memories, and it seems quite reasonable to assume that access to long-term memory requires some sort of background lookup. — Malcolm Lett
Perhaps it is more accurate to say that a computational theory of the brain and consciousness has the best ability to "model" the observed behaviours (internal and external), enabling us to do useful things with that modelling capability; however, it may not form a "complete" theory. — Malcolm Lett
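The "background lookup" point lends itself to a toy illustration. The sketch below is an assumption-laden simplification (the memory store, cue format, and scoring rule are all invented for the example): recall issues a cue, and a separate mechanism returns the best match, rather than the rememberer addressing memory directly.

```python
# A toy sketch of the "background lookup" idea: the conscious process does
# not address long-term memory directly; it issues a cue and a separate
# lookup mechanism returns the best-matching record. The store and scoring
# here are my own illustration, not part of the paper.

LONG_TERM_MEMORY = {
    ("beach", "2019"): "holiday at the coast",
    ("exam", "2015"): "final physics exam",
}

def background_lookup(cue_words):
    # Score each memory by how many cue words its key shares.
    best, best_score = None, 0
    for key, record in LONG_TERM_MEMORY.items():
        score = len(set(cue_words) & set(key))
        if score > best_score:
            best, best_score = record, score
    return best  # None when no memory matches the cue

print(background_lookup(["beach"]))    # -> "holiday at the coast"
print(background_lookup(["lottery"]))  # -> None: no direct access
```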
That statement assumes that semiosis only applies to language. — Malcolm Lett
I would suggest that the mechanisms I have proposed are another example of semiotics. I probably haven't got the split quite right, but as an attempt at the style of Pattee:
* sense neurons produce a codified state having observed an object
* other neurons interpret that codified state and use them for control
* the codified state has no meaning apart from what the system interprets of it — Malcolm Lett
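To make that three-part split concrete, here is a toy sketch. It is my own reading of the Pattee-style division quoted above, not code from the paper, and the temperature example is invented for illustration:

```python
# A minimal sketch of the three-part split above (my illustration of the
# Pattee-style reading, not the paper's code): a "sense" stage emits an
# arbitrary code, a "control" stage interprets it, and the code means
# nothing outside that interpretation.

def sense(temperature_c):
    # Sense neurons: observe the world and emit a codified state.
    # The token itself ("H"/"L") is arbitrary.
    return "H" if temperature_c > 30 else "L"

def control(code):
    # Other neurons: interpret the code and use it for control.
    # Meaning exists only in this interpretive step.
    return "seek shade" if code == "H" else "keep foraging"

code = sense(35)
print(code, "->", control(code))  # H -> seek shade
```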
I'm not actually sure where the problem is here. I see the two as complimentary. As I have stated in my paper, the overall problem can be seen from multiple complimentary views: mechanistic/computational view ("logical" in my paper), and biological ("physical" in my paper). — Malcolm Lett
I considered doing that and then deleted it. I knew if I wrote a summary then people would read that and jump to conclusions without reading the whole paper. — Malcolm Lett
Perhaps a bit outrageously, I am suggesting that the divide between conscious and not-conscious, intentional and unintentional, is difficult to actually define. — SaugB
How come, when you are consciously or actively recalling that girl's face, the dress she wore also features in your mental picture, without you having to consciously recall it? — SaugB
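One conventional computational answer to SaugB's question is pattern completion: if an episode is stored as a bound set of features, a partial cue retrieves the whole set. The sketch below is a deliberately crude illustration of that idea (the episodes and the matching rule are invented), not a claim about what SaugB or the paper intends.

```python
# A toy sketch of why recalling one feature drags its companions along:
# if episodes are stored as bound feature sets, a partial cue retrieves
# the whole episode (pattern completion). My illustration, not SaugB's
# or the paper's mechanism.

EPISODES = [
    {"face": "that girl", "dress": "blue dress", "place": "the party"},
    {"face": "the teacher", "dress": "grey suit", "place": "school"},
]

def recall(**cue):
    # Return the first stored episode consistent with the partial cue.
    for episode in EPISODES:
        if all(episode.get(k) == v for k, v in cue.items()):
            return episode
    return None

# Cueing only the face retrieves the dress "without consciously recalling it".
print(recall(face="that girl"))
# -> {'face': 'that girl', 'dress': 'blue dress', 'place': 'the party'}
```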
But biology-inspired computation - the kind that Stephen Grossberg in particular pioneered - flips this around. The brain is instead an input-filtering device. It is set up to predict its inputs with the intent of being able to ignore as much of the world as it can. So the goal is to be able to handle all the challenges the world can throw at it in an automatic, unthinking and involuntary fashion. When that fails, then attentional responses have to kick in and do their best. — apokrisis
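The input-filtering picture can be sketched in a few lines. The predictor and threshold below are my own simplifications, not Grossberg's models: the system ignores inputs it predicts well and escalates to an "attentional" handler only when prediction fails.

```python
# A minimal sketch of the input-filtering idea: predict each input, ignore
# it when the prediction error is small, and escalate to "attention" only
# when prediction fails. The constants and the running-average predictor
# are my own simplifications.

def predictive_filter(inputs, threshold=5.0):
    prediction = inputs[0]  # crude initial model of the world
    for x in inputs:
        error = abs(x - prediction)
        if error > threshold:
            print(f"attention! input {x} surprised the model by {error:.1f}")
        # else: handled automatically and ignored.
        # Update the model so this input is more predictable next time.
        prediction = 0.9 * prediction + 0.1 * x

# A steady stream is ignored; the outlier triggers attention.
predictive_filter([20, 21, 20, 22, 50, 21])
```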
I may be misunderstanding which particular kind of neural network you're referring to, but it sounds like artificial neural networks such as the "deep learning" models used in modern AI. Some in the field think that these have plateaued or will plateau. We have no way to extend them to the capability of artificial general intelligence. Examples like AlphaGo and AlphaZero are amazing feats of computational engineering, but at the end of the day they're just party tricks. — Malcolm Lett
As I say, this is a well developed field of computational science now - forward modelling or generative neural networks. So even if you want to be computational, you haven't focused on the actually relevant area of computer science - the one that founded itself on claims of greater biological realism. — apokrisis
My claim here is that the only foundationally correct approach would be - broadly - biosemiotic. Both life and mind are about informational constraints imposed on dynamical instability. Organisms exist because they can regulate the physics of their environment in ways that produce "a stable self". — apokrisis
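The "stable self through regulation" claim has a familiar computational caricature: a homeostatic controller holding an internal variable steady against environmental perturbation. The sketch below is only that caricature (the setpoint, gain, and noise are invented), not apokrisis's biosemiotic model.

```python
# A toy sketch of "informational constraint over dynamical instability":
# a regulator holds an internal variable stable against an environment
# that keeps perturbing it. The proportional controller and all numbers
# are my own illustration of the claim.

import random

random.seed(0)
temperature = 37.0  # the "stable self" to be maintained
SETPOINT = 37.0

for step in range(5):
    perturbation = random.uniform(-2.0, 2.0)     # unstable environment
    temperature += perturbation
    correction = 0.8 * (SETPOINT - temperature)  # informational constraint
    temperature += correction
    print(f"step {step}: temperature={temperature:.2f}")
```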
I'm fairly confident that both the input/output and output/input viewpoints are equally accurate and correct. — Malcolm Lett
What's your stance on the hard problem of phenomenal experience? If I'm understanding you correctly, you're suggesting an explanation that is just as materialist as my own (ie: no metaphysical). So it should suffer the same hard problem. — Malcolm Lett
You suggested in another post that a "triadic" model somehow avoids both the hard problem and the need to resort to metaphysics, but it isn't clear to me how that works. — Malcolm Lett
I just wanted to say that I really appreciate your comments. I always find new avenues for learning that come from them. — Malcolm Lett
Hi. Your theory states that some animals are conscious, but not others. I wonder where you drew the line? How, and why? I also have a theory of consciousness, but could not draw this line. So I'm interested in your reasons for doing this. — Pop
You write, "what do I mean by the use of the word consciousness and of its derivative, conscious? In simple terms, I am referring to the internal subjective awareness of self that is lost during sleep and regained upon waking."
I think you are confusing two concepts here. One is subjective awareness of contents; the other, what might be called "medical consciousness," is fully realized in a responsive wakeful state. Medical consciousness is objectively observable, and, I would suggest, part of Chalmers' easy problem. Subjective awareness is found also in sleep, in our awareness of dreams, and its modeling is Chalmers' hard problem. — Dfpolis
While my body may be asleep, my consciousness could be awake. — Malcolm Lett