• Malcolm Lett
    Consider seeing an apple. One neural state represents not only the apple acting on our neural state, but also the fact that our neural/sensory state is modified. There are two concepts here, but only one physical representation.
    Dfpolis

    I would rather say that there are three concepts here:
    1. Representation of sense data -- i.e. the interpreted form of the apple, as a neural state.
    2. Interpretation of that representation -- i.e. our modified overall mental state, or the "content" of conscious awareness.
    3. The hard problem of consciousness -- i.e. why the neural state "content" is accompanied by the "existence" of conscious awareness.

    The neural state of #1 may or may not enter our conscious awareness. There is plenty of evidence for our brain processing and even acting on sense inputs without the need for us to be consciously aware of it at the time.

    #2 also has a straightforward neural representation, which is part of what my paper focuses on. I tend to think of the total state of the brain as a hierarchy of representations: the raw sense data has a very low-level representation that is generated for each sense modality (touch vs. sight, etc.) and is not in any way perceived consciously. Those low-level representations are progressively merged and built upon as multiple layers of the hierarchy are constructed upwards, until finally they come together into a single, coherent, very high-level representation. It is somewhere towards the top of that hierarchy that the "content" of consciousness is derived. (And just to be clear, I use this as a simplistic way of thinking about the brain when it's convenient; I don't assume it's a full explanation.)
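    To make that structure concrete, here is a minimal sketch of the hierarchy in Python. Everything in it -- the encoding, the merge rule, and the salience threshold that stands in for attentional filtering -- is a placeholder assumption of mine for illustration, not a claim about actual neural mechanisms or anything from the paper.

    ```python
    # Toy sketch of a hierarchy of representations:
    # modality-specific low-level encodings are merged, layer by layer,
    # into one coherent high-level representation, with an attentional
    # filter deciding which representations contribute to the top level.
    from typing import Dict, List


    def low_level_encode(modality: str, raw_signal: List[float]) -> Dict:
        """Modality-specific representation; never consciously perceived."""
        return {
            "modalities": [modality],
            "features": list(raw_signal),  # placeholder transform
            "salience": sum(abs(x) for x in raw_signal),
        }


    def merge(a: Dict, b: Dict) -> Dict:
        """One step up the hierarchy: combine two representations into one."""
        return {
            "modalities": a["modalities"] + b["modalities"],
            "features": a["features"] + b["features"],
            "salience": a["salience"] + b["salience"],
        }


    def attentional_filter(reps: List[Dict], threshold: float) -> List[Dict]:
        """Keep only representations salient enough to reach higher layers."""
        return [r for r in reps if r["salience"] >= threshold]


    def integrate(sense_inputs: Dict[str, List[float]], threshold: float) -> Dict:
        """Build a single high-level representation from all modalities."""
        reps = [low_level_encode(m, s) for m, s in sense_inputs.items()]
        reps = attentional_filter(reps, threshold)
        if not reps:
            return {"modalities": [], "features": [], "salience": 0.0}
        top = reps[0]
        for r in reps[1:]:
            top = merge(top, r)  # layers merge upwards until one state remains
        return top


    if __name__ == "__main__":
        senses = {"sight": [0.9, 0.4], "touch": [0.2, 0.1], "sound": [0.7, 0.6]}
        # touch falls below the threshold, so it never reaches the top level
        print(integrate(senses, threshold=0.5))
    ```

    The only point of the sketch is the shape of the process: modality-specific encodings at the bottom, merging upwards, and one integrated state at the top, with attention deciding what contributes to it.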

    What I'm trying to say here is that #2, or the "fact that our neural/sensory state is modified", is that high-level representation which integrates all the senses and all of our current mental state, after a filtering process driven by attentional focus.

    I've found @apokrisis's mention of semiotics particularly helpful. I now see #1 and #2 as two components of a semiotic process. If there are neural states that represent sense inputs, what is it that perceives those neural states? Historically this has been answered by invoking the idea of a "soul". But the idea of semiotics explains that the same underlying mechanisms (in this case neurons) can also be used to interpret the sense representations.

    Now, where is the division between #2 and #3? Is there some additional state that accompanies conscious awareness and is not representable as a neural state? I believe the answer is "no".
  • Dfpolis
    We agree -- in reference to your first response on consciousness.

    On your second, I need more time to understand what you are saying and formulate a response.
  • Dfpolis
    I would rather say that there are three concepts here:
    1. Representation of sense data -- i.e. the interpreted form of the apple, as a neural state.
    2. Interpretation of that representation -- i.e. our modified overall mental state, or the "content" of conscious awareness.
    3. The hard problem of consciousness -- i.e. why the neural state "content" is accompanied by the "existence" of conscious awareness.
    Malcolm Lett

    I would not call these "concepts." Rather, interpretations always involve judgements, i.e. thinking this is that. A concept is more fundamental: it is simply the awareness of some aspect of reality. In my comment, (1) is the awareness of an objective aspect of the world, i.e. an apple, and (2) is the awareness of a subjective aspect of reality, i.e. my state. Neither of these awarenesses is a judgement or an interpretation, because their expression is not propositional. We are not saying "the apple is x" or "my state is y." We are just aware of some information.

    Your (3) is not a concept, but a question -- and a question is a desire to know, one that requires judgements to satisfy it.

    To return to my point, physicalism fails because one neural state founds two concepts, <an apple> and <the modification to me caused by the apple>. To have two distinct concepts, we need a differentiating factor, and one physical state cannot provide it.

    There is plenty of evidence for our brain processing and even acting on sense inputs without the need for us to be consciously aware of it at the time.
    Malcolm Lett

    Agreed. Thus, consciousness is not simply a concomitant of neural data processing. We need an additional causal factor. Physicalists believe that this is some special form of processing, but have neither a rational basis for thinking this nor a coherent hypothesis of what kind of processing this might be.

    In fact, in Consciousness Explained, Dennett provides cogent arguments that no naturalist theory can produce the experience of consciousness. His response is to discard the data of experience. Mine is to see the naturalistic hypothesis as falsified.

    I posted a suite of arguments on this Forum showing that intentionality cannot be reduced to physicality (https://thephilosophyforum.com/discussion/4732/intentional-vs-material-reality-and-the-hard-problem). None of the many responding comments found a fatal flaw in my case -- the conclusion of which is that Chalmers's "Hard Problem" is not a problem, but a chimera.

    Those low-level representations are progressively merged and built upon as multiple layers of the hierarchy are constructed upwards, until finally they come together into a single, coherent, very high-level representation.
    Malcolm Lett

    This was Aristotle's conclusion in De Anima, where he named the integrated sensory representation a "phantasm." Still, he was smart enough to realize that a representation is not a concept. Representations are intelligible (they contain information that can be known), but they are incapable of making themselves known. (How could they possibly do so?)

    Instead, he argues in iii, 7, that we need an aspect of mind already operational in the intentional sphere to make what is potentially known actually known. He called this aspect the "agent intellect," but phenomenologically, it is what we now call awareness -- for it is by turning our awareness to contents that they become actually known.

    I've found @apokrisis's mention of semiotics particularly helpful.
    Malcolm Lett

    Semiotic reflection confirms Aristotle's case. I will discuss an important semiotic distinction later, but for now consider what are called "instrumental signs," such as smoke, written and spoken language, and symbols. Some of these, such as smoke, signify naturally, and others signify conventionally; but whatever the origin of their meaningfulness, they cannot actually signify until we first recognize what they are in themselves and then form the concept they properly evoke.

    For example, until we recognize that the smudge on the horizon is smoke, and not dust or a cloud, it cannot signify fire. In the same way, we cannot grasp the meaning of a written term until we can discern the form of its letters. In all cases, a thinking mind, one capable of semiotic interpretation, is absolutely essential to actualizing the meaning latent in the sign. So, invoking semiotics does not dispense with the need of an Aristotelian agent intellect to make what is only intelligible actually known.

    But, as John of St. Thomas points out in his Ars Logica, the instruments of thought (concepts, judgements, and chains of reasoning) are not instrumental signs, but a wholly different kind of sign: formal signs. We can see this difference by reflecting on how ideas signify. Unlike natural signs, language, and other symbols, when we think <apple> we do not first have to realize that we are dealing with an idea and only then understand that the <apple> concept refers to actual apples. Rather, the concept <apple> signifies apples immediately and transparently, without us first recognizing that it is an idea which signifies. Instead, we first think <apple>, and only then realize that we must have employed some instrument of thought, naming that instrument "the concept apple."

    What has this to do with our problem? Simply this: while we might conceivably observe neural states and work out what they represent, if we did so, the neural states would not act as formal signs, would not act as concepts. Rather, they would be instrumental signs -- things whose own nature must be understood before we can extract any meaning they represent.

    The whole being of formal signs -- all that they ever do -- is to signify. On the other hand, neurons do many things: growing and trimming dendritic connections, transporting ions, firing at various rates, and consuming nutrients. Among all these functions, they may also signify information. But, in signifying, they act as instrumental, not formal, signs.

    Further, when we form concepts in the normal way, neurons do not act as any kind of sign. I can think <apple> without the slightest idea that my thought is supported by neural processing -- which it is. So, the final point is that when we say that neural states "represent" information, we must be careful not to confuse the way they normally represent information with the way any kind of sign represents information. Neurons do not represent as instrumental signs do, because we do not need to know them before we grasp their contents. Nor do they represent as formal signs do, because their whole being is not exhausted in signifying: they have physiological activities as well as representational activities.