• Why is the Hard Problem of Consciousness so hard?
    OK, I'll try to home in.

    How can a collection of mindless, thoughtless, emotionless particles come together and yield inner sensations of color or sound, of elation or wonder, of confusion or surprise? — Brian Greene
    This is the gist of subjective experience, correct?

    As far as I know, when it comes to e.g. images (and likewise sound), the process can be explained to a sufficient degree in computational terms. The visual cortex is what allows us to "see", by translating the signals from the eye's photoreceptors, since incoming photons form patterns (in the same way, sound is specific vibrations of particles in a medium). These patterns are specific pieces of data. For mental images, various parts of the brain cooperate to recall that data - or even build it from scratch - and create a mental image. This process hasn't been fully deciphered yet, but from what I've read it seems plausible to me that the subjective experience of a mental image can be regarded as "seeing via memory/imagination", in the same way we decode sensory information when seeing via the eyes, since it's the brain that does the actual seeing after all.

    When it comes to emotions, we know that they are contingent on neural function, and the process they stem from can be sketched, e.g. for elation:
    - We receive or recall data.
    - Depending on its nature, certain neurotransmitters and related biochemical substances are released. In the case of elation, these should be dopamine, endorphins, oxytocin etc.
    - We know these neurotransmitters bind to neuron receptors, opening ion channels and affecting their firing rate. In the case of elation, the neurons fire more frequently in the reward centers of the brain.
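    The last step above can be caricatured in code. This is a deliberately crude toy model - the numbers, the `gain` parameter and the function name are invented for illustration, not physiology:

```python
# Toy model: a neurotransmitter raising a neuron's firing rate.
# All values here are invented for illustration.

def firing_rate(baseline_hz, binding_fraction, gain=3.0):
    """Firing rate after a transmitter binds some fraction of receptors.

    More bound receptors -> more open ion channels -> higher rate.
    """
    return baseline_hz * (1.0 + gain * binding_fraction)

resting = firing_rate(10.0, 0.0)   # no transmitter released
elated = firing_rate(10.0, 0.5)    # e.g. dopamine binding half the receptors
print(resting, elated)  # 10.0 25.0
```

    The point is only that "transmitter release changes firing rate" is an ordinary input-output relationship, i.e. something computable.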

    You are right though, this one seems still far from deciphered. Could it be a bespoke form of information meant to shape behaviors for lifeforms to survive and evolve? Much like the cells that translate photon patterns to vision or vibrations to sound? There doesn't seem to be any more information that connects it with the subjective experience of emotion.

    The reason the experience is private and unique is, I think, explained by our neural divergence, as I wrote in the other post. After all, each person's sensors differ, so they don't get the exact same information, and most importantly they process it in vastly different neural networks. Subsequently, since the processed data differs, so will the qualia that come from it.

    I do disagree on the why - I think all forms of subjective experience have an important evolutionary value, for example recalling or imagining information has practical value and emotions work as a reward/punishment mechanism that promotes certain behaviors, much like a reward function in AI reinforcement learning.
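    The reward/punishment analogy can be made concrete with a minimal reinforcement-learning loop. This is a generic sketch - the action names, reward values and constants are invented for illustration:

```python
import random

random.seed(0)

# Toy "emotion as reward signal": one action yields pleasure (+1),
# the other pain (-1). A running value estimate plays the role that
# dopamine-driven reinforcement plays in the text's analogy.
values = {"approach": 0.0, "avoid": 0.0}
reward = {"approach": 1.0, "avoid": -1.0}
alpha = 0.1    # learning rate
epsilon = 0.1  # exploration probability

for _ in range(1000):
    if random.random() < epsilon:
        action = random.choice(list(values))   # occasionally explore
    else:
        action = max(values, key=values.get)   # otherwise exploit
    # Nudge the estimate toward the received reward
    values[action] += alpha * (reward[action] - values[action])

print(values)  # "approach" ends near +1, "avoid" near -1
```

    The agent ends up seeking the rewarded behavior and shunning the punished one, which is the shaping role the paragraph above attributes to emotions.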
  • Why is the Hard Problem of Consciousness so hard?
    It is most definitely incomplete - for starters, I couldn't hope to articulate it properly within a single post, with limited time, and without the knowledge to pinpoint many of the logical leaps it makes.

    Even if I could do all that, the only way a solution would be complete would be for science to map and understand every single node and function of our brains. We have already done so for less complex organisms, though. So in general, it just seems like the most fitting solution given the information available at this time.

    I think the Chinese Room is fallacious and there are pretty convincing counterarguments against it. My own take is:
    - For starters, if a person who doesn't understand Chinese manually runs the program that can answer in Chinese, they will inevitably have used the knowledge contained in that program.
    - What is "understanding what characters mean" if not just data? If the only knowledge contained in the program was how to form an answer, without knowledge of what each character means, then whether it is a computer or a human running it, they would simply lack the knowledge that is absent from the program.
    Even if someone needed that knowledge to write the program, that doesn't mean it would be included in the program.
    - Can the actual meaning of the characters be embedded in the program? Sure. The information we store and recall - ideas, pictures, sounds, smells, feelings, touch impressions - can all be represented by data. It's just that, should they come in the form of data, the latter three require molecules or direct brain signals to be communicated to us.

    By extension, the so-called qualia (these subjective experiences) can be represented by data. Perhaps even then there would be no way to communicate qualia accurately, unless one shared the same brain, since each person has a differently formed network that processes them in different ways.


    Yes, my bad, I should have written "programming" instead of "replicating". I'll edit it and include an extra link.
  • Why is the Hard Problem of Consciousness so hard?
    Given our current and best information about the physical world, unless I am missing something, I don't see how consciousness, as well as the subjective experience that forms from it, can't be safely explained as a purely computational phenomenon.

    Take the simplest of computational networks - two states going through a logic gate, producing a new state. According to the research I am aware of (examples of which I give in the next paragraphs), this simple network, by itself, can be regarded as a fundamental level of consciousness - a single block of logic, if you will. If, for example, you want it to contain memory, so it can process that memory and produce a new state, two NOR gates suffice. Connect them to another gate and a binary sensor and you essentially have stored-information processing that also depends on the environment.
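    The two-NOR memory element mentioned above is the classic SR latch. A minimal simulation (my own sketch, not taken from the linked research) shows how two gates alone hold a bit:

```python
def nor(a, b):
    """NOR gate: true only when both inputs are false."""
    return not (a or b)

def sr_latch(s, r, q, q_bar):
    """One settling pass of a cross-coupled NOR (SR) latch.

    s = set, r = reset; q / q_bar carry the stored bit between calls.
    """
    for _ in range(2):  # iterate until the cross-coupled gates settle
        q = nor(r, q_bar)
        q_bar = nor(s, q)
    return q, q_bar

# Set the latch, then release both inputs: the bit is remembered.
q, qb = sr_latch(s=True, r=False, q=False, q_bar=True)
q, qb = sr_latch(s=False, r=False, q=q, q_bar=qb)
print(q)  # True: with both inputs low, the latch still "remembers" the set
```

    Feed the stored `q` into another gate together with a sensor input and you get the environment-dependent stored-information processing described above.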

    There is already a body of research around programming microorganism behavior with combinations of logic gates - the fundamental computational mechanism in electronics. Some nice example reads:
    https://www.sciencedirect.com/science/article/pii/S030326472200003X
    https://arstechnica.com/science/2010/12/building-logic-gates-with-bacterial-colonies/

    Beyond that, we are just describing different levels of complexity of "logic". From what I understand, molecular neurotransmitter function (which in humans mostly serves emotional regulation) can be boiled down to logic gates as well. For one, neurotransmitters seem to work similarly to the techniques AI neural-network learning algorithms use to encourage or discourage decisions, by altering neuron firing frequency. And even if one could argue that the effect of neurotransmitters on neurons is not binary, unlike logic gates, their analog behavior can be replicated with binary behavior. Again, looking at something we can actually map: in earthworms, for example, neurotransmitters act as a decision regulator in the nervous system.
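    The claim that analog behavior can be replicated with binary behavior is essentially rate coding. A minimal sketch (an assumed encoding for illustration, not taken from the linked papers):

```python
import random

random.seed(42)

def encode(level, n_pulses=10_000):
    """Rate-code an analog level in [0, 1] as a binary pulse train."""
    return [1 if random.random() < level else 0 for _ in range(n_pulses)]

def decode(pulses):
    """Recover the analog level as the fraction of 1s."""
    return sum(pulses) / len(pulses)

level = 0.73                    # an "analog" modulation strength
approx = decode(encode(level))  # a purely binary representation
print(round(approx, 2))         # close to 0.73
```

    A graded, "analog" neurotransmitter effect can thus be approximated to arbitrary precision by nothing more than on/off events, which is all the logic-gate picture needs.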

    By taking a look at the animal kingdom to comprehend our "seemingly inexplicable phenomenon" of consciousness, we can see that the more complex this network of logic is, the more behaviors emerge from it. In vastly more complex social organisms like bees, research has shown that they share more "traditionally human" behaviors than was previously thought. Some call that level of complexity "sentience" - but what does this sentience describe, if not a greater degree of similarity to our own "special" experience, rather than something unique or a separate phenomenon?

    In essence, a decently complex lifeform is self-powered, has sensors that constantly gather information from the environment, can store an amount of memory, and contains a mind-bogglingly complex neural network, regulated by neurotransmitters, that makes decisions.

    Moving on to more complex lifeforms, their similarities to our species increase. There are important differences, for example the capacity to store long-term memory, or the evolution of dedicated emotion-processing structures (such as the amygdala), and many more, but at the end of the day it boils down to the aggregation of complex computational systems.

    As "the hard problem of the consciousness" in the sense of how "gives rise to subjective experience", I don't see how it's not just simply a subsequent symptom of the complexity of our systems and the randomization of information. Randomization of information exists in every aspect of our conscious being. From our imperfect sensory inputs, to the wiring of our neural networks and the unique set of experiences and DNA that helps it form.

    Beyond information randomization, in theory the quantum mind hypothesis could further explain and bridge the probabilistic nature of cognition that gives rise to subjectivity - but again, this is well within the realm of soon-to-be conventional computation. Anyhow, I think speculating or even philosophizing around this kind of black box is counter-productive to the discussion, so I won't touch it further.

    If there is information that dispels this, please, go forth and explain.

Generic Snowflake
