• Beautiful Mind
    1
    A lot of times when we have a problem, we tend to go searching for a solution instead of trying to understand what caused the problem to exist in the first place.


    Whenever I come across the hard problem of consciousness, I have trouble even understanding it. As I understand it, the dilemma is how the qualities of consciousness that are experienced by living things could arise out of inanimate matter, which possesses no inherent qualities in and of itself outside of our subjective perceptions.


    I have read all of Bernardo Kastrup's books, where he lays out his theory of "Metaphysical Idealism", and I'm familiar with the materialist worldview in which consciousness might have arisen from complex neural processes in the brain. The problem is that both theories require a form of emergence. At what point would this emergence happen? What criteria would those mental/matter processes have to meet in order for it to happen? Where do you draw the line and say that now there is mind, or now there is matter? Not to mention whether the concept of time even applies here: in the idealist worldview, time and space are what allow representations (matter) to happen, so when you talk about Universal Consciousness you can't bring time into the equation, because it lies outside the whole spacetime dimensionality.


    I tend to think of Whitehead's process ontology as the best way to explain reality, because the nature of reality is not made of properties, whether physical or mental; at least, you can never posit that it is, because doing so isn't parsimonious and will always be a postulate that cannot be proved. It's like saying I'm experiencing the world without the concept of an experience. Our reality is made of processes whereby the past is inherited into the present and perishes into the future, so that there is a temporalization of terms which, in a substance ontology, are static and self-existent. Rather than matter and mind being separate substances, mind and matter are different phases of the same process.


    So eventually, if we were to say that mind and matter are one and the same thing, two phases of the same process, we would not only solve the hard problem but dissolve its existence in the first place.

    What does everyone else think? Have we invented the hard problem of consciousness?
  • Philosophim
    2.6k


    I view "the hard problem" as not really a "problem". All it's really doing is stating, "Figuring out how your subjective consciousness maps to your brain in an exact and repeatable model is hard."

    Well, yeah. I think some people draw the wrong conclusions from it. It doesn't mean consciousness doesn't originate from your brain. We all know it does. But do we have a model that states, "If I send 3 nanos of dopamine to cell number 1,234,562 in quadrant 2, you'll see a red dog?" Not yet.

    The hard problem simply predicts this difficulty. Part of philosophy's job is to form ideas and examine whether they are rational to pursue. If we consider the complexity of the mind, and the fact that we would have to rely on subjective experience to create an exact model of consciousness for all things, the task is likely outside our current technical and scientific know-how.

    This is why people are trying to construct different models of consciousness that can avoid this problem of "exactness". Which is pretty normal: when people meet limits, they build what they can regardless.
  • Gnomon
    3.7k
    Have we invented the hard problem of consciousness?Beautiful Mind
    No. The problem of explaining the emergence of immaterial processes --- such as Life, Mind, & Consciousness --- was inherent in the theory of Materialism. That notion was based on the observation that physical Effects usually had prior physical Causes. So, it was just common sense to conclude that even meta-physical aspects of reality should be reducible to physical causes. Unfortunately, no one has ever found the missing link between Matter and Mind. This glaring gap in cause & effect may be why Plato concluded that ultimate Causes (Forms) were not physical, but meta-physical : i.e. Ideal.

    I have come to believe that the "missing link" in the emergence of non-physical phenomena -- such as the invisible Mind -- is the meta-physical process of En-formation : creation of novel forms of being. It's analogous to the mysterious emergences that physicists call "Phase Transitions". My odd notion of EnFormAction is derived in part from Shannon's theory of abstract Information (1s & 0s), and partly from the Quantum queerness of Virtual states of being. I have a thesis to explain how I came to such an un-real worldview. It combines elements of Realism with Idealism. :smile:

    Hume on Causation : Natural relations have a connecting principle such that the imagination naturally leads us from one idea to another.
    "And what stronger instance can be produced of the surprizing ignorance and weakness of the understanding than [the analysis of causation]?…so imperfect are the ideas we form concerning it, that it is impossible to give any just definition of cause,. . ."

    https://iep.utm.edu/hume-cau/

    Information :
    Claude Shannon quantified Information not as useful ideas, but as a mathematical ratio between meaningful order (1) and meaningless disorder (0); between knowledge (1) and ignorance (0). Hence, that meaningful mind-stuff exists in the limbo-land of statistics, producing effects on reality while having no sensory physical properties. We know it exists ideally, only by detecting its effects in the real world.
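    As a side note, Shannon's measure can be made concrete. A minimal sketch in Python (the "order (1) vs disorder (0)" framing above is Gnomon's own gloss; the standard formula simply measures average unpredictability in bits):

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average information per symbol in bits: H = -sum(p * log2(p))."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Four equally likely symbols: maximally unpredictable, H == 2.0 bits/symbol.
uniform = shannon_entropy("abcd")
# One repeated symbol: perfectly predictable, H == 0.0 bits/symbol.
ordered = shannon_entropy("aaaa")
```

    On the standard measure, a perfectly repetitive message carries zero bits while a maximally varied one carries the most; how that maps onto the knowledge/ignorance polarity described above is left to the reader.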

    For humans, Information has the semantic quality of "aboutness", which we interpret as meaning. In computer science though, Information is treated as meaningless, which makes its mathematical value more certain. It becomes meaningful only when a sentient Self interprets it as such.

    When spelled with an “I”, Information is a noun, referring to data & things. When spelled with an “E”, Enformation is a verb, referring to energy and processes.

    http://blog-glossary.enformationism.info/page11.html

    Enformationism : http://enformationism.info/enformationism.info/
  • bert1
    2k
    We all know it does.Philosophim

    Apparently I know that. But how do I know it?
  • Mijin
    123
    I view "the hard problem" as not really a "problem". All its really doing is stating, "Figuring out how your subjective consciousness maps to your brain in an exact and repeatable model is hard."Philosophim

    Well firstly this concedes the point. You are agreeing it's difficult (hard) and a work in progress (a problem).

    But the rest of your post seems to imply that we broadly understand how phenomena like pain or color occur, and are just narrowing in on more precise answers. As someone working in the field of neuroscience, I can say that just isn't the case.
    While we understand a great deal about how the retina and visual cortex process input data to perform, for example, edge detection, we still don't know where the sensation of "redness" enters the picture.

    And the way you can tell that we don't know is predictive power. I'm personally not very impressed by the various handwaves at the hard problem of consciousness, because where are the testable inferences or predictions?
  • Mijin
    123
    The more interesting question for me, is why so many people seem so committed to dismissing the hard problem. What's the issue with admitting something we don't know yet?

    And I think it's:

    1. The baggage. Many seem to feel that admitting we don't understand a fundamental aspect of consciousness is (re-)opening the door to souls, spirits and other nonsense.

    2. The difficulty grokking the problem in the first place. Since subjective experience is the window through which we naturally and effortlessly examine everything, many people have difficulty examining the window itself. And that includes me. Just like most people, I had to have a "penny drop" moment, where I realized that pain, color, smells etc. are phenomena that occur in the brain, not in the outside world (or the body, in the case of pain), in a way we don't yet understand.
    Many people arguing against the hard problem either have never had the penny drop moment, or didn't have it prior to strongly taking a position and are now committed.
  • Possibility
    2.8k
    Whitehead’s process ontology is a first step - it regards existence as consisting of four-dimensional events, rather than the 3+1 (objects in time) view of traditional materialism. Process ontology corresponds to the theories of quantum physicists such as Carlo Rovelli, but both Whitehead and Rovelli fall short of acknowledging the relations between these structures as necessarily five-dimensional. From here, they turn to mathematical structures, which present at least a relational potential to reconcile these events with aspects of reality such as gravity, qualia and emotion. But we can only define a four-dimensional structure by relating it to a specific four-dimensional observer event, and then mapping the relation as a four-dimensional mathematical prediction, which can be reduced to relative instructions for any four-dimensional system to align to this observer-position.

    Interestingly, this corresponds to Lisa Feldman Barrett’s neuroscience/psychology based theory of constructed emotion, in which the correlates of consciousness consist of relative instructions (attention and effort) for a predictive alignment between the four-dimensional interoception of an organism and the brain’s conceptualised reality. It seems this continual process of alignment - in terms of what adjustments are made to the interoceptive or conceptual predictions throughout what is an ongoing interaction - constructs our unique experience of the world.

    Of course, this is theory upon theory upon theory - but I think the structure may be there to work towards testable inferences or predictions. It’s well above my pay grade, though.
  • khaled
    3.5k
    But do we have a model that states, "If I send 3 nanos of dopamine to cell number 1,234,562 in quadrent 2 you'll see a red dog?" Not yet.Philosophim

    Building such a model is not the hard problem. The hard problem is: why does sending 3 nanos of dopamine to cell number 1,234,562 in quadrant 2 make you see a red dog? What properties of dopamine, cells and synapses allow for the existence of the experience of seeing a red dog?
  • khaled
    3.5k
    the qualities of consciousness that are experienced by living things may arise from out of inanimate matter which possesses no inherent qualities in and of itself; outside of our subjective perceptions.Beautiful Mind

    What makes you think it doesn't? Until we invent a "consciousness-o-meter" we can't actually know that.
  • TheMadFool
    13.8k
    This is the way I understand the hard problem of consciousness:

    Imagine an experiment in which a robot with a pressure sensor and yourself are seated next to each other. A person then proceeds to prick both the robot and you with a sharp needle. The robot's sensors pick up the needle-prick and your nerves do the same.

    Is there a difference between the robot and you in this experiment?

    There is an aspect of the needle-prick - what it feels like (qualia) - that is present in you but absent in the robot.

    Another way to look at it is to imagine you've built an exact replica of the human body, call it X, complete with biological organs, except that instead of a brain, you're at the controls. X comes with sensors in precisely the same configuration as normal human nerves. Now if someone pricks X with a needle, a red light turns on in the control center where you're located. Someone does prick X with a needle, and the red light turns on. At the same moment, a small accident occurs in the control center and a needle pricks you too. Is there a difference between the red light turning on when X was pricked by a needle and the pain you felt when the same thing happened to you? :chin:
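    The robot's side of the thought experiment can be written down exhaustively, which is arguably the point. A minimal sketch in Python (the class and names are hypothetical, just to illustrate that the complete functional description contains no term for what the prick feels like):

```python
class PressureSensorRobot:
    """The robot of the thought experiment: it detects and signals, nothing more."""

    def __init__(self, threshold: float = 1.0):
        self.threshold = threshold
        self.red_light_on = False

    def prick(self, force: float) -> None:
        # The nociception-analogue: detection followed by a signal.
        # The whole causal story is right here; no line corresponds
        # to "what the prick is like" for the robot.
        if force >= self.threshold:
            self.red_light_on = True

robot = PressureSensorRobot()
robot.prick(force=2.5)   # the needle-prick
# robot.red_light_on is now True -- and that is the entire story for the robot
```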
  • ChrisH
    223
    There is an aspect of the needle-prick - what it feels like (qualia) - that is present in you but absent in the robot.

    How can we be sure of this?
  • TheMadFool
    13.8k
    There is an aspect of the needle-prick - what it feels like (qualia) - that is present in you but absent in the robot.

    How can we be sure of this?
    ChrisH

    Not sure. Just stating the official position on the matter. Why do you ask? Did you, by any chance, happen to see anything that contradicts me?
  • Mijin
    123
    How can we be sure of this?ChrisH

    Well of course at a certain level, a robot should be capable of feeling pain since we're essentially robots made by nature.
    However, it's also true that there must be a distinction somewhere, since we could write a 2-line program that responds "Ouch!" when you press a key, but I assume we all would agree that such a program does not actually feel pain. There is a difference between responding to stimuli and actually experiencing pain.
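    Mijin's toy program can be written out literally; a sketch in Python, which by common agreement responds to a stimulus without experiencing anything:

```python
# Roughly the "2-line program" described above: any key press gets "Ouch!".
def on_key_press(key: str) -> str:
    return "Ouch!"   # a response to a stimulus, not an experience of pain

print(on_key_press("any key"))  # prints: Ouch!
```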
  • ChrisH
    223
    Did you, by any chance, happen to see anything that contradicts me?TheMadFool

    No, but neither have I seen anything that corroborates your view.
  • ChrisH
    223
    since we could write a 2-line program that responds "Ouch!" when you press a key, but I assume we all would agree that such a program does not actually feel pain.Mijin

    Of course, but we seem to be talking about something quite different now.
  • Mijin
    123
    Of course, but we seem to be talking about something quite different now.ChrisH

    Yeah; I think you're alluding to "strong AI". But I would say that's irrelevant to understanding TheMadFool's argument. (S)He is just saying there's a distinction between nociception and pain.

    While it may be possible one day to make a robot that feels pain, there is no doubt it is possible to make one that only has nociception and no experience of pain.
  • ChrisH
    223
    Yeah; I think you're alluding to "strong AI".Mijin

    That wasn't my intention.

    I'm simply suggesting that we are not in a position to say with absolute confidence that a robot, as described by TheMadFool, cannot/does not experience 'qualia'.
  • Outlander
    2.1k
    I'm simply suggesting that we are not in a position to say with absolute confidence that a robot, as described by TheMadFool, cannot/does not experience 'qualia'.ChrisH

    If it does, it is because it was programmed to. Who programmed us? A question many will pass off as rubbish and a red herring, and that few wish to answer.

    Does qualia = sensation? An experience that the mind perceives that is apart from the normal state of nothingness or the norm rather (whatever that may be)?

    My cat experiences 'qualia' when I rub her on the head, I'm sure. When I first got her and turned her into an indoor cat, she left some 'qualia' on my bed when I was absent for 2 days and didn't change the litter box. Both she and I exhibited some consciousness, in some way, shape or form, in this exchange of 'qualia'.

    I guess, to ask the obvious, what is 'qualia'? What isn't and why isn't it?

    There's a green lighter next to me as I type this. Bright, lime green. It's a striking image, or rather object, that seems to jut out from the background. As a person with a human mind and a relatively functional sense of sight, it's just my neurons/synapses rendering a scene of high contrast, which catches the eye inherently. Yet I know I am me, a person, and that object is an object, a tool for my use. Is this not consciousness? If not, what is?
  • ChrisH
    223
    If it does, it is because it was programmed to.Outlander

    Not necessarily.

    Edited to add: How would one confirm that the programming had worked? If we could confirm this then we'd have cracked the hard problem.
  • Outlander
    2.1k


    A robot is an inorganic creation that isn't alive, i.e. lacks the essence/evolutionary capacity the human brain has. It's either (depending on form) a circuit board or figurine operating from a base system of 1s and 0s. It performs pre-programmed functions and nothing more. Now, sure, you could program randomness into it and its operation, but that's all it really ever would be. Perhaps there are other forms of AI I'm not familiar with, where the "randomness"/"mood" parameter fluctuates according to stimuli/circumstance, or perhaps yes, even "qualia". Similar to a human: in its earliest stage it was around (and capable of observing), say, a mountain range or beach, and so "prefers" or is "happy" around the same scenery. It's still a creation following code/script/circuitry. Of course... there's an argument the human brain isn't much different. Again, one has a clear creator-creation relationship; the other... is what we're debating about.
  • ChrisH
    223
    Now, sure, you could program randomness into it and its operation, but that's all it really ever would be.Outlander

    You're talking about behaviour. I'm not.

    I'm talking about what TheMadFool describes as "what it feels like (qualia)".
  • Outlander
    2.1k


    You're on a computer, right? A phone at least. Run a systems diagnostic or "CPU health" test or something of the like. It doesn't "feel" anything; it only reports when asked. Rather, when instructed.

    You'd need a robotic body with millions (if not more) of nanowires crisscrossing every single surface, reporting to the main circuit board (brain), to encompass any sort of "feeling", which again is little more than a systems check. For example, the right arm is slightly dented or damaged, etc. You'd have to program an entire AI to give it a "mood" in response to being in a less than optimal state, which defeats the purpose of robotics/tools in general. My toaster is kind of dirty. What if it was sad and so decided not to toast my bread properly? That's not why machines were introduced; otherwise we'd just use people.
  • ChrisH
    223
    You're on a computer right? Phone at least. Run a systems diagnostic or "CPU health" test or something of the like. It doesn't "feel" anything it only reports, when asked.Outlander

    This is what I'm challenging.

    You have no way of knowing that it doesn't "feel" anything. It's an assumption.
  • Outlander
    2.1k


    So where do we go from there? There are no organic components nor any recognizable sensory or "feeling" nodes, so if we're on that tangent, why don't we just ponder whether raindrops are sad when they fall from the clouds, or whether the grass gets angry when we cut it? I mean, at least they have organic, intelligent cells. Does a magnet get angry when we introduce another magnet of opposite polarity? Is an electromagnetic generator "happy" when it produces current? It's all the same physics, so where are we supposed to draw the line?

    What is it about mindless circuitry that infatuates some folks so? Wait. Unless....

    SkyNet. Don't terminate me bro :cool:
    (just a late night joke, likely not to be well received)
  • ChrisH
    223
    so where are we supposed to draw the line?Outlander

    Welcome to the hard problem.
  • TheMadFool
    13.8k
    Welcome to the hard problem.ChrisH

    Since you've raised doubts regarding my interpretation of the hard problem, I'd like to hear yours if that's alright with you. Thanks.
  • ChrisH
    223
    I wasn't casting doubt on your interpretation of the hard problem.

    I'm simply saying that there is no way in practice or in principle to determine if any entity, other than oneself, animate or inanimate, actually experiences 'qualia'. Therefore the claim that a robot cannot/does not experience qualia is an unwarranted assumption.
  • TheMadFool
    13.8k
    I wasn't casting doubt on your interpretation of the hard problem.

    I'm simply saying that there is no way in practice or in principle to determine if any entity, other than oneself, animate or inanimate, actually experiences 'qualia'. Therefore the claim that a robot cannot/does not experience qualia is an unwarranted assumption
    ChrisH

    Right! That's for certain, uncertain. I seem to have misinterpreted your point, a good one at that. Anyway...it looks like you're not denying the existence of qualia per se, you only wish to inform me that that robots lack qualia is an unfounded assumption.

    :ok:

    Since you don't deny that humans have qualia, my question is this: can it be explained with physicalism?
  • ChrisH
    223
    Since you don't deny that humans have qualia,TheMadFool

    I don't deny that we have conscious experiences.

    can it be explained with physicalism?TheMadFool

    I assume conscious experiences can be explained within physicalism.
  • Harry Hindu
    5.1k
    I assume conscious experiences can be explained within physicalism.ChrisH
    How can the quality of depth in a visual experience be explained within physicalism? What is physical about the experience of empty space? What does "physicalism" even mean? What are "experiences"?

    How does "physicalism" address the question of why evidence for your conscious experiences from your perspective is different than the evidence of your conscious experiences from my perspective? You dont experience your brain, you experience colors and shapes and sounds and tastes, smells and feelings and report that this is evidence of your consciousness. I experience the visual if your brain or body"s behavior and report this as evidence of your consciousness. Why is there a difference and how can you reconcile the difference using "physicalism"?
  • ChrisH
    223
    How can the quality of depth in a visual experience be explained within physicalism?Harry Hindu

    I have absolutely no idea. All anyone can do, whether it's within physicalism or any alternative, is produce untestable hypotheses (guesses).