If you agree that the Hard Problem is not solved then you must agree that we are no closer to understanding Consciousness now than 100 years ago. The Hard Problem was not a coined phrase back then but they had the basic idea that Neural Activity leads to Conscious experience. Today we know a vast amount more about the Neural Activity (Neural Correlates of Consciousness) but in spite of that we know zero about how this Neural Activity causes Conscious experiences. We don't even know what any Conscious experience really is. We only know we experience it. I never claimed that the hard problem was solved. In fact I have been very explicit that it has not been solved. But only deliberate ignorance can be the reason for claiming that we are no closer to understanding consciousness than we were 100 years ago. — Uber
Proof of that would probably solve the Hard Problem of Consciousness and eliminate the Explanatory Gap all at the same time. I have also toyed with some way to do that but am not there yet. Good Luck. Just looking for the right partners. — MiloL
Details like this ... Given: One of the greatest neuroscientists of our time, Antonio Damasio, holds the view that consciousness is an emergent state. The following article from MIT gives a quick rundown of his theories.
The Importance of Feelings
I think variations of these views are now widely accepted in neuroscience. In another debate on this forum I cited several books by prominent neuroscientists saying that it's basically impossible to maintain dualism while pretending to care about reality. Materialism has already won. Now it's a matter of filling in some (very important) details. — Uber
You are probably directing this to Tyler. But I think what you are saying tracks pretty well with what The Inter Mind is all about. Although I am less certain that the Conscious Self can exist after loss of the Physical body, I do see that the possibility exists. The Mind is a computer. The Body is a Machine. The Self is what I'll call "Self". For my purposes Self is the energy that experiences the data that it receives from the computer. Life is the ever-changing environment in which we live. Self evaluates that information and inputs the requisite commands reflective of Self's decisions. The Mind and Machine work together to process the commands all while gathering data. Self faces choices that will impact the very fabric of how they experience Life.
Life is often referred to as a game, which is in some ways accurate, but in truth it is more like an open-world MMO (an open, explorable online world in which players from around the world can play and interact with each other).
Much like the game, we start at birth with nothing. Now modern life might toss a few basics your way: a home of some kind, clothes, perhaps some food. Granted, as humans we are given very little in the way of instinctive memories, so we learn from the world around us.
You see, in this game the player is your self. It arrives with no instructions, no training and little else. We are 100% reliant on the machine we are assigned and on those who birthed our machine.
I will leave the example at this point and ask that you consider your position on this topic with a certainty that the self remains regardless of the condition of the machine. Once the Machine is damaged beyond repair and function, the self moves on, but that topic has enough threads, no doubt. I will add that the self is also, with certainty, separate from the machine. The self experiences all the physical and emotional feelings and sensations provided by the machine and the computer, but the self is definitely separate.
This all being said, does it change what you think, and how? — MiloL
I'll just ask my usual question ... Given: But even when you get to the point of having all the pieces and you know these pieces cause the Conscious experience, the question still screams out as to how the Conscious experience happens from these pieces. — SteveKlinko> I don't think I see the difference.
If we have a causal explanation of the mechanical function of something, that is the answer to how. If we explain it with neural activity, then it seems to me that the question of how the Conscious Experience happens from the pieces is answered. What is left to be answered?
As with your example of the Red Experience: hypothetically, with the neural function understood, that does explain how.
What more is there to explain? Asking how asks what function causes a result. Hypothetically, that would explain just that: the function which causes the result.
There is a Categorical difference between any kind of Neural Activity that you can talk about and the Experience of something like Red. — SteveKlinko> The only categorical difference that I see is degree of specificity. Neural activity is a more specific category involving details, whereas experience is more general, involving less detail of the scientific process. This doesn't mean that the details of neural activity cannot explain the more general overall experiences. — Tyler
Excellent post. Made me laugh because I've been dodging that Giant Club for a while now. "The problem is that the Brain is an electro-chemical machine and nowhere during all the processing that goes on can you find the actual Conscious perception of Light. I like to say that when you have a Conscious perception of Light that you are seeing Conscious Light."
Why do we continue to seek consciousness within the Brain? Descartes localized it to the pineal gland and Science laughed, and continues to do so. Why all this ridiculous peripatetic philosophical meandering? It has not been found in the brain, or in the neurons, or the synaptic clefts, or the neurotransmitters, or neural activity ... etc., etc., ad infinitum.
It clearly is NOT there. Let's get over it!
If it is not there it must be somewhere else...? Oh no... I hear the thud of the homocentric giant approaching. He is about to club me over 'my conscious' head, and insist that Man is still the center of the Universe, that he is the measure of all things, and that he manufactures this 'consciousness' somewhere inside his head and we will find it if we just keep looking. As long as he can continue to do so he can maintain the delusion that 'God' is within him, or the more contemporary delusion that he is a 'God' unto himself.
Why does philosophy insist that Galileo must continually recant, and that "God" or consciousness is inside our heads? Why not follow established precedent and point the telescope/microscope towards the stars? — Marcus de Brun
So you are just arguing about semantics? For me Cup Template and Cupness have the same meaning. I agree about templates but don't understand your objection to saying cupness or even sporkness. — SteveKlinko> Then are you sure that you agree with me about templates? My point was that we have better, more accurate terms to use ("cup template") instead of these philosophically loaded terms, like "cupness". — Harry Hindu
I think Science will get nowhere if it insists that a grain of sand has Consciousness. The Computer Mind would be equivalent to the Physical Human Mind (the Brain). But Humans have a further processing stage which is the Conscious Mind. When Humans see the Color Red there are Neurons firing for Red. But with Humans there is also that undeniable Conscious Red experience that happens. You can't really believe that a Computer has an actual Red experience. That would imply some Computer Self having that experience. Science knows very little about Consciousness so who knows, maybe even a grain of sand has Consciousness. But you have to draw a line somewhere in order to study the problem. — SteveKlinko> No. You have to study it first to see where you should draw the line, or else that line would be subjective - arbitrary. — Harry Hindu
I agree except that the Conscious experience of something like the color Red is more than "Just a Predictive Model of the World". It is a Conscious experience. Science does not know how to explain any Conscious experience yet. There is no red out there. Red only exists as a representation of a certain wavelength of EM energy. Any system could represent that wavelength as something else - the written word "red", the sound of the word "red" being spoken, another color, or even something else entirely. No matter what symbol some system uses to represent that wavelength of EM energy, others that also have a different representation could eventually translate the other's symbol for that thing. That is what we do with translating languages.
What we see is not what the world is. Our minds model the world as physical objects with boundaries and borders, but the world isn't like that. It is all process, which can include "mental" processes and non-mental processes (what many might call "physical" processes). When you look at someone you see them as a physical being, but they are just an amalgam of certain processes, some of them being "mental" in nature. What I mean by "mental" is goal-oriented sensory information processing. Brains are what we see, but they are just models of others' mental processes.
YOU are a process. What I mean is, your mind is a process - a mental process. It is what it is like to be your mental process of simulating the world in fine detail so that you can fine-tune your behavior for the extremely wide range of situations you will find yourself in during your lifetime. Your conscious experience is just a predictive model of the world and is not as the world is in real-time. It is continually updated with new sensory information milliseconds after the events in the world. — Harry Hindu
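As an illustration of the representation point above, here is a minimal Python sketch, with an invented wavelength value, invented symbol tables and an invented band label, of how different systems could label the same band of EM energy with different symbols, and how "translation" is just a mapping between symbol sets:

```python
# Illustrative only: two "systems" map the same wavelength band of EM
# energy to their own arbitrary symbols; translation is just a mapping
# between symbol sets, as with languages. All names/values are invented.

WAVELENGTH_NM = 650  # a wavelength humans typically label "red"

def classify(wavelength_nm):
    """Return a coarse, system-neutral band label for a wavelength."""
    return "long-wave band" if 620 <= wavelength_nm <= 750 else "other band"

# Each system's private symbol for that band.
system_a = {"long-wave band": "red"}         # an English word
system_b = {"long-wave band": "rouge"}       # a French word
system_c = {"long-wave band": 0x00FF0000}    # a machine's numeric code

band = classify(WAVELENGTH_NM)
for name, table in (("A", system_a), ("B", system_b), ("C", system_c)):
    print(f"System {name} represents {WAVELENGTH_NM} nm as {table[band]!r}")

# "Translating" system A's symbol into system B's symbol:
a_to_b = {symbol: system_b[band_] for band_, symbol in system_a.items()}
print(a_to_b["red"])  # -> 'rouge'
```

Nothing in the sketch cares which symbol is used; the shared reference is the wavelength band, which is the point being made about translation between representations.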
I say Computers that we have today don't have Minds but I didn't say that they can never have Minds. We first have to understand how the Human Mind works and only then will we be able to design Conscious Machines. But with Consciousness everything is still on the table. Maybe we should study the Consciousness of a grain of sand first, but I doubt the Wisdom or Logic of doing that. So to say that computers cannot have minds seems to be out of the question, if we designed them to learn using the information they receive about the world and their own bodies through sensory devices and to represent the world (using the information from its senses) in a way that enables it to fine-tune its behavior to achieve its own personal goals. In other words, the computers you have on your desktop probably do not have minds in the same sense that we do. There may be something it is like with it being a process like everything else, but it is without any self-awareness or independent thought. — Harry Hindu
But even when you get to the point of having all the pieces and you know these pieces cause the Conscious experience, the question still screams out as to how the Conscious experience happens from these pieces. My overall explanation is not that finding more neural activity will magically explain consciousness. My explanation of consciousness is that all the known neural activity can create consciousness, when in the correct combination. Which is why I refer to it as a puzzle. All the pieces are there, and known (at least sufficiently), they just have to be arranged correctly. — Tyler
The Computer Mind would be equivalent to the Physical Human Mind (the Brain). But Humans have a further processing stage which is the Conscious Mind. When Humans see the Color Red there are Neurons firing for Red. But with Humans there is also that undeniable Conscious Red experience that happens. You can't really believe that a Computer has an actual Red experience. That would imply some Computer Self having that experience. Science knows very little about Consciousness so who knows, maybe even a grain of sand has Consciousness. But you have to draw a line somewhere in order to study the problem. ↪BlueBanana I don't see how you can have a brain, but no mind, or at least the potential for mind. Having memory means you have a mind. Many people on this thread are being inconsistent and attributing minds to humans but not to computers. Why? How do we know that humans have minds but computers don't? What is a mind if not memory that stores and processes sensory data? — Harry Hindu
I agree about templates but don't understand your objection to saying cupness or even sporkness. You're missing the reality that the Robot would most definitely need the concept of cupness to operate in the general world of things. Knowing the color of the handle of one particular cup might help with that cup. In the real world the Robot would need to understand cupness in order to find a cup in the first place. Then when it finds a cup it can determine what color it is. — SteveKlinko> So, we design a robot with templates - a template for cups, for humans, for dogs, for cars, etc. - just like humans have. We humans have templates stored in our memory for recognizing objects. We end up getting confused, just like a robot would, when an object shares numerous qualities with different templates. The solution is to make a new template, like "spork". What would "sporkness" be? Using the word "cupness" just goes to show what is wrong with philosophical discussions of the mind. — Harry Hindu
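To make the template idea above concrete, here is a minimal Python sketch; the feature sets, names and scoring rule are invented for the example. Each template is a bag of features, recognition is overlap scoring, and a spork-like object ties between the fork and spoon templates, which is exactly where one would add a new template rather than a new "-ness" term:

```python
# Illustrative only: recognition by templates. Each template is a set of
# features; an object is classified by the template it overlaps most.
# The feature sets and scoring rule are invented for this sketch.

TEMPLATES = {
    "cup":   {"concave", "holds liquid", "handle", "rigid"},
    "fork":  {"tines", "handle", "rigid", "utensil"},
    "spoon": {"concave", "handle", "rigid", "utensil"},
}

def match(object_features, templates):
    """Score each template by the fraction of its features the object has."""
    scores = {name: len(object_features & feats) / len(feats)
              for name, feats in templates.items()}
    best = max(scores, key=scores.get)
    return best, scores

# A spork carries all the features of both the fork and spoon templates,
# so those two scores tie; the practical fix is to add a new "spork"
# template rather than debate what "sporkness" is.
spork = {"concave", "tines", "handle", "rigid", "utensil"}
print(match(spork, TEMPLATES))
```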
You're missing the reality that the Robot would most definitely need the concept of cupness to operate in the general world of things. Knowing the color of the handle of one particular cup might help with that cup. In the real world the Robot would need to understand cupness in order to find a cup in the first place. Then when it finds a cup it can determine what color it is. But Humans don't work like Robots. — SteveKlinko
Is the converse true? I think a robot works, although in a simplified way, like a human, making it possible for it to replicate the actions of conscious beings.
The Conscious Visual experience contains an enormous amount of information that is all packed up into a single thing. The Neural Activity is not enough. — SteveKlinko
I think the opposite is the case. A conscious experience, whatever its benefits are, cannot be efficient. While containing all of the visual data provided by the eyes, it also contains the experience of that data, which is such a rich experience that we ourselves can't even begin to comprehend how it is created. The brain also unconsciously organizes and edits that data to a huge extent, filling gaps, causing us to perceive illusions, basically expanding our visual experience beyond the information provided by the senses. For example,
When I reach out to pick up my coffee cup I see my Hand in the Conscious Visual experience. If my hand is off track I adjust my hand movement until I can touch the handle and pick up the coffee cup. — SteveKlinko
a robot would only need to find a specific kind of group of pixels with a color matching the color of the cup. The conscious mind, for some reason, in a way wastes energy forming an idea of "cupness", equating that cup with other cups and connecting it to its intended usage as well as all the memories (unconscious or conscious) an individual has relating to cups. All that information could be broken down into individual points and accessed by a robot, but instead the human mind makes something so complex and incomprehensible.
The existence of that idea also allows me to, while seeing a simple cup, appreciate my conscious perception of that cup. I still can't see the evolutionary value of that appreciation, though. — BlueBanana
Seriously ... you think a Computer experiences the color Red like we do? You know that the only thing happening in a computer at any instant of time is: Add, Subtract, Multiply, And, Or, Xor, Divide, Shift Left, Shift Right, compare two numbers, move numbers around in memory, plus a few more. If you have 4 cores then this is happening in 4 different places in the computer chip. Which one of these operations experiences the color Red? Any device that can do what a person or other animal can do has "Consciousness". That's how it does those things, you know. — Michael Ossipoff
If a Computer could experience for example the Color Red then I would agree. But a Computer does not Experience anything. — SteveKlinko
...and you know that ... how? If you aren't a computer, then how can you speak for what a computer does or doesn't experience?
What does experience mean? I define "experience" as a purposefully-responsive device's interpretation of its surroundings and events in the context of that device's designed purposes.
By that definition, yes a computer has experience.
As I said, we tend to use "Consciousness" and "Experience" chauvinistically, applying those words only to humans or other animals. That's why I try to cater to that chauvinism by sometimes defining those words in terms of the speaker's perception of kinship with some particular other purposefully-responsive device. — Michael Ossipoff
The question is: Do you have a Red experience in any meaningful sense? Think about your Red experience. Think about the Redness of the Red. That Redness is a Property of a Conscious phenomenon. Think about how a Computer works: Add, Subtract, Multiply, etc. There are categorical differences between how the Human Brain functions and how a Computer functions. A Human Brain has trillions of Neurons firing simultaneously at any instant of time. A 4-core processor chip only has 4 places where things can happen at any given instant of time. Effectively a 4-core computer chip has only 4 Neurons. A Computer can be programmed to scan pixels in an image to find the Red parts. A Computer will look for pixels with values that are within a certain range of numbers. A Computer never has a Red experience but it can find the Red parts of an image. — SteveKlinko
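The pixel-scanning description above can be shown as a minimal sketch; the tiny image, the thresholds and the helper function below are invented for illustration, and the point is only that "finding the Red parts" reduces to numeric comparisons:

```python
# Illustrative only: "finding the Red parts" of an image by comparing
# pixel values against numeric thresholds. The tiny image and the
# thresholds are invented for this sketch.

image = [  # each pixel is an (R, G, B) triple of 0-255 values
    [(230, 20, 25), (12, 200, 40)],
    [(40, 42, 200), (250, 60, 50)],
]

def is_reddish(pixel, r_min=180, gb_max=90):
    """Count a pixel as red if R is high while G and B are both low."""
    r, g, b = pixel
    return r >= r_min and g <= gb_max and b <= gb_max

red_locations = [(row, col)
                 for row, line in enumerate(image)
                 for col, pixel in enumerate(line)
                 if is_reddish(pixel)]
print(red_locations)  # -> [(0, 0), (1, 1)]
```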
When you find the red part of an image, why should I believe that you have a red experience in a meaningful sense in which a computer doesn't?
The computer finds the red part of the image. You find the red part of the image. Period (full-stop).
You wouldn't report the red part of the image if you hadn't experienced it. The same can rightly be said of the computer. — Michael Ossipoff
I showed you how a Machine detects Color. It compares numbers in memory locations. It makes no sense to think that it also has a Red experience. It doesn't need a Red experience to detect colors. Machines and Brains do things using different methods. So just because it can find the Red parts of an image, like a Human can, it does not mean it has a Conscious Red experience while doing this. — SteveKlinko
You call it a Conscious Experience when it's yours, or of another person, or maybe another animal. ...you or a purposeful-device sufficiently similar to you, with which you perceive some kinship.
A Computer works in a different way than a Conscious being does. — SteveKlinko
...because you define a Conscious Being as something very similar to yourself. — Michael Ossipoff
I'm not defining Consciousness as the ability to pass as Human. Most Birds can probably have a Conscious Red experience. Science doesn't understand enough about Consciousness yet to design Machines that have Consciousness. — SteveKlinko
...if you're defining "Consciousness" as "ability to pass as human".
Current technology can't yet produce a robot that acts indistinguishably similarly to a human and does any job that a human can do.
Imitating or replacing humans is proving more difficult than expected. Life has evolved over billions of years of natural selection. It wasn't reasonable to just expect to throw together something to imitate or replace us in a few decades.
If such a machine is ever built, some would say that it has Consciousness and Experience (as do we), and some would say that it doesn't (and that it's a philosophical zombie merely claiming to have feelings and experiences).
Of course the former would be right. — Michael Ossipoff
If a Computer could experience for example the Color Red then I would agree. But a Computer does not Experience anything. A Computer can be programmed to scan pixels in an image to find the Red parts. A Computer will look for pixels with values that are within a certain range of numbers. A Computer never has a Red experience but it can find the Red parts of an image. So just because it can find the Red parts of an image, like a Human can, it does not mean it has a Conscious Red experience while doing this. A Computer works in a different way than a Conscious being does. Science doesn't understand enough about Consciousness yet to design Machines that have Consciousness. Any device that can do what a person or other animal can do has "Consciousness". That's how it does those things, you know. — Michael Ossipoff
But Humans don't work like Robots. Humans and probably all Conscious beings have a further processing stage that presents the Visual experience to them. The Visual experience is what we use to move around in the World. When I reach out to pick up my coffee cup I see my Hand in the Conscious Visual experience. If my hand is off track I adjust my hand movement until I can touch the handle and pick up the coffee cup. It would be much more difficult to do this without the Conscious Visual experience. The Conscious Visual experience contains an enormous amount of information that is all packed up into a single thing. The Neural Activity is not enough. We would need far more Neural Activity to equal the efficiency that the Conscious Visual experience provides us. As a counter-example, robots lack conscious visual experience but manage to react accordingly to information transferred by photons. — BlueBanana
Take the Conscious Visual experience of the scene the p-Zombie is looking at. Without the Conscious Visual experience the p-Zombie would be Blind and would not be able to move around in the World without bumping into things. The Conscious Visual experience is the final stage of the Visual process. People who think the Conscious experience is not necessary for the p-Zombie to move around in the World are exhibiting Insane Denial of the purpose of the Conscious Visual experience. However, a p-zombie, despite having no consciousness, reacts to stimuli in exactly the same way a human being would. — BlueBanana
Sorry, I was unclear. I meant that common experience effects are additional "puzzle pieces", not really added to neural activity. They are common experiences that involve neural activity, but whose function isn't necessarily completely understood or proven by neural activity.
Experiences such as memories triggering other memories, or compounds of memories creating memories of concepts, or categories of memories, or analyzing cause and effect, etc. — Tyler
What exactly are these Common Experience Effects that you would add to the Neural Activity to explain the Explanatory Gap? I don't mean common or existing knowledge, which is regarding the overall function of consciousness. The theory is regarding overall consciousness, yes, but the common knowledge I was referring to was more like basic concepts. The theory takes those basic concepts like puzzle pieces, and explains how they fit together.
I attempt to arrange the puzzle pieces of neural activity (+ common experience effects) to fill the explanatory gap. With the finished puzzle, the correlation and cause/effect between neural activity and consciousness is portrayed. — Tyler
I think you are giving too much credit to the existing knowledge about Consciousness. Nobody has any idea how Neural Activity leads to the Conscious experience. Forget about knowing any kind of precise molecular functioning of the process. There is no such knowledge. All we know is that when particular Neural Activity happens there will be a particular correlated Conscious experience happening. There is no explanation of how this happens. This is the classic Explanatory Gap of Consciousness. > I can't explain the precise molecular function of how neural activity creates a conscious experience, as I'm not a scientist. But I can explain the more generalized logical process.
I believe I offered plenty of explanation of how neural activity creates conscious experience (and could link or paste more of where I have tried to explain the overall concept).
The evidence supporting it is common-knowledge concepts, and the theory explains the cause and effect of those concepts interacting with each other.
I don't claim it is scientifically proven. I claim it's a theory, which should be considered, and tested for flaws and to see if it can be disproven, and potentially become scientifically proven. After a while now with this theory, I have yet to receive much of any reasoning at all suggesting it is incorrect. — Tyler
Because Science knows so little about the actual mechanisms of Consciousness, any and all speculations are still on the table. Your speculation is a good one that could be proven true or false some day. Here is the question: do you think it's reasonable to think that consciousness also has a storage and that it too is passed on during reproduction, and is thus under the influence of evolution over millions of years too? #foodforthought — Dendu
Both the Red Experience and Neural Activity could be considered the same category of Memory Access. I suppose you could argue the red experience isn't necessarily memory access, but considering it's an "experience", it could also be argued that any experience is memory.
If my explanation is true, then the difference between the two categories is that one is the cause and one is the effect. Neural activity is the cause, and the Red Experience is the effect.
The visual screen embodied in front of your face, that you mention, is memory. Similar to taking a photo, then later accessing that photo. It's a coded recording of the image. Human memory just isn't nearly as precise as a computer's at accessing a particular memory.
An additional quantity of one category of thing results in a different category, by cause and effect, since the extra quantity surpasses a point which causes a new effect.
e.g. 1: a small quantity of water on the ground is moisture, but an additional quantity surpasses the point where the category becomes a puddle.
e.g. 2: a small quantity of various molecules in an egg + additional quantity = baby — Tyler
Just because all these Neural things are happening does not even begin to explain the actual Experience of Red. — SteveKlinko
Why do you say that? Why shouldn't a complex combination of simultaneous memory access explain the experience?
I don't really see any reason to assume that the experience must be more than that.
The "experiencer" is the additional quantity of neural activity. The inter-workings of a complex combination of many smaller elements, creates something greater and more significant than the sum of the parts.
I think that concept is observed to occur in other situations in this universe, as combinations of smaller parts (potentially the way that virtually everything is constructed by smaller parts),
so why can't consciousness be the same? — Tyler
When memories of these concepts, plus similar scenarios are accessed simultaneously, as a combination, this creates the conscious experience of Red.
You are also likely accessing memories of seeing the color and similar shades in past instances. These neural patterns of memories of red, match current incoming neural patterns of visual input (when you are actively looking at red). — Tyler
If we can all agree that there are at least the two distinct Realms, Physical and Conscious, then we need to understand how things that happen in the Physical Realm can cause things to happen in the Conscious Realm. — SteveKlinko
The best approach, I find, is to reverse this position, and look at how things in the conscious realm cause things in the physical realm. The evidence of a temporal priority is much clearer this way, and we can proceed toward understanding this priority through concepts such as final cause and free will. — Metaphysician Undercover
The Conscious Red Light can be interpreted as a type of input Data that the Conscious Mind can process. The Conscious Red Light is input Data for the Conscious Mind in a similar way to how the hex number 00FF0000 is input Data for a Computer. A Conscious Mind Detects Physical Red Light when it receives a Conscious Red experience. A Computer Detects Physical Red Light when it receives the 00FF0000 hex number. The Conscious Red Light and the hex number 00FF0000 are Surrogates for the Physical Red Light. Conscious Red Light (the Conscious Experience of Red) has Redness as a Property, but Conscious Red Light does not have Wavelength as a Property. — SteveKlinko> So, if by definition the property of "Redness" is only in the conscious experience, doesn't that mean the property of Redness is just the neurological process? (assuming conscious experience is a neurological process)
The difference between Wavelength and Redness is that Redness is in the brain as an interpretation of the wavelength.
So, basically I would think Redness is just the coded version of the measurement of the Wavelength.
where does this Surrogate come from and how do we Experience it?
>Assuming the eyeball measures the wavelength and translates that measurement into information (as you mentioned, it's a surrogate), then the brain would send and store that information as neurological activity.
So Redness would be the coded information of the measurements of wavelengths.
Computers code information, save it, and access it later. I'm guessing the brain does a similar concept, but with a more efficient coding and saving process (and the additional function of accessing many bits of information simultaneously).
It is a little bizarre to think that everything we ever experience is probably only information of measurements, which is coded and saved with neurons. — Tyler
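As a footnote to the 00FF0000 surrogate mentioned above, here is a minimal sketch of how such a hex number encodes red as plain channel values, assuming the common 0xAARRGGBB layout (the posts themselves don't specify one):

```python
# Illustrative only: unpack a 32-bit color code into channel values,
# assuming an AARRGGBB layout (the layout is an assumption; the thread
# only gives the number 00FF0000).

color = 0x00FF0000

alpha = (color >> 24) & 0xFF  # 0
red   = (color >> 16) & 0xFF  # 255
green = (color >> 8) & 0xFF   # 0
blue  = color & 0xFF          # 0

print(alpha, red, green, blue)  # -> 0 255 0 0
# To the computer, "red" is nothing but this pattern of numbers:
# the red channel at its maximum and the other channels at zero.
```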
Just because Causal Processes can happen over time doesn't mean you don't need a C Realm. Any Causal Process of the C Realm must deal with Physical Realm Activity and translate that to Conscious Realm Activity. Maybe these Causal Processes are in Realm A and Realm B but somehow a Bridge between Realm A and B must be constructed. — SteveKlinko
As I explained above, the "Bridge", which is realm C, is not necessary. The realm C is only required to prove a causal relation. Realm A and realm B may be causally interactive without any realm C. The so-called "Bridge" is just needed to understand the causal relation. However, since understanding is already a property of the one realm, let's say realm A, the Bridge would be entirely within realm A, principles of understanding, and not a real bridge, nor a realm C, at all.
This is the real problem of consciousness. We assume a material, physical world, a realm which is outside the realm of consciousness. But we have no real way to understand it because everything which we understand is within the realm of consciousness. So we poke and prod at this material world, observing how it behaves in response, but we can only make conclusions based on a supposed causal relation, because we haven't discovered any real Bridge. There may not actually be a Bridge, and any constructed Bridge would just be within realm A, and only a false Bridge. — Metaphysician Undercover
I think I understand what the question is asking. But my answer is still the same; it's just memory. Even when I concentrate on it, and it seems indescribable, I still comprehend the scientific reasoning behind that.
The brain is accessing the neurons which have saved the information about the wavelengths of light which reached the eyeball, when Red was recorded. It probably "feels" like something special and unique when you focus on it, because you are accessing memories of concepts relative to red, simultaneously to memories of the visual of red (wavelength information). This would also explain why Red does not seem significant, when it is seen or remembered, but not consciously thought about (no memory concepts accessed).
I think that is basically the only mystery about it. Same as all sensory data saved as memories.
How could you know that it exists only in the conscious mind, though? It could potentially exist in a computer program (unless you would consider that a conscious mind). I don't believe it would with current-day technology, but I suspect future general AI will perceive similar conscious states, including the experience of red. — Tyler
The Physical Realm is real and the Conscious Realm is real. The Interaction is completely real, we just don't know what it is yet. The Correlations are predictable and consistent enough that we must assume there is some kind of causal Interaction between A and B. I think we need a C Realm, at least as a place holder, for the Interaction to take place in. — SteveKlinko
The C realm here is completely imaginary. What is real is the activity of A and the activity of B. That there is a "causal interaction" is your description, so it is something which is completely a product of your mind, imaginary. If you want to assign "reality" to this causal interaction you would need to base it in something real, independent of your mind. You could assign reality to the passing of time, to make the causal interaction real, but this is not introducing another "realm", it is just assuming that the passing of time is real, and is common to both realms. — Metaphysician Undercover
Just because Causal Processes can happen over time doesn't mean you don't need a C Realm. Any Causal Process of the C Realm must deal with Physical Realm Activity and translate that to Conscious Realm Activity. Maybe these Causal Processes are in Realm A and Realm B but somehow a Bridge between Realm A and B must be constructed. I think we really need this Realm C to keep us concentrating on what the problem really is. — SteveKlinko
Do you agree, that the passing of time satisfies the conditions required of the place holder (realm C)? We do not need the realm C as a place holder if the passing of time is real and common to both A and B, allowing for causal relations. — Metaphysician Undercover
It is only by your assumption of a third realm that you claim A and B are not separate. As I explained above, A and B may be distinct, and interacting. When you give reality to this "interacting", you make A and B parts of a larger whole, C, which contains this interacting. But there is no necessity to assume C. There is simply A interacting with B and the reality of the interactions is accounted for by the activities of A and the activities of B.
This is why there appears to be a "problem" of consciousness. We keep assuming that if A and B interact, the "interaction" itself ought to be evident. So we look for the interaction. But this is a mistaken procedure because the assumption of this third realm, the realm of interaction, is not supported logically. There is no need to assume a realm of interaction. We have activity occurring in A, and activity occurring in B. Some of the activity in A might be the cause of some activity in B, and vice versa. There is no need to assume C, the realm of interaction, unless your intent is to make A and B two parts of a larger whole, C. But that is simply the intent to reduce the two distinct realms to one realm, C. It is a monist intent. If the realm of interaction is not supported by evidence, then this is an incorrect procedure, and the monist intent is a misguided attempt to simplify what cannot be simplified. — Metaphysician Undercover
↪Metaphysician Undercover You're right, I could be clearer in expressing my concerns. I think the issue I'm worried about has the principle of sufficient reason at its core. I'll have another go at explaining my confusion more clearly.
Let's suppose you believe that reality consists of two realms, we can neutrally call them realm A and realm B. If they are genuinely two distinct realms, then they are self-contained insofar as all the elements in one realm can be accounted for in terms only involving other elements of that same realm. This is real dualism about reality.
However, realm A becomes epiphenomenal with regard to realm B and vice versa. But the principle of sufficient reason would then require us, from the perspective of realm B, to reject the existence of realm A, since realms that just "tag along for the ride" have no sufficient reason for existing. Similarly, the principle of sufficient reason would require us to reject the existence of realm B when considering things from realm A. We might want to try to take a third way, but we are assuming that reality just consists of two realms, so there is no third realm we can go to for adjudication.
So, we then suppose that realm A and realm B are not separate realms. We might think that everything we previously presumed to be in realm A can be accounted for by things in realm B, or vice versa, but we thus eliminate one of the two realms we presumed to exist in the first place. We might want to try an approach that said that there is at least one thing in what we used to call realm A that cannot be accounted for by things in realm B, but then we are just keeping realm A as a self-contained realm, but with a shrunken domain, and once again from the perspective of the enlarged realm B, it becomes epiphenomenal and so its raison d'être is removed. And similarly, from the perspective of this reduced realm A, realm B is redundant.
So, the principle of sufficient reason pushes us towards monism.
There might be some conceptual confusion going on in the above, maybe in my assumption that if reality consists of two realms, then the principle of sufficient reason is applicable in both realms individually. But if the alternative is to say that the principle of sufficient reason has to be applied from neither realm A nor realm B, then we seem to have two choices. First, we could suppose that the principle is to be applied from a perspective in reality. But then we have to introduce a third realm to reality, and the same line of thought as applied above to realm A and realm B would reapply to realm A + realm B + realm C, and the pressure for monism remains. Alternatively, we could try the line that the principle of sufficient reason is not to be applied from any perspective in reality at all, but that requires that we be able to make a distinction between not applying a principle from within reality, on the one hand, and not really applying the principle at all, on the other, but that smells like a distinction without a difference. — jkg20
Think about the Redness of the Red. What is that? The Redness of the Red is not explainable in words. It exists only in the Conscious Mind. It's purely a Conscious Phenomenon. Nobody even knows what the Red experience is. It's so familiar to us but it is a complete Mystery. How can you possibly think you know the answer when you don't even know what the Red experience is? Concentrate on the Redness itself and you will eventually see the Mystery of it and that it is quite a different thing than anything Science can Explain right now. I don't know how the neural activity functions mechanically, if that's what you're asking. All I know is somehow neurons store memories as information, and when that neuron is accessed, the info of that memory is recalled.
But as far as I can theorize, based on these concepts, this process of accessing the recorded information, is all it takes to produce a conscious experience of anything (including Red), as long as it's the appropriate info and neurons which are being accessed simultaneously.
I don't see why there should be anything more to it. — Tyler
As I remember it the pZombie was a discussion tool for asking the question: What would be the difference if Consciousness was removed from the Human Mind? The question was asked because people were really wondering what the purpose of Consciousness was. There were people that thought that there would be no noticeable difference because they thought Consciousness was just an Illusion and had no real purpose. It was Insane denial of the purpose of Consciousness. If you take away the Visual Conscious experience you would be Blind. We could not move around in the world with just Neural Processing. Removing the Conscious Visual experience removes the final stage in the Visual process that lets us See. People who think that the pZombie would be undetectable deny the Primacy of Consciousness in our existence. Okay, so to subscribe to the logical possibility of zombies you have to subscribe to strong anthropic mechanism first? As in, it's all bottom-up cognition and you can have a complete mechanistic description, like billiard balls colliding with each other.
I've always suspected the zombie argument is really an argument against reductionism, where if the argument was rephrased to allow for top-down causation the problem would vanish. That's really why Descartes' mental substance exists: because he had already decided that mechanism was sufficient to describe everything else, including the other animals. — JupiterJess
This is awkward for me since I'm secretly a determinist with a predilection for bottom-up explanation wherever possible (Occam's razor etc.). Yet, no Zombies for me!
Weird, hey? — Kym
I have never quite understood arguments that end up with the conclusion that Consciousness is an Illusion. In my way of thinking an Illusion is something that doesn't really exist. The Red Experience certainly Exists. So how do Physicalists understand the meaning of the word Illusion? — SteveKlinko
I'm not sure I've done enough reading of the Physicalist boffins to answer for them. But years back I asked a big-time Zen teacher why Buddhists said we all live in the 'Maya' world of illusion - when there were so obviously correspondences between the world and my experiences of it. He said, yes, a material world does exist but it's SO different from how we perceive it that we more accurately should say we're living in an illusion. — Kym
>I mean it is the coordinated combination that creates the experience.
If the neural activity of a memory is on its own, the memory doesn't do much for experience.
Or if the combination of neural activity is uncoordinated, and random or irrelevant, then the experience would be nonsense.
But when it's a coordinated combination of parts, the sum of those parts is a coordinated assembly.
The coordinated assembly, is the experience of Red. — Tyler
I agree with half of this physicalist view: Yes for the illusion part. No for its irrelevancy.
An illusion? Well, a convenient fiction at least. It turns out that there is no distinct redness in the material world. There is in fact a seamless array of available wavelengths across a very wide spectrum (most of which is quite invisible to us but still real). We perceive a distinct redness after our red colour cones are triggered by a certain range of wavelengths. — Kym