bivalves can feel pain and therefore have consciousness — thewonder
The scientific revolution of the 17th century, which has given rise to such extraordinary progress in the understanding of nature, depended on a crucial limiting step at the start: It depended on subtracting from the physical world as an object of study everything mental – consciousness, meaning, intention or purpose. The physical sciences as they have developed since then describe, with the aid of mathematics, the elements of which the material universe is composed, and the laws governing their behavior in space and time.
We ourselves, as physical organisms, are part of that universe, composed of the same basic elements as everything else, and recent advances in molecular biology have greatly increased our understanding of the physical and chemical basis of life. Since our mental lives evidently depend on our existence as physical organisms, especially on the functioning of our central nervous systems, it seems natural to think that the physical sciences can in principle provide the basis for an explanation of the mental aspects of reality as well — that physics can aspire finally to be a theory of everything.
However, I believe this possibility is ruled out by the conditions that have defined the physical sciences from the beginning. The physical sciences can describe organisms like ourselves as parts of the objective spatio-temporal order – our structure and behavior in space and time – but they cannot describe the subjective experiences of such organisms or how the world appears to their different particular points of view. There can be a purely physical description of the neurophysiological processes that give rise to an experience, and also of the physical behavior that is typically associated with it, but such a description, however complete, will leave out the subjective essence of the experience – how it is from the point of view of its subject — without which it would not be a conscious experience at all.
So the physical sciences, in spite of their extraordinary success in their own domain, necessarily leave an important aspect of nature unexplained. — Thomas Nagel
At this point, would the people collectively manifest the consciousness of the original brain, as a whole, the way it would have manifested inside the person? — simeonz
Qualia can just describe aspects of physical states. — thewonder
Do you here allude to, or have you just re-invented, the China brain? — bongo fury

It shows you how philosophically illiterate I am. At least the Wikipedia article doesn't say "first proposed by ancient Chinese philosophers". :)
Also relevant, this speculative theory of composition of consciousnesses. It also attempts to quantify the kind of complexity of processing with which you (likewise) appear to be proposing to correlate a spectrum of increasingly vivid consciousness. — bongo fury

Interesting. Thank you.
Edit: To avoid starting another thread, I just wanted to bring up that bivalves can feel pain and therefore have consciousness. This, for me, has partially resulted in a crisis of faith as a vegetarian, but I think it may posit something useful for anyone concerned with Philosophy of Mind. That a decentralized network can still be conscious has interesting implications for the field. — thewonder

Just to explicate something to which you may have alluded here with the vegetarianism remark: if the hypothesis that bivalves can feel pain is true (which doesn't seem particularly implausible in principle, and I wouldn't eat them either), why wouldn't other reactionary systems, such as those of plants, feel pain at some reduced level of awareness?
This neatly distinguishes a strong eliminativism (ascribing consciousness to nothing) from mere identity-ism (ascribing consciousness to some things, some brain states). The former would be what causes horrified reactions from many (see above), and the latter is accepted by Terrapin (I think), and @TheHedoMinimalist (I think). — bongo fury
Doesn't ascribing consciousness to any machines with "software" set the bar a bit low? Are you at all impressed by Searle's Chinese Room objection? — bongo fury
The human brain has greater overall capacity for information processing than that of animal species. Both have (in general) greater analytical performance compared to plants. Doesn't it follow that animals are more conscious than plants? — simeonz
Plants, on the other hand, are capable of some sophisticated behavior (both reactive and non-reactive), if their daily and annual routines are considered in their own time scale. Doesn't that make them more conscious than, say, dirt? — simeonz
But is dirt completely unconscious? Particles cannot capture a substantial amount of information, because their states are too few, but they have reactions as varying as can be expected. After all, their position-momentum state is the only "memory" of past "observations" that they possess. But it isn't trivial. One could ask: why wouldn't they be considered capable of a microscopic amount of awareness? Not by virtue of having mass, but because of their memory and responses. If not, there has to be some specific point on the scale of structural and behavioral complexity at which we consider awareness to become manifested. — simeonz
How many neurons (or similar structures) would we need to create an organism whose behavior can be considered minimally sentient - five, five hundred, five million, etc? — simeonz
I would like to illustrate how I think societies and ecosystems are similar with respect to consciousness using a thought experiment. Suppose that we use a person for each neuron in the brain, and give each person orders to interact with the rest like that neuron would, but using some pre-arranged conventional means of human interaction. We instruct each individual as to the state its corresponding neuron has initially, so that it matches the one from a living brain (taken at some time instant). Then we also feed the peripheral signals to this central nervous system, as the real brain would have experienced them. At this point, would the people collectively manifest the consciousness of the original brain, as a whole, the way it would have manifested inside the person? Or to put it differently, do eliminative materialists allow for consciousness nesting? — simeonz
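The procedure described above can be sketched mechanically. The following is a hypothetical illustration only: the three-node network, its weights, and the threshold update rule are all invented for the sketch and are not a model of any real nervous system. Each "person" applies a purely local rule, knowing nothing about the whole.

```python
# Hypothetical sketch of the thought experiment: each "person" follows the
# same local rule its assigned neuron would, with no view of the whole brain.
# The network, weights, and threshold rule are invented for illustration.

# Each person is told: which people to listen to (incoming connections)
# and the weight of each connection.
weights = {
    "A": {},                    # A receives only peripheral input
    "B": {"A": 1.0},            # B listens to A
    "C": {"A": 0.6, "B": 0.6},  # C listens to A and B
}
state = {"A": 0, "B": 0, "C": 0}  # initial states copied from the "real" brain
THRESHOLD = 0.5

def step(state, peripheral):
    """One round: every person applies the neuron rule simultaneously."""
    new = {}
    for person, inputs in weights.items():
        total = peripheral.get(person, 0.0)
        total += sum(w * state[src] for src, w in inputs.items())
        new[person] = 1 if total > THRESHOLD else 0
    return new

# Feed in the peripheral signals the original brain would have received,
# as slowly as the people need; only the order of rounds matters here.
for signal in [{"A": 1.0}, {}, {}]:
    state = step(state, signal)
    print(state)  # activity propagates A -> B -> C over the rounds
```

Nothing in the rule refers to the substrate, which is exactly what makes the question pointed: whether running the same rule with people instead of cells would yield the original experience is the open question the thought experiment poses.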
I would start by mentioning that bongo fury corrected me — TheHedoMinimalist
I would say that my view is more properly called functionalism rather than eliminative materialism. — TheHedoMinimalist
The classical Turing test is outdated, because it limits the scope of the observations to static behavior. — simeonz
In particular, does materialism deny awareness and self-awareness as a continuous spectrum for systems of different complexity?
— simeonz
They do not deny that it is a spectrum but they don’t have to think that it begins on a molecular level or that all objects are part of the spectrum. — TheHedoMinimalist
How many neurons (or similar structures) would we need to create an organism whose behavior can be considered minimally sentient - five, five hundred, five million, etc?
— simeonz
This is difficult to precisely answer but I would make an educated guess and say enough to form a microscopic insect. — TheHedoMinimalist
I would like to start by mentioning that bongo fury corrected me in his earlier comment about the definition of eliminative materialism. I would say that my view is more properly called functionalism rather than eliminative materialism. — TheHedoMinimalist

I understand. I cannot imagine how eliminative materialists would deny the phenomenology of the senses, considering that the senses are central to logical empiricism, which I thought was their precursor, but I am not qualified to speak.
I would like to contribute to my earlier point with a link to a video displaying vine-like climbing of plants on the surrounding trees in the jungle. While I understand that your argument is not only about appearances, and I agree that analytico-synthetic skills greatly surpass plant life, it still seems unfair to me to award not even a fraction of our sentience to those complex beings. — simeonz

Well, I'm not sure if plants have mental activity of any sort. This is because plants do not seem to be capable of autonomous action or decision making which is remotely similar to that of humans. They also probably do not possess sufficient energy to support something like mental activity. Plants are more likely to have mental activity than dirt though. This is because dirt doesn't seem to be sufficiently compact to form an embodied entity which could support a mind. — TheHedoMinimalist
This is difficult to precisely answer but I would make an educated guess and say enough to form a microscopic insect. I don't think that my theory has to explain everything precisely in order to be a plausible theory. — TheHedoMinimalist

My thinking here is probably inapplicable to philosophy, but I always entertain the idea of a hypothetical method of measurement, a system of inference, and conditions for reproducibility. If we were to observe that our muscles strain when we lift things, and conclude that there is a force compelling objects toward the ground, this assertion wouldn't be implausible. Yet it wouldn't have the aforementioned explanatory and analytical qualities. But I acknowledge that philosophy is different from the natural sciences.
The same epistemic difficulties exist for the binary view of consciousness which you accept. — TheHedoMinimalist

I don't accept any view at present. I am examining the various positions from a logical standpoint. But, speaking out of sentiment, I am leaning more towards a continuum theory.
The reason why I think that the thought experiment you are providing me is absurd is because humans cannot remotely behave like neurons while maintaining their identity as humans or even humanoid creatures. This is because humans would have to carry out interactions as rapidly as neurons do with unrealistically perfect synchronization. — TheHedoMinimalist

The peripheral input could be fed in as slowly as necessary to allow a relaxed scale of time that is comfortable for the human beings involved. This doesn't slow the brain down relative to the sense stimuli it receives, only to time proper. But real time does not appear relevant for the experiment.
Wasn't that Searle's point? That the test was useless already, because an obvious zombie (an old-style symbolic computer) would potentially pass it? — bongo fury

I don't know really. But according to Wikipedia:

The Chinese room argument holds that a digital computer executing a program cannot have a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave.

This seems to me to suggest that John Searle wanted to reject machine sentience in general.
I agree it is interesting to poll our educated guesses (or to dispute) as to where the consciousness "spectrum" begins (and zombie-ness or complete and indisputable non-consciousness ends). I vote mammals.

Related to that, it might be useful to poll our educated guesses (or to dispute) as to where the zombie "spectrum" ends (and consciousness or complete and indisputable non-zombie-ness begins). I vote humans at 6 months. — bongo fury

For me personally, the value of the discussion is the inspection of the logical arguments used for a given position and the examination of its distinguishing qualities. Without some kind of method of validation, meaning any kind of quality control, it is difficult to commit. I would like a scale that starts at nothing, increases progressively with the analytico-synthetic capacity of the emergent structures, and reaches its limit at a point of total comprehension, or has no limit. It would simply make interpretations easier.
through the microscope of molecular biology, we get to witness the birth of agency, in the first macromolecules that have enough complexity to ‘do things.’ ... There is something alien and vaguely repellent about the quasi-agency we discover at this level — all that purposive hustle and bustle, and yet there’s nobody home ...
...Love it or hate it, phenomena like this exhibit the heart of the power of the Darwinian idea. An impersonal, unreflective, robotic, mindless little scrap of molecular machinery is the ultimate basis of all the agency, and hence meaning, and hence consciousness, in the universe. — Daniel Dennett
I was once interviewed in Italy and the headline of the interview the next day was wonderful. I saved it for my collection; it was: "YES we have a soul, but it's made of lots of tiny robots", and I thought that's exactly right. Yes we have a soul, but it's mechanical. But it's still a soul; it still does the work that the soul was supposed to do. It is the seat of reason. It is the seat of moral responsibility. It's why we are appropriate objects of punishment when we do evil things, why we deserve the praise when we do good things. It's just not a mysterious lump of wonder stuff... that will out-live us. — Daniel Dennett
I must say, I find it easy to intuit that all insects are complete zombies, largely by comparing them with state of the art robots, which I likewise assume are unconscious (non-conscious if you prefer). — bongo fury
I would like to contribute to my earlier point with a link to a video displaying vine-like climbing of plants on the surrounding trees in the jungle. While I understand that your argument is not only about appearances, and I agree that analytico-synthetic skills greatly surpass plant life, it still seems unfair to me to award not even a fraction of our sentience to those complex beings. — simeonz
I don't accept any view at present. I am examining the various positions from a logical standpoint. But, speaking out of sentiment, I am leaning more towards a continuum theory. — simeonz
The peripheral input could be fed in as slowly as necessary to allow a relaxed scale of time that is comfortable for the human beings involved. This doesn't slow the brain down relative to the sense stimuli it receives, only to time proper. But real time does not appear relevant for the experiment. — simeonz
In particular, does materialism deny awareness and self-awareness as a continuous spectrum for systems of different complexity? — simeonz
They do not deny that it is a spectrum but they don’t have to think that it begins on a molecular level or that all objects are part of the spectrum. — TheHedoMinimalist
This seems to me to suggest that John Searle wanted to reject machine sentience in general. — simeonz
I mostly suspect that insects are conscious because they are capable of moving. They also appear afraid when I try to squash them. — TheHedoMinimalist
But wouldn't they appear that way if they were zombie robot insects?... if you can imagine such a thing... could zombie actors help? — bongo fury
Otherwise, we might as well conclude that a lifeless rock like Mars is conscious because it's capable of micro-movements like tectonic plate activity. — TheHedoMinimalist

But Mars has no analytico-synthetic capacity, just dynamism. Even in its own time scale, it wouldn't appear as sentient as human beings. I do entertain the panpsychic idea that simple matter possesses awareness, but of very tenuous quality, negligible by human standards. Mars does manifest adaptation, but it does not engender assumptions of a complex underlying model of reality.
I always suspect that (replacement of heap/non-heap by as many different grades of heap as we can possibly distinguish) is a step backwards. — bongo fury

I understand, but what is the alternative? If anything less than a million neurons is declared not conscious according to some version of materialism, then that one additional neuron somehow introduces an immense qualitative difference. This is not comprehensible in the materialist world, where the behavior will be otherwise almost unchanged; there will be no substantial observable effect. At the million scale, normal genetic variation or aging would be sufficient to alter the presence of consciousness, without significant functional changes otherwise.
The philosophy of mind that is based on this view is that the mind is simply the harmonised output of billions of neurons that produce the illusion of subjectivity. — Wayfarer

Reading from the quotes that you kindly provided, I am left wondering what "illusion" means in this context. I understand the general sentiment expressed, and I can see how Daniel Dennett might reject subjectivity as its own substance (mind-body dualism) or intrinsic property (panpsychism), but I do not see how the determination and differentiation of one's self can be considered any more an illusion than other biologically compelled emotions, like hunger or willfulness. We don't call those dysfunctional. In the end, I am not sure that Dennett considers the concept of self dysfunctional either, since he does accept the consequences of being a subject.
The brain from the thought experiment (the China brain idea, as bongo fury pointed out) includes all the marks of sentience that Mars does not have: great memorization, information processing, and responsiveness (through simulated peripheral output). The time scale is off, but I do not see how this affects the assumption of awareness. In the post-Einsteinian world, time is flexible, especially when acceleration is involved, so I wouldn't relate time and sentience directly. — simeonz
I do not see how the determination and differentiation of one's self can be considered anymore an illusion than other biologically compelled emotions - like hunger or willfulness. We don't call those dysfunctional. — simeonz
I am not sure Daniel Dennett considers the concept of self dysfunctional either, since he does accept the consequences from being a subject. — simeonz
Similarly, my past experience of having behavioral patterns and seeing that they are influenced by my mental activity provides evidence for the hypothesis that insects are more likely to be conscious than zombies. Why do you think they are more likely to be zombies? — TheHedoMinimalist
I always suspect that (replacement of heap/non-heap by as many different grades of heap as we can possibly distinguish) is a step backwards.
— bongo fury
I understand, but what is the alternative? — simeonz
The twist in the Chinese room, I guess, is that a human (Searle) is revealed to be, in relation to the outer behaviour of the creature, a mere machine himself. — bongo fury
Predictably, a primitive science attempts to understand and build machines with true "automotivity". The fruits of this research are limited to sail-powered and horse-powered vehicles, and there is much debate as to whether true automotivity reduces ultimately to mere sail-power, so that car engines will eventually be properly understood as complicated sail-systems. And even now the philosophers remark sagely that engines may appear to be automotive, but the appearance of automotivity is, in reality, the sum of millions of sailing processes. — bongo fury
self-driving cars might be conscious — TheHedoMinimalist