...the brain controls EVERY aspect of the body... — Garrett Travers
No, your research agreed with me, not you. — Garrett Travers
No, your research agreed with me, not you. And I pointed that out to you by quoting it. Did you miss that part? Here, I'll do it again:
"This has never been shown before," says Levin. "No one would have guessed that eyes on the flank of a tadpole could see, especially when wired only to the spinal cord and not the brain." The findings suggest a remarkable plasticity in the brain's ability to incorporate signals from various body regions into behavioral programs that had evolved with a specific and different body plan.
You completely misunderstood your research. — Garrett Travers
Frontiers in Cellular Neuroscience, 2019 meta-analysis:
https://www.frontiersin.org/articles/10.3389/fncel.2019.00302/full — Garrett Travers
From which:
Research on consciousness has always been difficult. For example, the definition of consciousness is very vague and includes many aspects, such as the disciplines of psychology and philosophy. Thus, there is no definitive conclusion.
From the conclusion of the article:
At present, consciousness is a very vague concept that lacks a specific and accurate definition
From the second source:
Thousands of years of reflection on the human mind has left us hard-wired with concepts that are intuitive, descriptive, and wildly unscientific. Understanding the mechanisms of the human brain will require much greater definitional precision. But in the end, better definitions will allow us to develop the theoretical constructs—like the periodic table for chemistry or the theory of evolution for biology—that can move neuroscience forward and help us understand how our brains work.
So, both of the first two articles acknowledge that the nature of consciousness is elusive, and in no way claim that it has been found or accounted for. And in any case, they are only addressing what David Chalmers describes as the easy problems, which he gives as:
The easy problems of consciousness include those of explaining the following phenomena:
* the ability to discriminate, categorize, and react to environmental stimuli;
* the integration of information by a cognitive system;
* the reportability of mental states;
* the ability of a system to access its own internal states;
* the focus of attention;
* the deliberate control of behavior;
* the difference between wakefulness and sleep.
But as for the hard problem, the problem which physicalist accounts can't explain, he says:
The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (What Is It Like to Be a Bat?, 1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience. — Chalmers
Plainly neuroscience is an objective discipline. Neuroscience can help remediate those in need of neuroscientific therapy, or help to understand the relationship between functional aspects of consciousness and performative acts - in that sense, how the brain operates. But that is cognitive science, and neuroscience, not philosophy as such.
One canonical text on the subject remains the Bennett and Hacker book, The Philosophical Foundations of Neuroscience, which is staunchly critical of the kind of lumpen reductionism that the OP is advocating. I'll probably never read it in full but the Notre Dame Review provides a useful primer. The main point of their critique centres around the 'mereological fallacy', i.e. the idea that the brain does things. The brain is not itself an agent, and does not, in that sense, do anything, although obviously you need one to act (although not always, it seems.)
(Bennett and Hacker) argue that for some neuroscientists, the brain does all manner of things: it believes (Crick); interprets (Edelman); knows (Blakemore); poses questions to itself (Young); makes decisions (Damasio); contains symbols (Gregory) and represents information (Marr). Implicit in these assertions is a philosophical mistake, insofar as it unreasonably inflates the conception of the 'brain' by assigning to it powers and activities that are normally reserved for sentient beings. It is the degree to which these assertions depart from the norms of linguistic practice that sends up a red flag. The reason for objection is this: it is one thing to suggest on empirical grounds correlations between a subjective, complex whole (say, the activity of deciding) and some particular physical part of that capacity (say, neural firings), but there is considerable objection to concluding that the part just is the whole. These claims are not false; rather, they are devoid of sense.
Wittgenstein remarked that it is only of a human being that it makes sense to say “it has sensations; it sees, is blind; hears, is deaf; is conscious or unconscious.” (Philosophical Investigations, § 281). The question whether brains think “is a philosophical question, not a scientific one” (p. 71). To attribute such capacities to brains is to commit what Bennett and Hacker identify as “the mereological fallacy”, that is, the fallacy of attributing to parts of an animal attributes that are properties of the whole being.
I would add that every assertion about 'what is' or 'what is not' relies on judgements which in themselves can never be validated by or found in neuroscientific data. It's one thing to look for the neural correlates of 'attention' or 'wakefulness' - but what could possibly be the meaning of the neural correlate of a judgement? How would you even look for that? You can't stand outside judgement, as every attempt to identify it is itself a judgement. Judgement is in fact internal to thought, it is part of the operations of the intellect, and is in the province of logic, not that of neuroscience, which nevertheless must assume the validity of judgement in order to even begin.
Any and All assertions made about the nature of consciousness herein must be supported by some sort of evidence, or they will be dismissed. — Garrett Travers
Yours is the shell-game of dogmatic empiricism - to declare that only evidence of a certain kind, namely, the kind that supports empiricism, is valid! So you're demanding empirical proof of the limitations of empiricism. But none of the sources you quote claim to understand the nature of consciousness in any philosophical sense - in fact they generally surround any such ideas with disclaimers and qualifications, or promissory notes of how much progress has been made, or will be made. — Wayfarer
So, both of the first two articles acknowledge that the nature of consciousness is elusive, and in no way claim that it has been found or accounted for. And in any case, they are only addressing what David Chalmers describes as the easy problems, which he gives as: — Wayfarer
Plainly neuroscience is an objective discipline. Neuroscience can help remediate those in need of neuroscientific therapy, or help to understand the relationship between functional aspects of consciousness and performative acts - in that sense, how the brain operates. But that is cognitive science, and neuroscience, not philosophy as such. — Wayfarer
One canonical text on the subject remains the Bennett and Hacker book, The Philosophical Foundations of Neuroscience, which is staunchly critical of the kind of lumpen reductionism that the OP is advocating. I'll probably never read it in full but the Notre Dame Review provides a useful primer. The main point of their critique centres around the 'mereological fallacy', i.e. the idea that the brain does things. The brain is not itself an agent, and does not, in that sense, do anything, although obviously you need one to act (although not always, it seems.) — Wayfarer
Every assertion about 'what is' or 'what is not' relies on judgements which in themselves can never be validated by or found in neuroscientific data. — Wayfarer
Yours is the shell-game of dogmatic empiricism - to declare that only evidence of a certain kind, namely, the kind that supports empiricism, is valid! So you're demanding empirical proof of the limitations of empiricism. But none of the sources you quote claim to understand the nature of consciousness in any philosophical sense - in fact they generally surround any such ideas with disclaimers and qualifications, or promissory notes of how much progress has been made, or will be made. — Wayfarer
However, what you are overlooking in these articles, although I'm quite pleased you actually addressed them, is the fact that what is NOT a mystery is that the brain is what is producing what it is that we call consciousness. — Garrett Travers
You'll need to address the above arguments before we move on. — Garrett Travers
Except that neither you nor anyone can pronounce what the meaning of 'produced' is here. — Wayfarer
I'll leave it at that. — Wayfarer
Don't take the fact that I can't be bothered arguing with you to mean that I think you've made anything like a 'valid point' - only that you will never understand my criticisms, so any further wrangling would be a simple waste of time. — Wayfarer
You'll need to address the above arguments before we move on. — Garrett Travers
every assertion about 'what is' or 'what is not', relies on judgements which in themselves can never be validated by or found in neuroscientific data. — Wayfarer
However, there are other regions of the brain that play essential roles in making decisions, but their exact mechanisms of action still are unknown. — Garrett Travers
We will now address the deepest and most interesting variant of the NBP [neural binding problem], the phenomenal unity of perception. There are intractable problems in all branches of science; for Neuroscience a major one is the mystery of subjective personal experience. This is one instance of the famous mind–body problem (Chalmers 1996) concerning the relation of our subjective experience (aka qualia) to neural function. Different visual features (color, size, shape, motion, etc.) are computed by largely distinct neural circuits, but we experience an integrated whole. This is closely related to the problem known as the illusion of a stable visual world (Martinez-Conde et al. 2008).
We normally make about three saccades per second and detailed vision is possible only for about 1 degree at the fovea (cf. Figure 1). These facts will be important when we consider the version of the Visual Feature-Binding NBP in the next section. There is now overwhelming biological and behavioral evidence that the brain contains no stable, high-resolution, full field representation of a visual scene, even though that is what we subjectively experience (Martinez-Conde et al. 2008). The structure of the primate visual system has been mapped in detail (Kaas and Collins 2003) and there is no area that could encode this detailed information. The subjective experience is thus inconsistent with the neural circuitry. Closely related problems include change- (Simons and Rensink 2005) and inattentional-blindness (Mack 2003), and the subjective unity of perception arising from activity in many separate brain areas (Fries 2009; Engel and Singer 2001).
Traditionally, the NBP concerns instantaneous perception and does not consider integration over saccades. But in both cases the hard problem is explaining why we experience the world the way we do. As is well known, current science has nothing to say about subjective (phenomenal) experience and this discrepancy between science and experience is also called the “explanatory gap” and “the hard problem” (Chalmers 1996). There is continuing effort to elucidate the neural correlates of conscious experience; these often invoke some version of temporal synchrony as discussed above.
There is a plausible functional story for the stable world illusion. First of all, we do have a (top-down) sense of the space around us that we cannot currently see, based on memory and other sense data—primarily hearing, touch, and smell. Also, since we are heavily visual, it is adaptive to use vision as broadly as possible. Our illusion of a full field, high resolution image depends on peripheral vision—to see this, just block part of your peripheral field with one hand. Immediately, you lose the illusion that you are seeing the blocked sector. When we also consider change blindness, a simple and plausible story emerges. Our visual system (somehow) relies on the fact that the periphery is very sensitive to change. As long as no change is detected it is safe to assume that nothing is significantly altered in the parts of the visual field not currently attended.
But this functional story tells nothing about the neural mechanisms that support this magic. What we do know is that there is no place in the brain where there could be a direct neural encoding of the illusory detailed scene (Kaas and Collins 2003). That is, enough is known about the structure and function of the visual system to rule out any detailed neural representation that embodies the subjective experience. So, this version of the NBP really is a scientific mystery at this time.
The very source you referred to in reply actually says this — Wayfarer
You have to be able to make judgements about what such data means, to even speculate about those kinds of connections. So those kinds of explanations are fundamentally question-begging - they have to assume the thing they need to demonstrate. — Wayfarer
And as you're quoting NCBI papers, you might be interested in this one about the neural binding problem. It actually acknowledges the hard problem by showing that it has a very real correlate in neuroscience itself: — Wayfarer
The quoted paper is a domain of intrigue within neuroscience, I won't deny that. However, I will say that there is more up-to-date info on this particular topic. — Garrett Travers
Like mathematics, the functional architecture of Life can be expected to form a Platonic realm. It is structured not by external intelligence but by the system-immanent constraint of consistency, which discerns between viable and non-viable structures. This, then, is the true meaning of intelligent design, that material structures and processes, once they come near to consistent architectural structure, are fully drawn into it as if by magical force, like a dangling chain into the form of a perfect catenoid or like a potatoid soap bubble into a perfect sphere.
For one thing, we are certainly not born as a tabula rasa (as Locke claimed), being endowed at birth with the behavioral architecture just mentioned and with the infrastructure to take in and administer experiences, as pointed out by Kant.
Gaps in knowledge do not constitute an argument against my original claims. — Garrett Travers
What, then, is the relation between the standard ‘third-person’ objective methodologies for studying meteors or magnets (or human metabolism or bone density), and the methodologies for studying human consciousness? Can the standard methods be extended in such a way as to do justice to the phenomena of human consciousness? Or do we have to find some quite radical or revolutionary alternative science? I have defended the hypothesis that there is a straightforward, conservative extension of objective science that handsomely covers the ground — all the ground — of human consciousness, doing justice to all the data without ever having to abandon the rules and constraints of the experimental method that have worked so well in the rest of science.
And as you're quoting NCBI papers, you might be interested in this one about the neural binding problem. It actually acknowledges the hard problem by showing that it has a very real correlate in neuroscience itself: — Wayfarer
The quoted paper is a domain of intrigue within neuroscience, I won't deny that. However, I will say that there is more up-to-date info on this particular topic. — Garrett Travers
A memorandum written four decades ago (von der Malsburg 1981) formulates this missing aspect as the “binding problem.” The term refers to the putative mechanism that enables the brain to agglomerate neurons into a hierarchy of composite mental structures. The binding problem has gained public attention (Roskies 1999) but to this day no solution or even formulation has gained broad acceptance. — Von Der Malsburg
The main point of their critique centres around the 'mereological fallacy', i.e. the idea that the brain does things. The brain is not itself an agent, and does not, in that sense, do anything, although obviously you need one to act (although not always, it seems.) — Wayfarer
Galen Strawson said Dennett should be sued under fair-trading law for calling his book Consciousness Explained, when it does no such thing. — Wayfarer
Why is the binding problem a problem? — EugeneW
The existence of binding is not a problem, it's a good thing. Accounting for it, explaining how it happens is the problem. — bert1
I'll look at the global workspace theory more, as that is what Garrett seems to be drawn to, and I like the idea of a space, as consciousness seems somewhat space-like to me, and space might be a candidate for that which unifies brain processes. — bert1
Why is the binding problem a problem? Wouldn't it be a problem if there wasn't binding? — EugeneW
Neurons fire around the brain, but there is no obvious spot where they all meet. But, phenomenologically, experiences are unified, we don't experience photon1 signal followed by photon2 signal etc. We see a car, a whole. All experiences involve some kind of many-in-one event. How this unification is achieved is the issue. My thought is that fields are extended throughout the brain, and indeed everything, and consciousness is perhaps best understood as a fundamental field-property. We feel what our brains are doing because we are the fields that constitute and unify it. — bert1
It isn't a problem. As I explained, it's just a gap in knowledge, most of which is accounted for in 2022. But I can't literally explain the same thing over and over again to someone. — Garrett Travers