It's ineffable?
Being interpreted as a chestnut does not mean that, unseen, it's no longer a chestnut.
What reason is there to suppose that, unseen, it is no longer a chestnut? — Banno
I hope to respond in other ways, but will start with this. My take on what Chalmers is presenting is something like: "Can the world we touch through our awareness be caused entirely by agents outside of that experience?" The call for a completely objective account is a kind of mapping more than a finding about the 'body.' The scientific method is an exclusion of certain experiences in order to pin down facts. Can this process, which is designed to avoid the vagaries of consciousness, also completely explain it? — Paine
The call for a completely objective account is a kind of mapping more than a finding about the 'body.'
Throughout the higher-level sciences, reductive explanation works in just this way. To explain the gene, for instance, we needed to specify the mechanism that stores and transmits hereditary information from one generation to the next. It turns out that DNA performs this function: once we explain how the function is performed, we have explained the gene. To explain life, we ultimately need to explain how a system can reproduce, adapt to its environment, metabolize, and so on. All of these are questions about the performance of functions, and so are well-suited to reductive explanation. The same holds for most problems in cognitive science. To explain learning, we need to explain the way in which a system's behavioral capacities are modified in light of environmental information, and the way in which new information can be brought to bear in adapting a system's actions to its environment. If we show how a neural or computational mechanism does the job, we have explained learning. We can say the same for other cognitive phenomena, such as perception, memory, and language. Sometimes the relevant functions need to be characterized quite subtly, but it is clear that insofar as cognitive science explains these phenomena at all, it does so by explaining the performance of functions.
When it comes to conscious experience, this sort of explanation fails. What makes the hard problem hard and almost unique is that it goes beyond problems about the performance of functions. To see this, note that even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience? A simple explanation of the functions leaves this question open.
There is no analogous further question in the explanation of genes, or of life, or of learning. If someone says “I can see that you have explained how DNA stores and transmits hereditary information from one generation to the next, but you have not explained how it is a gene”, then they are making a conceptual mistake. All it means to be a gene is to be an entity that performs the relevant storage and transmission function. But if someone says “I can see that you have explained how information is discriminated, integrated, and reported, but you have not explained how it is experienced”, they are not making a conceptual mistake. This is a nontrivial further question.
This further question is the key question in the problem of consciousness. Why doesn’t all this information-processing go on “in the dark”, free of any inner feel? Why is it that when electromagnetic waveforms impinge on a retina and are discriminated and categorized by a visual system, this discrimination and categorization is experienced as a sensation of vivid red? We know that conscious experience does arise when these functions are performed, but the very fact that it arises is the central mystery. There is an explanatory gap (a term due to Levine 1983) between the functions and experience, and we need an explanatory bridge to cross it. A mere account of the functions stays on one side of the gap, so the materials for the bridge must be found elsewhere.
This is not to say that experience has no function. Perhaps it will turn out to play an important cognitive role. But for any role it might play, there will be more to the explanation of experience than a simple explanation of the function. Perhaps it will even turn out that in the course of explaining a function, we will be led to the key insight that allows an explanation of experience. If this happens, though, the discovery will be an extra explanatory reward. There is no cognitive function such that we can say in advance that explanation of that function will automatically explain experience.
To explain experience, we need a new approach. The usual explanatory methods of cognitive science and neuroscience do not suffice. These methods have been developed precisely to explain the performance of cognitive functions, and they do a good job of it. But as these methods stand, they are only equipped to explain the performance of functions. When it comes to the hard problem, the standard approach has nothing to say.
For Carnap, a framework was a candidate language of unified science (i.e. for all knowledge), while Chalmers’s “domains”, determined by furnishing functions, result merely from various gradations of ontological assertion, unconnected to any larger bodies of knowledge (pp. 114-16). The point of classifying a question as “internal” to a framework, for Carnap, was to regard it as, in principle, answerable with the resources specified by that framework — i.e. to distinguish what we can in principle know from what we in principle can’t. In Chalmers’s terms, the Carnapian constraints on admissibility are explicitly supplied by the framework itself, and have no need of any supplementation by ontological fiat. So from a Carnapian point of view, the best sense one can make of Chalmers’s supposed “replacement” for the internal-external distinction is that he is attempting to create a space for a “third realm” of statements that are neither answerable in the cut-and-dried, scientific or mathematical sort of way (i.e. internal), nor fully indeterminate (i.e. external), but, let us say, possibly answerable by looser constraints in a not-quite-scientific, ontological dialect of ordinary language (and the intuitions it supports) that is somewhat regimented but whose boundaries are unclear (Chalmers refers to Cian Dorr and Ted Sider in this context, p. 100). Between properly behaved frameworks and the outer space of indeterminateness, that is, Chalmers wants to introduce a space for quasi-frameworks.
We don't expect the crabs or the lions to talk, but some of us might talk to them. Or even claim to hear them.
So, what's the difference? Without an account, there is no difference. Rather, we have to accept that some people can talk to the whales, crabs, and lions of the world. — Moliere
The lion may already be speaking, but there's no way we'd understand what he's saying because we're different creatures. — Moliere
We're just barking while we feel like it all means something. — Moliere
The only thing is, every single one of the claims is false. So it is possible for us to carry on at length while having no contact with truth -- it doesn't matter that it makes sense to us, because astrology can make sense to us, and it is false. — Moliere
But this is all just to make sense of an epiphenomenal account of meaning -- that language means, but meaning drifts beyond any empirical measurement and has no causal connection to the world or brains. — Moliere
Perhaps, or they may be deluded. — Janus
I don't discount the possibility, and yet I see little reason to believe it. Wittgenstein's statement never made much sense to me. — Janus
As far as I know, there is no conceivable way of blending accounts given in causal, physical terms with accounts given in terms of subjective reasons. — Janus
To be able to determine if someone is deluded, you sort of already have to have a notion about determining both the minds of others and the truth about the world. — Moliere
Note that not having a reason isn't the same as things being true or false. It may just be that there is no reason at all. The reason unmarried men are bachelors is that this is the relationship between those locutions. The reason crabs can't talk is that they are not in the class of talking animals. — Moliere
Well, yeah. Exactly why I claimed to be out of my depth -- the mind-body problem has been around for a while specifically because it's a quagmire of a problem.
I'm not sure what would qualify one not to be out of their depth with the mind-body problem. — Moliere