why wouldn't our ability to reason be advantageous for survival?...So the underlying issue here from Wayfarer perspective is that naturalism presupposes intentionality; our capacity for thoughts to be about stuff. How can physical things give rise to such thought? But isn't intentionality essentially about memory - our ability to observe things and recall them? — Tom Storm
I think viewing reason through that criterion of whether it 'helps us survive' is reductionist. What is advantageous to survival is an essential consideration in evolutionary theory. But classifying reason along with other traits - tentacles, claws, physical speed or strength - undermines the sovereignty, thus the credibility, of reason. Surely if reason is to have meaning, it has to be able to stand on its own feet, so to speak. If it appeals to the court of 'what helps survival', then reason becomes subordinated to other purposes. (Both Donald Hoffman and Alvin Plantinga make use of this line of argument, for different purposes. It's also discussed in Nagel's Evolutionary Naturalism essay which I've previously referred to.)
My view: evolutionary biology certainly provides the account of how H. sapiens physically evolved. But thanks to the rapid evolution of the massive sapient forebrain, H. sapiens has developed powers of perception which are almost entirely absent in other creatures, chief amongst them reason (hence 'the rational animal'). That faculty, along with language, tool-making, story-telling, and the capacity for self-transcendence, enables us to 'transcend our biology', so to speak. An intuition of that, I contend, is what is behind the various forms of philosophical dualism, such as the rational soul in the physical body. These need not be literally true in order to be metaphorically accurate, to tell us something vital about human nature.
I think the nature of reason is tied up with the ability to abstract and to generalise, which is the basis of both language and logic. And I think the Greek philosophers realised this - you can see the origins of it in Parmenides and Plato and the discussions of forms and universals. That's a digression, but it's also part of the background of this argument.
The question we're faced with: is it impossible that conscious processes could evolve from natural causes? — Tom Storm
Everyone assumes that 'mind is organised matter' and that the processes understood by the natural sciences will one day explain in fine detail how this happened through what is called abiogenesis (life from non-life). But I refer again to the first four paragraphs of The Core of Mind and Cosmos, which discusses the sense in which 'the subject' was deliberately omitted or excluded at the formation of the modern natural sciences of Galileo, Descartes and Newton, with many concomitant philosophical implications. One could argue that the perspective of the subject (subject-hood, as distinct from subjectivity) is being re-introduced through phenomenology and embodied cognition (although it's still not considered in the kind of physicalism which this argument is addressing).
I am contemplating the idea that, right from the very first life-forms, life *is* the earliest manifestation of intentionality. As the complexity of organisms evolves over the aeons, so too does their intelligence, apparently arriving at H. sapiens, through which the whole process has become critically self-aware.
//We arrive at the ability to understand abstract truths and the like. They're not simply 'a product' of the human mind, although, having such a mind, we can produce, e.g., imaginary number systems and the like. But I maintain that the furniture of reason, such as logical laws, is discovered, not invented, and certainly not the product of a biological process.//
Do the outputs of ChatGPT have intentionality? Why or why not? — wonderer1
Well, taking it from the horse's mouth:
Question: In philosophy, 'intentionality' is the power of minds and mental states to be about, to represent, or to stand for, things, properties and states of affairs. Do AI systems such as ChatGPT possess this power?
ChatGPT: AI systems, including ChatGPT, do not possess intentionality in the same way that humans do. Intentionality is typically associated with consciousness and subjective experience, which are currently not attributes of AI systems.
While AI systems like ChatGPT can process and generate information about things, properties, and states of affairs, they do not possess a genuine understanding or subjective experience of the concepts they deal with. Their ability to represent and respond to information is based on statistical patterns learned from vast amounts of training data.
AI systems operate through algorithms and statistical models that enable them to process and generate text based on patterns in the data they were trained on. They don't possess personal goals, beliefs, desires, or subjective experiences. Their responses are determined by the patterns in the input they receive and the way they were trained, rather than by any internal mental states or subjective understanding.
While AI systems can be sophisticated in generating responses that appear intentional or meaningful, these responses are ultimately the result of complex computational processes rather than true intentionality.
Verbatim!
In simple terms, if brain state = X, then mind state = Y is the claim right? If this can be confirmed through testing, then I would say this is a completely rational argument. — Philosophim
So I'm saying, you can't get to such simple terms in neuroscience. What you're arguing here is 'brain-mind identity theory'. To respond in terms of the argument from reason, I would say that the brain-mind identity theory collapses or blurs the distinction between logical necessity and physical causation. If mental states are said to be identical to specific brain states or processes, this suggests that there is a direct correspondence between logical propositions and specific physical configurations.
In the brain-mind identity theory, the identity statement between a mental state and a brain state is typically understood as a necessary identity. This means that if a specific mental state is instantiated, it is necessarily identical to a specific brain state. The logical necessity is derived from the supposed one-to-one correspondence between mental and brain states. By collapsing the distinction between logical necessity and physical causation, the brain-mind identity theory implies that the truth of a logical proposition is causally determined by the physical state of the brain. In this view, the brain state corresponding to a particular mental state is thought to be both the cause and the logical ground for the associated mental experience or thought.
But while there may be correlations between mental states and brain states, this doesn't necessarily imply a strict identity between them. Logical propositions and their truth values are abstract entities that exist independently of any specific physical realization, such as brain states. 'If X > Y and A > X, then it must be the case that A > Y.' This is a logical proposition, but note that its validity is not dependent on any configuration of physical symbols. I could choose to represent it (and any number of different propositions) in different symbolic forms and in different media, all whilst still preserving the meaning of the proposition. Hence the distinction between logical necessity and physical causation is preserved, and you can't show that 'brain-states' are causal in respect of propositional content.
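That point about the proposition's independence from any particular symbolic realization can be illustrated with a small sketch (my own illustration, not part of the original argument; all function names here are invented for the example). The same transitivity inference is encoded twice, in entirely different symbolic media - once over integers, once over string lengths - and it comes out valid under every assignment in both:

```python
from itertools import product

# Two different symbolic encodings of the same inference:
# "if A > X and X > Y, then A > Y" (transitivity of '>').

# Encoding 1: the proposition over integers, using Python's
# material conditional form: (not premises) or conclusion.
def valid_numeric(x, y, a):
    return not (x > y and a > x) or (a > y)

# Encoding 2: the same proposition re-expressed over string
# lengths - a completely different symbolic medium.
def valid_strings(x, y, a):
    return not (len(x) > len(y) and len(a) > len(x)) or (len(a) > len(y))

# The inference holds under every assignment, in both encodings.
nums = range(-3, 4)
assert all(valid_numeric(x, y, a) for x, y, a in product(nums, repeat=3))

words = ["", "a", "ab", "abc", "abcd"]
assert all(valid_strings(x, y, a) for x, y, a in product(words, repeat=3))

print("valid under every assignment, in both encodings")
```

The design point is that nothing about the validity depends on which physical tokens (integers, strings, ink marks, neural firings) carry the proposition - which is just the multiple-realizability claim made in the paragraph above.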
I interpreted this as suggesting that exercise of reason is assumed to be incompatible with the determinism of physics, when that is what your argument seeks to show. — wonderer1
And I think the argument does show that. It distinguishes between insight based on reasoned inference (knowing that X must be so on account of Y) and observation of a cause-and-effect relationship. No question is being begged; a case is being made.
I think consideration of the role of networks of neurons, and disregarding the molecular details on which the neurons supervene, is an appropriate level of looking at things for the purpose of this discussion — wonderer1
It might be, were this a computer science or neuroscience forum.