This brings me to a more speculative point: perhaps we will never be able to fully understand ourselves — Jacques
What are we trying to understand in ourselves though? — kindred
By “understanding ourselves,” I meant fully decoding ourselves—much like scientists are currently attempting with the simplest model organism: the nematode Caenorhabditis elegans. This tiny animal consists of 959 cells, its nervous system of 302 neurons, and its genome was fully sequenced back in 1998. Yet even after more than 60 years of research, we still haven't succeeded in fully understanding how it functions. — Jacques
What have the scientists failed to do with the nematode? As a non-programmer, I guess I'm asking whether decoding is an analogy, or something that literally can be done with creatures. — J
That's because AI hasn't been programmed to acquire information for its own purposes. It is designed to acquire information only from a human and then process that information for a human. If we were to design a robot AI like a human - with a body and sensory organs (cameras for eyes, microphones for ears, chemical analyzers for taste and smell, tactile sensors for touch, etc.) - and program it to take in information from other sources, not just humans, and use it to improve upon itself (it can rewrite its code, like we rewire our brains when learning something) and others, then it would develop its own intrinsic motives.

I’ve come to the conclusion that most media portrayals of AI developing "its own motives" are based on flawed reasoning. I don’t believe that machines—now or ever—will develop intrinsic motivation, in the sense of acting from self-generated desire. This is because I believe something far more basic: not even human beings have free will in any meaningful, causally independent sense. — Jacques
Evolutionary predispositions and environmental conditioning determine our genes, but our genes have to deal with the current environment, which can differ from the conditions that shaped them (like having an overabundance of sugar in our diet). So our decisions are more a product of our genes interacting with our current environment.

To me, human decisions are the inevitable product of evolutionary predispositions and environmental conditioning. — Jacques
While I would agree with your last point, I don't know how a "physical" machine would experience qualia. The visual experience of a brain and its neurons, and of a computer and its circuits, is information. It is information because it is an effect of prior causes, and the effect informs us of the causes - the environment's interaction with my body. While the world is not as it appears, it is as we are informed it is, and "informed" is not what just one sense is telling you, but includes integrating all sensory information (why else would we have multiple senses?).

I also reject the idea that humans possess some irreducibly mysterious cognitive abilities. Qualia, intuition, consciousness—they are all real phenomena, but I see no reason to believe they’re anything but products of material data processing. The brain, though vastly complex, is just a physical machine. If that machine can experience qualia, why not a future machine of equal or greater complexity? — Jacques
This appears to simply be a projection of our ignorance of what can happen in the future. We can design an algorithm for a specific system that never interacts with an external system, and it works. The problem is that there are other external systems that interact. Our solar system is a complex interaction of gravitational forces and has been stable and predictable for billions of years, but an external object like a black hole or a brown dwarf could fly through the system and disrupt it. It seems to me that every system halts at some point except reality itself (existence is infinite in time and space, or existence is an infinite loop).

This idea reminds me of Turing’s Halting Problem: the impossibility of writing a general program that determines whether any arbitrary program halts. — Jacques
I also reject the idea that humans possess some irreducibly mysterious cognitive abilities. Qualia, intuition, consciousness—they are all real phenomena, but I see no reason to believe they’re anything but products of material data processing. The brain, though vastly complex, is just a physical machine. If that machine can experience qualia, why not a future machine of equal or greater complexity? — Jacques
I’ve come to the conclusion that most media portrayals of AI developing "its own motives" are based on flawed reasoning. I don’t believe that machines—now or ever—will develop intrinsic motivation, in the sense of acting from self-generated desire. This is because I believe something far more basic: not even human beings have free will in any meaningful, causally independent sense.
To me, human decisions are the inevitable product of evolutionary predispositions and environmental conditioning. A person acts not because of a metaphysical "self" that stands outside causality, but because neural machinery—shaped by genetics, trauma, language, culture—fires in a particular way. If that’s true for humans, how much more so for a machine? — Jacques
That should start the usual disagreements about scientistic physicalism and how this has collapsed the richness of conscious experience into merely computational or mechanistic terms. Next comes the points about the hard problem of consciousness, followed by some Thomas Nagel quotes. Enjoy. — Tom Storm
In my opinion anyone who rejects physicalism and the associated reduction of conscious experiences to material processes must assume that these experiences are based on something else. But on what – an élan vital, magic, or what else? — Jacques
The brain, though vastly complex, is just a physical machine. If that machine can experience qualia, why not a future machine of equal or greater complexity? — Jacques
Turing showed that such a program would lead to a logical contradiction when applied to itself. Similarly, a human trying to model the human mind completely may run into a barrier of self-reference and computational insufficiency. — Jacques
In my opinion anyone who rejects physicalism and the associated reduction of conscious experiences to material processes must assume that these experiences are based on something else. But on what – an élan vital, magic, or what else? — Jacques
Many entities have, in addition to their material constitution, formal/functional/teleological features that arise from their history, their internal organisation, and the way they are embedded in larger systems. This is true of human beings but also of all living organisms and of most human artifacts. — Pierre-Normand
What must then be appealed to in order to explain such irreducible formal features need not be something supernatural or some non-material substance. What accounts for the forms can be the contingencies and necessities of evolutionary and cultural history, and the serendipitous inventions of people and cultures. Those are all explanatory factors that have nothing much to do with physics or the other material sciences. Things like consciousness (and free will) are better construed as features or emergent abilities of embodied living (and rational) animals rather than mysterious immaterial properties of them. — Pierre-Normand
The problem with this statement is that, in modern biology and the philosophy of science, teleology is generally rejected as a fundamental explanatory principle. While evolutionary processes can produce structures that appear purpose-built (such as wings for flying), this appearance is understood as a result of natural selection, not as evidence of actual purpose. Since Darwin — and even more explicitly since Stephen Jay Gould — such apparent design is treated as an illusion rather than a literal reality. — Jacques
Machines and living entities are a bit different (as I assume you know), but let's accept the very broad definition here and ignore the obvious physical differences between man-made machines and living organisms.

I’ve come to the conclusion that most media portrayals of AI developing "its own motives" are based on flawed reasoning. I don’t believe that machines—now or ever—will develop intrinsic motivation, in the sense of acting from self-generated desire. [...] I also reject the idea that humans possess some irreducibly mysterious cognitive abilities. Qualia, intuition, consciousness—they are all real phenomena, but I see no reason to believe they’re anything but products of material data processing. The brain, though vastly complex, is just a physical machine. — Jacques
This idea reminds me of Turing’s Halting Problem: the impossibility of writing a general program that determines whether any arbitrary program halts. Turing showed that such a program would lead to a logical contradiction when applied to itself. Similarly, a human trying to model the human mind completely may run into a barrier of self-reference and computational insufficiency. — Jacques
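The self-referential contradiction described here can be sketched in a few lines of Python. This is only an illustrative toy (the names `make_contrarian` and `halts`, and the stand-in decider, are invented for the sketch), not a real decider:

```python
def make_contrarian(halts):
    """Given a claimed halting decider halts(f) -> bool, build a
    program that does the opposite of whatever halts predicts about it."""
    def g():
        if halts(g):
            while True:   # decider said "g halts", so loop forever
                pass
        return "halted"   # decider said "g loops", so halt immediately
    return g

# Whatever a claimed decider answers about its contrarian, it is wrong.
# Example: a decider that answers "loops forever" for every program:
g = make_contrarian(lambda f: False)
print(g())  # prints "halted", refuting the decider's own prediction
```

Turing's argument generalizes this: no possible implementation of `halts` can answer correctly about the program constructed from it, which is the contradiction "when applied to itself".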
Everything is about objectivity and subjectivity, actually. It's not merely a psychological issue, but simply logical. We can easily understand subjectivity as someone's (or something's) point of view and objectivity as "a view without a viewpoint". Putting this into a logical and mathematical context makes it a bit different. Here both Gödel and Wittgenstein are extremely useful.
In logic and math a true statement that is objective can be computed and ought to be provable. Yet when it's subjective, this isn't so: something subjective refers to itself.
Math and logic are precise. There you cannot wiggle your way out just by assuming something. Otherwise we could always just assume a "black box" that gives us the correct models of everything and not think about it any further. I could also assume a "black box" that gives me a solution to every math problem. The problem with this thinking is that I have no specific answers, naturally.

I'm not entirely familiar with the halting problem, but your wording suggests a mistake in your reasoning. It may not be possible for some program A to determine whether or not it itself will halt, but is it possible for it to determine whether or not some equivalent program B will halt? If so, even if I cannot model my own mind, I may be able to model your mind, and if it's reasonable to assume that our minds are broadly equivalent then that will suit our purposes of modelling "the human mind" in general. — Michael
... anti-reductionist biologists like Ernst Mayr have defended naturalized conceptions of teleology (that Mayr calls "teleonomy") that don't conflict with Gould's insistence on the lack of foresight of evolution through natural selection. The question regarding the present aims (forward-looking) of an organism's structure and behavior is distinct from the question regarding the origin of this structure (backward-looking). — Pierre-Normand
Machines and living entities are a bit different (as I assume you know), but let's accept the very broad definition here and ignore the obvious physical differences between man made machines and living organisms. — ssu
there doesn't have to be any kind of "meta-algorithm" at all; it is just that subjectivity isn't computable. — ssu
There are four features involved when it comes to a person, namely instinct, logical thinking, intuition, and wisdom; they arise in that order in a person. Free will is the ability of the mind to choose freely. It is required for deciding in a situation where there is a conflict of interest. — MoK
I only studied Kant on morality, and I disagree with him. Generally, I am against any form of Idealism. The problem with any form of Idealism is how ideas could be coherent in the absence of a mind. I would be happy to see whether Kant ever mentioned the mind in his books or gave a definition of it.

You sound like you’re drawing from idealism and Kantian philosophy — Jacques
I think that free will in the libertarian form is an ability of any mind.

maybe with a libertarian view of free will—is that how you see it? — Jacques
Do note the "as a kind of machine". Yes, we can talk for example about molecular machines in our body, but there still is a difference between a living organism and an artificial motor humans have constructed. But yes, we can generalize, so I also agree that we can talk about motors.

While it's true that most people might share your opinion, it's worth noting that several prominent thinkers have argued that the brain—or even the human being as a whole—can be understood as a kind of machine. — Jacques
Wait a minute.

While subjectivity may not be computable at present, I assume it is in principle, given that the brain - a physical system effectively functioning as a (non-digital) computer - somehow gives rise to it. — Jacques
Do note that computation is a specific way of solving problems: performing calculations by following a set of well-defined rules or instructions, that is, algorithms. We don't compute everything when we are presented with a problem. Or do you really compute every problem you encounter? — ssu
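As a concrete illustration of computation in this sense (one fixed rule followed mechanically until a base case), Euclid's algorithm for the greatest common divisor is the textbook example; a generic sketch, not anything from the thread itself:

```python
def gcd(a, b):
    """Euclid's algorithm: repeat one well-defined rule until done."""
    while b != 0:
        a, b = b, a % b   # replace (a, b) with (b, a mod b)
    return a

print(gcd(48, 18))  # 6: (48,18) -> (18,12) -> (12,6) -> (6,0)
```

Every step is determined by the rule alone, which is what makes the procedure an algorithm in the sense ssu describes.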
Machines and living entities are a bit different (as I assume you know), but let's accept the very broad definition here and ignore the obvious physical differences between man made machines and living organisms. — ssu
This brings me to a more speculative point: perhaps we will never be able to fully understand ourselves. Not for mystical reasons, but because of a structural limitation: a system may need more complexity than itself to fully model itself. In other words, explaining a human brain might require more computational capacity than the brain possesses. Maybe we will someday fully understand an insect, or a fish—but not a human being.
But this doesn't at all counter my point of there being uncomputable mathematics and hence uncomputable problems. Or to put it another way, undecidable problems, where an undecidable problem is a decision problem for which an effective method (algorithm) to derive the correct answer does not exist.

You're right, I don’t consciously compute every problem I encounter. But that doesn’t mean computation isn’t happening. Much of the problem-solving is outsourced to unconscious brain processes. So while I don’t deliberately compute everything, my brain is constantly computing - just not in a way that feels like "doing math". — Jacques
Again, this isn't an issue of vitalism at all, or of how deeply related physico-chemical systems are. That isn't the question; the question is purely a logical one.

So, while machines and organisms differ in origin and complexity, their internal workings are, in a deep sense, physico-chemical systems, and thus comparable under the lens of natural science. — Jacques
While subjectivity may not be computable at present, I assume it is in principle — Jacques
I don’t see why subjectivity, or anything else a human brain does can’t be modelled. — Punshhh
But this doesn't at all counter my point of there being uncomputable mathematics and hence uncomputable problems. Or to put it another way, undecidable problems where an undecidable problem is a decision problem for which an effective method (algorithm) to derive the correct answer does not exist. — ssu
Very correct!

I would suggest that mysticism is the only way to fully understand ourselves. This is because it endeavours to develop understanding not simply through the intellect, but also through the body, through being and through growth. — Punshhh
What do you mean by this?

Thus enabling a more holistic, or 3 dimensional (by analogy) perspective. — Punshhh
Perhaps we can do it someday. Deities, perhaps, are practicing this!

Also I would suggest that fully understanding anything other than abstract concepts is not possible, because it would require an understanding of the whole context in which it resides. Something which we are not in a position to do. — Punshhh
Matter is an environment that allows minds to interact. Matter does not do any processing, since it cannot even cause a change in itself. It is minds that do the processing of the information that is held in stuff, matter for example.

To address your question about AI and subjectivity: I don’t see why subjectivity, or anything else a human brain does, can’t be modelled. But subjectivity etc. is not the same as consciousness, which is something present in living organisms, resulting from biological processes rather than computation in the nervous system. Just like the android in Star Trek known as Data, AI can conceivably be programmed to perform anything a human can do, but it simply isn’t conscious. It’s a machine carrying out preordained processes. — Punshhh
To see and know ourselves through an understanding of and with the body, through an understanding of being, and through growing, or progressing, in these activities. Alongside an intellectual understanding and enquiry. One or more of these means can inform the others and, in a personal way, integrate with the others. Forming a broader understanding, or knowing, in which the intellect is no more important in attaining that growth than the other means.

Thus enabling a more holistic, or 3 dimensional (by analogy) perspective. — Punshhh