• Jacques
    106
    I’ve come to the conclusion that most media portrayals of AI developing "its own motives" are based on flawed reasoning. I don’t believe that machines—now or ever—will develop intrinsic motivation, in the sense of acting from self-generated desire. This is because I believe something far more basic: not even human beings have free will in any meaningful, causally independent sense.

    To me, human decisions are the inevitable product of evolutionary predispositions and environmental conditioning. A person acts not because of a metaphysical "self" that stands outside causality, but because neural machinery—shaped by genetics, trauma, language, culture—fires in a particular way. If that’s true for humans, how much more so for a machine?

    I also reject the idea that humans possess some irreducibly mysterious cognitive abilities. Qualia, intuition, consciousness—they are all real phenomena, but I see no reason to believe they’re anything but products of material data processing. The brain, though vastly complex, is just a physical machine. If that machine can experience qualia, why not a future machine of equal or greater complexity?

    This brings me to a more speculative point: perhaps we will never be able to fully understand ourselves. Not for mystical reasons, but because of a structural limitation: a system may need more complexity than itself to fully model itself. In other words, explaining a human brain might require more computational capacity than the brain possesses. Maybe we will someday fully understand an insect, or a fish—but not a human being.

    This idea reminds me of Turing’s Halting Problem: the impossibility of writing a general program that determines whether any arbitrary program halts. Turing showed that assuming such a program exists leads to a logical contradiction once a program built from it is run on itself. Similarly, a human trying to model the human mind completely may run into a barrier of self-reference and computational insufficiency.
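
    To make the self-reference concrete, here is a minimal Python sketch of Turing's diagonal argument (illustrative only: the halts() oracle is hypothetical, which is exactly the point of the proof):

```python
def halts(program_source: str, arg: str) -> bool:
    """Hypothetical universal halting decider: True iff the program
    encoded by program_source halts when run on arg."""
    raise NotImplementedError  # Turing: no such total decider can exist

def diagonal(source: str) -> None:
    """Do the opposite of whatever the oracle predicts about
    'source' run on its own text."""
    if halts(source, source):
        while True:  # predicted to halt -> loop forever
            pass
    # predicted to loop -> halt immediately

# Feeding diagonal its own source text is contradictory either way:
# if halts() says it halts, it loops; if halts() says it loops, it halts.
```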

    That doesn’t mean we shouldn’t try. But it does mean we should be humble about our expectations. Consciousness, intuition, and will may all be computational in nature, but we are not guaranteed access to a God's-eye view of them—not because we are magical, but because we are finite.
  • kindred
    185
    This brings me to a more speculative point: perhaps we will never be able to fully understand ourselves — Jacques

    What are we trying to understand in ourselves, though? Our consciousness, or whether we have free will? Consciousness is a problem because, as self-aware beings, we can't isolate where it is emerging from in a brain scan, and the human brain is not reducible in that way because consciousness is ultimately an emergent phenomenon. And this ties to free will too: we simply don't have full visibility into how we truly make our choices.
  • Jacques
    106
    What are we trying to understand in ourselves, though? — kindred

    By “understanding ourselves,” I meant fully decoding ourselves—much like scientists are currently attempting with the simplest model organism: the nematode Caenorhabditis elegans. This tiny animal consists of 959 cells, its nervous system of 302 neurons, and its genome was fully sequenced back in 1998. Yet even after more than 60 years of research, we still haven't succeeded in fully understanding how it functions. That’s why I suspect it will take quite some time before we truly understand the human being—if that's even possible. In fact, I’m skeptical that it can ever be done. I suspect that decoding a system requires a more complex system. A human being might, in theory, be able to decode a nematode—but just as a nematode would never be capable of decoding itself, I suspect that humans, too, will never fully succeed in decoding themselves.
  • J
    1.9k
    By “understanding ourselves,” I meant fully decoding ourselves—much like scientists are currently attempting with the simplest model organism: the nematode Caenorhabditis elegans. This tiny animal consists of 959 cells, its nervous system of 302 neurons, and its genome was fully sequenced back in 1998. Yet even after more than 60 years of research, we still haven't succeeded in fully understanding how it functions. — Jacques

    What would decoding mean, then? What have the scientists failed to do with the nematode? As a non-programmer, I guess I'm asking whether decoding is an analogy, or something that literally can be done with creatures.
  • Jacques
    106
    What have the scientists failed to do with the nematode? As a non-programmer, I guess I'm asking whether decoding is an analogy, or something that literally can be done with creatures. — J

    Decoding refers to the exhaustive understanding of the biological system of the nematode – encompassing its genetics, cellular processes, neural architecture, behavior, and environmental responses – with the goal of constructing a fully accurate computer simulation based on these data. This goal has not yet been achieved.
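
    To give a feel for what the lowest level of such a simulation involves, here is a minimal sketch of a leaky integrate-and-fire neuron, one common building block of bottom-up neural models (projects such as OpenWorm model the worm's 302 neurons at varying levels of detail; all parameters below are invented for illustration):

```python
import numpy as np

def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire neuron: integrate the input current,
    emit a spike and reset whenever the threshold is crossed."""
    v = v_rest
    spike_times = []
    for step, i_t in enumerate(current):
        v += ((v_rest - v) + i_t) / tau * dt  # leak toward rest, plus drive
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# 100 ms of constant drive, in 0.1 ms steps
print(len(simulate_lif(np.full(1000, 20.0))), "spikes")
```

    Multiply this toy unit by 302 neurons, their synapses, muscle cells, and a body embedded in an environment, and the scale of the decoding task becomes apparent.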
  • Harry Hindu
    5.7k
    I’ve come to the conclusion that most media portrayals of AI developing "its own motives" are based on flawed reasoning. I don’t believe that machines—now or ever—will develop intrinsic motivation, in the sense of acting from self-generated desire. This is because I believe something far more basic: not even human beings have free will in any meaningful, causally independent sense. — Jacques
    That's because AI hasn't been programmed to acquire information for its own purposes. It is designed to acquire information only from a human and then process that information for a human. If we were to design a robot AI like a human - with a body and sensory organs (cameras for eyes, microphones for ears, chemical analyzers for taste and smell, tactile sensors for touch, etc.) - and program it to take in information from other sources, not just humans, and use it to improve upon itself and others (it could rewrite its code, like we rewire our brains when learning something), then it would develop its own intrinsic motives.

    In this case, humans would simply be part of the naturally selective process promoting a design that allows the robot-AI to develop its own intrinsic motives. With humans and their minds now part of the environment, natural selection has become purposeful.


    To me, human decisions are the inevitable product of evolutionary predispositions and environmental conditioning. — Jacques
    Evolutionary predispositions and environmental conditioning determine our genes, but our genes have to deal with the current environment, which can differ from prior conditions (like having an overabundance of sugar in our diet). So our decisions are more a product of our genes interacting with our current environment.

    It appears to me that evolutionary selective processes would favor organisms that adapt more quickly to dynamic environments and approach new situations with an open mind to learn new things that might be advantageous or detrimental to one's fitness. It would also be helpful to have a good memory and an ability to pick out the finer details of reality (to perceive it at a higher resolution).

    Our ancestors left the forest to live on the savannah. Just as an ostrich repurposes its wings, our ancestors repurposed our hands. Instead of being used primarily for climbing, our hands became used primarily for tool-making, and that is what set off a chain of events in our brains over the past three million years (a very brief period, evolutionarily speaking). If organs can be repurposed, why can't the brain be repurposed for things other than just survival and reproduction?


    I also reject the idea that humans possess some irreducibly mysterious cognitive abilities. Qualia, intuition, consciousness—they are all real phenomena, but I see no reason to believe they’re anything but products of material data processing. The brain, though vastly complex, is just a physical machine. If that machine can experience qualia, why not a future machine of equal or greater complexity? — Jacques
    While I would agree with your last point, I don't know how a "physical" machine would experience qualia. The visual experience of a brain and its neurons, and of a computer and its circuits, is information. It is information because it is an effect of prior causes, and the effect informs us of the causes - the environment's interaction with my body. While the world is not as it appears, it is as we are informed it is, and being "informed" is not just what one sense is telling you, but includes integrating all sensory information (why else would we have multiple senses?).


    This idea reminds me of Turing’s Halting Problem: the impossibility of writing a general program that determines whether any arbitrary program halts. — Jacques
    This appears simply to be a projection of our ignorance of what can happen in the future. We can design an algorithm for a specific system that never interacts with an external system, and it works. The problem is that there are other, external systems that interact. Our solar system is a complex interaction of gravitational forces and has been stable and predictable for billions of years, but an external object like a black hole or a brown dwarf could fly through the system and disrupt it. It seems to me that every system halts at some point except reality itself (existence is infinite in time and space, or existence is an infinite loop).


  • Relativist
    3.1k
    Qualia, intuition, consciousness—they are all real phenomena, but I see no reason to believe they’re anything but products of material data processing. The brain, though vastly complex, is just a physical machine. If that machine can experience qualia, why not a future machine of equal or greater complexity? — Jacques

    It makes sense to think that IF the complex machinery of the brain produces qualia, THEN it is physically possible to develop a machine that reproduces this. However, we can't envision a way to implement this in a machine. That's why this is labeled "the hard problem of consciousness". I don't think it's reasonable to think qualia would just HAPPEN with sufficient computational capacity. Rather, we'd first need to have some theory about how qualia manifest in ourselves. Still, you have a good point that we may not be able to figure this out due to our finite limitations.
  • Tom Storm
    10k
    I also reject the idea that humans possess some irreducibly mysterious cognitive abilities. Qualia, intuition, consciousness—they are all real phenomena, but I see no reason to believe they’re anything but products of material data processing. The brain, though vastly complex, is just a physical machine. If that machine can experience qualia, why not a future machine of equal or greater complexity? — Jacques

    That should start the usual disagreements about scientistic physicalism and how it has collapsed the richness of conscious experience into merely computational or mechanistic terms. Next come the points about the hard problem of consciousness, followed by some Thomas Nagel quotes. Enjoy.
  • PoeticUniverse
    1.6k
    I’ve come to the conclusion that most media portrayals of AI developing "its own motives" are based on flawed reasoning. I don’t believe that machines—now or ever—will develop intrinsic motivation, in the sense of acting from self-generated desire. This is because I believe something far more basic: not even human beings have free will in any meaningful, causally independent sense.

    To me, human decisions are the inevitable product of evolutionary predispositions and environmental conditioning. A person acts not because of a metaphysical "self" that stands outside causality, but because neural machinery—shaped by genetics, trauma, language, culture—fires in a particular way. If that’s true for humans, how much more so for a machine?
    — Jacques

    Great analogy; nothing more to say.
  • Jacques
    106
    That should start the usual disagreements about scientistic physicalism and how it has collapsed the richness of conscious experience into merely computational or mechanistic terms. Next come the points about the hard problem of consciousness, followed by some Thomas Nagel quotes. Enjoy. — Tom Storm

    In my opinion anyone who rejects physicalism and the associated reduction of conscious experiences to material processes must assume that these experiences are based on something else. But on what – an élan vital, magic, or what else?
  • Pierre-Normand
    2.7k
    In my opinion anyone who rejects physicalism and the associated reduction of conscious experiences to material processes must assume that these experiences are based on something else. But on what – an élan vital, magic, or what else? — Jacques

    The rejection of materialistic (or physicalistic) reductionism need not entail the rejection of materialism broadly construed: the idea that everything we see in the natural world is materially constituted of physical objects obeying the laws of physics. But material constitution just is, generally, one particular feature of a material entity. Many entities have, in addition to their material constitution, formal/functional/teleological features that arise from their history, their internal organisation, and the way they are embedded in larger systems. This is true of human beings but also of all living organisms and of most human artifacts.

    What must then be appealed to in order to explain such irreducible formal features need not be something supernatural or some non-material substance. What accounts for the forms can be the contingencies and necessities of evolutionary and cultural history, and the serendipitous inventions of people and cultures. Those are all explanatory factors that have nothing much to do with physics or the other material sciences. Things like consciousness (and free will) are better construed as features or emergent abilities of embodied living (and rational) animals rather than mysterious immaterial properties of them.
  • Michael
    16.2k
    The brain, though vastly complex, is just a physical machine. If that machine can experience qualia, why not a future machine of equal or greater complexity? — Jacques

    It may not be just a matter of complexity but also of composition. Organic molecules may be necessary for consciousness to emerge because other chemicals are incapable of behaving in the appropriate way.

    So if anything like an artificial brain is possible, it might require being made of the same material as ours, and so traditional computers might never produce qualia no matter how many moving parts there are.

    Turing showed that assuming such a program exists leads to a logical contradiction once a program built from it is run on itself. Similarly, a human trying to model the human mind completely may run into a barrier of self-reference and computational insufficiency. — Jacques

    I'm not entirely familiar with the halting problem, but your wording suggests a mistake in your reasoning. It may not be possible for some program A to determine whether or not it itself will halt, but is it possible for it to determine whether or not some equivalent program B will halt? If so, even if I cannot model my own mind, I may be able to model your mind, and if it's reasonable to assume that our minds are broadly equivalent then that will suit our purposes of modelling "the human mind" in general.
  • RogueAI
    3.2k
    If that machine can experience qualia, why not a future machine of equal or greater complexity? — Jacques

    What about a machine of lesser complexity? Could a smartphone be conscious?
  • Tom Storm
    10k
    In my opinion anyone who rejects physicalism and the associated reduction of conscious experiences to material processes must assume that these experiences are based on something else. But on what – an élan vital, magic, or what else? — Jacques

    There might be various models offered as alternatives. One would be to invert physicalism entirely and argue (as Bernardo Kastrup does) that physical reality is how consciousness appears when viewed from a particular perspective. In other words, physicalism is a product of consciousness, not the other way around. There are detailed and well-argued accounts of this view, which go well beyond the scope of a few posts here.
  • Jacques
    106
    Many entities have, in addition to their material constitution, formal/functional/teleological features that arise from their history, their internal organisation, and the way they are embedded in larger systems. This is true of human beings but also of all living organisms and of most human artifacts. — Pierre-Normand

    The problem with this statement is that, in modern biology and the philosophy of science, teleology is generally rejected as a fundamental explanatory principle. While evolutionary processes can produce structures that appear purpose-built (such as wings for flying), this appearance is understood as a result of natural selection, not as evidence of actual purpose. Since Darwin — and even more explicitly since Stephen Jay Gould — such apparent design is treated as an illusion rather than a literal reality.

    What must then be appealed to in order to explain such irreducible formal features need not be something supernatural or some non-material substance. What accounts for the forms can be the contingencies and necessities of evolutionary and cultural history, and the serendipitous inventions of people and cultures. Those are all explanatory factors that have nothing much to do with physics or the other material sciences. Things like consciousness (and free will) are better construed as features or emergent abilities of embodied living (and rational) animals rather than mysterious immaterial properties of them. — Pierre-Normand

    I consider the assumption of irreducible formal features in complex entities to be unfounded. What may appear irreducible arises from our limited understanding or from the intricate interplay of physical processes. In principle, everything composite — including consciousness and will — can ultimately be traced back to physical processes. Where such reduction has not yet been achieved, this reflects epistemic limitations rather than an ontological impossibility. Even if higher-level phenomena are described functionally or as emergent, there is no justification for treating them as independent ontological domains beyond physics. This applies equally to biological organisms and to artificially created intelligent systems.
  • Pierre-Normand
    2.7k
    The problem with this statement is that, in modern biology and the philosophy of science, teleology is generally rejected as a fundamental explanatory principle. While evolutionary processes can produce structures that appear purpose-built (such as wings for flying), this appearance is understood as a result of natural selection, not as evidence of actual purpose. Since Darwin — and even more explicitly since Stephen Jay Gould — such apparent design is treated as an illusion rather than a literal reality. — Jacques

    I like Gould very much. I read some of his collected essays in the French translation as a teen (The Panda's Thumb, Hen's Teeth and Horse's Toes, The Flamingo's Smile), his book The Mismeasure of Man, and followed his debate with Dawkins regarding the latter's genocentrism and reductionism. Other anti-reductionist biologists like Ernst Mayr have defended naturalized conceptions of teleology (that Mayr calls "teleonomy") that don't conflict with Gould's insistence on the lack of foresight of evolution through natural selection. The question regarding the present aims (forward-looking) of an organism's structure and behavior is distinct from the question regarding the origin of this structure (backward-looking).

    (I'll comment on your second paragraph later on.)
  • ssu
    9.5k
    I’ve come to the conclusion that most media portrayals of AI developing "its own motives" are based on flawed reasoning. I don’t believe that machines—now or ever—will develop intrinsic motivation, in the sense of acting from self-generated desire. - I also reject the idea that humans possess some irreducibly mysterious cognitive abilities. Qualia, intuition, consciousness—they are all real phenomena, but I see no reason to believe they’re anything but products of material data processing. The brain, though vastly complex, is just a physical machine. — Jacques
    Machines and living entities are a bit different (as I assume you know), but let's accept the very broad definition here and ignore the obvious physical differences between man-made machines and living organisms.

    Yet do notice the logical difference: AI is still a computer (or a network of computers) and acts like a computer; it follows algorithms robotically. A human can indeed understand the "algorithms" he or she is following, and then change them and show initiative, because we can act as a subject. A computer cannot: it has to have in its algorithms clear instructions for how to deal with a situation where a human would use initiative, imagination, etc. Yes, AI can indeed mimic humans very accurately, but it isn't thinking as we are; it's computing/calculating.

    This idea reminds me of Turing’s Halting Problem: the impossibility of writing a general program that determines whether any arbitrary program halts. Turing showed that assuming such a program exists leads to a logical contradiction once a program built from it is run on itself. Similarly, a human trying to model the human mind completely may run into a barrier of self-reference and computational insufficiency. — Jacques

    I'm starting to think it's because we simply haven't understood how general the limitations of Turing's Halting Problem are for us. Computation is objective, but once you put a subjective element into the equation - namely, that the Turing Machine should take into account the actions of itself - we have the halting problem. When that self-reference is basically negative self-reference, the task is impossible, hence the famous result.

    Let me give you the easiest example of this:

    Try to do the following: Write a reply to my post that you never will write.

    Now obviously you cannot do it. Anything you write obviously won't be in the category (or the set) of things that you will never write in your lifetime. Are there replies that exist that you won't write? Yes, obviously there are such replies, as you don't live forever. Hopefully you can notice the negative self-reference in the above statement. Yet do notice the subjectivity also. Perhaps a friend of yours could fairly accurately describe what kind of reply you will give. For him or her, the modeling (of what your reply will be, if there is one) can be objective. But once it's you, there's no way out of it.
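
    The same shape can be put in code. As a toy Python illustration (nothing more than a sketch): a function asked to answer the opposite of what it answers about itself has no consistent answer.

```python
def contrarian(predicate) -> bool:
    """Answer the opposite of what 'predicate' answers about itself."""
    return not predicate(predicate)

# contrarian(contrarian) would have to equal its own negation.
# In Python the paradox surfaces as unbounded recursion instead:
try:
    contrarian(contrarian)
except RecursionError:
    print("no consistent answer: the self-application never bottoms out")
```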

    A similar problem came up in the discussion with @Sam26 in his [TPF Essay] Wittgenstein's Hinges and Gödel's Unprovable Statements, where I brought up the difference between the objective and the subjective. I'll rewrite what I said in that thread:

    Everything is about objectivity and subjectivity, actually. It's not merely a psychological issue, but simply a logical one. We can easily understand subjectivity as someone's (or something's) point of view, and objectivity as "a view without a viewpoint". Putting this into a logical and mathematical context makes it a bit different. Here both Gödel and Wittgenstein are extremely useful.

    In logic and math a true statement that is objective can be computed and ought to be provable. Yet when it's subjective, this isn't so: something subjective refers to itself.

    Hence, between a computer (be it AI or whatever) and a human being, the logical difference might be clearer when we think of it from the viewpoint of subjectivity and objectivity. A computer computes and cannot act as a subject, make decisions itself, and go against its algorithms to "do something else" that isn't in the algorithms, whereas we can understand our reasoning (basically our algorithms) and then come up with something new from it.

    Yet note just how fixated we are on the false view that everything can be described objectively. The standard counterargument would be that we humans are indeed similar to computers, but the "algorithms" we use are somehow on a hidden layer of "meta-algorithms" that we cannot describe. Yet this is basically just insisting on the view that everything can be modeled objectively; we simply assume these meta-algorithms of ours... without any explanation why. There doesn't have to be any kind of "meta-algorithm" at all; it is just that subjectivity isn't computable.

    When you think of it, this is like a cat circling a cup of milk that is too hot for it to drink, as subjectivity (at least to me) brings up the question of consciousness and the hard problem of consciousness, learning, etc. Yet it is beneficial here to model these "problems" by showing that they are indeed mathematical and logical.

    I'm not entirely familiar with the halting problem, but your wording suggests a mistake in your reasoning. It may not be possible for some program A to determine whether or not it itself will halt, but is it possible for it to determine whether or not some equivalent program B will halt? If so, even if I cannot model my own mind, I may be able to model your mind, and if it's reasonable to assume that our minds are broadly equivalent then that will suit our purposes of modelling "the human mind" in general. — Michael
    Math and logic are precise. There you cannot wiggle your way out just by assuming something. Otherwise we could always just assume a "black box" that gives us the correct models for everything and not think about it further. I could also assume I have a "black box" that gives me a solution to every math problem. The problem with this thinking is that I get no specific answers, naturally.

    And anyway, if you're not familiar with the halting problem, please look up the site Self-Reference and Uncomputability. It shows how the halting problem (and Gödel's incompleteness theorems) are tied to self-reference and uncomputability. Let me remind you that one outgrowth of the halting problem was the Church-Turing thesis, a vague definition of computability. Computers literally compute.

    And anyway, the problem here is that basically both you and @Jacques are part of the universe, and you cannot just assume you can look at the whole universe from outside it to get the truly objective viewpoint that would be needed. Just look at what physics turns into when measurements start having an effect on what is measured.
  • Jacques
    106
    ... anti-reductionist biologists like Ernst Mayr have defended naturalized conceptions of teleology (that Mayr calls "teleonomy") that don't conflict with Gould's insistence on the lack of foresight of evolution through natural selection. The question regarding the present aims (forward-looking) of an organism's structure and behavior is distinct from the question regarding the origin of this structure (backward-looking). — Pierre-Normand

    Teleology explains events as caused by goals or intended purposes. In contrast, teleonomy describes the appearance of goal-directedness, treating such goals as merely apparent rather than actual causes. To me, teleonomy represents a rejection of teleology rather than its naturalization. I maintain that all phenomena—including biological ones—can ultimately be explained by physical processes, even if, in many cases, we are still far from achieving this.
  • Jacques
    106
    Machines and living entities are a bit different (as I assume you know), but let's accept the very broad definition here and ignore the obvious physical differences between man-made machines and living organisms. — ssu

    While it's true that most people might share your opinion, it's worth noting that several prominent thinkers have argued that the brain—or even the human being as a whole—can be understood as a kind of machine. Notable proponents of this view include: Julien Offray de La Mettrie, Thomas Hobbes, Gottfried Wilhelm Leibniz, Alan Turing, Marvin Minsky, Daniel Dennett, Francis Crick, Jacques Loeb, Rodney Brooks, and Robert Sapolsky.

    there doesn't have to be any kind of "meta-algorithm" at all; it is just that subjectivity isn't computable. — ssu

    While subjectivity may not be computable at present, I assume it is in principle, given that the brain - a physical system effectively functioning as a (non-digital) computer - somehow gives rise to it.
  • MoK
    1.4k

    There are four features involved when it comes to a person, namely instinct, logical thinking, intuition, and wisdom; they arise in that order in a person. Free will is the ability of the mind to choose freely. It is required for deciding in situations where there is a conflict of interest.
  • Jacques
    106
    There are four features involved when it comes to a person, namely instinct, logical thinking, intuition, and wisdom; they arise in that order in a person. Free will is the ability of the mind to choose freely. It is required for deciding in situations where there is a conflict of interest. — MoK

    You sound like you’re drawing from idealism and Kantian philosophy, maybe with a libertarian view of free will—is that how you see it?
  • MoK
    1.4k
    You sound like you’re drawing from idealism and Kantian philosophy — Jacques
    I only studied Kant on morality, and I disagree with him. Generally, I am against any form of Idealism. The problem with any form of Idealism is how ideas could be coherent in the absence of a mind. I would be happy to see if Kant ever mentioned the mind in his books or gave a definition for it.

    maybe with a libertarian view of free will—is that how you see it? — Jacques
    I think that free will in the libertarian form is the ability of any mind.
  • ssu
    9.5k
    While it's true that most people might share your opinion, it's worth noting that several prominent thinkers have argued that the brain—or even the human being as a whole—can be understood as a kind of machine. — Jacques
    Do note the "as a kind of machine". Yes, we can talk, for example, about molecular machines in our body, but there still is a difference between a living organism and an artificial motor humans have constructed. But yes, we can generalize, so I also agree that we can talk about motors.


    While subjectivity may not be computable at present, I assume it is in principle, given that the brain - a physical system effectively functioning as a (non-digital) computer - somehow gives rise to it. — Jacques
    Wait a minute.

    Do you understand Turing's Halting Problem and the Church-Turing thesis?

    First and foremost, there is uncomputable mathematics. Not everything in mathematics is computable. Period, end of story. Do not assume that everything is then computable ...in the future or anywhere.

    You are making a mistake if you just assume that subjectivity may not be computable at present (and here the emphasis is on computable) but in principle could be. If you put it that way, you are basically arguing that Turing is wrong (and actually Gödel, with his incompleteness theorems, too). A "non-digital computer" doesn't actually make any sense, if by digital you mean something relating to computers or to electronic technology that uses discrete values, generally zero and one. It would be like talking about unhuman humans (there are plenty of inhuman humans, but not unhuman humans). And discrete values don't have anything to do with this problem.

    Do note that computation is a specific way to solve problems, the process of performing calculations or solving problems using a set of well-defined rules or instructions, following algorithms. We don't compute everything if we are presented with a problem. Or do you really compute every problem you find?
  • Jacques
    106
    Do note that computation is a specific way to solve problems, the process of performing calculations or solving problems using a set of well-defined rules or instructions, following algorithms. We don't compute everything if we are presented with a problem. Or do you really compute every problem you find? — ssu

    You're right, I don’t consciously compute every problem I encounter. But that doesn’t mean computation isn’t happening. Much of the problem-solving is outsourced to unconscious brain processes.

    Take sports as an example: when I play tennis and see a ball flying toward me, I don’t consciously calculate its trajectory using physics formulas. But my brain does - automatically and incredibly fast - estimate its speed, angle, and landing point, allowing me to react in time. That’s a form of implicit computation.

    So while I don’t deliberately compute everything, my brain is constantly computing - just not in a way that feels like "doing math".
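
    As a toy version of the kind of estimate involved, here is a minimal Python sketch of predicting a ball's landing point (assuming constant gravity, no air drag, and invented numbers - not a claim about how the brain actually does it, only about the input-output mapping it approximates):

```python
import math

def landing_distance(speed_mps: float, angle_deg: float,
                     height_m: float = 1.0, g: float = 9.81) -> float:
    """Horizontal distance at which a projectile returns to ground level."""
    angle = math.radians(angle_deg)
    vx = speed_mps * math.cos(angle)
    vy = speed_mps * math.sin(angle)
    # Solve height + vy*t - (g/2)*t**2 = 0 for the positive root.
    t_flight = (vy + math.sqrt(vy**2 + 2 * g * height_m)) / g
    return vx * t_flight

print(f"ball lands about {landing_distance(25.0, 10.0):.1f} m away")
```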
  • Jacques
    106
    Machines and living entities are a bit different (as I assume you know), but let's accept the very broad definition here and ignore the obvious physical differences between man-made machines and living organisms. — ssu

    You're right that at first glance, machines and living organisms appear quite different—especially regarding structure and origin. However, if we focus on functional principles and physical processes, the line becomes less sharp than it seems.

    In fact, already in the mid-19th century, leading scientists such as Carl Ludwig, Hermann von Helmholtz, Jakob Moleschott, and Carl Vogt argued that the physiological and chemical processes occurring in living beings are governed by the very same natural laws as those in the non-living world. This marked a decisive shift away from vitalistic thinking toward a scientific materialism that sees no need for a special “life force” beyond physics and chemistry.

    Vitalists had posited an irreducible inner principle—vis vitalis—to account for the uniqueness of life, but this notion steadily lost ground as physiology and biochemistry revealed the continuity between living and non-living matter.

    Helmholtz in particular famously applied the law of conservation of energy to biological systems, emphasizing that no separate principles were required to explain life. Carl Vogt even provocatively claimed that “the brain secretes thought as the liver secretes bile”—underscoring the continuity between organism and mechanism.

    So, while machines and organisms differ in origin and complexity, their internal workings are, in a deep sense, physico-chemical systems, and thus comparable under the lens of natural science.
  • Punshhh
    3k
    This brings me to a more speculative point: perhaps we will never be able to fully understand ourselves. Not for mystical reasons, but because of a structural limitation: a system may need more complexity than itself to fully model itself. In other words, explaining a human brain might require more computational capacity than the brain possesses. Maybe we will someday fully understand an insect, or a fish—but not a human being. — Jacques

    I would suggest that mysticism is the only way to fully understand ourselves. This is because it endeavours to develop understanding not simply through the intellect, but also through the body, through being and through growth. Thus enabling a more holistic, or 3-dimensional (by analogy), perspective.

    Also, I would suggest that fully understanding anything other than abstract concepts is not possible, because it would require an understanding of the whole context in which it resides - something we are not in a position to achieve.

    To address your question about AI and subjectivity: I don’t see why subjectivity, or anything else a human brain does, can’t be modelled. But subjectivity etc. is not the same as consciousness, which is something present in living organisms, resulting from biological processes rather than from computation in the nervous system. Just like the robot in Star Trek known as Data, AI can conceivably be programmed to perform anything a human can do, but it simply isn’t conscious. It’s a machine carrying out preordained processes.
  • ssu
    9.5k
    You're right, I don’t consciously compute every problem I encounter. But that doesn’t mean computation isn’t happening. Much of the problem-solving is outsourced to unconscious brain processes. - So while I don’t deliberately compute everything, my brain is constantly computing - just not in a way that feels like "doing math". — Jacques
    But this doesn't at all counter my point that there is uncomputable mathematics and hence there are uncomputable problems. Or, to put it another way, undecidable problems - where an undecidable problem is a decision problem for which an effective method (an algorithm) to derive the correct answer does not exist.

    So, while machines and organisms differ in origin and complexity, their internal workings are, in a deep sense, physico-chemical systems, and thus comparable under the lens of natural science. — Jacques
    Again, this isn't an issue of vitalism at all, or of how deeply related physico-chemical systems are. That isn't the question; the question is purely one of logic.

    Here's where this goes wrong, and I'll try to make my case why it is so:

    While subjectivity may not be computable at present, I assume it is in principle — Jacques

    I don’t see why subjectivity, or anything else a human brain does, can’t be modelled. — Punshhh

    These two comments are quite close to each other. I take the thinking to be the following: everything that either we or computers do is computation, or can be modelled as computation; hence there is a correct model. Hence subjectivity isn't a problem (@Punshhh), or we could perhaps find the correct model in the future (@Jacques).

    Here it is extremely important to notice how negative self-reference works, how it creates a limitation, and how it is related to subjectivity.

    We could start with the observation that you can freely write anything you want; there is no limitation on what you write (yes, an administrator can ban you for some writings, but that doesn't limit what you can write in the first place).

    With negative self-reference we can show that there indeed is a limitation on what we can write, because we cannot do the following thing:

    Write something that you never will write.

    And obviously that is something none of us can do: anything we write will instantly be part of what we have written, hence we simply cannot write something we don't write. Our writing it creates the "ownership" - the subjectivity - of the matter: it is something that we have written. And naturally we can reason that there is a set of writings that we will never write.

    The solution to this problem isn't just to assume a fourth person who could do this (who could write what we never write). For him or her (or it, if it's a computer or AI), the same limitation stands on what they produce.

    If you have understood my point so far, then one question could be: OK, what's the big problem here? Surely we can avoid this kind of logical trap.

    The question is: can we avoid this with every important question we would like to model or compute?

    Because once any computation or model has an impact on the outcome, or on what the correct model would be, the possibility of negative self-reference emerges. And for many of the most important questions we have, the models/computations really can have an effect on the outcome. Some effects can be handled and taken into account, but negative self-reference cannot.

    Hope this will clarify my point.
  • Jacques
    106
    But this doesn't at all counter my point that there is uncomputable mathematics and hence there are uncomputable problems. Or, to put it another way, undecidable problems - where an undecidable problem is a decision problem for which an effective method (an algorithm) to derive the correct answer does not exist. — ssu

    This discussion involves two distinct issues. The first concerns uncomputable problems, on which I fully agree with you. The second relates to whether biological organisms function like machines. As I mentioned earlier, there is broad consensus among biologists that living organisms operate according to the same physical laws as non-living systems. For instance, water molecules can pass consecutively through several living organisms and emerge unchanged—identical to how they were before entering the first of them.

    Furthermore, I believe that the primary function of brains is computation—more specifically, acting as control systems. They may not be able to solve every problem they encounter; in such cases the organism may suffer harm or die. Yet I regard this fallibility, too—this occasional failure—as a similarity between brains and computers.
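
    For what "acting as a control system" means in the simplest case, here is a minimal sketch - a toy proportional feedback loop, with all numbers invented for illustration:

```python
def control_step(target: float, measured: float, gain: float = 0.5) -> float:
    """Proportional controller: return a correction toward the target."""
    return gain * (target - measured)

# A sensed quantity is nudged toward a set point, step by step,
# the way a thermostat (or a reflex arc) closes the loop.
value = 15.0
for _ in range(20):
    value += control_step(22.0, value)
print(f"settled near {value:.2f}")
```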
  • MoK
    1.4k
    I would suggest that mysticism is the only way to fully understand ourselves. This is because it endeavours to develop understanding not simply through the intellect, but also through the body, through being and through growth. — Punshhh
    Very correct!

    Thus enabling a more holistic, or 3-dimensional (by analogy), perspective. — Punshhh
    What do you mean by this?

    Also, I would suggest that fully understanding anything other than abstract concepts is not possible, because it would require an understanding of the whole context in which it resides - something we are not in a position to achieve. — Punshhh
    Perhaps we can do it someday. Deities, perhaps, are practicing this!

    To address your question about AI and subjectivity: I don’t see why subjectivity, or anything else a human brain does, can’t be modelled. But subjectivity etc. is not the same as consciousness, which is something present in living organisms, resulting from biological processes rather than from computation in the nervous system. Just like the robot in Star Trek known as Data, AI can conceivably be programmed to perform anything a human can do, but it simply isn’t conscious. It’s a machine carrying out preordained processes. — Punshhh
    Matter is an environment that allows minds to interact. Matter does not carry out any process, since it cannot even cause a change in itself. It is minds that carry out processes over the information held in stuff - matter, for example.
  • Punshhh
    3k
    Thus enabling a more holistic, or 3-dimensional (by analogy), perspective.
    — Punshhh
    What do you mean by this? — MoK
    To see and know ourselves through an understanding of, and with, the body; through an understanding of, and through, being; and through growing, or progressing, in these activities - alongside an intellectual understanding and enquiry. One or more of these means can inform the others and, in a personal way, integrate with them, forming a broader understanding, or knowing, in which the intellect is no more important to attaining that growth than the other means.