• Pez
    33
    The tragic fate of Alan Turing does not diminish his role in computer science. The test named after him was invented to determine whether a computer can think. Nowadays AI would quite easily pass this test, if not now, then in the foreseeable future. Does this mean that modern-day computers are actually able to think like human beings? Or even that they have consciousness as we have? The question has to be extended, though. How can we be sure that other people can think and have consciousness? If we merely infer it from their bodily movements and facial expressions, a robot in the far future might be able to deceive us as well.

    The road to solipsism is paved that way, and if you have no problem with that, the discussion is over. If you do have a problem, though, you have to concede that there is some fundamental problem involved and that something is wrong with the usual way of looking at things. The choice between materialism and idealism might not be sufficient to understand ourselves and the world we live in.

    The mind-body problem has a long tradition. Arguably René Descartes (* 31.03.1596, † 11.02.1650) introduced it by postulating two types of “substances” or things: extended ones and thinking ones. The human body is an extended, material thing. On the other side we have (our) conscious experience, according to Descartes a thinking thing (cogito ergo sum – I think, therefore I exist). Ever since, the dispute has revolved around the relation between these two substances.

    Materialism puts its weight on the concept of matter, stating that material stuff is the essence of existence. Everything else, including consciousness, has to be explained in terms of attributes of the physical world. Mind is only an epiphenomenon arising from the complexity of some material things, e.g. the human brain.

    Idealism, on the other hand, claims that everything we can ever know about matter stems from our conscious experience. Therefore the concept of matter is only an inference from the data our senses provide us. Consciousness is at the base of existence, and matter has to be explained as an outcome of conscious experience. Everything is in the mind only.

    Thus materialism and idealism are just the extreme positions regarding the two substances. If we concede Descartes' arguments, the problem of mutual influence between the two substances arises. If we intend to raise our hands, can our mind have causal influence on our physical body? On the other hand, scientists have found that stimulating certain regions of the brain results in consciously experienced sensory impressions. Science has made astounding progress, and we can study the functioning of the human brain at work. It is possible to follow the excitation of nerves from the retina to specific parts of the brain – but that is all there is; no trace of consciousness is to be found. Inside our brain it is pitch-dark, so to speak.

    “No one thinks that when a tune is running in my head, a surgeon could unearth a little orchestra buried inside my skull or that a doctor by applying a stethoscope to my cranium could hear a muffled tune...”, to quote Gilbert Ryle (The Concept of Mind). And no one would introduce himself at a party by saying: Hello, here I am, and I have brought my body with me. It may sound ridiculous, but the bifurcation of nature would indeed admit such propositions.

    Interesting in this context are the experiments conducted by the American physiologist Benjamin Libet. He wanted to find out how much time passes between the moment we intend to execute a bodily action and the activation of the appropriate nerves. The unexpected outcome of these experiments was that the relevant neural activity began on average 0.35 seconds before the reported intention. The nerves and muscles are activated before we consciously intend to raise our arm. Not surprisingly, this has led to discussions about free will and determinism.

    The complexity of integrated circuits in modern computers is rapidly approaching the complexity of the human brain. Traditional computer programs have a limited range, and programmers can quite easily foresee the possible outcomes. AI is different, though. Not even the designer can predict what will happen, as these programs in a certain way program themselves and are able to learn, depending on the scope of the available data.
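
    To make the contrast concrete, here is a minimal sketch (Python, purely illustrative; the data and the threshold rule are invented for the example): a traditional program whose outputs the programmer fixes in advance, next to a tiny "learned" rule whose behaviour depends entirely on whatever data it is given.

    ```python
    # A purely illustrative sketch: a hand-written rule vs. a rule fitted to data.

    def traditional_classifier(temperature_c: float) -> str:
        # Every possible outcome is fixed in advance by the programmer.
        return "hot" if temperature_c > 25 else "cold"

    def train_threshold(examples):
        """examples: list of (temperature, label) pairs; returns a learned cutoff."""
        hots = [t for t, label in examples if label == "hot"]
        colds = [t for t, label in examples if label == "cold"]
        # The boundary is not written by the programmer; it depends on the data.
        return (min(hots) + max(colds)) / 2

    data = [(30, "hot"), (18, "cold"), (27, "hot"), (10, "cold")]
    print("learned cutoff:", train_threshold(data))  # 22.5 here; different data, different rule
    ```

    Scale the second pattern up by many orders of magnitude and the designer can no longer enumerate the outcomes in advance.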

    Give AI senses and the ability to act, and the difference from human behaviour will diminish in the long run. Does this mean that we are just sophisticated machines and that all talk about freedom of choice and responsibility for our actions is just wishful thinking? Or is there something fundamentally wrong with our traditional concepts of mind and matter? I maintain that we need a new world-picture, especially as the Newtonian view is nowadays as outdated as the Ptolemaic system was in the 16th century. But this will be a new thread in our forum.
  • flannel jesus
    1.8k
    Materialism puts its weight on the concept of matter, stating that material stuff is the essence of existence. Everything else, including consciousness, has to be explained in terms of attributes of the physical world. Mind is only an epiphenomenon arising from the complexity of some material thingsPez

    There's a lot of philosophy about this. It's normally anti-materialists who insist that all materialists must consider consciousness epiphenomenal; actual materialists have a wide range of views on that question.
  • Corvus
    3.2k
    AI is different, though. Not even the designer can predict what will happen, as these programs in a certain way program themselves and are able to learn, depending on the scope of the available data.Pez
    The problem with all mental operations and events is their privateness to the owners of the minds. No one will ever access what the owners of other minds think, feel, intend, etc. Other minds can only construe mental events from the actions of the agents and the languages they speak.

    By the way, they say it is now official that no human being can beat the AI chess players. To know what the AI machines think and feel, one would have to be an AI machine oneself. The possibility of that happening in the real world sounds as unrealistic and impossible as the futile ramblings of time-travel fiction.
  • flannel jesus
    1.8k
    Yes, but the important question isn't "Is it like that now?" The important question is: "Is that necessarily the case in principle?"

    Is it in principle possible or impossible that some future AI might be sentient or have a mind of its own?
  • MoK
    381
    Yes, AI can think if we know how we think. We can simulate the brain if we know how it works when it comes to thinking. There are three questions that we need to answer before we can design an AI that can think (a toy sketch follows the list):
    1) How is new information classified in the brain?
    2) How is related information linked in the brain?
    3) How do we derive new information from the linked information?
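
    A toy sketch of the three operations in code (Python; purely an illustration of the questions, not a claim about how the brain does it; the items and categories are invented):

    ```python
    # Toy illustration of: (1) classifying, (2) linking, (3) deriving new info.
    from collections import defaultdict

    class ToyMemory:
        def __init__(self):
            self.categories = defaultdict(set)  # (1) classification: label -> items
            self.links = defaultdict(set)       # (2) linking: item -> related items

        def classify(self, item, category):
            self.categories[category].add(item)

        def link(self, a, b):
            self.links[a].add(b)
            self.links[b].add(a)

        def infer(self, start):
            # (3) "new" information: everything reachable through existing links,
            # even where two items were never linked directly.
            seen, frontier = {start}, [start]
            while frontier:
                item = frontier.pop()
                for neighbour in self.links[item] - seen:
                    seen.add(neighbour)
                    frontier.append(neighbour)
            return seen - {start}

    m = ToyMemory()
    m.classify("sparrow", "bird")
    m.link("sparrow", "wings")
    m.link("wings", "flight")
    print(m.infer("sparrow"))  # {'wings', 'flight'}: an indirect connection surfaces
    ```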
  • Arne
    817
    My paper for my senior seminar almost 40 years ago was entitled Pylyshyn and the Mind/Body Problem. In a sense, the answer will always be no, because humans will continually move the goalposts. And AI cannot do that. :-)
  • Count Timothy von Icarus
    2.8k

    Give AI senses and the ability to act, and the difference from human behaviour will diminish in the long run. Does this mean that we are just sophisticated machines and that all talk about freedom of choice and responsibility for our actions is just wishful thinking?Pez

    Determinism doesn't necessarily rule out many conceptions of free will. Many philosophers argue that determinism is a prerequisite for free will. If our acts didn't have determinate outcomes, or at least predictable ones, we couldn't enact our will in the world. Consider a videogame where you can push buttons but the buttons do random things each time you push them. In what way are you free to change things in the game?

    The flip side of this problem is that if our actions are not determined by anything in the world, our past experiences, etc., then they wouldn't seem to rightly be "our actions." If they are determined by nothing that comes before, they become uncaused, arbitrary, and random.

    But free will can be conceived of as a sort of "self-determination." Our recursive self-awareness has causes; its existence and contents are determined by past events, but our awareness itself also seems to play a causal role in our choices. We are not perfectly free, which is obvious: we cannot walk through walls or wipe memories from our minds at will. But who we are and what we think seems to play a determining role in what we do, and in this way we are free as "self-determining" entities. Nothing in determinism contradicts this sort of freedom.

    The challenge to freedom in our "being reduced to machines" generally runs through smallism and epiphenomenalism. The argument is normally something like:

    Atoms are not conscious and lack purposes. We are nothing but atoms. Therefore, all our acts are determined by that which lacks purpose, and consciousness's effects on action must be illusory, since all thought is determined by the rules governing mindless atoms.

    But this doesn't follow from determinism; it follows from other concepts normally lumped in with determinism, namely reductionism and smallism, the idea that facts about all large entities can be wholly reduced to facts about smaller entities. However, such views seem to make accounting for consciousness impossible, barring panpsychism, so it's unclear how seriously they should be taken; nor is there overwhelming empirical support for them. True reductions in science are very rare.

    Yes, AI can think if we know how we thinkMoK

    Or maybe rather, "we could determine that AI was thinking if we knew how we thought?" But we don't, and therein lies the massive hole at the center of this debate.

    But for those who deny the possibility...

    ...what about "wetware" AI that is "grown" from neuronal tissue, perhaps with silicon involved as well? If these cannot be conscious, why can natural entities composed of a similar material be conscious?

    Of course, there the difference between "artificial life" and "AI" gets blurry. Where is the line between cyborg and hybrot? How much biological material, or biological-like material, can be involved in a potential "thinking thing" before we dismiss it as being "AI"?

    Whether purely silicon-based systems can produce sentience seems impossible to answer currently. Finding evidence of silicon-based life, while unlikely, would really shake this up.
  • Arne
    817
    Whether purely silicon-based systems can produce sentience seems impossible to answer currently. Finding evidence of silicon-based life, while unlikely, would really shake this up.Count Timothy von Icarus

    I agree.
  • Joshs
    5.7k

    The complexity of integrated circuits in modern computers is rapidly approaching the complexity of the human brain. Traditional computer programs have a limited range, and programmers can quite easily foresee the possible outcomes. AI is different, though. Not even the designer can predict what will happen, as these programs in a certain way program themselves and are able to learn, depending on the scope of the available data.Pez

    Computers are our appendages. They are like organ systems within our bodies. Just as the functioning of a liver or heart cannot be understood apart from its inextricable entanglement in the overall aims of the organism, the same is true of our machines with respect to our purposes. They are not autonomous embodied-environmental systems but elements of our living system. As long as we are the ones who are creating and programming our machines by basing their functional organization on our understanding of concepts like memory storage, pattern matching and sensory input, their goals cannot be self-generated. They can only generate secondary goals derived as subsets of the programmed concepts, which we then respond to by correcting and improving the programming. This is how our appendages and organ systems function.

    Saying our machines are smarter or dumber than us is like saying the spider's web or the bird's nest is smarter or dumber than the spider or the bird. Should not these extensions of the animal be considered a part of our living system? When an animal constructs a niche it isn't inventing a life-form; it is enacting and articulating its own life form. Machines, as parts of niches, belong intimately and inextricably to the living self-organizing systems that ‘we' are.

    Why is the complex behavior of current AI not itself creative, apart from the user's interpretation? Because the potential range of unpredictable behaviors on the part of the machines is anticipated in a general sense, that is, encompassed by the designer's framework of understanding. Designing a chaotic fractal system or a random number generator, or mathematically describing the random behavior of molecules: such schemes anticipate that the particulars of the behavior of the actual system they describe will evade precise deterministic capture. Industrial-age machines represented a linear, sequential notion of temporality and an objective physicalism, complementing representational approaches to art and literature; today's AI is an expression of the concept of non-linear recursivity, and will eventually embrace a subject-object semantic relativism. Current AI thus ‘partners' with newer forms of artistic expression that recognize the reciprocal relation between subject and object and embed that recognition into the idea the artwork conveys.
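
    As a concrete illustration of that first point (Python; the logistic map is a standard textbook example, and the starting values are arbitrary): the designer fully specifies the rule and anticipates its general character, yet the particular trajectory escapes long-range prediction.

    ```python
    # Illustrative only: the logistic map, a fully specified deterministic rule
    # whose general behaviour the designer anticipates (bounded, chaotic at r = 4)
    # but whose particular trajectory defies long-range prediction in practice.

    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    a, b = 0.123456789, 0.123456790  # two nearly identical starting points
    for _ in range(60):
        a, b = logistic(a), logistic(b)

    # After a few dozen steps the two trajectories are wholly uncorrelated,
    # though every step was produced by the same simple, known rule.
    print(f"a = {a:.6f}, b = {b:.6f}, |a - b| = {abs(a - b):.6f}")
    ```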

    And just like these forms of artistic expression, non-linear, recursive AI functions as an archive, a snapshot, a recorded product: an idea of self-transforming change frozen in time. In dealing with entities that contribute to our cultural evolution, as long as we retain the concepts of invention and machine we will continue to be interacting with an archive, a snapshot of our thinking at a point in time, rather than a living self-organizing system. In the final analysis the most seemingly ‘autonomous' AI is nothing but a moving piece of artwork with a time-stamp of who created it and when. In sum, I am defining true intelligence as a continually self-transforming ecological system that creates cultural (or biological) worldviews (norms, schemes, frames), constantly alters the meaning of that frame as variations on an ongoing theme (continues to be the same differently), and overthrows old frames in favor of new ones. The concept of an invented machine, by contrast, is not a true intelligence, since it is not a self-modifying frame but only a frozen archive of the frame at a given moment in time.

    Can we ever ‘create' a system that is truly autonomous? No, but we can tweak living organic material, such as DNA strands enclosed in cell-like membranes, so that it interacts with us in ways that are useful to us. Imagine tiny creatures that we can ‘talk to'. These would be more like our relationship with domesticated animals than with programmed machines. Think of humanity as ecosystem 1, the encompassing ecosystem whose intelligence evolves over time. Computers belong to ecosystem 2, the sub-ecosystem operating within, and a dependent part of, ecosystem 1. As a dependent part, it cannot evolve beyond ecosystem 1. It evolves with it.
  • NOS4A2
    9.3k

    Anything that isn't human cannot do what a human does, by virtue of its not being a human being. Whatever sapiens build in their pursuit of artificiality and technology will have to be content with mere mimicry. It's the same with human thought. Although I don't think it could be said that thinking is a strictly human affair any more than drinking or running is, the problem occurs in believing we can apply strictly biological concepts and activities to technological and artificial ones. Do submarines swim? The answer, along with answers to others of a similar theme, is invariably "no".
  • flannel jesus
    1.8k
    Why is it invariably a no? Why is it inconceivable to imagine a machine that can swim pretty much like other swimming beings swim? We already have machines that walk like humans walk.
  • NOS4A2
    9.3k

    I argued that the best technology can do is mimic the biological activity. This is for two reasons: technology isn't biological, so mimicry is all it could ever amount to; but also because the technology is designed to mimic the biological activity, not to be biological activity.
  • flannel jesus
    1.8k
    But what makes "swimming" an inherently biological activity, such that a machine mimicking it doesn't count as "swimming"?
  • RogueAI
    2.8k
    If some computer becomes conscious, would that entail my old Commodore 64 was conscious? ENIAC? An abacus?
  • NOS4A2
    9.3k

    I don't see how the term applies to anything else.
  • MoK
    381
    Or maybe rather, "we could determine that AI was thinking if we knew how we thought?" But we don't, and therein lies the massive hole at the center of this debate.

    But for those who deny the possibility...
    Count Timothy von Icarus
    Correct. Unfortunately, we don't know how we think, so we cannot design an AI that can think.
  • 180 Proof
    15.4k
    Is it in principle possible or impossible that some future AI might be sentient or have a mind of its own?flannel jesus
    What do you mean by "sentient" & "mind of its own"? Do you believe these properties are attributes of human beings? If so, why do you believe this? And, assuming it's possible, would these properties be functionally identical instantiated in an AI-system as they are embodied in a human? Why or why not?

    As for me, I've yet to find any compelling arguments for why, in principle, a machine cannot be built (whether by h. sapiens and/or machines) that functionally exceeds whatever biological kluge (e.g. the primate brain) nature adaptively spawns by environmental trial and error. And since the concept-prospect does not violate any (current) physical laws, I see no reason (yet) to assume, or suspect, that "sentient AI" is a physical/technological impossibility.
  • RogueAI
    2.8k
    As for me, I've yet to find any compelling arguments for why, in principle, a machine cannot be built (whether by h. sapiens and/or machines) that functionally exceeds whatever biological kluge (e.g. the primate brain) nature adaptively spawns by environmental trial and error. And since the concept-prospect does not violate any (current) physical laws, I see no reason (yet) to assume, or suspect, that "sentient AI" is a physical/technological impossibility.180 Proof

    OK, let's suppose we develop sentient AI. Do we then have to reevaluate sentience for all the computing devices we didn't think were sentient?
  • Patterner
    997
    OK, let's suppose we develop sentient AI. Do we then have to reevaluate sentience for all the computing devices we didn't think were sentient?RogueAI
    How would we know we developed sentient AI? I would think whatever criteria we used to determine that would be used to evaluate all computing devices. Entire classes of them would likely be ruled out, known to not have the required element.
  • RogueAI
    2.8k
    How would we know we developed sentient AI? I would think whatever criteria we used to determine that would be used to evaluate all computing devices. Entire classes of them would likely be ruled out, known to not have the required element.Patterner

    Well, there's the rub. We can't even know for certain that other people are conscious. But suppose we're in contact with superintelligent aliens, only they don't tell us a whole lot. The stuff they do share with us, however, is invariably correct. And one of the things they tell us is that at least one of the machines we've developed is conscious. Would that entail considering digital watches, say, as being conscious?
  • Wayfarer
    22.5k
    Nowadays AI would quite easily pass this test, if not now, then in the foreseeable future. Does this mean that modern-day computers are actually able to think like human beings? Or even that they have consciousness as we have?Pez

    I put the question to ChatGPT, which responded:

    The Turing Test, devised by Alan Turing in 1950, is a measure of a machine's ability to exhibit intelligent behavior indistinguishable from that of a human. Turing proposed that if a human evaluator could not consistently tell the machine apart from a human based on their responses to questions, the machine could be considered to have passed the test. The focus is on the imitation of the external behavior of intelligent beings, not on the internal thought processes.

    Modern artificial intelligence (AI) systems, including chatbots and language models, have become increasingly sophisticated, making it more challenging to distinguish their outputs from human responses in certain contexts. However, passing the Turing Test does not necessarily mean that computers are able to think like human beings. Here's why:

    1. **Imitation vs. Understanding**: AI can mimic the patterns of human conversation and generate responses that seem human-like, but this does not imply understanding or consciousness. The AI does not possess self-awareness, emotions, or genuine understanding of the content it processes; it operates through algorithms and data.

    2. **Narrow AI vs. General AI**: Most modern AIs are examples of narrow AI, designed to perform specific tasks, such as language translation, playing a game, or making recommendations. They are not capable of general intelligence, which would involve understanding and reasoning across a broad range of domains with human-like adaptability.

    3. **Lack of Consciousness**: Consciousness and subjective experience are fundamental aspects of human thought. Current AI lacks consciousness and the ability to experience the world subjectively. The process of thought, as humans experience it, involves not just responding to stimuli or questions but also emotions, motivations, and a continuous stream of internal dialogue and reflection.

    4. **Different Processing Mechanisms**: Human brains and computers operate in fundamentally different ways. Human thought is the product of biological processes, evolved over millions of years, involving complex interactions among neurons and various brain regions. AI, on the other hand, processes information through algorithms and computational methods that do not replicate the biological processes of human thought.

    While AI can simulate certain aspects of human thinking and may pass the Turing Test, it does so without the underlying consciousness, emotions, and genuine understanding that characterize human thought. The development of AI that truly thinks and understands like a human being would require not just advancements in computational techniques but also a deeper understanding of consciousness and human cognition, which remains a significant scientific and philosophical challenge.
    — ChatGPT
  • Patterner
    997

    I already say "Thank you" to Alexa and Siri. If I had a digital watch, I suppose I wouldn't mind thanking it.

    I'm gonna need a better scenario than that. I know who the Kanamits are. Why don't they tell us a whole lot? Why tell us something like this, but no specifics? I wouldn't assume they were telling the truth. And we're already trying to figure out if any of our machines are sentient.
  • Agree-to-Disagree
    465
    According to the Merriam-Webster dictionary, "sentient" means:
    1 : responsive to or conscious of sense impressions
    2 : having or showing realization, perception, or knowledge : aware
    3 : finely sensitive in perception or feeling.

    Using these descriptions of what "sentient" means, does that mean that a Tesla car is "sentient"?
  • 180 Proof
    15.4k
    OK, let's suppose we develop sentient AI. Do we then have to reevaluate sentience for all the computing devices we didn't think were sentient?RogueAI
    I don't see why we would need – why it would be useful – to do that.
  • RogueAI
    2.8k

    For the same reason we now take plant consciousness seriously. Machine consciousness would raise obvious questions, one of which is: are primitive machines conscious?
  • 180 Proof
    15.4k
    On what grounds would one ask that question (which is like asking whether or not a complete skeleton is "conscious")?
  • BC
    13.6k
    One difference between animal intelligence and computers is that young animals -- puppies and people -- initiate inquiry into the world around them. Animal intelligence reaches out on its own. Another difference: a computer (AI) has to be given instructions to acquire information--which it does not convert into experience. It can discover that it is 110ºF in the shade, but it cannot "feel" the heat. Animals experience the world through their bodies. They can tell that it is hot, and unpleasant. Computers cannot do that.

    Animal intelligence isn't separate from the bodies which contain it. Computers may have a chassis, may have millions of components, but there is no experience, no body to have the experience.

    This animal intelligence that is writing now cannot tell whether some of the people who bring up computers and AI are in favor of, or against, granting computers "thought". Some of them seem to long for a thinking machine. It strikes me as a little like wanting one's puppets to come alive.
  • Corvus
    3.2k
    "There is more wisdom in your body than in your deepest philosophy.”
    ― Nietzsche, Friedrich, Thus Spoke Zarathustra

    AI is unlikely to be sentient like humans without the human biological body. Without two hands, AI cannot prove the existence of the external world, for instance. Without being able to drink, AI wouldn't know what a cup of coffee tastes like.

    AI might be able to speak human languages, but it would lack the voice quality which also transfers the content of emotions and feelings. But most critically, it would be unable to understand the similes or metaphorical expressions of living humans.

    AIs are machines designed to carry out certain tasks efficiently and intelligently; hence they are tools to serve humans.
  • Agree-to-Disagree
    465
    AI is unlikely to be sentient like humans without the human biological body.Corvus

    Why can't something with an artificial body be sentient? Not being "sentient like humans" does not exclude being sentient.

    Without two hands, AI cannot prove the existence of the external world, for instance.Corvus

    AI can have tactile pressure sensors which allow it to "feel" the world. AI can also have other sensory systems which allow it to "see" the world.

    AI might be able to speak human languages, but it would lack the voice quality which also transfers the content of emotions and feelings.Corvus

    AI is getting to the stage where they do have voice quality and facial expressions which display emotions and feelings. They can also "hear" human voice quality and "read" human faces.

    AIs are machines designed to carry out certain tasks efficiently and intelligently; hence they are tools to serve humans.Corvus

    Humans are biological machines which carry out certain tasks for bacteria. Hence humans are tools to serve bacteria.
  • Corvus
    3.2k
    AI is getting to the stage where they do have voice quality and facial expressions which display emotions and feelings. They can also "hear" human voice quality and "read" human faces.Agree-to-Disagree
    How do you prove that they have human sentience? Just because they can sense and respond to certain situations and input data, it doesn't mean they have feelings, emotions, and autonomous intentions of their own.

    It is like saying that thermometers are sentient because when it is hot the readings go up, and when it is cold the readings go down. Just because they can sense something, should they be regarded as conscious or sentient?

    Humans are biological machines which carry out certain tasks for bacteria. Hence humans are tools to serve bacteria.Agree-to-Disagree
    I suppose bacteria would be closer to humans, because at least they are living beings. I am not sure about the claim that humans serve bacteria. Do they not cooperate with each other for their own survival?
  • Agree-to-Disagree
    465
    AI can be programmed to operate like humans, but are they really sentient like humans? How do you prove that they have human sentience?Corvus

    How do you know that the person standing next to you has "human sentience"?