• tom111
    14
    Let's say we have two black boxes, named A and B. We want to see if they are the same system. The logical thing to do would be to feed both systems the same input data and see if they produce the same output. The more times we do this, and the more times we see the same outputs, the more certain we can be that they are the same system.

    The Turing test, at a basic level, tries to take advantage of this fact. It indirectly compares two systems, a human and an AI, to see if there are any distinguishable differences in output. It claims that if there are no observable differences in output between a human and an AI, then we should regard the AI as sentient.

    Let's return to systems A and B. Let's suppose they give the same outputs, given identical or similar inputs. Can we really be certain that the wiring inside these boxes is exactly the same?

    Of course not. If we assume for a moment that the two black boxes take a series of numerical inputs and produce an output, based on a series of algorithmic calculations, we could have the following:

    Algorithm A:
    1) Take input x and add 10
    2) Multiply x by 2
    3) Add these results together

    In order: output = x + 10 + 2x = 3x + 10

    Algorithm B:
    1) Take input x and add 5
    2) Multiply x by 4
    3) Add these results
    4) Subtract two times x
    5) Add 5

    In order: output = x + 5 + 4x - 2x + 5 = 3x + 10

    Functionally, these algorithms calculate the same thing, only one of them in a more convoluted way than the other. In practice, this would make for two black boxes with different internal wiring, but we would have no way of telling from the outside. From an outsider's perspective, they are the same system and there is no way to distinguish them.
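
    To make this concrete, here is a minimal sketch in Python (the function names are mine, purely for illustration) showing the two boxes agreeing on every tested input despite the different internal wiring:

        def algorithm_a(x):
            # 1) take input x and add 10; 2) multiply x by 2; 3) add the results
            return (x + 10) + (2 * x)

        def algorithm_b(x):
            # 1) add 5; 2) multiply x by 4; 3) add the results;
            # 4) subtract two times x; 5) add 5
            return ((x + 5) + (4 * x)) - (2 * x) + 5

        # both reduce to 3x + 10, so the outputs never differ
        assert all(algorithm_a(x) == algorithm_b(x) for x in range(-1000, 1001))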

    So this brings me to a theorem:
    ---------------------------------------
    Theorem 1: Given two black boxes, A and B, if the inputs and corresponding outputs for both are the same, then either the internal wiring of A and B is the same, or one is a more efficient version of the other.
    ---------------------------------------

    We can state an alternative version of this theorem:

    ---------------------------------------
    Theorem 2: We have two black boxes, A and B. Instead of a single input-output system, we now have a range of outputs for a single input. For a single input, i, A can have outputs (X1, X2, ... XN) and B can have outputs (Y1, Y2, ... YN).

    We can do the same thing as in theorem 1 and test the potential outputs of B for an input i. If we find that all outputs of B fall within the output range of A, (X1, X2, ... XN), for a particular i, then it is likely that black boxes A and B are the same type of system. (Of course, we would have to test a wide range of i's to be certain.)

    This is simply a scaled-up version of theorem 1, with multiple potential outputs.
    ---------------------------------------
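
    As a rough Python sketch of the test theorem 2 describes (the sampling scheme, trial count, and names here are my own assumptions, not part of the theorem):

        def passes_theorem_2(box_a, box_b, inputs, trials=1000):
            # box_a and box_b are callables that may return a different
            # output on each call, i.e. a range of outputs per input
            for i in inputs:
                a_range = {box_a(i) for _ in range(trials)}  # estimate (X1, X2, ... XN)
                b_range = {box_b(i) for _ in range(trials)}  # estimate (Y1, Y2, ... YN)
                if not b_range <= a_range:
                    return False  # some output of B fell outside A's range
            return True  # B's outputs always fell within A's range

    The more inputs (and trials per input) this passes, the more confident we can be that A and B are the same type of system.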

    Now back to the Turing test. What we are basically doing, on a fundamental level, is comparing two systems: humans and AI. As humans we are all too familiar with the nature of human interactions, so we can safely compare them with a potential AI interaction.

    We are seeing if the speech output of the AI falls within the range of acceptable human speech.

    In theorem 2, if system A is a human and system B is an AI, then for a particular speech prompt, i, a human can have a range of speech outputs (X1, X2, ... XN). If, for the same prompts, the AI's (system B's) output always falls within the range of acceptable human speech outputs (X1, X2, ... XN), it has passed the Turing test. Furthermore, by theorem 2, a human and an AI can be regarded as the same type of system.

    In other words, the Turing test is testing theorem 2.

    Therefore if an AI passes the Turing test (over a very large number of conversations), it is likely that the AI can be considered to be a "brain-like system", and therefore conscious.

    I hope this all makes sense.
  • RogueAI
    2.9k
    The problem is you can never get "inside" the black box and verify if it is conscious or not.
  • Deus
    320
    That does not matter. Humans still don't know how consciousness is produced in our brains, even with dissection, fMRI, MRI, etc.
  • tom111
    14
    It's true that we cannot directly observe consciousness, so there will never be any 100% certainty that an AI (or even another human) is conscious.

    However, the reasoning I outlined in the post could be a justification as to why the Turing test might be a good way to verify whether an AI is sentient or not. This is, of course, based on the assumption that something of similar structure and functioning to a human brain would also be conscious.
  • tom111
    14
    It's basically based on the premise that if two systems have the same outputs for all inputs, they are functionally the same system.

    If an AI has the same outputs as a human might for all inputs, then it logically follows that they are functionally the same type of system too. Therefore an AI that passes the Turing test is likely conscious.
  • Seeker
    214
    Arithmetic alone would be far from enough to verify consciousness. Such simple methods could easily result in the opposite of what is intended.
  • Art48
    480
    Functionally, these algorithms calculate the same thing, only one of them in a more convoluted way than the other. In practice, this would make for two black boxes with different internal wiring, but we would have no way of telling from the outside. From an outsider's perspective, they are the same system and there is no way to distinguish them.

    So this brings me to a theorem:
    ---------------------------------------
    Theorem 1: Given two black boxes, A and B, if the inputs and corresponding outputs for both are the same, then either the internal wiring of A and B is the same, or one is a more efficient version of the other.
    ---------------------------------------
    tom111

    You'll need to define efficiency. Also, cos(x - pi/4) = sin(x + pi/4), but which is more efficient?
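
    (For what it's worth, the identity itself checks out: cosine is even, so cos(x - pi/4) = cos(pi/4 - x), and by the co-function identity cos(pi/4 - x) = sin(pi/2 - (pi/4 - x)) = sin(x + pi/4).)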
  • Real Gone Cat
    346
    Judging efficiency requires the user to recognize some difference between the two black boxes AND place a value on those differences. Five differences that a user might value that I can think of off the top of my head are size, speed, energy, accuracy, and the expense of creating the boxes. An obvious difference that probably carries no value for the user is external color.

    The situation is made more difficult by multiple differences. Is a mainframe that takes up half a room but can solve a difficult problem in seconds more or less efficient than a desktop that requires half an hour to solve the same problem? It depends on the value the user places on each difference (size vs. speed).

    And of course judging between a human and a machine is many times more difficult.

    The two sides of your equation are, of course, equivalent, but they may not share the same "efficiency" - if you mean by efficiency how much time or energy is required to arrive at an answer. Subtraction may require more steps for a given system (blackbox, human, or other) than addition. Ditto computing sines or cosines.
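
    To make that concrete with the toy boxes from the opening post: Algorithm A needs three arithmetic operations per input (an add, a multiply, and an add), while Algorithm B needs six (counting the multiplication hidden in "subtract two times x"), so by a pure operation-count measure A would be the more efficient box, even though their outputs never differ.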
  • Alkis Piskas
    2.1k
    Therefore if an AI passes the Turing test (over a very large number of conversations), it is likely that the AI can be considered to be a "brain-like system", and therefore conscious.
    tom111
    Well, well, well! You passed from simple computing systems to consciousness!!
    I would call that "too ambitious", if it wasn't a blatantly wrong conclusion.

    So, although your whole description of the Turing test, etc. was excellent, you finally buried everything under the lava of an exploded volcano: a huge generalization. You used observations about a sample to come to a conclusion about the source it came from. If a monkey can drink water from a glass like I do, can shoot a ball like I do, can wear a hat like I do, etc., it means that the monkey's behavior in general can be considered similar to mine.
  • Real Gone Cat
    346
    Perhaps. We'd have to meet the monkey.

    To avoid solipsism, we all assume that other humans share the trait we call consciousness - and we deny this trait to non-human entities (sorry panpsychists). But, when feet are held to the fire, it's nearly impossible to explain why. Clearly neither appearance nor behavior is adequate to recognize consciousness. (All we ever have is a sample of observations. The next time you speak to the entity you call "Mother", her head might spin around and springs and wires pop out from her ear!)

    I mean this seriously : What will we do if we someday meet a space-faring race of lizard men? Assume they're not conscious just because they're not human beings?

    You're probably right that linear computer systems will never be conscious. But what about massively parallel, mutable computing systems with multiple feedback loops? (I.e., don't go bad-mouthing the internet when online!)

    Stevan Harnad points out that Turing wished to change the question from "Can machines think?" to "Can machines do what we (as thinking entities) can do?".
  • Alkis Piskas
    2.1k
    we all assume that other humans share the trait we call consciousness - and we deny this trait to non-human entities (sorry panpsychists).
    Real Gone Cat
    Not all. At least not me. :smile: Consciousness exists in all forms of life, however elementary it is. Every living organism, even bacteria, receives stimuli from the environment and reacts to them. Our consciousness is simply more complex.

    it's nearly impossible to explain why
    Real Gone Cat
    Unfortunately, yes.

    What will we do if we someday meet a space-faring race of lizard men?
    Real Gone Cat
    I'll leave this to science fiction. Regarding philosophy, I believe this is an unfounded and/or useless hypothesis. (As well as the assumption that follows.)

    But what about massively parallel, mutable computing systems with multiple feedback loops?
    Real Gone Cat
    What's the difference between one and many, once the possibility of consciousness in any computer system is rejected? (To be precise, it's not about "computer systems" but "computer programming".)

    Stevan Harnad points out that Turing wished to change the question from "Can machines think?" to "Can machines do what we (as thinking entities) can do?".
    Real Gone Cat
    Better that he finally didn't! :grin:
    (Machines thinking --at least in a creative and autonomous way, not a mechanical one-- is already out of the question.)
  • Agent Smith
    9.5k
    Subjective Identity is not the same as Objective Identity. Hence, in my humble opinion, sancta trinitas unus deus. :cool:
  • Real Gone Cat
    346
    So life is conscious, all else is not. Based on what evidence? Plants? Amoebas? Bacteria? What besides personal prejudice proves consciousness?

    Machines require sustenance (fuel). Machines can be designed to move towards light. Etc.
  • Pantagruel
    3.4k
    So life is conscious, all else is not. Based on what evidence? Plants? Amoebas? Bacteria? What besides personal prejudice proves consciousness?

    Machines require sustenance (fuel). Machines can be designed to move towards light. Etc.
    Real Gone Cat

    Consciousness is a feature of a system that exists in a cybernetic feedback loop with its environment. This entails, among other things, self-direction and self-maintenance, minimally. When a robot can tell me not to turn it off and can wander off in search of tastier power sources, then we'll talk.

    FWIW I have long believed that experiments should be conducted in this light, building machines capable of self-direction, self-modification, and self-replication. If there is any possibility of AI being realized, this is it. Of course, it is risky, for all the obvious reasons. Plus, if consciousness were to emerge as a result of our having 'set the stage,' so to speak, it isn't clear that we would have really created AI, but only facilitated it. In which case we wouldn't be any closer to actually solving the riddle of consciousness.
  • universeness
    6.3k
    How do you 'fit' a human concept or aspect of sentience/consciousness, such as irrationality, into something like a Turing test? One aspect of a Turing test is that you can use irrationality as a method of testing the sentience of a system. For example, if I can communicate with the system, then I might suggest to the black box systems that they are inferior to the white box systems in the next room.
    What reaction would be required to such an input, under the Turing rules as you define them, that would support a declaration that your black box systems are sentient?
  • Real Gone Cat
    346
    Fair enough. Truth be told, I believe in a gradation of consciousness as well.
  • Alkis Piskas
    2.1k
    So life is conscious, all else is not. Based on what evidence? Plants? Amoebas? Bacteria? What besides personal prejudice proves consciousness?
    Real Gone Cat
    I have explained that briefly: "Every living organism, even bacteria, receives stimuli from the environment and reacts to them."
    You must not think of consciousness only in human terms. One of the definitions of "consciousness" from the Oxford Reading Dictionary is "The state of being aware of and responsive to one's surroundings." This also appeared in the former Oxford LEXICO, which was the official dictionary that Google was using. This exact definition appears also in a lot of other references. And, as you can understand, it applies even to bacteria and viruses, including our damned coronavirus. How could these viruses travel in the organism and expand if they didn't have an ability to perceive (be aware of) and respond to their surroundings?

    References on plant consciousness:

    "Plants possess a highly developed, conscious root brain that works much as ours does to analyze incoming data and generate sophisticated responses." (Plant Consciousness: The Fascinating Evidence Showing Plants Have Human Level Intelligence, Feelings, Pain and More - http://www.esalq.usp.br/lepse/imgs/conteudo_thumb/Plant-Consciousness---The-Fascinating-Evidence-Showing-Plants-Have-Human-Level-Intelligence--Feelings--Pain-and-More.pdf)

    "Plants are able to think by perceiving their environment and making decided changes in order to thrive." (Can Plants Think? - https://www.goldbio.com/articles/article/Can-Plants-Think)Etc.

    etc.

    About bacteria and consciousness:

    Bacteria and the cellular basis of consciousness - https://www.wellbeingintlstudiesrepository.org/cgi/viewcontent.cgi?article=1152&context=animsent

    Why microbes are smarter than you thought - https://www.newscientist.com/article/dn17390-why-microbes-are-smarter-than-you-thought/

    etc.

    So, in fact, the prejudice you are talking about is that there's only human consciousness! :smile:
  • Real Gone Cat
    346
    Yeah, I think there is a gradation in consciousness as well. Had too many intelligent dogs to think otherwise.

    Not so sure I'd include plants and bacteria though. Those articles you cite are interesting, but I don't believe very many biology departments are teaching about intelligent plants. At a minimum, I think you need a somewhat sophisticated nervous system. Birds and mammals, sure. But even insects and fish should give pause. Conditioned behavior does not require self-awareness or even other higher cognitive functions.
  • Alkis Piskas
    2.1k
    Not so sure I'd include plants and bacteria though.
    Real Gone Cat
    Before citing articles, I used simple logic. But it seems it was not enough ...
  • Real Gone Cat
    346
    When you want to claim consciousness for plants and bacteria, I think you place yourself in a tiny minority. To repeat: conditioned behavior does not require self-awareness or high cognitive functions.

    Simple machines react to stimuli from their environment. Consider the thermostat.
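
    Its entire behavioral repertoire fits in a couple of lines of Python (a minimal sketch; the setpoint and names are made up):

        def thermostat(temperature_reading, setpoint=20.0):
            # pure stimulus-response: a fixed, predictable reaction to the
            # input signal, with no perception or decision-making involved
            return "heat on" if temperature_reading < setpoint else "heat off"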
  • Alkis Piskas
    2.1k
    conditioned behavior does not require self-awareness or high cognitive functions.
    Real Gone Cat
    I have never talked about "self-awareness" (I think it's the third time you bring this up) or "high cognitive functions". These are for humans only. I said specifically "Consciousness exists in all forms of life, however elementary it is." I also said "How could these viruses travel in the organism and expand if they didn't have an ability to perceive (be aware of) and respond to their surroundings?"

    Simple machines react to stimuli from their environment. Consider the thermostat.Real Gone Cat
    A machine is a mechanism, not an organism. Machines are dead objects, so they lack the drive to survive (instinct of life), which is a characteristic of all organisms.
    The instinct to survive (live) of organisms requires that they are furnished with senses so that they can perceive and respond to their surroundings.
    Sensors, on the other hand, are mechanical devices that receive and transmit signals. Like the brain. They don't perceive things. So, your thermostat receives temperature signals and transmits its own signals to indicate a change in temperature. An organism, on the other hand, has senses and thus it can perceive. Receiving signals is a reaction. Perceiving is an action. The two processes are totally different.
    But yes, highly developed machines, like humanoid robots, can simulate human behavior that looks real.
  • Agent Smith
    9.5k
    What are your views on reflex (action)?

    There's more to simple reflexes than meets the eye, si señor/señorita?
  • Alkis Piskas
    2.1k
    What are your views on reflex (action)?
    Agent Smith
    A reflex, in physiology, is an involuntary response/reaction to a stimulus. This is not debatable. But in general, a reflex is a reaction/response to a stimulus. It is usually used in reference to time.
    On a physical level, it means moving fast or slow in response to external stimuli, esp. a threat. E.g. boxers must have good reflexes, otherwise they can't defend themselves well, e.g. by instantly repelling a punch. This holds for a lot of other sports, and in general for anything that requires immediate confrontation of an opponent.
    However, one can extend the term "reflex" to a mental level to mean "responsiveness", which refers to thinking fast or slow in response to a problem, challenge, etc. E.g. we say about someone that they are too "slow" when they cannot immediately understand something that normally does not require much thought. Exams and tests are based on such "reflexes", i.e., responses, since one has a restricted amount of time to answer questions.

    This is what I can think of right now.

    There's more to simple reflexes than meets the eye, si señor/señorita?
    Agent Smith
    Like what?
  • Agent Smith
    9.5k
    Well, as crazy as this sounds, we could short-circuit the brain; you know, bypass your brain and use your spinal cord! :chin:
  • Agent Smith
    9.5k
    Come again?
    Alkis Piskas

    :zip:

  • Real Gone Cat
    346
    Um, a bit muddled.

    So a mechanism has sensors, and an organism has senses. And an organism's senses are different from sensors - senses do not transmit signals (or they would be sensors). Is that right? Then what do senses do? Do the senses initiate action?

    To be kind, your definition of consciousness is ... unusual. As I said before, it puts you in a tiny minority.

    A bacterium's reaction to a stimulus seems purely mechanical. The reaction is entirely predictable (within the parameters of the given situation). And no decision-making seems to be taking place. I fail to see how any of its responses to stimuli could not be modeled by a fairly simple machine or computer program.

    But then I reread this:
    Sensors, on the other hand, are mechanical devices that receive and transmit signals. Like the brain. They don't perceive things.

    Oh, now I get it. You think of consciousness as the ghost in the machine.
  • Richard B
    441
    Therefore if an AI passes the Turing test (over a very large number of conversations), it is likely that the AI can be considered to be a "brain-like system", and therefore conscious.
    tom111

    The problem with this conclusion starts before we even need to analyze these theorems. Specifically, how do we define the input to show a particular system is conscious? Will any series of questions be OK? If not, then how is this going to be defined? We don't go around asking folk, "by the way, are you conscious?" What would one say? Wait, let me ask myself a series of questions that so-called experts put together to make this determination.

    I bet we can construct a series of “inputs” where we would exclude not only AI as conscious but also a good portion of humanity as well. Alternately, we could construct a series of “inputs” to encompass all of humanity, and most systems, like calculators, as conscious. This does not demonstrate a need for further refinement, but rather that we should just give up this sorry attempt at defining consciousness as a series of “inputs” and “outputs”.