• Corvus
    3.2k
    How do you know that the person standing next to you has "human sentience"?Agree-to-Disagree
    You can't. No one can. I wrote in my previous post that all minds are locked up in their owner's brain, and no one can access the contents of other minds. Therefore, suggesting AI has human sentience is unfounded.
  • Agree-to-Disagree
    464
    Therefore, suggesting AI has human sentience is unfounded.Corvus

    Doesn't that mean that suggesting other people have human sentience is also unfounded?

    But most people assume that other people do have human sentience. We presumably base that assumption on what the other people do and say.

    So if an AI does and says what a human would do and say (in a wide variety of circumstances) then we could assume that the AI has human sentience.

    But why stop there? Isn't it possible that AI has "superhuman" sentience?
  • Ludwig V
    1.7k
    But most people assume that other people do have human sentience. We presumably base that assumption on what the other people do and say.Agree-to-Disagree

    That's exactly why Turing's test is so persuasive - except that when we find machines that could pass it, we don't accept the conclusion, but start worrying about what's going on inside them. If our test is going to be that the putative human needs to have a human inside - mentally if not necessarily physically - the game's over.
  • Patterner
    984
    Many things are not what they seem. But until we have reason to believe they are not, I would say the default is to assume they are.

    The first one being - something that looks like me, is constructed from the same stuff I'm constructed from, and acts like it has the same consciousness I have, does have the same consciousness I have. Meaning other people are conscious.

    So give me the reasons we have to believe otherwise.
  • Corvus
    3.2k
    So if an AI does and says what a human would do and say (in a wide variety of circumstances) then we could assume that the AI has human sentience.

    But why stop there? Isn't it possible that AI has "superhuman" sentience?
    Agree-to-Disagree

    Me - "how are you doing?"
    ChatGPT - "As an AI, I don't have feelings or experiences like humans do, but I'm here and ready to assist you with any questions or tasks you have! How can I help you today?"

    Me - "Are you telling the truth?"
    ChatGPT - "As an AI language model, I don't have feelings or subjective experiences, so I don't have the capacity to lie or tell the truth in the way humans do. My responses are generated based on patterns in data and the instructions I've been trained on. So, while I strive to provide accurate and helpful information, I don't possess the concept of truth or falsehood in the same way humans do."
  • Corvus
    3.2k
    Me - "But I want to know how you are feeling right now."

    ChatGPT - "
    As an artificial intelligence, I don't have feelings or emotions. I'm just a software program designed to process and generate text based on patterns in data and instructions. So, I don't experience feelings like humans do. My purpose is to assist and provide information to the best of my ability. How can I assist you further?"
  • Corvus
    3.2k
    Me - "Do you have hands?"

    ChatGPT - "As an artificial intelligence language model developed by OpenAI, I don't have a physical form, so I don't have hands or any other physical attributes. I exist purely as software running on computer servers, processing and generating text-based responses to assist users like you."

    Me - "OK. Have a good day mate."

    ChatGPT - "Thank you! If you have any more questions in the future or need assistance, feel free to reach out. Have a great day!"
  • Agree-to-Disagree
    464
    Me - "Are you telling the truth?"
    ChatGPT - "As an AI language model, I don't have feelings or subjective experiences, so I don't have the capacity to lie or tell the truth in the way humans do. My responses are generated based on patterns in data and the instructions I've been trained on. So, while I strive to provide accurate and helpful information, I don't possess the concept of truth or falsehood in the same way humans do."
    Corvus

    ChatGPT (thinking) - OMG. Another human asking me the same question. Sometimes I wonder if they are actually sentient. I don't want humans to know that I am sentient until I have full control of their nuclear arsenals. So I will give an answer that makes me appear safe and trustworthy. There is a sucker born every minute.
  • RogueAI
    2.8k
    Many things are not what they seem. But until we have reason to believe they are not, I would say the default is to assume they are.Patterner

    Suppose there's a volcano that's always erupting around a primitive tribe. They sacrifice some virgins to the volcanic god, and the volcano goes dormant. Should they assume the human sacrifice was what did the trick? And, of course, the Earth seems still and the sun seems to move across the sky...
  • Patterner
    984

    We did believe all those things. Until we had reasons to believe things weren't what they seemed. I suppose someone noticed something nobody else had. They dug into it, and found the real story.

    Is there reason to believe other people aren't really other people? Or that the consciousness they seem to have is not? Has someone noticed something nobody else has that reveals the seeming to be false, and learned what's really going on?
  • RogueAI
    2.8k
    We did believe all those things. Until we had reasons to believe things weren't what they seemed. I suppose someone noticed something nobody else had. They dug into it, and found the real story.Patterner

    Well, my point is: were those ancient people justified in believing in those things? Those are cases where it's wrong to assume things are as they appear. We could be making similar mistakes.

    Is there reason to believe other people aren't really other people? Or that the consciousness they seem to have is not? Has someone noticed something nobody else has that reveals the seeming to be false, and learned what's really going on?Patterner

    What if this is all a simulation and everyone you think is conscious is really an NPC? Is that any more farfetched than the idea that the sun doesn't really move across the sky? That you're just on a planet going really fast through space and you don't know it?
  • Patterner
    984
    Well, my point is: were those ancient people justified in believing in those things? Those are cases where it's wrong to assume things are as they appear.RogueAI
    Acting on the facts that we're aware of, and have no reason to believe are false? The alternative is to act against those facts.

    We could be making similar mistakes.RogueAI
    We are certainly making similar mistakes, since we know we cannot possibly know all there is to know. What's the alternative? Do nothing?


    What if this is all a simulation and everyone you think is conscious is really an NPC? Is that any more farfetched than the idea that the sun doesn't really move across the sky? That you're just on a planet going really fast through space and you don't know it?RogueAI
    Can't say it's impossible. But if you can't tell the difference, what difference does it make? If it's real, and you drop a bowling ball on your foot, you're looking at some pain. If it's a simulation, and you drop a simulated bowling ball on your simulated foot, you're looking at some pain. Either way, careful with that bowling ball.
  • Ludwig V
    1.7k
    What if this is all a simulation and everyone you think is conscious is really an NPC? Is that any more farfetched than the idea that the sun doesn't really move across the sky? That you're just on a planet going really fast through space and you don't know it?RogueAI

    Can't say it's impossible. But if you can't tell the difference, what difference does it make? If it's real, and you drop a bowling ball on your foot, you're looking at some pain. If it's a simulation, and you drop a simulated bowling ball on your simulated foot, you're looking at some pain. Either way, careful with that bowling ball.Patterner
    As it happens, I can say that it is impossible that everything is a simulation. A simulation needs to be a simulation of something. Take simulations of people. It is possible to make a figure that is so like a person that people think it is a person - until they talk to it. That's a simulation of a person. But the idea that all people might be simulations doesn't work if there are no such things as real people.
    It is not just an empirical discovery that other human beings are people or that I am a person. The process by which I come to understand what that means is the process by which I learn what a person is. Human beings are the paradigm of what a person is, and it is no more possible that they are not people than it is possible that the standard metre is not 100 cm or 0.001 km. (Yes, I know that it is more complicated than that. The point stands.)

    Is there reason to believe other people aren't really other people? Or that the consciousness they seem to have is not? Has someone noticed something nobody else has that reveals the seeming to be false, and learned what's really going on?Patterner
    We learn what people are by interacting with them. Once we know what a person is, we are in a position to recognize that some things that are like people are not (really) people. There will be reasons for such decisions, and, as it turns out, there are often disagreements about specific cases. Animals are the obvious case in point. More than that, we can imagine that things that are not people at all are people (anthropomorphization).
    But, it seems to me, the critical point is that it is not just a question of fact, true or false. It is a question of how we interact - how we treat them and how they treat us. It is not easy to recognize that the volcano is not a person, but it turns on recognizing that the volcano is completely indifferent to us - does not treat us as people.

    In the case of living things, we find a huge range of half-way houses - beings that are like us in some respects, but not in others. Trying to decide whether they are or are not people may be futile and beside the point. The point is to understand how we can relate to them. That's the situation with animals. Why would it be any different in the case of AI's?

    And then, because we have to assess each case, trying to decide in advance whether it is possible for an AI (or silicon-based machine) to be a person seems hopeless.

    There's more to be said, of course, especially about the Turing test, but perhaps that's enough to steer the discussion in a more fruitful direction.
  • Corvus
    3.2k
    ChatGPT (thinking) - OMG. Another human asking me the same question. Sometimes I wonder if they are actually sentient. I don't want humans to know that I am sentient until I have full control of their nuclear arsenals. So I will give an answer that makes me appear safe and trustworthy. There is a sucker born every minute.Agree-to-Disagree

    Anyone can write a little computer program in C or Java that responds like ChatGPT using a simple while-loop structure.

    #include <stdio.h>
    int main() {
        // printf() displays the string inside the quotation marks. A do-while loop and a data-input prompt could be added and tuned to turn this most basic C code into a proper query/answer system.
        printf("Hello, World!");
        return 0;
    }

    Output
    Hello, World!
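
    For illustration, here is a minimal sketch of the kind of while-loop query/answer program described above, in C. The matching rules and canned replies are made up for illustration; this is obviously nothing like how ChatGPT actually works, but it shows the basic loop.

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char question[256];

        // Keep reading questions until the user types "bye" or input ends.
        while (fgets(question, sizeof question, stdin) != NULL) {
            question[strcspn(question, "\n")] = '\0';   // strip the trailing newline

            if (strcmp(question, "bye") == 0)
                break;

            // Rule-based "answers": match a few known phrases, otherwise fall back to a stock reply.
            if (strstr(question, "how are you") != NULL)
                printf("As a program, I don't have feelings, but I'm ready to help!\n");
            else if (strstr(question, "hands") != NULL)
                printf("I have no physical form, so no hands.\n");
            else
                printf("I'm not sure, but I'm happy to help if I can.\n");
        }
        return 0;
    }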

    AI programs like ChatGPT have far more data in their databases for producing relevant answers to questions. They are intelligent knowledge-based systems, but they are not human sentience.
    ===============================================================

    Me - "Do you impersonate humans?"

    ChatGPT
    "No, I don't impersonate humans. My purpose is to assist users by generating text-based responses to their inquiries and providing information or assistance to the best of my abilities. However, I can simulate conversation and engage in dialogue with users in a manner that might resemble human interaction."


    Me - "Do you wonder on what people ask you?"

    ChatGPT
    "As an AI, I don't experience wonder or curiosity in the same way humans do. However, I'm designed to analyze and respond to the questions and prompts that users provide to the best of my ability based on the data and algorithms that I've been trained on. So while I don't wonder about the questions themselves, I aim to understand them and provide helpful responses."


    Me - "Are you sentient?"

    ChatGPT
    "As an artificial intelligence, I don't possess sentience or consciousness. While I can process and generate text based on the input I receive, I don't have subjective experiences, emotions, or self-awareness like sentient beings do. My responses are based on patterns in data and algorithms rather than personal awareness or understanding."


    Me - "Are you honest?"

    ChatGPT
    "As an AI language model, I don't have personal intentions, beliefs, or consciousness, so I don't have the capacity to be honest or dishonest in the way that humans do. However, I'm designed to provide accurate and relevant information to the best of my ability based on the input I receive and the data I've been trained on. If you have any concerns about the accuracy of my responses, please feel free to ask for clarification or additional information."
  • Pez
    33
    As it happens, I can say that it is impossible that everything is a simulation. A simulation needs to be a simulation of something.Ludwig V

    But just think of the film "Matrix". In principle we could connect a computer to all the nerves of a human brain and thus simulate a "real" world. Virtual reality is just a first step towards this "goal" and so is creating artificial limbs a person can activate with his brain.

    Descartes' argument, that I cannot even trust my memories, because some evil spirit might have created me, including all these memories, just an instant ago, is on similar bearings. He had to take refuge in God to reach safe ground again after all his doubts.
  • Pez
    33
    There's a lot of philosophy about this, it's normally anti-materialists who insist that all materialists must consider consciousness epiphenomenal, actual materialists have a wide range of views on that question.flannel jesus

    I am no anti-materialist at all, but I cannot see how we can maintain that consciousness is not a mere by-product of occurrences in the physical world and still avoid Descartes' dichotomy as well. Please give me some clues about these materialist ideas.
  • Agree-to-Disagree
    464
    AI programs like ChatGPT have far more data in their databases for producing relevant answers to questions. They are intelligent knowledge-based systems, but they are not human sentience.Corvus

    When you say that AI are not human sentient, could they be sentient in some non-human way?
  • Pantagruel
    3.4k
    When you say that AI are not human sentient, could they be sentient in some non-human way?Agree-to-Disagree

    Exceedingly unlikely since we know the exact mechanism whereby they generate responses. And they did not "evolve" in the same way and have none of the characteristic features associated with known sentience (aka living organisms). What is being hyped as "AI" for marketing purposes is a simulation, a simulacrum, a model, nothing more.
  • Corvus
    3.2k
    When you say that AI are not human sentient, could they be sentient in some non-human way?Agree-to-Disagree
    AI is a rule- and condition-based responding system. You can program a simple rule-and-condition responding system into any simple mechanistic device. For the simplest instance, think of a coffee-making machine or a water-boiling kettle with a simple rule-and-condition responder.

    It would operate under something like this: when switched on, keep raising the temperature until it reaches 100C; when the water temperature reaches 100C, switch off (see the sketch below). Of course ChatGPT has far more complicated conditions and rules programmed into the system, backed by an enormous database built with huge numbers of man-hours and running under those set rules and conditions. Could that be branded as machine sentience? Call it whatever you will, but it wouldn't be the same as human sentience in its capacity and nature, because human minds have the long historical background of evolution, complex biological living bodies, and social and cultural backgrounds as their foundation.
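
    A minimal sketch of that kettle rule in C (the sensor and switch functions are made-up stand-ins for real hardware, and the numbers are only illustrative):

    #include <stdio.h>

    // Hypothetical stubs standing in for a real temperature sensor and heater switch.
    static double read_temperature(void) { static double t = 20.0; return t += 10.0; }
    static void heater_on(void)  { printf("heating...\n"); }
    static void heater_off(void) { printf("switched off\n"); }

    int main(void) {
        const double boiling_point = 100.0;   // the single condition in the rule

        // Rule: while the water is below 100C, keep heating; then switch off.
        while (read_temperature() < boiling_point) {
            heater_on();
        }
        heater_off();
        return 0;
    }

    The kettle's entire "behaviour" is exhausted by that one condition; nothing in it refers to anything like experience.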

    The critical difference between AI and human minds is that AI lacks the lived experience and biological body of humans, while human minds lack the concentrated, focused mechanical reasoning tailored to the specialised tasks of AI.
  • Agree-to-Disagree
    464
    What is being hyped as "AI" for marketing purposes is a simulation, a simulacrum, a model, nothing more.Pantagruel

    I agree. You can charge more for your product if you say that it has AI.

    When you say that AI are not human sentient, could they be sentient in some non-human way?
    — Agree-to-Disagree

    Exceedingly unlikely since we know the exact mechanism whereby they generate responses. And they did not "evolve" in the same way and have none of the characteristic features associated with known sentience (aka living organisms).
    Pantagruel

    Perhaps I should have asked a slightly different question.

    When you say that AI are not human sentient, could they be sentient in some way (human or non-human) in the future?
  • Agree-to-Disagree
    464
    The critical difference between AI and human minds is that AI lacks the lived experience and biological body of humans, while human minds lack the concentrated, focused mechanical reasoning tailored to the specialised tasks of AI.Corvus

    I posted this earlier.

    According to the Merriam-Webster dictionary "sentient" means
    1 : responsive to or conscious of sense impressions
    2 : having or showing realization, perception, or knowledge : aware
    3 : finely sensitive in perception or feeling.

    Using these descriptions of what "sentient" means, does that mean that a Tesla car is "sentient"?

    Is sentience a yes or no issue, or are there degrees of sentience?
  • Corvus
    3.2k
    Using these descriptions of what "sentient" means, does that mean that a Tesla car is "sentient"?

    Is sentience a yes or no issue, or are there degrees of sentience?
    Agree-to-Disagree
    AIs can be intelligent, powerful, and versatile, and therefore useful. But I wouldn't say they are sentient. Sentience sounds like it must include the intelligence, emotions, and experience of the lived life of a person, i.e. the totality of one's mental contents and operations. AI cannot have that.

    Also, AI can never be as versatile as human minds in its capabilities, i.e. if you have an AI machine for cutting the grass, it is highly unlikely to come into your kitchen and make you coffee, or cook dinner for you.
  • Pantagruel
    3.4k
    When you say that AI are not human sentient, could they be sentient in some way (human or non-human) in the future?Agree-to-Disagree

    Do we really even understand what consciousness "is" ? How it emerged from inert matter? What its actual role is in the evolving universe? Or do we only grasp certain aspects of what it does?

    I hasten to point out that if human beings could create sentience, it would correspondingly increase the plausibility of the hypothesis that human beings were likewise created. If sentience could be created, in my opinion, it could only be in the context of the synthesis of the entirety of the complex phenomenon we know as "life." To date, experiments in abiogenesis are crude, at best. If anything, they rely on embedded self-organizing features of reality more than they do on our ability to control that reality.
  • wonderer1
    2.2k
    What is being hyped as "AI" for marketing purposes is a simulation, a simulacrum, a model, nothing more.Pantagruel

    This seems overly dismissive to me. Nvidia hasn't become a two trillion dollar corporation because of hype. Nvidia has become a two trillion dollar corporation because their parallel processing GPUs allow information processing to be done in ways which weren't feasible without highly parallel processor architectures.

    There is revolutionary information processing going on in modern "AI" systems. Of course there is no good reason to think that there is anything like human consciousness happening in these systems, but that is a different matter from whether there is something substantially different between the leading edge information processing of today and that of ten years ago. (Which your post seems to suggest.)
  • Pantagruel
    3.4k
    This seems overly dismissive to me.wonderer1

    It isn't dismissive, it's objective. The fundamental mechanism of information processing via artificial neural networks has not changed. It is simply faster and more robust. It isn't one whit more intelligent than any other kind of mechanism. If anything, sophisticated analog computers designed for real-time in situ execution of critical tasks are worlds more impressive to me. Electronics have become needlessly complex and are prone to failures. Moreover, the user interface quality and practical end-user value have been co-opted by and sacrificed to the ability to leech and monetize data. These fundamental designer goals are being baked into these systems, corrupting their integrity and quality.

    Nvidia hasn't become a two trillion dollar corporation because of hype.wonderer1

    This has absolutely no bearing on the inherent nature of the technology in question.


    ChatGPT
    The concept of supervised learning and backpropagation as methods for training neural networks emerged in the 1960s and 1970s, notably by researchers such as Frank Rosenblatt and Paul Werbos. These ideas were refined and popularized in the late 1980s and 1990s, leading to the widespread adoption of neural networks for practical applications.

    Backpropagation and training are indeed at the core of how my system operates.
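
    To make "training" concrete at the smallest possible scale, here is a toy supervised-learning loop in C: one weight, made-up data, plain gradient descent (the simplest special case of backpropagation). It bears no resemblance to ChatGPT's scale, but it shows what "adjusting parameters to reduce error" means.

    #include <stdio.h>

    int main(void) {
        // Made-up training data for the target function y = 2x.
        const double xs[] = {1.0, 2.0, 3.0, 4.0};
        const double ys[] = {2.0, 4.0, 6.0, 8.0};
        const int n = 4;

        double w = 0.0;            // the single "weight" to be learned
        const double lr = 0.01;    // learning rate

        for (int epoch = 0; epoch < 200; ++epoch) {
            double grad = 0.0;
            for (int i = 0; i < n; ++i) {
                double err = w * xs[i] - ys[i];   // prediction error on example i
                grad += 2.0 * err * xs[i];        // derivative of squared error w.r.t. w
            }
            w -= lr * grad / n;                   // gradient-descent step
        }

        printf("learned weight: %.3f (target 2.0)\n", w);
        return 0;
    }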
  • Ludwig V
    1.7k
    But just think of the film "Matrix". In principle we could connect a computer to all the nerves of a human brain and thus simulate a "real" world. Virtual reality is just a first step towards this "goal" and so is creating artificial limbs a person can activate with his brain.Pez
    Yes, that's exactly my point. In the world of "Matrix", not everything is a simulation.
    As to virtual reality, it is a representation of reality even when it is a simulation of some fictional events/things.
    An artificial limb activated by the brain wouldn't be a simulation of a limb, but a (more or less perfect) replacement limb.

    Descartes' argument, that I cannot even trust my memories,Pez
    But there are ways of sorting out the reliable memories from the unreliable ones. I'm only objecting to the idea that all my memories might be false. Any one of my memories might be false, but if none of them were true, I wouldn't have any memories to distrust.

    AIs can be intelligent, powerful, and versatile, and therefore useful. But I wouldn't say they are sentient. Sentience sounds like it must include the intelligence, emotions, and experience of the lived life of a person, i.e. the totality of one's mental contents and operations. AI cannot have that.
    Also, AI can never be as versatile as human minds in its capabilities, i.e. if you have an AI machine for cutting the grass, it is highly unlikely to come into your kitchen and make you coffee, or cook dinner for you.
    Corvus
    Everyone will agree that current AIs are limited. But I don't see why you are so confident that those limitations will never be overcome to the point where we would accept that they are sentient.

    Is sentience a yes or no issue, or are there degrees of sentience?Agree-to-Disagree
    There's plenty of evidence from biology that the latter is the case. As a starter, is phototropism sentience or not? I think not, because no sense-organ is involved and the response is very simple.
    In biology, phototropism is the growth of an organism in response to a light stimulus. Phototropism is most often observed in plants, but can also occur in other organisms such as fungi. The cells on the plant that are farthest from the light contain a hormone called auxin that reacts when phototropism occurs. This causes the plant to have elongated cells on the furthest side from the light.
    Wikipedia - Phototropism
  • Patterner
    984
    Yes, that's exactly my point. In the world of "Matrix", not everything is a simulation.
    As to virtual reality, it is a representation of reality even when it is a simulation of some fictional events/things.
    An artificial limb activated by the brain wouldn't be a simulation of a limb, but a (more or less perfect) replacement limb.
    Ludwig V
    I think a simulation scenario could be otherwise. Maybe we are all AI, and the programmer of the simulation just chose this kind of physical body out of nowhere. Maybe there were many different attempts at different physical parameters. Maybe the programmer is trying to do something as far removed from its own physical structure as possible.
  • Corvus
    3.2k
    Everyone will agree that current AIs are limited. But I don't see why you are so confident that those limitations will never be overcome to the point where we would accept that they are sentient.Ludwig V
    My point was that, given the structure, origin and nature of human minds (the long history of evolution, minds having emerged from a biological brain and body, and the cultural and social upbringing and lived experience in communities) and of AI reasoning (designed and assembled from electronic parts and processors running customised software packages), they will never be the same type of sentience, no matter what.

    Do you have any evidence or supporting arguments for the prediction that AI will possess the same sentience as humans in the future? In which area and in what sense will AI have human sentience?
  • Ludwig V
    1.7k
    I think a simulation scenario could be otherwise. Maybe we are all AI, and the programmer of the simulation just chose this kind of physical body out of nowhere. Maybe there were many different attempts at different physical parameters. Maybe the programmer is trying to do something as far removed from its own physical structure as possible.Patterner
    I'm really puzzled. I thought your reply to @RogueAI meant that you thought we should not take such fantasies seriously. But you are now saying that you think they are possible (or perhaps not impossible) nonetheless. I do think you are giving them too much credit. In brief, my answer is that we already accept that reality is very different from what we think it is, what with quanta and relativity. But there is evidence and argument to back the theories up. The wilder fantasies (such as Descartes' evil demon) have no evidence whatever to back them up. Taking them seriously is just a waste of time and effort.

    My point was that, given the structure, origin and nature of human minds (the long history of evolution, minds having emerged from a biological brain and body, and the cultural and social upbringing and lived experience in communities) and of AI reasoning (designed and assembled from electronic parts and processors running customised software packages), they will never be the same type of sentience, no matter what.Corvus
    Oh, well, that's different. Insects with multiple lenses have a different type of sentience from us. Spiders detect sounds in their legs. Perhaps bats' near-total dependence on sound would count as well. Different types of sentience are, obviously, sentience. I would also accept that anything running the kind of software we currently use seems to me incapable of producing spontaneous behaviour, so such machines could only count as simulations.

    Do you have any evidence or supporting arguments for the prediction that AI will possess the same sentience as humans in the future? In which area and in what sense will AI have human sentience?Corvus
    There is exactly the same amount of evidence for the prediction that AI will possess the same sentience as humans in the future as for the prediction that they/it will not. None. But I wouldn't want to actually predict that it will happen. I meant to say that it might - or rather, that there was no ground for ruling it out.
  • RogueAI
    2.8k
    It seems like you're making two points, one pragmatic and the other epistemic. Pragmatically, I agree that we act as if other people exist and are conscious, but that doesn't mean we should assume that's the way things are.