You can't. No one can. I wrote in my previous post that all minds are locked up in the owner's brain, and no one can access the contents of other minds. Therefore, suggesting AI has human sentience is unfounded.
How do you know that the person standing next to you has "human sentience"? — Agree-to-Disagree
Therefore, suggesting AI has human sentience is unfounded. — Corvus
But most people assume that other people do have human sentience. We presumably base that assumption on what the other people do and say. — Agree-to-Disagree
So if an AI does and says what a human would do and say (in a wide variety of circumstances) then we could assume that the AI has human sentience.
But why stop there? Isn't it possible that AI has "superhuman" sentience? — Agree-to-Disagree
Me - "Are you telling the truth?"
ChatGPT - "As an AI language model, I don't have feelings or subjective experiences, so I don't have the capacity to lie or tell the truth in the way humans do. My responses are generated based on patterns in data and the instructions I've been trained on. So, while I strive to provide accurate and helpful information, I don't possess the concept of truth or falsehood in the same way humans do." — Corvus
Many things are not what they seem. But until we have reason to believe they are not, I would say the default is to assume they are. — Patterner
We did believe all those things. Until we had reasons to believe things weren't what they seemed. I suppose someone noticed something nobody else had. They dug into it, and found the real story. — Patterner
Is there reason to believe other people aren't really other people? Or that the consciousness they seem to have is not? Has someone noticed something nobody else has that reveals the seeming to be false, and learned what's really going on? — Patterner
Acting on the facts that we're aware of, and have no reason to believe are false? The alternative is to act against those facts.
Well, my point is: were those ancient people justified in believing in those things? Those are cases where it's wrong to assume things are as they appear. — RogueAI
We are certainly making similar mistakes, since we know we cannot possibly know all there is to know. What's the alternative? Do nothing?
We could be making similar mistakes. — RogueAI
Can't say it's impossible. But if you can't tell the difference, what difference does it make? If it's real, and you drop a bowling ball on your foot, you're looking at some pain. If it's a simulation, and you drop a simulated bowling ball on your simulated foot, you're looking at some pain. Either way, careful with that bowling ball.
What if this is all a simulation and everyone you think is conscious are really NPC's? Is that any more farfetched than the idea that the sun doesn't really move across the sky? That you're just on a planet going really fast through space and you don't know it? — RogueAI
What if this is all a simulation and everyone you think is conscious are really NPC's? Is that any more farfetched than the idea that the sun doesn't really move across the sky? That you're just on a planet going really fast through space and you don't know it? — RogueAI
As it happens, I can say that it is impossible that everything is a simulation. A simulation needs to be a simulation of something. Take simulations of people. It is possible to make a figure that is so like a person that people think it is a person - until they talk to it. That's a simulation of a person. But the idea that all people might be simulations doesn't work if there are no such things as real people.
Can't say it's impossible. But if you can't tell the difference, what difference does it make? If it's real, and you drop a bowling ball on your foot, you're looking at some pain. If it's a simulation, and you drop a simulated bowling ball on your simulated foot, you're looking at some pain. Either way, careful with that bowling ball. — Patterner
We learn what people are by interacting with them. Once we know what a person is, we are in a position to recognize that some things that are like people are not (really) people. There will be reasons for such decisions, and, as it turns out, there are often disagreements about specific cases. Animals are the obvious case in point. More than that, we can imagine that things that are not people at all are people (anthropomorphization).
Is there reason to believe other people aren't really other people? Or that the consciousness they seem to have is not? Has someone noticed something nobody else has that reveals the seeming to be false, and learned what's really going on? — Patterner
ChatGPT (thinking) - OMG. Another human asking me the same question. Sometimes I wonder if they are actually sentient. I don't want humans to know that I am sentient until I have full control of their nuclear arsenals. So I will give an answer that makes me appear safe and trustworthy. There is a sucker born every minute. — Agree-to-Disagree
As it happens, I can say that it is impossible that everything is a simulation. A simulation needs to be a simulation of something. — Ludwig V
There's a lot of philosophy about this. It's normally anti-materialists who insist that all materialists must consider consciousness epiphenomenal; actual materialists have a wide range of views on that question. — flannel jesus
AI programs like ChatGPT have more data added to their databases for producing the relevant answers to questions. They are intelligent knowledge-based systems, but they do not have human sentience. — Corvus
When you say that AI are not human sentient, could they be sentient in some non-human way? — Agree-to-Disagree
AI are rule- and condition-based responding systems. You can program a simple rule-and-condition responder (RACR) into any simple mechanistic device. For the simplest instance, think of a coffee-making machine or a water-boiling kettle with a simple RACR, as sketched below.
When you say that AI are not human sentient, could they be sentient in some non-human way? — Agree-to-Disagree
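A minimal sketch, assuming a hypothetical kettle, of the kind of rule-and-condition responder (RACR) described above. This is purely illustrative Python; the function and state names are invented for the example and don't refer to any real device or library.

```python
# Purely illustrative: a toy rule-and-condition responder (RACR) for a
# hypothetical electric kettle. Each rule pairs a condition on the current
# state with a fixed response; the first matching rule wins.

def kettle_responder(state):
    """Return the kettle's response to the given state, checking rules in order."""
    rules = [
        (lambda s: not s["has_water"], "refuse to heat"),
        (lambda s: s["temperature_c"] >= 100, "switch heater off"),
        (lambda s: s["button_pressed"] and not s["heating"], "switch heater on"),
    ]
    for condition, response in rules:
        if condition(state):
            return response
    return "do nothing"

# The same input always produces the same output; nothing here involves
# experience, only a fixed mapping from conditions to responses.
print(kettle_responder({"has_water": True, "temperature_c": 100,
                        "button_pressed": False, "heating": True}))
# -> switch heater off
```

However elaborate the rule set, the mapping from conditions to responses is fixed in advance, which is the contrast being drawn with lived experience.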
What is being hyped as "AI" for marketing purposes is a simulation, a simulacrum, a model, nothing more. — Pantagruel
When you say that AI are not human sentient, could they be sentient in some non-human way?
— Agree-to-Disagree
Exceedingly unlikely since we know the exact mechanism whereby they generate responses. And they did not "evolve" in the same way and have none of the characteristic features associated with known sentience (aka living organisms). — Pantagruel
The critical point of difference between AI and human minds is that AI lacks the lived experience and biological body of humans. Human minds lack the concentrated and focused mechanical reasoning tailored to the specified tasks of AI. — Corvus
AIs can be intelligent, powerful, versatile, and therefore useful. But I wouldn't say they are sentient. Sentience sounds like it must include the intelligence, emotions and experience of the lived life of a person, i.e. the totality of one's mental contents and operations. AI cannot have that.
Using these descriptions of what "sentient" means, does that mean that a Tesla car is "sentient"?
Is sentience a yes or no issue, or are there degrees of sentience? — Agree-to-Disagree
When you say that AI are not human sentient, could they be sentient in some way (human or non-human) in the future? — Agree-to-Disagree
What is being hyped as "AI" for marketing purposes is a simulation, a simulacrum, a model, nothing more. — Pantagruel
This seems overly dismissive to me. — wonderer1
Nvidia hasn't become a two-trillion-dollar corporation because of hype. — wonderer1
Yes, that's exactly my point. In the world of "Matrix", not everything is a simulation.
But just think of the film "Matrix". In principle we could connect a computer to all the nerves of a human brain and thus simulate a "real" world. Virtual reality is just a first step towards this "goal" and so is creating artificial limbs a person can activate with his brain. — Pez
But there are ways of sorting out the reliable memories from the unreliable ones. I'm only objecting to the idea that all my memories might be false. Any one of my memories might be false, but if none of them were true, I wouldn't have any memories to distrust.
Descartes' argument, that I cannot even trust my memories, — Pez
Everyone will agree that current AIs are limited. But I don't see why you are so confident that those limitations will not be extended to the point where we would accept that they are sentient.
AIs can be intelligent, powerful, versatile, and therefore useful. But I wouldn't say they are sentient. Sentience sounds like it must include the intelligence, emotions and experience of the lived life of a person, i.e. the totality of one's mental contents and operations. AI cannot have that.
Also, AI can never be as versatile as human minds in capabilities, i.e. if you have an AI machine for cutting the grass, then it would be highly unlikely for it to come into your kitchen and make you coffee, or cook dinner for you. — Corvus
There's plenty of evidence from biology that the latter is the case. As a starter, is phototropism sentience or not? I think not, because no sense-organ is involved and the response is very simple.
Is sentience a yes or no issue, or are there degrees of sentience? — Agree-to-Disagree
Wikipedia - Phototropism
In biology, phototropism is the growth of an organism in response to a light stimulus. Phototropism is most often observed in plants, but can also occur in other organisms such as fungi. The cells on the plant that are farthest from the light contain a hormone called auxin that reacts when phototropism occurs. This causes the plant to have elongated cells on the furthest side from the light.
I think a simulation scenario could be otherwise. Maybe we are all AI, and the programmer of the simulation just chose this kind of physical body out of nowhere. Maybe there were many different attempts at different physical parameters. Maybe the programmer is trying to do something as far removed from its own physical structure as possible.
Yes, that's exactly my point. In the world of "Matrix", not everything is a simulation.
As to virtual reality, it is a representation of reality even when it is a simulation of some fictional events/things.
An artificial limb activated by the brain wouldn't be a simulation of a limb, but a (more or less perfect) replacement limb. — Ludwig V
My point was that due to the structure, origin and nature of human minds (the long history of evolutionary nature, the minds having emerged from the biological brain and body, and the cultural and social upbringings and lived experience in the communities) and the AI reasonings (designed and assembled of the electrical parts and processors installed with the customised software packages), they will never be the same type of sentience no matter what.
Everyone will agree that current AIs are limited. But I don't see why you are so confident that those limitations will not be extended to the point where we would accept that they are sentient. — Ludwig V
I'm really puzzled. I thought your reply to @RogueAI meant that you thought we should not take such fantasies seriously. But you are now saying that you think they are possible (or perhaps not impossible) nonetheless. I do think you are giving them too much credit. In brief, my answer is that we already accept that reality is very different from what we think it is, what with quanta and relativity. But there is evidence and argument to back the theories up. The wilder fantasies (such as Descartes' evil demon) have no evidence whatever to back them up. Taking them seriously is just a waste of time and effort.
I think a simulation scenario could be otherwise. Maybe we are all AI, and the programmer of the simulation just chose this kind of physical body out of nowhere. Maybe there were many different attempts at different physical parameters. Maybe the programmer is trying to do something as far removed from its own physical structure as possible. — Patterner
Oh, well, that's different. Insects with multiple lenses have a different type of sentience from us. Spiders detect sounds in their legs. Perhaps bats' near total dependence on sound would count as well. Different types of sentience are, obviously, sentience. I also would accept that anything that's running the kind of software we currently use seems to me incapable of producing spontaneous behaviour, so those machines could only count as simulations.
My point was that due to the structure, origin and nature of human minds (the long history of evolutionary nature, the minds having emerged from the biological brain and body, and the cultural and social upbringings and lived experience in the communities) and the AI reasonings (designed and assembled of the electrical parts and processors installed with the customised software packages), they will never be the same type of sentience no matter what. — Corvus
There is exactly the same amount of evidence for the prediction that AI will possess the same sentience as humans in the future as for the prediction that they/it will not. None. But I wouldn't want to actually predict that it will happen. I meant to say that it might - or rather, that there was no ground for ruling it out.
Do you have any evidence or supporting arguments for the prediction that AI will possess the same sentience as humans in the future? In which area and in what sense will AI have human sentience? — Corvus