Yes, that's right. I agree also that persons, as we understand them, can only exist in an environment. Whether one includes that environment as part of the person or not is a tricky question and I don't know the answer. In our paradigm case (the only one that we actually know), a person is a human being, i.e. an animal. An animal is a physical body. (I'm setting aside the dualistic possibility of persons existing without a body.) Some physical structures are machines, and hence not animals, but I don't see why such structures cannot possibly constitute people.
Similarly, a person (and not a brain) is what is conscious. Not even that, because an environment is also needed. — noAxioms
So I think you are right to argue that some such process as this would be necessary to create a machine person. The catch is that I'm not at all sure that this would be a sim, rather than a real person - especially as the process of its creation would be very close to the process of creating human beings. I think this is the same point as here:-
It has to start somewhere, so the womb would be outside the system, an imitation womb, empirically (to the child) indistinguishable from a real mother, in every way. I suppose the placenta would be included in the system since it is, after all, the child and not the mother, but when it is severed, the sim needs to remember which half to keep as part of the system. — noAxioms
You said you would start the sim as a zygote. I am asking: what is the difference between this zygote and a zygote in reality? Or is the zygote you are postulating a mere simulation of a zygote? If so, that seems problematic. — NotAristotle
Well, perhaps I'm being provocative. My point is that when people act, they do so on the basis of values that they hold, that is, their emotions and desires. It may be a distortion to call them irrational, but standard ideas of logic and reason are well recognized (since Aristotle) to be incapable of generating actions on their own.
Irrationality is required for consciousness? A computer is rational? I question both. Deterministic is not rationality. I do agree that irrationality is a trait of any living creature, and a necessary one. — noAxioms
This may be a side-issue. I know that there is an issue about lucid dreaming. But I doubt whether the unsupported memory of a dreamer is sufficient to establish the phenomenon, except that I accept that the reports exist and I don't believe they are lies. But the possibility that the dreamer is dreaming the phenomenon cannot, it seems to me, be excluded.
Sometimes. One is often reft of rational thought while dreaming, but not always. I can tell sometimes, and react to knowing so. — noAxioms
I don't know what you mean by "a simulation of low level physics", but you clearly have a different concept of categories from mine.
To a simulation of low level physics, they pretty much are the exact same category, — noAxioms
A side-issue. If you call it a rationalization, you have already decided the argument is invalid or unsound. But knowing that someone had in mind a specific conclusion before formulating the argument does not, of itself, show that their argument is invalid or unsound.
That's (sc. Descartes' argument) a great example of rationalization. It was his target all along. — noAxioms
Another side-issue, but you are presupposing a dualistic concept of pain. On that concept, you are right. But whatever exactly may be the relevant conception of pain, I think your point survives, in the sense that whatever caused the pain would have to cause real pain and not zombie pain, just as the anger would have to be real anger, etc.
Would a simulation of agonising pain be actually painful? If it was, it can't really be a simulation, but as the primary attribute of pain is the feeling of pain, there's nothing else to simulate. — Wayfarer
If I am a brain in a vat, my claim is true, even if I can't refer to brain and vat, so long as "brain" and "vat" refer to the appropriate objects in that context. Perhaps I cannot know that my claim is true, but that's different. Actually, I don't really see why a brain in a vat cannot refer to itself as a brain in a vat.
If I am a BIV, I cannot make claims like "I am a brain in a vat" because I am making no reference to the "brain" and "vat". So, if I say that sentence, it is false. — L'éléphant
Descartes starts with all this skepticism, and builds up from this simple state that, lacking any knowledge of modern physics, leaves him with something he decides can be known with certainty. I'm fine with that, and I'm admittedly not very familiar with his work, but he goes from there to conclude, surprise, surprise, the exact mythological teachings of his own culture and not any of the other thousand choices of other cultures. That's a great example of rationalization. — noAxioms
But who will doubt that he lives, remembers, understands, wills, thinks, knows, and judges? For even if he doubts, he lives. If he doubts where his doubts come from, he remembers. If he doubts, he understands that he doubts. If he doubts, he wants to be certain. If he doubts, he thinks. If he doubts, he knows that he does not know. If he doubts, he judges that he ought not rashly to give assent. So whoever acquires a doubt from any source ought not to doubt any of these things whose non-existence would mean that he could not entertain doubt about anything." (Augustine, On the Trinity 10.10.14, quoted in Richard Sorabji, Self, 2006, p.219).
real pain and not zombie pain — Ludwig V
There's a contradiction here. People are animals. A machine is not an animal. But a machine can be people? That means a machine is an animal and not an animal.
a person is a human being, i.e. an animal. ... Some physical structures are machines, and hence not animals, but I don't see why such structures cannot possibly constitute people. — Ludwig V
I think you are again envisioning imitation people, like Replicants. That's a very different thing than the simulation hypothesis which does not involve machines pretending to be people.
But if they are to constitute people
No. The simulation is creating a biological person, not a machine person. Try to get that. Replicants are not grown from a zygote. A replicant can be trivially tested by an x-ray or just by sawing it in half, or so I suggest. Apparently in Blade Runner, it was very hard to tell the difference, but that's also a fiction.
So I think you are right to argue that some such process as this would be necessary to create a machine person.
That's right. Physics doesn't do spontaneous things (quantum mechanics excepted, which is a big problem if you want to simulate that). But classical physics isn't spontaneous, and yet spontaneity emerges from it, or at least the appearance of it. Anything in the simulation would have to behave just like that.
Calculating is widely recognized as a rational activity.
Yet again, no computer is pretending to be a person, so it isn't a problem.
To me, it makes no sense to deny that computers can calculate. The catch is that such rational activities are not sufficient to be recognized as a person.
Probably invalid in this case, and yes, I've decided that, but on weak grounds since I have never followed the argument from beginning to a preselected improbable conclusion.
If you call it a rationalization, you have already decided the argument is invalid or unsound.
If the simulation is any good at all, and presuming monism, then yes, it would be painful to the subject in question. No, the computer running the sim would not feel pain, nor would the people responsible for the creation of the simulation, despite suggestions from Kastrup that they apparently should.
Would a simulation of agonising pain be actually painful?
I didn't say that was the rationalization. I even accepted it since it was a reasonable statement in the absence of modern physics. It is him building on that foundation to his later conclusions that is the rationalization, which I clearly spelled out in my post.
The logic of cogito ergo sum is neither rationalisation nor myth, it is the indubitable fact — Wayfarer
I even accepted it since it was a reasonable statement in the absence of modern physics. — noAxioms
As for it being indubitable, well, I dubit it, as I do everything — noAxioms
all said states are states of the same thing — noAxioms
You do not understand what "refer" means, in other words.
If I am a brain in a vat, my claim is true, even if I can't refer to brain and vat, so long as "brain" and "vat" refer to the appropriate objects in that context. Perhaps I cannot know that my claim is true, but that's different. Actually, I don't really see why a brain in a vat cannot refer to itself as a brain in a vat. — Ludwig V
Then you misunderstand what "true" means in statements.
I don't follow that. If it says (without evidence) that it is a BiV, then the utterance is true if that is indeed the fact. — noAxioms
The analysis of Descartes' argument is a bit off-topic here, so I'll resist commenting.
The logic of cogito ergo sum is neither rationalisation nor myth, it is the indubitable fact that, in order to be subject to an illusion, there must be a subject. — Wayfarer
But I can't resist saying that I agree with you.
I have my doubts about Descartes, in that I believe his dualistic separation of the physical and mental as separate substances is profoundly problematical and has had hugely deleterious consequences for Western culture, but as for the essential veracity of his ‘cogito’ argument, I have no doubts. — Wayfarer
Yes. I did not put my point well. I was thinking of philosophical zombies, which would (if I've understood the idea correctly) not behave like zombies in the flicks.
I had the idea that zombies don’t feel pain, at least they never do in zombie flicks. You have to literally dismember or disintegrate them to overcome them, merely inflicting blows or wounds does nothing. — Wayfarer
There's a contradiction here. People are animals. A machine is not an animal. But a machine can be people? That means a machine is an animal and not an animal. — noAxioms
Are these two remarks compatible? My point is that there is no easy and clear way to state what the Turing hypothesis is trying to articulate.
I mean, deep down, you're a machine as well running under the same physics. I think you're confusing determinism with predictability. — noAxioms
Thank you for the clarification. I misunderstood what the thread was about. My apologies. It is clear now that I haven't understood what the simulation hypothesis is. However, when I checked the Wikipedia - Simulation hypothesis, I found:-
I think you are again envisioning imitation people, like Replicants. That's a very different thing than the simulation hypothesis which does not involve machines pretending to be people. — noAxioms
For me, a conscious being is a person and a simulated person is not a person, so this confuses me. Can you perhaps clarify?
Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct).
Well, since you have now used it, and I understand it (roughly, I think), it is a word now. Who knows, it may catch on and then you'll be awarded a place in the dictionaries of the future!
why isn't 'dubit' a word? It ought to be. — noAxioms
You seem to think I cannot refer to anything that I have not experienced. But the reference of a word is established in the language in general, not by what I may or may not have experienced. So I can refer to the President of the United States even if I don't know that Joe Biden is the President.
You do not understand what "refer" means, in other words. — L'éléphant
I agree with @noAxioms, except that I would add that it's not something it can justify on the basis of its subjective experience.
Then you misunderstand what "true" means in statements. — L'éléphant
It seems to me you cannot simulate with paper and pencil, because it is not an active medium. You can write about the game of basketball in all conceivable detail. You can write down every rule, and describe as many scenarios as you like, explaining how each rule applies at each moment. You can describe every required object, as well as the physical, mental, and emotional characteristics of every possible player. You can write all this down in every conceivable detail, but it would never be a basketball game.
There is no technology constraint on any pure simulation, so anything that can be done by computer can be done (far slower) by paper and pencil. That means that yes, even the paper and pencil method, done to sufficient detail, would simulate a conscious human who would not obviously know he is being simulated. — noAxioms
At risk of opening a can of worms, how does 'modern physics' come into it? — Wayfarer
I joined this and other forums to find out how the prominent philosophers (the ones you learn of in class) dealt with modern physics (narrowing the search to recent ones of course) and found that for the most part, they either didn't know their physics, or didn't care about it.
So I learned physics, or at least the parts of it relevant to the subjects I cared about.
Relativity threw significant doubt on Newtonian absolutism, where there was one preferred frame, time was posited to be something that flows or progresses, there was a preferred moment in time, and the universe was static, either of infinite age or somehow set in motion from some initial state at some point. Much of religious myth (especially the creation parts) requires the universe to be contained by time instead of the other way around, and this did not become apparent until about 110 years ago. The universe having a finite age is about a century old, and some religious teachings did at least bend with that one and put the creation event there.
Quantum mechanics really threw a spanner into the works with suggestions that ontology might work backwards (that existence depends on interaction with future things), and that the identity of anything (electrons, rocks, people) is not at all persistent, and thus I am not the same I as a second ago.
One can of course pick an interpretation consistent with your preferences and avoid the implications of the ones you don't like, but if doubt is to be eradicated, all the alternative interpretations contradicting the thing of which you are certain must be falsified.
And who knows what else might get discovered. Nobody saw QM coming, so all these people who held certain beliefs with certainty found themselves to be wrong or at least potentially wrong. So a declaration of 100% certainty is irrational. I mean, my certainty rests on the sum of two numbers (a pair of arbitrary real numbers, say) being exactly one other real number, always and anywhere. I don't significantly doubt that, but I still question it. What if it's only a property of this universe that such a sum comes to that one solution and not a different one elsewhere?
Indeed it isn't, but the assumption is implicit. It's too obvious to bother calling out explicitly, or at least it was obvious until ~50 years ago.
Persistence of self-identity over time is not discussed in Descartes
Your opinion. The opinion of others may vary.
Beings are not objects or things
I knew what you meant, even if Wayfarer chose to reply to what you said instead of what you meant.
I was thinking of philosophical zombies — Ludwig V
The Turing test (the closest a 'Turing hypothesis' gets is the Church-Turing thesis, concerning what is computable, and is oddly relevant below) is an intelligence test for when a machine's written behavior is indistinguishable from that of a human. The large language models are getting close, and the easy way to tell the difference is to not ask them questions with factual answers. They also are not designed to pass the Turing test, so all one has to do is ask it what it is.
My point is that there is no easy and clear way to state what the Turing hypothesis is trying to articulate.
A simulated person would be a person, just in a different universe (the simulated one). It's likely quite a small universe. You seem to define 'person' as a human in this universe, and no, the simulated person would not be that.
Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct).
For me, a conscious being is a person and a simulated person is not a person, so this confuses me. Can you perhaps clarify?
And it was already used in somebody else's reply.
why isn't 'dubit' a word? It ought to be. — noAxioms
Well, since you have now used it, and I understand it (roughly, I think), it is a word now.
Not sure what the term 'active medium' means. Googling it didn't help. I can implement a Turing machine armed with nothing but paper and pencil. Per the Church-Turing thesis mentioned by mistake above, that means I can do anything that is computable, including the running of the simulation.
It seems to me you cannot simulate with paper and pencil, because it is not an active medium. — Patterner
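The paper-and-pencil claim rests on exactly this point: every step of a Turing machine is a table lookup, a symbol written, and a head moved, all of which a patient person could do by hand. A minimal sketch in Python (my own toy example; the machine, its rule table, and the binary-increment task are illustrative assumptions, not anything proposed in the thread):

```python
# Minimal Turing machine interpreter. Each step is one rule-table lookup,
# one tape write, and one head move -- bookkeeping that could equally be
# done with paper and pencil, which is the Church-Turing point above.

def run_tm(tape, rules, state="start", head=0, max_steps=10_000):
    tape = dict(enumerate(tape))          # sparse tape; blank cells read "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip("_")

# Toy rule table: increment a binary number. Walk right to the end of the
# input, then carry leftward until the carry is absorbed.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "done"),    # 0 + carry -> 1, done carrying
    ("carry", "_"): ("1", "L", "done"),    # overflow: new leading 1
    ("done",  "0"): ("0", "L", "done"),
    ("done",  "1"): ("1", "L", "done"),
    ("done",  "_"): ("_", "R", "halt"),
}

print(run_tm("1011", rules))   # 11 + 1 = 12, so this prints 1100
```

Nothing here depends on the medium being "active": the interpreter could be replaced by a person following the same rule table on graph paper, only far more slowly.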
But I am hitting 'run'. I wouldn't need the pencil if I didn't 'run' it.
If you program everything necessary to simulate consciousness into a computer**, but never hit Run
I describe human beings, in contexts like this, as our paradigm of a person. That's not exactly a definition - I'm not aware of any definition that is adequate. A paradigm, for me, is an example or sample that one uses in an ostensive definition. However, I think that looking for definitions is inadequate on its own, because the important feature of people is the way we interact with them as different from the way we interact with objects.
A simulated person would be a person, just in a different universe (the simulated one). It's likely quite a small universe. You seem to define 'person' as a human in this universe, and no, the simulated person would not be that. — noAxioms
That's because I just made it up. Sorry. I'm not well read in almost anything that's ever discussed here. There are many topics in which I'm not at all read. I know what I want to say, but often don't know what words are normally used. I had hoped I explained it well enough to make what I am thinking clear.
Not sure what the term 'active medium' means. Googling it didn't help. — noAxioms
But I am hitting 'run'. I wouldn't need the pencil if I didn't 'run' it. — noAxioms
Remember, we're not worrying about what those running the simulation are calling the simulated things. We're supposing that we are the subjects here, the ones being simulated, and we (and only we) call ourselves human beings or people. That's the only definition that matters.
I describe human beings, in contexts like this, as our paradigm of a person. — Ludwig V
That's kind of like suggesting that God is unethical to have created a universe that has beings that feel bad, and yes, there are those that suggest exactly that.
I have to say, if these beings are to be conscious, I wish you luck in getting your project through your research ethics committee.
I wanted a universe that is simulated, instead of being instantiated in some other way. I do suppose that the simulated universe is a part of the container universe, but it's still a separate universe. That's questionable if it's an open simulation, but not all of them are. Much depends on the goal of running the simulation. Bostrom actually posits what that purpose would be, even if it is a totally naive one.
My question now, is why not just talk about people living in a different universe?
It is the same universe as the one we are in, because I posit that we are the simulated ones. How would we tell if that were true? The topic isn't about how to run a sim. The topic is about what it's like to be one.
the sims you are describing are clearly in the same universe as we are.
There are definitely war elements in both, but that makes it more an analogy than a simulation. They do run simulations of war all the time, pretty much continuously. Yay cold war. Those simulations don't simulate the consciousness of anybody, and I don't think they even have people beyond statistical counts.
Talking of sims, do you regard chess or (American) football as a simulation of war?
It is a parallel process, yes. Per relativity, simultaneous is an ambiguous term for events, and no, nothing in any physical system requires spatially separated components of any process to be simultaneous in any frame. Per the principle of locality, one cannot depend on the other (they are outside each other's causal light cone), and thus the interactions can be simulated in any order, serially.
I know human consciousness is a fairly hotly contested issue. But does anyone disagree that it involves multiple processes taking place simultaneously? — Patterner
Granted, but there's no need to, per the above comment. Any such transactions can be computed in any order without altering the outcome. Per the principle of locality, no spatially extended process can have a requirement of simultaneous operation.
If we agreed that a process can take place in the scenario you're describing, you cannot write multiple things simultaneously.
On the contrary, time in the simulation has nothing to do with time for the guy with the pencil. Our pencil guy can set everything aside for a year and get back to it later. The simulated guy will not notice. No doubt each transaction will have a location/timestamp, and there's nothing preventing multiple transactions (all the transactions in a single iteration of the data) from having the same recorded timestamp. That is pretty much how simulations are done. Here is the state at time X, and then it uses that state to compute the next state at X+<increment>, where the increment might be a microsecond or something. It might take a minute for a machine to simulate all the transactions to generate the next state. It might take the pencil guy several lifetimes to do the same thing, so we're going to need that society to train his replacements each time he retires.
At no time, in no sense, is everything needed for human consciousness happening at the same time in the paper and pencil scenario.
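The stepping scheme described here can be sketched in a few lines: the next state is computed purely from the current state, every update within a tick shares one simulated timestamp, and wall-clock time never enters. A toy Python example of my own (the diffusion-style update rule and the cell values are illustrative assumptions, not anything from the thread):

```python
import random

# Toy fixed-timestep simulation. All updates in a tick read only the OLD
# state and write to a fresh copy, so they can be applied in any order
# (we even shuffle them) without changing the result -- the serial,
# paper-and-pencil ordering is as good as any other.

def step(state):
    """Each cell becomes the average of its two neighbors (ring topology)."""
    n = len(state)
    updates = [(i, 0.5 * (state[(i - 1) % n] + state[(i + 1) % n]))
               for i in range(n)]          # every update reads the old state
    random.shuffle(updates)                # order of application is irrelevant
    new_state = list(state)
    for i, value in updates:               # writes go to a copy, not in place
        new_state[i] = value
    return new_state

state = [0.0, 1.0, 0.0, 0.0]
sim_time = 0.0
dt = 1e-6                                  # simulated seconds per tick
for _ in range(3):
    state = step(state)                    # one tick: a machine might take a
    sim_time += dt                         # minute, the pencil guy years
```

However long a tick takes to compute, `sim_time` advances by exactly `dt`, which is the sense in which the simulated subject cannot notice the pencil being set aside.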
So how does this question differ from the brain in a vat, from Descartes' demon or from the supposed possibility that we are all dreaming?
Remember, we're not worrying about what those running the simulation are calling the simulated things. We're supposing that we are the subjects here, the ones being simulated, and we (and only we) call ourselves human beings or people. That's the only definition that matters.
It is the people in the simulation that are tasked with finding evidence that they are the subject of a simulation. What we're called by the occupants of the reality running the simulation is irrelevant. — noAxioms
So how does this topic differ from the question what it's like to be a bat?
The topic isn't about how to run a sim. The topic is about what it's like to be one. — noAxioms
This is the traditional problem of evil. I am one of those who think the problem has no solution and that therefore no such God exists. Of course, that doesn't prove that there are not other gods around, or that it is only the Christian conception of God that is wrong.
That's kind of like suggesting that God is unethical to have created a universe that has beings that feel bad, and yes, there are those that suggest exactly that. — noAxioms
I wish I knew what the difference is between a simulation and an imitation, a simulation and a mimicry, a simulation and an analogy, and a simulation and a model.
There are definitely war elements in both, but that makes it more an analogy than a simulation. They do run simulations of war all the time, pretty much continuously. — noAxioms
Nothing like dreaming.
So how does this question differ from the brain in a vat, from Descartes' demon or from the supposed possibility that we are all dreaming? — Ludwig V
We are not bats. It's not about what it's like to be something we're not. We know what it is like to be a human. The question is, how might we (being the subject of simulation) detect that fact?
So how does this topic differ from the question what it's like to be a bat?
Bostrom is half the story. Most popular fictions depict VR, not a sim. The Matrix is a good example of a VR, however implausible.
I'm afraid I didn't realize what the philosophical background is (essentially, Bostrom).
I didn't posit that there are no ways of testing. But depending on the quality of the simulation, it might get difficult. The best test is probably to recognize that there must be limits, and to test those limits.
I don't find the question interesting, because if we posit that there is no way of telling, then there is no way of telling.
The 'can a computer think' topic was sort of about that. I suppose we could copy our own design and build an actual biological human, but in some way other than the normal one. Anything else is going to be trivially detectable. Not sure how that 'built' person would get loaded with experience. It's not like you can just upload software to a human. Doesn't work that way.
The interesting question is under what circumstances we would accept that something we designed and built is a conscious being, i.e. a (non-human) person.
There is mention of the Turing test in earlier posts here. Passing it with a simulation is doing it the hard way. We're getting close to something that can pass the test now, but nowhere close to actually simulating the way a human does it. Perhaps you, like Ludwig here, mean 'imitation', which anything that passes the Turing test is doing by definition.
The Turing Test is too weak, because it can be passed with a simulation. Simulating intelligent behavior is not actually behaving intelligently. — Relativist
Pain is not evil. I'd never want to change myself to be immune from pain. It serves an important purpose, and not an evil one.
This is the traditional problem of evil. — Ludwig V
A statue, puppet, or a speaker blaring bird-of-prey noises to scare away geese, or a wooden duck lure, are all imitations/mimicry.
I wish I knew what the difference is between a simulation and an imitation, a simulation and a mimicry, a simulation and an analogy, and a simulation and a model.
The Turing Test is passed by fooling people into believing there's a human giving responses in a conversation. This is feasible today at least within a limited range of conversation topics. What more are you looking for? A wider range of topics? Regardless, human responses are the product of thought processes (including feelings and reactions, influenced by motivations that could change during the course of the conversation). Example: a human can express true empathy; a computer can produce words that sound like it's expressing empathy - but it actually is not. The human may change her behavior (responding differently) based on this; will the computer?
And if a machine passes the test (it's a text test, so there's no robot body that also has to be convincing), then it exhibits intelligent behavior. The test is not too weak. — noAxioms
I think that you are not talking about the same question as Relativist. (See below). You are positing that it is people who are "in" the sim - i.e. (I assume) being fed the data.
It is the people in the simulation that are tasked with finding evidence that they are the subject of a simulation. What we're called by the occupants of the reality running the simulation is irrelevant. — noAxioms
Here, you are positing that you are starting with a machine. In that case, the question is whether the behaviour is really intelligent or merely seems to be intelligent. But if it's a machine, we already know that it is not intelligent. Actually, I don't think that is right, but even if the response was intelligent, it does not follow that the machine is conscious or sentient.
And if a machine passes the test (it's a text test, so there's no robot body that also has to be convincing), then it exhibits intelligent behavior. The test is not too weak. — noAxioms
I think that you are not talking about the same test as noAxioms. (See above). Plus you are positing that it is a machine that is responding, so you are begging the question. (As Turing also does in his formulation of the test.)
The Turing Test is passed by fooling people into believing there's a human giving responses in a conversation. — Relativist
Indeed. I dragged in Relativist since the topic of the Turing test came up, and he suggests that the test is insufficient to determine intelligence.
I think that you are not talking about the same question as Relativist. (See below). — Ludwig V
Here again, the quoted comment concerns the Turing test, not the simulation hypothesis.
And if a machine passes the test (it's a text test, so there's no robot body that also has to be convincing), then it exhibits intelligent behavior. The test is not too weak. — noAxioms
Here, you are positing that you are starting with a machine. In that case, the question is whether the behaviour is really intelligent or merely seems to be intelligent. — Ludwig V
The Turing test is not a test for either of those. There's not even a test that can tell if your neighbor is conscious/sentient. If there was, much of the p-zombie argument would be immediately settled by some empirical test. The whole point of the term 'conscious' is that it is always defined in such a way that it is immune from empirical evidence.
even if the response was intelligent, it does not follow that the machine is conscious or sentient. — Ludwig V
The question is simple. I am communicating with some unknown entity via text messages, much like we do on this forum. The question is, is that with which I am communicating a human or not?
The fundamental point is whether we can even formulate the question without begging it.
In a text conversation, yes. That's pretty hard to do, and we're not there yet.
The Turing Test is passed by fooling people into believing there's a human giving responses in a conversation. — Relativist
Well, one of the ideas is to go outside those topics. I mean, none of the chat bots have long term memory, so one of their traits is that they don't ask any questions of their own since they cannot learn. I suppose clarification requests of questions posed to it might count as asking something.
This is feasible today at least within a limited range of conversation topics.
You claimed the test is too weak. I claim otherwise. If it passes, it has long since surpassed us in intelligence. As a test of human-level intelligence, it is more than enough.
What more are you looking for?
It's not empathy, but it very much is expressing empathy. People are also quite capable of expressing empathy where there is no actual empathy, such as the politicians that send their 'thoughts and prayers' to mass-shooting families, but do nothing about the problem.
a computer can produce words that sound like it's expressing empathy - but it actually is not.
In a VR, yes, exactly that. People are real, and are fed experience of a simulated reality. Every video RPG does this.
You are positing that it is people who are "in" the sim - i.e. (I assume) being fed the data. — Ludwig V
No. If you can do that, you very much are aware of the creator/creation status. It would be like talking to a god. In a VR, you can talk to the other players, and you can talk to the NPCs if the NPCs have enough intelligence to talk, but you can't talk to anybody outside the simulated universe.
Plus, if I've understood you, you are positing that the subjects cannot communicate with whatever is running the sim
Quite so. But I notice that you don't disagree with what I say. My argument is that if one starts the Turing test by specifying that the subject is a machine, the test cannot provide evidence to the contrary, and this is the version that I have most commonly seen. But if one did start by specifying that it is a person, one would not get any evidence to the contrary either. (If the responses from the machine seem to be intelligent or sentient or whatever, we have to decide whether the responses really are intelligent or sentient or whatever.) Knowing what the subject of the test is governs one's interpretation of the replies, which consequently can't provide evidence either way. That applies also to your version, in which one doesn't know whether the subject is machine or person (and to a version I've seen that provides two subjects, one machine and one human).
Here again, the quoted comment concerns the Turing test, not the simulation hypothesis. — noAxioms
Quite so, and the set-up specifies that there can be no empirical evidence. But then, the argument is devised as a thought-experiment with the aim of persuading us to accept that there are qualia, or some such nonsense.If there was, much of the p-zombie argument would be immediately settled by some empirical test. — noAxioms
Quite so. That's why the attempt to distinguish between the two on the basis of empirical evidence (Turing test) is hopeless.The whole point of the term 'conscious' is that it is always defined in such a way that is immune from empirical evidence. — noAxioms
That's capitalism for you. But it might turn out that the machine is more successful than human beings at that specific task,I've even been charged human health insurance rates for a diagnosis provided by a machine, and I protested it at the time. — noAxioms
I think that a machine can diagnose some medical conditions. Whether it can imitate diagnosing any medical conditions is not at all clear to me.If it does, it is probably already considerably more intelligent than humans, since it requires far more smarts to imitate something you are not than it does to just be yourself. — noAxioms
I frequent another forum which developed criteria for sniffing out AI. However, I may be wrong, but I don't think there is any follow-up on whether people's judgements are correct or not. Do you get confirmation about whether your "spots" are correct or not?I am a moderator on a different forum, and one job is to spot new members that are not human. — noAxioms
Parrots imitate talking. Are they smarter than human beings?The entity is not human, and to imitate human responses, especially those involving human emotions, would require superior ability. — noAxioms
I thought you said that there were people inside the system. Now I'm really confused.There are only fully simulated people inside 'the system', — noAxioms
Yes, the appeal to how things work inside is a popular refuge in these uncertain times. But we don't (can't) rely on our limited understanding of how we work to establish what is the same and what is different. Even if we could, I would not be persuaded to rule out the possibility of personhood simply on the grounds of different internal physical structures. The output is what counts most.Progress would not be measured by fooling people, but by showing there are processes that work like our brains do. — Relativist
In that sense, the two are similar. Also, quite often, in both VR and a true sim, solipsism is true, but you know it because there are clues. We here are envisioning a scenario where the simulated reality is good enough that those clues get harder and harder to find.Regarding the question "are we in a simulation?" I interpret this as similar to "is solipsism true?" It's impossible to prove one way or another, but nevertheless - it's rational to believe we are not. — Relativist
Cool. I wasn't aware. Nice controlled test, and kind of pre-chat-bot, which is maybe a good thing. I wonder how trained the judges were; where was the focus of their questioning? To pass today with tools like ChatGPT around, you'd have to dumb down the machine answers since it 'knows' more than any human, even if the majority of what it knows is wrong.Regarding the Turing test: it has been passed - to a degree.
It would seem fairly easy to pretend to be an unintelligent machine, but I presume these people were not attempting to appear nonhuman.Conversely, humans have "failed" the Turing test (https://www.nbcnews.com/news/amp/ncna163206) -- observers inferred that a human's responses were not human.
Agree. The game playing AI does all that, even if it is confined to game playing. Early chess or go playing machines were like self-driving cars, programmed by the experts, using the best known strategies. Then they came up with a general AI (like AlphaZero) that wasn't programmed at all to play anything specific. There was only a way to convey the rules of the game to it, and it would learn on its own from there. After a few days of practice, it could beat anybody and any of the specifically programmed machines. That definitely meets all your criteria.Regarding "true" AI: IMO, it would entail a machine engaging in thoughts, learning as we do, processing information as we do, and producing novel "ideas" as we do.
Totally agree. Progress by imitation has its limits, but since a computer is not a human, to pass a Turing test it will always have to pretend to be something it isn't, which is hard to do even well after it has surpassed us in intelligence.Progress would not be measured by fooling people, but by showing there are processes that work like our brains do.
That is more relevant to this topic. To demonstrate how our brains work, you (probably) have to simulate it. To simulate it, you need to give it state and an environment (all this was brought up in prior posts). The state in particular is not exactly something you can make up. It needs to have grown that way through experience, which means a quick sim won't do. You have to start it from well before birth and run this really complicated simulation through at least years of life, providing it with a convincing environment all the while. Tall order. It would presumably take centuries for a single test to run, during which the hardware on which it is running will be obsoleted multiple times.Benefits include confirming our theories about some of the ways our brains work.
Then the test is invalid, I agree. If you click the link about the test being passed, the judges did not know which conversations were machines and which were people. They did know that there were five of each. Everybody (judges, machines, human subjects) knew it was a test.My argument is that if one starts the Turing test by specifying that the subject is a machine — Ludwig V
The Turing test was never intended as a test of consciousness.That's why the attempt to distinguish between the two on the basis of empirical evidence (Turing test) is hopeless.
True. Machines can detect skin cancer better than any human, and that's worth paying for (but there's probably a free app). In my case, the non-doctor tech that saw me googled my symptoms and read back to me verbatim the same information google gave me at home, but leaving off the part where it said "see your doctor if you have these symptoms". Obviously no actual doctor was consulted.But it might turn out that the machine is more successful than human beings at [medical diagnosis]
A 3 year old can imitate giving a diagnosis. It's how daddy gets covered by 20 bandaids. And if a machine can give a diagnosis (they can), then why would they have to imitate the ability that they actually have?I think that a machine can diagnose some medical conditions. Whether it can imitate diagnosing any medical conditions is not at all clear to me.
A few are false positives, which are often confirmed by a simple PM to them. The bots don't hold conversations, but rather give single replies to a question, and no more. Short, and often correct but obvious and not particularly helpful. If you reply to a bot-post, the bot will probably not notice it.Do you get confirmation about whether your "spots" are correct or not?
No more than is a tape recorder. Parrots don't pass a Turing test.Parrots imitate talking. Are they smarter than human beings?
In the Simulation Hypothesis, we are the simulated people, the ones inside the system. Do not confuse this with the VR hypothesis where the people are real and only their experience is artificial. Read the OP if you don't get this distinction.I thought you said that there were people inside the system. Now I'm really confused.
Posthuman is defined here:This paper argues that at least one of the following propositions is true:
(1) the human species is very likely to go extinct before reaching a “posthuman” stage;
(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
(3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor‐simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed. — BostromSimHypothesis
The trichotomy is reasonable, but worded in a misleading way. Point 1 makes it sound like this preposterous posthuman state is somehow inevitable if the human race doesn't meet an untimely demise along the way. This is nonsense since the posthuman state described is totally unreasonable, and human technology seems heavily dependent on the non-renewable resources that sustain this gilded age.The simulation argument works equally well for those who think that it will take hundreds of thousands of years to reach a “posthuman” stage of civilization, where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints. — BostromSimHypothesis
Yes. That's Bostrom's whole point. He says we're probably all simulated, but it's based on the anthropic reasoning above, which makes many many unreasonable assumptions.So I have to imagine myself as being a sim and not knowing it? — Ludwig V
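To make that anthropic reasoning concrete, here is a small sketch of the arithmetic as I read it in the paper (the function name and sample numbers are my own, not Bostrom's): if f_p is the fraction of civilizations that reach a posthuman stage and run ancestor-simulations, and n is the average number of such simulations each runs, the fraction of observers with human-type experiences who are simulated comes out as f_p·n / (f_p·n + 1).

```python
def f_sim(f_p: float, n: float) -> float:
    """Fraction of human-type observers that are simulated,
    given f_p (fraction of civilizations running ancestor-sims)
    and n (average number of sims each runs)."""
    return (f_p * n) / (f_p * n + 1)

# Even a tiny f_p is swamped by a large n: one civilization in a
# thousand running a million simulations still makes nearly all
# observers simulated (1000/1001, about 0.999).
print(f_sim(0.001, 1_000_000))
```

This is why the "unreasonable assumptions" matter: the conclusion is driven almost entirely by the assumed size of n.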
Comment - this possibility high-lights for me a question about Bostrom's first two hypotheses. They seem to me to be empirical. But I don't see how one could ever demonstrate that they are true or even plausible without some sort of evidence. Without that one could never demonstrate any consequence of them as sound, as opposed to valid. En masse simulations could provide such evidence. — Ludwig V
The weather is closer. Fluid dynamics of a system in stable state (say water moving through a pipe, dam spillway) needs a description of that state, a calculus task. If it is dynamic (simulation of water waves), then it's more complicated, closer to the weather.couldn't one adopt the kind of approach that the weather forecasters (and, I believe, physicists trying to work out fluid dynamics, which is probably the same problem) have adopted? — Ludwig V
The entire paper is one hypothesis. There are no others that I am aware of.Comment - this possibility high-lights for me a question about Bostrom's first two hypotheses.
I posted his definition of 'posthuman', which is, in short, a level of technology capable of running the numbers he underestimates, and far worse, capable of simulating a posthuman set of machines doing similar simulations.That would require us to define what is meant by "post-human" and "extinction".
There you go. You seem to see both routes. The third path is extinction, or simple permanent loss of technology.Then we would have to deal with the difference between two different possibilities. We may go extinct and be replaced (or ousted) by some other form of life or we may evolve into something else (and replace or oust our evolutionary predecessors).
What two possibilities? Humans that evolve into something we'd not consider human by today's standard? Many species do that all the time. The other possibility is being 'ousted', as you put it. Our biological line is severed, as happens to nearly all biological lines given time.Given that inheritance is not exact copy and the feed-back loop of survival to reproduction works on us just as surely as on everything else, can we exactly define the difference between these two possibilities?
Good example. There are no dinosaurs (which, unlike humans, is a collection of species). The vast majority of those species were simply ousted. They have no descendants. But some do, and the alligators and birds are their descendants. They are not dinosaurs because none of them is sexually compatible with any species that was around when the asteroid hit. They are postdinosaur.They say that birds evolved from dinosaurs, and that mammals took over as dominant species from dinosaurs.
It depends on the species, or the individual. Mom has 2 kids. One of those has children of his own, and the other is ousted, a terminal point in the family tree.Which possibility was realized for dinosaurs?
Prediction of what? A simulation of history makes no predictions. A simulation of the future is needed for that, hence the weather predictors.Another problem. Given that a feed-back loop is at work on these phenomena, can prediction ever be reliable?
You really need to tell me what these hypotheses are, because I know of only the one. Two if you count the VR suggestion, but that doesn't come from Bostrom. I know of several that support a VR view, but none that has attempted a formal hypothesis around it.The third hypothesis suffers, for me
It's the second possibility. He says one of the three must be true. It's not a list of three premises.The second premise - any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof) - seems obviously true to me. — wonderer1
If it is simulating at the particle level, yes. I can run an easy simulation of the planetary motions without simulating each bit. Each planet/moon/asteroid can effectively be treated as a point object, at least until they collide.The simulator would need to consist of more particles than the system which is being simulated.
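That point-object treatment is worth spelling out: a planet's internal structure is irrelevant to its orbit, so the simulation only tracks a position, velocity, and mass per body. Here is a minimal toy sketch of that idea (units, names, and the fixed-Sun simplification are my own illustration, not anything from the thread):

```python
import math

# Toy units: AU, years, solar masses, so G * M_sun = 4 * pi^2 (AU^3 / yr^2).
GM_SUN = 4 * math.pi ** 2

def step(pos, vel, dt):
    """One semi-implicit Euler step for a point mass orbiting a fixed
    Sun at the origin; gravity acts between centers only."""
    x, y = pos
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM_SUN * x / r3, -GM_SUN * y / r3  # point-mass gravity
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)

# Start an "Earth" on a circular orbit: 1 AU out, 2*pi AU/yr sideways.
pos, vel = (1.0, 0.0), (0.0, 2 * math.pi)
dt = 0.001
for _ in range(1000):  # integrate one year
    pos, vel = step(pos, vel, dt)

radius = math.hypot(*pos)  # stays close to 1 AU over the year
```

Nothing here knows how many particles the planet contains; the point is that a faithful orbit costs a few numbers per body, which is exactly why coarse simulations are cheap until bodies collide and internal structure starts to matter.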
Yes, and Bostrom claims several levels of depth, meaning the simulation is simulating the machines doing simulations.That's a rather fundamental problem. In practice, only things that are simpler than the simulator (or things treated simplistically) can be simulated.
Yes. If the goal was to simulate consciousness, they'd probably do one person, or a small isolated community (a closed system). And it wouldn't be a simulation of anybody real, but rather just a learning tool to show that a simulated person behaves like we do. If it worked, it would be a big blow to the dualists, but I'm sure they'd find a way to explain the results away.It seems to me that the person who would seek to disprove the second premise would need to prove that consciousness can arise in a simulation of something much more simplistic than the world we find ourselves in,
I didn't pay enough attention to "extremely unlikely" in this hypothesis/axiom/premiss. That can't be verified or falsified in any of the usual ways. Your arguments are suggestive in support of it. But I can't see them as conclusive.It seems to me that the person who would seek to disprove the second premise would need to prove that consciousness can arise in a simulation of something much more simplistic than the world we find ourselves in, or that it will be a routine matter for a post-human civilization to take all of the matter in a big solar system, and use it to model a smaller solar systems. — wonderer1