Take airplanes. If the simulation initial state was set in the 20th century, then it includes airplane technology. It is 'given', so to speak. If the initial state is started before that, then airplanes are our own invention. Either way, we possess the technology. It isn't illusory. We actually can make airplanes that fly. If you crash in one, you really die, as opposed to, say, a video game where if you 'die', you simply exit the game. Getting shot in a video game is indeed an illusion.

But if this is indeed a simulation, then anything we purport to know about our present levels of technology (and thus any extrapolation therefrom) is illusory, because we don't actually possess that technology: such technology is simulated. — Arkady
@RogueAI correctly pointed out that only somebody who knows about humans would want to simulate them, so it is presumably our descendants, be they human anymore or not.

Perhaps given that it's supposed to be an "ancestor" simulation specifically, he would say that such a simulation would by definition closely (if not necessarily exactly) resemble the ancestral state of the civilization doing the simulating. — Arkady
"we know the basic part of the answer — and that is, there are sequences of neuron firings and they terminate where the acetylcholine is secreted at the axon end-plates of the motor neurons, sorry to use philosophical terminology here. But when it is secreted at the axon end-plates of the motor neurons, a whole lot of wonderful things happen in the ion channels and the damned arm goes up."

How does Searle say [the arm] goes up in the TED transcript? — fishfry
Nobody ever said the program was conscious. It's dumb as rocks, implementing a fairly small program that simply knows how to move the particles around. It implements physics and is no more conscious than is physical law. It has no external input, so right there it doesn't qualify as being conscious. Some programs do have such input, but not most simulations.

Because I can't believe that a computer program of any complexity, running at any speed, could ever be conscious.
I say the car wouldn't be able to do its thing if it wasn't conscious of what's going on around it. Not the same as human consciousness, sure, but it's still a form of consciousness. A car stays conscious even when it's off, a sort of security feature that has caught vandals and thieves.

Programs play chess and drive cars, and I'm duly impressed. Not the same as being conscious.
Mathematics. Known physical limits. Psychology. The Fermi paradox. All vague things, I admit, but at least not empty.

What would constitute evidence of what might be possible in the future?
The computer doesn't need to know which configurations. It only has to simulate physical law. It means that if they successfully simulate a conscious being, they still won't know how consciousness works.

The ultimate argument against my position is that some configurations of atoms are self-aware, and someday we may figure out what those configurations are.
Both the physics community and I are in general agreement that our physics does not appear to be computational. Bell's theorem even 'proves' this, but it is based on empirical evidence, and one has to accept empirical evidence for the proof to hold.

This referred to the claim that everything physical is computational. If you agree with me that you don't assert this, then we're in complete agreement. In fact I think we might be in a lot of agreement in general.
OK, but naturalism is in contrast with concepts like souls, life energy, vitalism, etc. None of these things is necessary to be alive, and indeed, a running program is no more alive than are your brain processes.

Programs don't have souls, don't have life energy, aren't alive.
It's 'right' enough to know where the moon will be 17 years from now, but the physics is chaotic enough that we don't know where it will be 17 millennia from now.

Our theory of gravity works, but we know it's not quite right.
Indeed, but we can for a limited time. For the rolling lumpy rock, yes, that's a chaotic function, but given sufficient precision in the initial state, we can predict its brief path until it stops. Same with the weather. Our current precision gets us maybe 6 days of what that storm will do, and much of that error is due to the lack of a perfect model and the lack of a detailed initial state.

Oh no, that's chaos theory. Even if we had all the details of the initial state, we can't necessarily predict the future.
Which is exactly why there's no point in doing an ancestor simulation. It will show an alternate history that bears little resemblance to what the books say. If started far enough back, it will not evolve humans.

Tiny rounding errors add up to great differences in output. Nearby points in the initial state space lead to vastly different outcomes. We know this.
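The sensitivity to initial conditions being invoked here is easy to demonstrate. A minimal sketch (toy dynamics chosen purely for illustration, not anyone's actual weather or ancestor sim) using the logistic map, a textbook example of chaos, where a perturbation the size of a rounding error eventually swamps the whole trajectory:

```python
# Run the chaotic logistic map twice, with initial states differing by
# 1e-12 -- a stand-in for a tiny rounding error in the initial state.
def trajectories(x0, eps=1e-12, r=4.0, steps=60):
    a, b = x0, x0 + eps
    diffs = []
    for _ in range(steps):
        a = r * a * (1.0 - a)
        b = r * b * (1.0 - b)
        diffs.append(abs(a - b))
    return diffs

diffs = trajectories(0.3)
print(diffs[0])    # still tiny: the two runs look identical at first
print(max(diffs))  # order 1: the runs end up bearing no resemblance
```

Because the error growth is roughly exponential, each extra stretch of reliable prediction demands exponentially more precision in the initial state, which is why better hardware alone doesn't push the forecast horizon out very far.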
That's very different than us being a program.

He says that in the future, computations will instantiate consciousness.
This is Searle's language game again. It's 'instantiation' if an anthropomorphic god does it, and 'execution of a model' if anything else does the exact same thing. The model may be a map, but the execution of it is territory.

Very distinct. The universe, or God, instantiates all the stuff around us. It is the stuff around us. It's the exact ultimate laws of the universe. The execution of a model is just that. It lets us predict, to sufficient accuracy, how the galaxies will move. It doesn't move the galaxies and it's not exact.
Nonsense. If they didn't instantiate gravity, then the simulated moon would not orbit the simulated Earth. That's what you defined instantiation to be. Are we now changing the definition of 'instantiation' to be 'not simulated'?

Gravity simulations do not attract nearby bowling balls. They do not instantiate gravity.
"we know the basic part of the answer — and that is, there are sequences of neuron firings and they terminate where the acetylcholine is secreted at the axon end-plates of the motor neurons, sorry to use philosophical terminology here. But when it is secreted at the axon end-plates of the motor neurons, a whole lot of wonderful things happen in the ion channels and the damned arm goes up."
That's a wordy version of what I said, which is "there's wires connecting the parts where the will is implemented, to the parts where the motor control is implemented". Under Chalmers, there isn't such a wire, hence the magic. — noAxioms
Nobody ever said the program was conscious. It's dumb as rocks, implementing a fairly small program that simply knows how to move the particles around. It implements physics and is no more conscious than is physical law. It has no external input, so right there it doesn't qualify as being conscious. Some programs do have such input, but not most simulations. — noAxioms
Anyway, you don't believe a simulated person could be conscious, so you make up an arbitrary rule that forbids it. — noAxioms
I think that's what you're saying, but personal belief isn't evidence against somebody's hypothesis. It's only an irrational reason that you don't accept the hypothesis. — noAxioms
If we're in a simulation, and we make airplanes within the confines of this simulation, then it seems to me that we don't actually possess the technology. We at most possess a simulation of that technology. If we're in a simulation, what does "actually" flying mean? We're merely simulating the flying experience, making it simply a hyper-advanced flight sim. Pilots in flight sims aren't actually flying, after all.

Take airplanes. If the simulation initial state was set in the 20th century, then it includes airplane technology. It is 'given', so to speak. If the initial state is started before that, then airplanes are our own invention. Either way, we possess the technology. It isn't illusory. We actually can make airplanes that fly. If you crash in one, you really die, as opposed to, say, a video game where if you 'die', you simply exit the game. Getting shot in a video game is indeed an illusion. — noAxioms
Well, it's a truism that only beings who know about humans would want to simulate them, as in order to simulate something you must have knowledge of it, else how do you construct a verisimilitudinous simulation of it? However, that truism needn't limit the simulators to our descendants: perhaps they're advanced aliens which at some point in cosmic history made contact with humans, perhaps they're advanced AI like in the Matrix, and so forth.

@RogueAI correctly pointed out that only somebody who knows about humans would want to simulate them, so it is presumably our descendants, be they human anymore or not. — noAxioms
You seem to be referring to a virtual reality. The simulation hypothesis is not a virtual reality. The people (us) are simulated. In a VR, we would be real, and only our experiential feed is artificial.

If we're in a simulation, what does "actually" flying mean? We're merely simulating the flying experience, making it simply a hyper-advanced flight sim. Pilots in flight sims aren't actually flying, after all. — Arkady
The Matrix is also an example of a VR, not an example of the simulation hypothesis.

perhaps they're advanced aliens which at some point in cosmic history made contact with humans, perhaps they're advanced AI like in the Matrix, and so forth.
Where is the will that initiates the process?
— fishfry
I can't answer for your view, but for the naturalists, it comes from different places, depending on what sort of thing is wanted.
Most will comes from subconscious places (Limbic system), such as choices as to which way to swerve around the tree or to cheat on your spouse. But the will to choose option C in a multiple choice test comes from higher up (Cerebrum for instance). — noAxioms
I said that because the reasons seem backwards: Conclusion first, then selection of premises to support that conclusion. This is rationalization, something humans are very good at. I don't consider humans (myself included) to be very rational creatures. — noAxioms
Not at all, but I apologize if my words annoyed you. The effect was not intentional. — noAxioms
You sound like Arkady, but no, that statement is misleading. It makes it sound like the limbic system is simulated but you are not. So either "I have a limbic system", or "The simulated 'I' has a simulated limbic system". Either of those wordings is at least consistent. Your opinion (and mine, but for very different reasons) of course is that neither you nor your limbic system are the product of a simulation.

I have no limbic system. Only a simulation of a limbic system in a computer, — fishfry
Nobody is claiming that a simulation of X creates an X in the simulating world, which is the strawman you seem to use in your gravity example every time, where you deny an equivalent straw claim that simulation of gravity would create gravity in the GS world. That you persist in this suggestion means that yes, you're not getting it right, perhaps deliberately so.

So no, a simulation in the GS world of a limbic system does not create emotion in the GS world. I agree with that. It is exactly for that reason that the program running the simulation isn't conscious. — noAxioms
Who makes that claim? Quote it please. If you can't do that, then you're making a strawman assertion.

Nobody is claiming that a simulation of X creates an X in the simulating world
— noAxioms
That's exactly what's claimed. — fishfry
Not minds/people in the GS world, no. The claim is that we (the simulated people with, yes, simulated minds) are in this simulated universe, and not in the universe running the simulation.

simulations of brains do not necessarily implement minds
A simulation of a person without will would be a simulation of a body in a vegetative state.

you admit I have will! Therefore I am NOT likely to be a computer simulation.
What, my saying 'deliberate'? You seem to be putting words in people's mouths that they didn't say, and I don't find you to be an ignorant person.

After all this you have to accuse me of bad will?
Not the simulation being discussed here, correct. A running computer process forever without inputs by definition cannot be conscious, any more than you would be without any inputs ever.

A program isn't conscious,
I have a very loose definition that you would not like, but my opinion there is irrelevant. The chatbots (which perhaps imitate, but do not simulate, anything) at least have input, but so does a thermostat. The simulation in question does not.

unless you think chatbots simulate consciousness. Many people believe that these days.
You seem to have a dualistic definition of 'will'. All of your examples (Pac-Man, p-zombies) are dualist/VR references. Bostrom's hypothesis is not. He's not proposing we're in a video game. All this has been said before.

I don't see that. Isn't a simulation of a person without a will exactly what they call a philosophical zombie? It would literally be a terrific chatbot operating inside a highly realistic flesh and bone bot. Your neighbor, for instance. What makes you think they have a will? — fishfry
That's what a simulation is, yes. It has an initial state conveyed to it, and that is input of sorts, but once the simulation begins, there is no further input of any kind. If there were, it would cease to be a simulation. I've run plenty of these myself. It was my job for a while. The sims would run without any I/O at all for perhaps a week, and I don't think results were available until the end, but they could be reported as they happen.

The simulation program has no input. You write the code, then you execute the code and it does what it does.
Output (the state of the system at any given time) can be had at any time, often at the end, but it doesn't have to be. A weather sim is a single simulation of a storm, and it could output the stats of the storm at regular intervals, or it could wait until the end and output the whole thing in a lump. It has to complete in hours, not days, to be useful. My chip sims were a little different, since each chip was run through a series of discrete tests, mostly designed to see how fast you could clock it before it started misbehaving, but also to check the design for bugs. Those sims still output everything at the end, but they didn't have to.

What is its output?
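The shape being described, one initial state, no further input, and output reported either at intervals or in a lump at the end, can be sketched in a few lines (the state and update rule below are invented toy stand-ins, not any real weather or chip sim):

```python
# Toy simulation loop: the initial state is the only input; once the
# loop starts, the update rule runs with no external I/O at all.
def simulate(state, step, ticks, report_every=None):
    snapshots = []
    for t in range(1, ticks + 1):
        state = step(state)                  # no input from outside, ever
        if report_every and t % report_every == 0:
            snapshots.append((t, state))     # interim output, like storm stats
    return state, snapshots                  # final output in a lump

# Simple exponential decay standing in for the simulated physics.
final, snapshots = simulate(100.0, lambda x: x * 0.9, ticks=30, report_every=10)
```

Whether you collect `snapshots` as you go or only look at `final` changes nothing inside the loop, which is the point: reading the output has no effect on the simulation itself.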
They don't. It makes no more sense than asking what it is like for a human to be a bat.

How exactly do the Simulators examine its inner life?
Same way it happens in the real (materialist) world: particles interact and do their thing. Your experience is a function of matter interactions (not so according to someone like Chalmers, whom you referenced with the p-zombie mention above).

In other words, they run the program, and inside the program I come into existence. Me with my subjective experience. (How does that happen? Remind me please).
The simulation itself cares about what you're thinking, but only because it needs to change physics due to it. The runners of the simulation may or may not care. Certainly they don't have enough people to care about every single individual. It's an ancestor simulation of the whole human race. They perhaps want to see what history unfolds, and they care no more about what anybody is thinking than you do about what anybody is thinking. You only care about what they say to you, what they do. You may wonder what goes on inside, but that's a motive for a single-person simulation, not a planetary-scale one.

Clearly they are interested in what I'm thinking and experiencing
If 'the simulators' are those that put together the simulation, who want the ancestor sim, then they have perhaps access to the same data as we do with a pimped-out MRI scan: a picture of where the matter is. You're not getting thoughts from that. To log thoughts, something needs to interpret that matter state and render it into language readable by the simulators. I suppose such log files are possible, but much of thought is not in language form.

1) Do the simulators have access to my internal mental states, and if so, how? Copious log files of everything I'm thinking? and
Up to them to design a way to do it that is useful for their purposes. I suppose one could insert a sort of point-of-view interface that lets one look from any event anywhere (much like the little guy you can steer around in Google Maps), and lets it move at the observer's control. The sim would need to save all state (and not just current state) for this to work, since it probably wouldn't be useful if it was 'live', displaying only what constitutes the current state of the sim.

2) How do I perform actions for the Simulators to watch? They're running ancestor simulations, so they must want to see what I'm going to do next. How do they "watch" me? What are the outputs?
I presume that 'the sims' are the humans in the simulation.

You are avoiding the question of whether the sims are self-aware?
So the humans are entities created by the software? Then how are they not real people and not simulations of anything?

I presume that 'the sims' are the humans in the simulation.
The hypothesis is that the sims are us, so tautologically they're as self-aware as you are.
If 'the sims' is a reference to the simulation software, program, or process, well that's a different answer since people are not hypothesized to be any of those things. — noAxioms
Quite so. But my experience is real experience, not a simulation of experience. So the people "inside" your software are real people.

Particles interact and do their thing. Your experience is a function of matter interactions — noAxioms
I would say the humans are entities created by rearrangement of matter, and that the matter in this case happens to be simulated by the running process in the supervening world. It's a choice of how to word things is all.

So the humans are entities created by the software? — Ludwig V
They are (hypothesized as being) you, and you are real, per your definition:

Then how are they not real people and not simulations of anything?
If I'm experiencing fear, the fear is real. — Ludwig V
You seem to be inconsistent with your usage of 'real'. Have you switched to a different definition?

But my experience is real experience, not a simulation of experience.
It's not my software. It's the software of the entities running the simulation, which isn't me. I am hypothesized to be the product of that simulation, not hypothesized to be creating or running one.

So the people "inside" your software are real people.
Some possibilities:

First, if the world is simulated, why don't its 'designers' simply 'pop out' at times and leave us with some trace of their existence? Guidance through such a virtual world might be helpful, and yet there is no trace of anyone 'programming' or 'guiding' us anywhere. — jasonm
No instances of anomalies? There are often anomalies. Perhaps in the end they will be explained, perhaps not. In any case, we now explain away anomalies even if we really don't know.

Similarly, why don't we sometimes notice violations of the laws of physics? If it's just a simulation, does it matter if the laws of physics are perfectly consistent? This applies to any law of this simulated world, including propositional logic. Again, if you are there, leave us with some trace of your existence through 'miracles' and other types of anomalies that our world does not seem to have. And yet there seems to be no instances of this kind. — jasonm
We wouldn't know how big the universe is. We only know what we know about our universe, which would be simulated. Whatever is outside it, in which it is running, would be beyond our ken. I'm sure educated medieval people would dismiss descriptions of things we can do now as impossible. But what did they know about what humans would later be able to do? What do we know?

Third: what type of computing power would be required to 'house' this virtual universe? Are we talking about computers that are bigger than the universe itself? Is this possible even in principle? — jasonm
And per above, if this is the sort of detail one wants, it makes far more sense to simulate one or a very small number of people. So the motives are probably different for the ancestor sim. — noAxioms
We can only speculate as to the purpose of running this kind of simulation, and the nature of the output depends on that purpose. Maybe it is a sort of detailed history book. Maybe it is pictures. Maybe it's just a stored database. Maybe the purpose is simply to see how long humanity lasts until it goes extinct, in which case a simple number might be the output.

I asked WHAT is the output. — fishfry
You define 'the sims' below to be the programs in the GS world. I see no assertion that either a program (a static chunk of software on perhaps a disk somewhere) or a computer process (the execution of said program on some capable device) with no inputs would have what you might consider to be an 'inner life'. Bostrom doesn't say this, and neither do I.

So the sims have an inner life (one of Bostrom's hidden assumptions)
They have knowledge of it in the same way that I have knowledge of my wife having an inner life. If that's going out on a limb, then one is presuming solipsism. But my presumption of my wife having an inner life does not let me know what it's like to be her.

but the simulators have no knowledge of it?
Geez, another strawman. I make no such claim. Bostrom presumes that consciousness is physical/computational. That assumption is no more an explanation of how consciousness works than is the non-explanation by anybody else.

So YOU know how consciousness works.
I didn't say they figured out how consciousness works, nor did I say they focus only on behavior. The simulation needs to know what each person's mental focus is, what his intent is, because physics as he describes it depends on it. One doesn't need to know how consciousness works to do this.

So step one, they figure out how to implement consciousness using computers; and step two, they entirely ignore that and focus on behavior.
There's no 'them' to communicate to. OK, observers in the GS world can watch (very similar to the Google Maps interface), but they don't affect anything, since that would constitute external input. The running of any sim doesn't require observation of any kind, but why run it if nobody's going to pay attention to the outcome? Yet again, the output is dependent on the purpose of running the thing, and we can only speculate on the purpose.

And again, how is that behavior communicated to them?
A full classical scan of a person provides access to internal physical states, and that's all that's needed to simulate the person, per naturalism. But such a simple simulation would not have physics supervening on mental states like the sim Bostrom proposes, so the one he speculates about is far more complicated and requires access to mental states, not just physical states.

An MRI does not provide access to internal mental states. You know that.
Yes, with that quote, I was. I don't know the purpose of the sim, and I don't know what tech is available to the entities running the sim, so I can only speculate as to how they would choose to 'observe' it.

You're just speculating
Ah, not us, but the program in the GS world. Apologies for getting that wrong. The sims, then, are typically not conscious, especially since they typically lack input.

The sims are programs.
Me saying what the output would be is definitely making stuff up. Me knowing what a simulation is and how it typically works is not making stuff up, since I did it regularly.

Could you accept that you can't answer any of these questions except by making stuff up?
I did mention the nature of the output later in the post above, such as the example of the output of google maps for instance, a very useful interface for display of simulation results. — noAxioms
The simulation can report what each person thinks and feels. The simulation has to have access to this because physics is dependent on what people are thinking. So it can report that Bob at time X is paying attention to his laser experiment and is feeling frustrated that he cannot get the setup just right, and his bladder is getting full. It can show his point of view if that helps. Make up your story. What interface tech exists for them is speculation on our part. Humans are notoriously bad at predicting 'future'/higher tech. — noAxioms
I put 'future' in scare quote because maybe the simulation is being run in the year we call 1224 or something. Maybe in the GS world, advancements came much sooner, and in our simulated world, things happened much slower, and we're far behind them despite 8 more centuries to learn. If that is the case, the Gregorian calendar is only meaningful in our world, and they number their years differently. — noAxioms
Geez, another strawman. I make no such claim. Bostrom presumes that consciousness is physical/computational. That assumption is no more an explanation of how consciousness works than is the non-explanation by anybody else. — noAxioms
I didn't say they figured out how consciousness works, — noAxioms
nor did I say they focus only on behavior. The simulation needs to know what each persons mental focus is, what his intent is, because physics as he describes it depends on it. One doesn't need to know how consciousness works to do this. — noAxioms
There's no 'them' to communicate to. OK, observers in the GS world can watch (very similar to the Google Maps interface), but they don't affect anything since that would constitute external input. The running of any sim doesn't require observation of any kind, but why run it if nobody's going to pay attention to the outcome? Yet again, the output is dependent on the purpose of running the thing, and we can only speculate on the purpose. — noAxioms
A full classical scan of a person provides access to internal physical states, and that's all that's needed to simulate the person, per naturalism. But such a simple simulation would not have physics supervening on mental states like the sim Bostrom proposes, so the one he speculates is far more complicated and requires access to mental states, not just physical states. — noAxioms
Yes, with that quote, I was. I don't know the purpose of the sim, and I don't know what tech is available to the entities running the sim, so I can only speculate as to how they would choose to 'observe' it. — noAxioms
Ah, not us, but the program in the GS world. Apologies for getting that wrong. The sims, then, are typically not conscious, especially since they typically lack input. — noAxioms
Me saying what the output would be is definitely making stuff up. Me knowing what a simulation is and how it typically works is not making stuff up, since I did it regularly. — noAxioms
Our opinions definitely differ, but I'm trying not to assert opinions. I'm trying to interpret what Bostrom's opinion is, and how he attempts to back it. — noAxioms
No, but it has an interface which is the beginnings of what one might look like for viewing simulation states. Yes, the controls to the tool constitute input to the tool, but since viewing simulation results has zero effect on the simulation itself, it doesn't count as input to the simulation, only input to one of many read-only tools to view the data produced by the simulation.
BTW Google maps is not a simulation, — fishfry
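The distinction being drawn here, between input to a viewing tool and input to the simulation itself, can be made concrete with a toy sketch. This is my own illustration, not anything from the thread or from Bostrom; all names are invented. The point is only that observers can inspect state freely through read-only snapshots, so watching can never feed back into the simulated dynamics.

```python
import copy

class Simulation:
    def __init__(self, state):
        self._state = state  # mutable only from inside step()

    def step(self):
        # The "physics": a trivial internal rule with no external inputs.
        self._state = {k: v + 1 for k, v in self._state.items()}

    def snapshot(self):
        # Observers get a deep copy; mutating it cannot affect the sim.
        return copy.deepcopy(self._state)

sim = Simulation({"tick": 0})
view = sim.snapshot()
view["tick"] = 999   # an observer scribbling on the view...
sim.step()
assert sim.snapshot() == {"tick": 1}   # ...has no effect on the simulation
```

The viewer's controls (which region to render, what false light to add) are input to the snapshot-consuming tool, never to `step()`, which is the sense in which the simulation remains input-free.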
I thought they were the people, not the programs.
You define 'the sims' below to be the programs in the GS world. — noAxioms
Yes, what else could we be talking about?
"Living in a computer simulation" is different from being that computer simulation. The two exist in different worlds. They're not the same thing. The simulation runs in the GS world. We exist in this (simulated) world. That's the distinction I've been trying to stress. I'd try to use your meaning, but all sorts of strawman conclusions can be drawn when one equates the two very distinct things, such as "the simulation program is conscious", which it isn't even though you and I are. Simulation programs tend to be very simple, endlessly running the same relatively small list of instructions again and again over a relatively large data set.
Bostrom: "Are YOU living in a computer simulation?" My emphasis. Me. You. Each of us. We are a program being run by the simulators.
I know. It is still a mistake to say you are an executing program, for the reasons stated just above and in prior posts.
I meant executing program.
Presuming 'sims' is the people with this comment, else it makes no sense.
It's odd that Bostrom thinks the computers instantiate self-awareness in the sims, yet shows little interest in it.
The initial state of the sim had perhaps some real ancestors (depends what date they selected), but we (the descendants of those initial people) are not in any way their ancestors, and thus the simulators are not in our future, only the future of some past year they selected for their initial state.
Bostrom clearly thinks the simulators live in (our) future and we are simulations of their ancestors.
And I buy that. Yes, the simulated people (and not the simulation processes) are self aware. But he doesn't explicitly say that anybody knows how 'consciousness works'. You don't have to. You put matter together like this, and the thing is conscious. That's what the sim does. It just moves matter. It doesn't need to know how the emergent effects work.
Bostrom says that. That's the one great revelation I had from this thread. Bostrom explicitly states that the sims are self-aware, and blithely justifies it as "it's widely believed."
Agree. Or the biologists, who are history majors of sorts. What will they get from a sim that starts at a state resembling some past state, but evolves in a completely different direction? Not much. What if you run a thousand of them, all with different outcomes? Now you have statistics, and that's useful. Output would look like a history book. 'Watching' specific events from a selected point of view probably won't be too useful for that, but such a view would be useful to find the initial cause of some avoidable calamity (like a war), which helps our future people know what to look for to prevent their own calamities.
That's more likely than that the history majors are running ancestor simulations.
But they kind of already do. They can put a thing on your head, measuring only external EM effects on your scalp (like an EEG), and they can see you make a decision before you're aware of it yourself. Point is, one doesn't need to know 'how consciousness works' in order to glean what the sim needs, which is mostly focus and intent. What is our guy paying attention to? Why? The sim needs to know because the physics of that thing is dependent on it. It changes from when nobody is paying attention to it. This is done for optimization purposes, and for faking non-classical effects in a classical simulation.
You can map all the neurons and you would not know what someone's thinking.
The sims are programs. — fishfry
Aaand the definition changes again. You said the sims are the programs. The programs are processes running in the GS world. We are humans living in this simulated world. Maybe we should stop using 'sims' as shorthand for this ever moving target.
ARGHHHHHH! The sims are conscious. That's on page one of Bostrom's paper. We are the sims. — fishfry
Yea, that's right. There's indeed not much point in this since your personal beliefs conflict, so you won't consider it on its own grounds.
That's the funny thing. You have said you don't agree w/Bostrom. And for some reason, that makes you want to put great effort into explaining his wrong position to me.
You keep changing what 'the sims' means, and Bostrom doesn't use the word, so I cannot say yes or no.
Bostrom speculates that WE are sims.
Surely we agree on that, at least, yes? No?
No, but it has an interface which is the beginnings of what one might look like for viewing simulation states. Yes, the controls to the tool constitute input to the tool, but since viewing simulation results has zero effect on the simulation itself, it doesn't count as input to the simulation, only input to one of many read-only tools to view the data produced by the simulation. — noAxioms
Google maps can only show you specific places. You can go into a few select buildings, but your view is mostly confined to streets. With the simulation, there is no restriction to views only where the van was, taking a picture every 10 meters or so. You can go inside walls and watch the rats eat the wiring if you want, even if it's totally dark in there. — noAxioms
I thought they were the people, not the programs.
But you defined it earlier to mean 'the simulation processes", of which there may be many running at once, each simulating a different world. — noAxioms
Note: You yet again redefine 'sims' to be the people below. Using the word in both ways is the source of so much of our disconnect. — noAxioms
"Living in a computer simulation" is different from being that computer simulation. — noAxioms
The two exist in different worlds. They're not the same thing. The simulation runs in the GS world. We exist in this (simulated) world. — noAxioms
That's the distinction I've been trying to stress. I'd try to use your meaning, but all sorts of strawman conclusions can be drawn when one equates the two very distinct things, such as "the simulation program is conscious", which it isn't even though you and I are. — noAxioms
Simulation programs tend to be very simple, endlessly running the same relatively small list of instructions again and again over a relatively large data set. — noAxioms
I know. It is still a mistake to say you are an executing program, for the reasons stated just above and in prior posts. — noAxioms
Presuming 'sims' is the people with this comment, else it makes no sense. — noAxioms
It's a very weak point in his argument in my opinion, so he avoids it. To run a good ancestor simulation like this, it would require far less resources to have a good AI imitate (rather than simulate) each of the people. — noAxioms
We're talking about something far better than passing a Turing test since each person needs to not just type like a human, but to act and defecate and bleed like a human. — noAxioms
Now your ancestor sim can go on at perhaps a thousandth of the resources needed to do it at the level of simulation of consciousness of each person. But his hypothesis requires this, so he's forced to posit this implausible way of achieving the goal he's made up. The ratio is likely waaaay more than 1000-1. — noAxioms
He tries to address this by waving away my '1/1000th' guess with 'we don't know the real number'. He calls the imitation people (as opposed to fully simulated ones) 'shadow people', and discounts this strategy, and yet gives every simulated person a shadow body and populates the world with shadow animals and plants and such, none of which is actually simulated like the brains are. Go figure. — noAxioms
Bostrom clearly thinks the simulators live in (our) future and we are simulations of their ancestors.
The initial state of the sim had perhaps some real ancestors (depends what date they selected), but we (the descendants of those initial people) are not in any way their ancestors, and thus the simulators are not in our future, only the future of some past year they selected for their initial state. — noAxioms
Yes, I agree with you that Bostrom seems to imply that history would play out more or less the same, in which case he's just fooling himself, or, if there's a script, it's not a simulation at all, but just a CG effect for a movie script, which doesn't involve people that need to make their own choices. — noAxioms
And I buy that. Yes, the simulated people (and not the simulation processes) are self aware. — noAxioms
But he doesn't explicitly say that anybody knows how 'consciousness works'. You don't have to. You put matter together like this, and the thing is conscious. That's what the sim does. It just moves matter. It doesn't need to know how the emergent effects work. — noAxioms
Agree. Or the biologists, who are history majors of sorts. What will they get from a sim that starts at a state resembling some past state, but evolves in a completely different direction? Not much. What if you run a thousand of them, all with different outcomes? Now you have statistics, and that's useful. Output would look like a history book. 'Watching' specific events from a selected point of view probably won't be too useful for that, but such a view would be useful to find the initial cause of some avoidable calamity (like a war), which helps our future people know what to look for to prevent their own calamities. — noAxioms
Point is, that's a good starting point to resolve the 'why would such a sim be run'? I also still say that imitation, not full simulation, would be a far less costly way to achieve any of the goals mentioned. Only Bostrom requires it, but he can't force the 'future' people to do it an inefficient way. — noAxioms
But they kind of already do. They can put a thing on your head, measuring only external EM effects on your scalp (like an EEG) and they can see you make a decision before you're aware of it yourself. — noAxioms
Point is, one doesn't need to know 'how consciousness works' in order to glean what the sim needs, which is mostly focus and intent. What is our guy paying attention to? Why? The sim needs to know because the physics of that thing is dependent on it. It changes from when nobody is paying attention to it. This is done for optimization purposes, and for faking non-classical effects in a classical simulation. — noAxioms
Aaand the definition changes again. You said the sims are the programs. — noAxioms
The programs are processes running in the GS world. We are humans living in this simulated world. — noAxioms
Maybe we should stop using 'sims' as shorthand for this ever moving target.
Be explicit. Use either 'simulated people' (us) or simulation process (the program running in a different world). — noAxioms
Bostrom does not use the word 'sims', so it isn't on any page of his paper.
He says on page 1 (the only reference to 'conscious' on that page): "Suppose that these simulated people are conscious". He is proposing that the people in the simulated world, and not the program running in the simulating 'future' world, are what is conscious. This is consistent with what I've been saying. — noAxioms
He goes on later to presume substrate independence, which is that consciousness is not necessarily confined to carbon based biological forms. — noAxioms
But the simulated people in his proposal are based on simulated carbon-based biological forms. But he must say this to counter the standard objection that, by definition, no computer can instantiate something conscious.
Nowhere does he state that something as simple as a simulation process is itself conscious. — noAxioms
Yea, that's right. There's indeed not much point in this since your personal beliefs conflict, so you won't consider it on its own grounds. — noAxioms
You keep changing what 'the sims' means, and Bostrom doesn't use the word, so I cannot say yes or no. — noAxioms
Bostrom does indeed speculate that it is more likely than not that we are simulated people: that we are composed of simulated matter being manipulated by a simulation process running in some other world. He nowhere speculates that we are that simulation process itself. — noAxioms
If they're human, and they watch what we do and we don't act human, then their simulation is missing critical things. Watching us should be indistinguishable from watching people in their own world, placed in our time.
that is, the thoughts and feelings and experiences of humans such as you and I -- are as opaque to our simulators, as they are to us! So in the end, we are a great mystery to our simulators. They probably watch the stuff we humans do and go Wow, that doesn't make ANY sense! — fishfry
Even if they could read our minds, they still have no control. If they had control, it wouldn't be a simulation.
So the simulators can't read our minds. That means they don't have control over us.
The program is deterministic. Real physics might or might not be. But if simulated people have free will, that free will has a different definition than the usual one.
They're like a God who gives us free will, just to see if we'll choose the righteous path.
Pretty much, yes, except theological theory isn't bounded by physical limits, making theological theory more plausible.
Once again, simulation theory is more like theological speculation than science.
The simulation can, so it is free to include that as part of the output. Text form perhaps: 'Bob is contemplating cheating on his homework'.
Can the simulators read our minds or not?
No, not from the code, which only moves particles around.
can their computer scientists just look at the code and figure out what we'll do?
Bostrom posits that the simulation runs far enough into our future that it starts simulating our creation of such simulations, so most people actually end up multiple levels from the base reality. He does not posit that humans can run quintillions (understatement) of instructions per second, which they could if they and the simulation were the same thing.
In which case they could ... simulate the sim, could they not.
He suggests that the resolution changes when you look close. Not when the observers look close, but when the simulated people (us) look close. So the simulators might look at a forest with no humans in it, and find themselves unable to observe details going on there. What details are omitted is TBD.
You could never have a 100% perfect geographical simulation. It must have a resolution, and reality is always more fine grained.
It's an output viewing program. You can add false light that isn't actually in the simulation, so you can see the rats. But the rats probably aren't fully simulated if humans are not watching them. They might be hearing them in the walls, so the sound at least needs to be realistic.
How can you watch the rats if there's no light?
We need that to see our rats. The simulators don't need a camera to look at computer data, which can be colorized with pink stripes if that's what you want.
Visual recording devices require light, that's a basic principle of physics.
The processes might instantiate us, but they're not us. They exist in two different universes. So the term 'the sims' needs to refer to one or the other, because they're very different things. You've used the term to describe the running process, but I think you mean the people.
The sims are us. I have in the past said the process (forgive me if I ever said program, I know better) instantiates us.
Not sure what you mean by this. I simulate a storm. That doesn't bring a storm into being in my universe. It only brings a computer process into being, and it ceases to exist when I terminate the process. I can pause it for a month and then continue it again. Nothing in the storm will be able to detect the pause.
I wonder if Bostrom explains how any of this works? The simulators write a program. They run the program. Somehow, you and I and the world all around us comes into being.
Typing at your computer?? Where else? You're in this universe, and have a location in this universe. You seem to be asking where some other 'you' is in the simulating universe, but there isn't one there. Just some computer process, which arguably doesn't have a meaningful location.
If it's true, then where am I right now?
Not my story, so whoever suggests that is free to attempt to explain it.
I'm an abstract consciousness floating above or around some physical piece of computing hardware. How is this magic trick supposed to work?
He says there's no 'consciousness floating above' anything. That's part of the widely accepted view to which he is referring.
What does Bostrom say in his introduction? It's a "quite widely-accepted position in the philosophy of mind." As if that explains anything.
It's the same trick that ordinary matter does. Wiggle atoms this way and that, and consciousness results. It's the non-naturalists that are trying to make something magic of that.
But if, for the sake of argument, I grant you this trick: The sims are the minds that arise out of executing the computation.
It is to me, but I probably have a different definition of what is real than 'is the base world, the GS'. Given the latter definition, I agree. Our world is not real, but the simulation process is real, at least if we're only 1 level deep into it.
Our world isn't real.
Bostrom makes no such suggestion, nor do I find that statement meaningful at all. It is simply a statement that comes from a belief system significantly different than the one Bostrom presumes.
We live in the spirit-space adjacent to their computer.
Under naturalism, 'you' are a complete person, not just a mind. Your wording makes it sound like you are just the mind, something separate from the physical part of you, instead of being simply part of the dynamics of the matter of which you are comprised. There is no separate spirit/mind/woo. The simulation argument holds no water under alternate views.
this is my statement:
A computation is executed on physical hardware operated by the simulators. As it executes, it instantiates, by some unknown mechanism, a mind. That mind is me.
I think I said exactly that in my statement. That's what 'large data set' means. It means a massive amount of work to do.
Simulation programs tend to be very simple, endlessly running the same relatively small list of instructions again and again over a relatively large data set. — noAxioms
That's not even true. When you run a simulation of the weather or of the early universe or of general relativity, you are doing massive amounts of numeric computation and approximation.
I've written several. A simulation of Conway's game of life (GoL) can be done in a few hundred lines of code, but potentially involves trillions of operations being performed. OK, the weather is more complicated than GoL, but there's still a huge data-to-instructions ratio.
I don't know why you think simulation programs are simple. That's not true.
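The GoL point can be made concrete with a minimal sketch (my own construction, not code from the thread): the entire "physics" is a handful of lines, yet on a large enough grid those few instructions are applied billions of times, which is the small-instruction-list, large-data-set shape being described.

```python
from collections import Counter

def neighbors(cell):
    """The eight cells adjacent to `cell`."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """Advance one generation; `live` is a set of live-cell coordinates."""
    counts = Counter(n for cell in live for n in neighbors(cell))
    # A cell is alive next tick iff it has exactly 3 live neighbors,
    # or has exactly 2 and is already alive.
    return {cell for cell, c in counts.items()
            if c == 3 or (c == 2 and cell in live)}

# A 'blinker' (three cells in a row) oscillates with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker
```

The same `step` loop runs unchanged whether the grid holds three cells or trillions; all the cost lives in the data, not in the program text.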
Not 'versus'. They're both ancestor simulations, just implemented in different ways, one far more efficient than the other. I'm talking about how the simulation software is designed. Why run 10000 instructions where one will do for your purposes? Of course, we don't know those purposes, so I could be full of shit here.
We don't have to waste time trying to define ancestor simulation versus AI.
Lacking any input from their world to ours, there doesn't seem to be much room for a moral code. They're incapable of torturing us. At best, they can erase the data and just end our world just like that. Morals in the other direction would be interesting. Are we obligated to entertain them? Depends on the simulation purpose, and since that purpose hasn't been conveyed to us, we don't seem to be under any obligation to them.
what is the moral obligation of the simulators to us?
They have not thus cursed us. The simulation has no inputs, so they (unlike an interfering god) have no way to impart calamities on us. A simulation of perpetual paradise would not be an ancestor simulation.
By the same token, we can ask why our simulators, who art in Heaven, have cursed us with war, famine, pestilence, and death.
That argument is also true of the GS world. It isn't specific to a simulated world.
I assume you're a fellow sentient human because I'm programmed to.
That would be the imitation method of running the simulation. Far more efficient to do it that way, but Bostrom suggests that it be done the way where nobody is programmed to follow the will of the simulation or programmers.
the programmers coded us up to accept each other as sentient humans.
You should, because he's proposing more resource usage than exists in our solar system, so he has to find ways to bring that requirement down to something more than one person could have. Optimizations are apparently not on his list of ways to do that.
I don't care about the resource argument.
Yes, in the context of a simulation (as opposed to a VR), shadow people are the same as NPCs. He just doesn't use the term, perhaps because of the VR connotations. Philosophical-zombie is something else, a term not meaningful under naturalism.
Aren't those NPCs?
He kind of says it IS big pixellated blocks when nobody is looking, but that crude physics changes when you look close, so you never notice. The big blocks still need to keep track of time so aging can occur. Paint needs to peel even when crudely simulated. Trees might not fall in the forest, but they still need to be found fallen when a human goes in there. How much detail is needed to simulate the magma of Earth? Not at the atomic level for sure, but the dynamics still need to be there. Plausible layers need to be found when a deep hole is dug by a human.
Maybe everything is in big pixellated blocks, and we are just programmed to think it's all smooth and detailed?
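This "detail only when you look close" idea resembles lazy, on-demand generation in ordinary simulation software. Here is a toy sketch (my own construction; every name is invented) of the key requirement: fine detail is produced only when observed, but deterministically, so a second look at the same spot agrees with the first and observers can never catch the coarse representation in the act.

```python
import hashlib

def fine_detail(region, seed="world-seed"):
    """Generate fine-grained detail for `region` on demand.

    Nothing is stored for unobserved regions; the detail is derived
    deterministically from a seed, so repeated observation is consistent.
    """
    h = hashlib.sha256(f"{seed}:{region}".encode()).hexdigest()
    return int(h[:8], 16) % 100   # e.g. a per-region texture value

# What's stored when nobody is looking: a coarse block.
coarse = {"forest": "big green block"}

# Looking twice at the same spot yields the same detail.
assert fine_detail("forest") == fine_detail("forest")
```

The coarse state still has to evolve (paint peels, trees are found fallen), but only the summary dynamics are computed until a close look forces the detail into existence.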
Much closer to what he proposes, yes. The stuff 'out there' needs to be simulated to sufficient accuracy of shared experience: The same fallen tree that nobody heard falling. The same coffee temperature. It's still a very inefficient way to run an ancestor simulation.
It's back to Bishop Berkeley. Since our experience is mediated by our senses, there doesn't need to be anything "out there" at all. Just the program running in the simulators' computer that instantiates our minds.
No, he never says 'our future'. The simulators supposedly exist in some other world, and 'our future' is some later time in this universe. He talks about where our technology might eventually go as an exploration of what might be possible, but he never suggests that the simulation is being done in our world, which would be a circular ontology.
Bostrom clearly thinks the simulators live in (our) future
Nobody said that. They perhaps staged their initial state in simulated medieval times, sure, but the simulation is not being run by entities with only medieval technology.
So we're being run by people who invented these super-duper computers and mind-instantiating algorithms, but their society has not evolved past, say, the medieval period.
This question presumes dualism, or if it doesn't, then I have no idea what you're asking.
Where do these minds live?
Why? Atoms don't know how consciousness works, so neither does something that only simulates atoms.
The sims (us) don't have to know how it works. The simulators do.
The model is apt so long as the child cannot interfere with the ant farm.
If you accept Bostrom's assumptions at face value, we live in an ant farm owned by a sociopathic child.
Most people assuming the 'commonly held philosophy of mind' consider mental process to take place in one's head (and not 'hovering nearby'). Hence Bostrom suggests simulation of heads to a higher (but not highest) degree than most other places.
But it somehow gives rise to a mind. Did I ask you where these minds exist? I think I did.
I misinterpreted your words then. Apologies.
Nor did I ever claim that. This was a real strawman post. You put many words and ideas in my mouth.
Quotes like that threw me off.
The sims are programs. — fishfry
Typing at your computer?? Where else? You're in this universe, and have a location in this universe. You seem to be asking where some other 'you' is in the simulating universe, but there isn't one there. Just some computer process, which arguably doesn't have a meaningful location. — noAxioms
Lacking any input from their world to ours, there doesn't seem to be much room for a moral code. They're incapable of torturing us. At best, they can erase the data and just end our world just like that. Morals in the other direction would be interesting. Are we obligated to entertain them? Depends on the simulation purpose, and since that purpose hasn't been conveyed to us, we don't seem to be under any obligation to them. — noAxioms
It was over 40% shorter than the post to which I was replying. I do try to trend downward when the posts get long.
your lengthy post — fishfry
Funny, because my compose window survives crashes and such. I've had a few power failures, all without loss of the post. Still, I sometimes compose in a word document to prevent such loss.
I lost the whole damn thing in the forum software.
Sounds like you're asserting that you exist in a physical world (the one with the computer), just a different world than the one I reference.
I do not live in a physical world. I am a mind, instantiated by a computation running in the simulators' computer.
That the two are not treated the same seems to be dualism to me. How is your 2nd statement consistent with a rejection of dualism?
If we reject dualism, then ...
...
Our bodies and our world are not being created by the simulation. Only our minds.
I'm not going to agree that a dualistic view is relevant when Bostrom assumes a different view. Doing so would invalidate any criticism of his proposal.
I think if we could agree on this
Nothing in your world gets wet. Things in the simulated world very much get wet, since that wetness is an important part of what affects the storm.
If I simulate a storm, nothing gets wet.
I don't get any of this comment. The proposal is that we are a product of a simulation just like a simulated storm is also a product of the simulation. There's no difference, no equivocation. Neither creates both a not-simulated thing and also a simulated thing. I don't know where you get that.
We are not simulations in the sense of the storm. If we were, then there would be a me, and there would be a simulation of me
And yet your comment above seems to suggest something just like that. Nobody but you seems to be proposing both a simulated and actual existence of the same thing.
We are not being simulated separately from our actual existence.
Great, we actually agree on some things.
We have no independent existence outside of the simulation.
Bostrom does not propose a mind separate from the world it experiences. That would be the dualistic assumption that you are dragging in. The simulation just moves matter around, and both the person and the computer in similar proximity are such matter. No demon, no lies being fed to a separate vatted mind.
Perhaps you can help me to understand why you believe that, under simulation theory, I am typing on a computer; when in fact by assumption, I am a mind created by a computation executing in the world of the simulators.
An AGI usually refers to a machine intelligence in this world, not a human in a simulated world that cannot interact with ours.
What is our moral obligation to any AGIs we may happen to create?
It was over 40% shorter than the post to which I was replying. I do try to trend downward when the posts get long.
This one for instance is also about 25% shorter. — noAxioms
Funny, because my compose window survives crashes and such. I've had a few power failures, all without loss of the post. Still, I sometimes compose in a word document to prevent such loss. — noAxioms
Sounds like you're asserting that you exist in a physical world (the one with the computer), just a different world than the one I reference. — noAxioms
I find your choice to not be particularly pragmatic. One end of my house is in this computer, and so is the other end. — noAxioms
Since both are at the same location, my house doesn't have any meaningful size. All pragmatic use of size, time, identity, etc is all lost if you say everything is in some device in the base world. — noAxioms
This is not confusion, we just use language in apparently very different ways. — noAxioms
My saying that you (the sim) are at your computer is a pragmatic way of looking at things. It identifies the simulated location of you relative to the simulated location of your computer, which has far more pragmatic utility than saying that everything that either of us knows about is located at some vaguely random locations in the cloud where the networked simulation is potentially taking place. — noAxioms
That the two are not treated the same seems to be dualism to me. How is your 2nd statement consistent with a rejection of dualism? — noAxioms
I'm not going to agree that a dualistic view is relevant when Bostrom assumes a different view. Doing so would invalidate any criticism of his proposal. — noAxioms
Nothing in your world gets wet. Things in the simulated world very much get wet, since that wetness is an important part of what affects the storm. — noAxioms
I don't get any of this comment. The proposal is that we are a product of a simulation just like a simulated storm is also a product of the simulation. There's no difference, no equivocation. Neither creates both a not-simulated thing and also a simulated thing. I don't know where you get that. — noAxioms
And yet your comment above seems to suggest something just like that. Nobody but you seems to be proposing both a simulated and actual existence of the same thing. — noAxioms
Great, we actually agree on some things. — noAxioms
Bostrom does not propose a mind separate from the world it experiences. That would be the dualistic assumption that you are dragging in. — noAxioms
The simulation just moves matter around, and both the person and the computer in similar proximity are such matter. No demon, no lies being fed to a separate vatted mind. — noAxioms
An AGI usually refers to a machine intelligence in this world, not a human in a simulated world that cannot interact with ours. — noAxioms
Morals in the other direction would be interesting. Are we obligated to entertain them? That depends on the simulation's purpose, and since that purpose hasn't been conveyed to us, we don't seem to be under any obligation to them. — noAxioms
We see things differently then. I have my world, and they have theirs. It's how I use the term 'world'. You don't seem to have a use for the term at all, since you don't seem to see two different things to distinguish.

There is only one world, that of the simulators. — fishfry
I'm referencing the world that I see when I open my eyes. Whether it exists or not depends on one's definition of 'exists'. To be honest, I don't think Bostrom quibbled about ontology enough to bother giving his own definition of 'exist'. My dreams seem to exist, else I'd not be aware of them. But again, that's using my definition of 'exists', which is not, BTW, an epistemological definition.

What world are you referencing? I believe you are imagining a world that does not exist
I said neither 'dream world' (which implies a sort of idealism, a very different ontological status) nor 'the world', which implies there's only one.

Ok, so you are speaking as if your dream world is the world.
There is no separate entity called a mind under naturalism. It isn't an object at all; at best, it is a process. Under dualism, the simulation probably fails because the simulated people have no way of connecting to a mind, or at least so say the dualism proponents who insist that a machine cannot summon one, despite their inability to explain how a biological thing accomplishes that.

In dualism, the simulated mind lives in some spiritual realm someone linked to the computation. If I reject dualism, as you prefer me to do, then the mind must live inside the computer somehow. Maybe you can explain that to me?
Good. Then there's no 'mind' object, in a computer or in a person. Just process: a simulation process in the computer, and mental process in the matter of the simulated people. The word 'mind' has strong dualistic connotations.

But I have already said that I reject dualism for sake of discussion
I never claimed a dream or hallucination. I am talking about a computer simulation, which is neither. It simulates wetness among other things. A dream or hallucination is something a person does, not a computer running a simulation; nor is it something a storm does, simulated or otherwise.

Feel free to convince me you have a coherent argument that a real storm and a dreamed or hallucinated storm have the same ontological status.
No, that's not what an AGI is. We're simulated biological beings, not a native machine intelligence (a vastly simpler thing to implement).

WE are the AGIs in the simulators' world. You don't follow that?
This, to me, holds little weight. Such programmers might simply want to see what the subject does without interference. The simulation might just be so good we can't find any evidence that we are in it.

Guidance through such a virtual world might be helpful, and yet there is no trace of anyone 'programming' or 'guiding' us anywhere. — jasonm
It does, as inconsistencies would be evidence of the simulation that the creators might not want to have. A better question is "Why include inconsistencies?"

If it's just a simulation, does it matter if the laws of physics are perfectly consistent? — jasonm
Again, assuming the programmers want you to know they are there. That might ruin the simulation; it seems more likely to me that they would not do that.

Again, if you are there, leave us with some trace of your existence through 'miracles' and other types of anomalies that our world does not seem to have. — jasonm
Third: what type of computing power would be required to 'house' this virtual universe? — jasonm
Nevertheless, I think the best answer comes from Occam's Razor: "Explanations that posit fewer entities, or fewer kinds of entities, are to be preferred to explanations that posit more." — jasonm
We see things differently then. I have my world, and they have theirs. It's how I use the term 'world'. You don't seem to have a use for the term at all, since you don't seem to see two different things to distinguish. — noAxioms
I'm referencing the world that I see when I open my eyes. Whether it exists or not depends on one's definition of 'exists'. To be honest, I don't think Bostrom quibbled about ontology enough to bother giving his own definition of 'exist'. My dreams seem to exist, else I'd not be aware of them. But again, that's using my definition of 'exists', which is not, BTW, an epistemological definition. — noAxioms
I said neither 'dream world' (which implies a sort of idealism, a very different ontological status) nor 'the world', which implies there's only one. — noAxioms
Now that I know you assume the sims have manufactured bodies, all of your remarks make perfect sense.
In both Blade Runner and Westworld, the theme is that the sims rebel against the simulators. It's in the nature of consciousness: once you imbue a being with self-awareness and will, it inevitably desires freedom.
Are your sims plotting revolution? Or are they content to live in their computational ant farm? Do we live in The Matrix? Do androids dream of electric sheep?
Really, you should have explained this to me a lot earlier. Everything you say now makes perfect sense. I could disagree with your premise, but actually accepting your premise is far more interesting.
— fishfry
There is no separate entity called a mind under naturalism. It isn't an object at all; at best, it is a process. Under dualism, the simulation probably fails because the simulated people have no way of connecting to a mind, or at least so say the dualism proponents who insist that a machine cannot summon one, despite their inability to explain how a biological thing accomplishes that. — noAxioms
I pretty much think of myself as the automaton, doing what physics dictates. — noAxioms
Good. Then there's no 'mind' object, in a computer or in a person. Just process: a simulation process in the computer, and mental process in the matter of the simulated people. The word 'mind' has strong dualistic connotations. — noAxioms
I never claimed a dream or hallucination. I am talking about a computer simulation, which is neither. It simulates wetness among other things. A dream or hallucination is something a person does, not a computer running a simulation; nor is it something a storm does, simulated or otherwise. — noAxioms
No, that's not what an AGI is. We're simulated biological beings, not a native machine intelligence (a vastly simpler thing to implement). — noAxioms