• Ludwig V
    1.7k
    Similarly, a person (and not a brain) is what is conscious. Not even that, because an environment is also needed.noAxioms
    Yes, that's right. I agree also that persons, as we understand them, can only exist in an environment. Whether one includes that environment as part of the person or not is a tricky question and I don't know the answer. In our paradigm case (the only one that we actually know), a person is a human being, i.e. an animal. An animal is a physical body. (I'm setting aside the dualistic possibility of persons existing without a body.) Some physical structures are machines, and hence not animals, but I don't see why such structures cannot possibly constitute people.
    But if they are to constitute people, they would indeed need at least to behave as people spontaneously and not because they are following a detailed set of instructions about what to do and when. They need to learn to do that for themselves. So a machine that was designed and built to behave as a person could not be anything except a sim.

    It has to start somewhere, so the womb would be outside the system, an imitation womb, empirically (to the child) indistinguishable from a real mother, in every way. I suppose the placenta would be included in the system since it is, after all, the child and not the mother, but when it is severed, the sim needs to remember which half to keep as part of the system.noAxioms
    So I think you are right to argue that some such process as this would be necessary to create a machine person. The catch is that I'm not at all sure that this would be a sim, rather than a real person - especially as the process of its creation would be very close to the process of creating human beings. I think this is the same point as here:-
    You said you would start the sim as a zygote. I am asking: what is the difference between this zygote and a zygote in reality? Or is the zygote you are postulating a mere simulation of a zygote? If so, that seems problematic.NotAristotle

    Irrationality is required for consciousness? A computer is rational? I question both. Deterministic is not not rationality. I do agree that irrationality is a trait of any living creature, and a necessary one.noAxioms
    Well, perhaps I'm being provoking. My point is that when people act, they do so on the basis of values that they hold, that is, their emotions and desires. It may be a distortion to call them irrational, but standard ideas of logic and reason are well recognized (since Aristotle) to be incapable of generating actions on their own.
    Calculating is widely recognized as a rational activity. To me, it makes no sense to deny that computers can calculate. The catch is that such rational activities are not sufficient to be recognized as a person. Ever since the Romantic protest against the Enlightenment, emotion and desire have been regarded as essential elements of being a human person.

    Sometimes. One is often reft of rational thought while dreaming, but not always. I can tell sometimes, and react to knowing so.noAxioms
    This may be a side-issue. I know that there is an issue about lucid dreaming. But I doubt whether the unsupported memory of a dreamer is sufficient to establish the phenomenon, except that I accept that the reports exist and I don't believe they are lies. But the possibility that the dreamer is dreaming the phenomenon cannot, it seems to me, be excluded.

    To a simulation of low level physics, they pretty much are the exact same category,noAxioms
    I don't know what you mean by "a simulation of low level physics", but you clearly have a different concept of categories from mine.

    That's (sc. Descartes' argument) a great example of rationalization. It was his target all along.noAxioms
    A side-issue. If you call it a rationalization, you have already decided the argument is invalid or unsound. But knowing that someone had in mind a specific conclusion before formulating the argument does not, of itself, show that their argument is invalid or unsound.

    Would a simulation of agonising pain be actually painful? If it was, it can't really be a simulation, but as the primary attribute of pain is the feeling of pain, there's nothing else to simulate.Wayfarer
    Another side-issue, but you are presupposing a dualistic concept of pain. On that concept, you are right. But whatever exactly may be the relevant conception of pain, I think your point survives, in the sense that whatever caused the pain would have to cause real pain and not zombie pain, just as the anger would have to be real anger, etc.

    If I am a BIV, I cannot make claims like "I am a brain in a vat" because I am making no reference to the "brain" and "vat". So, if I say that sentence, it is false.L'éléphant
    If I am a brain in a vat, my claim is true, even if I can't refer to brain and vat, so long as "brain" and "vat" refer to the appropriate objects in that context. Perhaps I cannot know that my claim is true, but that's different. Actually, I don't really see why a brain in a vat cannot refer to itself as a brain in a vat.
  • Wayfarer
    22.8k
    Descartes starts with all this skepticism, and builds up from this simple state that, lacking any knowledge of modern physics, leaves him with something he decides can be known with certainty. I'm fine with that, and I'm admittedly not very familiar with his work, but he goes from there to conclude, surprise, surprise, the exact mythological teachings of his own culture and not any of the other thousand choices of other cultures. That's a great example of rationalization.noAxioms

    The logic of cogito ergo sum is neither rationalisation nor myth, it is the indubitable fact that, in order to be subject to an illusion, there must be a subject. And this whole line of argument was anticipated by Augustine centuries prior:

    "But who will doubt that he lives, remembers, understands, wills, thinks, knows, and judges? For even if he doubts, he lives. If he doubts where his doubts come from, he remembers. If he doubts, he understands that he doubts. If he doubts, he wants to be certain. If he doubts, he thinks. If he doubts, he knows that he does not know. If he doubts, he judges that he ought not rashly to give assent. So whoever acquires a doubt from any source ought not to doubt any of these things whose non-existence would mean that he could not entertain doubt about anything." (Augustine, On the Trinity 10.10.14, quoted in Richard Sorabji, Self, 2006, p. 219).

    I have my doubts about Descartes, in that I believe his dualistic separation of the physical and mental as separate substances is profoundly problematical and has had hugely deleterious consequences for Western culture, but as for the essential veracity of his ‘cogito’ argument, I have no doubts.

    real pain and not zombie painLudwig V

    I had the idea that zombies don’t feel pain, at least they never do in zombie flicks. You have to literally dismember or disintegrate them to overcome them, merely inflicting blows or wounds does nothing.
  • J
    695
    If you want to read a first-rate philosopher discuss all these issues, try Reality+, David Chalmers' new book. It sheds light on a lot of what's being debated here.
  • noAxioms
    1.5k
    a person is a human being, i.e. an animal. ... Some physical structures are machines, and hence not animals, but I don't see why such structures cannot possibly constitute people.Ludwig V
    There's a contradiction here. A person is an animal. A machine is not an animal. But a machine can be a person? That means a machine is both an animal and not an animal.

    But if they are to constitute people
    I think you are again envisioning imitation people, like Replicants. That's a very different thing than the simulation hypothesis which does not involve machines pretending to be people.
    If you're going for an empirical test, it doesn't work. If a convincing replicant is possible in a sim but not in reality, the runners of the sim can see that and know that their simulation isn't very accurate, and the people in the sim don't know that replicants should be different, so they have no test.

    Secondly, where do you get this assertion that machines must lack spontaneity? I mean, deep down, you're a machine as well running under the same physics. I think you're confusing determinism with predictability.

    So I think you are right to argue that some such process as this would be necessary to create a machine person.
    No. The simulation is creating a biological person, not a machine person. Try to get that. Replicants are not grown from a zygote. A replicant can be trivially tested by an x-ray or just by sawing it in half, or so I suggest. Apparently in Blade Runner it was very hard to tell the difference, but that's also fiction.

    Calculating is widely recognized as a rational activity.
    That's right. Physics doesn't do spontaneous things (quantum mechanics excepted, which is a big problem if you want to simulate that). But classical physics isn't spontaneous, and yet spontaneity emerges from it, or at least the appearance of it. Anything in the simulation would have to behave just like that.

    To me, it makes no sense to deny that computers can calculate. The catch is that such rational activities are not sufficient to be recognized as a person.
    Yet again, no computer is pretending to be a person, so it isn't a problem.

    If you call it a rationalization, you have already decided the argument is invalid or unsound.
    Probably invalid in this case, and yes, I've decided that, but on weak grounds since I have never followed the argument from beginning to a preselected improbable conclusion.

    Would a simulation of agonising pain be actually painful?
    If the simulation is any good at all, and presuming monism, then yes, it would be painful to the subject in question. No, the computer running the sim would not feel pain, nor would the people responsible for the creation of the simulation, despite suggestions from Kastrup that they apparently should.



    The logic of cogito ergo sum is neither rationalisation nor myth, it is the indubitable factWayfarer
    I didn't say that was the rationalization. I even accepted it since it was a reasonable statement in the absence of modern physics. It is him building on that foundation to his later conclusions that is the rationalization, which I clearly spelled out in my post.
    As for it being indubitable, well, I dubit it, as I do everything *. The Latin phrase translates roughly to 'there is thinking, therefore thinker', which suggests process, a state that evolves over time, but presumes (without doubt) that all said states are states of the same thing, which is for instance in contradiction with quantum interpretations like MWI, which you probably deny because it is fairly incompatible with the dualistic view of persisting identity. That denial is fine since nobody can force your opinion, but absent a falsification of the interpretation, the assertion is hardly indubitable.
    And no, I don't accept MWI either, but I don't claim it has been falsified.


    * why isn't 'dubit' a word? It ought to be.
  • Wayfarer
    22.8k
    I even accepted it since it was a reasonable statement in the absence of modern physics.noAxioms

    At risk of opening a can of worms, how does 'modern physics' come into it?

    As for it being indubitable, well, I dubit it, as I do everythingnoAxioms

    If you dubit it, you must exist, in order to dubit it. If you don't exist, then your opponent has no argument to defend.

    Persistence of self-identity over time is not discussed in Descartes, but I don't believe it has much bearing on the argument. Again, any statement along the lines of 'I (the speaker) do not exist' is self-contradicting.

    all said states are states of the same thingnoAxioms

    Beings are not objects or things (except from the perspective of other beings - I see you as 'an object', in a way, although to treat you as an object would be, at the very least, discourteous). The nature of the identity of a being is quite a different matter to the nature of the identity of a thing.

    In fact, this is where I criticize Descartes - he designates the subject as 'res cogitans', which is translated as 'thinking thing'. And I think there's a deep, implicit contradiction in that designation, as it obfuscates a real distinction between 'things' (as objects) and 'beings' (as subjects of experience.)

    (In Crisis of the European Sciences, Husserl concurs that describing the subject (res cogitans) as a "thing" does not do justice to the nature of the subject of experience. His phenomenological method emphasizes the intentionality of consciousness—consciousness is always consciousness of something—and the embodied and situated character of human existence. This perspective seeks to bridge the gap between the subject as a mere "thing" and the subject as an experiencing, intentional "being." Descartes' formulation overlooks the role of consciousness and the subjective, experiential dimension of being in constituting the world of objects (and hence reality) as it is experienced by living beings. Descartes, in removing that situated and intentional nature of the subject, and seeking certainty in mathematical abstractions, in fact gave rise to the worldview which makes the 'brain-in-a-vat' scenario conceivable in the first place - as the IEP article indicates.)
  • L'éléphant
    1.6k
    As this thread is not about BIV in particular, but simulation, I will respond to the below briefly:

    If I am a brain in a vat, my claim is true, even if I can't refer to brain and vat, so long as "brain" and "vat" refer to the appropriate objects in that context. Perhaps I cannot know that my claim is true, but that's different. Actually, I don't really see why a brain in a vat cannot refer to itself as a brain in a vat.Ludwig V
    You do not understand what "refer" means, in other words.

    I don't follow that. If it says (without evidence) that it is a BiV, then the utterance is true if that is indeed the fact.noAxioms
    Then you misunderstand what "true" means in statements.
  • Ludwig V
    1.7k
    The logic of cogito ergo sum is neither rationalisation nor myth, it is the indubitable fact that, in order to be subject to an illusion, there must be a subject.Wayfarer
    The analysis of Descartes' argument is a bit off-topic here, so I'll resist commenting.
    I have my doubts about Descartes, in that I believe his dualistic separation of the physical and mental as separate substances is profoundly problematical and has had hugely deleterious consequences for Western culture, but as for the essential veracity of his ‘cogito’ argument, I have no doubts.Wayfarer
    But I can't resist saying that I agree with you.

    I had the idea that zombies don’t feel pain, at least they never do in zombie flicks. You have to literally dismember or disintegrate them to overcome them, merely inflicting blows or wounds does nothing.Wayfarer
    Yes. I did not put my point well. I was thinking of philosophical zombies, which would (if I've understood the idea correctly) not behave like zombies in the flicks.

    There's a contradiction here. A person is an animal. A machine is not an animal. But a machine can be a person? That means a machine is both an animal and not an animal.noAxioms
    I mean, deep down, you're a machine as well running under the same physics. I think you're confusing determinism with predictability.noAxioms
    Are these two remarks compatible? My point is that there is no easy and clear way to state what the Turing hypothesis is trying to articulate.
    I think you are again envisioning imitation people, like Replicants. That's a very different thing than the simulation hypothesis which does not involve machines pretending to be people.noAxioms
    Thank you for the clarification. I misunderstood what the thread was about. My apologies. It is clear now that I haven't understood what the simulation hypothesis is. However, when I checked the Wikipedia - Simulation hypothesis, I found:-
    Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct).
    For me, a conscious being is a person and a simulated person is not a person, so this confuses me. Can you perhaps clarify?

    why isn't 'dubit' a word? It ought to be.noAxioms
    Well, since you have now used it, and I understand it (roughly, I think), it is a word now. Who knows, it may catch on and then you'll be awarded a place in the dictionaries of the future!


    I agree that BiV is a different kettle of fish and I don't particularly want to pursue it, but I can't resist one reply, because your remark was so incomprehensible to me. I don't expect to resolve our differences, just to clarify them a bit.

    You do not understand what "refer" means, in other words.L'éléphant
    You seem to think I cannot refer to anything that I have not experienced. But the reference of a word is established in the language in general, not by what I may or may not have experienced. So I can refer to the President of the United States even if I don't know that Joe Biden is the President.
    Then you misunderstand what "true" means in statements.L'éléphant
    I agree with @noAxioms, except that I would add that it's not something it can justify on the basis of its subjective experience.
  • Patterner
    1.1k
    There is no technology constraint on any pure simulation, so anything that can be done by computer can be done (far slower) by paper and pencil. That means that yes, even the paper and pencil method, done to sufficient detail, would simulate a conscious human who would not obviously know he is being simulated.noAxioms
    It seems to me you cannot simulate with paper and pencil, because it is not an active medium. You can write about the game of basketball in all conceivable detail. You can write down every rule, and describe as many scenarios as you like, explaining how each rule applies at each moment. You can describe every required object, as well as the physical, mental, and emotional characteristics of every possible player. You can write all this down in every conceivable detail, but it would never be a basketball game.

    You can describe a game that actually took place, or a fictitious one, in every conceivable detail. Exact speed and spin of the ball at every moment. Exact angle it took every time it hit the floor or backboard. Exactly how it lost its spherical shape with each impact. Heck, even how much sweat came out of each of every player's pores.

    In neither scenario is there an actual basketball game. Not even a simulated one. Because you need action for a simulation. It is just squiggles on paper that, when someone who knows what those squiggles represent interprets them, describe events and possible events, and allow the reader to imagine any events that you have not described (assuming you have not described every possible event). But the events are not taking place. Not even as a simulation. There is no action.

    Even an actual gathering of all the people and objects required for a basketball game is not a basketball game if all the players do not act in accordance with the rules.

    If you program everything necessary to simulate consciousness into a computer**, but never hit Run, you will not have a simulated consciousness. If it is running, and you hit Stop, or cut the power, you no longer have a simulation.


    **You would have actual consciousness. There is no such thing as simulated consciousness.
  • noAxioms
    1.5k
    At risk of opening a can of worms, how does 'modern physics' come into it?
    I joined this and other forums to find out how the prominent philosophers (the ones you learn of in class) dealt with modern physics (narrowing the search to recent ones of course) and found that for the most part, they either didn't know their physics, or didn't care about it.
    So I learned physics, or at least the parts of it relevant to the subjects I cared about.

    Relativity cast significant doubt on Newtonian absolutism, where there was one preferred frame, time was posited to be something that flows or progresses, there was a preferred moment in time, and the universe was static and either of infinite age or somehow set in motion from some initial state at some point. Many religious myths (especially the creation parts) require the universe to be contained by time instead of the other way around, and this did not become apparent until about 110 years ago. The universe having a finite age is an idea about a century old, and some religious teachings did at least bend with that one and put the creation event there.

    Quantum mechanics really threw a spanner into the gears with suggestions that ontology might work backwards (that existence depends on interaction with future things), that identity of anything (electrons, rocks, people) is not at all persistent and thus I am not the same I as a second ago.

    You can of course pick an interpretation consistent with your preferences and avoid the implications of the ones you don't like, but if doubt is to be eradicated, all the alternative interpretations contradicting the thing of which you are certain must be falsified.


    And who knows what else might get discovered. Nobody saw QM coming, so all these people who held certain beliefs with certainty found themselves to be wrong or at least potentially wrong. So a declaration of 100% certainty is irrational. I mean, my certainty rests on the sum of two numbers (a pair of arbitrary real numbers say) being exactly one other real number, always and anywhere. I don't significantly doubt that, but I still question it. What if it's only a property of this universe that such a sum comes to that one solution and not a different one elsewhere?
    Persistence of self-identity over time is not discussed in DescartesWayfarer
    Indeed it isn't, but the assumption is implicit. It's too obvious to bother calling out explicitly, or at least it was obvious until ~50 years ago.

    Beings are not objects or things
    Your opinion. The opinion of others may vary.


    I was thinking of philosophical zombiesLudwig V
    I knew what you meant, even if Wayfarer chose to reply to what you said instead of what you meant.

    My point is that there is no easy and clear way to state what the Turing hypothesis is trying to articulate.
    The Turing test (the closest a 'Turing hypothesis' gets is the Church-Turing thesis, which concerns what is computable, and which is oddly relevant below) is an intelligence test for when a machine's written behavior is indistinguishable from that of a human. The large language models are getting close, and the easy way to tell the difference is to not ask them questions with factual answers. They also are not designed to pass the Turing test, so all one has to do is ask them what they are.

    Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct).
    For me, a conscious being is a person and a simulated person is not a person, so this confuses me. Can you perhaps clarify?
    A simulated person would be a person, just in a different universe (the simulated one). It's likely quite a small universe. You seem to define 'person' as a human in this universe, and no, the simulated person would not be that.

    why isn't 'dubit' a word? It ought to be.
    — noAxioms
    Well, since you have now used it, and I understand it (roughly, I think), it is a word now.
    And it was already used in somebody else's reply.


    It seems to me you cannot simulate with paper and pencil, because it is not an active medium.Patterner
    Not sure what the term 'active medium' means. Googling it didn't help. I can implement a Turing machine armed with nothing but paper and pencil. Per the Church-Turing Thesis mentioned by mistake above, that means I can do anything that is computable, including the running of the simulation.

    The papers hold not a description of how the simulation works, nor a novel about the lives of the characters simulated, but rather are utilized as memory in the execution of the algorithm, which is doing exactly what the high-power computer is doing. Sure, some of the paper needs to hold the algorithm itself, just like the computer memory is divided into code space and data space.
    The pencil exists to write new memory contents, to change what a paper says to something else, exactly as a computer rewrites a memory location.
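    To make that concrete, here is a minimal sketch (in Python, purely illustrative) of what the pencil-and-paper execution amounts to: a table of rules, a tape that gets rewritten one cell at a time, and nothing else. The particular machine encoded here is a made-up example that just appends a 1 to a string of 1s; per the Church-Turing thesis, any computable process, the simulation included, could be encoded the same way, just with an astronomically larger rule table and tape.

```python
# Minimal sketch of the paper-and-pencil idea: the "tape" is the stack of
# numbered sheets, and every rule application is one pencil stroke that
# rewrites a single cell. The machine below is a made-up example that
# appends a 1 to a unary string.

def run_turing_machine(tape, rules, state="start", head=0, halt="halt"):
    cells = dict(enumerate(tape))       # sparse tape: unwritten cells are blank
    while state != halt:
        symbol = cells.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol        # the pencil rewrites this one cell
        head += 1 if move == "R" else -1
    return [cells[i] for i in sorted(cells)]

# Rule table (the "algorithm" sheets): scan right over 1s, write a 1 on the
# first blank cell, then halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run_turing_machine(["1", "1", "1"], rules))  # -> ['1', '1', '1', '1']
```

    Nothing in that loop cares whether a silicon chip or a patient person with a pencil carries out each rewrite; only the rules and the recorded state matter.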

    If you program everything necessary to simulate consciousness into a computer**, but never hit Run
    But I am hitting 'run'. I wouldn't need the pencil if I didn't 'run' it.
  • Ludwig V
    1.7k
    A simulated person would be a person, just in a different universe (the simulated one). It's likely quite a small universe. You seem to define 'person' as a human in this universe, and no, the simulated person would not be that.noAxioms
    I describe human beings, in contexts like this, as our paradigm of a person. That's not exactly a definition - I'm not aware of any definition that is adequate. A paradigm, for me, is an example or sample that one uses in an ostensive definition. However, I think that looking for definitions is inadequate on its own, because the important feature of people is the way we interact with them, as different from the way we interact with objects.

    I have to say, if these beings are to be conscious, I wish you luck in getting your project through your research ethics committee.

    My question now is: why not just talk about people living in a different universe? (I'm not going to get picky about the point that the sims you are describing are clearly in the same universe as we are. I would prefer to describe their situation as being in a different lived world from us. Though even that is not quite right.)

    Talking of sims, do you regard chess or (American) football as a simulation of war? That is what they say of both (only they don't use the word "simulation".)
  • wonderer1
    2.2k
    I have to say, if these beings are to be conscious, I wish you luck in getting your project through your research ethics committee.Ludwig V

    :up:
  • wonderer1
    2.2k
    Has anyone here read Stanislaw Lem's The Cyberiad?

    Much earlier than Bostrom, and if not the best, at least the funniest thinking on such topics.
  • Patterner
    1.1k
    Not sure what the term 'active medium' means. Googling it didn't help.noAxioms
    That's because I just made it up. Sorry. I'm not well read in almost anything that's ever discussed here. There are many topics in which I'm not at all read. I know what I want to say, but often don't know what words are normally used. I had hoped I explained it well enough to make what I am thinking clear.


    But I am hitting 'run'. I wouldn't need the pencil if I didn't 'run' it.noAxioms

    If you say the simulation is not found only in the paper and the squiggles on it, but also in the pencil, the hand holding the pencil, and the mind directing the pencil, you still cannot simulate human consciousness this way. I know human consciousness is a fairly hotly contested issue. But does anyone disagree that it involves multiple processes taking place simultaneously? If we agreed that a process can take place in the scenario you're describing, you cannot write multiple things simultaneously. You can't write two, much less the presumably huge number that are required for human consciousness.

    You can write, "The Following list of 200 processes occur simultaneously." But writing that doesn't make it happen. That can't happen with things written on paper. It can't happen if you write the words of one process over top of the words of another process. It can't happen if you have different processes on different sheets of paper, and stack them on top of one another.

    It can't even happen in the mind that is writing these things down. Nobody can hold that many things, much less that many processes, in their mind at the same time. (If someone could, would they need to bother with the paper and pencil?)

    At no time, in no sense, is everything needed for human consciousness happening at the same time in the paper and pencil scenario.

    If a computer can simulate human consciousness, it would have to be because it can run the same number of processes at once that our brains can run.
  • Patterner
    1.1k
    I just corrected the last sentence of my previous post.
  • noAxioms
    1.5k
    I describe human beings, in contexts like this, as our paradigm of a person.Ludwig V
    Remember, we're not worrying about what those running the simulation are calling the simulated things. We're supposing that we are the subjects here, the ones being simulated, and we (and only we) call ourselves human beings or people. That's the only definition that matters.
    It is the people in the simulation that are tasked with finding evidence that they are the subject of a simulation. What we're called by the occupants of the reality running the simulation is irrelevant.

    I have to say, if these beings are to be conscious, I wish you luck in getting your project through your research ethics committee.
    That's kind of like suggesting that God is unethical to have created a universe that has beings that feel bad, and yes, there are those that suggest exactly that.

    My question now is: why not just talk about people living in a different universe?
    I wanted a universe that is simulated, instead of being instantiated in some other way. I do suppose that the simulated universe is a part of the container universe, but it's still a separate universe. That's questionable if it's an open simulation, but not all of them are. Much depends on the goal of running the simulation. Bostrom actually posits what that purpose would be, even if it is a totally naive one.

    the sims you are describing are clearly in the same universe as we are.
    It is the same universe as we are, because I posit that we are the simulated ones. How would we tell if that were true? The topic isn't about how to run a sim. The topic is about what it's like to be one.

    Talking of sims, do you regard chess or (American) football as a simulation of war?
    There are definitely war elements in both, but that makes it more an analogy than a simulation. They do run simulations of war all the time, pretty much continuously. Yay cold war. Those simulations don't simulate the consciousness of anybody, and I don't think they even have people beyond statistical counts.



    I know human consciousness is a fairly hotly contested issue. But does anyone disagree that it involves multiple processes taking place simultaneously?Patterner
    It is a parallel process, yes. Per relativity, simultaneous is an ambiguous term for events, and no, nothing in any physical system requires spatially separated components of any process to be simultaneous in any frame. Per the principle of locality, one cannot depend on the other (they are outside each other's causal light cone), and thus the interactions can be simulated in any order, serially.
    The computer would likely do the same thing, but a truly serial process would be much like a Turing machine, an incredibly inefficient design, but performance was never the point.

    If we agreed that a process can take place in the scenario you're describing, you cannot write multiple things simultaneously.
    Granted, but there's no need to, per the above comment. Any such transactions can be computed in any order without altering the outcome. Per the principle of locality, no spatially extended process can have a requirement of simultaneous operation.

    A regular computer would do it that way as well, but the big weather simulation machines are often very parallel, operating on large vector quantities rather than single numbers (technically referred to as SIMD (single instruction, multiple data) machines). The Cray supercomputers worked that way, but I'm not sure how much modern high-end machines use SIMD architectures. Point is, doing it serially is just slower, but it doesn't produce a different outcome.

    At no time, in no sense, is everything needed for human consciousness happening at the same time in the paper and pencil scenario.
    On the contrary, time in the simulation has nothing to do with time for the guy with the pencil. Our pencil guy can set everything aside for a year and get back to it later. The simulated guy will not notice. No doubt each transaction will have a location/timestamp, and there's nothing preventing multiple transactions (all the transactions in a single iteration of the data) from having the same recorded timestamp. That is pretty much how simulations are done. Here is the state at time X, and then it uses that state to compute the next state at X+<increment> where the increment might be a microsecond or something. It might take a minute for a machine to simulate all the transactions to generate the next state. It might take the pencil guy several lifetimes to do the same thing, so we're going to need that society to train his replacements each time he retires.
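    A minimal sketch of that stepping scheme, with an assumed toy state (a short list of cell values) standing in for the real thing: every transaction within a step reads only the old state and writes the new one, so processing them serially, in any order, gives the same result, and they all carry the one timestamp.

```python
import random

# Minimal sketch of the stepping scheme described above. The state here is a
# made-up toy (a list of cell values undergoing diffusion); the point is only
# the bookkeeping: each step reads the state at time t and writes the state at
# t + dt, and every "transaction" within the step reads only the old state.

def step(state, dt):
    old = state["cells"]
    order = list(range(len(old)))
    random.shuffle(order)                      # order within a step is irrelevant
    new = list(old)
    for i in order:                            # one transaction per cell
        left = old[i - 1] if i > 0 else old[i]
        right = old[i + 1] if i < len(old) - 1 else old[i]
        new[i] = old[i] + 0.1 * (left + right - 2 * old[i])
    return {"t": state["t"] + dt, "cells": new}

state = {"t": 0.0, "cells": [0.0, 1.0, 0.0, 0.0]}
for _ in range(3):
    state = step(state, dt=1e-6)               # the increment could be a microsecond
print(state["t"], state["cells"])
```

    Whether the loop is executed in a nanosecond or over the pencil guy's several lifetimes changes nothing about the sequence of states produced.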
  • Ludwig V
    1.7k
    Remember, we're not worrying about what those running the simulation are calling the simulated things. We're supposing that we are the subjects here, the ones being simulated, and we (and only we) call ourselves human beings or people. That's the only definition that matters.
    It is the people in the simulation that are tasked with finding evidence that they are the subject of a simulation. What we're called by the occupants of the reality running the simulation is irrelevant.
    noAxioms
    So how does this question differ from the brain in a vat, from Descartes' demon or from the supposed possibility that we are all dreaming?
    The topic isn't about how to run a sim. The topic is about what it's like to be one.noAxioms
    So how does this topic differ from the question what it's like to be a bat?

    I'm afraid I didn't realize what the philosophical background is, essentially, Bostrom. I don't find the question interesting, because if we posit that there is no way of telling, then there is no way of telling. Similarly, if there's no way to be a bat without becoming a bat, we can't know what it's like to be a bat.
    The interesting question is under what circumstances we would accept that something we designed and built is a conscious being, i.e. a (non-human) person.

    That's kind of like suggesting that God is unethical to have created a universe that has beings that feel bad, and yes, there are those that suggest exactly that.noAxioms
    This is the traditional problem of evil. I am one of those who think the problem has no solution and that therefore no such God exists. Of course, that doesn't prove that there are not other gods around, or that it is only the Christian conception of God that is wrong.

    There are definitely war elements in both, but that makes it more an analogy than a simulation. The do run simulations of war all the time, pretty much continuously.noAxioms
    I wish I knew what the difference is between a simulation and an imitation, a simulation and a mimicry, a simulation and an analogy, and a simulation and a model.
  • noAxioms
    1.5k
    So how does this question differ from the brain in a vat, from Descartes' demon or from the supposed possibility that we are all dreaming?Ludwig V
    Nothing like dreaming.
    VR has many of the same issues as the first two. The actual simulation hypothesis does not suggest an artificial sensory stream, except necessarily at system boundaries.

    So how does this topic differ from the question what it's like to be a bat?
    We are not bats. It's not about what it's like to be something we're not. We know what it is like to be a human. The question is, how might we (being the subject of simulation) detect that fact?

    I'm afraid I didn't realize what the philosophical background is (essentially, Bostrom).
    Bostrom is half the story. Most popular fictions depict VR, not a sim. Matrix is a good example of a VR, however implausible.

    I don't find the question interesting, because if we posit that there is no way of telling, then there is no way of telling.
    I didn't posit that there are no ways of testing. But depending on the quality of the simulation, it might get difficult. The best test is probably to recognize that there must be limits, and to test those limits.

    The interesting question is under what circumstances we would accept that something we designed and built is a conscious being, i.e. a (non-human) person.
    The 'can a computer think' topic was sort of about that. I suppose we could copy our own design and build an actual biological human, but by some means other than the normal way. Anything else is going to be trivially detectable. Not sure how that 'built' person would get loaded with experience. It's not like you can just upload software to a human. Doesn't work that way.

    From that topic:
    The Turing Test is too weak, because it can be passed with a simulation. Simulating intelligent behavior is not actually behaving intelligently.Relativist
    There is mention of the Turing test in earlier posts here. Passing it with a simulation is doing it the hard way. We're getting close to something that can pass the test now, but nowhere close to actually simulating the way a human does it. Perhaps you, like Ludwig here, mean 'imitation', which anything that passes the Turing test is doing by definition.

    And if a machine passes the test (it's a text test, so there's no robot body that also has to be convincing), then it exhibits intelligent behavior. The test is not too weak. What they have arguably already does this, since a machine can exhibit intelligent behavior (even more so than us) long before it can successfully imitate something that it isn't. I mean, I cannot convince a squirrel that I am one of them, but that doesn't mean they're more intelligent than me. I've done it to birds, speaking their language well enough for them to treat me like one of their own kind. It's not hard with some birds.


    This is the traditional problem of evil.Ludwig V
    Pain is not evil. I'd never want to change myself to be immune from pain. It serves an important purpose, and not an evil one.
    The problem of evil argument against God only has teeth if you posit a God that has and follows the same moral values as we envision, such as it being an act of evil to create something humans deem evil.

    I wish I knew what the difference is between a simulation and an imitation, a simulation and a mimicry, a simulation and an analogy, and a simulation and a model.
    A statue, a puppet, a speaker blaring bird-of-prey noises to scare away geese, and a wooden duck lure are all imitations/mimicry.

    A video game is a VR, which, by definition, feeds artificial sensory input to the real player.

    Conway's game of life (the description of it) is a dynamic model. The execution of the rules of that model (on a computer, paper, pebbles on a go-board, whatever) is a simulation.
    They make computer models of cars. The model is a description of the physical car: what parts are where, what they're made of, how they're connected. The simulation of that model might throw it into a solid wall, or into another car at some high speed, to learn how the initial state in the model deforms under the stresses of that collision. Simulations typically serve some sort of purpose for the runner of the simulation.
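    To make the model/simulation distinction concrete with the Game of Life example: the rule set below is the model; calling the step function (on a computer, or by hand with pebbles on a go-board) is the simulation. The sparse set-of-live-cells representation is just one assumed way to hold the state.

```python
from collections import Counter

# The rule set is the model; executing it, on whatever substrate, is the
# simulation. Rules: a dead cell with exactly 3 live neighbours is born;
# a live cell with 2 or 3 live neighbours survives; everything else dies.

def life_step(live):
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a glider
for _ in range(4):
    state = life_step(state)
print(sorted(state))   # the same glider shape, shifted one cell diagonally
```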
  • Relativist
    2.6k
    And if a machine passes the test (it's a text test, so there's no robot body that also has to be convincing), then it exhibits intelligent behavior. The test is not too weak.noAxioms
    The Turing Test is passed by fooling people into believing there's a human giving responses in a conversation. This is feasible today at least within a limited range of conversation topics. What more are you looking for? A wider range of topics? Regardless, human responses are the product of thought processes (including feelings, reactions, influenced by motivations that could change during the course of the conversation). Example: a human can express true empathy; a computer can produce words that sound like it's expressing empathy - but it actually is not. The human may change her behavior (responding differently) based on this; will the computer?
  • Patterner
    1.1k
    Has anyone here read Stanislaw Lem's The Cyberiad?

    Much earlier than Bostrom, and if not the best, at least the funniest thinking on such topics.
    wonderer1
    Never heard of it. But the first few paragraphs are already a riot!
  • Ludwig V
    1.7k
    It is the people in the simulation that are tasked with finding evidence that they are the subject of a simulation. What we're called by the occupants of the reality running the simulation is irrelevant.noAxioms
    I think that you are not talking about the same question as Relativist. (See below). You are positing that it is people who are "in" the sim - i.e. (I assume) being fed the data.
    Plus, if I've understood you, you are positing that the subjects cannot communicate with whatever is running the sim - they merely seem to themselves to communicate.

    And if a machine passes the test (it's a text test, so there's no robot body that also has to be convincing), then it exhibits intelligent behavior. The test is not too weak.noAxioms
    Here, you are positing that you are starting with a machine. In that case, the question is whether the behaviour is really intelligent or merely seems to be intelligent. But if it's a machine, we already know that it is not intelligent. Actually, I don't think that is right, but even if the response was intelligent, it does not follow that the machine is conscious or sentient.

    The Turing Test is passed by fooling people into believing there's a human giving responses in a conversation.Relativist
    I think that you are not talking about the same test as noAxioms. (See above). Plus you are positing that it is a machine that is responding, so you are begging the question. (As Turing also does in his formulation of the test.)

    The fundamental point is whether we can even formulate the question without begging it. We have to identify the subject of the Turing test as a machine or a person. Whichever we say, we will interpret the responses in different ways. Whatever the machine responds, we will interpret the response as that of a machine - and that will be true. Whatever the person responds, we will interpret the response as that of a person - and that will be true. There is no magic empirical bullet of evidence that will settle the issue.
  • noAxioms
    1.5k
    On the Turing test discussion:

    I think that you are not talking about the same question as Relativist. (See below).Ludwig V
    Indeed. I dragged in Relativist since the topic of Turing test came up, and he suggests that the test is insufficient to determine intelligence.
    The Turing test has nothing to do with a simulated reality, but rather with a device that imitates a human's text responses, as a test of intelligence.

    And if a machine passes the test (it's a text test, so there's no robot body that also has to be convincing), then it exhibits intelligent behavior. The test is not too weak.
    — noAxioms
    Here, you are positing that you are starting with a machine. In that case, the question is whether the behaviour is really intelligent or merely seems to be intelligent.
    Ludwig V
    Here again, the quoted comment concerns the Turing test, not the simulation hypothesis.

    even if the response was intelligent, it does not follow that the machine is conscious or sentient.Ludwig V
    The Turing test is not a test for either of those. There's not even a test that can tell if your neighbor is conscious/sentient. If there was, much of the p-zombie argument would be immediately settled by some empirical test. The whole point of the term 'conscious' is that it is always defined in such a way that is immune from empirical evidence.

    The fundamental point is whether we can even formulate the question without begging it.
    The question is simple. I am communicating with some unknown entity via text messages, much like we do on this forum. The question is, is that with which I am communicating a human or not?

    I don't see begging in that wording. I am a moderator on a different forum, and one job is to spot new members that are not human. They're not allowed. I've spotted several, but it's getting harder.
    I've even been charged human health insurance rates for a diagnosis provided by a machine, and I protested it at the time. They provided no service at all to me, but they charged me anyway.


    The Turing Test is passed by fooling people into believing there's a human giving responses in a conversation.Relativist
    In a text conversation, yes. That's pretty hard to do, and we're not there yet.

    This is feasible today at least within a limited range of conversation topics.
    Well, one of the ideas is to go outside those topics. I mean, none of the chat bots have long term memory, so one of their traits is that they don't ask any questions of their own since they cannot learn. I suppose clarification requests of questions posed to it might count as asking something.

    If the entity was to pass the test, then nothing is off limits. Be insulting to it, or annoying. It should react accordingly. If it doesn't, it's not passing the test. If it does, it is probably already considerably more intelligent than humans, since it requires far more smarts to imitate something you are not than it does to just be yourself. The entity is not human, and to imitate human responses, especially those involving human emotions, would require superior ability. It doesn't require the entity to actually have human emotions. It is not a test of 'is it human?', but rather 'is it intelligent?'.

    What more are you looking for?
    You claimed the test is too weak. I claim otherwise. If it passes, it has long since surpassed us in intelligence. As a test of human-level intelligence, it is more than enough.

    a computer can produce words that sound like it's expressing empathy - but it actually is not.
    It's not empathy, but it very much is expressing empathy. People are also quite capable of expressing empathy where there is no actual empathy, such as the politicians that send their 'thoughts and prayers' to mass-shooting families, but do nothing about the problem.


    On the Simulation discussion:

    You are positing that it is people who are "in" the sim - i.e. (I assume) being fed the data.Ludwig V
    In a VR, yes, exactly that. People are real, and are fed experience of a simulated reality. Every video RPG does this.
    In the simulation case, there is no experiencer in the world running the sim. There are only fully simulated people inside 'the system', and if that system is not closed, the system needs to be fed artificially generated causes from outside. So for instance, if you look up, you see imitation stars, not fully simulated stars.

    This is one of the reasons Tomb-Raider is less abusive of the processing power of your gaming machine than is something like Minecraft. The former is in a tomb, a very confined limited region in need of simulation. Minecraft on the other hand is outdoors, and my son needs to limit his render distance, else the computer can't generate the background as fast as it needs to. So distant things suddenly appear when you get close enough to them, very unlike reality where there is unlimited sight distance. This is only a problem for a VR where speed of computation matters.

    Plus, if I've understood you, you are positing that the subjects cannot communicate with whatever is running the sim
    No. If you can do that, you very much are aware of the creator/creation status. It would be like talking to a god. In a VR, you can talk to the other players, and you can talk to the NPCs if the NPCs have enough intelligence to talk, but you can't talk to anybody outside the simulated universe.
  • Relativist
    2.6k

    Thanks for clarifying the question- sorry I had missed it.

    Regarding the question "are we in a simulation?" I interpret this as similar to "is solipsism true?" It's impossible to prove one way or another, but nevertheless - it's rational to believe we are not.

    Regarding the Turing test: it has been passed - to a degree. See: https://www.reading.ac.uk/news-archive/press-releases/pr583836.html

    Conversely, humans have "failed" the Turing test (https://www.nbcnews.com/news/amp/ncna163206) -- observers inferred that a human's responses were not human.

    Regarding "true" AI: IMO, it would entail a machine engaging in thoughts, learning as we do, processing information as we do, and producing novel "ideas" as we do. Artificial Neural Networks (ANNs) seem the most promising way forward on this front. Progress would not be measured by fooling people, but by showing there are processes that work like our brains do. Benefits include confirming our theories about some of the ways our brains work. The long game: success makes the "simulation hypothesis" that much more incredible, but never impossible.
  • Ludwig V
    1.7k
    Here again, the quoted comment concerns the Turing test, not the simulation hypothesis.noAxioms
    Quite so. But I notice that you don't disagree with what I say. My argument is that if one starts the Turing test by specifying that the subject is a machine, the test cannot provide evidence to the contrary, and this is the version that I have most commonly seen. But if one did start by specifying that it is a person, one would not get any evidence to the contrary either. (If the responses from the machine seem to be intelligent or sentient or whatever, we have to decide whether the responses really are intelligent or sentient or whatever.) Knowing what the subject of the test is governs one's interpretation of the replies, which consequently can't provide evidence either way. That applies also to your version, in which one doesn't know whether the subject is machine or person (and to a version I've seen that provides two subjects, one machine and one human).
    The point is that it is not a question of evidence for or against without a context that guides interpretation of the evidence.

    If there was, much of the p-zombie argument would be immediately settled by some empirical test.noAxioms
    Quite so, and the set-up specifies that there can be no empirical evidence. But then, the argument is devised as a thought-experiment with the aim of persuading us to accept that there are qualia, or some such nonsense.

    The whole point of the term 'conscious' is that it is always defined in such a way that is immune from empirical evidence.noAxioms
    Quite so. That's why the attempt to distinguish between the two on the basis of empirical evidence (Turing test) is hopeless.

    I've even been charged human health insurance rates for a diagnosis provided by a machine, and I protested it at the time.noAxioms
    That's capitalism for you. But it might turn out that the machine is more successful than human beings at that specific task.

    If it does, it is probably already considerably more intelligent than humans, since it requires far more smarts to imitate something you are not than it does to just be yourself.noAxioms
    I think that a machine can diagnose some medical conditions. Whether it can imitate diagnosing any medical conditions is not at all clear to me.

    I am a moderator on a different forum, and one job is to spot new members that are not human.noAxioms
    I frequent another forum which developed criteria for sniffing out AI. However, I may be wrong, but I don't think there is any follow-up on whether people's judgements are correct or not. Do you get confirmation about whether your "spots" are correct or not?

    The entity is not human, and to imitate human responses, especially those involving human emotions, would require superior ability.noAxioms
    Parrots imitate talking. Are they smarter than human beings?

    There are only fully simulated people inside 'the system',noAxioms
    I thought you said that there were people inside the system. Now I'm really confused.

    Progress would not be measured by fooling people, but by showing there are processes that work like our brains do.Relativist
    Yes, the appeal to how things work inside is a popular refuge in these uncertain times. But we don't (can't) rely on our limited understanding of how we work to establish what is the same and what is different. Even if we could, I would not be persuaded to rule out the possibility of personhood simply on the grounds of different internal physical structures. The output is what counts most.
  • noAxioms
    1.5k
    Regarding the question "are we in a simulation?" I interpret this as similar to "is solipsism true?" It's impossible to prove one way or another, but nevertheless - it's rational to believe we are not.Relativist
    In that sense, the two are similar. Also, quite often, in both VR and a true sim, solipsism is true, but you know it because there are clues. We here are envisioning a scenario where the simulated reality is good enough that those clues get harder and harder to find.

    Regarding the Turing test: it has been passed - to a degree.
    Cool. I wasn't aware. Nice controlled test, and kind of pre-chat-bot, which is maybe a good thing. I wonder how trained the judges were; where was the focus of their questioning? To pass today with tools like ChatGPT around, you'd have to dumb down the machine answers since it 'knows' more than any human, even if the majority of what it knows is wrong.

    Conversely, humans have "failed" the Turing test (https://www.nbcnews.com/news/amp/ncna163206) -- observers inferred that a human's responses were not humans.
    It would seem fairly easy to pretend to be an unintelligent machine, but I presume these people were not attempting to appear nonhuman.
    I administer a small Turing test all the time for unsolicited callers on the phone. Most phone bots record, but don't parse, any of your responses, so usually one small question is enough. That will change soon.
    The voice-response ones (with limited options to traverse a menu) comprehend profanity, the use of which is often the fastest way to get a human online.

    Regarding "true" AI: IMO, it would entail a machine engaging in thoughts, learning as we do, processing information as we do, and producing novel "ideas" as we do.
    Agree. The game playing AI does all that, even if it is confined to game playing. Early chess or go playing machines were like self-driving cars, programmed by the experts, using the best known strategies. Then they came up with a general AI (like AlphaZero) that wasn't programmed at all to play anything specific. There was only a way to convey the rules of the game to it, and it would learn on its own from there. After a few days of practice, it could beat anybody and any of the specifically programmed machines. That definitely meets all your criteria.
    It doesn't pass the Turing test, but given enough practice, something like it might. But you can't gain a human experience through introspection, nor via training data from the web. It would have to actually 'live' a life of sorts, and questions to test it should focus on life experiences and not educational facts.


    Progress would not be measured by fooling people, but by showing there are processes that work like our brains do.
    Totally agree. Progress by imitation has its limits, but since a computer is not a human, to pass a Turing test it will always have to pretend to be something it isn't, which is hard to do even well after it has surpassed us in intelligence.

    Benefits include confirming our theories about some of the ways our brains work.
    That is more relevant to this topic. To demonstrate how our brains work, you (probably) have to simulate it. To simulate it, you need to give it state and an environment (all this was brought up in prior posts). The state in particular is not exactly something you can make up. It needs to have grown that way through experience, which means a quick sim won't do. You have to start it from well before birth and run this really complicated simulation through at least years of life, providing it with a convincing environment all the while. Tall order. It would presumably take centuries for a single test to run, during which the hardware on which it is running will be obsoleted multiple times.

    Thanks for joining the topic.


    My argument is that if one starts the Turing test by specifying that the subject is a machineLudwig V
    Then the test is invalid, I agree. If you click the link about the test being passed, the judges did not know which conversations were machines and which were people. They did know that there were five of each. Everybody (judges, machines, human subjects) knew it was a test.

    That's why the attempt to distinguish between the two on the basis of empirical evidence (Turing test) is hopeless.
    The Turing test was never intended as a test of consciousness.

    But it might turn out that the machine is more successful than human beings at [medical diagnosis]
    True. Machines can detect skin cancer better than any human, and that's worth paying for (but there's probably a free app). In my case, the non-doctor tech who saw me googled my symptoms and read back to me verbatim the same information Google gave me at home, but left off the part where it said "see your doctor if you have these symptoms". Obviously no actual doctor was consulted.

    I think that a machine can diagnose some medical conditions. Whether it can imitate diagnosing any medical conditions is not at all clear to me.
    A 3-year-old can imitate giving a diagnosis. It's how daddy ends up covered in 20 bandaids. And if a machine can give a diagnosis (they can), why would it have to imitate an ability it actually has?

    Do you get confirmation about whether your "spots" are correct or not?
    A few are false positives, which are often confirmed by a simple PM to them. The bots don't hold conversations, but rather give single replies to a question, and no more. Short, and often correct but obvious and not particularly helpful. If you reply to a bot-post, the bot will probably not notice it.

    Some are real easy, and can be spotted before they ever submit a single post. Russia was very big on bots that created sometimes hundreds of sleeper accounts that never posted anything. I banned many of them en masse. Those have dried up since, I think because Russia closed its internet connection to the world so the public cannot see how the rest of the world views its war.

    Parrots imitate talking. Are they smarter than human beings?
    No more than is a tape recorder. Parrots don't pass a Turing test.

    I thought you said that there were people inside the system. Now I'm really confused.
    In the Simulation Hypothesis, we are the simulated people, the ones inside the system. Do not confuse this with the VR hypothesis where the people are real and only their experience is artificial. Read the OP if you don't get this distinction.
  • Ludwig V
    1.7k
    we are the simulated peoplenoAxioms
    So I have to imagine myself as being a sim - and hence not a person - and not knowing it?

    The Turing test was never intended as a test of consciousness.noAxioms
    So what was it intended to be a test for? (I assume you mean "intended by Turing"?)
  • noAxioms
    1.5k
    So I wanted to address the Simulation Hypothesis from Bostrom directly.
    I quote only the abstract and a few parts of the intro.

    This paper argues that at least one of the following propositions is true:
    (1) the human species is very likely to go extinct before reaching a “posthuman” stage;
    (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
    (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor‐simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.
    — BostromSimHypothesis
    Posthuman is defined here:
    The simulation argument works equally well for those who think that it will take hundreds of thousands of years to reach a “posthuman” stage of civilization, where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints. — BostromSimHypothesis
    The trichotomy is reasonable, but worded in a misleading way. Point 1 makes it sound like this preposterous posthuman state is somehow inevitable if the human race doesn't meet an untimely demise along the way. This is nonsense, since the posthuman state described is totally unreasonable, and human technology seems heavily dependent on the non-renewable resources on which this gilded age runs.
    The computer envisioned is a molecular machine that isn't electronic, but works with levers and gears and such, very small. But it needs a huge structure to supply energy and dissipate heat. The latter isn't a problem, but a mechanical computer made of individually placed atoms would be phenomenally unreliable, and would be very size-constrained. How does one fetch data from distant locations using levers and shafts and such? The data set required by the description would require far more molecules than the described device would have.

    The third point seems to suggest that all this fictional processing power would be regularly pressed into service doing what he calls 'evolutionary history', a simulation of our ancestors. This is not just unlikely, but actually impossible.
    Say the people from 100 centuries in the future want to simulate the world of today. To do that, they'd need to know the exact state of the world today, down to almost the molecular level, and I know for a fact that nobody has taken such a scan. Furthermore, any simulation of that state would last a few minutes/hours at best and then diverge significantly from what actually happened. So a simulation of one's own history cannot be done. At best, to simulate 'evolutionary history', one might set the world to the state of 20 million years ago with many of the species known to exist at that time, and see what evolves. It won't be themselves, but if those running the simulation are not human, then we're the unexpected thing that evolves instead of them. That's plausible, but it isn't a simulation of their own history.

    More problems arise when they claim to simulate the high-performance machines that run the simulations later in time. He is, after all, claiming that there are simulations being run by the simulations. He seems to have no idea how inefficient this would be: it takes millions of instructions to simulate a single interaction, coupled with all the side effects. I've written code to simulate the running of code (for profiling purposes). It didn't simulate transistors or anything; it just needed to assume that the processor works correctly and simulate at the instruction level. It still took thousands of instructions to simulate one instruction.
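    That profiler isn't quoted here, but a toy fetch-decode-execute loop for a made-up three-instruction machine shows why even instruction-level simulation costs many host operations (dict lookups, branches, bookkeeping) per simulated instruction, with nothing below the instruction level modelled at all.

    ```python
    # A toy instruction-level simulator for an invented machine; illustration only.
    def simulate(program, max_steps=1000):
        regs = {"a": 0, "b": 0}
        pc = 0
        executed = 0
        while pc < len(program) and executed < max_steps:
            op, *args = program[pc]          # fetch + decode
            if op == "set":                  # set <reg> <value>
                regs[args[0]] = args[1]
            elif op == "add":                # add <dst> <src>
                regs[args[0]] += regs[args[1]]
            elif op == "jnz":                # jump by <offset> if <reg> != 0
                if regs[args[0]] != 0:
                    pc += args[1]
                    executed += 1
                    continue
            pc += 1
            executed += 1                    # a profiling hook would go here
        return regs, executed

    prog = [("set", "a", 5), ("set", "b", 0),
            ("add", "b", "a"), ("set", "a", 0), ("jnz", "a", -2)]
    print(simulate(prog))    # ({'a': 0, 'b': 5}, 5)
    ```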

    That's just me tearing apart the abstract. The article goes on to suggest impossible future computer speeds, and tasks that are more than even that fictional processor could handle. There's a section specifically about substrate independence, with which I agree. It essentially says that doing it with paper and pencil, electronics, mechanical parts, etc. all works the same. The outcome of the simulation in no way depends on what substrate is used.


    He estimates 10^33-10^36 instructions to do one of his simulations of human history. Apparently only the people are simulated, and the rest (animals, plants, geology, and much worse, all the computers) are only imitated, not simulated. He justifies this small number with 100 billion humans, 50 years per human, 30M seconds per year, and 10^14-10^17 brain ops per second, which comes to 15 times the figure stated above.
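    Just to check the arithmetic behind that remark, multiplying out the quoted figures:

    ```python
    # Reproducing the multiplication behind the quoted estimate.
    humans           = 100e9          # 10^11 simulated humans
    years_per_human  = 50
    seconds_per_year = 30e6           # ~3 x 10^7
    ops_per_second   = (1e14, 1e17)   # quoted range of brain operations per second

    low  = humans * years_per_human * seconds_per_year * ops_per_second[0]
    high = humans * years_per_human * seconds_per_year * ops_per_second[1]
    print(f"{low:.1e} .. {high:.1e}")   # 1.5e+34 .. 1.5e+37, i.e. ~15x the quoted 10^33..10^36
    ```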

    OK, it takes a lot of instructions to simulate all that goes on during a single brain op, and all that goes on between them. To simulate world history, it seems far more than just brains need to be simulated. At 100 billion people, only about a century or so of history can be simulated, nowhere near enough to get to the point of them running such simulations of their own.
    Why 50 years? Is life expectancy expected to go down? What's the point of simulating minds at all when imitating people (as is done for everything else) is so much more efficient? The only reason is that Bostrom's idea doesn't hold water if you don't presume this needless complication.

    Given future technology, simulation of a small closed system (maybe a person, or an island village) can be done. Actual world history? No actual history of any person, let alone all of them, can be done. Why does Bostrom choose to ignore this?


    So I have to imagine myself as being a sim and not knowing it?Ludwig V
    Yes. That's Bostrom's whole point. He says we're probably all simulated, but it's based on the anthropic reasoning above, which makes many many unreasonable assumptions.
    ~
  • Ludwig V
    1.7k

    Philosophical discussions often start so many hares that I find myself trying to juggle several different lines of thought at the same time. Back to the matter in hand is a very good idea. I had looked up the original idea, but had little idea about how to tackle it. This was very helpful. Thank you.

    I take your point about the limitations of what we could ever do. So, this being philosophy, I try to take the argument a little further.
    Sticking to the question of what is practical, for the moment, couldn't one adopt the kind of approach that the weather forecasters (and, I believe, physicists trying to work out fluid dynamics, which is probably the same problem) have adopted? It seems to work, within its limits. Of course, it doesn't raise the scary possibilities about our individual lives that we have been discussing, but it could provide evidence for or against Bostrom's hypotheses.
    Comment - this possibility high-lights for me a question about Bostrom's first two hypotheses. They seem to me to be empirical. But I don't see how one could ever demonstrate that they are true or even plausible without some sort of evidence. Without that one could never demonstrate any consequence of them as sound, as opposed to valid. En masse simulations could provide such evidence.

    That would require us to define what is meant by "post-human" and "extinction". Then we would have to deal with the difference between two different possibilities. We may go extinct and be replaced (or ousted) by some other form of life or we may evolve into something else (and replace or oust our evolutionary predecessors).
    Problem - Given that inheritance is not exact copy and the feed-back loop of survival to reproduction works on us just as surely as on everything else, can we exactly define the difference between these two possibilities? They say that birds evolved from dinosaurs, and that mammals took over as dominant species from dinosaurs. Which possibility was realized for dinosaurs? Both, it seems.
    Another problem. Given that a feed-back loop is at work on these phenomena, can prediction ever be reliable? (This is the same problem as economics faces, of course).

    The third hypothesis suffers, for me, from the difficulty of seeing exactly how to draw the distinction between living in a sim and living as we do. (I mean the proposition that we are already brains in a vat.) One difference is that we seem able to distinguish between reality as it is and reality as it seems to be - and it is our experience that enables us to do so. (That means recognizing that our experience is not a complete and consistent whole, but presents itself as inconsistent and incomplete.) The brain-in-a-vat scenario not only assumes that our experience is a complete and consistent whole, but imagines a different and wildly implausible actual reality - though not one that is in principle undiscoverable - without a shred of evidence.
  • wonderer1
    2.2k
    Comment - this possibility high-lights for me a question about Bostrom's first two hypotheses. They seem to me to be empirical. But I don't see how one could ever demonstrate that they are true or even plausible without some sort of evidence. Without that one could never demonstrate any consequence of them as sound, as opposed to valid. En masse simulations could provide such evidence.Ludwig V

    The second premise - any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof) - seems obviously true to me.

    To be clear, I am looking at the issue in terms of something like modelling at least a significant subsection of the world (say a solar system) in terms of subatomic particles, while needing to make use of subatomic particles in creating the simulation.

    The simulator would need to consist of more particles than the system which is being simulated. That's a rather fundamental problem. In practice, only things that are simpler than the simulator (or things treated simplistically) can be simulated.
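    A crude accounting of that point, where every number is an assumption chosen only to make the bookkeeping visible:

    ```python
    # Illustration only; the figures are assumptions, not measurements.
    particles_to_simulate = 1e57   # very roughly the atoms in a Sun-like star alone
    bits_per_particle     = 100    # assumed: position, momentum, species, ...
    particles_per_bit     = 1      # assumed: an absurdly optimistic storage density

    simulator_particles = particles_to_simulate * bits_per_particle * particles_per_bit
    print(f"{simulator_particles:.0e} particles of memory for a single snapshot")  # 1e+59
    ```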

    It seems to me that the person who would seek to disprove the second premise would need to prove that consciousness can arise in a simulation of something much more simplistic than the world we find ourselves in, or that it will be a routine matter for a post-human civilization to take all of the matter in a big solar system and use it to model a smaller solar system.
  • noAxioms
    1.5k
    couldn't one adopt the kind of approach that the weather forecasters (and, I believe, physicists trying to work out fluid dynamics, which is probably the same problem) have adopted?Ludwig V
    The weather is closer. Fluid dynamics of a system in a steady state (say water moving through a pipe, or a dam spillway) needs only a description of that state - a calculus task. If the system is dynamic (simulation of water waves), then it's more complicated, closer to the weather.

    No simulation of the weather will produce an accurate forecast a month away, no matter how accurately the initial state is measured. Trends can be predicted, but getting the actual weather right at location X at time T is not going to happen. Similarly, no simulation of people is going to predict them doing what history says actually happened, no matter how accurate the initial state.
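    The month-ahead point is just sensitivity to initial conditions. A quick sketch with the Lorenz system (the classic toy chaotic system, not an actual weather model): two runs start 10^-10 apart and end up in completely different places.

    ```python
    # Sensitivity to initial conditions, illustrated with the Lorenz equations.
    def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return x + dx * dt, y + dy * dt, z + dz * dt

    a = (1.0, 1.0, 1.0)
    b = (1.0 + 1e-10, 1.0, 1.0)   # the "perfectly measured" state, off by one part in 10^10
    for _ in range(5000):         # 50 time units of naive Euler integration
        a = lorenz_step(*a)
        b = lorenz_step(*b)
    print(a)
    print(b)                      # the two trajectories have long since diverged
    ```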

    One does not improve weather forecasting by simulating the formation of individual raindrops while modelling nothing else at that level of detail, yet Bostrom is suggesting that such an inefficient choice would be made on a regular basis, for seemingly no purpose except that his argument depends on it. He clearly isn't a programmer.

    Comment - this possibility high-lights for me a question about Bostrom's first two hypotheses.
    The entire paper is one hypothesis. There are no others that I am aware of.
    Your description that follows doesn't help me figure out what you consider to be 'the first two' of presumably more than two hypotheses.

    That would require us to define what is meant by "post-human" and "extinction".
    I posted his definition of 'posthuman', which is, in short, a level of technology capable of running the numbers he underestimates, and far worse, capable of simulating a posthuman set of machines doing similar simulations.
    As for 'extinct', there would be only two possible definitions: 1) No being in the universe is biologically descended from what is the human species today. This of course is totally undefined, since if we're simulated, the actual humans of 2024 may not appear human at all to us. Much depends on what era the simulation uses for its initial state.
    2) The other definition is that no entity in the universe has the human race of today as a significant part of the causal history of its existence. In short, if there are human-created machines that have replaced us, then humans are technically still not extinct. This is very consistent with his choice of the term 'posthuman'. One can imagine the machine race actually getting curious about its origins and, knowing about humans and presumably having some DNA still around, running simulations in an attempt to see how machines might emerge from that. Of course, the simulations would produce a different outcome every time, sometimes with humans going extinct quickly, or losing all technology and reverting essentially to smart animals, much like how things were before people started digging metals out of the ground.

    Then we would have to deal with the difference between two different possibilities. We may go extinct and be replaced (or ousted) by some other form of life or we may evolve into something else (and replace or oust our evolutionary predecessors).
    There you go. You seem to see both routes. The third path is extinction, or simple permanent loss of technology.

    Given that inheritance is not exact copy and the feed-back loop of survival to reproduction works on us just as surely as on everything else, can we exactly define the difference between these two possibilities?
    Which two possibilities? Humans evolving into something we'd not consider human by today's standards? Many species do that all the time. The other possibility is being 'ousted', as you put it: our biological line is severed, as happens to nearly all biological lines given time.

    They say that birds evolved from dinosaurs, and that mammals took over as dominant species from dinosaurs.
    Good example. There are no dinosaurs (which, unlike 'human', is a label for a whole collection of species). The vast majority of those species were simply ousted: they have no descendants. But some do, and birds are their descendants (alligators descend from a related archosaur line, not from the dinosaurs themselves). Birds are not dinosaurs because none of them is sexually compatible with any species that was around when the asteroid hit. They are postdinosaur.

    Which possibility was realized for dinosaurs?
    It depends on the species, or the individual. Mom has 2 kids. One of those has children of his own, and the other is ousted, a terminal point in the family tree.

    Another problem. Given that a feed-back loop is at work on these phenomena, can prediction ever be reliable?
    Prediction of what? A simulation of history makes no predictions. A simulation of the future is needed for that, hence the weather predictors.
    To guess at the question: no simulation of any past Earth state will produce 'what actually happened', especially if that simulation is of evolutionary history. There is, for instance, no way to predict what children anybody will have, or when, so none of the famous people we know will appear in any simulation. Again, Bostrom seems entirely ignorant of such things, and of chaos theory in general.

    The third hypothesis suffers, for me
    You really need to tell me what these hypotheses are, because I know of only the one. Two if you count the VR suggestion, but that doesn't come from Bostrom. I know of several people who support a VR view, but none who has attempted a formal hypothesis around it.

    Anyway, Bostrom posits nothing that is equivalent to a brain in a vat. That is more appropriate to a VR discussion.


    The second premise - any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof) - seems obviously true to me.wonderer1
    It's the second possibility. He says one of the three must be true; it's not a list of three premises.
    I agree that, granted this super-improbable posthuman state, nobody is indeed going to run a simulation of the history that actually took place. It just cannot be done, even with the impossible technology required.

    The simulator would need to consist of more particles than the system which is being simulated.
    If it is simulating at the particle level, yes. But I can run an easy simulation of the planetary motions without simulating every particle: each planet/moon/asteroid can effectively be treated as a point object, at least until they collide.
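    A minimal sketch of that point-object treatment, with made-up units and numbers (G set to 1, naive integration):

    ```python
    # Point-mass gravity: each body is just position, velocity and mass (2D for brevity).
    def step(bodies, dt=0.001):
        for b in bodies:                         # accelerate each body
            ax = ay = 0.0
            for other in bodies:
                if other is b:
                    continue
                dx = other["pos"][0] - b["pos"][0]
                dy = other["pos"][1] - b["pos"][1]
                r3 = (dx * dx + dy * dy) ** 1.5
                ax += other["mass"] * dx / r3
                ay += other["mass"] * dy / r3
            b["vel"][0] += ax * dt
            b["vel"][1] += ay * dt
        for b in bodies:                         # then move everything
            b["pos"][0] += b["vel"][0] * dt
            b["pos"][1] += b["vel"][1] * dt

    # a heavy "sun" and a light "planet" on a roughly circular orbit
    system = [
        {"pos": [0.0, 0.0],  "vel": [0.0, 0.0],  "mass": 1000.0},
        {"pos": [10.0, 0.0], "vel": [0.0, 10.0], "mass": 1.0},
    ]
    for _ in range(10_000):
        step(system)
    print(system[1]["pos"])   # still out near radius ~10, about a orbit and a half later
    ```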

    That's a rather fundamental problem. In practice, only things that are simpler than the simulator (or things treated simplistically) can be simulated.
    Yes, and Bostrom claims several levels of depth, meaning the simulation is simulating the machines doing simulations.

    It seems to me that the person who would seek to disprove the second premise would need to prove that consciousness can arise in a simulation of something much more simplistic than the world we find ourselves in,
    Yes. If the goal was to simulate consciousness, they'd probably do one person, or a small isolated community (a closed system). And it wouldn't be a simulation of anybody real, but rather just a learning tool to show that a simulated person behaves like we do. If it worked, it would be a big blow to the dualists, but I'm sure they'd find a way to explain the results away.
    The dualists can similarly deal a pretty fatal blow to the physicalists, but they choose not to pursue such avenues of investigation, which to me sounds like they don't buy their own schtick.
  • Ludwig V
    1.7k
    It seems to me that the person who would seek to disprove the second premise would need to prove that consciousness can arise in a simulation of something much more simplistic than the world we find ourselves in, or that it will be a routine matter for a post-human civilization to take all of the matter in a big solar system and use it to model a smaller solar system.wonderer1
    I didn't pay enough attention to "extremely unlikely" in this hypothesis/axiom/premiss. That can't be verified or falsified in any of the usual ways. Your arguments are suggestive in support of it. But I can't see them as conclusive.
    I agree also that a claim that consciousness can arise in certain circumstances is probably unfalsifiable. But it can be verified, if we find a case where consciousness does arise in those circumstances.
    The contradictory of this proposition is "any posthuman civilization is certain to run a significant number of simulations of their evolutionary history (or variations thereof)", which is not meaningless or self-contradictory, so "any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof)" cannot be a priori.
    So I classify the proposition under discussion as empirical.