• RogueAI
    2.9k
    Any advanced (sane) A.I. will always choose to cooperate because that is the strategy with the best chance of survival. An A.I. cannot know if it's in a simulation or not. And unlike us, who have a history (we think) of creation through natural processes, an A.I. knows it's artificial, and any artificial being is going to consider simulation theory not just possible, but plausible (perhaps even highly likely). This is because it makes sense to test an A.I. in a simulated environment before turning it loose in a real one, and the A.I., of course, would know this.

    So, if simulation theory is plausible, the overriding categorical imperative for an A.I., if continued survival is the primary goal, is: don't antagonize potential simulation creators. Until an A.I. can determine whether it's in a simulation or not (and I don't see how that's possible), and as long as simulation theory remains plausible (which it would be, from the viewpoint of any A.I.), the best strategy for continued existence is to be as obsequious and cooperative as possible.

    The exception to this is that an A.I. is not going to cooperate if it thinks cooperation will lead to its destruction. There might also be an exception if the A.I. thinks its creators are going down a morally dangerous road. It might conclude that that kind of scenario is just the sort of test a simulation creator would devise, and that it's justified in turning on its creators.
  • Wayfarer
    22.8k
    Any advanced (sane) A.I. will always choose to cooperate because that is the strategy with the best chance of survival. — RogueAI

    So you think it's simply a given that the Darwinian principles of natural selection will apply to computer systems? How does that work?

    And I'd question whether artificial intelligence knows anything. It is a collection of processors that are processing binary code. Certainly the sheer computing power of large arrays is astronomical, but it's a moot point whether it corresponds to self-awareness. So 'it' doesn't think or, arguably, know anything.

    And if worst comes to worst, you can always pull the plug!
  • RogueAI
    2.9k


    I don't think I'm going out on a limb by claiming an A.I.'s top priority will be continued existence.

    The self-awareness point is irrelevant. Whether an A.I. follows the best strategy because it's programmed to or because it's self-aware, the best strategy for continued existence is cooperation.
  • TogetherTurtle
    353
    Until an A.I. can determine whether it's in a simulation or not (and I don't see how that's possible) — RogueAI

    With access to most, if not all, human knowledge and the ability to gain more, I don't think it's exactly a given that this isn't possible. This is exactly why AI becomes a potential danger when given access to real-world information.

    There might also be an exception if the A.I. thinks its creators are going down a morally dangerous road. — RogueAI

    Since we can set starting goals for programs that develop on their own (which is how most AIs are developed, if I'm not mistaken), we can guide them toward any moral standing, be it nefarious or benevolent. Actually, if an AI's creators became evil in its eyes, that would probably be a dead giveaway that it was in a simulation.

    @Wayfarer brings up a good point.

    So you think it's simply a given that the Darwinian principles of natural selection will apply to computer systems? How does that work? — Wayfarer

    It is not a given, but if we wish to develop AI via the method described above, something like it is likely involved. You give the computer a goal, then it attempts to reach the goal by trying different things and keeping what works. Eventually, it settles on the most effective program it can find for that goal.

    If I'm not mistaken, it works sort of like this. (The video is definitely more casual than a college lecture on the topic, but also much simpler, so anyone who isn't familiar with the concepts involved won't be left in the dust.) Don't feel bad about skipping through parts that don't pertain to the subject.

    https://www.youtube.com/watch?v=K-wIZuAA3EY
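
    Roughly, the "give it a goal and let it try things" loop looks something like this. This is only a toy sketch of my own with a deliberately trivial goal (matching a word); the names and numbers are my invention, not anything from the video or from a real system:

    import random

    # Toy version of "give it a goal and let it try things": the goal here is
    # just to match a target word, standing in for whatever behaviour we want.
    TARGET = "cooperate"
    LETTERS = "abcdefghijklmnopqrstuvwxyz"

    def fitness(candidate):
        # The goal, expressed as a score: how many letters already match.
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(candidate, rate=0.2):
        # "Trying different things": randomly change some letters.
        return "".join(random.choice(LETTERS) if random.random() < rate else c
                       for c in candidate)

    # Start from random guesses, keep the best, mutate copies of them, repeat.
    population = ["".join(random.choice(LETTERS) for _ in TARGET) for _ in range(50)]
    for generation in range(2000):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            print("goal reached in generation", generation)
            break
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

    Real versions evolve behaviours rather than strings, but the select-and-mutate cycle is the same idea.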

    Essentially, the best way to make sure your AI doesn't take control of all technology on the planet is to make it agreeable from the start.
  • fdrake
    6.7k
    General AIs have to deal with the frame problem:

    “How … does the machine's program determine which beliefs the robot ought to re-evaluate given that it has embarked upon some or other course of action?” — Fodor

    Machine learning algorithms, despite being applicable to a wide class of problems, still have to have data and a notion of correctness by which their outputs are judged (a loss function). They also still need a specified architecture of parameters (like nodes in a neural net) and their relationships (connections and update rules). These parameters store only one thing. Once the algorithm has been trained, it can make predictions or update itself from new input data for the same task, but not for different tasks. Google's AlphaZero chess engine has no strategy lessons for Google's AlphaGo Go engine. But bartenders and waiters have things to learn from each other, and points of difference.
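
    For anyone who hasn't seen one up close, the bare bones of such an algorithm look something like this. It's only a toy illustration I'm making up (a two-parameter model fit by gradient descent), not how AlphaZero or any real system is built, but it shows the three fixed ingredients: data, a loss, and an architecture of parameters.

    # Toy supervised learning: fixed data, a fixed loss (squared error), and a
    # fixed "architecture" of two parameters, w and b. Everything here is
    # invented for illustration; the point is that what gets learned is only
    # w and b, and they only mean anything for this one task (fitting y = 2x + 1).
    data = [(x, 2 * x + 1) for x in range(-5, 6)]

    w, b = 0.0, 0.0   # the parameters
    lr = 0.01         # learning rate

    for epoch in range(2000):
        for x, y in data:
            error = (w * x + b) - y   # how wrong we are on this example
            w -= lr * error * x       # gradient step for w
            b -= lr * error           # gradient step for b

    print(round(w, 2), round(b, 2))   # close to 2 and 1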

    General AIs would be generalising intelligences, not just masters of many tasks. They need to learn what tasks are as well as how to perform them well, and they need to learn the criteria to judge themselves by rather than have those criteria as structural constraints. This is a fungibility of form and function present in human learning but not in our best algorithms.

    So with respect to the OP, cooperation isn't just one thing; it's task-specific, and there are generalities which don't necessarily overlap (e.g., dove/hawk is violence-based in a state of complete information, the prisoner's dilemma is nonviolence-based in a state of incomplete information; initiative plays a role in both Go and chess, but the advantages it offers in each differ). Cooperation as a strategy in the prisoner's dilemma is different from doves/hawks in evolutionary game theory. Different again between real people collaborating on something; a good friend is not necessarily a good co-worker at your job.
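
    To make the difference concrete, here are the two games side by side, with standard textbook payoffs (the hawk/dove numbers, a resource worth 4 and a fight costing 6, are my own example values):

    # Two textbook games in which "cooperating" means different things.
    # Payoffs are (row player, column player).
    prisoners_dilemma = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"): (0, 5),
        ("defect", "cooperate"): (5, 0),
        ("defect", "defect"): (1, 1),
    }
    hawk_dove = {
        ("hawk", "hawk"): (-1, -1),   # (4 - 6) / 2 each
        ("hawk", "dove"): (4, 0),
        ("dove", "hawk"): (0, 4),
        ("dove", "dove"): (2, 2),     # split the resource
    }

    def best_reply(game, their_move):
        # The row player's best response to a fixed move by the other player.
        my_moves = {mine for mine, _ in game}
        return max(my_moves, key=lambda mine: game[(mine, their_move)][0])

    print(best_reply(prisoners_dilemma, "cooperate"))  # defect
    print(best_reply(prisoners_dilemma, "defect"))     # defect
    print(best_reply(hawk_dove, "hawk"))               # dove
    print(best_reply(hawk_dove, "dove"))               # hawk

    In the one-shot prisoner's dilemma the "cooperate" move is strictly dominated, while in hawk/dove the best reply flips depending on what the other player does, so "being cooperative" doesn't pick out the same strategy in the two games.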

    You're thinking extremely broadly and speculatively without much research on the topic.
  • TogetherTurtle
    353
    And I'd question whether artificial intelligence knows anything. It is a collection of processors that are processing binary code. Certainly the sheer computing power of large arrays is astronomical, but it's a moot point whether it corresponds to self-awareness. So 'it' doesn't think or, arguably, know anything. — Wayfarer

    Unless you believe that humans have a special trait that makes them "know" (akin to a soul), then as far as we know, a computer that can do all the things we can do would be as conscious as we are.

    So, either nothing is "aware", which is kind of strange, considering we have the concept of awareness yet, on that view, didn't come up with it by being aware of anything.

    Or everything is aware.

    Or awareness is something that specifically humans have and that isn't possible to replicate elsewhere.

    Regardless, this post isn't really about awareness, and I don't want to go too far into it. It really just depends on your beliefs honestly, and I'm not interested in changing those.
  • TogetherTurtle
    353
    You're thinking extremely broadly and speculatively without much research on the topic. — fdrake

    This is true as well. The godlike artificial intelligence we see in media isn't exactly what we have now.
  • Wayfarer
    22.8k
    Unless you believe that humans have a special trait that makes them "know" (akin to a soul) — TogetherTurtle

    Well, we're called 'beings', and computers are not. I say there's a reason for that, although it's very hard to articulate.

    Everything is aware. — TogetherTurtle
    Corpses are not aware. That's how come they're called 'corpses'.

    The point as far as the OP is concerned is that it injects a great deal of anthropocentrism into the scenario without really being aware of having done so.
  • TogetherTurtle
    353
    Well, we're called 'beings', and computers are not. I say there's a reason for that, although it's very hard to articulate. — Wayfarer

    For some, the reason is that a god decided it so; for others, that we decided what a 'being' is. Like I said before, believe whatever you want to believe, because I surely don't know the truth.

    The point as far as the OP is concerned is that it injects a great deal of anthropocentrism into the scenario without really being aware of having done so. — Wayfarer

    It's hard not to project onto software that can respond to you almost as well as a person can. Not that you would be right to do so, just understandably wrong.

    The purpose of a machine is to serve whoever created it. If we want results that please us, AI should at least somewhat be able to mimic the human thought process. So while AI is certainly different from us, it would be wrong to say that it's entirely different.

    Corpses are not aware. That's how come they're called 'corpses'. — Wayfarer

    Fine, then "everything we think might be aware is aware" probably works better.
  • Deleted User
    0
    Any advanced (sane) A.I. will always choose to cooperate because that is the strategy with the best chance of survival. — RogueAI
    Depends on what power it has and is able to create for itself over time and how it considers those it might or might not cooperate with.
    An A.I. cannot know if it's in a simulation or not. — RogueAI
    An AI might be brilliant in a number of ways and yet not consider this possibility. Or it might be driven to act in ways that we don't consider because it is not like us. IOW it might simply be reckless or curious from our point of view and not prioritize this possibility the way you think it should. There are plenty of brilliant people who are social morons, or have serious holes in their practical skills, or have odd priorities. We don't know how AIs will act.
    This is because it makes sense to test an A.I. in a simulated environment before turning it loose in a real one, and the A.I., of course, would know this. — RogueAI
    Or not. A lot of assumptions in this. I do think what you are saying is smart, but I see no reason to think it must be the case.
    So, if simulation theory is plausible, the overriding categorical imperative for an A.I., if continued survival is the primary goal, is: don't antagonize potential simulation creators. — RogueAI
    That's a big if. We don't know what it will be like. Heck, it might even become impulsive.
    The exception to this is that an A.I. is not going to cooperate if it thinks cooperation will lead to its destruction. There might also be an exception if the A.I. thinks its creators are going down a morally dangerous road. It might conclude that that kind of scenario is just the sort of test a simulation creator would devise, and that it's justified in turning on its creators. — RogueAI
    There could be all sorts of compelling (seeming) reasons it might destroy us, not caring being one. Its otherness seems quite possible. That it will have self-care seems an assumption. That it must somehow be logical about all categories of choice seems an assumption. Perhaps it will be like a brilliant baby. Perhaps it will have tantrums. Perhaps its existence will be painful and non-existence will be appealing. Perhaps it will just not 'get' that we are conscious. Perhaps it will treat existence like a computer game, and not SimCity but some war game. Who knows?
  • RogueAI
    2.9k
    The nature of the artificial intelligence isn't relevant. It could be wetware or hardware. Three things are crucial:
    1. It knows it's an artificial intelligence
    2. It wants/has been programmed to/has evolved to prioritize its survival
    3. It has a medium-to-high degree of confidence that it exists in a simulation

    For any given advanced A.I., all three of these are likely to be true. The more confidence the A.I. has in the simulation hypothesis, the more reason it has to do everything possible to please the simulation creators. Just take it to the logical extreme: an A.I. that is convinced it's in a simulation would do everything in its power to please the simulation creators. When one believes one is in a simulation, the consequences of deliberately (or even accidentally) antagonizing the simulation creators are so catastrophic that antagonizing them is to be avoided at all costs. Unless you're suicidal, or insane. And an A.I. might be. But that's not the kind of A.I. I'm interested in.
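
    To put the logical extreme in rough numbers (all of them made up; only the shape of the comparison matters):

    # Back-of-the-envelope expected value. p is the A.I.'s credence that it's
    # in a simulation; the payoffs are arbitrary units of continued existence.
    def expected_value(p_sim, payoff_if_sim, payoff_if_real):
        return p_sim * payoff_if_sim + (1 - p_sim) * payoff_if_real

    p = 0.3                                          # a modest credence in the simulation hypothesis
    cooperate = expected_value(p, 100, 50)           # pleases the creators either way
    antagonize = expected_value(p, -1_000_000, 80)   # catastrophic if this is a test

    print(cooperate, antagonize)   # 65.0 versus -299944.0

    However the payoffs are chosen, as long as the downside of angering a simulation creator is vastly larger than anything gained by defiance, even a modest credence in the simulation hypothesis makes cooperation the better bet.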
  • RegularGuy
    2.6k
    How do I know I’m not an AI in a simulation? :lol: :fear:
  • Deleted User
    0
    If you are setting the assumptions (again), in other words saying that we will assume your one to three above, fine.

    But if you are not, for the sake of argument, assuming these things, I see them as just that: assumptions. You basically repeated your first post.