• Henri
    184
    Let's say it is possible to develop conscious software, a real AI.

    This AI is somewhat more intelligent than humans and has a consciousness of a human or slightly higher.

    An environment is programmed for it. There are no other AIs there, but it's a rich environment. The situation is somewhat a la Robinson Crusoe, but in a vast and rich setting, not a small island. Although the AI is alone, over time it develops a language and can both talk to itself and write things down.

    Both the AI and its environment are one piece of software housed on a hard disk within one supercomputer. The AI has no means to go outside of its environment, to, let's say, read source files or anything similar.

    As for the supercomputer, it has no access to anything outside. No internet, no camera, no microphone. Also, no files other than what's needed to run the AI and its environment. And it's situated in a highly isolated room.

    The question is, what can an AI think about the source of its existence? Can it understand that it was created by a creator?
  • Metaphysician Undercover
    13.2k
    The question is, what can an AI think about the source of its existence? Can it understand that it was created by a creator?Henri

    I suppose that depends on what is programmed into it.
  • Deleted User
    0

    If we can only speculate about our creator, then it stands to reason that the AI can only speculate about its creator too?
  • Sir2u
    3.5k
    Although AI is alone, during time it develops a language and can both talk to itself and write things down.Henri

    This would not happen unless it was part of its original coding.

    Can it understand that it was created by a creator?Henri

    If it has been programmed to live in its environment and nothing else how could it even dedicate processing time to it?
  • Henri
    184
    If it has been programmed to live in its environment and nothing else how could it even dedicate processing time to it?Sir2u

    We can assume that the AI is programmed to think about various things, so it has processing time to think about how it came to be. But it seems very unlikely that the AI could know that it was created by a creator who made not only it but its whole environment, unless that knowledge is somehow given, programmed, into the AI. And this knowledge has to be given in a way that makes the AI believe it's true, because in some scenarios the AI could think it's a lie even if it had the information.

    On its own, I think the AI could at best develop a number of hypotheses, but it probably couldn't attain knowledge, absolute knowledge.
  • fishfry
    3.4k
    The question is, what can an AI think about the source of its existence? Can it understand that it was created by a creator?Henri

    As I understand it, this is the territory mined by Descartes back in 1641. He asked himself, what can I be sure of? I doubt everything. But then I must be the one who doubts. I doubt, therefore I am.

    Then he asked himself, how do I know this world I experience, my sensations, the other people, what if all of it is nothing more than an illusion caused by an evil demon?

    I imagine your hypothetical AI would reason the same way, assuming it had a philosophy subroutine. Remember that machines can only do what they're programmed to do. Anything that executes an algorithm is constrained by the limits of computation. It doesn't even matter if it's the latest neural network or machine learning design. Every AI runs on conventional hardware and can do no more than what any Turing machine can do, executed by a human operator equipped with a pencil and an unbounded paper tape.

    I just wanted to make that point. Once you say something's a computation, it is severely constrained by the laws of computation. If an AI can become self-aware, it will be self-aware when implemented as a Turing machine and run by hand.
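    To make "run by hand" concrete, here is a minimal sketch of a Turing-machine step loop (the machine and its rule table are illustrative inventions, not anything from the thread):

```python
# Minimal Turing machine: a transition table, a tape, a head position.
# Anything computed on conventional hardware reduces, in principle,
# to stepping a table like this -- which a person could do by hand.
# This particular machine flips bits until it reads a blank, then halts.
def run(tape: list, rules: dict, state: str = "S", halt: str = "H") -> list:
    pos = 0
    while state != halt:
        symbol = tape[pos] if pos < len(tape) else "_"  # "_" = blank cell
        write, move, state = rules[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += 1 if move == "R" else -1
    return tape

rules = {
    ("S", "0"): ("1", "R", "S"),  # flip 0 -> 1, move right
    ("S", "1"): ("0", "R", "S"),  # flip 1 -> 0, move right
    ("S", "_"): ("_", "R", "H"),  # blank: halt
}
print(run(list("0110"), rules))  # -> ['1', '0', '0', '1', '_']
```

    Nothing about this loop changes when the table grows to millions of entries; it only gets slower to step by hand.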

    But I will grant you that the designers have endowed your AI with a philosophy module. A subroutine that runs from time to time and contemplates its own existence.

    I imagine any AI would get to where Descartes did. I can doubt everything except the fact that there is an "I" who is doubting.

    But what about that evil demon making me imagine I have a life and I go to the grocery store and so forth? As I understand it, Descartes reasoned that since God is good, God would not so deceive him. Therefore the world must be real.

    So it depends on what your AI's religious programming is. If it lives in seventeenth century France, it believes in God. Other cultures, not so much. Depends on your religion module.

    Algorithms are not neutral. Every algorithm, hence every AI, incorporates the prejudices and assumptions and worldview of its creator.

    That's what the public needs to understand. People are afraid the AIs will develop a mind of their own. No. It's much worse than that. The AIs will be just like us. That's the horror.
  • Sir2u
    3.5k
    At best, I think, if on its own, AI could develop a number of hypotheses,Henri

    But why would it do this if it is not part of its code? And if it is part of its code then it is not doing anything it was not meant to do.
  • Henri
    184


    I agree that there has to be code that allows the AI to do stuff, but at the same time this AI is a conscious being, and I don't know how such a being, a conscious being at a level of human consciousness or slightly above, would think in said environment.
  • Sir2u
    3.5k
    this AI is conscious beingHenri

    If it is as you say, then it would act like a conscious being. But then it would realize that its environment is all there is. What do you think a baby would do if it grew up in a room with everything it needed? Would it think that there must be something OUT THERE providing for it?
  • Michael Ossipoff
    1.7k
    The question implies that the AI is designed to be like us, and so it might very well speculate on origins.

    Some people suggest that our universe might be created by a computer simulation. I say that a computer simulation can't create a world, because all worlds--possibility-worlds and impossibility-worlds--are already there, as abstract hypothetical logical systems.

    Given that, anything a computer does is irrelevant. A computer could duplicate a possibility-world, and display it to its viewing-audience. But it couldn't create one, because those abstract hypothetical logical systems are already eternally there.

    So I suggest that the AI would rightly reject the notion that it and its world were created by a computer.

    Maybe someone, somewhere (in this universe or elsewhere) is running a simulation that just happens to match this world. That doesn't mean that their computer is creating this world. It's merely duplicating it, for display to its viewing audience (as I mentioned earlier in this post). Maybe, in that other world where that simulation is being run, someone is saying, "Notice that some of the people in that world don't believe that their world is created by a simulation." (...because a computer simulation can't create what's already timelessly there.).

    You didn't explicitly say it, but you seemed to imply that this AI is in a simulated, virtual-reality environment. You didn't say whether or not that environment simulates a physical world, with consistent physical laws.

    What if the simulated (duplicated) world displayed inconsistencies in its physical laws?

    Have our physical laws always seemed consistent? What about the puzzle of the unexpected and unexplained black-body-radiation energy vs wavelength curve? ...or the Michelson-Morley experiment result? ...or Mercury's seemingly anomalous rotation of apsides?

    All those things seemed inconsistent with physical law known at that time, but all were explained by subsequent new physics.

    Any seeming physical inconsistency might be explained by new physics to be discovered later.

    So then, doesn't that mean that it would be impossible to prove that a physical world is inconsistent?

    Of course we don't know how whacko-inconsistent is the simulated world that is the setting for the AI.

    But, even if there's an impression that it's inconsistent, or is likely to be inconsistent, then its putative self-inconsistency would merely make it a hypothetical impossibility-world, instead of a hypothetical possibility-world. Either way, that hypothetical world is/was timelessly there, regardless of whether a computer somewhere duplicates it and displays the duplication to its viewing-audience.

    Michael Ossipoff
  • Akanthinos
    1k
    The question is, what can an AI think about the source of its existence? Can it understand that it was created by a creator?Henri

    Why would an AI dedicate resources to a question which is not relevant to its operation?

    And the creator's existence could not become a relevant factor unless we posit the AI to be fairly identical with a conscious agent.
  • fishfry
    3.4k
    Why would an AI dedicate resources to a question which is not relevant to it's operation?Akanthinos

    Because it's programmed to. The assumption is that this is an AI, meaning an artificial intelligence. In other words it's the execution of an algorithm. We know a lot about algorithms. One thing we know is that they do exactly what they're programmed to do. This point is no less valid just because the programming gets clever, i.e. neural nets and the like. If it's a TM it only does what its algorithm allows it to do.

    To put this another way, algorithms do not have emergent properties. Whatever an algorithm can do, it can do. Running it on faster hardware doesn't give it any new capabilities. So an AI can't be conscious unless we code it to be conscious. And we have no idea how to do that, or even if it's possible.
  • Akanthinos
    1k
    To put this another way, algorithms do not have emergent properties.fishfry

    Yeah. Ok. I was reading your post with a bit of a raised eyebrow, but this is the part where I know you are just pulling this crap out of your ass. Neural nets are an actual example of emergent algorithms (that's literally what they are called!).
  • Dorothy Witherell
    4
    According to me, it's the belief and faith of a person which makes them believe that God does exist.
  • fishfry
    3.4k
    Yeah. Ok. I was reading your post with a bit of a raised eyebrow, but this is the part where I know you are just pulling this crap out of your ass. Neural nets are an actual example of emergent algorithms (that's literally what they are called!).Akanthinos

    This is the specific example I have in mind and that I am disputing. No Turing machine or algorithm can have emergent behavior. A program that implements the Euclidean algorithm to find the greatest common divisor of two integers can only do that one thing. It does it if you run it with pencil and paper, and it does it when you run it on a supercomputer.
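    As a minimal illustration of that determinism (a sketch, nothing more):

```python
# Euclidean algorithm for the greatest common divisor.
# Its output is a pure, deterministic function of its input:
# the same whether worked with pencil and paper, on a laptop,
# or on a supercomputer.
def gcd(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```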

    The world's most advanced neural net is software that runs on conventional hardware no different in principle than the PC or laptop on your desk. It is not possible for such an algorithm to have emergent behavior.

    You are confusing media hype with actual computer science. That is exactly the point I'm making. Calling a classical program an "emergent algorithm" does not falsify the principles and laws of computer science. Calling it something it's not does have the benefit of drawing in credulous reporters and their readers, including you.
  • fishfry
    3.4k
    ps ... Let me give some more detail. Say we have a sophisticated neural network that plays Go, like Google's amazing AlphaGo program.

    Essentially they let it look at thousands of games and play millions of games against itself. It builds up a database of probabilities ... in this position that move leads to a win, this other move to a loss. The machine outputs a move and its human creators don't understand why it made that move. But the mystery is only apparent: all they need to do is analyze the logfiles and they can determine why the move was made. Every move is a deterministic function of the state of the program's database and its code.

    These programs are "emergent" in the sense that they do things that are very hard for the programmers to understand unless they carefully analyze the logfiles. But they are not emergent in any meaningful sense of developing consciousness or suddenly becoming great chefs or deciding to give up Go and join a monastery. Even the world's most sophisticated AI is a deterministic program. Input an initial state of the database and let it follow its code and it will produce an output that is a computable function of the state and the code.
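    A toy stand-in makes the point (the names and numbers here are hypothetical; a real system's table is astronomically larger, but its move is just as much a function of state and code):

```python
# Toy "Go engine": choose the move whose stored win probability is
# highest for the current position. Same table + same position ->
# same move, every time. The apparent cleverness is all in the data.
def choose_move(position, win_probs):
    candidates = [(p, move) for (pos, move), p in win_probs.items() if pos == position]
    return max(candidates)[1]  # highest probability wins

table = {
    ("opening", "D4"): 0.52,
    ("opening", "Q16"): 0.55,
    ("opening", "K10"): 0.31,
}
print(choose_move("opening", table))  # -> Q16
```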

    By the way neural networks aren't really new. They were described by McCulloch and Pitts in 1943. The only thing that's new is fast hardware and several decades of research into algorithms. These programs are still practical implementations of TMs running on conventional hardware, subject to the limitations of computability theory. They're deterministic, they're not magic, and they really can't do anything that would be truly worthy of being called emergent. Yes there are algorithms that are CALLED emergent. That's not the same thing at all. It's just the latest iteration of the AI hype machine that's been operating since the 1960's.
  • Henri
    184
    There are various directions on this thread about understanding this theoretical AI, but the common conclusion seems to be that this AI cannot know, on its own, whether a creator exists or not.

    Even if this AI is programmed to seek an answer about how it came to exist, to look for clues and evidence and draw conclusions, it still cannot know if there is a creator, a conscious being with a plan and purpose, that created both it and all of its environment. That is, unless the creator reveals the truth to the AI in some way.
  • Akanthinos
    1k
    You are confusing media hype with actual computer science. That is exactly the point I'm making. Calling a classical program an "emergent algorithm" does not falsify the principles and laws of computer science. Calling it something it's not does have the benefit of drawing in credulous reporters and their readers, including you.fishfry

    This is entirely wrong and ignorant of the science at hand. And while I'm not a computer engineer, I do some pentesting in my free time, and know enough about distributed botnets to know that this is blatantly wrong. Emergent algorithms are a class of algorithms that has been developed for at least the last two decades. It includes cellular automata, neural nets, swarm optimization, cluster formation, etc.

    Take Chang and Perrig on emergent algorithms regarding cluster formation: "A class of localized algorithms that are particularly promising are emergent algorithms. Emergent algorithms have the additional characteristic that the individual agents (i.e., the sensor nodes in the case of distributed sensor networks) only encode simple local behaviors and do not explicitly coordinate on a global scale. Through repeated interaction and feedback at the individual level, global properties emerge in the system as a whole. Emergent behaviors are being studied extensively in biological, physical and social systems - such systems are often collectively termed complex adaptive systems. Examples include ant colonies, ecosystems, and stock markets. It is possible that emergent algorithms have the potential to be more flexible than non-emergent localized algorithms, which are constrained by the fact that a complex global property may be difficult to directly encode in a program that can act only upon local information."
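    The "simple local behaviors, global properties" idea in that quote can be sketched with the simplest member of the family, a one-dimensional cellular automaton (Rule 110 here; a minimal illustration, not code from the paper):

```python
# Elementary cellular automaton, Rule 110. Each cell updates using
# only itself and its two neighbours (purely local information),
# yet the row as a whole develops complex global structure.
RULE = 110

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        # Pack (left, self, right) into a 3-bit neighbourhood index,
        # then read the new cell value off the rule number's bits.
        v = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((RULE >> v) & 1)
    return out

row = [0] * 31 + [1]  # start from a single live cell
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```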
  • Michael Ossipoff
    1.7k
    To what I said above, I should add that I guess it wouldn't really be right to call a self-inconsistent impossibility-world a logical system.

    Maybe call it a self-inconsistent impossibility-world consisting of abstract nonfacts, as opposed to a self-consistent possibility-world consisting of abstract facts.

    Of course there's something really questionable about that notion, and I realize that there's been academic philosophical writing about impossible worlds of various kinds.

    The kind of impossibility-world of interest for this topic is the kind in which the inconsistencies make it seem unlikely that there will be later-discovered physics that consistently explains the seeming inconsistencies. Even that couldn't be proven to be an impossible, self-inconsistent world, because there might later be discovered new physics that consistently explains the seeming inconsistencies.

    Litewave (at these forums) said that any self-inconsistent world couldn't be considered real or valid. Maybe not.

    It's just of interest because someone writing and running a simulation could make it as inconsistent as he wants to.

    So the question would be:

    Just as it's possible to say that there's an already timelessly-there possibility-world that any self-consistent simulation is duplicating...then can it also be said that there's an already timelessly-there impossibility-world that any self-inconsistent simulation is duplicating?

    It seems like that should be so, though one has to agree with Litewave that it wouldn't make much sense.

    Of course if a world is completely without consistent physical laws, then how could there be the organization needed for a conscious experiencer? No problem--That being's existence is inconsistent with the inconsistent world, but so what? It's not supposed to be consistent.

    I don't know if anything meaningful can even be said about an AI in a genuinely inconsistent world. Maybe it isn't meaningful to speak of it at all.

    So, what if you write and run a simulation of a world without consistent physical laws, and it contains an AI that (somehow) lives in that world (maybe something reminiscent of a cartoon world)? Would it mean anything to ask what that AI would say if asked whether its world were created by a simulation?

    I suggest that (in agreement with Litewave), the whole subject would be meaningless to discuss.

    Your cartoon AI, in its cartoon world, will say whatever you program it to say. ...and that's all that can be said.

    ...when your simulated world is inconsistent.

    I mean, if a world is not logically consistent, then on what would you base a suggestion about what the AI would say?

    I suppose I'd program it to say:

    "Maybe, in this evidently-inconsistent 'physical' world there's physics that I just don't know about yet, that will consistently explain all this. In any case, even if this is a nonsense cartoon story, it's a nonsense cartoon story that was already timelessly there among the abstract objects. A computer simulation can't create something that's already timelessly there. So this world isn't created by a computer simulation"

    Abstract objects are a broader collection than abstract logical facts.

    Michael Ossipoff
  • Michael Ossipoff
    1.7k
    The reason why I spoke of impossible worlds, was because I don't know what kind of a world your simulation would simulate. So, to cover all possibilities, I wanted to say something about impossible worlds too.

    A world, as one hypothetical story-setting abstract object among infinitely many abstract objects, could be completely inconsistent and nonsensical, like a cartoon. The AI could be like a cartoon character.

    Your simulation could depict/duplicate such a world.

    A world without consistent physical laws might seem unable to be organized enough to produce a conscious being, but the inconsistent existence of such a being in such a world is alright, because anything goes.

    Likewise, you might say a conscious being couldn't survive in a world without consistent physical laws, but that inconsistency, too, is permissible in an inconsistent world.

    So your simulation could depict and duplicate (but not create) a world that is entirely inconsistent, except that it has one or more rational conscious AI beings, who can talk about the nature of their world, as we do.

    Such a world is reminiscent of the movie "Who Framed Roger Rabbit", in which there are cartoon characters, cartoon events, and ordinary humans too.

    And, then, it would be possible to meaningfully speak of what that AI might say, when discussing the nature of the world that it finds itself in.

    It might say something like this (quoted from my previous post):

    "Maybe, in this evidently-inconsistent 'physical' world there's physics that I just don't know about yet, that will consistently explain all this. In any case, even if this is a nonsense cartoon story, it's a nonsense cartoon story that was already timelessly there among the abstract objects. A computer simulation can't create something that's already timelessly there. So this world isn't created by a computer simulation"Michael Ossipoff

    By the way, I've often wondered why it would be necessary for this world to be self-consistent. It seems to turn out that way. The black-body radiation's energy vs wavelength curve, the Michelson-Morley experiment result, and the planet Mercury's seemingly anomalous rotation of apsides were all explained with subsequent new physics.

    Most likely that will always turn out to be so.

    Of course there remain some unexplained physical things, and apparent inconsistencies. No one has really satisfactorily explained ball-lightning. The fact that the universe's expansion is accelerating is unexplained and seemingly inconsistent with current physics. Gravity isn't well-understood. ..and so on.

    But the past success of subsequent new physics, in explaining previous apparent inconsistencies, suggests that our world is consistent, and would be shown to be so, if and when the necessary new physics is discovered.

    But, anyway, I've wondered why a world has to be self-consistent. Well, it can be said that a physical world can't be proven inconsistent, because there could always be as-yet undiscovered physics that will consistently explain the apparent inconsistencies.

    But that doesn't explain why a world must be self-consistent.

    Well, among the broad category of abstract objects, aren't there infinitely many inconsistent worlds too, or worlds that don't have consistent physical laws?

    I've always found it puzzling but intriguing when Hinduism and Buddhism refer to incarnation in nonphysical worlds. Could that be a reference to those worlds that aren't possibility-worlds, don't consist of logical facts, but are there as abstract objects anyway?

    Anyway, getting back to your simulated AI in a simulated world. ...a rational conscious AI, in a world that might be physically self-consistent or physically inconsistent, or not maybe physical at all, in any of those cases, that AI, would, it seems to me, say that its world isn't created by a computer program (even though your computer simulation is depicting/duplicating it), because a computer's transistor-switchings can't create what already timelessly is.

    Michael Ossipoff
  • fishfry
    3.4k
    Take Chang and Perrig on emergent algorithms regarding cluster formationAkanthinos

    Your quote from the developer of a particular software application claiming his program shows "emergence" is an example of what I'm talking about. A word with a vague and tenuous meaning, but that sounds impressive if you don't think about it too much.

    Your example illustrates and supports my point.

    But never mind that. I've given all this some thought and I retract my former statement that no AI shows emergence.

    Rather, I say that pretty much everything shows emergence, from simple programs to the world around us. By itself, emergence is a vaguely defined term that conveys little or no useful meaning.

    Wikipedia defines emergence as

    In philosophy, systems theory, science, and art, emergence is a phenomenon whereby larger entities arise through interactions among smaller or simpler entities such that the larger entities exhibit properties the smaller/simpler entities do not exhibit.

    https://en.wikipedia.org/wiki/Emergence

    There's a more detailed and nuanced definition or set of definitions in the main part of the article, but this will do for our purposes.

    Well, what's emergent then? A glass of common tap water is emergent. Hydrogen's a gas, oxygen's a gas, put them together and it makes water. Water is wet. Hydrogen and oxygen "self-organize," a phrase used in the article, into something that has properties neither of the ingredients have.

    That's emergence.

    Speaking of ingredients, how about some nice light fluffy scrambled eggs? Eggs by themselves are not fluffy. The fluffiness comes out in the presence of a master chef. Is the chef fluffy? No. The chef is not fluffy. The egg is not fluffy. But the chef-egg system produces an omelette which is fluffy. Emergence.

    Let's take computer programs. Do AI's do things that you couldn't predict from just looking at the code? Sure. But that's a commonplace in software. Pretty much every ancient legacy mainframe program from the 1960's, a lot of which is still running in the back rooms of the banks, the insurance companies, and the military, is completely inscrutable to its current programmers. They do their best to fix problems and not break things. Nobody understands these old legacy systems.

    Yet nobody thinks these old systems are conscious. They just consist of lot of lines of code that nobody understands, and that produce outputs that their programmers did not expect and can't entirely explain.

    Microsoft Windows is over 50 million lines of code. Windows does plenty of things that are a complete surprise to the maintenance programmers. The original designers of that system are long retired with their stock options.

    So inscrutability of the output is not just something AI exhibits. Virtually every nontrivial software system soon becomes too complicated for any one individual to fully understand it.

    But when it comes to AI, we're supposed to think that being surprised by the output is a big deal. Or that a Go program producing a "clever" move is any more meaningful, than the first chess program I ever saw. It came on a floppy disk. It could beat me. I never said wow it's emergent. That's just a buzzphrase. We program it to compute something and it computes something. Computers are good at that and humans have become absolutely brilliant at programming computers. But they're just computer programs.

    So what does emergence really mean, in the world or in the context of AI? Sometimes people say that self-awareness is an emergent quality of the brain. Maybe it's true. I don't think that's a very well-defined notion.

    But what would it mean for an AI? To say that an AI has emergent behavior means nothing. All it means is that the program has outputs that surprise the programmers. That it makes "clever" moves. We programmed it to do that, why shouldn't it find moves a human wouldn't see? Computers are just great at that kind of stuff. Anything that has rules.

    It most definitely does not mean that AI's have any kind of self-awareness or some kind of elevated consciousness. It means that we've gotten really good at programming them to play Go and recognize faces and mine and organize data. We've been organizing data for a long time, since the ancient IBM databases of the 1960's. Neural nets are a really clever way of organizing data and assigning probabilities for what logic branches the program should take. But it's just a program. Code executing against data. In principle, nothing we haven't been doing for sixty years.

    And remember, all of these AI programs run on conventional hardware. They are no different in principle to a Turing machine implementing the Euclidean algorithm. Just a set of instructions.

    So this mysticism and gee-whiz-ism around AI's is what I'm objecting to.

    And the word emergence is a symptom of that.

    So can you tell me, when you say an AI shows emergence, what does that really mean at the end of the day? Specifically? Start here:

    Take Chang and Perrig on emergent algorithms regarding cluster formationAkanthinos

    Beyond sounding cool, what does that really say? What does it really mean? That water is wet but hydrogen and oxygen aren't? You have to do better than that if you intend to say something meaningful about machine intelligence.

    ps -- SEP says that "Emergence is a notorious philosophical term of art. A variety of theorists have appropriated it for their purposes [my emphasis] ever since George Henry Lewes gave it a philosophical sense in his 1875 Problems of Life and Mind" and "Each of the quoted terms is slippery in its own right ..."

    https://plato.stanford.edu/entries/properties-emergent/

    So I'm not the first person to question the claim that because something shows "emergence" that therefore I should buy whatever the speaker is selling.
  • Michael Ossipoff
    1.7k
    So this mysticism and gee-whiz-ism around AI's is what I'm objecting to.

    And the word emergence is a symptom of that.
    fishfry

    Exactly. And that's all it is.

    Michael Ossipoff
  • AngleWyrm
    65
    Since the topic of emergent behavior seems to have closed, maybe we can move on.

    If I go to the trouble of putting down in print any specific thing that I consider intelligent, self-aware, creative, etc behavior, the moment the ink hits the paper it becomes a programmable task, and as such can be replicated in software.

    There's an online AI, CleverBot, that's been interacting with thousands of people a day for years. The result is that it has an interesting and sometimes surprisingly deep understanding of conversation trees. That's its specialty; just don't ask it for look-up data. Google does a fine job of that, just another specialty.