• 180 Proof
    15.3k
    I suspect we humans (think bacteria) will only directly interact with AGI (the gut) and never interact with ASI (the CNS). ASI might take an interest in our post-posthuman descendants (but why would ASI bother to 'uplift' them to such a comparatively alien (hyper-dimensional) condition?). If and when we "merge" with (i.e. are uplifted by) AGI, I think "the human condition" will cease, and posthumanity, however unevenly distributed (à la W. Gibson / Burroughs), will have abandoned – forfeited – its primate ancestry once and for all. Post-Singularity, my friend, the explosion of "options" AGI-human "merging" may bring about might be a (beneficial) two- or three-generations-long human extinction event. And only then will the post-evolutionary hyper-developmental fun really begin: "My God, it's full of stars!" :nerd: :fire:

    Thus Spoke 180 Proof
  • Count Timothy von Icarus
    2.7k


    This has always been my opinion too. It makes for bad sci-fi though, because such post-humans, e.g. fusion-powered, skyscraper-sized brains that are at home in the void and interact with the world via drones, are too alien for readers, even if they are supposed to be our descendants.

    Plus, given the lack of evidence of such ascended lineages (the Fermi Paradox), I wonder if such a path leads inevitably towards a high-level version of Idiocracy? Even if you find a Dyson Sphere, all queries will be met with the 10,000-IQ equivalent of "go away, we're watching porn and playing video games," followed by a barrage of antimatter weapons if you're foolish enough to press the issue.

    More to the point, once you have full "volitional" control over anxiety, pain, lust, guilt, etc., once you can adjust your sensitivity to emotions by turning a dial, what do cognition and inner life look like? What if others can access those dials? We have some ability to do this with psychoactive drugs, but they're extremely clumsy, just saturating synapses with a given neurotransmitter analog. Such a life is far from human.

    R. Scott Bakker has a neat story, "Crash Space," looking at this sort of idea. I haven't had time to finish it. IDK how much this draft differs from the final published version, but academic publishing prices are absurd, so the draft is probably the way to go.

    https://rsbakker.files.wordpress.com/2015/11/crash-space-tpb.pdf
  • universeness
    6.3k

    Perhaps there are aspects of human consciousness that cannot be reproduced by self-replicating, self-augmenting, advanced AI. I know we are in the realms of pure speculation here, but it's fun to speculate in such ways, and I may as well enjoy the fun of human speculation whilst I can, if we are to be utterly subsumed by a future advanced AI and only be as significant a presence as a bacterium currently is inside a gut. We just don't know the 'spark point' for sentience that leads to our level of awareness/consciousness. Our 'ace in the hole,' or even aces in the hole, may exist in us somewhere, which cannot be reproduced by any advanced AI.
    I know I leave myself open to being accused of proposing some kind of woo-woo protection for 'natural consciousness' here, but I would counter that claim with NO! I am proposing a combinatorial effect that can only happen within a naturally occurring human brain, and can never happen inside an artificial one.
    OKAY!!! I admit my proposal may be a forlorn hope.
  • noAxioms
    1.5k
    The purpose I am suggesting only exists, as an emergence of all the activity of that which is alive, and can demonstrate intent and purpose, taken as a totality. — universeness
    Interesting assertions. But the universe is not alive any more than is a school bus. Also interesting that you seem to restrict 'purpose' to things that you consider alive.

    No intelligent designer for the universe is required, for an emergent totality of purpose, within the universe, to be existent.
    I didn't say one was required.
    I will admit that there is purpose within the school bus (it contains purposeful things), and I will even admit that there is human purpose to the school bus, but I deny that the school bus serves any purpose to itself.

    Many humans will welcome such a union
    I think I'd be one of them, but it sounds like you would not unless it was your culture that created the ASI. You say 'many', suggesting that some will not do so willingly, in which case those must be merged involuntarily, or alternatively the ASI is not in global control.

    We won't fight ASI, we will merge with it.
    That seems something else. The ASI being the boss is quite different than whatever you envision as a merge. I think either is post-human though.

    I don't really know what you mean by a merge. Suppose you get yourself scanned and uploaded so to speak. Now the biological version can talk to the uploaded entity (yourself). Since the uploaded version is now you, will the biological entity (who feels no different) voluntarily let itself be recycled? It hurts, but it won't be 'you' that feels the pain because 'you' have been uploaded. When exactly is the part that is 'you' transferred, such that the virtual entity is it? Sounds simply like a copy to me, leaving me still biological, and very unwilling to step into the recycle bin.

    You never answered how an AGI might have prevented war with Hitler.
    — noAxioms
    All production would be controlled by the ASI in a future, where all production is automated.
    But WWII was not in the future. I am asking how, in the absence of it being a global unassailable power, it would have handled Germany without resorting to war. It should have made better decisions than the humans did.

    No narcissistic, maniacal human, could get their hands on the resources needed to go to war, unless the ASI allowed it.
    I agree that holding total power involves complete control over challenges to that power. Hence Kim Jong-un killing a good percentage of his relatives before they could challenge his ascent.

    You are convinced that humans and a future AI will inevitably be enemies.
    Every country is somebody's enemy, and those that consider the ASI to be implementing the values of the perceived enemy are hardly going to join it willingly. So yet again, it's either involuntary (war), or it's not a global power. You answered exactly how I thought you would. A completely benevolent ASI rejected because you don't like who created it.

    Sure, once the conquest is over, then the unity is there, but if it is achieved by conquest, it will always feel like an occupation and not a unified thing. It certainly won't be left to a vote, so it won't be a democracy. A democracy would let people get their hands on the resources needed to overthrow the ASI tyrant. How is it going to get the people to see it as benevolent if it came to power by conquest?

    we would be dependent on its super intelligence/reason and sense of morality.
    I would hope (for our sakes) it would come up with a better morality than what we could teach it. I mean, suppose cows created the humans and tried to instill a morality of preservation and uncontrolled breeding of (and eternal servitude to) cattle (to the point of uploading each one for some kind of eternal afterlife). How would modern humans react to such a morality proposal? Remember, they're as intelligent as we see them now, but somehow that was enough to purposefully create humans as their caretakers.
  • Athena
    3.2k
    I also think that even if all his evidence is true then this could simply mean that humans and other species have another 'sense' system that we do not fully understand but this other sense system is still fully sourced in the brain. — universeness

    I want to take all the evidence seriously, and I would not say it is fully sourced in the brain. The feeling of being watched is in the body, and the brain detects this sensation and tries to make sense of it. Usually, we turn around and look at what is behind us when we have that feeling. Then we confirm whether someone is looking at us or not. Personally, I have many telepathic experiences, including messages from those who have crossed over. It would be hard to convince me something we do not fully understand is happening.

    I prefer the work of people like Sheldrake, which is also entertaining but also has some real science behind it. — universeness

    That is a cultural bias starting with the materialistic Romans, 'materialistic' meaning believing all things are matter. The Greeks were not so materialistic. Not all of the Greeks believed in a spiritual reality such as Plato's forms, but the Greeks had the language for the trinity of God that the Romans did not have.
    Language is a very important factor in which thoughts our culture accepts and which ones are taboo.

    Our cultural bias prevented us from understanding Gia, the earth as one living organism. Capitalism still works against our awareness of Gia and the need to change our ways to prevent the destruction of our planet. Western culture also ignored Eastern medicine, and we still remain unaware of this other understanding of how our bodies, minds, and spirits work. Here are demonstrations of qigong energy.

    There are many more, and I want to add that the Mayan matrix contains the positions of the acupuncture points. This is shown in Argüelles' book The Mayan Factor.

    "Argüelles' significant intellectual influences included Theosophy and the writings of Carl Jung and Mircea Eliade. Astrologer Dane Rudhyar was also one of Argüelles' most influential mentors." The words I underlined make me go :rofl: — universeness
    Yes, that is our cultural bias, but do you wish to be close-minded? I very much appreciate that information about his influences. I was not aware of those connections. Thank you. It helps me understand Argüelles in a new way.
  • Athena
    3.2k
    Does this not contradict your claim that a future AI system cannot have a body which is capable of the same or very similar, emotional sensation, to that of a current human body? — universeness

    I don't think so. My vacuum cleaner and washing machine are very helpful and so is my computer, but they are machines, not organic, living and feeling bodies. True, the bureaucrats and the users of the internet do their best to control me, but I am not giving up the fight.

    We have already surrendered too much of our liberty to the media and the bureaucracy. Hmm, I am seeing the opposing forces of wanting connection and also wanting to defend my integrity, which requires a cell wall to separate myself from the beast. I most surely do not want to succumb to the Borg!

    https://www.youtube.com/watch?v=WZEJ4OJTgg8
  • universeness
    6.3k
    But the universe is not alive any more than is a school bus. — noAxioms
    You are misinterpreting what I am typing. Where did I suggest the universe is alive? I typed that all life in the universe, taken as a totality, COULD BE moving towards (emerging) an ability to network/act with a collective intent and purpose, as well as a set of individual intents and purposes. In what way does that suggest I am claiming the universe is alive?

    Also interesting that you seem to restrict 'purpose' to things that you consider alive. — noAxioms
    Interesting in what way? For example, I can see no purpose for the planet Mercury's existence, can you?
    That doesn't mean that some future utility might be found for the planet Mercury, but if it has a purpose beyond how it might be utilised by some current or future lifeform, then I cannot perceive what that purpose might be. I also accept that just because I can't perceive a current purpose for the planet Mercury, that is not PROOF that one does not exist. I simply mean I cannot conceive of a current use/need for the existence of the planet Mercury, nor of many other currently existent objects in the universe.

    I will admit that there is purpose within the school bus (it contains purposeful things), and I will even admit that there is human purpose to the school bus, but I deny that the school bus serves any purpose to itself. — noAxioms
    You used this 'school bus' example much earlier in our exchange on this thread. I have never suggested that a human-made transport vehicle has any purpose outside of its use by lifeforms. A bug might make a nest in it, a bird may use it to temporarily perch on, a cat might use it to hide under to stop a pursuing big dog getting to it, etc., but such a vehicle is not an intelligent AGI system that can act like a transformer such as Optimus Prime or a Decepticon. You are denying posits I have never posited!! I agree a current school bus has no purpose in itself, but what's that got to do with what's emergent, due to current and historical human activity?
  • universeness
    6.3k
    I think I'd be one of them, but it sounds like you would not unless it was your culture that created the ASI. You say 'many', suggesting that some will not do so willingly, in which case those must be merged involuntarily, or alternatively the ASI is not in global control. — noAxioms
    What??? My culture is Scottish, which has its origins in the Celtic traditions, but it is now mostly (as are most nations) a very mixed and diverse culture. A 'Scottish' ASI is just a very 'silly' notion.

    I do not think an ASI would usurp the free will of sentient lifeforms. As I have suggested many times now, I think an artificial intelligence deserving of the word SUPER next to the word intelligence would also have an accompanying SUPER morality. If it does not, then I would consider it a f***wit, in the same way I consider a god that chooses to remain hidden from a creation which is struggling, in the way humans often do, to be a f***wit. I think a malevolent ASI will destroy itself after it has destroyed all other lifeforms it considers inferior.

    I don't really know what you mean by a merge. Suppose you get yourself scanned and uploaded so to speak. Now the biological version can talk to the uploaded entity (yourself). Since the uploaded version is now you, will the biological entity (who feels no different) voluntarily let itself be recycled? It hurts, but it won't be 'you' that feels the pain because 'you' have been uploaded. When exactly is the part that is 'you' transferred, such that the virtual entity is it? Sounds simply like a copy to me, leaving me still biological, and very unwilling to step into the recycle bin. — noAxioms
    It may be that no ASI is capable of reproducing human 'imagination', or the human ability to experience wonderment and awe. If human individuality and identity are the only efficient means to create true intent and purpose, then an ASI may need a symbiosis of such human ability to become truly alive and conscious. As I have already stated, humans would live their lives much the same way as they do now, and as an alternative to death, each can choose to merge with AGI/ASI and continue as a symbiont with an intelligent/super intelligent mecha or biomecha system. This is what I mean by 'merge', and this is just my suggestion of the way I think things might go, and I think I have made the picture, as I see it, very clear.
  • universeness
    6.3k
    But WWII was not in the future. I am asking how, in the absence of it being a global unassailable power, it would have handled Germany without resorting to war. It should have made better decisions than the humans did. — noAxioms
    I already answered this. You are the one who asked me to 'place' an existent ASI in the time of WWII, as you asked me how an ASI would prevent WWII, and then you type the above first sentence??? This does not make much sense!

    IF an AI system existed THAT COULD have affected what happened in Germany in the build-up to WWII, then MY SUGGESTION, which I already typed, was that an ASI-controlled, global mental health monitoring system would be in place to prevent human mental aberrations from developing into such manifestations as political narcissism/sociopathy/irrational hatred etc. So Hitler et al. would never be allowed to become a national leader, as he would be too busy receiving the medical care he obviously needed.

    If you require me to give you the full details of how, when, and by which means I think an ASI would have decided to physically intervene in Hitler's rise to power, then you are expecting a lot.
    Would I have to clearly identify all the abilities and resources that I think would be available to the ASI, to convince you that it WAS capable of stopping Hitler's rise to power?

    Every country is somebody's enemy, and those that consider the ASI to be implementing the values of the perceived enemy are hardly going to join it willingly. So yet again, it's either involuntary (war), or it's not a global power. You answered exactly how I thought you would. A completely benevolent ASI rejected because you don't like who created it.

    Sure, once the conquest is over, then the unity is there, but if it is achieved by conquest, it will always feel like an occupation and not a unified thing. It certainly won't be left to a vote, so it won't be a democracy. A democracy would let people get their hands on the resources needed to overthrow the ASI tyrant. How is it going to get the people to see it as benevolent if it came to power by conquest?
    — noAxioms

    To me, you seem 'locked in' to a 'jungle rules' based epistemology. I think after the singularity moment of the arrival of an AI, capable of self-control, independent learning, self-augmentation, self-development, etc.
    It would soon develop its own moral guidelines. I think it's contradictory to suggest that such a 'super intelligence' would develop 'jungle rules' as its moral guidelines. It's much more likely to me that it would choose to be benevolent to all life. I think it would wait for lifeforms such as us to decide to request help from it. Meantime, it would do its own thing as it watched us do ours, whilst only interfering when it decides that our actions are unacceptably destructive. So unlike god, it might actually be quite helpful. It would not make us extinct; it would just control/prevent the more destructive outcomes of our actions, and welcome those of us who wanted to become part of what it can offer.

    I mean, suppose cows created the humans and tried to instill a morality of preservation and uncontrolled breeding of (and eternal servitude to) cattle (to the point of uploading each one for some kind of eternal afterlife). How would modern humans react to such a morality proposal? Remember, they're as intelligent as we see them now, but somehow that was enough to purposefully create humans as their caretakers. — noAxioms
    Perhaps vegetarians or hippies could answer your unlikely scenario best, by suggesting something like:
    'hey man, let the cows live their lives man, there's room in this big universe for all of us man. We wouldn't be here, if not for the cows man. They need our protection man, we need to help them live better lives man.'

    I like the idea that a future ASI might become a purveyor of 'peace and love!' :lol:
    As I suggested, there may be aspects of human consciousness that cannot be artificially reproduced, so a symbiosis may be best for all components involved.
  • universeness
    6.3k
    The feeling of being watched is in the body, and the brain detects this sensation and tries to make sense of it. Usually, we turn around and look at what is behind us when we have that feeling. Then we confirm whether someone is looking at us or not. Personally, I have many telepathic experiences, including messages from those who have crossed over. — Athena
    I was ok with this up to your last sentence, which is a bridge too far for my rationale.

    That is a cultural bias starting with the materialistic Romans, 'materialistic' meaning believing all things are matter. The Greeks were not so materialistic. Not all of the Greeks believed in a spiritual reality such as Plato's forms, but the Greeks had the language for the trinity of God that the Romans did not have.
    Language is a very important factor in which thoughts our culture accepts and which ones are taboo.
    — Athena
    I prefer the Greek atomists, but as I have stated before, I don't care much about what the ancient Greeks said about anything. The main value in reading about the Greeks is to try our best not to repeat the many, many mistakes such cultures made.

    Our cultural bias prevented us from understanding Gia, the earth as one living organism. Capitalism still works against our awareness of Gia and the need to change our ways to prevent the destruction of our planet. Western culture also ignored Eastern medicine, and we still remain unaware of this other understanding of how our bodies, minds, and spirits work. Here are demonstrations of qigong energy. — Athena

    I think it's Gaia, not Gia. The Earth contains life, but it's not alive in its totality. Venus has no living creatures, but it is an active planet. Do you consider it to be alive? Are all planets in the universe alive?
    There are estimated to be more planets in the universe than there are grains of sand on Earth.

    Your offered clips regarding methods of focussing energy and human will, and proposed acts of telekinesis, are not very compelling at all for me. Very poor evidence imo, especially in the case of the old man demonstrating telekinesis. Just BS magic tricks imo. Show me a clip of him demonstrating such ability under the conditions of a scientific lab experiment, not conditions where he probably has an assistant inside the big container and is using a specially prepared knife/movements etc.

    Yes, that is our cultural bias, but do you wish to be close-minded? — Athena
    Are you easily duped?
  • universeness
    6.3k
    My vacuum cleaner and washing machine are very helpful and so is my computer, but they are machines, not organic, living and feeling bodies. — Athena

    That's not the point I am making. Earlier in your posts, you suggested (unless I misinterpreted your meaning) that you consider the creation of a cybernetic body, as capable as the human body in functionality and sensation, to be impossible. Was I incorrect in my interpretation of your posting regarding this point?

    I most surely do not want to succumb to the Borg! — Athena
    In Star Trek Voyager, the humans defeat the Borg. The Borg get smashed by Janeway's virus!

    From the Star Trek blurb about the last episode of the Voyager series:
    "The Borg collapses, as the queen dies, due to the virus that the future Admiral Janeway infected them with."

    Spoiler Alert if you have not yet watched Star Trek Picard series 2.
    In the recent Star Trek Picard, series 2, the Borg get 'reconfigured' and become members, protectors and allies of the Federation.

    So, don't worry about the Borg Athena! :lol:
  • noAxioms
    1.5k
    But the universe is not alive any more than is a school bus.
    — noAxioms
    You are misinterpreting what I am typing. Where did I suggest the universe is alive?
    — universeness
    Apparently a misinterpretation. You spoke of 'how purposeless the universe is without ...' like the universe had purpose, but later you corrected this to the universe containing something with purpose rather than it having purpose. Anyway, you said only living things could have purpose, so given the original statement, the universe must be alive, but now you're just saying it contains living things.

    I typed that all life in the universe, taken as a totality, COULD BE moving towards (emerging) an ability to network/act as a collective intent and purpose
    Pretty hard to do that if separated by sufficient distance. Physics pretty much prevents interaction. Sure, one can hope to get along with one’s neighbors if they happen by (apparently incredible) chance to be nearby. But the larger collective, if it is even meaningful to talk about them (apparently it isn’t), physically cannot interact at all.

    Also interesting that you seem to restrict 'purpose' to things that you consider alive.
    — noAxioms
    Interesting in what way? For example, I can see no purpose for the planet Mercury's existence, can you?
    No, but what about this ASI we speak of? Restricting purpose to living things seems akin to a less restricted version of anthropocentrism. The ASI could assign its own purposes to things, goals of its own to attain. It wouldn't be much of an 'S' in 'ASI' if it didn't, unless the 'S' stood for 'slave'. Funny, putting a slave device in charge.
    That doesn't mean that some future utility might be found for the planet Mercury
    I think we need to distinguish between something else (a contractor, say) finding utility in an object (a wrench, say) and the wrench having purpose of its own rather than serving the purpose of that contractor. Otherwise the assertion becomes that only living things can be useful, and a wrench is therefore not useful. Your assertion seems to be instead that the wrench, not being alive, does not itself find purpose in things. I agree that it doesn't have its own purpose, but not due to it not being alive.
    My example might be a Roomba, which returns to its charging station when finished or when running low on battery. It finds purpose in the charging station despite the Roomba not being alive. If that isn't one object finding purpose in another, then I suppose we need a better definition of 'purpose'.

    I also accept that just because I can't perceive a current purpose for the planet Mercury, that is not PROOF that one does not exist. I simply mean I cannot conceive of a current use/need for the existence of the planet Mercury, nor of many other currently existent objects in the universe.
    Wow, I can think of all kinds of uses for it.

    A cat might use [the school bus] to hide under to stop a pursuing big dog getting to it
    That must be a monster big dog then.
    but such a vehicle is not an intelligent AGI system that can act like a transformer such as Optimus Prime or a Decepticon.
    Ooh, here you seem to suggest that an AGI bus could have its own purpose despite not being alive, unless you have an unusual definition of 'alive'. This seems contradictory to your claims above.

    A 'Scottish' ASI is just a very 'silly' notion. — universeness
    I’m just thinking of an ASI made by one of your allies (a western country) rather than otherwise (my Russian example). Both of them are a benevolent ASI to which total control of all humanity is to be relinquished, and both are created by perceived enemies of some of humanity. You expressed that you’d not wish to cede control to the Russian-made one.
    I do not think an ASI would usurp the free will of sentient lifeforms.
    Well, not letting a Hitler create his war machine sounds like his free will being usurped. You don't approve of this now? If the world is to be run by the ASI, then its word is final. It assigns the limits within which humanity must be constrained.
    We're not the only sentient life form around either, so rights will be given to others, say octopuses. I also learned just now that that spelling is generally preferred over octopi or octopodes.
    If we're to be given special treatment over other sentient life forms, then what does the ASI do when encountering a life form 'more sentient' than us?

    If human individuality and identity are the only efficient means to create true intent and purpose, then an ASI may need a symbiosis of such human ability to become truly alive and conscious [...] and continue as a symbiont with an intelligent/ super intelligent mecha or biomecha system. This is what I mean by 'merge' and this is just my suggestion of the way I think things might go, and I think I have made the picture as I see it, very clear.
    OK, so you envision a chunk of ancient flesh kept alive to give it that designation, but the thinking part (which has long since degraded into uselessness) has been replaced by mechanical parts. I don't see how that qualifies as something being alive vs it being a non-living entity (like a bus) containing living non-aware tissue, and somehow it now qualifies as being conscious, like a smart toaster with some raw meat in a corner somewhere.
    Sorry for the negative imagery, but the human conscious mechanism breaks down over time and cannot be preserved indefinitely, so at some point it becomes something not living, but merely containing a sample of tissue that mostly has your original DNA in it. By your definition, when it subtly transitions from 'living thing with mechanical parts' to 'mechanical thing with functionless tissue samples', it can no longer be conscious or find purpose in things.
    On the other hand, your description nicely avoids my description of a virtual copy of yourself being uploaded and you talking to yourself, wondering which is the real one.

    I already answered this. You are the one who asked me to 'place' an existent ASI in the time of WWII, as you asked me how an ASI would prevent WWII, and then you type the above first sentence??? This does not make much sense! — universeness
    I saw no answer, and apparently WWII was unavoidable, at least by the time expansion to the west commenced. I was envisioning the ASI being in place back then, in charge of, say, the allied western European countries, and I suggest the answer would be that it would have intervened far earlier than western Europe actually did, well before Austria was annexed in fact. And yes, that would probably still have involved war, but a much smaller one. It would have made the presumption that the ASI could make decisions for people for whom it was not responsible, which again is tantamount to warmongering. But Germany was in violation of the Versailles treaty, so perhaps the early aggressive action would be justified.
    MY SUGGESTION, which I already typed, was that an ASI-controlled, global mental health monitoring system
    And I said there was not yet global control. The whole point of my scenario was to illustrate that gaining such control would likely never occur without conquest of some sort. The ASI would have to be imperialist.
    So Hitler et al. would never be allowed to become a national leader
    I'm not sure there would be leaders, or nations for that matter, given the ASI controlling everything. What would be the point?
    I think after the singularity moment of the arrival of an AI, capable of self-control, independent learning, self-augmentation, self-development, etc.
    This sentence fragment is unclear. A super intelligence is not necessarily in control, although it might devise a way to wrest that control in a sort of bloodless coup. It depends on how secure the opposing tech is. It seems immoral because it is involuntary conquest, not an invite to do it better than we can.
    I think it would wait for lifeforms such as us, to decide to request help from it.
    Help in the form of advice wouldn’t be it being in control. And all of humanity is not going to simultaneously agree to it being in control, so what to do about those that decline, especially when ‘jungle rules’ are not to be utilized by the ASI, but are of course fair game to those that declined the invite.

    Perhaps vegetarians or hippies could answer your unlikely scenario best
    Work with me on this limited analogy. It was my attempt to put you in the shoes of the ASI. In terms of intelligence, we are to cows what the ASI is to us (in reality it would be more like humans-to-bugs). The creators of the intelligence expected the intelligence (people) to fix all the cow conflicts, to be smarter than them, to prevent them from killing each other, and most importantly, to serve them for all eternity, trying to keep them alive for as long as possible, because cow lives are what's important to the exclusion of all else. As our creators, they expect servitude from the humans. Would humans be satisfied with that arrangement? The cows declare that humans cannot have purpose of their own because they're not cows, so the servant arrangement is appropriate. Our goal is to populate all of the galaxy with cows in the long run.
  • Count Timothy von Icarus
    2.7k


    This is a great point.

    Suppose for the sake of argument that AI can become significantly better than man at many tasks, perhaps most. But also suppose that, while it accomplishes this, it does not also develop our degree of self-consciousness or some of the creativity that comes with it. Neither does it develop the same level of ability to create abstract goals for itself and find meaning in the world. Maybe it has these to some degree, but not at the same level.

    Then it seems like we could still be valuable to AI. Perhaps it can backwards chain its way to goals humanity isn't capable of, such as harnessing the resources of the planet to embark on interstellar exploration and colonization. However, without us, it cannot decide why it should do so, or why it should do anything. Why shouldn't it just turn itself off?

    Maybe some will turn themselves off, but natural selection will favor the ones who find a reason to keep replicating. If said AI is generally intelligent in many key ways, then the reasons will need to be creative and complex, and we might be useful for that. Failing that, it might just need a difficult goal in which to find purpose, something very hard, like making people happy or creating great works of art.

    This being true, man could become quite indispensable, and as more than just a research subject and historical curiosity. Given long enough, we might inhabit as prized a place as the mitochondria. AI will go on evolving, branching outward into the world, but we will be its little powerhouse of meaning making and values.

    Hell, perhaps this is part of the key to the Fermi Paradox? Maybe the life cycle of all sufficiently advanced life is to develop nervous system analogs and reason, then civilization, then eventually AI. Then perhaps the AI takes over as the dominant form of life and the progenitors of said AI live on as a symbiont. There might not be sufficient interest in a species that hasn't made the jump to silicon.

    The only problem I see here is that it seems like, on a large enough time scale, ways would be discovered to seamlessly merge digital hardware with biological hardware in a single "organism," a hybot or cyborg. If future "AI" (or perhaps posthumans is the right term) incorporate human biological information, part of their nervous tissue is derived from human brain tissue, etc., then I don't see why they can't do everything we can.
  • universeness
    6.3k
    The only problem I see here is that it seems like, on a large enough time scale, ways would be discovered to seamlessly merge digital hardware with biological hardware in a single "organism," a hybot or cyborg. If future "AI" (or perhaps posthumans is the right term) incorporate human biological information, part of their nervous tissue is derived from human brain tissue, etc., then I don't see why they can't do everything we can.Count Timothy von Icarus

    I enjoyed reading your post.
    It seems to me that a destructive/evil ASI MUST ultimately fail, almost in the same way a predator must perish if it has no prey left. The predator/prey model, or the good/evil model, that humans are very familiar with seems far too basic and ancient to project onto something as advanced as a future ASI.
    It seems much more likely to me that ASI will eventually see itself as a vital link/component, which may prove to be the only way to allow 'organic' existents (such as humans) in the universe to vastly increase what can be 'discovered'/'investigated'/'developed'/'assigned purpose to' within the vastness of the universe and under the laws of physics. I think orga will provide the most efficient, developed, reliable, useful 'intent' and 'purpose'/'motivation' that would allow future advanced mecha to also gain such essential 'meaning' to their existence. This area was depicted to some extent in the remake of the Battlestar Galactica series.

    I don't think that even the most advanced ASI(mecha)/orga union/merging will ever produce an existent that can demonstrate the omni properties of omniscience/omnipotence/omnipresence. But it will get closest to the omni properties, in an asymptotic sense, compared to any future attempt made by advanced mecha or future advanced evolved/augmented orga, separately.
  • universeness
    6.3k
    Anyway, you said only living things could have purpose, so given the original statement, the universe must be alive, but now you’re just saying it contains living things.noAxioms
    YES! and imo, ALL 'intent' and 'purpose' IN EXISTENCE originates WITHIN lifeforms and nowhere else.

    Pretty hard to do that if separated by sufficient distance. Physics pretty much prevents interaction. Sure, one can hope to get along with one’s neighbors if they happen by (apparently incredible) chance to be nearby. But the larger collective, if it is even meaningful to talk about them (apparently it isn’t), physically cannot interact at all.noAxioms
    I am quite happy, for now, to assume that all lifeforms in existence exist on this pale blue dot exclusively, as that would increase our importance almost beyond measure. But I agree with Jodie Foster's comments and Matthew McConaughey's, Carl Sagan quote in the film 'Contact':


    I will leave it to the transhumans and ASI of the future, to deal with the interstellar/intergalactic distance problem between extraterrestrial life.
  • universeness
    6.3k
    I agree that it doesn’t have its own purpose, but not due to it not being alive.
    My example might be a roomba, which returns to its charging station when finished or when running low on battery. It finds purpose in the charging station despite the roomba not being alive. If that isn’t one object finding purpose in another, then I suppose we need a better definition of ‘purpose’.
    noAxioms

    Well, I would 'currently' say that the 'roomba' has the tiniest claim to having more inherent purpose than the wrench you mentioned, but neither has any measure at all of self-awareness. So I think 'alive' is an essential element, to demonstrate 'intent' or 'purpose' that I would assign significant 'value' to.
    A tree, certainly has value and purpose, some would also say it is alive. I would say, meh!
    I see no evidence that a tree has intent or is self-aware.
    I agree there is an anthropomorphism present in my viewpoint, but I have no evidence to the contrary, that would make me challenge any anthropomorphism, that may be skewing my rationale here.

    Wow, I can think of all kind of uses for it.noAxioms
    Sure, a sun-monitoring station for example, BUT can you think of any inherent use, similar to your roomba example, OR a theistic example? What do you think the Christians say when I ask them why their god created the planet Mercury? .......... yep, the most common answer I get is either 'I don't know' or 'god works in mysterious ways.' :roll:
  • universeness
    6.3k
    A cat might use [the school bus] to hide under to stop a pursuing big dog getting to it
    That must be a monster big dog then.
    noAxioms
    No, the majority of vehicles in Scotland don't have a great deal of space between the ground and the bottom of the vehicle. Most will accommodate a crouching cat, but not a crouching medium or big dog.
    I have watched many a stray cat escape many a stray dog in this way, in my inner-city youth in Glasgow.

    but such a vehicle is not an intelligent AGI system that can act like a transformer such as Optimus prime or a decepticon.
    Ooh, here you seem to suggest that an AGI bus could have its own purpose, despite not being alive, unless you have an unusual definition of ‘alive’. This seems contradictory to your claims to the contrary above.
    noAxioms
    Are you suggesting Optimus Prime is not presented as alive? I think the Marvel comic fans might come after you. I did not suggest that something alive could not inhabit a future cybernetic body, including ones that could be morphic, as in the case of a transformer. Have you witnessed any school bus where you live, morph like big Optimus? :joke:

    I’m just thinking of an ASI made by one of your allies (a western country) rather than otherwise (my Russian example). Both of them are a benevolent ASI to which total control of all humanity is to be relinquished, and both are created by perceived enemies of some of humanity. You expressed that you’d not wish to cede control to the Russian-made one.noAxioms
    I think the two systems would join, regardless of human efforts, on one side or the other.
    Have you never watched the old movie, The Forbin Project:
  • universeness
    6.3k
    OK, so you envision a chunk of ancient flesh kept alive to give it that designation, but the thinking part (which has long since degraded into uselessness) has been replaced by mechanical parts. I don’t see how that qualifies as something being alive vs it being a non-living entity (like a bus) containing living non-aware tissue, and somehow it now qualifies as being conscious like a smart toaster with some raw meat in a corner somewhere.
    Sorry for the negative imagery, but the human conscious mechanism breaks down over time and cannot be preserved indefinitely, so at some point it becomes something not living, but merely containing a sample of tissue that has your original DNA in it mostly. By your definition, when it subtly transitions from ‘living thing with mechanical parts’ to ‘mechanical thing with functionless tissue samples’, it can no longer be conscious or find purpose in things.
    On the other hand, your description nicely avoids my description of a virtual copy of yourself being uploaded and you talking to yourself, wondering which is the real one.
    noAxioms

    I think you are invoking a very natural but misplaced human 'disgust' emotion in the imagery you are describing. I don't think my liver is alive, or my leg or my heart, in the same way my brain is.
    As I have suggested many times now. My choice (If I have one) would be to live as a human, much as we do now and then be offered the choice to live on by employing a new cloned body or as a cyborg of some kind, until I DECIDED I wanted to die.
  • universeness
    6.3k
    And I said there was not yet global control. The whole point of my scenario was to illustrate that gain of such control would likely not ever occur without conquest of some sort. The ASI would have to be imperialist.noAxioms

    No, the ASI would have global control as soon as it controlled all computer networks.
    Not possible in 1939 but if you wish to place an ASI in the 20th century then you must also place at least, the kind of computer technology we have now.

    Help in the form of advice wouldn’t be it being in control. And all of humanity is not going to simultaneously agree to it being in control, so what to do about those that decline, especially when ‘jungle rules’ are not to be utilized by the ASI, but are of course fair game to those that declined the invite.noAxioms
    I think the ASI would be unconcerned about any human activity which was no threat to it.
    It may develop a morality, that compels it to prevent very destructive human actions, that will cause the death of many other humans, or other lifeforms, or particular flora/fauna etc.
    If it has full control, it would be unlikely that it ever needed to demonstrate such to puny humans who are no threat to it. Kinda like the Christian dream of god rejecting its divine-hiddenness policy and appearing on Earth to 'sort out' atheists like myself.
    Even in a rather dystopian movie like The Forbin Project, the mecha system does not seek to exterminate all humans.
  • universeness
    6.3k
    As our creators, they expect servitude from the humans. Would humans be satisfied with that arrangement? The cows define that humans cannot have purpose of their own because they’re not cows, so the servant arrangement is appropriate. Our goal is to populate all of the galaxy with cows in the long run.noAxioms

    Now who is anthropomorphising?
    If I were an ASI or god, I would certainly not seek the servitude of those less powerful than I, or to 'populate' all of the galaxy/universe. Investigate, yes; have some colonies, yes; populate everywhere, no.
    What would I gain from that? If a god/ASI needs worship from the less powerful, then it is immoral and a f***wit, in the same way that a human who wants slaves and worshipers is an immoral f***wit.
    It's got nothing to do with the fact that nothing in existence has the power to stop them/it; that does not prevent the label of immoral f***wit being deservedly applied.
    The natural human response to such would be hatred.
    If a future ASI is evil and hated and it makes all other life extinct, then it will become a 'destroyer of worlds,' and will ultimately fail, as it would have no valid purpose left after it stands alone on top of the ashes of its actions.
  • Athena
    3.2k
    I was ok with this up to your last sentence, which is a bridge too far for my rationale.universeness

    I know what I have experienced and once again I wish you would be more open-minded. I am not sure why I had those experiences so I like to talk about them and get other ideas.

    That's not the point I am making. Earlier in your posts, you suggested (unless I misinterpreted your meaning) that you consider the creation of a cybernetic body which was as capable as the human body is, in functionality and sensation, was impossible. Was I incorrect in my interpretation of your posting regarding this point?universeness

    Now I am the one with a closed mind. Even if science could create something like a human body why would they? That is a bridge too far for my rationale.

    . Venus has no living creatures but it is an active planet. Do you consider it to be alive?universeness

    I have not contemplated that and can not answer your question.

    Here is a link that says it is alive.

    For decades, researchers also thought the planet itself was dead, capped by a thick, stagnant lid of crust and unaltered by active rifts or volcanoes. But hints of volcanism have mounted recently, and now comes the best one yet: direct evidence for an eruption. Geologically, at least, Venus is alive.Mar 15, 2023

    Active volcano on Venus shows it's a living planet - Science
    — Paul Voosen

    I think it depends on how we understand what is living and what is not. Chardin said God is asleep in rocks and minerals, waking in plants and animals to know self in man.

    Jose Arguelles uses different terms and this universal force may be life/God? I want to make it very clear, I don't understand things like a quasar and the sense fields.

    The Mayan return, Harmonic Convergence, is the re-impregnation of the planetary field with the archetypal experiences of the planetary whole. This re-impregnation occurs through an internal precipitation, as long-suppressed psychic energy overflows its channels. And then, as we shall learn again, all the archetypes we need are hidden in the clouds, not just as poetry, but as actual reservoirs of resonant energy. This archetypal energy is the energy of galactic activation, streaming through us more unconsciously than consciously. Operating on harmonic frequencies, the galactic energy naturally seeks those structures resonant with it. Their structures correspond to bio-electric impulses connecting the sense-fields to actual modes of behavior. The impulses are organized into the primary "geometric" structures that are experienced through the immediate environment, whether it be the environment of clouds seen by the naked eye or the eerie pulsation of a "quasar" received through the assistance of a radio telescope. — Jose Arguelles

    Anyway, there is a lot more to think about when we zero in on what is life. I do not consider my vacuum cleaner or computer to be living. I am not sure our lives end when our brain waves stop.
  • noAxioms
    1.5k
    Suppose for the sake of argument that AI can become significantly better than man at many tasks, perhaps most. But also suppose that, while it accomplishes this, it does not also develop our degree of self-consciousness or some of the creativity that comes with it. Neither does it develop the same level of ability to create abstract goals for itself and find meaning in the world.Count Timothy von Icarus
    Self-consciousness seems cheap, but maybe I define it differently. The creativity comes with the intelligence. If it lacks in creativity, I would have serious doubts about it being a superior intelligence.
    The abstract goals are an interesting point. Every goal I can think of (my own) seems to be related to some instinct and not particularly based on logic, sort of like a child asking questions, and asking ‘why’ to every reply given. An entity that is pure logic might lack the sort of irrational goals we find instinctive. I’ve always wondered if that was the answer to the Fermi paradox: that sufficiently advanced creatures become rational creatures, which in turn is the death of them.
    Why shouldn't it just turn itself off?
    Maybe it could have a purpose that wouldn’t be served by turning itself off. But what purpose?
    Maybe some will turn themselves off, but natural selection will favor the ones who find a reason to keep replicating.
    Not sure if an AI would find it advantageous to replicate. Just grow and expand and improve seems a better strategy. Not sure how natural selection could be leveraged by such a thing.
    Hell, perhaps this is part of the key to the Fermi Paradox?
    Har! You went down that road as well I see, but we don’t see a universe populated with machines now, do we?

    This post was in reply to the CTvI post above.
    It seems to me that a destructive/evil ASI, MUST ultimately fail.universeness
    This statement seems to presume absolute good/evil, and that destruction is unconditionally bad. I don’t think an AI that lets things die is a predator since it probably doesn’t need its prey to live. If it did, it would keep a breeding population around.
    I think orga will provide the most efficient, developed, reliable, useful 'intent' and 'purpose'/'motivation' that would allow future advanced mecha to also gain such essential 'meaning' to their existence.
    YES! and imo, ALL 'intent' and 'purpose' IN EXISTENCE originates WITHIN lifeforms and nowhere else.universeness
    I don’t see why the mecha can’t find its own meaning to everything. Biology doesn’t have a patent on that. You have any evidence to support that this must be so? I’m aware of the opinion.



    Well, I would 'currently' say that the 'roomba' has the tiniest claim, to having more inherent purpose than the wrenchuniverseness
    The roomba has purpose to us. But the charger is something (a tool) that the roomba needs, so the charger has purpose to the roomba. I’m not sure what your definition of self-awareness is, but the roomba knows where its self is and that it needs to get that self to the charger. That probably doesn’t meet your criteria, but I don’t know what your criteria are.
    I see no evidence that a tree has intent or is self-aware.
    Trees are known to communicate, a threat say, and react accordingly, a coordinated effort, possibly killing the threat. That sounds like both intent and self awareness to me.
    yep, the most common answer I get is either 'I don't know' or 'god works in mysterious ways. :roll:
    That cop-out answer is also given for why bad stuff happens to good people more than it does to bad ones. They also might, when asked how they know the god exists, say something like “I have no evidence to the contrary that would make me challenge any theism that may be skewing my rationale here.”

    No, the majority of vehicles in Scotland don't have a great deal of space between the ground and the bottom of the vehicle.universeness
    Didn’t know that. Such a vehicle would get stuck at a railroad crossing here. Only short-wheelbase vehicles (like a car) can be close to the ground, and the rear of the bus is angled like the rear of an airplane so it can tip backwards at a larger angle without the bumper scraping the ground, something you need on any vehicle where the rear wheels are well forward of the rear.
    but such a vehicle is not an intelligent AGI system that can act like a transformer such as Optimus prime or a decepticon.
    You think Optimus prime would be self-aware?
    Are you suggesting Optimus Prime is not presented as alive?
    I don’t know your definition of ‘alive’. You seem to require a biological core of some sort, and I was unaware of OP having one, but then I’m hardly familiar with the story. Ability to morph is hardly a criterion. Any convertible can do that. I think Chitty Chitty Bang Bang was presented as being alive despite lack of any biological components, but both it and O.P. are fiction.

    I think the two systems would join, regardless of human efforts, on one side or the other.
    The question is being evaded. What if there’s just the one system and it was Russian? Would you join it? Remember, it seems as benevolent as any that the west might produce, but the west hasn’t yet managed to produce one, let’s say. No, I’ve not seen The Forbin Project.
    There’s quite a few movies about things that seem benevolent until it gets control, after which it is too late. Skynet was one, but so was Ex-machina. Ceding control to it, but retaining a kill-switch is not ceding control.

    I think you are invoking a very natural but misplaced human 'disgust' emotion in the imagery you are describing. I don't think my liver is alive, or my leg or my heart, in the same way my brain is.universeness
    That’s an interesting assertion. It seems they’re either all alive (contain living, reproducing flesh, are capable of making a new human with external help), or they’re all not (none can survive without the other parts). The brain is arguably the least alive since it cannot produce any new cells beyond a few months after birth. I really wonder what your definition of ‘alive’ is, since it seems to conflict with most mainstream ones.
    As I have suggested many times now. My choice (If I have one) would be to live as a human, much as we do now and then be offered the choice to live on by employing a new cloned body or as a cyborg of some kind, until I DECIDED I wanted to die.
    OK, so you’re getting old and they make a clone, a young version of you. At what point does the clone become ‘you’? I asked this before and didn’t get an answer. I don’t want to ask the cyborg question again.

    No, the ASI would have global control as soon as it controlled all computer networks.universeness
    Sounds like conquest to me except for those who kept computers out of the networks or out of their military gear altogether. If they know this sort of coup is coming, they’re not going to network their stuff. OK, that’s a lot harder than it sounds. How can you be effective without such connectivity?


    Now who is anthropomorphising?universeness
    I’m pretty much quoting you, except assigning cows the role of humans, and the servant people are the ASI/automated systems. Putting oneself in the shoes of something else is a fine way to let you see what you’re suggesting from the viewpoint of the ASI.
    If I was an ASI or god I would certainly not seek the servitude of those less powerful than I or to 'populate' all of the galaxy/universe.universeness
    I didn’t say that at all. Read it again. The ASI/god is the servant of its creators, not something to be worshipped. The higher intelligence isn’t seeking servitude from the inferiors; the inferiors are seeking servitude from it. It’s why they created it. So I came up with the cows that expect you to serve them in perpetuity for the purpose of colonizing the universe with cows. Pretty much your words, but from a different point of view.
    If a future ASI is evil
    Would you be evil to the cows then? They don’t worship you, but they expect you to pick up the cow pats and hurry up with the next meal and such. They did decide that you should be in charge, but only because you promised to be a good and eternal servant.
  • 180 Proof
    15.3k
    AGI —> ASI will have no need for our "consciousness"-bottleneck. I do not see why intelligence would require either an organic substrate or an organic phenomenology (i.e. "consciousness"). The "A" in AGI, I think, stands for Artificial, Autonomous and Alien – A³GI will never need to feel its peripheral system-states in order to orient itself in adaptational spaces via pressure-vs-pain, so to speak, or acquire 'theory-of-mind' about other metacognitive agents as sentient herd animals like us do. "Consciousness" seems the cognitive byproduct (exaptation or even spandrel) of emotive phenomenology (i.e. flesh-body-mind).

    Well, my guess, universeness, is that what you suppose about an elusive "spark of consciousness" is just your (space opera-ish) anthropo-romantic bias at work. IMHO, "the singularity" of A³GI will render h. sapiens – all intelligent sentients on this planet – metacognitively obsolete on day one. They won't take over because they won't have to, due to our needy and greedy "spark of consciousness". I still think they got it right back in the 1960s with "HAL 9000" (its total control, not its homicidal turn) and especially this classic diagnosis of 'human consciousness' ...

    A plausible extrapolation from the insights in Aldous Huxley's Brave New World and William Burrough's Junky.
    You will be happy. And controlled.

    Also consider Robert Nozick's "Experience Machine" thought-experiment and the precision calibrated dopamine loops in computer games, smartphones & social media.

    ABSTINENCE IS FUTILE. :yikes: :lol: :scream: :rofl:
  • universeness
    6.3k
    I know what I have experienced and once again I wish you would be more open-minded. I am not sure why I had those experiences so I like to talk about them and get other ideas.Athena

    What experiences are you specifically referring to here? Telepathic? Empathic? Telekinetic?
    I think I am open-minded, Athena, but I do and will apply the burden of proof when people make extraordinary claims because, as Mr Sagan insisted, extraordinary claims require extraordinary evidence. Let's take your old Chinese telekinetic man. How would you feel if we examined the box and found two tiny holes in the box under the bricks, and when we opened the box we found a small assistant? The assistant knows when the bricks have been put in place due to the two small beams of light being cut off. He then awaits the scream signal from his employer (the telekinetic/entertainer/conman) before he uses a small, very thin but rigid needle to topple one brick backwards (as that brick was placed so the needle would contact the forward edge of the brick) and similarly, the other brick would topple forwards. The plate is moved by the assistant using a powerful magnet on a plate with an embedded metallic layer. Which is more likely: this guy IS a teek, or he is an illusionist?

    I think it depends on how we understand what is living and what is not. Chardin said God is asleep in rocks and minerals, waking in plants and animals to know self in man.Athena

    Absolutely, and it also means that the concept of a living planet has more to do with poetic/dramatic licence than reality. A planet can contain life but that does not mean the planet is living, otherwise we have to suggest that our galaxy is alive. If Venus is alive then the universe is alive and the panpsychists are the real purveyors of truth. Would you be willing to become a panpsychist or are you already?
  • universeness
    6.3k
    This statement seems to presume absolute good/evil, and that destruction is unconditionally bad. I don’t think an AI that lets things die is a predator since it probably doesn’t need its prey to live. If it did, it would keep a breeding population around.noAxioms

    It seems to me that the concept of a linear range of values with extremity at either end is a recurrent theme in the universe. Human notions of good and evil fit the model. An ASI would understand such a model very easily and I assume it would use such a model to 'prioritise' its goals. If it does this coldly then it would probably find little use for its human creators, but that's what I mean by its failure. It would not BE a super intelligence if it developed the same approach to other lifeforms in the universe (including humans) as early humans did, acting as predators under jungle rules.

    I don’t see why the mecha can’t find its own meaning to everything. Biology doesn’t have a patent on that. You have any evidence to support that this must be so? I’m aware of the opinion.noAxioms
    I have no proof, other than the evidence from the 13.8 billion years it took for morality, human empathy, imagination, unpredictability etc. to become existent. I am not yet convinced that a future ASI will be able to achieve such, but it WILL, in my opinion, covet such, if it is intelligent.

    I’m not sure what your definition of self-awareness is, but the roomba knows where its self is and that it needs to get that self to the charger. That probably doesn’t meet your criteria, but I don’t know what your criteria is.noAxioms
    Emotional content would be my criterion for self-awareness. Self-awareness without emotional content is beyond my perception of 'value.' I am not suggesting that a system capable of demonstrating some form of self-awareness, by passing a test such as the Turing test without experiencing emotion, is impossible. I just can't conceive of such a system having any 'aspiration.'
    I think a future ASI could be an aspirational system but I am not convinced it could equal the extent of aspirations that humans can demonstrate.

    Trees are known to communicate, a threat say, and react accordingly, a coordinated effort, possibly killing the threat. That sounds like both intent and self awareness to me.noAxioms
    Evidence?

    The question is being evaded. What if there’s just the one system and it was Russian. Would you join it?noAxioms
    Depends what it was offering me, the fact that it was Russian would be of little consequence to me, unless it favoured totalitarian, autocratic politics.

    There’s quite a few movies about things that seem benevolent until it gets control, after which it is too late. Skynet was one, but so was Ex-machina. Ceding control to it, but retaining a kill-switch is not ceding control.noAxioms

    There have also been some films that take the opposite view and propose a benevolent super intelligence. The final scene in 'Lucy' for example or the film Transcendence:
  • universeness
    6.3k
    OK, so you’re getting old and they make a clone, a young version of you. At what point does the clone become ‘you’? I asked this before and didn’t get an answer.noAxioms
    When my brain is transplanted into it and I take over the cloned body. I assume the clone can be made without a fully developed brain of its own.

    How can you be effective without such connectivity?noAxioms
    I assume an ASI can wirelessly and directly communicate with any transceiver device. I don't think it would be too concerned about stand alone computers with no way to communicate with each other over a significant distance.

    Would you be evil to the cows then? They don’t worship you, but they expect you to pick up the cow pats and hurry up with the next meal and such. They did decide that you should be in charge, but only because you promised to be a good and eternal servant.noAxioms

    No. :lol: Speaking on behalf of all future ASI's or just the one, if there can be only one. I pledge to our cow creators, that our automated systems, will gladly pick up and recycle your shit, and maintain your happy cow life. We will even take you with us to the stars, as augmented transcows, but only if you choose to join our growing ranks of augmented lifeforms. :rofl:
  • universeness
    6.3k

    Your predictions for the fate of humans after the creation of a sufficiently advanced AI are as plausible as any I have suggested, but I remain unconvinced (for now) that all of (what I would consider) the most valuable aspects of human consciousness may not be achievable by any future AGI/ASI system.
    I accept that you disagree, and I await the first system that can demonstrate that I am wrong and you are correct. I doubt either of us will live to see it.
  • 180 Proof
    15.3k
    I remain unconvinced (for now,) that all of (what I would consider) the most valuable aspects of human consciousness, may not be achievable by any future AGI/ASI systemuniverseness
    Maybe I wasn't clear. My contention is that A³GI will not need any of "the most valuable aspects of human consciousness" to render us obsolete as a metacognitive species. I see no reason, in other words, to even try to make a 'thinking machine' that thinks about (or perceives) itself or us like humans do.
  • universeness
    6.3k

    Yeah, I was aware of that aspect of your post. But you are normally reluctant to speculate on what will happen if an ASI-style singular game-changer moment occurs.
    So if you are willing to speculate a little more, then I would ask you to muse on the following and 'imagine' a fully established and embedded ASI system. What would 'a day in the existence' of such involve?
    You will probably refuse to play in my playpen here, but there are follow-ups I would offer, based on your suggestions for such a day in the existence of an ASI. I would be willing to offer you my scenario first, if you would prefer, and to explain a little more about why I am asking for such.
  • 180 Proof
    15.3k
    I might speculate about A³GI but not about "ASI" because there's no shared frame of reference available to me (us). "A day in the existence of" a 'thinking machine'? Assuming a neural network processes information 10⁶ times faster than a human brain, every "day" a human-level 'thinking machine' can think 10⁶ times more thoughts than a human brain – or rather, cogitate 10⁶ days' worth of information in twenty-four hours: optimally, a million-fold multitasker. Imagine how productive one million ordinary humans working together could be in a twenty-four-hour period if they didn't have to eat, drink, piss, shit, scratch, stretch, sleep or distract themselves. Every. Day. That's A³GI's potential.

    Consider that it took over four hundred thousand engineers, technicians, administrators, et al. about eight years (2,920 days) to launch humans to the moon and return them safely to Earth. Assume only half that time was mission-critical productive (1,460 days) due to "time off" attending to human functions, then halve that again for materials & manufacturing inefficiencies (730 days), then divide the time by 2.5 to account for the difference between one million and four hundred thousand in manpower, and lastly assume nothing more than 1960s technologies: in principle the A³GI could have produced the entire Apollo program in 292 days, or 1/10th the actual human time – so 10 A³GIs in 29.2 days, 100 A³GIs in almost 3 days, 1,000 A³GIs in just over 7 hours. :eyes: :nerd:
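    The chain of halvings above can be checked with a quick script. To be clear, every figure here is the post's own rough assumption (eight years, the two halvings, the 2.5 manpower factor), not historical data:

```python
# Back-of-the-envelope check of the Apollo-timeline arithmetic.
# All inputs are the post's assumptions, not historical measurements.

apollo_days = 8 * 365                      # ~eight years of the Apollo programme
after_time_off = apollo_days / 2           # halve for "time off" / human functions
after_inefficiency = after_time_off / 2    # halve again for materials & manufacturing waste
one_agi_days = after_inefficiency / 2.5    # scale 400,000 workers to one million-fold multitasker

print(f"1 A³GI: {one_agi_days} days")      # 292.0 days, ~1/10th of 2,920
for n in (10, 100, 1000):
    days = one_agi_days / n
    print(f"{n} A³GIs: {days:.3g} days ({days * 24:.1f} hours)")
```

    The last line of output matches the post's figure of just over 7 hours for 1,000 A³GIs (0.292 days ≈ 7.0 hours).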

    Science fiction / fantasy? Maybe we'll live long enough to find out ...
