The purpose I am suggesting only exists, as an emergence of all the activity of that which is alive, and can demonstrate intent and purpose, taken as a totality. — universeness
Interesting assertions. But the universe is not alive any more than is a school bus. Also interesting that you seem to restrict 'purpose' to things that you consider alive.
No intelligent designer for the universe is required, for an emergent totality of purpose, within the universe, to be existent. — universeness
I didn't say one was required.
Many humans will welcome such a union. — universeness
I think I'd be one of them, but it sounds like you would not unless it was your culture that created the ASI. You say 'many', suggesting that some will not do so willingly, in which case those must be merged involuntarily, or alternatively the ASI is not in global control.
We won't fight ASI, we will merge with it. — universeness
That seems something else. The ASI being the boss is quite different than whatever you envision as a merge. I think either is post-human though.
You never answered how an AGI might have prevented war with Hitler. — noAxioms
All production would be controlled by the ASI in a future where all production is automated. — universeness
But WWII was not in the future. I am asking how, in the absence of it being a global unassailable power, it would have handled Germany without resorting to war. It should have made better decisions than the humans did.
No narcissistic, maniacal human could get their hands on the resources needed to go to war, unless the ASI allowed it. — universeness
I agree that holding total power involves complete control over challenges to that power. Hence Kim Jong-un killing a good percentage of his relatives before they could challenge his ascent.
You are convinced that humans and a future AI will inevitably be enemies. — universeness
Every country is somebody's enemy, and those that consider the ASI to be implementing the values of the perceived enemy are hardly going to join it willingly. So yet again, it's either involuntary (war), or it's not a global power. You answered exactly how I thought you would. A completely benevolent ASI rejected because you don't like who created it.
we would be dependent on its super intelligence/reason and sense of morality. — universeness
I would hope (for our sakes) it would come up with a better morality than what we could teach it. I mean, suppose cows created the humans and tried to instill a morality of preservation and uncontrolled breeding of (and eternal servitude to) cattle (to the point of uploading each one for some kind of eternal afterlife)? How would modern humans react to such a morality proposal? Remember, they're as intelligent as we see them now, but somehow that was enough to purposefully create humans as their caretakers.
I also think that even if all his evidence is true, then this could simply mean that humans and other species have another 'sense' system that we do not fully understand, but this other sense system is still fully sourced in the brain. — universeness
I prefer the work of people like Sheldrake, which is also entertaining but also has some real science behind it. — universeness
"Argüelles' significant intellectual influences included Theosophy and the writings of Carl Jung and Mircea Eliade. Astrologer Dane Rudhyar was also one of Argüelles' most influential mentors." The words I underlined make me go :rofl: — universeness
Yes, that is our cultural bias but do you wish to be close-minded? I very much appreciate that information about his influences. I was not aware of those connections. Thank you. It helps me understand Argüelles in a new way.
Does this not contradict your claim that a future AI system cannot have a body which is capable of the same, or very similar, emotional sensation to that of a current human body? — universeness
But the universe is not alive any more than is a school bus. — noAxioms
You are misinterpreting what I am typing. Where did I suggest the universe is alive? I typed that all life in the universe, taken as a totality, COULD BE moving towards (emerging) an ability to network/act as a collective intent and purpose, as well as a set of individual intents and purposes. In what way does that suggest I am claiming the universe is alive?
Also interesting that you seem to restrict 'purpose' to things that you consider alive. — noAxioms
Interesting in what way? For example, I can see no purpose for the planet Mercury's existence, can you?
I will admit that there is purpose within the school bus (it contains purposeful things), and I will even admit that there is human purpose to the school bus, but I deny that the school bus serves any purpose to itself. — noAxioms
You used this 'school bus' example much earlier in our exchange on this thread. I have never suggested a human-made transport vehicle has any purpose outside of its use by lifeforms. A bug might make a nest in it, a bird may use it to temporarily perch on, a cat might use it to hide under to stop a pursuing big dog getting to it, etc., but such a vehicle is not an intelligent AGI system that can act like a transformer such as Optimus prime or a decepticon. You are denying posits I have never posited!! I agree a current school bus has no purpose in itself, but what's that got to do with what's emergent, due to current and historical human activity?
I think I'd be one of them, but it sounds like you would not unless it was your culture that created the ASI. You say 'many', suggesting that some will not do so willingly, in which case those must be merged involuntarily, or alternatively the ASI is not in global control. — noAxioms
What??? My culture is Scottish, which has its origins in the Celtic traditions, but it is mostly now (as are most nations) a very mixed and diverse culture. A 'Scottish' ASI is just a very 'silly' notion.
I don't really know what you mean by a merge. Suppose you get yourself scanned and uploaded so to speak. Now the biological version can talk to the uploaded entity (yourself). Since the uploaded version is now you, will the biological entity (who feels no different) voluntarily let itself be recycled? It hurts, but it won't be 'you' that feels the pain because 'you' have been uploaded. When exactly is the part that is 'you' transferred, such that the virtual entity is it? Sounds simply like a copy to me, leaving me still biological, and very unwilling to step into the recycle bin. — noAxioms
It may be that no ASI is capable of reproducing human 'imagination' or the human ability to experience wonderment and awe. If human individuality and identity are the only efficient means to create true intent and purpose, then an ASI may need a symbiosis of such human ability to become truly alive and conscious. As I have already stated, humans would live their lives much the same way as they do now, and as an alternative to death, each can choose to merge with AGI/ASI and continue as a symbiont with an intelligent/super-intelligent mecha or biomecha system. This is what I mean by 'merge' and this is just my suggestion of the way I think things might go, and I think I have made the picture as I see it, very clear.
But WWII was not in the future. I am asking how, in the absence of it being a global unassailable power, it would have handled Germany without resorting to war. It should have made better decisions than the humans did. — noAxioms
I already answered this. You are the one who asked me to 'place' an existent ASI in the time of WWII, as you asked me how an ASI would prevent WWII, and then you type the above first sentence??? This does not make much sense!
Every country is somebody's enemy, and those that consider the ASI to be implementing the values of the perceived enemy are hardly going to join it willingly. So yet again, it's either involuntary (war), or it's not a global power. You answered exactly how I thought you would. A completely benevolent ASI rejected because you don't like who created it.
Sure, once the conquest is over, then the unity is there, but if it is achieved by conquest, it will seem to always feel like an occupation and not a unified thing. It certainly won't be left to a vote, so it won't be a democracy. A democracy would be people getting their hands on the resources needed to overthrow the ASI tyrant. How is it going to get the people to see it as benevolent if it came to power by conquest? — noAxioms
I mean, suppose cows created the humans and tried to instill a morality of preservation and uncontrolled breeding of (and eternal servitude to) cattle (to the point of uploading each one for some kind of eternal afterlife)? How would modern humans react to such a morality proposal? Remember, they're as intelligent as we see them now, but somehow that was enough to purposefully create humans as their caretakers. — noAxioms
Perhaps vegetarians or hippies could answer your unlikely scenario best, by suggesting something like;
The feeling of being watched is in the body and the brain detects this sensation and tries to make sense of it. Usually, we turn around and look at what is behind us when we have that feeling. Then we confirm whether someone is looking at us or not. Personally, I have many telepathic experiences, including messages from those who have crossed over. — Athena
I was ok with this up to your last sentence, which is a bridge too far for my rationale.
That is a cultural bias starting with the materialistic Romans. Materialistic meaning believing all things are matter. The Greeks were not so materialistic. Not all of the Greeks believed in a spiritual reality such as Plato's forms, but the Greeks had the language for the trinity of God that the Romans did not have.
Language being a very important factor in what thoughts our culture accepts and which ones are taboo. — Athena
I prefer the Greek atomists, but as I have stated before, I don't care much about what the ancient Greeks said about anything. The main value in reading about the Greeks is to try our best not to repeat the many, many mistakes such cultures made.
Our cultural bias prevented us from understanding Gaia, the earth as one living organism. Capitalism still works against our awareness of Gaia and the need to change our ways to prevent the destruction of our planet. Western culture also ignored Eastern medicine and we still remain unaware of this other understanding of how our bodies, minds, and spirit work. Here are demonstrations of qigong energy. — Athena
Yes, that is our cultural bias but do you wish to be close-minded? — Athena
Are you easily duped?
My vacuum cleaner and washing machine are very helpful and so is my computer, but they are machines, not organic, living and feeling bodies. — Athena
I most surely do not want to succumb to the Borg! — Athena
In Star Trek Voyager, the humans defeat the Borg. The Borg get smashed by Janeway's virus!
But the universe is not alive any more than is a school bus. — noAxioms
You are misinterpreting what I am typing. Where did I suggest the universe is alive? — universeness
Apparently a misinterpretation. You spoke of 'how purposeless the universe is without ...' like the universe had purpose, but later you corrected this to the universe containing something with purpose rather than it having purpose. Anyway, you said only living things could have purpose, so given the original statement, the universe must be alive, but now you're just saying it contains living things.
I typed that all life in the universe, taken as a totality, COULD BE moving towards (emerging) an ability to network/act as a collective intent and purpose. — universeness
Pretty hard to do that if separated by sufficient distance. Physics pretty much prevents interaction. Sure, one can hope to get along with one's neighbors if they happen by (apparently incredible) chance to be nearby. But the larger collective, if it is even meaningful to talk about them (apparently it isn't), physically cannot interact at all.
Also interesting that you seem to restrict 'purpose' to things that you consider alive. — noAxioms
Interesting in what way? For example, I can see no purpose for the planet Mercury's existence, can you? — universeness
No, but what about this ASI we speak of? Restricting purpose to living things seems to be akin to a claim of a less restricted version of anthropocentrism. The ASI could assign its own purposes to things, goals of its own to attain. Wouldn't be much of the 'S' in 'ASI' if it didn't, unless the 'S' stood for 'slave'. Funny putting a slave device in charge.
That doesn't mean that some future utility might be found for the planet Mercury. — universeness
I think we need to distinguish between something else (contractor say) finding utility in an object (a wrench say) and the wrench having purpose of its own rather than serving the purpose of that contractor. Otherwise the assertion becomes that only living things can be useful, and a wrench is therefore not useful. Your assertion seems to be instead that the wrench, not being alive, does not itself find purpose in things. I agree that it doesn't have its own purpose, but not due to it not being alive.
I also accept that just because I can't perceive a current purpose for the planet Mercury, that is not PROOF that one does not exist. I simply mean I cannot perceive a current use/need for the existence of the planet Mercury, nor of many other currently existent objects in the universe. — universeness
Wow, I can think of all kinds of uses for it.
A cat might use [the school bus] to hide under to stop a pursuing big dog getting to it. — universeness
That must be a monster big dog then.
but such a vehicle is not an intelligent AGI system that can act like a transformer such as Optimus prime or a decepticon. — universeness
Ooh, here you seem to suggest that an AGI bus could have its own purpose, despite not being alive, unless you have an unusual definition of 'alive'. This seems contradictory to your claims to the contrary above.
A 'Scottish' ASI is just a very 'silly' notion. — universeness
I'm just thinking of an ASI made by one of your allies (a western country) rather than otherwise (my Russian example). Both of them are a benevolent ASI to which total control of all humanity is to be relinquished, and both are created by perceived enemies of some of humanity. You expressed that you'd not wish to cede control to the Russian-made one.
I do not think an ASI would usurp the free will of sentient lifeforms. — universeness
Well, not letting a Hitler create his war machine sounds like his free will being usurped. You don't approve of this now? If the world is to be run by the ASI, then its word is final. It assigns the limits within which humanity must be constrained.
If human individuality and identity are the only efficient means to create true intent and purpose, then an ASI may need a symbiosis of such human ability to become truly alive and conscious [...] and continue as a symbiont with an intelligent/super-intelligent mecha or biomecha system. This is what I mean by 'merge' and this is just my suggestion of the way I think things might go, and I think I have made the picture as I see it, very clear. — universeness
OK, so you envision a chunk of ancient flesh kept alive to give it that designation, but the thinking part (which has long since degraded into uselessness) has been replaced by mechanical parts. I don't see how that qualifies as something being alive vs it being a non-living entity (like a bus) containing living non-aware tissue, and somehow it now qualifies as being conscious like a smart toaster with some raw meat in a corner somewhere.
I already answered this. You are the one who asked me to 'place' an existent ASI in the time of WWII, as you asked me how an ASI would prevent WWII, and then you type the above first sentence??? This does not make much sense! — universeness
I saw no answer, and apparently WWII was unavoidable, at least by the time expansion to the west commenced. I was envisioning the ASI being in place back then, in charge of say the allied western European countries, and I suggest the answer would be that it would have intervened far before western Europe actually did, well before Austria was annexed in fact. And yes, that would probably have still involved war, but a much smaller one. It would have made the presumption that the ASI could make decisions for people over which it was not responsible, which again is tantamount to warmongering. But Germany was in violation of the Versailles treaty, so perhaps the early aggressive action would be justified.
MY SUGGESTION, which I already typed, was that an ASI-controlled, global mental health monitoring system — universeness
And I said there was not yet global control. The whole point of my scenario was to illustrate that gain of such control would likely not ever occur without conquest of some sort. The ASI would have to be imperialist.
So Hitler et al. would never be allowed to become a national leader — universeness
I'm not sure there would be leaders, or nations for that matter, given the ASI controlling everything. What would be the point?
I think after the singularity moment of the arrival of an AI, capable of self-control, independent learning, self-augmentation, self-development, etc. — universeness
This sentence fragment is unclear. A super intelligence is not necessarily in control, although it might devise a way to wrest that control in a sort of bloodless coup. It depends on how secure opposing tech is. It seems immoral because it is involuntary conquest, not an invite to do it better than we can.
I think it would wait for lifeforms such as us, to decide to request help from it. — universeness
Help in the form of advice wouldn't be it being in control. And all of humanity is not going to simultaneously agree to it being in control, so what to do about those that decline, especially when 'jungle rules' are not to be utilized by the ASI, but are of course fair game to those that declined the invite.
Perhaps vegetarians or hippies could answer your unlikely scenario best — universeness
Work with me and this limited analogy. It was my attempt to put you in the shoes of the ASI. In terms of intelligence, we are to cows what the ASI is to us (in reality it would be more like humans-to-bugs). The creators of the intelligence expected the intelligence (people) to fix all the cow conflicts, to be smarter than them, prevent them from killing each other, and most importantly, serve them for all eternity, trying to keep them alive for as long as possible because cow lives are what's important to the exclusion of all else. As our creators, they expect servitude from the humans. Would humans be satisfied with that arrangement? The cows define that humans cannot have purpose of their own because they're not cows, so the servant arrangement is appropriate. Our goal is to populate all of the galaxy with cows in the long run.
The only problem I see here is that it seems like, on a large enough time scale, ways would be discovered to seamlessly merge digital hardware with biological hardware in a single "organism," a hybot or cyborg. If future "AI" (or perhaps posthumans is the right term) incorporate human biological information, part of their nervous tissue is derived from human brain tissue, etc., then I don't see why they can't do everything we can. — Count Timothy von Icarus
Anyway, you said only living things could have purpose, so given the original statement, the universe must be alive, but now you're just saying it contains living things. — noAxioms
YES! and imo, ALL 'intent' and 'purpose' IN EXISTENCE originates WITHIN lifeforms and nowhere else.
Pretty hard to do that if separated by sufficient distance. Physics pretty much prevents interaction. Sure, one can hope to get along with one's neighbors if they happen by (apparently incredible) chance to be nearby. But the larger collective, if it is even meaningful to talk about them (apparently it isn't), physically cannot interact at all. — noAxioms
I am quite happy for now to assume that all lifeforms in existence exist on this pale blue dot exclusively, as that would increase our importance almost beyond measure. But I agree with Jodie Foster's comments and Matthew McConaughey's Carl Sagan quote in the film 'Contact':
I agree that it doesn’t have its own purpose, but not due to it not being alive.
My example might be a roomba, which returns to its charging station when finished or when running low of battery. It finds purpose in the charging station despite the roomba not being alive. If that isn’t one object finding purpose in another, then I suppose we need a better definition of ‘purpose’. — noAxioms
Wow, I can think of all kinds of uses for it. — noAxioms
Sure, a sun-monitoring station for example, BUT can you think of any inherent use? Similar to your roomba example, for example, OR a theistic example. What do you think the Christians say when I ask them why their god created the planet Mercury? .......... yep, the most common answer I get is either 'I don't know' or 'god works in mysterious ways.' :roll:
A cat might use [the school bus] to hide under to stop a pursuing big dog getting to it. — universeness
That must be a monster big dog then. — noAxioms
No, the majority of vehicles in Scotland don't have a great deal of space between the ground and the bottom of the vehicle. Most will accommodate a crouching cat, but not a crouching medium or big dog.
but such a vehicle is not an intelligent AGI system that can act like a transformer such as Optimus prime or a decepticon. — universeness
Ooh, here you seem to suggest that an AGI bus could have its own purpose, despite not being alive, unless you have an unusual definition of 'alive'. This seems contradictory to your claims to the contrary above. — noAxioms
Are you suggesting Optimus Prime is not presented as alive? I think the Marvel comic fans might come after you. I did not suggest that something alive could not inhabit a future cybernetic body, including ones that could be morphic, as in the case of a transformer. Have you witnessed any school bus where you live morph like big Optimus? :joke:
I'm just thinking of an ASI made by one of your allies (a western country) rather than otherwise (my Russian example). Both of them are a benevolent ASI to which total control of all humanity is to be relinquished, and both are created by perceived enemies of some of humanity. You expressed that you'd not wish to cede control to the Russian-made one. — noAxioms
I think the two systems would join, regardless of human efforts, on one side or the other.
OK, so you envision a chunk of ancient flesh kept alive to give it that designation, but the thinking part (which has long since degraded into uselessness) has been replaced by mechanical parts. I don't see how that qualifies as something being alive vs it being a non-living entity (like a bus) containing living non-aware tissue, and somehow it now qualifies as being conscious like a smart toaster with some raw meat in a corner somewhere.
Sorry for the negative imagery, but the human conscious mechanism breaks down over time and cannot be preserved indefinitely, so at some point it becomes something not living, but merely containing a sample of tissue that has your original DNA in it mostly. By your definition, when it subtly transitions from ‘living thing with mechanical parts’ to ‘mechanical thing with functionless tissue samples’, it can no longer be conscious or find purpose in things.
On the other hand, your description nicely avoids my description of a virtual copy of yourself being uploaded and you talking to yourself, wondering which is the real one. — noAxioms
And I said there was not yet global control. The whole point of my scenario was to illustrate that gain of such control would likely not ever occur without conquest of some sort. The ASI would have to be imperialist. — noAxioms
Help in the form of advice wouldn't be it being in control. And all of humanity is not going to simultaneously agree to it being in control, so what to do about those that decline, especially when 'jungle rules' are not to be utilized by the ASI, but are of course fair game to those that declined the invite. — noAxioms
I think the ASI would be unconcerned about any human activity which was no threat to it.
As our creators, they expect servitude from the humans. Would humans be satisfied with that arrangement? The cows define that humans cannot have purpose of their own because they’re not cows, so the servant arrangement is appropriate. Our goal is to populate all of the galaxy with cows in the long run. — noAxioms
I was ok with this up to your last sentence, which is a bridge too far for my rationale. — universeness
That's not the point I am making. Earlier in your posts, you suggested (unless I misinterpreted your meaning) that you consider the creation of a cybernetic body as capable as the human body, in functionality and sensation, to be impossible. Was I incorrect in my interpretation of your posting regarding this point? — universeness
Venus has no living creatures but it is an active planet. Do you consider it to be alive? — universeness
For decades, researchers also thought the planet itself was dead, capped by a thick, stagnant lid of crust and unaltered by active rifts or volcanoes. But hints of volcanism have mounted recently, and now comes the best one yet: direct evidence for an eruption. Geologically, at least, Venus is alive. (Mar 15, 2023)
Active volcano on Venus shows it's a living planet - Science — Paul Voosen
The Mayan return, Harmonic Convergence, is the re-impregnation of the planetary field with the archetypal experiences of the planetary whole. This re-impregnation occurs through an internal precipitation, as long-suppressed psychic energy overflows its channels. And then, as we shall learn again, all the archetypes we need are hidden in the clouds, not just as poetry, but as actual reservoirs of resonant energy. This archetypal energy is the energy of galactic activation, streaming through us more unconsciously than consciously. Operating on harmonic frequencies, the galactic energy naturally seeks those structures resonant with it. Their structures correspond to bio-electric impulses connecting the sense-fields to actual modes of behavior. The impulses are organized into the primary "geometric" structures that are experienced through the immediate environment, whether it be the environment of clouds seen by the naked eye or the eerie pulsation of a "quasar" received through the assistance of a radio telescope. — Jose Arguelles
Suppose for the sake of argument that AI can become significantly better than man at many tasks, perhaps most. But also suppose that, while it accomplishes this, it does not also develop our degree of self-consciousness or some of the creativity that comes with it. Neither does it develop the same level of ability to create abstract goals for itself and find meaning in the world. — Count Timothy von Icarus
Self-consciousness seems cheap, but maybe I define it differently. The creativity comes with the intelligence. If it lacks in creativity, I would have serious doubts about it being a superior intelligence.
Why shouldn't it just turn itself off? — Count Timothy von Icarus
Maybe it could have a purpose that wouldn't be served by turning itself off. But what purpose?
Maybe some will turn themselves off, but natural selection will favor the ones who find a reason to keep replicating. — Count Timothy von Icarus
Not sure if an AI would find it advantageous to replicate. Just grow and expand and improve seems a better strategy. Not sure how natural selection could be leveraged by such a thing.
Hell, perhaps this is part of the key to the Fermi Paradox? — Count Timothy von Icarus
Har! You went down that road as well I see, but we don't see a universe populated with machines now, do we?
It seems to me that a destructive/evil ASI, MUST ultimately fail. — universeness
This statement seems to presume absolute good/evil, and that destruction is unconditionally bad. I don't think an AI that lets things die is a predator since it probably doesn't need its prey to live. If it did, it would keep a breeding population around.
I think orga will provide the most efficient, developed, reliable, useful 'intent' and 'purpose'/'motivation' that would allow future advanced mecha to also gain such essential 'meaning' to their existence.
YES! and imo, ALL 'intent' and 'purpose' IN EXISTENCE originates WITHIN lifeforms and nowhere else. — universeness
I don't see why the mecha can't find its own meaning to everything. Biology doesn't have a patent on that. You have any evidence to support that this must be so? I'm aware of the opinion.
Well, I would 'currently' say that the 'roomba' has the tiniest claim to having more inherent purpose than the wrench — universeness
The roomba has purpose to us. But the charger is something (a tool) that the roomba needs, so the charger has purpose to the roomba. I'm not sure what your definition of self-awareness is, but the roomba knows where its self is and that it needs to get that self to the charger. That probably doesn't meet your criteria, but I don't know what your criteria is.
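To make that concrete, here is a minimal, hypothetical Python sketch of the dock-seeking behaviour being described (the class, numbers, and method names are invented for illustration, not any vendor's actual firmware). The 'self' the roomba knows about is just a pair of stored coordinates, and 'needing' the charger is just a battery threshold test.

```python
# Hypothetical sketch: the robot "knows" its own position and the charger's
# position only as state variables, and "wants" the charger only as a
# threshold test on battery level. No emotion or aspiration is involved.

from dataclasses import dataclass


@dataclass
class Roomba:
    x: float = 0.0
    y: float = 0.0
    battery: float = 1.0           # 1.0 = fully charged
    dock: tuple = (0.0, 0.0)       # remembered charger location

    def step_towards_dock(self, step: float = 0.5) -> None:
        """Move a fixed step towards the stored dock coordinates."""
        dx, dy = self.dock[0] - self.x, self.dock[1] - self.y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist > 0:
            self.x += step * dx / dist
            self.y += step * dy / dist

    def tick(self) -> str:
        """One control-loop iteration: clean until the battery is low, then seek the dock."""
        self.battery -= 0.1
        if self.battery < 0.2:     # 'needs' the charger = a simple threshold
            self.step_towards_dock()
            return "seeking dock"
        return "cleaning"


bot = Roomba(x=3.0, y=4.0, battery=0.35)
print(bot.tick())   # "cleaning"      (battery now ~0.25, above the threshold)
print(bot.tick())   # "seeking dock"  (battery ~0.15, below the threshold)
```

Whether a loop like that counts as the roomba 'finding purpose' in the charger is exactly the question being argued here.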
I see no evidence that a tree has intent or is self-aware. — universeness
Trees are known to communicate, a threat say, and react accordingly, a coordinated effort, possibly killing the threat. That sounds like both intent and self awareness to me.
yep, the most common answer I get is either 'I don't know' or 'god works in mysterious ways.' :roll: — universeness
That cop-out answer is also given for why bad stuff happens to good people more than it does to bad ones. They also might, when asked how they know the god exists, say something like "I have no evidence to the contrary that would make me challenge any theism that may be skewing my rationale here."
No, the majority of vehicles in Scotland don't have a great deal of space between the ground and the bottom of the vehicle. — universeness
Didn't know that. Such a vehicle would get stuck at a railroad crossing here. Only short-wheelbase vehicles (like a car) can be close to the ground, and the rear of the bus is angled like the rear of an airplane so it can tip backwards at a larger angle without the bumper scraping the ground, something you need on any vehicle where the rear wheels are well forward of the rear.
but such a vehicle is not an intelligent AGI system that can act like a transformer such as Optimus prime or a decepticon. — universeness
You think Optimus prime would be self-aware?
Are you suggesting Optimus Prime is not presented as alive? — universeness
I don't know your definition of 'alive'. You seem to require a biological core of some sort, and I was unaware of OP having one, but then I'm hardly familiar with the story. Ability to morph is hardly a criterion. Any convertible can do that. I think Chitty Chitty Bang Bang was presented as being alive despite lack of any biological components, but both it and O.P. are fiction.
I think the two systems would join, regardless of human efforts, on one side or the other. — universeness
The question is being evaded. What if there's just the one system and it was Russian? Would you join it? Remember, it seems as benevolent as any that the west might produce, but they haven't yet been able to, let's say. No, I've not seen the Forbin Project.
I think you are invoking a very natural but misplaced human 'disgust' emotion in the imagery you are describing. I don't think my liver is alive, or my leg or my heart, in the same way my brain is. — universeness
That's an interesting assertion. It seems they're either all alive (contain living, reproducing flesh, are capable of making a new human with external help), or they're all not (none can survive without the other parts). The brain is arguably least alive since it cannot reproduce any new cells after a few months after birth. I really wonder what your definition of 'alive' is since it seems to conflict with most mainstream ones.
As I have suggested many times now, my choice (if I have one) would be to live as a human, much as we do now, and then be offered the choice to live on by employing a new cloned body or as a cyborg of some kind, until I DECIDED I wanted to die. — universeness
OK, so you're getting old and they make a clone, a young version of you. At what point does the clone become 'you'? I asked this before and didn't get an answer. I don't want to ask the cyborg question again.
No, the ASI would have global control as soon as it controlled all computer networks. — universeness
Sounds like conquest to me except for those who kept computers out of the networks or out of their military gear altogether. If they know this sort of coup is coming, they're not going to network their stuff. OK, that's a lot harder than it sounds. How can you be effective without such connectivity?
Now who is anthropomorphising? — universeness
I'm pretty much quoting you, except assigning cows the role of humans and the servant people are the ASI/automated systems. Putting oneself in the shoes of something else is a fine way to let you see what you're suggesting from the viewpoint of the ASI.
If a future ASI is evil — universeness
Would you be evil to the cows then? They don't worship you, but they expect you to pick up the cow pats and hurry up with the next meal and such. They did decide that you should be in charge, but only because you promised to be a good and eternal servant.
You will be happy. And controlled.
I know what I have experienced and once again I wish you would be more open-minded. I am not sure why I had those experiences so I like to talk about them and get other ideas. — Athena
I think it depends on how we understand what is living and what is not. Chardin said God is asleep in rocks and minerals, waking in plants and animals to know self in man. — Athena
This statement seems to presume absolute good/evil, and that destruction is unconditionally bad. I don’t think an AI that lets things die is a predator since it probably doesn’t need its prey to live. If it did, it would keep a breeding population around. — noAxioms
I don't see why the mecha can't find its own meaning to everything. Biology doesn't have a patent on that. You have any evidence to support that this must be so? I'm aware of the opinion. — noAxioms
I have no proof, other than the evidence from the 13.8 billion years it took for morality, human empathy, imagination, unpredictability, etc. to become existent. I am not yet convinced that a future ASI will be able to achieve such, but it WILL in my opinion covet such, if it is intelligent.
I'm not sure what your definition of self-awareness is, but the roomba knows where its self is and that it needs to get that self to the charger. That probably doesn't meet your criteria, but I don't know what your criteria is. — noAxioms
Emotional content would be my criteria for self-awareness. Self-awareness without emotional content is beyond my perception of 'value.' I am not suggesting that something capable of demonstrating some form of self-awareness, by passing a test such as the Turing test, without experiencing emotion, is NOT possible. I just can't perceive such a system having any 'aspiration.'
Trees are known to communicate, a threat say, and react accordingly, a coordinated effort, possibly killing the threat. That sounds like both intent and self awareness to me. — noAxioms
Evidence?
The question is being evaded. What if there's just the one system and it was Russian? Would you join it? — noAxioms
Depends what it was offering me; the fact that it was Russian would be of little consequence to me, unless it favoured totalitarian, autocratic politics.
There are quite a few movies about things that seem benevolent until they get control, after which it is too late. Skynet was one, but so was Ex Machina. Ceding control to it, but retaining a kill-switch, is not ceding control. — noAxioms
OK, so you're getting old and they make a clone, a young version of you. At what point does the clone become 'you'? I asked this before and didn't get an answer. — noAxioms
When my brain is transplanted into it and I take over the cloned body. I assume the clone can be made without a fully developed brain of its own.
How can you be effective without such connectivity? — noAxioms
I assume an ASI can wirelessly and directly communicate with any transceiver device. I don't think it would be too concerned about stand-alone computers with no way to communicate with each other over a significant distance.
Would you be evil to the cows then? They don’t worship you, but they expect you to pick up the cow pats and hurry up with the next meal and such. They did decide that you should be in charge, but only because you promised to be a good and eternal servant. — noAxioms
I remain unconvinced (for now) that all of (what I would consider) the most valuable aspects of human consciousness, may not be achievable by any future AGI/ASI system — universeness
Maybe I wasn't clear. My contention is that A³GI will not need any of "the most valuable aspects of human consciousness" to render us obsolete as a metacognitive species. I see no reason, in other words, to even try to make a 'thinking machine' that thinks about (or perceives) itself or us like humans do.