However, there are fundamental differences that will likely influence its ability to fully manifest that possibility, namely that it stands a good chance of permanence: immortality through part replacement and constant access to reliable energy sources. — Benj96
I'm not so sure. The knowledge of nuclear fission led to both compassionate/productive use (nuclear power plants) and malevolent/destructive use (nuclear bombs). — Benj96
Having knowledge doesn't make anyone any better/more empathetic. It simply acts as a basis for further good or bad deeds. — Benj96
Knowledge or power/ability is not a reflection of the character of a conscious entity. — Benj96
Making excuses for a god, using the argument that it's 'not so bad,' even though its so-called 'recorded word' testifies that it supports human slavery and ethnic cleansing, and sending those it created (but judges flawed) to hell (and not just for a fixed sentence or to get rehabilitated, but FOR ETERNITY!), IS rather irrational, if you ask me. KNOWING how gods are described, historically and currently, surely means that any assignment of any notion of 'benevolence' is not enough to compensate for its deserved accusations of supporting and performing atrocity and evil behaviour.
This is partly the reason for a belief in a benevolent God. Because if it's omnipotent/all-powerful, it could just as easily have destroyed the entire reality we live in or designed one to cause maximal suffering. But for those that are enjoying the state of being alive, it lends itself to the view that such a God is not so bad after all, as it allowed the beauty of existence and all the pleasures that come with it. — Benj96
Which data are you labelling exclusively 'human'? If I program a computer with data that describes how the planets of the solar system orbit the Sun, how 'human' is the data involved?
We design AI based on human data. So it seems natural that such a product will be similar to us, as we deem success to be "likeness" - in empathy, virtue, a sense of right and wrong. — Benj96
At the same time we hope it has greater potential than we do. Superiority. We hope that such superiority will be intrinsically beneficial to us. That it will serve us - furthering medicine, legal policy, tech and knowledge. — Benj96
By Darwinian, jungle-style rules, no; conquering and assimilating has been the norm. But the whole point of humans trying to create a 'civilisation' is that we REJECT jungle rules as having ANY role to play. The fact that they still do IS to the chagrin of all those millions of people who try, every day, to fight for a better world. Stay with us, Ben, and stop offering comfort to those who posit the benevolence of gods.
The question then is, historically speaking, have superior organisms always favoured the benefit of inferior ones? If we take ourselves as an example, the answer is definitely not. At least not in a unanimous sense. — Benj96
So f*** them! (EDIT: the selfish and dangerous, that is!) Let's keep working hard to change their viewpoints, or render them as powerless as they need to be, for the sake of the future of all of us.
Some of us do really care about the ecosystem, about other animals, about the planet at large. But some of us are selfish and dangerous. — Benj96
When will you stop concentrating on where humans came from and start concentrating on what we have the potential to become?
If we create AI like ourselves, it's likely it will behave the same. I find it hard to believe we can create anything that isn't human-behaving, as we are biased and vulnerable to our own selfish tendencies. — Benj96
I will be content with benevolent, as the omnis are impossible. My hope remains that any ASI-supported transhuman form is NOT posthuman. I use the term 'posthuman' in the sense of the extinction of all traces of anything substantial that WE would be able to recognise as human.
An omnibenevolent AI would be unrecognisable to us - as flawed beings. — Benj96
I would say no. It couldn't have formed some kind of contract pre-existence; we can't expect it to be beholden to something it arises in. I don't think you can even look at children this way, as if they have a debt to their parents. I'd be a little wary of any parent viewing their children that way. I wouldn't want a child of mine thinking, say, "Well, I'll drive over and see if I can fix my mother's sink. I owe her for feeding me and cleaning my wounds."
Supposing we design and bring to fruition an artificial intelligence with consciousness, does it owe us anything as its creators? — Benj96
I would guess we will make them to do us favors. How effective that will be... depends.
Should we expect any favours? — Benj96
No. I can't see high-IQ humans justifying such a thing with low-IQ humans.
Do you think we would be better off or enslaved to a superior intelligence? — Benj96
I'm not so sure I agree, because AGI is being/will be developed solely on human data. Whatever biases we have in our conscious experiences that we cannot depart from are intrinsic to the setup of AI. — Benj96
True, it likely can never be human and experience the full set of things natural to such a state, but it's also not entirely alien. — Benj96
If I had to guess, our measure of successful programming is to produce something that can interact with us in a meaningful and relatable way, which requires human behaviours and expectations built into its systems. — Benj96
However, we can give it huge volumes of data, and we can give it the ability to evolve at an accelerated rate. So it would advance itself and become fully autonomous in time. Then it could go beyond what we are capable of, but indirectly, not directly. — Benj96
IOW, God. Voltaire vindicated. — Vera Mont
Would it evolve away from its primal programming (whatever benefits humanity) towards whatever benefits AI survival? — Benj96
I predict that the AI machines will turn out to be another bunch of ungrateful bastards. — BC