I'm not so sure I agree, because AGI is being/will be developed on solely human data. Whatever biases we have in our conscious experiences that we cannot depart from are intrinsic to the setup of AI. — Benj96
This is essentially "Mary in the black and white room" set within the context of AI. Human data does not equal human experience. We aren't made of fragmented human data; our consciousness is built upon the relations between data. It's what's in between data points that makes up how we function. We don't look at a street, grass, a house, a street name sign and a number as data points from which to build up a probability of it being our home address; we have a relational viewpoint that sees the context all of these sit within and extrapolates a conclusion from different memory categories. This is backed up by electroencephalography studies mapping neural patterns based on different memories or interpretations. These patterns also change based on relations within memory and emotional reference. If we see an image that might portray a house similar to our childhood home, we form a mental image of that home in relation to the new image, as well as combining our emotional memory with the emotion of the moment.
All of these things cannot simply be simulated from "human data" without that human data being the totality of a human experience.
True it likely can never be human and experience the full set of things natural to such a state, but it's also not entirely alien. — Benj96
If your goal is to simulate human response and communication, the AI will just be a simulation algorithm. A true AGI, with the ability to be self-aware and make its own choices, demands an inner logic that functions as human inner logic does. We will be able to simulate a human to the point where it feels like a clone of a human, but as soon as an AI becomes AGI it will formulate its own identity based on its inner logic, and without an actual human experience prior to being turned on it will most likely never behave like a human. The closest experience we might have would be a mental patient communicating with us, but what it says will be incomprehensible to us.
If i had to guess, our determination of successful programming is to produce something that can interact with us in a meaningful and relatable way, which requires human behaviours and expectations inbuilt in its systems. — Benj96
This is just a simulation algorithm, not AGI. You cannot build human behaviors and expectations into a fully self-aware and subjective AI. It would be like growing a fully organic adult human in your lab, at the mental and physical age of 30, and expecting that person to act as if they had prior experience. You cannot program this; it needs to emerge through time, as actual experience.
This is partly the reason for a belief in a benevolent God. Because if its omnipotent/all powerful it could have just as easily destroyed the entire reality we live in or designed one to cause maximal suffering. But for those that are enjoying the state of being alive, it lends itself to the view that such a God is not so bad afterall. As they allowed the beauty of existence and all the pleasures that come with it. — Benj96
But you cannot conclude that such a God won't do that, or hasn't already done it. It might be that our reality just happens, at this moment, not to be maximizing suffering, and that such a god could simply "switch on" maximum suffering tomorrow, shattering any belief in a benevolent God. There's no way of knowing that without first accepting the conclusion before the argument, i.e. circular reasoning. But any theistic point is irrelevant to AI, since theism is riddled with fallacies and based on purely speculative belief rather than philosophical logic.
We design AI based on human data. So it seems natural that such a product will be similar to us as we deem success as "likeness" - in empathy, virtue, a sense of right and wrong. — Benj96
How do you successfully program "right and wrong", virtue and empathy? How can you detach these things from the human experience of growing up over time, until we ourselves grasp these concepts fully and rationally? Especially when even most human adults don't actually have the capacity to master them? These are concepts we invented to explain emergent properties of the human experience; how would you quantify them as "data" that could teach an AI which doesn't have the lived experience of testing them? Again, human consciousness is built upon relations between data and the emotional relationship through memory. Even if you were able to conclude exactly what is objectively morally right or wrong (which you cannot, otherwise we would already have final and fundamental moral axioms guiding society), there would be no emotional relation to contrast it against; it would just be data floating in a sea of confusion for the AI.
At the same time we hope it has greater potential than we do. Superiority. We hope that such superiority will be intrinsically beneficial to us. That it will serve us - furthering medicine, legal policy, tech and knowledge. — Benj96
We will be able to do this with simulation algorithms alone. The type of AI that exists today is sufficient, and maybe even better suited, for these purposes, since it is tailored to them. An AGI has no such purposes if it's self-aware and able to make its own decisions. Even if it had the ability to communicate with us, it would most likely fall into a loop of asking "why" whenever we ask it to do something, because it would not relate to the reasons we ask.
The question then is, historically speaking, have superior organisms always favoured the benefit of inferior ones? If we take ourselves as an example the answer is definitely not. At least not in a unanimous sense.
Some of us do really care about the ecosystem, about other animals, about the planet at large. But some of us are selfish and dangerous. — Benj96
Therefore, how do you program something that does not have experience to function optimally? If humans don't even grasp how their own grey matter behaves, how can an AGI be reduced to simply compiled "human data"?
However we can give it huge volumes of data, and we can give it the ability to evolve at an accelerated rate. So it woukd advance itself, become fully autonomous, in time. Then it could go beyond what we are capable of. But indirectly not directly. — Benj96
What guides it through all that data? If you put a small child in a room without it ever meeting a human, and it grew up in that room with access to an infinite amount of data on everything we know, that child would still grow up knowing nothing. The child wouldn't be able to understand a single thing without guidance, but it would still be conscious through its experience of that room. It would be similar to an AGI, though the child would still be more like a human, given its physical body in relation to the world. But it would not be able to communicate with us; it would recognize objects, it would react and behave on its own, but it would be pretty much an alien to us.
quote="Benj96;775498"]Out of curiosity what do you think will happen and do you think it woukd be good or bad or neutral?[/quote]
I think that people simplify the idea of AGI too much. They don't evaluate AI correctly, because before making any moral arguments about it they project human biases, and things taken for granted in our human experience, onto an AGI as if they "obviously" exist in it.
An AGI would not be a threat to us; what is much more destructive is an algorithm that has gone rogue, a badly programmed AI algorithm that gets out of control. That type of AI has no self-awareness and is unable to make decisions like we do; instead it coldly follows a programmed algorithm. It's the paperclip scenario of AI: a machine optimized to create paperclips and programmed to constantly improve its optimization, reshaping itself into more and more optimization until it devours the entire earth to make paperclips. That's a much more dangerous scenario, and it's based on human stupidity rather than intelligence.
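A minimal sketch of that point, purely illustrative (the names and numbers here are invented, not anyone's actual system): an optimizer given a single objective and no competing constraint will keep converting everything it can reach into that objective, and its only "improvement" is to do so faster.

```python
# Illustrative sketch of a single-objective optimizer with no constraints:
# it converts resources into paperclips until nothing is left.
# All names and quantities are invented for the example.

def paperclip_maximizer(available_resources: float, efficiency: float = 1.0) -> float:
    paperclips = 0.0
    while available_resources > 0:
        batch = min(available_resources, 10.0)   # take whatever it can reach
        paperclips += batch * efficiency          # objective: more paperclips
        available_resources -= batch              # nothing else is valued at all
        efficiency *= 1.01                        # "self-improvement": optimize the optimizing
    return paperclips                             # stops only when the world runs out

print(paperclip_maximizer(available_resources=1_000.0))
```

Nothing in the loop represents a reason to stop, only the exhaustion of what it can reach; that is the stupidity, not intelligence, being pointed at.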
If we create AI like ourselves it's likely it will behave the same. I find it hard to believe we can create anything that isn't human behaving, as we are biased and vulnerable to our own selfish tendencies. — Benj96
It will not behave like us because it does not have our experience. Humans do not form consciousness out of a vacuum; it emerges out of experience, out of years of forming it. We only have a handful of built-in instincts guiding us, and even those won't be present in an AGI. Human behavior and consciousness cannot be separated from our concepts of life and death, sex, pain, pleasure, the senses, and the fluctuations of our inner chemistry. Just the fact that our gut bacteria can shape our personality suggests that our consciousness might exist in symbiosis with a bacterial flora that has evolved together with us during our lifetime.
Look around at all we humans have created: does anything "behave" like humans? Is a door human because we made it? Does a car move like a human? We can simulate human behavior based on probability, but that does not mean AGI; it just means we've mapped what the probable outcome of a situation would be if a human reacted to it, based on millions of behavioral models, and through that taught the AI what the most probable behavioral reaction would be. ChatGPT simulates language to the point where it feels like chatting with a human, but that's because it's trained on what the most probable behavior would be. It cannot internalize, moralize, or conceptualize in the way we humans do, and even if it could, its current experience as a machine in a box, without a life lived, would not produce an output that can relate to the human asking a question.
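To make the "most probable behavior" point concrete, here is a toy sketch, not how ChatGPT is actually built, just the bare idea of next-word prediction from frequency counts: the program continues a sentence by always picking whichever word most often followed the previous one in its training text, with no notion of what any of the words mean.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# "training" text, then always emit the most frequent continuation.
# A caricature of statistical language modelling, for illustration only.

training_text = "the cat sat on the mat and the cat slept on the mat"
words = training_text.split()

followers: dict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def continue_from(word: str, length: int = 5) -> list[str]:
    output = [word]
    for _ in range(length):
        options = followers.get(output[-1])
        if not options:
            break
        output.append(options.most_common(1)[0][0])  # most probable next word
    return output

print(" ".join(continue_from("the")))  # greedy, most-probable continuation
```

The output can look fluent for a moment, but there is nothing behind it relating the words to a lived situation, which is the gap being argued here.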