Do you see this as a serious existential risk on the level of climate change or nuclear war? — Marchesk
I see those as far more dangerous than the idea of AI being destructive. We might even benefit from AI removing many of the existential threats we have. The problem isn't the AI, the problem is the person programming the AI.
We've lowered the bar for philosophical, moral and intelligent understanding outside of people's work knowledge. Working with AI demands more than just technical and coding abilities, you need to have a deep understanding of complex philosophical and psychological topics, even be creative in thinking about possible scenarios to cover.
At the moment we just have politicians scrambling for laws to regulate AI and coders who gets a hard on for the technology. But few of them actually understands the consequences of certain programming and functions.
If people are to take this tech seriously, then society can't be soft towards who's working with the tech, they need to be the brightest and most philosophically wise people we know of. There's no room for stupid and careless people working with the tech. How to draw the line for that is a hard question, but it's a much easier task than solving the technology itself. The key point though, is to get rid of any people with ideologies about everyone being equal, people who grew up on "a for effort" ideas and similar nonsense. Carelessness comes out of being naive and trivial in mind. People aren't equal, some are wise, some are not and only wise people should be able to work with AI technology.
This tech requires people who are deeply wise and so far there's very few who are.
Do you think it's possible a generalized AI that is cognitively better than all of humanity is on the horizon? — Marchesk
No, not in the sense of a human mind. But so far computers are already cognitively better than humans, your calculator is better than you at math. That doesn't mean it's cognitively better at being a human mind.
We can program an algorithm to take care of many tasks, but an AGI that's self-aware would mean that we can't communicate with it because it wouldn't have any interest in us, it would only have an interest in figuring out its own existence. Without the human component, experience, instincts etc. it wouldn't act as a human, it would act as very alien to us. Therefor it is practically useless for us.
The closest we will get to AGI is an algorithmic AI that combines all the AI systems that we are developing now, but that would never be cognitively better than humans since it's not self-aware.
It would be equal to a very advanced calculator.
do you think it's risky to be massively investing in technologies today which might lead to it tomorrow? — Marchesk
We don't have a global solution to climate change, poverty, economic stability, world wars, nuclear annihilation. The clock is ticking on all of that. AI could potentially be one of the key technologies to aid us in improving the world.
Is it more thoughtful to invest in technologies today that just keeps the current destructive machine going? Instead of focusing on making AI safe and use that technology going forward?
It's also something everyone everywhere in every time has been saying about new technology. About cars, planes, computers, internet etc. In every time when a new technology has come along, there have been scared people who barely understands the technology and who scare mongers the world into doubt. I don't see AI being more dangerous than any of those technologies, as long as people guide the development correctly.
If you don't set out rules on how traffic functions, then of course cars are a menace and dangerous for everyone. Any technological epoch requires intelligent people to guide the development into safe practice, but that is not the same as banning technology out of fear. Which is what most people today have; fear; because of movies, because of religious nonsense, because of basically the fear of the unknown.
If you've seen anything about ChatGPT or Bing Chat Search, you know that people have figured out all sorts of ways to get the chat to generate controversial and even dangerous content, since its training data is the internet. You can certainly get it to act like an angry, insulting online person. — Marchesk
The problem is the black box problem. These models need to be able to backtrack how they arrive at specific answers, otherwise it's impossible to install principles that it can follow.
But generally, what I've found is that the people behind these AI systems doesn't have much intelligence in the field of moral philosophy or they're not very wise at understanding how complex sociological situations play out. If someone doesn't understand how racism actually works, how would they ever be able to program an algorithm to safeguard against such things?
If you just program it to "not say specific racist things", there will always be workarounds from a user who want to screw the system into doing it anyway. The key is to program a counter-algorithm that understand racist concepts in order to spot when these pops up, so that when someone tries to force the AI, it will understand that it's being manipulated into it and warn the user that they're trying to do so, then cut the user off if they continue trying it.
Programming an AI to "understand" concepts requires the people doing the programming to actually understand these topics in the first place. I've rarely heard these people actually have that level of philosophical intelligence, it's most often external people trying their system who points it out and then all the coders scramble together not knowing what they did wrong.
Programmers and tech people are smart, but they're not wise. They need to have wise people guiding their designs. I've met a lot of coders working on similar systems and they're not very bright outside of the tech itself. It only takes a minute of philosophical questioning before they stare into space, not knowing what to say.
Or maybe the real threat is large corporations and governments leveraging these models for their own purposes. — Marchesk
Yes, outside of careless and naive tech gurus, this is the second and maybe even worse threat through AI systems. Before anything develops we should have a massive ban on advanced AI weapons. Anyone who uses advanced AI weapons should be shut down. It shouldn't be like it is now when a nation uses phosphorus weapons and everyone just points their finger saying "bad nation", which does nothing. If a nation uses advanced AI weapons, like AI systems that targets and kills autonomously through different ethnic or key signifiers, that nations should be invaded and shut down immediately, because such systems could escalate dramatically if stupid people program it badly. THAT is an existential threat and nations allowing that needs to be shut down. There's no time to "talk them out of it", it only takes one flip of a switch for a badly programmed AI to start a mass murder. If such systems uses something like hive robotics, it could generate a sort of simple grey goo scenario in which we have a cloud of insect-like hiveminded robots who just massacre everyone they come into contact with. And it wouldn't care about borders.