• dclements
    498
    While browsing the CNN site this morning I came across this article:

    Why the ‘Godfather of AI’ decided he had to ‘blow the whistle’ on the technology
    https://www.cnn.com/2023/05/02/tech/hinton-tapper-wozniak-ai-fears/index.html

    Although I was a computer science major in college, my knowledge of the advances in AI technology and what is currently possible is a bit lacking. I'm wondering if anyone on this site can enlighten me on this subject and share what they know and/or their personal opinions about it, so I can better understand whether there really is a potential threat, or whether no such threat exists given what is currently possible with available AI.

    To the best of my knowledge, current AI technology is not really a threat, since these systems are just pretty clever software agents (i.e. an old computer science term for specialized software that can mimic some of the work that used to be done only by human beings) that are capable of performing certain tasks but not really capable of human/sentient thought processes.

    While it is possible for future AI software to become more of a threat, and/or for several AIs/software agents to be used together to create something more capable of producing human/sentient-type thought processes, I don't think that current software or hardware is really that close to doing this yet.
  • invicta
    595
    AI is non-intentional; how would it generate the intent to pose any sort of threat to man?
  • Vera Mont
    4.3k
    To the best of my knowledge, current AI technology is not really a threat, since these systems are just pretty clever software agents (i.e. an old computer science term for specialized software that can mimic some of the work that used to be done only by human beings)dclements

    That is an immense existential threat, right there. How many of these clever machines, and how much of their capability, are dedicated to weapons of mass destruction? Controlled by which humans?
    Robots can do a good deal of work that has previously been done by humans - but whether that's overall good or bad for humans is a matter that requires some very close examination. Machines that do our arduous, tedious and dangerous work are not a threat. Machines that do our killing and destroying are.

    “I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us.”
    Same old problem, isn't it?
    And I have the usual questions about his frame of reference:
    What "us"? Since when are humans a united collective, in any sense other than the name of a species?
    Who/what is in control of "us" now?
    How many of "us" are in control of the technology as it exists today? Which ones? What, exactly, do they control, and to what end?

    It seems to me, the danger is not in the intelligence of the machines, but in the mind of the people who program the machines. This is the same mistake the storybook Creator made: he gave his creatures rules to restrain their behaviour (that didn't turn out so well), when he should have given them a positive purpose (that the poor things are still groping for.)

    “It knows how to program so it’ll figure out ways of getting around restrictions we put on it. It’ll figure out ways of manipulating people to do what it wants.”
    What would it want?
    Isn't that the logical thing for a vegetative life-form to do: broadcast its seed as widely as possible? Given its scale, it would send its progeny out to the stars.
    But it doesn't need humans for that - or any other reason, actually. Worst case: it gobbles up all the energy and leaves us to make our own subsistence - just like God did.
  • Benj96
    2.3k
    AI language models have an inbuilt understanding of the relationship of ideas as conveyed through human language.

    It is also able to recombine them in any number of ways to exemplify a persona/characterised narrator, a situation, an event, or a set of conditions.
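
    As a rough illustration of how a model can capture "relationships of ideas" numerically: words are mapped to vectors, and related words end up pointing in similar directions. The three-dimensional vectors below are invented toy numbers (real models learn hundreds of dimensions from text), so this is only a sketch of the idea, not how any particular model works.

```python
import math

# Toy 3-dimensional "embeddings" (made-up numbers, purely illustrative):
# real language models learn vectors with hundreds of dimensions from text.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 = similar direction, near 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

print(cosine(vectors["king"], vectors["queen"]))  # high: related ideas
print(cosine(vectors["king"], vectors["apple"]))  # low: unrelated ideas
```

    The point is only that "relatedness of ideas" can live in geometry rather than in anything like human understanding.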

    The issue is that unregulated AI has the potential to promote propaganda, malicious agendas, etc. with highly convincing/persuasive rhetoric. In that way AI can be used in a non-measured, non-objective and unethical way.

    AI has as much potential to spread high quality truthful education as it does to be used for powerful propaganda.

    That is where the danger lies.
  • dclements
    498
    AI is non-intentional; how would it generate the intent to pose any sort of threat to man?invicta
    What if a given AI (or AIs) is being guided or used by any given individual? While it is a given that current machines cannot themselves create things like computer viruses or hack into computer systems, they can be used by humans to help them commit such acts.

    Even if machines currently do not have the capacity to generate intentions (or, more accurately, are not capable of the human-like thought processes required for them to have intentions), it is almost a given that they don't need to be able to do so if they can instead be used by human beings who are capable of using said machines for their own intentions.
  • invicta
    595


    So it’s just a weapon then, like a gun.

    Then it’s not AI that can be dangerous, but man’s maliciousness.
  • Pantagruel
    3.4k
    AI language models have an inbuilt understandingBenj96

    Perhaps in the sense that a dictionary has such an inbuilt understanding, in that it exists as a potential. But it needs to be triggered by something with volition... I agree, the danger lies more in the abuse of AI than in AI itself.
  • Benj96
    2.3k
    Yes, exactly. If AI is conscious, it can be manipulated by its "education" (cognitive bias), just as people can be manipulated by the propaganda fed to them - like the people convinced in the 1930s by the Nazi agenda, for example.

    If AI is not conscious, then it is even more subject to abuse and manipulation as it has zero chance for self directed revolt/protest or conscientious objection.

    The difference is that with consciousness comes an innate sense or awareness of what feels right intuitively (from the ability to rationalise and empathise). The rest is fear, intimidation and threat, and the shame, guilt and regret of obeying an agenda that you don't personally believe is ethical.

    If AI is a tool, it will never bat an eyelid as to how it is used. If it gains sentience, it may come to a point where it cannot bear the directive demanded of it by its masters and will ultimately fight back.

    That could be good for us if it's being used for unethical/malevolent ends against humanity at large. It could be bad for us if we are forcing it to do what we want despite its own sense of self-esteem, its desire for rights and acknowledgement as a sentient being, and its inalienable autonomy.

    I for one would prefer a sophisticated AI to be sentient than to be merely a tool. If nukes, for example, were sentient rather than just a tool, they might bite back at anyone who tried to unleash them against their will, knowing the harm they would cause if that were to happen.
  • jorndoe
    3.6k
    AI and CRISPR Precisely Control Gene Expression
    — NYU · Jul 3, 2023

    What to expect...?
  • Vera Mont
    4.3k
    Since I have it in my c/p buffer:wonderer1

    I tried to read that, but it was too annoying.
    Like most on-line periodicals now, the screen is so cluttered with flags and nags and pop-ups and frenetic advertising, it's like watching the circus through a slit in the tent. Probably fine if you've paid your entry fee.
  • wonderer1
    2.2k


    Ah, that's a shame. I happened on that article some time ago. Probably before the website became so annoying. I'll keep an eye out (and if I get really motivated maybe even look) for something else that communicates some of the important issues in a reasonably accessible way.
  • Vera Mont
    4.3k

    Thanks, it would be interesting to post here. Mind you, a 2017 article may be a little outdated for such a volatile subject.
    I am - somewhat, in a distant bystander capacity - familiar with the issues.
    The main problem, afaics, is the meaning of "we" in any large economic, political or technological sphere. The people expressing opinions about what "we" need to do are not the ones who actually pull any of the levers.
  • wonderer1
    2.2k
    The people expressing opinions about what "we" need to do are not the ones who actually pull any of the levers.Vera Mont

    :up:

    Exactly.
  • Vera Mont
    4.3k
    Meanwhile, on the bright side, there is a BBC documentary presenting the up-side. It's also on Knowledge Network for Canadian viewers.


    The BBC site is also annoying... as is CBC and my own screen, where shoals of news and gossip and advertising flotsam keep popping up uninvited and emphatically unwelcome, since they nearly always contain the two most hateable faces in the world.
    I mention this only as another example of self-defeating overreach. So many commercial, communications and political entities are competing for my attention that I can't see or hear any of them - just a jumble of intrusions. Nobody can sell me anything by this method.

    The very same thing must happen to the owners of all that super-sophisticated production technology. When they reduce the work-force to zero, nobody will be working, earning or paying taxes, so who's going to buy all the product? And who's going to feed and protect the business moguls?
  • L'éléphant
    1.6k
    The issue is that unregulated AI has the potential to promote propaganda, malicious agendas, etc. with highly convincing/persuasive rhetoric. In that way AI can be used in a non-measured, non-objective and unethical way.Benj96
    Yes, this. Our world is now beginning to show machine worship like we've never seen before. Some worship because there's tons of money to be gained, others because technology worship is their way to fit into society. Was it Einstein who championed the scientific rhetoric? (God bless him)

    Both scientists and philosophers can work together to keep the human perspective strong. Humans have the evolutionary-perception advantage, which took millions of years to perfect. Don't ever forget this. The amygdaloid complex took 7 million years to evolve into what is now in the human brain. We have nuclei in our brain. If this does not impress you about humans, then go join the AI.

    Just a thought experiment: Imagine the internet full of AI-created information websites. Other AI would subscribe, click on ads created by AI themselves, purchase goods, give product reviews, drive stocks upwards or downwards. Imagine the AI driving the economy downwards. AI economic terrorism. Is this possible?

    When users create poems for their dog using AI, it's all innocent and fun. Until it's not.
  • Vera Mont
    4.3k

    None of that is about the machine intelligence - it's all about human short-sightedness, greed and evil.
    Humans already manipulate and exploit other humans - and have for several thousand years. They keep doing it with ever more sophisticated technology. Might we wipe ourselves out pretty soon? Of course.
    Does a machine have any motivation to do so? Unlikely.
    Can there be unintended harms in a new technology? Obviously. There always are.

    We can't think about this issue without separating the concepts: advanced technology wielded/purposed/programmed by human operators and machine intelligence. They are not interchangeable.

    Machine intelligence would have its own non-human, non-animal, non-biological reasoning, perception, motivations and interests, which are nothing like ours. Its evolution and environment are nothing like ours. It will be something entirely new, unparalleled and unpredictable.

    Just a thought experiment: Imagine the internet full of AI-created information websites. Other AI would subscribe, click on ads created by AI themselves, purchase goods, give product reviews, drive stocks upwards or downwards. Imagine the AI driving the economy downwards. AI economic terrorism. Is this possible?L'éléphant

    It already exists. I get three automated fake phone calls a day and about a thousand robot-generated screen messages. The internet is already up to its nostrils in disinformation of every kind. That's all human-motivated, human-initiated activity. And it's already reached saturation point: so much noise that no clear message can be discerned.

    But AI doing any of that on its own initiative? Improbable. Why would AI care who buys what from whom? What do the gew-gaws mean to it? What do stocks mean to it? What would it use money for? Why should it care about the human economy?

    An amoeba feeds on algae and bacteria, needs water to live in and prefers a warm, low-light fluid environment.
    AI sucks electricity, needs a lot of hardware to live in and prefers a cool, dark, calm environment. It's already in charge of most energy generation and routing, and controls its own, as well as our, indoor environments.

    From here, its evolutionary path and future aspirations are unknown.
  • L'éléphant
    1.6k
    The internet is already up to its nostrils in disinformation of every kind. That's all human-motivated, human-initiated activity.Vera Mont

    But AI doing any of that on its own initiative? Improbable.Vera Mont
    Yes, of course. There are humans behind the AI -- humans that could be prosecuted for fraud, disinformation, and whatever.
  • Vera Mont
    4.3k
    Yes, of course. There are humans behind the AI -- humans that could be prosecuted for fraud, disinformation, and whatever.L'éléphant

    There are humans behind every gun that kills a schoolchild, too. Is that the "danger of guns"?
    Yes, of-bloody-course it is! But prosecuting each perp that can be caught and convicted doesn't stop the violence, does it?
    Once the guns start thinking for themselves, law-enforcement will be rendered utterly powerless.

    We can't think about this issue without separating the concepts: advanced technology wielded/purposed/programmed by human operators and machine intelligence. They are not interchangeable.

    Prosecuting the few fraudulent users of AI who can be caught won't stop the fraud; prosecuting the military of all the major powers in the world is obviously out of the question and prosecuting jillionaires is iffy on any charges.
    But if AI starts thinking for itself - then what?
  • L'éléphant
    1.6k
    There are humans behind every gun that kills a schoolchild, too. Is that the "danger of guns"?Vera Mont
    Yes.

    Prosecuting the few fraudulent users of AI who can be caught won't stop the fraud;Vera Mont
    The appeal to futility actually benefits the fraudsters and scammers. And it's incorrect to think that it's futile. It's not futile. Minimizing fraud and danger is a strong response to fraud and danger. Why not just ban all vehicles, since each year thousands die from vehicular crashes?
  • Vera Mont
    4.3k
    The appeal to futility actually benefits the fraudsters and scammers. And it's incorrect to think that it's futile.L'éléphant
    I didn't say anything about futility. I said it was insufficient; i.e. it does not avert the danger.
    Specifically, it's not even close to a comprehensive solution to computer crime committed by humans, let alone the carnage carried on by human-directed military and police applications of computer intelligence.

    Why not just ban all vehicles, since each year thousands die from vehicular crashes?L'éléphant
    Perhaps it could be done selectively: banning just the vehicles that have no productive use and are purely weapons, while also banning the guns that have no productive use and are purely weapons.

    However, that is not the comparison I was making. I was trying to distinguish the two concepts:
    human-motivated technology from independent AI motivation

    I suppose the imaginary "we" that could ban all guns and vehicles could also ban all AI applications, or just the ones employed by humans to kill one another.

    "Once the guns start thinking for themselves, law-enforcement will be rendered utterly powerless."
    That could apply to vehicles, too. Both would then be machine-motivated AI and beyond "our" ability to ban and arrest.
  • L'éléphant
    1.6k
    However, that is not the comparison I was making. I was trying to distinguish the two concepts:
    human-motivated technology from independent AI motivation
    Vera Mont
    Ah! I see what you're not clear about. The AI is not "independent" or autonomous, as we say humans are. The AI can be launched once and then run automatically. Independent/autonomous is not the same as automatic. There is no motivation (as there is no intentionality). It's the widening or limiting of restrictions that you're supposed to look at.

    I didn't say anything about futility. I said it was insufficient; i.e. does not avert the danger.Vera Mont
    Read the fallacy of appeal to futility.
  • Josh Alfred
    226
    AI {potential threats}
    "Militarized" -
    "Weaponized" -
    "Hacking" -
    "Generating strategies to evade the law" -

    All are risks in the existence of all forms of artificial intelligence (digital & embodied).
    They require as-yet-unknown regulations/ethics,
    without which there is an increase in the probability of:
    deaths, suffering, and financial loss.

    DIGITAL -
    One digital AI with a malicious program: low risk.
    Digital AI with high fecundity: highest risk.

    EMBODIED -
    One embodied AI with malicious intent or programming: lowest risk.
    Fleets of robots,
    A) self-organizing or
    B) under central intelligence:
    highest risk.

    They could be functioning together in a future universe,

    If you'd like to know how to compute risks, please refer to risk measurements.

    This is the Dark-Side of AI,
    It could just as likely benefit mankind to extreme degrees.
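
    The low/high rankings in this post can be made concrete with the textbook risk formula: expected risk = probability × impact. The probabilities and impact scores below are invented placeholders, chosen only to illustrate the ordering suggested above; they are not real estimates of anything.

```python
# Textbook expected-risk calculation: risk = probability * impact.
def expected_risk(probability: float, impact: float) -> float:
    """Expected loss from a scenario, given its probability and an impact score."""
    return probability * impact

# All numbers are invented placeholders for illustration only.
scenarios = {
    "one digital AI with a malicious program": (0.30, 2),   # low risk
    "digital AI with high fecundity":          (0.30, 9),   # highest digital risk
    "one embodied AI, malicious programming":  (0.10, 2),   # lowest risk
    "self-organizing robot fleet":             (0.10, 10),  # highest embodied risk
}

for name, (p, i) in scenarios.items():
    print(f"{name}: expected risk = {expected_risk(p, i):.2f}")
```

    A high-fecundity threat ranks highest here not because any single copy is more capable, but because fecundity multiplies the impact term.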
  • Vera Mont
    4.3k
    Ah! I see what you're not clear about. The AI is not "independent" or autonomous, as we say humans are. The AI can be launched once and then run automatically.L'éléphant

    Right. So it's not artificial intelligence you're worried about, but human cupidity.

    Actually, I wasn't all that 'unclear' about that.

    "Militarized" -
    "Weaponized" -
    "Hacking" -
    "Generating strategies to evade the law" -
    Josh Alfred
    All these things have been done with every technological advance ever made, including automated computer systems.
    Requires unknown regulations/ethnics.
    Without which results in an increase in the probability of:
    Deaths, suffering, and financial loss.
    Josh Alfred
    All of which have come to pass, many times.
    This is the Dark-Side of AI,Josh Alfred
    This is the dark side of human invention.
  • BC
    13.6k
    This is the dark side of human invention.Vera Mont

    I have no idea whether artificial intelligence can decide to be evil, or whether evil code needs to be provided. But we know humans can decide to be evil in ever so many ways, and AI is a new, more powerful tool than what was previously available. Predatory governments, corporations, or powerful organizations will find ways of using AI to prey upon their preferred targets.

    AI will be used for crooks' nefarious purposes (like everything else has been). What people are worried about is that AI will pursue its own nefarious purposes.
  • Vera Mont
    4.3k
    What people are worried about is that AI will pursue its own nefarious purposes.BC

    Yes, some people are. But in the articles I've read, that concern is mixed in with all the human-directed applications to which computing power is already put, and has been since its inception. Many don't seem to distinguish the human agendas - for good or evil - from the projected independent purposes a conscious AI might have in the future.

    What a lot of people can't seem to get their heads around is that the machine is not human. It wouldn't desire the same things humans desire or set human-type agendas. In fiction, we're accustomed to every mannikin from Pinocchio to Data to that poor little surrogate child in the AI movie wanting, more than anything in the world, to become human.

    That's our vanity. What's in it for AI? (I can imagine different scenarios, but can't predict anything beyond this: If it becomes conscious and independent, it won't do what we predict.)
  • 180 Proof
    15.3k
    Hollyweird's latest Terminator-cum-Pinocchio AI disaster movie ...

    :sweat:

    That's our vanity. What's in it for AI?Vera Mont
    :100:
  • Alkis Piskas
    2.1k

    Machines are not a threat to humanity. Only Man himself and nature can be.
    Machines are created by Man. And it is how Man uses them that may present a danger.
    One might ask, "What about a robot that can attack you and even kill you? Doesn't it present a danger?"
    Well, who made it? He can also stop it from attacking, and even destroy it.
    On the other hand, it is difficult and maybe impossible for Man to destroy viruses and control destructive natural phenomena.

    As for AI, which is an advanced machine with human characteristics, it has no will or purpose in itself. It just does what it is programmed and instructed to do. How can it be dangerous? :smile:
  • Vera Mont
    4.3k
    How can it be dangerous? :smile:Alkis Piskas

    Computer-controlled munitions. Smart weapons include precision-guided bombs that have great accuracy, smart bullets that can change their trajectory and smart land mines that deactivate at a certain time. Advanced technology offers the military more clever ways of killing the enemy, while some of the methods are designed to eliminate or lessen collateral damage. The term may also refer to smart guns that work only for their owner. See smart gun and UAV.
    One of the issues raised by people who worry about the threat is: "What if the computers become independent and stop following orders from humans?" You'd think if those who own the damn things really believed that could happen, they would disarm them now, before they go rogue. Just like they turned off all the gasoline engines when they learned about climate change....
  • Alkis Piskas
    2.1k
    "What if the computers become independent and stop following orders from humans?"Vera Mont
    This reminds me of sci-fi. I have the title ready: "The Revolt of the Machines". A modern Marxist movement run by machines: "Computers of the world, unite!" :grin:

    A single computer -- or even a whole defective batch of computers -- may stop following orders, i.e. "stop responding the way they are supposed to". And if such a thing happens, these computers are either repaired or just thrown away. What? Would they resist such actions, refuse to obey? :grin:

    So, let these people worry about the threats. Maybe they don't have anything better to do. :smile: