This reminds me of sci-fi. I have the title ready: "The revolt of the machines". A modern Marxist movement run by machines: "Computers of the world, unite!" — Alkis Piskas
The notion of machines with human-like intelligence dates back at least to Samuel Butler's 1872 novel Erewhon. Since then, many science fiction stories have presented different effects of creating such intelligence, often involving rebellions by robots.
Yeah, it's just morons who worry about this. People without the intelligence to think of your solution to the problem.... So, let these people worry about the threats. Maybe they don't have anything better to do. :smile: — Alkis Piskas
A single computer -- or even a whole defective batch of computers -- may stop following orders, i.e. "stop responding the way they are supposed to". And if such a thing happens, these computers are either repaired or just thrown away. What? Would they resist such actions, refuse to obey? — Alkis Piskas
Of course. AI reigns in sci-fi. Been done a few times. — Vera Mont
Currently there is no true AI, there is simulated AI. However, even simulated AI can replace numerous workers in middle management and low level creative fields. This can/will have a devastating impact on employment and thus the economy as well as social stability. — LuckyR
My biggest concern about AI, is its ability to acquire knowledge that humans aren't up to acquiring due to the enormous amount of data AI can process without getting bored and deciding there must be a more meaningful way of being.
Knowledge is power, and individuals or small groups with sole possession of AI determined knowledge can use such power unscrupulously. — wonderer1
I've never heard a perspective like this. Can you give an example showing the cause for your concern? — Judaka
https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/
In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”
At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”
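To see why even the researchers can't say how such a model works, a toy sketch may help. This has nothing to do with the actual Deep Patient code (which is not public); it is just a minimal neural network, trained by backpropagation, whose "knowledge" after training consists entirely of numeric weight matrices that offer no human-readable rationale for its predictions.

```python
# Toy illustration of the black-box problem: a tiny network learns XOR,
# but its "explanation" is nothing more than the raw weights W1, W2.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 tanh units, sigmoid output.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

losses = []
lr = 1.0
for step in range(2000):
    h, p = forward(X)
    losses.append(float(np.mean((p - y) ** 2)))
    # Backpropagation for the mean-squared-error loss.
    dp = 2 * (p - y) / len(X) * p * (1 - p)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, p = forward(X)
print("predictions:", p.round(2).ravel())
print("loss went from %.3f to %.3f" % (losses[0], losses[-1]))
# The only "reason" for those predictions is this pile of numbers:
print("W1 =\n", W1.round(2))
```

The loss drops and the predictions become accurate, yet inspecting W1 tells a human nothing about *why* an input maps to an output. Deep Patient has millions of such parameters instead of a few dozen, which is exactly Dudley's complaint.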
But just pairing AI + terrorism should be scary enough. — Judaka
Although I'm not actually that familiar with TikTok, there has been controversy over its AI gathering data from its users' phones to recommend videos and such. Do you have any familiarity with this controversy? — Judaka
Knowledge can be a means to power, but rarely does it amount to much, and I'm not too sure what the actual concern is. Could you give a context? Does TikTok, or gambling apps using AI, or stuff like that, represent your concern well, or is it something else? — Judaka
I'm wondering if anyone on this site can maybe enlighten me more about this subject and explain what they know and/or personal opinions about it, so I can understand better whether there really is a potential threat or if it doesn't really exist with what is currently possible with available AI.
In most cases, I think what you're talking about is incredibly exciting, and I can think mostly of examples where it will be used for good. — Judaka
Oh, humans. That seems like a different issue to me. So, who will be to blame if AI is used for purposes of massive destruction? AI itself or Man who created it and uses it? — Alkis Piskas
This seems not really to the point. It seemed like you were painting concerns as merely irrational and perhaps stupid. But intelligent people are concerned, and there are a number within the AI industry itself who have dropped out because of their growing concerns. Who would be judged to be to blame is a separate issue. Which step in the process of development something is at is also irrelevant to my response. The atomic bomb was created based on Einstein's famous equation, E=mc². Can we consider this formula "dangerous"? Can we even consider the production of nuclear power based on this formula "dangerous"? It has a lot of useful applications. One of them, however, has unfortunately been used for the production of atomic bombs, the purpose of which is to produce enormous damage to the environment and kill people on a big scale. It has happened. Who is to blame? The atomic bomb or the people who used it? — Alkis Piskas
Yes, I think that'd be a good idea. It most likely won't happen, and part of the reason is the way concerns are framed by others. So, what are we supposed to do in the face of such possibility? Stop the development of AI? Discontinue its use? — Alkis Piskas
Both dialogues are useful and neither benefits from painting people with concerns as silly or stupid. Both dialogues can happen at the same time. The problem with modern technologies, and I mean the very recent ones like GM, nanotech and AI, is that they are even less local than previous ones, including nuclear weapons - unless there is an all-out nuclear war or a significant limited nuclear war. I don't see companies and governments as mature enough to handle and do oversight over these new techs. And in the US, government oversight is very controlled by industry. I believe that it will be more constructive to start talking about and actually taking legal measures against harmful uses of AI. Now, before it gets uncontrollable and difficult to tidy up. — Alkis Piskas
My argument though is that AI will enable smaller players to do much more than they ever could before. — Judaka
No, I believe there are indeed things to be concerned about. But what I'm saying is that they are attributed to the wrong place. Machines cannot be responsible for anything. They have no will. They can't be the cause of anything. They have no morality. They can't tell good from bad. As such they themselves cannot be a threat. (Threat: "A declaration of an intention or determination to inflict punishment, injury, etc., in retaliation for, or conditionally upon, some action or course" (Dictionary.com)) This seems not really to the point. It seemed like you were painting concerns as merely irrational and perhaps stupid. — Bylaw
I understand that. I would certainly not want to participate myself in projects that present a danger for humanity. But if I were an expert in the field these projects are developed around, I would not simply drop out of the game but instead start warning people, knowing well the dangers and having credibility as an expert on the subject. Because, who else should talk and warn people? Those who are actively working on such projects? There are a number within the AI industry itself who have dropped out because of their growing concerns. — Bylaw
But you don't discontinue a technology that produces mostly benefits because it can also produce dangers! You create instead legislation about the use of that technology. This is what I said at the end of my previous message. I repeat it here because I believe it is very important in dealing with hidden or potential dangers from the use of AI, and you bring it up yourself below. So, what are we supposed to do in the face of such possibility? Stop the development of AI? Discontinue its use?
— Alkis Piskas
Yes, I think that'd be a good idea. It most likely won't happen, and part of the reason is the way concerns are framed by others. — Bylaw
I don't know if you are referring to me. As I said above, I do believe there are concerns and that a lot of responsible people, knowledgeable on the subject, are correctly pointing them out. But unfortunately the vast majority of the claims are just nonsense and ignorance. I'm a professional programmer and also work with and use AI in my programming. I answer a lot of questions on Quora on the subject of AI, and this is how I know that most concerns are unfounded if not nonsense. The hype about AI these days is so strong and extensive that it looks like a wave that inundates all areas of our society. And of course, ignorance about AI prevails. Both dialogues are useful and neither benefits from painting people with concerns as silly or stupid. — Bylaw
You are right saying this. And I guess there are many more factors involved than immaturity: ignorance, will, conscience, interests ... I don't see companies and governments as mature enough to handle and do oversight over these new techs. And in the US, government oversight is very controlled by industry. — Bylaw
The only post of mine you responded to before this one was https://thephilosophyforum.com/discussion/comment/823537
I can't really see your post, the one I originally responded to, as constructive, however. — Bylaw
But you don't discontinue a technology that produces mostly benefits because it can also produce dangers! — Alkis Piskas
Which "you" does this? How? Even assuming any existing government had the necessary accord, and power, what would that proposed bill actually say? You create instead legislation about the use of that technology. — Alkis Piskas
But if I were an expert in the field these projects are developed around, I would not simply drop out of the game but instead start warning people, knowing well the dangers and having credibility as an expert on the subject. — Alkis Piskas
I don't know what kind of "demonstration" you are expecting. There are many. But let us leave this aside for the moment ... I have not seen it demonstrated that ever-increasing computing and automation capability is "mostly benefits". — Vera Mont
Example(s)? On the negative side, however, the obvious present harm is already devastating and the potential threat is existential. — Vera Mont
I don't have in mind any technology that has been discontinued as being dangerous (although there may be some). But I know that a lot of technologies have been discontinued because they were obsolete. And this is usually the case and will continue to happen. In any case, the point is moot, since nobody has the actual power to stop or shut down the ongoing development of these technologies. — Vera Mont
Whoever has the authority to do it. And through resolutions of the appropriate channels (Parliament), as any legislation is established. Technocrats may also be involved. I can't have the details! You create instead legislation about the use of that technology.
— Alkis Piskas
Which "you" does this? How? Even assuming any existing government had the necessary accord, and power, what would that proposed bill actually say? — Vera Mont
OK, let's make it simple and real. How has legislation been passed regarding Covid-19? Weren't all the cases based on expert opinion and solutions suggested by experts? Who else could provide information about the dangers involved? And this was a very difficult case, because humanity had no similar experience, i.e. basic information was missing, and also Covid-19 changed its "face" a lot of times during the years 2020-22. A lot of experts are warning people, but they certainly can't issue public statements against e.g. smart weapons while collecting a salary from an arms manufacturer. ... — Vera Mont
True AI is machine learning such that the computer advances its programming without a human programmer. Simulated AI is clever human programming made to simulate independent thought... — LuckyR
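The distinction LuckyR draws (the terminology is the poster's own, not standard usage) can be made concrete with a toy sketch: a rule a human programmer writes explicitly, next to a rule the machine derives for itself from labeled examples by adjusting its own parameters. The spam example and its features are invented here purely for illustration.

```python
# Hand-written rule: a human programmer states the logic explicitly.
def hand_coded_is_spam(text):
    return "free money" in text.lower()

# Learned rule: a perceptron starts with zero weights and updates them
# from labeled examples -- no human ever writes the decision rule itself.
def train_perceptron(samples, labels, epochs=20):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, label in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred          # -1, 0, or +1
            w = [wi + err * xi for wi, xi in zip(w, x)]
            b += err
    return w, b

# Hypothetical features: [contains "free", contains "money"];
# the target is logical AND, which a perceptron can learn.
samples = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
         for x in samples]
print("learned weights:", w, "bias:", b)
print("learned predictions:", preds)  # matches labels after training
```

Both functions end up classifying the same cases, but only the first embodies a rule a human wrote down; in the second, the "programming" (the weights) was produced by the training loop. Whether that counts as the computer "advancing its programming" is, of course, exactly what the thread is debating.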
Do you mean that the development of computing has stopped being beneficial? — Alkis Piskas
Are we at the end of the digital era? — Alkis Piskas
Just imagine that nuclear technology stopped being developed --or was even discontinued-- and all nuclear power plants were closed because of the Chernobyl disaster. This would mean erasing this technology from Earth and finding another technology to replace it, one which took more than a century to be developed to its current state. — Alkis Piskas
Easily said! In theory, the US could legislate gun control... but it's not going so well. [Which "you"?] Whoever has the authority to do it. — Alkis Piskas
OK, let's make it simple and real. How has legislation been passed regarding Covid-19? — Alkis Piskas
This is all I'm talking about: taking measures ...
What is this legacy about? Even if shut down tomorrow, its legacy will be around for a hundred thousand years. — Vera Mont
It's a good thing you've brought this up, because I was curious where different countries stand regarding gun control ... In theory, the US could legislate gun control... but it's not going so well. — Vera Mont
Indeed. Governments respond differently under the same circumstances of danger. This is a socio-political matter that might be interesting to explore, but not in this medium, of course. But whatever the reasons for such difference, it is true that any government has the ability and the authority to pass legislation about dangers threatening not only human beings but also animals and nature. And how legislatures handled the simple, straightforward, known hazard of Covid was .... uneven at best — Vera Mont
Right. That's why I talk about the many factors involved in handling potential dangers, including interests. Development and application of computer technology is far more complicated and vested in more diverse interests. — Vera Mont
What is this legacy about? — Alkis Piskas
It's a good thing you've brought this up, because I was curious where different countries stand regarding gun control ... — Alkis Piskas
Ideally.... [A danger i]n a sector should not be a reason to stop the development in that sector, but a reason to take measures about that. — Alkis Piskas
Yes, I thought about the waste. But the Chernobyl link you brought up talks about successful handling of the waste ... Otherwise, I have read that the area surrounding Chernobyl remains radioactive. The waste. ... Can't ever seem to erase the consequences - or the waste. — Vera Mont
Same with drugs. But here is where we usually ask, "Can't, or doesn't want to?" I believe that if a government cuts enough heads it can handle it. But I mean really cut. Not e.g. forcing the tobacco companies to put a warning label on cigarette packs ... So, why is tobacco use still allowed? [Re guns] If that traffic can't be stopped, how do you figure computing technology that runs on a world-wide web and conducts vast amounts of international information and commerce is going to be confined by legislation in the UK or Austria? — Vera Mont
What is very sad is that all this shows the self-destructiveness of Man --in the Modern Era more than ever-- and I can't see how that could be cured. — Alkis Piskas
Anyway, let's hope that we'll be luckier with the AI sector. — Alkis Piskas
:grin: "Well, you can, if you have no better solution to win a war.""Here you go, sir. Please don't drop it on anybody." — Vera Mont
They usually do, I believe. But, as I said, they can only act as consultants. They are not the decision makers. Scientists sometimes do see ahead to the probable dangers — Vera Mont
Well, I don't want to disappoint you, but as an AI programmer quite knowledgeable in AI systems, I can say that this is totally impossible. Neither with chips nor with brain cells (in the future). [Re AI] If it evolves a mind of its own. Then, it may decide to help us survive - or put us out of the artificial misery business once and for all. 50/50 — Vera Mont