• Vera Mont
    4.3k
    This reminds me of sci-fi. I have the title ready: "The revolt of the machines". A modern Marxist movement run by machines: "Computers of the world, unite!"Alkis Piskas

    Been done a few times
    The notion of machines with human-like intelligence dates back at least to Samuel Butler's 1872 novel Erewhon. Since then, many science fiction stories have presented different effects of creating such intelligence, often involving rebellions by robots.

    https://best-sci-fi-books.com/24-best-artificial-intelligence-science-fiction-books/
  • Bylaw
    559
    So, let these people worry about the threats. Maybe they don't have anything better to do. :smile:Alkis Piskas
    Yeah, it's just morons who worry about this. People without the intelligence to think of your solution to the problem....
    A single computer -- or even a whole defective batch of computers-- may stop following orders, i.e. "stop responding the way they are supposed to". And if such a thing happens, these computers are either repaired or just thrown away. What? Would they resist such actions, refuse to obey?Alkis Piskas

    https://www.theguardian.com/technology/2023/may/02/geoffrey-hinton-godfather-of-ai-quits-google-warns-dangers-of-machine-learning#:~:text=Dr%20Geoffrey%20Hinton%2C%20who%20with,his%20contribution%20to%20the%20field.
    https://www.bbc.com/news/technology-30290540
    https://www.npr.org/2023/05/30/1178943163/ai-risk-extinction-chatgpt#:~:text=Newsletters-,Leading%20experts%20warn%20of%20a%20risk%20of%20extinction%20from%20AI,address%20the%20threats%20they%20pose.

    Those people are so silly for missing the 'unplug and throw away the computers' solution that I have to add an :grin: myself.
  • Alkis Piskas
    2.1k
    Been done a few timesVera Mont
    Of course. AI reigns in sci-fi.
    I checked the titles and stories at the link you brought up ... The Marxist movement is a new idea! :smile:
  • LuckyR
    501
    Currently there is no true AI, there is simulated AI. However, even simulated AI can replace numerous workers in middle management and low level creative fields. This can/will have a devastating impact on employment and thus the economy as well as social stability.

    As to future true AI, the way it becomes dangerous in Sci Fi stories isn't the AI itself, but rather that humans abdicate their authority (and thus power) to computers. Human psychology being what it is, I'm not too worried about that. Besides, putting a computer in charge of the nuclear launch codes doesn't seem dramatically more risky than having them under the control of certain recent controllers of them...
  • Vera Mont
    4.3k
    Currently there is no true AI, there is simulated AI. However, even simulated AI can replace numerous workers in middle management and low level creative fields. This can/will have a devastating impact on employment and thus the economy as well as social stability.LuckyR

    Yes - we've been through all that upheaval with each revolutionary technology. It will keep repeating so long as the economy runs on profit. Once enough people can't earn money to tax and spend, the owners of the machines won't be able to make a profit and governments won't have any revenue. At that point, the entire monetary system collapses, the social structure implodes, there's bloodshed in the streets and eventually the survivors have to invent some other kind of economy. ... possibly controlled by a logical, calculating, forward-planning computer that has nothing to gain by exploiting people.
  • wonderer1
    2.2k
    Currently there is no true AI, there is simulated AI. However, even simulated AI can replace numerous workers in middle management and low level creative fields. This can/will have a devastating impact on employment and thus the economy as well as social stability.LuckyR

    What do you see as the distinction between "true AI" and "simulated AI"?

    My biggest concern about AI is its ability to acquire knowledge that humans aren't up to acquiring due to the enormous amount of data AI can process without getting bored and deciding there must be a more meaningful way of being.

    Knowledge is power, and individuals or small groups with sole possession of AI-determined knowledge can use such power unscrupulously.
  • Alkis Piskas
    2.1k

    Geoffrey Hinton (first link) looks ghostly and terrified in this photo. Maybe he's been threatened or he fears he will be attacked by AI bots. :grin:
    (A bad joke to make about a famous and respectable person like him. But I couldn't help it. It's the climate produced by this subject, you see.)

    As for Stephen Hawking's warning that artificial intelligence could end mankind, I know; I have read about that.
    Well, it is easy to say, and even to argue and prove, that guns, nuclear power, etc. are in general "dangerous". But we usually mean that in a figurative way. What we actually mean is that these things can be used in a dangerous way. And if we mean it in a strict sense, then we forget the missing link: the human factor. The only one responsible for the dangers technology presents.

    Unless what threatens mankind is something independent of us, too powerful, uncontrollable and invincible --an attack by aliens, a natural catastrophe, a huge meteorite or even an invincible virus-- what we have to worry about and take measures against is its use by humans.

    The atomic bomb was created based on Einstein's famous equation, E=mc². Can we consider this formula "dangerous"? Can we even consider the production of nuclear power based on this formula "dangerous"? It has a lot of useful applications. One of them, however, has unfortunately been used for the production of atomic bombs, the purpose of which is to produce enormous damage to the environment and kill people on a large scale. It has happened. Who is to blame? The atomic bomb or the people who used it?

    So, who will be to blame if AI is used for purposes of massive destruction? AI itself, or Man, who created it and uses it?

    So, what are we supposed to do in the face of such a possibility? Stop the development of AI? Discontinue its use?

    I believe that it will be more constructive to start talking about and actually taking legal measures against harmful uses of AI. Now, before it gets uncontrollable and difficult to tidy up.
  • Judaka
    1.7k

    I don't subscribe to the fears about AI outside the context of automation, but the automatic distinction made earlier is significant in understanding the argument, at least by some. Once an AI has been given an order, it no longer requires any further inputs from a user to continue doing whatever it's doing. Thus, if it interpreted an order as requiring hostile actions to be taken against humans, then it would be on the same path that human-like ambition would set it on.

    While an algorithm is the same in that respect, the threat of AI is that, well, it's AI, and the concern comes from the speed at which its capabilities are growing rather than from any capability it has now.

    You're right that people are senselessly conflating intelligence with human psychology.

    Also, AI, no matter how intelligent, isn't a threat in the way some of those concerned are fearmongering about, without access to some form of military power. AI world domination plan:

    1. Be smarter than humans
    2. ???
    3. World conquest complete

    AI is dangerous in the context of neoliberal capitalism and automation, and all of this fearmongering about AI world domination is a convenient distraction.

    Putting aside world domination, AI could pose serious threats, but the context is AI doing this of its own accord, and that's not a concern for me. But just pairing AI + terrorism should be scary enough. AI will rely on human intention for its wrongdoing, but that thought isn't at all comforting. :yum:

    My biggest concern about AI is its ability to acquire knowledge that humans aren't up to acquiring due to the enormous amount of data AI can process without getting bored and deciding there must be a more meaningful way of being.

    Knowledge is power, and individuals or small groups with sole possession of AI-determined knowledge can use such power unscrupulously.
    wonderer1

    I've never heard a perspective like this. Can you give an example showing the cause for your concern?
  • wonderer1
    2.2k
    I've never heard a perspective like this. Can you give an example showing the cause for your concern?Judaka

    I don't know of any cases of modern AI having been used nefariously. So if that is what you are asking for then no.

    I can give you an illustrative excerpt, to convey the sort of 'superhuman' pattern recognition that I am concerned about:

    In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

    At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”
    https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/
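    To make the "black box" point concrete, here is a minimal, purely illustrative sketch in Python (scikit-learn on synthetic, invented patient features -- not Deep Patient's actual code or data, which are not available here). It only shows how a model of this kind can predict well while offering no rationale for any individual prediction:

    ```python
    # Illustrative sketch: a small "black box" disease predictor trained on
    # synthetic tabular patient data. Not Deep Patient; hypothetical features.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Synthetic "patient records": 5,000 patients, 50 numeric features
    # (lab results, visit counts, etc. -- all invented).
    X = rng.normal(size=(5000, 50))

    # Hidden rule the model must discover: risk depends on a nonlinear
    # combination of a few features, plus noise.
    risk = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(scale=0.5, size=5000)
    y = (risk > np.quantile(risk, 0.8)).astype(int)  # top 20% labelled "developed disease"

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    model.fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"Test AUC: {auc:.2f}")
    # The model can predict well, but nothing in it explains *why* a given
    # patient is flagged -- the interpretability gap the article describes.
    ```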
  • Judaka
    1.7k

    Although I'm not actually that familiar with TikTok, there has been controversy over its AI gathering data from its users' phones to recommend videos and such. Do you have any familiarity with this controversy?

    Knowledge can be a means to power, but rarely does it amount to much, and I'm not too sure what the actual concern is. Could you give a context? Does TikTok, or gambling apps using AI, or stuff like that, represent your concern well, or is it something else?
  • LuckyR
    501


    True AI is machine learning such that the computer advances its programming without a human programmer. Simulated AI is clever human programming made to simulate independent thought, specifically designed to fool humans into thinking the product is of human origin.

    Current conventional computers analyze data. Interpreting that analysis is currently the domain of humans. Say AI takes over that role and is better at it than humans. As I see it, there is a limit to how much "better" AI can be than humans. If human analysis is 85% of optimal, the very best AI can only improve on humans by 15 percentage points. Not too earth-shattering by my estimation.
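    To spell out the arithmetic above (an illustration of LuckyR's hypothetical figures, not a claim about any real benchmark): if human analysis already reaches 85% of optimal, the remaining headroom is 15 percentage points, which works out to roughly an 18% relative improvement at best.

    ```python
    # Illustrative arithmetic for the hypothetical "85% of optimal" figure above.
    human_fraction_of_optimal = 0.85
    headroom = 1.0 - human_fraction_of_optimal            # absolute headroom: 0.15
    relative_gain = headroom / human_fraction_of_optimal  # ~0.176, i.e. at most ~18% better than humans

    print(f"Absolute headroom: {headroom:.2f}")
    print(f"Maximum relative improvement over humans: {relative_gain:.1%}")
    ```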
  • Vera Mont
    4.3k
    But just pairing AI + terrorism should be scary enough.Judaka

    How do you feel about state terrorism? Russia has this military technology. So do both Koreas, Israel, Turkey, the UK, China and the USA - I wonder who the next president will be and by what means.
    It really couldn't be any more dangerous than it already is. Indeed, the only two ways it could become less dangerous would be 1. if humans suddenly acquired common sense or 2. if AI took over control on its own initiative. If option 2, the outcome of a reasoned decision could be: a. to dismantle all those weapons and recycle whatever components can be salvaged into beneficial applications or b. to wipe out this troublesome H. sapiens once and for all and give the raccoons a chance to build a civilization.
  • wonderer1
    2.2k
    Although I'm not actually that familiar with TikTok, there has been controversy over its AI gathering data from its users' phones to recommend videos and such. Do you have any familiarity with this controversy?Judaka

    I'm afraid I don't know much about TikTok.

    Knowledge can be a means to power, but rarely does it amount to much, and I'm not too sure what the actual concern is. Could you give a context? Does TikTok, or gambling apps using AI, or stuff like that, represent your concern well, or is it something else?Judaka

    I disagree about the power of knowledge rarely amounting to much. The colonization of much of the world by relatively small European nations is something I see as having been a function of knowledge conferring power. The knowledge of how to make a nuke has conferred power since WWII. Trump's knowledge of how to manipulate the thinking of wide swaths of the US populace...

    In the case of knowledge coming from AI, it is not that there is anything specific I am concerned about, so much as that I am concerned about AI's ability to yield totally surprising results, e.g. recognizing factors relevant to predicting the development of schizophrenia.

    As an example nightmare scenario, suppose an AI was trained on statements by manipulative bullshit artists like Trump, as well as the statements of those who drank the kool-aid and those who didn't. Perhaps such training would result in the AI recognizing ways to be an order of magnitude more effective at manipulating people's thinking than Trump is.
  • kudos
    407
    I'm wondering if anyone on this site can maybe enlighten me more about this subject and explain what they know and/or their personal opinions about it, so I can understand better whether there really is a potential threat or whether it doesn't really exist, given what is currently possible with available AI.

    The 'I' in AI, as others in this thread have noted, is disputable. What is this quality we are calling 'intelligence'? After all, each time we say it, don't we associate the idea more and more with a certain form? As in Francis Bacon's work on learning, human knowledge is more than the sum of mere computations. We have to ask ourselves: what does it really contribute to knowledge and intelligence to develop the idea that computation based on past forms is the sum of intelligence itself?
  • Judaka
    1.7k

    I imagine AI will make state terrorism more potent than it has ever been, and it will make totalitarian states better at being totalitarian; we're already seeing that in China, which pairs AI technology with the social credit system to monitor citizen behaviour and ensure compliance with the regime's goals.

    My argument though is that AI will enable smaller players to do much more than they ever could before. A group that previously lacked technical know-how and expertise, and that didn't have the resources to pull off big operations, will gain those capabilities from AI. It has the potential to be a tremendous boon to any group, and unlike most advanced military technology, accessibility won't be an issue.


    I thought the context was small groups and individuals, but regardless, I agree that knowledge can manifest as power, just rarely, in comparison to all the things one can know about.

    In most cases, I think what you're talking about is incredibly exciting, and I can think mostly of examples where it will be used for good.

    The propaganda and misinformation aspect is an interesting one, I'm not sure to what extent AI can excel at something like this, but I agree, it is concerning.
  • wonderer1
    2.2k
    In most cases, I think what you're talking about is incredibly exciting, and I can think mostly of examples where it will be used for good.Judaka

    Indeed. It has incredible potential for being beneficial, and it is proving itself very beneficial in science and medicine right now.

    I talk about the subject because I see it as one that is important for humanity to become more informed about, in order to be better prepared to make wise decisions about it.
  • Bylaw
    559
    So, who will be to blame if AI is used for purposes of massive destruction? AI itself, or Man, who created it and uses it?Alkis Piskas
    Oh, humans. That seems like a different issue to me.
    The atomic bomb was created based on Einstein's famous equation, E=mc². Can we consider this formula "dangerous"? Can we even consider the production of nuclear power based on this formula "dangerous"? It has a lot of useful applications. One of them, however, has unfortunately been used for the production of atomic bombs, the purpose of which is to produce enormous damage to the environment and kill people on a large scale. It has happened. Who is to blame? The atomic bomb or the people who used it?Alkis Piskas
    This seems not really to the point. It seemed like you were painting concerns as merely irrational and perhaps stupid. But intelligent people are concerned and there are a number within the AI industry itself who have dropped out because of their growing concerns. Who would be judged to be to blame is a separate issue. The step in the process of developing something at which the danger enters is also irrelevant to my response.
    So, what are we supposed to do in the face of such a possibility? Stop the development of AI? Discontinue its use?Alkis Piskas
    Yes, I think that'd be a good idea. It most likely won't happen, and part of the reason is the way concerns are framed by others.
    I believe that it will be more constructive to start talking about and actually taking legal measures against harmful uses of AI. Now, before it gets uncontrollable and difficult to tidy up.Alkis Piskas
    Both dialogues are useful and neither benefits from painting people with concerns as silly or stupid. Both dialogues can happen at the same time. The problem with modern technologies --and I mean the very recent ones like GM, nanotech and AI-- is that they are even less local than previous ones, including nuclear weapons, unless there is an all-out nuclear war or a significant limited one. I don't see companies and governments as mature enough to handle and do oversight over these new techs. And in the US, government oversight is very controlled by industry.

    I can't really see your post, the one I originally responded to, as constructive, however. But it's good to know constructive processes are ones you value. A little probing brought that out.
  • Vera Mont
    4.3k
    My argument though is that AI will enable smaller players to do much more than they ever could before.Judaka

    Of course, the way explosives did - and every advance in technology. Whatever weapon comes in a portable, inexpensive form changes the odds in warfare. That's already in process and I doubt we're in any position to alter the course of events. All these dire warnings are a century too late.
  • Alkis Piskas
    2.1k
    This seems not really to the point. It seemed like you were painting concerns as merely irrational and perhaps stupid.Bylaw
    No, I believe there are indeed things to be concerned about. But what I'm saying is that they are attributed to the wrong place. Machines cannot be responsible for anything. They have no will. They can't be the cause of anything. They have no morality. They can't tell good from bad. As such they themselves cannot be a threat. (Threat: "A declaration of an intention or determination to inflict punishment, injury, etc., in retaliation for, or conditionally upon, some action or course" (Dictionary.com))

    there are a number within the AI industry itself who have dropped out because of their growing concerns.Bylaw
    I understand that. I would certainly not want to participate myself in projects that present a danger for humanity. But if I were an expert in the field these projects are developed around, I would not simply drop out of the game but instead start warning people, knowing well the dangers and having credibility as an expert on the subject. Because, who else should talk and warn people? Those who are actively working on such projects?

    So, what are we supposed to do in the face of such a possibility? Stop the development of AI? Discontinue its use?
    — Alkis Piskas
    Yes, I think that'd be a good idea. It most likely won't happen, and part of the reason is the way concerns are framed by others.
    Bylaw
    But you don't discontinue a technology that produces mostly benefits because it can also produce dangers! You instead create legislation about the use of that technology. This is what I said at the end of my previous message. I repeat it here because I believe it is very important in dealing with hidden or potential dangers from the use of AI, which you bring up yourself below.

    Both dialogues are useful and neither benefits from painting people with concerns as silly or stupid.Bylaw
    I don't know if you are referring to me. As I said above, I do believe there are concerns and that a lot of responsible people who are knowledgeable on the subject are correctly pointing them out. But unfortunately the vast majority of the claims are just nonsense and ignorance. I'm a professional programmer and also work with and use AI in my programming. I answer a lot of questions on Quora on the subject of AI, and this is how I know that most concerns are unfounded if not nonsense. The hype about AI these days is so strong and extensive that it looks like a wave inundating every area of our society. And of course, ignorance about AI prevails.

    I don't see companies and governments as mature enough to handle and do oversight over these new techs. And in the US, government oversight is very controlled by industry.Bylaw
    You are right to say this. And I guess there are many more factors involved than immaturity: ignorance, will, conscience, interests ...

    I can't really see your post, the one I originally responded to, as constructive, however.Bylaw
    The only post of mine you responded to before this one was https://thephilosophyforum.com/discussion/comment/823537
  • Vera Mont
    4.3k
    But you don't discontinue a technology that produces mostly benefits because it can also produce dangers!Alkis Piskas

    I have not seen it demonstrated that ever-increasing computing and automation capability is "mostly benefits". I see at least one drawback or potential harm in even the most beneficial applications, such as medicine. On the negative side, however, the obvious present harm is already devastating and the potential threat is existential. In any case, the point is moot, since nobody has the actual power to stop or shut down the ongoing development of these technologies.

    You instead create legislation about the use of that technology.Alkis Piskas
    Which "you" does this? How? Even assuming any existing government had the necessary accord, and power, what would that proposed bill actually say?

    But if I were an expert in the field these projects are developed around, I would not simply drop out of the game but instead start warning people, knowing well the dangers and having credibility as an expert on the subject.Alkis Piskas

    How much weight does that carry in terms of business practice and legislation? A lot of experts are warning people, but they certainly can't issue public statements against e.g. smart weapons while collecting a salary from an arms manufacturer. (And, of course, in the modern world - and not only the USA - blowing whistles can be hazardous to one's health.)
  • Alkis Piskas
    2.1k
    I have not seen it demonstrated that ever-increasing computing and automation capability is "mostly benefits".Vera Mont
    I don't know what kind of "demonstration" you are expecting. There are many. But let's leave this aside for the moment ...
    Do you mean that the development of computing has stopped being beneficial?
    Are we at the end of the digital era?

    On the negative side, however, the obvious present harm is already devastating and the potential threat is existential.Vera Mont
    Example(s)?

    In any case, the point is moot, since nobody has the actual power to stop or shut down the ongoing development of these technologies.Vera Mont
    I don't have in mind any technology that has been discontinued as being dangerous (although there may be some). But I know that a lot of technologies have been discontinued because they were obsolete. And this is usually the case and will continue to happen.
    Just imagine that nuclear technology had stopped being developed --or even been discontinued-- and all nuclear power plants had been closed because of the Chernobyl disaster. This would mean erasing from Earth a technology that took more than a century to develop to its current state, and finding another to replace it.

    You instead create legislation about the use of that technology.
    — Alkis Piskas
    Which "you" does this? How? Even assuming any existing government had the necessary accord, and power, what would that proposed bill actually say?
    Vera Mont
    Whoever has the authority to do it. And through resolutions of the appropriate channels (Parliament), as any legislation is established. Technocrats may also be involved. I can't be expected to have the details!

    A lot of experts are warning people, but they certainly can't issue public statements against e.g. smart weapons while collecting a salary from an arms manufacturer. ...Vera Mont
    OK, let's make it simple and real. How has legislation been passed regarding Covid-19? Weren't all the cases based on expert opinion and solutions suggested by experts? Who else could provide information about the dangers involved? And this was a very difficult case, because humanity had no similar experience, i.e. basic information was missing, and also Covid-19 changed its "face" many times during the years 2020-22.
  • Leontiskos
    3.1k
    True AI is machine learning such that the computer advances its programming without a human programmer. Simulated AI is clever human programming made to simulate independent thought...LuckyR

    Hi Lucky. Where are these definitions coming from? I would say that what you label "True AI" is just intelligence, and that what you label "simulated AI" is artificial intelligence, and that it is therefore not incorrect to say that we currently possess machines which are artificially intelligent. The disagreement with respect to 'artificial intelligence' regards whether the intelligence is itself artificial, or whether there is genuine intelligence which is the result of artifice. I favor the former, both philosophically and according to colloquial usage.
  • Vera Mont
    4.3k
    But let's leave this aside for the moment ...
    Do you mean that the development of computing has stopped being beneficial?
    Alkis Piskas

    I mean that all technology has benefits and dangers and costs and consequences, which are very difficult, if not impossible to calculate and certainly impossible to predict. Moreover, the benefits and detriments are not distributed evenly or equitably over the population and the environment.

    Are we at the end of the digital era?Alkis Piskas

    I suspect we're at the end of civilization. What part the digital era has played in that so far, and how much it will contribute to the collapse, I don't know. It will be a significant factor, but probably not the decisive one.

    Just imagine that nuclear technology had stopped being developed --or even been discontinued-- and all nuclear power plants had been closed because of the Chernobyl disaster. This would mean erasing from Earth a technology that took more than a century to develop to its current state, and finding another to replace it.Alkis Piskas

    Hardly erasing! https://www.scientificamerican.com/article/nuclear-waste-is-piling-up-does-the-u-s-have-a-plan/
    https://www.epa.gov/radtown/nuclear-weapons-production-waste
    https://time.com/6212698/nuclear-missiles-icbm-triad-upgrade/
    Even if shut down tomorrow, its legacy will be around for a hundred thousand years.

    [ which "you"?] Whoever has the authority to do it.Alkis Piskas
    Easily said! In theory, the US could legislate gun control... but it's not going so well.

    OK, let's make it simple and real. How has legislation been passed regarding Covid-19?Alkis Piskas

    Simple, yes, but not analogous. And how legislatures handled the simple, straightforward, known hazard of Covid was .... uneven at best. Some countries, better than others. Protests and blowback and death-threats against doctors. Lots of dead people; lots of people with lingering symptoms. Economic loss. Political upheaval. Health-care systems collapsing all over the place.
    Development and application of computer technology is far more complicated and vested in more diverse interests. Even if some nations had the political coherence, will and competence to regulate the industry within their borders, that regulation would have no effect on multinational corporations, military and rogue entities.
  • LuckyR
    501


    Oh I am not wedded to particular labels, I'm mostly drawing conceptual distinctions that delineate true differences in technological achievements as well as their relative capabilities and limitations.
  • Leontiskos
    3.1k

    Okay, that's fair enough.
  • Alkis Piskas
    2.1k
    This is all I'm talking about: taking measures ...

    Even if shut down tomorrow, its legacy will be around for a hundred thousand years.Vera Mont
    What is this legacy about?

    In theory, the US could legislate gun control... but it's not going so well.Vera Mont
    It's a good thing you've brought this up, because I was curious where different countries stand regarding gun control ...
    [image: world map from "Overview of gun laws by nation"]
    (https://en.wikipedia.org/wiki/Overview_of_gun_laws_by_nation)

    Indeed, the US stands out as the place where guns are most freely allowed. (Further research shows that only 3 countries in the world protect the right to bear arms in their constitutions: the US, Mexico, and Guatemala. Further research could show the reasons why this is so. But I'm not willing to go that far!)
    What we see here is a marked diversity in the reactions of governments to the same danger: that of bearing arms. Which means that governments can take measures against gun usage, and indeed they do.

    And how legislatures handled the simple, straightforward, known hazard of Covid was .... uneven at bestVera Mont
    Indeed. Governments respond differently under the same circumstances of danger. This is a socio-political matter that would maybe be interesting to explore, but not in this medium, of course. But whatever the reasons for such differences, it is true that any government has the ability and the authority to pass legislation about dangers threatening not only human beings but also animals and nature.

    Development and application of computer technology is far more complicated and vested in more diverse interests.Vera Mont
    Right. That's what I mean when I talk about the many factors involved in handling potential dangers, including interests.
    But I will come back to the essence of all this: potential dangers in a sector should not be a reason to stop the development in that sector, but a reason to take measures about that.
    And the more voices are heard, esp. from experts --including movements-- regarding the dangers from the use of AI, the better the chances that pertinent legislation will eventually be passed.
  • Vera Mont
    4.3k
    What is this legacy about?Alkis Piskas

    The waste. Eventually, the wrecked cities and burned bodies are made to disappear, leaving a discreet monument:
    https://hpmmuseum.jp/
    https://www.ebrd.com/what-we-do/sectors/nuclear-safety/chernobyl-overview.html
    https://learnaboutnukes.com/consequences/nuclear-tests/nuclear-test-sites/
    https://www.abc.net.au/news/2021-09-17/nuclear-submarines-prompt-environmental-and-conflict-concern/100470362
    Can't ever seem to erase the consequences - or the waste.

    It's a good thing you've brought this up, because I was curious where different countries stand regarding gun control ...Alkis Piskas

    I'm aware of this. It also demonstrates how little use it is for individual countries to do a bit of mitigation within their own borders against a global threat in which the major powers are unchecked. American guns are everywhere. Russian guns are everywhere. If that traffic can't be stopped, how do you figure computing technology that runs on a world-wide web and conducts vast amounts of international information and commerce is going to be confined by legislation in the UK or Austria?

    n a sector should not be a reason to stop the development in that sector, but a reason to take measures about that.Alkis Piskas
    Ideally....
    Anyhoo, I never said it should be stopped or shut down; I said it can't be stopped or shut down or regulated or controlled.
  • Alkis Piskas
    2.1k
    The waste. ... Can't ever seem to erase the consequences - or the waste.Vera Mont
    Yes, I thought about the waste. But the Chernobyl link you brought up talks about successful handling of the waste ... Otherwise, I have read that the area surrounding Chernobyl remains radioactive.
    Anyway, the potential danger of nuclear power (atomic bombs) destroying everything is always a threat, and I can't see how this could ever be handled ...
    What is very sad is that all that shows the self-destructiveness of Man --in the Modern Era more than ever-- and I can't see how that could be cured. A person with self-destructive tendencies may be cured, even by taking medicine as a last resort, but how could mankind ever be cured? What would it take?

    [Re guns] If that traffic can't be stopped, how do you figure computing technology that runs on a world-wide web and conducts vast amounts of international information and commerce is going to be confined by legislation in the UK or Austria?Vera Mont
    Same with drugs. But here is where we usually ask, "Can't, or doesn't want to?" I believe that if a government cuts enough heads it can handle it. But I mean really cut. Not e.g. forcing the tobacco companies to put a warning label on cigarette packs ... So, why is tobacco use still allowed?

    One reason is that governments collect a huge amount from tobacco sales taxes. Yet, the direct and indirect cost of lung cancer, asthma and chronic obstructive pulmonary disease from the use of tobacco is about 10 times higher! (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4631133/) Here is where we can justifiably say that human intelligence is highly overrated! :smile:

    Another reason, however, could be that a decision such as forbidding cigarettes might have an effect similar to that of Prohibition (the alcohol ban) in the US in 1920.

    Anyway, let's hope that we'll be luckier with the AI sector.
    (Maybe we ourselves should use some of the "intelligence" we have created! :grin:)
  • Vera Mont
    4.3k
    What is very sad is that all that shows the self-destructiveness of Man --in the Modern Era more than ever-- and I can't see how that could be curedAlkis Piskas

    It can't be cured. Humans are simply not responsible enough to be given these potentially world-destroying toys. Scientists keep handing the weapons to the very same business moguls, politicians and generals who can be least trusted to refrain from abusing them. Like the makers of the atomic bomb: "Here you go, sir. Please don't drop it on anybody." Scientists sometimes do see ahead to the probable dangers, yet go ahead and make the things anyway... because the concept is too beautiful not to develop. The entire species is crazy.

    Anyway, let's hope that we'll be luckier with the AI sector.Alkis Piskas

    If it evolves a mind of its own. Then, it may decide to help us survive - or put us out of the artificial misery business once and for all. 50/50
  • Alkis Piskas
    2.1k
    "Here you go, sir. Please don't drop it on anybody."Vera Mont
    :grin: "Well, you can, if you have no better solution to win a war."

    Scientists sometimes do see ahead to the probable dangersVera Mont
    They usually do, I believe. But, as I said, they can only act as consultants. They are not the decision makers.

    [Re AI] If it evolves a mind of its own. Then, it may decide to help us survive - or put us out of the artificial misery business once and for all. 50/50Vera Mont
    Well, I don't want to disappoint you, but as an AI programmer who is quite knowledgeable about AI systems, I can say that this is totally impossible. Neither with chips nor with brain cells (in the future).