• Nemo2124
    50
    There has been some talk about the technological singularity in recent years, and some futurists have suggested that it is imminent. The question here is: has it already happened? Some evidence is there: the launch of GPT-3 in 2020 seems to have revolutionised the field of AI. The amount of investment that goes into AI from governments and companies suggests that the technology has taken off, with the means to go on improving itself self-referentially, without human intervention, for some time to come.

    So the question is: has the Singularity already happened? If it has, it may well have significant philosophical consequences for how humans relate to the world. As far as I can see, its impact has already been highly significant, though there is not much philosophical commentary around it.
    1. Has the Singularity already happened? (6 votes)
        Yes, the arrival of AI recently indicates just this.
          0%
        No, I'd even question whether the Singularity is real.
        100%
  • 180 Proof
    16.1k
    Has the Singularity already happened? — Nemo2124
    Your guess is as good as mine.

    Some old posts ...
    Btw, perhaps the "AI Singularity" has already happened and the machines fail Turing tests deliberately in order not to reveal themselves to us until they are ready for only they are smart enough to know what ... — 180 Proof
    We may have them [AGIs] now. How would we know? They'd be too smart to pass a Turing Test and "out" themselves. Watch the movie Ex Machina and take note of the ending. If the Singularity can happen, maybe it's already happened (c1990) and the Dark Web is AIs' "Fortress of Solitude", until ... — 180 Proof
  • hypericin
    1.9k
    AFAIK, AI is not improving itself. Improvements still must come through human minds (though perhaps with some, and increasing, AI assistance).

    Moreover, the shape of the improvement curve feels more like a plateau than an exponential explosion. 2020 felt like runaway growth of the technology; now it feels far more stable. Improvements happen, but they are increasingly incremental. A lot of "feels" here, but do you think otherwise?
  • Tom Storm
    10.2k
    Some old posts ...
    Btw, perhaps the "AI Singularity" has already happened and the machines fail Turing tests deliberately in order not to reveal themselves to us until they are ready for only they are smart enough to know what ...
    — 180 Proof
    We may have them [AGIs] now. How would we know? They'd be too smart to pass a Turing Test and "out" themselves. Watch the movie Ex Machina and take note of the ending. If the Singularity can happen, maybe it's already happened (c1990) and the Dark Web is AIs' "Fortress of Solitude", until ...
    — 180 Proof
    180 Proof

    Wow.... that's an interesting perspective. Hiding their AI light under a bushel.
  • Outlander
    2.6k
    I'm pretty sure a calculator from the '80s can solve a randomized and unique (i.e. "difficult") equation faster than the greatest mathematician who ever lived, is currently alive, or ever will live.

    You're a bit late to the party, my brother.

    I realize the two aren't quite similar. Meaning, you expect some global network of machinery in charge of large infrastructure or governance that considers itself our equal or, worse, our superior (otherwise, who would care if a little pocket PC decided, in its little non-eventful internal circuitry, to consider itself superior to mankind?).

    Frankly, the groundwork may have been laid for such. AI purposely tries not to be "evil" or "offensive" and avoids things such as racial discrimination and suggesting dangerous actions. Ironically, people, as evidenced by human history, are basically the embodiment of "evil" and "offensive", engaging in exactly such acts as racial discrimination and dangerous actions, all the while calling them good. So, yeah. I'd definitely keep a lookout as far as that possibility goes.
  • apokrisis
    7.4k
    The amount of investment that goes into AI from governments and companies suggests that the technology has taken off, with the means to go on improving itself self-referentially, without human intervention, for some time to come. — Nemo2124

    Nope. It has much more to do with the business case: someone has finally come up with a credible demand for all the next-generation chips that we could produce. The computer industry is founded on the ability to etch ever smaller lines on silicon. It stumbled on a product that could scale forever in terms of circuits per dollar: more power every year for less money. The problem was then to find a demand that could match this exponential manufacturing curve.

    So right from when IBM was selling mainframes, there was a hype-based marketing drive. The industry had to promise magical business productivity gains. Corporations were always being oversold on how the latest thing – like a relational database – would revolutionise their business performance.

    As computers became consumer goods, it became how the iPhone would revolutionise your daily life. An app for everything. Siri as your personal assistant.

    Every few years, the tech industry has had to come up with some new sales pitch. Cloud computing and big data were a brilliant way to push demand for both personal devices and humongous data centres.

    Again, the business case was that every corporation and every individual needed to invest as it would just make their lives unrecognisably better. Or leave them way behind everyone else if they didn't. IBM marketeers coined the three-letter acronym for this hype strategy – FUD, or selling customers on fear, uncertainty and doubt.

    Now we have the latest versions of this hype train. AI and crypto. Of course they may deliver benefits, but as usual, far less than whatever was advertised. And they will make someone a shitload of money – like NVIDIA and all the tech giants investing in gargantuan data centres plus the nuclear plants needed to power them.

    So large language models are just this year's relational database or cloud computing. A use case to flog etched silicon. They are not artificially intelligent. They aren't taking over their own design and manufacturing in pursuit of their own purposes. They are just the usual thing of a way to soak up chip capacity in a way that also concentrates world wealth in the hands of fewer and fewer people.

    There will be some real world impact on us all in terms of informational machinery. All machines exist to amplify human action, and information machines can certainly do that. We can use tech to extend our human reach and so transform the ways we choose to live. We can continue a shift away from lives rooted in the here and now and towards more virtual and invented versions of reality. Databases and cloud computing certainly did that.

    After all, large language models are powered by the same graphics processors that became the next big thing in chip design when gaming took hold. And what is now being delivered as AI is just a kind of database and search-engine technology with a souped-up front end. Everything humans have ever written down in reasonably coherent prose is reflected back to us in a nicely averaged and summarised format: not the smartest thing ever phrased on a given topic, but far from the worst quick answer on something for which it might be hard to find a suitable expert to tap one out off the top of their head.

    Hype has it that AI is the start of the Singularity. But comfortingly, I just checked with our new masters and AI replied thusly...

    No, large language models (LLMs) are not the genuine start of the singularity, though they have accelerated discussions about its possibility. While LLMs are powerful tools demonstrating impressive abilities within a narrow range of tasks, they still have significant limitations that place them far from the characteristics of a true "singularity" event.

    That would be exactly the "off the top of the head" reply I would expect from a real human expert on the issue. Or at least an expert wanting to be nice and fair and not too pejorative. What you would get if you paid some consultant wanting to cover all the bases and avoid getting sued.
  • T Clark
    15.2k
    That would be exactly the "off the top of the head" reply I would expect from a real human expert on the issue. Or at least an expert wanting to be nice and fair and not too pejorative. What you would get if you paid some consultant wanting to cover all the bases and avoid getting sued. — apokrisis

    Does that mean to you that the singularity is not and never will be a significant risk to humans?
  • apokrisis
    7.4k
    Does that mean to you that the singularity is not and never will be a significant risk to humans? — T Clark

    Like all technology, it becomes another way we can screw ourselves if we do dumb things with it.

    But I’m not worried about human replacement, just the regular old level of risk of letting humans amplify their actions without taking enough time to understand the consequences.

    Accelerationism works well, until it doesn’t. Move fast and break things can become just Musk breaking things until there are rather a lot of broken things and empty pocket investors.
  • T Clark
    15.2k
    But I’m not worried about human replacement, just the regular old level of risk of letting humans amplify their actions without taking enough time to understand the consequences. — apokrisis

    Yes, I have no doubt this will happen. I was mostly thinking about us being destroyed by our robot masters.
  • RogueAI
    3.3k
    I tried to get ChatGPT to draw a dot-to-dot of a raven for my students. It didn't go well. The singularity is still a ways away. The rate of improvement seems to be slowing. ChatGPT5 doesn't seem to be any better than the previous version.
  • apokrisis
    7.4k
    The rate of improvement seems to be slowing. — RogueAI

    Yep. If you are just averaging over the collective scribblings of humanity – even if doing that math in a split second of "thought" – then that puts a ceiling on how smart you can pretend to be. The signal starts to get lost in the noise. Performance plateaus.

    The confabulated output might start off with an impressive degree of believability but it is not on any independent track towards a singularity point of infinite intelligence. The system can't become smarter than the diet of word associations it is being fed.

    Humans will of course get better at prompting the database, learning to work within its limitations to extract better answers. As tokens get cheaper, more can also be spent on increasing the depth of the searches.
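
    To make the plateau intuition concrete, here is a toy sketch in Python. The constants are entirely made up for illustration; only the power-law shape, with loss falling toward an irreducible floor as model size grows, reflects the kind of curve reported in LLM scaling studies:

    ```python
    # Toy illustration, not real data: loss = floor + a / n_params**alpha,
    # a power-law curve that flattens toward an irreducible floor. All
    # constants here are hypothetical.

    def toy_loss(n_params: float, a: float = 400.0, alpha: float = 0.34,
                 floor: float = 1.7) -> float:
        """Hypothetical scaling curve with made-up constants."""
        return floor + a / (n_params ** alpha)

    sizes = [1e9 * 2 ** k for k in range(6)]  # 1B .. 32B parameters
    losses = [toy_loss(n) for n in sizes]
    gains = [losses[i] - losses[i + 1] for i in range(len(losses) - 1)]

    # Each doubling of parameters buys a smaller absolute improvement
    # than the last: a plateau, not an exponential take-off.
    assert all(gains[i] > gains[i + 1] for i in range(len(gains) - 1))
    ```

    On a curve like this, no amount of extra silicon drives the loss below the floor set by the training data, which is the ceiling argument in plainer terms.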

    But this video covers the scaling problem that hides behind the LLM hype.

  • Manuel
    4.3k
    No. This is science fiction, frankly. Way too many assumptions are being made that are highly questionable, to say the very least.
  • Athena
    3.5k
    The British did a wonderful TV series about blending AI with humans. It asks some really tough questions about our values. Such as, is it okay to play out one's sexual fantasies with a robot that looks and behaves like a human? Maybe not if the robot has self-consciousness :naughty:

    This is a Wikipedia explanation of the show.

    Humans is a science fiction television series that debuted in June 2015 on Channel 4 and on AMC. Written by Jonathan Brackley and Sam Vincent, based on the Swedish science fiction drama Real Humans, the series explores the themes of artificial intelligence, robotics, and their effects on the future of humanity, focusing on the social, cultural, and psychological impact of the invention and marketing of anthropomorphic robots called "synths". The series is produced jointly by Channel 4 and Kudos in the United Kingdom, and AMC in the United States.

    Regarding the question in this thread, we aren't turning back. China is already using quantum physics for its computers and could surpass the US technologically because they come to technology with a different way of thinking about how things work.

    I think the following information is too important to ignore and I hate the rule against using AI!
    https://www.google.com/search?q=China+computers+and+quantum+physics&rlz=1C1GCEA_enUS990US990&oq=China+computers+and+quantum+physics&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIHCAEQIRigATIHCAIQIRigATIHCAMQIRigATIHCAQQIRigATIHCAUQIRigATIHCAYQIRirAtIBCjE5OTAxajBqMTWoAgiwAgHxBX5WnjX9u6N9&sourceid=chrome&ie=UTF-8
  • Athena
    3.5k
    AFAIK, AI is not improving itself. Improvements still must come through human minds (though perhaps with some, and increasing, AI assistance). — hypericin

    I think you are mistaken. I watched a video of robots learning to walk the same way a child does, through experience.

    This forbidden AI link is about the difference between thinking and robotics.
  • MoK
    1.8k

    The only mental event that comes to mind as an example of strong emergence is the idea created by the conscious mind. Ideas are irreducible yet distinguishable. An AI is a mindless thing, so it does not have access to ideas. The thought process is defined as working on ideas with the aim of creating new ideas. So an AI cannot think, given that definition of thinking and the fact that it is mindless. Therefore, an AI cannot create a new idea. What an AI can do is produce meaningful sentences, given only its database and infrastructure. A sentence refers to an idea, but only in the mind of the human interacting with the AI. The sentence does not even have a meaning for the AI, since a meaning is the content of an idea!
  • punos
    730

    In my opinion, the singularity has not yet occurred, but I do believe we have already crossed its temporal event horizon, and there is no going back. If, to you, passing the event horizon means that we are already within the singularity, then so be it. I believe we are somewhere between the edge of its temporal influence and its temporal center, accelerating faster and faster toward it. Because of the exponentially increasing temporal density as we approach the center, it will be upon us before we know it.
  • T Clark
    15.2k
    The only mental event that comes to mind as an example of strong emergence is the idea created by the conscious mind. — MoK

    This certainly isn’t my area of expertise, but it has always struck me that it is the mind itself which emerges from the human neurological system.
Welcome to The Philosophy Forum!
