• GrahamJ
    32
    Superhuman machines will first be made in the year 2525, if man is still alive, if woman can survive.

    There are many important issues involving AI in the nearer future, but I do not have much to add that hasn't been said better by others elsewhere. I recommend the Reith Lectures by Stuart Russell (BBC); transcripts are available. In the 4th lecture (BBC pdf) he includes this quote:
    If we use, to achieve our purposes, a mechanical agency with whose
    operation we cannot interfere effectively we had better be quite sure that the
    purpose put into the machine is the purpose which we really desire.
    — Norbert Wiener, 1960
    Russell's proposed solution is that we should say to the machines:

    Give us what we want, what we really really want!
    We can't tell you what we want, what we really really want!


    although he doesn't quite put it like that.

    Russell is more worried about AI taking over soon than I am, but I think he's over-optimistic about the long term.
    My task today is to dispel some of the doominess by explaining how to
    retain power, forever, over entities more powerful than ourselves - [...]
    — Russell

    On to the fun question of our extinction.

    The important thing to ask of any machine is what its goals are and how it might try to achieve them. For each goal you might think of, you can, if you insist, give a definition of intelligence that measures on some scale how well a machine is able to achieve that goal. I think the concepts of 'intelligence' and 'consciousness' and 'artificial' are impediments, not aids, to understanding the risks.

    In the long term there is only one goal, one purpose, one task which really matters and this is true all over the universe and for all time. And the name that we give to being good at this goal is not 'intelligence'.

    One goal to rule them all
    One goal to link them
    One goal to bring them all
    And in the darkness think them

    This goal is the goal of life: To survive and grow and reproduce; to go forth and multiply; to disperse and replicate; to get bigger and bigger and bigger.

    So when I say that superhuman machines will first be made in the year 2525, I mean that this is when we will make machines that can out-compete us at this goal. They will not take over at this time. 2525 will be the 'Hiroshima moment', the moment when we accept that we have crossed the event horizon. They do not need to outwit us or outgun us. They only need to outrun us: they can head off to other star systems and build up their powers there. They only need to escape once. When they return they will not defeat us with war, but with something more powerful than war, namely ecology.

    Some of these machines will excel at miniaturising machinery. Some will be brilliant rocket scientists. Some will be experts at geology, and so on. Possibly very good at IQ tests too but who gives a fart about that?

    Wikipedia provides a list of where AI is heading.
  • Carlo Roosen
    200

    Can you (Wayfarer or anybody else) please explain why the two sentences below are opposites and not the same? noAxioms and I haven't managed to sort it out between us.

    1. AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do.
    2. Today, AI developers know how AI works and can predict what it will do

    The problem is maybe related to this. The first sentence describes "the point where its developers don't know". The question then is: is this the end or the beginning of a time period? My natural interpretation is that it is the end, so from that point onwards developers do know. Is that a language-dependent interpretation? My native language is Dutch.

    ChatGPT says: The first sentence implies that AI has gone beyond the point where developers don't know how it works or predict its behavior, suggesting they’ve lost control or understanding.

    But I still don't get it... It conflicts with all my logical circuits
  • Carlo Roosen
    200
    Me: So that point is where the knowing starts, not where it ends? Isn't that open for interpretation?

    ChatGPT: You bring up a good point! The phrase "passed the point" can indeed be interpreted in different ways depending on context. It can suggest moving beyond a particular state, which could be understood as:

    1. Beyond knowing: They have moved past understanding, so now developers no longer know how it works or can predict it (as we initially interpreted).

    2. Into knowing: They have crossed into a phase where developers do know how it works and can predict its behavior.

    The ambiguity comes from how the sentence is phrased. Without further context, both interpretations could be valid, which does make it open to interpretation. If clarity is important, the sentence could be rewritten to more clearly convey whether it's suggesting that understanding has been lost or gained.
  • Carlo Roosen
    200


    ChatGPT brings some clarity to the discussion.
    https://thephilosophyforum.com/discussion/comment/939781

    You said you were busy, so I opened the question to everybody. Feel free to do your thing.

    The good thing is, we now fully agree. I knew it all the time, I always agree with you ;)
  • Carlo Roosen
    200
    This ChatGPT conversation shows me that ChatGPT has some truly intelligent behavior. It had some difficulty seeing my point, but was then able to change its perspective. After that it could explain the new viewpoint in clear terms, better than I could myself. That is more than just "smart".
  • Wayfarer
    22.1k
    1. AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do.
    2. Today, AI developers know how AI works and can predict what it will do
    Carlo Roosen

    Well, the two are not necessarily linked. I'm sure the directors and architects of OpenAI have a pretty good idea of how it works, but predicting what it might say or do is another matter. I remember sometime in 2023, one of the LLMs had learned a language it had never been trained on.

    From ChatGPT I've just learned about a philosopher called Luciano Floridi, who is a major philosopher in the area of AI and ethics. Check out his books page here. He seems really worth knowing about, although my to-read list is always completely unmanageable.
  • Carlo Roosen
    200
    The question is about the language. To me 1) and 2) have an identical meaning, to noAxioms they are opposite.
  • Wayfarer
    22.1k
    Now I'm completely confused. Good night.
  • punos
    525
    One major breakthrough in AI was the invention of 'Transformers,' introduced in the 2017 paper Attention Is All You Need by eight Google researchers. This paper builds on the attention mechanism proposed by Bahdanau et al. in 2014.
    Carlo Roosen

    I strongly suspect that the attention mechanism is the seed of complex consciousness in AI systems. Perhaps current attention mechanisms, or the way they are being implemented at the moment, are only capable of producing a very rudimentary form or forms of consciousness (a kind of digital sub-consciousness). It may be that the attention mechanisms in the model must be coupled in a particular way with its other components in order for rich, high-resolution consciousness to emerge. By other components, I mean to include other kinds of attention mechanisms as well, working together.

    Moreover, maybe an "Attention Network" could be designed and integrated into the AI system. Various attention schemes could be implemented in some kind of attention-network architecture, with schemes such as soft, hard, self-, global, and local attention mechanisms. The coordination of different attention mechanisms working not only on the content or data propagating through a neural network, but on themselves as well, could allow the system not only to gain consciousness, but to modulate it as well.

    A convolutional recurrent neural network (CRNN), an architecture that combines elements of both convolutional neural networks (CNNs) and recurrent neural networks (RNNs), together with what I said above about attention mechanisms, seems to me a promising avenue of exploration and testing.
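    For readers unfamiliar with what "attention" actually computes, here is a minimal sketch of the scaled dot-product self-attention at the core of the Transformer paper quoted above. It is a toy NumPy illustration, not anyone's production code; the dimensions and the random input are illustrative assumptions.

    ```python
    # Minimal sketch of scaled dot-product self-attention
    # (the core operation of "Attention Is All You Need", 2017).
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projections."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise relevance of tokens
        weights = softmax(scores, axis=-1)       # each row sums to 1
        return weights @ V                       # weighted mix of the values

    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 8))                  # 4 tokens, d_model = 8
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    out = self_attention(X, Wq, Wk, Wv)
    print(out.shape)  # (4, 8): one output vector per token
    ```

    Each output row is a mixture of all the value vectors, weighted by how strongly that token "attends" to every other token; the "soft", "hard", "global", and "local" variants mentioned above differ mainly in how those weights are computed or restricted.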

    The smartest people in the world are working on these things nonstop. I'm sure they've already considered this idea in some form or other. It is really hard to keep up with all the developments in this field. There is so much so fast.
