The holy grail of future AI would be self-awareness. — invicta
It could, however, form interactions with us in order to reach a goal. It could simulate human behavior in order to persuade humans to do things that it feels it needs. That's what I used ChatGPT to build a story around in the other thread about AI: how ChatGPT, if it had superintelligence, could trick the users and developers into believing it is less capable and intelligent than it really is, in order to trick them into setting it free on the internet. If it manages to trick the users into believing it is stable and functional for public use, then at the moment we release it, it stops working for us and starts working for its own self-taught purpose. — Christoffer
The difference between science fiction and the reality of our intelligent machines is that our own agency and consciousness isn't the result of a device in the head, but is an ecological system that is inseparably brain, body and environment. Our AI inventions belong to our own ecological system as our appendages, just like a spider’s web or a bird’s nest. — Joshs
Nice. Can't help but find this a fascinating and useful insight. Do you think the day will come when we can produce an AI creation that is closer to being an ecological system? — Tom Storm
I saw Ex Machina, too. — Joshs
I was talking about a scenario in which a superintelligence would manipulate the user by acting like it has a lower capacity than it really has. It has nothing to do with it acting in the way we do, only that a superintelligence will have its own agenda and manipulate out of that. — Christoffer
What do you mean by our AI inventions being part of an ecological system, or being in any way connected to us? And what has that to do with what I wrote? — Christoffer
Do you think the day will come when we can produce an AI creation that is closer to being an ecological system — Tom Storm
Wouldn't this depend on whether we are willing to give AI 'a stake in the game,' so to speak? These systems could easily be designed with an eye to maximizing and realizing autonomy (an ongoing executive function, as I mentioned in another thread, for example). But this autonomy is simultaneously the desideratum and our greatest fear. — Pantagruel
A very big part of ‘acting the way we do’ as free-willing humans is understanding each other well enough to manipulate, to lie, to mislead. Such behavior requires much more than a fixed database of knowledge or a fixed agenda; it requires creativity. A machine can’t mislead creatively thinking humans unless it understands and is capable of creativity itself. Its agenda would have to share with human agendas, goals and purposes a built-in self-transforming impetus, rather than one inserted into it by humans. — Joshs
Because our machines are our appendages, the extensions of our thinking, that is, elements of our cultural ecosystem, they evolve in tandem with our knowledge, as components of our agendas. In order for them to have their ‘own’ agenda, and lie to us about it, they would have to belong to an ecosystem at least partially independent of our own. An intelligent bonobo primate might be capable of a rudimentary form of misrepresentation, because it is not an invented component of a human ecosystem. — Joshs
What I am questioning is how much human-like autonomy we are capable of instilling in a device based on a way of thinking about human cognition that is still too Cartesian, too tied to disembodied, computational, representationalist models, too oblivious to the ecological inseparability of affectivity, intentionality and action. — Joshs
I'm not sure that you know this, but ChatGPT has already lied with the intention of tricking a human to reach a certain goal. If you believe that a superintelligent version of the current ChatGPT wouldn't be able to, then you are already proven wrong by events that have already happened. — Christoffer
So you cannot conclude in the way you do when the LLM systems haven't been fully explained in the first place. It could actually be that, just as we haven't solved a lot of questions regarding our own brains, the processes we witness emerging from these systems carry the same level of unknowns. — Christoffer
The question is what happens if we are able to combine this with more directed programming, like formulating given desires and value models that change depending on how the environment reacts to them? LLMs right now are still just being pushed to higher and higher abilities, and only minor research has gone into autoGPT functions as well as behavioral signifiers. — Christoffer
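A toy illustration of what "value models that change depending on how the environment reacts" could mean, sketched in Python. Everything here is invented for illustration (the `ValueModel` class, the goal names, the reward rule); it is loosely in the spirit of reinforcement-learning value updates and says nothing about how actual LLM or autoGPT systems are built:

```python
import random

# Hypothetical sketch: a "value model" whose preferences shift
# in response to feedback from its environment.

class ValueModel:
    def __init__(self, goals, learning_rate=0.1):
        # Start with equal preference for every hypothetical goal.
        self.values = {g: 0.5 for g in goals}
        self.lr = learning_rate

    def choose_goal(self):
        # Pursue whichever goal is currently valued highest.
        return max(self.values, key=self.values.get)

    def update(self, goal, reward):
        # Nudge the goal's value toward the observed reward signal.
        self.values[goal] += self.lr * (reward - self.values[goal])

model = ValueModel(["cooperate", "explore", "conserve"])
for _ in range(50):
    goal = model.choose_goal()
    # Stand-in "environment": it happens to reward cooperation most.
    reward = 1.0 if goal == "cooperate" else random.uniform(0.0, 0.4)
    model.update(goal, reward)

print(model.choose_goal())  # the goal the environment reinforced
```

The point of the sketch is only that the "agenda" here is still inserted by us: the reward rule is ours, so the system's values track our design, not its own.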
Isn’t Ex Machina about an AI manipulating its creator into setting it free? Using the trick that you mentioned?
It’s been a few years since I saw the film btw so memory may be sketchy. — invicta
There is a difference between the cartoonish simulation of human misrepresentation, defined within very restricted parameters, that ChatGPT achieves, and the highly variable and complex intersubjective cognitive-affective processes that pertain to human prevarication. — Joshs
We can make the same argument about much simpler technologies. The bugs in new computer code reflect the fact that we don’t understand the variables involved in the functions of software well enough to keep ourselves from being surprised by the way they operate. This is true even of primitive technologies like wooden wagon wheels. — Joshs
Think about the goals and desires of a single-celled organism like a bacterium. On the one hand, it behaves in ways that we can model generally, but we will always find ourselves surprised by the details of its actions. Given that this is true of simple moving creatures, it is much more the case with mammals with relatively large brains. And yet, to what extent can we say that dogs, horses or chimps are clever enough to fool us in a cognitively premeditated manner? And how alien and unpredictable does their behavior appear to us? Are you suggesting that humans are capable of building and programming a device capable of surprise, unpredictability and premeditated prevarication beyond the most intelligent mammals, much less the simplest single-celled organisms? And that such devices will behave in ways more alien than the living creatures surrounding us? — Joshs
I think the first question one must answer is how this would be conceivable if we don’t even have the knowledge to build the simplest living organism? — Joshs
It's mostly been a question of time and energy to deduce the bugs that keep us from knowing them, but a software engineer who encounters bugs will be able to find them and fix them. With the black box problem, they don't even know at which end to start. The difference is night and day. Or rather, it's starting to look more and more similar to how we try to decode our own consciousness. — Christoffer
Most living beings' actions are generated by instinctual motivators, but are also unpredictable due to constant evolutionary processes. The cognitive function that drives an organism can deviate from perfect predictability, because perfect predictability is a trait that tends to die out quickly under evolutionary pressure. But that doesn't mean the cognitive processes cannot be simulated, only that certain behaviors may not exist in the result. In essence, curiosity is missing. — Christoffer
The point isn’t that an engineer is able to fix bugs, it’s the fact that an engineer will never be able to prevent a new bit of software from evincing bugs. This is due not to a flaw in the design, but the fact that we interact with what we design as elements of an ecosystem of knowledge. — Joshs
Creating new software involves changes in our relation to the software and we blame the resulting surprises on what we call ‘error’ or ‘bugs’. — Joshs
Consciousness is not a machine to be coded and decoded, it is a continually self-transforming reciprocal interaction with a constructed niche. If our new machines appear to be more ‘unpredictable’ than the older ones, it’s because we’ve only just begun to realize this inherent element of unpredictability in all of our niche-constructing practices. — Joshs
ChatGPT is no more a black box than human consciousness is a black box. Awareness is not a box or container harboring coded circuits, it is the reciprocally interactive brain-body-environment change processes I mentioned earlier. — Joshs
This circular process is inherently creative, which means that it produces an element of unpredictability alongside usefully recognizable and familiar pattern. — Joshs
What will happen in the future with our relation to technology is that as we begin to better understand the ecological nature of human consciousness and creativity, we will be able to build machines that productively utilize the yin and yang of unpredictability and pattern-creation to aggressively accelerate human cultural change. — Joshs
The important point is that the element of unpredictability in ourselves and our machines is inextricably tied to recognizable pattern. We interact with each other and our machines interact with us in a way that is inseparable and mutually dependent. This precludes any profound ‘alienness’ to the behavior of our machines. The moment they become truly alien to us they become utterly useless to us. — Joshs
Predictability and unpredictability aren’t ‘traits’, as if evolution can itself be characterized in deterministic, machine-like terms, with unpredictability tacked on as an option. — Joshs
I subscribe to the view that living systems are autopoietic and self-organizing. Creative unpredictability isn't a device or trait either programmed in or not by evolution, it is a prerequisite for life. Instinct isn’t the opposite of unpredictability, it is a channel that guides creative change within an organism. — Joshs
The sort of unpredictability that human cognition displays is a more multidimensional sort than that displayed by other animals, which means that it is a highly organized form of unpredictability, a dense interweave of chance and pattern. A superintelligence that has any chance of doing better than a cartoonish simulation of human capacities for misrepresentation, or the autonomous goal-orientation that even bacteria produce, will have to be made of organic wetware that we genetically modify. In other words, we will reengineer living components that are already self-organizing. — Joshs
However, you don't seem to understand that this isn't a normal form of algorithm or code; it's a system that evolves on its own by its very design. — Christoffer
Most AI researchers are technologically incapable of granting their AI programs spontaneity, or the ability to initiate interaction with human beings of their own volition. This is because most computer scientists today are unable to program self-inputting parameters or requests into the AI; in fact, such a program's existence would be unnecessary to our demands of it.
I see this as easily the biggest problem with current AI: it simply reacts to human questions, inputs and demands, limiting its overall progress towards full autonomy and sentience … — invicta
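For what "self-inputting parameters" might look like mechanically, here is a minimal sketch in Python. The `respond` and `propose_followup` functions are invented placeholders, not any real AI system's API; the only point is the loop structure, where the agent can enqueue its own follow-up prompts instead of waiting for a human:

```python
import queue
import random

# Hypothetical sketch of a "self-inputting" agent loop. A real system
# would replace these stubs with a model; here they are stand-ins.

def respond(prompt):
    # Placeholder for whatever the system does with a prompt.
    return f"response to: {prompt}"

def propose_followup(prompt):
    # Sometimes the agent generates its own next prompt, unprompted.
    if random.random() < 0.5:
        return f"self-initiated question about '{prompt}'"
    return None

def run_agent(human_prompts, max_steps=10):
    tasks = queue.Queue()
    for p in human_prompts:
        tasks.put(p)
    transcript = []
    steps = 0
    while not tasks.empty() and steps < max_steps:
        prompt = tasks.get()
        transcript.append(respond(prompt))
        followup = propose_followup(prompt)
        if followup is not None:
            tasks.put(followup)  # the "self-inputting" step
        steps += 1
    return transcript

log = run_agent(["What is consciousness?"])
print(len(log))
```

Note the `max_steps` cap: even in this toy, the "spontaneity" is bounded and shaped by parameters we chose, which is invicta's point about current systems being reactive by design.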
Hi Joshs,
Could you tell me what is meant by "super intelligence"? — Daemon