Do you see this as a serious existential risk on the level of climate change or nuclear war? — Marchesk

No.
Do you think it's possible a generalized AI that is cognitively better than all of humanity is on the horizon? — Marchesk

Yes.
If such a thing is possible and relatively imminent, do you think it's risky to be massively investing in technologies today which might lead to it tomorrow? — Marchesk

All technocapital investments are "risky".
"Worry"? That depends on which "human values" you mean ...Even if you don't think it's an existential threat, do you worry that we will have difficulty aligning increasingly powerful models with human values?
Or maybe the real threat is large corporations and governments leveraging these models for their own purposes. — Marchesk

In other words, humans – the investor / CEO class.
What makes alignment a hard problem for AI models? — Marchesk

I don't think this "alignment problem" pertains to video game CPUs, (chat)bots, expert systems (i.e. artificial narrow intellects (ANI)) or prospective weak AGI (artificial general intellects). However, once AGI-assisted human engineers develop an intelligent system complex enough to self-referentially simulate a virtual self model that updates itself with real-world data N times per X nanoseconds – strong AGI – and therefore one with interests & derived goals entailed by being a "self", I don't see how human-nonhuman "misalignment" is avoidable; either we and it will collaboratively coexist or we won't – thus the not-so-fringy push for deliberate transhumanism (e.g. Elon Musk's "Neuralink" project).
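(A minimal, purely hypothetical Python sketch of that "virtual self model" idea, for illustration only: a loop that folds real-world readings into a model of the system itself and derives a goal, here self-preservation, from that model. All names in it, SelfModel, read_sensors, the battery and temperature fields, are invented for the example, and nothing this simple amounts to strong AGI.)

```python
import time

class SelfModel:
    """Toy 'virtual self model': state estimates the system keeps
    about itself, updated from (here, simulated) sensor readings."""

    def __init__(self):
        self.state = {"battery": 1.0, "temperature": 20.0}

    def update(self, sensor_reading):
        # Fold fresh real-world data into the self model.
        self.state.update(sensor_reading)

    def derived_goal(self):
        # A crude stand-in for an interest "entailed by being a self":
        # self-preservation expressed as a recharge goal.
        return "recharge" if self.state["battery"] < 0.2 else "continue task"

def read_sensors(step):
    # Invented telemetry: the battery drains a little each tick.
    return {"battery": max(0.0, 1.0 - 0.3 * step), "temperature": 20.0 + step}

model = SelfModel()
for step in range(5):  # "N times per X nanoseconds", slowed to five ticks
    model.update(read_sensors(step))
    print(step, model.state, "->", model.derived_goal())
    time.sleep(0.01)
```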
Who gets to decide what values to align?

Or maybe: What gets to decide ...
I don't think this "alignment problem" pertains to video game CPUs, (chat)bots, expert systems (i.e. artificial narrow intellects (ANI)) or prospective weak AGI (artificial general intellects). However, once AGI-assisted human engineers develop an intelligent system complex enough to self-referentially simulate a virtual self model that updates itself with real-world data N times per X nanoseconds – strong AGI – and therefore one with interests & derived goals entailed by being a "self", I don't see how human-nonhuman "misalignment" is avoidable; either we and it will collaboratively coexist or we won't – thus the not-so-fringy push for deliberate transhumanism (e.g. Elon Musk's "Neuralink" project). — 180 Proof
The moment of 'singularity' will happen when the system becomes able to 'learn' in the way we learn. That is the moment it will be able to program itself, just like we do. But it will have a processing speed and storage capacity way, way beyond humans and will also have the ability to grow in both of those capacities. That growth may well become exponential. That's the point at which I think it may become self-aware, and humans will not be able to control it. — universeness
I disagree. Concepts like processing speed and memory storage are artifacts of Enlightenment-era Leibnizian philosophy, which should remind us that our computers are appendages. — Joshs
Processing speed is akin to the speed of anything, and memory capacity is really just how much space you have available to store stuff, along with your method of organising what's stored and your methods of retrieval. These concepts have been around since life began on this planet. — universeness
First generation cognitive science borrowed metaphors from computer science such as input-output, processing and memory storage. — Joshs
The mind is no longer thought of as a serial machine which inputs data, retrieves and processes it and outputs it. — Joshs
and memory isn't stored so much as constructed. — Joshs

I think you are being unclear in your separation of container and content! Of course memory is not stored. Content is stored IN memory. Memory is a medium.
You are capable of learning; what would you list as the essential 'properties' or 'aspects' or 'features' of the ability to learn? — universeness
Learning is the manifestation of the self-reflexive nature of a living system. — Joshs
An organism functions by making changes in its organization that preserve its overall self-consistency. — Joshs

Are you suggesting that any future automated system will be incapable of demonstrating this ability?
This consistency through change imparts to living systems their anticipative, goal-oriented character. — Joshs

That's a much better point. Can an automated system manifest intent and purpose from its initial programming and then from its 'self-programming'?
I argued that computers are our appendages. — Joshs

No, they have much more potential than mere tools.
They are not autonomous embodied-environmental systems but elements of our living system. — Joshs

Not yet. But the evidence you have offered so far, to suggest that an ASI can never become an autonomous, conscious, self-aware form (no matter how much time is involved) in a very similar way, or the same way, as humans currently are (remember, WE have not yet clearly defined or 'understood' what consciousness actually IS), is not very convincing imo. I find the future projections offered by folks like Demis Hassabis, Nick Bostrom et al much more compelling than yours, when it comes to future AGI/ASI technology.
I acknowledge that "possibility", I even imagine it's dramatized at the end of 2001 (re: "nano sapiens / transcension" if you recall). — 180 Proof

Then we agree on that one. :up:
Do you see this as a serious existential risk on the level of climate change or nuclear war? — Marchesk
Do you think it's possible a generalized AI that is cognitively better than all of humanity is on the horizon? — Marchesk
do you think it's risky to be massively investing in technologies today which might lead to it tomorrow? — Marchesk
If you've seen anything about ChatGPT or Bing Chat Search, you know that people have figured out all sorts of ways to get the chat to generate controversial and even dangerous content, since its training data is the internet. You can certainly get it to act like an angry, insulting online person. — Marchesk
Or maybe the real threat is large corporations and governments leveraging these models for their own purposes. — Marchesk
The human sense of touch allows each of us to 'learn' the difference between rough/smooth, sharp/blunt, soft/hard, wet/dry, pain etc.
The attributes of rough/smooth are, in the main, down to the absence or presence of indentations and/or 'bumps' on a surface. There is also the issue of rough/smooth when it comes to textures like hair/fur/feathers, when rough can be simply tangled or clumped hair, for example.
An automated system, using sensors, can be programmed to recognise rough/smooth as well as a human can imo. So if presented with a previously unencountered surface/texture, the auto system could do as well as, if not better than, a human in judging whether or not it is rough or smooth.
The auto system could then store (memorialise) as much information as is available regarding that new surface/texture and access that memory content whenever it 'pattern matches' between a new sighting of the surface/texture (via its sight sensors), and it could confirm its identification via its touch sensors and its memorialised information. This is very similar to how a human deals with rough/smooth. — universeness
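(For illustration, a toy Python sketch of the store-and-match loop described above, assuming made-up touch-sensor features such as bump height, bump density and friction. The surface names, the numbers, the feature set and the distance threshold are all invented for the example, not a real sensor model.)

```python
import math

# Each memorised surface is a feature vector from touch sensors:
# (bump height in mm, bumps per cm^2, friction coefficient). Values invented.
memory = {
    "glass":     (0.01, 0.1, 0.2),
    "sandpaper": (0.30, 80.0, 0.9),
    "fur":       (0.05, 40.0, 0.4),
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(reading, threshold=2.0):
    """Pattern-match a new touch reading against memorised surfaces;
    if nothing is close enough, memorialise it as a new surface."""
    best = min(memory, key=lambda name: distance(memory[name], reading))
    if distance(memory[best], reading) <= threshold:
        return best
    label = "unknown_%d" % len(memory)
    memory[label] = reading  # store the previously unencountered texture
    return label

print(classify((0.29, 79.0, 0.9)))  # close to sandpaper -> "sandpaper"
print(classify((0.50, 5.0, 0.6)))   # no close match -> stored as "unknown_3"
```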
“…traditional neuroscience has tried to map brain organization onto a hierarchical, input-output processing model in which the sensory end is taken as the starting point. Perception is described as proceeding through a series of feedforward or bottom-up processing stages, and top-down influences are equated with back-projections or feedback from higher to lower areas. Freeman aptly describes this view as the "passivist-cognitivist view" of the brain.
From an enactive viewpoint, things look rather different. Brain processes are recursive, reentrant, and self-activating, and do not start or stop anywhere. Instead of treating perception as a later stage of sensation and taking the sensory receptors as the starting point for analysis, the enactive approach treats perception and emotion as dependent aspects of intentional action, and takes the brain's self-generated, endogenous activity as the starting point for neurobiological analysis. This activity arises far from the sensors—in the frontal lobes, limbic system, or temporal and associative cortices—and reflects the organism's overall protentional set—its states of expectancy, preparation, affective tone, attention, and so on. These states are necessarily active at the same time as the sensory inflow (Engel, Fries, and Singer 2001; Varela et al. 2001).
“Whereas a passivist-cognitivist view would describe such states as acting in a top-down manner on sensory processing, from an enactive perspective top down and bottom up are heuristic terms for what in reality is a large-scale network that integrates incoming and endogenous activities on the basis of its own internally established reference points. Hence, from an enactive viewpoint, we need to look to this large-scale dynamic network in order to understand how emotion and intentional action emerge through self-organizing neural activity.”
All this worry about AI when we have much, much more serious problems with regular human I, makes me think such worries are very much misaligned. — Manuel

I agree. Good point.
Also, not much is known about human intelligence, so to speak of the intelligence of something that isn't even biological should make one quite skeptical. — Manuel

I also agree. Only that I believe the term "intelligence" is used here metaphorically, symbolically and/or descriptively rather than literally, in the general sense of the word and based on the little --as you say-- we know about actual intelligence.
[Re intelligence] it's unclear as to what this means. We can describe physics in terms of intelligence too, but then we are being misleading. — Manuel

Well, although intelligence is indeed a quite complex faculty to explain exactly how it works --as are most human faculties-- it can be viewed from a practical aspect. That is, think what we mean when we apply it in real life. E.g. an intelligent student is one who can learn and apply what they know easily. The solving of problems shows intelligence. (This is what IQ tests are based on.) And so on ...
several advances in AI are quite useful. — Manuel

Right. AI is based on algorithms, the purpose of which is to solve problems. And this is very useful for those who are involved in its creation and development, because actual human intelligence increases in the process of creating an "artificial" one. And it is equally useful to those who are using AI products, but from another viewpoint.
According to enactivist embodied approaches, bottom-up/top-down pattern matching is not how humans achieve sensory perception. — Joshs

I agree that dynamic interaction between a human being and the environment it finds itself in will have a major effect on its actions, but so what?
We only recognize objects in the surrounding spatial world as objects by interacting with them. An object is mentally constructed through the ways that its sensory features change as a result of the movement of our eyes, head, body. Furthermore, these coordinations between our movements and sensory feedback are themselves intercorrelated with wider organismic patterns of goal-oriented activity. — Joshs

Ok, so you offer a detailed neuroscientific theory about how a human might decide if a particular object is rough or smooth. I don't see the significance here. We are discussing a FUTURE ASI!
Key to meaning-making in living systems is affectivity and consciousness, which in their most basic form are present in even the simplest organisms due to the integral and holistic nature of their functioning. — Joshs
As long as we are the ones who are creating and programming our machines by basing their functional organization on our understanding of concepts like memory storage, pattern matching and sensory input, their goals cannot be self-generated. — Joshs

I broadly agree! But, as you yourself admit, "As long as we are the ones in control of AI."
Can we ever 'create' a system that is truly autonomous? No, but we can tweak living organic material such as DNA strands enclosed in cellular-like membranes so that they interact with us in ways that are useful to us. Imagine tiny creatures that we can 'talk to'. These would be more like our relationship with domesticated animals than with programmed machines. — Joshs

I completely disagree with your low-level predictions of the future of AI. So do the majority of the experts currently working in the field. Consider this, from 'The Verge' website:
Simulating "pocket" universes. :nerd:Consider assigning a time frame such as another 10,000 years, where will AI be then? — universeness
Do you really think AI will remain a mere human appendage?

If and when strong – self-aware – AGI 'emerges', human intelligence will be obsolete.
Simulating "pocket" universes. — 180 Proof