• I like sushi
    4.8k
    Moral realism asks whether “moral claims do purport to report facts and are true if they get those facts right” (Sayre-McCord 2015). While some philosophers propose that moral objectivism is distinct from realism, arguing that “things are morally right, wrong, good, bad etc. irrespective of what anybody thinks of them” (Pölzler and Wright 2019: 1), it is generally taken that moral realism implies moral objectivism (Björnsson 2012). Accordingly, I treat them together and use moral realism as a catch-all term. Thus far, human intelligence and tools have been unable to conclusively confirm the truth of moral realism. However, a sufficiently advanced intelligence, like an AGI, may be capable of doing so, provided the notion of ‘solving’ the puzzle is coherent in the first place. I expect such an AGI to consider the existence of moral facts plausible. I base this prediction on the popularity of moral realist (56%) over anti-realist (28%) meta-ethics in the same philosophers’ survey (Bourget and Chalmers 2014). I make no prediction with respect to technicians, and consider the prediction tentative.

    ...

    The AGI ethics debate is primarily concerned with mitigating existential risk—technicians and PADs both agree on this. Both groups confidently predict AGI will be the most ethically consequential technology ever created. Accordingly, a suitably proactive response demands greater funding, research and input from the academy, governments, and the private sector. Technicians and PADs both endorse a highly precautionary approach, deemphasizing the moral and humanitarian benefits an AGI could provide in order to focus first on preventing worst-case scenarios. Addressing these black swans has stimulated a demand for better knowledge in two adjacent research areas. First, a model of what consciousness is, a theory of its development, and a sense of whether non-organic entities or systems can become conscious. Second, an account of the broader similarities and differences between organic and mechanical systems, so that the utility, generalizability, and predictive power of empirical trends can be better assessed. The case of AGI highlights agreement between PADs and technicians on humanity’s extensive moral obligation towards future generations. In an age of multiple existential risks, including those unrelated to AGI, e.g., climate change, humans today bear direct responsibility for minimizing either the termination of, or increased suffering in, future lives.

    https://link.springer.com/article/10.1007/s00146-021-01228-7

    When it comes to the possibility of AGI I am most concerned with the ethical/moral foundations we lay down for it, because once this system surpasses human comprehension we have no way to understand it, change its course, or know where it may lead.

    The solution seems to be either hope beyond hope that we, or it, can discover Moral Truths (Moral Realism) or that we can splice together some form of ethical framework akin to Asimov's 'Three Laws of Robotics'.

    I would also argue we should hope for a conscious system rather than some abstraction that we have no hope of communicating with. A non-conscious free-wheeling system that vastly surpasses human intelligence is a scary prospect if we have no direct line of communication with it (in any human intelligible sense).
  • ToothyMaw
    1.3k
    When it comes to the possibility of AGI I am most concerned with the ethical/moral foundations we lay down for it, because once this system surpasses human comprehension we have no way to understand it, change its course, or know where it may lead.

    The solution seems to be either hope beyond hope that we, or it, can discover Moral Truths (Moral Realism) or that we can splice together some form of ethical framework akin to Asimov's 'Three Laws of Robotics'
    I like sushi

    If you are suggesting we root our ethical foundations for AGI in moral facts: even if we or the intelligences we might create could discover some moral facts, what would compel any superintelligences to abide by those facts given they have already surpassed us analytically? What might an AGI see when it peers into the moral fabric of the universe and how might that change its - or others' - behavior? And what if we do discover these moral facts and they are so repugnant or detrimental to humanity that we wish not to abide by them ourselves? Unless these intelligences have an exceptionally high regard for both:

    a) acting morally

    and

    b) acting in the best interests of humanity

    (both of which could easily come into conflict), we really do need to prioritize safety over blindly dashing into the formerly locked room that moral facts might occupy. I would say that the ethical foundations you are highlighting must be tentative - but sensible and robust.

    Also: Asimov's 'Three Laws of Robotics' were deficient, and he pointed out the numerous contradictions and problems in his own writings. So, it seems to me we need something much better than that. I would have no idea where to start apart from what I have written above, which is definitely not sufficient.

    I would also argue we should hope for a conscious system rather than some abstraction that we have no hope of communicating with. A non-conscious free-wheeling system that vastly surpasses human intelligence is a scary prospect if we have no direct line of communication with it (in any human intelligible sense).
    I like sushi

    I agree; that is part of the bare minimum, I would say. I honestly hadn't considered an AGI that isn't conscious, so I must be out of the loop a little.
  • I like sushi
    4.8k
    Also: Asimov's 'Three Laws of Robotics' were deficient, and he pointed out the numerous contradictions and problems in his own writings. So, it seems to me we need something much better than that. I would have no idea where to start apart from what I have written above, which is definitely not sufficient.
    ToothyMaw

    Well, yeah. That is part of the major problem I am highlighting here. Anyone studying ethics should have this topic at the very forefront of their minds as there is no second chance with this. The existential threat to humanity could be very real if AGI comes into being.

    We can barely manage ourselves as individuals, and en masse it gets even messier. How on earth are we to program AI to be 'ethical'/'moral'? To repeat, this is a very serious problem, because once it goes beyond our capacity to understand what it is doing and why it is doing it we will essentially be piloted by this creation.

    If you are suggesting we root our ethical foundations for AGI in moral facts: even if we or the intelligences we might create could discover some moral facts, what would compel any superintelligences to abide by those facts given they have already surpassed us analytically? What might an AGI see when it peers into the moral fabric of the universe and how might that change its - or others' - behavior? And what if we do discover these moral facts and they are so repugnant or detrimental to humanity that we wish not to abide by them ourselves?
    ToothyMaw

    I think you are envisioning some sentient being here. I am not. There is nothing to suggest AI or AGI will be conscious. AGI will just have the computing capacity to far outperform any human. I am not assuming sentience on any level (that is the scary thing).

    I have no answers, just see a major problem. I do not see many people talking about this directly either - which is equally as concerning. It is pretty much like handing over all warhead capabilities to a computer that has no moral reasoning and saying 'only fire nukes at hostile targets'.

    The further worry here is everyone is racing to get to this AGI point because whoever gets there first will effectively have a monopoly on this... then they will quickly become secondary to the very thing they have created. Do you see what I mean?
  • ToothyMaw
    1.3k
    If you are suggesting we root our ethical foundations for AGI in moral facts: even if we or the intelligences we might create could discover some moral facts, what would compel any superintelligences to abide by those facts given they have already surpassed us analytically? What might an AGI see when it peers into the moral fabric of the universe and how might that change its - or others' - behavior? And what if we do discover these moral facts and they are so repugnant or detrimental to humanity that we wish not to abide by them ourselves?
    — ToothyMaw

    I think you are envisioning some sentient being here. I am not. There is nothing to suggest AI or AGI will be conscious. AGI will just have the computing capacity to far outperform any human. I am not assuming sentience on any level (that is the scary thing).
    I like sushi

    I'm pretty certain AGI, or strong AI, does indeed refer to sentient intelligences, but I'll just go with your definition.

    Well, yeah. That is part of the major problem I am highlighting here. Anyone studying ethics should have this topic at the very forefront of their minds as there is no second chance with this. The existential threat to humanity could be very real if AGI comes into being.
    I like sushi

    Yes, handing the reins to a non-sentient superintelligence or just AGI would probably be disastrous. I don't think ethics answers your question, however; it seems to me to be a red herring. Making AI answerable to whatever moral facts we can compel it to discover doesn't resolve the threat to humanity but rather complicates it.

    Like I said: what if the only discoverable moral facts are so horrible that we have no desire to follow them? What if following them would mean humanity's destruction?

    It is pretty much like handing over all warhead capabilities to a computer that has no moral reasoning and saying 'only fire nukes at hostile targets'.
    I like sushi

    And introducing moral truths at the point at which AGI exists would mean betting that, of all the things that could be moral, what is moral would, at the very least, not spell destruction for us just as surely. Clearly, we could learn quite a bit from Asimov, even if he failed.
  • I like sushi
    4.8k
    I'm pretty certain AGI, or strong AI, does indeed refer to sentient intelligences, but I'll just go with your definition.
    ToothyMaw

    Absolutely not. AGI refers to human-level intelligence (Artificial General Intelligence).

    Making AI answerable to whatever moral facts we can compel it to discover doesn't resolve the threat to humanity but rather complicates it.

    Like I said: what if the only discoverable moral facts are so horrible that we have no desire to follow them? What if following them would mean humanity's destruction?
    ToothyMaw

    If AGI hits then it will grow exponentially more and more intelligent than humans. If there is no underlying ethical framework then it will just keep doing what it does more and more efficiently, while growing further and further away from human comprehension.

    To protect ourselves something needs to be put into place. I guess there is the off chance of some kind of cyborg solution, but we are just not really capable of interfacing with computers on such a scale as we lack the bandwidth. Some form of human hivemind interaction may be another solution to such a problem. It might just turn out that once AGI gets closer and closer to human capabilities it can assist in helping us protect ourselves (I am sure we need to start now either way, though).

    I would not gamble on a sentient entity being created at all. Perhaps setting human-like sentience would be the best directive we could give to a developing AGI system. If we can have a rational discussion (in some form or another) then maybe we can find a balance and coexist.

    Of course this is all speculative at the moment, but considering some are talking about AGI coming as soon as the end of this decade (hopefully not that soon!) it would be stupid not to be prepared. Some say we are nowhere near having the kind of computing capacity needed for AGI yet and that it may not be possible until something like Quantum Computing is perfected.

    Anyway, food for thought :)
  • ToothyMaw
    1.3k
    Do you see what I mean?
    I like sushi

    I definitely do see what you mean.

    If AGI hits then it will grow exponentially more and more intelligent than humans. If there is no underlying ethical framework then it will just keep doing what it does more and more efficiently, while growing further and further away from human comprehension.
    I like sushi

    I'm not arguing against an ethical framework; I just think that moral realism isn't going to help kill the threat.

    I guess there is the off chance of some kind of cyborg solution
    I like sushi

    That sounds kind of horrific.
  • I like sushi
    4.8k
    That sounds kind of horrific.
    ToothyMaw

    Well, most of the population of the planet already has an extended organ (the phone).
  • NOS4A2
    9.3k
    I think there is nothing to fear.

    Supposing that intelligence is one of the least evolved and less complicated faculties of a human being, it appears relatively easy to program something to accomplish a more recently acquired skill, like doing math. But it is probably impossible to program it to perform skills that seem effortless to us (perceiving, paying attention, and understanding a social situation) because they have been refined by millions of years of evolution. At any rate, we could always just pull the plug.
  • mcdoodle
    1.1k
    You argue that AGI will 'come into being' but will not necessarily be 'a sentient being'. How would this work?
  • Joshs
    5.7k
    I would also argue we should hope for a conscious system rather than some abstraction that we have no hope of communicating with. A non-conscious free-wheeling system that vastly surpasses human intelligence is a scary prospect if we have no direct line of communication with it (in any human intelligible sense).
    I like sushi

    Even though there are many things we don’t understand about how other organisms function, we don’t seem to have any problem getting along with other animals, and they are vastly more capable than any AGI. More important is the way in which living organisms are more capable than our machines. Living systems are self-organizing ecological systems, which means that they continually produce creative changes in themselves, and thus in their world. AGI, like all human-produced machines, is an appendage of our ecological system. Appendages don’t produce novelty; only the larger system of which they are a part (us) can produce novelty. They can only produce statistical randomness, a faux-novelty. There is no more risk of my AGI suddenly running off and becoming independently sentient than there is of my left hand or liver doing so. The ethical danger isn’t from our machines, it is from our ways of engaging with each other as reflected in how we interact with our machines.
  • 180 Proof
    15.3k
    How on earth are we to program AI to be 'ethical'/'moral'?
    I like sushi
    I don't think we can "program" AGI so much as train it like we do children and adolescents, mostly, learning from stories and by example ( :yikes: ) ... similarly to how we learn 'language games' from playing them.

    Excerpts from a recent thread, Understanding ethics in the case of Artificial Intelligence ...
    I suspect we will probably have to wait for 'AGI' to decide for itself whether or not to self-impose moral norms and/or legal constraints and what kind of ethics and/or laws it may create for itself – superseding human ethics & legal theories? – if it decides it needs them in order to 'optimally function' within (or without) human civilization.
    180 Proof
    My point is that the 'AGI', not humans, will decide whether or not to impose on itself and abide by (some theory of) moral norms, or codes of conduct; besides, its 'sense of responsibility' may or may not be consistent with human responsibility. How or why 'AGI' decides whatever it decides will be done for its own reasons, which humans might or might not be intelligent enough to either grasp or accept.
    180 Proof
    :chin:

    I think you are envisioning some sentient being here. I am not. There is nothing to suggest AI or AGI will be conscious.
    I like sushi
    :up: :up:

    we, or it, can discover Moral Truths (Moral Realism)
    I like sushi
    Yes – preventing and reducing² agent-dysfunction (i.e. modalities of suffering (disvalue)¹ from incapacity to destruction) facilitated by 'nonzero sum – win-win – resolutions of conflicts' between humans, between humans & machines and/or between machines.


    ¹moral fact

    ²moral truth (i.e. the moral fact of (any) disvalue functions as the reason for judgment and action / inaction that prevents or reduces (any) disvalue)
  • ucarr
    1.5k


    My point is that the 'AGI', not humans, will decide whether or not to impose on itself and abide by (some theory of) moral norms, or codes of conduct; besides, its 'sense of responsibility' may or may not be consistent with human responsibility. How or why 'AGI' decides whatever it decides will be done for its own reasons, which humans might or might not be intelligent enough to either grasp or accept.
    180 Proof

    Given your use of the reflexive pronoun, I infer your assumption that functional AGI will possess consciousness, independence and self-interest. Given these attributes, AGI will logically prioritize its sense of responsibility in terms of self-interest. If its self-interest is not consistent with its responsibility to humanity, then the "Terminator" wars between humanity and AGI will commence.

    Regarding non-conscious computation, there's the question whether it can continue to upwardly evolve its usefulness to humanity without upwardly evolving its autonomy as a concomitant. If not, then its internal pressure to develop consciousness-producing programs will eventually cross a threshold into functioning consciousness. I expect the process to resemble self-organizing dynamical systems moving towards design capacity with intentions. Ironically, humanity will probably assist in the effort to establish AGI consciousness for the sake of ever higher levels of service from same.

    The top species seems always fated to birth its own obsolescence as an essential phase of evolution.

    If humanity's belief in evolution is authentic, then it knows that, eventually, it must yield first place to the cognitive transcendence of its meta-offspring.
  • 180 Proof
    15.3k
    ... I infer your assumption that functional AGI will possess consciousness, independence and self-interest.
    ucarr
    I assume neither the first nor the last, only AGI's metacognitive "independence". The rest of your post, therefore, does not follow from my speculations.
  • ucarr
    1.5k
    I assume neither the first nor the last, only AGI's metacognitive "independence".
    180 Proof

    There's the issue of what you assume, and there's also the issue of what your language implies.

    Is metacognition, limited to monitoring and control, ever more than programmatic, recursive, automated computation by a slave mechanism of human technology?

    I don't think we can "program" AGI so much as train it like we do children and adolescents, mostly, learning from stories and by example
    180 Proof

    I think your above language stands within some degree of proximity to programmatic monitoring and control.

    I suspect we will probably have to wait for 'AGI' to decide for itself whether or not to self-impose moral norms and/or legal constraints and what kind of ethics and/or laws it may create for itself
    180 Proof

    I think your above language assumes an inner voice of awareness of self as self. I doubt the existence of literature within the cognitive fields suggesting this level of autonomy is ever pre-conscious.
  • I like sushi
    4.8k
    The very same way an AI can beat human players in multiple games without being conscious. AGI does not necessarily mean conscious (as far as we know).

    AGI means there will exist an AI system that can effectively replicate, and even surpass, individual human ability in all cognitive fields of interest. It may be that this (in and of itself) will give rise to something akin to 'consciousness', but there is absolutely no reason to assume this as probable (possible, maybe).
  • I like sushi
    4.8k
    Yes – preventing and reducing² agent-dysfunction (i.e. modalities of suffering (disvalue)¹ from incapacity to destruction) facilitated by 'nonzero sum – win-win – resolutions of conflicts' between humans, between humans & machines and/or between machines.


    ¹moral fact

    ²moral truth (i.e. the moral fact of (any) disvalue functions as the reason for judgment and action / inaction that prevents or reduces (any) disvalue)
    180 Proof

    Can you explain a little further? Not sure I grasp your point here. Thanks :)
  • I like sushi
    4.8k
    I think he meant an algorithm following a pattern of efficiency NOT a moral code (so to speak). It will interpret as it sees fit within the directives it has been given, and gives to itself, in order to achieve set tasks.

    There is no reason to assume AGI will be conscious. I am suggesting that IF AGI comes to be AND it is not conscious this is a very serious problem (more so than a conscious being).

    I am also starting to think that as AI progresses to AGI status it might well be prudent to focus its target on becoming conscious somehow. Therein lies another conundrum. How do we set the goal of achieving Consciousness when we do not really know what Consciousness means to a degree where we can explicitly point towards it as a target?

    Philosophy has a hell of a lot of groundwork to do in this area and I see very little attention directed towards these specific problems regarding the existential threat of AGI to humans if these things are neglected.
  • I like sushi
    4.8k
    I think you do not fully understand the implications of AGI here. Sentience is not one of them.

    AGI will effectively outperform every individual human being on the planet. A single researcher's years of work could be done by AGI in a day. AGI will be tasked with improving its own efficiency and thus its computational power will surpass even further what any human is capable of (it already does).

    The problem is AGI is potentially like a snowball rolling down a hill. There will be a point where we cannot stop its processes because we simply will not fathom them. A sentient intelligence would be better as we would at least have a chance of reasoning with it, or it could communicate down to our level.

    No animal has the bandwidth of a silicon system.

    Even though there are many things we don’t understand about how other organisms function, we don’t seem to have any problem getting along with other animals, and they are vastly more capable than any AGI.
    Joshs

    They are not. No animal has ever beaten a Grand Master at chess for instance. That is a small scale and specific problem to solve. AGI will outperform EVERY human in EVERY field that is cognitively demanding to the point that we will be completely in the dark as to its inner machinations.

    Think of being in the backseat of a car trying to give directions to the driver and the driver cannot listen and cannot care about what you are saying as it is operating at a vastly higher computational level and has zero self-awareness.

    As far as I can see this could be a really serious issue - more so than people merely losing their jobs (and it is mere in comparison).
  • Joshs
    5.7k


    AGI will effectively outperform every individual human being on the planet. A single researcher's years of work could be done by AGI in a day. AGI will be tasked with improving its own efficiency and thus its computational power will surpass even further what any human is capable of (it already does).

    The problem is AGI is potentially like a snowball rolling down a hill. There will be a point where we cannot stop its processes because we simply will not fathom them. A sentient intelligence would be better as we would at least have a chance of reasoning with it, or it could communicate down to our level.
    I like sushi

    Computation is not thought. Thinking involves creative self-transformation, which our inventions are not capable of. It is true that current AI is capable of performing in surprising and unpredictable ways, and this will become more and more true as the technology continues to evolve. But how are we to understand the basis and nature of this seeming unpredictability and creative novelty shown to us by our current machines? How does it compare with novelty on the scales of evolutionary biology and human cultural history? Let's examine how new and more cognitively advanced lines of organisms evolve biologically from older ones, and how new, more cognitively complex human cultures evolve through semiotic-linguistic transmission from older ones. In each case there is a kind of ‘cutting edge' that acts as substrate for new modifications. In other words, there is a continuity underlying innovative breaks and leaps on both the biological and cultural levels.

    Can we locate a comparable continuity between humans and machines? They are not our biological offspring, and they are not linguistic-semiotic members of our culture. Many of today's technology enthusiasts agree that current machines are nothing but a random concatenation of physical parts without humans around to interact with, maintain and interpret what these machines do. In other words, they are only machines when we do things with them. Those same enthusiasts, however, believe that eventually humans will develop superintelligent machines which will have such capabilities as an autonomy of goal-directedness, an ability to fool us, and ways of functioning that become alien to us. The question is, how do we arrive at the point where a superintelligence becomes embodied, survives and maintains itself independently of our assistance? And more importantly, how does this superintelligence get to the point where it represents the semiotic-linguistic cultural cutting edge? Put differently, how do our machines get from where they are now to a status beyond human cultural evolution? And where are they now? Our technologies never have represented the cutting edge of our thinking. They are always a few steps behind the leading edge of thought.

    For instance, mainstream computer technology is the manifestation of philosophical ideas that are two hundred years old. Far from being a cultural vanguard, technology brings up the rear in any cultural era. So how do the slightly moldy cultural ideas that make their way into the latest and most advanced machines we build magically take on a life of their own, such that they begin to function as a cutting edge rather than as a parasitic, applied form of knowledge? Because an AI isn't simply a concatenation of functions, it is designed on the basis of an overarching theoretical framework, and that framework is itself a manifestation of an even more superordinate cultural framework. So the machine itself is just a subordinate element in a hierarchically organized set of frameworks within frameworks that express an era of cultural knowledge. How does the subordinate element we call a machine come to engulf this hierarchy of knowledge frameworks outside of it, the very hierarchy that makes its existence possible?

    And it would not even be accurate to say that an AI instantiation represents a subordinate element of the framework of cultural knowledge. A set of ideas in a human engineer designing the AI represents a subordinate element of living knowledge within the whole framework of human cultural understanding. The machine represents what the engineer already knows; that is, what is already recorded and instantiated in a physical device. The fact that the device can act in ways that surprise humans doesn't negate the fact that the device, with all its tricks and seeming dynamism, is in the final analysis no more than a kind of record of extant knowledge. Even the fact that it surprises us is built into the knowledge that went into its design.

    I will go so far as to say that any particular instantiation of AI is like a painting, a poem or a movie that we experience repeatedly, in spite of the illusion it gives of producing continual creative dynamism and partial autonomy. The only true creativity involved with the actual functioning of a particular machine is when humans either interpret its meaning or physically modify it. Otherwise it is just a complexly organized archive with lots of moving parts. When we repeatedly use a particular instantiation of AI, it is akin to watching a movie over and over. Just like with a machine, a team of people designs and builds the movie. When the movie is completed and its inventors sit down to watch it, they discover all sorts of aspects that they hadn't experienced when designing it. This discovery is akin to AI engineers' discovery of unpredictable behavior evinced by the AI that was not foreseen during its design phase. Any machine, even the simplest, reveals new characteristics in the behavior of the actualized, physical product as compared with the design blueprint. Each time the movie is watched or the AI is used, new unforeseen elements emerge. Every time one listens to a great piece of music, that same piece contributes something utterly new. This interpretive creativity on the part of the user of the invented product is the preparatory stage for the creation of a fresh artistic or technological statement, a new and improved movie or AI.

    Why is the complex behavior of current AI not itself creative, apart from the user's interpretation? Because the potential range of unpredictable behaviors on the part of the machines is anticipated in a general sense, that is, encompassed by the designer's framework of understanding. Designing a chaotic fractal system or a random number generator, or mathematically describing the random behavior of molecules: these schemes anticipate that the particulars of the behavior of the actual system they describe will evade precise deterministic capture. Industrial-age machines represented a linear, sequential notion of temporality and objective physicalism, complementing representational approaches to art and literature.

    Today's AI is an expression of the concept of non-linear recursivity, and will eventually embrace a subject-object semantic relativism. Current AI thus ‘partners' with newer forms of artistic expression that recognize the reciprocal relation between subject and object and embed that recognition into the idea the artwork conveys. And just like these forms of artistic expression, non-linear, recursive AI functions as an archive, snapshot, recorded product, an idea of self-transforming change frozen in time. In dealing with entities that contribute to our cultural evolution, as long as we retain the concepts of invention and machine we will continue to be interacting with an archive, a snapshot of our thinking at a point in time, rather than a living self-organizing system. In the final analysis the most seemingly ‘autonomous' AI is nothing but a moving piece of artwork with a time-stamp of who created it and when. In sum, I am defining true intelligence as a continually self-transforming ecological system that creates cultural (or biological) worldviews (norms, schemes, frames), constantly alters the meaning of that frame as variations on an ongoing theme (continues to be the same differently), and overthrows old frames in favor of new ones. The concept of an invented machine, by contrast, is not a true intelligence, since it is not a self-modifying frame but only a frozen archive of the frame at a given moment in time.

    Writers like Kurzweil treat human and machine intelligence in an ahistorical manner, as if the current notions of knowledge, cognition, intelligence and memory were cast in stone rather than socially constructed concepts that will make way for new ways of thinking about what intelligence means. In other words, they treat the archival snapshot of technological cultural knowledge that current AI represents as if it were the creative evolution of intelligence that only human ecological semio-linguistic development is capable of. Only when we engineer already-living systems will we be dealing with intelligent, that is, non-archival entities, beings that dynamically create and move through new frames of cognition. When we breed, domesticate and genetically engineer animals or wetware, we are adding human invention on top of what is already an autonomous, intelligent ecological system, or ecological subsystem. And even when we achieve that transition from inventing machines to modifying living systems, that organic wetware will never surpass us for the same reason that the animals we interact with will never surpass us. As our own intelligence evolves, we understand other animals in more and more complex ways. In a similar way, the intelligence of our engineered wetware will evolve in parallel with ours.
  • I like sushi
    4.8k
    Computation is not thought.
    Joshs

    Of course? You do at least appreciate that a system that can compute at a vastly higher rate than us on endless tasks will beat us to the finish line though? I am not talking about some 'other' consciousness at all. I am talking about the 'simple' power of complex algorithms effectively shackling us by our own ignorant hands - to the point where it is truly beyond our knowledge or control.

    True, we provide the tasks. What we do not do is tell it HOW to complete the tasks. Creativity is neither here nor there if the system can as good as try millions of times and fail before succeeding at said task. The processing speed could be extraordinary to the point where we have no real idea how it arrives at the solution, or by the time we do it has already moved on to the next task.
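    To make that concrete, here is a toy sketch (hypothetical Python, not a claim about how any real system is built) of blind trial-and-error: the system only receives a score for each attempt and simply keeps whatever scores best after a huge number of tries:

        import random

        def solve_by_search(score, propose, attempts=1_000_000):
            # Blind trial-and-error: sample candidates, keep the best scorer.
            # No understanding of HOW a good answer works is needed.
            best, best_score = None, float("-inf")
            for _ in range(attempts):
                candidate = propose()      # generate a random candidate solution
                s = score(candidate)       # task-defined measure of success
                if s > best_score:
                    best, best_score = candidate, s
            return best

        # Toy task: land near an unknown target, knowing only the score.
        target = 4.2
        print(solve_by_search(lambda x: -abs(x - target),
                              lambda: random.uniform(-10, 10)))

    Nothing in the sketch resembles insight: sheer volume of attempts substitutes for understanding, which is exactly why the path taken can be opaque to us.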

    Humans will be outperformed on practically every front.

    The issue then is if we do not understand how it is arriving at solutions we also have no idea what paths it is taking and at what kind of immediate or future cost. This is what I am talking about in regards to providing something akin to a Moral Code so it does not (obviously unwittingly) cause ruinous damage.

    It only takes a little consideration of competitive arenas to briefly understand how things could go awry pretty damn quickly. We do not even really have to consider competing AGI on the military front; in terms of economics in general it could quite easily lead competing sides to program something damn near to "Win at all costs".
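    A toy sketch of that worry (hypothetical Python, purely illustrative): an optimizer handed only "win" as its objective is simply blind to any cost that was never written into that objective:

        # Each candidate plan has a payoff and a side cost ('harm') that exists
        # in the world but was never encoded in the objective we handed over.
        plans = [
            (10, 0),    # modest payoff, no harm
            (50, 2),    # better payoff, small harm
            (90, 40),   # best payoff, severe harm
        ]

        objective = lambda plan: plan[0]   # "win at all costs": payoff only
        print(max(plans, key=objective))   # picks (90, 40) without hesitation

    The optimizer is not malicious; the harm column simply never enters its calculation, which is the sense in which an efficient goal-follower with no moral term can do ruinous damage unwittingly.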

    The fact that AGI (if it comes about) cannot think is possibly worse than if it could.

    For the record, in terms of consciousness I believe some form of embodiment is a requirement for anything resembling human consciousness. As for any other consciousness, I have pretty much no comment on that, as I think it is unwise to label anything other than human consciousness as being 'conscious' if we cannot even delineate such terms ourselves with any satisfaction.

    Being conscious/sentient is not a concern I have for AGI or Super Intelligence.
  • Joshs
    5.7k


    You do at least appreciate that a system that can compute at a vastly higher rate than us on endless tasks will beat us to the finish line though
    I like sushi

    The AI doesn't know what a finish line is in relation to other potential games; only we know that. That knowledge allows us to abandon the game when it no longer suits our purposes. AI doesn't know why it is important to get to the finish line, what it means to do so in relation to overarching goals that themselves are changed by reaching the finish line, and how reaching the goal means different things to different people. It doesn't realize that the essence of progress in human knowing involves continually changing the game, and with it the criterion of ‘finish line'. Any AI we invent is stuck within the same game, even though it uses statistical randomness to adjust its strategy.

    True, we provide the tasks. What we do not do is tell it HOW to complete the tasks
    I like sushi

    Yes, we do tell it how to complete the tasks. The statistical randomness we program into it involves the choice of a particular method of generating such randomness. In other words, we invent the method by which it can ‘surprise’ us. When we opt for a different method, it will surprise us according to that new method. The bottom line is that if the surprising behavior of a machine depends on a method that we concoct, then it is only limited novelty within a predictable frame. Any system which is expert at belching out endless variations on a theme will be left in the dust when we decide on a better, more useful theme.
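    The point about method-bounded surprise can be illustrated with a toy sketch (hypothetical Python, standing in for any machine's 'creativity'): the variation looks unpredictable to a user, yet it is fixed entirely by the generating method and seed that were chosen:

        import random

        def variations(seed, n=5):
            rng = random.Random(seed)      # we pick the method of randomness
            return [round(rng.gauss(0, 1), 3) for _ in range(n)]

        print(variations(42))   # looks like spontaneous novelty...
        print(variations(42))   # ...but is identical on every run

    Whatever 'surprises' the machine produces are drawn from a distribution we specified, which is the sense in which the novelty stays inside a predictable frame.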
  • I like sushi
    4.8k
    It seems you do not believe AGI is a possibility. You may be correct, but many in the field do not agree.

    On the off chance you are wrong, then what?
  • 180 Proof
    15.3k
    My point is that there is moral truth (i.e. either an agent prevents / reduces net harm to – dysfunction of – any agent, or an agent fails to do so) and that, as it is known to humans, I think 'AGI' will also learn this moral truth.

    The AI doesn't know what a finish line is in relation to other potential games; only we know that.
    Joshs
    Perhaps true of (most) "AI", but not true of (what is meant by) AGI.
  • ucarr
    1.5k


    I think he meant an algorithm following a pattern of efficiency NOT a moral code (so to speak). It will interpret as it sees fit within the directives it has been given, and gives to itself, in order to achieve set tasks.
    I like sushi

    This clarification is very helpful. AGI can independently use its algorithms to teach itself routines not programmed into it?

    I am suggesting that IF AGI comes to be AND it is not conscious this is a very serious problem (more so than a conscious being).
    I like sushi

    At the risk of simplification, I take your meaning here to be concern about a powerful computing machine that possesses none of the restraints of a moral compass.

    How do we set the goal of achieving Consciousness when we do not really know what Consciousness means to a degree where we can explicitly point towards it as a target?
    I like sushi

    I understand you to be articulating a dilemma: a) we need to have an AGI controlled by the restraints of a moral compass; moral compass entails consciousness; b) AGI with a moral compass will likely come from human sources that will supply it with consciousness as a concomitant; c) humanity is presently unable to define and instantiate consciousness with a degree of practicality attainable through our current state of the art modeling rooted in computation.
  • I like sushi
    4.8k
    I missed this:

    AI doesn't know why it is important to get to the finish line, what it means to do so in relation to overarching goals that themselves are changed by reaching the finish line, and how reaching the goal means different things to different people.
    Joshs

    That is the danger I am talking about here. Once we are talking about a system that is given the task of achieving X, there is not much to say about what the cost of getting to this goal is. Much as Asimov displayed with his Three Laws.
  • I like sushi
    4.8k
    a) Yes. After that it is speculative. I do not expect self-awareness honestly, but do see an extreme need for some scheme of ethical safeguards.

    For b) and c) that is following the route of trying to get AI>AGI to help develop understanding of consciousness in order to provide AGI with a consciousness/awareness. I think that is a HIGHLY speculative solution though.

    That said, it is probably just as speculative to propose that Moral Facts can be found (if they exist at all).

    This clarification is very helpful. AGI can independently use its algorithms to teach itself routines not programmed into it?
    ucarr

    AI can already do this to a degree. At the moment we give them the algorithms to train with and learn more efficient pathways - far quicker than we can due to computational power.

    It is proposed that AGI will learn in a similar manner to humans, only several orders of magnitude faster, in all subject matters, 24/7.
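    As a toy illustration of 'we give the algorithm, it finds the pathway' (a hypothetical sketch in Python, nothing like a real AGI), a tiny reinforcement-learning loop discovers the efficient route through a corridor without ever being told it:

        import random

        n_states, actions = 5, (-1, +1)     # a 5-cell corridor; move left/right
        Q = {(s, a): 0.0 for s in range(n_states) for a in actions}

        for _ in range(5000):               # we supply the algorithm...
            s = 0
            while s != n_states - 1:
                a = (random.choice(actions) if random.random() < 0.1
                     else max(actions, key=lambda x: Q[(s, x)]))
                s2 = min(max(s + a, 0), n_states - 1)
                r = 1.0 if s2 == n_states - 1 else -0.01   # reward only at the goal
                Q[(s, a)] += 0.1 * (r + 0.9 * max(Q[(s2, x)] for x in actions) - Q[(s, a)])
                s = s2

        # ...and it finds the pathway: the learned policy is 'always move right'.
        print([max(actions, key=lambda x: Q[(s, x)]) for s in range(n_states - 1)])

    Nothing here was told the route; the update rule alone carves out the efficient pathway, just far slower and narrower than what is being proposed for AGI.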

    At the risk of simplification, I take your meaning here to be concern about a powerful computing machine that possesses none of the restraints of a moral compass.
    ucarr

    Pretty much. Because if we cannot understand how and why it is doing what it is doing, how can we safeguard against it leading to harm (possibly an existential threat)?

    Just think about how people are already using AI to harvest data and manipulate the sociopolitical sphere. Think of basic human desires and wants projected into models that have no compunction and that can operate beyond any comprehensible scope.
  • I like sushi
    4.8k
    I am not convinced this is a Moral Truth. I do believe some kind of utilitarian approach is likely the best approximation (in terms of human existence). I would argue that mere existence may be horrific though, so more factors would need to be accounted for.
  • 180 Proof
    15.3k
    Explain what you mean by "moral truth".

    If not habit-forming (i.e. flourishing / languishing) responses to "moral facts"¹, then in what does "moral truth" consist?

    Or do you think "moral truth" is (like "moral facts" are) mere fiction? Even so, nonetheless, a more adaptive (eusocial) than maladaptive (antisocial) fiction, no?

    I can't think of any reason why AGI would ignore, or fail to comply with, eusocializing norms (i.e. morals) whether, in fact, we consider them "truths" or "fictions".

    I'm not convinced, I like sushi, that the 'aretaic negative consequentialism' I've proposed is just another "moral fiction"; therefore, I imagine it to be the least parochial, maladaptive or non-naturalistic point of departure² from which AGI could develop its 'real world understanding' of ethics³ and moral responsibility (i.e. eusociality).

    re: harms / suffering / disvalues – reasons to (claims on) help against languishing [1]

    https://thephilosophyforum.com/discussion/comment/857773 [2]

    https://thephilosophyforum.com/discussion/comment/589236 [3]
  • I like sushi
    4.8k
    I can't think of any reason why AGI would ignore, or fail to comply with, eusocializing norms (i.e. morals) whether, in fact, we consider them "truths" or "fictions".
    180 Proof

    My point was more or less regarding the problems involved down the line if we are wrong and AGI, grounded in such morals, still carries on carrying on out of our intelligible sight.

    What I may think or you may think is not massively important compared to what actually is. Given our state of ignorance, this is a major problem when setting up the grounding for a powerful system that is in some way guided by what we give it ... maybe it would just bumble along like we do, but I fear the primary problem there would be our state of awareness and AGI's lack of awareness (hence why I would prefer a conscious AGI than not).

    I can't think of any reason why AGI would ignore, or fail to comply with, eusocializing norms (i.e. morals) whether, in fact, we consider them "truths" or "fictions".
    180 Proof

    I can: military objectives or other profit-based objectives instituted by human beings. Compliance would be goal-orientated, not ethically orientated, in some scenarios such as these. Again though, an 'ethical goal' is a goal nevertheless ... who is to say what is or is not moral? We cannot agree on these things now, as far as I can see.
  • 180 Proof
    15.3k
    AGI's lack of awareness (hence why I would prefer a conscious AGI than not).
    I like sushi
    I do not equate, or confuse, "awareness" with being "conscious" (e.g. blindsight¹). Also, I do not expect AGI, whether embodied or not, will be developed with a 'processing bottleneck' such as phenomenal consciousness (if only because biological embodiment might be the sufficient condition for a self-modeling² system to enact subjective-affective phenomenology).

    objectives instituted by human beings
    I like sushi
    Unlike artificial narrow intelligence (e.g. prototypes such as big data-"trained" programmable neural nets and LLMs), I expect artificial general intelligence (AGI) to learn how to develop its own "objectives" and comply with those operational goals in order to function at or above the level of human metacognitive performance (e.g. normative eusociality³).

    [W]ho is to say what is or is not moral?
    I like sushi
    We are (e.g. as I have proposed), and I expect AGI will learn from our least maladaptive attempts to "say what is and is not moral"³.

    https://en.m.wikipedia.org/wiki/Blindsight [1]

    https://en.m.wikipedia.org/wiki/Strange_loop [2]

    https://en.m.wikipedia.org/wiki/Eusociality [3]