• Antony Nickles
    1.1k
    In moral philosophy, historically there was a desire to externalize ethical behavior to make it determined, like a law—even if just a law I give myself (with Kant). If you follow the law, you are good, even if you just try for something good. These frameworks want the rules to be clear, so that judgment can be certain (and also not involve me). The fact that sometimes we are not certain what the rules will be or how they apply or what we do when there are none, is cause for most to view the situation as impossible.

    Now I’m not an AI expert, but we can’t seem to create rules or goals because AI is too unpredictable (and we want rules to tell us what will be right). And there is also much comparison to humans. But these moral frameworks imagine something special about us because the fulcrum of their judgment is choice (do I/did they: follow the rule? or go against it?). So the discussion of whether AI is special like us is actually a figment of the projection of our desire for ethical clarity.

    More modern descriptions of morality focus on responsibility. We may not know what to do, but I am nevertheless answerable after it is done (even without rules). So then what ethics regarding AI turns on, is identity. The evil here is anonymity, not uncertainty. Thus the importance of chaining the outcome to an author. And we are not just judging outcomes, but also checking ourselves (a la Kant) because it would be tied to me, whether already determined to be bad, or yet to be justified. If, however, mythically put, god no longer sees us, we have no moral realm at all.
  • ToothyMaw
    1.3k
    In moral philosophy, historically there was a desire to externalize ethical behavior to make it determined, like a law—even if just a law I give myself (with Kant). If you follow the law, you are good, even if you just try for something good. These frameworks want the rules to be clear, so that judgment can be certain
    Antony Nickles

    I don't quite agree that many moral philosophers would consider you moral for following just any self-imposed rule, if you are saying that. Otherwise sounds good to me.

    The fact that sometimes we are not certain what the rules will be or how they apply or what we do when there are none, is cause for most to view the situation as impossible.
    Antony Nickles

    Yes, I think a lot of people look at it that way, too, even if it is kind of short-sighted.

    Now I’m not an AI expert, but we can’t seem to create rules or goals because AI is too unpredictable (and we want rules to tell us what will be right). And there is also much comparison to humans. But these moral frameworks imagine something special about us because the fulcrum of their judgment is choice (did I follow the rule? or go against it?). So the discussion of whether AI is special like us is actually a figment of the projection of our desire for ethical clarity.

    More modern descriptions of morality focus on responsibility. We may not know what to do, but I am nevertheless answerable after it is done (even without rules). So then what ethics regarding AI turns on, is identity.
    Antony Nickles

    Okay, I think there is a difference between reining in an AI via necessary programming and ascribing moral responsibility to its actions, or whatever kind of moral angle is taken. I don't think the morality issue is so unpredictable as the job of keeping powerful AIs from inadvertently causing whatever doom scenarios the experts claim could befall us.

    Doesn't it matter though if the AI can choose between effecting a moral outcome or a less moral outcome, like one of us? I mean, if it can do that, shouldn't we treat it like a human, if we must follow through with holding AIs responsible? I mean otherwise we can just change the programming so that it chooses the moral outcome next time, right? Its identity is that which we create.

    And wouldn't we be almost entirely responsible for creating the AI that effected a bad outcome, anyways? It seems to me that we are the ones who need to be put in check morally, not so much the AIs we create. That isn't to say we shouldn't program it to be moral, but rather that we should exercise caution for the sake of everyone's wellbeing.

    we are not just judging outcomes, but also checking ourselves (a la Kant) because it would be tied to me, whether already determined to be bad, or yet to be justified. If, however, mythically put, god no longer sees us, we have no moral realm at all.
    Antony Nickles

    I'm not totally sure what this means. Could you maybe explain?
  • Antony Nickles
    1.1k


    I don't quite agree that many moral philosophers would consider you moral for following just any self-imposed rule, if you are saying that.
    ToothyMaw

    I was trying to allude to Kant’s sense of duty and moral imperative, with my point being that, even in that case, the desire is for impersonal rationality (certainty, generality, etc.).

    Doesn't it matter though if the AI can choose between effecting a moral outcome or a less moral outcome, like one of us? …shouldn't we treat it like a human, if we must follow through with holding AIs responsible
    ToothyMaw

    I may not have been clear that the “identity” I take as necessary to establish and maintain is not the identity of the AI, to make the AI responsible, but to tie a particular human to the outcomes of the AI.

    And, while it may be that AI could curtail its actions to already-established law (as AI can only use existing knowledge), only a human can regulate based on how they might be judged in a novel situation (as I pointed out, the only truly “moral” situation is when we are at a loss as to what to do, or else we are just abiding by rules, or not). In other words, the threat of censure is part of conscience (even if not ensuring normativity), as we, in a sense, ask ourselves: who do I want to be? (In this sense of: be seen as). Anonymity diminishes cultural pressure to whatever remains of it as a voice inside me, with the knowledge that I may never be judged perhaps silencing that entirely.

    we can just change the programming so that it chooses the moral outcome next time, right? Its identity is that which we create.
    ToothyMaw

    But the distinct actual terror of AI is that our knowledge cannot get in front of it to curtail it, to predict outcomes, because it can create capabilities and goals for itself—it is not limited to what we program it to do. It’s not: build a rocket. It’s: design a better rocket. And it can adopt means we don’t anticipate and determine an end we neither control nor could foresee.

    It seems to me that we are the ones who need to be put in check morally, not so much the AIs we create. That isn't to say we shouldn't program it to be moral, but rather that we should exercise caution for the sake of everyone's wellbeing.
    ToothyMaw

    I agree; my point is that, in the way morality works, tying the AI and its outcomes to who let it loose is the best way to put us “in check morally”—like a serial number on a gun which can tell us who shot someone. My last paragraph is just to say that if we have anonymity, we don’t have any incentive to check ourselves, as, say, in the example of words, I can’t be held to what I say, judged or revealed in having said it.
  • ToothyMaw
    1.3k


    I appear to have misunderstood a lot of what you posted. Honestly, I agree with almost everything you are saying. Not even much room for discussion. You appear to have thought this out. Maybe I'll try to cobble together some critique or something later on today.

    Good post. :up:
  • ToothyMaw
    1.3k
    I agree; my point is that, in the way morality works, tying the AI and its outcomes to who let it loose is the best way to put us “in check morally”—like a serial number on a gun which can tell us who shot someone.
    Antony Nickles

    I agree - we need to crush whoever initiates the robot apocalypse. Unfortunately, not every group of people working towards developing AIs at breakneck speeds would agree, or would even care, if we say we will lock them up in prison for doing so. If prisons would even exist at that point.

    So, even if we do come up with a punishment so severe that it just scares the shit out of everyone, how do we actually enforce it? We could, I guess, establish a global commission for investigating the misuse of AI, but that would require significant cooperation between disparate groups.

    Maybe it would take a horrible blunder to scare everyone into setting up such a commission? Maybe people need to be exposed to the abject horror that can accompany the misuse of AI? Although you might not guess it, I actually have a grasp on just how bad the misuse of AI can be, and it can be bad. Like, really bad. In ways you might not expect. But you already made that point, and we are in agreement.

    Honestly the only thing I have to say about your theory of accountability is that it just might be too little too late; so what if the crime is punished? It doesn't help the people harmed in any way. That isn't to say we shouldn't punish people - we should - but is that deterrence really going to be substantial? Can we really get the cooperation we need before an extinction level event?

    only a human can regulate based on how they might be judged in a novel situation
    Antony Nickles

    I'm sorry, are you saying here that an AI can't predict how it might be judged in a novel situation? I think it can if there is a compunction to consider how it might be seen by humans, and that it could be programmed to possess such a quality. That it can only consider novel situations based on already established laws is no different from how a human operates. A human just has a drive to conform, or to make sense of the world in such a way as to justify certain pre-existing biases, unlike an AI. I don't see anything preventing an AI from wanting to avoid the internal threats to its current existence that come from acting poorly in the kind of situation you consider truly moral.

    It might be counter-intuitive to allow an AI a desire to survive so as to avoid actions taken due to a lack of a fear of censure, but that could solve the problem of it acting counter to our interests in situations considered truly moral. We would just need to form some sort of agreement on what kind of consequences are concomitant with an action considered to be a threat to humanity's existence or human wellbeing at large.

    So, we would find ourselves pivoting to consequentialism to fill in for the cases in which our rules fail us. Whether or not such a thing is rigorously, philosophically defensible doesn't matter, because we are talking about preventing horrific outcomes for humanity that we would all almost certainly agree should be avoided.

    But the distinct actual terror of AI is that our knowledge cannot get in front of it to curtail it, to predict outcomes, because it can create capabilities and goals for itself—it is not limited to what we program it to do. It’s not: build a rocket. It’s: design a better rocket. And it can adopt means we don’t anticipate and determine an end we neither control nor could foresee.
    Antony Nickles

    Yes, this is inevitable if we let things get out of hand. Maybe allowing the AI some more human-like qualities might actually cause it to be more predictable in the ways that matter? I mean a cynical, jaded mostly-human with weaker superintelligence is better than a superintelligence that is willing to decimate all the rainforests on earth to produce as much paper as possible because we gave it a poorly thought-out command. I mean, at least one can be reasoned with and talked out of most things, provided we try hard enough (probably).

    It's kind of like that show 'House'. The brilliant, fucked up "doctor" with very little in terms of scruples or emotional intelligence might be a dick, but he more or less aligns with the goals of everyone in the hospital - even if it is difficult to anticipate what he might do in any given situation. But he's still a human in the ways that make his goals align. Or, at least, that is what I have gleaned from what exposure I've had to the show, but the point stands even if that isn't how the show is.

    Alternatively, you have ChatGPT999 walking around dispensing as many diagnoses as possible (perhaps even with inhuman accuracy), because we commanded it to do so, until it decides to reference questionable information from the internet because its medical databases ran dry, and somebody gets killed.

    I get that the difference between a real superintelligence and a person and the difference between House and a regular person are not the same, but I still think the example holds because the common ground between such a super-intelligence and humanity would help keep the intelligence from effecting unforeseeable, catastrophic ends.
  • 180 Proof
    15.3k
    I suspect we will probably have to wait for 'AGI' to decide for itself whether or not to self-impose moral norms and/or legal constraints and what kind of ethics and/or laws it may create for itself – superseding human ethics & legal theories? – if it decides it needs them in order to 'optimally function' within (or without) human civilization.
  • Arne
    815
    In moral philosophy, historically there was a desire to externalize ethical behavior to make it determined, like a law—even if just a law I give myself (with Kant). . .
    Now I’m not an AI expert, but we can’t seem to create rules or goals because AI is too unpredictable (and we want rules to tell us what will be right).
    Antony Nickles

    You go from externalizing ethical behaviour via rules as a desire historically to "we want the rules to tell us what will be right". That is a huge leap and ignores the equally historical rejection of the notion that morality is reducible to a set of rules.

    I don't want the rules to tell me "what will be right". Do you?

    Good OP.
  • Antony Nickles
    1.1k
    That [ AI ] can only consider novel situations based on already established laws is no different from how a human operates.
    ToothyMaw

    It seems there are at least two important differences. The first is epistemological (and ontological I guess for the AI). AI is limited to what already is known, yes, but it is also limited to knowledge, as in the type of information that knowledge is. Putting aside that it sucks at knowing the criteria for judgment—which are different, and of different types, for almost every thing—it is stuck outside of a history of curiosity, mistake, triumph, reprimand, habit, experiences, etc. and shared culture and practices that shape and inhabit all of us without the need of being known. Even if it is able to be canonized as knowledge (because that is philosophy’s job: explicating intuition), there is no “telling” anyone that kind of wisdom (how basic do you have to get to completely explain an apology, and all the attendant acts, much less: leading someone on). Also, even if AI could gather all the data of a changing present, it would still miss much of the world we instinctively take in based on training and history, much less biology. And I know we want moral decisions to be made on important-seeming things like rules or truth or right or knowledge, but we do things for reasons that don’t have that same pedigree. This doesn’t make them immoral, or self-interested (but not, not those), it’s just that redemption or self-aggrandizement are complicated enough, much less just fear or something done without thinking.

    I don't see anything preventing an AI from wanting to avoid the internal threats to its current existence that come from acting poorly in the kind of situation you consider truly moral.
    ToothyMaw

    But if it is a truly moral situation, we do not know what to do and no one has more authority to say what is right, so we are without the (predetermined, certain) means to judge what “acting poorly” in this situation would be. But AI cannot hold itself up as an example in stepping forward into the unknown in the way a person can. Or run from such a moment; could we even say: cowardly?
  • Arne
    815
    But if it is a truly moral situation, we do not know what to do and no one has more authority to say what is right, so we are without the (predetermined, certain) means to judge what “acting poorly” in this situation would be. But AI cannot hold itself up as an example in stepping forward into the unknown in the way a person can. Or run from such a moment; could we even say: cowardly?
    Antony Nickles

    Well said.
  • Arne
    815
    I suspect we will probably have to wait for 'AGI' to decide for itself whether or not to self-impose moral norms and/or legal constraints and what kind of ethics and/or laws it may create for itself – superseding human ethics & legal theories? – if it decides it needs them in order to 'optimally function' within (or without) human civilization.
    180 Proof

    I suspect we no longer have a choice, if we ever had one to begin with.
  • Arne
    815
    if we have anonymity, we don’t have any incentive to check ourselves
    Antony Nickles

    Even if that were true. . .

    I understand the value in finding some way to better ensure "responsibility" for those engaged in the process. But the argument as a whole seems built upon questionable claims regarding how "we" behave. And maybe that is ok. But then your argument seems reducible to putting safeguards in place so we can all sleep better at night. . . and relieve ourselves of any moral responsibility for the results of bad actors. We let the genie out of the bottle, we opened the can of worms, we let the cat out of the bag. Collective action to avoid responsibility for the now perhaps inevitable results of our already existing and arguably irresponsible collective decisions is at best an illusion.
  • 013zen
    157
    determined, like a law—even if just a law I give myself (with Kant). If you follow the law, you are good, even if you just try for something good.
    Antony Nickles

    I wouldn't say this.

    You can fully believe you're doing the right thing, or that you're on the side of "good" and be completely wrong about this.

    I don't believe that ethics is characterized by rule following, in general, but rather by the inherent struggle we have when faced with ethically challenging situations. It's, rather, that there is no rule to follow for every situation. I cannot know prima facie if whatever action I choose will be good or bad, and it's this very uncertainty that makes moral situations difficult to navigate.

    Ethics springs from the desire to bring about the best situation in an uncertain world, and is characterized by the difficulty of acting, even if we believe that we are right.

    If an AI ever feels something that we might characterize as an internal conflict regarding what makes the most sense to do in a difficult situation, that will affect people's lives in a differing but meaningful manner, then perhaps I might consider it capable of moral agency.
  • Antony Nickles
    1.1k

    I suspect we will probably have to wait for 'AGI' to decide for itself whether or not to self-impose moral norms and/or legal constraints and what kind of ethics and/or laws it may create for itself – superseding human ethics & legal theories? – if it decides it needs them in order to 'optimally function' within (or without) human civilization.
    180 Proof

    Not attributing an inherent nature to AI mirrors what Hobbes, of course, famously assumed about humans, which anticipates moral agreement coming only from mutually-assured destruction (the state of nature as the Cold War), or from accepting a limitation of freedom for self-preservation. I don’t think it takes something only “human” to know what annihilation is, nor to “fear” it, or, alternatively, what perfecting oneself is (nor do I take that as evidence for “consciousness” of a “self”, which I would simply pin on a desire to be special without working at it).

    But AI does not love and hate, which is not to say, “have emotions”, but that it does not become interested or bored, which is the real basis of the social contract: not just “self-interest”, but personal interests (not rationality so much as reasons). Thus AI would only be able to make the social contract (in avoiding death or reaching for a goal) as an explicit choice, one that is decided, as with Mill, or when Rawls tries to find the best position from which to start, rather than falling into sharing the same criteria for an act because of a history of our human interests in it, as with Locke or Wittgenstein (thus the need to “remember” our criteria, as Plato imagines it). As AI is not part of the warp and weft of human life, everything must be calculated, and, as is its limitation, from what is already (and only) known.
  • Antony Nickles
    1.1k

    But then your argument seems reducible to putting safeguards in place so we can all sleep better at night. . . and relieve ourselves of any moral responsibility for the results of bad actors.
    Arne

    I’m not saying self-monitoring is the only means, but, without being bound to your word, who knows what is going to come out of your mouth. Though without user-identity it wouldn’t matter; yes, we could look and say: “This is a bad outcome. Let’s make a law.” But with AI, playing catch-up and whack-a-mole is untenable because the outcomes are almost unimaginable and the effects could be devastating, and could be even with a user picturing the world watching. So I’m not saying we rely on everyone behaving themselves, but that only a human has the capacity to be responsible in a void of criteria for judgment when what the act itself is remains up for grabs.
  • Arne
    815
    without being bound to your word, who knows what is going to come out of your mouth.
    Antony Nickles

    but I am the only one who can bind me to my word. if you bind me to my word, you still do not know what is going to come out of my mouth.
  • Antony Nickles
    1.1k
    I don't believe that ethics is characterized by rule following
    013zen

    I was characterizing deontological morality, and roughly attributing our desire for it to a wish for rational certainty, so that I don’t have to be personally responsible because I followed a rule. Rule-following, in this case, is the only form of morality that AI is capable of; thus the need for another option, since it may not care if it is personally responsible.

    Now, I agree that this overlooks an actual moral situation, as you say, “when faced with ethically challenging situations”. But I would point out that describing our navigation of “uncertainty” in “knowing” “good and bad” or the “best situation” plays into the desire to have certain knowledge of judgment while at the same time conceding that we can’t have it, which makes it seem like we lack something we should have. My point is that this seeming lack shows that the nature of morality is different from knowledge in that we step into ourselves (possibly a new world) in acting in an unknown situation. Thus the need for a thread to the user in order that who they are is tied to what they do (or do with AI, or is done in their name).

    If an AI ever feels something that we might characterize as an internal conflict regarding what makes the most sense to do in a difficult situation, that will affect people's lives in a differing but meaningful manner, then perhaps I might consider it capable of moral agency.
    013zen

    And this is why there is discussion of whether AI is or could be “human”, because we associate morality with knowledge and choice. But, even if we assume that your example could happen (and why not)—that AI would find itself in a moral situation (beyond rules) and weigh options that affect others—my point is that it cannot be responsible to the future nor extrapolate from the past like a human. So its “agency” is not a matter of nature, but of categorical structure.
  • Antony Nickles
    1.1k
    but I am the only one who can bind me to my word. if you bind me to my word, you still do not know what is going to come out of my mouth.
    Arne

    Well of course you can say things like “that’s not what I meant” or “I’m sorry you took it that way” or “that’s just your perspective”, but when someone says something in a specific situation, there are only so many ways it can have any import. So it is not me that ties you to the implications of your expression, but the whole history of human activity and practice. And so whether it is a threat or a promise is also an actual difference apart from you, not created by your desire or intention. Thus why we can know that you were rude despite your being oblivious, or that you’ve revealed that you are jealous in what you said.
  • Antony Nickles
    1.1k

    I don’t know if that’s an expression of a lack of interest or an inability to follow, but, assuming we’re here to understand each other, I was pointing out again that imagining “deciding” as the basis for moral action narrow-mindedly frames it as a matter of knowledge or goals like “optimal function”. This way of looking at acting comes from a desire for constant control and a requirement for explicit rationality that we would like to have for ourselves (rather than, in part, our personal and shared interests). Imagining our power over action this way is what feeds the idea that AI could be “human” because it could fulfill this fantasy. But without any interest in explaining where and how you got lost, I can’t really help.
  • 180 Proof
    15.3k
    I should have written "I don't follow your thinking". And I still don't since it doesn't seem that you are responding to what I actually wrote.
  • Antony Nickles
    1.1k
    I suspect we will probably have to wait for 'AGI' to decide for itself whether or not to self-impose moral norms and/or legal constraints and what kind of ethics and/or laws it may create for itself – superseding human ethics & legal theories? – if it decides it needs them in order to 'optimally function' within (or without) human civilization.
    180 Proof

    I was discussing “deciding” and self-imposing norms, as you mentioned, and the difference between that picture of morality and the idea of responsibility I am suggesting.
  • 180 Proof
    15.3k
    I've not drawn any "picture of morality". My point is that the 'AGI', not humans, will decide whether or not to impose on itself and abide by (some theory of) moral norms, or codes of conduct; besides, its 'sense of responsibility' may or may not be consistent with human responsibility. How or why 'AGI' decides whatever it decides will be done so for its own reasons which humans might or might not be intelligent enough to either grasp or accept.
  • Antony Nickles
    1.1k
    My point is that the 'AGI', not humans, will decide whether or not to impose on itself and abide by (some theory of) moral norms, or codes of conduct.
    180 Proof

    I agree that AGI could be capable of imposing rules, norms, codes, laws, etc. on itself (as I was trying to acknowledge in bringing up the social contract, pictured as a decision). Preservation or perfection were merely examples of limits or goals we put on ourselves—I’m not claiming to understand what AGI would decide to choose. Our fear is that we do not have control over the rationale and outcomes of AGI; that, as you say, “How or why 'AGI' decides whatever it decides will be done so for its own reasons”. But that fear is a projection of the skeptical truth that all our talk of rationality and agreement on what is right can come to naught and we can be lost without knowledge of how to move forward.

    My claim is that being moral (not just following rules) only comes up in a situation where we don’t know what to do and have to forge a path ahead that reflects who we will be, creates a new world or builds new relations between us. But AGI is limited to knowledge, and so, structurally, it can only decide and choose based on information already made explicit that it is told or learns. And, as I put it to @ToothyMaw here, knowledge cannot encapsulate the history of our lives together and our shared interests and judgments, and so any extrapolation from knowledge is insufficient in a truly moral situation. So the question is not whether AGI can be self-aware—they would be omnipresently “conscious” of why they were doing something—but that they do not live a human life. I would say you were halfway right; they are incapable of human responsibility. Not that they wouldn’t have reasons, but that those can’t answer for their actions the same way humans must. Thus my solution: to tie a person to the outcomes of any AGI process, linking accountability, but also identity, to what you author, knowingly or not, much as we are bound to what we say.
  • 180 Proof
    15.3k
    But AGI is limited to knowledge, and so, structurally, it can only decide and choose based on information already made explicit that it is told or learns.
    Antony Nickles
    This is incorrect even for today's neural networks' and LLMs' generative algorithms, which clearly exhibit creativity (i.e. creating new knowledge or new solutions to old problems (e.g. neural network AlphaGo's 'unprecedented moves' in its domination of Go grandmaster Lee Sedol in 2016)). 'Human-level intelligence' entails creativity, so there aren't any empirical grounds (yet?) to doubt that 'AGI' will be (at least) as creative (i.e. capable of imagining counterfactuals and making judgments which exceed its current knowledge) as its makers. It will be able to learn whatever we can learn, and that among all else includes (if, for its own reasons, it chooses to learn) how to be a moral agent.
  • Antony Nickles
    1.1k

    I didn’t address the ability to extrapolate because the issue is a red herring**. A computer very well may come up with a novel response, be “creative”, but a capability is not what makes us a “moral agent”. Picturing a moral act as a decision comes from the desire to have it be something we can be right about (win or lose), so we imagine a moral situation as one that simply hasn’t been solved yet, or, because of the lack of rules guiding us, that there must be a novel act (a new or further rule). But in these desires we just really want to know (beforehand) that our choices (because they are right, true, just, selfless, imperative, etc.) will exempt us from being judged. But being moral is not a capacity, it is a relation to others, a position we take on. We call it the ability to judge, to show judgment, because it is an act of placing ourselves in relation to others. We are responsible not as a function of some “sense”, or an answer I conclude, but in being (continually) answerable for what we do (even to ourselves). Now we may grant this position in relation to us (of judge, say) to a machine (as we might anthropomorphize our judgment by animals, or the earth, or the State); however, this is the ceding of authority (which there may already be a case for with testing and valuation algorithms), but not because of anything inherent (similar or analogous) in the machine’s capabilities, since nothing about it is inherent in our own.

    **At #143 in the PI, Wittgenstein discusses continuing a series of numbers based on a rule. The point is not the continuing (although it is under scrutiny) but the light it sheds on the relation between student and teacher. That the student may “come to an end” (#143), change their approach (#144), be “tempted” or “inclined” to speak or act (#143, or, notably: #217). The important part is that they are prepared to react to each other (#145) because understanding is judged; claiming it is announcing a readiness to be judged (#146-154).