• Wayfarer
    22.6k
    "Feelings" are instantiated in biochemical systems but this does not preclude them being instantiated other inorganic systems.180 Proof

    any examples of that? Beyond wishful thinking, I mean?

    //although I suppose this could also be read as an allusion to panpsychism. Is that what you mean?
  • Deletedmemberzc
    2.5k
    This doesn't follow. "Feelings" are instantiated in biochemical systems but this does not preclude them being instantiated in other, inorganic systems. Furthermore, in principle nothing precludes "AI" from being manifested through biochemical systems (via e.g. neuro-augmentation or symbiosis).180 Proof

    You're right, of course, on both points, but I imagine those potentialities lie in the distant future.

    ...That is to say, without getting into the hard problem, I agree. I don't think you think the hard problem is hard, but I've laid that debate to rest since it never gets off the ground.
  • Deletedmemberzc
    2.5k
    If people mistreat life-like robots or AI they are (to an extent) toying with doing so to real humans.Isaac

    I think the eventual availability of high-fidelity graphic-emotive VR simulators of rape, torture & murder (plus offline prescription medications, etc.) will greatly reduce the incidence of victimization of real persons by antisocial psychopaths.180 Proof

    I'm with 180 Proof. I play violent video games with a friend on a regular basis, and the result, if anything, is a cathartic release of negative energy in the form of comic relief. It hasn't affected my ability to empathize with, for example, the residents I take care of at the nursing home where I work. Moreover, it can make meditation even more peaceful by contrast after an hour-long virtual bloodbath. And I continue to be horrified by actual war, murder, and the atrocities of history.
  • Deletedmemberzc
    2.5k
    We ought not be the sort of people who can hear cries of distress and not feel like we should respond.Isaac

    I hear cries of distress in movies all the time and know that because it's a simulation of distress there's no need for a response. I don't see a moral issue here.



    Technically, a virtual simulation of distress - that is to say, twice-removed from actual distress. The human mind is able to cope with, manage, such nuances and remain completely healthy.
  • Isaac
    10.3k
    I think the eventual availability of high-fidelity graphic-emotive VR simulators of rape, torture & murder (plus offline prescription medications, etc.) will greatly reduce the incidence of victimization of real persons by antisocial psychopaths.180 Proof

    Yes, it's an interesting debate. Personally, I disagree. I think that these anti-social tendencies are not desires which need sating (like hunger), but rather failures in certain systems of restraint. Given this model, further suppressing what little of that restraint might be left will worsen incidents of victimisation, not lessen them. It's rather like taking the brakes off a train because they're not working properly: the train is no better off with no brakes at all than with brakes that don't work.

    Where I can see it working is that using the VR will always be easier than trying it on a real person, and so it may act as a path of least resistance.

    I still would worry about the safety of letting a person out into society who has just spent several hours treating 'seemingly' real people without compassion, and yet suffered no consequence for doing so...
  • Isaac
    10.3k
    a virtual simulation of distress - that is to say, twice-removed from actual distress. The human mind is able to cope with, manage, such nuances and remain completely healthy.ZzzoneiroCosm

    That's the conclusion, not the evidence.
  • Deletedmemberzc
    2.5k
    That's the conclusion, not the evidence.Isaac

    It's difficult to present evidence of the healthfulness of my mind. :wink:

    All I can say is I'm a peaceful, charitable, generous man who very often finds himself in the throes of the peak experience as described by Abraham Maslow.

    https://en.wikipedia.org/wiki/Peak_experience

    For other minds, and certainly for young children, whose minds are less skillful at managing nuance, it may be less healthy.
  • Deletedmemberzc
    2.5k
    I think it would be only too easy to induce ataraxia by producing two counter-papers, so I think I'll jump straight to ataraxia.

    I think the minds of children should be protected from simulations of violence. And possibly some set of adult minds. But on minds like mine it has no detrimental effect.
  • Isaac
    10.3k
    I think it would be only too easy to induce ataraxia by producing two counter-papersZzzoneiroCosm

    It would. Normally, though, we'd then go on to discuss the relative merits and problems of those papers, but I understand philosophy is different...

    I think the minds of children should be protected from simulations of violence. And possibly some set of adult minds. But on minds like mine it has no detrimental effect.ZzzoneiroCosm

    Possibly. So we could then ask the question of how we ought to act in the face of such uncertainty. Is it worth the risk? What are the costs either way? That kind of analysis can be done, no?
  • 180 Proof
    15.4k
    "Thought crime" as a prohibition has a very long history of failure and pathologization in countless societies.

    :cool:

    You're right, of course, on both points, but I imagine those potentialities lie in the distant future.ZzzoneiroCosm
    :up:

    Examples of what? I have not claimed or implied that there are any other instantiations presently.
  • sime
    1.1k
    In line with Richard Dreyfus's criticisms of computer science in the seventies, which predicted the failure of symbolic AI, AI research continues to be overly fixated upon cognitive structure, representations and algorithms. This is due to Western culture's ongoing Cartesian prejudices, which falsely attribute properties such as semantic understanding, or the ability to complete a task, to learning algorithms and cognitive architectures per se, as opposed to the wider situational factors that subsume the interactions of machines with their environments, including the non-cognitive physical processes that mediate those interactions.

    Humans and other organisms are, after all, open systems that are inherently interactive. So when it comes to studying and evaluating intelligent behaviour, why are the innards of an agent relevant? Shouldn't the focus of AI research be on agent-world and agent-agent interactions, i.e. language-games?

    In fact, aren't such interactions the actual subject of AI research, given that passing the Turing Test is the very definition of "intelligence"? If so, the Turing Test cannot be a measure of 'intelligence properties' that are internal to the interrogated agent.

    For instance, when researchers study and evaluate the semantics of the hidden layers and outputs of a pre-trained GPT-3 architecture, isn't it the conversations that GPT-3 has with researchers that are the actual underlying object of study? In which case, how can it make sense to draw context-independent conclusions about whether or not the architecture has achieved understanding? An understanding of what in relation to whom?
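
    To make that concrete, here is a minimal sketch of the kind of probing I have in mind, assuming the Hugging Face transformers library and using the publicly available GPT-2 as a stand-in (GPT-3's weights are not public); the model name and prompt are only illustrative:

    # Minimal sketch: inspecting a model's hidden layers.
    # GPT-2 stands in for GPT-3, whose weights are not publicly available.
    import torch
    from transformers import GPT2Tokenizer, GPT2Model

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2Model.from_pretrained("gpt2")

    # The "conversation" the researcher has with the model: a single prompt.
    inputs = tokenizer("Do you understand what you are saying?", return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)

    # One activation tensor per layer (plus the embedding layer), each of
    # shape (batch, sequence_length, hidden_size). Whatever "semantics" a
    # researcher reads off these tensors is read off relative to this prompt.
    for i, layer in enumerate(outputs.hidden_states):
        print(f"layer {i}: {tuple(layer.shape)}")

    The point being: every number in those tensors is conditioned on the prompt, i.e. on one side of an interaction, which is why context-independent conclusions about "understanding" seem misplaced.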
  • Cuthbert
    1.1k
    In which case, how can it make sense to draw context-independent conclusions about whether or not the architecture has achieved understanding? An understanding of what in relation to whom?sime

    I think all of the above post is true. The robot has issued such and such words and the words all made sense. But did the robot mean any of it? On the other hand, if a robot threatens to beat me up, I won't wait around to ask whether it understands what it's saying.
  • Moliere
    4.7k
    If people mistreat life-like robots or AI they are (to an extent) toying with doing so to real humans. There are several parts of the brain involved in moral decision-making which do not consult much with anywhere capable of distinguishing a clever AI from a real person. We ought not be training our systems how to ignore that output.Isaac

    What this discussion shows is that as soon as an observable criterion for consciousness is set out, a clever programmer will be able to "simulate" it.

    It follows that no observable criteria will ever be sufficient.

    But of course "phenomenal experience" can only be observed by the observer, and so cannot serve as a criterion for attributing consciousness.

    So this line of thought does not get anywhere.

    Whether some piece of software is conscious is not a technical question.Banno

    These two go along nicely together, and they also stimulate some of my thinking on underlying issues with respect to the relationship between knowledge and ethics (which is super cool! But I'm going to stay on topic).

    I agree that, at bottom, there is no scientific matter at stake. A trained producer of scientific knowledge wouldn't be able to run a process, interpret it, and issue a reasonable inference, in some kind of Bureau of Moral Inspection, as to whether or not we will be treating a given being as a moral being.

    In fact, while comical to think on at a distance, it would, in truth, be horrific to adjudicate moral reasoning to a bureaucratic establishment dedicated to producing knowledge, issuing certificates of analysis on each robot, alien, or person that they qualify. Not even in an exaggerated sense: just imagine a Brave New World scenario where, instead of a science of procreation being run by the state to institute natural hierarchies and create order, you'd have a state scientific bureau determining what those natural hierarchies already are --

    Functionally speaking, not much different.


    Also, naturally, we are hearing about this for a reason -- the news is literature! And Google wants to make sure it still looks good in the eyes of the public in spite of firing this guy, especially because the public will be more credulous when it comes to A.I. being sentient.

    Another reason to be hesitant to immediately agree. After all, what about the time the guy is right? Will the Alphabet corporation have our moral worth at the heart of its thinking when it wants to keep a sentient A.I. because it's more useful to own something sentient?


    No, I'd say it's far more sensible to err on the side of caution, because of who we will become if we do not.
  • Deletedmemberzc
    2.5k
    So we could then ask the question of how we ought to act in the face of such uncertainty. Is it worth the risk? What are the costs either way? That kind of analysis can be done, no?Isaac

    Sure, if I were a policymaker or if I had children. As it is, I don't feel a pressing need.

    Thank you again for the open engagement on the AI issue. :cool:
  • Isaac
    10.3k
    "Thought crime" as a prohibition has a very long history of failure and pathologization in countless societies.180 Proof

    Agreed. Whether or not we encourage/allow facilities to reduce/increase desensitisation is, I think, a far cry from thought crimes though.

    it would, in truth, be horrific to adjudicate moral reasoning to a bureaucratic establishment dedicated to producing knowledge, issuing certificates of analysis on each robot, alien, or person that they qualify.Moliere

    Exactly. Too often have we erred in this respect (slavery, animal cruelty, child abuse, treatment of the mentally retarded...) to trust any bureaucracy with this kind of judgement. It seems more likely than not that whatever decision we make about the moral worth of some entity, we'll be horrified 100 years later that we ever thought that way.

    The Zong was a slave ship transporting slaves from Africa. It ran out of water, and so, to save what rations were left, the slaves were thrown overboard, still chained. In litigation, the judge, Lord Mansfield, said he

    ...had no doubt that the Case of Slaves was the same as if Horses had been thrown over board

    I think the key factor in cases like slavery is that we do not start from a limited group of 'moral subjects' and gradually expand it. We start with everything that seems like a moral subject included and we gradually reduce it.

    We eliminate, from the group of moral subjects, on the basis of a range of factors, some reasonable (unplugging the AI), some unreasonable (deciding slaves are like horses). Even when the grounds are reasonable, such decisions shouldn't be easy. They should come with discomfort, lest we be unfettered the next time we decide some element of humanity is as dispensable as a horse.
  • Deletedmemberzc
    2.5k
    The chief danger in life is that you may take too many precautions. — Alfred Adler
  • Tom Storm
    9.1k
    In line with Richard Dreyfus's criticisms of computer science in the seventies, which predicted the failure of symbolic AI, AI research continues to be overly fixated upon cognitive structure, representations and algorithms. This is due to Western culture's ongoing Cartesian prejudices, which falsely attribute properties such as semantic understanding, or the ability to complete a task, to learning algorithms and cognitive architectures per se, as opposed to the wider situational factors that subsume the interactions of machines with their environments, including the non-cognitive physical processes that mediate those interactions.sime

    I think you are referring to Hubert Dreyfus' work, not the American actor from Close Encounters... :wink:
  • Jackson
    1.8k
    I think you are referring to Hubert Dreyfus' work, not the American actor from Close Encounters.Tom Storm

    I was hoping it was Richard.
  • Banno
    25.1k
    I think the key factor in cases like slavery is that we do not start from a limited group of 'moral subjects' and gradually expand it. We start with everything that seems like a moral subject included and we gradually reduce it.Isaac

    Yep.

    No, I'd say it's far more sensible to err on the side of caution, because of who we will become if we do not.Moliere

    That's it.
  • Deletedmemberzc
    2.5k
    @Wayfarer

    Curious to me that those who have no use for the word 'subjectivity' prefer not to draw a line between creatures and machines. Thoughts?
  • Moliere
    4.7k
    How else would you draw a line between creatures and machines other than subjectivity?

    Seems to me that they go hand in hand.
  • Wayfarer
    22.6k
    Curious to me that those who have no use for the word 'subjectivity' prefer not to draw a line between creatures and machines. Thoughts?ZzzoneiroCosm

    There's an expression you encounter in philosophy, 'forgetfulness of being'. The fact that the distinction can't be made between humans and devices (and also between humans and animals) betokens that forgetfulness, in my opinion. It's reminiscent of the Platonic 'anamnesis' (which means 'unforgetting', meaning we're generally in a state of 'amnesis', amnesia, due to forgetfulness). I think it's because we're so utterly absorbed in the phenomenal domain that we forget our real nature and then fiercely resist being reminded about it. (Bracing for flak :yikes: )

    Two books:

    You Are Not a Gadget, Jaron Lanier

    Devices of the Soul, Steve Talbott.
  • Deletedmemberzc
    2.5k
    Bracing for flakWayfarer

    You can handle it. :strong:
  • Deletedmemberzc
    2.5k
    'forgetfulness of being'.Wayfarer

    Heidegger's inspiration. Haven't read enough of him.
  • Banno
    25.1k
    How else would you draw a line between creatures and machines other than subjectivity?Moliere

    Trouble is, it doesn't help, because that subjectivity is not open to our inspection, neither in the case of @ZzzoneiroCosm nor of LaMDA.

    So as an answer, it is useless.
  • Wayfarer
    22.6k
    Heidegger's inspiration. Haven't read enough of him.ZzzoneiroCosm

    I've been meaning to get around to his Introduction to Metaphysics. I've not tackled Being and Time, and I'm not sure I want to make the investment. Besides, I can't quite forgive him his enrollment in the Nazi Party.

    subjectivity is not open to our inspectionBanno

    Oh, you mean it's not objective! So that's it. No wonder, then.
  • Deletedmemberzc
    2.5k
    I see you know how to use the word 'subjectivity.' So no more grounds for special pleading on that score.