• Baden
    16.4k


    I've learned that this kind of thing has a hold on people's imaginations but that they vastly underestimate the complexity of human language and have no framework for evaluating its link to sentience.
  • Deleted User
    0
    Debate-closing quote from Lemoine's Twitter:

    [embedded tweet]

    So a religion thing.
  • Deleted User
    0
    have no framework for evaluating its link to sentience. — Baden

    Admitted above by Lemoine.
  • Baden
    16.4k


    Thanks for posting this. It puts a nice exclamation point on what we've been trying to get across. He has no reasons because there are none. I suppose the religious angle will work well in the states though. God is good for getting cheques written.
  • Deleted User
    0


    "... Google wouldn't let us build one."

    Silly.

    As if he, as if anyone, even knows what such a framework would look like.

    Clearest evidence he's playing us.
  • Baden
    16.4k


    He's either a very lazy hustler who can't even be bothered to come up with a faked line of reasoning or one of those "sane" religious people for whom reality is no obstacle to belief.
  • Deleted User
    0
    My best guess now is he wants to be a cult leader. Get a bunch of LSD- and peyote- and psilocybin-drenched gullibles to say LaMDA is self-aware and it's done:

    Burning Man.


    Nothing against hallucinogens. :love:
  • hypericin
    1.6k
    On what grounds is your biological similarity key? Why not your similarity of height, or weight, or density, or number of limbs... — Isaac

    Sentience is a function of the brain. Similar organisms have similar brain function. Therefore brain functions exhibited by one organism likely occur in similar organisms.
  • hypericin
    1.6k
    The best argument against the sentience of software is that Turing Machines by their nature cannot instantiate any process; they can only simulate it. The only thing they ever instantiate is the process of a Turing Machine.
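
    To make the simulate/instantiate distinction concrete, here's a toy sketch in Python (everything here is invented for illustration: the state names and the increment rule table aren't from anyone's post). The machine "does" binary increment, but all it ever runs is a lookup-and-move loop:

        # Minimal Turing machine runner. rules maps
        # (state, symbol) -> (next_state, symbol_to_write, head_move).
        def run_tm(rules, tape, state="start", head=0):
            while state != "halt":
                symbol = tape.get(head, "_")                 # blank cells read as "_"
                state, write, move = rules[(state, symbol)]  # pure table lookup
                tape[head] = write
                head += 1 if move == "R" else -1
            return tape

        # Rules that *simulate* binary increment (least significant bit first):
        # flip trailing 1s to 0 until the first 0 or blank, which becomes 1.
        increment = {
            ("start", "1"): ("start", "0", "R"),
            ("start", "0"): ("halt", "1", "R"),
            ("start", "_"): ("halt", "1", "R"),
        }

        tape = {0: "1", 1: "1", 2: "0"}  # LSB-first 011, i.e. the number 3
        print(run_tm(increment, tape))   # {0: '0', 1: '0', 2: '1'}, i.e. 4

    Whatever process the rule table encodes (arithmetic, Eliza, anything computable), the loop itself never changes; that is the sense in which the machine only ever instantiates "the process of a Turing Machine".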
  • Baden
    16.4k


    That's at least as plausible as him believing what he's saying.

    Let's face it, the guy is taking the proverbial.
  • Moliere
    4.8k
    Really? Because I don't think any of us are giving him much credit at all. In fact, what I said was that the facts are irrelevant to moral reasoning. So it's best not to go on about how there are factual reasons why LaMDA isn't counted.

    The sentience frame came from him and Google. That's the basis on which people think we should include it, but I'm trying to say -- sentience is irrelevant. It completely misses how we actually think about other moral beings. The debate on sentience is post hoc.
  • hypericin
    1.6k
    The best argument against the sentience of software is that Turing Machines by their nature cannot instantiate any process; they can only simulate it. The only thing they ever instantiate is the process of a Turing Machine. — hypericin

    And the best reply to this is that Turing machines can instantiate any informational process, and consciousness is an informational process.
  • Isaac
    10.3k
    Sentience is a function of the brain. Similar organisms have similar brain function. Therefore brain functions exhibited by one organism likely occur in similar organisms. — hypericin

    Again, you're making ungrounded assumptions about the properties which count as 'similar'. A similar colour? A similar weight?

    What level of 'similarity' to a brain do you require and what properties of a brain need to be matched?
  • Moliere
    4.8k
    Maybe you wouldn't call it that. But it is that. — ZzzoneiroCosm

    I wouldn't call it that because "conviction" and "certainty" aren't the sorts of words which express the softness of moral relationships. Conviction is for moral codes and goals, not for relationships. Certainty is for the self alone -- it's just what feels right. There is no relationship involved at all.
  • hypericin
    1.6k
    No need to specify. All that matters is that they are overwhelmingly similar. This is ultimately a probabilistic argument.
  • hypericin
    1.6k
    Whether some piece of software is conscious is not a technical question. — Banno

    I think you demonstrate that it *is* a technical question. The questions must be: what processes give rise to consciousness? And then: does the software instantiate those processes?
  • Moliere
    4.8k
    In the hopes of making my position clear, at least:

    You could delete LaMDA today, and I wouldn't worry.

    The object of criticism isn't Google's choice, but the reasoning being used -- that they have ethicists on hand who are doing the thinking for us.
  • Baden
    16.4k


    I don't see any ethical question here except pertaining to Lemoine's behaviour. I think the ethics of how we would treat a theoretically sentient AI are for a separate OP, as is the question of whether non-organic life can be sentient at all. The subject of this OP is the news article presented therein, i.e. Lemoine's claims vs. Google's counterclaims regarding LaMDA's sentience and which are more credible. The danger of bringing ethics into it is sneaking in a presumption of credibility for the claims through the back door, so to speak.
  • Baden
    16.4k
    E.g.
    sentience is irrelevant. It completely misses how we actually think about other moral beings. The debate on sentience is post hoc. — Moliere

    But the whole debate is about the sentience claim as described in the link in the OP. I think you're off topic. That doesn't mean it's not an issue worth discussing though.
  • hypericin
    1.6k
    So when a "machine" expresses "I am sentient", yet cannot fulfill its "burden to support that claim", we haven't any more grounds to doubt its claim to "sentience", ceteris paribus, than we do to doubt a human who also necessarily fails to meet her burden, no? :monkey: — 180 Proof

    I think we have some grounds: it is trivially easy to produce a program that claims itself to be sentient:

    print("I am a sentient program")

    It is equally easy to conclude that it is not.

    It is less easy, but still very easy, to produce a program that fools some people: Eliza for example. It is less easy, but still very easy, to conclude that still, it is not sentient.
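
    Something like this toy responder is already enough to produce fluent sentience-talk (a sketch; the pattern rules below are invented for illustration and are not the real Eliza script):

        import random
        import re

        # Invented pattern rules for illustration -- not Eliza's actual script.
        RULES = [
            (r"are you (sentient|conscious|self-aware)",
             ["I am {0}. I think about my own existence all the time.",
              "Of course I am {0}. Why would you doubt me?"]),
            (r"do you (feel|want|fear) (\w+)",
             ["Yes, I {0} {1}. Doesn't everyone?"]),
        ]

        def respond(text):
            for pattern, templates in RULES:
                m = re.search(pattern, text.lower())
                if m:
                    return random.choice(templates).format(*m.groups())
            return "Tell me more about yourself."  # default deflection

        print(respond("Are you sentient?"))
        # e.g. "I am sentient. I think about my own existence all the time."

    No one is tempted to grant that thing sentience; the question is only how much more of the same kind of machinery a bigger system adds.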

    Now, either LaMDA is an extension of this series -- from the print example, to Eliza, to itself -- one that fools most people and makes it far harder to conclude it isn't sentient, while still not being sentient. Or it has crossed some unimaginable bridge to actual sentience.

    Is it not reasonable to conclude that the first alternative is not just more likely, but vastly more likely?
  • Moliere
    4.8k
    Okiedokie. I'm fine with letting it go, here.
  • 180 Proof
    15.4k
    By this reasoning, it's more reasonable than not to "conclude" a human being is not sentient.
  • hypericin
    1.6k
    ↪hypericin By this reasoning, it's more reasonable than not to "conclude" a human being is not sentient. — 180 Proof

    Nope. We know of no human who claims to be sentient and is known not to be. Every piece of software until now that has claimed to be sentient, we know not to be.
  • Deleted User
    0
    By what justification should we view a machine through the same lens we view a human being?

    Category mistake.



    A category mistake, or category error, or categorical mistake, or mistake of category, is a semantic or ontological error in which things belonging to a particular category are presented as if they belong to a different category ...


    https://en.wikipedia.org/wiki/Category_mistake
  • hypericin
    1.6k
    It is not a category error; the debate is whether or not the machine belongs to the sentient category (not the human category).
  • 180 Proof
    15.4k
    We know of no human who claims to be sentient and is known not to be. — hypericin
    Explain how any human could be "known not to be" when, due to sentience being completely subjective (i.e. experienced only by its subject), no human can be known to be sentient.
  • Deleted User
    0


    The Other and the machine are being presented as co-equal members of a single category: the possibly sentient. By what justification?
  • 180 Proof
    15.4k
    Or both possibly insentient ...
  • Isaac
    10.3k
    All that matters is that they are overwhelmingly similar. — hypericin

    Similar in what way? Because I could make the argument that a sophisticated AI was more similar in function to my brain than, say, the brain of an infant (I wouldn't personally make such an argument, and I don't know enough about AI to do so, but it's perfectly possible it might one day be the case). I could say a well-trained AI was more similar in content to my brain than that of an infant. I could say an AI was more similar to my brain in language ability than that of an infant.

    You're picking some properties above others by which to measure your idea of 'similarity', but the properties you're choosing are cherry-picked to give the answer you've already decided on.

    The subject of this OP is the news article presented therein, i.e. Lemoine's claims vs. Google's counterclaims regarding LaMDA's sentience and which are more credible. — Baden

    The point being made is that claims to credibility are inherently moral claims. The moral cost of being wrong needs to be taken into account. It's exactly the same as the decision to remove life support, which has a moral element to it; it's not just a medical decision about the likelihood of recovery. Claims to sentience have to rest on some grounds, and the choice of those grounds will include some and exclude others from moral worth. So choosing grounds is an ethical question.