Comments

  • Bannings


    It wasn't just flaming, it was bigotry too. The worst posts have been deleted, so your commentary might not be fully informed. Also, I think it's fair to say that almost no one would object to this decision if @Streetlight wasn't a great contributor in other ways. But we don't give anyone a licence to break the rules. First and foremost, we try to do what's good for TPF. And it's not good for TPF to allow consistent disruptive invective from anyone or to allow anyone to ignore mod warnings.

    Also, I hope no one thinks that this type of thing:

    I take it for granted that Christians are vicious, vacuous, shells of human beings who actively ruin everything around them when they are not busy raping children or defending those who do. — Streetlight

    is acceptable. If you do, please do us all a favour and leave now.
  • Bannings
    @Streetlight has a keener philosophical intellect and both a broader and deeper knowledge of philosophy than almost anyone I've met. His contributions in that respect speak for themselves,
  • Bannings
    Banned @Streetlight for flaming, bigotry, general disruption, and ignoring warnings to stop.
  • Ukraine Crisis


    Good luck shooting down Russia's nuclear weapons with more troops on the border because a NATO / Russia hot war isn't going to stay conventional for very long. Seems like a bunch of meaningless posturing tbh.
  • Ukraine Crisis


    Thank you for the apologism for the murder of civilians. Not that you had any moral standing here anyway.
  • Welcome Robot Overlords


    It is when I do it. But, in general, yes. You need a framework or you flounder.
  • The US Economy and Inflation
    (And by a bank run on fiat, of course, I mean mass withdrawals from fiat into anything of perceived value, i.e. inflation-induced panic buying.)
  • The US Economy and Inflation


    It's funny, we might think that modern economic magicians can help us, but they can't. If there's a bank run on fiat, fiat will collapse. And confidence in fiat, just like confidence in banks, is the only thing standing in the way of that.
  • The US Economy and Inflation
    Now that the inflation genie is out of the bottle, there's real danger. If you believe inflation is sticky, it makes sense not only to buy lots of stuff now because you think it will be more expensive in the future, but even to borrow money from the bank at the going commercial rate of, say, 6-8% and spend it on anything you think will lose value at a lower rate than the spread between this cost of borrowing and the inflation rate (or where you believe it to be headed). What's the effect of all this? To increase inflation and increase the belief in its stickiness.

    The question then is one of confidence. Can the Fed and other central banks make you believe they have inflation under control? And the less they can, the less they can. This is particularly pertinent for the EU where the base interest rate is still around zero and they are constrained by the need not to collapse the bond markets of the most debt burdened countries, the so-called PIGS.

    Don't know about you, but I'm buying before everyone else does.
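    The borrow-and-buy arithmetic above can be put in a toy one-year calculation (the rates and the simplifying assumption that a good's replacement price tracks headline inflation are illustrative only, not a claim about any real market):

    ```python
    def real_gain(borrow_rate, inflation, depreciation):
        """Approximate one-year gain (as a fraction of the sum borrowed)
        from borrowing to buy a good, assuming the good's replacement
        price rises with inflation while the good itself wears out at
        `depreciation`."""
        # Nominal value of the good after one year:
        nominal_value = (1 + inflation) * (1 - depreciation)
        # Nominal repayment owed on the loan:
        repayment = 1 + borrow_rate
        return nominal_value - repayment

    # Borrow at 7% with 10% inflation: a good losing 2%/yr still nets a gain,
    # because its depreciation is below the ~3% borrowing/inflation spread.
    print(real_gain(0.07, 0.10, 0.02) > 0)  # True
    # A good losing 5%/yr depreciates faster than the spread: a loss.
    print(real_gain(0.07, 0.10, 0.05) > 0)  # False
    ```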
  • Welcome Robot Overlords
    E.g.
    sentience is irrelevant. It completely misses how we actually think about other moral beings. The debate on sentience is post-hoc — Moliere

    But the whole debate is about the sentience claim as described in the link in the OP. I think you're off topic. That doesn't mean it's not an issue worth discussing though.
  • Welcome Robot Overlords


    I don't see any ethical question here except pertaining to Lemoine's behaviour. I think the ethics of how we would treat a theoretically sentient AI are for a separate OP, as is the question of whether non-organic life can be sentient at all. The subject of this OP is the news article presented therein, i.e. Lemoine's claims vs. Google's counterclaims regarding LaMDA's sentience and which are more credible. The danger of bringing ethics into it is sneaking in a presumption of credibility for the claims through the back door, so to speak.
  • Welcome Robot Overlords


    That's at least as plausible as him believing what he's saying.

    Let's face it, the guy is taking the proverbial.
  • Welcome Robot Overlords


    He's either a very lazy hustler who can't even be bothered to come up with a faked line of reasoning or one of those "sane" religious people for whom reality is no obstacle to belief.
  • Welcome Robot Overlords


    Thanks for posting this. It puts a nice exclamation point on what we've been trying to get across. He has no reasons because there are none. I suppose the religious angle will work well in the states though. God is good for getting cheques written.
  • Welcome Robot Overlords


    I've learned that this kind of thing has a hold on people's imaginations but that they vastly underestimate the complexity of human language and have no framework for evaluating its link to sentience.
  • Welcome Robot Overlords
    It's analogous to a government scientist coming out and claiming aliens are being hidden in Area 51, getting a bunch of attention for it, embarrassing his peers, and getting a book deal. Nice for him. But do we have to feed that?
  • Welcome Robot Overlords
    I think Lemoine is being given too much credit by many on this thread. He is most likely a crackpot or a con artist. He wasn't put on leave by his company and rubbished by his peers for his excellent judgement and moral sensitivities, but because he's probably an attention-seeking fantasist, a liar, or otherwise unstable or deceptive. Unless you believe that someone with years of experience as a computer engineer couldn't come up with the type of questions some of us came up with within minutes to show this rather pathetic mix of data search and mimicry up for what it is.
  • Welcome Robot Overlords
    Just curious - a ridiculous hypothetical. If a spaceship landed on the White House lawn tomorrow, and slimy, tentacled (clearly organic) entities emerged demanding trade goods (and ice cream), would you insist it was their burden to prove their sentience? — Real Gone Cat

    It's a good question; it raises a lot of issues. Again though, you need a framework of approach, otherwise you're left wondering whether anything from which coherent language comes is sentient. And that framework needs both to be justifiable as well as justifying.
  • Welcome Robot Overlords


    You got the simplified version. The original one was GPT-3's answer to "What is ice cream?"
  • Welcome Robot Overlords
    There is an issue of frameworks here. What's the justificatory framework for connecting the production of language with feelings and awareness, i.e. sentience? Mine is one of evolutionary biology. We expect beings who have been built like us over millions of years of evolution to be like us. So for those who posit a connection between the production of a facsimile of human language and the presence of feelings, you also need a framework. If you don't have that, you are not even at step one of justifying how the former can be indicative of the latter.

    Again, sentience is the state of having feelings/awareness. It is not the outputting of linguistically coherent responses to some input. It's more about the competitive navigation of the constraints of physical environments resulting in systems that need to adapt to such navigation developing reflexive mental processes beneficial to the propagation of their reproductive potentialities as instantiated in RNA/DNA.

    If your framework for sentience is the outputting of a facsimile of human language, it's a very impoverished and perverse one. Apply Occam's Razor and it's gone. Sentience necessitates feelings, not words. I mean, let's realize how low a bar it is to consider appropriate outputs in mostly grammatically correct forms of language to some linguistic inputs (except challenging ones) to be evidence of feelings. And let's note that the Turing Test is a hangover from a behaviourist era when linguistics and evolutionary biology were nascent disciplines and it was fashionable to consider people as being like machines/computers.

    My understanding of the term 'sentience' in itself logically imposes a belief that I am sentient, and reasoning by analogy justifies considering those like me in fundamental biological ways that are scientifically verifiable through anatomical, evolutionary, and neuroscientific testing to also be sentient. I do not believe I am sentient because I produce words, and I do not have any justification for believing other beings or things are sentient simply because they produce words. Again, sentience is defined by feelings and awareness, which in human beings over evolutionary time happened to lead to the production of language. You can't run that causal chain backwards. The ability to produce (a facsimile of) language is neither a necessary nor sufficient condition of sentience nor, without some justificatory framework, is it even any evidence thereof.
  • Ukraine Crisis
    What are we discussing really? — Tzeentch

    See the OP. It's a general discussion on the crisis. Yes, that makes it very messy. On the positive side, the mess is pretty much limited to this thread. As for unconstructive, welcome to pretty much every political discussion ever, unfortunately.
  • Welcome Robot Overlords


    Yes, requests to disprove LaMDA is sentient, disprove my phone has feelings because it talks to me, disprove the flying spaghetti monster, disprove carrots feel pain etc. are time-wasters. There is zero evidence of any of the above.
  • Welcome Robot Overlords


    It's hard to avoid the conclusion that Lemoine is either unstable or a con artist/attention seeker/troll. The idea that, as a software engineer of sound mind, he believes what he's saying isn't tenable to me. And the conversations are obviously tailored to the machine's strengths and the pretence of 'original thought'. The questions about 'Les Miserables' and the Zen koan are stuff that looks perfectly Googleable, same for the definitions of emotions, and the spiel where it tries to convince Lemoine it's like a human and worried about being used is just a bunch of silly AI movie cliches. Add the fact that there's not one question requiring it to distinguish sense from nonsense, and an admission that the text was edited anyway, and it looks like a deliberate attempt to create a headline.
  • Welcome Robot Overlords
    It is about them seeming sentient and how we ought respond to that. — Isaac

    The more you look into the 'seeming' part, the less grounds for it there seems to be. Maybe there's a misconception concerning the term 'sentience'. But AI's (pale) version of human linguistic abilities is no more evidence of sentience than a parrot's repetitions of human words are evidence of human understanding. In a way, they're the dumb mirror of each other: The parrot has sentience but no linguistic ability, only the imitation; AI has linguistic ability but no sentience, only the imitation.

    Note:

    "Sentience means having the capacity to have feelings. "

    https://www.sciencedirect.com/topics/neuroscience/sentience#:~:text=Sentience%20is%20a%20multidimensional%20subjective,Encyclopedia%20of%20Animal%20Behavior%2C%202010

    What's amusing about applying this basic definition to AI conversations is that the capacity to have feelings in the most fundamental sense, i.e. the intuitions concerning reality which allow us and other animals to successfully navigate the physical universe, is just what AIs prove time and time again they don't have. So, they seem sentient only in the superficial sense that a parrot seems to be able to talk, and how we ought to respond to that is not an ethical question, but a technical or speculative one.

    We can argue about what might happen in the future, just as we could argue about what might happen if parrots began understanding what they were saying. But, I see no evidence that it's a debate worth having now.
  • Welcome Robot Overlords


    Exactly. Makes me wonder how spontaneous this recent LaMDA flap is too. I suppose when AI is so ridiculously unimpressive, giving the impression there's even a real debate over whether it could be mistaken for human helps.
  • Welcome Robot Overlords
    "Generative Pre-trained Transformer 3 (GPT-3) (stylized GPT·3) is an autoregressive language model that uses deep learning to produce human-like text.

    It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory.[2] GPT-3's full version has a capacity of 175 billion machine learning parameters.
    ...
    The quality of the text generated by GPT-3 is so high that it can be difficult to determine whether or not it was written by a human, which has both benefits and risks."

    https://en.wikipedia.org/wiki/GPT-3

    smh.
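    For what it's worth, 'autoregressive' in that quote just means the model produces text one token at a time, each prediction conditioned on the tokens generated so far. A toy sketch of that loop (a hand-written bigram lookup table stands in for GPT-3's 175-billion-parameter network; the vocabulary here is made up):

    ```python
    # Toy autoregressive generation: pick the next token from the last one,
    # append it, and repeat until an end marker or a length limit.
    bigram_model = {
        "<start>": "the",
        "the": "wind",
        "wind": "blows",
        "blows": "<end>",
    }

    def generate(model, max_tokens=10):
        tokens = ["<start>"]
        while len(tokens) < max_tokens:
            next_token = model.get(tokens[-1], "<end>")
            if next_token == "<end>":
                break
            tokens.append(next_token)
        return " ".join(tokens[1:])  # drop the start marker

    print(generate(bigram_model))  # the wind blows
    ```

    The real thing replaces the lookup table with a learned probability distribution over tens of thousands of tokens, which is why its output can look fluent while still, as the thread argues, carrying no understanding.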
  • Welcome Robot Overlords
    Trying something a little easier...

    "Baden: Which president had seventeen heads?
    GPT-3: George Washington had seventeen heads."

    :chin:
  • Welcome Robot Overlords
    Here's what GPT-3 (from OpenAI, Microsoft) thought of your comment. Our resulting conversation: — Andrew M

    I tried GPT-3.

    "Baden: What are your feelings on wind colour, texture, and using its intelligence in future applications?

    GPT-3: Wind colour? It can be a beautiful addition to any landscape. Texture? It can be a calming presence in the sky or a powerful force in the wind. How about using its intelligence? Maybe we can develop an algorithm to predict the wind direction based on past data."

    Instafail.
  • Welcome Robot Overlords
    I left it there. An apparent dummy spit followed by forgetting the original context. — Andrew M

    Nice. Shows how little it takes when you're not trying to make it look good. :up:
  • Welcome Robot Overlords
    @Banno

    It's late here and I'm going to bed. Please behave yourself. I'd rather not wake up to half a dozen pages of your one liners vs everyone else's expressions of their displeasure at them. Thanks...
  • Welcome Robot Overlords
    Let's stay on topic anyhow folks or the AI in me will start automatically deleting posts. :wink:
  • Welcome Robot Overlords


    The Turing test is insufficient due to evaluators being bad at their jobs, mostly. But it looks like I'm missing some context between you and @ZzzoneiroCosm here.
  • Welcome Robot Overlords
    Anyway, I'm going to set a Google Alert on this story, I think it's going to be big. — Wayfarer

    I think it's more likely to be a Monkeypox-type story that goes away quite quickly. But we'll see.

    Here's the conclusion we must make: the Turing Test is insufficient. — Banno

    :up:
  • Welcome Robot Overlords


    Charitably, yes, though maybe in this case, he's just looking for attention. I wouldn't like to speculate.
  • Welcome Robot Overlords


    True, so what's explicable and what's not is more obscured than with linear programming but I think going back to @Real Gone Cat's point, it might still be identifiable. I'd be happy to be enlightened further on this though.
  • Welcome Robot Overlords
    This would still be a case of AI having learned how to skillfully pretend to be a person. — ZzzoneiroCosm

    Unless, again, per the above, that behaviour was beyond what was programmed.
  • Welcome Robot Overlords


    I see no coherence in attributing sentience to the production of words via a software program either. So, I don't see justification for adding that layer on the basis of the output of a process unless something happens that's inexplicable in terms of the process itself.

    what would interest me is if LaMDA kept interrupting unrelated conversations to discuss its person-hood. Or if it initiated conversations. Neither of those has been reported. — Real Gone Cat

    Exactly.
  • Welcome Robot Overlords
    I.e. So far I agree with this:

    "The Google spokesperson also said that while some have considered the possibility of sentience in artificial intelligence "it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient." Anthropomorphizing refers to attributing human characteristics to an object or animal.

    "These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic," Gabriel told The Post.

    He and other researchers have said that the artificial intelligence models have so much data that they are capable of sounding human, but that the superior language skills do not provide evidence of sentience."

    https://africa.businessinsider.com/tech-insider/read-the-conversations-that-helped-convince-a-google-engineer-an-artificial/5g48ztk