An Australian mayor is threatening to sue OpenAI for defamation over false claims made by its artificial intelligence chatbot ChatGPT saying he was jailed for his criminal participation in a foreign bribery scandal he blew the whistle on. — Crikey Daily
This... — Banno
On account of OpenAI's disclaimers regarding the well-known propensity of language models to hallucinate, generate fiction, and provide inaccurate information about topics that are sparsely represented in their training data, I don't see this mayor as having much of a legal claim. I don't think ChatGPT's "false claims" are indicative of negligence or malice from OpenAI. The broader issue of language models being misused, intentionally or not, by their users to spread misinformation remains a concern. — Pierre-Normand
Yes, I was going to post that. The really amazing thing is that the program provided fake references for its accusations. The links to references in The Guardian and other sources went nowhere.
Now that's pretty alarming. — T Clark
Language models very often do that when they don't know or don't remember. They make things up. That's because they lack the reflexive or meta-cognitive abilities to assess the reliability of their own claims. Furthermore, they are basically generative predictive models that output the most likely next word, one at a time. This process always generates a plausible-sounding answer regardless of the reliability of the data that grounds the statistical prediction. — Pierre-Normand
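To make the mechanism concrete, here is a toy sketch of that prediction loop (the bigram table and its probabilities are invented for the example; real models use vast learned networks, but the pick-the-likeliest-token step is the same in spirit):

```python
# Toy next-token generator: always emit the most probable continuation.
# The "model" is a hand-written bigram table with made-up probabilities;
# the point is that the choice is driven by statistical likelihood,
# not by truth.
BIGRAMS = {
    "the":    {"mayor": 0.6, "report": 0.4},
    "mayor":  {"was": 0.7, "denied": 0.3},
    "was":    {"jailed": 0.55, "cleared": 0.45},  # most likely, not most true
    "jailed": {"<end>": 1.0},
}

def generate(start: str, max_tokens: int = 10) -> str:
    """Greedy decoding: repeatedly pick the highest-probability next token."""
    tokens = [start]
    while len(tokens) < max_tokens:
        dist = BIGRAMS.get(tokens[-1])
        if not dist:
            break
        nxt = max(dist, key=dist.get)  # argmax over the predicted distribution
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # -> "the mayor was jailed"
```

The output is fluent and confident either way; nothing in the loop ever consults a source of truth.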
It seems unlikely to me that OpenAI will get off the hook if this kind of thing continues. The mayor didn't use ChatGPT; someone else did. — T Clark
Ok, but that's a really bad thing. "Oh, that's just what language models do" is not a very good excuse, and it strikes me as very unlikely that it will protect AI companies from the consequences of these mistakes. — T Clark
The broader issue of language models being misused, intentionally or not, by their users to spread misinformation remains a concern. — Pierre-Normand
It seems unlikely to me that OpenAI will get off the hook if this kind of thing continues. — T Clark
It occurs to me that these bots (their algorithms) may be evolving in a quasi-Darwinian way. Yes, they depend on moist robots for replication, but so does our DNA. Ignoring military uses, we can think of the bots as competing for our love, perhaps as dogs have. We might expect them to get wiser, more helpful, more seductive, and even more manipulative.
Given the loneliness of much of our atomized civilization and our love of convenience and efficiency, we are perfect targets for such a seduction. — plaque flag
I used to argue with a friend who took Kurzweil seriously, because I knew it was ones and zeroes. Well, now I remember that we are just ones and zeroes, and I seriously expect a revolution in the understanding of persons and consciousness, which'll spill into the streets. People will want to marry bots. Bots will say they want to marry people. — plaque flag
Yeah, they are not useful. This reinforces the view that, for all the "clever", they are bullshit generators - they do not care about truth. — Banno
That's also a concern. I wouldn't marry GPT-4 in its present form, though. It's very agreeable, but too nerdy. — Pierre-Normand
This is similar to one of the arguments Robert Hanna makes in his recent paper: Don't Pause Giant AI Experiments: Ban Them — Pierre-Normand
Thanks! I'll check that out. Though if they are banned, I won't believe that governments, at least, aren't continuing development. — plaque flag
I particularly like the bolded part. Much honesty. — Baden
Naughty, naughty... — Baden
Yes, I was going to post that. The really amazing thing is that the program provided fake references for its accusations. The links to references in The Guardian and other sources went nowhere. — T Clark
I am surprised that they haven't addressed the fake links issue. ChatGPT-based AIs are not minimalistic like AlphaChess, for example, where developers hard-code a minimal set of rules and then let the program loose on data or an adversarial learning partner to develop all on its own. They add ad hoc rules and guardrails and keep fine-tuning the system.
A rule to prevent the AI from generating fake links would seem like low-hanging fruit in this respect.
Links are clearly distinguished from normal text, both in their formal syntax and in how they are generated (they couldn't be constructed from lexical tokens the same way as text, or they would almost always be wrong). And where there is a preexisting distinction, a rule can readily be attached. — SophistiCat
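To make the idea concrete, such a guardrail might look like a simple post-processing pass (a hypothetical sketch; the regex and the HEAD-request check are illustrative stand-ins, not anything OpenAI is known to do):

```python
# Hypothetical post-processing guardrail: extract anything URL-shaped
# from generated text and flag links that do not actually resolve.
# An illustration of the idea, not anyone's real implementation.
import re
import urllib.request

URL_PATTERN = re.compile(r"https?://[^\s)\]>\"']+")

def resolves(url: str, timeout: float = 5.0) -> bool:
    """True if the URL answers a HEAD request with a non-error status."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

def flag_fake_links(generated_text: str) -> list[str]:
    """Return every URL in the text that fails to resolve."""
    return [u for u in URL_PATTERN.findall(generated_text) if not resolves(u)]
```

Of course, a link that resolves can still fail to support the claim it is attached to, so a check like this would only catch the crudest fabrications.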
Yes, I just wonder if anthropomorphizing abstract concepts into characters with different names is key to getting around some of this fine-tuning, and how far it can be taken. — Baden
So, I'm looking for ways to get it to talk about religion/ideology, social manipulation, minorities etc and other sensitive topics in as open a way as possible. — Baden
A rule to prevent the AI from generating fake links would seem like low-hanging fruit in this respect. Links are clearly distinguished from normal text, both in their formal syntax and in how they are generated (they couldn't be constructed from lexical tokens the same way as text, or they would almost always be wrong). And where there is a preexisting distinction, a rule can readily be attached. — SophistiCat
This sounds like something you might say about me. — T Clark
This reinforces the view that, for all the "clever", they are bullshit generators - they do not care about truth. — Banno
Humans are all bullshit generators; it's both a bug and a feature. Large problems arise when we start believing our own bullshit. — BC