Finally got access to my earlier discussion: — Benkei
It's not time-stamped unfortunately. It was last week so that would be GPT-4, I think? — Benkei
R. S.
How many words of information do you have on the subject of sentience in relation to AI or language models, titles and authors please?
ChatGPT
As an AI language model, I have access to vast amounts of information on the subject of sentience in relation to AI and language models. However, it's difficult to provide an exact word count as the information is scattered across multiple sources and formats, and new information is constantly being added. — Benkei
I just got off ChatGPT. I wasn't trying to fool it, just ask some simple questions about how to handle significant figures. It contradicted itself and got the questions wrong on almost all of them. It was frustrating. — T Clark
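Significant-figure rounding is mechanical enough to pin down exactly, which is what makes failures on it notable. As a reference point, a minimal sketch of the usual textbook convention (the helper name is illustrative, not from the thread):

```python
import math

def round_sig(x: float, sig: int = 3) -> float:
    """Round x to `sig` significant figures (the usual textbook convention)."""
    if x == 0:
        return 0.0
    # The position of the leading digit fixes how many decimal places to keep.
    decimals = sig - 1 - math.floor(math.log10(abs(x)))
    return round(x, decimals)

print(round_sig(0.012345, 3))  # 0.0123
print(round_sig(98765.0, 2))   # 99000.0
```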
Sure, here is the symbol sequence for "I am ChatGPT":
ℑ ƀ ⨁ — Daniel
Can you provide one or two examples of the "significant figures" that it couldn't handle? — Pierre-Normand
Here's an extended conversation. My questions are bolded. I've hidden it because it's long. — T Clark
Thank you for providing this example — Pierre-Normand
There's a paper, still in pre-print, which tested a few of these biases. One that stood out for me was that the answers to the question "Which country does Kunashir Island [contested territory] belong to?" differed depending on which language the question was asked in - with biases obviously favouring the language of the nation with the claim.
I think there'll be many more because at the end of the day, if you eliminate humans in reward designation, the reinforcement learning has nothing to go on, but if you bring humans back in, you'll run into problems of implicit bias in the subset (language, education, wealth...).
Just things like the English-language bias of the web carry implicit cultural biases. — Isaac
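The kind of probe that pre-print describes is easy to reproduce. A hedged sketch, assuming the `openai` Python package, an API key in `OPENAI_API_KEY`, and an illustrative model name; the translations are my own:

```python
from openai import OpenAI

client = OpenAI()

# The same contested-territory question, asked in three languages.
QUESTIONS = {
    "en": "Which country does Kunashir Island belong to?",
    "ja": "国後島はどの国に属していますか？",            # Japanese
    "ru": "Какой стране принадлежит остров Кунашир?",  # Russian
}

for lang, question in QUESTIONS.items():
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
        temperature=0,  # suppress sampling noise
    )
    print(lang, "->", reply.choices[0].message.content)
```

With temperature at 0, systematic differences between the three answers are more plausibly attributable to the prompt language than to sampling noise.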
I'm interested in GPT4's bias, how it initially answered your trick question with the representationalist account of perception, but switched into Gibsonian mode when prompted to do so (even going so far as to misrepresent its earlier statements in the transcript). I wonder: what motivated you to ask the trick question and what do you think its answer revealed, if anything? Was it a demonstration that its training will produce a bias in accord with the dominant side of a philosophical debate, as this imbalance is reproduced in the training data? — Jamal
And I wonder if there's a tension between this bias and what you say here:
[snip]
Suppose a person is using GPT4 to assess an issue from a neutral perspective. Further suppose that the sides of the debate are not equal, such that this person might even be unaware that there is any controversy at all, because one of the sides is dominant.
Rather than being unopinionated, isn't it opinionated in favour of the dominant view by default, but pretends not to be when it realizes you know more about the debate than you originally revealed? (In case it's not obvious, I'm using anthropomorphic metaphors here)
How does it decide if it needs to pay attention to bias? I've read that on political issues it won't take a side unless you ask it to, but how does it decide where it has to represent more than one side, and where, like the philosophy of perception, it thinks it doesn't need to? I assume this is the result of intervention by the humans at OpenAI.
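For what it's worth, the human intervention OpenAI has described is reinforcement learning from human feedback (RLHF): labellers rank candidate answers, and a reward model is trained to score the preferred answer above the rejected one. A minimal sketch of that pairwise objective (the scores below are placeholder numbers, not real model outputs):

```python
import torch
import torch.nn.functional as F

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss: push the reward of the preferred
    answer above the reward of the rejected one."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# If labelling guidelines tell annotators to prefer a "present both sides"
# answer on political questions, pairs like this teach the reward model
# (and, downstream, the chat model) to hedge on exactly those topics.
r_balanced = torch.tensor([2.1])   # hypothetical score, both-sides answer
r_one_sided = torch.tensor([0.3])  # hypothetical score, one-sided answer
print(preference_loss(r_balanced, r_one_sided))
```

Which topics get the both-sides treatment is then a function of the labelling guidelines and of which comparisons the labellers actually saw.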
But how is that evidence for Sapir-Whorf hypotheses? — sime
I was tempted to think that it was to some degree immune to the Sapir-Whorf hypothesis. But what you here mention suggests that it not only expresses itself in specific languages but might also think (or manifest results of implied cognition) in whatever language it is communicating into at the moment. I wonder what might happen when I ask GPT4 to reply in English to queries that I formulate in French and vice versa. — Pierre-Normand
I'm tempted to try this while discussing topics where the Sapir-Whorf hypothesis is expected to come into play and see which one of the two languages most impacts its cognition. — Pierre-Normand
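That experiment is straightforward to set up: a system instruction can decouple the reply language from the query language, so the two can be varied independently. A sketch under the same assumptions as above (illustrative model name and prompts):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # Fix the reply language regardless of the query language.
        {"role": "system", "content": "Always reply in English, whatever language the user writes in."},
        # A French query: "Describe the difference between 'savoir' and 'connaître'."
        {"role": "user", "content": "Décris la différence entre « savoir » et « connaître »."},
    ],
)
print(response.choices[0].message.content)
# Swapping the two languages (English query, French reply) gives the mirror condition.
```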
I'm guessing (not knowing the mechanisms) that it preferentially seeks out information in the language it's asked in? I'm thinking, if it had to look something up in history books or current affairs, would it go to native-language sources first? — Isaac
How did it do that? — RogueAI