You mean thanking him! :wink: — Janus
Pierre-Normand might know - would someone who has had a different history with ChatGPT receive a similarly self-reinforcing answer? — Banno
I realized that when I see the quoted output of an LLM in a post I feel little to no motivation to address it, or even to read it. If someone quotes LLM output as part of their argument I will skip to their (the human's) interpretation or elaboration below it. It's like someone else's LLM conversation is sort of dead, to me. I want to hear what they have built out of it themselves and what they want to say to me. — Jamal
Thinking and Being by Irad Kimhi. — Paine
I think this is a false equivalence. Drawing conclusions about AI based on its code is not the same as drawing conclusions about humans based on theories of neurophysiology. The theories of neurophysiology simply do not provide the deductive rigor that computer code does. It is incorrect to presume that drawing conclusions about a computer program based on its code is the same as drawing conclusions about a human based on their neurophysiology. Indeed, the whole point here is that we wrote the code and built the computer program, whereas we neither wrote nor built the neurophysiology—we do not even know whether neurophysiology and code are truly analogous. Art and science seem to be conflated here, or at least this is the prima facie conclusion until it can be shown why AI has somehow gone beyond artifice. — Leontiskos
So an example of the sort of answer I would want would be something like this: "We build the code, but the output of that code builds on itself insofar as it is incorporating inputs that we did not explicitly provide and we do not fully comprehend (such as the geography that a map-making AI surveys)." So apparently in some sense the domain of inputs is unspecified, and because of this the output is in some sense unpredictable.
Oh, never mind. Of course if she knew it was Monday she wouldn't say 1/3, but what if she was off... and Tuesday comes around and it changes to 0? The chance to change or update her belief still exists if the coin landed tails and she is asked twice. On Monday she does not know for certain whether it was heads or tails; she can only give her degree of belief in heads. She knows nothing on Wednesday, when the experiment ends; tomorrow she will either be awakened or sleep through the day. She can still guess reasonably as a participant, I think? I don't know, perhaps I am in over my head here... again! — Kizzy
The idea of getting them to write or produce content which I can then paraphrase, of having them polish my writing, or of using their arguments is anathema to me. — Janus
Since SB doesn't remember Monday, she cannot feel the difference, but the structure of the experiment KNOWS the difference. So if she is asked twice, Monday and Tuesday, that only happens with a tails outcome. Even without memory, her credence may shift, because the setup itself is informative. — Kizzy
I do think this is related to the Monty Hall problem, where information affects probabilities. Information does affect probabilities, you know. Monty Hall is indeed easier to understand when there are a lot more doors (just assume there are one million of them). So you make your pick from one million doors, and then the game-show host leaves just one other door closed and opens all the other 999,998 doors. Do you think it's really a fifty-fifty chance then? Do you think you were so lucky that you chose the right door out of a million?
If she knows the experiment, then it's the 1/3 answer. In Monty Hall it's better to change your first option as the information is different, even if one could at first think it's a 50/50 chance. Here it's all about knowing the experiment. — ssu
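A quick Monte Carlo sketch of ssu's million-door variant makes the asymmetry vivid (the door count and trial count are arbitrary choices for illustration): sticking wins only when the first pick happened to be right, switching wins in every other case.

```python
import random

def monty_hall_trial(n_doors=1_000_000, switch=True):
    """One round: the host opens every losing door except one after the first pick."""
    prize = random.randrange(n_doors)
    pick = random.randrange(n_doors)
    if switch:
        # The host leaves closed either the prize door (if you missed it)
        # or some arbitrary other door (if you happened to pick the prize).
        pick = prize if pick != prize else (pick + 1) % n_doors
    return pick == prize

def estimate(switch, trials=10_000, n_doors=1_000_000):
    return sum(monty_hall_trial(n_doors, switch) for _ in range(trials)) / trials

print("stay:  ", estimate(switch=False))   # ~ 1/1,000,000
print("switch:", estimate(switch=True))    # ~ 999,999/1,000,000
```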
In this case it's a bit blurred, in my view, by saying that she doesn't remember whether she has already been woken up. That doesn't mean much if she can trust the experimenters. But in my view it's the same thing. Does it matter when she is presented with the following picture of events?
She cannot know exactly what day it is, of course. She can only believe that the information above is correct. Information affects probabilities, as in the Monty Hall problem.
What if these so-called scientists behind the experiment are perverts and keep drugging the poor woman for a whole week? Or a month? If she believes that the experiment ended on Wednesday, but she cannot confirm that it is Wednesday, then the experiment could have been kept going for a week. Being drugged for a week or longer will start affecting your health dramatically.
Now I might have gotten this wrong, I admit. But please tell me then why I got it wrong.
he "halfers run-centered measure" is precluded because you can't define, in a consistent way, how or why they are removed from the prior. So you avoid addressing that. — JeffJo
Here's the 40 rounds, if you are interested — Banno
I just went off on a bit of a tangent, looking at using a response as a prompt in order to investigate something akin to Hofstadter's strange loop. ChatGPT simulated (?) 100 cycles, starting with “The thought thinks itself when no thinker remains to host it”. It gradually lost coherence, ending with "Round 100: Recursive loop reaches maximal entropy: syntax sometimes survives, rhythm persists, but semantics is entirely collapsed. Language is now a stream of self-referential echoes, beautiful but empty." — Banno
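For anyone who wants to try the same experiment, here is a minimal sketch of that feedback loop, assuming the OpenAI Python SDK with an API key in the environment; the model name, round count, and seed sentence are placeholders, and nothing guarantees the same drift toward incoherence that Banno saw.

```python
# Feed each response back in as the next prompt, for a fixed number of rounds.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
text = "The thought thinks itself when no thinker remains to host it"

for round_no in range(1, 101):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",                                # placeholder model
        messages=[{"role": "user", "content": text}],
    )
    text = reply.choices[0].message.content
    print(f"Round {round_no}: {text[:80]}...")
```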
So a further thought. Davidson pointed out that we can make sense of malapropisms and nonsense. He used this in an argument not too far from Quine's Gavagai, that malapropisms cannot, by their very nature, be subsumed and accounted for by conventions of language, because by their very nature they break such conventions.
So can an AI construct appropriate sounding malapropisms?
Given that LLMs use patterns, and not rules, presumably they can. — Banno
They are not trained to back track their tentative answers and adjust them on the fly. — Pierre-Normand
Over the weekend, almost seven million people in several thousand communities here in the US got together to celebrate our anniversary...among other things. — T Clark
Okay, fair enough. I suppose I would be interested in more of those examples. I am also generally interested in deductive arguments rather than inductive arguments. For example, what can we deduce from the code, as opposed to inducing things from the end product as if we were encountering a wild beast in the jungle? It seems to me that the deductive route would be much more promising in avoiding mistakes. — Leontiskos
Surprisingly precocious. — Banno
So another step: Can an AI name something new? Can it inaugurate a causal chain of reference? — Banno
(For my part, I'm quite content to suppose that there may be more than one way for reference to work - that we can have multiple correct theories of reference, and choose between them as needed or appropriate.)
A more nuanced view might acknowledge the similarities in these two accounts. While acknowledging that reference is inscrutable, we do manage to talk about things. If we ask the AI the height of Nelson's Column, there is good reason to think that when it replies "52m" it is talking about the very same thing as we are - or is it that there is no good reason not to think so? — Banno
So are you saying that chatbots possess the doxastic component of intelligence but not the conative component? — Leontiskos
I think they have motivations, just like a dog is motivated to run after a car, but their motivations aren't autonomous since they seldom pause to question them. — Pierre-Normand
It seems to me that what generally happens is that we require scare quotes. LLMs have "beliefs" and they have "motivations" and they have "intelligence," but by this one does not actually mean that they have such things. The hard conversation about what they really have and do not have is usually postponed indefinitely.
I would argue that the last bolded sentence nullifies much of what has come before it. "We are required to treat them as persons when we interact with them; they are not persons; they can roleplay as a person..." This is how most of the argumentation looks in general, and it looks to be very confusing.
An interesting direction here might be to consider whether, or how, Ramsey's account can be applied to AI.
You have a plant. You water it every day. This is not a symptom of a hidden, private belief, on Ramsey's account - it is your belief. What is given consideration is not a hidden private proposition, "I believe that the plant needs water", but the activities in which one engages. The similarities to both Ryle and Wittgenstein should be apparent. — Banno
Ramsey then looks for the points of indifference; the point of inaction. That's the "zero" from which his statistical approach takes off. Perhaps there's a fifty percent chance of rain today, so watering may or may not be needed. It won't make a difference whether you water or not.
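A toy calculation, with payoffs invented purely for illustration, shows how that indifference point works on this reading: stipulate what watering gains or wastes in each state, and the credence at which acting and not acting come out even is the "zero" just mentioned.

```python
def expected_utility(p_rain, u_if_rain, u_if_dry):
    """Expected utility of an action, given a credence in rain and a payoff in each state."""
    return p_rain * u_if_rain + (1 - p_rain) * u_if_dry

# Illustrative payoffs (assumed, purely for the sketch): watering wastes a unit
# of effort if it rains anyway, and gains a unit (plant saved, net of effort)
# if it stays dry; doing nothing changes nothing either way.
def eu_water(p):   return expected_utility(p, u_if_rain=-1, u_if_dry=+1)
def eu_nothing(p): return expected_utility(p, u_if_rain=0,  u_if_dry=0)

# The indifference point: 1 - 2p = 0, i.e. p = 0.5. At a fifty percent chance
# of rain, watering or not watering makes no expected difference.
for p in (0.3, 0.5, 0.8):
    print(p, eu_water(p), eu_nothing(p))
```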
There seem to be two relevant approaches. The first is to say that an AI never has any skin in the game, never puts its balls on the anvil. So for an AI, every belief is indifferent.
The second is to note that if a belief is manifest in an action, then since the AI is impotent, it again has no beliefs. That's not just a manifestation of the AI's not being capable of action. Link a watering system to ChatGPT and it still has no reason to water or not to water.
She is asked for her credence. I'm not sure what you think that means, but to me it means belief based on the information she has. And she has "new information." Despite how some choose to use that term, it is not defined in probability. When it is used, it does not mean "something she didn't know before," it means "something that eliminates some possibilities." That usually does mean something about the outcome that was uncertain before the experiment, which is how "new" came to be applied. But in this situation, where a preordained state of knowledge eliminates some outcomes, it still applies. — JeffJo
Why is that a puzzle to you? A book doesn't do philosophy but we do philosophy with it. The library doesn't do philosophy but we do philosophy with it. The note pad isn't philosophy yet we do philosophy with it. Language isn't philosophy yet we do philosophy with it. — Metaphysician Undercover
But if you really want to use two days, do it right. On Tails, there are two waking days. On Heads, there is a waking day and a sleeping day. The sleeping day still exists, and carries just as much weight in the probability space as any of the waking days. What SB knows is that she is in one of the three waking days. — JeffJo
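A Monte Carlo sketch of exactly that four-cell picture (coin times day, with Heads-Tuesday as the sleeping day) bears out the arithmetic: conditioning on being awake leaves three equally weighted cells, only one of which is Heads.

```python
import random
from collections import Counter

runs = 100_000
cells = Counter()   # (coin, day, awake) -> count; one coin-day cell per run

for _ in range(runs):
    coin = random.choice(["Heads", "Tails"])
    for day in ("Monday", "Tuesday"):
        awake = not (coin == "Heads" and day == "Tuesday")   # Heads-Tuesday is the sleeping day
        cells[(coin, day, awake)] += 1

total_awake = sum(n for (c, d, awake), n in cells.items() if awake)
heads_awake = sum(n for (c, d, awake), n in cells.items() if awake and c == "Heads")
print(heads_awake / total_awake)   # ~ 1/3: share of waking cells in which the coin shows Heads
```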
Oh? You mean that a single card can say both "Monday & Tails" and "Tuesday & Tails?" Please, explain how. — JeffJo
"What is your credence in the fact that this card says "Heads" on the other side? This is unquestionably 1/3.
"What is your credence in the fact that the coin is currently showing Heads?" This is unquestionably an equivalent question. As is ""What is your credence in the fact that the coin landed on Heads/i]?"
I realize that you want to make the question about the entire experiment. IT IS NOT. I have shown you over and over again how it leads to contradictions. Changing the answer between these is one of them.
"Picking "Monday & Tails" guarantees that "Tuesday & Tails" will be picked the next day, and vice versa. They are distinct events but belong to the same timeline. One therefore entails the other." —Pierre-Normand
And how does this affect what SB's credence should be, when she does not have access to any information about "timelines?"
We built AI. We don’t even build our own kids without the help of nature. We built AI. It is amazing. But it seems pretentious to assume that just because AI can do things that appear to come from people, it is doing what people do. — Fire Ologist
Nice. It curiously meets a meme that describes AI as providing a set of words that sound like an answer. — Banno
The reason for not attributing beliefs to AI must lie elsewhere. — Banno
The puzzle is how to explain this. — Banno
So do we agree that whatever is conative in an interaction with an AI is introduced by the humans involved? — Banno
Neither does an AI have doxa, beliefs. It cannot adopt some attitude towards a statement, although it might be directed to do so.
This anecdote might help my case: At another department of the university where I work, the department heads in their efforts to "keep up with the times" are now allowing Master's students to use AI to directly write up to 40% of their theses. — Baden
I've added the note: NO AI-WRITTEN CONTENT ALLOWED to the guidelines and I intend to start deleting AI written threads and posts and banning users who are clearly breaking the guidelines. If you want to stay here, stay human. — Baden
An AI cannot put its balls on the anvil.
I think this is a very good objection. — Banno
Yes, I have tried to argue this point several times. A rational person's credence in the outcome of the coin toss is unrelated to the betting strategy that yields the greater expected return in the long run, which is why any argument to the effect of "if I bet on Tails then I will win 2/3 of my bets, therefore my credence that the coin landed on Tails is 2/3" is a non sequitur. The most profitable betting strategy is established before one is put to sleep, when one's credence is inarguably 1/2, which shows the disconnect. — Michael
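The two numbers being separated here can be made explicit with a short simulation (figures approximate): the coin lands Tails in about half of the runs, while a standing bet on Tails wins about two thirds of the wagers, simply because a Tails run places two wagers. Whether the second frequency fixes her credence is precisely what is in dispute.

```python
import random

runs = 100_000
tails_runs = 0      # how often the coin actually lands Tails (per run)
bets_placed = 0     # one "bet on Tails" per awakening
bets_won = 0

for _ in range(runs):
    coin = random.choice(["Heads", "Tails"])
    if coin == "Tails":
        tails_runs += 1
    awakenings = 2 if coin == "Tails" else 1   # Tails: Monday and Tuesday; Heads: Monday only
    for _ in range(awakenings):
        bets_placed += 1
        if coin == "Tails":
            bets_won += 1

print("Tails frequency per run:     ", tails_runs / runs)      # ~ 1/2
print("Tails bet win-rate per wager:", bets_won / bets_placed)  # ~ 2/3
```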
