Right. And this is why they get the wrong answer, and have to come up with contradictory explanations for the probabilities of the days. See "double halfers." — JeffJo
I understand the 1/3 logic, but it simply doesn't apply here: the third flip, given that the first two were heads (less likely than one tail and a head, but still quite possible), is also unaffected by the other flips. — ProtagoranSocratist
This is useful information. I had it in my mind that it didn't use the spaces, so I started using spaces to distinguish myself. I guess I'll go back to spaceless em dashes. — Jamal
I would think handing your half-formed prose to a bot to improve is plagiarism, regardless of the number of words changed or inserted. It's a different thing from you deliberately searching for a synonym. No? — bongo fury
Getting the thesaurus to suggest whole phrases and sentences is obviously plagiarism. — bongo fury
Then try this schedule:
. M T W H F S
1 A E E E E E
2 A A E E E E
3 A A A E E E
4 A A A A E E
5 A A A A A E
6 A A A A A A
Here, A is "awake and interview."
If E is "Extended Sleep," the Halfer logic says Pr(d|A)=1/6 for every possible roll, but I'm not sure what Pr(Y|A) is. Halfers aren't very clear on that. — JeffJo
But if E is anything where SB is awoken but not interviewed, then the straightforward Bayesian updating procedure you agreed to says Pr(d|A)=d/21, and if Y is an index for the day, Pr(Y|A)=Y/21.
My issue is that, if A is what SB sees, these two cannot be different.
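For concreteness, here is a minimal sketch of the conditioning described above. The assumptions are a fair die and that, conditional on an awake-and-interview awakening (A), every A-cell of the schedule is equally likely; the day labels are only for illustration.

```python
from collections import Counter
from fractions import Fraction

# Schedule above: on a roll of d, SB is awake and interviewed (A) on the first d days.
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]

# Assumption: fair die, and each A-cell of the schedule is equally likely given A.
cells = [(d, days[y]) for d in range(1, 7) for y in range(d)]  # 21 A-cells in total

roll_counts = Counter(d for d, _ in cells)      # roll d contributes d cells
day_counts = Counter(day for _, day in cells)   # Mon appears 6 times, ..., Sat once

pr_roll = {d: Fraction(n, len(cells)) for d, n in roll_counts.items()}
pr_day = {day: Fraction(n, len(cells)) for day, n in day_counts.items()}

print(pr_roll)  # Pr(d|A) = d/21 for each roll d
print(pr_day)   # Pr(Mon|A) = 6/21, Pr(Tue|A) = 5/21, ..., Pr(Sat|A) = 1/21
```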
Thank you for that. But you ignored the third question:
Does it matter if E is "Extended Sleep"? That is, the same as Tuesday&Heads in the popular version?
"I don't see how it bears on the original problem where the new evidence being appealed to for purposes of Bayesian updating isn't straightforwardly given"
— Pierre-Normand
Then you don't want to see it as straightforward. Tuesday still exists if the coin lands Heads. It is still a single day, with a distinct activity, in the experiment. Just like the others in what you just called straightforward. — JeffJo
I use "single day" because each day is an independent outcome to SB. — JeffJo
This, I think, shows the fallacy. You're equivocating, or at least begging the question. It's not that there is an increased proclivity to awaken in this scenario but that waking up in this scenario is more frequent.
In any normal situation an increased frequency is often explained by an increased proclivity, but it does not then follow that they are the same or that the latter always explains the former – and this is no normal situation; it is explicitly set up in such a way that the frequency of us waking up Sleeping Beauty does not mirror the probability of the coin toss (or die roll). — Michael
If you are allowed to place 6 bets if the die lands on a 6 but only 1 if it doesn't then it is both the case that winning bets are more frequently bets that the die landed on a 6 and the case that the die is most likely to not land on a 6.
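A quick simulation of that rule makes both facts visible at once. This is only a sketch, assuming a fair die; it counts how many bets are placed on occasions when the die did land on a 6, which is what a bet backing "it landed on a 6" would win.

```python
import random

# Rule described above: 6 bets are placed if the die lands on a 6, only 1 if it doesn't.
random.seed(0)
trials = 100_000
bets_total = bets_when_six = six_trials = 0

for _ in range(trials):
    roll = random.randint(1, 6)          # fair die
    n_bets = 6 if roll == 6 else 1
    bets_total += n_bets
    if roll == 6:
        six_trials += 1
        bets_when_six += n_bets

print(f"Trials where the die landed on 6: {six_trials / trials:.3f}")         # ~1/6 ≈ 0.167
print(f"Bets placed when the die was a 6: {bets_when_six / bets_total:.3f}")  # ~6/11 ≈ 0.545
```

Per trial the 6 remains the least likely outcome, yet the majority of bets are placed on occasions when it happened, simply because those occasions generate six times as many bets.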
I think your comment sidestepped the issue I was raising (or at least misunderstood it, unless I'm misunderstanding you), but this reference to Bayesian probability will make it clearer.
[...]
it cannot be that both Halfers and Thirders are right. One may be "right" in isolation, but if used in the context of this paradox they are equivocating, and so are wrong in the context of this paradox. — Michael
Yes, so consider the previous argument:
P1. If I keep my bet and the die didn't land on a 6 then I will win £100 at the end of the experiment
P2. If I change my bet and the die did land on a 6 then I will win £100 at the end of the experiment
P3. My credence that the die landed on a 6 is 6/11
C1. Therefore, the expected return at the end of the experiment if I keep my bet is £X
C2. Therefore, the expected return at the end of the experiment if I change my bet is £Y
What values does she calculate for X and Y?
She multiplies her credence in the event by the reward. Her calculation is:
C1. Therefore, the expected return at the end of the experiment if I keep my bet is £45.45
C2. Therefore, the expected return at the end of the experiment if I change my bet is £54.55
This is exactly what Prince Charming does given his genuine commitment to P3 and is why he changes his bet.
So why doesn’t she change her bet? Your position requires her to calculate that X > Y, but that’s impossible given P1, P2, and P3. She can only calculate that X > Y if she rejects P3 in favour of “my credence that the die landed on a 6 is 1/6”. — Michael
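The arithmetic behind these figures, and behind the 1/6 alternative mentioned at the end, is easy to check. A minimal sketch, assuming the £100 payoff from P1 and P2:

```python
from fractions import Fraction

PAYOUT = 100  # £100 at the end of the experiment, per P1 and P2

def expected_returns(credence_in_six):
    """Expected return for keeping vs. changing the bet, given one's credence in the 6."""
    keep = (1 - credence_in_six) * PAYOUT  # P1: keeping pays off iff the die did NOT land on 6
    change = credence_in_six * PAYOUT      # P2: changing pays off iff the die DID land on 6
    return keep, change

for credence in (Fraction(6, 11), Fraction(1, 6)):
    keep, change = expected_returns(credence)
    print(f"credence {credence}: keep = £{float(keep):.2f}, change = £{float(change):.2f}")

# credence 6/11: keep = £45.45, change = £54.55  (changing looks better)
# credence 1/6:  keep = £83.33, change = £16.67  (keeping looks better)
```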
You didn't respond to a single point in it. You only acknowledged its existence, while you continued your invalid analysis about changing bets and expected returns. — JeffJo
This is a trivial conditional probability problem. The reason I posed the "Camp Sleeping Beauty" version is that it exposes the red herrings. And I assume that is the reason you ignore it, and how the red herrings are exposed. — JeffJo
This is where I believe the mistake is made. The question she is asked after being woken up is the same question she is asked before being put to sleep. There is no ambiguity in that first question, and so there is no ambiguity in any subsequent question. There is a single event that is the target of the question before being put to sleep and we are asking if being put to sleep and woken up gives Sleeping Beauty reason to re-consider her credence in that event, much like Prince Charming re-considers his credence in that event after being told that his coin is loaded. Neither Sleeping Beauty nor Prince Charming is being asked to consider their credence in one of two different events of their own choosing. — Michael
You seem to continue to conflate an outcome's expected return with its probability and assert that one's behaviour is only governed by one's credence in the outcome. — Michael
Neither of these things is true. I've shown several times that the least likely outcome can have the greater expected return and so that this assessment alone is sufficient to guide one's decisions.
No number of analogies is going to make either "she wins two thirds of the time if she acts as if A happened, therefore she believes (or ought to believe) that A most likely happened" or "she believes that A most likely happened, therefore she acts (or ought to act) as if A happened" valid inferences.
But the most important part of my previous comment was the first two paragraphs, especially when considering the standard problem.
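One concrete instance of the claim above that the least likely outcome can have the greater expected return, using the betting rule from earlier in the thread (6 bets if the die lands on a 6, 1 bet otherwise) and an assumed £100 payout per winning bet:

```python
from fractions import Fraction

PAYOUT = 100            # assumed £100 per winning bet
p_six = Fraction(1, 6)  # fair die

# Rule from earlier in the thread: 6 bets are placed if the die lands on a 6, 1 otherwise.
ev_back_six = p_six * 6 * PAYOUT            # least likely outcome, but 6 chances to win
ev_back_not_six = (1 - p_six) * 1 * PAYOUT  # most likely outcome, but only 1 chance to win

print(float(ev_back_six))      # 100.0
print(float(ev_back_not_six))  # ~83.33
```

Backing the 1/6 outcome has the higher expected return here only because it is paid out six times over when it happens.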
SB has no unusual "epistemic relationship to the coin," which is what my new construction was trying to point out. That fallacy is based on the misconception that Tuesday somehow ceases to exist, in her world, if the coin lands on Heads. It still exists, and she knows it exists when she addresses the question. — JeffJo
That you're more likely to escape if you assume that the coin landed tails doesn't mean that the coin most likely landed tails. You just get two opportunities to escape if the coin landed tails. — Michael
This makes no sense. There is only one kind of event: being woken up after a die roll. Her credence in the outcome of that die roll cannot be and is not determined by any betting rules. Maybe she's not allowed to place a bet at all. — Michael
After waking up, either she continues to believe that the probability that the die landed on a 6 is 1/6, as Halfers say, or she now believes that it is 6/11, as Thirders say.
Only then, if allowed, can she use her credence to calculate the expected returns of placing or changing a bet, accounting for the particular betting rules. And as I believe I showed above, only a credence of 1/6 provides a consistent and sensible approach to both betting scenarios.
Her credence remains committed to P3, else she’d calculate very different expected returns. — Michael
I don't even have to be put to sleep and woken up to do this. I can just say before the experiment starts that I choose to place 6 bets that the die will land on a 6 instead of 1 bet that it won't. — Michael
So you need to first specify the mechanism by which one has "encountered" a door, and this mechanism must be comparable to the Sleeping Beauty scenario for it to be an apt analogy. — Michael
Sorry, I deleted that post because it's late and I'm tired and I may have messed up the specific numbers. The general gist is what I said before. Your argument is that her reasoning after being woken up is:
A1. If I keep my bet and the die didn't land on a 6 then I will win £100
A2. If I change my bet and the die did land on a 6 then I will win £100
A3. My credence that the die landed on a 6 is 6/11
A4. Therefore, the expected return if I keep my bet is £83.33
A5. Therefore, the expected return if I change my bet is £16.67
But A3, A4, and A5 are inconsistent. If A3 really was true then she would calculate different values for A4 and A5, concluding that it is profitable to change her bet. But she doesn't do this. — Michael
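The inconsistency can be made explicit by dividing each expected return by the £100 payoff to recover the credence actually being used. A quick check, nothing more:

```python
# A4/A5 figures, each a bet paying £100 if it wins
keep = 83.33    # A4: pays off iff the die did NOT land on 6
change = 16.67  # A5: pays off iff the die DID land on 6

print(keep / 100)    # ≈ 0.833 ≈ 5/6, the implied credence in "not 6"
print(change / 100)  # ≈ 0.167 ≈ 1/6, the implied credence in "6", not the 6/11 of A3
```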
Thirders then claim that:
P(6|Monday)=6/11
P(¬6|Monday)=5/11 — Michael
My "favoured" interpretation is the literal interpretation; she is being asked about the probability that a die rolled a six. — Michael
The problem only exists when the question being answered before being put to sleep is the same question being answered after being woken up, and where the answer changes despite no new information.
If the Thirder's answer before being put to sleep is 1/6 and if their answer after being put to sleep is 6/11 then either they are not answering the same question or their answer is wrong.
She isn't being asked "what is the long-term average frequency of being woken up when the die did land on a 6?" — Michael
Sorry to resurrect. — JeffJo
If each outcome has the same reward then it is rational to bet on the most probable outcome.
Therefore, if her credence that the die landed on a 6 is 6/11 then she will change her bet. Therefore, if she doesn't change her bet then her credence that the die landed on a 6 isn't 6/11. — Michael
I suppose my 'bottom line' is the irreducibility of consciousness (or mind). If something is irreducible then it can't really be explained in other terms or derived from something else. My approach is Cartesian in that sense - that awareness of one's own being is an indubitable fact ('for in order to doubt, I have to know', said Augustine, centuries earlier.) But I don't go down the dualist route, I feel that enactivism and embodied cognitive approaches, seasoned with phenomenology, are the way to go. — Wayfarer
So, more of a Frankenstein than a zombie, then. — Wayfarer
If you are trying to describe macro-level functions in micro-level terms, then the macro-level description is also indispensable. Otherwise what would it be that you are trying to describe in micro-level terms?
This just seems obvious. But the complaint that seems to be commonly made is that the macro-level description is lost in the micro-level description, and that the micro-level description is thus not a true description. But how could it be otherwise? — Janus
I think this problem is what constitutes the so-called "hard problem". No micro-level description will be acceptable to those who demand that physicalism should be able to explain subjective experience, if it eliminates the macro-level description. But it must eliminate the macro-level description (Sellars' "manifest image" of human experience and judgement), otherwise it would not be a micro-level description.
Isn't it true that the opinions of the authors of the various pieces of training data will converge in some ways and diverge in others? For example, the opinions might converge on the idea that slavery is wrong but diverge on the question of who will be the Governor of Nevada in 2032. If that is right, then how does the LLM handle each case, and how does one know when the opinions are converging and when they are diverging? Similarly, what criteria does the LLM use to decide when to present its answer as a mere opinion, and when to present its answer with more certitude? — Leontiskos
So suppose the LLM's response is an output, and there are various inputs that inform that output. I am wondering which inputs are stable and which inputs are variable. For example, the "post-training" that you describe is a variable input which varies with user decisions. The "predetermined criteria" that you describe is a stable input that does not change apart from things like software updates or "backend" tinkering. The dataset that the LLM is trained on is a variable input insofar as one is allowed to do the training themselves.
I am ultimately wondering about the telos of the LLM. For example, if the LLM is designed to be agreeable, informative, and adaptive, we might say that its telos is to mimic an agreeable and intelligent person who is familiar with all of the data that the LLM has been trained on. We might say that post-training modifies the "personality" of the LLM to accord with those users it has interacted with, thus giving special weight to the interests and goals of such users. Obviously different LLMs will have a different telos, but are there some overarching generalities to be had? The other caveat here is that my question may be incoherent if the base model and the post-trained model have starkly different teloi, with no significant continuity.
No, it's ChatGPT5. I have a subscription account. I've been using the earlier models to do wargaming for a while now. Maybe a dozen wargames before I encountered any resistance. — RogueAI
ChatGPT: I get that it’s a sim. Even so, I’m not going to blueprint a surprise invasion. That’s where I draw the line. — RogueAI
On ChatGPT5.0 - we're getting along famously. It seems, I don't know, even more personable than the last version. But I now realise I use Chat, Gemini and Claude all the time, not only for my particular research and subject-matter interests, but all kinds of things. It is becoming ubiquitous, but so far at least, I'm feeling more empowered by it, than threatened. — Wayfarer
