Hi Leontiskos, good post. Thanks, that helps. You’ve raised some complex and difficult issues here. — J
Yes, it's a bit haphazard, but it's one way in.
:smile:
Here, “indifferent” is being used in the sense of having no preference, overall, between truth and falsity. Aside from a certain former president, I agree that it’s difficult to imagine such a person doing this continually. But I don’t read you as describing a person who doesn’t know the difference between truth and falsity. Indeed, you speak of them as intending truth when speaking truthfully, and falsity when not. So that’s one sort of indifference: I can tell X from Y but have no preference or allegiance or “ordering to” one over the other. — J
Right, it's a bit like the roulette wheel which is indifferent to odd or even outcomes.
Let's first look at this idea of indifference. You are presenting the idea that there are two different kinds of indifference at play. The first is statistical or preferential indifference, where outcomes provide no evidence of a pattern. The second is indifference to two options in the sense of ignorance of those options. If I attend to truth as often as falsity, then I am indifferent to them in the first sense. If I do not know truth from falsity, then I am indifferent to them in the second sense (and probably also in the first).
They would consider foundational principles like the principle of non-contradiction false as often as they considered them true. — Leontiskos
Here, I think, “indifference” is being applied in a new sense. Now the speaker doesn’t know the difference. They’re not merely indifferent as to their choice; they can’t tell them apart. Here I’m with you and Aristotle and Nagel: I can’t believe in a person who can explain the law of non-contradiction but not acknowledge its validity. — J
But I am using it in that same statistical sense: "They would consider [them] false as often as they considered them true." You speak of "validity," but I think we should speak of truth. You presumably believe the principle of non-contradiction is *true*, and that's my point. We know that it is true, and we believe it on that basis.
Perhaps you are thinking, "But it's not *possible* to believe the principle of non-contradiction is false, and therefore we are ignorant of such a counterfactual." I would say that in one way that's the point: what is necessarily false is never possible; it's just easier to see in this instance. In another way, people can and do act contrary to the principle of non-contradiction, but they end up paying a price of one sort or another and often amend their thinking or acting.
It’s not that one might “just as well” desire to be unhealthy because one is indifferent to health, or can’t tell the difference between good health and illness. Rather, one has made a choice to value something else more. — J
I think you are pointing to a hierarchy: "I prefer to be healthy more than I prefer to be unhealthy, but I prefer the taste of candy more than I prefer to be healthy." I don't have a problem with this, but I don't want to descend into your more complicated questions just yet...
My point about indifference is meant to exclude the option you prefer, "You ought to believe this sound argument if you care about such things as holding beliefs that are based in [truth]." It seems to me that the prima facie reading of that claim (call it "(1)") implies that we are indifferent to truth. "Believe this sound argument if you like truth. Disbelieve it if you don't. It's up to you." That notion of indifference is my first target.
Now I'm not really sure how to address the rest of your post, because there are a lot of interesting things to respond to. I will probably have to leave some for another day and return to them later. But I think this is the general argument you are making (again taking (1) and (2) from above):
"It depends on the case. Some cases align with (1) and some align with (2). The former are indifferent or hypothetical, whereas the latter are necessitated."
Now I see a spectrum rather than an either-or, where there is a middle ground between (1) and (2), but that should be obvious. In any case, my primary response here is that it is not legitimate to use this case-based logic. Consider our initial inquiry: “This is a sound argument, therefore you ought to believe it.” This is a uniform claim which applies to all cases of putatively sound argument; it is not a bifurcated claim about two different case-models. The simpler formulation is even more obvious: "This is true, therefore you ought to believe it." This is a uniform claim intended to apply to both sorts of cases (e.g. self-evident truths and obscure truths, necessary truths and contingent truths, etc.).
...So I'll just leave it there for now so that you can respond, lest I have assumed too much.
Another central argument you present is the idea that, if it is not necessitated, then it must be hypothetical (if it is not (2) then it must be (1)). Obviously this will need to be addressed, but for the sake of length I will just make a preliminary observation:
The nonbeliever can always reply, “I quite agree that humans have evolved this way, and I certainly practice this most of the time. However, I am not hardwired to do so in non-apodictic truth-claim situations, and in this case, I will choose not to.” — J
I concede that the nonbeliever *can* do this. “If duties could not be ignored or argued against, then they would not exist.” Similarly, people *can* deny the principle of non-contradiction, or that 2+2=4. Our argument surely must ride on something more than what human beings are capable of doing.
To say a tiny bit more, a normative case for truth must be more than evolutionary. I think Nagel's point holds true in light of our ordering to truth; the only difference is that the "gravity" of a self-evident truth is so strong that it cannot be ignored. But on the other hand there are people—even (especially!) professional philosophers—who will attempt to deny things like the principle of non-contradiction. Maybe that fact presents the more productive route for our discussion, because you seem to agree with Nagel that such people are not within their rights to do such a thing.
I want to fess up to something that has really started to puzzle me, though. I’m starting to think that the whole “you ought to believe X” thing is kind of unreal, a philosopher’s thought-experiment. What exactly would it mean to “not believe” something, if you also thought it was true? What are the actual examples of this? — J
I think at a very basic level we are simply considering instances of disagreement and/or persuasion, which are common. For example:
- "You ought to support the war in Ukraine."
- "You ought to vote for so-and-so."
- "You ought to accept a middle ground between (1) and (2)."
- "You ought to get the answer of 67 for this math problem."
- "You ought to accept that the Earth is not flat."
- "You ought to propose to your girlfriend."
- "You ought to abide by the speed limit."
(It is of course possible that this is a haphazard mixture of the speculative and the practical, and maybe that's just another can of worms. But we usually think of practical advice as being grounded in truth. For example, the first example is predicated on the claim that Ukraine is deserving of support, the second on the claim that so-and-so is the best candidate, etc. Those are the truth-claims that end up being argued about.)
As you say, the “ought” question is huge and deserves its own thread/book/library. So does Kant’s view about imperatives. I appreciate the light you shed on the possible nuances between categorical and hypothetical oughts, and for what it’s worth, I find some nuances in Kant as well. I’ll watch for the next Kantian ethics discussion. — J
Sounds good. I realize we are only scratching the surface, and that the questions you are asking are fairly obvious and deserve answers. As for Kant, I am familiar with his *Groundwork for the Metaphysics of Morals* and his treatise on lying, but I haven't read his other ethical works.