• Is Philosophy the "Highest" Discourse?
    Gauss, who termed mathematics as that ― jgill

    Oh you're right! Well, never mind then. I'll go with Gauss.
  • In Support of Western Supremacy, Nationalism, and Imperialism.
    war ― Bob Ross

    P.S.

    Forgot to ask why we should spend blood and treasure liberating members of the out-group. How is that putting in-group needs first?
  • In Support of Western Supremacy, Nationalism, and Imperialism.
    The in-group is more important than the out-group. Each group has to protect its own viability first and foremost. ― Bob Ross

    I don't claim to understand your "moral realism," so maybe you can help me out here.

    You have suggested we have duty to liberate the citizens of North Korea. Is that purely because we believe we have the status of moral agents, and a duty to carry out acts we deem moral? Or is it because North Koreans also have the status of being moral agents, and that's why we have a duty to them?

    The answer to those questions would clarify for me whether we are supposed to consider North Koreans members of the in-group or the out-group. If they are moral agents toward whom we might have a duty, that sounds like we ought to consider them in-group. But if they are out-group, why would we have any duty to liberate them? ― I'll leave you to decide whether you want to defend the idea that they are out-group relative to "us" ― the West or whatever ― and not moral agents at all.

    When you've decided you don't understand the question, I'll happily rephrase it.
  • Is Philosophy the "Highest" Discourse?
    philosophers have no business offering opinions within a scientific discourse ― J

    Dope.

    Their super-power, if any, lies in their ability to defend themselves from challenges that would redirect their discourse into other disciplines. ― J

    I'm with you ― I believe ― in thinking this doesn't sound all that impressive.

    (1) Who bothers to challenge philosophy?

    (2) Are you sure that no other discipline has this "super-power"? I suspect every discipline does, even in good faith. (If all you've got is a hammer, ...)

    (3) Are you sure this is anything more than a dirty rhetorical trick? Another "heads I win, tails you lose" sort of thing? ― Distinguished from (2) because you don't even need a discipline, just the willingness to treat discussion as competition.

    *

    I'm not bringing up science just as boosterism, but because I was thinking that the tradition of the "top-level" idea casts philosophy as specifically "the queen of the sciences" ― not as something set over against science, as it is so often seen these days. Even (on shaky ground here) something like Aristotle's "first philosophy" would embrace physics, biology, psychology, ethics, and politics as the rest, right?

    Anyway ― I would distinguish between a view of philosophy as (either) the highest (or the most fundamental) science, and a view that philosophy holds some particular and special place precisely by not being science.
  • Is Philosophy the "Highest" Discourse?
    Why is it the case that philosophical discourse can question, and reflect upon, the discourse of physics, but the reverse is not the case? ― J

    Because philosophical discourse is more presumptuous?

    Scientific journals are peer-reviewed. That's not a guarantee that they publish only truth, of course. It's just a first nibble by the rest of the scientific community, because it is ultimately this community which will take up, build upon, pass by, propose alternative theories, replicate or fail to replicate experimental results, and so on. A paper is never the end, just a contribution.

    But are philosophers supposed to be some sort of super-peers? Should journals hold off publication until they've "checked with a philosopher"?
  • A -> not-A
    Are you claiming that knowledge does not exist outside mathematics? I don't see why "the elements being less well-defined" results in any serious problem here. ― Leontiskos

    While I think it's defensible to say that "knowledge does not exist outside mathematics," I don't think I have to, to show the difficulty.

    Mathematical knowledge, to borrow Williamson's term, is "luminous": that is, when you know that P, you know that you know that P. That may put it too strongly: there are cases where you think you have a proof, but you don't; there are cases where someone has provided a proof, but it's complex enough that it takes a while for people to confirm that it is a proof. Nevertheless, there is an alignment of the process of knowledge production and knowledge justification, and a single standard governs both.

    Outside of mathematics, there are no standards of either that garner universal approval, much less guarantee that production and justification are measured by the same standard. We may have knowledge, but in general we cannot know when we do and when we don't, and thus we cannot know when our valid arguments are sound and when they are not.

    I'll throw in a side issue that emphasizes the difference. It is a wise saying that experiments which are not performed have no results. And yet, in mathematics your hypotheses can be so sharply defined that they do: a difficult theorem like Fermat's last theorem might be solved piecemeal ― you prove that if lemma X were the case, then you could prove theorem T, and then you look for ways to prove X. That is, in mathematics, it's not that unusual to prove a conditional, without knowing whether the antecedent is in fact true. I think the independence results in set theory are also different from the sort of thing we can ever hope to achieve in empirical investigations.

    I'm not in love with this story. It would be nice to retreat instead to some sort of common sense that of course we know things and deduce more things in everyday life. Sure. But part of that common sense is also that there are exceptions, we turn out not to know what we think we do, we turn out not to be justified in making the inferences we do. So I end up back in the same place, because we already have a name for this sort of rule that generally works but has exceptions: that's probability. ― Philosophical attempts to close the gap and specify, in some vaguely scientific way, exactly the criteria for knowledge and inference, so that we can be on ground just as solid outside of mathematics, have not only universally failed, but there are reasons to think they must fail.

    I do not see a way around making some kind of distinction here. Either only mathematics (and logic) gets knowledge and deduction ― and everything else gets rational belief and probability ― or there are two kinds of knowledge, and two kinds of deduction. Pick your poison.

    Mathematical knowledge and empirical knowledge differ so greatly they barely deserve the same name. Obviously the history of philosophy includes almost every conceivable way of either affirming or denying that claim.
  • A -> not-A
    Given the explanation, can we deduce that Billy is not at work? ― Leontiskos

    Deduction should allow you to pass, by valid inference, from what you know to what you did not know. Yes?

    In mathematics, these elements are well-defined. What do we know? What has been proven. How do we generate new knowledge? By formal proof.

    Neither of these elements is so well-defined outside mathematics (and formal logic, of course). There is no criterion for what counts as knowledge, and probably cannot be. And that defect cannot be made up by cleverness in how we make inferences.

    I see no reason to question the traditional view. "Our reasonings concerning matters of fact are merely probable," as the man said. There is deduction in math and logic; everyone else has to make do with induction, abduction, probability.
  • In Support of Western Supremacy, Nationalism, and Imperialism.


    I hope you're enjoying your visit to Earth, but you should really check with your parents before interacting with the natives.
  • In Support of Western Supremacy, Nationalism, and Imperialism.


    What do you dream of, Bob? Do you dream of peace and plenty? Or do you dream of making people listen to you?
  • In Support of Western Supremacy, Nationalism, and Imperialism.


    I've always loved this one:

    The Send-Off
    By Wilfred Owen

    Down the close, darkening lanes they sang their way
    To the siding-shed,
    And lined the train with faces grimly gay.

    Their breasts were stuck all white with wreath and spray
    As men's are, dead.

    Dull porters watched them, and a casual tramp
    Stood staring hard,
    Sorry to miss them from the upland camp.
    Then, unmoved, signals nodded, and a lamp
    Winked to the guard.

    So secretly, like wrongs hushed-up, they went.
    They were not ours:
    We never heard to which front these were sent.

    Nor there if they yet mock what women meant
    Who gave them flowers.

    Shall they return to beatings of great bells
    In wild trainloads?
    A few, a few, too few for drums and yells,
    May creep back, silent, to still village wells
    Up half-known roads.


    ――――
    "like wrongs hushed-up" ― oh, he could write.
  • In Support of Western Supremacy, Nationalism, and Imperialism.


    Bob, Bob, Bob. Your position is such a jumble.

    Maybe you thought to yourself, why don't we do more to oppose tyranny throughout the world? Why do we allow people to be oppressed by their own governments?

    -- But, interrupted skeptical Bob, on what grounds would we oppose tyranny?

    Democracy! Our values!

    But then you realized this is trouble: a core democratic value is tolerance.

    Which is fine, you thought, except people take it too far, allow themselves to be paralyzed by a namby-pamby cultural relativism.

    We've become like people who *say* they have religion, but don't want to convert anyone.

    Well do we believe in democracy or don't we? If we do, let's act like it! Let's go convert some mofos.

    -- Just because we believe? asks skeptical Bob.

    Hell yeah! We believe, and if we really believe that's enough.

    And if others believe something else, let them try too. Every country should act on whatever it believes, because ..., because ...

    Because we can't give in ...

    to relativism.
  • In Support of Western Supremacy, Nationalism, and Imperialism.


    Two questions.

    1. How do you impose democracy upon a people by force?

    2. Should all nations think this way? Should all of them declare war upon all the others to impose their values upon other nations by force?
  • A -> not-A
    let the negation of C(P) be N(P) ― TonesInDeepFreeze

    Yeah that's an interesting idea!

    I guess we could assume that nothing in N(P) would follow from anything in C(P), because follow-from would already have that sort of "transitive" property that we're used to.

    I've tried to work out some consequences of this, but it's still not clear to me. (I had a whole lot of ideas that just didn't work.) It's interesting though.

    Much of classical math existed before the introduction of set theory. ― jgill

    Yeah, I get that. Looking at the reconstruction of math using set theory is one way to hunt for the difference between math and logic, that's all. Maybe not the most interesting way.
  • A -> not-A
    We have that. ― TonesInDeepFreeze

    We already have: ― TonesInDeepFreeze

    We define consistency from provability. ― TonesInDeepFreeze

    Sorry. Obviously I haven't managed to make clear what I'm trying to do here, probably because I've been writing a bunch of stuff I ended up scrapping, so I probably think I've said things I haven't.

    I'm trying to figure out how we could bootstrap logic or reasoning, informal at first, of course, what we would need to do that, what the minimum is we could start with that could grow into informal reasoning. I'm not proposing an alternative to the logic we have now. So

    Why is that lacking? ― TonesInDeepFreeze

    is not the kind of question I was addressing at all.

    For example, my last post suggested a way you might leverage a primitive understanding of consequence or "follows from" to piece together negation. I don't know if that's plausible, but it hadn't occurred to me before, so that's at least a new idea.

    How do you know there is only one thing? ― TonesInDeepFreeze

    At first probably not! But you can see how a bunch of ideas that all point to "not sunny" might eventually get you there.

    And as I noted, there's some reason to think other great apes already have the ability to reason about pairs of near opposites, even without an abstract concept of negation. I was imagining a way some sense of consequence might get you from such pairs to genuine negation.

    Like I said, all very speculative, and probably not worth your time.
  • A -> not-A
    I don't know what you mean by "minimal inconsistency guard". ― TonesInDeepFreeze

    Roughly that the LNC could enforce a narrow, specialized sense of consistency ― that P and ~P are inconsistent, for any P ― and this would be enough to bootstrap a more general version of inconsistency that relies on consequence, so that with a fuller system you can say A and B are inconsistent if A → C and B → ~C. It's a bootstrapping technique; start with special cases and leverage those to get the general. Special cases are easier, cheaper, in this case don't require additional resources like consequence.
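    Since this bootstrapping talk is getting abstract, here is a toy sketch in Python of the move I mean. Everything in it (the names, the consequence relation) is invented purely for illustration; it's a cartoon of the idea, not a proposal:

```python
# A toy sketch of the "minimal inconsistency guard" idea: the only
# primitive clash is between P and ~P (the bare LNC), and the general
# notion of inconsistency is bootstrapped from that plus a consequence
# relation. All names and data here are invented for illustration.

def negate(p: str) -> str:
    """Syntactic negation: '~P' for P, and P for '~P'."""
    return p[1:] if p.startswith("~") else "~" + p

def primitively_inconsistent(a: str, b: str) -> bool:
    """The bare LNC: a claim clashes only with its own negation."""
    return a == negate(b)

def inconsistent(a: str, b: str, consequences: dict) -> bool:
    """General inconsistency, derived: A and B are inconsistent if
    something that follows from A primitively clashes with something
    that follows from B. (Every claim trivially follows from itself.)"""
    from_a = consequences.get(a, set()) | {a}
    from_b = consequences.get(b, set()) | {b}
    return any(primitively_inconsistent(c, d) for c in from_a for d in from_b)

# Invented consequence relation: "sunny" entails "warm",
# "raining" entails "~warm".
follows = {"sunny": {"warm"}, "raining": {"~warm"}}

print(inconsistent("sunny", "raining", follows))  # clash via warm / ~warm
print(inconsistent("sunny", "windy", follows))    # no clash anywhere
```

    The point is just the shape of the thing: the primitive check only ever sees P against ~P, and all the generality comes from chasing consequences.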

    It's probably all too speculative to do much with. Most of the ideas I've had in the last few minutes just recreate the fact that you can build the usual collection of logical constants with negation and one of the others (unless you want to start with the Sheffer stroke). If I were to say, maybe we need both consistency and consequence as core ideas ― that's almost all that would amount to.

    I was thinking, though, that there might be a way to get negation out of a primitive sense of consequence ― not the material conditional, just an intuition of what follows from what ― something like this: any given idea (claim, thought, etc.) has a twin that is the one thing guaranteed under no circumstances to follow from it, and that would be its negation. You could define ~P roughly by partitioning the possible consequents into what can and can't follow from P, but the two buckets are different: what can follow from P might initially be empty, who knows; but what can't never starts empty.

    If, like the gorillas, you didn't already have the abstract concept of negation, the bucket we're going to use to define negation would probably be full of stuff ― given any P, that bucket will have stuff that ~P follows from, in addition to ~P itself, maybe, sometimes. Example: if P is "It's sunny", our bucket of things that don't follow includes "It's cloudy", "It's nighttime", "It's raining" ― all different things that "It's not sunny" follows from.

    Don't spend any time trying to make sense of all this. It's just me thinking on the forum again.
  • A -> not-A

    A man posts a vague and somewhat mysterious advertisement for a job opening. Three applicants show up for interviews: a mathematician, an engineer, and a lawyer.

    The mathematician is called in first. "I can't tell you much about the position before hiring you, I'm afraid. But I'll know if you're the right man for the job by your answer to one question: what is 2 + 2?" The mathematician nods his head vigorously, muttering "2 + 2, yes, hmm." He leans back and stares at the ceiling for a while, then abruptly stands and paces around a while staring at the floor. Eventually he stops, feels around in his pockets, finds a pencil and an envelope, and begins scribbling fiercely. He sits, unfolds the envelope so he can write on the other side and scribbles some more. Eventually he stops and stares at the paper for a while, then at last, he says, "I can't tell you its value, but I can show that it exists, and it's unique."

    "Alright, that's fine. Thank you for your time. Would you please send in the next applicant on your way out." The engineer comes in, gets the same speech and the same question, what is 2 + 2? He nods vigorously, looking the man right in the eye, saying, "Yeah, tough one, good, okay." He pulls a laptop out of his bag. "This'll take a few minutes," he says, and begins typing. And indeed after just a few minutes, he says, "Okay, with only the information you've given me, I'll admit I'm hesitant to say. But the different ways I've tried to approximate this, including some really nifty Monte Carlo methods, are giving me results like 3.99982, 3.99991, 4.00038, and so on, everything clustered right around 4. It's gotta be 4."

    "Interesting, well, good. Thank you for your time. I believe there's one last applicant, if you would kindly send him in." The lawyer gets the same speech, and the question, what is 2 + 2? He looks at the man for a moment before smiling broadly, leans over to take a cigar from the box on the man's desk. He lights it, and after a few puffs gestures his approval. He leans back in his chair, putting his feet up on the man's desk as he blows smoke rings, then at last he looks at the man and says, "What do you want it to be?"
    ― Srap Tasmaner
  • A -> not-A
    I guess that's similar to the prisoner's dilemma. ― TonesInDeepFreeze

    It's related, yes.

    consistency is defined in terms of consequence ― TonesInDeepFreeze

    Suppose I hold beliefs A and B. And suppose also that A → C, and B → ~C. That's grounds for claiming that A and B are inconsistent, but only because C and ~C are inconsistent. How would we define the bare inconsistency of C and ~C in terms of consequence?

    Or did you have something else in mind?

    Now it could be that the LNC, so beloved on this forum, functions as a minimal inconsistency guard, and from that you get the rest. ― This is a fairly common strategy with programming languages these days, to define a small subset of the language that's enough to compile the full language's interpreter or VM or whatever.

    It could also be that the "starter versions" of consequence or consistency look a little different. I've been reading about some interesting work with gorillas, which suggests they grasp some "proto-logical" concepts. Negation, for example, is pretty abstract, but they seem to recognize and reason about rough opposites ― here/there, easy/hard, that sort of thing. Researchers have worked up a pretty impressive repertoire of "nearly logical" thinking among gorillas, though obviously their results are open to interpretation.

    Anyway, it suggests another type of bootstrapping.

    ( Might be worth mentioning that it looks like we're in the presence of one of Austin's trouser words, since the goal in Strawson's story is avoiding inconsistency, and that's what naturally came to mind above. )
  • A -> not-A
    What makes me hesitate to reduce logic to math has more to do with thinking about informal logic as still a part of logic, even though it doesn't behave in the same manner as formal logic ― Moliere

    If you wade through everything I've vomited here in the last day or so, I think you'll find me half backtracking on that ― although I still tend to think there's something like a "formal impulse" that you can scent underlying mathematics and logic, so perhaps even our informal reasoning. It's a very fog-enshrouded area.

    It's already been mentioned a couple times in this thread that "follows from" is often taken as the core idea of logic, formal and informal. Logical consequence.

    Another option is consistency, and it's the story that Peter Strawson tells (or told once, anyway) for the origins of logic: his idea was that if you can convince John that what he said is inconsistent, then he'll have to take it back, and no one wants to do that. So the core idea would be not whether one idea (or claim or whatever) follows from another, but whether two ideas (claims, etc) are consistent with each other. (I should dig out a quote. He tells it better than I do.)

    Do you know about the ultimatum game? It's a standard experiment design in psychology, been done lots of times in all sorts of variations. You take pairs of subjects, and you offer one of them, say, $100, on this condition: they have to offer their partner a share; if the partner accepts the offer, they get the agreed upon amounts of money; if the partner refuses, they get nothing. ― Okay, I'm telling you that story (which you probably already know) because it's famous for completely undermining a standard assumption of rationality. Since the participants start with 0, the partner should be happy to get anything, to accept $1 out of $100, instead of walking away with nothing. But that's not what happens. The offers have to be fair, something close to 50-50. Not quite 50-50 is usually accepted, but lowball offers almost never are.

    And the point is this: evidently, whether it's evolution or a cultural norm, we have a sense of fairness. And it can override what theory might say is rational. (The target here is Homo oeconomicus, the rational agent.)

    Similarly, we might hunt for "logical consequence" or "consistency" as some sort of ur-concept upon which logic is built.
  • A -> not-A
    I don't know of anyone who thinks natural language conveyance of mathematics is unimportant. ― TonesInDeepFreeze

    Fair. I was trying to convey the sense that there is this slightly annoying informal thing we have to do before we get on to doing math, properly, formally. And if you try to formalize that part ("We define a language L0, which contains the word 'Let', lower case letters, and the symbol '=', ..."), you'll find that you need in place some other formal system to legitimate that, and ― at some point we do have to just stop and figure out how to conceive of bootstrapping a formal system. And that bootstrapping will not be ex nihilo, but from the informal system ― if that's what it is ― that we are already immersed in, human culture, reasoning, language, blah blah blah.

    I probably shouldn't have brought it up. It's another variation on the chicken-and-egg issue you pointed out.

    Another way is to point to the coherency: There is credibility as both logic-to-math and math-to-logic are both intuitive and work in reverse nicely. ― TonesInDeepFreeze

    This is a nice point.

    Circularity need not be vicious. (I'm not thinking of the hermeneutic circle, though it has some pretty obvious applicability here.)


    In particular, it's interesting to think of this whole complex of ideas as being "safe" because coherent ― you can jump on the merry-go-round anywhere at all, pick any starting point, and you will find that it works, and whatever you develop from the point where you began will serve, oddly, to secure the place where you started. And this will turn out to be true for multiple approaches to foundations for mathematics and logic.

    Well that's just a somewhat flowery way of saying "bootstrapping" I guess.

    Now I can't help but wonder if there's a way to theorize bootstrapping itself, but I am going to stop myself from immediately beginning to do that.

    Thanks very much for the conversation @TonesInDeepFreeze!
  • A -> not-A


    Yeah I think we're thinking about the same things.
  • A -> not-A


    Just that there's, at least here, a dependence of mathematics on natural language, one that gives the appearance of being purely pedagogical, or an unimportant "set-up" step (still closely related to the thing about logical schemata, from above).

    Algebra books set up problems this way, with a little bit of natural language, and then line after line of symbolism, of "actual" math.

    If you get nervous about there being such a dependency, you might shunt it off to something you call "application".

    I'm just wondering if the dependency is ever really overcome, especially considering the indefinability of "set" for example.

    I keep throwing in more issues related to foundations, sorry about that.
  • A -> not-A
    natural language statements ― fdrake

    It's curious when you notice that mathematics textbooks have no alternative to saying things like "Let x = the number of oranges in the bag", and if you don't say things like that, you might as well not bother with the rest. (For similar reasons, doing it all in some APL-like symbolism would work, but no one would have any idea what the symbolism meant, if you didn't have "∈ means is a member of" somewhere.)

    And if you have natural language, you have how humans live, human culture, evolution, and all the rest. There's your foundations.
  • A -> not-A
    Absolutely sure. ― TonesInDeepFreeze

    I'm okay with that.

    The chicken and egg still bothers me, though, so one more point and one more question.

    Another issue I have with treating logic as just "given" in toto, such that mathematics can put it to use, is that one of the central concepts of modern logic is nakedly mathematical in nature: quantifiers. If you rely on ∃ anywhere in constructing set theory (so that you can construct numbers), you're already relying on the concept of "at least one", which expresses both a magnitude and a comparison of magnitudes. Chicken and egg, indeed.

    And if you need to identify the formula "∅ ⊂ ∅" as an instance of the schema "P → P", then you also have to have in place the apparatus of schemata and instances (those objects of Peter Smith's unforgiving gaze), which you presumably need both quantifiers and sets ― or at least classes of some kind ― to define rigorously. More chicken and egg.

    And since we're wallowing in the muddy foundations (like those of Wright's Imperial Hotel), a quick question: somewhere I picked up the idea that all you need to add to, say, classical logic is one more primitive, namely ∈, in order to start building mathematics. I suppose you need the concepts (but no definitions!) of member and collection as what goes on the LHS and RHS respectively, but that's it. And there just is no way around ∈, no way to cobble it together from the other logical constants. Is that your understanding as well? Or is there a better way to pick out what logic lacks that keeps it from functioning as itself the foundations of mathematics?

    What pretending? ― TonesInDeepFreeze

    Just a tendentious turn of phrase, not important.

    Someplace to start writing without having to explain yourself. ― fdrake

    Kinda what I think. Also, at some point you'll have to say to the kiddies something like "group" or "collection" and just hope to God they know what you mean, because there is nothing anyone can say to explain it.

    I think of mathematical logic as a sub-subject of formal logic. ― TonesInDeepFreeze

    Certainly. I almost posted the same observations about the dual existence of logic courses and research in academic departments (logic 101 in the philosophy department, advanced stuff in the math department, and so on).

    ― ― I suppose another way of putting the question about formal logic is whether we could get away with thinking of its use elsewhere, not only in the sciences, but in philosophy and the humanities, as, in essence, applied mathematics.

    Set theory axiomatizes classical mathematics. And the language of set theory is used for much of non-classical mathematics. That's one "so what". ― TonesInDeepFreeze

    Sure sure, my point was to suggest that logic could live here too, and I'm really not sure why it doesn't. Set theory is needed for the rest of math and so is logic. There's your foundations, all in a box, instead of logic coming from outside mathematics ― that's what I was questioning, am questioning. (I suppose, as an alternative to reducing it to something acknowledged as being part of mathematics, which I admit doesn't seem doable.)
  • A -> not-A
    Writers often used the word 'contained'; it is not wrong. But sometimes I see people being not clear whether it means 'member' or 'subset' ― TonesInDeepFreeze

    That's a solid point. It felt natural and intuitive when talking about "areas", subspaces of a partitioned probability space, and so on. But it's an awful word, as @Moliere proved.
  • A -> not-A
    0 subset of 0 holds by P -> P. ― TonesInDeepFreeze

    I've granted that mathematics is dependent upon logic ― but, for the sake of argument, are you sure this is right?

    That is, we need logic in place to prove theorems from axioms in set theory, to demonstrate that ∅⊂∅, for instance, but do we want to say it's because of the proof that it is so?

    This close to the bone, I'm not sure how much we can meaningfully say, but something about "holds by" ― rather than, "is proved using" ― looks wrong to me.

    Am I missing something obvious?

    Peter Smith offers some nice content. ― TonesInDeepFreeze

    I used to enjoy reading his reviews of logic textbooks, because he was very picky about how they presented logic schemas and the process of "translating" natural language into P's and Q's. Unforgiving when authors were too slapdash or handwavy about this, which I thought showed good philosophical sense.

    Oh, yes, the duals run all through mathematics. ― TonesInDeepFreeze

    Just the sort of thing, I understand, that motivates category theory.

    #

    Honestly, I'm not quite sure why formal logic (mathematical logic) isn't just considered part of mathematics. It would be part of foundations, to be sure, as set theory is, and you need it in place to bootstrap the rest, as you have to have sets (or an equivalent) to do much of anything in the rest of mathematics, but so what? What does mathematics get out of pretending it's importing logic from elsewhere?
  • A -> not-A
    subset v member ― TonesInDeepFreeze

    I should also have mentioned that it matters because ∅ has no members but ∅ ⊂ ∅ is still true, in keeping with how the material conditional works.
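    Python's set type happens to make the vacuous-truth point directly checkable:

```python
# The empty set has no members, yet it is a subset of itself (and of
# every set): "every member of ∅ is also a member of S" is vacuously
# true, just as a material conditional with a false antecedent is true.
empty = set()
print(empty <= empty)        # True: ∅ ⊂ ∅
print(empty <= {1, 2, 3})    # True: ∅ is a subset of every set
print(1 in empty)            # False: but ∅ has no members
```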
  • A -> not-A


    One other tiny point of unity: I always thought it was interesting that for "and" and "or" probability just directly borrows ∩ and ∪ from set theory. These are all the same algebra, in a sense, logic, set theory, probability.
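    A quick illustration, with a made-up sample space, of how the probability rules fall right out of the same Boolean algebra (the events here are invented, just for counting):

```python
# Events as subsets of a finite sample space: ∩ plays the role of
# "and" and ∪ of "or", and probability is just counting. The space
# and events below are made up for illustration.
space = set(range(12))          # 12 equally likely outcomes
A = {0, 1, 2, 3, 4, 5}          # one event
B = {4, 5, 6, 7}                # another event

def pr(event):
    return len(event) / len(space)

print(pr(A & B))   # Pr(A and B), via intersection
print(pr(A | B))   # Pr(A or B), via union
# Inclusion-exclusion, the probabilistic shadow of the algebra
# (checked on counts to avoid float comparisons):
print(len(A | B) == len(A) + len(B) - len(A & B))   # True
```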
  • A -> not-A
    Your probability exploration is interesting. I think there's probably (pun intended) been a lot of work on it that you could find. ― TonesInDeepFreeze

    Indeed. I'd have to check, but I think Ramsey used to suggest that probability should be considered an extension of logic, "rather" (if that matters) than a branch of mathematics. It's an element of the "personalist" interpretation he pioneered and which de Finetti has probably contributed to the most. I'm still learning.

    So, as far as I can tell, category theory does not eschew set theory but rather, at least to the extent of interpretability (different sense of 'interpretation' in this thread), it presupposes it and goes even further. ― TonesInDeepFreeze

    Yeah not clear to me at all. A glance at the wiki suggests there have been efforts to replace set theory entirely, but I'm a font of ignorance here.

    On the other side, it did catch my eye when some years ago Peter Smith added an introduction to category theory to his site, Logic Matters. One of these days I'll have a look.
  • A -> not-A
    P can be empty set, which is a member of every set. ― Moliere

    This is a correction ― not a member, but a subset.

    A nitpick, for sure, but making exactly that distinction took a long time, and there were questions that remained very confusing until those concepts were clearly separated.
  • A -> not-A


    Yeah that's a funny thing. Mathematics cannot be reduced to logic, it turns out, but it appears to have an irremediable dependency on logic.

    Sometimes it suggests to me that mathematics and logic are both aspects or expressions of some common root.

    Anyway, much as I would like for probability to swallow logic, I'm resigned to mostly taking the sort of stuff I've been posting as a kind of heuristic, or maybe even a mathematical model of how logic works. (I have some de Finetti to read soon, so we'll see what he has to say.)

    By the way, I understand the main focus for unifying math and logic in recent years has been in category theory, which I haven't touched at all. Is that something you've looked into?
  • A -> not-A
    "is contained within", i.e. determined byMoliere

    Oh, not what I was saying at all.

    The impetus for talking about this at all was the material conditional, and my suggestion was that you take P → Q as another way of saying that P ⊂ Q.

    It helps me understand why false antecedents and true consequents behave the way they do.

    Having gone that far, you might as well note that there are sets between ∅ and ⋃, and you can think of logic as a special case of the probability calculus.

    That's how it works in my head. YMMV
  • A -> not-A
    The (probability) space of A is entirely contained within the (probability) space of not-A.


    Well, of course it is. That's almost a restatement of the probability of P v ~P equals 1.
    Moliere

    ?

    A and its complement ~A are disjoint. If A is contained in ~A, it must be ∅.
  • A -> not-A


    Only if you agree to write the preface. And it should be trenchant.
  • A -> not-A
    your reduction of material implication to set theory. I'm not sure how to understand that, reallyMoliere

    It's not that complicated.

    [image: kings.png]

    The whole space is people, say. Some are rulers, some monarchs, some kings, some none of those. A lot of monarchs these days are figureheads, so there's only overlap with rulers. All kings are monarchs, but not all monarchs are kings.

    There are some things you can say about the probability of a person being whatever, and the ones we're interested in would be like this:

      Pr(x is a monarch | x is a king) = 1

    That is, the probability that x is a monarch, given that x is a king, is 1. The space of "being a king" is entirely contained in the space of "being a monarch".

      King(x) → Monarch(x)

    Similarly we can say

      Pr(x is not a king | x is not a monarch) = 1

    which is the contrapositive.

    The complement of Monarchs is contained in the complement of Kings, but the latter also contains Queens and, I don't know, Czars and whatnot. Not being a king doesn't entail not being a monarch, and sure enough Pr(x is a monarch | x is not a king) > 0.

    Conceptually, that's it. (There are some complications, one of which we'll get to.)

    I find the visualization helpful. We're just doing Venn diagram stuff here.
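    A toy version of the diagram in code, with made-up membership sets and probability read off as relative size:

```python
# Toy universe for the Venn-diagram reading of the conditional.
people   = {"Charles", "Margrethe", "Felipe", "Joe", "Ursula", "Emmanuel"}
monarchs = {"Charles", "Margrethe", "Felipe"}
kings    = {"Charles", "Felipe"}   # kings are a subset of monarchs

def pr(a, given):
    """Pr(a | given), with probability as relative size of the overlap."""
    return len(a & given) / len(given)

print(pr(monarchs, given=kings))                    # 1.0: King(x) -> Monarch(x)
print(pr(people - kings, given=people - monarchs))  # 1.0: the contrapositive
print(pr(monarchs, given=people - kings))           # 0.25 > 0: Margrethe, a queen
```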

    if the moon is made of green cheese then 2 + 2 = 4. That's the paradox, and we have to accept that the implication is true. How is it that the empirical falsehood, which seems to rely upon probablity rather than deductive inference, is contained in "2 + 2 = 4"?Moliere

    For this example, there's a couple things we could say.

    Say you partition a space so that 0.000001% of it represents the moon being made of green cheese (G), and the complement ― 99.999999% ― represents it not being so (~G). Cool. Little sliver of a possibility over to one side.

    2 + 2 = 4 is true for the entire space, both G and ~G. Both are contained in the space in which 2 + 2 = 4, which will keep happening whatever your empirical proposition because it's, you know, a necessary truth.

    What's slightly harder to express is something we take to be necessarily false, like 2 + 2 = 5. The space in which that's true is empty, and the empty set is a subset of every single set, including both G and ~G. It could "be" anywhere, everywhere, or nowhere, doing nothing, not taking up any room at all. It doesn't have a specifiable "location" because Pr(2 + 2 = 5 | E) = 0 for any proposition E at all.

    Both necessary truths and necessary falsehoods fail to have informative relations with empirical facts.
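    In the same relative-size model, with the sliver sized arbitrarily for illustration, both cases come out mechanically:

```python
# Model propositions as subsets of a finite space of "worlds".
# One world in a hundred thousand for green cheese: a made-up sliver.
worlds = set(range(100_000))
G = {0}                        # the green-cheese sliver
not_G = worlds - G             # everything else

necessarily_true  = worlds     # 2 + 2 = 4 holds in every world
necessarily_false = set()      # 2 + 2 = 5 holds in none

def pr(a, given):
    """Conditional probability as relative size."""
    return len(a & given) / len(given)

print(pr(necessarily_true, G))      # 1.0
print(pr(necessarily_true, not_G))  # 1.0
print(pr(necessarily_false, G))     # 0.0, and 0 given any E whatsoever
```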
  • A -> not-A
    to a lesser extent MichaelBanno

    Awww. Do you feel bad now @Michael?
  • A -> not-A
    validity is about deducibilityLeontiskos

    I don't even need to advert to real-world casesLeontiskos

    Well, the thing is, deducibility is for math and not much else. That's the point of my story about George, and my general view that logic is ― kinda anyway ― a special case of the probability calculus.

    an argument is supposed to answer the "why" of a conclusionLeontiskos

    I agree with this in spirit, I absolutely do. I frequently use the analogy of good proofs and bad proofs in mathematics: both show that the conclusion is true, but a good proof shows why.

    I'll add another point: when you say something another does not know to be false but that they are disinclined to believe, they will ask, "How do you know?" You are then supposed to provide support or evidence for what you are saying.

    The support relation is also notoriously tricky to formalize (given a world full of non-black non-ravens), so there's a lot to say about that. For us, there is logic woven into it though:

      "Billy's not at work today."
      "How do you know?"
      "I saw him at the pharmacy, waiting for a prescription."

    It goes without saying that Billy can't be in two places at once. Is that a question of logic or physics (or even biology)? What's more, the story of why Billy isn't at work should cross paths with the story of how I know he isn't. ("What were you doing at the pharmacy?")

    As attached as I've become, in a dilettante-ish way, to the centrality of probability, I'm beginning to suspect a good story (or "narrative" as @Isaac would have said) is what we are really looking for.
  • A -> not-A
    I encourage respectful discussion of these topics by all parties.NotAristotle

    Good lad.

    I have learnedNotAristotle

    Even better.
  • A -> not-A
    a notion of "follows from,"Leontiskos

    I sympathize. I think a lot of our judgments rely on what I believe @Count Timothy von Icarus mentioned earlier under the (now somewhat unfortunate) heading "material logic", distinguished from formal logic.

    A classic example is color exclusion.

    When you judge that if the ball is red then it's not white ― well, to most people that feels a little more like a logical point than, say, something you learn empirically, as if you might find one day that things can be two different colors. (Insert whatever ceteris paribus you need to.)

    Wittgenstein would no doubt say this comes down to understanding the grammar of color terms. (He talked about color on and off for decades, right up until the end of his life.)

    Well, what do we say here ― leaving aside whether color exclusion is a tenable example? What you're after is a more robust relationship between premises and conclusions, something more like grasping why it being the case that P, in the real world, brings about Q being the case, in the real world, and then just representing that as 'P ⇒ Q' or whatever. Not just a matter of truth-values, but of an intimate connection between the conditions that 'P' and 'Q' are used to represent. Yes?
  • A -> not-A
    reductio?Leontiskos

    I'm taking this out of context, for the sake of a comment.

    I'm a little rusty on natural deduction but I think reductio is usually like this:

      A (assumption)*
      ...
      B (derived)
      ...
      ~B (derived)

      ━━━━━━━━━━━━
      A → ⊥ (→ intro)*
      ━━━━━━━━━━━━
      ~A (~ intro)

    The ⊥ comes from having derived both B and ~B, and the assumption A is discharged when the conditional is introduced in the next line; the step from A → ⊥ to ~A is then just the definition of "~", or the introduction rule for "~", as you like.

    Point being A is gone by the time we get to ~A. It might look like the next step could very well be A → ~A by →-introduction, but it can't be because the A is no longer available.

    What you do have is a construction of ~A with no undischarged assumptions.
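    In a proof assistant the same shape is explicit: the assumption is consumed inside the term, and what remains is ~A with nothing left open. A sketch in Lean (the propositions and hypothesis names are placeholders), where ¬A is definitionally A → False:

```lean
-- Reductio: from A you can derive B, and from A you can derive ¬B,
-- so assuming A yields a contradiction, and we conclude ¬A.
example (A B : Prop) (toB : A → B) (toNotB : A → ¬B) : ¬A :=
  fun a => (toNotB a) (toB a)
```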

    #

    We've talked regularly in this thread about how A → ~A can be reduced to ~A; they are materially equivalent. We haven't talked much about going the other way.

    That is, if you believe that ~A, then you ought to believe that A → ~A.

    In fact, you ought to believe that B → ~A for any B, and that A → C for any C.

    And in particular, you ought to believe that

      P → ~A (where B = P)
      ~P → ~A (where B = ~P);

    and you ought to believe that

      A → Q (where C = Q)
      A → ~Q (where C = ~Q).

    If you combine the first two, you have

      ⊤ → ~A

    while, if you combine the second two, you have

      A → ⊥.

    These are all just other ways of saying ~A.
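    All of these equivalences can be verified by brute force over truth values; a small sketch:

```python
def implies(p, q):
    """Material conditional: p -> q is (not p) or q."""
    return (not p) or q

for a in (True, False):
    not_a = not a
    # A -> ~A, T -> ~A, and A -> F are all materially equivalent to ~A.
    assert implies(a, not_a) == not_a
    assert implies(True, not_a) == not_a
    assert implies(a, False) == not_a
    if not_a:
        # given ~A: B -> ~A holds for any B, and A -> C for any C
        assert all(implies(b, not_a) for b in (True, False))
        assert all(implies(a, c) for c in (True, False))

print("all equivalences check out")
```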

    #

    Why should it work this way? Why should we allow ourselves to make claims about the implication that holds between a given proposition, which we take to be true or take to be false, and any arbitrary proposition, and even the pair of a proposition and its negation?

    An intuitive defense of the material conditional, and then not.

    "If ... then ..." is a terrible reading of "→", everyone knows that. "... only if ..." is a little better. But I don't read "→" anything like this. In my head, when I see

      P → Q

    I think

      The (probability) space of P is entirely contained within the (probability) space of Q, and may even be coextensive with it.

    The relation here is really ⊂, the subset relation, "... is contained in ...", which is why it is particularly mysterious that another symbol for → is '⊃'.

    The space of a false proposition is nil, and ∅ is a subset of every set, so ∅ → ... is true for everything.

    The complement of ∅ is the whole universe, unfortunately, and that's what true propositions are coextensive with. When you take up the whole universe, everything is a subset of you, which is why ... → P holds for everything, if P is true.

    Most things are somewhere between ∅ and ⋃, though, which is why I have 'probability' in parentheses up there.
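    The containment reading is one line of code. A sketch with a made-up space of worlds, using Python's subset operator for "→":

```python
# Read P -> Q as: the worlds where P holds are contained in the
# worlds where Q holds. The space and propositions are made up.
U = set(range(8))      # the whole space
P_false = set()        # a false proposition occupies no space
P_true  = U            # a true proposition is coextensive with the space
Q = {0, 1, 2}          # most propositions sit somewhere in between

def entails(p, q):
    """The subset reading of the material conditional."""
    return p <= q

print(entails(P_false, Q))   # True: a falsehood implies anything
print(entails(Q, P_true))    # True: anything implies a truth
print(entails(Q, {0, 1}))    # False: genuine containment can fail
```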

    The one time he didMoliere

    Which is the interesting point here.

      "George never opens when he's supposed to."
      "Actually, there was that one time, year before last ― "
      "You know what I mean."

    Ask yourself this: would "George will not open tomorrow" be a good inference? And we all know the answer: deductively, no, not at all; inductively, maybe, maybe not. But it's still a good bet, and you'll make more money than you lose if you always bet against George showing up, if you can find anyone to take the other side.

    "George shows up" may be a non-empty set, but it is a negligible subset of "George is scheduled to open", so the complement of "George shows up" within "George is scheduled" is nearly coextensive with "George is scheduled". That is, the probability that any given instance of "George is scheduled" falls within "George does not show up" is very high.

    TL;DR. If you think of the material conditional as a containment relation, its behavior makes sense.

    ((Where it is counterintuitive, especially in the propositional calculus, it's because it seems the only sets are ∅ and ⋃. Even without considering the whole world of probabilities in fly-over country between 0 and 1 ― which I think is the smart thing to do ― this is less of a temptation with the predicate calculus. In either case, the solution is to think of the universe as being continually trimmed down to one side of a partition, conditional-probability style.))
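    The bet against George can be put in numbers; the counts below are invented for illustration:

```python
# George's record, as made-up counts: scheduled 100 times, showed up once.
scheduled = 100
showed_up = 1

pr_no_show = (scheduled - showed_up) / scheduled
print(pr_no_show)   # 0.99: not a deduction, but a very good bet

# Betting one unit at even odds against George showing up:
expected_profit = pr_no_show * 1 + (1 - pr_no_show) * -1
print(expected_profit)   # roughly 0.98 per unit staked
```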
  • A -> not-A
    What does footnote 11 say? Because the whole dispute rides on that single word, "whenever."Leontiskos

    Here, "whenever" is used as an informal abbreviation "for every assignment of values to the free variables in the judgment"same

    Actually I expected the footnote just to be a reference to Gentzen, but it was glossed!