• Srap Tasmaner
    4.9k
    I don't know what you mean by "minimal inconsistency guard". (TonesInDeepFreeze)

    Roughly that the LNC could enforce a narrow, specialized sense of consistency ― that P and ~P are inconsistent, for any P ― and this would be enough to bootstrap a more general notion of inconsistency that relies on consequence, so that with a fuller system you can say A and B are inconsistent if A → C and B → ~C. It's a bootstrapping technique: start with special cases and leverage those to get the general. Special cases are easier and cheaper, and in this case they don't require additional resources like consequence.

    It's probably all too speculative to do much with. Most of the ideas I've had in the last few minutes just recreate the fact that you can build the usual collection of logical constants with negation and one of the others (unless you want to start with the Sheffer stroke). If I were to say, maybe we need both consistency and consequence as core ideas ― that's almost all that would amount to.

    I was thinking, though, that there might be a way to get negation out of a primitive sense of consequence ― not the material conditional, just an intuition of what follows from what ― something like this: any given idea (claim, thought, etc.) has a twin that is the one thing guaranteed under no circumstances to follow from it, and that would be its negation. You could define ~P roughly by partitioning the possible consequents into what can and can't follow from P, but the two buckets are different: what can follow from P might initially be empty, who knows; but what can't never starts empty.

    If, like the gorillas, you didn't already have the abstract concept of negation, the bucket we're going to use to define negation would probably be full of stuff ― given any P, that bucket will have stuff that ~P follows from, in addition to ~P itself, maybe, sometimes. Example: if P is "It's sunny", our bucket of things that don't follow includes "It's cloudy", "It's nighttime", "It's raining" ― all different things that "It's not sunny" follows from.

    Don't spend any time trying to make sense of all this. It's just me thinking on the forum again.
  • TonesInDeepFreeze
    3.7k
    specialized sense of consistency ― that P and ~P are inconsistent, for any P (Srap Tasmaner)

    We have that. You want to use that to define inconsistency in general without using the notions of semantic or syntactical consequence?

    you can say A and B are inconsistent if A → C and B → ~C. (Srap Tasmaner)

    We already have:

    If G |- A -> C and G |- B -> ~C, then G ∪ {A, B} is inconsistent.
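    This fact can be illustrated semantically in a toy case. The sketch below (my own illustration, not from the thread) represents sentences as Python predicates over truth-value assignments, takes G to be empty so that G |- A -> C amounts to A -> C being valid, and brute-forces all valuations:

    ```python
    from itertools import product

    # Sentences as functions from a valuation (dict of atoms to bools) to bool.
    A = lambda v: v["p"] and v["q"]          # A = p & q
    B = lambda v: (not v["p"]) and v["r"]    # B = ~p & r
    C = lambda v: v["p"]                     # C = p

    atoms = ["p", "q", "r"]
    valuations = [dict(zip(atoms, bits)) for bits in product([True, False], repeat=3)]

    def valid(f):
        """f is logically true: true under every valuation."""
        return all(f(v) for v in valuations)

    def satisfiable(fs):
        """The set of sentences fs is jointly satisfiable."""
        return any(all(f(v) for f in fs) for v in valuations)

    # With G empty: A -> C is valid, B -> ~C is valid, so {A, B} is inconsistent.
    assert valid(lambda v: (not A(v)) or C(v))        # A -> C
    assert valid(lambda v: (not B(v)) or (not C(v)))  # B -> ~C
    assert not satisfiable([A, B])                    # {A, B} has no model
    ```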

    I don't see what you're bringing.

    unless you want to start with the Sheffer stroke (Srap Tasmaner)

    Or the Nicod dagger.

    we need both consistency and consequence as core ideas (Srap Tasmaner)

    Not getting it.

    We define consistency from provability. (We could also define it from satisfiability.) Why is that lacking?

    any given idea (claim, thought, etc.) has a twin that is the one thing guaranteed under no circumstances to follow from it (Srap Tasmaner)

    How do you know there is only one thing?
  • Srap Tasmaner
    4.9k
    We have that. (TonesInDeepFreeze)

    We already have: (TonesInDeepFreeze)

    We define consistency from provability. (TonesInDeepFreeze)

    Sorry. Obviously I haven't managed to make clear what I'm trying to do here, probably because I've been writing a bunch of stuff I ended up scrapping, so I probably think I've said things I haven't.

    I'm trying to figure out how we could bootstrap logic or reasoning (informal at first, of course): what we would need to do that, and what the minimum is we could start with that could grow into informal reasoning. I'm not proposing an alternative to the logic we have now. So

    Why is that lacking? (TonesInDeepFreeze)

    is not the kind of question I was addressing at all.

    For example, my last post suggested a way you might leverage a primitive understanding of consequence or "follows from" to piece together negation. I don't know if that's plausible, but it hadn't occurred to me before, so that's at least a new idea.

    How do you know there is only one thing? (TonesInDeepFreeze)

    At first probably not! But you can see how a bunch of ideas that all point to "not sunny" might eventually get you there.

    And as I noted, there's some reason to think other great apes already have the ability to reason about pairs of near opposites, even without an abstract concept of negation. I was imagining a way some sense of consequence might get you from such pairs to genuine negation.

    Like I said, all very speculative, and probably not worth your time.
  • TonesInDeepFreeze
    3.7k


    I tried to put something together along the lines you have in mind.

    The best I came up with is this:

    (1) For any sentence P, the set of all sentences is partitioned into two sets: (a) the set of sentences that follow from P, call it C(P), and (b) the set of sentences that do not follow from P, call it N(P). Then, instead of sentences, consider sets of sentences, and let the negation of C(P) be N(P).

    But that's not what you want. So, maybe we would consider the set of sentences not compatible (my word) with P such as "raining" is not compatible with "sunny" (putting aside sun showers). But that uses "not".

    So I thought of this:

    (2) For any sentence P, the set of all sentences is partitioned into two sets: (a) the set of sentences Q such that {P, Q} is satisfiable, call it C*(P), and (b) the set of sentences Q such that {P, Q} is not satisfiable, call it N*(P). Then, instead of sentences, consider sets of sentences, and let the negation of C*(P) be N*(P). But that uses "not".

    But then I thought that we should just leave it up to gorillas; and that does seem to work.
  • jgill
    3.8k
    Set theory is needed for the rest of math and so is logic (Srap Tasmaner)

    Much of classical math existed before the introduction of set theory. So, no. Modern math is another thing.
  • Srap Tasmaner
    4.9k
    let the negation of C(P) be N(P) (TonesInDeepFreeze)

    Yeah that's an interesting idea!

    I guess we could assume that nothing in N(P) would follow from anything in C(P), because follow-from would already have that sort of "transitive" property that we're used to.

    I've tried to work out some consequences of this, but it's still not clear to me. (I had a whole lot of ideas that just didn't work.) It's interesting though.

    Much of classical math existed before the introduction of set theory. (jgill)

    Yeah, I get that. Looking at the reconstruction of math using set theory is one way to hunt for the difference between math and logic, that's all. Maybe not the most interesting way.
  • TonesInDeepFreeze
    3.7k


    Depends on what 'needs' means.

    Mathematics pretty much needs sets to work with. But if one denies that mathematics needs to be axiomatized, then mathematics does not need the set theory axioms.

    If one affirms that mathematics needs to be axiomatized, then the usual axiomatization is set theory.
  • TonesInDeepFreeze
    3.7k


    I would think 'follows from' is reflexive and transitive, but not symmetric.

    I would need to double-check these (and it depends on knowing more about 'follows from'):

    C(P) is consistent if and only if P is not logically false.

    If P is contingent, then N(P) is inconsistent.

    P is logically false if and only if N(P) = ∅. (explosion)

    If P is logically true, then N(P) is inconsistent.
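    These four claims check out in a brute-force finite model, at least when 'follows from' is read as semantic consequence. A sketch (my own, not from the thread; a small sample of sentences stands in for the whole language):

    ```python
    from itertools import product

    atoms = ["p", "q"]
    valuations = [dict(zip(atoms, bits)) for bits in product([True, False], repeat=2)]

    def follows(P, Q):
        """Q follows from P: Q is true in every valuation making P true."""
        return all(Q(v) for v in valuations if P(v))

    def satisfiable(fs):
        return any(all(f(v) for f in fs) for v in valuations)

    # A finite stand-in for "the set of all sentences".
    sentences = [
        lambda v: v["p"], lambda v: not v["p"],
        lambda v: v["q"], lambda v: not v["q"],
        lambda v: v["p"] and v["q"],
        lambda v: v["p"] or not v["p"],
        lambda v: v["p"] and not v["p"],
    ]

    def C(P): return [Q for Q in sentences if follows(P, Q)]
    def N(P): return [Q for Q in sentences if not follows(P, Q)]

    contingent  = lambda v: v["p"]                 # p
    logic_false = lambda v: v["p"] and not v["p"]  # p & ~p
    logic_true  = lambda v: v["p"] or not v["p"]   # p v ~p

    assert satisfiable(C(contingent))        # C(P) consistent: P not logically false
    assert not satisfiable(C(logic_false))   # ...and inconsistent when P is
    assert not satisfiable(N(contingent))    # N(P) inconsistent for contingent P
    assert N(logic_false) == []              # explosion: N(P) is empty
    assert not satisfiable(N(logic_true))    # N(P) inconsistent for valid P
    ```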
  • TonesInDeepFreeze
    3.7k


    You should think of a word for 'follows from' so that it is not conflated with other common senses.

    I suggest 'P raps Q' (equivalently, 'Q raps from P') instead of 'Q follows from P'.

    ('raps' from 'Srap')
  • Banno
    24.8k
    Might be interesting to adduce a formal sentence and demonstrate somehow that it can't be said in English alone (not just that all known attempts failed). (TonesInDeepFreeze)

    Yep.

    Am I right in understanding that the definition you gave of formal languages is strictly syntactic? It is formal iff it follows some rule for being well-formed?

    If not, how does it differ?
  • TonesInDeepFreeze
    3.7k


    (When I write, 'well formed formula', take that as short for 'well formed formula of the language'.)

    Of course, people may have different ideas about what 'formal' means. But at least I think we would find that, for the most part, such things as are considered formal languages - languages for formal theories, computer languages, etc. - have in common that well-formedness and certain other features are algorithmically checkable.

    If a set is recursive, then there is an algorithm to determine whether something is or is not in that set. So if the set of well formed formulas is recursive, then there is an algorithm to determine whether a given sequence of symbols is or is not a well formed formula.

    The desideratum is that it is algorithmically checkable whether a given string is or is not a well formed formula.

    And, yes, that is all syntactical.

    And the formation rules are chosen so that they indeed provide that the set of well formed formulas is recursive. So the rules are given as recursive definitions.

    And the inference rules are recursive relations. So the set of proofs is a recursive set. So it is machine checkable whether a sequence of formulas is or is not a proof.

    The point is that, with a formal language and formal proof, it is utterly objective whether a sequence of formulas is indeed a proof. A computer or a human following the instructions of an algorithm may (at least in principle) objectively check whether a given purported proof is indeed a proof.
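    The idea that recursive formation rules make well-formedness machine-checkable can be sketched for a toy language (my own illustration: atoms 'p', 'q', 'r'; if A and B are wffs, so are '~A', '(A&B)', and '(A>B)'):

    ```python
    # The formation rules are a recursive definition, so well-formedness is
    # decided by a checker that mirrors the rules, one clause per rule.

    ATOMS = {"p", "q", "r"}

    def is_wff(s: str) -> bool:
        if s in ATOMS:                        # base clause: atoms are wffs
            return True
        if s.startswith("~"):                 # negation clause
            return is_wff(s[1:])
        if s.startswith("(") and s.endswith(")"):
            # Binary clause: find the connective at parenthesis depth 1.
            depth = 0
            for i, ch in enumerate(s):
                if ch == "(":
                    depth += 1
                elif ch == ")":
                    depth -= 1
                elif ch in "&>" and depth == 1:
                    return is_wff(s[1:i]) and is_wff(s[i + 1:-1])
        return False                          # no rule applies

    assert is_wff("(p&~q)")
    assert is_wff("~~(p>(q&r))")
    assert not is_wff("p&q")      # missing parentheses
    assert not is_wff("(p&&q)")
    ```

    Because a fully parenthesized wff has exactly one connective at depth 1, the checker's verdict is objective in just the sense described: any machine running it returns the same answer.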