Comments

  • A -> not-A


    (When I write 'well formed formula', take that as short for 'well formed formula of the language'.)

    Of course, people may have different ideas about what 'formal' means. But I think we would find that, for the most part at least, such things that are considered formal languages - such as languages for formal theories, computer languages, etc. - have in common that well formedness and certain other features are algorithmically checkable.*

    If a set is recursive, then there is an algorithm to determine whether something is or is not in that set. So if the set of well formed formulas is recursive, then there is an algorithm to determine whether a given sequence of symbols is or is not a well formed formula.

    The desideratum is that it is algorithmically checkable whether a given string is or is not a well formed formula.

    And, yes, that is all syntactical.

    And the formation rules are chosen so that indeed they provide that the set of well formed formulas of the language is recursive. So, the rules are given as recursive definitions.

    And the inference rules are recursive relations. So the set of proofs is a recursive set. So it is machine checkable whether a sequence of formulas is or is not a proof.

    The point is that, with a formal language and formal proof, it is utterly objective whether a sequence of formulas is indeed a proof. A computer or a human following the instructions of an algorithm may (at least in principle) objectively check whether a given purported proof is indeed a proof.

    And, yes, all of that is syntactical.
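    To illustrate the point (only a sketch of my own, not anyone's official definition), here is a toy Python checker for a small sentential language - sentence letters, '~', and parenthesized binary connectives. The grammar and the names are just my choices for the illustration; the point is only that well formedness is decidable by a mechanical procedure of roughly this shape.

    BINARY = ['<->', '->', '&', 'v']

    def is_wff(s):
        s = s.replace(' ', '')
        ok, rest = parse(s)
        return ok and rest == ''

    def parse(s):
        # try to read one sentence from the front of s; return (success, remainder)
        if s == '':
            return False, s
        if s[0].isupper():                      # a sentence letter
            return True, s[1:]
        if s[0] == '~':                         # ~P
            return parse(s[1:])
        if s[0] == '(':                         # (P c Q) for a binary connective c
            ok, rest = parse(s[1:])
            if ok:
                for c in BINARY:
                    if rest.startswith(c):
                        ok2, rest2 = parse(rest[len(c):])
                        if ok2 and rest2.startswith(')'):
                            return True, rest2[1:]
        return False, s

    print(is_wff('(P -> (Q & ~R))'))   # True
    print(is_wff('(P -> & Q)'))        # False
    print(is_wff('P -> Q'))            # False: this toy grammar requires the outer parentheses

    The checker just follows the recursive formation rules, which is the sense in which the definition is a recursive one.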

    /

    * We can also look at notions of 'formal' prior to the advent of recursion theory. And we can look at the general study of modern 'formal languages' that is mostly aimed at formal linguistics and computer science.
  • A -> not-A


    You should think of a word for 'follows from' so that it is not conflated with other common senses.

    I suggest 'P raps Q' (equivalently, 'Q raps from P') instead of 'Q follows from P'.

    ('raps' from 'Srap')
  • A -> not-A


    I would think 'follows from' is reflexive and transitive, but not symmetric.

    I would need to doublecheck these (and depends on knowing more about 'follows from'):

    C(P) is consistent if and only if P is not logically false.

    If P is contingent, then N(P) is inconsistent.

    P is logically false if and only if N(P) = 0. (explosion)

    If P is logically true, then N(P) is inconsistent.
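    As a partial doublecheck, here is a brute-force Python sketch for the sentential case with two sentence letters, reading 'follows from' as entailment and 'consistent' as 'jointly satisfiable' (which, per soundness and completeness, matches the proof-theoretic sense). Representing sentences by their truth functions is my own shortcut for the illustration; all four claims come out true over this finite fragment, which of course is not a proof of the general statements.

    from itertools import product

    VALS = list(product([False, True], repeat=2))          # the four valuations of (p, q)
    SENTENCES = [dict(zip(VALS, tt)) for tt in product([False, True], repeat=4)]

    def follows(q, p):              # q follows from p: every valuation making p true makes q true
        return all(q[v] for v in VALS if p[v])

    def satisfiable(ss):            # some valuation makes every member true
        return any(all(s[v] for s in ss) for v in VALS)

    def logically_false(p):  return not any(p[v] for v in VALS)
    def logically_true(p):   return all(p[v] for v in VALS)
    def contingent(p):       return not logically_false(p) and not logically_true(p)

    for P in SENTENCES:
        C = [Q for Q in SENTENCES if follows(Q, P)]        # consequences of P
        N = [Q for Q in SENTENCES if not follows(Q, P)]    # non-consequences of P
        assert satisfiable(C) == (not logically_false(P))          # claim 1
        assert (not contingent(P)) or (not satisfiable(N))         # claim 2
        assert logically_false(P) == (len(N) == 0)                 # claim 3
        assert (not logically_true(P)) or (not satisfiable(N))     # claim 4

    print("all four claims check out over this finite fragment")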
  • A -> not-A


    Depends on what 'needs' means.

    Mathematics pretty much needs sets to work with. But if one denies that mathematics needs to be axiomatized, then mathematics does not need the set theory axioms.

    If one affirms that mathematics needs to be axiomatized, then the usual axiomatization is set theory.
  • A -> not-A


    I tried to put something together along the lines you have in mind.

    The best I came up with is this:

    (1) For any sentence P, the set of all sentences is partitioned into two sets: (a) the set of sentences that follow from P, call it C(P), and (b) the set of sentences that do not follow from P, call it N(P). Then instead of sentences, consider sets of sentences, and let the negation of C(P) be N(P).

    But that's not what you want. So, maybe we would consider the set of sentences not compatible (my word) with P such as "raining" is not compatible with "sunny" (putting aside sun showers). But that uses "not".

    So I thought of this:

    (2) For any sentence P, the set of all sentences is partitioned into two sets: (a) the set of sentences Q such that {P Q} is satisfiable, call it C*(P), and (b) the set of sentences Q such that {P Q} is not satisfiable, call it N*(P). Then instead of sentences, consider sets of sentences, and let the negation of C*(P) be N*(P). But that uses "not".

    But then I thought that we should just leave it up to gorillas; and that does seem to work.
  • A -> not-A
    specialized sense of consistency ― that P and ~P are inconsistent, for any P (Srap Tasmaner)

    We have that. You want to use that to define inconsistency in general without using the notions of semantic or syntactical consequence?

    you can say A and B are inconsistent if A → C and B → ~C. (Srap Tasmaner)

    We already have:

    If G |- A -> C and G |- B -> ~C, then Gu{A B} is inconsistent.
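    For what it's worth, here is a quick semantic check of the special case G = {A -> C, B -> ~C}, reading inconsistency as unsatisfiability (a Python snippet of my own, just for illustration):

    from itertools import product

    # no row makes A -> C, B -> ~C, A and B all true
    rows = product([True, False], repeat=3)
    print(any(((not A) or C) and ((not B) or (not C)) and A and B
              for A, B, C in rows))        # False: the set is unsatisfiable, hence inconsistent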

    I don't see what you're bringing.

    unless you want to start with the Sheffer stroke (Srap Tasmaner)

    Or Nicod dagger.

    we need both consistency and consequence as core ideas (Srap Tasmaner)

    Not getting it.

    We define consistency from provability. (We could also define it from satisfiability.) Why is that lacking?

    any given idea (claim, thought, etc.) has a twin that is the one thing guaranteed under no circumstances to follow from it (Srap Tasmaner)

    How do you know there is only one thing?
  • A -> not-A
    Now it could be that the LNC, so beloved on this forum, functions as a minimal inconsistency guard, and from that you get the rest. (Srap Tasmaner)

    I don't know what you mean by "minimal inconsistency guard".
  • A -> not-A
    Suppose I hold beliefs A and B. And suppose also that A → C, and B → ~C. That's grounds for claiming that A and B are inconsistent, but only because C and ~C are inconsistent. (Srap Tasmaner)

    Okay.

    How would we define the bare inconsistency of C and ~C in terms of consequence? (Srap Tasmaner)

    What sense of "consequence"? Entailment?

    Do you mean how to define 'inconsistent'?

    (First order in this post and generally in posts unless said otherwise.)

    Or how to show that {C, ~C} is inconsistent? It's trivial. {C ~C} |- C & ~C. But yes, that uses conjunction intro, which is deduction. And since we have {C ~C} |- C & ~C, we have {C ~C} |= C & ~C. (soundness)

    Or semantically, it's trivial to show that {C, ~C} is unsatisfiable. And we have that any unsatisfiable set is inconsistent. (completeness)

    Or, we could define 'inconsistent' as "proves a formula C and proves ~C". Then, even more trivially, {C ~C} |- C and {C ~C} |- ~C. But even that uses a deduction rule (whatever you call it - inferring a sentence by virtue of it being in the set of premises.) And since we have {C ~C} |- C and {C ~C} |- ~C, we have {C ~C} |= C and {C ~C} |= ~C.


    /

    Df. a set of sentences is inconsistent if and only if it proves a contradiction.

    Th. a set of sentences is inconsistent if and only if it entails a contradiction.

    Df. a set of sentences is satisfiable if and only if there is an interpretation in which all the sentences are true.

    Th. a set of sentences is inconsistent if and only if it is not satisfiable.
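    A small Python sketch of the semantic route, just to make the triviality vivid (one sentence letter, two interpretations; my own toy encoding):

    interpretations = [True, False]          # the two assignments to C

    def satisfiable(sentences):
        # a set is satisfiable iff some interpretation makes every member true
        return any(all(s(i) for s in sentences) for i in interpretations)

    C     = lambda i: i
    not_C = lambda i: not i

    print(satisfiable({C}))          # True
    print(satisfiable({not_C}))      # True
    print(satisfiable({C, not_C}))   # False: {C, ~C} is unsatisfiable, hence inconsistent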
  • A -> not-A
    the difference between formal and informal (Banno)

    We do have a definition of 'formal language': the set of well formed formulas is a recursive set*; and perhaps add unique readability. [EDIT: That might be only a terse synopsis of a definition that might need refinement and other clauses. And I have in mind mainly the kind of languages used in mathematical logic.]

    * More exactly, the set of Godel numbers of well formed formulas is a recursive set.

    /

    Might be interesting to adduce a formal sentence and demonstrate somehow that it can't be said in English alone (not just that all known attempts failed).
  • A -> not-A


    For writing, I would accept ordinary punctuation, but don't know about formatting or special characters given an ad hoc role.

    Also, I recognize that the burden is not just to show that it would be difficult to use only English but that it would not be possible.
  • A -> not-A


    Impressive that you did it without a bot. I just let the bot give me the English translation, but it too used special formatting - bullet points and indentations.

    But don't let the gratuitous complexity distract from the point. We could find examples in actual mathematics that might not be so complex but still tough. And then in the primitive language.
  • A -> not-A


    Of course, you can use special formatting to do it, with a convention as to what it signifies. But is that natural? That is, try to do it spoken.
  • A -> not-A


    I guess that's similar to the prisoner's dilemma.

    So the core idea would be not whether one idea (or claim or whatever) follows from another, but whether two ideas (claims, etc) are consistent with each other. (Srap Tasmaner)

    Okay, but consistency is defined in terms of derivability (which, in first order, is equivalent with entailment).
  • A -> not-A


    Said in natural language? Includes using parentheses to mark arbitrarily deep nested sub-sentences?

    In natural language, how would you say?:

    ∀x ∃y ∀z ((P(x) ∧ ∃u (Q(y) ∨ (R(u) ∧ ∀v (S(v) → T(z, v))))) → ¬(∀w (U(w) ∧ ∃t (V(x, t) → W(t, w))) ∧ ∃p(X(p) ∧ ∀q (Y(q) → Z(p, q)))) ∨ (A(x, y, z) ∧ ∀b ∃c (D(b, c) → (E(x, b, c) ∧ ∃d (F(d) ∧ G(d, x, y))))))

    And throw in some math and modal operators too.

    ∀x ∈ ℝ ∃y ∈ ℝ ∀z ∈ ℕ ((x² + y² = z² ∧ ∃u ∈ ℤ (sin(u) + cos(y) ≥ 1 ∨ (∫₀ᵘ eᵗ dt = eᵘ - 1 ∧ ∀v ∈ ℚ (v > 0 → d/dv (v²) = 2v)))) → ¬(∀w ∈ ℂ (|w| ≤ 1 ∧ ∃t ∈ ℝ⁺ (log(t) ≤ 0 → t ≤ 1)) ∧ ∃p ∈ ℕ (p! = ∏ₖ₌₁ᵖ k ∧ ∀q ∈ ℝ (q ≠ 0 → 1/q ≠ 0))) ∨ (A(x, y, z) ∧ ∀b ∈ ℝ ∃c ∈ ℝ (D(b, c) → (E(x, b, c) ∧ ∃d ∈ ℕ (F(d) ∧ G(d, x, y)))))) ∧ ∀r ∈ ℝ (◻H(r) → ◇I(r))

    (I hope that's well formed and displays correctly - it was made by a bot.)
  • A -> not-A
    In particular, it's interesting to think of this whole complex of ideas as being "safe" because coherent ― you can jump on the merry-go-around anywhere at all, pick any starting point, and you will find that it works, and whatever you develop from the point where you began will serve, oddly, to secure the place where you started. And this will turn out to be true for multiple approaches to foundations for mathematics and logic. (Srap Tasmaner)

    Nice.
  • A -> not-A


    Interesting. Very much bears looking into.

    By the way, I greatly enjoyed the video linked in the 'Logical Nihilism' thread. I have a lot of thoughts about it, and a lot of reading to do about it, but just not the time to put it together as a good post now.
  • A -> not-A
    Consistent languages capable of first order arithmetic can't contain their own truth predicate, so we don't put the predicate in. But natural language does contain its own truth predicate and behaves... well it doesn't disintegrate. That's at the very least a type distinction between consistent formal languages and natural language - one can contain its own truth predicate without being crap, one cannot. (fdrake)

    I'm not sure that I recall Tarski correctly (perhaps he does mention the notion of a 'consistent or inconsistent language'?) But usually languages are not consistent or inconsistent; sets of formulas are consistent or inconsistent. In this part, he's talking about an interpreted formal language. We have that there would be a contradiction (in the meta-theory, whether formal or informal) if such an interpreted formal language had its own truth predicate.

    Yes, aside from paraconsistency, we would not comfortably bear contradiction, while we can bear paradox in natural language. But, we need to keep in mind that wide-open natural language doesn't provide the desiderata of formal languages.
  • A -> not-A
    Isn't formal language a part of natural language? (Banno)

    In one view, we have a formal object-language, and an informal or formal meta-language that includes the formal object-language.
  • A -> not-A
    Just that there's at least here a dependence of mathematics on natural language, which gives the appearance of being purely pedagogical, or unimportant "set up" steps (still closely related to the thing about logical schemata, from above). (Srap Tasmaner)

    I don't know of anyone who thinks natural language conveyance of mathematics is unimportant.

    the indefinability of "set" (Srap Tasmaner)

    'set' can be defined from 'element of'.
  • A -> not-A
    I imagined Srap and I were talking about how the formalism in mathematics doesn't start at its "grounds", in the axioms. I imagine Srap and I are reacting to an imagined enemy of a formalist who thinks that mathematics is somehow "just" symbol manipulation. Or alternatively just awed at how the root of the formalism is in as something as messy as natural language, despite how set in stone - settable in stone - the concepts of mathematics seem to be. (fdrake)

    Just to be clear, one form of formalism is the extreme view that mathematics is mere symbol games. But formalism is not at all confined to such an extreme view.

    /

    Yes, even an extreme game formalist would have to admit that eventually we have to communicate in natural language. (Though he might try the "write the code and send it up in a spaceship to the advanced intelligence creatures" argument.)

    And variables are used in both mathematics and everyday discourse. I can see the shape of an argument against the extreme game formalist based on the fact that variables (for example) probably originated naturally. So the argument is this?: Mathematics needs variables, and variables are ultimately understood naturally not formally.
  • A -> not-A
    logic as just "given" in toto (Srap Tasmaner)

    It's more like the chicken and the egg (as you mention later in your post).

    You can take the logic as given to base math on it.

    And you can take a certain amount of math as given to base logic on it.

    Choose your chicken or your egg.

    Even formally: You need predicate logic as a basis for Z set theory. And, at least in usual formulations, you need at least some finitistic math to formalize the predicate calculus.

    One way of thinking a way out of the bind is to take successive meta-theories, but that would be ad infinitum.

    Another way is to point to the coherency: There is credibility as both logic-to-math and math-to-logic are both intuitive and work in reverse nicely.

    Another way would be just to display the code for the formal theory without explanation or verification of any aspect of it, and then put in sequences of formulas and see which are ratified as proofs. That is, suppose you uploaded it to a highly intelligent life form, without explanation, and let those creatures discern that it works. Personally, though not necessarily philosophically, that tack doesn't appeal to me.

    But, I don't see how to disagree that yes, ultimately, as humans, since we have finite time and can't in a lifetime escalate meta-theories infinitely, ultimately it will boil down to ostensive understanding, just as so much of thinking and use of language seems to do.

    member and collection (Srap Tasmaner)

    Most writers seem to view 'member' and 'set' (or 'class' depending on the treatment) as the base notions. But formally we can do it with just 'member'.

    there just is no way around ∈, no way to cobble it together from the other logical constants. (Srap Tasmaner)

    (1) 'e' is a non-logical constant.

    (2) It is not precluded that we may define 'e' from primitives. von Neumann for example.

    Just a tendentious turn of phrase, not important. (Srap Tasmaner)

    'tendentious' is definitely the Word of the Week. The runner-up is 'flows'.

    whether we could get away with thinking of its use elsewhere, not only in the sciences, but in philosophy and the humanities, as, in essence, applied mathematics. (Srap Tasmaner)

    I must misunderstand you? Famously, formal logic is not just studied in philosophy but applied in philosophy.
  • A -> not-A


    Don't know what you're driving at. People use variables outside of math books too.
  • A -> not-A


    At the outset of talking about unions (U) and intersections (/\), we get an interesting consideration.

    For any S, US = {x | there is a y in S such that x in y}

    For any non-empty S, /\S = {x | for all y in S, x in y}.

    Why non-empty?

    U0 no problem. U0 = 0.

    But why no /\0? Because:

    Roughly put, the subset axiom is: For any set x and describable property P, there is the subset of x whose members are all and only those members of x that have property P.

    Now, there is no set of which every set is a member. Why? Because:

    Suppose there is a set V such that every set is a member of V. Then there would be the subset of V of all the sets that have the property of not being a member of itself. Then we have Russell's paradox.

    So there is no such V.

    Now, suppose there is /\0. But every set would be a member of /\0, as seen:

    For all y, it is not the case that y in 0. So for all y, if y in 0 then x in y. So every x would be in /\0, so /\0 would be the universal set V, but there is no such set.

    Note: I used the word 'because' in the sense of 'since' not causality.
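    Here is a finite Python illustration of the two definitions and of the vacuity point. The 'universe' parameter is purely an artifact of the illustration - we need some pool of candidates to range over, and the point above is that set theory provides no such universal pool:

    def big_union(S):
        return {x for y in S for x in y}

    def big_intersection(S, universe):
        # the candidate pool 'universe' exists only for this finite illustration
        return {x for x in universe if all(x in y for y in S)}

    S = [{1, 2}, {2, 3}]
    print(big_union(S))                        # {1, 2, 3}
    print(big_intersection(S, {1, 2, 3, 4}))   # {2}

    # For S = 0, "for all y in S, x in y" is vacuously true of every x, so /\0
    # would have to contain everything in whatever pool we draw from - i.e. it
    # would have to be the universal set V, which does not exist:
    print(big_intersection([], {1, 2, 3, 4}))  # {1, 2, 3, 4}
    print(big_union([]))                       # set(): U0 = 0 is unproblematic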
  • A -> not-A
    you have to have sets (or an equivalent) to do much of anything in the rest of mathematics, but so what? (Srap Tasmaner)

    Set theory axiomatizes classical mathematics. And the language of set theory is used for much of non-classical mathematics. Those are two answers to "so what?"
  • A -> not-A
    What does mathematics get out of pretending it's importing logic from elsewhere? (Srap Tasmaner)

    What pretending? Would you mention a specific writing?
  • A -> not-A


    I think of mathematical logic as a sub-subject of formal logic.

    I guess that, because logic and formal logic fall under philosophy, and mathematical logic is a part of formal logic, we have mathematical logic under the wide umbrella too.

    Mathematical logic is typically a course in Mathematics. But sometimes such things as set theory are taught as a Philosophy course.

    Symbolic logic is often found as a Philosophy course. But it can be a warmup for mathematical logic, and it usually includes translation of natural language arguments; even if one goes on to use symbolic logic mainly for mathematics, it helps to first know how to translate natural language since so much of mathematical prose is in natural language.

    And, of course, mathematical logic is extended in formal logics regarding all kinds of philosophical subjects - modal, epistemic, etc. And, of course, in philosophy of language. And, prominently, computing.

    And, of course, philosophy of mathematics is steeped in considerations about mathematical logic, set theory and mathematics.
  • A -> not-A
    motivates category theory (Srap Tasmaner)

    Category theory centers on arrows - functions, composition of functions, morphisms, and the like.
  • A -> not-A
    Unforgiving when authors were too slapdash or handwavy about this, which I thought showed good philosophical sense. (Srap Tasmaner)

    Yet, often censoriously regarded as bad philosophical sense in The Philosophy Forum.
  • A -> not-A
    are you sure this is right? (Srap Tasmaner)

    Absolutely sure.

    If x in 0 then x in 0.

    That's an instance of P -> P.

    do we want to say it's because of the proof that it is so? (Srap Tasmaner)

    I don't opine as to 'because'.

    "holds by" ― rather than, "is proved using"Srap Tasmaner

    I meant 'is proved by'. I find that the word 'holds' is used in at least two senses in mathematics: (1) is true, (2) is proven. But with set theory, 'true' could be taken as 'true in any model of the set theory axioms'. So "0 subset of 0" is proven by deploying "P -> P" and it is also true by deploying "P -> P" (given the soundness theorem in this version: if a sentence is provable from a set of axioms then the sentence is true in any model of the axioms).
  • A -> not-A
    Another friendly picking of nits: Some writers use the word 'contained'; it is not wrong. But sometimes I see people being not clear whether it means 'member' or 'subset', so I don't use the word. I just say 'member of' or 'subset of' as suited.
  • A -> not-A
    One other tiny point of unity: I always thought it was interesting that for "and" and "or" probability just directly borrows ∩ and ∪ from set theory. These are all the same algebra, in a sense, logic, set theory, probability. (Srap Tasmaner)

    The duals run all through logic and mathematics. The main result concerning propositional logic is that there is an isomorphism between propositional logic and the Tarski-Lindenbaum algebra (a particular Boolean algebra). Then Tarski also showed an isomorphism between predicate logic and cylindric algebras (the details of cylindric algebras are beyond me).
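    As a finite toy gesture at the construction (only a sketch of my own, for two sentence letters, not the actual Tarski-Lindenbaum algebra): identify each sentence with its truth table, so logically equivalent sentences collapse to one of 16 classes, and check by brute force that the operations induced by &, v, ~ satisfy some Boolean algebra laws.

    from itertools import product

    ELEMENTS = list(product([False, True], repeat=4))   # the 16 equivalence classes

    def meet(a, b): return tuple(x and y for x, y in zip(a, b))   # induced by &
    def join(a, b): return tuple(x or y for x, y in zip(a, b))    # induced by v
    def comp(a):    return tuple(not x for x in a)                # induced by ~

    BOTTOM = (False,) * 4     # the class of contradictions
    TOP    = (True,)  * 4     # the class of tautologies

    for a in ELEMENTS:
        assert join(a, comp(a)) == TOP and meet(a, comp(a)) == BOTTOM
        for b in ELEMENTS:
            assert meet(a, b) == meet(b, a) and join(a, b) == join(b, a)
            for c in ELEMENTS:
                assert meet(a, join(b, c)) == join(meet(a, b), meet(a, c))

    print("the 16 classes form a Boolean algebra under the induced operations")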
  • A -> not-A
    replace set theory entirely (Srap Tasmaner)

    I'm not expert either. But my understanding is that yes, category theory couches mathematics in different terms from set theory, and thus provides a different way of thinking, but category theory is inter-interpretable with ZFC+"exists an inaccessible cardinal"*. And ZFC+"exists an inaccessible cardinal" provides a model of ZFC, so it is a quite strong theory in that sense.

    * But I have never been able to find what that really means. I know what interpretability is. And I know what ZFC+"exists an inaccessible cardinal" is. But interpretability is between two theories, so what exactly is the theory 'category theory' that is inter-interpretable with ZFC+"exists an inaccessible cardinal"?

    /

    Peter Smith offers some nice content. And he used to post at sci.logic, but, if I recall correctly, got disgusted with all the cranks.
  • A -> not-A


    0 subset of 0 holds by P -> P.
  • A -> not-A


    Then let me nitpick that. You didn't mean 'nitpick' pejoratively, but subset v member is not a nitpick, and the point about circularity is a good catch.
  • A -> not-A


    Your probability exploration is interesting. I think there's probably (pun intended) been a lot of work on it that you could find.

    Indeed, logic and mathematics - chicken and the egg.

    I am not up to speed on category theory though I know some of its basics. One problem I've had is finding an axiomatization. However, ZFC+"exists an inaccessible cardinal" is an axiomatization of category theory. So, as far as I can tell, category theory does not eschew set theory but rather, at least to the extent of interpretability (different sense of 'interpretation' in this thread), it presupposes it and goes even further.
  • A -> not-A
    P can be empty set, which is a member of every set. (Moliere)

    You mean it is a subset of every set (the empty set is not a member of every set).

    Not a correction, but a reminder: We prove that the empty set is a subset of every set by using the material conditional:

    Show for all S and x, if x is in 0 then x is in S:

    It is not the case that x is in 0, so if x is in 0 then x is in S.

    I haven't followed all of your conversation, so this might not be pertinent, but if it is, it is good to keep in mind: if '->' is construed in terms of subsets to make sense of the material conditional and vacuous instances, then that tack would be circular, since the empty set being a subset of every set is itself based on the material conditional. But of course, we could say the notions are compatible, though that is no surprise.
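    A finite Python check of the vacuous-truth point (the 'universe' pool of candidates is only an artifact of the illustration):

    empty = set()

    def subset(A, B, universe):
        # "if x in A then x in B", read materially
        return all((x not in A) or (x in B) for x in universe)

    universe = {1, 2, 3}
    for S in [set(), {1}, {1, 2}, {1, 2, 3}]:
        print(S, subset(empty, S, universe))   # True every time: 0 is a subset of every S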
  • A -> not-A


    Maybe you mean by analogy?

    For example:

    Check

    (1) If Churchill was English then Churchill had a stiff upper lip
    Churchill had a stiff upper lip
    therefore Churchill was English

    Compare with:

    (2) If DeGaulle was German then DeGaulle was born in Lille
    DeGaulle was born in Lille
    therefore DeGaulle was German

    (2) has true premises and a false conclusion. Therefore (2) is invalid. But (2) is analogous in form with (1). So (1) is invalid.
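    The shared form is: A -> B, B, therefore A. A quick truth-table run (a Python sketch of my own) exhibits the countermodel row:

    from itertools import product

    for A, B in product([True, False], repeat=2):
        premises_true = ((not A) or B) and B      # A -> B and B
        if premises_true and not A:
            print("countermodel: A =", A, ", B =", B)
            # A false, B true: both premises true, conclusion false, so the form is invalid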
  • A -> not-A
    More information and explanation [aka 'increasing the word count']

    I specified exactly what a sentential logic interpretation is. To add to that, here is what is meant by "true (or false) per an interpretation" or "true (or false) in an interpretation":

    First we define 'is a sentence' by induction:

    Every sentence letter is a sentence (drop parentheses when not needed).

    If P is a sentence, then ~P is a sentence.

    If P and Q are sentences, then (P & Q) is a sentence.

    If P and Q are sentences, then (P v Q) is a sentence.

    If P and Q are sentences, then (P -> Q) is a sentence.

    If P and Q are sentences, then (P <-> Q) is a sentence.

    That definition is in "stages", and so is the definition of truth per an interpretation below ('P' and 'Q' range over sentences).

    An interpretation assigns a truth value to each sentence letter. So a sentence letter alone has, per that interpretation, the truth value assigned by that interpretation:

    If P is just a sentence letter, then P is true per the interpretation if the interpretation assigns true to P; otherwise P is false per the interpretation.

    ~P is true per the interpretation if P is false per the interpretation; ~P is false per the interpretation otherwise.

    P & Q is true per the interpretation if both P and Q are true per the interpretation; P & Q is false per the interpretation otherwise.

    P v Q is true per the interpretation if at least one of P or Q is true per the interpretation; P v Q is false per the interpretation otherwise.

    P -> Q is true per the interpretation if either P is false per the interpretation or Q is true per the interpretation; P -> Q is false per the interpretation otherwise.

    P <-> Q is true per the interpretation if either both P and Q are true per the interpretation or both P and Q are false per the interpretation; P <-> Q is false per the interpretation otherwise.

    An example:

    (P -> Q) v (R & Q)

    Suppose the interpretation is:

    P ... true
    Q ... true
    R ... false.

    Then:

    P -> Q is true per the interpretation
    R & Q is false per the interpretation
    so, abracadabra, voila, and drumroll please ...
    (P -> Q) v (R & Q) is true per the interpretation

    Similarly, in stages like that, for arbitrarily complicated sentences.

    That's what is meant by 'true (or false) per an interpretation' or 'true (or false) in an interpretation'.
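    Here is the staged evaluation as a short Python sketch (the nested-tuple encoding of sentences is just my own convenience for the illustration):

    def true_per(sentence, interpretation):
        if isinstance(sentence, str):                     # a sentence letter
            return interpretation[sentence]
        op = sentence[0]
        if op == '~':
            return not true_per(sentence[1], interpretation)
        left  = true_per(sentence[1], interpretation)
        right = true_per(sentence[2], interpretation)
        if op == '&':   return left and right
        if op == 'v':   return left or right
        if op == '->':  return (not left) or right
        if op == '<->': return left == right

    # the example above: (P -> Q) v (R & Q), with P true, Q true, R false
    sentence = ('v', ('->', 'P', 'Q'), ('&', 'R', 'Q'))
    print(true_per(sentence, {'P': True, 'Q': True, 'R': False}))   # True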

    /

    Various definitions we've seen mention things like 'cases', 'circumstances'.

    Those can be taken to mean 'interpretations'.

    And sometimes 'possible' and 'impossible' are used.

    Those can be taken to mean 'true in at least one interpretation' and 'true in no interpretation', respectively.

    /

    This is what I say is the common interpretation of your sources on validity:

    1. Assume all the premises are true
    2. See if it is inferentially possible to make the conclusion false, given the true premises
    3. If it is not possible, then the argument is valid
    Leontiskos

    Whatever is meant by "See if it is inferentially possible to make the conclusion false, given the true premises", here instead is one* common method for checking the validity of an argument in sentential logic. *There are some more efficient ways, but they are harder to specify in a post.

    Df. An argument is valid if and only if there is no interpretation in which all the premises are true and the conclusion is false.

    To check for the validity of an argument (with finitely many premises):

    1. Write the conjunction of the premises. Follow that with '->'. Follow that with the conclusion.

    2. Write the truth table for the above formed sentence.

    3. If there is a row in which the antecedent is true and the consequent (the argument's conclusion) is false, then the argument is invalid, and it is valid otherwise.

    Indeed, this highlights a connection between arguments and conditionals.

    No "assuming". No seeing "if it is inferentially possible to make the conclusion false, given the true premises" whatever that means. No messing with the modality of possibility. Indeed, just a simple, utterly clear, step by step mechanical method.

    Note: There is no mechanical procedure to check for the validity of arbitrary formulas of predicate logic.
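    And here is the truth-table method above as a short Python sketch (again my own toy encoding of sentences as nested tuples; it checks each row for true premises and a false conclusion, which is what step 3 amounts to):

    from itertools import product

    def true_per(s, i):
        if isinstance(s, str):
            return i[s]
        if s[0] == '~':
            return not true_per(s[1], i)
        a, b = true_per(s[1], i), true_per(s[2], i)
        return {'&': a and b, 'v': a or b, '->': (not a) or b, '<->': a == b}[s[0]]

    def letters(s):
        return {s} if isinstance(s, str) else set().union(*(letters(t) for t in s[1:]))

    def valid(premises, conclusion):
        ls = sorted(set().union(letters(conclusion), *(letters(p) for p in premises)))
        for row in product([True, False], repeat=len(ls)):
            i = dict(zip(ls, row))
            if all(true_per(p, i) for p in premises) and not true_per(conclusion, i):
                return False          # a row with all premises true and the conclusion false
        return True

    print(valid([('->', 'P', 'Q'), 'P'], 'Q'))    # True: modus ponens is valid
    print(valid([('->', 'P', 'Q'), 'Q'], 'P'))    # False: affirming the consequent is not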

    /

    I mentioned that I don't mention 'inconsistency' when defining 'valid argument'. There is good reason for that, which is:

    The notion of consistency requires the notion of deducibility and deducibility is a whole subject in itself.

    Df. A set of sentences is consistent if and only if there is no deduction of a contradiction from the set.

    But that requires having a deduction system from which to define 'is a deduction'.

    But we may wish to consider validity without having first done all stuff we have to do to set up a deduction system, which we can do later.

    Indeed, often textbooks in logic devote early chapters to semantics (truth/falsehood, interpretation, entailment, validity, etc.) and then separate chapters to deduction. And then, chapters in which we prove meta-theorems about the connection between semantics and deduction - such as the central theorems of soundness and completeness, which I recently mentioned. That is a conceptually elegant approach. Indeed, this engenders two branches of study in logic: model theory (interpretations) and proof theory (deductions).

    /

    Another definition of 'valid argument' to add to the list:

    "it is impossible that all the premises should be true and the conclusion false" (Intermediate Logic - Bostock)
  • A -> not-A


    Of course, reductio ad absurdum. But how is that "checking the validity of one argument using another"?
  • A -> not-A
    "respectful discussion"

    Respect includes not intentionally or carelessly putting words in the mouth of a poster, especially after the poster has dropped a flag on it and more than once. Respect includes not intentionally or carelessly seriously mischaracterizing a poster's main point, even to the point of reversing it.

TonesInDeepFreeze
