I'm taking this out of context, for the sake of a comment.
I'm a little rusty on natural deduction but I think
reductio is usually like this:
A (assumption)*
...
B (derived)
...
~B (derived)
⊥
━━━━━━━━━━━━
A → ⊥ (→ intro)*
━━━━━━━━━━━━
~A (~ intro)
I'm not sure how to annotate the introduction of ⊥ (it comes from having both B and ~B), but it's obviously right. Our assumption A is then discharged at the next line, A → ⊥, and the step from there to ~A is the definition of "~", or the introduction rule for "~", as you like.
The point being, A is gone by the time we get to ~A. It might look like the next step could very well be A → ~A by →-introduction, but it can't be, because A is no longer available as an assumption.
What you do have is a construction of ~A with no undischarged assumptions.
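For what it's worth, the same shape goes through as a term in Lean 4. This is just a sketch; hB and hnB are my stand-ins for whatever derivations get you from A to B and to ~B.

-- assume A, derive B and ¬B, get ⊥, and discharge the assumption.
-- ¬A is by definition A → False, so the "→ intro" and "~ intro" steps
-- collapse into the single fun hA => ... here.
example (A B : Prop) (hB : A → B) (hnB : A → ¬B) : ¬A :=
  fun hA => (hnB hA) (hB hA)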
#
We've talked regularly in this thread about how A → ~A can be reduced to ~A; they are materially equivalent. We haven't talked much about going the other way.
That is, if you believe that ~A, then you ought to believe that A → ~A.
In fact, you ought to believe that B → ~A for any B, and that A → C for any C.
And in particular, you ought to believe that
P → ~A (where B = P)
~P → ~A (where B = ~P);
and you ought to believe that
A → Q (where C = Q)
A → ~Q (where C = ~Q).
If you combine the first two, you have
(P ∨ ~P) → ~A,
while, if you combine the second two, you have
A → (Q ∧ ~Q).
These are all just other ways of saying ~A.
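All of this checks mechanically, if you want it to. Here is a Lean 4 sketch; the hypothesis names are mine.

-- ~A and A → ~A are materially equivalent:
example (A : Prop) : ¬A ↔ (A → ¬A) :=
  ⟨fun h _ => h, fun h a => h a a⟩

-- from ¬A you get B → ¬A for any B, and A → C for any C:
example (A B : Prop) (h : ¬A) : B → ¬A := fun _ => h
example (A C : Prop) (h : ¬A) : A → C := fun a => absurd a h

-- and the two combined forms above:
example (A P : Prop) (h : ¬A) : (P ∨ ¬P) → ¬A := fun _ => h
example (A Q : Prop) (h : ¬A) : A → (Q ∧ ¬Q) := fun a => absurd a h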
#
Why should it work this way? Why should we allow ourselves to make claims about the implication that holds between a given proposition, which we take to be true or take to be false, and any arbitrary proposition, and even the pair of a proposition and its negation?
An intuitive defense of the material conditional, and then not.
"If ... then ..." is a terrible reading of "→", everyone knows that. "... only if ..." is a little better. But I don't read "→" anything like this. In my head, when I see
I think
The (probability) space of P is entirely contained within the (probability) space of Q, and may even be coextensive with it.
The relation here is really ⊂, the subset relation, "... is contained in ...", which is why it is particularly mysterious that another symbol for → is '⊃'.
The space of a false proposition is nil, and ∅ is a subset of every set, so ∅ → ... is true for everything.
The complement of ∅ is the whole universe, unfortunately, and that's what true propositions are coextensive with. When you take up the whole universe, everything is a subset of you, which is why ... → P holds for everything, if P is true.
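Those two facts are just "the empty set is a subset of everything" and "everything is a subset of the universe", stated propositionally; in Lean 4 they are one-liners (again a sketch, nothing beyond the core library):

-- a false antecedent: "∅ → ..." holds for any consequent
example (Q : Prop) : False → Q := False.elim
-- a true consequent: "... → P" holds for any antecedent
example (P Q : Prop) (hp : P) : Q → P := fun _ => hp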
Most things are somewhere between ∅ and ⋃, though, which is why I have 'probability' in parentheses up there.
The one time he did — Moliere
Which is the interesting point here.
"George never opens when he's supposed to."
"Actually, there was that one time, year before last ― "
"You know what I mean."
Ask yourself this: would "George will not open tomorrow" be a good inference? And we all know the answer: deductively, no, not at all; inductively, maybe, maybe not. But it's still a good
bet, and you'll make more money than you lose if you always bet against George showing up, if you can find anyone to take the other side.
"George shows up" may be a non-empty set, but it is a negligible subset of "George is scheduled to open", so the complement of "George shows up" within "George is scheduled", is nearly coextensive with "George is scheduled". That is, the probability that any given instance of "George is scheduled" falls within "George does
not show up" is very high.
TL;DR. If you think of the material conditional as a containment relation, its behavior makes sense.
((Where it is counterintuitive, especially in the propositional calculus, it's because it
seems the only sets are ∅ and ⋃. Even without considering the whole world of probabilities in fly-over country between 0 and 1 ― which I think is the smart thing to do ― this is less of a temptation with the predicate calculus. In either case, the solution is to think of the universe as being continually trimmed down to one side of a partition, conditional-probability style.))