• Mathematical Conundrum or Not? Number Four
    it's just a fact that the early calculus was inconsistent.MindForged

    You raised a lot of really good points in this post and it's late so I only want to respond to this one point and I'll aspire to get back to the rest of your interesting post later.

    Newton's calculus was never inconsistent in the sense of logic. You are equivocating on the word "inconsistent." See, this is something I do happen to know about Aristotle: he listed some rhetorical fallacies, one of which was equivocation, using the same word with two different meanings within the same argument.

    You mentioned calculus was inconsistent in an earlier post, and I didn't push back on it then, but it's important to clarify this point now.

    I hope we can agree that a logical system (some axioms along with some inference rules) is inconsistent if there is a proof (a step-by-step application of the inference rules to the axioms) that results in a proposition P, and also a proof of not-P. I'm certain we agree on that.

    I will now argue that Newton's confusion over the nature of (what we now call) the limit of the difference quotient is NOT such an inconsistency.

    I believe that if I asked you to name the P for which both P and not-P have proofs, you would say, "Δy and Δx [in modern notation] are both nonzero and both zero." But that's not really the same thing, as I hope I can explain clearly enough to earn your agreement.

    So in Newton's calculus (using modern terminology and notation) we have a difference quotient Δy/Δx, where Δx is not zero and Δy is a function of Δx. [It's perfectly legitimate for Δy to be zero, as in the case of a constant function].

    Now as Δx gets very close to zero, it may be the case that the quotient Δy/Δx seems to get very close to some number, which Newton called the fluxion and that we now call the derivative. The derivative can be naturally interpreted as, for example, the instantaneous velocity of a moving particle. So whether we can mathematically formalize it or not, it's clearly an important concept in need of elucidation.
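
    For reference, the modern limit formulation that eventually resolved the puzzle (standard notation, not Newton's own) is

    $$\frac{dy}{dx} \;=\; \lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x} \;=\; \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}.$$

    Note that the definition never requires Δx to actually equal zero; it only describes how the quotient behaves as Δx shrinks.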

    [Just as with the proof of the infinitude of primes, I'm going over familiar territory in detail just to make sure everyone's on the same page].

    So we can sort of think of what the quotient does "when both Δx and Δy are zero," yet we know that this does not actually make any mathematical sense, because the expression 0/0 is not defined and cannot be defined consistently with the usual laws of arithmetic. So it's a puzzler. Berkeley's "ghosts of departed quantities" is a great line, a rhetorical zinger that shines the spotlight on Newton's problem.

    For what it's worth, Newton himself perfectly well understood the problem and struggled over the rest of his career to try to explain it, but without success. It did take 150 years, as you mentioned earlier, to develop the concept of a limit; and it was well into the twentieth century before we saw the complete path from ZF to calculus.

    What would we call Newton's problem? It's not an inconsistency in the sense of being able to prove both P and not-P. At no time did Newton ever say that Δx and Δy are both nonzero AND they are both zero. Newton knew better than to say that. We do NOT have a logical inconsistency in the formal sense.

    What we have is something that clearly works, but we haven't got the vocabulary to express it mathematically. That's a mental state familiar to everyone who's ever had to construct a proof. We get to the point where we can SEE what's going on, but we can't mathematically SAY what's going on. That's where Newton and the mathematicians of the 18th and 19th centuries got stuck till they finally worked out a proper formalization.

    I hope you can agree that this is not a case of a system that can derive a proof of some proposition P and not-P. That was not the case in Newton's calculus. Rather, Newton just saw a truth that he could not formalize, either with existing concepts or even by inventing new ones.

    So there is an equivocation between

    * Inconsistency as a formal proof of both P and not-P; and

    * Inconsistency as in getting to a point where it's intuitively obvious what's true yet you can't figure out how to formalize it properly.

    Calculus was never inconsistent, just un-formalizable for a couple of centuries.

    I do take your point that it's noteworthy that mathematicians kept at it till they developed a conceptual and symbolic framework to explain calculus. But that's not exactly the same as keeping at it to resolve a direct P and not-P contradiction as in the case of Russell's demonstration.

    On the other hand I see that in both cases, we are keeping at it in order to get to the bottom of some antinomy in which we perceive a larger truth that we can't properly express. I will grant you that much. The Newton difference quotient isn't an actual inconsistency, but it's still a pretty thought-provoking datapoint for your case.

    What do you think?

    Thank you @Srap Tasmaner for the MathJax pointer.
  • Mathematical Conundrum or Not? Number Four
    I dont care if the approach is realistic for you or not, there are departments dedicated to carrying out this very research : Philosophy departments. If you dont have the time to develop a semi-complete opinion on an academic subject, then you really shouldnt present whatever you have to say with the authority you adopt.Akanthinos

    I did acknowledge that people are doing this research and that they're serious people. And I simply stated that if I met one of them I'd offer up the prime example and ask them to explain to me why they care about the one and not the other. It's a question I'm trying to understand.

    I don't know much about philosophy, but I do have a bit of a math background, so I try to give my perspective. I'm generally pretty upfront about my areas of ignorance. I can be ignorant yet have an opinion, and people may find it interesting or not. I claim no authority I don't have. Are you perhaps referring to your interpretation of my writing style? If I express an opinion, that's my opinion. You don't have to agree, and I don't even claim that I'm right. I do have opinions, and I do have some knowledge of math that bears on philosophical issues from time to time.

    So when you say the "authority you adopt," are you referring to my style of expressing my opinions? Or are you thinking that I have claimed authority I don't have? If the latter, please point these instances out so that I can correct them. But if only the former, you should take into account that that's just my style.
  • Mathematical Conundrum or Not? Number Four
    At the very least, the Incompleteness Theorems leave the door open about which one you pick,MindForged

    Ah ... a while back you objected that I misquoted you saying that incompleteness was on point here. But in fact I believe I was originally correct. You think this is about incompleteness. It's not. In incompleteness we fix a given system of logic (first-order predicate logic in fact) and draw conclusions about sets of axioms. In paraconsistent logic we alter the logical rules to obtain different theorems. That is not the same thing at all.
  • Mathematical Conundrum or Not? Number Four
    Ok herewith my response to the deferred post.

    Why is it that in the case of (a) you regard this as a basic mathematical truth; yet in the case of (b) you regard this as a philosophical conundrum perhaps susceptible to attack via paraconsistent logic?
    — fishfry

    Because in the case of A, we have every reason to believe we are in a consistent domain (that of classical mathematics), where proof by contradiction is necessary (on pain of triviality), and we know we can give examples of larger primes . In B, we get a paradox unless we rewrite the rules of naive set theory to get something like ZFC.
    MindForged

    But no, you're just restating your bias, not explaining it. "Q: Why is primes a simple proof by contradiction, and sets a paradox? A: Because primes is a simple proof by contradiction, and sets is a paradox." You have not explained your position, you've only rephrased it. We "rewrite the rules of set theory?" Well we "rewrite the rules of primes" to outlaw a largest prime, once we see there isn't one.

    This is recency bias, not a reasonable explanation IMO.



    With A, we have a counter example that lets us dismiss the initial supposition, with B we get a contradiction from what seem like reasonable assumptions on their face.MindForged

    Unrestricted comprehension "seems reasonable" till we prove it's not. You're privileging an incorrect intuition and saying, "Who are you going to believe, an absolute logical proof, or my vague intuitions?"

    Before Euclid there may well have been a strong intuition that there is a largest prime. There ARE in fact good heuristic reasons for believing so, which I've mentioned a couple of times.


    The assumption that there's a largest prime doesn't seem to rest on comparably reasonable principlesMindForged

    But it does. The farther out we go in the integers, the more rare primes become. And there are arbitrarily large runs of composites. Before one receives any mathematical training, it's perfectly reasonable that there are only finitely many primes; and in fact this question does come up among the mathematically naive.

    such as a set being any collection defined by whatever condition you have in mind.MindForged

    Nice intuition, turns out to be false. No reason to privilege this intuitive error. You assume it and you derive a contradiction, so it's false.


    I assume (although you have not confirmed this) that you don't regard the infinitude of primes as being subject to modification or revision based on paraconsistent logic. Why is (b) different?
    — fishfry

    I don't think the infinitude of primes will be much affected by a transition in the logic. Paraconsistent logic dispenses with proof by contradiction and tends to instead rely on proof by non-triviality (these are identical in other logics but not with PLs).
    MindForged

    You can make this argument when paraconsistent logic gains mindshare. I already showed that by dispensing with the principle of explosion and making a few minor tweaks to number theory, we can let 7 be the largest prime and things work out fine. Just as you can crowbar naive comprehension into submission if you're willing to tweak the rules of logic.

    Assume the contrary, derive a contradiction, learn a truth.
    — fishfry

    I suppose the simplest way is to point out there are other concerns that bear on something besides consistency. I can't remember if it was in this thread that I mentioned this, but for example it's just a fact that the early calculus was inconsistent. One had to treat infinitesimals as a non-zero value at one step of proofs and then treat them as having a value of zero at another step of the same proof. This was acknowledged by Newton, Leibniz, criticized by Berkeley, etc., and it remained that way for more than 150 years. Now as far as I can tell, if you really tried to insist on this way of proceeding, you would have been rationally required by your standards to have rejected calculus (and therefore everything learned and built because of it) during that century and a half of it being inconsistent. But that's obviously ridiculous, there are other theoretical virtues besides consistency which made calculus tenable to accept despite the contradictions it required one to adopt.
    MindForged

    Of everything you've written, this is the one point that made me stop and think. It's a good point. I have a response.

    Newton was doing physics, not math. He had a method that worked to give him correct answers, but as Berkeley pointed out, Newton did not have a rigorous mathematical justification for his method of fluxions. We have a modern parallel in renormalization, for which Feynman, Tomonaga and Schwinger got the Nobel prize. At the time, they had no mathematical justification. I believe the mathematical rigorization of renormalization is a relatively recent development.

    The moral of the story is simply that physics leads mathematics by decades or even centuries. Physicists leap in where mathematicians fear to tread.

    It would not be reasonable for a physicist to reject a method that works in practice simply because it lacks mathematical rigor. "Lacking mathematical rigor" describes a lot of physics even today. Physicists think in infinitesimals, yet the theory of limits rejects infinitesimals.

    And for what it's worth, nobody rejected calculus; but they worked very hard for 200 years after Newton to get it straightened out. Even so, the theory of limits is a bit of a kludge. It depends crucially on the completeness of the real numbers, something for which there is no known analog in the physical world. It's fair to say that the underlying philosophical problem is still open.

    This is a mystery, not a paradox. Those are different things.

    That's what I'm arguing, sort of. Sure, Russell's paradox is a paradox. That was never the dispute.MindForged

    Arggg! That's EXACTLY what I'm disputing. And even though it's still called a paradox, nobody treats it that way. We treat it as a rigorous and convincing demonstration that naive comprehension must be rejected.


    The issue was always that the principles that gave rise to the paradox in naive set theory seem pretty damn reasonable.MindForged

    Falling back on naive intuition again. The Banach-Tarski paradox seems unreasonable, but it's mathematically true and is nothing more than a clever repackaging of the fact that the group of rigid motions of three space contains a copy of the free group on two letters. The proof sketch given in Wikipedia is actually quite simple. Nobody doubts its truth. We just note that "math isn't physics" and move on. By the way this is yet another perfectly correct theorem that's NAMED a paradox that actually ISN'T a paradox. It's simply an intuition-defying demonstration. Math is full of them.

    The entire history of math is the triumph of rigorous demonstration over naive intuition. It's only when it comes to set formation that some (you and @Jeremiah and maybe a few others) dig in your heels and say, "No, my naive intuition is more true than mathematical proof." The mathematical community does not share that view. In the course of studying math, many naive intuitions are shattered and replaced by proof. Naive set theory is just one of them.


    So the way out of it was to come up with ad hoc restrictions on what constituted a set.MindForged

    Not ad hoc at all, but rather the product of over thirty years of work, say from 1900 to the 1930s, give or take, during which the modern axioms of ZF were developed. The process was anything BUT ad hoc, and again I would refer you to Maddy, Believing the Axioms.


    There were extra-mathematical considerations which led to that response, not simply a proof by contradiction because that argument itself relies on already dismissing the possibility of paradoxes, which is the very thing under dispute if you accept Russell's Paradox.MindForged

    Where would I start here? You've already said you do not reject the law of the excluded middle. So there are not extra-mathematical considerations. You assume a proposition and show it leads to a contradiction, hence the proposition is false, no matter how intuitively appealing it seemed five minutes ago. Poor Frege. He got the point right away. You agree that Frege himself got the point right away. Yes?

    What do you mean by dismissing the possibility of paradoxes? Is Euclid's proof of the infinitude of primes a paradox? No, it's simply a demonstration that a common belief (that the primes are finite in number) is false. You say nobody believes this, but I spend a lot of time on Quora and Reddit and this question DOES come up often among beginners.


    There has to be a reason (besides arguing against the conclusion) for why you reject the principles that give rise to the paradox, otherwise it seems like the objection is circular.MindForged

    I could not understand that remark. What principles? The pattern is clear. If an assumption leads to a contradiction, we must reject the assumption, no matter how intuitively appealing.


    One can get around it the way ZFC does,MindForged

    One "gets around" the finitude of primes by accepting their infinitude. You are simply using different words to describe two identical phenomena. Two proofs by contradiction.

    but the question is if that is more rational or if it results in a more theoretically virtuous theory.MindForged

    It is never a question of virtue, but only of truth and proof. [Two different things in general, but in this instance, the same]. There are infinitely many primes and naive comprehension fails.
  • Mathematical Conundrum or Not? Number Four
    My only quibble is that it's "wrecking logic" only insofar as one already has an idea of what the correct logic is beforehand. The Thomists believed those who started using Classic Logic post-Frege were "wrecking logic" by abandoning what Aristotle left for us ..!MindForged

    I plead tragic ignorance of Aristotelian logic. Perhaps I over-identify the word logic with the standard predicate logic used in mathematics. The paraconsistentists (that word is used in the SEP article I believe) are wrecking what I think of as logic, but clearly my perspective is too narrow.

    Granted, paraconsistent mathematics hasn't reached that level (yet, perhaps) so my comparison probably lacks the persuasive force I'd like it to have.MindForged

    Appreciate that! Of course that doesn't mean that 20 or 30 years from now we won't be teaching paraconsistent logic to the undergrads. But it doesn't have much debating force today. You can't sensibly say, "Ok, our assumption of X has led us to a contradiction, so X might be true if we abolish the principle of explosion and tweak a few other things in logic." We don't say that. We say, "We have just shown that X is false."

    Of course specialists in logic-tweaking may bend the rules to allow that X is true. But it's hard to argue that this is how we should think. If X leads to a contradiction, X gets rejected.

    ... the supposition there's a largest prime, lacks any persuasive force for it. It doesn't follow from seemingly reasonable principles.MindForged

    Oh but it does. The primes get exceedingly rare on average the farther out you go. And there are arbitrarily long runs of composite numbers. You name a number n, and I'll show you a run of n consecutive composite numbers. I find it perfectly reasonable that back in Euclid's time, nobody knew whether there was a largest prime, and many learned and brilliant thinkers might have believed that there is a largest one.
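
    For the curious, here's the standard construction behind that claim, sketched in Python (the helper name composite_run is mine, purely for illustration): the n numbers (n+1)!+2 through (n+1)!+(n+1) are all composite, since k divides (n+1)!+k for each k from 2 to n+1.

    ```python
    from math import factorial

    def composite_run(n):
        """Return n consecutive composite numbers: (n+1)!+2, ..., (n+1)!+(n+1)."""
        base = factorial(n + 1)
        return [base + k for k in range(2, n + 2)]

    run = composite_run(5)
    print(run)  # [722, 723, 724, 725, 726]
    # Each entry has a divisor between 2 and its square root, so none is prime:
    print(all(any(m % d == 0 for d in range(2, int(m**0.5) + 1)) for m in run))  # True
    ```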

    And I have seen for myself that many students still ask this question. The infinitude of primes is NOT obvious at all. Of course once one has seen the proof and has fully internalized the infinitude of primes, one can no longer conceive of anyone else's doubt on the matter. But before one proves otherwise, it's perfectly sensible that there might be a largest prime.

    Ok next post is the response I've been putting off so let me just post that so at least I can feel like I've caught up.
  • Mathematical Conundrum or Not? Number Four

    Ahhhh, very interesting article. I learned something.

    I do feel a tiny bit sandbagged in the sense that you've had this somewhat obscure topic in mind as you've been debating. Had you presented this article and its point of view up front, it would have made your posts much more clear to me. Minor issue, now I'm educated and I see what you're talking about.

    To summarize the article as I understood it:

    * We really really really want to save naive set theory, so we have to rehabilitate unrestricted set formation via predicates, aka unrestricted comprehension.

    * The reason we care is that we [not me, actually, the people doing this work] would like to rehabilitate logicism, the idea that math is derivable from logic.

    * Unrestricted comprehension leads to a contradiction, and in standard logic a contradiction implies any given proposition. That's the principle of explosion. So we need to abandon explosion.

    * For various technical reasons we need to also abandon or modify some other logical principles.

    * Once we've done this, we can in fact allow unrestricted comprehension and save naive set theory and perhaps even logicism. Although in my opinion you're wrecking logic to save logicism, which might arguably be self-defeating. Nevertheless, this work can be done.

    * Now having saved unrestricted comprehension and perhaps logicism [at the expense of wrecking logic IMO] we can also patch up standard math: number theory, analysis, topology, and so forth. Surprisingly, quite a bit of math can be preserved even at the expense of allowing the contradiction of unrestricted comprehension.

    * This project is relatively new, and work continues as we speak.

    Have I got this about right? A couple of comments.

    First, this does remind me a bit of the constructivist project to rebuild math with a countable set of real numbers, each of which can be explicitly constructed. A lot of classical theorems fail in this scenario, so the constructivists patch and hammer and sing and dance and try to fix everything up.

    Yes it's true that it's all logically correct, but it seems like so much trouble just to avoid the truths of 20th century math: that unrestricted comprehension fails and that there really are important mathematical objects that can be proven to exist but that can not be explicitly constructed.

    So yes, the paraconsistent project is interesting and I'm sure the professors are getting their grants and doing their work and getting tenure and serving on academic committees and having fine old careers.

    But if I met one of these distinguished characters, I would ask them the same question I've asked you and @Jeremiah: Why don't you hack logic to allow the existence of a largest prime? Why does one easily proved mathematical fact annoy you so much yet you accept the proof of the infinitude of primes? [Sorry didn't mean to imply you personally are annoyed, you already said you're not. I mean the generic "you," the people trying to rehabilitate naive set theory].

    You know we could create a system of math with only finitely many primes. For example let 7 be the largest prime. We want the fundamental theorem of arithmetic (unique factorization into prime powers) to be true. So 1, 2, 3, 4 = 2 x 2, 5, 6 = 2 x 3, 7, 8, 9, and 10 are allowable numbers. But 11 falsifies the FTA, so it's abolished from the number system. 12 ok, 13 is abolished, 14's ok, 15 and 16 are ok, 17 is abolished. And so forth.

    Now we have a system of arithmetic that obeys the fundamental theorem of arithmetic and in which there is a largest prime. There's a little problem, which is that the integers are no longer closed under addition, since for example 8 is a number and 9 is a number but 8 + 9 = 17 is no longer a number. Well I guess we'll just drop the rule that says the integers are closed under addition. You're already perfectly willing to abolish the truth table for material implication which says that False implies True, and rejecting the additive closure of the integers doesn't seem much worse.

    But notice that we can still preserve the fact that the integers are closed under multiplication! Any product of powers of 2, 3, 5, and 7 is also such a product. See, we are making progress! With a little effort we can probably make this system work very nicely with a few such modifications.
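
    Just to make the toy system concrete, here's a minimal sketch in Python (the function name allowable is my own, purely illustrative): a number survives the purge exactly when its prime factorization uses only 2, 3, 5, and 7.

    ```python
    def allowable(n):
        """Keep n only if its prime factorization uses no prime larger than 7."""
        for p in (2, 3, 5, 7):
            while n % p == 0:
                n //= p
        return n == 1

    print([k for k in range(1, 21) if allowable(k)])     # 11, 13, 17, 19 are abolished
    print(allowable(8), allowable(9), allowable(8 + 9))  # True True False: addition fails
    print(allowable(8 * 9))                              # True: multiplication still works
    ```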

    You might object that abolishing 11 will cause practical problems in the world. I agree with that point, and I only used 7 as a simple example. In practice we can just take the largest number anyone could possibly care about, say maybe 10^80, the number of hydrogen atoms in the observable universe, or maybe Graham's number, or Skewes's number. Any old finite number that's so big that nobody could ever care much about it in real life. Then take the next prime after that, define that as the largest prime, and I claim this is a perfectly serviceable system of arithmetic.

    So why does everyone care so much about naive set theory but nobody cares about hacking logic and math to allow a largest prime?

    I would ask these guys this question if I ever met them.

    Now I will allow that I may be one of those old dinosaurs that has to die so that younger people can simply grow up accepting inconsistent math. And I certainly agree that non-Euclidean geometry, relativity and quantum theory, Heisenberg's uncertainty and Gödel's incompleteness, postmodern philosophy and the ills of late-stage capitalism have brought the project of western rationality to a moment of crisis. There's no point defending rationality when the world is so clearly irrational. I take all these points.

    But still. Why unrestricted comprehension and not a largest prime? Why the emotional attachment to naive set theory? Maybe my professors were too effective at beating standard mathematics into my brain. But I really don't get it. Naive set theory is intuitively appealing but it fails. Accept it and move on.

    Thanks for the link though, I certainly did find it interesting.
  • Mathematical Conundrum or Not? Number Four
    it has been demonstrated that naive set theory + a paraconsistent logic lets you prove the Continuum Hypothesis is false.MindForged

    Ah, you must be working from knowledge of paraconsistent logic that I lack. Reference for the above fascinating factoid?
  • Mathematical Conundrum or Not? Number Four
    No, Aristotle created Syllogistic. Classical logic was invented in the 1870s by Frege. These are not the same system,MindForged

    Sorry, I overstepped my knowledge. I don't know anything about Aristotle. Poor Frege, such a brilliant and original thinker, forever remembered for his big mistake.

    I better leave this be for tonight. Now I'm two posts behind you.
  • Mathematical Conundrum or Not? Number Four
    The axiom schema of specification blocks Russell. Would I be right in thinking that one reason to be cool with that approach (the truth learned) is that we don't need unrestricted quantification?Srap Tasmaner

    Well that's the conventional wisdom, pretty much universally accepted.

    But I wouldn't say that we don't need unrestricted comprehension (I don't know why they use the word comprehension, I'd just say "set formation by predicates"). We simply discovered that set formation by arbitrary predicates leads to a contradiction. So we are FORCED to abandon it, reluctantly.

    I do agree that this is psychologically or intuitively unpleasant. We want to think of sets as Cantor originally did:

    A set is a gathering together into a whole of definite, distinct objects of our perception [Anschauung] or of our thought—which are called elements of the set.

    That's how we teach school children about sets. It's how we think of sets. The collection of things that satisfy a predicate. But Cantor's definition fails. It leads to a contradiction. So we learn our lesson, we move on, we abandon naive set theory.

    I do empathize with those who are troubled by Russell's refutation of naive set theory. But I don't agree with anyone who gets stuck on their intuition so firmly that they can't move past it. It was John von Neumann who said that we don't understand math, we just get used to it. That's a great insight.
  • Mathematical Conundrum or Not? Number Four
    What do you think?MindForged

    This is exactly how I got in trouble last time. Conversating back and forth while deferring responding to the important earlier post. But a few thoughts ...

    I'm not talking about Russell's Paradox in that bit, I'm talking about the general outlook regarding mathematics post-Incompleteness Theorems. ZFC's development was intentionally practical: we need to get on with the business of doing sensible maths but classical logic cannot function sensibly with an inconsistent set theory.MindForged

    I don't see why. Classical logic goes back to Aristotle. And even math doesn't need set theory. There wasn't any set theory till Cantor and there was plenty of great math getting done before that. Archimedes, Eudoxus, the medieval guys Cardano and so forth, Newton, Gauss, Euler, Cauchy, and all the rest. None of them ever heard of set theory and did fine without it.

    If set theory were discovered to be inconsistent tomorrow morning, the foundationalists would get busy patching it and nobody else would care. As an example, how would group theory change? The group axioms and their logical consequences would still be the same.

    As far as incompleteness, that's already been verified and sliced and diced via computer science, information theory, and almost another century of study. Gödel published in 1931, that's almost a century already. Incompleteness is literally a classical result now. Everyone's moved past it. So we can't use the traditional axiomatic method to determine what's true. If anything, that's perfectly sensible. We have to find other paths to truth. That's exciting, not worrisome I think.

    Once it became clear that there was no strict necessity in picking one formalism over another (i.e. no privileged set of indubitable axioms), it seems like mathematicians and logicians became a bit more cavalier about the whole thing. Rightly so, in my view, the interest shifted to the virtues of particular formal systems applied in specific domains, particularly when such systems are fruitful.MindForged

    I don't think that's completely true. People don't study random sets of axioms. See Maddy's great articles Believing the Axioms parts 1 and 2, in which she works through the axioms of ZFC and discusses the philosophical reasons why they have gained mindshare. I really don't believe that incompleteness is any kind of nihilistic disaster. Interesting math is being done every day.

    Like from the Incompleteness Theorems, we know you can (for systems expressive enough to articulate arithmetic truths) either have an inconsistent but complete mathematics (Paraconsistent mathematics) or you can have a consistent but incomplete maths (Classical math, Intuitionistic math, etc.). Classical logic is so preferred because of its wide usability, but there are known issues and domains where it's questionable (quantum mechanics, representing human reasoning, databases, some evidence paraconsistent logic operations are faster to compute, etc).MindForged

    Right. All of this is thrilling intellectual stuff. It's not the end of the road for reason. On the other hand, perhaps it's related to postmodernism and the reaction against reason. Reason has given us better ways to wage war and promote economic and social inequality. There are good reasons (!?) to distrust reason.

    So I wonder if this modern openness to more or less any non-trivial logic/math indicates some kind of pluralism. What do you think?MindForged

    Pluralism. Yes. Crisis = opportunity. Something new is coming. Hilbert's program failed, but that doesn't lead to people being cavalier as you put it. Alternatives are being explored. I think 100 years from now all this will be more clear. Reason and logic are going through some kind of revolution that we can't see the outlines of yet. Computers and the computational way of looking at things. We're in some kind of transitional period.

    Hamkins has something called the set-theoretic multiverse. It's (to the extent I understand it, which isn't much) the consideration of all possible set theories considered as a whole. The worlds where CH is true, where CH is false, and so forth. There's no one true set theory, they're all part of some grand structure. These are my words, not any claimed description of what Hamkins is thinking.

    Here's his "popular" exposition, which isn't what I'd call elementary or comprehensible. But for what it's worth, contemporary set theorists are already way past Gödel. By the way (rambling on now), I think the really big breakthrough wasn't Gödel. It was Cohen, who showed how to cook up nonstandard models. That's when things really started getting crazy in the set theory business.

    http://jdh.hamkins.org/the-set-theoretic-multiverse/
  • Mathematical Conundrum or Not? Number Four
    I think I've already articulated my position without recourse to intuitionism.MindForged

    Ok. Just wanted to make sure you accept law of excluded middle and proof by contradiction.

    Once you think it over (need not agree obviously) let me know what you think.MindForged

    Will do. I got in trouble once around here when I deferred responding to someone's long and complex posts while responding quickly to other people's short posts. The poster whose long posts I was trying to give serious and considered thought to, got more and more impatient and finally abusive. Just wanted to be clear that I'm deferring my thoughts till I have a block of time tomorrow.

    I'll admit, I'm something of a logical pluralist so it's not like I'm advocating a wholesale abandonment of standard maths. Honestly, I actually wonder what mathematicians who think about this sort of thing believe (rare-ish to see it done in depth, most don't bother with the foundations of maths these days). Really, it seems like Gödel's Incompleteness Theorems in particular and the death of Logicism (using classical logic) seems to have killed foundationalism in the eyes of mathematicians and logicians, so I wonder if they're pluralists of a sort?MindForged

    I think Category theory and homotopy type theory are getting most of the foundational work these days. Homotopy type theory as I understand it actually relates to the resurgence of intuitionism. And the set theorists study large cardinals and are still hard at work on CH. You can Google names like Woodin and Hamkins to see what the set theorists are up to. But nobody worries about Russell's paradox because there's nothing to worry about. It just shows that we can't use unrestricted set comprehension. And I still don't know why you think people should be concerned about a run of the mill proof by contradiction. Sure it ruined Frege's day, but it revealed a mathematical truth about the nature of sets. But that's what we're talking about so I'll try to respond to your specific points soon.
  • Mathematical Conundrum or Not? Number Four
    Proof by contradiction would lead us right back to Russell's Paradox.

    It seems you have another contradiction on your hands.
    Jeremiah

    That didn't even make sense. I do remember reading it now. I don't follow your point at all.

    We assume there's a largest prime and derive a contradiction, so we conclude there's no largest prime.

    We assume we can form sets out of arbitrary predicates and that leads to a contradiction, so we conclude we can't form sets out of arbitrary predicates.

    This seems perfectly sensible to me. And (argument by authority and popularity) every mathematician in the world agrees. That doesn't mean they're right, but you have to make a much stronger argument, which you haven't done.

    By the way, are you and/or @MindForged making some kind of constructivist or intuitionist argument that rejects the law of the excluded middle and/or proof by contradiction? That would at least make some sense, but intuitionists aren't trying to resurrect naive set theory, as far as I know. The modern neo-intuitionists have given up on set theory entirely and are working with some flavor of type theory. Type theory was Russell's own solution to his discovery.
  • Mathematical Conundrum or Not? Number Four
    I actually already addressed this argument of yours.Jeremiah

    Link please, I didn't see it. But it wasn't an argument, since I'm merely stating what every single mathematician agrees with. I'm asking you a question. Why do YOU find the two cases so radically different? Two proofs by contradiction but only one is a paradox in your viewpoint.

    @MindForged has the same opinion and he gave a longer post that I'm working through before I respond. If you did respond to this question, just point me at the response please.
  • Mathematical Conundrum or Not? Number Four
    I just don't have a lot of free timeJeremiah

    Uh ... LOL. That made me chuckle.

    It's because the form of the two proofs is identical:

    * Assume there's a largest prime.

    * Derive a contradiction.

    * Conclude there's no largest prime.

    versus

    * Assume you can form a set from an arbitrary predicate.

    * Derive a contradiction.

    * Conclude that you have a deep paradox that must be addressed or resolved.

    I don't see the difference. In the 20th century the smartest mathematicians in the world regarded these two patterns as the same. In the case of Russell's smackdown of Frege, everyone realized that you CAN'T always make a set from a predicate, hence the need for better rules of set formation.

    So myself, I don't see the difference between the two proofs. If your assumption leads to a contradiction, you ditch the assumption. That's exactly what all the mathematicians did.

    It wouldn't make any sense to say, "Oh Euclid's proof by contradiction shows there's a terrible paradox." Rather, Euclid's proof shows that there's no largest prime. And Russell's proof shows that we can't form sets from arbitrary predicates. It's as simple as that.

    And -- admittedly an argument from authority -- every mathematician agrees with me.

    Now of course that doesn't make me right, that's just an argument from popularity or authority. But it does place the burden of argument on you to say why everyone's wrong and you and @MindForged are right.
  • Mathematical Conundrum or Not? Number Four
    You can't resolve a paradox by simply stating that it is not a paradox. A paradox by any other name is still a conundrum.Jeremiah

    Do you regard the proof by contradiction that there's no largest prime a conundrum or paradox? Why or why not?

    In other words: The assumption that there's a largest prime leads to a contradiction, so we conclude that there's no largest prime. The assumption that you can define a set with an arbitrary predicate leads to a contradiction, therefore we have a powerful paradox that must be addressed by philosophers. I simply do not understand the difference except as a manifestation of psychological recency bias.

    @MindForged You raised some good points that I'm taking some time to think about.
  • Mathematical Conundrum or Not? Number Four
    If people have been calling it a paradox for over 100 years, guess what, it is a paradox.Jeremiah

    I addressed that point in my earlier response to @MindForged. Naming is generally a matter of historical accident. Is the Axiom of Choice an axiom, Zorn's lemma a lemma, and the well-ordering theorem a theorem? But they are logically equivalent, and often introduced to students in relation to one another. Do you regard the infinitude of primes as a paradox? It's often (though to be fair, not necessarily) proved via contradiction, just as Russell's smackdown of Frege is. Historical names mean nothing. One man's freedom fighter is another man's terrorist. What you call things is not the same as what those things are.

    Lewis Carroll and many others have made the distinction between the name of a thing and the nature of that thing. Shakespeare noted that a rose by any other name would smell as sweet. Abe Lincoln used to ask, If you call a tail a leg, how many legs does a dog have? Answer: Four. Calling a tail a leg does not make it a leg.

    Even the Beatles made this philosophical point: "Her name was Magill, and she called herself Lil, But everyone knew her as Nancy."
  • Mathematical Conundrum or Not? Number Four
    I didn't mistake anything.MindForged

    You raised a number of interesting points. Before I respond in detail, it would help me to understand your point of view if you could tell me in clear and unambiguous terms what you find different about these two situations.

    a) There is no largest prime. Proof: We assume there is a largest prime and derive a contradiction. Hence there is no largest prime.

    b) We can't define a set using an arbitrary predicate. Proof: We assume we can define a set using an arbitrary predicate and derive a contradiction. Hence we can not define a set using an arbitrary predicate.

    Why is it that in the case of (a) you regard this as a basic mathematical truth; yet in the case of (b) you regard this as a philosophical conundrum perhaps susceptible to attack via paraconsistent logic?

    I assume (although you have not confirmed this) that you don't regard the infinitude of primes as being subject to modification or revision based on paraconsistent logic. Why is (b) different?

    Could there perhaps be some recency bias? Frege and Russell worked just a little over a century ago; and Euclid's proof is over 2000 years old.

    But human nature doesn't change. It's reasonable that there was a contemporary of Euclid, an ur-Frege if you will, who was brilliant and accomplished and who maintained that the primes were finite in number. After all there is a perfectly sensible and compelling heuristic in support of that proposition, namely the fact that the primes get more and more rare the farther out you go; and that there are in fact arbitrarily large runs of consecutive composite numbers.

    Perhaps ur-Frege published his masterwork; and right on the eve of publication, Euclid showed that there is no largest prime. Perhaps this caused a big stir back in the day. The historical record is lost; but it's certainly plausible. The fact that Euclid felt the need to write down a proof shows that the question was in the air at the time.

    So just tell me please, what is the difference in your mind between (a) and (b)?

    By the way I did not intend to appear patronizing. I carefully walked through these two proofs by contradiction in order to elucidate their structural similarity. Assume the contrary, derive a contradiction, learn a truth.

    You see a great difference between these two famous proofs, and I don't see a difference at all, except for the antiquity of one and the recency of the other. If you can clearly explain to me why you see a profound difference, I'd understand your viewpoint better.
  • Mathematical Conundrum or Not? Number Four
    As I say, that's a somewhat naive view. The specification scheme allows one to avoid the paradox, but it doesn't necessarily solve the paradox. The whole point of regimenting set theory this way was to make math consistent (or at least not provably inconsistent). But it comes with well known issues, like a number of unsolved questions that have known answers in other systems (e.g. Continuum Hypothesis).MindForged

    MindForged, you are completely misunderstanding the difference between a veridical paradox and a plain old proof by contradiction. Moreover, Russell's paradox has absolutely nothing to do with Gödelian incompleteness. Simply nothing.

    Let's start by reviewing how proof by contradiction works. I'll use Euclid's classic proof of the infinitude of primes.

    Claim: There is no largest prime.

    Proof:

    Assume the negation of our claim: Assume to the contrary that there is a largest prime.

    Then we may number the primes p1, p2, ..., pn. [Moderators: It's so easy to add MathJax to a website. Pretty please? It would greatly enhance mathematical discussions here].

    Form the number P = (p1 x p2 x ... x pn) + 1.

    Clearly P is not divisible by p1; it leaves remainder 1. Likewise P is not divisible by any of p2, p3, ..., or pn.

    Therefore P must either be prime; or else it must be divisible by some prime other than the ones we listed.

    Therefore since the assumption that pn is the largest prime leads to a contradiction, we must conclude that in fact there is no largest prime.
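
    Purely as a numerical illustration of the construction (a throwaway Python sketch; smallest_prime_factor is a hypothetical helper, not part of the proof):

    ```python
    def smallest_prime_factor(n):
        """Smallest prime factor of n >= 2, found by trial division."""
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return n  # n itself is prime

    supposed_all_primes = [2, 3, 5, 7, 11, 13]  # pretend this list were complete

    P = 1
    for p in supposed_all_primes:
        P *= p
    P += 1  # Euclid's number: 2*3*5*7*11*13 + 1 = 30031

    q = smallest_prime_factor(P)
    print(P, q, q in supposed_all_primes)  # 30031 59 False
    # P's smallest prime factor (59) is a prime missing from the list,
    # contradicting the supposition that the list contained every prime.
    ```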

    Ok, that's the basic pattern. Now observe that Russell's "paradox" follows the exact same form.

    Note that calling something a paradox doesn't make it a paradox. Naming conventions are generally historical accidents. For example the Axiom of Choice, Zorn's lemma, and the well-ordering theorem are respectively an axiom, a lemma, and a theorem. However they are all logically equivalent. The names are just accidental conventions. If we called it "Russell's theorem," or "Russell's smackdown of Frege," much less confusion would ensue. I hope this point is clear. From now on I'll refer to Russell's argument as Russell's smackdown.

    By the way Frege is credited as being the inventor of the universal and existential quantification operators. He was really quite a bright fellow, notwithstanding his public humiliation at the hands of Russell in this particular matter.


    Claim: We cannot form sets out of arbitrary predicates.

    Proof:

    Assume the negation of our claim: That is, assume that we can always form a set out of a predicate.

    Consider the predicate P(x) = "x ∉ x".

    Now we let R be the set R = {x : P(x)}. We see (following Russell) that we must have both R ∈ R and R ∉ R. That's a contradiction.

    Therefore we conclude that our assumption is false; and that we may not arbitrarily form sets out of predicates.
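
    In symbols, the whole argument compresses to one line:

    $$R = \{x : x \notin x\} \;\Longrightarrow\; \big(R \in R \iff R \notin R\big),$$

    and a proposition equivalent to its own negation is a flat contradiction, so the assumed set R cannot exist.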

    Now we see that Russell's smackdown is nothing more than a traditional proof by contradiction; a basic pattern of logical reasoning that goes back at least two millennia.

    How you get from this to invoking Gödel's incompleteness theorem I simply don't see.

    Nor do I see how Russell's smackdown is a genuine paradox. After all, one could argue about primes as follows:

    Primes get increasingly rare as numbers get big. The farther out you go, the more distance there is on average between primes. If you go out far enough, it's reasonable that you simply run out of primes; and that from some point onward, every number is composite.

    If one has this intuition, one would regard Euclid's beautiful proof as a veridical paradox. But very few people would call the infinitude of primes a paradox. Rather, it's a mathematical fact that's often proved via the method of contradiction. If one said, "But I REALLY have this intuition that there must be a largest prime, so Euclid's proof is a paradox that must be resolved," they might indeed have strong feelings about the matter, but their point would not get any traction among mathematicians or even logicians.

    Likewise, Russell's smackdown shows that our naive intuition about sets -- that they arise from predicates -- is wrong. Perhaps one has this intuition; but with a little mathematical training, one quickly realizes that to form a set we must apply a predicate to an existing set. That's the axiom (schema) of specification.

    Let's see how this works in practice. If N is the natural numbers, what is the set R = {x ∈ N : x ∉ x}? Well, is 0 ∈ 0? No, so 0 is in R. Is 1 ∈ 1? No, so 1 is in R. Continuing in this manner we see that in fact R = N. The axiom of specification has completely resolved the matter. We CAN'T necessarily form sets out of predicates, but we CAN form a set by applying a predicate to an existing set. Done and done.
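
    Here's a toy version of that computation, as a minimal Python sketch (the encoding of naturals as nested frozensets, i.e. von Neumann ordinals, and the helper name von_neumann are my own scaffolding): specification only ever applies the predicate to an existing set, never to an unrestricted universe.

    ```python
    def von_neumann(n):
        """Encode the natural number n as a nested frozenset: n = {0, 1, ..., n-1}."""
        s = frozenset()
        for _ in range(n):
            s = s | {s}  # successor step: k + 1 = k ∪ {k}
        return s

    # Apply the predicate "x ∉ x" to an EXISTING set: the encodings of 0 through 9.
    existing = [von_neumann(k) for k in range(10)]
    R = {x for x in existing if x not in x}
    print(len(R) == len(existing))  # True: every natural satisfies x ∉ x, so R is all of it
    ```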

    So when one says, "My intuition is that the primes get so rare that eventually there's a largest one and then no more after that," or "My intuition is that you can always form a set from a predicate," these are indeed intuitions that an untrained person might have. But with a little mathematical training, one comes to understand and internalize that there are infinitely many primes, and that in order to form a set from a predicate, one must first start with a known set.

    This has NOTHING to do with Gödel, nor is the Continuum hypothesis even remotely, by any stretch of the imagination, the same type of phenomenon.
  • Mathematical Conundrum or Not? Number Four
    The barber is shaved by Occam's razor.
  • Mathematical Conundrum or Not? Number Four
    Good God Almighty. Russell's paradox was resolved in 1922 by the axiom schema of specification.
  • Mathematical Conundrum or Not? Number Three
    Well I gave it the old college try. At least @apokrisis didn't show up to hurl gratuitous insults. I'm outta here again.
  • Mathematical Conundrum or Not? Number Three
    Yes it did, you are trying to do calculus without a delta x. You are doing it wrong, that is not a paradox just an error.Jeremiah

    LOL.

    I ask again: Why is it that in one case, you invoke the standard mathematical formalism to explain or ignore the underlying philosophical issues; and in the other case, you reject the standard mathematical formalism and insist that there's a paradox that must somehow be explained?
  • Mathematical Conundrum or Not? Number Three
    Because your understanding of calculus is very poor and incorrect.Jeremiah

    Even if that were true, it wouldn't answer my question.
  • Mathematical Conundrum or Not? Number Three
    The FTC is the total change F(b) - F(a) equal to the sum of small changes F(x of i) - F(x of i -1) and that is equal to the sum of the areas of rectangles in a Riemann sum approximation for f(x).Jeremiah

    Yes, that is the mathematical formalism.

    So in this case you fall back on the mathematical formalism to ignore the philosophical paradox; but in the case of Gabriel's horn, you dismiss the mathematical formalism and focus on the philosophical paradox. Why is that?
  • Mathematical Conundrum or Not? Number Three
    0+0 is not a change in x.Jeremiah

    Which has what to do with anything I wrote?

    Let me tl;dr this for you. Why are you so focussed on a particular paradox of Riemann integration, when it's Riemann integration itself that is philosophically murky?
  • Mathematical Conundrum or Not? Number Three
    I plan on posting more of these paradoxes, these are exactly the type of content mathematically inclined philosophers should be turning their skills towards. Some will generate a good amount of discussion, while others may not. I can't help that; however, Gabriel's Horn is one of the major paradoxes, it should be in these posts.Jeremiah

    I have a question for you.

    Gabriel's horn is a paradox of Riemann integration, accessible to students of freshman calculus. As others have noted it's a paradox in the sense of being counterintuitive, not a paradox in the sense of being a logical contradiction.

    Now, why aren't you bothered by the following more basic counterintuitive paradox of Riemann integration? Let's say we integrate 1 over the unit interval. That is, we compute the integral ∫dx between the limits of integration 0 and 1. Any calculus student will tell you the answer is 1.

    But if you think about it, how can this be? We are literally adding up infinitely many zeros to get the number 1. And if we were to change the limits of integration to go between 0 and 2, we would be adding up infinitely many zeros to get an answer of 2. And the number of zeros, or dimensionless points, in the interval between 0 and 1 has the exact same cardinality as the interval between 0 and 2. You can see this by noting that the map f(x) = 2x is a bijection between [0,1] and [0,2].

    How can Riemann integration make sense? How can we add up infinitely many dimensionless points to get 1; and then add up the same infinite number of dimensionless points to get 2? One answer is that it's mathematically true. But by your own argument, that's not very satisfying. We have a formalism that works out integrals. But what kind of sense does it really make to add up infinitely many dimensionless points and end up with a nonzero answer? And not only that, but by rearranging the points, we can get any answer we want.
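
    To be clear about what the formalism actually does (a minimal Python sketch of the usual definition, nothing more): the Riemann integral is a limit of finite sums of rectangle areas, so at no stage does it literally add up infinitely many zero-size points; whether that dissolves or merely relocates the philosophical puzzle is exactly the question.

    ```python
    def riemann_sum(f, a, b, n):
        """Left-endpoint Riemann sum of f over [a, b] with n equal subintervals."""
        dx = (b - a) / n
        return sum(f(a + i * dx) * dx for i in range(n))

    for n in (10, 1_000, 100_000):
        print(n, riemann_sum(lambda x: 1.0, 0, 1, n), riemann_sum(lambda x: 1.0, 0, 2, n))
    # Every approximation is a finite sum; the integral is the limit of these sums,
    # which comes out (up to floating-point noise) to 1 and 2 respectively.
    ```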

    Why don't you consider this an incomprehensible paradox? After all, once you believe that you can add up infinitely many zeros to get 1, and then add up infinitely many zeros to get 2; why should you be surprised that Riemann integration leads to other counterintuitive results?

    Gabriel's horn rests on Riemann integration. If you object to Gabriel's horn, why don't you object to the more fundamental mystery of Riemann integration in the first place?

    Put more simply: How does a collection of dimensionless points, each of size zero, add up to any volume we care to name? Isn't that a puzzler deeper than the mere rotation trick of Gabriel's horn?
  • Rules of nothing
    In the synagogue one day the Rabbi kneels and puts his forehead to the floor and says, "Before you oh Lord, I am nothing."

    The Cantor puts his forehead to the floor, and says, "Before you oh Lord, I am nothing."

    The janitor is passing by and sees them. He kneels and puts his forehead to the floor and says, "Before you oh Lord, I am nothing."

    The Rabbi nudges the Cantor and says, "Look who thinks he's nothing!"
  • Mathematical Logic Graduate Programs Requirements
    I don't know your background so I can't say. But here is a free pdf of Shoenfield's classic text on Mathematical Logic. It's one of the standard grad-level texts on the subject. Have a glance through it so that you can get an idea of what you're in for. This is not an easy book. You shouldn't be intimidated by it, but you shouldn't underestimate the subject either.

    https://www.karlin.mff.cuni.cz/~krajicek/shoenfield.pdf

    In general I'd say that the prerequisite for graduate study of mathematical logic is a generous amount of what they call mathematical maturity. Meaning that you're comfortable working with proofs. If you've seen undergrad set theory that will help a lot too.

    If you're already at a university, ask one of your profs and/or go over to the math department and ask around. Take as many upper division math courses as you can, especially real analysis and abstract algebra. Real analysis is not specifically prerequisite to logic but it will train your mathematical mind like nothing else. Abstract algebra is IMO prerequisite because it teaches you to handle mathematical abstractions that at first glance don't make sense or that seem unmotivated.

    Mathematical logic at the graduate level is nothing like the truth tables and so forth that you probably learned in the discrete math class they give to CS majors.

    Good luck!
  • Do numbers exist?
    Really all this says is that 'what is a number' and 'do numbers exist' are to some extent independent from the concerns of doing mathematics.fdrake

    That's right. I noted that there is no general definition of number in mathematics. A well-known and true observation. For whatever reason, this simple and harmless statement triggered several people. I still don't understand why.

    I do of course agree with your point that 2i is a quantity of two i's, like 2 apples is a quantity. So the question reduces to asking exactly what a quantity is. @tim wood brought up the idea of quantity a while back, so I asked him what a quantity is, and so far I have not gotten an answer.

    But ordinals I think are the best example of numbers that absolutely can not ever be interpreted as quantities, since the same cardinal can be rearranged to represent many different ordinals.

    Number is not the same as quantity. I think that's clear.

    The reals (excluding weird stuff about 0) under multiplication and addition in the usual sense satisfy modern intuitions about what it means to be a number. When those intuitions are formalised, it turns out that there are other structures which aren't commensurate with folk intuitions that nevertheless satisfy the axiomatisation of a field inspired by those folk intuitions.fdrake

    Right. And some structures that satisfy the field axioms are most definitely NOT numbers, such as the rational functions with coefficients in a field.

    I'm not entirely sure I understood the theme or message of your post. All I'm saying is that there's no general definition of number in math; and even for logicians and philosophers, it's very difficult to pin down what a number is. I've never seen a successful definition.
  • Do numbers exist?
    As to the quantity i: Question: is i ever an answer, in any form, to any question of how many?tim wood

    No. It's not. That's the point. i is a number but it's not a quantity. That's a counterexample to your idea that a number is something that is a quantity or that can be quantified. Simple as that.

    I pointed out that it's very difficult to define in general what a number is. You suggested that a number is something that can be quantified or that represents or is a quantity. I gave as a counterexample the number i, which is a number but is not and does not represent a quantity.

    You said a quantity is something that can be quantified. I don't find that helpful because it doesn't tell me what a quantity is. If you tell me a cat is a furry domesticated mammal with retractile claws, that's a lot more helpful than saying that a cat is anything that's cat-like.
  • Do numbers exist?
    He made a grammatical point, and in this he was correct.tim wood

    My remark was intended as lighthearted. What the meaning of "is" is was very big in American popular culture during that particular scandal. This is the only time I've heard that question raised since the Bubba and Monica affair. Cigars, stains on dresses.

    If Americans want to know how we ended up with a monstrously crude man as President like Trump, I'd say the bar was set low when the American people made Bill Clinton a two term president and never held him accountable. Trump and Bubba used to be golfing buddies. You think they talked about women's rights?

    I think my interest in American politics is off topic here so I'll let it go. But when you say it depends on what the meaning of "is" is, you can hardly be surprised that the first thing anyone would think of is Bill Clinton and the intern. It's the 20 year anniversary of that scandal right now. So it's in the air.

    This criticism might have some merit if that were what we were doing. But we weren't, so it doesn't.tim wood

    Ok. So when you say that a quantity is that which can be quantified, you are NOT saying that a cat is that which can be cat-like.

    I confess to not understanding why anyone would regard this as a sensible response. I'm sure that's due to my philosophical ignorance; but if I'm ignorant, this would be a point where you could educate me. When you say, "A quantity is that which can be quantified," what actual information are you imparting? To me it just looks like you changed the form of a word without adding meaning.

    I should think not; keep in mind I did not offer a definition of "quantity." You asked what quantity is, and I answered. I thought it was a pretty good answer - to the question asked!tim wood

    We definitely disagree and I am curious to understand your reasoning. You said that a quantity is that which can be quantified. I don't recognize that as the answer to any question I asked. I'm sure the communication problems are all on my side, but I'd like to bridge them if that's possible.

    Question: does i exist in some, or any, sense or way that is different, in any way, from the way that other numbers exist? Question: Where did you see an i?tim wood

    My university training is in mathematics, although my post-university career involved following math only at an amateur level online. But I absolutely regard i as a number. To me the number i is as concrete as the number 6. It just refers to something different than what 6 does.

    6, you see, does generally represent a quantity. Six ducks in a row, six eggs in half a dozen, six bullet items in your PowerPoint slide. The number 6 is instantiated in everyone's everyday experience all their life.

    Now the number i, as it turns out, is every bit as pervasive and just as much a normal part of our daily lives. However people don't recognize this, because the number i is taught very poorly in high schools around the world.

    Forget that crap about "the square root of -1," which always sounds like bullshit because they just got through telling you that there is no square root of -1.

    Think of i instead as a gadget that keeps track of how many counterclockwise quarter turns (90 degrees each) you make. Say you start facing east. You then turn to face north. Call that i. Then you turn again, to face west. You are now facing directly opposite the way you started. In other words, i^2 = -1, and this notation is simply an expression of something very simple: if we make two quarter turns to the left, we end up facing in the exact opposite direction from where we started.

    Now one more turn is -i, and one more turn after that is ... 1. We're facing east again, and we just discovered that i^4 = i^0 = 1.

    So i is a number, but it is not a quantity. What it is, is an instruction to make a quarter turn left. That's what numbers can sometimes be: representations of geometric transformations.
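    Just to make the turning picture concrete, here's a quick sketch in Python, which has a built-in complex type and spells i as 1j. Nothing deep, just the four quarter turns played out:

        # Each multiplication by 1j is one counterclockwise quarter turn.
        facing = 1 + 0j                   # start out facing east
        for quarter_turns in range(1, 5):
            facing *= 1j
            print(quarter_turns, facing)
        # 1 turn: i (north), 2 turns: -1 (west), 3 turns: -i (south), 4 turns: 1 (east again)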

    A general complex number is z = a + bi where a and b are real. An alternate and more insightful notation is polar representation. If z is a complex number then we can write z = re^(it) in complex exponential form, where t is the angle that the line segment from the origin to z makes with the positive x-axis, and r is the distance from the origin to z.

    In trig form this is the same as saying z = r(cos t + i sin t). Multiplying a vector by z rotates it through an angle of t and scales it by a factor of r.

    If you plug in t = pi/2 and r = 1 you get the special case z = i. In fact the case r = 1 is very important, because as t goes from 0 to 2pi you get all the points on the unit circle.
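    The same fact is easy to poke at numerically. Here's a small sketch using Python's standard cmath module, nothing more than plugging a few angles into e^(it):

        import cmath, math

        # e^(i*t) walks around the unit circle as t goes from 0 to 2*pi.
        for t in (0, math.pi / 2, math.pi, 3 * math.pi / 2):
            z = cmath.exp(1j * t)
            print(round(z.real, 10), round(z.imag, 10))
        # t = pi/2 gives (0, 1), which is the quarter turn i itself;
        # multiplying any z by e^(i*t) rotates z through the angle t.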

    So every time you turn left -- at a traffic intersection, on a street corner if you're walking, or just standing in your living room spinning around counterclockwise -- you are instantiating the complex number i. Every time you turn through an angle of t, you end up at a particular point on the unit circle.

    That's not all. The number i is an essential part of modern physics and engineering. Having a symbolism for something being 90 degrees out of phase is very handy. So i can be defined in formal math, and it comes up in physics. It's a number, and it is instantiated in the world.

    So YES, i is a number. But NO, i is not a quantity. The number 6 is a quantity. It's 6 of something. But i represents no quantity. i represents a quarter turn in the plane. And geometric rotations and scalings of the plane happen to have very nice algebraic properties.

    You don't even need a magic "square root of -1" to do this. There's a particular subset of 2x2 matrices whose entries are real numbers. They are an isomorphic copy of the complex numbers. So nobody has to believe in anything "imaginary." If you believe in the real numbers, then you'll agree to believe in 4-tuples of numbers arranged in a 2x2 array, along with the usual array operations of matrix addition and multiplication.
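    If you want to see that concretely, here's a minimal sketch (assuming numpy is available just to do the matrix arithmetic; as_matrix is my own name for the encoding). The matrix [[a, -b], [b, a]] plays the role of a + bi:

        import numpy as np

        def as_matrix(a, b):
            """The 2x2 real matrix that stands in for the complex number a + bi."""
            return np.array([[a, -b], [b, a]], dtype=float)

        ONE = as_matrix(1, 0)    # stands in for 1
        J = as_matrix(0, 1)      # stands in for i
        print(J @ J)                      # [[-1, 0], [0, -1]], i.e. -ONE
        print(np.allclose(J @ J, -ONE))   # True: the matrix version of i^2 = -1

    Matrix addition and multiplication of these arrays track complex addition and multiplication exactly, which is all "isomorphic copy" means here.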

    One more example. The area of a circle with radius 1 is pi, right? Now is that a quantity? A quantity of what?

    We determine the area in multivariable calculus by defining the two-dimensional Riemann sum. We fill up the circle with little squares and count the squares. Then we fill it in with smaller squares. At the end of that limiting process is the area of the circle, which comes out to pi.

    But there's no quantity anymore. At each step there was a finite quantity of little squares. But in the limit, there are NOT infinitely many infinitesimal squares. Calculus abandoned that approach. Instead we just work with the limits. So at the end of this process, pi is a number but it's not a quantity of anything.
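    If it helps, the squares-counting picture is easy to simulate. This is a rough sketch, not the formal Riemann-sum machinery; grid_area is just my name for the counting procedure. It counts the squares of side 1/n that fit entirely inside the unit circle:

        def grid_area(n):
            """Approximate the unit circle's area: count the 1/n-by-1/n squares
            lying entirely inside it (first quadrant only, then multiply by 4)."""
            side = 1.0 / n
            inside = 0
            for i in range(n):
                for j in range(n):
                    x, y = (i + 1) * side, (j + 1) * side   # the square's outer corner
                    if x * x + y * y <= 1.0:
                        inside += 1
            return 4 * inside * side * side

        for n in (10, 100, 1000):
            print(n, grid_area(n))   # creeps up toward pi as n grows

    The printed approximations are counts of squares; the limit pi itself is not a count of anything.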


    If i is not a number, then what is it?tim wood

    I'm a math guy. Of course i is a number. I mentioned this earlier in a reply to @Tom, if you read back a few posts you might find it. I believe in the mathematical reality of all mathematical structures. [Note that this is not to say I believe in their physical reality. Only that if I can construct something in math, then it's a mathematical object and has mathematical existence. I make no general claims about the world].

    If numbers do not represent quantities, then what do they represent?tim wood

    Well now THAT is the good question!! In math, nobody bothers to ask the question because it's a question of philosophy and not math.

    In philosophy, we're seeing that it's damned hard to pin down what a number is. And it's fun to try. Or at least it SHOULD be fun to try. When it becomes less than fun I become less inclined to play.

    Clearly SOME numbers represent quantities. Other numbers represent scalings and rotations in the plane. Ordinal numbers represent order types. Cardinals DO represent quantity! See, we even have two different notions of transfinite numbers, one that represents quantity (cardinals) and one that represents order (ordinals).

    We have familiar numbers like pi where we're hard pressed to say what quantity of anything they represent. Pi is defined as a ratio; it's defined as an infinite series; it's defined as the smallest positive zero of the sine function, which we can itself define via an infinite series so that there's no geometry involved.
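    To illustrate that last point, here's a little sketch: define sine purely by its Taylor series (no triangles, no circles), then hunt for its smallest positive zero by bisection. The name sine and the bracketing interval are just my choices for the illustration; the value that pops out is pi:

        def sine(x, terms=20):
            """sin(x) from its Taylor series: x - x^3/3! + x^5/5! - ..."""
            total, term = 0.0, x
            for n in range(terms):
                total += term
                term *= -x * x / ((2 * n + 2) * (2 * n + 3))
            return total

        # sine(3) > 0 and sine(3.5) < 0, so the zero is bracketed; bisect.
        lo, hi = 3.0, 3.5
        for _ in range(60):
            mid = (lo + hi) / 2
            if sine(mid) > 0:
                lo = mid
            else:
                hi = mid
        print(lo)   # 3.141592653589...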

    Some numbers represent quantities and others don't. So it's a tricky thing to accurately express what a number must be in general to be considered a number. Every rule anyone thinks of has lots of exceptions.

    If i is not a number, then what is it?
    You're free to agree or disagree with whatever you like; in this case, you might have done some research. I did. Mathematicians appear to classify i as a number.
    tim wood

    I absolutely agree that i is a number. But it is not a quantity. You said it is a quantity. I want to know by what criteria you call i a quantity. And it's wholly inadequate to say that i is a quantity because it can be quantified. Any fair-minded philosopher must see this.
  • Do numbers exist?
    Your inability to discuss the foundations of maths is noted.apokrisis

    Your complete misunderstanding and lack of comprehension of category theory and mathematical structuralism was evident to several other posters the last time we discussed this. I was hoping in the limited time I have each day to post here that I would gradually work through your earlier posts in this thread and help you sort out some of your ideas. But you are simply too rude and annoying for me to bother any more.

    I'm done responding to your posts on this site. It would be for the best if you'd simply stop responding to me. Regardless I will no longer respond to you.
  • Do numbers exist?


    I did not write the quote you attributed to me. What is your attitude problem?

    I stated originally that there is no general definition of number in math. Nobody has provided a counterexample and you now seem to agree. That's all I said. I actually can't imagine why you are going on about structuralism, which has nothing to do with what I said.

    I'm not making any point about philosophy. I'm making a statement about math. There is no general definition of number in math. This is uncontroversial and widely known. You are going off on wild tangents that don't bear on what I said and that don't falsify what I said. If you choose category theory as your foundation, there's still no general definition of number.

    Perhaps you would consider starting a thread on mathematical structuralism. It's an interesting topic. It has nothing to do with what I said, which is that there is no general definition of number in math.
  • Do numbers exist?
    you seem to want a definitionapokrisis

    I don't want a definition. I merely pointed out that there isn't one in math. You agree with this by now, yes?
  • Cryptocurrency
    Hope nobody here mortgaged their house to go all in at $19k. It's a crypto bloodbath out there at the moment.
  • Do numbers exist?
    If you don't like the fact that numbers are defined in terms of set theorytom

    I certainly can't understand how you would have gotten that impression. Many specific types of numbers are defined within set theory. But there is no general definition of what a number is in set theory or in any other foundational approach.

    I guess you won't like the fact that numbers are also defined in terms of field axioms.tom

    Not a bad idea. But the field axioms don't say anything about numbers. It's true that many types of numbers satisfy the field axioms, such as the rationals, the reals, the complex numbers, the integers mod p, and all the finite fields of order p^n.

    However, the set of rational functions in one variable satisfies the field axioms, but rational functions are not numbers. Rational functions are quotients of polynomials. It's not hard to show that they can be added, subtracted, and multiplied. It's a standard, somewhat nontrivial exercise to show they can be divided. So the field axioms aren't sufficient to define what we mean by a number.

    https://en.wikipedia.org/wiki/Rational_function
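    A quick illustration, assuming sympy is available: add, multiply, and divide rational functions and you always land on another rational function, which is exactly the field-axiom behavior. None of these objects is a number.

        from sympy import symbols, cancel

        x = symbols('x')
        f = (x**2 + 1) / (x - 2)     # a rational function: a quotient of polynomials
        g = (3 * x) / (x**2 + 5)     # another one

        # The field operations stay inside the rational functions.
        print(cancel(f + g))
        print(cancel(f * g))
        print(cancel(f / g))         # dividing by a nonzero rational function is fine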

    And no, new types of number are not "discovered all the time".tom

    The quaternions (discovered in 1843 by Hamilton), transfinite ordinals and cardinals (Cantor, 1874 through the 1890s), the p-adics (Hensel, 1897), and the hyperreals (Hewitt, 1948) are a few examples that come to mind. These are very recent developments in the history of math. People didn't always believe in zero, negative numbers, rational numbers, real numbers, or complex numbers. Each time someone discovers a new type of number, mathematicians have to expand their own ideas about what constitutes a number.
  • Do numbers exist?
    a structuralist definition, in that numbers are whatever it takes to get certain number-like operations - like those that preserve certain global symmetries, such as commutativity or associativity.apokrisis

    The quaternions are numbers whose multiplication is not commutative. The transfinite ordinals are numbers whose addition is not commutative. How weird is that, right?

    Good idea and a very natural attempt; but arithmetic properties aren't sufficient. Weirder still, there are numbers that lose associativity as well, such as the octonions. Octonions come up in physics so these are not only of abstract mathematical interest.
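    To make the non-commutativity concrete, here's a bare-bones sketch in Python: quaternions written as (w, x, y, z) tuples, with the Hamilton product spelled out by hand (qmul is just my name for it). No library needed:

        def qmul(p, q):
            """Hamilton product of quaternions given as (w, x, y, z) tuples."""
            w1, x1, y1, z1 = p
            w2, x2, y2, z2 = q
            return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                    w1*x2 + x1*w2 + y1*z2 - z1*y2,
                    w1*y2 - x1*z2 + y1*w2 + z1*x2,
                    w1*z2 + x1*y2 - y1*x2 + z1*w2)

        i = (0, 1, 0, 0)
        j = (0, 0, 1, 0)
        print(qmul(i, j))   # (0, 0, 0, 1):  i*j =  k
        print(qmul(j, i))   # (0, 0, 0, -1): j*i = -k, so multiplication is not commutative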


    I intend to go back to your first post on the subject and respond in detail to your comments on mathematical structuralism and category theory, so I hope you can be a little patient. I want to start at the chronological beginning of your posts on the subject and I can't do that tonight.

    In short, though, mathematical structuralism is more subtle than just listing arithmetic properties like associativity. The kinds of properties they use in category theory are ... well, they're kind of weird and nonintuitive when you first see them. The structural relations they have in mind are various types of universal mapping properties. It's hard to do justice to what this means in a simplified format, but I might take a run at it once I get into responding in detail to your earlier post on structuralism.
  • Do numbers exist?
    Why wouldn't ZFC count?Akanthinos

    There's no definition of number in ZFC. In ZFC we have a definition of the natural numbers, and we can make definitions of the integers, rationals, reals, complex numbers, quaternions, transfinite ordinals and cardinals, hyperreals, and many other types of number.

    But there is no general definition of number. If a thing comes down the conveyor belt and I have to say if it's a number or not, of course I can identify the types of numbers I already know about: integers, reals, etc. But I can't determine in general what is a number. ZFC offers no help in this regard.
  • Do numbers exist?
    "Is" can be a tricky word.tim wood

    Bill Clinton made that very same argument to try to wiggle out of a sex scandal. In the end he lost his license to practice law and was impeached (but not convicted).

    You ask "what is quantity?" Quantity is the general name for an idea that is always particular, and that refers to anything that can be quantified.tim wood

    You and @apokrisis seem to feel that "a number is anything that's number-like" and "a quantity is whatever can be quantified" represent valid definitions. What happens if the biologists get hold of this trick? A fish is whatever is fish-like. A cat is whatever is cat-like. A virus is whatever is virus-like. And the deepest question of all: life is whatever is life-like.

    This sounds like a fast path to meaninglessness to me.

    Now I think you are confused in that you think a definition somehow "is" what something is.tim wood

    No I'm not confused on that at all. There were cats long before a biologist said that a cat is "a small domesticated carnivorous mammal with soft fur, a short snout, and retractile claws." The thing clearly precedes its definition.

    A definition is more like a classifier. I'm working in a factory and my job is to stand at a conveyor belt and throw things into one bin or the other: cat and not-cat. A definition is a set of criteria that let me unambiguously do that. The definition lets me recognize things that are cats; and things that are not-cats. In other words I'll throw the cats into the cat bin and the not-cats into the not-cat bin with as close to 100% accuracy as possible. That's what a definition is.

    Now today my job is to identify quantities versus non-quantities. So my definition is: a quantity is anything that can be quantified. But that's no help! You've just given me a different syntactic form of the same word. You have NOT provided me with classification criteria. So "a quantity is anything that can be quantified" is not a definition, nor is "a number is something that's number-like." You haven't told me how to sort the objects into the bins.

    Or you apparently think that the definition of number, or quantity, will tell you what these things are.tim wood

    A good definition does let me determine whether a given object is or isn't the thing in question. If an object comes down the conveyor belt and it's a small furry domestic animal etc., I know it's a cat.

    This approach or understanding - actually utilization - gets a lot of the world's work done, but it isn't remotely true. A definition is simple an agreed description, for some purpose.tim wood

    It's a strong description. It's a description that fits 100% of the things we wish to include, and none of the things we wish to exclude. If a description satisfies that criterion, it's a definition. "Quantities are things that can be quantified" is no help at all. It's a description but not a definition.

    As to i, it's the square root of -1, it's a number, and it exists (keeping in mind you probably have at best a partial idea of what "existence" means, and of what I mean by it).tim wood

    To be sure, I have no idea what you mean by existence. I would say that i exists because it exists in math according to the formal rules; and also because we see many instantiations of i in the physical world. That latter point isn't obvious to everyone, but for example any time you make a 90 degree counterclockwise turn, you are instantiating the number i in the world. And of course i comes up in physics and engineering all the time.

    Definitions, then, are functional.tim wood

    Yes I agree with that. A definition is whatever you can write down on an index card for me that will allow me to recognize cats and numbers and quantities as they come down the conveyor belt. Functional. Good word for it.

    And if any thing is going to be discussed in terms of its definition, or any understanding of what that something is, then it's best to start with some explicit expression of that definition or understanding. That's just good navigation. And of course it's negotiable, if that's appropriate.tim wood

    To sum up, or rather to get back to basics, you claimed that numbers represent quantities. The number i represents a phase angle in electromagnetism or a quarter turn if you're in the plane. But I don't see those as quantities. So I have to ask again, what is a quantity? Are you claiming that the number i represents a quantity? That I do not agree with. I don't see it.
  • Do numbers exist?
    At least Frege, Russell, and Whitehead defined what a number is. There are probably several others.tom

    Yes that's an interesting point. Philosophers and logicians have struggled to define what a number is. Mathematicians don't really care that they haven't got a precise definition. Mathematicians expend zero energy going down that rabbit hole. It's not mathematically productive.

    There is an advantage to this approach. Mathematicians are not constrained by a definition of number, which allows them to discover new types of numbers all the time.



    I have repeated the same thing several times, yet you are still misunderstanding what I'm saying.

    There are perfectly clear definitions of specific types of numbers such as naturals, integers, quaternions, etc.

    And in fact each foundational approach has its own definition. In set theory the natural numbers are defined as the von Neumann ordinals. In category theory there's a natural number object, as @apokrisis mentioned earlier.

    The page you linked to is the von Neumann definition of the finite ordinals, which has the nice advantage that it can be easily extended to transfinite ordinals.

    The categorical natural number object, by contrast, defines the natural numbers structurally. The benefit of that approach is that it avoids the so-called "junk theorems" of the von Neumann encoding. For example in standard set theory, 2 ∈ 3 is a theorem. No sensible person would claim it means anything. The categorical approach gets rid of that type of problem.
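    Just to show how unremarkable that "junk" is once you build the von Neumann naturals, here's a tiny sketch modeling them as Python frozensets, each number being the set of all smaller numbers (von_neumann is my own name for the constructor):

        def von_neumann(n):
            """The von Neumann natural n, built as a frozenset: n = {0, 1, ..., n-1}."""
            s = frozenset()
            for _ in range(n):
                s = frozenset(s | {s})
            return s

        two, three = von_neumann(2), von_neumann(3)
        print(two in three)   # True -- the "junk theorem" 2 ∈ 3 in this encoding

    The membership fact is an artifact of the encoding, not a fact about the number 2. That's exactly the sort of thing the structural definition factors out.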

    My thesis is entirely agnostic of foundational approach, a point @apokrisis does not sufficiently appreciate. There is no general definition of number in set theory, category theory, homotopy type theory, Martin-Löf type theory, intuitionist type theory, any of the various constructivist ideas, or any other foundational approach. There are many foundational approaches these days. My statement applies to all of them.

    There is no general definition in math that tells us what a number is. There are plenty of definitions of specific types of numbers. There are even (distinct but closely related) definitions of specific types of numbers in different foundations. But there is no general definition of number.