• My own (personal) beef with the real numbers
    It was a joke! Yes, of course I don't believe there's anything wrong with the physical universe because of this theorem.
  • My own (personal) beef with the real numbers
    Ehm, sorry but I am afraid I made a mistake in what I wrote. Better to fix it before it goes too far...
    I wrote "The digits in a real number should not be countable". Well, the digits of (the decimal representation of) a real number are countable, since they are determined by a function of type "natural-number ==> digit".
    It's the set (the totality) of all real numbers that is uncountable: meaning that there is no surjective function of type "natural-number ==> real-number".

    But notice that even the set of all functions of type "natural-number ==> boolean" (for example) is uncountable. And Cantor's diagonal argument can be applied to any function of type "something ==> something else" to show that there are more functions than objects, even if the functions are simply well-defined terminating algorithms: there's no need to use formal logic or set theory to prove this.
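    As an aside, the diagonal construction described above is itself a terminating algorithm at every position. A minimal Python sketch (all names are mine, purely illustrative):

```python
def diagonal(enumeration):
    """Given enumeration(k) -> (function nat -> bool), i.e. a countable
    list of boolean sequences, build a sequence that differs from every
    enumerated one (Cantor's diagonal argument)."""
    def new_sequence(n):
        # Flip the n-th value of the n-th sequence: the result differs
        # from sequence n at position n, hence from every listed sequence.
        return not enumeration(n)(n)
    return new_sequence

# A toy enumeration: sequence k is "True exactly at multiples of k + 1".
def enumeration(k):
    return lambda n: n % (k + 1) == 0

diag = diagonal(enumeration)
# diag is itself a well-defined terminating algorithm, yet not in the list:
assert all(diag(k) != enumeration(k)(k) for k in range(100))
```

    The same flip works against any proposed enumeration, which is the whole point: no list of "nat ==> bool" functions can be complete.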

    In my opinion, the thing that makes real numbers more difficult to grasp intuitively is that they don't have a normal form: there is no way to create an algorithm that decides if two arbitrarily defined real numbers are the same number or not, and that's because there are "too many ways" to build a real number (basically, you can use whatever algorithm you want, and in general there is no way to decide if two given algorithms produce the same output or not).
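    To make the "no normal form" point concrete: here are two obviously different algorithms that generate the digits of the same real number (1/3). They can be compared to any finite depth, but nothing about that check generalizes to a decision procedure for two arbitrary digit-generating programs. A toy Python sketch:

```python
# Two different algorithms for the digits of the same real number, 1/3.
def digits_by_division(n):
    # n-th decimal digit of 1/3, computed by long division
    return (10 ** (n + 1) // 3) % 10

def digits_by_constant(n):
    # 1/3 = 0.333...: this algorithm just outputs 3 every time
    return 3

# Agreement can be checked to any finite depth...
assert all(digits_by_division(n) == digits_by_constant(n) for n in range(1000))
# ...but no single algorithm can decide, for two ARBITRARY digit-generating
# programs, whether they agree at every position (equality is undecidable).
```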

    However, the other problem I've been alluding to is revisiting all previous theorems proven in a finitist regime; which is also an essential part of understanding the infinite regime. Some theorems are abandoned. Choices must be made.

    For instance, in Euclidean geometry we can have a theorem that a sphere represented by an arbitrary, but not infinite, number of components cannot be turned into 2 spheres of equal volume. We can also have a theorem that arbitrary amounts of lines never completely fill up an area or volume. Going to the standard infinite regime we can revisit these theorems and prove them "false", in the sense that what we thought we couldn't do before we can now do in this new system. For me, these "side effects", features that we didn't expect and didn't set out to make, are what makes these areas of mathematics difficult for high school students, even more than being able to construct the objects we're intending to make, like the real numbers.
    boethius

    Well, I wouldn't start from the "pathological" cases to show that volume additivity doesn't work any more. On the contrary, I would start from the fact that you can calculate the volume of curved figures as if they were made of infinitesimal polyhedrons, and it always works! (Archimedes' volume of the sphere is very intuitive and beautiful).
    OK, then there is this little "glitch" in the fabric of the universe named the Banach-Tarski theorem... :smile:
    It doesn't work because in integral calculus you have to take "open sets" as infinitesimal pieces ( but I would prefer to not go into details about this issue, because surely @fishfry will read this and will not agree :wink: )
    Anyway, the fact that infinite additivity works as if infinitesimal geometrical objects existed in reality is the really interesting and useful fact. The fact that it's so difficult to prove why it works and why in some cases it doesn't, maybe makes the problem even more interesting..
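    For what it's worth, the "volume from infinitesimal pieces" idea is easy to demonstrate numerically: stacking thin cylindrical disks approximates Archimedes' 4/3 * pi * r^3 as closely as you like. A quick Python sketch (the function name is mine, not from the thread):

```python
import math

def sphere_volume_by_slices(r, slices):
    """Approximate the volume of a sphere of radius r by stacking thin
    cylindrical disks, as if the sphere were made of infinitesimal
    cylinders."""
    dz = 2 * r / slices
    volume = 0.0
    for i in range(slices):
        z = -r + (i + 0.5) * dz                    # midpoint height of slab i
        cross_section = math.pi * (r * r - z * z)  # area of the circular disk
        volume += cross_section * dz
    return volume

# The sum converges to Archimedes' 4/3 * pi * r^3:
assert abs(sphere_volume_by_slices(1.0, 100_000) - 4 / 3 * math.pi) < 1e-6
```

    The interesting part, as said above, is exactly that this "as if infinitesimal cylinders existed" computation always agrees with the exact result for ordinary figures.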

    So, even if there was time to explain, in some actual mathematical way involving definitions and proofs, that the infinite digit expansions are uncountable, and that it's due to this uncountability that we can assert they cannot be converted into integers ... while still having infinite integers but no "infinite integer" available to put in our set of rationals ... nor asserting that all integers in our set are finite in the sense of having a finite amount of them, which would be clearly false.boethius

    Sorry, but I don't understand when you say "due to this uncountability..." why is uncountability a problem?

    So, infinite sets and real numbers could be something introduced at the very end of secondary maths when these foundational concepts are more familiar. But to start, understanding divergence and convergence and tangents and how series and sums and derivatives and integral functions relate to each other (and how to solve real problems with them), is challenging enough to learn in a finitist regime; my intuition is that doing this also with the conceptual challenge of infinity makes it much harder to "see" and to "get" what's going on, and students who start asking questions, even just pondering to themselves, that have no good answers available will much more easily get lost or believe their questions are seen as stupid by the mathematical community, simply because their teacher can't deal with them.boethius

    I think that, more than in the "finitist" regime, these things should first be taught in the 18th-century "Eulerian" regime, where functions always work as if they were infinite polynomials and derivatives are made of infinitesimals. You have to see that all this stuff with infinities really works before starting to wonder how it is possible that it works if infinities don't really exist. Then you have a motivation to study formal logic and set theory. More or less, following the historical development of mathematics. I think there is no sense in creating a theory of infinite sets if you don't see what all this infinity stuff is good for.
  • My own (personal) beef with the real numbers
    I think we agree that it's bad pedagogy to simply posit the reals with no explanation and no time or ability to answer very expected and natural questions. Instead of curiosity leading to better understanding, it leads to confusion and a sense maths is "because we say so", which is the exact opposite sense students should be getting.boethius

    Perfectly agree.

    Students would be better served by a less ambitious (not actually having irrationals and transcendentals as objects) but more rigorous calculus in the numerical regime, which would make a much more solid foundation for students going on to use applied maths, who can simply stay in this regime (as they will likely be solving every problem with the computer), and better serving pure maths students as well (that mathematics is rigorous, and extensions are made to do new things in a rigorous way).boethius

    Well, I think a lot of interesting calculus at Euler's level could be done in a sufficiently rigorous way, just making the students aware of which parts are really rigorous and which ones are the most "doubtful" when reasoning about infinities. But the most doubtful ones are even the most interesting! And if you explain that we don't know everything, that's the part that makes the subject of math worth studying. Why should I (as a student) lose time on a subject where everything was discovered a long time ago, and the only thing I can do is learn by heart what others did? Math becomes interesting when you see that you can use it to discover new things that nobody told you. And there are still a lot of things to be discovered; only that you have to learn how to reason about them in the right way!
  • My own (personal) beef with the real numbers
    I'm not building with infinite integers, I'm building with the infinite decimal expansion representation of real numbers and simply pruning off the decimal symbol. Sure, if we simply define integer as "not this" then it's not building an infinite integer, but it is building something that I can then do things with if I'm not prevented from doing so.boethius

    The decimal symbol is the thing that says which digit of the numerator matches which digit of the denominator. If you prune off the decimal symbol you have to specify in which sequence you add up digits up and down to "build" your infinite fraction. If you add one digit at a time up and down, you get a sequence of fractions that is (converges to) a well defined real number.
    The difficult point to clarify here is that infinite expressions are not simply the set of symbols (or digits) that compose them, but that set of symbols PLUS the algorithm that says in what sequence you have to add them. And if you change the sequence you get a different result.
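    The "digits PLUS the algorithm that orders them" point can be made concrete: a digit function, together with the rule "add one digit at a time up and down", yields a well defined sequence of fractions. A small Python sketch using 1/7 as the example (the helper name is mine):

```python
from fractions import Fraction

def partial_fractions(digit, n_terms):
    """Given a digit function digit(k) (the k-th decimal digit after the
    point), build the sequence of finite fractions obtained by adding one
    digit at a time 'up and down': the numerator grows digit by digit,
    the denominator is the matching power of 10."""
    seq = []
    numerator = 0
    for k in range(n_terms):
        numerator = numerator * 10 + digit(k)
        seq.append(Fraction(numerator, 10 ** (k + 1)))
    return seq

# Example: the digit function of 1/7 = 0.142857 142857 ...
pattern = [1, 4, 2, 8, 5, 7]
seq = partial_fractions(lambda k: pattern[k % 6], 30)
# Each element is a well defined (finite) fraction, and the sequence
# converges to the real number 1/7:
assert abs(seq[-1] - Fraction(1, 7)) < Fraction(1, 10 ** 29)
```

    Every element of the sequence is an ordinary finite fraction; it is the algorithm, not the bare set of digits, that determines which real number the sequence converges to.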

    However, if we switch regimes to one where we now have access to the infinite digit expansion of real numbers, we can revisit every proof in the previous regime with our new objects; and now, revisiting the proof that root 2 is irrational, I am able to solve it with these new objects and not arrive at a contradiction, as oddness / evenness, upon which the classic proof by contradiction depends, is no longer defined. This is what I am doing.boethius

    There is a much simpler proof for the irrationality of roots: take a fraction and write numerator and denominator as products of prime factors. Then square it. It's evident that every prime factor appears an even number of times, both in the numerator and in the denominator. It means that every squared fraction has this characteristic of doubled prime factors. It follows (from "A ==> B" follows "not B ==> not A") that if a fraction is not of this form, it cannot be the square of another fraction. Therefore, the square root of any fraction not containing doubled prime factors must be irrational. This proof also has the advantage of clarifying how to check whether any square root is irrational (not only the root of two), and it works even for cube roots, etc..
    Now, the problem with doing this with infinite integers is that you have to specify how to decompose them into prime factors. If you have an algorithm that decomposes the "partial" integer generated at every step (by the algorithm that specifies the sequence used to build the infinite number), the proof will work in the same way, and the result will be correct.
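    For ordinary finite integers, the doubled-prime-factors criterion is easy to turn into an algorithm. A rough Python sketch of the idea described above (function names are my own):

```python
from collections import Counter
from math import gcd

def prime_factors(n):
    """Multiset of prime factors of a positive integer (trial division)."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def square_root_is_rational(p, q):
    """sqrt(p/q) is rational exactly when, in the reduced fraction, every
    prime factor of p and of q appears an even number of times."""
    g = gcd(p, q)
    p, q = p // g, q // g
    return (all(e % 2 == 0 for e in prime_factors(p).values())
            and all(e % 2 == 0 for e in prime_factors(q).values()))

assert not square_root_is_rational(2, 1)   # sqrt(2) is irrational
assert square_root_is_rational(4, 9)       # sqrt(4/9) = 2/3
assert not square_root_is_rational(12, 1)  # 12 = 2*2*3: the 3 is not doubled
```

    Note that reducing the fraction first matters: 8/18 reduces to 4/9, whose square root 2/3 is perfectly rational even though 8 and 18 themselves contain undoubled factors.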


    Am I prevented from doing this full stop? Am I unable to find a "suitable decimal expansion" to solve my problem? What exactly is preventing me from doing this; that is what I would consider a suitable answer in the context of learning maths. Given these infinite decimal expansions, I want to use them in whatever ways I see fit, unless I'm prevented by some axiomboethius

    Perfectly right. Not sure if the previous explanation is enough... The problem with proofs by contradiction (the reason why it's easy to make mistakes using them) is that you should have only ONE assumption, and then if you reach a contradiction you know that THAT ONE assumption was false. Using them with infinite objects you very often introduce hidden assumptions about uniqueness of those objects that are not true.

    Broad features and themes involved in rigorous proofs elsewhere I do not consider a good answer for learning math. For me, "learning math is" understanding the proof oneself, not understanding that others elsewhere have understood something.boethius

    I agree.

    Moreover, your approach, would seem to me, to imply that the decimal expansion representation of a real number cannot be counted; is this your implication? or would you say the digits in a real number are countable?boethius

    No, the infinite decimal expansion of a real number is a perfectly good real number: it's the limit of a convergent power series: 1.3762.... = 1 + 3/10 + 7/100 + 6/1000 + 2/10000 + ....
    The digits in a real number should not be countable, but you have to say which algorithm you use to generate them, since they are infinite. Or, put in another way, you have to specify in same way a function "F: natural number ==> digit" that for each position (power of 10) says which digit to put in the numerator.
    ( or maybe I didn't understand your objection.. )

    Also, how do you maintain infinite sequences can be completed, there are no infinite integers, the sequence of integers is infinite, simultaneously within the system suitable for high school level maths. Do we simply elect not to use our "complete the infinity tool" on the integers, and add this axiomatically?boethius

    I would say:
    - infinite sequences are the same thing as functions from integers to sequence elements.
    - functions from integers to sequence elements are surely well defined if the rule to produce the Nth element is clear (is an algorithm)
    ( maybe explain that you can even assume the existence of non-algorithmic functions, with the axiom of choice, but you cannot use it freely without making use of formal logic )
    - integers are defined as sums of powers of 10 (that is the DEFINITION of an integer in the standard notation, not some strange property. So, 2 is 1 + 1 BY DEFINITION: nothing to be proved). The problem with infinite integers is that you don't know which powers of 10 they're made of. If you have an infinite decimal expansion, you know the powers of 10 and everything works. If you are not convinced, try to write infinite integers in Peano notation: 1+1+1+1+.... (or SSSSS..0 - same thing): they are all the same number.
    - the sequence of integers is infinite because it is constructed by adding +1 at each step, and this is a non-terminating algorithm that produces a well defined result at each step, so it's allowed as an algorithm.
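    A tiny Python sketch of the first and third points above (illustrative names only): a sequence is literally a function, and an integer in standard notation is literally a sum of powers of 10:

```python
# A sequence is just a function from naturals to its elements; the
# "+1 at each step" algorithm, viewed as a function, gives the naturals:
def naturals(n):
    return n  # the n-th element produced by starting at 0 and adding +1

# An integer in standard notation IS a sum of powers of 10: digits[k]
# is the coefficient of 10^k, and the integer is that sum by definition.
def from_digits(digits):
    return sum(d * 10 ** k for k, d in enumerate(digits))

assert from_digits([2, 0, 3]) == 302   # 2*1 + 0*10 + 3*100
assert naturals(41) + 1 == naturals(42)
```

    An "infinite integer" would need `from_digits` to consume an infinite digit list with no largest power of 10, which is exactly where the definition stops applying.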

    What axioms do they have to work with? Do they know enough set theory do make a model that avoids all these problems, or do they have another suitable basis?boethius

    I would avoid introducing set theory before speaking about logic. Of course, you can do the set theory of FINITE sets without logic, but that is not useful to explain real numbers, or anything related to analysis.
  • My own (personal) beef with the real numbers
    However, it's also necessary to deal with questions like "why can't we just have rational numbers with infinite numerator and denominator; seems unfair to allow infinite decimal expansion but not infinite numerators?". Of course, we can have rational numbers with infinite numerator and denominator but it's then needed to explain how these aren't the "real" rational numbers we're talking about when we say the square root of two is irrational.boethius

    I think the main thing to understand here is that decimal numbers with infinite decimals can be considered as an extension of "regular" decimal numbers (finite list of digits), but infinite natural numbers (infinite list of digits) cannot be considered as an extension of "regular" natural numbers, since you cannot define on them arithmetic operations compatible with the ones defined on the "regular" natural numbers. Then, you can't build fractions with infinite integers because you cannot build infinite integers in the first place. In my opinion this is quite easy to understand. Did I miss something?

    I think the infinities and infinitesimals of mathematics are the things that make it more "magic" and interesting. The problem with teaching, in my opinion, is more that the "magic" of infinities and infinitesimals really working is not explained, or worse, is explained by pretending there is a simple logical explanation that, however, is not part of the school program.
  • Musings On Infinity
    But I don't get the metaphysics. "A thing only exists when we have an algorithm to instantiate it."fishfry

    Now, just before leaving for vacations, the metaphysical part, that surely I'll not be able to defend in a philosophy forum :razz:

    The algorithms that represent "all the things that exist" are not only finite algorithms, but even infinite ones: they are ALL the functions (even non-recursive or non-total ones). So, the rules of logic are based on the model of computable algorithms, but are supposed to work for any function (which can be seen as some kind of generalization of an algorithm).

    If you think about it, that's the same kind of "generalization" that is done in set theory: the rules of logic are based on the model of finite sets (I am speaking about the rules of first order logic - that are the ones that define the "meaning" of logical operators - not about the axioms of ZFC), but are supposed to work even for the infinite sets of set theory.

    So, in constructivist type theory you replace the concept of infinite sets with the concept of infinite constructions. The rules of logic allow you to add "pieces" to these constructions and to compose them with each other, but of course you can't "decompose" the infinite ones: you have to treat them as a "black box", or an "oracle", or a function of a programming language belonging to an external library that "runs" on some kind of "machine" that can compute even uncomputable functions.

    So, as in set theory you can think of everything as made of sets, in constructivist type theory (meaning the Martin-Löf kind of type theories) you can think of everything as made of functions.
    And that's basically the reason for the strict correspondence between type theory and category theory: category theory is an axiomatic theory of functions (based on set theory). It axiomatizes the properties of functions to generalize them to morphisms, which are "anything" that "works" as a function.

    As the physical intuition for sets is that every object is made of parts, the physical intuition for functions is that all of our experience can be seen as an interaction, or an experiment.
    The only assumption is that functions are something that always give the same result if you prepare them in the same way.
    But in the end, neither of these intuitions is really true at a fundamental level of physics:
    - it is not true that everything is made of parts, because the "particle number" is not conserved.
    - it is not true that there are experiments that can be prepared always in the same way to give always the same result.

    But if you don't assume the idea that everything is made of parts, you don't have to assume the unintuitive :wink: idea that a line is made of points: you can think of lines and points as two different types, and an intersection of two lines as a function from pairs of lines to points.
  • Musings On Infinity
    Secondly, you said you can import a library into Coq to recover standard math. But if that were true, standard math would be computerizable, and then that would remove the biggest benefit of constructivism.fishfry

    The point is that "computerizable" does not mean "computable", because terms built in classical logic in general don't correspond to computable functions, and that is due to the fact that some of the rules, or some of the axioms, allow you to build mathematical functions in an indirect way, and that is what corresponds to the use of "oracle" (or non computable, or axiomatically defined) functions in constructive logic.

    What constructive logic does is move all the non-computable parts out of the logic (leaving only the rules that build terms in a "direct" way); these parts can, however, be added back to the logic as axioms (usually by importing them as libraries) to recover classical logic.

    What remains after removing the non computable parts of standard logic is a language that speaks only about proofs (not about truth), and proofs are purely syntactical entities that can be reasoned about using only computable functions.
  • Musings On Infinity
    But if you code up the axioms of formal ZFC into a computer, the proof of the existence of the Vitali set can be cranked out by a program. Even by an undergrad! So in that sense, proofs are constructions even if they are constructing things that are in no way constructive.

    Have I got that right?
    fishfry

    Yes! Exactly!! Proofs are constructions in ANY formal logic, because that's how formal logic is defined!
    There are computer-based proof verification systems for a lot of different logics: https://en.wikipedia.org/wiki/Proof_assistant
    You can look here for a short history of the development of these systems: https://en.wikipedia.org/wiki/Automated_theorem_proving

    These systems are called "proof assistants" because generally they try to allow the user to skip some steps of a proof and infer the missing steps, so that the user is not forced to write every single step, that can be very tedious (depending on what logic you use), but the main functionality that all of them provide is proof verification.
    However, the truth is that this is all "theoretical" stuff: in practice, at least before Voevodsky introduced HoTT, mathematicians have always ignored these systems (as they have ignored formalized logic), because they are way too tedious and time-consuming to use in practice. The problem is that they force you to concentrate on absolutely unessential steps of the proof, and to use a language that does not describe at all what the important steps of the proof really "mean". But this problem is not due to the use of computers (formal systems in a sense were "made" for computers, even if they were invented when computers didn't exist yet); this is a problem with the formal languages themselves, and that's the main problem that HoTT is trying to solve.

    However, you can for example look at the HOL proof assistant (https://en.wikipedia.org/wiki/HOL_(proof_assistant)). This is a very clear implementation of proofs in classical higher-order logic as programs. In HOL you don't even enter a special environment where you write proofs: you directly write a program in the ML programming language (a perfectly "normal" programming language) to literally "build" the propositions that you are proving. There is a data type in ML called "proposition", and the rules of classical logic correspond exactly to the type constructors of the "proposition" data type. And the logic (higher-order logic) is probably the nearest formal language to the "informal" classical logic used in practice by mathematicians in real (non formalized) proofs.
  • Musings On Infinity
    Now, what does intuitionistic logic have to do with algorithms?

    Here's the quick answer: in intuitionistic logic a proposition P can be interpreted as the "description" of the algorithm that has been executed (using the rules of logic) to build the proof of P. And this of course has to be a deterministic algorithm that terminates in a finite number of steps, because every proof must be made of a finite number of steps.

    Now, the problem is that the long answer is much, much longer. So, I'll describe the consequences of the short answer before.

    The first thing to clarify is that intuitionistic logic (more concretely, these rules: https://en.wikipedia.org/wiki/Natural_deduction#/media/File:First_order_natural_deduction.png)
    is only a first-order logic, and not a theory of sets.
    ZFC, instead, is classical first-order logic (intuitionistic + EM), plus the axioms of set theory.
    Now, the axioms of ZFC "work well" with classical logic, but not with intuitionistic logic. In particular, the correspondence between algorithms and proofs works only in the very particular formulation of "natural deduction", with no axioms at all. The addition of the ZFC axioms would introduce the membership predicate, which does not respect the "propositions as algorithms" interpretation. Some of the reasons why ZFC doesn't "fit well" with intuitionistic logic can be found here: (https://en.wikipedia.org/wiki/Constructive_set_theory)
    """
    In 1973, John Myhill proposed a system of set theory based on intuitionistic logic[1] taking the most common foundation, ZFC, and throwing away the axiom of choice (AC) and the law of the excluded middle (LEM), leaving everything else as is. However, different forms of some of the ZFC axioms which are equivalent in the classical setting are inequivalent in the constructive setting, and some forms imply LEM.
    """
    Basically, the problem is that ZFC axioms imply EM. So even if you exclude EM from logic axioms, you can still derive it from ZFC axioms.

    However, there is a "natural" extension of the rules of intuitionistic logic that respects the correspondence between algorithms and proofs and is interpreted in a similar way as a theory of some sort of "sets": intuitionistic type theory. (https://en.wikipedia.org/wiki/Intuitionistic_type_theory)

    In intuitionistic type theory sets are of course replaced by types (which are not exactly the same thing), but there is "practically" a one-to-one correspondence between propositions expressed in the two languages. From a formal point of view, however, there is a big difference: ZFC is implemented as axioms on top of an independent first-order logic. Intuitionistic type theory, instead, is an extension of the rules of intuitionistic first-order logic to a higher-order logic (basically by adding quantification on functions), still making no use of axioms at all.

    Now, back to the answer about algorithms.
    As I wrote in the previous post, intuitionistic logic does not speak about truth but about provability and proofs. And proofs are, as in every formal logic, algorithms.
    The fundamental difference is that proofs in classical logic are not an essential part of the theory: the important thing is that a proof of a proposition P exists. If a proof exists, the proposition P is true, and that is all what matters about P. The algorithm that has been used to prove P is not considered as part of the language.
    In intuitionistic logic, instead, the language speaks about proofs, and the terms of the language are meant to represent proofs (that are algorithms), and then the language speaks about algorithms.

    For example, in type theory I can write the expression "x : P", meaning "x is the proof of the proposition P". "x" in this case represents an algorithm.
    Then I can have the expression "f : P -> Q", meaning that "f" is the proof that "P implies Q". This means that "f" is an algorithm that takes as input a proof of P (another algorithm) and returns as output a proof of Q (a third algorithm).
    The "side effect" of this interpretation is that I never have to verify how a proposition has been proven, because proofs are always "recorded" together with the propositions. The propositions are interpreted as types, and the terms are interpreted as proofs (always written in the form "x : P") and checking the proof
    of P means checking if the term x has type P (this is done with an algorithm that is part of the meta-theory).
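    The "x : P" and "f : P -> Q" reading can be loosely mimicked in any language with function types. Here is a very rough Python sketch (Python's type hints are not enforced, so this is only an analogy, nothing like the real type checking done by Coq or HOL):

```python
from typing import Callable, Tuple, TypeVar

P = TypeVar("P")
Q = TypeVar("Q")
R = TypeVar("R")

# Reading: a value of type P is a proof of the proposition P.
# A proof of "P implies Q" is a function from proofs of P to proofs of Q.
# A proof of "P and Q" is a pair (proof of P, proof of Q).

def modus_ponens(f: Callable[[P], Q], x: P) -> Q:
    # from a proof of P -> Q and a proof of P, build a proof of Q
    return f(x)

def compose(f: Callable[[P], Q], g: Callable[[Q], R]) -> Callable[[P], R]:
    # a proof of transitivity: from P -> Q and Q -> R, build P -> R
    return lambda x: g(f(x))

def proj1(pair: Tuple[P, Q]) -> P:
    # a proof of "(P and Q) implies P"
    return pair[0]
```

    Every "proof" here is a terminating program, and checking "x : P" corresponds to type checking, which a proof assistant performs mechanically as part of the meta-theory.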

    A full explanation of how all logical operators are interpreted would take me probably several pages, and I can't write a book here.. :smile: (but there are a lot of good books on type theory)

    However, back to the point of existential propositions, if I built a proof of "exists x:A, P(x)", it means that I built a proof of A called x, and a proof of the proposition P(x) (P(x) is what is called "dependent type").
    If A is the type of real numbers, a Cauchy sequence is a "proof" of A. Meaning: a real number is treated formally as if it were a proof that a real number exists, and corresponds to the algorithm that has been executed (applying the rules of logic) to build that real number.

    P.S. Reading what I just wrote here, I doubt that it's in some way understandable by anybody who doesn't already know about type theory. But I am not really able to explain it better in the space of a post...
  • Musings On Infinity
    I will say one thing though. I still just don't get constructivism as a philosophy. I DO understand that certain formulations of constructive math lend themselves beautifully to computerized proof systems. Nobody would deny the digital deluge that's flooding the world right now, why should math be spared?

    But I don't get the metaphysics. "A thing only exists when we have an algorithm to instantiate it." But the world is full of things that have no algorithm. The world itself is not an algorithm.
    fishfry

    I completely agree. I don't believe that the world is an algorithm either. And mathematical objects (and of course physical objects too) are not algorithms. But proofs of existence are algorithms, in any formal logic.
    In my opinion, the reason for treating the axiom of excluded middle as less "fundamental" than the others is that it is related to the more philosophical and abstract notion of truth: a proposition is either true or false.

    In classical logic, when you derive a proposition P you interpret it as a proof that "P is true". When you derive a proposition "not P" you interpret it as a proof that "P is not true (or false)".

    Now, it turns out that if you replace "P is true" with "P is provable" and "P is false" with "P is absurd (meaning P implies a logical contradiction)", all the rules and axioms of classical logic will still be intuitively justified, except one: the axiom of excluded middle. A proposition can be not provable and even not absurd.

    So, the axioms of logic without excluded middle still make sense, but with a different interpretation of the meaning of propositions. This is in some way similar to what often happens in mathematics: a simplified model appears to be in some way more "fundamental" if it is consistent and still compatible with the full model. For example in projective geometry there is no concept of length, but the concepts of points and intersecting lines still make sense.
    Well, in the same way you can see intuitionistic logic as the part of classical logic that doesn't depend on the notion of truth. And even negation does not have the same meaning: "not P" does not mean "P is false", but "P implies a contradiction". In fact, the "not" operator is not even needed in intuitionistic logic: just replace every occurrence of "not P" with "P ==> contradiction".

    Following the analogy with projective geometry, it is true that not all theorems can be proved without using excluded middle, but it still makes sense to distinguish the ones that can be proved without it from the ones that can't, because the first ones are valid for a larger number of models (or, put in another way, are true for more "fundamental" reasons).

    The same thing happens in all of mathematics: why do you want to isolate group theory from Galois' theory about the solution of polynomial equations? Because group theory is more general (since its theorems depend only on a smaller set of assumptions), and then can be seen as more fundamental.

    [ I have to go now: I'll explain in the next post what all this has to do with algorithms ]
  • Musings On Infinity
    I've re-read the SEP article on constructive math with a much deeper level of comprehension than I've had in the past. Your efforts have not been in vain.fishfry

    I don't remember which one is the SEP article. Could you send me a link?

    Also ... no form of the AC could possibly be construed as a computation. That, I just do not see at all. It's a pure statement of existence. A particular set exists with no means to possibly compute or construct or identify any of its elements.fishfry

    Yes, the AC can't be construed as a computation, and it's not part of constructivist logic. What I am saying is that AC can be added to a constructivist logic framework such as Coq if you want to use standard logic: standard logic can be obtained as an extension of constructivist logic (in a similar way as metric geometry can be seen as an extension of projective geometry): if you want to use classical logic in Coq (and I think that's how Coq is used most of the time) you just have to import the Classical_Prop library: https://coq.inria.fr/library/Coq.Logic.Classical_Prop.html#

    The possible forms of the axiom of choice (without assuming EM) and the relations between them are here: https://coq.inria.fr/library/Coq.Logic.ChoiceFacts.html
  • Musings On Infinity
    I do not believe that an oracle in computer science is in any way analogous to the axiom of choice. If you know of a connection perhaps you could explain it. An oracle allows you to compute something that was formerly noncomputable. The axiom of choice doesn't let you compute anything. In fact the elements of the sets given by the axiom of choice have no properties at all and could never be computed by any stretch of the word.fishfry

    Hi @fishfry.

    Reading again what you wrote, I think that maybe I am able to explain what I meant here.

    Here's the axiom of choice, taken from wikipedia: (https://en.wikipedia.org/wiki/Axiom_of_choice)

    ∀X [ ∅ ∉ X ⟹ ∃f: X → ⋃X, ∀A ∈ X (f(A) ∈ A) ]

    In a constructivist logic interpretation, it says that if you have built a set of nonempty sets X, using this axiom you can build a function f.
    In other words, this axiom is like a black box function (or an oracle), where you put as input the set of nonempty sets X and you get as an output the choice function f.
    Then, you can use the choice function f as you do in ZFC. The resulting proof is the same, only that in ZFC you don't treat the axiom as if it were a function: you simply apply it.
    In this sense the axiom is an oracle: it allows you to obtain f without knowing how the function that produces f from X is implemented, and f does not have to be computable or incomputable: you simply don't care about its computability.
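
    The "oracle" reading above can be sketched in Python; `uses_choice` and `family` are hypothetical names for illustration, not anything from the thread:

    ```python
    # A sketch of the axiom of choice used as an "oracle": the construction
    # below receives a choice function as a black box and never asks how
    # it is implemented.

    def uses_choice(choice, family):
        # `family` is a collection of nonempty sets; `choice` plays the
        # role of the axiom, returning some element of each set
        return {frozenset(s): choice(s) for s in family}

    # For the demo we instantiate the oracle with a computable stand-in
    # (min), but nothing in `uses_choice` depends on that implementation.
    family = [{3, 1, 2}, {5}, {9, 7}]
    picked = uses_choice(min, family)
    assert all(picked[frozenset(s)] in s for s in family)
    ```

    The point of the sketch is that the body of `uses_choice` is the same whatever `choice` is: the proof "uses" f purely symbolically.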

    You write "In fact the elements of the sets given by the axiom of choice have no properties at all and could never be computed by any stretch of the word". But in constructivist logic it's the same thing: if X is the class of all subsets of real numbers determined by the equivalence relation "two numbers are equivalent if their quotient is rational", you can define A as one of these classes, and then write f(A) to get a real number. "f" is treated in a purely symbolic way, never computed.

    I think that the main source of confusion is that usually a logic is called "constructivist" when it doesn't have any axioms: in that case there will be no "oracle" functions, and "forall x ..., exists y ..." will correspond to really computable functions from x to y. But even in this case, a real number will typically be defined as a Cauchy sequence of rational numbers (a limit, in the usual way), not as a function that computes the digits of the decimal representation of the number, and the fact that Cauchy sequences converge will be part of the axiomatic definition of real numbers: it will not be a "logical axiom", but it will still be treated in the same way as an "oracle", because it's part of the definition of what real numbers are.
    It's the same thing as when you define natural numbers: you don't have to prove the principle of induction: it's simply part of the (axiomatic) "definition" of what natural numbers are.

    If you add the missing logical axioms to constructivist logic (excluded middle and choice) treating them as "oracles" (excluded middle can be seen as a function that, for any proposition P, returns a proof of "P or not P"), you get back classical logic, only with a slightly different "interpretation" of the rules.

    I don't know if this explanation makes sense: maybe the best way to explain would be to show some examples of proofs..
  • Musings On Infinity
    On the other hand, for universally quantified formulas over infinite domains that do not possess a proof by mathematical induction, there is no reason to support their informal interpretation as enumerating the truth or construction of the predicates they quantify over, by the very fact that they do not correspond to a rule of substitution.sime

    Yes, exactly. Not every domain of quantification (set in set theory or type in type theory) is supposed to be enumerable, or iterable.
  • Musings On Infinity
    Do we really want to insist that our use of these axioms is equivalent to a literal step-by-step substitution of every permissible number into P?

    Do we even want to say that we are assuming the existence of this inexhaustible substitution?
    sime

    I would say that "forall x in S", where S is an arbitrary set, is not interpreted as a step-by-step substitution, because in general it is not possible to define a total order on every set: so not only is it impossible to visit all the elements during the iteration, but it's not even possible to define what step n+1 should be for an arbitrary step n.
    In general the interpretation is "choose whatever element x in S", or "choose a random element x of S".
    The meaning is given only by the rules of logic, and the rules say that if you have "forall x exists y", then you have a function from x to y, and a function is assumed to be a primitive concept. So I guess there is no real definition of what "forall" means.

    All that i and ii define in a literal sense is a rule that permits the substitution of any number for x.

    Nowhere does this definition imply that induction is a test for the truth of every substitution.

    Therefore it is coherent to accept mathematical induction as a principle of construction, and yet reject it's interpretation as a soothsayer of theoremhood.
    sime

    Well, the meaning of the axiom is exactly this: there are no natural numbers other than the ones that you can reach by iterating the successor function starting from zero. So, there are no "unreachable" natural numbers.
    Or, put another way, if a property holds for zero and is preserved in passing from each n to n+1, then whatever natural number you choose will have that property.

    P.S. If you don't assume the induction principle to be true, there could be for example several infinitely long "chains of numbers" starting from different points, not only from zero. Or you could assume that there is an integer so far away from zero (infinitely far) that it will never be reached by counting, even if each number is "attached" to a following one, as in nonstandard arithmetic
    (https://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic).
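
    The induction principle can also be read computationally, as a recursor; this is a minimal sketch (the "evidence" encoding is my own illustrative assumption):

    ```python
    # The induction principle read as a recursor: from evidence for P(0)
    # and a step turning evidence for P(n) into evidence for P(n+1),
    # every numeral is reached by iterating from zero -- there are no
    # "unreachable" numbers in this picture.

    def induction(base, step, n):
        evidence = base
        for k in range(n):
            evidence = step(k, evidence)
        return evidence

    # Example: carry the running sum as "evidence" that
    # 0 + 1 + ... + n == n * (n + 1) // 2
    base = 0                          # P(0)
    step = lambda k, s: s + (k + 1)   # P(n) -> P(n+1)
    assert induction(base, step, 10) == 10 * 11 // 2
    ```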
  • Musings On Infinity
    First of all, does our imprecise use of real numbers warrant a precise account?

    What does it even mean to say that a real number denotes a precise quantity?
    sime

    I am not sure what you mean by "imprecise use of real numbers", however the rules of calculus give precise results:

    * The volume of a sphere inscribed in a cylinder is exactly 2/3 of the volume of the cylinder (https://en.wikipedia.org/wiki/On_the_Sphere_and_Cylinder)

    * e^(iπ) is exactly equal to -1
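
    A quick numerical check of these two exact results (the constants are exact in calculus; floating point only approximates them):

    ```python
    # Sphere of radius r inscribed in a cylinder of radius r and height 2r:
    # the ratio of volumes is exactly 2/3, and e^(i*pi) is exactly -1.
    import math, cmath

    r = 1.0
    sphere = (4 / 3) * math.pi * r ** 3        # volume of the inscribed sphere
    cylinder = math.pi * r ** 2 * (2 * r)      # cylinder: radius r, height 2r
    assert abs(sphere / cylinder - 2 / 3) < 1e-12

    assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12   # e^(i*pi) = -1
    ```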

    For in what sense can a formal system speak of universal quantification over all of the natural numbers?sime

    The standard way is assuming the principle of induction: if a proposition is true for n=0, and from the fact that it is true for n you can prove that it is true for n+1 as well, then you assume that it is true for all natural numbers. (https://www.encyclopediaofmath.org/index.php/Induction_axiom)
    It cannot be verified in a finite number of steps, so it's assumed as an axiom.
    But assuming it to be false is not contradictory: you can assume the existence of numbers that can never be reached by adding +1 an indefinite number of times. (https://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic)

    So in short, an imprecise and pragmatic model of the real numbers is what is important, corresponding to how numerical programmers implement them in practice, with particular attention paid to numerical underflow.sime

    I don't understand what you mean by "a pragmatic model". Do you mean that there should be some sort of "approximate" logic, like some sort of fuzzy logic? (https://en.wikipedia.org/wiki/Fuzzy_logic). Isn't the standard "epsilon-delta" definition of limits "pragmatic"?
  • Musings On Infinity
    Yes, I find it interesting too, and probably my problem is that I learned about logic only from a practical point of view: proving the correctness of digital circuits and algorithms. Then I discovered that even category theory and topology are much more "practical" than they seem to be, and the best thing about computer-based proof assistants such as Coq is that proofs are real computations: you don't have to ask an expert whether something you think is right or wrong: just write a program to "build" the proof, and check if it works. What seem to be abstract mathematical concepts become very concrete and start to make sense!

    But probably this is the opposite of how mathematics and logic are usually taught, and I have to confess that when I studied analysis at university (I studied engineering), I learned the rules to do the calculations and remembered how to prove the theorems, but I never studied anything about the "foundations" of mathematics, such as Zermelo-Fraenkel set theory.

    So, I don't really feel qualified as a "teacher" to explain to @fishfry or to you about philosophical questions related to infinite sets, but I have very clear ideas of how formal logic systems work, and even my own theory of how mathematics "works".
  • Musings On Infinity
    Well, I wrote about physical events because probability is a concept that belongs both to physics and mathematics, but from the point of view of mathematics only (Kolmogorov axioms), if the axioms do not require zero to be a valid probability measure, you can avoid considering events with zero probability.
    But of course in this case the axioms are chosen with the intent to be the minimal constraints that have to be satisfied by any definition of probability that makes sense in physics.
    This is very common: the mathematical definition is derived from the physical intuition, but is used as an axiomatic theory completely disconnected from physics. That's why it often happens that after several years somebody finds a "model" of the axiomatic theory that does not correspond to the physical intuition at all but verifies all the axioms, so from the point of view of mathematics it is a valid model.
    The point is that axiomatizations are needed if you want to use formal logic (and you want to use formal logic because physical intuition is not enough to avoid paradoxes), but the axiomatization very often does not capture exactly the physical concept that you have in mind. And that happens even for the most elementary physical concepts, such as the concept of natural numbers.
  • Musings On Infinity
    In other words Coq is all well and good, but if it's just to verify proofs, must we abandon standard math to use it? Perhaps I'm missing the subtleties here.fishfry

    It's not about verifying proofs. In every formal logic proofs can be verified mechanically, otherwise it wouldn't be called "formal" logic. It's about the complexity of rigorously written proofs. In ZFC complete formal proofs become so long that nobody uses them in practice. In Coq the situation is much better. In HOTT it's even better.
    You don't have to abandon standard math to use Coq or HOTT. You can use HOTT to do perfectly standard math with a much simpler formalism for proofs: that's why it's proposed as a new foundation of ALL mathematics.

    I find it fascinating that constructivists claim to be able to prove the Cauchy completeness of a set, namely the real numbers, whose elements manifestly can not all be computed. But at least I've learned that people are making the effort.fishfry

    I think I'll give up with this discussion because I see that is leading nowhere!

    They have intuitions about what these things are; and just because there is a funny model with some property, they do NOT stop there. If the model doesn't match the intuition, the model becomes a handy formal device but the search for mathematical truth continues unabated.fishfry
    The idea of a line being made of points and having no gaps is VERY unintuitive, and it's NOT used in standard mathematics: integrals and derivatives are defined as limits of sequences: no sets of points at all. But I am sure I can't convince you of this and I don't see the point of going in circles repeating the same things...
  • Musings On Infinity
    That's an awful lot of handwaving IMO. But there IS an intuitively correct definition of the real numbers: A Cauchy-complete totally ordered field. That's a second-order, categorical axiomitization. It was familiar to Newton and to every high school student.

    I can believe that constructivists prefer a different model or models. I can NOT believe that anyone trained in math claims to not have an intuition of the standard reals. That is, I can imagine a constructivist saying, "The standard reals can't be right because you can't explicitly construct most of them." Of course that's true. But I can't believe anyone saying they have no mental picture of the standard reals as locations on the real line.
    fishfry

    You can use exactly the same definition of a Cauchy-complete totally ordered field in constructivist logic. Even rational numbers are locations on the real line. The problem is with continuity: points are not "attached" to each other, right?
  • Musings On Infinity
    That is: If you now claim that the constructive reals are the computable reals plus the noncomputable reals, you've completely conceded my point; and, I imagine, horrified all the constructivists who don't believe that at all.fishfry

    If you read my posts I have always said the same thing: constructivist logic DOES NOT MEAN assuming that only computable functions exist!
    If somebody that is reading this knows about constructivist logic and thinks that this is not true, PLEASE reply to this post and say that it's not true!

    P.S. Now I know what computable reals are, but I still don't have a definition of non-computable reals: I imagine that you mean functions from naturals to naturals that are not computable.

    P.S. Maybe I'll try to explain the difference between assumptions and definitions (I know that it's an elementary concept, but maybe it's not so clear).
    In constructive logic I can write "Let F be a function from natural numbers to natural numbers". This is an assumption.
    I know nothing about F: it can even be non computable.
    Then I can write "Definition: G is the function that takes a number x and returns the number F(x) + x²".
    Now, if F is not computable, G will not be computable either, but I don't need to know how F is defined in order to define G. I can compute G(3) and obtain F(3) + 9.
    F(3) will never be expanded or computed, because F is treated as a "black box" that I assume to exist but can't be used to calculate actual natural numbers.

    So, the point is that I can assume the existence of non computable functions, but I must use computable functions in my definitions (in this case the square and the addition functions).
    In standard logic this is not true, because I can use the axiom of choice even in my definitions.
    In constructivist logic instead you should add the axiom of choice as an assumption (or axiom), and then you can use it as an "oracle function" (a black box, such as F(x)) inside your definitions. In the end, using constructivist logic you can do exactly the same proofs that you do in standard logic, but you have to add the appropriate axioms explicitly as assumptions, because they are not built into the logic itself.
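
    The assumption/definition distinction can be sketched in Python by keeping terms that mention the assumed F symbolic, while the computable parts reduce (the string encoding is just an illustrative device):

    ```python
    # F is assumed but never evaluated, so terms containing it are kept
    # symbolic; the computable parts (squaring, addition) reduce normally.

    def G(x):
        # Definition: G(x) = F(x) + x^2
        return f"F({x}) + {x * x}"

    print(G(3))  # -> F(3) + 9
    ```

    The square of 3 reduces to 9 because squaring is computable; F(3) stays an unexpanded symbol, exactly as in the "black box" description above.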
  • Musings On Infinity
    If a constructive real requires a computable Cauchy sequencefishfry

    NO! A constructive real DOES NOT REQUIRE a computable Cauchy sequence!
    ALL Cauchy sequences of rational numbers (computable AND INCOMPUTABLE) are PERFECTLY VALID real numbers in constructive logic.
  • Musings On Infinity
    But our use of real numbers (at least for the most part) is in integrals and derivatives, right?
    So the "dx" infinitesimals in integrals and derivatives should be the morally correct model? This is how real numbers have been intuitively used since the 17th century.
  • Musings On Infinity
    The standard reals are the model of time in the Schrödinger equation. Everyone thinks of them the same way. The high school analytic geometry reals. Those are the morally correct reals.fishfry

    OK, so I have a question: what is this morally correct model of real numbers? The set of all infinitely long decimal representations?
  • Musings On Infinity
    Well, I agree with you that mathematical ideas are much more variable and arbitrary than they seem to be when you learn them at school.
    However, as I wrote here (https://thephilosophyforum.com/discussion/5792/is-mathematics-discovered-or-invented/p1), I am convinced that what are now considered the most relevant parts of math are in reality fundamental facts of nature that are not invented by us but only discovered, and that would be discovered in an equivalent form even by an alien civilization.
  • Musings On Infinity
    Hi,

    I believe the main source of confusion here is the concept of a model. If you take ZFC and remove some axioms (the axiom of choice and the logical axiom of excluded middle) the set of theorems that you can prove will be smaller, but the set of models of the theory will be bigger.
    All models that were valid in ZFC will be still valid in the "reduced" ZFC, because all valid proofs in the reduced ZFC are still valid proofs of ZFC: you can't prove propositions that are not true in the model if you couldn't do it with the full ZFC.
  • Musings On Infinity
    I had an idea to solve the question about Cauchy completeness, that I should have had a long time ago: just look at the book that proposes constructive mathematics as the "new foundations"!

    Here's the book: https://homotopytypetheory.org/book/

    Chapter 11.3 Cauchy reals:

    """
    There are three common ways out of the conundrum in constructive mathematics:
    (i) Pretend that the reals are a setoid (C, ≈), i.e., the type of Cauchy sequences C with a coincidence relation attached to it by administrative decree. A sequence of reals then simply is a sequence of Cauchy sequences representing them.
    (ii) Give in to temptation and accept the axiom of countable choice. After all, the axiom is valid in most models of constructive mathematics based on a computational viewpoint, such as realizability models.
    (iii) Declare the Cauchy reals unworthy and construct the Dedekind reals instead. Such a verdict is perfectly valid in certain contexts, such as in sheaf-theoretic models of constructive mathematics. However, as we saw in §11.2, the constructive Dedekind reals have their own problems.
    Using higher inductive types, however, there is a fourth solution, which we believe to be preferable to any of the above, and interesting even to a classical mathematician. The idea is that the Cauchy real numbers should be the free complete metric space generated by Q.
    """

    Well, I see that the problem is much more complex than I thought...

    However, solution (i) is the one used in Coq standard library (the one that I knew). And, as I said, in that case Cauchy completeness is an axiom, so.. no problem! :smile:
    However, I don't know how to prove that for each ZFC real there exists only one constructive real that verifies all possible propositions expressible in ZFC, and maybe you are right that this cannot be done.

    The book proposes a fourth solution, based on the hierarchy of infinite sets typical of Homotopy Type Theory, and in 11.3.4 it proves Cauchy completeness. But I don't know HOTT well enough to prove anything in it..

    So, well, in the end I don't have a definitive answer :sad: (but maybe somebody on https://mathoverflow.net/ could have it)


    P.S. I found an answer about countability of constructivist real numbers here: https://stackoverflow.com/questions/51933107/why-are-the-real-numbers-axiomatized-in-coq

    """
    As for your second question, it is true that there can be only countably many Coq terms of type R -> Prop. However, the same is true of ZFC: there are only countably many formulas for defining subsets of the real numbers. This is connected to the Löwenheim-Skolem paradox, which implies that if ZFC is consistent it has a countable model -- which, in particular, would have only countably many real numbers. Both in ZFC and in Coq, however, it is impossible to define a function that enumerates all real numbers: they are countable from our own external perspective on the theory, but uncountable from the theory's point of view.
    """
  • Musings On Infinity
    Is this the Curry-Howard correspondence?fishfry

    Yes... well, half of it: the "proofs-as-programs" interpretation is valid even in the standard first order natural deduction logic, if you don't use excluded middle (the intuitionistic part). Here is a summary of all the rules of first order intuitionistic logic with the associated expressions in lambda calculus: https://en.wikipedia.org/wiki/Natural_deduction#/media/File:First_order_natural_deduction.png

    The only rule that doesn't fit with this interpretation is excluded middle. You can take a look at the paragraph "Classical and modal logics" in https://en.wikipedia.org/wiki/Natural_deduction for an explanation of why this happens.


    All of the familiar mathematical constants such as pi, e, sqrt(2), etc. are computable. However there are only countably many computable numbers (since there are only countably many TMs in the first place). But Cantor's theorem shows that there are uncountably many reals. Hence there are uncountably many noncomputable reals.

    Point being that there are only countably many constructive Cauchy sequences (if we require that the modulus must be a computable function) hence the constructive real line is full of holes. It cannot be Cauchy complete.
    fishfry

    OK, I see your point now!

    But consider this (taken from https://en.wikipedia.org/wiki/Real_number under "Advanced properties"):
    "It is not possible to characterize the reals with first-order logic alone: the Löwenheim–Skolem theorem implies that there exists a countable dense subset of the real numbers satisfying exactly the same sentences in first-order logic as the real numbers themselves."

    Even ZFC can't distinguish between countable and uncountable reals!

    You often used this argument:
    The constructive (ie computable) reals are too small, there are only countably many of them. The hyperreals are too big, they're not Archimedean. Only the standard reals are Cauchy complete.fishfry

    And my answer is: you can assume without contradiction that constructive reals are uncountable, exactly as you do in ZFC! The fact that you can build only a countable set of real numbers only means that you can only consider a countable set of real numbers in the propositions of logic (the same as in ZFC), not that the logic does not allow the existence of an uncountable set of real numbers.

    Now, let's consider "computable reals". Computable reals are functions from natural numbers to natural numbers (you know this better than me: take as input the position of the digit and produce as output the digit). Well, not all of these computable functions are valid terms in Martin-Lof type theory. The set of functions from Nat to Nat that are allowed as valid expressions depends on the details of the rules, and in Coq it sometimes even changes from one version of the software to the next. The reason is that the rules of the language must allow building only total recursive functions (functions that always terminate), otherwise the logic becomes inconsistent.
    So, the functions that you can actually build using the rules of the logic correspond only to the Turing machines that are provably guaranteed to terminate (by using the meta-logic of the language, not the language itself), and that of course is a strict subset of all the Turing machines. But this IS NOT the entire set of functions from Nat to Nat that is assumed to exist, in the same way as the set of real numbers of which you can speak in ZFC is not the set of real numbers that is assumed to exist.

    About the Cauchy completeness problem, I don't know how to address it, because in constructivist logic there are different axiomatizations of real numbers that are not equivalent to each other, and not even equivalent to ZFC reals. You can consider it a defect of the logic, but you can also consider it an advantage, because the thing that all these axiomatizations have in common (well, I'm not sure if all, but at least the two or three that I have seen..) is the set of theorems that is sufficient for doing analysis. So, the degree of freedom that is left out and makes the difference between the various constructivist axiomatizations corresponds to the aspects of ZFC reals that are not so "intuitive", such as for example the difference between point-based and pointless topology. This, in my opinion, means that there is not only one intuitively correct definition of what real numbers are.
  • Musings On Infinity
    I can't speak to non-pointed set-theoretic probability theory. I know about ETCS (elementary theory of the category of sets) so I understand that sets don't require points. But as to probability, I can't say. However if you take finite line segments as sets, you seem to lose intersections. Are these closed or open segments? You have a reference for this interpretation of probability theory?fishfry

    Let's look at Kolmogorov axioms here: (http://mathworld.wolfram.com/KolmogorovsAxioms.html)

    Everything that is needed is a set S, some "events" that can be "anything", a function P from the events to real numbers, and a "complement" function on the events.

    Let's consider as our probability space the segment [0, 1].

    I can take as events the closed subsets of [0, 1] made of a countable number of non-overlapping segments with non-zero length, and as the event space the set of all these sets. The complement of an event will be the closure of the remaining part of [0, 1] when I remove the event. There are no events of zero measure (and this is very reasonable for a probability theory: every event that can happen must have a non-zero probability of happening).

    The complement of an event overlaps with the event only on the end points, and that is compatible with the axioms: the sum of the measures adds up to 1.

    The elements of the events are simply ordered pairs of real numbers instead of single real numbers, but everything works in the same way: from the point of view of set theory two segments are equal if and only if their extremes are equal: no mention of overlapping segments at all.

    The definition of overlapping segments is the usual one: the higher number of the first pair is bigger than the lower number of the second pair.

    There is no need to consider infinite sets of points, and for probability theory there is no need to speak about points at all: probability theory does not need zero-measure events, and no physically possible event has zero probability.
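
    The segment model above can be sketched directly; the representation (sorted, disjoint pairs) is my own simplifying assumption:

    ```python
    # An "event" is a finite list of disjoint closed segments (lo, hi),
    # sorted and with lo < hi, inside [0, 1]; its measure is the total
    # length, and its complement is the closure of what remains --
    # again a list of segments, with no mention of points.

    def measure(event):
        return sum(hi - lo for lo, hi in event)

    def complement(event):
        pts = [0.0] + [x for seg in event for x in seg] + [1.0]
        gaps = [(pts[i], pts[i + 1]) for i in range(0, len(pts), 2)]
        return [(lo, hi) for lo, hi in gaps if hi > lo]  # drop empty gaps

    e = [(0.1, 0.3), (0.5, 0.6)]
    c = complement(e)
    # event and complement overlap only at end points; measures sum to 1:
    assert abs(measure(e) + measure(c) - 1.0) < 1e-9
    ```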

    P.S. This is only a very simple example to show that it's not contradictory to define probability without points. Pointless topology is much more general than this and makes use of the concept of "locales" (https://en.wikipedia.org/wiki/Pointless_topology)
  • Musings On Infinity
    Wow!!! I knew there was something that I missed! :starstruck:

    Sorry, but I was thinking that you used the term "computable reals" in an intuitive way, without a concrete definition!
    Yes, now I think a lot of what you said starts to make more sense! :smile:

    I did study Turing machines a long time ago, but I didn't remember about this definition at all!

    OK, well, I am at work now.. :razz:

    I'll read again what you write later, but now I think it's going to be much easier to agree!

    In the end, math is not an opinion, right? :smile: :smile: :smile:
  • Musings On Infinity
    I'll try to answer only what I think are the most relevant points, OK?

    Wait I think you said that wrong. It's not a sequence of pairs (rational number, modulus function). Rather it's a pair (entire sequence, single modulus function). It's the sequence, plus the function that inputs epsilon and outputs a suitable N.fishfry

    Yes, you are right.
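
    The pair (entire sequence, single modulus function) can be sketched concretely; the choice of sqrt(2) via Newton iteration and the crude 1/4^n error bound are my own illustrative assumptions:

    ```python
    # A constructive real as a pair (sequence, modulus): the whole
    # sequence, plus a function that inputs epsilon and outputs a
    # suitable N, as described above.
    from fractions import Fraction

    def seq(n):
        # n-th Newton approximation of sqrt(2), in exact rational arithmetic
        x = Fraction(2)
        for _ in range(n):
            x = (x + 2 / x) / 2
        return x

    def modulus(eps):
        # returns N with 1/4**N < eps; Newton converges quadratically,
        # so this is a (generously safe) Cauchy modulus for `seq`
        n = 1
        while Fraction(1, 4 ** n) >= eps:
            n += 1
        return n

    eps = Fraction(1, 10 ** 6)
    N = modulus(eps)
    assert abs(seq(N + 1) - seq(N)) < eps
    ```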

    Ah ... you are saying that the standard reals are a model of C. That is a very enticing remark. I'll spend some time on this. That's a clue for me to hang some understanding on.fishfry

    Yes, exactly!

    But I have to tell you that in my opinion the road that you've chosen to learn about constructive mathematics is the most difficult one you could have chosen. I didn't know what IZF_Ref is, I had to look up the definition to understand the paper that you pointed out, and in my opinion IZF_Ref is not relevant at all, and the axiomatization of real numbers that it speaks about is even less relevant: you can choose another slightly different axiomatization and you'll get different results. I'll tell you more: I don't think that Cauchy completeness is in some "objective" way weaker or stronger than Dedekind completeness (I could be wrong on this, I am not sure at all!), as you seem to deduce from this paper: I think that if you defined real numbers in ZFC using Cauchy completeness, then Dedekind completeness would become unprovable in IZF_Ref.

    In my opinion ZFC, or ZF, makes sense only in first order logic (of course!), and intuitionistic logic makes the most sense in the context of type theory. But of course you can choose whatever random combination of axioms and rules, and find their implications, but I doubt that in this way you can discover something meaningful about what the "real" real numbers are...

    Isn't that just because the constructive reals can't see all those ZF-Cauchy sequences that it's leaving out? This MUST be the case. Isn't it?fishfry
    I don't know...
    If you consider a weaker logic (taking out axioms) you get more possible models, but more abstract.
    Consider this question: why don't modules have a basis? Is it because they can't see all those multiplicative inverses that vector spaces see? Well, yes! But does it mean that modules are vector spaces when you close your eyes so as not to see the inverse of multiplication?
    I know the objection to this argument: vector spaces and modules are abstract algebraic structures. Real numbers are a concrete unique object, because there is only one model of the real numbers. Well, there is only one model in ZFC, but there are a lot of models of ZFC itself! Consider the Yoneda lemma in category theory built on top of ZFC: in every rich enough category (let's say in every topos) you can build a different real numbers object (https://ncatlab.org/nlab/show/real+numbers+object), and using the Yoneda lemma you can map them to different sub-categories of sets. All the theorems of real numbers will remain valid in all categories that have a real numbers object, but if you consider a specific category there will be properties that are "visible" only in that category, and not in the standard real numbers.

    You talk about computable functions. Every time I drill down, constructivism turns out to be about computability. But the constructivists seem to regard the computable reals as somewhat different than the constructive reals.Mephist

    Basically, the point is that building proofs in intuitionistic logic is equivalent (it's really the same thing) to building computable functions: there is a one-to-one correspondence between intuitionistic proofs and terms of lambda calculus (sorry, lambda calculus again...). This is really a very simple and practical thing, but I don't know why all the explanations that I am able to find on the web are extremely abstract and complicated!
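
    The proofs-as-programs correspondence can be shown in miniature; the "proof object" strings are a hypothetical encoding for illustration:

    ```python
    # Under the Curry-Howard correspondence a proof of A and B is a pair,
    # an implication A -> B is a function, and using an implication
    # (modus ponens) is ordinary function application.

    def and_comm(proof_of_a_and_b):
        # a proof of (A and B) -> (B and A): swap the two component proofs
        a, b = proof_of_a_and_b
        return (b, a)

    def modus_ponens(proof_of_a_implies_b, proof_of_a):
        # from A -> B and A, obtain B: apply the function to the argument
        return proof_of_a_implies_b(proof_of_a)

    assert and_comm(("proof of A", "proof of B")) == ("proof of B", "proof of A")
    assert modus_ponens(lambda a: "B from " + a, "proof of A") == "B from proof of A"
    ```

    Checking that such a program type-checks is the same act as checking that the corresponding proof is valid, which is why proofs in Coq are "real computations".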
  • Musings On Infinity
    Oops... thank you for pointing it out :yikes:
  • Musings On Infinity
    You talk about computable functions. Every time I drill down, constructivism turns out to be about computability. But the constructivists seem to regard the computable reals as somewhat different than the constructive reals.Mephist

    There are too many points, and I have the impression that this discussion doesn't make sense if we don't agree on the definitions of the words. So, let me start from the most ambiguous one: what do you mean by "computable reals"? How do you "compute" a number that is not representable as a fraction? I'll wait for this answer before going ahead with the other points, because I really don't know what the "computable reals" are.
  • Mathematics is the part of physics where experiments are cheap
    Hi Bill,
    I read the paper on the Casimir force. Well, there are several passages that I don't really understand, like for example the derivation of (2.36) from (2.34) (let alone the funny use of the zero exponent). But I don't see any use of the Riemann zeta function for s = -1. He completely omits the limits on the complex variables, but I think that if you put the limits back where they should go, he is not treating divergent series as if they were convergent.

    Anyway, I have seen divergent series used several times (by Euler and, first of all, Ramanujan), and it's really very strange that, using algebraic manipulations on formulas that should be meaningless when interpreted as sums of natural numbers, they obtain the correct result anyway.
    It seems that there really should be some interpretation of these formulas that does make sense and is not sums of natural numbers. But I have no idea what that interpretation could be... do you?
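    One classical candidate for such an interpretation is Abel summation (this numerical check is my own sketch, not something from the paper): replace the divergent series 1 − 2 + 3 − 4 + … by the power series ∑ n(−1)^(n+1) x^n, which converges for |x| < 1 to x/(1+x)², and take the limit x → 1⁻, giving η(−1) = 1/4; then the relation ζ(s) = η(s)/(1 − 2^(1−s)) gives ζ(−1) = −1/12.

```python
# Abel summation of the divergent series 1 - 2 + 3 - 4 + ...:
# for |x| < 1 the series sum n*(-1)**(n+1)*x**n converges (to
# x/(1+x)**2), and its value tends to 1/4 as x -> 1 from below.

def abel_partial(x, terms=100000):
    """Partial sum of sum_{n>=1} n * (-1)**(n+1) * x**n."""
    return sum(n * (-1) ** (n + 1) * x ** n for n in range(1, terms + 1))

for x in (0.9, 0.99, 0.999):
    print(x, abel_partial(x))   # values approach 0.25

# Relating eta(-1) = 1/4 to zeta(-1) via zeta(s) = eta(s)/(1 - 2**(1-s)):
print((1 / 4) / (1 - 2 ** 2))   # → -0.0833... = -1/12
```

    So the "sum" −1/12 is not a sum of natural numbers at all: it is the value of an analytic function that happens to agree with the series where the series converges.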
  • Musings On Infinity
    So to sum up, the next thing past computability is ZF, which already gives you sets that are arguably not computable (depending on who's arguing). Then AC gives you sets that are not only noncomputable, but beyond even the reach of ZF.fishfry

    The sets that you say are not computable, but exist in ZF, are the ones that in type theory are called inductive types, and correspond to initial algebras in category theory. The simplest example is the set of natural numbers. You say the set of natural numbers is not computable. It's not so simple: an inductive set is not a function, and the recursor on an inductive set is a computable function, because the recursion is done counting down to zero.

    And there are even co-inductive sets, which correspond to terminal (final) coalgebras in category theory. Co-recursive functions are not required to terminate, but only to produce a piece of data at every step. However, you never get an infinite loop, because there are restrictions on how co-inductive definitions can be used. So, it's not so easy to define what is "computable" from the point of view of a constructive type theory.
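    A rough analogue in Python (generators as a stand-in for co-inductive definitions; this is my own illustration, not Coq's actual mechanism): a co-recursive definition runs forever, but it is "productive", emitting one piece of data at every step, so any finite observation of it terminates.

```python
from itertools import islice

# A co-recursive stream: loops forever, but yields one element per step.
def nats():
    n = 0
    while True:   # never terminates...
        yield n   # ...but is productive: each step emits data
        n += 1

# Stream transformers are also productive: one output per input consumed.
def doubled(stream):
    for x in stream:
        yield 2 * x

# Any *finite* observation of the infinite object terminates:
print(list(islice(doubled(nats()), 5)))  # → [0, 2, 4, 6, 8]
```

    The "restriction" on coinductive use corresponds here to only ever demanding finitely many elements of the stream: the infinite object is fine as long as every observation is finite.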
  • Musings On Infinity
    We know nothing about the elements of this set. Is 1/2 in it? We don't know, only that it contains exactly one rational (proof by reader).fishfry

    If R/Q is the quotient set, 1/2 is contained in exactly one of its equivalence classes: the one that contains all the rational numbers.
  • Musings On Infinity
    Although in ZF we can enumerate the Turing machines, no such enumeration can be computable! That's because a computable enumeration of TMs would effectively solve the Halting problem, which Turing showed can't be done.fishfry

    Are you sure about this? I think Turing machines are simply finite strings, and finite strings can be enumerated as integers. Enumerating the machines is not the same thing as solving the halting problem: that would require deciding, for each machine in the list, whether it halts.
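    A toy sketch of the point (my own illustration; the encoding alphabet is arbitrary): any scheme that writes machines as finite strings over a finite alphabet is covered by a computable enumeration of all strings in length order. What is undecidable is the *behavior* of the machines, not the listing of their descriptions.

```python
from itertools import count, product

# Enumerate all finite strings over a finite alphabet in length order.
# Any encoding of Turing machines as strings appears somewhere in this
# list, so machine *descriptions* are computably enumerable; deciding
# which of the listed machines halt is the impossible part.
def all_strings(alphabet="01"):
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = all_strings()
first_six = [next(gen) for _ in range(6)]
print(first_six)  # → ['0', '1', '00', '01', '10', '11']
```

    The string at index n is computable from n, so the map "integer → machine description" is itself an algorithm.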
  • Musings On Infinity
    * ZF. We have statements of existence of sets we can't compute. The powerset axiom for example is very powerful. Cantor's theorem is a theorem of ZF so that the powerset of the naturals must be uncountable; yet there are only countably many Turing machines. So most of the subsets of the naturals are noncomputable sets. No axiom of choice is needed for ZF-noncomputability.fishfry

    Power Set in ZF:
    ∀x∃y∀z[z∈y↔∀w(w∈z→w∈x)]

    Power Set in Coq:
    Inductive Power_set (A:Ensemble U) : Ensemble (Ensemble U) :=
    Definition_of_Power_set :
    forall X:Ensemble U, Included U X A -> In (Ensemble U) (Power_set A) X.

    The powerset of A is the set of all functions from A to a 2-element set (a subobject classifier in topos theory).
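    For a finite set this correspondence can be made completely concrete (a sketch of my own in Python): each subset of A is exactly a characteristic function A → {False, True}, so enumerating those functions enumerates the powerset.

```python
from itertools import product

# Each subset of A corresponds to a characteristic function
# chi : A -> {False, True}; the powerset is the set of all such
# functions, hence has 2**|A| elements for finite A.
def powerset(A):
    A = list(A)
    subsets = []
    # Enumerate all functions A -> {False, True} as tuples of booleans.
    for chi in product([False, True], repeat=len(A)):
        subsets.append({a for a, keep in zip(A, chi) if keep})
    return subsets

print(powerset([1, 2]))  # the 2**2 = 4 subsets of {1, 2}
```

    The whole difficulty with the powerset axiom is exactly that this enumeration does not survive the passage to infinite A: the functions A → 2 outrun any computable listing.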

    Coq uses inductive types (a generalization of recursive types, such as the one used to define the integers in Peano arithmetic) to define infinite data types. In Martin-Löf type theory, which does not include inductive types, it depends on whether you consider the predicative or the impredicative version of the theory (https://ncatlab.org/nlab/show/predicative+mathematics).

    In the predicative version of Martin-Löf type theory, this is true: to define the set of all subsets you have to use an axiom to define a "recursor function", which is assured to be consistent by the meta-theory of the logic, but is still an axiom. So, in that case you are right: there are really 3 levels: finite, inductively definable, and non-inductively definable.

    This is the same as for the definition of the integers: you have to use the induction principle, which is not provable in the predicative version of Martin-Löf type theory.
  • Musings On Infinity
    But who said that sets must be made of points? Set theory does not say what a set is; it only gives a list of axioms that are valid WHATEVER sets you consider.

    If you take as your sets the sets of finite segments, all sets have a measure and Kolmogorov's axioms work perfectly well: zero-length segments will have zero measure and non-zero segments will have a non-zero measure. Where is the contradiction?
  • Musings On Infinity
    This would destroy the foundation of modern probability theory and everything that sits above it (statistical mechanics, quantum field theory, sociology, the state lottery, etc.)fishfry

    Why???
    What part of probability theory is inconsistent with the negation of the axiom of choice?

    Here's an extract from https://en.wikipedia.org/wiki/Probability_theory :

    Modern definition: The modern definition starts with a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by Ω. It is then assumed that for each element x ∈ Ω, an intrinsic "probability" value f(x) is attached, which satisfies the following properties:

    f(x) ∈ [0, 1] for all x ∈ Ω;
    ∑_{x ∈ Ω} f(x) = 1.
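    For a countable sample space this definition is completely elementary and involves no choice at all; a minimal sketch (my own, with a fair coin as the sample space):

```python
# Discrete probability on a countable sample space: an explicit,
# fully constructive function f assigning a probability to each
# outcome, with total probability 1.
from fractions import Fraction

omega = ["H", "T"]                       # sample space of a fair coin
f = {x: Fraction(1, 2) for x in omega}   # f(x) in [0, 1] for all x

assert sum(f.values()) == 1              # total probability is 1

def prob(event):
    """P(E) = sum of f(x) over the outcomes x in the event E."""
    return sum(f[x] for x in event)

print(prob({"H"}))  # → 1/2
```

    Every object here is given by an explicit algorithm, which is why the countable part of probability theory does not need the axiom of choice.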
  • Musings On Infinity
    But this necessarily leaves out all those Cauchy sequences whose convergence rates are not so constrained; and the limits of those sequences can not be Italian-Cauchy reals. But they are standard reals (being the limits of Cauchy sequences). So what the constructivists call Cauchy complete can not be actual Cauchy completeness. I am pretty sure about this. If the constructivists don't care, all fine and well. A lot of smart people are constructivists these days so I'm sure the fault is my own. But the Italian definition of a Cauchy sequence omits a lot of more slowly converging sequences unless I am mistaken (I haven't fully gotten to the bottom of this yet).fishfry

    Yes, you are right. This leaves out all Cauchy sequences whose convergence rates are not computable, because in ZFC you can define non-computable functions by using the axiom of choice, which is not available in constructive mathematics. These (the ones left out) are the Cauchy sequences that are not expressible as an explicit formula (or algorithm), and therefore are not expressible at all in constructivist mathematics. Put another way: non-constructive mathematics has more functions than constructive mathematics. The additional functions are the ones that you can define only by using the axiom of choice.
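    Concretely, a constructive Cauchy real packages the sequence together with a computable modulus of convergence: given n, you can compute a rational within 1/10^n of the limit. A sketch of my own, with √2 via Newton's iteration on exact fractions:

```python
from fractions import Fraction

# A constructive real as a Cauchy sequence with an explicit,
# computable rate of convergence: approx(n) returns a rational q
# with |q*q - 2| < 1/10**n (hence q is within 1/10**n of sqrt(2)),
# using Newton's iteration on exact fractions.

def approx(n):
    q = Fraction(3, 2)
    while abs(q * q - 2) >= Fraction(1, 10 ** n):
        q = (q + 2 / q) / 2   # Newton step, quadratic convergence
    return q

print(float(approx(5)))   # a rational approximation within 10**-5
assert abs(approx(10) ** 2 - 2) < Fraction(1, 10 ** 10)
```

    A classical Cauchy sequence built with choice need not come with any such computable bound, and that is exactly the gap between the two definitions.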
    But I would like to stress an essential point that maybe you missed: the axiom of choice is not incompatible with constructivist mathematics: it's just an independent axiom (neither refutable nor provable). So, if you add it to constructivist logic, you obtain... non-constructivist logic! This is not a limitation; it's only a choice to treat this axiom not as a "logical axiom" (one you are forced to use because it's built into the logic), but as an axiom of your theory (the theory of real numbers, for example). In this case the axiom can be seen as an "oracle function": a function that is not computable, that you assume to exist. In constructivist logic (at least the one used in Martin-Löf type theory, the one that I know) there are no logical axioms at all. Only logical rules! That, in my opinion, is an advantage, not a limitation: keep axioms out of the logic and all logical symbols become mere definitions. The only primitive concept that remains as an integral part of the logic is... computable functions!
    Let me repeat it once again: you can get classical logic out of constructivist logic in a very simple way: just add an axiom to the theory! But if you want to use classical logic to reason about a universe where only computable functions exist, the only way that I know of is to use the internal language of topos theory (part of category theory). Because you can easily add an axiom to a theory, but you are not allowed to take an axiom out of the logic. It's just a much nicer "factorization" between what's part of the logic and what's part of the mathematical theory that you are building.