• RegularGuy
    2.6k
    Can’t a real number go on and on for infinity? Or is that an irrational number...?
  • RegularGuy
    2.6k
    "
    I can't speak to non-pointed set-theoretic probability theory. I know about ETCS (elementary theory of the category of sets) so I understand that sets don't require points. But as to probability, I can't say. However if you take finite line segments as sets, you seem to lose intersections. Are these closed or open segments? You have a reference for this interpretation of probability theory?
    — fishfry

    Let's look at Kolmogorov axioms here: (http://mathworld.wolfram.com/KolmogorovsAxioms.html)

    Everything that is needed is a set $W$, some $Q_i$ that can be "anything", a function from the $Q_i$ to the real numbers, and a "complement" function on the $Q_i$.
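
    For reference, paraphrasing the MathWorld page: writing $P$ for the probability function, the axioms require only that $P(Q_i) \ge 0$, that the whole space has probability 1, and that $P(Q_1 \cup Q_2 \cup \ldots) = \sum_i P(Q_i)$ whenever the $Q_i$ are mutually exclusive.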

    Let's consider as our probability space the segment [0, 1].

    I can take for the $Q_i$ the closed sets included in [0, 1] made of a countable number of non-overlapping segments of non-zero length, and for $W$ the set of all these sets. The complement of a $Q_i$ will be the closure of the remaining part of [0, 1] when I remove the $Q_i$. There are no $Q_i$ of zero measure (and this is very reasonable for a probability theory: every event that can happen must have a non-zero probability of happening).

    The complement of a $Q_i$ overlaps with $Q_i$ only on the end points, and that is compatible with the axioms: the sum of the measures adds up to 1.

    The elements of $W$ are simply ordered pairs of real numbers instead of single real numbers, but everything works in the same way: from the point of view of set theory, two segments are equal if and only if their extremes are equal: no mention of overlapping segments at all.

    The definition of overlapping segments is the usual one: the higher number of the first pair is bigger than the lower number of the second pair.
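
    In symbols (assuming the first segment starts no later than the second): $[a_1, b_1]$ and $[a_2, b_2]$ with $a_1 \le a_2$ overlap exactly when $b_1 > a_2$.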

    There is no need to consider infinite sets of points, and for probability theory there is no need to speak about points at all: probability theory does not need zero-measure events, and no physical possible event has zero probability.

    P.S. This is only a very simple example to show that it's not contradictory to define probability without points. Pointless topology is much more general than this and makes use of the concept of "locales" (https://en.wikipedia.org/wiki/Pointless_topology)
    Mephist

    This to me seems to be what @Dfpolis was saying that mathematics must be instantiated in nature first otherwise it is pointless to talk about numbers existing in a Platonic ideal realm. That’s what I got from what you both were saying. Pardon the intrusion.
  • Mephist
    352
    Well, I wrote about physical events because probability is a concept that belongs both to physics and to mathematics; but from the point of view of mathematics only (the Kolmogorov axioms), if the axioms do not require zero to be a valid probability measure, you can avoid considering events with zero probability.
    But of course in this case the axioms are chosen with the intent to be the minimal constraints that have to be satisfied by any definition of probability that makes sense in physics.
    This is very common: the mathematical definition is derived from the physical intuition, but is used as an axiomatic theory completely disconnected from physics. That's why it often happens that after several years somebody finds a "model" of the axiomatic theory that does not correspond to the physical intuition at all but verifies all the axioms, so from the point of view of mathematics it is a valid model.
    The point is that axiomatizations are needed if you want to use formal logic (and you want to use formal logic because physical intuition is not enough to avoid paradoxes), but the axiomatization very often does not capture exactly the physical concept that you have in mind. And that happens even for the most elementary physical concepts, such as the concept of natural numbers.
  • fishfry
    3.4k
    ↪fishfry Can’t a real number go on and on for infinity? Or is that an irrational number...?
    — Noah Te Stroete

    You mean the decimal representation of a real number? Yes, they're infinitely long. For example 1/3 = .333... and sqrt(2) = 1.4142... and so forth. Even terminating decimals like 1/2 = .5 = .4999... have infinitely long representations.
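
    One way to see that last equality is as a geometric series:

    $0.4999\ldots = \frac{4}{10} + \sum_{k=2}^{\infty} \frac{9}{10^k} = \frac{4}{10} + \frac{9/100}{1 - 1/10} = \frac{4}{10} + \frac{1}{10} = \frac{1}{2}$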

    But the representation isn't the real number. The real number is the abstract thing pointed to by the representation, just like 2 and 1 + 1 are two different representations of the same abstract number.
  • fishfry
    3.4k
    I guess I got confused because you guys were talking about numbers as points, where I was thinking about zero-dimension points on a two-dimensional line. This stuff to me right now is very esoteric, as I don’t remember the terminology for the different kinds of numbers. I was always better at calculations like an engineer than I was so much interested in or ever had any exposure to theory.
    — Noah Te Stroete

    A line is only one dimensional. A plane is two dimensional. Points are zero-dimensional. How a bunch of zero dimensional points make a line is a bit of an ancient mystery. Newton thought of a point as tracing out a curve as the point moves through space. But that only pushes the mystery to space itself.

    There's really only one standard real number line that most people care about, the one from high school math used by physicists and engineers. The other stuff is esoteric. Constructivism is the idea that every mathematical object must be able to be explicitly constructed. That leads to a different kind of notion of the real line. But I really can't speak for constructivist philosophy, since I don't know much about it. You can see the difficulties @Mephist is having in explaining it to me!
  • RegularGuy
    2.6k


    Thank you both for your patience with me. This is interesting stuff.
  • Mephist
    352
    Yes, I find it interesting too, and probably my problem is that I learned about logic only from a practical point of view: proving the correctness of digital circuits and algorithms. Then I discovered that even category theory and topology are much more "practical" than they seem to be, and the best thing about computer-based proof assistants such as Coq is that proofs are real computations: you don't have to ask an expert whether something you think is right or wrong: just write a program to "build" the proof, and check if it works. What seem to be abstract mathematical concepts become very concrete and start to make sense!

    But probably this is the opposite of how mathematics and logic are usually taught, and I have to confess that when I studied analysis at university (I studied engineering), I learned the rules to do the calculations and remembered how to prove the theorems, but I never studied anything about the "foundations" of mathematics, such as Zermelo-Fraenkel set theory.

    So, I don't really feel qualified as a "teacher" to explain to @fishfry or to you about philosophical questions related to infinite sets, but I have very clear ideas of how formal logic systems work, and even my own theory of how mathematics "works".
  • sime
    1.1k
    But our use of real numbers (at least for the most part) is in integrals and derivatives, right?
    So the "dx" infinitesimals in integrals and derivatives should be the morally correct model? This is how real numbers are intuitively used since the XVII century.
    Mephist

    First of all, does our imprecise use of real numbers warrant a precise account?

    What does it even mean to say that a real number denotes a precise quantity?

    For in what sense can a formal system speak of universal quantification over all of the natural numbers?

    Recall that in a computer program, an infinite loop is merely intended as an imprecise specification of the number of actual iterations the program will later perform; it is only the a priori specified number of iterations (that is to say, before the program is executed) that isn't upper-bounded in the logic of a program.

    Therefore if the syntax and semantics of a formal system were to mimic a programmer's use of infinity, it wouldn't conflate the 'set' of natural numbers with an a priori precise set, but would instead refer to this 'set' as an a priori and imprecise specification of an a posteriori finite construct whose future construction is generally unpredictable and isn't guaranteed to exist for a given number of iterations.

    A universal quantifier over N would no longer be interpreted as representing a literally infinite loop over elements of N, but as representing a terminating loop over an arbitrary and unspecified finite number of elements, where the loop's eventual termination is guaranteed, even if only through external intervention that is outside of the constructive definition of the program or formal system.

    By interpreting 'infinite' quantification as referring to an arbitrary finite number of elements, the law of double negation is then naturally rejected; for an arbitrarily terminated loop cannot conclusively prove anything in general.

    This interpretation also has the philosophical advantage that semi-decidable functions are no longer informally misinterpreted as talking about their 'inability to halt', and likewise for formal systems supposedly 'talking about' their inability to prove certain statements.

    So in short, an imprecise and pragmatic model of the real numbers is what is important, corresponding to how numerical programmers implement them in practice, with particular attention paid to numerical underflow.
  • Mephist
    352
    First of all, does our imprecise use of real numbers warrant a precise account?

    What does it even mean to say that a real number denotes a precise quantity?
    sime

    I am not sure what you mean by "imprecise use of real numbers"; however, the rules of calculus give precise results:

    * The volume of a sphere inscribed in a cylinder is exactly 2/3 of the volume of the cylinder (https://en.wikipedia.org/wiki/On_the_Sphere_and_Cylinder)

    * $e^{i\pi}$ is exactly equal to -1 (Euler's identity)

    For in what sense can a formal system speak of universal quantification over all of the natural numbers?
    — sime

    The standard way is assuming the principle of induction: if a proposition is true for n=0, and from the fact that it is true for n you can prove that it is true for n+1, then you assume that it is true for all natural numbers. (https://www.encyclopediaofmath.org/index.php/Induction_axiom)
    It cannot be verified in a finite number of steps, so it's assumed as an axiom.
    But assuming it to be false is not contradictory: you can assume the existence of numbers that can never be reached by adding +1 an indefinite number of times. (https://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic)
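
    As a concrete illustration, here is what the induction principle looks like in a proof assistant. A minimal Coq sketch (the theorem name is mine; Coq's "induction" tactic applies the principle behind the scenes):

        Require Import Arith.

        (* The base case and the n -> n+1 step are exactly the two
           premises of the induction axiom. *)
        Theorem plus_n_zero : forall n : nat, n + 0 = n.
        Proof.
          induction n as [| n IHn].
          - reflexivity.                      (* base case: 0 + 0 = 0 *)
          - simpl. rewrite IHn. reflexivity.  (* step: from n to n + 1 *)
        Qed.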

    So in short, an imprecise and pragmatic model of the real numbers is what is important, corresponding to how numerical programmers implement them in practice, with particular attention paid to numerical underflow.
    — sime

    I don't understand what you mean by "a pragmatic model". Do you mean that there should be some sort of "approximate" logic, like some sort of fuzzy logic? (https://en.wikipedia.org/wiki/Fuzzy_logic). The standard "epsilon-delta" definition of limits is not "pragmatic"?
  • sime
    1.1k
    For in what sense can a formal system speak of universal quantification over all of the natural numbers?
    — sime

    The standard way is assuming the principle of induction: if a proposition is true for n=0 and from the fact that is true for n you can prove that is true even for n+1, then you assume that is true for all natural numbers. (https://www.encyclopediaofmath.org/index.php/Induction_axiom)
    It cannot be verified in a finite number of steps, so it's assumed as an axiom.
    But assuming it to be false is not contradictory: you can assume the existence of numbers that can never be reached by adding +1 an indefinite number of times (https://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic).
    Mephist

    So then the question is: in what sense does the principle of induction speak of universal quantification over all of the natural numbers?

    Take any predicate P with a single argument, then take as axioms

    i. P(0)
    ii. (x) ( P(x) => P(x+1)) ,

    where ii. denotes universal quantification over x, where x is a bound variable.

    Then we say, "By informal convention, this predicate is now said to be 'true' for 'all' x."

    But what exactly has our demonstration achieved in this extremely small number of steps?

    Do we really want to insist that our use of these axioms is equivalent to a literal step-by-step substitution of every permissible number into P?

    Do we even want to say that we are assuming the existence of this inexhaustible substitution?

    All that i and ii define in a literal sense is a rule that permits the substitution of any number for x.

    Nowhere does this definition imply that induction is a test for the truth of every substitution.

    Therefore it is coherent to accept mathematical induction as a principle of construction, and yet reject its interpretation as a soothsayer of theoremhood.
  • sime
    1.1k
    Therefore it is coherent to accept mathematical induction as a principle of construction, and yet reject its interpretation as a soothsayer of theoremhood.
    — sime

    Sorry, that's confusing and misleading; by theoremhood I was referring to truth in the respective model of the axioms.

    What I mean is that mathematical induction 'proves' a universally quantified formula in a completely vacuous sense, in that the 'conclusion' of the induction, namely the universally quantified formula (x) P(x), is merely an abbreviation of the very axioms i and ii.

    On the other hand, for universally quantified formulas over infinite domains that do not possess a proof by mathematical induction, there is no reason to support their informal interpretation as enumerating the truth or construction of the predicates they quantify over, by the very fact that they do not correspond to a rule of substitution.
  • Mephist
    352
    Do we really want to insist that our use of these axioms is equivalent to a literal step-by-step substitution of every permissible number into P?

    Do we even want to say that we are assuming the existence of this inexhaustible substitution?
    sime

    I would say that "forall x in S", where S is an arbitrary set, is not interpreted as a step-by-step substitution, because in general it is not possible to define a total order on every set: so not only is it impossible to visit all the elements during the iteration, it's not even possible to define what step n+1 should be for an arbitrary step n.
    In general the interpretation is "choose whatever element x in S", or "choose a random element x of S".
    The meaning is given only by the rules of logic, and the rules say that if you have "forall x exists y", then you have a function from x to y, and a function is assumed to be a primitive concept. So I guess there is no real definition of what "forall" means.

    All that i and ii define in a literal sense is a rule that permits the substitution of any number for x.

    Nowhere does this definition imply that induction is a test for the truth of every substitution.

    Therefore it is coherent to accept mathematical induction as a principle of construction, and yet reject it's interpretation as a soothsayer of theoremhood.
    sime

    Well, the meaning of the axiom is exactly this: there are no other natural numbers except the ones that you can reach by iterating the successor function starting from zero. So, there are no "unreachable" natural numbers.
    Or, put another way: if a property holds at zero and propagates from each natural number to the next, then whatever natural number you choose will have that property.

    P.S. If you don't assume the induction principle to be true, there could be for example several infinitely long "chains of numbers" starting from different points, not only from zero. Or you could assume that there is an integer so far away from zero (infinitely far) that it will never be reached by counting, even if each number is "attached" to a following one, as in nonstandard arithmetic
    (https://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic).
  • Mephist
    352
    On the other hand, for universally quantified formulas over infinite domains that do not possess a proof by mathematical induction, there is no reason to support their informal interpretation as enumerating the truth or construction of the predicates they quantify over, by the very fact that they do not correspond to a rule of substitution.
    — sime

    Yes, exactly. Not every domain of quantification (set in set theory or type in type theory) is supposed to be enumerable, or iterable.
  • Mephist
    352
    I do not believe that an oracle in computer science is in any way analogous to the axiom of choice. If you know of a connection perhaps you could explain it. An oracle allows you to compute something that was formerly noncomputable. The axiom of choice doesn't let you compute anything. In fact the elements of the sets given by the axiom of choice have no properties at all and could never be computed by any stretch of the word.
    — fishfry

    Hi @fishfry.

    Reading again what you wrote, I think that maybe I am able to explain what I meant here.

    Here's the axiom of choice, taken from wikipedia: (https://en.wikipedia.org/wiki/Axiom_of_choice)

    $\forall X \, \Big[ \varnothing \notin X \implies \exists f\colon X \to \bigcup X \;\; \forall A \in X \, \big( f(A) \in A \big) \Big]$

    In a constructivist logic interpretation, it says that if you have built a set of nonempty sets X, using this axiom you can build a function f.
    In other words, this axiom is like a black box function (or an oracle), where you put as input the set of nonempty sets X and you get as an output the choice function f.
    Then, you can use the choice function f as you do in ZFC. The resulting proof is the same, only that in ZFC you don't treat the axiom as if it were a function: you simply apply it.
    In this sense the axiom is an oracle: it allows you to compute f without knowing how the function that produces f from X is implemented; f does not even have to be computable: you simply don't care about its computability.
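
    A sketch of what this looks like in Coq (the axiom name is mine; Coq's own ChoiceFacts library states several official variants):

        (* Postulate choice as a black box: we may use the function f
           it provides, but we can never look inside or compute it. *)
        Axiom choice_oracle :
          forall (A B : Type) (R : A -> B -> Prop),
            (forall a : A, exists b : B, R a b) ->
            exists f : A -> B, forall a : A, R a (f a).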

    You write "In fact the elements of the sets given by the axiom of choice have no properties at all and could never be computed by any stretch of the word". But in constructivist logic it is the same thing: if X is the class of all subsets of real numbers determined by the equivalence relation "two numbers are equivalent if their quotient is rational", you can define A as the class determined by some chosen representative, and then write f(A) to get a real number. "f" is treated in a purely symbolic way, never computed.

    I think that the main source of confusion is that usually a logic is called "constructivist" when it doesn't have any axioms: in that case there will be no "oracle" functions, and the "forall x ..., exists y ..." will correspond to really computable functions from x to y. But even in this case, a real number will typically be defined as a Cauchy sequence of rational numbers (a limit, in the usual way), not as a function that computes the digits of the decimal representation of the number, and the fact that Cauchy sequences converge will be part of the axiomatic definition of real numbers: it will not be a "logical axiom", but it will still be treated in the same way as an "oracle", because it's part of the definition of what real numbers are.
    It's the same thing as when you define natural numbers: you don't have to prove the principle of induction: it's simply part of the (axiomatic) "definition" of what natural numbers are.

    If you add the missing logical axioms to constructivist logic (excluded middle and choice) treating them as "oracles" (excluded middle can be seen as a function that returns "P1 or P2"), you obtain back classical logic, only with a slightly different "interpretation" of the rules.

    I don't know if this explanation makes sense: maybe the best way to explain would be to show some examples of proofs..
  • fishfry
    3.4k
    In this sense the axiom is an oracle: it allows you to compute f
    — Mephist

    As I think of it, perhaps a new axiom is an oracle. I understand that point of view.

    So I'm willing to be agreeable on this point, but still no less confused on the issues I've raised earlier.

    Also ... no form of the AC could possibly be construed as a computation. That, I just do not see at all. It's a pure statement of existence. A particular set exists with no means to possibly compute or construct or identify any of its elements. Whether this is worse or the same as the powerset axiom I can't say. Perhaps it's no worse. But a lot of people historically regarded it as worse. I can't put any of that in perspective.

    I've re-read the SEP article on constructive math with a much deeper level of comprehension than I've had in the past. Your efforts have not been in vain.
  • Mephist
    352
    I've re-read the SEP article on constructive math with a much deeper level of comprehension than I've had in the past. Your efforts have not been in vain.
    — fishfry

    I don't remember which one is the SEP article. Could you send me a link?

    Also ... no form of the AC could possibly be construed as a computation. That, I just do not see at all. It's a pure statement of existence. A particular set exists with no means to possibly compute or construct or identify any of its elements.
    — fishfry

    Yes, the AC can't be construed as a computation, and it's not part of constructivist logic. What I am saying is that AC can be added to a constructivist logic framework such as Coq if you want to use standard logic: standard logic can be obtained as an extension of constructivist logic (in a similar way as metric geometry can be seen as an extension of projective geometry): if you want to use classical logic in Coq (and I think that's how Coq is used most of the time) you just have to import the Classical_Prop library: https://coq.inria.fr/library/Coq.Logic.Classical_Prop.html#

    The possible forms of the axiom of choice (without assuming EM) and the relations between them are here: https://coq.inria.fr/library/Coq.Logic.ChoiceFacts.html
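
    For example, once Classical_Prop is imported, excluded middle is available as the axiom "classic" and classical reasoning goes through. A minimal sketch (the theorem name is mine; the library itself already proves this as NNPP):

        Require Import Coq.Logic.Classical_Prop.

        (* double negation elimination, derived from the imported axiom *)
        Theorem dne : forall P : Prop, ~ ~ P -> P.
        Proof.
          intros P H.
          destruct (classic P) as [HP | HnP].
          - exact HP.          (* P holds directly *)
          - contradiction.     (* ~P contradicts the hypothesis ~~P *)
        Qed.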
  • fishfry
    3.4k

    I don't remember which one is the SEP article. Could you send me a link?
    Mephist

    https://plato.stanford.edu/entries/mathematics-constructive/

    They echoed many of the things you've been saying, such as that there are several (four in fact) variations of constructivism, and that there's a constructivist logic ... well basic stuff, but stuff I've been reading without comprehension before and can now understand. Also I've learned that a lot of this goes back to the nineteenth century. Poincaré for example was troubled by nonconstructive thinking.

    That's why you should never be frustrated about not being able to get through to me. You are getting through by osmosis. I've never bothered to try to understand constructivism at a technical level before and this is an education. The article talked about putting explicit bounds on the rates of convergence of Cauchy sequences, and that related back to the Italian paper, so it's starting to feel familiar. One learns by repetition.

    I will say one thing though. I still just don't get constructivism as a philosophy. I DO understand that certain formulations of constructive math lend themselves beautifully to computerized proof systems. Nobody would deny the digital deluge that's flooding the world right now, why should math be spared?

    But I don't get the metaphysics. "A thing only exists when we have an algorithm to instantiate it." But the world is full of things that have no algorithm. The world itself is not an algorithm.

    And here I think is the source of my discomfort with constructivism. Many these days believe that the world IS an algorithm. I disagree with that proposition. To the extent that some -- not all, ok, but some -- aspects of constructivism are in support of the belief that everything that exists must be an algorithm; I oppose that aspect. I instinctively believe that the constructivists are missing a lot: namely, everything that is important but that can't be explicitly constructed.

    Yes, the AC can't be construed as a computation, and it's not part of constructivist logic. What I am saying is that AC can be added to a constructivist logic framework such as Coq if you want to use standard logic: standard logic can be obtained as an extension of constructivist logic (in a similar way as metric geometry can be seen as an extension of projective geometry): if you want to use classical logic in Coq (and that's how it's used most of the time) you just have to import the Classical_Prop library:
    — Mephist

    You've said this to me probably several times and I finally understand it. If I've got this right, you can add various axioms to constructive math and recapture standard math. So we get the benefit of computerization and we can still do the things standard math cares about. Which is pretty interesting, if true.

    Ah, "Classical_Prop." Coq has a library you can import that makes it act like standard math. You've connected adding axioms in a formal system, with with importing a library in a program. Wow. Thanks. Great insight. I got it.
  • Mephist
    352
    I will say one thing though. I still just don't get constructivism as a philosophy. I DO understand that certain formulations of constructive math lend themselves beautifully to computerized proof systems. Nobody would deny the digital deluge that's flooding the world right now, why should math be spared?

    But I don't get the metaphysics. "A thing only exists when we have an algorithm to instantiate it." But the world is full of things that have no algorithm. The world itself is not an algorithm.
    fishfry

    I completely agree. I don't believe that the world is an algorithm either. And mathematical objects (and of course physical objects too) are not algorithms. But proofs of existence are algorithms, in any formal logic.
    In my opinion, the reason for treating the axiom of excluded middle as less "fundamental" than the others is that it is related to the more philosophical and abstract notion of truth: a proposition is either true or false.

    In classical logic, when you derive a proposition P, you interpret it as a proof that "P is true". When you derive a proposition "not P", you interpret it as a proof that "P is not true (i.e., false)".

    Now, it turns out that if you replace "P is true" with "P is provable" and "P is false" with "P is absurd" (meaning P implies a logical contradiction), all the rules and axioms of classical logic will still be intuitively justified, except one: the axiom of excluded middle. A proposition can be neither provable nor absurd.

    So, the axioms of logic without excluded middle still make sense, but with a different interpretation of the meaning of propositions. This is in some way similar to what often happens in mathematics: a simplified model appears to be in some way more "fundamental" if it is consistent and still compatible with the full model. For example in projective geometry there is no concept of length, but the concepts of points and intersecting lines still make sense.
    Well, in the same way you can see intuitionistic logic as the part of classical logic that doesn't depend on the notion of truth. And even negation does not have the same meaning: "not P" does not mean "P is false", but "P implies a contradiction". In fact, the "not" operator is not even needed in intuitionistic logic: just replace every occurrence of $\neg P$ with $P \to \bot$.

    Following the analogy with projective geometry, it is true that not all theorems can be proved without using excluded middle, but it still makes sense to distinguish the ones that can be proved without it from the ones that can't, because the first ones are valid for a larger number of models (or, put in another way, are true for more "fundamental" reasons).

    The same thing happens in all of mathematics: why do you want to isolate group theory from Galois theory about the solution of polynomial equations? Because group theory is more general (since its theorems depend only on a smaller set of assumptions), and so can be seen as more fundamental.

    [ I have to go now: I'll explain in the next post what all this has to do with algorithms ]
  • Mephist
    352
    Now, what does intuitionistic logic have to do with algorithms?

    Here's the quick answer: in intuitionistic logic a proposition P can be interpreted as the "description" of the algorithm that has been executed (using the rules of logic) to build the proof of P. And this of course has to be a deterministic algorithm that terminates in a finite number of steps, because every proof must be made of a finite number of steps.

    Now, the problem is that the long answer is much, much longer. So, I'll describe the consequences of the short answer before.

    The first thing to clarify is that intuitionistic logic (more concretely these rules: https://en.wikipedia.org/wiki/Natural_deduction#/media/File:First_order_natural_deduction.png)
    is only a first order logic, and not a theory of sets.
    ZFC, instead, is classical first order logic (intuitionistic + EM), plus the axioms of set theory.
    Now, the axioms of ZFC "work well" with classical logic, but not with intuitionistic logic. In particular, the correspondence between algorithms and proofs works only in the very particular formulation of "natural deduction", with no axioms at all. The addition of the ZFC axioms would introduce the $\in$ predicate, which does not respect the "propositions as algorithms" interpretation. Some of the reasons why ZFC doesn't "fit well" with intuitionistic logic can be found here: (https://en.wikipedia.org/wiki/Constructive_set_theory)
    """
    In 1973, John Myhill proposed a system of set theory based on intuitionistic logic[1] taking the most common foundation, ZFC, and throwing away the axiom of choice (AC) and the law of the excluded middle (LEM), leaving everything else as is. However, different forms of some of the ZFC axioms which are equivalent in the classical setting are inequivalent in the constructive setting, and some forms imply LEM.
    """
    Basically, the problem is that ZFC axioms imply EM. So even if you exclude EM from logic axioms, you can still derive it from ZFC axioms.

    However, there is a "natural" extension of the rules of intuitionistic logic that respects the correspondence between algorithms and proofs and is interpreted in a similar way as a theory of some sort of "sets": intuitionistic type theory. (https://en.wikipedia.org/wiki/Intuitionistic_type_theory)

    In intuitionistic type theory sets are of course replaced by types (which are not exactly the same thing), but there is "practically" a one-to-one correspondence between propositions expressed in the two languages. From a formal point of view, however, there is a big difference: ZFC is implemented as axioms on top of an independent first order logic. Intuitionistic type theory instead is an extension of the rules of intuitionistic first order logic to a higher-order logic (basically by adding quantification on functions), still making no use of axioms at all.

    Now, back to the answer about algorithms.
    As I wrote in the previous post, intuitionistic logic does not speak about truth but about provability and proofs. And proofs are, as in every formal logic, algorithms.
    The fundamental difference is that proofs in classical logic are not an essential part of the theory: the important thing is that a proof of a proposition P exists. If a proof exists, the proposition P is true, and that is all what matters about P. The algorithm that has been used to prove P is not considered as part of the language.
    In intuitionistic logic, instead, the language speaks about proofs, and the terms of the language are meant to represent proofs (that are algorithms), and then the language speaks about algorithms.

    For example, in type theory I can write the expression "x : P", meaning "x is the proof of the proposition P". "x" in this case represents an algorithm.
    Then I can have the expression "f : P -> Q", meaning that "f" is the proof that "P implies Q". This means that "f" is an algorithm that takes as input a proof of P (another algorithm) and returns as output a proof of Q (a third algorithm).
    The "side effect" of this interpretation is that I never have to verify how a proposition has been proven, because proofs are always "recorded" together with the propositions. The propositions are interpreted as types, and the terms are interpreted as proofs (always written in the form "x : P") and checking the proof
    of P means checking if the term x has type P (this is done with an algorithm that is part of the meta-theory).
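
    A tiny Coq fragment to make the notation concrete (the definition names are mine): proof terms literally are programs:

        (* "x : P" reads: the term (program) x is a proof of P. *)
        (* A proof of P -> P is just the identity function. *)
        Definition identity_proof (P : Prop) : P -> P :=
          fun x : P => x.

        (* Modus ponens is function application: f transforms a
           proof of P into a proof of Q. *)
        Definition modus_ponens (P Q : Prop) (f : P -> Q) (x : P) : Q :=
          f x.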

    A full explanation of how all logical operators are interpreted would take me probably several pages, and I can't write a book here.. :smile: (but there are a lot of good books on type theory)

    However, back to the point of existential propositions: if I built a proof of "exists x:A, P(x)", it means that I built a proof of A called x, and a proof of the proposition P(x) (P(x) is what is called a "dependent type").
    If A is the type of real numbers, a Cauchy sequence is a "proof" of A. Meaning: a real number is treated formally as if it were a proof that a real number exists, and corresponds to the algorithm that has been executed (applying the rules of logic) to build that real number.
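
    Again a minimal Coq sketch (the theorem name is mine): the proof term of an existential records the witness together with the proof about it:

        Theorem four_is_a_double : exists n : nat, n + n = 4.
        Proof.
          exists 2.       (* the witness: an explicit construction *)
          reflexivity.    (* the proof that 2 + 2 = 4, by computation *)
        Qed.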

    P.S. Reading what I just wrote here, I doubt that it's in some way understandable by anybody who doesn't already know about type theory. But I am not really able to explain it better in the space of a post...
  • fishfry
    3.4k
    I completely agree.
    — Mephist

    I should quit now!

    I just wanted to say that in the cold light of day I no longer believe what I wrote last night. If you start with classical math and remove LEM you get some form of constructivism. If you add LEM back in, you get back classical math. So there's much less than meets the eye when you say you can add in axioms to recover classical math.

    Secondly, you said you can import a library into Coq to recover standard math. But if that were true, standard math would be computerizable, and then that would remove the biggest benefit of constructivism.

    I confess to being just as confused as ever, but at a somewhat higher level.

    It's too hot to type here today so I'll save the rest for later.

    ps:

    But proofs of existence are algorithms, in any formal logic.
    — Mephist

    Ok let me just try to get some clarification. If I use the axiom of choice to whip up the Vitali set, the set of representatives of the equivalence classes of $\mathbb{R}/\mathbb{Q}$, that set is not "constructive" by any conceivable definition of the word.

    But if you code up the axioms of formal ZFC into a computer, the proof of the existence of the Vitali set can be cranked out by a program. Even by an undergrad! So in that sense, proofs are constructions even if they are constructing things that are in no way constructive.

    Have I got that right? What I mean is, am I confused about something sensible?
  • Mephist
    352
    But if you code up the axioms of formal ZFC into a computer, the proof of the existence of the Vitali set can be cranked out by a program. Even by an undergrad! So in that sense, proofs are constructions even if they are constructing things that are in no way constructive.

    Have I got that right?
    fishfry

    Yes! Exactly!! Proofs are constructions in ANY formal logic, because that's how formal logic is defined!
    There are computer-based proof verification systems for a lot of different logics: https://en.wikipedia.org/wiki/Proof_assistant
    You can look here for a short history of the development of these systems: https://en.wikipedia.org/wiki/Automated_theorem_proving

    These systems are called "proof assistants" because generally they try to allow the user to skip some steps of a proof and infer the missing steps, so that the user is not forced to write every single step, which can be very tedious (depending on what logic you use); but the main functionality that all of them provide is proof verification.
    However, the truth is that this is all "theoretical" stuff: in practice, at least before Voevodsky introduced HoTT, mathematicians have always ignored these systems (as they have ignored formalized logic), because they are way too tedious and time-consuming to use in practice. The problem is that they force you to concentrate on absolutely unessential steps of the proof, and to use a language that does not describe at all what the important steps of the proof really "mean". But this problem is not due to the use of computers (formal systems in a sense were "made" for computers, even if they were invented when computers didn't exist yet); this is a problem with the formal languages themselves, and that's the main problem that HoTT is trying to solve.

    However, you can for example look at the HOL proof assistant (https://en.wikipedia.org/wiki/HOL_(proof_assistant)). This is a very clear implementation of proofs in classical higher order logic as programs. In HOL you don't even enter a special environment where you write proofs: you directly write a program in the ML programming language (a perfectly "normal" programming language) to literally "build" the propositions that you are proving. There is a data type in ML called "proposition" and the rules of classical logic correspond exactly to the type constructors of the "proposition" data type. And the logic (higher order logic) is probably the nearest formal language to the "informal" classical logic used in practice by mathematicians in real (non formalized) proofs.
  • Mephist
    352
    Secondly, you said you can import a library into Coq to recover standard math. But if that were true, standard math would be computerizable, and then that would remove the biggest benefit of constructivism.
    — fishfry

    The point is that "computerizable" does not mean "computable", because terms built in classical logic in general don't correspond to computable functions, and that is due to the fact that some of the rules, or some of the axioms, allow you to build mathematical functions in an indirect way, and that is what corresponds to the use of "oracle" (or non computable, or axiomatically defined) functions in constructive logic.

    What constructive logic does is move all the non-computable parts out of the logic (leaving only the rules that build terms in a "direct" way); these can however be added back to the logic as axioms (usually by importing them as libraries) to recover classical logic.

    What remains after removing the non computable parts of standard logic is a language that speaks only about proofs (not about truth), and proofs are purely syntactical entities that can be reasoned about using only computable functions.
  • Mephist
    352
    But I don't get the metaphysics. "A thing only exists when we have an algorithm to instantiate it."
    — fishfry

    Now, just before leaving for vacations, the metaphysical part, that surely I'll not be able to defend in a philosophy forum :razz:

    The algorithms that represent "all the things that exist" are not only finite algorithms, but also infinite ones: they are ALL the functions (even non-recursive or non-total ones). So, the rules of logic are based on the model of computable algorithms, but are supposed to work for any function (which can be seen as some kind of generalization of an algorithm).

    If you think about it, that's the same kind of "generalization" that is done in set theory: the rules of logic are based on the model of finite sets (I am speaking about the rules of first order logic - that are the ones that define the "meaning" of logical operators - not about the axioms of ZFC), but are supposed to work even for the infinite sets of set theory.

    So, in constructivist type theory you replace the concept of infinite sets with the concept of infinite constructions. The rules of logic allow you to add "pieces" to these constructions and to compose them with each other, but of course you can't "decompose" the infinite ones: you have to treat them as a "black box", or an "oracle", or a function of a programming language belonging to an external library that "runs" on some kind of "machine" that can compute even uncomputable functions.

    So, as in set theory you can think of everything as made of sets, in constructivist type theory (meaning the Martin-Löf kind of type theories) you can think of everything as made of functions.
    And that's basically the reason for the strict correspondence between type theory and category theory: category theory is an axiomatic theory of functions (based on set theory). It axiomatizes the properties of functions to generalize them to morphisms, which are "anything" that "works" like a function.

    As the physical intuition for sets is that every object is made of parts, the physical intuition for functions is that all of our experience can be seen as an interaction, or an experiment.
    The only assumption is that a function is something that always gives the same result if you prepare it in the same way.
    But in the end, neither of these intuitions is really true at a fundamental level of physics:
    - it is not true that everything is made of parts, because "particle number" is not conserved.
    - it is not true that there are experiments that can be prepared always in the same way to give always the same result.

    But if you don't assume the idea that everything is made of parts, you don't have to assume the unintuitive :wink: idea that a line is made of points: you can think of lines and points as two different types, and an intersection of two lines as a function from pairs of lines to points.
  • fishfry
    3.4k
    But if you don't assume the idea that everything is made of parts, you don't have to assume the unintuitive :wink: idea that a line is made of points: you can think of lines and points as two different types, and an intersection of two lines as a function from pairs of lines to points.
    — Mephist

    The revenge of Russell. Type theory triumphant.

    I'm all typed out, no pun intended, nothing else to say at the moment.
  • sime
    1.1k
    In my opinion, neither type theory nor category theory is satisfactory in addressing the ambiguity of standard logic regarding infinitary notions.

    Take as an example "My local postbox contains some letters". This proposition isn't a full specification of a set, hence is not representable as a specific algorithm and hence cannot be constructed; therefore it must be stated axiomatically, as a particular set which exists but without having any internal details. Some people might already interpret set theory, type theory or category theory in this way, but this isn't thanks to those theories themselves.

    - The Axiom of Choice is wrongheaded in two respects:

    i) It isn't a logical rule permitting the universal existence of non-constructible sets. Rather, it is used as a non-logical proposition to refer to a specific set in a domain of inquiry whose number of elements is potentially infinite, that is to say, isn't specified in the logical description.

    ii) The axiom has nothing to do with choice. On the contrary, it is usually used as a reference to sets for which a choice-function isn't specified.

    Whereas classical logicians often mistake AC for a logical rule, constructive logicians often mistake the Axiom of Choice for a tautology; for they conflate the existence of a choice-function with the existence of a set. Yes, it is true that 'constructed set', 'computable set' and 'choice function' are synonyms - but this misses the point as to how the axiom of choice is used, namely in order to represent 'externally provided sets' that provide elements on demand, but whose mechanism and quantity of contents are unspecified.

    To nevertheless insist that every physical set must be constituted by a computationally describable mechanism, should be regarded as a physical thesis, rather than being a logical theorem. For logic is a purely descriptive language without physical implications. Physicists might use set theory as a means to document their epistemological certainty and uncertainty regarding the internal operation of physically important sets, but this epistemological interpretation of set-theory isn't part of set theory per-se.

    The correct semantics of logic and mathematics is game semantics, describing the interaction of entities created and controlled by the logician on the one hand, and the entities of his external environment that he is aware of, but for which he has only partial control or specification. This semantics serves to reinforce the notion that logic and mathematics are languages of empirical description that document the interactions of the logician with his external world.
  • fishfry
    3.4k
    [Revivifying this thread to comment on Banach-Tarski]

    I linked it in the post just before this one. Here's the link:
    https://thephilosophyforum.com/discussion/comment/302364
    Mephist

    Ok. I didn't go through it in detail. You shouldn't get hung up on the particular rotations, that's not relevant. What's important is that the isometry group of 3-space contains a copy of the free group on two letters, which already has a paradoxical decomposition.

    I haven't time tonight (and I'm kind of busy for a few days and probably haven't the concentration anyway to go through your post line by line) but I recommend the Wiki outline of the proof, which I find pretty clear. https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox

    But you're asking for things to be continuous or preserve topologies or whatever, and those things have nothing at all to do with the proof. As I say I don't have the time/energy right now to go through your post but you're missing some of the overall structure. All of the transformations generated by the two matrices are isometries, or distance-preserving transformations. They're all rotations around the origin in fact. And the group of isometries contains a copy of the paradoxical free group. That's the main idea to absorb about the proof.

    See https://en.wikipedia.org/wiki/Paradoxical_set

    https://en.wikipedia.org/wiki/Free_group

    I'll look at your proof some more but actually from what you wrote you're probably working from a source that emphasizes the particular matrices, but that's a troublesome way to go. All that's important is that there exist a pair of rotations that are independent, in the sense that no finite combination of the rotations and their inverses can ever be the identity rotation. That's why they are an example of the free group on 2 letters.

    Def see this page.

    https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox

    ps -- I glanced at your link. That paper's way too detailed and technical. Use the Wiki outline. There are four steps:

    * Grok the sublime beauty of the paradoxical decomposition of the free group on 2 letters, $F_2$ (written out at the end of this post). In the proof sketch you gave, you did this part in the context of the two rotation matrices and that's terribly confusing. You can see from the free group that the paradox is already inherent in simply composing any two invertible operations. It's better to see this part first, before diving into the rotations.

    * Take on faith that there is a pair of rotations of the unit sphere that are independent, hence are a model of $F_2$. We don't care what the two rotations are. This is a technical part of the proof that's perfectly ok to simply accept.

    * Let H be the copy of $F_2$ generated by the two rotations of the unit sphere. Let H act on the sphere. The action partitions the sphere into disjoint orbits. You are correct that each orbit is countable and that there are uncountably many of them. So you're getting most of this right.

    * Pick a choice set consisting of one element from each orbit. We recover the sphere by translating the choice set; analogous to the way we recover the reals by translating the Vitali set. If you have not studied the example of the Vitali set, start here. It's not directly relevant to Banach-Tarski but it's the same process of taking an equivalence relation and translating back a choice set on the equivalence classes to recover the original set. This is how to think about B-T.


    https://en.wikipedia.org/wiki/Vitali_set


    * Finally you apply the original paradoxical decomposition of the free group. I think I'm a little fuzzy on this point tonight but I believe I understood it at one point a few months ago. That was five steps. Wiki has it as four.

    ps -- Ah I think I remember. After translating the choice set, every point on the sphere can be uniquely expressed as some particular rotation applied to some particular point in the choice set. Now when you apply the paradoxical decomposition of the free group to the set of rotations, you get a paradoxical decomposition of the sphere, modulo some details.

    If you're having intuitions of topologies or continuity, those are the wrong intuitions to be having. What's interesting though is that each orbit is dense -- that is, arbitrarily close -- to every point on the sphere. That's why I said earlier that the orbits are disjoint but in no way separated. They're like the rationals in the reals. Arbitrarily close to every real, but you couldn't cleanly separate them out.

    I hope some of this helps. I'm not an expert. I can see that some of my exposition is fuzzy. I've been at this proof for quite some time. A few months ago I started writing up a simplified explanation and at one point convinced myself I understood about 95% of everything that was going on. Tonight I've got maybe 50 or 70%. But perhaps some of what I wrote will connect with your intuition. But forget continuity. The "pieces" of the decomposition are wildly discontinuous.

    Have you seen the Vitali set? You should start there. Define an equivalence, take a choice set, translate the choice set back to recover the original set.
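
    To make the free-group step concrete, here is the decomposition written out (standard notation: $S(x)$ is the set of reduced words beginning with the letter $x$). On the one hand

    $F_2 = \{e\} \cup S(a) \cup S(a^{-1}) \cup S(b) \cup S(b^{-1})$,

    but on the other hand

    $F_2 = S(a) \cup a\,S(a^{-1})$ and $F_2 = S(b) \cup b\,S(b^{-1})$,

    so four of the pieces reassemble into two whole copies of $F_2$.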
  • fishfry
    3.4k
    Ok now that we have sort of a B-T discussion space, I want to suggest some warmup exercises to get a handle on the proof. I'll leave out all the details for brevity. These examples aren't actually part of the proof; but their methods give insight into what's going on when we translate the choice set on the orbits.

    * Consider the integers $\mathbb{Z}$. Define two integers equivalent if their difference is a multiple of 5. This partitions the integers into five equivalence classes, each class countably infinite.

    Now consider each equivalence class. One is {-10, -5, 0, 5, ...}, another is {-9, -4, 1, 6, ...} and so forth.

    Now "using the axiom of choice," form a choice set consisting of one element from each equivalence class. Of course in this case we don't actually need choice, we could just take, say, the smallest positive element of each class. But we'll use the sledgehammer of choice just to get into the proper spirit.

    Such a choice set would look like, for example, {-10, 6, 12, 18, -1}. One element from each congruence class mod 5.

    Now we translate the choice set by every multiple of 5: {-10, 6, 12, 18, -1} + 0, {-10, 6, 12, 18, -1} +/- 5, {-10, 6, 12, 18, -1} +/- 10, etc. In this way we recover all the integers.

    In fact every integer is the unique sum of one particular element of the choice set, plus one particular multiple of 5.

    What we've done is start by partitioning the integers into five classes, each countably infinite; and end up with a countable union of classes of size 5. We've "reversed the cardinalities."
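
    In symbols, writing $C$ for the choice set: $\mathbb{Z} = \bigsqcup_{k \in \mathbb{Z}} (C + 5k)$, a disjoint union of countably many translates, each of size 5.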

    I have some pictures to go with this, is the exposition relatively comprehensible?

    * Ok next example, the Vitali set. We start with the real numbers and declare two reals equivalent if their difference is rational. I hope you see that while the formalities are clear, the intuition is hellacious. The equivalence classes are very hard to visualize. That's why this is such a cool example.

    Note that there are uncountably many equivalence classes; and each equivalence class is countable.

    We call the resulting set of equivalence classes $\mathbb{R}/\mathbb{Q}$. Now we do need the axiom of choice to form a choice set consisting of exactly one member of each equivalence class. Call the choice set $V$ in honor of Giuseppe Vitali, who cooked up this example in 1905.

    https://en.wikipedia.org/wiki/Vitali_set

    Ok now we do the same trick. Using the fact that $\mathbb{R}$ is an Abelian group under addition, we can show that by translating $V$ by each rational number in turn, we recover the entire set of reals.

    Again, each real number is now expressed uniquely as the sum of a particular element of $V$ and some rational.

    But now what we've done is partition the reals into an uncountable union of countable sets; and by translating the choice set, we ended up with a countable union of uncountable sets! Again we've reversed the cardinalities.

    Why does this matter? In Vitali's example he restricted the operations to the unit interval, with the operation of "addition mod 1", or circular addition of reals. So 3/4 + 1/2 = 1/4; you wrap around. And now, when we have the unit interval expressed as a countable union of translations of $V$, we can apply countable additivity to show that the measure of $V$ cannot possibly be defined. If the measure of $V$ is zero, by countable additivity the measure of the unit interval must be zero. If it's positive, the measure of the unit interval is infinite.

    So there is no translation-invariant, countably additive measure on the real numbers that assigns 1 to the unit interval. $V$ is in fact a non-measurable set: a set of real numbers that cannot possibly be assigned a size or a probability in any sensible way.
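
    The contradiction in one line, writing $\oplus$ for addition mod 1 and $\mu$ for the putative translation-invariant measure: $1 = \mu([0,1)) = \sum_{q \in \mathbb{Q} \cap [0,1)} \mu(V \oplus q) = \sum_{q} \mu(V)$, and no single value of $\mu(V)$, zero or positive, can make that sum equal 1.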

    https://en.wikipedia.org/wiki/Non-measurable_set

    * With these two examples in mind, we see how we partition a set into equivalence classes; take a choice set; then translate the choice set to either reverse cardinalities or get some other nice property that we care about.

    So in Banach-Tarski, we start with the set of rotations generated by our two magic matrices. We take a choice set on the collection of orbits. Now we do NOT have a nice Abelian group; but we can use the properties of group actions to show that each point of the sphere is expressed uniquely as a rotation applied to some element of the choice set. So we have the unit sphere expressed as a union of rotations in $H$ applied to the choice set. And since $H$ is a copy of the free group on two letters, the paradoxical decomposition of $F_2$ induces a paradoxical decomposition of the sphere.

    I hope this makes some sense. I find that it helps me to understand the structure of the final step of the proof.
  • Mephist
    352
    If you're having intuitions of topologies or continuity, those are the wrong intuitions to be having. What's interesting though is that each orbit is dense
    — fishfry

    If topology has nothing to do with it, why do all the proofs of decomposition of objects that don't preserve volume decompose the objects into pieces that are not open sets?

    Can you find an example of decomposing an object and then recomposing it with a different volume where the pieces are open sets?
  • fishfry
    3.4k
    If topology has nothing to do with it, why do all the proofs of decomposition of objects that don't preserve volume decompose the objects into pieces that are not open sets?
    — Mephist

    Question doesn't compute. IMO you are understanding some of the technical steps but not grokking the overall structure of the proof. I actually can't parse your question. Which proofs? What pieces? What not open sets? You're reading things into it that aren't there. I strongly urge you to examine the Wikipedia proof outline and put aside the paper you're reading because it's too detailed and doesn't show the big picture.

    Can you find an example of decomposing an object and then recomposing it with a different volume where the pieces are open sets?
    — Mephist

    Like f(x) = 2x stretching (0,1) to (0,2)? Yes. Open sets are easy if you don't require the maps to be isometries.

    You should start with the free group on two letters, that's the key to the whole thing. Forget continuity, nothing here is continuous. Rotations are continuous; but what's being decomposed paradoxically is the group of rotations. It's a little hard to get a grasp on this the first n times you go through it, for some large value of n.

    It's true that each individual rotation is continuous and carries open sets to open sets. But what's being decomposed is the collection of all the rotations; and this collection has a paradoxical decomposition by virtue of being a model of the free group on 2 letters. That is the one-sentence version of the proof.

    Start with the free group on 2 letters. Go through the Wikipedia proof. Abandon the difficult paper you chose to work with. My two cents.

    If topology has nothing to do with it, why do all the proofs of decomposition of objects that don't preserve volume decompose the objects into pieces that are not open sets?
    — Mephist

    Because topology has nothing to do with it! The rotations are continuous but the group of rotations has a paradoxical decomposition. Any open interval is topologically equivalent to any other, regardless of length. Topology is too weak to preserve measure. We need isometries for that.

    In fact another one sentence summary of the proof is that while isometries preserve the measure of measurable sets; they do not necessarily preserve the measure of nonmeasurable sets.
  • Mephist
    352
    Like f(x) = 2x stretching (0,1) to (0,2)? Yes. Open sets are easy if you don't require the maps to be isometries.

    I don't know if the rotations or translations carry open sets to open sets. I'd tend to doubt it.
    fishfry

    Yes, of course they have to be isometries. I meant: there is no way of decomposing an object into an infinite set of open sets and then recomposing them in a different way so that each piece keeps the same measure but the sum of the measures of all the pieces is different. If this were possible, the theory of integration would be inconsistent.
    I know your objection: if there is an infinite number of pieces the measure of each piece cannot be finite. OK, but you can build the limit of a sequence of decompositions, as you do with regular integration.
    I am not arguing that the BT theorem is false, I am arguing that it works only because you perform the transformation on pieces that are not measurable. If the pieces were made using a decomposition into open sets, as with regular integration, it couldn't work. I know that you can even define a Lebesgue integral that works on sets that are not open: this is not a necessary condition, but it is a sufficient condition to preserve additivity.