• Univalence as a Principle of Logic
    Ah... well, I know neither McGregor nor McBride, nor linear logic... :smile:
  • Univalence as a Principle of Logic
    OK, thanks for the suggestion. I'll read it
  • Univalence as a Principle of Logic
    Yes, that's what I had in mind, although re-reading what I wrote, I am not sure that this is the right functor to consider.. :razz:
  • Arguments in favour of finitism.
    Well, I reviewed the definition of the Riemann integral and I see that it is defined as a series (a sum over a set of intervals). So yes, I was wrong: there are no sums defined over an uncountable set of terms (as far as I know).
  • Univalence as a Principle of Logic
    I thought we'd be able to incorporate implementation details (a description of them at least) into our types and differentiate them at that level followed by a truncation to Set as required for computation. Conor McBride and his talks on linear logic gave me that impression. If the problem is how to distinguish algorithms when reduced to Set, can't we just add some marker element and still retain UA?RainyDay

    Re-reading your post, I realize that maybe I didn't understand what you were referring to...
    I haven't seen Conor McBride's talks on linear logic.
    Regarding the proposal to "incorporate implementation details into our types" and then distinguish algorithms by "adding some marker element", I don't think it can be done this way. Types are really only pre- and post-conditions, and I don't see how you could describe an implementation as a function type.
  • Univalence as a Principle of Logic
    Edit: No, I can't say that. I do care. Is there a formal way to express finiteness of execution? I presume you mean finite in principle. If we stick with computability as logicality, aren't we excluding non-computable statements from being logical? I see the appeal but it also seems to unnecessarily conflate two perfectly good conceptsRainyDay

    I know that in Coq termination of computable functions is ensured by restricting the form of the allowed recursive functions, but there are many different ways to achieve the same result.
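    As a loose illustration (not how Coq actually enforces this, which is a syntactic guard check on Fixpoint definitions), here is the same discipline followed by hand in Python: every recursive call is made on a structurally smaller argument, so termination is evident by inspection:

```python
# A loose Python analogue of Coq's guard condition: the recursive call
# must be on a structurally smaller argument, so termination is evident.
# (Coq checks this syntactically; here we just follow the discipline.)

def add(n: int, m: int) -> int:
    """Addition defined by structural recursion on n (Peano-style)."""
    if n == 0:                    # base case: the smallest "structure"
        return m
    return 1 + add(n - 1, m)      # recursive call on a strictly smaller n

print(add(3, 4))  # 7
```

    The point is that a checker only has to look at the shape of the definition, not run the program, to know the function is total.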


    If we stick with computability as logicality, aren't we excluding non-computable statements from being logical?RainyDay

    The restriction to computability applies only to the set of procedures and functions used in proofs, not to the functions of the mathematical models
  • Arguments in favour of finitism.
    ∑_{n=1}^{∞} (-1)^n/n: as you can see, in this series we have not indexed the terms using negative numbers, and I think the series will not be well defined if we do not restrict the index set to the natural numbers (countable).Wittgenstein

    I agree that this series should be defined on the natural numbers, but I don't understand which part of my post this relates to. I said that a series, by definition, is a sum of a countable set of elements. Is this an objection to that point?
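    For what it's worth, here is a small Python sketch (the helper name is my own) of the point that a series is only ever evaluated through partial sums indexed by the natural numbers, taking the alternating series under discussion as the example:

```python
import math

# A series is by definition indexed by the natural numbers: we can only
# form partial sums s_N and take their limit. Here: sum of (-1)^n / n.
def partial_sum(N: int) -> float:
    return sum((-1) ** n / n for n in range(1, N + 1))

# The partial sums approach -ln(2); no uncountable index set is involved.
print(partial_sum(10_000))
print(-math.log(2))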
  • Univalence as a Principle of Logic
    I would have thought -- in fact I do think -- that each of these groups is an object in the category of groups; and that there are morphisms from one to the other and back whose compositions are the respective identities, hence the two groups are isomorphic. This is true in category theory.fishfry

    Yes, this is true for the category of groups built on top of ZF set theory. You take sets as objects and functions as morphisms. So you have an infinite set of objects that represent the cyclic group of order 4, all isomorphic to each other.

    This is what is called a "concrete category":
    "a category that is equipped with a faithful functor to the category of sets" (https://en.wikipedia.org/wiki/Concrete_category).

    So, what allows you to distinguish between the different implementations of the group is a functor that maps every object to a set built using your underlying set theory (in this case ZF set theory).

    But category theory can even be used as an axiomatic system, independent of ZF Set theory. So, from this point of view, objects and morphisms are "whatever" respects the axioms of a category.
    Now, the axioms of category theory require the existence of an "=" relation between morphisms, but they DON'T REQUIRE the existence of an "=" relation between objects.

    Now, if you describe a category in a NON CONCRETE way, by giving additional axioms on top of the basic ones (for example the existence of all products, the existence of limits, etc.), as is done in the definition of a topos (one of the fundamental structures of homotopy theory), there is no meaning in saying that two objects are equal, simply because NO WAY OF COMPARING TWO OBJECTS IS DEFINED.

    So, you can provide an equivalent description of the same two concrete constructions: "the group of integers mod 4 and the powers of i" by building two functors that map objects of the category of groups to their concrete realization as sets.


    I would think that UA simply says that from now on I should just think of there being one group, the cyclic group of order 4, and I shouldn't care that there are two different set-theoretic presentations of it. Which is how we normally think of it anyway. Which is how I'm approaching UA.fishfry

    Everything depends on which logic your definition of category is based on.
    If you take HoTT with UA as your logic, and define the group of integers mod 4 and the group of the powers of i as two different types of level 2 (the "Type" type in Coq), then you can build an equivalence between the two types and use it with UA to prove that the two objects are equal.
    The fact that they are equal means that you can substitute any occurrence of one type with an occurrence of the other type in any proposition. Does this mean that they are "the same" object? Well, I don't know... It means basically that they can be "used" as if they were the same without causing logical inconsistencies.
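    As a concrete sketch of the equivalence in question (plain Python, names of my own choosing), here is the explicit isomorphism that UA would promote to an equality of the two types:

```python
# Sketch: the explicit isomorphism behind the equivalence that UA would
# turn into an equality of the two types. Names here are my own.
Z4 = [0, 1, 2, 3]                      # integers mod 4, under addition
I4 = [1, 1j, -1, -1j]                  # powers of i, under multiplication

def f(n):            # f : Z4 -> I4
    return 1j ** n

def g(z):            # g : I4 -> Z4, the inverse
    return I4.index(z)

# f is a homomorphism: f(a + b mod 4) = f(a) * f(b) ...
assert all(f((a + b) % 4) == f(a) * f(b) for a in Z4 for b in Z4)
# ... and f, g are mutually inverse, so the two groups are isomorphic.
assert all(g(f(n)) == n for n in Z4) and all(f(g(z)) == z for z in I4)
print("Z/4 and the powers of i are isomorphic")
```

    UA does not produce this data; it takes such an equivalence as input and turns it into an equality of types, which is exactly what licenses substituting one for the other in propositions.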


    But your para is quite opaque to me. You say the integers mod 4 and the powers of i are NOT objects in the category of groups. I'm in deep trouble here, please throw me a lifeline.fishfry

    Let's say that they are not "group objects" in the axiomatic interpretation of categories (https://en.wikipedia.org/wiki/Group_object), but are objects in the concrete category of groups, corresponding to a functor in the axiomatic interpretation.


    But to see them as functors, I'm not sure I get that. Functors from "categories with the appropriate structure of morphisms to the category of groups." Ayyy. I'm in trouble. Help me out please.

    What are the categories with the appropriate structure etc.? Can you give me an example?
    fishfry

    The category of modules (https://en.wikipedia.org/wiki/Category_of_modules)

    and the category of vector spaces (https://ncatlab.org/nlab/show/Vect) over the complex numbers
  • Arguments in favour of finitism.
    Yes, I haven't finished reading the article in detail, but it sounds very interesting.
    However, I am not surprised that there is an axiomatization of euclidean geometry that doesn't make use of infinite sets. What I said is a very simple and obvious thing: if you don't allow the existence of infinite sets, you have to treat segments as a different kind of thing than a set of points.
  • Arguments in favour of finitism.
    OK, but I don't understand what your point is here...
  • Univalence as a Principle of Logic
    I thought we'd be able to incorporate implementation details into our types and differentiate them at that level followed by a truncation to Set as required for computation. Conor McBride and his talks on linear logic gave me that impression. If the problem is how to distinguish algorithms when reduced to Set, can't we just add some marker element and still retain UA?RainyDay

    Yes, of course you can model algorithms and even non-total functions in HoTT with UA. Indeed, you can model them even using Church's untyped lambda calculus, or only integer arithmetic (using Gödel encoding).
    The point is that the objects that correspond "directly" (without any encoding) to the axiomatization of HoTT with UA are not algorithms, but standard mathematical functions. And that makes this logic very convenient for standard mathematics, but less convenient (more complex constructions and proofs) for describing algorithms.
    In the same way, ZF set theory's objects are sets (described axiomatically), but you can use sets to build models of whatever you want. It's just that the representation of a category (or even of operations on natural numbers) by means of sets becomes extremely complex.
  • Univalence as a Principle of Logic
    Is the UA intended to apply to standard set theory?fishfry

    Here's a quote from the introduction of Univalence as a Principle of Logic:
    "The resulting picture of the mathematical universe is rather different from that corresponding to conventional set theoretic foundations"

    UA is intended to apply to category theory. If you use standard set theory, two objects of a category can be different but isomorphic. So, for example, to be formally correct you shouldn't be allowed to speak of THE initial object of a category, because it is not unique. But in category theory you are not interested in equality between sets, only in equality "up to isomorphism".
    Well, in HoTT (and in Coq) objects are not sets but types. And usually equality between types IS NOT DEFINED: you define only equality between elements of the same type. So, type theory is a more "natural" logic for category theory, because the concept of equality between objects, which doesn't make sense in category theory, IS NOT EXPRESSIBLE.
    UA is a way to give a meaning to equality between types (and even to higher-order types: types whose elements are types of a lower level) in terms of equivalences.

    Now if I imagine that there is exactly one set of each cardinality; and that the group of integers mod 4 and the powers of i are the same group (and not just two isomorphic groups) is that in the spirit of UA? Is this a simplified way of understanding it?fishfry

    The group of integers mod 4 and the powers of i are not objects in the category of groups, but functors from categories with the appropriate structure of morphisms to the category of groups. You can still use functors to build whatever structure you want, and in the category of functors, these structures are no longer isomorphic.
  • Univalence as a Principle of Logic
    I'm probably having trouble separating HoTT+UA from HoTT. Doesn't truncation allow us to ignore identifications where necessary anyway?RainyDay

    Well, for HoTT without UA I had in mind Coq with higher inductive types (https://ncatlab.org/nlab/show/higher+inductive+type).
    Well, you can use truncation to identify proofs (objects of type "Prop" in Coq), but we don't want to identify functions (objects of type "Set" in Coq). Identifying proofs of equality on functions does not prevent you from building different functions with the same input-output relation.

    Is there an example you had in mind where we'd not want isomorphic structures identified and also not want UA?RainyDay

    The obvious example is the Set type in Coq. Set is not meant to represent mathematical functions, but to represent algorithms (functions that can be "extracted" by the system as programs). If you assume UA for Set, all algorithms that compute the same function are equal, and of course you don't want this to be the case, because different implementations are very different in terms of execution time and memory usage.
    However, I believe it's still possible to have both a "Univalence" type where UA is assumed to be true, used to reason about extensional functions, and a "Set" type where UA is assumed to be false, used to reason about programs.

    P.S. Making isomorphic structures equal in Set would mean that you don't distinguish between different implementations of the same algebraic data structure. This is good for mathematics but not for programming.
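    To make the distinction concrete, here is a hedged Python sketch (function names are my own): two extensionally equal functions, i.e. with the same input-output relation, but with very different computational behaviour. These are exactly the programs one would not want identified in Set:

```python
# Two "different" programs that are the same mathematical function:
# extensionally equal (same input-output relation) but with very
# different cost. Assuming UA on Set would identify them.

def fib_slow(n: int) -> int:        # exponential time
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

def fib_fast(n: int) -> int:        # linear time
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The same function in the mathematical sense...
assert all(fib_slow(n) == fib_fast(n) for n in range(20))
# ...but only one of them is usable for large n.
print(fib_slow(10), fib_fast(10))
```
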
  • Arguments in favour of finitism.
    1.If the segment is not made up of points, is it a non zero measure or something else ?Wittgenstein

    A segment (1-dimensional) has a non-zero 1-dimensional measure, and can only be made up of objects with non-zero 1-dimensional measure. (I am speaking about the view of mathematics that ancient Greek mathematicians, for example Archimedes, had.)

    2.Can a divergent series be obtained from an uncountable set, ( I think it can be ) but can a convergent series be obtained from a uncountable set ? ( A sum that is definite must have a fundamental difference to a divergent one )Wittgenstein

    If you believe in finitism (as you proposed), there are no infinite sets, neither countable nor uncountable.
    Again, this is Archimedes' philosophy of mathematics, not mine.
    However, a series is BY DEFINITION a sum of terms indexed by the integers. An integral over the real numbers, instead, is a sum of terms indexed by the real numbers, which form an uncountable set in ZF set theory. So, in modern mathematics a convergent sum of an uncountable set of terms is perfectly normal.
  • Arguments in favour of finitism.
    an infinite convergent series sum is a different mathematical object compared to an infinite divergent series as we can have well defined results for earlier oneWittgenstein

    In every case the series (convergent or not) is made of a countable set of 1-dimensional segments of non-zero measure. You add segments to obtain a segment, not points.

    The measure of Cantor set is zero and that requires the existence of an infinite set.
    Can you clarify on the representations of geometrical objects using an infinite set, I think finite sets suffice.
    Wittgenstein

    If you think of a segment as made of a set of points, it must be an uncountable set of points, because a countable set of points would have zero 1-dimensional measure.
    If you instead say that points cannot be used to build a segment because points have no measure, then you agree with me that points and segments are two different kinds of "objects", so one cannot be built starting from the other.
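    The standard measure-theoretic argument behind the claim that a countable set of points has zero 1-dimensional measure can be sketched as follows (a textbook computation, not from the original post):

```latex
% Cover the n-th point by an interval of length \epsilon / 2^n:
\mu\bigl(\{x_1, x_2, x_3, \dots\}\bigr)
  \;\le\; \sum_{n=1}^{\infty} \frac{\epsilon}{2^{n}} \;=\; \epsilon
  \qquad \text{for every } \epsilon > 0,
  \qquad \text{hence } \mu\bigl(\{x_1, x_2, \dots\}\bigr) = 0.
```

    Note that the argument needs a countable infinity of covering intervals, so it is exactly the kind of reasoning a finitist would refuse.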
  • Arguments in favour of finitism.
    If you rule out infinite sets you have to admit that measures of geometrical objects are not sets. For example:
    - how many different lengths can a segment have?
    - how many different directions are there? (intended as different unit vectors in 3D space)

    I think ancient Greek mathematicians had this point of view: there are discrete numbers used for counting, which are objects with zero dimension; there are lengths, with dimension 1, areas with dimension 2, and volumes with dimension 3. And each of these "kinds" of measures is not comparable with the others. So, the question of how many "points" are needed to form a line does not really make sense, since objects of dimension 0 and 1 are not comparable. So, there is no need for infinite sets.
  • Univalence as a Principle of Logic
    One point that I found enlightening was the suggestion that, to paraphrase, invariance under homotopy equivalence is a hallmark of logicality.RainyDay

    I don't agree on this point. Of course you can define "logicality" however you want, but in my opinion HoTT without the Univalence Axiom has as much right to be regarded as "real" logic as HoTT with Univalence, or as ZF set theory in first-order logic.
    The point is that the "natural" model of HoTT without UA is not the usual mathematical universe that mathematicians have in mind, but a much wider one. For example, function extensionality is not derivable. That's because the "functions" described by HoTT without UA contain information on how they are executed, and not only the relation between input and output values. So, without UA, two functions can be different because they have different "implementations", even if they have the same input-output correspondence. The same is true for isomorphic structures: I think there are alternative axioms, not compatible with UA, that describe perfectly reasonable mathematical models where isomorphic structures should NOT be identified, just as there are perfectly reasonable mathematical models where the law of excluded middle should be assumed to be FALSE (for example, if you consider the possibility of partial functions).
    The remarkable characteristic of HoTT with UA is that its "natural" model is the same model assumed by category theory.

    If I had to choose a characteristic of pure "logicality" (i.e. a characteristic of all logics), I would rather say that it's related only to computability and finiteness of execution.
  • Is there a more complete scientific model than Anaximander's?
    If there were a third world war with nuclear weapons that completely destroyed humanity and civilization, would that be a proof that Anaximander's theory is false?
  • Is there a more complete scientific model than Anaximander's?
    Could you explain what you mean by "projections"? Maybe an example?
  • Is there a more complete scientific model than Anaximander's?
    ernestm

    I think the main problem with Anaximander's theory, if you want to see it as a modern scientific theory, is that it's not falsifiable (https://en.wikipedia.org/wiki/Falsifiability). That means that you should be able to conceive an experiment whose result could be NOT in agreement with your theory.
    However, it is true that some modern models of the universe have the same problem.
  • Does logic and definition memory breed more logic?
    Well, I believe that some form of artificial intelligence capable of recognizing interesting theorems should be possible one day. But the interesting questions are:
    - Will this artificial intelligence agree with human judgement on which theorems are to be considered important ?
    - Should it be based on the interpretation of logical definitions as structures of the physical world (for example recognizing geometry theorems based on some kind of artificial vision), or can it be done without being aware of the physical world at all ?
    ( see my other discussion on https://thephilosophyforum.com/discussion/5792/is-mathematics-discovered-or-invented/p1 )
  • Does logic and definition memory breed more logic?
    Maybe I didn't understand what you meant. From what I know of artificial intelligence, the main point is pattern recognition. But you spoke about "continually adding definitions and logical conclusions to your brain". I don't see how this is related to artificial intelligence.

    If you try to generate all possible definitions and apply to them all possible rules of logic, you obtain an infinite list of theorems, the vast majority of which are of absolutely no interest (like the list of all possible results of adding two integers).

    Do you think an artificial intelligence algorithm should be able to find patterns in this infinite list of theorems? Or maybe there should be a way to use artificial intelligence to generate only theorems that have some value?
  • Does logic and definition memory breed more logic?
    I wouldn't call that kind of logical deduction "intelligence". This is the kind of deduction that a computer can do automatically. Logic is basically only a "calculus" that a computer can perform much more easily than a person.
    I think the kind of logical results (or theorems) that are related to intelligence contain something more than syntactic formulas with an obvious interpretation.
  • What is and isn’t Absurd?
    My point was more about setting up propositions that refuse analysisI like sushi

    If you can set up a proposition that refuses analysis, it means that your logic is wrong. The purpose of logic (the reason it was created) is to rule out all propositions that refuse analysis.
  • Musings On Infinity
    We can NOT currently (as far as I understand it) get modern physics off the ground using nonconstructive math. The 't' in the Schrödinger equation is taken to be a variable that ranges over the standard mathematical real numbers, noncomputables and all.

    I am aware of only one book that attempts to get physics working on nonconstructive math. That's a tiny drop of water in an ocean that needs to be filled.
    fishfry

    OK, I am not an expert either :-)
    What I can say for sure is that there is a full formalization of the real numbers in the Coq standard library (https://coq.inria.fr/distrib/current/stdlib/).

    I have used the library for a long time (at the beginning at university, and afterwards because I like to play with math), but to be honest I didn't even know what axiomatization it was based on :-)

    I searched the documentation (easier than digging into the library looking for axioms):

    (https://hal.inria.fr/hal-00860648v1/document, page 6)
    "Note that the standard library sometimes makes use of the excluded-middle axiom in addition to the previous ones. The CoqTail project, however, has shown that it was unneeded. So, for the purpose of this work, we consider that this axiom is not part of the ones defining real numbers in Coq's standard library"

    From the point of view of calculus (and physics) you can use limits and derivatives perfectly well without knowing how they are defined in the base libraries (you only apply theorems).


    I will agree that synthetic differential geometry is some kind of counterexample to my claim but I don't know enough about it. But SDG is not the same as nonconstructive approaches, is it? I thought SDG is just a categorical approach to infinitesimals so it's more like a complementary idea to nonstandard analysis. Not something directly related to homotopy type theory or neo-intuitionism or whatever.fishfry

    SDG represents space as a topos. And a topos is the model (or at least the "standard" model) of Martin-Löf type theory (lots of details omitted...)

    (From https://ncatlab.org/nlab/show/topos)
    "toposes are contexts with a powerful internal logic. The internal logic of toposes is intuitionistic higher order logic. This means that, while the law of excluded middle and the axiom of choice may fail, apart from that, every logical statement not depending on these does hold internal to every topos."
    P.S. Actually, the "standard" model for Martin-Löf type theory with extensional equality is believed to be an (∞,1)-topos, but I don't know if anybody has found a proof of this.


    Simple explanation:

    In ZF set theory, everything is made of sets (or, if you prefer, everything is a set).
    Of course you can represent (or build) whatever you want with sets, by combining them with inclusion and equality. But the fundamental "objects" described by the theory are sets.

    In Martin-Löf type theory, everything is made of types (objects) and functions (morphisms).
    Essentially, the rules of logic describe what you can "build" starting from an arbitrary finite collection of objects and morphisms.
    The "thing" that results from carrying out ALL THE POSSIBLE CONSTRUCTIONS starting from this arbitrary collection turns out to have the algebraic structure of a topos (the free topos built from the given initial objects and morphisms).

    In a similar way, for example, the axioms of classical propositional logic correspond to a distributive lattice, and the axioms of simply typed lambda calculus correspond to a cartesian closed category.

    Well, SDG is an attempt to add to a topos enough structure (corresponding to axioms of the internal logic) to be able to define on it differential operators that respect the same rules of calculus that we use in physics.
    The main difference is that this space is not made of an infinite set of "points" of zero size, but of an infinite set of lines of infinitesimal size. From the point of view of logic, the model is not contradictory. From the point of view of "intuition", I am not sure whether it's more "crazy" to believe that you can "attach" one after the other (continuity) an infinite set of objects of length zero to obtain a finite segment, or that you can "attach" one after the other an infinite set of objects of infinitesimal length to obtain a finite segment.


    I would never call B-T a "physical" result. On the contrary it's only a technical result regarding the isometry group of Euclidean 3-space, which happens to contain a copy of the free group on two letters, which has a paradoxical decompositionfishfry

    I don't know what an isometry group is :worry:


    I've heard of lambda calculus but don't know anything about it except that everyone thinks it's terribly important. I gave the Wiki a quick scan and they pointed out in their section on beta reduction that "The lambda calculus may be seen as an idealised version of a functional programming language ..."fishfry

    :-) Lambda calculus (let's say simply typed lambda calculus, to fix ideas) is a small recursive set of rules that says how you can build new objects starting from objects that have already been built: exactly like Peano natural numbers, recursive trees, and all the other recursive constructions that you can think of.
    The objects that you can build in this way have exactly the same structure as the arrows and the objects of a cartesian closed category. Every arrow can start from a type and end in another type, and there is the possibility to build types from all arrows that start from the same source object and end in the same target object (the exponential objects of the category).
    So, category theory and lambda calculus represent the same structure, seen from two different points of view:
    - category theory describes the collection of all possible objects and arrows seen from a GLOBAL point of view
    - lambda calculus describes how you can build the whole structure putting together one piece at a time, from a LOCAL point of view.

    About beta reduction: if you are operating on a universe of functions that you don't know, you obviously cannot execute them, because you don't know how they are "implemented". And, in general, they don't have to be "computable" functions at all. You can only execute the functions that you build within the internal language of the logic.
    Well, beta reduction is a rule that says: if you apply the function that takes the argument a into the formula "... a ..." (lambda a -> ... a ...) to the argument "whatever formula you want", you obtain as a result (... "whatever formula you want" ...). That is: you substitute the expression of the argument for the symbol that represents the input inside the formula. It's the most obvious and simple thing that you can say a function "must" do. This rule does not care how the function is implemented; it only operates formally, composing functions together.

    Well, the "strange" thing is that this rule can be used to calculate the value of the function IF the function is computable. It means that you can "execute" the function and obtain a result without really executing it, but by transforming it recursively from one form to the next, until you reach a "standard form". This standard form "represents" the result of the execution of the function (the output). In other words, this is a very clever way to mix computable functions that you built yourself with other "external" functions that you assume to exist but have no idea what they are doing. The same thing happens in logic: you can reduce (A /\ B) \/ (not C) to a normal form made only of conjunctions without having any idea of what the propositions A, B and C say.
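    The substitution step described here can be sketched in a few lines of Python (my own minimal encoding of terms, assuming all bound variable names are distinct so that no capture-avoidance is needed):

```python
# A minimal sketch of one beta-reduction step, assuming all bound
# variable names are distinct (so no capture-avoidance is needed).
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a)

def substitute(term, x, arg):
    """Replace free occurrences of variable x in term by arg."""
    kind = term[0]
    if kind == 'var':
        return arg if term[1] == x else term
    if kind == 'lam':
        if term[1] == x:                  # x is shadowed under this binder
            return term
        return ('lam', term[1], substitute(term[2], x, arg))
    return ('app', substitute(term[1], x, arg), substitute(term[2], x, arg))

def beta_step(term):
    """(lambda x. body) arg  ~>  body[x := arg]"""
    if term[0] == 'app' and term[1][0] == 'lam':
        _, x, body = term[1]
        return substitute(body, x, term[2])
    return term

# (lambda a. f a) y  ~>  f y   -- without ever "executing" f
redex = ('app', ('lam', 'a', ('app', ('var', 'f'), ('var', 'a'))), ('var', 'y'))
print(beta_step(redex))  # ('app', ('var', 'f'), ('var', 'y'))
```

    Note that the rule is purely formal: it rewrites the term (A applied to an argument) without needing to know anything about how f "works", which is exactly the point above.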


    Ok. But a functional programming language can't do anything a Turing machine can't, and in fact Turing proved that TMs are equivalent to the lambda calculus, and Turing machines can't crank out the digits of noncomputable numbers. So once again I'm up to my original brick wall of understanding. Or misunderstanding.fishfry

    Exactly. All that you said is correct. You can operate with logic only using a Turing machine. But the model on which you operate is a category, without ANY limitation of size. This is exactly the same thing that you do in set theory: you assume that there exists a universe made of sets of whatever size you want. And you say that if there exist two sets A and B, then there must also exist "A intersect B", "A union B", etc. The things that you can build with logic operators (conclusions) starting from a finite number of terms of the language (hypotheses) always form a countable set, in whatever formal language you use.
    And by describing the finite structures that you can build from arbitrary finite sets of other objects, you define a structure on the underlying universe on which you operate.
    But you can't impose a limit on the "infiniteness" of the size of the underlying universe only by imposing rules on the finite constructions that are part of it. And that is true for intuitionistic logic AND for classical logic. You cannot "build" an irrational number in set theory either: you can only assume that it exists because it's compatible with the rules that you use to build sets starting from other sets!
  • Musings On Infinity
    You are advocating nonconstructive foundations. It's in that context that my remarks make sense. We can NOT currently (as far as I understand it) get modern physics off the ground using nonconstructive mathfishfry

    Wait a moment: I am advocating Martin-Löf type theory, and one of its particular versions called the "Calculus of Inductive Constructions", which is, as the name says, CONSTRUCTIVE mathematics.

    From https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_mathematics):
    "Constructive mathematics
    Much constructive mathematics uses intuitionistic logic, which is essentially classical logic without the law of the excluded middle."


    If I could only understand this. You are saying that if I can prove a theorem at all, I can prove it using constructive methods, by adding certain axioms. This could be within my grasp. What are the axioms that make this work?fishfry

    OK, maybe this is the best point to start with.

    Here's MY PERSONAL (not sure if common) interpretation of the "meaning" of classical and intuitionistic logic.

    First of all, definitions:
    - a model is something on which you can perform "experiments" and verify the results.
    - a proposition is the description of an experiment performed on a model.

    The result of every experiment (if you can perform it) is always uniquely determined: True or False.
    This is the same thing as a physical experiment: if it's doable and you prepared the experiment to give a true-or-false result, you'll always get an answer: it can't happen that the world stops working because it doesn't know how to answer your experiment!

    But there is a difference from physics, and that's the difference between mathematical functions and physical experiments: in a mathematical model, for the same experiment you'll always get the same result: a function is supposed to be some fixed, well-identified mechanism that always produces the same output if you put in the same input.
    In a physical model instead, as we know from quantum mechanics, this is not always true.

    Let's begin from what is probably the most controversial point: the law of excluded middle (LEM).

    So, LEM says that "all experiments are performable" (giving as a result True or False).
    Not assuming LEM means that in some cases A is a performable experiment, but (A \/ ~A) is not.

    The obvious example is the termination of a Turing machine:
    A means: "the program A terminates".
    You run the program and verify that it's true (we assume that A always does the same thing, so it's enough to run it once).

    NOT A means: the program A never terminates:
    You run the program and wait... forever! Your experiment will never end, so you'll never be able to perform the experiment that corresponds to the proposition "A is false".

    Nevertheless, we know that an answer "exists": either the program terminates or it doesn't, no other possibilities! It's just that the answer is not implementable as an experiment on the model.
    (of course in most cases you can prove that A is false with logic, but here we are not speaking about "proving" propositions but "verifying" on the model if the proposition is true or false)
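    This asymmetry can be sketched in Python (a toy model of mine, with the program represented as a generator and a hypothetical step-bounded runner): observing termination is possible, while no finite observation can establish non-termination.

```python
def runs_within(program, fuel):
    """Semi-decide "program terminates": step the program at most
    `fuel` times.  Returns True if it halted, or None ("don't know
    yet") -- it can never return a definite False."""
    state = program()              # the program is modelled as a generator
    for _ in range(fuel):
        try:
            next(state)            # one computation step
        except StopIteration:
            return True            # observed termination: A is verified
    return None                    # no finite observation verifies "NOT A"

def halting():                     # terminates after 3 steps
    for _ in range(3):
        yield

def looping():                     # never terminates
    while True:
        yield

print(runs_within(halting, 10))    # True
print(runs_within(looping, 10))    # None: still unknown, forever
```

    However much fuel you give it, the second answer stays `None`: that is the sense in which "A terminates" is a performable experiment while "A never terminates" is not.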

    So, in classical logic we assume that EVERY proposition of the language describes an "experiment" that is executable and, when executed, returns an answer True or False.

    In intuitionistic theory instead, we assume that there are propositions A for which "NOT A" is not a feasible experiment. So, you shouldn't be allowed to write "NOT A" as an assumption for an arbitrary A, because for some propositions A, "NOT A" has no meaning in our model.

    The "trick" is that in intuitionistic logic "NOT" is not defined as the boolean function {True -> False; False -> True}, but as "A implies Absurd" ( A -> _|_ ), where Absurd is the "impossible experiment": the experiment that does not return any result, because it's associated with a contradictory proposition. Note that IF A is a possible experiment, then "A -> _|_" is always a possible experiment in intuitionistic logic, because it's implemented as a logical derivation made of a finite number of steps starting from A (that's the reason why the internal language of the logic should be allowed to implement only total functions), and the provability of a theorem is a syntactic operation made of a finite number of steps, independent of the model.
    In this way, the rules of logic guarantee that you can never build an "impossible" experiment IF you start from a possible one.
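    For what it's worth, this is literally how negation is set up in proof assistants. A small Lean sketch (the names `myNot` and `dni` are mine): negation is *defined* as "A implies False", and double-negation introduction is constructively provable, while the converse needs a classical axiom.

```lean
-- Negation defined exactly as "A implies the impossible experiment":
def myNot (A : Prop) : Prop := A → False

-- From a proof of A we can always build a proof of ¬¬A,
-- by a finite, purely syntactic derivation:
theorem dni (A : Prop) (a : A) : myNot (myNot A) :=
  fun na => na a

-- The reverse direction, ¬¬A → A, is NOT provable constructively;
-- Lean only provides it through the classical axioms
-- (`Classical.byContradiction`).
```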

    Now, in classical logic "A -> _|_" is equivalent to "NOT A", because there are no "impossible" experiments ( except _|_ that is not derivable in any logic ).
    But in intuitionistic logic you can assume as an axiom the proposition "A \/ (A->_|_)" (usually written A \/ ~A, but with the intuitionistic definition of the NOT function).
    "A \/ (A->_|_)" in intuitionistic logic is some kind of "oracle" function that takes as an input a proposition A and ALWAYS returns one of the two propositions: A or (A->_|_).
    If this function exists, then you can use it to decide the result of every experiment, and (A->_|_) becomes equivalent to the "NOT A" of classical logic (no more impossible experiments). If this function does not exist, there are impossible experiments, and "A \/ (A->_|_)" cannot be built in any way starting from intuitionistic rules.
    So, this means that:
    - "classical logic" = "intuitionistic logic" + LEM.
    - "models of classical logic" = "models of intuitionistic logic" - "non realizable experiments"
    You can use intuitionistic logic to separate the finitary deductions (not using any axiom) from the non-finitary ones (using LEM, or some other axiom).
    (of course, this is a huge simplification, since there are many kinds of "classical" and "intuitionistic" logic, even without considering the axioms of set theory)


    There are also other models in which not all propositions of classical logic make sense: for example, if you interpret propositions as experiments in quantum mechanics, the proposition A /\ B does not make sense when A and B are incompatible observables: A and B separately are measurable quantities, but the pair (A, B) is not.
    So, when you use classical logic to reason about experiments in quantum mechanics, you have to be careful not to use the expression A /\ B when A and B correspond to incompatible observables.
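    This can be made concrete with a tiny Python check (my illustration, using the standard Pauli matrices as the textbook example of incompatible spin observables): two observables are compatible exactly when their matrices commute, and these don't.

```python
def matmul(a, b):
    """2x2 matrix product, using plain Python lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Pauli matrices: Z measures spin along the z axis, X along the x axis.
Z = [[1, 0], [0, -1]]
X = [[0, 1], [1, 0]]

print(matmul(X, Z))  # [[0, -1], [1, 0]]
print(matmul(Z, X))  # [[0, 1], [-1, 0]]
# XZ != ZX: the observables don't commute, so there is no joint
# experiment measuring "X and Z" -- the proposition A /\ B has no model.
```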
  • Musings On Infinity
    I am glad that you replied to my post, so I can explain in more detail my point of view :smile:

    I know of Bishop's book and I even know a book where a physicist tried to do constructive physics. That's a project in its infancy but you constructivists will have to face the problem someday. Physics is founded on classical math, ie continuity and the real numbers as defined in set theory.fishfry

    I don't believe that physics is founded on ZF Set theory, if that's what you mean by "classical math".
    Physics is based on calculus (for the most part), which was created starting from the 17th century (Newton's Principia was published in 1687), at a time when logic wasn't even considered part of mathematics.

    In fact, I believe that most physicists don't even know the exact formulation of the axioms of ZF set theory, and that's because these axioms are never used directly in physics. What really matters is that you can use logic to prove that your results follow from the algebraic rules of calculus, differential geometry, group theory, etc...

    If this were entirely true, the distinction between the various axiomatizations of the real numbers would be irrelevant for physics; ZF set theory only happens to be the first axiomatization to be discovered.

    But this is in fact not completely true. There are some physical results that are not the same in all axiomatizations. For example, the Banach-Tarski theorem: if you believe that physical space can't be split into pieces and then reassembled with double the volume, you have to conclude that ZFC's axiomatization of the real numbers is not the right model for physical space.

    A more "practical" example is Feynman's path integrals: they can be used to calculate physical quantities with extreme precision, provided you don't take literally the result that follows from ZFC's axioms. In fact, from the ZFC axioms you can derive that the result is "infinite". But if you take only some terms of the infinite (divergent) series, the result matches experiment with very high accuracy!
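    There is a purely mathematical toy version of this phenomenon (my example, not an actual path integral): Euler's divergent series sum((-1)^n n! x^n) is the asymptotic expansion of the convergent integral of exp(-t)/(1+x*t) over [0, inf), and truncating the series near its smallest term approximates the integral very well, even though the full series diverges. A Python check:

```python
import math

def integral(x, upper=60.0, steps=600_000):
    """Numerically integrate exp(-t)/(1 + x*t) on [0, upper]
    with the midpoint rule (the tail beyond 60 is negligible)."""
    h = upper / steps
    return sum(math.exp(-(i + 0.5) * h) / (1 + x * (i + 0.5) * h)
               for i in range(steps)) * h

def partial_sum(x, n):
    """Partial sum of Euler's divergent series sum (-1)^k k! x^k."""
    return sum((-1) ** k * math.factorial(k) * x ** k for k in range(n + 1))

x = 0.1
exact = integral(x)
best = partial_sum(x, 9)                 # truncate near the smallest term
print(abs(best - exact))                 # below 1e-3: excellent agreement
print(abs(partial_sum(x, 40) - exact))   # enormous: the divergence kicks in
```

    Keeping "only some terms" gives an answer accurate to roughly the size of the smallest term, while summing "all" the terms is meaningless: the series, taken literally, is infinite.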

    I believe that the most popular explanation of this fact among physicists today is that physical space is in fact discrete. But there is no experimental evidence of this so far.
    ( The same thing happens with energy: if energy is continuous you get infinite results for black-body radiation; with Planck's quantization you get the correct results. )
    But in physics you can assume nothing until you find experimental evidence, because nature has surprised us so many times! However, if I had to guess, discrete space (like the pixels of a video game) is too "ugly" to be true: nature has shown a preference for "beautiful" mathematical structures.
    I think an interesting question is: is there an axiomatization of a continuous space (where you can take measures represented as real numbers) where Feynman integrals make sense and are convergent? I don't know, but I don't see a reason why this shouldn't be possible.


    I am grateful for your summary and I will try to work through it at some point. I have to confess I did not dive into this exposition but I'm glad you wrote itfishfry

    I'll try to give you a simpler explanation: in logic you have propositions, terms and relations. Martin-Löf type theory ( and HOTT too ) reduces all three of these concepts to only one: functions. That's basically the reason why it has a simple categorical model.
    The functions that you consider in the theory are, exactly as for morphisms in category theory, not limited in any way. Now the question is: how is it possible to use, and reason about, functions that are not computable? Obviously, you can't "execute" such a function on a given argument to check the result. Well, the "trick" is lambda calculus! With lambda calculus you can "execute" a function by what's called "beta reduction". That means you can transform and compose functions from one form into another (equivalent) one, using very simple rules that are supposed to be respected by everything that can be called a "function".
    The whole point of constructivism is that you can assume to exist for sure only the functions that you can effectively compute. All the others can enter the theory only as premises of your deductions (not as axioms!). And that is enough to prove all the mathematical theorems that can be proved with classical logic, once you add the appropriate axioms corresponding to the "non-computable" parts of classical logic. In this way you make clear the distinction between the computable and non-computable parts of logic!
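    A toy illustration in Python of "everything is a function": Church numerals encode numbers as pure functions, and computing with them is just function application (Python's evaluator is doing here the work that beta reduction does in the lambda calculus):

```python
# Church numerals: the number n is the function that applies f, n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: n(m(f))

def to_int(n):
    """Decode a Church numeral by counting how many times f is applied."""
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)

print(to_int(add(two)(three)))  # 5
print(to_int(mul(two)(three)))  # 6
```

    Numbers, addition and multiplication are all plain functions, and "running" them is nothing but composing and applying them, which is the spirit of the reduction rules mentioned above.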

    [ that's enough for today.. :-) ]
  • Musings On Infinity
    I can't agree that intuition is useful here. Intuitively, the even numbers adjoined with a single odd number are "larger" than the even numbers; and in fact this is confirmed by the proper subset relationship. But these two sets have the same cardinality. This is specifically a point of confusion for beginners. I reject naive intuition here and insist that "bigger" means injection but no surjection, end of story and no intuition allowed! Intuition is exactly what gets students into trouble when they first learn the math of infinity.fishfry

    OK, you are right. Let's stick to a formal logic that uses only the forall-exists language. But we have to keep in mind that the meaning of "forall" and "exists" is determined only by the rules of the language. We can trust these rules because they are verified in a "physically verifiable" (intuitionistic?) model (such as the "iterate and check on countable sets" model that I described before), but they ARE NOT REQUIRED TO HAVE THAT PHYSICAL MEANING, in the same way that the word "larger" doesn't have to have the intuitive meaning related to our physical world.

    Moreover if you reject the axiom of choice (necessary if you reject LEM) there are "amorphous" sets that have no sensible cardinality at all. Such sets defy anyone's intuition.fishfry

    I didn't know what amorphous sets were. I searched on Wikipedia:
    "After Cohen's initial work on forcing in 1963, proofs of the consistency of amorphous sets with Zermelo–Fraenkel were obtained.[3]".

    So, I looked for the proof of consistency with Zermelo-Fraenkel, to see for which model amorphous sets make sense... but I don't have access to [3] :-(

    However, I would guess that the crucial point here is that you define "infinite" as "the existence of a one-to-one correspondence with a proper subset", but the definition of subset is given by a proposition of the language, and everything depends on what propositions you can express in the language.
    The axiom of choice, expressed in set theory, expands the set of propositions that you can express, and this has the effect of limiting the sets that you can define.
    But the set-theoretical axiom of choice is not verifiable in any "concrete" (intuitionistic) model, just as amorphous sets aren't.
  • Is mathematics discovered or invented
    And where does this map exist? Where is the 'place' where this map is stored and retained, ready for later use? The only thing I know of that can store an idea is a conscious mind. Perhaps there is some other container that can also achieve this, but what and where is it, this store?Pattern-chaser

    An idea is arbitrary data, such as a picture. For this you need a memory to store information, because there are a lot of possible combinations of shapes and colors (or pixels, if you want) that can form a picture, and you don't know which one of them is the one contained in the memory. But often interesting mathematical objects need much less information to be stored than it would seem: the Mandelbrot set is a very good example: just a couple of formulas encode an apparently infinite quantity of shapes. In a sense, the Mandelbrot formula is an excellent image-compression algorithm.
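    To make the compression point concrete, here is a minimal Python sketch: the entire (apparently infinitely detailed) picture below is regenerated from the single iteration z -> z² + c, with essentially no stored data.

```python
def in_mandelbrot(c, max_iter=50):
    """The whole Mandelbrot set is encoded by iterating z -> z*z + c
    and checking whether the orbit stays bounded."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# Render a tiny ASCII view: a rich picture from a two-line formula.
for y in range(11):
    print("".join("#" if in_mandelbrot(complex(-2 + 3 * xi / 40,
                                               -1.1 + 2.2 * y / 10))
                  else "."
                  for xi in range(41)))
```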
    Now the question is: how much information is needed to store all possible interesting mathematical objects? To describe one of them, you need information saying which one it is. But describing all of them, from my point of view, is like describing all possible books that can be written in every possible language: it's just "the combination of all strings of limited length", and that is a complete description.
    Now, my idea is: if "interesting" mathematical objects are in some way identifiable by a simple concept, it's conceivable that the description of "all of them" needs no information at all.
    I don't believe that the physical universe is "made" of mathematical objects, but rather that the laws of physics in some way favour the development of objects that "resemble" the mathematical structures we judge "interesting". Why that coincidence? Because the same laws of physics are at the base of the evolution of the human mind: in some way the human mind "favours" the recognition of the same structures, because these structures are "favoured" in the whole universe, and our mind is part of it.
    Of course there is an enormous quantity of "information" in the physical world around us, and the claim that its map contains no information doesn't seem to make sense. But the information that we see could instead be related to our very particular "position" inside the universe (something like living in a very narrow subset of the Mandelbrot set).
    However, this is only a vague idea: you would have to define how to measure this "information", and even then, this doesn't explain where the information about the laws of physics is contained, and probably a thousand other things that make no sense.
  • Musings On Infinity
    You mentioned a lot of other interesting things that I'll try to get to later. One was about LEM and HOTT. I was not under the impression that HOTT is a constructivist theory.fishfry

    HOTT is Martin-Löf Type Theory with higher inductive types, plus Voevodsky's "univalence axiom". So, it's not so simple:
    first of all, univalence implies neither excluded middle nor the axiom of choice. So, in this sense, it could be called intuitionistic. But if by intuitionistic you mean that all the internal functions of the language are computable, then the presence of an axiom makes the theory non-computable (if the axiom were computable it wouldn't be needed as an axiom).
    However, both the axiom of choice and excluded middle are consistent with Martin-Löf Type Theory. So there is no problem with including them in the theory (if you don't require the theory to be intuitionistic).
    But the main distinctive point of HOTT is that it is a type theory, and not first-order logic with axioms from set theory. This makes it much easier to use in practice for real mathematics ( I am pretty sure that nobody uses formal set theory for real mathematics: take a look at the post on this site on how to prove 1 + 1 = 2 :-) ).

    On the other side, there were a couple of things that were not completely "compatible" with standard mathematics, and both of them are magically "fixed" by the univalence axiom. One of them (the simpler one) is that functions in type theory are not directly interpretable in the usual way, as input-output correspondences, but rather as lambda terms. The difference is that two functions can be different because they are implemented in different ways (different algorithms) while being equal as input-output correspondences (the principle that such functions are equal is called "function extensionality", and univalence implies it). The other, more important point is that there are two kinds of "equality" in type theory, and neither of them matches exactly the equality of set theory. I don't think I can explain this in a few words, and this post is already becoming too long... :-)
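    The intensional-vs-extensional point has a loose everyday analogue in Python (only an analogy: Python compares function objects by identity, roughly like an intensional equality, while type-theoretic equality is a much subtler notion):

```python
# Two different algorithms computing the same input-output function.
def double_by_add(n):
    return n + n

def double_by_shift(n):
    return n << 1        # a different implementation of "doubling"

# Intensionally ("as algorithms") they are distinct objects...
print(double_by_add == double_by_shift)   # False

# ...but extensionally (as input-output correspondences) they agree
# everywhere we check:
print(all(double_by_add(n) == double_by_shift(n) for n in range(1000)))  # True
```

    Function extensionality is exactly the principle that the second kind of agreement should count as equality of the functions themselves.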

    I'm sure Voevodsky believed in uncountable sets.fishfry
    Yes of course, but this in my opinion is not really related to intuitionism.

    A machine could verify Cantor's theorem. So I am not sure that HOTT and constructive math are the same.fishfry

    A machine could verify any theory based on a formal system: this, I believe, can be taken as a definition of "formal system".

    I'm not sure what that means. You can't iterate through a set unless it's well-ordered. And there are sets that are not well-ordered, unless you accept the axiom of choice, which implies LEM. How would you iterate through the real numbers, for example? Here's how I'd do it. First, well-order the reals. Second, use transfinite recursion. Those are two powerful principles of math not available to constructivists. I don't know how constructivists feel about transfinite recursion, but it applies to very large sets that constructivists can't possibly believe in.

    But the "for all" operator can be applied to the reals without the need for such powerful machinery as the axiom of choice and transfinite recursion. You just say "for all reals" and you're good. That's much more powerful than iteration IMO.
    fishfry

    I wanted to take a very simple countable model for an intuitionistic theory. You can use this model to "understand" the reason why the rules of logic are those particular ones. This is useful because if a theory has a model, it is surely not contradictory. And once you are sure the rules are not contradictory, you can reinterpret them on another model that you don't understand (for example, one with an uncountable set of elements) and assume that they have to make sense for this model too. You can't iterate through the real numbers with a procedure made of finite steps, but you can give "for all reals" some physical meaning (covering a 1-dimensional segment with a 2-dimensional figure, for example) that follows the same rules.
  • Musings On Infinity
    First of all, let's look for a definition of "constructive mathematics", because I have the impression that we are not speaking about the same thing.

    https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_mathematics)
    "In the philosophy of mathematics, constructivism asserts that it is necessary to find (or "construct") a mathematical object to prove that it exists. In classical mathematics, one can prove the existence of a mathematical object without "finding" that object explicitly, by assuming its non-existence and then deriving a contradiction from that assumption. This proof by contradiction is not constructively valid. The constructive viewpoint involves a verificational interpretation of the existential quantifier, which is at odds with its classical interpretation.

    There are many forms of constructivism.[1] These include the program of intuitionism founded by Brouwer, the finitism of Hilbert and Bernays, the constructive recursive mathematics of Shanin and Markov, and Bishop's program of constructive analysis. Constructivism also includes the study of constructive set theories such as CZF and the study of topos theory."

    Well, the constructivist theory that I know quite well is the "Calculus of Inductive Constructions", or CIC ("https://en.wikipedia.org/wiki/Calculus_of_constructions"), implemented in the Coq proof assistant (https://coq.inria.fr/).

    This is a particular version of "Martin-Lof Type Theory" (https://en.wikipedia.org/wiki/Intuitionistic_type_theory)

    Here's a summary of the main properties of Coq's logic: https://github.com/coq/coq/wiki/The-Logic-of-Coq

    Basically, Coq is a programming language (let's call this the "internal language" of the logic) based on dependently typed lambda calculus (https://en.wikipedia.org/wiki/Dependent_type) that allows you to build only total functions: in this language it's impossible to define a function that does not terminate (so it's less powerful than the full untyped lambda calculus, or a Turing machine).

    However, using the internal language you can operate on inductive (and co-inductive) data structures and on functions (the models that you can speak about, and operate on, using the internal language) that ARE NOT REQUIRED TO BE COMPUTABLE.

    For example, you can say "let R be a field and let F be a function such that forall x:R, F(x)^2 = x; let x be F(2)". To make things simpler, you can represent real numbers as functions from natural numbers to the finite set {0...9 and comma} (the infinite decimal expansion of the number), and then you can prove that the algorithm that produces the digits of the square root of two IS a real number that solves your equation. Of course "x" is not a function you can fully evaluate (it would not terminate if you executed it on all natural numbers!), but to build it you only need a finitely defined algorithm.
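    To make the "digit stream" idea concrete, here is a Python sketch (Python standing in for Coq's internal language; this is my illustration, not Coq code): each digit of the decimal expansion of sqrt(2) is produced by a terminating, exact integer computation, so the whole infinite expansion is described by a finite algorithm.

```python
from math import isqrt

def sqrt2_digit(k):
    """k-th decimal digit of sqrt(2) (k = 0 is the leading '1'),
    computed with exact integer arithmetic.  The digit stream is a
    total, finitely described function from N to {0..9}."""
    # isqrt(2 * 10^(2k)) is sqrt(2) truncated to k decimal places.
    return isqrt(2 * 10 ** (2 * k)) % 10

print("".join(str(sqrt2_digit(k)) for k in range(10)))  # 1414213562
```

    Each call terminates, yet no terminating procedure could print "all" the digits: the real number exists in the theory as the algorithm itself.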

    In a similar way, you can say "Axiom excluded_middle: forall A:Prop, A \/ ~A." meaning: excluded_middle is a function that takes as input a proposition A and produces as output a proof of the proposition A \/ ~A. This function is not implementable in the internal language of Coq, but you can treat it as a pre-defined "external" function that you can call, using the result without knowing how it works (something like a function defined in an external library of a programming language).

    The logic doesn't ensure that this function exists (it doesn't even guarantee that it's consistent: the proof of consistency belongs to the meta-theory of the internal language); it only says that if you have this function, you can use it. And the resulting logic is equivalent to classical higher-order logic (not set theory).

    The correspondence between set theory and the various versions of dependent type theory is rather complex:
    See for example (https://mathoverflow.net/questions/69229/proof-strength-of-calculus-of-inductive-constructions):
    "IIRC, the calculus of inductive constructions is equi-interpretable with ZFC plus countably many "inaccessible cardinals" -- see Benjamin Werner's "Sets in Types, Types in Sets". (This is because of the presence of a universe hierarchy in the CIC.)"

    And this is a very good reference for intuitionistic type theory: https://plato.stanford.edu/entries/type-theory-intuitionistic/

    ========= end of the "definitions" :-) =========

    Here's why IVT is false in constructive math. IVT, you will recall, says that if a function from the reals to the reals is continuous, and if it takes a negative value at one point x and a positive value at another point y, then it must be zero at some point between x and y.

    That is if we have f(x) < 0 and f(y) > 0 then there exists p with x < p < y and f(p) = 0. This is intuitively correct about a continuous function f.

    If you take the standard real line and omit all the noncomputable points, you get the computable real line. The computable real line is full of holes where the noncomputable reals used to be
    fishfry


    I read that you can't define Cauchy or Dedekind real numbers in intuitionistic logic (without additional axioms), but you can use other definitions of "continuity":

    http://www.alternatievewiskunde.nl/QED/brouwer.htm
    "Any function which is defined everywhere at an interval of real numbers is also continuous at the same interval. With other words: For real valued functions, being defined is very much the same as being continuous."

    and MY FAVOURITE ONE:
    https://plato.stanford.edu/entries/continuity/#9 "Smooth Infinitesimal Analysis"

    What is your objection to this axiomatization of real numbers?


    One thing I do know is that the axiom of choice implies LEM. To reject the axiom of choice involves throwing out quite a lot of modern math.fishfry

    Intuitionistic type theory implies a somewhat "weaker" version of the axiom of choice (the "good" part of it :-) )


    [ ..going to continue next time! ]
  • Is mathematics discovered or invented
    where is this "Platonic world", where the plans for the Universe are stored until they are needed? For if the map/plan exists, and this Platonic world is where it exists, then where is this Platonic world? Your surmise seems to rest upon your having an answer to this question, doesn't it? :wink: :chin:Pattern-chaser

    Of course I have! I was only waiting for somebody to ask me to announce the truth to the world :smile: :joke:

    The answer of course is based on modern weird physics theories that nobody really understands or is able to verify (string theory, the multiverse, 11-dimensional space-time, and similar), so you can use them to explain whatever you want!

    So, here we go:

    - First of all: space-time is "emergent" (https://www.preposterousuniverse.com/quantumspacetime/index.php?title=Emergent_space(time)). This means that what we perceive as space-time could be an "object" (still physical) that has a different metric and even a different number of dimensions (holographic principle: https://en.wikipedia.org/wiki/Holographic_principle). So, there is no way to decide which of the two realities (the hologram or the image that it represents) is the really "real" one. It's something similar to what happens with electromagnetic fields in special relativity: the same "thing" can be seen as an electric field by one observer and as a magnetic field by another observer.

    - Second: the universe (space-time, elementary particles, forces) that we observe today is the result of an evolution from a much more symmetric (simpler) "object", and the forms of what we see today are the result of "choices" of some forms instead of others (https://en.wikipedia.org/wiki/Spontaneous_symmetry_breaking). This is similar to what happened with the evolution of life: animals developed into more and more complex forms, starting from much simpler inanimate matter.

    So, here's the theory: there exists this very simple "object" that is still there, at the beginning of time (but we don't see it, because from our point of view we can only see contemporary time), and a map, or plan (the "plan" :joke:) made of all possible mathematical objects that can be built. And for some strange reason that nobody knows, from the simple object "emerge" all the forms contained in the map, in different quantities: the more symmetrical ones (those respecting patterns) are the most common. Obviously, we can now see only part of them, but the global form of the map is some kind of fractal (like a giant Mandelbrot set) in which the most common structures are present in all places and at every scale. And of course (to answer the "where does it exist" question) the "emergent" objects are not objects of the physical 3-dimensional space: what we see as space is only one of them.

    P.S. Please don't ask me what's the meaning of the words in quotation marks: I have no idea!
  • Is mathematics discovered or invented
    When we don't have a pattern, we can't extrapolate, calculate or do the other usual mathematical stuff. Yet of course something not having a pattern is still logical and still part of mathematicsssu

    Yes, what I meant is that some parts of mathematics are "interesting" and some are not. And I think this distinction can be made internally to mathematics itself, without looking at nature ( see my other topic https://thephilosophyforum.com/discussion/5789/is-it-possible-to-define-a-measure-how-interesting-is-a-theorem ). So theorems, or rather theories, that have a high value of the "interest" function ( the ones that have a pattern ) are discovered, because the value of the "interest" function is defined for all theories, even the ones we don't know yet. On the other hand, 10 + 10 = 20 is of course logical and true, but there's nothing in it that makes it "interesting".

    Same thing with immeasurability or non-measurability. Take for example the non-measurable sets like the Vitali setssu

    I believe non-measurable sets are interesting as part of measure theory, but they are not a model for physical space. For physical space we should use a model where all functions are continuous, so that the Banach-Tarski theorem is false and all objects have a non-zero measure (though possibly an infinitesimal one), such as for example (https://en.wikipedia.org/wiki/Smooth_infinitesimal_analysis), or any other simpler axiomatization with these characteristics. If the axiomatizations are not equivalent, which one is correct is a matter of physics, not mathematics.
  • Is mathematics discovered or invented
    In my opinion we don't call mathematics what is patternless. It's not clear for me what a pattern is, but I would argue that nobody would consider a mathematical paper that express patternless theorems. And I think laws of nature for some reason prefer mathematical models to patternless ones.
  • Is mathematics discovered or invented
    Well, my idea is that there is a "map" that exists in some kind of Platonic world and that is taken as a model by nature. Parts of this map are taken as a blueprint by nature to build real things, and even human beings. The real things are not as perfect as the map, but tend to resemble it to a high degree. Maybe there are even parts of nature that don't follow any map, but the ones that do follow the map are the ones that we may be able to understand.
  • Musings On Infinity
    What the axiom of infinity says is that {0, 1, 2, 3, ...} is a set, not just a collection. That's a much stronger statement. Without the axiom of infinity we still have PA and everything that we can do with it. But we don't have the powerset of the natural numbers nor do we have an easy way to get the theory of the real numbers off the ground.fishfry

    You are right. I think I should be more careful with this kind of statement. I wrote that without really checking the recursive structure generated by the axiom of infinity... I hadn't much time and I wanted to give a quick answer :-) ( mathematics is only my hobby, not my work... )

    But Cantor doesn't prove that P(A) is bigger than A. He proves that there is no surjection from A to P(A); then we DEFINE "bigger" to mean there's no surjection.fishfry

    Well, if you expand the definition "there is no surjection from A to B", it becomes:
    forall F: A->B, exists y:B, forall x:A, F(x) != y [ I prefer to use the syntax of Coq ( https://coq.inria.fr/ ) because I didn't take the time to learn how to write the symbols :-) ]
    This translates to English as: "however you choose to match elements of A with elements of B, there will always be an element of B that is not matched by any element of A".
    I think this is a quite intuitive definition of the fact that B has at least one element more than A.
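    That intuition can even be machine-checked on a tiny example. Below is a Python sketch (a finite toy of mine, not the real theorem) that exhaustively verifies Cantor's diagonal argument for a 4-element set A: for every function f : A -> P(A), the diagonal set {x : x not in f(x)} escapes the image of f, so no f is a surjection.

```python
import itertools

A = range(4)
# P(A): all 16 subsets of the 4-element set A.
subsets = [set(s) for r in range(len(A) + 1)
           for s in itertools.combinations(A, r)]

def diagonal(f):
    """Cantor's witness: the set of all x that are NOT in f(x)."""
    return {x for x in A if x not in f(x)}

# Exhaustively check every f : A -> P(A) (16^4 of them): the diagonal
# set is never in the image of f.
escapes = all(diagonal(lambda x, c=choice: c[x]) not in choice
              for choice in itertools.product(subsets, repeat=len(A)))
print(escapes)  # True
```

    The finite check is just an illustration, of course; the point of Cantor's proof is that the same one-line construction of the witness works for any A whatsoever.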

    Yes. But then you can't get the theory of the real numbers off the ground and the intermediate value theorem is false. The constructivists counter: The IVT becomes true again we only consider computable functions.fishfry

    Sorry, I don't understand what you mean here. Constructivist (intuitionist) logic is only a "more general" logic than classical logic, since it has fewer axioms. As a result, it is valid for a bigger set of models:

    If you substitute "true" with "provable" and "false" with "contradictory", there are three possibilities:
    -- A proposition is provable ( then it's true in all models )
    -- A proposition is contradictory ( then it's false in all models ) ( equivalent to "it is provable that it's not provable": if you give me a proof, I can derive a contradiction from it )
    -- A proposition is neither provable nor contradictory ( then it's true only in some models ).

    For example, you can interpret functions as partial recursive functions (Turing machines) and propositions as assertions about the behavior of Turing machines:
    -- forall is a loop that stops, returning false, if it finds a false value; its value as a proposition is false if it stops and true if it loops forever
    -- exists is a loop that stops, returning true, if it finds a true value; its value as a proposition is true if it stops and false if it loops forever
    -- in this model, A is not derivable from not( not A ), because that would mean deriving A from ("A is not provable" is not provable), and that is not true for all models (it's not true if functions are partial recursive functions)

    Since the halting problem is undecidable, there are propositions that are neither provable nor refutable.
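    Those two "loops" can be sketched in Python (a toy model; I add a `fuel` bound, which is not part of the interpretation, just so the demo always halts -- the real loops may genuinely never answer):

```python
from itertools import count, islice

def forall(pred, domain, fuel=None):
    """A loop that stops (returning False) as soon as it finds a
    counterexample.  With no counterexample on an infinite domain it
    would run forever; `fuel` truncates the search, returning None
    ("no answer observed yet")."""
    xs = domain if fuel is None else islice(domain, fuel)
    for x in xs:
        if not pred(x):
            return False            # the loop stops: proposition is false
    return None if fuel else True   # finite domain exhausted -> true

def exists(pred, domain, fuel=None):
    """Dual loop: stops (returning True) as soon as it finds a witness."""
    xs = domain if fuel is None else islice(domain, fuel)
    for x in xs:
        if pred(x):
            return True             # the loop stops: proposition is true
    return None if fuel else False

print(exists(lambda n: n * n == 49, count(0), fuel=1000))  # True
print(forall(lambda n: n % 2 == 0, count(0), fuel=1000))   # False
print(forall(lambda n: n >= 0, count(0), fuel=1000))       # None: never stops
```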

    The model of Turing machines (where functions are partial recursive functions) is countable, but you can interpret the types and functions of type theory in uncountable models too ( including the real numbers ):

    The axiom of excluded middle is independent from the others, so if you want to speak only about models in which all functions are total (not necessarily recursive), such as standard analysis, you can assume not( not A ) => A as an axiom. In this way, you recover full classical logic and every proposition can only be true or false (even if some of them - far fewer than before - can still be unprovable).
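    As a sketch of that recovery step in Lean 4 (relying on the classical axioms shipped with the core library), double-negation elimination is a one-line theorem:

```lean
-- With classical axioms, not (not A) => A is derivable.
-- `Classical.byContradiction : (¬p → False) → p` is in the Lean 4 core,
-- and ¬¬A unfolds to exactly ¬A → False.
theorem dne (A : Prop) (h : ¬¬A) : A :=
  Classical.byContradiction h
```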

    Did I miss something? What is the objection to this argument?

    Yes agreed. We agree on everythingfishfry
    That's reassuring :-)

    I just think the world of the full powerset of N is richer and more interesting than the computable powerset. That's an aesthetic judgment. And it's more useful. That's a pragmatic judgment.fishfry

    Yes, that's exactly the point that I wanted to discuss when I started the other topic https://thephilosophyforum.com/discussion/5792/is-mathematics-discovered-or-invented
    ( look for example at my post number 38 )

    In fact, I started writing on thephilosophyforum.com a couple of weeks ago because I was looking if somebody had an answer to this question: https://thephilosophyforum.com/discussion/5789/is-it-possible-to-define-a-measure-how-interesting-is-a-theorem
    but it seems that nobody knows, or considers the question important :-(

    Nobody has yet succeeded in developing a computable or constructive theory of physics. People are working on it. Perhaps we'll need another few decades to get more insight into this question.fishfry

    Voevodsky showed that Homotopy Type Theory can be used as a foundation for mathematics in a very natural way. What do you mean by "constructive theory of physics"?

    I read that article and didn't fully understand it or the point you're makingfishfry

    I wrote "you could even say that "every mathematical object is a string of characters"", and you replied that "This is confusing syntax with semantics".
    The Herbrand interpretation says that you can build a model of your theory (the semantics) out of the syntactic structures of the language itself (strings of characters).

    Math is more than ‎Gödel numberingfishfry
    I agree. And I think Gödel himself would agree: he built this model to refute Hilbert's idea that there is no truth beyond formal logic.
    Gödel himself was a Platonist. I found that surprising when I first learned it. He believed there is mathematical truth "out there" that's beyond the limitations of formal systems.fishfry
    I think Gödel's idea was something similar to what I tried to express in (https://thephilosophyforum.com/discussion/5792/is-mathematics-discovered-or-invented): there exists a meaning of truth, based on a world that is more "fundamental" than the physical world that we observe.
  • Is mathematics discovered or invented
    The axiomatization of mathematical ideas is invented, but our axiomatizations are based on many underlying objective observations of nature that are discoveredPattern-chaser

    I believe it's not only from observations of nature that mathematics takes inspiration.
    An obvious example: the Mandelbrot set is not present in nature, but it's an interesting object. Group theory started from the solution of polynomial equations, and only later was it discovered to be important in physics. Maybe the most surprising case is Riemannian geometry: invented as a pure abstraction, and then discovered to be exactly what's needed to describe gravitation. Often it's the other way around: mathematicians discover some interesting structures that can be built out of pure axiomatic theories, and it later turns out that they are exactly what's needed for physics.
  • Musings On Infinity
    I don't believe this is true. ZF is a first-order theory with a countably infinite language that easily proves uncountable sets exist. See Cantor's theorem, as simple a proof as one can imagine for a fact so profound.fishfry

    OK, probably this is not the conventional point of view in mathematics, but I'll try to explain mine:

    In ZF Set theory you have the "axiom of infinity" that says ( in a simplified form ):
    There exists a set A such that:
    -- ∅∈A,
    -- for every x∈A, the set (x∪{x}) ∈ A.

    This, from my point of view, is a complicated definition of a recursively defined structure that is equivalent to Peano's natural numbers:
    You can build ∅ (equivalent to 0), ∅∪{∅} (equivalent to S0), ∅∪{∅}∪{∅∪{∅}} (equivalent to SS0), etc...
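    The encoding is easy to play with in Python, using `frozenset` for hereditarily finite sets (a sketch of the construction; the helper names are mine):

```python
def zero():
    return frozenset()            # the empty set plays the role of 0

def succ(n):
    return n | frozenset([n])     # S(n) = n ∪ {n}

one = succ(zero())    # {∅}
two = succ(one)       # {∅, {∅}}
three = succ(two)     # {∅, {∅}, {∅, {∅}}}

# A pleasant property of the von Neumann encoding: |n| = n.
print(len(zero()), len(one), len(two), len(three))  # 0 1 2 3
```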

    Then, first order logic postulates the existence of functions.

    Basically, Cantor's theorem proves that, for every set A, the set of functions (A -> Bool) is always bigger than A (in the usual sense that there is no one-to-one correspondence). Then, since (A -> Bool) is itself a set, you can build an infinite chain of sets of the form (((A -> Bool) -> Bool) -> Bool) -> ...
    So, from one infinite set you can build an infinite hierarchy of infinities.
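    Cantor's diagonal argument itself is short enough to sketch in Python. The enumeration `f` below is a hypothetical example; the point is that `diagonal(f)` differs from `f(a)` at the input `a`, for every `a`, so it cannot appear in the range of any such `f`.

```python
def diagonal(f):
    """Given an attempted enumeration f: A -> (A -> bool), return a
    characteristic function that differs from f(a) at the point a."""
    return lambda a: not f(a)(a)

# A hypothetical enumeration over a (truncated) domain A:
A = range(10)
f = lambda n: (lambda m: m % (n + 1) == 0)   # the n-th subset: multiples of n+1

d = diagonal(f)
# d disagrees with every f(a) somewhere, namely at a itself:
print(all(d(a) != f(a)(a) for a in A))  # True
```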

    The crucial point here, in deciding the cardinality of (A -> Bool), is what you take as a model for functions.

    If functions are only recursive functions (what I can evaluate using a decidable set of rules applied to the symbols of the language), I can never build anything that has uncountable cardinality.
    In this case, it is true that for every set you can find a bigger set, but its cardinality will still be countable: in the same way as, for every integer, you can always find a bigger integer.

    If instead you define functions in the usual way, as sets of pairs, then there are 2^|A| possible functions of type A -> Bool, and there exists an uncountable set of sets. But, in any case, inside your language you are only able to describe a countable set of them.

    (https://en.wikipedia.org/wiki/Skolem%27s_paradox):
    "From an interpretation of the model into our conventional notions of these sets, this means that although u maps to an uncountable set, there are many elements in our intuitive notion of u that don't have a corresponding element in the model. The model, however, is consistent, because the absence of these elements cannot be observed through first-order logic"

    Since both models for functions are consistent, what's the reason to take the standard interpretation as the "true" one?


    No I don't believe so. This is confusing syntax with semantics. A formal theory consists of finite-length strings of symbols. But models of those theories are not strings of symbols. They're sets, often quite large ones.fishfry

    See Herbrand interpretation: https://en.wikipedia.org/wiki/Herbrand_interpretation.

    The proof of Gödel's incompleteness theorem uses a similar technique: it maps natural numbers onto the syntactic structures of the language, and properties of arithmetic operations onto the syntactic rules of logic.
  • Musings On Infinity
    OK, I have no convincing argument against this point of view, except that if irrational numbers do not exist, a great part of mathematics becomes more difficult to reason about. For example, there are no more truly continuous functions, and you have to speak all the time about sequences of points that approach other points.
    Since the results are the same, and logic says that using infinities is no more contradictory than avoiding them, it seems to me more "convenient" to use them. Even if in the real universe they do not exist.
    In the end, even real perfect circles do not exist (everything is made of atoms), and Euclidean space does not exist (space-time is not flat), but it's much more convenient, for reasoning about real objects, to "pretend" that they exist.
  • Musings On Infinity
    I don't see why calculus cannot be defined purely in terms of potential infinity. A limit should approach but never reach actual infinity:Devans99

    All irrational numbers are defined as infinite objects (usually infinite sums or products). For example, the Leibniz series (note the factor 4: the series itself converges to pi/4):

    pi = lim (n -> infinity) of 4 * ( sum for k=1..n of (-1)^(k+1) / (2*k - 1) )

    If infinite sums do not exist, pi is not a number, since there is no way to write it as a finite expression using only + and *
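    A sketch in Python, using exact rationals to emphasize the point: each partial sum is itself a rational number, and pi only appears in the limit.

```python
from fractions import Fraction

def leibniz_partial(n):
    """Partial sum 4 * sum_{k=1..n} (-1)^(k+1) / (2k - 1): a rational
    number for every n; the sequence converges to pi only in the limit."""
    return 4 * sum(Fraction((-1) ** (k + 1), 2 * k - 1) for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, float(leibniz_partial(n)))
```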

    How do you define the function sin(x) if you can't use infinite expressions?
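    The standard answer also needs an infinite expression: sin(x) is the limit of its Taylor partial sums, each of which is just a finite polynomial. A sketch in Python:

```python
from math import factorial

def sin_partial(x, n):
    """Partial sum of the Taylor series of sin:
    sum_{k=0..n} (-1)^k * x^(2k+1) / (2k+1)! -- a finite polynomial;
    sin(x) itself is only the limit as n -> infinity."""
    return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
               for k in range(n + 1))

print(sin_partial(1.0, 10))  # very close to sin(1)
```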

    Integrals are defined as limits of sums over finer and finer partitions, and most of them do not converge to rational numbers.
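    For example (a sketch; the integrand is my own choice), the Riemann integral of 1/(1+x^2) over [0, 1] is pi/4, an irrational number, even though every finite Riemann sum approximating it is a finite sum of rationals:

```python
def riemann_sum(f, a, b, n):
    """Left Riemann sum with n equal subintervals; the integral is the
    limit of these finite sums as n -> infinity."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

# Approximations to the integral of 1/(1+x^2) on [0, 1], i.e. pi/4:
for n in (10, 100, 1000):
    print(n, riemann_sum(lambda x: 1 / (1 + x * x), 0.0, 1.0, n))
```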

    If a limit does not converge to a rational number, you have to treat the limit itself (its expression) as a number, because there is no way to write it while avoiding limits. Then, you have to say that the real numbers are the usual rationals plus some "objects" that are defined as infinite sums of rationals.
    That's similar to how transfinite numbers are defined: take the natural numbers and add other "objects" to them, such that the operations with the preexisting natural numbers give the same results as before.