Comments

  • Negative Infinity = Positive Infinity OR Two Types of Zeros
    OK, fair point. Sorry for the intrusion! :sad:
  • Statements are true?
    I am for the second one: conformity with the facts of the world.
  • My own (personal) beef with the real numbers
    OK, you are right!
    What I wanted to point out is that the sentence "there is an uncountable number of sequences", when expressed in the forall-exists language, does not contain the word "uncountable". And you can't say it in an unambiguous way. This is our interpretation in natural language of the sentence "forall lists of functions Nat->Nat there exists a function Nat->Nat that is not on the list". So, there exists ONE missing function. You are adding one function at a time! The translation of this sentence in natural language with the word "uncountable" is arbitrary, in my opinion.
    As the induction principle "(P(0) and (forall N, P(N) implies P(N+1))) implies (forall N P(N))" can be seen as part of the axiomatic definition of the word "forall", in the same way there should be a second axiom that can be seen as the axiomatic definition of the word "uncountable": the same is done in modal logic with the word "necessarily". Doesn't it make sense?
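
    For what it's worth, in a proof assistant the induction principle really does act as the defining rule for "forall" over Nat; here is a minimal Lean 4 sketch (standard syntax, nothing beyond the core library):

```lean
-- The induction principle as stated, discharged by Nat's built-in recursor.
example (P : Nat → Prop)
    (h0 : P 0) (hstep : ∀ n, P n → P (n + 1)) :
    ∀ n, P n := by
  intro n
  induction n with
  | zero => exact h0
  | succ n ih => exact hstep n ih
```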
  • My own (personal) beef with the real numbers
    A section is a proof ... a section is a proof. I can't quite see that. I know that a section is a right inverse of a function. For example if f(x) = x^2 then a section is a function that, for each nonnegative real, picks out one of its plus or minus square roots. Is that about right? The section is the right inverse, so it's essentially a choice function on the collection of inverse images of all the points. Do I have that right? How does that become a proof?fishfry

    Let me just give you just some examples: "x >= 3" is a fibration from the object Nat to the subobject classifier Prop. The proposition "5 >= 3" is a fiber of this fibration. The elements of this fiber are the "proofs" that 5 >= 3. So, there will be a function F (built using the rules of logic) that allows you to build a proof of 3 >= 3, and a proof of 4 >= 3, and a proof of 5 >= 3, etc. (meaning: to choose a proof for each proposition of the form "x >= 3"). This function is the section of the fibration from the space of proofs to the space of propositions of the form "x >= 3": it selects a proof (an element of the fiber) for each proposition of the form "x >= 3".
    The function that generates all the proofs of the form "x >= 3", however, is not an algorithm that chooses a different list of rules for each value of x: it doesn't say "if x = 3, then apply rule A to prove 3 >= 3;
    if x = 4, then apply rule B to prove 4 >= 3". Instead, it's a parametric program that says: if you have an x with such and such characteristics, then you can apply the rule A(x), where x is a variable of the rule. That means that the section of the fibration is "continuous".

    Propositions containing variables (such as "x >= 3") are called "dependent" types because they are sets (of proofs) that depend on a parameter (x), and the function that is used to build the proof of "x >= 3" builds an element of a different type (a proof of a different proposition) for each value of x (builds the set and selects the point at the same time).
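
    As a concrete miniature (a Lean 4 sketch; Nat.le_add_left is a standard library lemma, the name "sec" is mine): one parametric term serves as the section, producing a proof of a different proposition for each x:

```lean
-- A dependent function: for every x, a proof of the proposition 3 ≤ x + 3.
-- One parametric term covers every fiber; no case-by-case choice of rules.
def sec (x : Nat) : 3 ≤ x + 3 :=
  Nat.le_add_left 3 x
```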

    Fiber bundles of "normal" topological spaces (such as the famous Mobius strip - https://en.wikipedia.org/wiki/M%C3%B6bius_strip) are built in the same way: you choose a disjoint segment (a set of points from the total space E) for each point of a circle (the base space B) using a continuous function!
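
    A toy Python model of the same idea (the names here are mine, just to illustrate "choose a point in each fiber"), using fishfry's square-root example:

```python
# Toy model of a bundle: each base point b has a fiber, a set of points "above" b.
def total_space(base, fiber_of):
    """The total space E as a set of pairs (b, p) with p in the fiber over b."""
    return {(b, p) for b in base for p in fiber_of(b)}

def is_section(base, fiber_of, s):
    """s is a section iff it picks a point of the correct fiber over every b."""
    return all(s(b) in fiber_of(b) for b in base)

# Over each x >= 0 the fiber holds its two square roots, and taking the
# nonnegative root is one possible ("continuous") section.
base = [0, 1, 4, 9]
fiber_of = lambda x: {x ** 0.5, -(x ** 0.5)}
nonneg_root = lambda x: x ** 0.5
```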

    It's much more difficult to explain than I thought...

    Some references:
    https://ncatlab.org/nlab/show/subobject+classifier
    https://ncatlab.org/nlab/show/type+of+propositions

    P.S. notice that this form of the subobject classifier is typical only for type theories, where the propositions are represented by the collection of all their proofs:
    ( from https://ncatlab.org/nlab/show/type+of+propositions ):
    "In type theory the type of propositions Prop corresponds roughly under categorical semantics to the subobject classifier. (To be precise, depending on the type theoretic rules and axioms this may not be quite true: one needs propositional resizing, propositional extensionality, and — in some type theories where “proposition” is not defined as an h-proposition, such as the calculus of constructions — the principle of unique choice?.)"

    For classical (boolean) logic, instead, the subobject classifier is much simpler: it has only 2 incoming arrows (elements) from the terminal object: "true" and "false". That's because in classical logic, unlike in type theory, propositions are represented only by their truth value (true or false), and not by the list of derivations (or programs) used to prove them.
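
    In the classical case this is easy to code up with ordinary sets (function names are mine): a subset of A is exactly the same data as a map A -> {True, False}, its characteristic arrow.

```python
# Classical subobject classifier {True, False}: a subset corresponds
# one-to-one to its characteristic map.
def chi(subset):
    """Characteristic arrow of a subset: sends x to its truth value."""
    return lambda x: x in subset

def pullback_of_true(A, f):
    """Recover the subset as the preimage of 'true' along its classifying map."""
    return {x for x in A if f(x)}

A = set(range(10))
evens = {x for x in A if x % 2 == 0}
```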
  • My own (personal) beef with the real numbers
    I can't see this. Can you give an example?fishfry

    OK, I'll start from the example:

    1. Let's call Unit our terminal object.
    2. Let's call Prop the object that is part of our subobject classifier (usually called Omega, but I prefer Prop, from Coq's convention).
    3. Let's call "true" the arrow from Unit to Prop that is part of our subobject classifier.

    4. Assume that there is another object, called Nat.
    [ this is the set of natural numbers ]

    5. Assume that there is an arrow "s" from Nat to Nat.
    [ this is the "successor" function, from Peano arithmetic ]

    6. Assume that there is an arrow from Unit to Nat called "zero"
    [ this is the number zero ]

    7. Let Nat->Prop be the exponential of Nat and Prop
    [ this represents all propositions with a free variable of type Nat ]

    8. Let Nat->(Nat->Prop) be the exponential of Nat and (Nat->Prop)
    [ this represents all propositions with two free variables of type Nat ]

    9. Let "x_is_greater_than_y" be an arrow from Unit to Nat->(Nat->Prop)
    [ this is a proposition with two free variables of type Nat ]

    10. Let's call "x_is_greater_than_zero" the arrow "x_is_greater_than_y" * "zero"
    [ * is the operation of arrow composition from category theory's axioms ]
    [ "x_is_greater_than_zero" will then be an arrow from Unit to Nat->Prop ]
    [ this is a proposition with one free variable of type Nat ]

    11. Let's call "s(x)_is_greater_than_zero" the arrow "x_is_greater_than_zero" * "successor"
    [ this is still a proposition with one free variable of type Nat ]

    12. Let's call "one_is_greater_than_zero" the arrow "s(x)_is_greater_than_zero" * "zero".
    .......

    Sorry, I thought it would be easier: I wanted to show you the complete structure of the category representing the proof of the proposition "2 > 0" derived from "forall x, s(x) > x" and "forall x, x > 0", but I didn't realize that it would take me ages to describe it in this way... and in the end it would be impossible to read. In Coq this is several lines of code, but it is built automatically from a couple of commands. In reality, I never look at the actual terms representing the proofs: they are built automatically. But I see that it would be a pain to build them by hand in this way!
    I have to find a better way to give you a description of how it's made without writing all the details... :worry:
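
    For comparison, here is roughly what the finished object looks like when the machinery is automated: a minimal Lean 4 sketch with the needed facts taken as plain hypotheses (I'm using transitivity as the glue between the two facts, and all the names are mine):

```lean
-- "forall x, s(x) > x" and transitivity as hypotheses; conclude 2 > 0.
example (succ_gt : ∀ x : Nat, x < x + 1)
    (lt_trans : ∀ {a b c : Nat}, a < b → b < c → a < c) :
    0 < 2 :=
  lt_trans (succ_gt 0) (succ_gt 1)
```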
  • Negative Infinity = Positive Infinity OR Two Types of Zeros
    I noticed that I was mentioned here, so I started to read this thread from the beginning. And I realized that probably this is the perfect example to show my point about the infamous "constructivism".

    So, long story short, @fishfry is right: there are no infinities on the real line, so you cannot write ∞ in your formulas, and you cannot take limits of divergent sequences.

    But there is a second possibility, which I guess is what @TheMadFool had in mind: just consider a new set of numbers, made of all the real numbers plus the symbol ∞, and then postulate as additional axioms for your numbers that 1/0 = ∞ and that -∞ = +∞.
    Why can't this be done?

    Well, surprisingly, this can be done! It's called "projectively extended real line" (https://en.wikipedia.org/wiki/Projectively_extended_real_line)

    However, there is a problem with this approach: if you add a new constant and a new axiom to a theory, the theory could become inconsistent. In other words, how can you be sure that the new axiom does not allow you to derive the negation of a proposition that was true in the old system? If that didn't matter, you could even add the axiom "1 + 1 = 4" and obtain a new kind of real numbers where everything is the same as before, but 1 + 1 = 4 instead of 2. Obviously you cannot do this, because the old axioms are still the same, and they allow you to derive 1 + 1 = 2.

    Well, we know that this theory is consistent because IT HAS A MODEL, that is, a geometric construction still based on the same axioms (the axioms of real numbers). That's the same thing you do when extending the real numbers to the complex numbers.
    Here's the construction: draw the normal real line, and then add a circle tangent to the real line at the point 0. You can interpret the extended real numbers as THE SET OF ALL STRAIGHT LINES passing through the point of the circle opposite to 0 (on the other side of the diameter) - let's call this point P. For each point X of the real line, there will be exactly one straight line passing through P and X, except the one that is tangent to the circle (and therefore parallel to the real line) - let's call this line L. The line L does not intersect the real line (it's parallel to it), so it's not one of the "normal" numbers. So, the set of all lines is made of the lines that meet the real line in a point X, plus the line L. L can be interpreted as the number "infinity".

    So, that's a perfectly normal construction built with Euclidean geometry: it has all rights to exist!
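
    The same model can be coded up directly in coordinates (only a sketch of the idea, names are mine): represent each line through P by a ratio (a : b), where b != 0 gives the ordinary number a/b and (1 : 0) is the tangent line L, i.e. the single unsigned infinity.

```python
from fractions import Fraction

# A line through P as a ratio (a : b): if b != 0 it meets the real line
# at a/b, and (1 : 0) is the tangent line L, the one point "infinity".
def line(a, b):
    return ("inf",) if b == 0 else ("fin", Fraction(a, b))

def recip(p):
    """1/x becomes total on the projective line: 1/0 = inf and 1/inf = 0."""
    if p == ("inf",):
        return ("fin", Fraction(0))
    return ("inf",) if p[1] == 0 else ("fin", 1 / p[1])
```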

    But now the question is: WHAT ABOUT THE THEOREM OF UNIQUENESS of the model of real numbers? This new model surely has the same cardinality as the previous one (we added only one point), so where's the "bug"?
    [ try to solve the problem yourself before looking at the solution on wikipedia :razz: ]

    This, in my opinion, is one of the situations that make you appreciate the existence of constructivist logic! You can take a proof assistant, load the axioms of real numbers and look for the proof that the model is unique. Then build your construction with the circle and the real line and run the proof of uniqueness using as real numbers the data structures that represent the lines instead of the points on the real line (the input data types of the uniqueness proof are defined as a parametric data type, so it has to accept any data structure compatible with the axioms of real numbers). This will be a computer program that terminates and returns a proposition saying something absurd, like that two is equal to one. Then you can start "debugging" it, and see what's "wrong" with your axioms of real numbers, or with the structure describing the circle and the real line.

    You see: it's all about building a computer program: it doesn't depend on the fact that infinity (of whatever cardinality) exists or not!
  • My own (personal) beef with the real numbers
    Remind me of the defs. A section is a right inverse as I understand it. There can be a lot of right inverses to a function, you just keep choosing different elements in the preimages of points. Is that the bundle?fishfry

    Yes.
  • My own (personal) beef with the real numbers
    Did you learn all this from the CS viewpoint? Just wondering.fishfry

    No, I like "normal" mathematics: no computers involved. But having a theorem-prover as Coq to be able to verify if you can really write a proof of what you think is provable is very helpful.
  • My own (personal) beef with the real numbers
    I agree with you. But now I am too curious: which theorems did you prove?
  • My own (personal) beef with the real numbers
    Remind me of the defs. A section is a right inverse as I understand it. There can be a lot of right inverses to a function, you just keep choosing different elements in the preimages of points. Is that the bundle?fishfry

    Yes! I should have added a formal definition, but I have an aversion to writing symbols on this site :confused: I added a link with a clear picture, I think.
  • My own (personal) beef with the real numbers
    May I ask you a question? How does one come to know this material and not have heard of measure theory?fishfry

    I have known about measure theory since I was in high school (I always liked that stuff), and I have heard that sentence about the probability of finding a rational among the real numbers a thousand times :razz:
    But then I realized that not everything that they teach you in high school has to be taken as the absolute truth :smile:
  • My own (personal) beef with the real numbers
    So I don't even think you have to go all the way to topoi to get to applications in logic.fishfry

    Exactly! We are speaking about two different kinds of "representations" of logic.
  • My own (personal) beef with the real numbers
    Be advised that any paragraph containing the name Martin-Löf instantly glazes my eyes. I've had all these conversations too many times. I totally believe everything you say but can't actually figure out what point you are trying to make. I don't disagree with anything you say, and I'm aware in varying degrees with various aspect of the things you talk about. I just don't know why you're telling me this. I don't disagree with you on any of it and I can't relate this to whatever we are talking about.

    Can you just remind me what is the point under discussion?
    fishfry

    OK, never mind. Sorry for continuing to repeat the same things!
  • My own (personal) beef with the real numbers
    And a topos is an abstract sheaf? And what I read was that topoi are inherently intuitionistic. I haven't followed that chain of reasoning yet.fishfry

    A sheaf is a topos in the same way that a set is a topos: it's the "trick" of the Yoneda embedding! :smile: do you understand now? (sorry: bad example... let's say that a sheaf can make everything - more or less - "become" a topos)
  • My own (personal) beef with the real numbers
    Which has to take a prize as an understatement. Is most of your experience in computer science?jgill

    Yes, I work as a programmer. And yes, a formal proof completely misses the essence of a proof: its "meaning". I don't believe mathematics has changed at all in that respect! Computers do not PROVE theorems: they VERIFY the formal correctness of a proof, which is a completely different thing!

    But I even believe that this distinction is not clear at all in today's mathematics, and it has never been understood: there is no definition of the "meaning" of a theorem, and many mathematicians (starting from Hilbert, I guess) think that there is no point in trying to identify the "meaning" as something different from a list of symbols.

    I believe this fact is becoming evident, even thanks to artificial intelligence, and will be understood in a not very distant future. By the way, the effect of this misunderstanding of mathematical proofs is terrible for education: teaching only the formal part of mathematics completely misses the essence of mathematics!
  • My own (personal) beef with the real numbers
    Do you know topos theory? Can you "explain like I'm five" to someone who knows a little category theory?fishfry

    OK, that's the "like I'm five" explanation: A topos is a category that "works" as the category of sets, but is not built using a set of rules that operate on symbols as a formal logic system: it's the formulation of set theory in terms of objects, arrows, and universal-mapping properties.

    So, there is a main ingredient that is missing: points! Topos theory is a formulation of set theory where sets are not "built" starting from points. Sets (the objects of the category) and functions (the morphisms of the category) are considered as "primitive" concepts. The points are a "secondary" construction.
    So, you can describe algebraically what the intersection and the disjoint union of sets are (as product and sum of objects), what a subset is, what the powerset of a set is (an exponential), etc.
    The interesting part is that you can describe what are propositions and logical operators completely in terms of objects and arrows, only by assuming the existence of an object with some particular properties, called "subobject classifier". This is NOT the same thing as homotopy type theory, where you build a formal logic system in the usual way: by building strings of symbols with given rules. Here you describe logical operations and propositions in terms of universal mapping properties, as you do with operations between sets. The subobject classifier is the object that represents the "set of all the propositions", and the implications between these propositions are arrows that start and end in this object. Even the logical quantifiers forall and exists are only two particular arrows. Basically, everything is defined in terms of universal mapping properties.
    I don't know which details I should add. What part of topos theory do you want me to explain?
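
    To make the "no primitive points" slogan slightly concrete with ordinary finite sets (a toy Python sketch, my own names): the product is characterized by its projections, and the sum (disjoint union) by its injections; the elements themselves never matter, only the arrows do.

```python
# Product of sets, returned together with its two projection arrows.
def product(A, B):
    P = {(a, b) for a in A for b in B}
    fst = lambda p: p[0]
    snd = lambda p: p[1]
    return P, fst, snd

# Coproduct (disjoint union), returned with its two injection arrows.
# Tagging makes the union disjoint even when A and B overlap.
def coproduct(A, B):
    S = {("L", a) for a in A} | {("R", b) for b in B}
    inl = lambda a: ("L", a)
    inr = lambda b: ("R", b)
    return S, inl, inr
```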
  • My own (personal) beef with the real numbers
    And then topos theory is abstract sheaf theory. So mathematically, this is advanced grad student level. But apparently a lot of the terminology and concepts are trickling into computer science and other fields.

    I was wondering in what context you'd seen topoi. I know there's a lot of category theory in computer science these days. But my sense is that topoi are fairly sophisticated mathematical objects, at least in their mathematical applications.
    fishfry

    Re-reading your question, I just realized that my answer about homotopy type theory could be misleading: it really is a weird and interesting theory, but topoi and sheaves are NOT related to computer science only because of the use of type theory as a logic to prove theorems! On the contrary, I think the use of formal logic in proof assistants is only a very limited field of computer science, without many practical applications. Sheaves, instead, are mostly used in data-analysis applications that have nothing to do with formal logic!

    It is true that they can be very abstract objects in mathematics, but for a data-science person a sheaf is mostly a data-correlation tool. A sheaf can represent a cellular-phone network and relate each cell of the covered area with the set of users that are connected to that cell. Topology on finite spaces is still topology, and computer science deals first of all with big, discrete sets of data! (here's an example of what I mean: http://www.drmichaelrobinson.net/sheaftutorial/)
  • My own (personal) beef with the real numbers
    Yes, it seems very strange! Here's the "quick and dirty" explanation:

    A sheaf S over a topological space X is a "fiber bundle", where the fibers over a point x in X are disjoint subspaces of S. Now, a section of the fiber bundle (https://en.wikipedia.org/wiki/Section_(fiber_bundle)) is what in type theory is called a "dependently typed function", that from the point of view of logic is interpreted as the proof of a proposition with a free variable x: the fiber bundle is the proposition (that depends on x) and a section of that fiber bundle is a proof of that proposition.

    The fundamental idea is the same: you have a map in which each input value is defined on a different domain, and that domain DEPENDS on the input value in a "continuous" way. In your example the ideals are the fibers (dependent on the point x) and the ring of continuous functions is the total space S.

    The point is: what does continuity have to do with formal logic, where we speak only about symbols and rules?
    Well, "continuity" in this context means "parametric" function. A "continuous" (or parametric) function is a function that doesn't jump from one formula to another when you change the point x:

    | x = 20 ==> "x is a multiple of 2"
    | x = 21 ==> "x is a multiple of 3"
    | x = 22 ==> "x is a multiple of 4"
    this IS NOT a "continuous" function, because it has different formulations for different values of x

    | x = 20 ==> "x is a multiple of 2"
    | x = 21 ==> "x is a multiple of 2"
    | x = 22 ==> "x is a multiple of 2"
    this IS a "continuous" function, because it has the same formulation for every value of x.

    The fibers of this bundle are the following ones:
    | "20 is a multiple of 2"
    | "21 is a multiple of 2"
    | "22 is a multiple of 2"

    The first and the third fibers are not empty (because they contain some points: the points are the proofs of the respective propositions)
    The second fiber instead is empty (there are no proofs of this proposition)
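
    Reading "a proof that x is a multiple of 2" as "a witness k with 2*k = x" (just one possible encoding, and the names are mine), the three fibers above can literally be computed:

```python
def fiber(x):
    """Proofs that x is a multiple of 2, encoded as witnesses k with 2*k == x."""
    return [k for k in range(x + 1) if 2 * k == x]

# A parametric ("continuous") section over the even numbers:
# one formula producing the witness, the same for every x.
def section(x):
    return x // 2
```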

    OK, I'll stop here for the moment, because I am a little afraid of the answer "that's all bullshit, I'll not read the rest of it..." :worry:

    However, the point is the following: this is an axiomatic theory; the same axioms can have completely different meanings that relate to completely different mathematical objects. They only happen to behave in the same way. That's typical of category theory: every collection of objects that are related to each other in the same way is the same category. It is completely irrelevant what they are "made of"!
  • My own (personal) beef with the real numbers
    There are occasional exceptions in which computers are essential, like the Four Color Theorem in combinatorics. And then mathematicians attempting to verify the overall presentation of proof and conclusion are stuck with verifying computer algorithms and assuming the computers running them do not produce computational errors.jgill

    Probably you think that I completely missed the "meaning" of what a mathematical proof is. But that's the way computers (formal logic systems) see proofs. I agree that that's not all. Just take a look at this old discussion with Alcontali, for example: https://thephilosophyforum.com/discussion/6205/mathematics-is-the-part-of-physics-where-experiments-are-cheap
  • My own (personal) beef with the real numbers
    When you say computations can be considered proofs I'm not sure where you are going. Proofs of what? And "reduce an expression in a normal form" - what's that? Mathematical proofs are rigorously reasoned arguments in logic in which concepts and relationships play significant roles.jgill

    OK, let's start from examples of normal forms:
    "245 * 10" is a number that is not in normal form. "2450" is the same number in normal form (one of the possible normal forms)
    "5/10" is not in normal form. "1/2" is the same number in normal form.
    If two strings of characters that represent numbers (in a given formal language) are in normal form, the numbers are equal if and only if they are the same string of characters.
    I searched on Wikipedia and found this: https://en.wikipedia.org/wiki/Canonical_form . But if you read a book about formal logic, this is usually called "normal" form, not "canonical" form.

    The "proofs of what" question is a little harder to answer (but the concept is very simple).
    Normally you think of a computation (for example, a multiplication of two integers) as an algorithm that takes some arguments as input and returns an output.
    But you can think of the same computation as a list of transformations that starts from an initial expression ( "12 * 12" ) and transforms it into a second expression ( "144" ) by applying at every step one of the possible "rewriting rules" allowed by the logic. For example, take a look at the section "Arithmetic" here: https://en.wikipedia.org/wiki/Rewriting . In other words, a computation can be seen as a special kind of "proof": the proof that the term "144" is the normal form of the term "12 * 12".
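
    A tiny Python illustration of both points (all names are mine): fractions have a normal form via the gcd, and a multiplication can be logged as a chain of rewrite steps ending in the numeral, its normal form.

```python
from math import gcd

def normal_form(num, den):
    """Normal form of a fraction: divide out the gcd, e.g. 5/10 -> 1/2."""
    g = gcd(num, den)
    return (num // g, den // g)

def rewrite_mul(a, b):
    """Log a * b as rewrite steps, using a*b -> (a-1)*b + b down to the numeral."""
    steps, k, acc = [f"{a}*{b}"], a, 0
    while k > 0:
        k, acc = k - 1, acc + b
        steps.append(f"{k}*{b} + {acc}")
    steps.append(str(acc))
    return steps
```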
  • My own (personal) beef with the real numbers
    Do you know topos theory? Can you "explain like I'm five" to someone who knows a little category theory?fishfry

    OK, I'll try. But I think it will be a long post anyway. So, I'll do it after answering the other posts first.

    But I can give you a reference to a very good book (in my opinion) on this subject that is easy to understand for somebody who has some basics of category theory:
    - Title: "Topoi: The Categorial Analysis of Logic"
    - Author: Robert Goldblatt
    ( I found it even on Amazon: https://www.amazon.com/Topoi-Categorial-Analysis-Logic-Mathematics/dp/0486450260 )
  • My own (personal) beef with the real numbers
    Name drops Voevodsky. Didn't we do this last week? Or was that someone else?

    there is a very strict correspondence between topology and logic.
    — Mephist

    Perfectly well known before Voevodsky.
    fishfry

    OK, I'll try to explain this point.
    The fact that there is a relation between topology and logic (mediated by category theory) was well known even before, you are right. But Voevodsky's "homotopy type theory" (https://homotopytypetheory.org/) does not say simply that there is a relation between topology and logic: it says that "homotopy theory" (that is a branch of topology) ( https://en.wikipedia.org/wiki/Category:Homotopy_theory ) and Martin-Lof intuitionistic type theory with the addition of a particular axiom (the univalence axiom - https://ncatlab.org/nlab/show/univalence+axiom) ARE EXACTLY THE SAME THING (the same theory). Meaning: there is this axiomatic theory that speaks about homotopy between topological spaces, expressed in the language of category theory (and then in ZFC set theory - it is still valid in any topos, but I don't want to make it too complicated). So, the terms of the language are spaces, points, paths connecting points, equivalence classes between these paths etc...
    Now, if you take whatever theorem from homotopy theory and RENAME all the terms of the theory, substituting the word "types" for the word "spaces", "proofs" for the word "points", "equalities" for the word "paths", etc... (lots of details omitted, of course), you obtain a theorem in type theory. And if you take any theorem in type theory you can reinterpret it as a theorem about topology.

    At the end, something similar happens even with the usual set theory: a set can be interpreted as the set of all models that satisfy a given proposition, inclusion between sets can be interpreted as implication between propositions, intersections as "AND" operations, etc... But you know, the devil is in the details (especially in "infinitary" details :wink:).
    Well, in the case of topology and type theory all the details match exactly: it's "take two at the price of one". You prove something about topological spaces and you obtain a theorem about types, and vice-versa.
  • My own (personal) beef with the real numbers
    So in fact mathematics itself, defined as everything mathematicians have written down or even thought about -- since thought is a physical process too -- is a "physical process." The work of mathematicians takes energy; and therefore it should be possible to study the nature of math as a physical process.fishfry

    Yes, exactly! (well, I don't understand what energy has to do with this, but more or less...) However, that's only MY idea (and it's more about philosophy than mathematics). So, even if there are lots of mathematicians that have similar ideas (as I believe), no serious mathematician is courageous enough to say this out loud! :yikes: (just take a look at arXiv.org and try to find any word about philosophy!).

    The correspondence between topology and logic, instead, is one of the most popular ideas in today's mathematics! :cool:
  • My own (personal) beef with the real numbers
    Yes, there are a lot of formal logics based on type theory - and even Martin-Löf type theory has a lot of variants (too many to be something important, right? :smile: )
    But I think that now the picture is becoming quite clear (even thanks to Voevodsky's work): there is a very strict correspondence between topology and logic. But you have to "extend" the notion of topology to the one of topoi (a category with some additional properties). ZFC is the logic corresponding to the standard topology (where lines are made of uncountable sets of points). But ZFC and the "standard" topology are not at all the only logically sound possibility! (that in a VERY short summary)
  • My own (personal) beef with the real numbers
    No, a finite state machine is equivalent to a finite Turing machine, I guess.
    Yes, lambda calculus (the original one invented by Church - https://en.wikipedia.org/wiki/Lambda_calculus) and Turing machines are equivalent as computing power.
    But there are various other less powerful (as computing power) forms of lambda calculus, that are called in general "typed" lambda calculi (https://en.wikipedia.org/wiki/Typed_lambda_calculus) that do not correspond to Turing machines as computing power, but are still more powerful than finite state machines.
    One of the most powerful (as computing power and expressiveness of the resulting language) lambda calculi is called Martin-Lof dependent type theory (https://ncatlab.org/nlab/show/Martin-L%C3%B6f+dependent+type+theory). Yes, it's not called "dependently typed lambda calculus", but it is!
    Meaning: it's still a version of lambda calculus using types, with a very powerful type system. However, this is still less powerful than the original lambda calculus invented by Church, but powerful enough to do mathematics. The interesting thing is that it's not true that this limitation produces a logic that is difficult to use. On the contrary, the logic is easier to use than ZFC, and that's the reason why most currently developed proof assistants (Lean, Coq, Agda) make use of this language (https://en.wikipedia.org/wiki/Category:Dependently_typed_languages).
    Now, about Turing machines: a Turing machine equivalent to this kind of lambda calculus is basically one of these proof assistants (without the hardware limitations of real machines, of course), so probably it's not impossible to define. But I think that nobody has described exactly this kind of logic in terms of Turing machines. However, they are equivalent to a kind of Turing machine that is less powerful than the original Turing machine, but powerful enough to do all of mathematics.

    P.S. Here's a citation taken from wikipedia: (https://en.wikipedia.org/wiki/Typed_lambda_calculus)
    All the systems mentioned so far, with the exception of the untyped lambda calculus, are strongly normalizing: all computations terminate. Therefore, they cannot describe all Turing-computable functions.[1] As another consequence they are consistent as a logic, i.e. there are uninhabited types. There exist, however, typed lambda calculi that are not strongly normalizing.
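
    Untyped lambda calculus is easy to embed in Python, e.g. via Church numerals (a standard encoding, nothing specific to this thread; the typing discipline itself is not shown here). Running the term is exactly normalizing it:

```python
# Church numerals: the numeral n is the function "apply f n times".
zero = lambda f: lambda x: x
succ = lambda n: (lambda f: lambda x: f(n(f)(x)))

def church_to_int(n):
    """Read back the normal form by counting the applications of f."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
```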
  • My own (personal) beef with the real numbers
    I've never heard of this idea, that TM's don't exist. I see no problem expressing TMs in set theory. An unbounded tape of cells is modeled as the integers. You have some rules that let you move right or left. It's pretty straightforward.

    Can you say more about this? I have never heard this idea at all. It seems VERY restrictive. Perhaps it's like denying the axiom of infinity. Logically consistent but too restrictive to do math with.
    fishfry

    Well, OK. Turing machines are a model of computation equivalent to Church's untyped lambda calculus.
    Limiting Turing machines to finite dimensions would in fact be too restrictive to do mathematics (including the infamous square root of two :wink: ). But you can limit Church's untyped lambda calculus by introducing a typed lambda calculus, and with dependent types this is a formal system powerful enough to do mathematics - and corresponding to some form of intuitionistic theory (omitting details).
    I don't know what the equivalent limitation on Turing machines corresponding to dependently typed lambda calculus would be (if there is one). So, I should have said that we can assume that the "original" (non-limited) Turing machine does not exist.
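    A small illustration of the difference (a Python sketch, purely as an analogy): the untyped calculus admits terms with no normal form, and that is exactly what typing rules out.

```python
# The classic non-terminating term of the untyped lambda calculus:
# omega = (λx. x x)(λx. x x) reduces to itself forever. Encoded in
# Python, the reduction overflows the call stack. In a (simply) typed
# calculus the self-application `x x` is not even well-typed, so
# every well-typed term has a normal form.
loop = lambda x: x(x)

try:
    loop(loop)  # never produces a value
except RecursionError:
    print("no normal form: the reduction diverges")
```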
  • My own (personal) beef with the real numbers
    Well, my argument was simpler than this:
    - We know that we cannot enumerate all halting Turing machines, so for every supposedly complete list of halting Turing machines I can find another halting Turing machine that is not in the list.
    Does that mean that the set of halting Turing machines is uncountable? No! It only means that there is no way to enumerate that list!
    Now I have a second list: the list of all functions - even non-computable ones - from N to N, and I have a theorem that says exactly the same thing: for every supposedly complete list of functions from N to N, I can find another function from N to N that is not in the list. But in this case I conclude that the set of functions from N to N is uncountable! Why? What guarantees that in this case the set is uncountable, rather than merely impossible to enumerate? The theorem only says that there have to be some more elements not present in the original list (the list cannot be complete), but not that there has to be an uncountable number of missing elements.
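    The shared theorem can be sketched concretely (Python, with a hypothetical toy list): given any enumeration of functions from N to N, the diagonal function disagrees with the n-th function at input n.

```python
# Diagonalization: build ONE function guaranteed to be missing from
# the given (supposedly complete) list of functions N -> N.
def diagonal(funcs):
    # differs from funcs[n] exactly at input n
    return lambda n: funcs[n](n) + 1

# a hypothetical "complete" list of functions N -> N
listed = [lambda n: 0, lambda n: n, lambda n: n * n]
missing = diagonal(listed)

# missing is not in the list: it disagrees with each entry somewhere
print([missing(n) != listed[n](n) for n in range(3)])  # [True, True, True]
```

    Note that, exactly as said above, the construction exhibits ONE missing function per list; the jump from "no list is complete" to "uncountably many" is a further interpretation.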
  • My own (personal) beef with the real numbers
    Yes, but the existence of Turing machines cannot be proven from other principles of logic: it has to be assumed as an axiom. Maybe (I don't know) it can be deduced from the axioms of ZFC set theory.
    But what I am saying is that you can equally well assume as an axiom (that would be incompatible with ZFC) that Turing machines DO NOT exist!
    So, what is the reason to assume the existence instead of non existence? It is possible (and even probable in my opinion) that the theory that assumes the non existence of Turing machines is even more interesting (both mathematically and for physical theories) than the theory assuming their existence.
    And the alternative to TM's existence is not unique: there are a lot of possible "intuitionistic" and "constructivist" theories that are not equivalent to each other. But, in the same way, there are also a lot of non-Euclidean geometries. Constructivist theories correspond to elegant constructions in topology, represented as internal languages of certain categories. In comparison, the ZFC axioms seem much more arbitrary, from my point of view.
  • My own (personal) beef with the real numbers
    Yes, of course practically all "normal" proofs are short and all the computing power needed is a pen and a piece of paper. But in reality all computations can be considered to be proofs, right? You reduce an expression to a normal form following some rules (if it's a multiplication between integers the "proof" can be made automatically with a calculator).

    The point is that a lot of interesting physical problems (for example computing the exact shape of biological proteins by following the laws of quantum mechanics) require an amount of computation that is beyond the computing power of any (classical) calculator, and probably beyond even a computer as big as the entire universe, if you want to be 100% sure of the result. Maybe there are cases where the computation is easy because of particular symmetries of the system, but these are rather exceptional cases.
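    The idea that a computation is a proof can be made literal in a proof assistant (a Lean 4 sketch): the checker accepts `rfl` (reflexivity) precisely because it mechanically reduces both sides to the same normal form.

```lean
-- The kernel reduces `3 * 4` to the normal form `12`, so reflexivity
-- is a complete proof: the computation IS the proof.
example : 3 * 4 = 12 := rfl
```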
  • My own (personal) beef with the real numbers

    OK, so let's try to follow this definition of measure to calculate the measure of the two sets that you mentioned:
    (1) The set of computable bitstrings: there is a countably infinite number of computable bitstrings.
    (2) The set of all possible bitstrings: there is an uncountably infinite number of possible bitstrings.

    So, to calculate the probability of choosing a computable bitstring, I have to divide (1) by (2), and the result is zero. What kind of computation (limit, theorem, or whatever) should I use to divide (1) by (2)? How is this computation defined? That was my question.

    Second question: supposing that the division of (1) by (2) is zero, what sense can I make of that measure?
    - I cannot perform the experiments because they are infinitely long.
    - I cannot even choose one bit at a time from each of the sequences, because at every step the initial segments built so far are finite (and therefore computable). And they will remain finite forever: the experiment never ends.
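    For what it's worth, the standard computation behind "a countable set has measure zero" is not a division of (1) by (2) but a covering argument; a numerical sketch of it (Python, using exact rational arithmetic):

```python
# Cover the n-th element of a countable set with an interval of
# length eps / 2**(n+1). The total length of the cover is then
# eps * (1 - 2**-N) < eps, for ANY eps > 0 -- so the measure of
# the set is smaller than every positive number, i.e. zero.
from fractions import Fraction

eps = Fraction(1, 100)
total = sum(eps / 2 ** (n + 1) for n in range(60))
print(total < eps)  # True: total = eps * (1 - 2**-60)
```

    Whether such a limit over infinitely many intervals corresponds to any physical experiment is, of course, exactly the question being asked here.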
  • My own (personal) beef with the real numbers
    Why are you trying to convince me that math isn't physics? I'm talking about the measure of computable bitstrings in the space of all bitstrings. The measure of the computable bitstrings is zero.fishfry

    How do you define this measure in pure mathematical terms? You cannot use probability, because probability is physics (unless you find a sound mathematical definition of probability).
  • My own (personal) beef with the real numbers
    All right, as I said, I just gave up arguing about constructivism.. :meh:
  • My own (personal) beef with the real numbers
    But you asked, "What if there exists no such thing in nature?" But that is not a problem for math, because math is not physics. In math there is certainty about the repeatability of functions.fishfry

    Yes, but mathematics needs computations for proofs, and computations are physical processes. You may think that computations are not physics if you make them in your mind, but if they are too long to be made in your mind you need a computer, right? If a computation is too long to be performed by any computer even in principle, is it still valid? This sounds a little like the existence of parallel lines that never meet: they may not exist if the geometry of the universe is not Euclidean. OK, Euclidean geometry exists as a mathematical object, but can you use it to make deductions in general about any geometry? You can't, because there are sound geometries where that assumption is false.
    In the same way, you can assume that unfeasible computations exist as a mathematical object, but can you use their existence to make deductions in general about any physical theory? You can't, because there are sound physical theories where that assumption is false.
  • My own (personal) beef with the real numbers
    Any counter-example?

    P.S. I see there are several people who studied physics visiting this site: maybe we could create a post especially on this point. I am pretty sure what I said is correct.
  • My own (personal) beef with the real numbers
    You can certainly define a measure on the unit interval of reals and assign probabilities to sets of bitstrings. I didn't follow this post. You said you can't define probabilities for bitstrings but you can.fishfry

    I can assign a number to an experiment (calculated with some well-defined algorithm) and call it "probability" (for example defined as the square of the amplitude of the wave function), but I cannot prove that this is random and has a continuous distribution: it could well be that the experiment always gives the same result!

    You say that the probability will be zero for an infinite random experiment. Yes, but there are no infinite experiments!
    You can assign a measure to sets of points, but you cannot prove that these sets are uncountable! The measure does make sense in physics for open sets, but not for every set: not all sets of ZFC are measurable, so there is no corresponding experiment for non-measurable sets! (just to upset you with the same old story :razz: - the Banach-Tarski theorem does not describe a physical experiment: all physical experiments are calculated with integrals over open sets). And in constructivist logic the set of all open sets IS COUNTABLE! You cannot test whether it is countable or not with physical experiments limited in time (how do you know that you don't get the same results again after a thousand years?)
  • My own (personal) beef with the real numbers
    One way to define "constructive physics" is simply to say, "it uses constructive mathematics". But definitions of the latter sometimes arise principally from avoiding the LEM. Another tack is to avoid non-computable numbers. Or simply to state that experiments must be conclusive in a reasonable finite amount of time. I'm not sure what you two are referring to here. But I haven't read all the thread.jgill

    Yes, that's very confusing. And if you look at the mathematical literature, the terms "constructive" and "intuitionistic" seem to have changed meaning, and every author uses them with a different meaning even today.
    What I mean by "intuitionistic constructive" logic (because this is the one that I know) is Martin-Löf dependent type theory ( https://ncatlab.org/nlab/show/Martin-L%C3%B6f+dependent+type+theory )
  • My own (personal) beef with the real numbers
    Confused by this. Constructive physics wouldn't allow a random sequence.fishfry

    Constructive physics (constructivist logic) can ASSUME the existence of a function that you can call "random" (whatever that means: it's an axiomatic theory), representing a physical process. It's just that you cannot DERIVE or COMPUTE this function. You have to assume it as an axiom of the theory. The point is that this is allowed by the logic, because you cannot introduce inconsistencies in this way!
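    A sketch of what that looks like in a Martin-Löf-style system (Lean 4, with a hypothetical name `randomBit`): the sequence is postulated, so you can reason about it but never evaluate it.

```lean
-- Postulate an unspecified bit sequence "observed" from a physical
-- process. Nothing about its values can be derived from the axioms.
axiom randomBit : Nat → Bool

-- `#eval randomBit 0` would be rejected: an axiom has no
-- computational content, so the "random" function can be assumed
-- but not computed -- and no inconsistency is introduced.
```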
  • My own (personal) beef with the real numbers


    [continuation of the previous post]
    There is in fact a physical assumption that is necessary for both mathematics and logic to make sense: the fact that each time you repeat the same computation (with the same input values) you obtain the same result.
    At first, it would seem that this assumption is self-evident and has nothing to do with physics, but in reality there is no proof (nor even physical evidence) that this is true for very long and complex computations.
    In fact the computations that are essential for mathematics and logic to exist are all based on the existence of non-linear physical processes. If the world were made only of continuous (non-quantized) and linear (constant first derivatives) fields, there would be no way to perform any computation at all, and no intelligent human beings able to create - or discover - mathematical theorems.
    Since we live in a non-uniform universe, we can build objects that encode digital information, but it still seems likely that the quantity of information that can be contained in a portion of space-time and the speed at which this information can be digitally processed are both finite, due to some fundamental principles that seem to be unavoidable consequences of quantum mechanics, whatever the fundamental entities of the universe are (https://en.wikipedia.org/wiki/Quantum_indeterminacy, https://en.wikipedia.org/wiki/Quantum_information).
    So, there likely exist quantities, or proofs of theorems, whose computation (or proof) is physically impossible, even in principle. This is the same kind of question that we have in physics: does a model of the physical universe (or "multiverse") whose experimental detection is not possible, even in principle, make sense? Or, similarly, does it make sense to imagine an infinite non-observable Euclidean space-time that contains the physical space (which is NOT incompatible, as a model, with general relativity), or is it better to imagine a finite non-Euclidean space-time?

    Now, both ZFC and intuitionistic logic (of whatever kind) assume that there exist some symbols that represent functions. And a function (in the model) is whatever "thing" with the following property: every time you give it the same input, it returns the same output.
    But what if there is no such thing in nature, because all physical computational processes are in principle limited? There would be a process that is well defined as an experiment, but you cannot count on the fact that you always get the same output for the same input with absolute certainty: you get results that are statistically determined, but not deterministic.
    So, you could have for example a theorem that is statistically true at 90%, but impossible to prove with 100% certainty even IN PRINCIPLE, due to some fundamental laws of physics (that we BELIEVE to be true, but of course have only an experimental - not mathematical - validity).
    My question is: should this still be considered a mathematical theorem, or discarded because we do not have an absolutely certain proof of it? I believe that at the end it will be accepted as valid, but only in some kind of "quantum" logic, based on quantum-mechanical "experiments" instead of "real" proofs.
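    A concrete analogue already exists in classical computing (a Python sketch): probabilistic primality testing, where each repetition raises the statistical confidence but never reaches absolute certainty.

```python
# Fermat primality test: a "statistically true" claim. Each passed
# round raises confidence that n is prime, but (for most composites)
# never gives a 100% proof; one failed round, however, is a
# conclusive proof that n is composite.
import random

def fermat_probably_prime(n, rounds=30):
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # a is a witness: n is certainly composite
    return True  # n is prime "beyond reasonable doubt", not provably

print(fermat_probably_prime(10007))  # True: 10007 is prime
print(fermat_probably_prime(10001))  # almost certainly False: 10001 = 73 * 137
```

    The asymmetry mirrors the question above: compositeness is settled by a single witness, while primality here is only ever established statistically.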