• fdrake
    6.6k
    Would it be fair to say that thus far, we have only discussed the language by which we would reason about any kinds of objects, but (other than for examples) we have not yet formally introduced any objects about which to reason?Pfhorrest

    IMHO you can start wherever you like, the "existence" of any object that satisfies some property is really only relative to some other object. Like "the naturals exist inside the reals" or "the continuous functions exist inside the differentiable functions". "Existence" is just a question of whether an object exists inside another one.

    You can "introduce" an object without it existing. "imagine a signature with these rules, then this follows", this is like the deduction theorem (with a big abuse of notation):

    If you have Γ, A ⊢ B, then you have Γ ⊢ (A → B).

    There is an object with the structure, so it has these properties.
    vs
    If there was an object with this structure, it would have these properties

    In this regard, the reason I've waited to talk about set theory is that the model of set theory contains sub-objects which model all the structures we've talked about. If you want to start from "something fundamental", this "contained object" idea, or "the theory of ZFC is a supertheory of the theory of natural numbers", say, is a good intuition for why ZFC is in some sense "fundamental".
  • fdrake
    6.6k


    But yes, thinking about it again, what you've said is accurate. I've not written down what sets are, natural numbers are, rational numbers are etc; but I've gone some way of showing a system in which you can talk about them formally.
  • Pfhorrest
    4.6k
    That’s what I meant yeah, thanks! And also I’m totally on board with ambivalence about the “existence” of mathematical objects, à la:

    If there was an object with this structure, it would have these properties
    vs
    There is an object with the structure, so it has these properties.
    fdrake

    As I understand it, we’re really saying “all objects with this structure have these properties”, but that’s technically true whether or not there “really” are any objects with that structure at all. All bachelors are unmarried, even if there are no bachelors.
  • fdrake
    6.6k
    As I understand it, we’re really saying “all objects with this structure have these properties”, but that’s technically true whether or not there “really” are any objects with that structure at all. All bachelors are unmarried, even if there are no bachelors.Pfhorrest

    I think this is about right. Though it's clearly true that not every first order theory holds in the empty domain; eg "there exists an x such that x+1 = 2" is not true in the empty domain, but it is true in the naturals with addition.

    Something that gives me that Pascal feeling of infinity vertigo is that we can say things like:

    "If you interpret the Peano axioms in the usual way, then..."

    Which conjures up a meta-language to talk about an implication from a meta-language item to an object-language item. It seems the formalism always has to stop at some point, but the reason (and conceptual content) doesn't.
  • creativesoul
    11.9k
    Since the syntax and semantics are formally distinct, this highlights the possibility of a gap between syntax and semantics of a formal language; and this gap is a site of interesting questions.fdrake

    Yep.
  • Pfhorrest
    4.6k
    "If you interpret the Peano axioms in the usual way, then..."fdrake

    Yeah, this is what I really take the truth of mathematical or more generally logical statements to be about. “All bachelors are unmarried” isn’t really a necessary truth by itself, but “If bachelor means ‘unmarried man of marriageable age’ then all bachelors are unmarried” is. I take mathematical truths to be truths about the implications from axioms to theorems. None of the theorems are themselves necessarily true, but it’s necessarily true that they are implied by their axioms.

    So it is only necessarily true that some axioms imply the existence of the natural numbers with their usual properties including some natural number x such that x+1=2, but that is a conditional statement that is therefore equivalent to some “all A are B” statement that I don’t care to reconstruct right now, and is thus true even in an empty universe.
  • fdrake
    6.6k
    None of the theorems are themselves necessarily true, but it’s necessarily true that they are implied by their axioms.Pfhorrest

    Only once you've fixed the underlying logic, I think. I'm not too happy with bringing in an exterior sense of modality to the theorems. If we're in a context sufficiently abstract to start playing around with the rules of logic, then necessity and possibility ideas might be perturbed too.

    Edit: though, again, I generally agree with what you've said. I might just be being pedantic here.
  • fdrake
    6.6k


    Regarding the sense of necessity thing, maybe this helps spell out my suspicions that it doesn't tell us much.

    Define that a given statement x is possible with respect to a given axiomatic system z if and only if (the theory of z & x) is consistent.
    Define that a given statement x is necessary with respect to a given axiomatic system z if and only if (the theory of z & not(x)) is inconsistent.

    If we have a list of axiomatic systems as the set of possible worlds, with the same underlying logic, and we fix a theory like arithmetic, then arithmetic will be possible for axiomatic systems that don't make any of its theorems false (IE, possible worlds look like arithmetic augmented with rules consistent with all of its rules, or arithmetic weakened by deleting rules or modifying them to weaker versions), and arithmetic will be necessary for axiomatic systems that make all of the theorems of arithmetic true (IE, the axiomatic system under consideration contains all theorems of arithmetic in its theory, like set theory or Russell's type theory).

    If we have a list of possible worlds that contained all satisfiable combinations of well formed formulae of the logic, the only statements true in all possible worlds would be those that we can derive from any world; the tautologies of the underlying logic.

    Are the theorems concerning natural numbers necessary in the above sense? Well no, for example the rational numbers and how to manipulate fractions are in the above list of possible worlds; and for the fractions, it's false that every fraction is the successor of some other fraction (under their usual ordering).

    (It's false that every element of the structure of fractions is the successor of some other number, but it is true for its sub-structure of naturals; in the arithmetic of fractions there will be a sub-arithmetic of naturals that has the successor function behaving normally, but this weakens the ∀x in successor axiom statements about natural numbers to ∀x∈ℕ. In some sense they're the same statement since they're about the same object, but the universality of the successor property still breaks for the larger object of all fractions.)

    If we go back one layer and allow the rules of logic to vary (over the set of well formed formulae); the only necessities would be the shared tautologies of every logic under consideration.

    If we vary the set of logics to include just one logic which has, say, ¬¬P ∧ ¬P as satisfiable, not even all the consequences of propositional logic would be necessary (because there exists a satisfiable theory/possible world which is inconsistent with double negation elimination).
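    These definitions can be sketched in code for propositional logic, where consistency is just brute-force satisfiability over truth tables. This is only an illustrative toy: the helper names and the little two-axiom system are made up for the example.

```python
from itertools import product

# A formula is modelled as a function from a truth assignment (a dict) to a bool.
# All helper names here are made up for the illustration.

def satisfiable(formula, variables):
    """Brute-force satisfiability: try every truth assignment."""
    return any(formula(dict(zip(variables, values)))
               for values in product([True, False], repeat=len(variables)))

def possible(axioms, statement, variables):
    """x is possible w.r.t. z iff (z & x) is consistent (satisfiable)."""
    return satisfiable(lambda v: axioms(v) and statement(v), variables)

def necessary(axioms, statement, variables):
    """x is necessary w.r.t. z iff (z & not(x)) is inconsistent."""
    return not satisfiable(lambda v: axioms(v) and not statement(v), variables)

# A tiny axiomatic system z: the axioms P and (P -> Q).
z = lambda v: v["P"] and (not v["P"] or v["Q"])

print(possible(z, lambda v: v["Q"], ["P", "Q"]))     # True: Q is consistent with z
print(necessary(z, lambda v: v["Q"], ["P", "Q"]))    # True: z forces Q (modus ponens)
print(necessary(z, lambda v: not v["P"], ["P", "Q"]))  # False: not(P) is ruled out, not forced
```

    Varying which axiom system plays the role of z then varies what counts as necessary, which is the relativity being discussed above.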
  • Pfhorrest
    4.6k
    This reminds me vaguely of a philosophical or logical problem I read about once, and can't remember the resolution to at the moment.

    The argument for the problem was that it seems like an inference like "P, if P then Q, therefore Q" depends on a hidden premise, "if (P and if P then Q) then Q", which in turn depends on yet another hidden nested conditional premise, and so on ad infinitum. Whenever you have some premises from which you think you can derive a conclusion, you're implicitly using a hidden premise that says you can derive that conclusion from those premises, but even once you've explicated that hidden premise you still have the same problem and need to add another hidden premise, on and on forever.

    This sounds like that problem in that, say, a theorem of arithmetic may be necessitated by ZFC, but ZFC is not necessitated by propositional logic, you can use other axiomatic systems that maybe don't necessitate that theorem; and even if ZFC were necessitated by propositional logic, that may not be necessitated by anything, as there are for example paraconsistent logics. You keep falling back on earlier and earlier sets of rules that say that you're allowed to use the rules you tried to use earlier, but who says you can use those earlier rules?

    This also reminds me of Kripkean relative modality, where something can be necessary inasmuch as it is true in all worlds accessible from a reference world, even if it's not true in absolutely every world.

    I don't have much more well-sorted thoughts on the topic now besides those comparisons to other problems.
  • fdrake
    6.6k
    This also reminds me of Kripkean relative modality, where something can be necessary inasmuch as it is true in all worlds accessible from a reference world, even if it's not true in absolutely every world.Pfhorrest

    I think that's the general picture involved with it, yeah. When you fix a starting point (of assumptions), you fix what is necessary for it (what can't fail to be true given the assumptions), what's consistent/possible with it (what can be true at the same time as the assumptions) and what's necessarily not for it (what can't be true at the same time as the assumptions).

    Anyway, onto ZFC.

    What's the purpose of ZFC? Well, historically, it arises at the time a foundation for mathematics was a popular research program. What it does is attempt to provide an axiomatic system in which all the usual theorems of mathematics are provable. How it does this is by showing that all these usual mathematical statements are secretly statements about sets; or rather, we can construct sets (in accordance with the axioms of ZFC) which interpret the theory of other structures (like continuous functions, arithmetic, fractions, etc).

    How does ZFC achieve this? It forms an axiomatisation of sensible rules for combining, comparing and transforming collections of objects. One important property of all the rules is a notion of closure. Closure is a general mathematical property that says that "every way we can act on something in ways described by our stipulated rules produces something which obeys those rules". In the abstract, this maybe doesn't make much sense, but here are some examples:

    (Of course, studying it has its own merits, but for our purposes all we need are the tools to build sets.)


    When we add two numbers in {0,1,2,3,...}, we're guaranteed to get another number. This says that {0,1,2,3,...} is closed under addition.

    When we multiply two numbers in {0,1,2,3,...}, we're guaranteed to get another number. This says {0,1,2,3,...} is closed under multiplication.

    But when we divide a number by another in {0,1,2,3,...}, we might get a fraction, like 0.5, which is not a number in {0,1,2,3,...}, and so {0,1,2,3,...} is not closed under division.
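    A quick way to see closure concretely is to check every pair. A minimal sketch (the helper name closed_under is made up for the illustration):

```python
def closed_under(elements, op):
    """Does applying op to every pair from `elements` stay inside `elements`?"""
    return all(op(a, b) in elements for a in elements for b in elements)

bits = {0, 1}
print(closed_under(bits, lambda a, b: a * b))   # True: 0*0, 0*1, 1*1 all stay in {0,1}
print(closed_under(bits, lambda a, b: a + b))   # False: 1 + 1 = 2 escapes the set

# Division escapes the naturals entirely: 1 divided by 2 is 0.5,
# which is not in {0,1,2,3,...} at all.
print(1 / 2 in set(range(100)))                 # False
```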

    The axioms of ZFC give us various things we can do to sets to produce other sets. That is to say, if we follow the axioms of ZFC, we will stay within the structure; we'll still be talking about sets. Axiomatising something generally requires that the structure is closed under its stipulated rules and if the structure is not closed, that's seen as a defect/opportunity to introduce further axioms and/or objects.

    ZFC stipulates rules that we can use to make and transform sets and get other sets out in a way that is sufficiently flexible to interpret usual mathematical objects as objects of ZFC - as sets.

    (Some of you familiar with types might notice a correspondence; the closure of an object under some rules induces a type: the type of belonging to that object.)


    (We've already seen two instances of closure before: with regard to the construction of well formed formulae (you can't make a non-well-formed formula from a well formed formula using the production rules), and with regard to the theory of an axiomatic system; a theory of an axiomatic system is a closure of that axiomatic system under syntactic derivation; the collection of all consequences of a bunch of axioms is closed under inference.)
  • fdrake
    6.6k
    The first step is to introduce the relevant symbols for ZFC. It inherits the language and inference rules of first order logic, but it needs to be augmented with a relation symbol ∈, which denotes a set being a member of another set. EG

    1 ∈ {1,2}

    says "1 is a member of the set containing 1 and 2". The bracket notation { } means "the set consisting of the elements inside the brackets".

    It also needs a symbol for the empty set, the set containing no elements. This is ∅. With the previous membership relation, the empty set can be characterised as:

    ∀x(x ∉ ∅)

    Which says the empty set is the name for the set with no elements - for all sets x, x is not in the empty set. ∉ is the negation of ∈. EG 3 ∉ {1,2} since 3 is not an element of the set {1,2}.

    This makes the signature of ZFC (X, ∈, ∅). The existence of the empty set can be entailed by the other axioms, so it doesn't really need to be thrown into the signature as a constant, but I'll do it anyway. We could also include a symbol for equality of sets, =, but since I've left that out of the previous discussions I'll leave it out here too.* Furthermore, we're going to postpone discussion of precisely what "X" is, but suffice now to say that it's a collection of (names of) all the sets.

    *
    It certainly makes things a bit fuzzier to equivocate between the ideas of equality in the underlying logic and equality in the first order theory using the logic, but considering that the only context this distinction is likely to crop up in is in distinguishing equal objects of the logic from equal objects of the theory, I'm going to just assume that the context will be sufficient to disambiguate the use of the symbol.


    The first rule we'll need to talk about sets is a criterion for when they're equal - how can we tell whether two sets are actually the same set? This principle is encoded in the axiom of extensionality, which states "two sets are equal when and only when they share all and only the same elements", eg {1,2} = {1,2}, or {2-1,2}={1,2}, since they share all and only the same elements. {1,2} isn't equal to {1,2,3} even though all the elements of {1,2} are in {1,2,3} because {1,2,3} has the additional element 3 (which is not in {1,2}). Prosaically, a set is the same as another if the other contains no more, no less, and no different elements from the first.

    Formally, the axiom can be written as:

    ∀A∀B(∀x(x∈A↔x∈B)↔A=B)

    "For two given sets A and B, and for all sets x, a set A is equal to a set B when (if an x is in A when and only when it is also in B)". "All and only the same elements" is what is formalised here.
    The first bit, ∀A∀B(∀x(x∈A↔x∈B)), says this is logically equivalent to the rest; the ∀ formalises the "All sets", the x∈A↔x∈B formalises A having x as a member if B has it and vice versa. This posits what it means for two sets to be equal.
  • Pfhorrest
    4.6k
    I think that last bit has some small errors in the explanation of ∀A∀B(∀x(x∈A↔x∈B)↔A=B), and it might be clearer to switch the places of the biimplicants in that anyway, so ∀A∀B(A=B↔∀x(x∈A↔x∈B)). "For all sets A and B, set A is equal to set B when and only when (for all x, x is in A when and only when it is also in B)". It's not ∀A∀B(∀x(x∈A↔x∈B)) that is equivalent to A=B, but rather for all A and all B ("∀A∀B"), just ∀x(x∈A↔x∈B) is equivalent to A=B.
  • Pfhorrest
    4.6k
    Looks good enough for me. :)
  • fdrake
    6.6k
    The axiom of extensionality defined what it means for two sets to be equal. They were equal if they contained the same things.

    An analogy - shopping lists. Two shopping lists are the same if they have the same items in them.

    The next axiom is the axiom of pairing, which states that if you have two sets, you can make another set which contains only those two sets as elements.

    Shopping list wise - this means that if you have two shopping lists for different shops, you could make a shopping list for both shops by combining them.

    Then there's the axiom of (restricted) comprehension - if you have a set, you can take the subset of it whose elements are exactly those elements which satisfy some property. If A = {1,2,3,4}, we can construct the set

    {x ∈ A : x is even}

    and get {2,4}. All the axiom of comprehension does is ensure the existence of a subset of a set which satisfies a specified property. It's called the axiom of restricted comprehension because it requires that we begin with a set (like A) then apply a property to it to get a subset (like {2,4}). If we could 'conjure up' a set into the language of set theory solely by specifying a property:

    {x : x satisfies P}

    Then we can make P whatever we like. Notoriously, this gives

    R = {x : x ∉ x}

    The Russell set. This must exist if we stipulated "we can conjure up a set through a property", and P is a property of set theory since it's a well formed formula of the language of set theory. If something belongs to the set, it doesn't belong to it; if something does not belong to the set, then it belongs to it. The reason mathematicians rejected this axiom is that it entails a contradiction, and through the principle of explosion (discussed earlier), makes the theory prove everything it can state, even the falsehoods. Restricted comprehension saves the theory from the paradox by making sure you start with a set to apply properties to, rather than conjuring up a set from the logic using a property. If you change the underlying logic to one that is more resistant to contradictions (like a paraconsistent logic), you might still want the unrestricted version.

    Shopping list wise - this says that we can select sub-lists by specifying a property of the elements. "What vegetables will we buy?" is answered by the sub-list of vegetables.

    The next is the axiom of union. This says that for every collection of sets, there's a set which consists of the elements of all of them taken together.

    Shopping list wise, this says that you can make a larger shopping list out of any collection of shopping lists. A shopping list of meat items taken with a shopping list of vegetables becomes another shopping list which consists of the meat items and the vegetables.

    The next is the axiom of powerset. This says that for every set, there exists a set which consists of all the subsets of that set.

    Shopping list wise, if we have the list:

    Carrots
    Potatoes
    Eggs

    This says we must be able to construct the lists

    {Carrots}, {Potatoes}, {Eggs}, {Carrots, Potatoes}, {Carrots, Eggs}, {Potatoes, Eggs},
    {Carrots, Potatoes, Eggs}, and the empty list (what you would buy if you bought everything on a blank page).

    Then make a list of all those shopping lists and the blank bit of paper. That whole new list is the powerset of the original shopping list.

    The powerset of a set is always larger than the original set. If you had a set of infinite size (to be introduced later), this means there is more than one size of infinity. If we considered an object which would be a model of set theory, then because of the power set axiom, intuitively it would have to have a size larger than any set, in particular larger than the natural numbers! Unfortunately this intuition does not hold: there are countable models of ZFC (this is Skolem's paradox). Such a model does not satisfy the intuition, but entails no contradictions; a model in some sense sets out a meaning of terms, and a countable model of ZFC doesn't have the statements in it mean precisely the same thing.
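    The set-building axioms above have natural finite stand-ins using Python's frozenset (hashable, so sets can contain sets). A sketch, with all helper names made up for the illustration:

```python
from itertools import combinations

def pair(a, b):
    """Axiom of pairing: from two sets, the set containing exactly those two."""
    return frozenset({a, b})

def comprehension(A, prop):
    """Restricted comprehension: the subset of A whose elements satisfy prop."""
    return frozenset(x for x in A if prop(x))

def union(collection):
    """Axiom of union: the elements of all the members, taken together."""
    return frozenset(x for member in collection for x in member)

def powerset(A):
    """Axiom of powerset: the set of all subsets of A."""
    return frozenset(frozenset(c) for r in range(len(A) + 1)
                     for c in combinations(A, r))

A = frozenset({1, 2, 3, 4})
print(comprehension(A, lambda x: x % 2 == 0) == frozenset({2, 4}))        # True
print(union({frozenset({1, 2}), frozenset({2, 3})}) == frozenset({1, 2, 3}))  # True
print(len(powerset(A)))   # 16, always more than len(A)
```

    For finite sets the powerset has 2^n elements, which is where "always larger" comes from; the infinite case is where Cantor's argument and Skolem's paradox live.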
  • fdrake
    6.6k
    Now onto the axiom of infinity.

    The axiom of infinity states that:

    "There is a set I such that ∅ is in I, and if the set x is in I then S(x) is in I". Where S(x) is defined as x ∪ {x}.

    This looks like:

    ∃I(∅ ∈ I ∧ ∀x(x ∈ I → x ∪ {x} ∈ I))

    If we define the symbols 0 = ∅, 1 = S(0) = {∅},
    2 = S(1) = {∅, {∅}}, ... , n+1 = S(n),

    we can see that I looks suspiciously like the natural numbers. This is true. It is the natural numbers ℕ. Arithmetic can be defined on this set by defining an operation + (a function from ℕ×ℕ to ℕ) which is given by:

    x + 0 = x
    x + S(y) = S(x + y)

    This lets us prove 1 + 1 = 2 in ℕ.

    Theorem: 1 + 1 = 2
    Proof: Let x be in ℕ, then x + 1 = x + S(0) = S(x + 0) = S(x). Then set x = 1, this gives 1 + 1 = S(1) = 2.
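    The whole construction fits in a few lines using frozensets: 0 as the empty set, successor as x ∪ {x}, and addition by the two recursion equations. One detail is my own trick, not part of the construction: for a von Neumann numeral n = {0, 1, ..., n-1}, the predecessor is the member with the most elements.

```python
EMPTY = frozenset()                        # 0 = {} = the empty set

def successor(x):
    """S(x) = x U {x}"""
    return x | frozenset({x})

def numeral(n):
    """The von Neumann set coding the natural number n."""
    x = EMPTY
    for _ in range(n):
        x = successor(x)
    return x

def add(x, y):
    """Recursive addition: x + 0 = x, and x + S(y) = S(x + y)."""
    if y == EMPTY:
        return x
    predecessor = max(y, key=len)          # for von Neumann numerals, the member
    return successor(add(x, predecessor))  # with the most elements is the predecessor

one, two = numeral(1), numeral(2)
print(one == frozenset({EMPTY}))                      # True: 1 = {{}}
print(two == frozenset({EMPTY, frozenset({EMPTY})}))  # True: 2 = {{}, {{}}}
print(add(one, one) == two)                           # True: 1 + 1 = 2, purely as sets
```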
  • Pfhorrest
    4.6k
    Hurray, we're finally up to where I began, and the first step past it that I said I wasn't able to do myself!

    Is it possible to write "1 + 1 = 2" entirely in terms of sets and set operations? Like, if I understand correctly, we can say that 0 = ∅ = {}, and that 1 = {∅} = {{}}, no? So 2 = {{},{{}}}, right?

    So {{}} + {{}} = {{},{{}}}

    But is there a way to write that "+" in terms of things like ∪, entirely set-theoretic operations?
  • fdrake
    6.6k
    An interesting property of the set ℕ as defined above is the induction property. It allows a kind of proof called inductive proof. This provides a way of proving things about sets of natural numbers. Induction works like:

    (1) Show that some statement P is true of some starting number (say 0 or 1).
    (2) Show that if P is true of n, then P is true of n+1.
    (3) Conclude: therefore P is true of n for all n.

    EG: Theorem: "Given the sequence aₙ = n for n in ℕ, show that the sum of the first n terms of this sequence is n(n+1)/2."

    Proof:
    (1) Initial statement: n = 1, then 1 = 1(1+1)/2.

    Now how do we show (2)? We can use the strategy of conditional proof detailed earlier: simply assume that P(n) is true and then show that P(n+1) is true on this basis. "Assuming P(n) is true" is called the induction hypothesis. So we make that induction hypothesis.

    Assume 1 + 2 + ... + n = n(n+1)/2; then 1 + 2 + ... + n + (n+1) is n(n+1)/2 + (n+1), which is (n(n+1) + 2(n+1))/2, which after some tedious algebra is (n+1)((n+1)+1)/2.

    (3) The combination of (1) and (2) shows that this is true for all the numbers.
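    The induction proof guarantees the formula for every n; a finite spot-check in code can't replace it, but it is reassuring:

```python
# Spot-check the formula proved by induction: 1 + 2 + ... + n = n(n+1)/2.
for n in range(1, 101):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("1 + 2 + ... + n = n(n+1)/2 holds for n = 1 to 100")
```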

    Is it possible to write "1 + 1 = 2" entirely in terms of sets and set operations? Like, if I understand correctly, we can say that 0 = ∅ = {}, and that 1 = {∅} = {{}}, no? So 2 = {{},{{}}}, right?Pfhorrest

    Yes. The definition of addition lets you do this, it's just an exercise of translation.

    0 = ∅ and S(x) = x ∪ {x}. Gives you that if x is a set which represents a natural number and so is y, then x + y is too. Working out the details for 1, it is {∅} = {{}}; working out the details for 2, it is {∅, {∅}} = {{}, {{}}}. Defining + without the inductive definition gives you "take the thing on the left and write it as its set, take the thing on the right and write it as its set, define the sum as the successor function applied to 0 as many times as the thing on the left plus the thing on the right".
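    That translation exercise can also be carried out mechanically. A sketch of the quoted recipe, using the fact that the von Neumann set for n has exactly n elements, so len counts which numeral a set is (helper names made up for the illustration):

```python
def successor(x):
    """S(x) = x U {x}"""
    return x | frozenset({x})

def numeral(n):
    """Apply the successor function to the empty set n times."""
    x = frozenset()
    for _ in range(n):
        x = successor(x)
    return x

def add(x, y):
    """The quoted recipe: the successor function applied to 0 as many times
    as the thing on the left plus the thing on the right."""
    return numeral(len(x) + len(y))

# 1 + 1 = 2 entirely in terms of sets: {{}} + {{}} = {{}, {{}}}
print(add(numeral(1), numeral(1)) == numeral(2))   # True
```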
  • fdrake
    6.6k
    "take the thing on the left and write it as its set, take the thing on the right and write it as its set, define the sum as the successor function applied to 0 as many times as the thing on the left plus the thing on the right"fdrake

    Or alternatively "take the thing on the left, and successor function it the number of times required to reach the thing on the right from 0", equivalently "union the thing on the left with the successor function applied to it as many times as the number on the right"
  • fdrake
    6.6k
    Or alternatively "take the thing on the left, and successor function it the number of times required to reach the thing on the right from 0", equivalently "union the thing on the left with the successor function applied to it as many times as the number on the right"fdrake

  • fdrake
    6.6k
    There are other axioms of ZFC. But considering that the story being told is "how to go from formal languages to differential equations", I'm not going to go through them. The axiom of infinity and the successor function give you the natural numbers, the axiom of extensionality gives you equality between natural numbers when defined in that way.

    The strategy for showing that ZFC contains within it a copy of the natural numbers and arithmetic is to show that you can define a collection of sets ℕ and the relevant operations/functions S, + (and subtraction, multiplication, division, inequalities on natural numbers etc etc...) such that the overall structure (ℕ, S, +, ...) satisfies an axiomatisation of arithmetic like (first order) Peano arithmetic.

    What this does formally is show that (the set theory construction) models (the Peano arithmetic axioms defining natural numbers and addition). The successor in Peano arithmetic is interpreted as the successor in the set theory construction, the numbers in Peano arithmetic are interpreted by their corresponding number in the set theory construction, addition in Peano arithmetic is interpreted by addition in the set theory construction and so on for however many functions/operations/relations you throw into the Peano arithmetic structure and the set theory structure.

    Recapitulating a previous point: the power of set theory is that it provides a way of defining lots of structures, so that you can take some other structure (like Peano arithmetic) and interpret it as a set theory object (collection of sets).
  • fdrake
    6.6k
    One important technique is how to construct and solve equations, and we've got the theoretical resources in place to do that for natural numbers arithmetic. So what is an equation? And what is a solution for an equation?

    An equation in a variable x in a structure (with = defined) is an expression of the form f(x) = c, where f is a function defined in the structure from its constants to its constants and c is a constant element. A solution for a given equation is a substitution of x for a constant term in the equation expression which makes f(x) = c true.

    EG: for x + 1 = 2, x = 1 substituted into the above formula gives 1 + 1 = 2, and a solution is then x = 1 since the substitution makes the formula true.
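    For a finite piece of a structure, that definition turns directly into a brute-force search over candidate substitutions. An illustrative sketch (the helper name solutions is made up):

```python
def solutions(f, c, domain):
    """All constant terms which, substituted for x, make f(x) = c true."""
    return [x for x in domain if f(x) == c]

naturals = range(100)                            # a finite sample of the naturals
print(solutions(lambda x: x + 1, 2, naturals))   # [1]: x = 1 solves x + 1 = 2
print(solutions(lambda x: x * x, 2, naturals))   # []: no natural solves x*x = 2
```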
  • bongo fury
    1.6k
    This reminds me vaguely of a philosophical or logical problem I read about once, and can't remember the resolution to at the moment.Pfhorrest

    This?

    can't remember the resolutionPfhorrest

    Lack of one is cool for inscriptionalism. No sign of a consensus on wiki.
  • fdrake
    6.6k
    Ideally what is needed, given the equation concept, is a way to manipulate equations so that their solutions can be revealed. What this requires is setting up an idea of the substitution of expressions in the same variable which are true whenever the original equation is true.

    EG: x + 1 = 2 is true implies x + 1 - 1 = 2 - 1 is true implies x = 1 is true implies x = 1 is a solution of x + 1 = 2.

    What we'd need is a way of substituting equivalent expressions for equivalent expressions, and in order to do this we can define what it means for two expressions to be equivalent.

    We'd really like something like 1 + 1 = 2 = 3 - 1 = ..., so that all ways of obtaining 2 from operations of arithmetic are in some way equivalent. This can be set up for addition as follows:

    [2] = {x + y : x + y = 2}

    which is to be read "the set of all expressions x + y, defined as [2], such that every expression within it is the sum of two numbers which equal 2".

    If we took some other number, say 3, and defined the same thing:

    [3] = {x + y : x + y = 3}

    We could ask "under what conditions is [2] = [3]?", the answer there is:

    [2] = [3] if and only if 2 = 3

    Under what conditions does this equation of sets have a (non-empty collection of) solutions? Well, whenever 2 = 3. That is, whenever the expressions in the two sets equal the same number. This gives us a perspective from which equations can be manipulated.

    Notice that if we define [2] + [3] = [2 + 3], this also respects the above. (For example, any expression which evaluates to 2 plus any expression which evaluates to 3 equals an expression which evaluates to 5; this is 2 + 3 = 5 if and only if [2] + [3] = [5].)

    If we have that [x] = [y], then we also have that x = y; this means we can substitute anything in [x] for anything in [y] in the equation and obtain another equivalent equation. Substituting equivalent expressions is how equations can be solved through algebra. Imagine that we've done the same thing for subtraction and so on, eg:

    [x] - [y] = [x - y]

    For example:

    [x + 1] = [2] implies [x + 1 - 1] = [2 - 1] implies [x] = [1]
    What this highlights is that you can leverage the equality defined between sets to define an equivalence between expressions that equal the same constant. This (kind of thing, but for all the operations) is the construction that formalises the algebra used in solving equations.
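    The [t] construction for addition expressions can be sketched directly: group every expression x + y by the number it evaluates to, then check that adding representatives respects the classes (a toy over small x and y; the names are made up):

```python
from collections import defaultdict

# The class [t] = {x + y : x + y = t}, for small x and y.
classes = defaultdict(set)
for x in range(6):
    for y in range(6):
        classes[x + y].add((x, y))

print(sorted(classes[2]))   # [(0, 2), (1, 1), (2, 0)]: every way of writing 2 as a sum

# [2] + [3] = [5]: any representative of [2] plus any representative of [3]
# evaluates to 5, i.e. lands in [5].
ok = all((a + b) + (c + d) == 5 for (a, b) in classes[2] for (c, d) in classes[3])
print(ok)   # True
```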
  • fdrake
    6.6k
    There's sort of a branching point here, @Pfhorrest, two ways to tell the same story. Now we've got resources to talk about sets, operations on sets and so forth, we could go through the specific cases of natural number arithmetic -> fractions -> real numbers, building fractions from sets of natural numbers, building real numbers from sets of fractions, or alternatively we could go into abstract algebra and talk about the same things in terms of semigroups, groups, rings and fields.

    The first has the benefit of telling a story from the "objects" points of view, how do we build up more complicated objects from simpler objects, the second has the benefit of telling the story from the "structures" point of view, which presents the integers as "solving a problem regarding addition and subtraction in the naturals", the fractions as "solving a problem regarding multiplication and division in the integers" and the real numbers as "solving a problem regarding sequences in the fractions".

    Input?
  • Pfhorrest
    4.6k
    I had initially intended this to take the first of those two routes, but I feel like I would learn more from the second one, since I know less about it already than the first. So I’m leaning toward the second. But is there not a way to incorporate both at each step? E.g. the naturals are not closed under subtraction so we need some larger set that is (second approach), and the objects of that larger set are the integers which can be built out of naturals like so (first approach).
  • fdrake
    6.6k
    The previous post talked about equations and how to define the algebra of arithmetic. It turned out that the algebra of arithmetic invokes a notion of equivalent expressions; two expressions are equivalent when they are equal to the same number. 1+1=2, 3-1=2, therefore 1+1=3-1. This idea of equivalence is so general that it should be spoken about in detail; there is a general formalism for it called an equivalence relation.

    So what's a relation? A relation on a set is a collection of pairs of elements from that set where the order matters. EG the set {1,2,3} can have the relation R={ (1,2), (2,3) } on it. This says that "1 is related to 2" and "2 is related to 3". The pairs can be written as 1R2, 2R3. Note that 1R2 is different from 2R1! An equivalence relation has three properties which characterise it:

    (1) Reflexivity: for all x; xRx
    (2) Symmetry: for all x for all y : xRy implies yRx
    (3) Transitivity: for all x for all y for all z : xRy & yRz imply xRz.

    The relation defined above satisfies these properties.

    (1) Reflexivity: eg; 3+1 = 3+1
    (2) Symmetry; eg; 3+1 = 4 implies 4 = 3+1
    (3) Transitivity; 3+1 = 4 and 4 = 5-1 implies 3+1 = 5-1

    Equivalence relations defined on some set divide that set up into non-overlapping parts. These are called equivalence classes. The equivalence class of x is written [x]. EG [4] contains 3+1 and 5-1. But it does not contain 3, or anything that adds up to 3, like 2+1. The equivalence classes on a set divide it up into distinct parts which are somehow equivalent. The above construction divides the set of expressions of the form x+y up into the various sets [t] = {x + y : x + y = t}.

    Note that [2] = [1+1]. When an equivalence class can be written in the form [x], x is called a representative of that equivalence class.

    The properties of equivalence classes (for arithmetic) exhibited above give you [x]+[y]=[x+y]. The same applies for every way of obtaining x from arithmetic operations on any collection of numbers. This means that the arithmetic operations defined on numbers transfer to arithmetic operations defined on equivalence classes. Numerical equality of what arithmetic expressions evaluate to (3+1 evaluates to 4) transfers to set equality of these equivalence classes ([3+1]=[4]). Once you get arithmetic expressions evaluating to terms functionally (which they always will, they will have unique outputs per input set by definition), you can always leverage this to define equivalence classes of expressions on whatever structure you have defined the operations on.

    This technique is used over and over again; once operations are defined on a structure, equivalence classes of evaluation (like 3+1 evaluates to 4) can be used to motivate algebraic substitution/solving things by algebra on that structure. The same will apply to rational numbers (fractions); they will be built as equivalence classes of naturals. Then real numbers will be built as equivalence classes of fractions!
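
    As a preview of the fractions case, here's a hedged Python sketch (the helper name `same_fraction` is mine): a pair (a, b) stands for a/b, and two pairs are related exactly when a*d = b*c, which is the equivalence relation the rationals will be built from.

```python
# (a, b) represents the fraction a/b.
# (a, b) ~ (c, d) iff a*d == b*c  -- equality of cross products.
def same_fraction(p, q):
    (a, b), (c, d) = p, q
    return a * d == b * c

print(same_fraction((1, 2), (2, 4)))     # True:  1/2 = 2/4
print(same_fraction((22, 7), (44, 14)))  # True:  22/7 = 44/14
print(same_fraction((1, 2), (2, 3)))     # False
```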
  • fdrake
    6.6k
    Philosophical side note:

    Different senses of identity can be thought of as equivalence classes under different relations. Numerical identity is like 2=2; predicative identities like "crimson = scarlet" (insofar as they are both red colours) are equivalence relations of a different sort. This makes sense so long as the concept can be understood extensionally without oversimplification. A concept can be understood extensionally when it is completely expressible by collecting all the terms it applies to.

    EG: "foot" is an element of the equivalence class of nouns in English, where two words are related when and only when they are of the same grammatical type.
    EG: "foot" is an element of the equivalence class of "foot" under the equivalence relation of string (character sequence) identity.

    Not all concepts behave extensionally.

    EG: "pile of rocks" has ambiguous boundaries, at what point does a collection of stones become a pile?
    EG: "hot". at what point does a temperature become hot? Does a temperature become hot in the same way as a human body feels hot?

    Expressions in natural language rarely leverage only extensional concepts to furnish their understanding; most expressions are confused jumbles. This creates lots of interpretive issues, but what we lose in rigidity and precision we gain in expressive power.
  • fdrake
    6.6k
    Mathematical structures involving operations, like arithmetic, usually contain many copies of expressions which evaluate to the same thing. In this regard, it makes sense to identify expressions in the structure up to some equivalence relation. This is what was done above: expressions involving operations were identified up to the natural number arrived at by evaluating the expression.

    In general, this is phrased "X up to equivalence relation R", and is written X/R. You can think of equivalence relations as gluing terms together into meaningful chunks, for a prescribed sense of meaning set out in the definition of the equivalence relation.

    EG: "identify the natural numbers up to their remainder upon division by 2", the equivalence relation there is characterised as xRy iff x-y 's remainder upon division by 2 is 0. This gives you that 0 is related to all the even numbers and that 1 is related to all the odd numbers. This is called "the integers modulo (mod) 2".
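
    That relation is easy to sketch in Python; checking the first few naturals against 0 and 1 recovers the two classes (the helper name `related` is mine):

```python
# xRy iff (x - y)'s remainder upon division by 2 is 0.
def related(x, y):
    return (x - y) % 2 == 0

evens = [n for n in range(10) if related(n, 0)]
odds = [n for n in range(10) if related(n, 1)]
print(evens)  # [0, 2, 4, 6, 8]  -- the class of 0
print(odds)   # [1, 3, 5, 7, 9]  -- the class of 1
```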

    When you glue parts of a structure together, the glued parts tend to inherit the operations of the original structure. EG, take two numbers, 3 and 7, with 3+7 = 10. Mapping this expression to equivalence classes mod 2: [3] = [1], [7] = [1], [10] = [0], and [3+7] = [3]+[7] = [1]+[1] = [1+1] = [2] = [0] = [10]. Everything works out.
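
    A quick Python check of that chain of equalities, representing each class mod 2 by its remainder (the helper name `cls` is mine):

```python
# The class of n mod 2, named by its remainder 0 or 1.
def cls(n):
    return n % 2

assert cls(3) == cls(1) == 1        # [3] = [1]
assert cls(7) == cls(1) == 1        # [7] = [1]
assert cls(3 + 7) == cls(10) == 0   # [3+7] = [10] = [0]
# Addition transfers to the classes: [3]+[7] = [3+7].
assert (cls(3) + cls(7)) % 2 == cls(3 + 7)
print("everything works out")
```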

    For the remainder of the discussion, whenever a structure with some operations on it (like the natural numbers with addition etc) is introduced, assume that we have identified the terms of the structure up to equivalence of evaluating expressions. So for example, 22/7 will be identified with 44/14. Explicitly setting out how this works in every case is extremely tedious and unenlightening, so I'm just going to assume that, once the structure has been defined, the (collection of) equivalence relations on its terms that I've just described have been set up appropriately. This means I will be abusing the equality symbol a lot, but the meaning should be clear from context (and if I don't think it is, I'll add a reminder).
  • fdrake
    6.6k
    (If anyone's actually reading along, questions are fine too)