• Streetlight
    9.1k
    In one of Robert Rosen's always fantastic essays, he begins by writing that "It is my contention that mathematics took a disastrous wrong turn some time in the sixth century B.C." - a 'disastrous wrong turn' he associates primarily with Pythagoras and his students, and whose effects extend all the way to today. So what 'wrong turn' does Rosen identify? He names it commensurability. The idea is this: for any two objects of measurement (lengths A and B, say), there will always be a third length (C), or common measure, that would, in principle, be able to measure the initial lengths A and B exactly and without remainder. Lengths A and B would thus be commensurable lengths with respect to length C. In the example below, for instance, length C is the common measure of lengths A and B:

    A: --
    B: -- -- --
    C: -

    As Rosen points out, what the Pythagoreans assumed was that any measurement whatsoever, anywhere in the universe, would be amenable to being measured by just the kind of common measure outlined above. This is the Pythagorean disaster. The problem is multiple. Not only is the assumption just that - an assumption - but it has also caused all sorts of issues in math itself. Ironically, it was Pythagoras himself who provided the first proof of the falsity of commensurability, after his discovery of the irrational numbers - numbers that, practically by definition, cannot be expressed as a ratio of whole numbers, and thus do not admit of a common measure. And as Rosen points out, it is the (false) assumption of commensurability that actually underlies almost all of our deepest mathematical problems: from the paradoxes of Zeno to the paradoxes of Godel, all of which, far from speaking to anything in reality, are nothing more than mathematical artefacts.
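    (A quick sketch of this in Python - my illustration, not Rosen's: the search for a common measure is just Euclid's mutual-subtraction procedure, and for rational lengths it always terminates.)

```python
from fractions import Fraction

def common_measure(a: Fraction, b: Fraction) -> Fraction:
    """Euclid's mutual subtraction (anthyphairesis): for commensurable
    lengths this terminates at their greatest common measure, which
    divides both inputs without remainder."""
    while b:
        a, b = b, a % b
    return a

# Lengths 2 and 6 share the common measure 2:
print(common_measure(Fraction(2), Fraction(6)))        # 2
# Any two rational lengths whatsoever are commensurable:
print(common_measure(Fraction(3, 4), Fraction(5, 6)))  # 1/12
```

    Run it on any pair of whole or fractional lengths and a common measure always falls out - which is exactly the Pythagorean expectation that the irrationals break.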

    Aside from this though, the real issue with commensurability is that it erases any reference to the real world. Recall that, in order to measure one length, A, using another length, B, one had to begin with two 'real world' things to be measured 'side by side' (in our example, B measures three As). However, once one assumes that all things are commensurable, one can actually drop the reference to any real-world measure whatsoever: all one has to do is assume that, having fixed an (arbitrary) common measure for all things, all things will be subject to just that measure. The ultimate advantage of this is that it allows math to become an essentially self-contained system.

    As Rosen puts it, "Pythagoras [enabled us to] create worlds without external referents; from within such a world, one cannot entail the existence of anything outside. The idea behind seeking such a formalized universe is that, if it is big enough, then everything originally outside will, in some sense, have an exact image inside. Once inside, we can claim objectivity; we can claim independence of any external context, because there is no external context anymore." However, "if we retain an exterior perspective [outside the system in which commensuribility is assumed], what goes on within any such formalized universe is at best mimesis of what is outside. It is simulation... [but] mimesis is not science, it only mimics science."

    To the degree that there are those who still believe that the world can indeed be exhaustively subjected to a common measure then, we remain heirs to the Pythagorean disaster (alternative thread title: A Critique of Pure Math).
  • TheMadFool
    13.8k
    It is simulation... [but] mimesis is not science, it only mimics science."StreetlightX

    What is real science then?

    Non-quantitative analysis isn't objective enough. Take an object with a mass of 4 kg. When I hold it in my hand it feels heavy, but to a bodybuilder it feels light. What is unchanging and quantifiable, though, is its mass: 4 kg.

    How are we to even discover the laws of nature without mathematics given that the laws themselves are mathematical?

    Perhaps there's a difference between 'commensurable' and 'quantifiable', and Robert Rosen has an issue with the former and not the latter.

    If that's the case then Rosen's observation isn't anything new because, as you said, Pythagoras already identified the problem.

    Maybe I misunderstood.
  • Streetlight
    9.1k
    The conclusion to be drawn is not that mathematics is useless or somesuch; only that one should engage in mathematics without the (unscientific) assumption of commensurability. Or put otherwise: don't assume the commensurability of everything in the world (this is theology); instead, run tests, inject a good dose of empiricism, and pay close attention to whatever phenomenon you aim to examine.
  • Wayfarer
    22.5k
    The ‘disastrous wrong turn’ of the Pythagoreans was however one of the seminal causes of the development of mathematics and ultimately mathematical physics. Russell says in his chapter on Pythagoras that the mathematical mysticism of the Pythagoreans marks it off from Indian mysticism, and that it has had many consequences for the development of the Western tradition as a whole [not least mathematical physics, I would aver.]

    It is true that many things about the Pythagoreans seem eccentric, such as the well-known story of the Pythagorean student who was drowned or strangled for discovering the irrational numbers, or the legend that Pythagoras was captured because he refused to flee across a bean field, such was his hatred of beans. But nevertheless their emphasis on pure reason, on things that could be known simply by virtue of the rational mind, is one of the major sources of Western philosophy. I haven’t read the paper and am not inclined to, but calling it ‘a disaster’ seems to me nothing more than empiricist polemics. So, sure, the particular notion in question might be completely wrong-headed, but write off Pythagoreanism and you’d have to communicate your OP via ink or smoke signals.
  • Streetlight
    9.1k
    But nevertheless the emphasis on pure reason, on things that could be known simply by virtue of the rational mind, is one of the major sources of Western philosophy.Wayfarer

    So much the worse for Western philosophy.
  • Wayfarer
    22.5k
    Yeah as I say, why don’t you use what Eastern philosophy has to offer, like calligraphy, or an abacus, or something.
  • Streetlight
    9.1k
    It's not out of the question. That said, one of the ironies of your opposition to the OP is that the anti-Pythagorean thrust of the paper goes hand-in-hand with an anti-reductionist approach to the world. Among the consequences Rosen draws is that:

    "Above all, we must give up reductionism as a universal strategy for studying the material world. But what can we do in a material world of complex systems if we must give up every landmark that has heretofore seemed to govern our relation to that world? I can give an optimistic answer to that question, beginning with the observation that we do not give up number theory simply because it is not formalizable. Godel’s Theorem pertains to the limitations of formalizability, not of mathematics. Likewise, complexity in the material world is not a limitation on science, but on certain ways of doing science."

    It's only by opposing the Pythagorean disaster that one can, for instance, demonstrate the invalidity of the Church-Turing thesis as it pertains to the world. But, given that you're an arch-reductionist yourself, I suppose it's not surprising that you'd so willingly crawl into bed with the enemy.
  • Wayfarer
    22.5k
    But science is mainly reductionist because it has become divorced from the other elements - aesthetic and moral - of the broader philosophical tradition. It was with Galileo and the advent of modern science that all of those qualities became categorised as ‘secondary’ and the ‘reign of quantity’ began in earnest. It’s the idea that only what can be quantified is significant that is at the basis of reductionism. And sure, mathematical rationalism is a big part of that. But you can be scientific without being reductionist - from what I know of Rosen, that is exactly his approach.

    So I think the Church-Turing thesis has problems because it regards human intelligence itself as something that can be digitised and written in binary code. That is reductionist in the extreme, but I think trying to attribute all of that to Pythagoras [if that is what is being said] is at the very least drawing a long bow.
  • Streetlight
    9.1k
    I think trying to attribute all of that to Pythagoras [if that is what is being said] is at the very least drawing a long bow.Wayfarer

    Why? Pythagoras was the mathematical reductionist par excellence. It is not for nothing that the notion that 'everything is number' is chiefly associated with his name. There's barely a bow-string to pull, let alone draw long.

    Conversely, I'd argue that science has been reductionist because it has for so long been enmeshed in the 'broader philosophical tradition'. It's only now been able to begin to extricate itself from that muck and mire, now that the Pythagorean inheritance is drawing its (hopefully) dying breaths. A cause for celebration.
  • apokrisis
    7.3k
    So “disaster” is claimed merely for dramatic effect. Thank goodness for Pythagoreanism. It reveals the true epistemic nature of modelling. We could become scientists having established the clear difference between formal model, acts of measurement, and the world “as it really is”.

    How splendidly Peircean!
  • Janus
    16.3k
    But nevertheless their emphasis on pure reason, on things that could be known simply by virtue of the rational mind, is one of the major sources of Western philosophy. I haven’t read the paper and am not inclined to, but calling it ‘a disaster’ seems to me nothing more than empiricist polemics. So, sure, the particular notion in question might be completely wrong-headed, but write off Pythagoreanism and you’d have to communicate your OP via ink or smoke signals.Wayfarer

    You make an unwarranted assumption: that mathematics, science and philosophy could not have evolved in any other way than they have.

    Also, it is not that certain generalities cannot be known by reflecting on our experience; rather, the greatest mistake has been to misconstrue that process of reflection as "pure reason".

    The very idea of pure reason is itself a grotesque reification; "a fallacy of misplaced concreteness", to quote Whitehead.
  • Banno
    25k
    "It is my contention that mathematics took a disastrous wrong turn some time in the sixth century B.C."StreetlightX

    Seems to me, if the result of that disaster is mathematics, science and technology, we could use more such disasters.
  • Streetlight
    9.1k
    Seems to me, if the result of that disaster is mathematics, science and technology, we could use more such disasters.Banno

    But the assumption now stands in the way of just those things.
  • apokrisis
    7.3k
    Yeah. Just look at everything made impossible in just the past 50 years. Chaos theory. Complexity theory. Fractals. Dissipative structure theory. The list of advances in mathematical modelling that did not happen due to this “disaster” just goes on and on. ;)

    Rosen’s point about incommensurability in all those fields still stands. But it was also overcoming the issue in a pragmatic fashion that has brought home the underlying trick involved.

    Non-linear maths demonstrated that measurement error is not necessarily linear. Indeed, as Rosen says, it is almost generically the case that it ain’t.

    But rather than mathematical modelling just curling up and dying, it seems rather invigorated at finding ways to handle non-linear measurement error.
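    (To make this concrete - the example is mine, not apokrisis's: in the chaotic regime of the logistic map, a measurement error at the tenth decimal place grows non-linearly under iteration until it is as large as the system itself.)

```python
# Logistic map in its chaotic regime (r = 4): two trajectories whose
# initial conditions differ by a 1e-10 'measurement error'.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10
max_gap = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

# The error grows roughly exponentially until it saturates at the
# size of the attractor itself.
print(max_gap)
```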
  • Snakes Alive
    743
    I've read the OP a bunch and I still don't get it.

    What is the disaster?

    Given the existence of irrationals, isn't the point made here already accepted? The existence of irrationals has been known since ancient times, as you say.

    How does the Pythagorean doctrine of commensurability lead to Zeno's paradoxes?
  • TheMadFool
    13.8k
    The conclusion to be drawn is not that mathematics is useless or somesuch; only that one should engage in mathematics without the (unscientific) assumption of commensurability. Or put otherwise: don't assume the commensurability of everything in the world (this is theology); instead, run tests, inject a good dose of empiricism, and pay close attention to whatever phenomenon you aim to examine.StreetlightX

    What is this 'commensurability'? The way you explained it, it means finding a common factor that divides evenly into two measurements.

    If my understanding is correct, then the problem is with whole numbers and their inability to correctly mirror reality, which has a bunch of irrationals in it, e.g. pi and e, and some physical constants like Planck's, etc.

    Science, if I'm correct, is more about accepting truths that may be incommensurable than about coercing mathematical models onto reality.

    Perhaps there's something deeper I don't understand.

    AFAIK the number line is complete (wholes, integers, rationals, and irrationals). We even have the real-imaginary number space.

    I'm not saying that the above number space is complete. Perhaps we need a new set of numbers which are neither real nor imaginary. I don't know.
  • Metaphysician Undercover
    13.2k
    Non-linear maths demonstrated that measurement error is not necessarily linear.apokrisis

    That's because, as I explained to you in Streetlight's other thread, the incommensurability lies in the relation of one spatial dimension to another. The modeling of space as dimensions, though very pragmatic, is fundamentally wrong. This incorrectness is demonstrated by that incommensurability.
  • Streetlight
    9.1k
    AFAIK the number line is complete (wholes, integers, rationals, and irrationals). We even have the real-imaginary number space.TheMadFool

    The whole problem is precisely over the question of 'completion'. The assumption of commensurability turns on the idea that, once we 'complete math', we could then use mathematical tools to create a one-to-one model of all reality (Rosen: "The idea behind seeking such a formalized universe is that, if it is big enough, then everything originally outside will, in some sense, have an exact image inside"). But if commensurability does not hold - if not everything in the universe is in principle able to be subject to a single measure - then no such 'largest model' can exist.

    Importantly this does not mean that modelling is a lost cause; instead, it means that modelling must be specific to the phenomenon so modelled: beyond certain bounds and threshold values, modelling simply ends up producing artifacts (at the limit, you get Godel's paradoxes!). You can have models of this and models of that but never THE model.
  • TheMadFool
    13.8k
    The whole problem is precisely over the question of 'completion'. The assumption of commensurability turns on the idea that, once we 'complete math', we could then use mathematical tools to create a one-to-one model of all reality (Rosen: "The idea behind seeking such a formalized universe is that, if it is big enough, then everything originally outside will, in some sense, have an exact image inside"). But if commensurability does not hold - if not everything in the universe is in principle able to be subject to a single measure - then no such 'largest model' can exist.StreetlightX

    I think Rosen is oversimplifying mathematics and science. Commensurability isn't a mathematical or scientific principle, unless the scientific quest for The Theory of Everything could be called that.

    Does current science have commensurability as a principle? My guess is it does not.

    Importantly this does not mean that modelling is a lost cause; instead, it means that modelling must be specific to the phenomenon so modelled: beyond certain bounds and threshold values, modelling simply ends up producing artifacts (at the limit, you get Godel's paradoxes!). You can have models of this and models of that but never THE model.StreetlightX

    Theory of Everything?
  • Streetlight
    9.1k
    Does current science have commensurability as a principle?TheMadFool

    I don't know that one can speak of 'current science' as a reified whole. Just (individual) scientists and their views, organizations and their views, institutions and their views, and so on. In any case, it seems obvious that the quest for a TOE - on the premise of commensurability - still strikes many, if not most, as a legitimate and desirable endeavour.
  • Galuchat
    809
    Does mathematical incommensurability entail epistemological (as opposed to ontological) emergence per Michel Bitbol?
  • gurugeorge
    514
    Doesn't the principle just fall out as a corollary of the infinite divisibility of length (or any other measurable relation)?
  • TheMadFool
    13.8k
    I think all science, in fact everyone in the knowledge business, looks for the theory that'll explain everything. The way knowledge has been synthesized is a bottom-up process. We start off with basic stuff, such as counting with whole numbers in math. Then, as new experiences warrant, we build on the previous, e.g. inventing zero, including irrational & imaginary numbers. This hierarchy is seen in science, e.g. biology is built on physics and chemistry.

    The interesting thing to note is that, as we climb up the knowledge hierarchy, there's always something that can't be explained by the underlying assumptions. For instance, the physics and chemistry in biology can't explain consciousness.

    I see a pattern here like a series of buckets (branches of knowledge) each bigger than the previous. When water (knowledge) from the first smallest bucket is poured into the succeeding bucket, the receiving bucket has some space left over (unexplained stuff).

    The above problem, if we could call it that, is an extension of the incommensurability problem, I think.
  • Streetlight
    9.1k
    Given the existence of irrationals, isn't the point made here already accepted? The existence of irrationals has been known since ancient times, as you say.

    How does the Pythagorean doctrine of commensurability lead to Zeno's paradoxes?
    Snakes Alive

    Doesn't the principle just fall out as a corollary of the infinite divisibility of length (or any other measurable relation)?gurugeorge

    AFAIK the number line is complete (wholes, integers, rationals, and irrationals). We even have the real-imaginary number space.TheMadFool

    Just wanna come back and address these together as they all hit on similar points that I think deserve to be expanded upon. The idea as I understand it is this - there is in fact one way to 'save' the assumption of commensurability after the introduction of the irrationals, and it is this: to treat irrationals as the limit of a convergent series of rational numbers. In this way, we don't actually have to deal with incommensurate values per se, only rationals (Rosen: "At each finite step, only rationalities would be involved, and only at the end, in the limit, would we actually meet our new irrational. This was the method of exhaustion...")
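    (A concrete rendering of this 'only rationals at each finite step' move - the Babylonian iteration for √2 is my choice of example, not Rosen's: every iterate is an exact rational, and the irrational appears only in the limit.)

```python
from fractions import Fraction

# Babylonian (Heron's) iteration for sqrt(2): each iterate is an
# exact rational number; the irrational is met only 'in the limit'.
x = Fraction(1)
for _ in range(5):
    x = (x + 2 / x) / 2
    print(x, float(x))
```

    Five steps give 665857/470832 and beyond - rational through and through, yet already indistinguishable from √2 at float precision.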

    In the nineteenth century, this was formalized with the procedure of 'Dedekind cuts', which enables the construction of the real numbers (the irrationals together with the rationals) from the rational numbers alone. One 'side-effect' of constructing the irrationals in this way was to definitively construe the number line as continuous (a 'gapless' number line). The idea is basically that, simply by initiating a procedure of step-by-step counting, one can eventually arrive at an irrational at the limit of that process.
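    (A toy rendering of the cut idea - my sketch, not Dedekind's formalism in full: the cut for √2 is specified using rational arithmetic alone.)

```python
from fractions import Fraction

# The lower class of the Dedekind cut for sqrt(2): membership is
# decided by rational arithmetic alone (q < 0 or q*q < 2), yet the
# cut as a whole pins down an irrational number.
def below_sqrt2(q: Fraction) -> bool:
    return q < 0 or q * q < 2

print(below_sqrt2(Fraction(7, 5)))   # True:  (7/5)^2 = 49/25 < 2
print(below_sqrt2(Fraction(3, 2)))   # False: (3/2)^2 = 9/4  > 2
```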

    However - and here we get to Zeno - the attempt to 'save' commensurability in this way simply pushes the problem back a step, rather than properly solving it. For what Zeno points out is that even if you add up the points on a number line to arrive at an irrational, no single point itself has any length, and adding a bunch of lengthless points cannot itself yield any length (which in turn allows one to make wild (paradoxical) conclusions like √2 = 0).
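    (The dichotomy version of this can be made exact - my example: at every finite stage of halving, a nonzero remainder survives; only the limit closes the gap.)

```python
from fractions import Fraction

# Zeno's dichotomy in exact arithmetic: cover half of the remaining
# distance at each step. A nonzero remainder survives every finite
# step; the full distance 1 is reached only in the limit.
covered = Fraction(0)
for n in range(1, 11):
    covered += Fraction(1, 2**n)
    print(n, covered, "remaining:", 1 - covered)  # remaining == 1/2**n
```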

    So what the Zeno paradoxes essentially mark is the irreducibility of incommensurability. Making the irrationals the limit of a converging series of rationals in order to save commensurability is a bit like trying to suppress a half-inflated balloon: short of breaking the balloon, all one can ever do is shift the air around. One of the take-aways from this is that the very idea of the (continuous) number-line is a kind of fiction, an attempt to glue together geometry and arithmetic in a way that isn't actually possible (every attempt to 'glue' them together produces artifacts or problems, either in the form of irrationals, or later, in the form of Zeno's paradoxes - and, even further down the line, Godel's paradox).

    [Incidentally this is something that Wittgenstein was well aware of: "The misleading thing about Dedekind’s conception is the idea that the real numbers are there spread out in the number line. They may be known or not; that does not matter. And in this way all that one has to do is to cut or divide into classes, and one has dealt with them all. ... [But] the idea of a ‘cut’ is one such dangerous illustration. ... The geometrical illustration of Analysis is indeed inessential; not, however, the geometrical application. Originally the geometrical illustrations were applications of Analysis. Where they cease to be this they can be wholly misleading." (Wittgenstein, Lectures on the Foundations of Mathematics)

    Compare Rosen: "The entire Pythagorean program to maintain the primacy of arithmetic over geometry (i.e., the identification of effectiveness with computation) and the identification of measurement with construction is inherently flawed and must be abandoned. That is, there are procedures that are perfectly effective but that cannot be assigned a computational counterpart. In effect, [Zeno] argued that what we today call Church’s Thesis must be abandoned, and, accordingly, that the concepts of measurement and construction with which we began were inherently far too narrow and must be extended beyond any form of arithmetic or counting.]

    @fdrake I wonder if this story sounds right to you. I've struggled somewhat to put it together, from a few different sources.
  • Streetlight
    9.1k
    Just to round off my references, compare also Deleuze:

    "In the history of number, we see that every systematic type is constructed on the basis of an essential inequality [read: incommensurability - SX], and retains that inequality in relation to the next-lowest type: thus, fractions involve the impossibility of reducing the relation between two quantities to a whole number; irrational numbers in turn express the impossibility of determining a common aliquot part for two quantities, and thus the impossibility of reducing their relation to even a fractional number, and so on. It is true that a given type of number does not retain an inequality in its essence without banishing or cancelling it within the new order that it installs. Thus, fractional numbers compensate for their characteristic inequality by the equality of an aliquot part; irrational numbers subordinate their inequality to an equality of purely geometric relations - or, better still, arithmetically speaking, to a limit-equality indicated by a convergent series of rational numbers". (Difference and Repetition).
  • Snakes Alive
    743
    Which one of Zeno's paradoxes are you talking about?
  • Streetlight
    9.1k
    They're all variations on the same theme of constructing a continuum out of the discontinuous.
  • fdrake
    6.6k
    So what the Zeno paradoxes essentially mark is the irreducibility of incommensurability. Making the irrationals the limit of a converging series of rationals in order to save commensurability is a bit like trying to suppress a half-inflated balloon: short of breaking the balloon, all one can ever do is shift the air around. One of the take-aways from this is that the very idea of the (continuous) number-line is a kind of fiction, an attempt to glue together geometry and arithmetic in a way that isn't actually possible (every attempt to 'glue' them together produces artifacts or problems, either in the form of irrationals, or later, in the form of Zeno's paradoxes - and, even further down the line, Godel's paradox).StreetlightX

    I'd draw a distinction between paradoxes of intuition and paradoxes of formalism, not that they're mutually exclusive. Paradoxes of intuition are what occur when a mathematical idea is a very strange one to imagine, paradoxes of formalism are what occur when a mathematical construct has strange provable properties. Paradoxes of intuition can be posited as resolved for the further development of mathematics related to the paradox, paradoxes of formalism act as a roadblock to further development.

    Zeno's paradoxes are paradoxes of intuition. This is because it's quite easy to circumvent Zeno's paradoxes with sufficiently precise definitions of what limits and continuity are: the celebrated epsilon-delta and epsilon-N constructions of Weierstrass. You can go on as if the paradoxes are resolved, because pure mathematical inquiry is largely a conditional enterprise: given these assumptions (which characterise a structure), what can be shown and what does it do? You can posit yourself past the paradoxes if you so wish, as is usually done.
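    (The epsilon-N idea in miniature - the sequence 1/n is my choice of example: for any epsilon > 0 one can exhibit an N beyond which every term is within epsilon of the limit.)

```python
import math

# Epsilon-N definition of a limit for a_n = 1/n -> 0: given any
# epsilon > 0, exhibit an N with |a_n - 0| < epsilon for all n >= N.
def witness_N(epsilon: float) -> int:
    return math.floor(1 / epsilon) + 1

eps = 1e-3
N = witness_N(eps)
print(N, all(abs(1 / n) < eps for n in range(N, N + 10000)))
```

    The definition asks for a witness N, not for the infinite tail to be traversed - which is exactly how it sidesteps Zeno.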

    Real numbers in either case aren't paradoxes of intuition any more - they are widely accepted. The historical/cumulative nature of mathematical development brushes aside issues of intuition and imagination, of insight about what mathematics should do. So if you ask someone with mathematical training to picture a number system, they'll give something like the integers and something like the reals as their paradigm cases. The reals are treated differently than the whole numbers because they're a natural starting point for the field of mathematical analysis, which, with a coarse brush, is the study of continuity and limits. The historicality of mathematics gives playing about with axiomatic systems a retroactive effect on the very intuitions that mathematicians have, and thus on what seems like a fruitful avenue for further study.

    The 'cut' issue Wittgenstein is highlighting is a feature of the Dedekind cut construction: with that formalism it isn't immediately clear that the irrational numbers are a dense set in the number line (meaning there's an irrational arbitrarily close to every number), whereas the sequential construction presents the density of the irrationals in the reals in a natural way: it piggybacks on the density of the rationals in the reals, which is clear from how the decimal representation of numbers works.

    I also wouldn't emphasise a contemporary disjunct between arithmetic and geometry - when you teach them to kids they're presented in an essentially unified context of Cartesian algebra (the stuff everyone is exposed to with functions and quadratic equations etc.), where arithmetical operations have geometric analogues and vice versa. This is after giving them the prerequisite basic arithmetic and basic geometry.

    Cartesian algebra is posited as 'after' the resolution of Zeno's paradoxes, as while it works it sweeps those issues under the rug. The same goes for calculus; which, again with a coarse brush, can be taken as the relationship between the arithmetic of arbitrarily small and arbitrarily large quantities through limiting processes.

    In contrast, Godel's incompleteness theorems, treated as a paradox, are a paradox of formalism. They mathematically show that there are limits to using formal systems to establish properties of formal systems. You're either inconsistent or incomplete, and if a system can prove its own consistency it is in fact inconsistent. Any formal system that would circumvent Godel's theorems has to do so by being outwith the class of systems to which Godel's theorems apply. Contrast Zeno's paradoxes, which stymie some intuitions about any treatment of infinitesimals, but formally prohibit no current formalisations of them.

    Just wanna come back and address these together as they all hit on similar points that I think deserve to be expanded upon. The idea as I understand it is this - there is in fact one way to 'save' the assumption of commensurability after the introduction of the irrationals, and it is this: to treat irrationals as the limit of a convergent series of rational numbers. In this way, we don't actually have to deal with incommensurate values per se, only rationals (Rosen: "At each finite step, only rationalities would be involved, and only at the end, in the limit, would we actually meet our new irrational. This was the method of exhaustion...")StreetlightX

    Something under-appreciated about the mathematics of limits, which shows itself in the enduring confusion that 0.99... isn't equal to 1, is that when you're evaluating the limit of something, all the steps to reach that limit have already happened. So 'every finite step' misses the mark, as the limit is characterised as coming after all steps have been done. This means that when you characterise irrationals as convergent sequences of rationals, the infinity of sequence iterates has been 'done in advance'. If you truncate the sequence at some point you obtain an approximation to the limit, but really the limit has 'already been evaluated' as soon as you write it down. Similarly, you can conjure up the reals by augmenting the rationals with all such sequences, since the sequences have already been evaluated in advance.
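    (The 0.99... point in exact arithmetic - my sketch: every truncation falls short of 1 by exactly 1/10^k, but the series itself, the limit, is exactly 1.)

```python
from fractions import Fraction

# Truncations of 0.999... fall short of 1 by exactly 1/10**k, but the
# geometric series sum(9/10**k) has the closed form
# (9/10) / (1 - 1/10) = 1 exactly.
partial = Fraction(0)
for k in range(1, 8):
    partial += Fraction(9, 10**k)
    print(partial, "short by", 1 - partial)

limit = Fraction(9, 10) / (1 - Fraction(1, 10))
print(limit)  # 1
```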
  • Streetlight
    9.1k
    Just briefly - about to sleep - one of the things Rosen develops is that - to continue the balloon analogy - while it's true that Weierstrass developed a method to, as it were, contain the Zeno paradoxes, mathematical artefacts end up showing up 'one level up' again ('the air gets displaced in the balloon'). It's at this point that I stop being able to follow the nitty-gritty of the math, but here's where he ends up going:

    "In the context I have developed here, the transition from rationals to reals gets around the original Zeno paradoxes, mainly because the reals are uncountable, shocking as this word might be to Pythagoras. In particular, a line segment is no longer a countable union of its constituent points, and hence, as long as we demand only countable additivity of our measure [lambda], we seem to be safe. We are certainly so as long as we do not stray too far from line segments (or intervals); i.e., we limit ourselves to a universe populated only by things built from intervals by at most countable applications of set-theoretic operations (unions, intersections, complementations). This is the universe of the Borel sets.

    But if R is now the real numbers ... we are now playing with 2^R. Thus we would recapture the Pythagorean ideals, and fully embody the primacy of arithmetic... if it were the case that everything in 2^R were a Borel set. We could then say that we had enlarged arithmetic “enough” to be able to stop. Unfortunately, that is not the way it is. Not only is there a set in 2^R that is not a Borel set, but it turns out that there are many of them; it turns out further that there are so many that a Borel set is of the greatest rarity among them. It is in fact nongeneric for something in 2^R to be Borel". I'll send you the chapter, and, perhaps, if you have the time you might see where this goes (basically you get a progression from irrationals -> Zeno -> Borel sets as nongeneric -> ... -> Godel; Rosen also mentions Banach-Tarski as another instance of an artefact produced by the assumption of commensurability).

    So I guess what I wonder - genuine question - is whether the paradoxes of intuition are only seen to be paradoxes of intuition in the light of now-established mathematical development, so that what once seemed to be a paradox of formalism becomes a paradox of intuition after we come up with a way to develop the math a little (usually by 'enlarging the arithmetic'). I'm wondering because, if it's possible to view the development of math (so far) as the development of ever more ingenious ways of burying incommensurability under higher-order abstractions for the sake of upholding commensurability, then at the limit, would it be possible to say that the distinction between paradoxes of intuition and paradoxes of formalism is just the recto and verso of the same (ultimately) unsuppressible undertow of incommensurability, just seen from different temporal angles? (as in - PoI 'after' mathematical development, PoF 'before' development). Hope this question makes sense.
  • Snakes Alive
    743
    But they don't involve any incommensurability, right? I'm trying to understand the connection.
  • gurugeorge
    514
    Just wanna come back and address these together as they all hit on similar points that I think deserve to be expanded upon.StreetlightX

    Not sure that does answer my point, which is about infinite divisibility. It's the divisibility of the line that creates the numbers - IOW it's the notionally zero-width cut that separates the continuum into parts that creates the numbers, it's not that you're building up a bunch of nothings into a something, as with Zeno.

    And so long as you can do that, you can find a common measure.
Welcome to The Philosophy Forum!