• apokrisis
    7.3k
One of the take-aways from this is that the very idea of the (continuous) number-line is a kind of fiction, an attempt to glue together geometry and arithmetic in a way that isn't actually possible — StreetlightX

Zeno's paradoxes are paradoxes of intuition. This is because it's quite easy to circumvent Zeno's paradoxes with sufficiently precise definitions of what limits and continuity are; the celebrated epsilon-delta and epsilon-N constructions of Weierstrass. You can go on as if the paradoxes are resolved because pure mathematical inquiry is largely a conditional enterprise; given these assumptions (which characterise a structure), what can be shown and what does it do? You can posit yourself past the paradoxes if you so wish, as is usually done. — fdrake

    Nice discussion. The core problem is that this is a tension that always exists because it speaks to an underlying metaphysical-strength dichotomy, and thus it raises the issue of what it would mean to resolve the tension without dissolving also the useful division.

So the mathematical debate seems to hinge on whether "the real" is discrete or continuous. The intuition being applied gets hung up on that. And clearly - Rosen's point - maths depends on the trick of atomistic constructability. Arithmetic or algebra are seen as fundamental, as a continuity of form can be built up step by step from a succession of parts or acts.

    But then continuity - the grounding wholeness that geometry seems to speak just as directly to - also seems to exist just as much, according to intuition. The geometer can see how the operation of division is a cuckoo in arithmetic's nest. Zeno's paradox shows that. There is more to the story than just the algebraic acts of construction - addition, subtraction and multiplication.

Then continuity shows its face in other ways. Non-linear systems contain the possibility of divergences at every point in their space. As Rosen argues, systems that are safely enough linear are in fact rare in nature. Linearity is non-generic. Perfect constructability must fail.

    So the problem is that the tension is real. Construction seems to work. Used with care, maths can formally model the world in ways that are powerfully useful. The world can come to seem exactly like a machine. And yet also, as any biologist or quantum physicist will know, the perfectly mechanistic is so non-generic that ultimately a machine model describes nothing in the real world at all.

    It is the pragmatics of modelling that really bring this tension into the limelight. Maths can simply ignore the issue. It can keep deferring the problems of constructability by pushing them ever further away as the limit, just as @fdrake describes. It is a respectable working practice. Maths has benefited by taking this metaphysical licence. But scientists modelling the world with maths have to deal with the ill-fit of a purely mechanistic description. Continuity always lurks and waits to bite. It needs to be included in the modelling game somehow - even if it is just like Rosen's essays, the planting of a bunch of "here be dragons" signs at the edge of the intellectual map.

    But the way out for me is the usual one of Peircean semiotics. Where you have a dichotomy, you actually have a pair of complementary limits. The discrete and the continuous would both be a matter of "taking the limit". And this is in turn founded on a logic of vagueness. You can have the discrete and the continuous as both merely the emergent limits on the real if they have some common ground of indistinction that they are together - measurably - serving to divide.

    So now you don't have to worry if reality is fundamentally discrete or fundamentally continuous. It is neither - always being vaguer - but also it is forever moving towards those crisp limits in terms of its actions. If it is developing, it is the discrete vs the continuous dichotomy that is becoming ever more strongly manifest. It is approaching both limits at once.

At this point, we might need to move on from the overly spatial dichotomy of the discrete~continuous - the idea of a 0D location and its 1D line, the simplest possible space that would be formed via a translational symmetry and the definite possibility of it being broken. The real world needs to incorporate space, time and energy as its triad of irreducibly fundamental components. A maths suited to actually modelling nature would need to align itself with that somehow.

    Or indeed, being biologists, concerned with the study of organisms, we might leap all the way to a focus on agency and autonomy - the modelling relation, or semiosis pure.

    Now we can reply to the issue of atomistic constructability in terms of the dichotomy it forms with the notion of holistic constraints. The real world - sans modelling - just is a product of constraints on freedoms. But modelling itself has a pragmatic goal regulating it. The goal of being a modeller - the reason organismic agency and autonomy would evolve within an agent-less cosmos - would be to gain machine-like control over nature. A model is a way to construct constraints so as to achieve purposes. And hence mathematics reflects that modelling imperative.

    Maths gets it "wrong" by pushing constructability to an unreasonable seeming metaphysical limit. It makes the "mistake" of treating reality as if it were a pure machine. And yet that is also what is right and correct. Maths is useful to the degree it can construct a world constrained enough by our machinery that it achieves our goals reliably enough.

    Biology itself is already a mechanisation of physics. It is the imposition of a system of molecular motors on nanoscale material chaos. So scientific modelling is simply an intellectual continuation of that organismic trick.

    Rosen is a bit conflicted in that he complains about the flaws in the tools we use, and yet those flaws are only apparent in the grandest totalising metaphysical perspective. The largest model.

    So what he gets right is that the mathematical approach, based on mechanical constructability, can only constrain uncertainty, never arrive at certainty. That is all maths ever does - move towards the limits of being, imagined particularly in the form of the dichotomy of the continuous and the discrete, the geometric and the algebraic, the structures and their morphic acts.

Maths can point to the limits where uncertainty of either kind - either pole of the dichotomised - would finally be eliminated. But the uncertainty must always remain. Which is why maths also keeps advancing, as every step towards those limits must spring a leak that is then worth our while trying to fix, so setting up the need for the further step to repair the still smaller leak that will now be exposed.

    So it is an interesting game. Nature is the product of the symmetry-breaking tango between global constraints and local spontaneity. Uncertainty is basic. Yet also it becomes highly regulated or lawful.

    Then organisms arise by being able to seize control of the regulatory possibilities of a modelling relation. If you can construct constraints - impose mechanistic structure on that natural world - then you can become a world within the world. You can use your ideas to make the world behave locally in ways that suit your interests.

    Eventually humans gained such mathematical mastery over their realities that they could afford to get upset about the way even the best-constructed models were still full of leaks.

But apophatically, flip that around, and you now have your best metaphysical model of reality as a system of constraints. Uncertainty - as vagueness - becomes the new anti-foundationalist foundation. Atomistic construction comes into focus as the emergently individuated - the machinery being imposed on nature so as to limit that vagueness, corral the spontaneity or fluctuation that spoils the way it is "meant to look".
  • andrewk
    2.1k
    To engage in a philosophical discussion of mathematics and measure, it seems a good idea to first engage with the enormous branch of mathematics that deals with exactly that, which is Measure Theory. Without measure theory, there would be no firm foundation for integration or for probability theory.

    One of the great insights of measure theory is that measures are rarely unique. For any given collection of objects there will usually be many different ways to measure them. For instance in integration, which can be thought of as a measure of areas or volumes, we have Riemann integration (which people tend to learn in high school) and Lebesgue integration (which tends to be introduced in university pure maths courses). In many cases they give the same answer but there are important cases where they don't. There are other forms of integration as well that differ from these two.
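A standard example of where the two come apart, for reference (my addition; the classic Dirichlet function on [0,1]):

```latex
% f is 1 on the rationals in [0,1] and 0 elsewhere. Riemann sums can be
% forced to 0 or 1 by the choice of sample points, so the Riemann integral
% does not exist; but the rationals are countable, hence of Lebesgue
% measure zero, so the Lebesgue integral exists and equals 0.
f(x) = \begin{cases} 1 & x \in \mathbb{Q} \cap [0,1] \\ 0 & \text{otherwise} \end{cases},
\qquad \int_{[0,1]} f \, d\lambda = 0 .
```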

    The case in the OP seems to suggest that there is some unique, true, primordial measure of the items A and B, and that we just need to find it. Measure theory reveals that we can define many different measures for the objects. A simple one is that the measure of A is 1 and that of B is 0. In the simple universe that contains only A and B, that measure obeys all the axioms required of a measure, as does the measure that says A's measure is 0 and B's is 1, or that they are both 5.1.

    When we introduce a third object C we might try to define a measure in terms of that, as 'the size of an object is the number of complete copies of C we can line up against it before we reach the end'. Then when Rosen says that A and B are incommensurable in terms of C, the Measure Theoretic way of saying that is that the C-based size function we just defined is not a measure, because it does not satisfy the axioms required for a measure, in a universe that contains A and B as well as C. There's a bit of mathematical work required to show that it fails the axioms, but it's not difficult. It involves taking the left-over bit in A or B after measuring via C and replicating it until it becomes bigger than C.
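Here is a minimal sketch of that replication argument in code (my illustration, not andrewk's; the length √2 and the unit C = 1 are arbitrary choices):

```python
from math import floor, sqrt

def c_size(length, c=1.0):
    """The C-based 'size': how many complete copies of C fit along the object."""
    return floor(length / c)

a = sqrt(2)                    # a length incommensurable with C = 1
left_over = a - c_size(a)      # the bit left after measuring A via C (~0.414)

# Additivity would require size(x + x + x) == 3 * size(x) for disjoint
# pieces laid end to end. Replicating the left-over bit breaks this:
print(c_size(3 * left_over))   # floor(1.242...) = 1
print(3 * c_size(left_over))   # 3 * floor(0.414...) = 0  -> not a measure
```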

    I don't know the detailed history of Measure Theory. It is possible that its invention was inspired by problems like that of Pythagoras. If so then, far from being a disaster, it has led to some of the most useful and productive parts of modern mathematics.
  • Janus
    16.5k
I don't know the detailed history of Measure Theory. It is possible that its invention was inspired by problems like that of Pythagoras. If so then, far from being a disaster, it has led to some of the most useful and productive parts of modern mathematics. — andrewk

    I don't know much at all about mathematics, but I was thinking earlier when reading this thread about the Pythagorean 'incommensurability problem' that arises when the sides of a triangle are, for example, 1 unit and the hypotenuse is thus 2 units.

I thought, doesn't the problem go away if we call the sides 2 units instead, which it would seem we could consistently do, since units of measurement are arbitrary? This would make the problem seem to be more to do with the number system itself rather than the way mathematics maps onto 'reality'. But then, as I said, I don't know much about math...
  • Snakes Alive
    743
If the sides are 1, the hypotenuse is √2, which is irrational.

If the sides are 2, the hypotenuse is 2√2, and so still irrational.

What you're asking for is some measure that can be applied to the sides which is itself rational, and such that the square root of twice its square is also rational. It seems like this shouldn't be possible, but I'm not sure.
  • Janus
    16.5k


    Of course, you're right; my ability to do simple addition failed me there!

So, yes, it does seem that the square root of the sum of the squares of two equal (non-zero, rational) numbers will always be irrational.
  • Streetlight
    9.1k
    Yeah, it's a question of ratios: the ratio of the sides to the hypotenuse can never be made commensurate (which means that the 'size' of the unit isn't actually relevant: only the proportion between them - and that stays invariant).
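A quick sketch of why the unit drops out (u below is an arbitrary unit length, h the hypotenuse):

```latex
% For equal legs of length u, Pythagoras gives h = \sqrt{u^2 + u^2} = u\sqrt{2},
% so the ratio h/u = \sqrt{2} whatever the unit u is. And \sqrt{2} is
% irrational: if \sqrt{2} = p/q in lowest terms, then p^2 = 2q^2 forces p
% to be even, and then q to be even too, contradicting lowest terms.
\frac{h}{u} = \frac{u\sqrt{2}}{u} = \sqrt{2} \notin \mathbb{Q}
```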
  • Janus
    16.5k


    Yes I see that now, and would have before if I didn't suffer a momentary lapse of reason. :yikes:
  • Streetlight
    9.1k
The 'cut' issue Wittgenstein is highlighting is a feature of the Dedekind cut construction: with that formalism it isn't immediately clear that the irrational numbers are a dense set in the number line (which means there's an irrational arbitrarily close to every other number), whereas the sequential construction presents the density of the irrationals in the reals in a natural way; it piggybacks on the density of the rationals in the real numbers, which is clear from how the decimal representation of numbers works.

...Something under-appreciated about the mathematics of limits, which shows itself in the enduring confusion that 0.99... isn't equal to 1, is that when you're evaluating the limit of something, all the steps to reach that limit have already happened. So 'every finite step' misses the mark, as the limit is characterised as after all steps have been done. This means that when you characterise irrationals as convergent sequences of rationals, the infinity of sequence iterates has been 'done in advance'. If you truncate the sequence at some point you obtain an approximation to the limit, but really the limit has 'already been evaluated' as soon as you write it down. Similarly, you can conjure up the reals by augmenting the rationals with all such sequences, since the sequences have already 'terminated' in advance.
— fdrake

    One of the things I like about Wittgenstein's approach is that he 'accepts' the number line on the condition that it isn't leveraged to ground a 'general theory of real numbers' (W: "The picture of the number line is an absolutely natural one up to a certain point; that is to say so long as it is not used for a general theory of real numbers"). I think one of the things he's getting at is the conditional nature of the (dense) number line: if you want to construct irrationals from the rationals, then and only then will your number line be dense. What he worries over is the 'transcendental illusion' of confusing effect for cause: of saying that it is because the number line is dense that one can make cuts and thus construct the Reals (and with them, the irrationals).

This is why he makes a distinction between 'geometric illustration' and 'geometric application': the number line is a geometric illustration that 'falls out' of an arithmetic procedure (the Dedekind cut), but it is not the case that the number line 'really is there' in any Platonic-realist sense which is then 'discovered' in the course of employing the cuts (Witty again for reference: "The geometrical illustration of Analysis is indeed inessential; not, however, the geometrical application. Originally the geometrical illustrations were applications of Analysis. Where they cease to be this they can be wholly misleading"). Instead, the cuts retroactively 'make' the Real number line the dense continuum that it seems to be.

    (I'm again at my limit of mathematical understanding here, but I wonder if there is a similar issue when one moves 'one level up' from the Reals to the Complex numbers: is there an arithmetic procedure analogous to Dedekind cuts that can generate Complex numbers that, as it were, retroactively makes the Complex plane (?) ... dense(?) (again, I think my vocabulary is probably wrong and that we don't speak of the plane being 'dense', but y'know... filled with 'arbitrarily close' numbers... whatever it's called on the Complex plane). And then maybe reiterated for the quaternions and octonions and so on and so on? - idk if it works like that?).

Anyway, the point is that yeah, I totally get that it's not quite fair to speak of 'every finite step' because the evaluation of the limit posits the steps as 'always-already' completed, but I guess what I'm trying to say is that this is precisely the (possible) problem: it's insofar as limit procedures already 'presuppose', in advance, the number line, that one can misleadingly take it that all the numbers are just 'there' in some Platonic fashion, just waiting to be discovered (whereas Witty emphasises that it is only the application of the math which determines, retroactively, as it were, what the math itself looks like). Again, sorry if I'm getting my terms mixed up, but I hope the sense of the comments comes through.

Not sure that does answer my point, which is about infinite divisibility. It's the divisibility of the line that creates the numbers - IOW it's the notionally zero-width cut that separates the continuum into parts that creates the numbers; it's not that you're building up a bunch of nothings into a something, as with Zeno.

    And so long as you can do that, you can find a common measure.
— gurugeorge

    The above can serve as something of a reply to this as well, I hope. Again, it's not that there is a continuum 'already there', waiting to be cut: rather, the cut and the continuum are co-extensive insofar as the one presupposes the other; the cut extends math itself and does not just apply to 'existing' math. This is what I was trying to get at in my initial post to you when I said that the number line is something of a fiction.
  • Streetlight
    9.1k
But they don't involve any incommensurability, right? I'm trying to understand the connection. — Snakes Alive

I'm not entirely sure what you mean by 'involve any incommensurability'. The idea is that if one assumes commensurability, then the Zeno paradoxes are one of the results of that assumption when one tries to construct a continuum from discontinuities.
  • Streetlight
    9.1k
Interesting stuff. One thing it has me thinking is this: in the OP, I had to try and condense Rosen's presentation by focusing heavily on the idea of a common measure as a way to pedagogically make clear what I/Rosen meant by commensurability. However, one thing Rosen emphasizes - and that I did not mention for economy's sake - is that the idea of commensurability requires no reference to a common measure at all. That is, insofar as two values can be defined by reference to a common third value, it's easy enough to actually get rid of the common value and define the two values in reference to nothing but each other. If A and B are our lengths, C is our common measure, and n and m are integers:

    (1) A = mC, and B = nC, then we can cancel out C such that:
    (2) A/B = m/n
(3) B = A(n/m)
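(For concreteness, with made-up numbers: if A = 3C and B = 5C, then A/B = 3/5 and B = (5/3)A; the common measure C has dropped out entirely.)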

So that now, length B is measured in terms of (rational) units of A. This is where the assumption of commensurability becomes properly insidious because now there is no 'external referent' which acts as a 'real world' mediator between values. Once you define a length in terms of units of another length, what you end up with is a 'closed system' which becomes impossible to get out of (Rosen: "Once inside such a universe, however, we cannot get out again, because all the original external referents have presumably been pulled inside with us. The thesis [of commensurability] in effect assures us that we never need to get outside again, that all referents have indeed been internalized in a purely syntactic form").

    And in fact, this is the real upshot of commensurability: not simply the idea that there is a common measure between things, but that everything inside a formal system can be measured in terms of other things within that system without remainder. And this is what the 'disaster' is: the expulsion, if you will, of any external referent which would 'ground' the seeming self-sufficiency of such a self-enclosed system. On the flip side then, what incommensurability is, ultimately, is the non-identity (or the non-coincidence) of what can be measured with what can be constructed (the irrationals again are exemplary: the discovery of the irrational forced us to expand our universe of number so as to make measurement and construction commensurate again, which, as I've tried to point out, simply caused problems further down the line).

So this is all not strictly about measurement per se, and I have no beef whatsoever with the awesome innovations of measure theory. Instead, it's about the relation between measurement and the things so measured, and an attempt to delineate the proper bounds of that relation; a spin, if you will, on the idea of a Kantian-inspired 'Critique of Pure Math', in which, if measurement is left to its own devices to see the world only in its image, you end up with all sorts of transcendental illusions like Zeno's paradox and so on.
  • fdrake
    6.7k
    This was a reply to a PM by Street, but I'm going to post it here because the discussion here is going in the same direction.

I think there's a way of believing in such a regulative ideal (a Pythagorean-ish one) without hypostatising it into a nascent theory of everything. I don't know exactly what it would entail, but I'd like to have the ability to distinguish good from bad models based on general but problem-specific criteria. The faith in the regulative ideal wouldn't then become a nascent global system, but it would let you espouse sensible regulative ideas - like double blinding and experimental design that accounts for confounding - for subjects of study. Aiming your inquiry towards the phenomena (or the imaginative background of a formalism) is a kind of empty regulative idea, in that when it is generalised over topics of inquiry it espouses nothing, but when you particularise it there are things which block or impede inquiry (like measurement error) and things which make it grow (like new types of model that work). I think Heidegger's idea of truth is pretty prescient here.

    And I would resist having a global system of systems on empirical grounds too; when you look at different topics of inquiry and their methodologies, it's pretty clear that there's as much mismatch as there is commensurability between them. So if you're an anthropologist doing participant observation, that generally won't have much relevance to experimental design or modelling fluxes with differential equations. They speak differently because the phenomena demand it.

    A good example of the transcendental illusion brought about by scientific posits is homo economicus; the rationally self interested utility maximiser in a condition of good scarcity falls out of the equations because that's precisely what's assumed to get all this utility maximisation formalism going. Whenever the models are seen to work, it projects truth down to their assumptions (per Lakatos).

Pure mathematics also has a good example of why a complete system is impossible: the very idea of a final theorem which allows everything to be derived from it is silly. Such final theorems would resemble axiom systems in terms of their function rather than results derived from them. That inquiry into mathematical structures could ever terminate after proving all that is provable is not just curtailed by Gödel's theorem and the halting problem - which give a formalistic picture of the bottomless depth of mathematical structures - but also by the fact that different applications of mathematics greatly differ in conceptual structure, even when formally equivalent (like cuts and Cauchy sequences in characterising the reals).

I don't think pure mathematicians in general think that they're aiming for an architectonic system of all mathematics; rather, they're working in specific domains of objects which are interesting for some reason - which includes relations between structures. But I do think that the structures in pure mathematics behave quite similarly to natural phenomena for the purposes of research; you can be guided by mathematical phenomena in much the same way as you'd be guided by nature or the real. I don't mean this in the Platonic sense that there's an abstract population of regulative mathematical ideals with an illusory substance, but it's certainly true that mathematical objects are suggestive in much the same way that real processes are. The example I used above with cuts and sequences is one I've used a lot to make this kind of point - the cuts emphasise the irrationals as holes in the rational number line, the sequences emphasise the irrationals as a dense set in the number line. So with the cuts formalism it's easier to think about the rationals as an incomplete space (you formalise the holes), with the sequences formalism it's easier to think of the reals as a complete one (you formalise the filling). That holes and fillings are equivalent from the perspective of the real number line is something which can be proved, and it's pretty nice, regardless of the different imaginative backgrounds for each formalism.
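A minimal sketch of the cuts picture (the predicate and the sample rationals are my own choices): the lower cut for √2 is just a condition on rationals, and no rational sits on its boundary - that boundary is the 'hole'.

```python
from fractions import Fraction

def in_lower_cut(q: Fraction) -> bool:
    """Lower Dedekind cut for sqrt(2): {q in Q : q <= 0 or q^2 < 2}."""
    return q <= 0 or q * q < 2

print(in_lower_cut(Fraction(7, 5)))   # True:  (7/5)^2 = 49/25 < 2
print(in_lower_cut(Fraction(3, 2)))   # False: (3/2)^2 = 9/4  > 2
# No rational q satisfies q^2 == 2, so the cut's boundary is a hole in Q.
```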

The danger is in 'transcendental illusions' fostered by successful modelling, so that one begins to believe that the phenomenon at hand is essentially (in its 'being') modelable, and that the universe as such is calculable qua encompassing cosmos (rather than Deleuze and Guattari's/Joyce's 'Chaosmos' - perched half-way between chaos and cosmos).
    — StreetlightX

Even the idea that mathematical objects are computable in principle is an illusion. Real numbers which can be approximated with arbitrary precision by an algorithm - computable numbers - take up zero volume in the real number line: there are only countably many algorithms, so the computable numbers have measure zero.

It is true, however, as with Rosen's observation that the Borel sets are nongeneric in the power set of the reals, that the objects which mathematicians study are rarely generic mathematical objects. It's just true that the interesting ones are sufficiently well behaved; that's what makes them interesting. Of course, some people like to catalogue counterexamples of structures - there's a book called Counterexamples in Topology which is a catalogue of thwarted intuitions between mathematical formalisms of 'nearness' of point-like objects in a space. The ones that are studied are non-generic precisely because the generic ones thwart useful analogies and relations of properties.
  • JimRoo
    12
If people are interested, Rosen's essay is available online here - Google Books.
  • Snakes Alive
    743
I don't see what Zeno's paradoxes have to do with commensurability. They have to do with infinite divisibility, which is not the same. The traditional statement of Achilles and the tortoise, for example, is stated only using divisions by two.

Though I must admit that I don't see Zeno's paradoxes as problems, or a disaster, either – I always took them as didactic exercises. They're so obviously fallacious that it has always struck me as odd that anyone could be bothered by them: you give them to a student and they tell you why they're confused. I wouldn't try to redesign mathematics to 'avoid' them, because this falsely presupposes that they are problems.
  • andrewk
    2.1k
Something under-appreciated about the mathematics of limits, which shows itself in the enduring confusion that 0.99... isn't equal to 1, is that when you're evaluating the limit of something, all the steps to reach that limit have already happened. So 'every finite step' misses the mark, as the limit is characterised as after all steps have been done. — fdrake
    I'd rather not put it like that, as it seems to imply that we need to 'go to infinity' in order to make sense of the limit. Then before we know it, people like the apologist William Craig are butting in making ignorant statements about the possibility of 'going to infinity', as if that actually meant something.

    Yet limits can be, and usually are in formal maths texts, defined using purely finite concepts.

Consider the function f(x) = L + (x-a) sin(1/(x-a)) for x ≠ a, with f(a) = L. Unlike the function g(x) = (x-a) + L, we can't look at an intuitive straight line that heads straight for the point (a,L), because the curve of f keeps wiggling, and the speed of wiggling increases without limit as x approaches a. But when we say that the limit of f(x) as x -> a is L, all we are saying is that, for any small number e > 0, we can find another number d > 0 such that whenever x differs from a by less than d, f(x) will differ from L by less than e.

    That definition uses only finite concepts, and proving that it holds in this case also requires only finite methods.
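A quick numerical check of that claim (a sketch of mine; the choices a = 0 and L = 1 are arbitrary):

```python
import math

a, L = 0.0, 1.0  # arbitrary centre and limit for the sketch

def f(x):
    # f(x) = L + (x - a) * sin(1/(x - a)) away from a; f(a) = L
    return L if x == a else L + (x - a) * math.sin(1.0 / (x - a))

# Since |f(x) - L| = |x - a| * |sin(1/(x - a))| <= |x - a|, the choice
# d = e always works in the definition above.
for e in (0.1, 0.01, 0.001):
    d = e
    samples = [a + d * t for t in (0.9, 0.5, 0.1, -0.3, -0.99)]
    assert all(abs(f(x) - L) < e for x in samples)
    print(f"e = {e}: |f(x) - L| < e for all sampled x within d of a")
```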

This is a demonstration of the little-recognised fact that most limits, and indeed most of calculus, can be done without using any notion of infinity. Even notions such as lim(x -> infinity) f(x) or (using an extended definition of limits) lim(x -> a) f(x) = infinity don't actually need a notion of infinity. The infinity symbol in those expressions can be interpreted using only finite methods, as was done above.

    Infinity is great fun to explore, especially in set theory and topology. I love Cantor's work as much as anyone does. But we don't need it to do calculus. We need unconstrained divisibility, but not a notion of infinity.
  • andrewk
    2.1k
If A and B are our lengths, C is our common measure, and n and m are integers:

    (1) A = mC, and B = nC, then we can cancel out C such that:
    (2) A/B = m/n
(3) B = A(n/m)
— StreetlightX

    I've been thinking about this in terms of various universes that are limited as to what's in them. If we have only two objects, A and B, and they don't have regular markings like on a ruler, then I think the only measurement we can do is to say for each of them, how many complete copies of the other can be lined up alongside it without exceeding the length of the first. We do this in the same way we use an unmarked 30cm ruler to measure the length of a wall - by rolling the ruler end over end until we reach the end of the wall, and counting the number of rolls.

    In this way, we'll end up with the measure of the shorter object in terms of the longer being 0, and the measure of the longer object in terms of the shorter being the ratio of the two 'true lengths' rounded down to the nearest integer.

    Both of these measures satisfy the axioms required to be a true measure.

    If we want to allow rational measures but have only two objects, we need at least one of them to have regular markings on it like a ruler. So if one object is a 30cm long straight stick with marks every 1cm, we can measure the length of the other object to a precision of 1cm, rather than 30cm.

    I have a slight doubt about whether an object with 29 regular markings is one object or thirty, but I think I'll dismiss that doubt as too metaphysical for the time being.

    If both objects have cm markings on them, each can be measured as a number of whole cm, so we can say the length of A is m cm and that of B is n cm, but we know the lengths are not exact.

I think I need to stop here and ask for input, because it's starting to look like this avenue of investigation doesn't involve irrational numbers or Pythagoras. Rather it is just saying that any physical object can only have regular markings on it with a finite level of granularity. So the problem here is the limitation of accuracy that this gives us, and it arises for objects whose length ratio is purely rational, just as much as for pairs where the ratio is irrational. If the smallest measure gradation marked on either object is micrometres (10^-6 m) then, even if both objects have lengths that are whole numbers of nanometres (10^-9 m), the measure is in some sense 'inadequate' if one or both of those whole numbers is not a multiple of 1000.
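To make the granularity point concrete (hypothetical numbers on my part):

```python
# Finest marking is 1 micrometre = 1000 nm; both lengths are whole nanometres.
gradation_nm = 1000
length_a_nm = 2_000_000   # a multiple of 1000: measured exactly
length_b_nm = 2_000_417   # not a multiple of 1000: 417 nm fall below the marks

for name, length in (("A", length_a_nm), ("B", length_b_nm)):
    whole, remainder = divmod(length, gradation_nm)
    print(f"{name}: {whole} gradations, {remainder} nm unmeasured")
```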

A right-angled triangle of sticks whose lengths are not those of a Pythagorean Triple guarantees a measurement problem, because one of the sticks will have a length that is an irrational multiple of the lengths of the others; but so does just having two sticks, one of whose lengths is not a whole-number multiple of the marked interval on the other.
  • fdrake
    6.7k
I'd rather not put it like that, as it seems to imply that we need to 'go to infinity' in order to make sense of the limit. Then before we know it, people like the apologist William Craig are butting in making ignorant statements about the possibility of 'going to infinity', as if that actually meant something. — andrewk

    I imagine that threads like this are a safe space where we don't need to worry about giving William Lane Craig more misguided ideas about infinity.

Yet limits can be, and usually are in formal maths texts, defined using purely finite concepts. — andrewk

That's the ingenuity in the definitions, really. You can deal with limiting processes through the neat substitution of arbitrarily small difference and unbounded increase (becoming arbitrarily large). So yes, what you're saying is exactly correct about the formalism of limits. It neatly avoids infinity as a concept by replacing it with functionally equivalent substitutes, so that when people think about limits it doesn't matter if they think of 1/infinity = 0 or something similar, because you can just say 'I really mean this formalism' rather than the imaginative background that surrounds it.

    The formalism also gives you means of defining derivatives and integrals in a manner similar to the original ideas but with the problems regarding infinity and 0 removed. It also, with some approximation theory added, lets you evaluate indeterminate forms through limiting arguments.

We need unconstrained divisibility, but not a notion of infinity. — andrewk

We do need the ability to iterate a process for as long as we need to, though, even when producing the limit requires infinitely many steps. If you were asked to terminate a sequence tending to 0, you wouldn't get 0, you'd get something close to it. Similarly, if you were asked to terminate a divergent sequence, you'd get something arbitrarily far away from infinity rather than something arbitrarily large. That the whole process - the infinitude of steps - has already terminated is dealt with by the 'for all epsilon' quantifier. Just as we shouldn't ask for the 'last term' of a convergent infinite series, we should treat the infinity of steps in the progression of the series as equivalent to its culmination.

I see it this way: the utility of dealing with infinitely small increments in pre-rigorous calculus created the need for a formalism to make sense of them. The formalisms would be wrong if they didn't reproduce the pre-established, sensible ideas of derivatives and integrals.

But it is true that Zeno's paradoxes don't require the real line; infinite divisibility occurs in the rationals first. I was going to bring that up too, but I thought it wasn't very relevant to the theme. The reason the real line is the family home of analysis is that it's a complete space (every non-empty set with an upper bound has a least upper bound). You don't get that with the rationals (think of a sequence of rationals that would converge to pi or e in the reals, then demand that all numbers be rational: boom, no least upper bound).
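A sketch of that in exact rational arithmetic (Newton's iteration for √2; my example, not from the thread): every term is rational and the terms bunch up in Cauchy fashion, yet the would-be limit is missing from the rationals.

```python
from fractions import Fraction

# Newton's iteration x -> (x + 2/x) / 2, kept exactly rational throughout.
# The sequence is Cauchy, but no rational is its limit: the set
# {q in Q : q^2 < 2} has no rational least upper bound.
x = Fraction(2)
for _ in range(5):
    x = (x + 2 / x) / 2
    print(x, "~", float(x))
```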

I think that point just displaces the debate to another level: we need a number line with no suspiciously convergent-looking sequences failing to converge. It's another conceptual repair which seems inevitable or even a priori when considered ex post facto. Which is part of the thrust of the OP - we tinker with mathematical ideas while working within a mathematical canon composed of past ingenious tinkering, which is forgotten as math comes to appear as a giant book of pre-established theorems and rules that God wrote - as a self-sufficient and eternally true realm apart from human activity. Arguably, anyway.
  • fdrake
    6.7k
    I think that post was unnecessarily technical. What I highlighted can be condensed a lot.

While it's true that the formalisms for limits and continuity are stated without reference to the infinite, I think you have to remember the context in which they were made. The epsilon-N criterion for convergence applies to infinite series, and the definitions of continuity can be restated in terms of preserving the convergence of infinite sequences. So it's an excellent bit of tinkering that tames infinity in terms of arbitrary closeness and unboundedness, but the problems it addresses are those which were engendered by an encounter with infinite mathematical objects.
  • apokrisis
    7.3k
but the problems it addresses are those which were engendered by an encounter with infinite mathematical objects. — fdrake

    So, Platonism? There is a realm of commensurate entities to be encountered?
  • fdrake
    6.7k


    Nah. Encounter in the POMO sense you hate.
  • apokrisis
    7.3k
Encounter in the POMO sense — fdrake

    How is that defined then? Genuinely curious.
  • fdrake
    6.7k


Long story short, a thing happens which resists interpretation (or, in your terms, 'habit') so much that it makes a load of difficult problems for everyone involved. Usually these problems are circumvented or ignored and become either an irrelevance after the fact (like canonising) or a suppressed undercurrent (like the long-lasting impacts of slavery and racism). Occasionally they're addressed powerfully, and when this happens it transforms what people do in its wake; a lived 'solution' to the 'problem' of the encounter.

    In academic discourse it works like a paradigm shift brought on by a problem, and the encounter as a stand-alone concept is a lot like a generalisation of a paradigm shift to other strata of phenomena. Like (inter)personal encounters of transformative love, politics or art.
  • apokrisis
    7.3k
    That sounds pretty pragmatic then.

So when it comes to the issue of an incommensurate world - as the thing-in-itself, never fully grasped - we do have to approach it via commensurable acts of measurement. The world might be analog or continuous (as our best characterisation, since the noumenal escapes complete description), but our embodied modelling relation with it demands that we measure it in a manner that is digital or discrete.

    That is, semiotically, we must form a mediating sign that allows us to most effectively relate to the thing - the world - we have every good reason to believe to be out there, existing in recalcitrant fashion as that which waits to be encountered.

    So the world is an unbroken wholeness. And modelling relies on fragmenting that in a useful way so that its wholeness becomes a tractable construction of parts. Paradigms are where we have achieved an acceptable level of correspondence for the purpose in mind. A lot of important problems are being handled adequately. A lot of unimportant ones make no important difference, so can be swept under the carpet in respectable fashion.

    There is no particular "disaster" about this. It is business as usual. That is how a modelling relation has to work for good reason.

    So infinity, as a concept, stands for the principle of the open or the unlimited, the unbounded, the endless. And that contrasts with what the modelling mind seeks - closure. A digitised tale composed of signs, or acts of measurement, which has the formal property of being closed for efficient cause.

    The thing-in-itself is still beyond any such actual disclosure to the modeller. But the modelling itself sets up this closed vs open ontology. It produces the concept of the infinite, the perfectly open, that must logically be a corollary of its own hunt for this perfect closure.

    Thus I would agree the infinite doesn't exist in some Platonia of abstract objects. It very much arises as an opposition within modelling itself. We encounter it as we probe the nature of what we like to call the finite. The usual dialectical deal.

    And here is where I would point out the advantage of going a step further - like Peirce.

    The infinite and the finite, the open and the closed, are very crisp or determinate conceptions. And recognising that characteristic should draw attention to their own corollary - the vagueness that would need to stand opposed to this canonised crispness.

    Vagueness becomes another useful mathematical/logical resource that can be brought into play with our encounters with number lines and other habitual signs of the infinite.
  • Metaphysician Undercover
    13.3k
They're all variations on the same theme of constructing a continuum out of the discontinuous. — StreetlightX

    That's probably the core issue, the relationship between the continuous and discontinuous.

So the mathematical debate seems to hinge on whether "the real" is discrete or continuous. — apokrisis

    But the problem is that they are both real, just like the rational and irrational numbers are both real. Hence the reality of incommensurability. The difficulty is in determining which aspects of reality are continuous and which are discrete, because to treat one as if it were the other is to err.
  • apokrisis
    7.3k
Continuing that line of thought, I forgot to mention the importance of your employment of the notion of convergence on a limit as the way to achieve effective closure - turning infinite openness into something with a now internally closed definition.

The pragmatic modelling relation approach says this is so because we eventually accept that differences cease to make a difference. The way we have everything set up means that we will reach a point where there is just no conceivable reason to care. The differences that speak to an incommensurability will cease to be measurable. They will be infinitesimal - the formal reciprocal of our notion of the infinite. And so they will themselves have become a symmetry, a blur of continuity, and no longer discrete countable entities. Modelling arrives at conceptions that are self-truncating - truncated by a formalised principle of indifference that can achieve epistemic closure for a modeller.
  • apokrisis
    7.3k
The difficulty is in determining which aspects of reality are continuous and which are discrete, because to treat one as if it were the other is to err. — Metaphysician Undercover

    Yeah. But I am arguing that both are practical conceptions. When we speak of them, we are only pointing to the fact that reality must exist between these two reciprocally-defined extremes. Both represent the measurable limits to existence. And so existence itself has to be the bit that stands in-between.

    That is why every actual thing we encounter in the real world is never quite perfect like the model would suggest. The continuous things are still always a little bit discrete. And the discrete things are always a little bit continuous. And indeed most things will be far more obviously a mixture of the two possibilities. They will not be clearly divided in either direction.

    This is easy to see if we look at any actual natural feature - the outcome of a dissipative process - like rivers, mountain ranges, coastlines, clouds. They express a fractal balance that puts them somewhere exactly between the discrete and continuous - in a way we can now also measure in terms of fractal dimension, or the notion of scale symmetry.

    So you are taking the view that the world actually exists as either continuous or discrete in some black and white, LEM-obeying, PNC-supporting, fashion.

    I am saying, sure, that is a useful basic epistemic model to apply when measuring the world. Acts of measurement depend on having that commensurate yardstick. And the way we achieve formal closure to construct a "world of measurement" is by applying that dichotomising logic. We speak of the two extremes which mutually, reciprocally, ground each other as conceptions.

    But then the idea of the discrete~continuous remains just a pragmatic conception - an idea robust enough to launch useful acts of measurement. And as our modelling of reality has progressed, we have arrived at "surprises" like fractal dimensionality and other non-linear maths. The discrete and the continuous can have some exact balance which itself becomes a useful metric. We can apply them to systems that energetically grow in that kind of endless budding fashion of natural dissipative systems.

    Clouds look pretty vague. Where do they really stop or start? But fractal descriptions can employ the discrete and the continuous as themselves a ratio - a log/log formula of endless, but also recursive, growth. The open and the closed in the one trajectory.

    So modelling can play any game it can invent. And some of those games are surprisingly effective - as if we are actually encountering reality in a totalising fashion at last.
  • Streetlight
    9.1k
But I do think that the structures in pure mathematics behave quite similarly to natural phenomena for the purposes of research; you can be guided by mathematical phenomena in much the same way as you'd be guided by nature or the real. — fdrake

This is all great (sorry for the late response - been busy!). Actually a lot of it reminds me - and helps me put into starker relief than I was previously able to - of one of Rosen's other papers (found here [PDF]) on how it's often forgotten that a great deal of math is actually modelling of other bits of math itself ("a great deal of what passes for pure mathematics is really applied mathematics; it is essentially modelling in the above sense, except that the external referents assigned to a particular formalism are themselves mathematical in character").

    And this allows me to maybe start exploring one of the things I've been getting out of my recent (and very preliminary) engagements with math, which is that while there does seem to be something to the idea that the world exhibits a certain 'mathematicality', it seems far more accurate to say instead that mathematics exhibits a certain worldliness. That is, that there is an immanent 'logic' that math exhibits that is exactly parallel with the logic of, well, anything else. So it's not that 'everything is number' - as per the Pythagoreans - but that number 'partakes' (to use an uncomfortable Platonic trope) of the same logic that everything 'non-numerical' does (a 'flat' ontology which does not privilege number but places it on the 'same plane' as everything else).

Deleuze in fact develops something like this in D&R where, after taking the calculus as a model for his understanding of what he calls 'Ideas', he tries to address the seeming privilege he accords to math and insists that it's not that he's 'applying' math to other domains, but rather that each domain (he lists: 'physical, biological, psychical or sociological') has 'a calculus' of its own. Borrowing from the vocabulary and ideas of the mathematician Albert Lautman and referring to 'a dialectic' in place of what I called a 'logic' above, he writes:

    "It must be said that there are mathematical, physical, biological, psychical and sociological problems, even though every problem is dialectical by nature and there are no non-dialectical problems. ... the differential calculus belongs entirely to mathematics, even at the very moment when it finds its sense in the revelation of a dialectic which points beyond mathematics. ... It is not mathematics which is applied to other domains but the dialectic which establishes for its problems, by virtue of their order and their conditions, the direct differential calculus corresponding or appropriate to the domain under consideration. In this sense there is a mathesis universalis corresponding to the universality of the dialectic."

    So I wanna say that there's something right about the Pythagorean intuition that mathematics seems to structure the world, but to reply that it's not that that structure is mathematical, but that mathematics expresses, in its own way, that structure (or 'dialectic' or 'Logos' - 'wild Logos' as Merleau-Ponty once said).
  • Metaphysician Undercover
    13.3k
Yeah. But I am arguing that both are practical conceptions. When we speak of them, we are only pointing to the fact that reality must exist between these two reciprocally-defined extremes. Both represent the measurable limits to existence. And so existence itself has to be the bit that stands in-between. — apokrisis

So you are arguing that neither the continuous nor the discrete is real? They are ideals, and reality stands in between.

But then what is reality, if it is neither discrete nor continuous but something in between? What kind of consistency would be neither continuous nor discrete, but something in between? How would you describe this reality which is neither discrete nor continuous? Doesn't it make more sense to you to assume that reality is a mixture of discrete aspects and continuous aspects, as I suggested, rather than that neither of these is real, but only ideal? Furthermore, how would you account for the existence of these ideals? Are they not in some way real? But you deny that the two can co-exist (which is what they do as ideals, defining each other) by stating that all existence is in between these two.

That is why every actual thing we encounter in the real world is never quite perfect like the model would suggest. The continuous things are still always a little bit discrete. And the discrete things are always a little bit continuous. And indeed most things will be far more obviously a mixture of the two possibilities. They will not be clearly divided in either direction. — apokrisis

    Aren't you just describing a mixture here? The real "things" consist of both elements, continuous and discrete. We model as one or the other, so the model doesn't quite capture the reality of the thing. It's illogical to say that the thing is neither continuous nor discrete, but in between, denying the law of excluded middle, but it does make logical sense to say that the thing consists of a mixture of both elements, and the model hasn't properly isolated the two.

This is easy to see if we look at any actual natural feature - the outcome of a dissipative process - like rivers, mountain ranges, coastlines, clouds. They express a fractal balance that puts them somewhere exactly between the discrete and continuous - in a way we can now also measure in terms of fractal dimension, or the notion of scale symmetry. — apokrisis

    No, I don't see this at all. The natural feature is not somewhere between continuous and discrete, it is a combination of both. I think that your interpretation of non-linear systems is deceiving you. The mathematics unites distinct variables as if they are one thing. But this is just the model which represents the distinct things as one unified thing, it is not the reality of the thing. That's why such models are extremely flexible and highly unreliable, they do not even attempt to separate the distinct elements, treating the combination of elements as one thing. This unity is completely artificial though, created by the model.

So you are taking the view that the world actually exists as either continuous or discrete in some black and white, LEM-obeying, PNC-supporting, fashion. — apokrisis

    No again, you have misinterpreted me. I did not say that the world is either continuous or discrete, I said that it is a combination of the two. And I also said that the difficulty in modeling is to distinguish which elements of reality are of each nature. I think that there is a trend in modern scientific analysis to ignore this differential, but this renders the analysis incomplete.

    Consider the concept of "space-time" for example. The concepts of space and time have been united in synthesis to create one concept. Many human beings will insist that this concept cannot be divided in analysis, that space-time is one indivisible thing. But this is to completely ignore the possibility that in reality, one of these may be discrete while the other is continuous.

So modelling can play any game it can invent. And some of those games are surprisingly effective - as if we are actually encountering reality in a totalising fashion at last. — apokrisis

    Yes, that's exactly the problem, modelling can play whatever game it wants, in ignorance of reality. But it is not the case that some non-linear models are surprisingly effective. Some are effective in particular situations. But all are surprisingly ineffective in some situations, and that betrays the failings of such an approach to modelling.
  • apokrisis
    7.3k
So you are arguing that neither the continuous nor the discrete is real? They are ideals, and reality stands in between. — Metaphysician Undercover

Well, remember that here I'm using the conventional categories of Being rather than Becoming. So the discrete vs the continuous is talk about that which exists in static, eternal fashion. This then creates the tension that bothers you - how can limits be part of what they bound if they are in fact the precise place where the internal bit ends and the external begins?

    In my own preferred process metaphysics, the discrete and the continuous become a dialectics of developmental actions. So now you have the rather Hegelian opposition of differentiation and integration. Or individuation and generalisation.

    And that active view, one that sees reality as fundamentally a flux with emergent regulation, would avoid the kind of hard edge paradox that your own non-process metaphysics tends to encounter at every turn.
  • apokrisis
    7.3k
That is, that there is an immanent 'logic' that math exhibits that is exactly parallel with the logic of, well, anything else. — StreetlightX

    Heh, heh. After the post-structuralist revolt comes ontic structuralism again.
  • Metaphysician Undercover
    13.3k
Well, remember that here I'm using the conventional categories of Being rather than Becoming. So the discrete vs the continuous is talk about that which exists in static, eternal fashion. This then creates the tension that bothers you - how can limits be part of what they bound if they are in fact the precise place where the internal bit ends and the external begins? — apokrisis

    What bothers me, is that through your process philosophy, you have assigned to the limits (discrete and continuous) the status of not real, non-existent. But then you go ahead and talk about these limits as if they are somehow part of reality. You describe reality as being somehow forced to exist within these limits, yet the limits are said to be non-existent, not real.

    If these limits are not real, then there are no such limits to reality, and the entire existing universe is not limited by any such ideal extremes like discrete and continuous, whatsoever. To talk about these limits within reality is just fiction, these ideals are simply within the mind, pure imagination. Though we might use them in application, in modelling, they represent nothing in reality. They cannot if you uphold the status you assign as not real.

    However, you talk about these limits as if they are real, as if they are "part of what they bound". Which do you think is true? Are they "part of what they bound", therefore having real existence, such that we have to allow for the two distinct aspects of reality, the discrete and the continuous, co-existing in a mixture as I described, or are they not real, non-existent fictions of the imagination, leaving the world with no real limits in that way?

And that active view, one that sees reality as fundamentally a flux with emergent regulation, would avoid the kind of hard edge paradox that your own non-process metaphysics tends to encounter at every turn. — apokrisis

    The paradoxes are encountered in the deficiencies of metaphysical principles such as your own. You readily avoid the paradoxes by simply ignoring them.