Comments

  • Continuity and Mathematics
    The answer is in the concept of "occupies a place." If we view the real numbers as specifying locations on a line, and we stop talking about points, perhaps things are less muddled. I agree with you that nobody knows how a line of dimension 1 can be made up of points of dimension 0. But math has formalisms to work around this problem. Would you at least agree that if math hasn't answered this objection, it's been highly successful in devising formalisms that finesse or bypass the problem?fishfry

    Of course I agree that maths is highly successful. But what you call finessing, I am calling being studiedly indifferent. So yes - a thousand times yes - maths has developed spectacular calculational machinery. But then - because it has replaced reality with a mechanical image of reality - it fails equally spectacularly when it tries to "do metaphysics" from within its virtual Platonic world.

    Getting back to the physics of numberlines, I would point out that what has gone missing in the imagining is the idea of action - energy, movement, materiality. So we can mark a location (in the spacetime void) and it just sits there, inert, eternal, unchanging ... fundamentally inactive. That is the mathematical mental picture of the situation in toto.

    However, why couldn't this marked location dance about, appear and vanish, erupt with all sorts of nonsense ... rather like an actual mathematical singularity?

    So what we point at so confidently as a point in a void could be a dancing frenetic blur - a vagueness - on closer inspection. We say it has zero dimensions, and all the properties so entailed, but how do we know that a location exists with such definiteness? And why is modern physics saying that in fact it cannot (following Peirce's logical/metaphysical arguments to the same effect)?
  • Continuity and Mathematics
    I'll forego the opportunity to start a pissing match here and you earned a lot of good will with me for pointing me to the Zalamea paper. But really, was this necessary? Maybe I should just say fuck you or something. Would that serve any purpose?fishfry

    You are very sensitive. I apologise if you have feelings that are easily hurt. But is this my problem or your problem?

    I'm used to a robust level of discussion in academic debate. One hopes that others will try to knock seven shades of shit out of one's arguments. And then afterwards, everyone shakes hands and goes for a drink at the bar.

    So you are welcome to be as rude to me as you like. Water off a duck's back. But what I am looking for from you is a genuine counter-argument, not a solipsistic restatement of your position ... or as I said, a restatement of a particular institutional view that is widely held for the pragmatic reasons I've previously stated.
  • Continuity and Mathematics
    Perhaps there's a sort of Heisenberg uncertainty between truth and precision. What we can say truthfully is imprecise; and what we can say precisely isn't true.fishfry

    This is rather the point of Peircean semiotics. We deal with reality by replacing it with a system of completely definite signs. And mathematics is simply the most powerfully universal method of imposing a system of signs on our perceptions of reality.

    So yes, again the way maths organises itself institutionally is completely pragmatic (under the proper Peircean definition). It is exactly how you go about modelling in as principled a fashion as possible.

    But the philosophical irony is that it is all about replacing reality with a model of reality. We tell reality to lose all its imprecision, vagueness, indeterminacy, etc. We are just going to presume that it might be a bit of a hot mess, yet what reality itself really wants to do is be completely crisp, definite, determinate ... mechanical. So our job is then to see reality in terms of its "own best version of itself".

    We don't feel guilty about treating reality as being Platonically perfect, properly counterfactual, fully realised, because ... hey, that's what reality is striving to be. The fact that it always falls short, never arrives at its limits, is then something to which we studiously avert our eyes. It is a little embarrassing that reality is in fact a little, well, defective. The poor sod doesn't quite live up to its own ambitions. But we generously - in our modelled reality that replaces the real reality - simply ignore its shortcomings and marvel at the perfection of the image of it that lives in our imaginations.

    What I am trying to draw attention to here is how we take reality for more than it actually is, and not only is that socially pragmatic (good for the purposes of building perfect machines) but it feels even psychologically justified, as we spare reality's own blushes. We know what it was trying to achieve.

    However eventually we will have to turn around and deal with reality as it actually is, not our Platonic re-imagination of it. Which is where Peircean semiotics - as the canonical model of a modelling relation - can make a big difference to metaphysics, science and maybe even maths.
  • Continuity and Mathematics
    When we mark a point on a line, we introduce a discontinuity.aletheist

    That is the flipside of this. Wholes must exist to make sense of parts. But those wholes must crisply exist and not be indeterminate. And wholes only crisply exist to the extent they are constructed as states of affairs. Thus crisp parts are needed too, leading to the chicken and egg situation that a logic of vagueness is needed to solve.

    So the discrete vs continuous debate is doomed to circular viciousness unless it can find its triadic escape hatch. And this is where semiotics really has its merit. It introduces the hierarchical world structure - the notion of stabilising memory (or Peircean habit) - by which the part (the event, the point, the locality, the instance) can be fixed as a sign of the deeper (indeterminate) generality.

    That is, there is the "we" who stand outside everything and produce the cuts - make the marks that point - to the degree they satisfy "our" purposes.

    And this is all the Cantorian model of the reals does. It produces a tractable notion of the discrete vs the continuous to the degree we had some (mathematical) purpose.

    The Zalamea article puts this nicely in stressing how the mathematical approach to the protean concept of continuity proceeds by "saturating" degrees of constraint. It starts with the bluntly assumed discontinuity of the naturals. Then it tightens the noose via the successive notions of "a difference", "a proportion", "a convergence to a limit". The gaps between numbers get squeezed until they finally seem to evaporate as "that to which there is a mark that points".

    That is, the gaps are rendered infinitesimal in a way that they truly do become the (semiotic) ghosts of departed quantities. They become simply a sign that points vaguely over some imagined horizon ... the mathematical equivalent of the old maps indicating the edge of the world as "here be dragons". Once we get to the convergence that is the real numbers in their unfettered multiplicity, maths is left pointing to its own act of exclusion and no longer at anything actually real.
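    To make that squeezing concrete, here is a minimal sketch (my own illustration, not Zalamea's) of successive rational cuts converging on the square root of 2. Every mark made is a perfectly crisp rational; the gap halves at each step, yet the irrational limit is never itself one of the marks produced.

```python
from fractions import Fraction

# Bisection over the rationals: squeeze an interval around sqrt(2).
# Every endpoint is a perfectly definite rational "mark"; the limit
# itself never appears among the marks.
lo, hi = Fraction(1), Fraction(2)
for _ in range(20):
    mid = (lo + hi) / 2
    if mid * mid < 2:
        lo = mid   # sqrt(2) lies in the upper half
    else:
        hi = mid   # sqrt(2) lies in the lower half

print(hi - lo)                # gap after 20 cuts: 1/1048576
print(lo * lo < 2 < hi * hi)  # the limit still sits strictly between the marks
```

    The gap after 20 cuts is exactly 1/2^20 - infinitesimal in tendency, but never zero at any finite stage.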

    As I have said, that is fine for maths given its purposes. It is itself a tenet of pragmatism that finality defines efficiency. Models only have to serve their interests and so - the corollary - they also get to spell out their limit where their indifference kicks in.

    Cantorian infinity is just such an example of the principle of indifference. Actual continuity has been excluded from the realm of the discrete ... to the degree that this historical vein of mathematical thought could have reason to care.

    So this is why the Toms and Fishfrys are so content with what they learn in class. To the degree that philosophy can still make a feeble groaning complaint about incompleteness, they feel utterly justified not to care. They are trained within a social institution that had a purpose (hey guys, let's build machines!) and the very fact of having a definite purpose is (even for pragmatists) what allows an equally sharp state of indifference to what lies beyond that purpose to be fully justified.

    Unfortunately for scientific purposes, the world isn't in fact a machine. We know that now. But while mathematics is groping for sounder foundations - see category theory - it hasn't really got to grips with the new semiotic principles that would be a better model of reality than the good old machine model of existence.
  • Continuity and Mathematics
    You can't subdivide a point and a point has no sides. It's sophistry to claim otherwise. If we can all agree on anything, it's that a point has no sides.fishfry

    So you defined a point as a howling inconsistency - the very thing that can't exist? The zero dimensionality that somehow still occupies a place within a continuity of dimensionality?

    Philosophy can't even get started here if you are happy with sophistry by axiomatic definition.

    So yes, the properties of a continuum with zero dimensionality would have to be as you describe. But then that simply defines your notion of a point either as a real limit (a generalised constraint - thus a species of continuity) or as a reductionist fiction (a faux object that you inconsistently treat as existing in its non-existence).
  • Continuity and Mathematics
    When you divide a line at a point, the point stays with one segment and not the other. As someone trained in math, it's hard for me to understand how this answer isn't satisfactoryfishfry

    How can that be satisfactory in a philosophical sense? If you can divide the point on one of its sides, why can't the next cut divide it on its other side, leaving it completely isolated and not merely the notion of an end point of a continuum?

    And a better paper on the Peircean project is probably...

    http://uberty.org/wp-content/uploads/2015/07/Zalamea-Peirces-Continuum.pdf
  • Continuity and Mathematics
    Continuity can only be relative to discreteness (at least in actualised existence). That is, continuity is defined by the lack of its other. So even spacetime as generalised dimensionality would be only relatively continuous. And that is what physics shows both at the quantum micro scale and also the relativistic macro scale (where spacetime is "fractured" by the event horizons of its light cone structure).
  • Zeno's paradox
    Can you (or anyone) supply some of the relevant Bergson and Peirce links that would shed light on the relation between the mathematical real numbers and the philosophical idea of the continuum?fishfry

    You are better off asking aletheist that as that is his argument. And I am certainly no Bergsonite.
  • Zeno's paradox
    I am not convinced that this is true. Two of Peirce's major objectives for philosophy were to make it more mathematical (by which he meant diagrammatic) and to "insist upon the idea of continuity as of prime importance." Surely he must have considered these efforts to be complementary, rather than contradictory.aletheist

    I think Rich is right that maths is generally premised on the notion of atomistic constructability and so is anti-continuity in that sense. (And that is not a bad thing in itself as constructionist models - even of continuity - have a useful simplicity. Indeed, arguably, it is only by a system of discrete signs that one can calculate. And signs are themselves premised on understanding the world in terms of symbolic discontinuities of course - signs are no use if they are vague.)

    So then the holistic reply to this routine mathematical atomism would be a countering mathematics of constraints - of pattern formation calculated via notions of top-down formal and final cause. And that is damn difficult, if not actually impossible.

    This would be why Peirce felt his diagrammatic logic was so important. Like geometry and symmetry maths, it tries to argue from constraints. Once you fence in the possibilities by drawing a picture with boundaries, then this is a way to "calculate" mathematical-strength outcomes.

    So yes, there is no reason why a construction-based maths should not be complemented by a constraints-based maths. And arguably, geometry illustrates how maths did start in that fashion. Symmetry maths is another such exercise.

    However to progress, even these beginnings had to give way to thoroughly analytic or constructive techniques. Topology had to admit surgery - ways that spaces could be cut apart and glued back together in composite fashion - to advance.

    So that is at the heart of things here. For a holist, it is obvious reality is constraints-based. So regular maths is "wrong" in always framing reality in constructivist terms. And yet in the end maths is a tool for modelling. We actually have to be able to calculate something with it. And calculation is inherently a constructive activity.

    So while we can sketch a picture of systems of constraints - like Peirce's diagrammatical reasoning - that is too cumbersome to turn into an everyday kind of tool that can be used by any schoolkid or universal Turing machine to mechanically grind out results.

    Of course, that kind of holistic reasoning is also then absolutely necessary for proper metaphysical level thinking, and diagrammatical reasoning can be used to advance formal arguments in that way. You have probably seen the way Louis Kauffman has brought together these kinds of thoughts, recognising the connections with knot theory, as well as Spencer-Brown's Laws of Form. And I would toss hierarchy theory into that mix too.

    So construction rules the mathematical imagination as tools of calculation are the desired outcome of mathematical effort.

    While that doesn't make such maths wrong (hey, within its limits, it works I keep saying), it does mean that one should never take too much notice of a mathematician making extrapolations of a metaphysical nature. They are bound to be misguided just because they hold in their hands a very impressive hammer and so are looking about for some new annoying nail to bang flat.
  • Zeno's paradox
    I have noted, rather, that no matter how big a finite number you specify, it is possible in principle to count up to and beyond that number. In other words, you cannot identify a largest natural number (or integer) beyond which it is impossible in principle to count.aletheist

    Again this is an example of rationally seeking a way for the part to speak for the whole. What can't be achieved via actualisation can be supported by appeal to the existence of a local property - in this case, not bijection but a quick demonstration that any nameable number implies in its own syntactic construction a number immediately larger (or immediately smaller).

    Tom is also employing this local syntactic property.

    So yes, bijection seems more abstract a level of definition because it maps maths to maths rather than maths to physics (ie: syntactic spaces where time is still part of the deal - as in saying any time you name a number, the next higher number awaits). But still, the general mathematical tactic is the same - seek a local property that constructive principles will guarantee stands for the truth of the whole. And thus, the very nature of this tactic reveals the deeper questionable presumptions that metaphysics would be interested in.
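    For concreteness, the standard back-and-forth pairing of the naturals with the integers shows what such a local property looks like in practice (a sketch in Python; the function name is my own):

```python
def n_to_z(n: int) -> int:
    """Map the naturals 0, 1, 2, 3, ... onto the integers 0, -1, 1, -2, ..."""
    return n // 2 if n % 2 == 0 else -((n + 1) // 2)

# The rule pairs every natural with a unique integer and misses none:
window = [n_to_z(n) for n in range(7)]
print(window)          # [0, -1, 1, -2, 2, -3, 3]
print(sorted(window))  # [-3, -2, -1, 0, 1, 2, 3] - no gaps, no repeats
```

    One finite rule, checkable on any finite window, is what licenses the claim about the infinite whole - which is exactly the tactic being described.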

    It is the idea that reality is perfectly constructible that is questioned by a synechetic or holistic point of view.

    But then even a simple holism falters - the idea of the continuum being instead "the foundational". The continuum is that to which an infinity of cuts can be made. If a division is possible, another one right next to it ... but spaced by the infinitesimal of some continuum ... must be possible. So simple holism is simply the inverse problem. Although - like division as an arithmetic operation - there is an advantage that at least it is being flagged that there is a more primitive presumption about there being in fact a pre-existent whole (that gets cut or divided).

    So simple holism brings out the fact that simple constructionism is presuming an infinite empty space that can be filled by an unbounded act of counting. The standard atomistic approach presumes its numerical void waiting to be filled. And even bijection just illustrates the presumed existence of this numerical void as a waste disposal system that can swallow all arithmetical sequences. You can toss anything into the black hole that is infinity and it will disappear without a splash.

    So the simplest view treats infinity as the void required by atomic construction. The next simplest view treats infinity as a continuum - a whole that is in fact an everything, and so able to be infinitely divided.

    Then obviously - as usual - there is the properly complex view where instead of an atomistic metaphysics of nothingness, or even the partial holism of a reciprocal everythingness, we arrive at the foundational thing of a vagueness as that deepest ground which can be divided towards this reciprocal deal of numerical construction vs numerical constraint, the filling of a numberless void vs the breaking of a numberful continuum.

    Of course, none of this deep metaphysics need trouble those only concerned with ordinary maths. They can believe that Cantor fixed everything for atomistic construction and the story ends there.

    But deep metaphysics makes the argument that the very act of trying to cut is what produces the divided that appears to either side. The continuum arises because it is cuttable. Which like the Cheshire Cat's grin, sounds really weird to those only used to everyday notions of logic or causality where something - either everything or nothing - has to be the starting point or prime mover for any chain of events.
  • Zeno's paradox
    If you think to yourself, "The natural numbers, the integers, and the rational numbers are examples of foozlable sets," you will not confuse yourself or others by shifting the meaning of a technical term to its everyday meaning.fishfry

    I think the issue here has been metaphysical - so neither everyday, nor mathematical. Although the mathematics of course has to have some grounds for finding its own axiomatic base "reasonable".

    So the Zeno paradox is about a particular difficulty between a mathematical operation and the world we might want it to describe. The math seems to say one thing, our experience of the world another.

    Bijection is great. It replaces the need for a global act of quantification (demonstrating an example of infinity by showing a sequence is measurably unbounded) with a local demonstration of a quality (if bijection works for this little bit of a sequence, then that property ensures the infinite nature of the whole). So bijection doesn't do away with the notion of counting or a syntactic sequence. But it does extract a local property that rationally speaks for the whole.

    No problems there.

    And then we get back to the metaphysics on which even the mathematical intuitions are founded. Which is the issue the OP broached and which you are side-tracking.
  • Perfection and Math
    Here is the text box definition, pulled from one of my statistics course books.Jeremiah

    That would be why probability ranges from 0 to 1 then? Categorical differences are measured relatively in fact?
  • Fractured wholes.
    Positivity doesn't exist unless it's in the company of negativity. So if we're just dealing with 2 dimensions, left and right are negatives of one another. It's more complicated if we add that third dimension.Mongrel

    Symmetry that is broken simply is symmetry broken on just a single scale. So it is easily reversed. There is no real separation of what just got separated and so there is nothing stopping a distinction immediately erasing itself.

    That is what literally happens with "positivity and negativity" when it comes to fundamental particles. They pop out of the quantum vacuum in opposing pairs (as the conservation laws derived from symmetry mandate) and then annihilate so fast that physics ends up calling them virtual.

    To get a persistent symmetry breaking requires a "third dimension" - a breaking over scale that creates an effective state of separation or asymmetry. Stuff has to be put far enough apart from itself so it can do something else while it takes its time to - by the end - just annihilate.

    With our actual Universe, there is a complex charge asymmetry built in because "raw matter" could fall into several different local symmetry-breaking arrangements. You could have the quarks with their eight-fold way that left a sufficient excess of positive protons. Then you had the leptons which - after an entanglement with the further symmetry-breaking of the Higgs fields - eventually left a sufficient excess of negative electrons.

    So right there - in a series of complicated symmetry breakings that turned out to have the makings of an actual asymmetry - you have an illustration of reality being a something because it got separated across scale (thermal scale: heat all this asymmetric residue and you can return it to its Big Bang equilibrium where all particles are simply virtual fluctuations of a vanilla force).
  • Fractured wholes.
    I'm still going to have to think about what you've said more, do a little reading on the subject before I get back to you.Wosret

    No problem. I understand it is a dense issue. But as SX indicates, we can deal with actual similarity and difference in the world with an apparent intuitive ease that belies the underlying metaphysical complexity.

    And that complexity is what gets revealed as soon as we instead start to ask how a difference comes to make a difference. That question is like finding the loose end of a woolly jumper and beginning to pull.

    Well, right, left, up and down are all positive things. "not-me", "not-us" and "not-shit" are not, and could really conceivably be anything at all except for me, us, and shit.Wosret

    This illustrates particularity. We seem to start metaphysics with a brute something. There is the positivity of some concrete proposition - that is then either true or false.

    But look closer and you can see here that the brute somethingness points to an "otherness" of two possible kinds - the more general, or the more vague.

    All the not-As might be accounted for by a concrete generality of some constraint that then defines the nature of what may count as a certain genus of particular. It might be the "me" that is a subset of the "we", or the "shit" that is defined in contrast to the undigested banquet.

    Or the not-A might simply refer to the indeterminism that is by contrast the generalised lack of such a determining context. Or in other words, it refers to the freedom or contingency that is also an equally inescapable aspect of reality. It might be the random seeming collection of "me, apples, tanks, galaxies". The "other" being spoken of via the logical construction of "not-A" could be just every kind of stuff. So just, in semantic effect, a vagueness.

    Thus once more, a complex triad is revealed at the heart of conventional monistic thought.

    From the particular - viewed as some brute substantial particular - you can talk about the "other", the not-A, as either the vague or the general. So that is something to be further specified in any attempt to apply logic to ontology.

    Peirce made the difference clear enough. He argued that the law of the excluded middle does not apply to generality, while it is the principle of non-contradiction that fails to apply to the vague.

    So within the (triadic) laws of thought, this important distinction between generality and vagueness is perfectly well defined (along with particularity as that to which all three laws of thought then do apply).
  • Zeno's paradox
    You mentioned the relevance of traversing the Planck scale. And while I applaud taking the physical facts seriously, in fact any exactness of location results in a complementary uncertainty about momentum (or equivalently, duration).

    So if you are talking about a physical continuity at the Planck scale, your attempt to mark the first location would already have your fixed point traversing the whole distance to its resulting destination.

    It is like the way a photon is said to experience no time to get where it is going. Travelling at c means the journey itself is already described by a vector - a ray rather than a succession of points.

    So in the real world, locating your starting point is subject to the uncertainty relation. The Planck scale is the pivot which prevents you reaching your goal of exactitude by diverting all your measurement effort suddenly in the opposite direction. In effect you so energise the point you want to measure that it has already crossed all the space you just imagined as the context that could have confined it.

    Zeno definitely does not apply in quantum physical reality.
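    To put rough numbers on that pivot, here is a back-of-envelope sketch (my own addition, using standard CODATA constants) of the Planck length and the momentum cost that the uncertainty relation attaches to marking a location that exactly:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# Planck length: the scale where gravity and quantum uncertainty collide.
l_planck = math.sqrt(hbar * G / c**3)
print(f"{l_planck:.3e} m")  # ~1.616e-35 m

# Heisenberg: pinning a location to within dx costs at least dp in momentum.
dx = l_planck
dp = hbar / (2 * dx)
print(f"{dp:.3e} kg*m/s")   # the momentum kick of a Planck-scale "mark"
```

    A momentum uncertainty of a few kg*m/s concentrated on a single particle is enormous - which is the sense in which the act of marking the point sends it flying across the very interval it was meant to anchor.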
  • Zeno's paradox
    I guess you must deny, then, that the integers are countable, since nothing and no one can actually count them all. And yet it is a proven mathematical theorem that not only the integers, but also the rational numbers are countable - i.e., it is possible in principle to count them - despite the fact that they are infinitely numerous.aletheist

    MU is right that it has to be more complex than that. Talk of actually counting smuggles in the necessity of the maker of the infinitesimal divisions or Dedekind cuts.

    For there to be observables, there has to be an observer. Or for the semiotician, for there to be the signs (the numeric ritual of giving name to the cuts), there has to be a habit of interpretance in place that allows that to be the ritualistic case. Which is why the number line itself is just a firstness or vagueness. In the ultimate analysis it is the raw possibility of continua ... or their "other", the matchingly definite thing of a discontinuity.

    So infinity and infinitesimal describe complementary limits - one is the continuum limit, the other the limit on bounded discreteness, the limit of an isolated point.

    Thus counting presumes an observer then able to stand in between. The counter can count forever because the counter also determines the cuts that pragmatically "do no violence" to the metaphysics, at least as far as the counter is concerned.

    My point is thus that an observerless metaphysics is as obtuse as an observerless physics, or theory of truth, or observerless anything when it comes to fundamental thought.
  • Fractured wholes.
    Great. The essential thing is not to be scared of complexity.

    Metaphysical analysis always arrives at dichotomous contrasts. Logical intelligibility itself demands a world divided into what is vs what is not. The problem is that this has to work for both sides of the equation. So the "what is not" has to be still something else itself - whatever it is that can make the "what is" what it is.

    So analysis sounds like it demands the resolution of a monadic outcome - the arrival at the fundamental via the rejection of all that is superficial, or contingent, or emergent, etc. Yet the fact is the dichotomy - the dyadic relation - is irreducible. You can't have any notion of the "what is" in the absence of the complementary notion of it being precisely "that which is not what it is not".

    So there is a doubled or recursive negation at work. Monadicity can only arrive at itself via the denial of its own denying. The essentially self defeating nature of monadic metaphysics is thus revealed. It others othering and thus falls into inconsistency even with itself.

    Thus the dichotomy forms the irreducible basis of intelligible existence. It both finds the natural divisions of being, and relates them as each other's other. Each is the other's limit.

    Having established that, we also establish that we are thinking in active and developmental terms, not passive and brutely existent ones. Existence is revealed as having a necessary history - as divisions must both arise and terminate. Which is where you get the thirdness or triadicity that is the ultimately irreducible metaphysical state. So yes, what is fundamental is not twoness, let alone oneness, but threeness. Three is the number of actual complexity.

    A further point is that to cash all this out in terms of some actual world requires a global state of asymmetry - or scale symmetry. That is, if reality is constructed by a symmetry breaking of pure possibility, then this breaking must happen freely and completely across all available scales of being.

    In terms of cosmological theory, the results must be homogeneous and isotropic - invariant with the scale of observation. That is why fractal maths are found everywhere that nature is at its simplest. Zoom in or zoom out, the fractal world looks always exactly the same. And that is because the dichotomy or distinction being expressed is being expressed fully over all possible scales. It is the same damn thing - the same damn seed asymmetry - absolutely everywhere.

    The Koch curve shows this in its fractal generator, which is the simple ratio of natural log4/natural log3 (a fractal dimension of about 1.26).

    To unpack this, the Koch fractal is a line divided into three and then the middle segment sprouting the two sides of a further triangular bump. So a line buckles in the simplest imaginable fashion, turning one segment into four at a third of the scale. That gives you the seed ratio - the 4/3. And then the natural log simply forces the growth of that act of buckling over every possible scale. You thus have two exponential actions in a constantly specified balance. The result is a mathematical model of perfectly complete asymmetry - or rather, the emergence of a new axis of scale symmetry, a fractal dimension that stands in the middle of two bounding extremes of action (between the flatness of the line that gets radically broken, and then the curvature of the buckling that is a departure from the now radical thing which is to be instead flat and "a line all the same with itself").
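    The arithmetic of that buckling can be checked directly. Each iteration turns every segment into four copies at a third of the scale, so the similarity dimension comes out as log 4 / log 3 (a quick sketch of my own, not from the original post):

```python
import math

# Koch construction: each step turns every segment into 4 copies at 1/3 scale.
segments, scale = 1, 1.0
for _ in range(5):
    segments *= 4
    scale /= 3

# Similarity dimension: D = log(number of copies) / log(1 / scale factor)
D = math.log(4) / math.log(3)
print(f"{D:.4f}")  # 1.2619 - between a line (dimension 1) and a plane (dimension 2)

# The same value falls out of the raw counts at any iteration depth.
print(math.log(segments) / math.log(1 / scale))
```

    Note for comparison that log2/log3 (about 0.63) is the dimension of the Cantor set, where the middle third is removed rather than buckled outward.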

    So it may seem a bit of an excursion to talk about the maths of fractals. On the other hand, it is a fact that the new maths of complexity (fractals, scale-free networks, universality, criticality, etc) gives a picture of reality that is precisely the kind of irreducibly triadic metaphysics I just described.

    So don't expect monadic metaphysics to be right. Expect the dichotomies or symmetry breakings that point to the broken symmetries or equilibrated outcomes that are then in turn their natural "scale symmetry" limits (or, the same thing, their states of asymmetric or hierarchical final order).

    Ie: The maths of complexity has vindicated this irreducibly triadic vision of nature during the past 40 years.
  • Fractured wholes.
    Similarity and difference are a metaphysical dichotomy. So each is defined in terms of being not the other. Or rather, in practice - as the breaking of a symmetry - each is made the least like the other by being as far apart as possible as states of being or categories.

    Being two poles being differentiated then brings in the further thing, which is the vagueness or firstness that they divide. They are both crisply actual - as limits - of what was the purely possible.

    This further "in reference to" also manifests (confusingly) in the crisply divided outcome. The world that emerges between two opposed limits (here the similar and the different) is itself everywhere some mixture or equilibrium balance of the two categories. So the world itself does sit in the middle - with this concrete mixture of states being found to be the same blend over all observable scales.

    So that is the basic set up - for all metaphysical dichotomies. They speak to the firstness that is their common vague origin (the symmetry that got broken) as well as the thirdness which is their own completely mixed state of being - the further thing of having become broken in the limit and arriving at an equilibrium balance.

    Of course it may sound crazy to talk of similarity and difference as being themselves united and divided. Or rather, the feature here is that they are united in initial vagueness, then concretely divided by a logical symmetry breaking, and then reunited by the emergent symmetry of being as mixed together as they can possibly be. The developmental trajectory of firstness, secondness and thirdness thus describes itself in terms of itself.

    Anyway, it means that for there to be a world, similarity and difference must be a division concretely respected over all scales of differentiation (and hence integration).

    Now we can get down to the detail of the mechanism.

    SX makes the standard semiotic point that to be an actual difference, a difference has to make a difference. So difference itself is divided into the meaningful vs the meaningless, the signal vs the noise, the teleological vs the contingent. Thus now we do bring in the active or causal nature of being.

    The alternative view is that existence is a passive brute fact. It has no reasons. Difference or similarity has no meaning. It is all just arbitrary labels for a world that has no developmental story and thus no reasons for its apparently definite state of organisation.

    But here I have described a developmental or process metaphysics where existence is an emergent equilibrium state where change keeps changing, but by the end further change can make no difference. It is like a new pack of cards. Once the deck is well shuffled, continued shuffling makes no effective difference. It does make a difference to the exact order, but now such differences are a matter of indifference. When the deck is as random as possible, it can't be made more random.
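
    The shuffled-deck point can be checked directly. A small sketch (note that Python's shuffle already yields a uniform random permutation, so a single shuffle counts as "well shuffled" - the point is that further shuffles leave the statistics untouched):

```python
import random

# Mean displacement of a uniformly random n-card permutation is
# (n^2 - 1) / (3n), which is about 17.3 for n = 52. Shuffling a
# well-shuffled deck again can make no difference to this statistic.
def mean_displacement(deck):
    return sum(abs(i - card) for i, card in enumerate(deck)) / len(deck)

def average_over_trials(n_shuffles, deck_size=52, trials=2000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        deck = list(range(deck_size))
        for _ in range(n_shuffles):
            rng.shuffle(deck)
        total += mean_displacement(deck)
    return total / trials

once, many = average_over_trials(1), average_over_trials(10)
print(round(once, 1), round(many, 1))  # both ~17.3: further shuffling is a matter of indifference
```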

    So yes, this all seems now a rather mindful or psychological kind of metaphysics. Similarity and difference are relative judgements that are about differences that make a difference (in breaking the symmetry of a state of similarity which is another word now for a state of indifference).

    But again, that bug is really a feature. It brings minds or observers firmly within the metaphysics of actual being. It unites epistemology and ontology in making meanings and thus purposes part of the world.

    The final twist to bring an organic or pansemiotic metaphysics into focus is then understanding the triadic relation in causal terms as the hierarchical contrast between constraints and freedoms. One is top down causality, the other acts in causally bottom up fashion.

    So similarity is enforced on natural possibility by general constraints. Worlds as states form constraining contexts. They limit free possibility in particular ways. And all objects or events thus limited are the same in that fashion. They all participate in that particular form.

    But then difference still exists. That is what freedom means. Spontaneous and unconstrained in some regard. So accidents and contingency are also fundamental in this organic picture of nature. They too exist over all scales of being. (The statistics of fractals or power laws being the signature of actual natural systems for this reason.)
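
    That scale-free signature can be illustrated numerically. A hedged sketch using a Pareto power law (the exponent and cutoff are made-up illustrative numbers): the survival ratio S(2x)/S(x) comes out the same at every scale x, which is what "the same blend over all observable scales" cashes out as statistically.

```python
import random

# A Pareto power law has survival S(x) = (xm / x)**alpha, so the ratio
# S(2x) / S(x) = 2**-alpha at *every* scale x - the scale-free signature.
alpha, xm = 1.5, 1.0  # illustrative choices
rng = random.Random(0)
# inverse-transform sampling: X = xm * U**(-1/alpha) for U in (0, 1]
samples = [xm * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(200_000)]

def survival(x):
    return sum(s > x for s in samples) / len(samples)

for x in (2.0, 4.0, 8.0):
    print(x, round(survival(2 * x) / survival(x), 3))  # each ratio ~ 2**-1.5 ~ 0.354
```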

    So now we have that triadic set up. There are general constraints. And there are particular freedoms. Then there is the rule of indifference in operation that marks the emergent boundary where now further differences fail to make a difference to the general state of things - which, dichotomously, also then defines the differences that do make a difference.

    So if enforcing similarity is the telos of a constraint, then that also means that eventually the world becomes equilibrated - like a well shuffled deck - and so apparently only composed of a whole bunch of accidents. The differences that don't make a difference become the apparent ground of being because they are what get left once the development of a world has arrived at the dichotomous satisfaction of its own symmetry breaking desires.

    Contingency rules when organisation has had its say. Existence is a bunch of indifference (a heat death) in the end.

    But the story of how it gets to that fate is the bit that is metaphysically interesting.
  • Zeno's paradox
    A true continuum is infinitely divisible into smaller continua; it is not infinitely divisible into discrete individuals.aletheist

    The story in a nutshell. Points are a fiction here. The reality being modelled is the usual irreducibly complex thing of a vector - a composite of the ideas of a location and a motion...

    My coordinate system only uses the rational numbers. So I ask again; what coordinate does it pass through first?Michael

    ....and the corollary is that what is being counted is not points but (Dedekind) cuts. The numbers count the infinite possibility for creating localised and non-moving discontinua.

    My coordinate system only uses the rational numbers. So I ask again; what coordinate does it pass through first?Michael

    The cut bounds the continuum in question. So the continuum has already been "traversed" in the fact that there is this first cut. You are then asking how near the other end of the cut continuum can be brought in the direction of the first cut. The answer is that it can be brought arbitrarily close. Infinitesimally near.

    So you are creating difficulties by demanding that continua be constructed by sticking together a sequence of points. However there is no reason the whole story can't be flipped so that we are talking about relative states of constraint on a continuity - or indeed, an uncertainty - when it comes to the possibility of some motion, action, or degree of freedom.
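
    The reason the "first coordinate" question has no answer can be shown in a couple of lines: whatever rational you nominate as first past the cut, there is always an earlier one.

```python
from fractions import Fraction

# Whatever rational q > 0 you nominate as the "first" coordinate past
# the cut at 0, q/2 is a rational that gets crossed earlier still.
# The halving never terminates - coordinates only get arbitrarily close.
q = Fraction(1, 10)   # an illustrative candidate "first" coordinate
for _ in range(5):
    earlier = q / 2
    assert Fraction(0) < earlier < q   # always a strictly earlier rational
    q = earlier
print(q)  # 1/320 - and still not "first"
```
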
  • Most over-rated philosopher?
    Yep, simple isn't it. If you actually break things apart, they are no longer in a relation.

    Again, close reading will show that I stress that this is about "directions" and "extents", and so the intrinsic relativity of a logical dichotomy is presumed. Your pretence otherwise is just trolling.
  • Most over-rated philosopher?
    Don't pretend to be so dim. Maximising the separation is night and day different from breaking the connection.
  • Most over-rated philosopher?
    Are your close reading skills really as challenged as you pretend?

    (Of course, living beings can't actually ignore the world. They must live in it. But the point here is the direction of the desires. Rationalism got the natural direction wrong - leading to rationalist frustration and all its problems concerning knowledge. Pragmatism instead gets the direction right and thus explains the way we actually are. There is a good reason why humans want to escape into a realm of "fiction" - and I'm including science and technology here, of course. As to the extent we can do that, we become then true "selves", the locus of a radical freedom or autonomy to make the world whatever the hell we want it to be.)apokrisis
  • Most over-rated philosopher?
    Get back to me when you want to discuss what I actually said and not what you are pretending I said.
  • Most over-rated philosopher?
    If you want to discuss this seriously, define madness properly.

    Are you talking paranoia or bipolar mania or what? A primary symptom of schizophrenia is a breakdown of perceptual predictability. So a loss of control over experience rather than a gain.
  • Most over-rated philosopher?
    You sound threatened somehow.
  • Most over-rated philosopher?
    I think that's quite a mad philosophy.Agustino

    And I find your replies trivial.
  • Most over-rated philosopher?
    So the direction of desire is towards madness and the mad is the most successful of us all? :sAgustino

    Why do you have to drag Trump into every conversation? But yes I guess.
  • Are the laws of nature irreducible?
    So, in all the common interpretations of QM, including "no-collapse" interpretations, there always is a tacit reference to measurement operations, and the choice of the setup of a macroscopic measurement apparatus always refers back to the interests of the human beings who are performing the measurement. The processes of either "decoherence", or "collapse" of the wave function, (or of "projection" of the state vector), amount exactly to the same thing from the point of view of human observers.Pierre-Normand

    Yep. Decoherence - at the level of heuristic principle - says all the troubling indeterminacy disappears in the bulk behaviour. So that probabilistic view gives us an informal account of collapse that fits the world we see.

    Of course, the existing quantum formalism doesn't itself contain a model of "the observer" that would allow us to place the collapse to classical observables at some specific scale of being. But then either one thinks that is the job of a better future model - which seems the metaphysically reasonable choice. Or one can go crazy with the metaphysics and say every possible world in fact exists - a "solution" which still does not say anything useful about how world-lines now branch rather than collapse.

    So the main reason for supporting MWI is that it is ... so outrageous. It appeals because it is "following the science to its logical conclusion" in a way that also can be used to shock and awe ordinary folk. Scientism in other words.
  • Most over-rated philosopher?
    But again, how does this change anything?Agustino

    Simply put, if the error is external, then the mind simply has to make a better effort at knowing the world truly. But if instead the error is internal - the mind has to create the structure of its perceptions - then more effort may only put the mind at an even further distance from the thing-in-itself.

    And this in fact fits with psychological science. It also ceases to be a problem once you give up rationalist dreams of perfect knowledge and accept the pragmatism of a Peircean modelling relation with the world.

    So a striking fact of cognitive architecture is that consciousness is in fact "anti-representational". The brain would rather live with its best guess about the actual state of the world. It would like to predict away all experience if it could - as that way it can start to notice the small things that might matter most to it.

    This would be Kantianism in spades. It is not just a generic structure of space and time, or causality, that we project on to existence. Ahead of every moment we are predicting every material event as much as possible, so we can quickly file it under "ignore" when it actually happens.

    In this sense, we externalise error. Through forward modelling or anticipatory processing, we form strong expectancies about how the world "ought to be". And then the world goes and does something "wrong", something surprising or unexpected. The damn thing-in-itself misbehaves, leaving us having to impose some revised set of expectations that then becomes our new consciousness of its state of being.

    (And until we have generated some new state of prediction, we are not conscious of anything for the half second to second it can take to sort out a state of sudden confusion - or in extreme situations, like a car crash, our memory will be of time slowed or even frozen with a hallucinatory, conceptually undigested, vividness. It is another psychological observation that childhood experience and dreams have this extra vividness because there is not then such a weight of adult conceptual habit predicting all the perception away and rendering it much more mundane.)
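
    The forward-modelling story can be caricatured in a few lines. A toy sketch (the signal, the learning rate and the update rule are made-up illustrative numbers, not any actual neural model): a leaky estimator "predicts away" a steady input, so only the surprise remains salient.

```python
# A leaky estimator habituates to a steady input, filing it under
# "ignore" - only prediction errors (surprises) remain salient.
signal = [5.0] * 20 + [9.0] + [5.0] * 20   # a steady world with one surprise
prediction, rate = 0.0, 0.5
errors = []
for s in signal:
    error = s - prediction        # what still demands attention
    prediction += rate * error    # revise expectations to explain it away
    errors.append(abs(error))
print(errors[0], round(errors[19], 6), round(errors[20], 4))
# big at first exposure, ~0 once habituated, then a spike at the surprise
```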

    Anyway, as I was saying, Kant was right in understanding that the brain has to come at the world equipped with conceptual habits of structuration if it is to understand anything - in terms of its own pragmatic interests.

    But Kant was still caught up in the rationalist dream of perfect knowledge. And so the gap between mind and world was seen as some kind of drama or failure. We have the right to know the world as it is - and yet we absolutely can't.

    Peirce fixed this by naturalising teleology. Knowledge exists to serve purposes. And so what was a rationalist bug becomes a pragmatistic feature.

    Oh goody! We don't have to actually know the world truly at all if the real epistemic aim is to be able to imagine it in terms that give us the most personal freedom to act. The more we can routinely ignore, the more we can then insert our own preferences into the world as we experience it. Consciousness becomes not a story of the thing-in-itself but about ourselves whizzing along on a wave of satisfied self-interest.

    So Kant turned things around to get the cognitive architecture right. But because he still aspired to rationalist perfection, he wanted to boil down the mind's necessary habits to some bare minimum - ontic structure like space, time and causality.

    This simply isn't bold enough. Brains evolved for entirely self-interested reasons. Which is why an epistemology of pragmatism - consciousness as a reality-predicting modelling relation - was needed to fully cash out the "Kantian revolution". The thing-in-itself is of interest only to the degree that it can be rendered impotent to the mind. The goal is to transcend its material constraints so as to live in the splendid freedom of a self-made world of semiotic sign.

    (Of course, living beings can't actually ignore the world. They must live in it. But the point here is the direction of the desires. Rationalism got the natural direction wrong - leading to rationalist frustration and all its problems concerning knowledge. Pragmatism instead gets the direction right and thus explains the way we actually are. There is a good reason why humans want to escape into a realm of "fiction" - and I'm including science and technology here, of course. As to the extent we can do that, we become then true "selves", the locus of a radical freedom or autonomy to make the world whatever the hell we want it to be.)
  • Are the laws of nature irreducible?
    What's the problem? Is deflection your only defence?
  • Are the laws of nature irreducible?
    That's what allows thought, and life.tom

    Nope. It is the semiotic interaction between the realms of sign and materiality that allow that.

    Computation explicitly rules out the interaction between formal and material causes. So to actually build a computer, the dynamics of the material world must be frozen out at the level of the hardware. Computation is the opposite of the organic reality in that regard. And biophysics is confirming what was already obvious.

    And that is before we even get into the other issue of who writes the programs to run on the hardware. Or who understands that the simulations are actually "of something". Or that error correction is needed because what the computer seems to be saying must be instead that kind of irreducible instability which is the real dynamical world intruding. (Oh shit, my quantum entanglements keep collapsing or branching off into other worlds.)

    But keep on with the computer science sloganeering. I'm well familiar with the sociology of the field. No one cares if people talk in scifi terms there. It is the name of the game - always over-promise and under-deliver.
  • Are the laws of nature irreducible?
    It is how these principles are related to what is outside the category, how we relate an epistemology to an ontology for example, which is where we should make such judgements of good and bad.Metaphysician Undercover

    So you mean ... exactly what I said then?

    Ie: Holism is four cause modelling, reductionism is just the two. And simpler can be better when humans merely want to impose their own formal and final causality on a world of material/efficient possibility. However it is definitely worse when instead our aim is to explain "the whole of things" - as when stepping back to account for the cosmos, the atom and the mind.
  • Are the laws of nature irreducible?
    At the risk of repeating myself, it has been proved that all real universal computers are equivalent. The set of motions of one can be exactly replicated on the other. It has further been proved that any finite physical system can be simulated to arbitrary accuracy, with finite means, on a universal computer. The brain can thus be simulated on a universal computer, whether it is itself universal or not. Whatever a brain can do, a computer can do. There is nothing beyond universality.tom

    Still this dualistic crackpottery.

    A computational simulation is of course not the real thing. It is a simulation of the real thing's formal organisation abstracted from its material being.

    This should be easy enough to see. A computer relies on the physical absence of material constraints. It is cut off from the real world in that it has a power supply it doesn't need to earn. It doesn't matter what program is run as the design of the circuitry means the entropic effort is zeroed in artificial fashion. The whole set-up is about isolating the software from dissipative reality so it can do its solipsistic Platonic thing.

    A brain is quite different in being organically part of the material world it seeks to regulate via semiosis. And you can see this in things like the way it is fundamentally dependent on dissipative processes and instability.

    Where a computer must be made of Platonically stable or eternal parts - logic circuits frozen in silicon - the brain requires the opposite. It depends on the fact that right down at the nanoscale of cellular structure everything is on the point of falling apart. All molecular components are self-assembling in fluid fashion. So they are constantly about to break apart, and constantly about to reform.

    And in having this critical instability, it means that top-down semiotic constraint - the faint nudges to go one way or the other that can be delivered by the third thing of a molecular message - become supremely powerful. This is the reason why a level of sign or biological code can non-dualistically control its world. It is why the "software" can regulate the materiality of metabolic processes, and on a neural scale, the material actions of whole bodies.

    So science has looked at how organisms are actually possible. And the answer isn't computation but biosemiosis.

    Computers are abstracted form. So they have the fundamental requirement that someone - their human masters - freezes out the material dynamics of the real world so they can exist in their frozen worlds of silicon (or whatever super-cooled, error corrected, machinery a quantum computer might get made of).

    And organisms are the opposite. They depend on a material instability - being at the edge of chaos - that then makes it possible for top-down messages to tip stochastically self-organising processes in one direction or another.

    As I say, that is what makes multi-realisability an issue. A Turing Machine can indeed be made out of anything - tin cans and string if you like.

    But biology - in only the past 10 years - has shown how organic chemistry may be a unique kind of "stuff" that can't be replicated or simulated by simpler physical machinery (circuitry lacking the critical instability that then gives semiosis "something to do").

    It is a happy fact that Turing himself was on to it with his parallel work on chemical morphogenesis. He was an actual genius who saw both sides of the story. But sadly UTMs have given licence to decades of academic crackpottery as hyped-up computer scientists have pretended that the material world itself is "computable" - as if an abstracted simulation is not the opposite of existing in a world of material process.
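
    Turing's morphogenesis insight is worth spelling out. A hedged sketch of the linear stability analysis behind a diffusion-driven instability (the reaction Jacobian and diffusion rates are made-up illustrative numbers): kinetics that are stable on their own go unstable over a band of spatial scales once the inhibitor diffuses much faster than the activator - instability as the very thing that gives structure "something to do".

```python
import math

# Turing instability sketch: a two-species reaction with a stable
# steady state (trace < 0, det > 0) is destabilised by diffusion
# when the inhibitor spreads much faster than the activator.
a, b, c, d = 1.0, -2.0, 3.0, -4.0   # illustrative Jacobian: trace=-3, det=2
Du, Dv = 1.0, 40.0                   # inhibitor diffuses 40x faster

def growth_rate(k2):
    """Largest Re(lambda) of the linearised system at squared wavenumber k2."""
    tr = (a - Du * k2) + (d - Dv * k2)
    det = (a - Du * k2) * (d - Dv * k2) - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0:
        return (tr + math.sqrt(disc)) / 2.0
    return tr / 2.0   # complex pair: real part only

assert growth_rate(0.0) < 0   # uniform state is stable without diffusion
unstable = [k2 / 100 for k2 in range(1, 101) if growth_rate(k2 / 100) > 0]
print(f"unstable band of k^2: {unstable[0]} .. {unstable[-1]}")
```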
  • Are the laws of nature irreducible?
    By saying that human beings create a group-mind, without attributing this unity to God, you assign to the human race the property of God, and commit the sin of the fallen angel.Metaphysician Undercover

    Cripes. So social constructionism is the work of the Devil.
  • Are the laws of nature irreducible?
    I would also readily grant that mental abilities can be multiply realized in a variety of biological or mechanical media ...Pierre-Normand

    I have to say that the latest understanding of biophysics at the nanoscale is now a serious challenge to multirealisability. Organic molecules have physically unique properties that allow them to flourish in a dissipative environment and function as various kinds of functional components. So the biologists don't have to grant the computationalists any kind of ground at all anymore if life and mind are semiotic processes rather than information processes.

    And the beauty is that the onus is on computationalists to show that life and mind are "just information processes" now if they want to keep pushing that particular barrow. This is no longer the 1970s. :)

    Peter Hoffman has done a great book - Life's Ratchet - on this.
  • Are the laws of nature irreducible?
    Of course, if you managed to formulate an argument that the brain is not computationally universal, and that it could not be programmed (e.g. by training), and that therefore the mind could not be an abstraction instantiated on a brain, then you might have a point.tom

    You ought to check Robert Rosen's Essays on Life Itself for such arguments. Also Howard Pattee's paper, Artificial life needs a real epistemology.

    But even just from a good old flesh and blood neuroscience perspective, where's the evidence that the brain is actually any kind of Turing machine (even if you believe that any physical process can be simulated by a UTM)?
  • Are the laws of nature irreducible?
    Reductionists are generally materialist. If there are such philosophers as 'reductionist dualists', I would be interested to hear about them.Wayfarer

    Chalmers?
  • Are the laws of nature irreducible?
    No, I meant that hearing people speak, and reading books are acts of sensation. Don't you agree?Metaphysician Undercover

    Of course not. All my senses actually see is squiggles of black marks. My cat sees the same thing.

    To interpret marks as speaking about ideas is something very different. It is to be constrained not by the physics of light and dark patterns but by a communal level of cultural meaning.

    So without being a substance dualist, the semiotician has all the explanatory benefits of there being "two worlds" - the one of sign, the other of matter.

    I don't read books, or speak to people to gain access to any "group-mind".Metaphysician Undercover

    Exactly. I mean who needs a physics textbook to know about physics, or a neuroscience textbook to know about brains? Just make the damn shit up to suit yourself.
  • Scholastic philosophy
    Hah! Knocked it out of the park.
  • Are the laws of nature irreducible?
    I don't understand the bad reputation which reductionism has received. If it's the way toward a good clear understanding, then where's the problem?Metaphysician Undercover

    I always say it is fine in itself. It is only bad in the sense that two causes is not enough to model the whole of things, so reductionism - as a tale of mechanical bottom-up construction - fails once we get towards the holistic extremes of modelling. You need a metaphysics of all four causes to talk about the very small, the very large, and the very complex (the quantum, the cosmological, the biotic).

    a dualist reductionist would not meet the same problem. The dualist allows non-spatial substance.Metaphysician Undercover

    Yep. Olde fashioned magick! Dualism is just failed reductionism doubling down to make a mystery of both mind and matter.

    I don't see this need. We hear people talking, we read books. These are perceptual activities. Why can't we treat them like any other perceptual activity?Metaphysician Undercover

    You meant conceptual activities really, didn't you? :)

    Or at least some of us read books and listen to people talk to gain access to the group-mind. It kind of defines the line between crackpot and scholar.