Comments

  • Why shouldn't a cause happen after the event?
    Yes. QM means no local realism. As a matter of interpretation, you can then explain that in various ways.

    On circularity, there is obviously plenty of speculation about wormholes and what they would do to causality - https://www.iflscience.com/physics/wormholes-could-solve-a-key-problem-with-quantum-mechanics/
  • Why shouldn't a cause happen after the event?
    In your own little world on this one.
  • Why shouldn't a cause happen after the event?
    As far as I know, retrocausality isn't "possible".TheMadFool

    Look up Cramer's transactional interpretation or the Wheeler/Feynman absorber theory.

    But I tried to make clear that I am talking about retrocausality only in terms of backwards-acting constraints on probabilities. The future can determine the past to the extent that future experimental choices will limit the statistics of some past process. So the future doesn't produce the event in a determining fashion. It just affects the shape of the probability that existed back then.

    The classic experiment is the quantum eraser. I can decide whether to measure an event as either a single slit or two slit situation. And even after the particle has passed through the slits - by a normal temporal view - I can make that decision and still see either kind of statistics.
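    To make the two kinds of statistics concrete, here is a minimal numerical sketch (my own illustration, not a model of the actual delayed-choice apparatus - the amplitudes and phase are placeholder values) of the difference the choice makes: with no which-path information the two paths add coherently and fringes appear, while marking the path gives the flat sum of two single-slit patterns.

```python
import numpy as np

# Minimal sketch of "two-slit" versus "which-path known" statistics.
# Detector positions, amplitudes and phase model are placeholders,
# chosen only to show the shape of the two distributions.

x = np.linspace(-5, 5, 11)        # detector positions (arbitrary units)
phase = 2.0 * x                   # relative phase between the two paths at each position
amp1, amp2 = 1.0, 1.0             # equal amplitudes from the two slits

coherent = np.abs(amp1 + amp2 * np.exp(1j * phase)) ** 2   # paths unmarked: interference fringes
incoherent = amp1 ** 2 + amp2 ** 2                          # path marked: flat sum, no fringes

for xi, c in zip(x, coherent):
    print(f"x = {xi:5.1f}   unmarked = {c:4.2f}   marked = {incoherent:.2f}")
```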

    So normal causality says that is impossible. The difference couldn't be imposed on nature after the fact. But in quantum theory, it is routine. Systems can be entangled across time.

    The data revealed the existence of quantum correlations between ‘temporally nonlocal’ photons 1 and 4. That is, entanglement can occur across two quantum systems that never coexisted.

    What on Earth can this mean? Prima facie, it seems as troubling as saying that the polarity of starlight in the far-distant past – say, greater than twice Earth’s lifetime – nevertheless influenced the polarity of starlight falling through your amateur telescope this winter. Even more bizarrely: maybe it implies that the measurements carried out by your eye upon starlight falling through your telescope this winter somehow dictated the polarity of photons more than 9 billion years old.

    https://aeon.co/ideas/you-thought-quantum-mechanics-was-weird-check-out-entangled-time
  • Why shouldn't a cause happen after the event?
    To conclude that quantum mechanics operates on a more fundamental level is very questionable. It becomes apparent that that is gravely mistaken if and/or when we continue on to say that randomness is fundamental in it's relationship with causality.creativesoul

    It is a witnessed fact that the quantum account beats the classical one in terms of its predictive accuracy. I think it is only you who finds it questionable to treat it as the more foundational account.

    Though if you followed my own position, I am indeed arguing it isn't "foundational" in the conventional sense. It is indeed a less constrained picture of reality. My ontology is boot-strapping. So I am taking the conversation in quite a different direction there.

    Then as to randomness, again a boot-strapping metaphysics expects a stable ontology to arise out of dynamical contrasts. So it is not that randomness is fundamental. Randomness is simply the dialectical complement to its "other" of deterministic constraint. You have two polar tendencies which together give rise to the third emergent thing of a structured reality - one which has the stability of a statistical system.

    It is only because constraint is a thing that freedom is also a thing. So the more constrained a system, the more definite or fixed its freedoms. You can count the probability of a coin toss because you know that the coin can only either land heads or tails. Flip a quantum coin (or more accurately, a pair of them - the equivalent of two particles with spins) and the statistics are different because there is a loss of information due to the entanglement of the outcomes.

    So perfect randomness can't exist by itself. It needs a matching degree of absolute constraint to define it as being a counterfactually definite possibility. If there are only two answers on offer - heads and tails - then a game of perfect chance becomes possible.
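    As a concrete illustration of that contrast in statistics, here is a small sketch (my own, in Python; the singlet-state formula is standard textbook QM) comparing two independent classical coins with a maximally entangled spin pair. Each quantum outcome is individually as random as a coin toss, but the joint constraint of the entanglement fixes how often the two results agree as a function of the measurement angle.

```python
import numpy as np

# Classical coins versus an entangled singlet pair.
rng = np.random.default_rng(0)
n = 100_000

# Two independent fair coins: agreement is ~50% no matter what.
a = rng.integers(0, 2, n)
b = rng.integers(0, 2, n)
print("classical P(agree) ~", np.mean(a == b))

# Singlet pair: textbook QM gives P(agree) = sin^2(theta/2) for spin
# measurements along axes separated by angle theta. Each single outcome
# stays perfectly random; only the correlation is constrained.
for theta in (0.0, np.pi / 3, np.pi / 2, np.pi):
    p_agree = np.sin(theta / 2) ** 2
    sampled = rng.random(n) < p_agree              # sample the predicted statistics
    print(f"singlet theta = {theta:.2f}  P(agree) = {np.mean(sampled):.3f} (theory {p_agree:.3f})")
```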

    If it were the case that randomness is more fundamental then we would need to ignore overwhelming fractal evidence to the contrary in order to sincerely assert this. Fractals are patterns. Pattern cannot happen without sequences of events. Sequence cannot happen without predictable time increments.creativesoul

    Fractals are a bad example for supporting your case because they in fact show that behind ordinary "classical" probability spaces - the kind described by a Gaussian bell curve - lie the less constrained probability spaces of scale-free systems.

    It is just like how QM was found hiding behind classical physics, and imaginary numbers behind real numbers. If you relax a major dimensional constraint, you still get organised structure - and indeed a mathematical structure that does a better job of accounting for nature "in the raw".

    So fractals are the mathematical story of many natural random processes - especially dissipative thermodynamical ones, such as river branching and coastline erosion - because the spatiotemporal scale drops out of the physical picture as a constraint on the expression of randomness or spontaneity.

    Deterministic chaos and fractals were a big deal because they revealed that chaotic nature was in fact predictably random even though any constraints were as minimal as could be imagined. So they speak to nature that has the simplest possible balance of chance and necessity. Gaussian systems are by contrast far more artificial in being overly-constrained (by the Universe's own foundational standards).
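    To see what "predictably random under minimal constraint" can look like in miniature, here is a small sketch (my own illustration; the logistic map is the standard textbook example of deterministic chaos, not something from the thread) of a one-line deterministic rule whose trajectories are statistically noise-like and exquisitely sensitive to initial conditions.

```python
import numpy as np

# Deterministic chaos in one line: the logistic map x -> r*x*(1-x).
# Two starts differing by 1e-10 soon diverge completely, even though
# every step is fully determined by the rule.

def logistic_orbit(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

orbit_a = logistic_orbit(0.2)
orbit_b = logistic_orbit(0.2 + 1e-10)   # near-identical initial condition

print("step   |x_a - x_b|")
for t in (0, 10, 20, 30, 40, 50):
    print(f"{t:4d}   {abs(orbit_a[t] - orbit_b[t]):.6f}")
```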

    Pure randomness has no predictable sequence. Randomness falls well short of the mark. It cannot produce what we do witness.creativesoul

    Pure randomness is pure vagueness. There couldn't even be a sequence to judge.

    As I say, chaos theory was a big deal as it gave a mathematical understanding of what a minimal state of constraint looks like, and thus what a maximal state of randomness looks like. You had to have both to have either. Each becomes the effect of the other's cause.

    It is this contextual mutuality that is a big part of the conceptual shift towards a holistic ontology here. QM showed that we have to take complementarity seriously. Chaos theory said the same thing.

    Here's my take...

    When observable entities are smaller than a planck length and the act of observing them includes shining light on them then the observation itself begins a causal chain of events as a result of the mass of the photon influencing the path(location) and movement speed(acceleration) of the subatomic particle being observed.
    creativesoul

    That's one familiar pop science explanation. But why does the Planck scale create a sharp cut-off long before location or momentum are driven towards infinity?

    Sure, the maths says things start to bend off sideways as you approach the Planck limit. Your effort to measure a system becomes so strenuous that at some point it produces such an energy density that the whole region of spacetime is going to collapse into a black hole.

    But that is long before you approach infinite efforts. So you haven't actually explained anything about the causality of what is going on. You don't have the kind of holistic/contextual story that quantum gravity is seeking to establish.
  • Why shouldn't a cause happen after the event?
    The results were witnessed.creativesoul

    I'm still none the clearer about the distinction you wish to uphold.

    What you said was....

    We did not arrive at causality by virtue of inventing and/or imagining it. We arrived at causality by virtue of witnessing it happen... over and over and over again...creativesoul

    And my reply is that we did invent a classical model of causality. And now a quantum model would challenge its predictions. We expect to witness a different statistics. And indeed we do, time and again.

    I take it for granted that inventing a model and testing that model are two aspects of the one intellectual enterprise.

    And then from the point of view of the scientifically-informed philosopher, one would be dubious about any "commonsense" claims that we instead just look out and see the world as it actually is. Any such folk theory of causality is only going to be an unthinking acceptance of the "evidence" of a history of classical physics and the logical tropes it has fostered.

    So what are you trying to say? That our belief in classical causality is just some kind of direct "witnessed" knowledge and not instead a socially constructed belief (albeit a belief that "really works")?

    Or do you have a different point? I can't follow what you might want to say.
  • Why shouldn't a cause happen after the event?
    Continuing a bit, I take the view that existence, and thus causality, is fundamentally probabilistic. Atomism is emergent. And we have two formal statistical models - the classical and the quantum - that capture that fact.

    An irony is that Boltzmann settled the argument in favour of atomism by establishing a statistical mechanics view of reality. His famous dictum was “If you can heat it, it has microstructure.”

    The equipartition law says there is a direct link between macroscopic and microscopic physics: if you know the total thermal energy of a body - its temperature - you can calculate the number of microscopic degrees of freedom it must contain. Hence Avogadro's constant.
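    As a back-of-envelope sketch of that counting argument (my own numbers, standard textbook constants, assuming a monatomic ideal gas with three translational degrees of freedom per atom): each degree of freedom holds (1/2)k_B T of thermal energy, so a measured molar heat capacity lets you count how many degrees of freedom a mole must contain.

```python
# Equipartition head-count: C_v = (f/2) * N * k_B, so N = C_v / ((f/2) * k_B).
k_B = 1.380649e-23      # Boltzmann constant, J/K
C_v_molar = 12.47       # molar heat capacity of a monatomic ideal gas, J/(mol K), ~ (3/2)R
f = 3                   # translational degrees of freedom per atom

N_per_mole = C_v_molar / ((f / 2) * k_B)
print(f"microscopic degrees of freedom per mole ~ {N_per_mole:.3e}")   # ~6.02e23, Avogadro's number
```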

    So atomism was "proved" by spacetime having a well-behaved statistics. A given volume could contain a given number of degrees of freedom. And then - the ontological leap of faith - by observational degrees of freedom, we would be talking about actual definite particles ... as that is what our causal interpretation most naturally would want to assume.

    But who in particle physics believes in "actual particles" anymore? What we actually know to exist is the statistical formalism that describes the prototypically classical situation. We have equations that cough out results in terms of countable microstates or degrees of freedom.

    So the classical picture and the quantum picture are pretty much aligned on that score. They boil down to the kind of statistics to expect given a physical system with certain global or macro constraints on local possibilities. Going beyond the statistics to talk about "actual particles" - conventional atomism - is a reach.

    So in this way, quantum weirdness should cause us to go back and revisit the classical tale. Classical thermodynamics had already created an approach where atoms were modelled as the limit of states of constraint. The basic degrees of freedom of a system - the very "stuff" it was supposed to be constructed from - were emergent.

    And getting back to the quantum level of the story, Thanu Padmanabhan is pursuing this way of thinking as a way to understand dark energy and spacetime geometry -
    http://nautil.us/issue/53/monsters/the-universe-began-with-a-big-melt-not-a-big-bang

    So Boltzmann's argument - if it can be heated, it has "atoms" - can be used to impute a quantumly grainy structure to spacetime itself.

    But it is not that spacetime is actually composed of fundamental causal particles. Instead, it is the reverse story that regular spatiotemporal causal structure has a smallest limit. There is not enough contextuality to continue to imprint its regularity on events once you arrive at the Planck scale. You are foiled by all directions turning symmetric at that point - principally in the sense that there is no thermal temporal direction in which events can move by dissipating their localised heat.

    So again, what we read off our successful statistical descriptions is the literal existence of hard little atomistic parts. Our conventional notions of causality encourage that. Possibility itself is understood atomistically - which is what makes an added degree of quantum uncertainty rather a mystery when it starts to manifest ... and eventually completely erases any definite atoms by turning everything in sight vanilla symmetric. A quark-gluon fluid or whatever describes a primal state of material being.

    But we can turn it around so that atoms are always emergent. And classical atoms reflect another step towards maximal counterfactual constraint - one that takes a step beyond a looser quantum level of constraint, but then even a quantum level is still pretty constrained.

    It is exactly the story with algebras. Normal classical number systems operate as points on a 1D line. Quantum number systems operate in the one-step more complex/less constrained realm of the 2D imaginary numbers. Yet there are further algebras beyond - the 4D quaternions and 8D octonions, and then eventually right off into the barely constrained structures of the even higher dimensional exceptionals.

    So classical counting uses fundamental particles - 0D points on 1D lines - the emergent limit case if you were constraining the freedom of the act of counting. But then quantum counting leaves you chasing your number around a 2D plane, which winds up behaving like an added rotation. When it comes to actual particles - like an electron - you have to in some sense count its spin twice to arrive at its spin number. To fix its state with classical counterfactual definiteness, you have to add back an extra constraint that eliminates the extra quantum degree of freedom it has from "inhabiting" a larger background space of probability.
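    For the "count its spin twice" point, here is a tiny sketch (my own illustration, using the standard spin-1/2 rotation operator exp(-i*theta*sigma_z/2)) showing that a 360-degree rotation flips the sign of the state and only a 720-degree rotation brings it back - the extra quantum degree of freedom alluded to above.

```python
import numpy as np

# Spin-1/2 needs a double turn: exp(-i*theta*sigma_z/2) equals -I at theta = 2*pi
# and returns to +I only at theta = 4*pi.

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation(theta):
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sigma_z

spin_up = np.array([1, 0], dtype=complex)
for turns in (1, 2):
    rotated = rotation(turns * 2 * np.pi) @ spin_up
    print(f"{turns} full turn(s):", np.round(rotated, 6))
# 1 turn  -> state picks up a minus sign
# 2 turns -> back to where it started
```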

    Everywhere you look in modern fundamental physics, this is what you find. Classicality is emergent - where you arrive at the end of a trail of increasing constraint on free possibility. So causality needs to be understood now in these same terms.

    And when it comes to quantum mechanics, it isn't even really that "weird" as it is already way more constrained in its dimensionality than the more unconstrained dimensional systems that could lie beyond it in "algebra-space". Quantum mechanics just has ordinary classical time baked into it at a background axiomatic level. That is why it is possible to calculate deterministic wavefunction statistics for any given initial conditions. A definite basis has been assumed to get the modelling started.

    But to move beyond QM, to get to quantum gravity, it seems clear that time itself must become an output of the model, not an input. And if you give up time as being fundamental, if you presume it to be merely the emergent limit, then of course conventional notions of causality are dead - except as useful macroscopic statistical descriptions of nature.
  • Why shouldn't a cause happen after the event?
    Unpredictability doesn't imply a violation of causality. Without knowledge or control of the underlying physical causes coin flips are also unpredictable.Andrew M

    Right. So what I am arguing is that there are two models of causality here - the conventional atomistic/mechanical one, and a holistic constraints-based one. And there doesn't have to be a metaphysical-strength "violation" if the mechanical story is understood as the emergent limit of the underlying holistic constraints story.

    In a nutshell, all events are the constraint on some space of probabilities. An "observation" is some set of constraints that restricts outcomes to a fairly definite and counterfactual result. So contextuality rules. And you can have relatively loosely constrained states - like entangled ones - or very tightly constrained ones, such as when the whole course of events is being closely "watched".

    Atomistic causality presumes that everything is counterfactually definite from the get-go. Any uncertainty is epistemic. As with a coin flip, it is because you toss the coin without watching closely that you don't see the micro-deterministic story of how it rotates and eventually lands.

    But a holistic causality says uncertainty or indeterminacy is the ontological ground zero. Then it is the degree to which a process is "watched" - contextually constrained by a decohering thermal environment - that places restrictions on that uncertainty. Effectively, in a cold and expanded spacetime, there is such a heavy weight of context that there is pretty much zero scope for quantum uncertainty. It all gets squished out of the system in practice and classical causal sequence rules.

    So there is no violation of the classical picture from taking the holistic route. It simply says that the classical picture was never fundamental, only ever emergent.

    Conceptually, that is a big shift though. It means that cause and effect are entangled in root fashion. When we come to talking about time as being a universal direction for change, a passage from past to future, we are talking about the emergent thermal view. The effective bulk condition. On the quantum microscale, past and future are "talking" to each other in a nonlocal fashion. Decisions an experimenter might make about which constraints to impose on the evolution of an event a million years in the future will then "act backwards" to restrict the possibilities as they looked to have taken shape a million years ago in the past.

    Of course, respecting relativity, this retrocausal impact of constraints on probabilities can't be used to actually do any causal signalling. Time - as an emergent bulk property - does have a conventional causal structure in that sense. But it is a property that is emergent, not fundamental. That is the "violation" of conventional ontology.

    The Schrodinger equation is deterministic and so, in principle, can predict when a particular neutron will decay.Andrew M

    It is only deterministic because some definite constraints have been put in place to limit some set of probabilities. The big problem for conventional causality is that the constraints can be imposed at some distant date in the far future, as with a quantum eraser scenario - while also having to be within the lightcone of those "initial conditions". (So the lightcone structure is itself another highly generalised constraint condition on all "eventing" - causality is never some wild free-for-all.)

    Another quantum result is the quantum Zeno effect. Just like a watched pot never boils, continually checking to see if a particle has decayed is going to stop it from decaying. Observation becomes a constraint on its usual freedom.

    This is another "weirdness" from the point of view of causality. But it illustrates my key point. Neutrons that are left alone exhibit one extreme of possibility - completely free and "uncaused" decay. And the same neutron, if constantly monitored, will exhibit the opposite kind of statistics. Now it can't decay because it is no longer free to be spontaneous. It is being held in place as it is by a context of observation.

    So a mechanical view of causality presumes an ontology of separability. The OP experiment's demonstration of indefinite causal order shows that causal non-separability is a more fundamental physical condition. It is direct evidence for quantum holism. Spontaneity rules, but counterfactuality is what emerges, as an environment of constraints gets built up.

    Quantum computing is bringing the issue into focus. Ordinary causality can be described in terms of familiar logic circuits. There everything is strictly determined to follow a "normal" causal sequence. But quantum computing is now developing the kind of process matrix formalism which this latest experiment illustrates. If you relax the constraints, allow paths to be logically entangled, then you get the kind of causal indeterminism reported.
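    For anyone who wants the process-matrix story in concrete terms, here is a compact sketch (my own illustration of the textbook "quantum switch" thought experiment, not the OP's actual experiment) in which a control qubit in superposition decides whether the target sees A-then-B or B-then-A. One call to each gate then reveals whether the pair commutes or anticommutes, which no fixed causal ordering can manage.

```python
import numpy as np

# Quantum switch sketch: a control qubit in |+> superposes the two gate orderings.
# If A and B commute the control ends up in |+>; if they anticommute, in |->.

def control_probabilities(A, B):
    psi = np.array([1, 0], dtype=complex)   # target state
    branch0 = B @ A @ psi                   # control |0>: A first, then B
    branch1 = A @ B @ psi                   # control |1>: B first, then A
    p_plus = np.linalg.norm(branch0 + branch1) ** 2 / 4
    p_minus = np.linalg.norm(branch0 - branch1) ** 2 / 4
    return p_plus, p_minus

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

print("X and Z (anticommute):", control_probabilities(X, Z))   # (0.0, 1.0)
print("X and X (commute):    ", control_probabilities(X, X))   # (1.0, 0.0)
```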
  • Imagination, Logical or Illogical?
    Is imagination logical or illogical?BrianW

    Is association logical or illogical?

    So maybe the basis of imagination doesn't crisply fall into either category. Maybe logic itself is a little mad in its demand for exact determinism that maps one informational state on to another with no loss - and thus no creation - of information.
  • Why shouldn't a cause happen after the event?
    We arrived at causality by virtue of witnessing it happen... over and over and over again...creativesoul

    But with quantum mechanics, what is witnessed is violations of this simple classical model of causality "over and over and over again".

    Why did the neutron decay? If its propensity to decay is steadfastly random, any moment being as good as another, then how could you assign a cause to that effect? It is a spontaneous event and so causeless in any specific triggering sense.
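    What "any moment is as good as another" means statistically is that the decay is memoryless, which a quick simulation can illustrate (my own sketch; the ~880 second figure is the standard approximate mean lifetime of a free neutron): the chance of decaying in the next minute is the same for a freshly made neutron as for one that has already survived an hour.

```python
import numpy as np

# Memoryless decay: P(decay in the next minute) does not depend on how long
# the neutron has already survived.

rng = np.random.default_rng(1)
tau = 880.0                                  # approximate neutron mean lifetime, seconds
lifetimes = rng.exponential(tau, 1_000_000)  # simulated decay times

p_fresh = np.mean(lifetimes < 60)            # decays within its first minute
survivors = lifetimes[lifetimes > 3600]      # those that have already lived an hour
p_aged = np.mean(survivors < 3600 + 60)      # decay in the minute after that hour

print(f"P(decay in next 60 s | fresh)      = {p_fresh:.4f}")
print(f"P(decay in next 60 s | 1 hour old) = {p_aged:.4f}")
```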

    And the retrocausality implied by quantum eraser effects is just as big a challenge to classical locality. The decision the experimenter makes in the future becomes a constraint that affects the probabilities taking shape in the past. There is something spooky acting backwards in time - again, not as a triggering cause, but still as a probabilistic constraint on what then is observed to happen.

    Entanglement happens across time as well as space. And the OP-cited experiment is another example of QM challenging any simplistic cause~effect model of events "over and over and over again".

    So sure, causes being followed by their effects is a model we might impose on reality quite successfully at a classical macroscale of observation. But with QM, we are still seeking to find some other way of understanding causality.

    And we already know it must be the more fundamental model, classicality merely being the emergent description.
  • The human animal
    (all the interior life is dependent on something that isn't interior) ... The meaningful also often seems to be essentially illusory.darthbarracuda

    But that is taking the position that to be meaningful, it would have to come from within in some strong sense. And naturalism would instead see the individual as a plastic state of adaptedness. Being is always contextual. And so meaningfulness is what emerges as a functional or adapted relation between a "self" and a "world".

    Meaning is always going to be exterior to the self as the self is what is contextually being constructed by a functional relationship. From that, adaptedness can be presumed. And merely coping or living in some kind of denial - as in terror management theory - would be the pathological state, not the philosophical baseline.
  • The human animal
    Who needs to snap out of it? Me or them?frank

    Maybe there is truth in both views given that we humans are both biologically and culturally evolving animals.

    So yes. The naturalistic lens applies overall. But humans have also gone beyond conventional nature if we are talking about straight biological/ecological level evolutionary games. We have added social and even artifactual levels of developmental outcomes.

    We are socially constructed - which means morality is "real" in some stronger sense. We can do weird things like decide to be vegetarian because we feel there is some more generic principle at stake. And then there is a whole realm of machinery and technology that we are unleashing that may become its own still higher level evolutionary game.

    So naturalism rules. But naturalism is also creatively open-ended. And humans ceased to be merely biological organisms as soon as they developed symbolic speech and opened up all the creative possibilities that entailed.
  • Are we doomed to discuss "free will" and "determinism" forever?
    If agency rather is viewed as a natural (and social) phenomenon that can only be disclosed as intelligible from an empathetic and engaged participatory perspective, then there is nothing problematic in asserting that the will is a power that is being freely exercised by mature and responsible fellow rational agents.Pierre-Normand

    Indeed. We are neither meat machines nor ensouled creations but the third thing of socially-constructed and biologically embodied agents.
  • Are we doomed to discuss "free will" and "determinism" forever?
    So, merely scrubbing dubious notions (such as the purely mental acts of 'volitions') because they are tainted by their theological origins will leave the roots that currently nourish the philosophical confusions on the topics surrounding rational agency and personal responsibility firmly in place.Pierre-Normand

    Yep. Surely it is Newtonian determinism that sustains the now neurological-level debate?

    Science's mechanical view of nature is what has been at issue. Freewill just becomes the most convincing argument against the modern understanding of the mind being a product of machine-like information processes.

    A good dose of Spinoza - superior by far to Aristotle on this issue - would do everyone alot of good.StreetlightX

    Spinoza is pretty irrelevant to dealing with the causal-level issue here. Aristotelian biology sorts it.
  • Are we doomed to discuss "free will" and "determinism" forever?
    Why yes I am aware of the prevalence of third-rate scholarship on the issue....StreetlightX

    You miss the point. Sure, you have the theistic willing agent coming eventually into hard opposition with scientific determinism during the Enlightenment. But Aristotelian metaphysics already took a position that was more complex than this simple dualism.

    Simple material determinism was itself already wrong for Aristotle. He argued for the reality of chance or tychism as well. And then still there had to be the Tertium Quid - the insertion of agency into the story. Which today we would understand in terms of semiotics or embodied modelling relations - the information dimension.

    So there was something to be said way back then. But also the right kind of answer was on offer, if you are charitably inclined.
  • Are we doomed to discuss "free will" and "determinism" forever?
    'Free will' wasn't even a thing until some boofhead Church father decided to make it the cornerstone of his theology.StreetlightX

    Aristotle was the first philosopher to identify the tertium quid beyond chance and necessity as an autonomous agent power.

    Aristotle knew that many of our decisions are quite predictable based on habit and character, but they are no less free nor we less responsible if our character itself and predictable habits were developed freely in the past and are changeable in the future.

    One generation after Aristotle, Epicurus argued that as atoms moved through the void, there were occasions when they would "swerve" from their otherwise determined paths, thus initiating new causal chains. Epicurus argued that these swerves would allow us to be more responsible for our actions, something impossible if every action was deterministically caused. For Epicurus, the occasional interventions of arbitrary gods would be preferable to strict determinism.

    http://www.informationphilosopher.com/freedom/tertium_quid.html
  • Do Concepts and Words Have Essential Meanings?
    Do words have inessential meanings then? Curious how we manage to communicate ideas and concepts with such alacrity.
  • Interpretive epistemology
    So is taking something as absolutely certain the same as believing it to be minimally uncertain?

    The essence of pragmatism is a willingness to act on beliefs without requiring the absolute absence of doubt. You are doing what is reasonable having applied a process of reasoning. It is the scientific method in a nutshell.
  • Interpretive epistemology
    And QM can in turn quantify that actual uncertainty about the battleship’s location to many decimal places.

    So as I said early on, uncertainty is nothing to lose sleep over if you have the kind of knowledge that minimises it.

    It’s not me that is throwing out the important part of what has been said here.
  • Mental Compartmentalization
    His beliefs are simply at odds with the majority of people, and hence his mind is compartmentalized in such a way that it shows when talking about slavery or races.Posty McPostface

    Surely his beliefs are those of the majority with whom he mixes? That's why compartmentalisation hasn't been much needed as a psychic defence.
  • Mental Compartmentalization
    So, you think it's a matter of performative utility to resort to compartmentalizations of concepts or things?Posty McPostface

    Not sure that you got my point. People don't build a city if they only mean to camp the night.

    Was your OP describing someone who had systematically compartmentalised their rationalisations so as to avoid the logical inconsistencies involved? That would take a lot of prior work.

    Or did they camp on the edge of a pleasant stream and wake the next morning to find it had become a swamping flood? Was there never a compartment and only a hurried packing up the prejudices to go enjoy them somewhere else less challenging?

    Folk who don't like their inconsistencies being fingered just tend to check out because they were never trying to defend some larger coherent territory anyway. Being comfortable is the first priority.
  • Do Concepts and Words Have Essential Meanings?
    I dunno, maybe I've missed something but this move of essentializing (it's a real word, fight me) the meaning of some word doesn't seem to really move the debate along at all unless all parties involved already agree on the same meaning.MindForged

    Meanings are too slippery, too inherently viewpoint-dependent, to be concretely defined. So words are just ways to limit the scope of possible understandings to the point where they can be usefully shared.

    To use words properly, you need to be willing to do two things. Accept they do intend to narrow the scope for interpretation to some habitual conceptual essence. And then also show tolerance or charity for the vagueness that must always remain.

    The sharing of a viewpoint or meaning doesn't have to be exact, complete, or exhaustive. Indeed, there is no other choice than to accept a fit that is going to be fuzzy at the edges, varied in its precise boundaries, creatively open in the understandings it still admits.

    So I see meanings like an unruly herd of cats that you can lock up in a room. And maybe the occasional small dog or big rat gets swept up as well. If it works out well enough for some particular purpose, then that's fine.

    Of course, you think technical words need to obey tighter standards. The proper understandings would be those shared by the technical community that employs them.

    And that is completely reasonable. Yet the same combination of tolerant constraint has to apply. It is Quixotic to try to give words completely defined meanings. No definition could exhaust what is essentially the open ended thing of an act of interpretation. All you can do is create some habitual limit to interpretation. And that then includes the other thing of some habitual limit where it is agreed that differences in interpretation will no longer matter.

    The story is rather different once you move up to an actually mathematical level of speaking. Any scientist knows the difference between trying to understand a concept in words versus actually understanding its equations.

    But is one better than the other in a fundamental way?

    I think here it is interesting to point to a contrast. Ordinary language is good at taking the messy physical world and restricting our focus to some conversationally limited aspect. It suppresses all the other possibilities, but does not require their elimination.

    Mathematical speech on the other hand likes to start with a completely empty world and then start to construct a space of reference. So it is not limiting what already exists. It is starting with nothing and constructing whatever there is to be spoken about. It is an axiomatic approach.

    So one is messy and organic. The other is clean and mechanical. I think the greatest advantage is being able to employ both well rather than take either as being the canonical case. They can complement each other, as each has its strengths and weaknesses.

    The problem with the thread you mentioned is where the difference isn't recognised - and furthermore, that the difference might have to be reconciled if maths indeed aspires to talk about real physical things.

    There are lots of people who reason about the world in folk terminology. And then a lot who are trained to reason in technical terminologies. But those technical terminologies inhabit their own constructed worlds, as I say. So there is yet another step to show that the constructions really can say anything complete about the real world when they come to discuss it.

    The technical approach wipes the slate clean so as to build up an understanding as a set of elements. So how does it ever discover that it missed out key possibilities? Ordinary language only sweeps all the mess under a carpet. Eventually you could still stumble across it.

    So you could defend a commonsense notion of infinity, or a technically constructed notion of infinity. But especially for a scientist or philosopher, the fruitful thing would be to allow the two styles of language to play off each other - accept they are in tension for good structural reason. The definiteness of the one can complement the open creativity of the other.

    Having said that, using ordinary language to create shared understandings rather than defend "alternative facts" seems too much to ask of many posters. So I can understand the basic frustration you are expressing. ;)
  • Mental Compartmentalization
    What I'm wondering here, is how does compartmentalization occur, also?Posty McPostface

    Does one build the compartments or does one merely fail to build the generalised coherence?

    The white nationalist would seem to be the standard thing of back-filling a justification for your actions or attitudes by constructing some story. So you inherit a prejudice from your social context and then explain it whatever way you can get away with.

    It is like setting up a small defensive encampment wherever you find yourself with whatever is at hand. Bricolage.

    The tougher thing would be to be completely systematic in your thinking - to assimilate everything to a thought-through universal structure.

    So it is not that people have to construct a lack of coherent connections. They just get away with not having to live life according to a generally coherent philosophical position.
  • Interpretive epistemology
    You use "psychology" or variant six times.tim wood

    Yes, to emphasise that there is a brain involved. So we know experience of the world is indirect in that it involves the kind of cognitive processing that science reveals.

    Pragmatism, as I read your posts, is a model, an explanation. And it works. But at a price. You seem to surrender whatever must be surrendered in favour of pragmatism.tim wood

    You say there is a price to pay. But what exactly? What am I having to surrender? Let's see if it is something I actually would value.

    Are we talking about absolute certainty? If so, surely not to have to worry about perfection is a form of liberation.

    I'm absolutely certain that with respect to certain axioms, that 2+2=4, and more besides. It's all a giant if-then, but within the if-then we can have our certainty.tim wood

    Sure. We can imagine perfect machines that are so constrained in their actions that there are no possible uncertainties in their outcomes. So you can have your absolute certainty about physics-free syntax. If you say the bishop only moves on the diagonal, the bishop only ever moves on the diagonal.

    It is the step between your syntactical reality and your physical reality that becomes problematic. Is nature always so linear that 2+2=4 even as a modelling description of some set of natural events? If you measure a coastline with a ruler, don't you get a different result depending on the size of that ruler?
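    The coastline point can be made with a few lines of arithmetic (my own sketch, using the Koch curve as a stand-in for a fractal coastline): shrink the ruler and the measured length keeps growing by a factor of 4/3, so there is no single ruler-free answer to "how long is it?".

```python
# Measuring a Koch curve with ever smaller rulers: the answer never settles.
length = 1.0
ruler = 1.0
for generation in range(8):
    print(f"ruler = {ruler:.5f}   measured length = {length:.4f}")
    ruler /= 3            # use a ruler a third as long...
    length *= 4 / 3       # ...and you resolve 4 segments where you saw 3 before
```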

    So the realm in which you can claim any absolute knowledge is a highly artificial one - our human inclination to imagine a world of perfect, rule-bound, machines. The lack of uncertainty is precisely what was made axiomatic. It was an input, not an output.

    But would you stick your neck out and say physical reality is itself axiomatically certain? Quantum mechanics tells us it is not. The main axiom of an absolutely deterministic mechanics - the principle of locality, or local realism - has had to be abandoned.
  • On Rationality
    You have the problem that this is reducing rational behaviour to its lowest atomistic common denominator. So you are in fact creating a social system with this as the bias being emphasised.

    Rationality is about a calculus of interests. But in a natural system, like a society, it would be the view seen across some proper spatiotemporal scale. So what can a single person see? Or what even can a society see if it is just thinking of the immediate now and not the next several generations?

    Rationality can only be applied to the scope it is given. And if history doesn't exist on any single spatial or temporal scale, but instead unfolds in a complex adaptive manner across many scales, then that is the kind of "actor" that has to be at the "centre" weighing the possibilities.

    So sure, game theory captures the dynamics of rational interactions. You can figure out the balance between competing and cooperating as the best way to get to some goal.

    But your OP is already swallowing right wing market theory as the efficient information method. That definitely works - but it has to be tied back to the left wing thing of a socialised, institutional view of what matters in the long run.

    Of course, new wave economics is trying to make that shift. There are all sorts of moves, like national happiness indexes and triple bottom-line accounting, that attempt to lift the horizons when it comes to applying market judgements. Neo-liberalism was about stripping the institutional wisdoms out to maximise the accelerative freedoms. Now the collective fate of humanity and the planet need to be back in the picture if we are to call ourselves truly rational creatures.

    So utilitarianism is half the story. Ensuring that it actually is applied over a broad enough sweep of our history to come is the current challenge.

    Economics is not so dumb that it can't understand that. But the problem is that global society has built up such extremes of inequality that self-interest has become polarised. What would be rational for those at the top is rather too divorced from what would work best for those at the bottom.

    The system could stumble along a few more years. But to celebrate it as a state of enlightened self-interest - an epitome of rationality - is a little premature.
  • Interpretive epistemology
    Do you argue that it's the world we live in? Or the created world of reality?tim wood

    My argument is triadic. So it incorporates all three things of the self, its world, and then the world.

    There is no "we" apart from as an emergent aspect of the world we create. Psychologically, sensory experience comes into focus as a felt distinction between what is self and what is other. That attribution is how we arise - within a world we create.

    Then this is going on within the actual world to which it is an embodied response. So there is the world out there, and then the us in here. Except the us in here is part of the sensory model. Stones and hands are examples of the model's meaningful distinctions.

    So of course there really is a real world. But psychologically, what we need to experience is a world with us in it. So the real world is not yellow or blue. But our psychological models make sense when "we" feel we exist within a world of coherent objects. And having colour is a great way to manufacture that object coherence. We can see a banana because of the way it constantly pops out of a cluttered visual landscape.

    The distinction being that if it's reality, then knowledge - interpretations that work in reality - are never quite about the world. That would leave a troublesome gap.tim wood

    But that is the point. The gap is not troublesome but functional. It is the symbolism that frees us from the world as it actually is and thus allows "us" to actually exist.

    Why should a banana look yellow? The yellowness of yellow is such an arbitrary fact when you think about it. Like sweet being sweet, it is the arbitrary nature of qualia that convinces people of the hard problem of consciousness.

    But my approach is saying that the arbitrariness of the symbol is the point. That is why symbolisation can work. Whether we shake hands or kiss cheeks, a friendly greeting is a friendly greeting. What matters is that there is a symbolic gesture to anchor the psychological reality.

    What matters in sensory discrimination is that the brain have a dramatic reaction to what counts in terms of making an ecological difference. So if I want to see fruit in a tree, then I want to see yellow and red as a violent contrast to green, even if - physically speaking, in terms of wavelength - they are only fractionally different in energy frequency. To see red as the diametric opposite of green - which is the way colour channel opponency works - is completely untrue of the world as it actually is, and yet hugely psychologically convenient.

    And that is the kind of freedom from veracity which is at the base of actual "knowledge". You can't symbolise an understanding if you are tied to simply trying to re-present what actually exists. You need to break the physical connection of the world to start forming a semiotic modelling relation with it.

    So the model is always pragmatically about the world - it always has to live in it. But it is always also a model. It is not based on direct truth or faithful recreation. Even at the level of basic perception, it has to be an efficient narration. It is the construction of some interpretable system of signs - of which "we" emerge as the consistent and persisting narrative core.

    I buy Heidegger, in that I think we're already in the world, and that would eliminate the gap.tim wood

    As best as I understand Heidegger, he goes with the fact that experience is for us an umwelt. So it is both a psychological construct and also a lived response to an actual world.

    That sets up the epistemic dilemma. And the solution is not to worry about the gap but to realise that the epistemic cut is how we could even exist. We only arise because interpretation is what it is all about. Once a system develops some habit of interpretance, then there is an "interpreter" in play. The self emerges as a product of the model - the other to the other that is the world.

    But Kant's question as to how I know it is a hammer, with the corollary that I can't know, is still there.tim wood

    Yes. Kant brought the psychological dilemma to the fore. But he left people feeling it was a problem - the bug and not the feature.

    Peirce's does indeed seem to be an account that works and makes sense, but the Kantian question seems still to endure.tim wood

    Again, the question is answered if the knowledge gap is not the bug but the feature. And modern psychology would say the semiotic view puts the Kantian concerns safely to bed.

    Pragmatism is about being able to accept irreducible uncertainty as part of the game - the game being to minimise uncertainty. If you still yearn for absolute knowledge about anything, you are stuck in olden times. :)

    This moves towards a radical (imo) destruction of "knowledge" as a term meaningful in itself, or at least away from any naive idea of knowledge I might have had.

    I cannot rid myself of is the notion of bias in the form of the presuppositions that necessarily are part of the building materials of "interpretance." Or in short, that such is just an obscuring accommodation that happens to work
    tim wood

    So where is your yearning for absolute certainty coming from? Why is relative sureness not enough? Why is the standard human ability to operate on partial and uncertain information not in fact a huge advantage?

    A computer, as a logical machine, is only as good as its certainties. Garbage in, garbage out. But organisms swim freely in shifting uncertain worlds and thrive. Mistakes are how they learn. Knowledge is always provisional.

    Nature has its epistemology. And it is pragmatic.
  • Interpretive epistemology
    The CCP sure would like that.Wayfarer

    The Chinese Communist Party? What are you smoking today?

    Again, you are just wanting to wheel out your standard attack on Scientism. And if a Pragmatist wanders into your sights, you are going to light him up because that's close enough for you.

    (What was I saying about how we construct umwelts to legitimate our habits of action?)
  • Interpretive epistemology
    It all goes back to Kant, IMO.Wayfarer

    Whatever floats your boat.

    But in any case, aside from pragmatism and concern with what works, there's the issue of knowledge of the good, the true, from a perspective other than the pragmatic - something to set the moral compass against.Wayfarer

    Is there? Maybe you just define pragmatism in terms of actual selfishness rather than a collective self-interest. I say that morality represents what works for our collectivised selfhood. And what that would be - in terms of our habits of action - is something we collectively would aspire to know.

    So you are pretending that pragmatism seeks to rule out what it in fact aims to explain.
  • Interpretive epistemology
    The essence of interpretation is creation. All that is created is created within the limits of the creating.tim wood

    What would be key to the Peircean semiotic view I'm expressing is that interpretations actually have to live in the world. So they are not free creations ... in the long run at least. To survive, they must prove themselves useful habits. They must stabilise a working relationship that is then defined as being about a self in a world.

    So the self-interested aspect of the epistemology does not have to collapse back towards idealism. Pragmatism presumes that the modelling relation only exists because it makes sense. The world is out there. And that is how it can be a possibility that a selfhood can develop which is taking its own purpose-laden point of view.

    I know that 2+2 = (is) 4, and that the stone on my desk just is a stone.

    The only way to reconcile this knowledge (that I take as certain) with its essential createdness is to suppose that as knowledge it comes into being - is created - when I think of it.
    tim wood

    Again, Peirce would stress that thinking in some fashion is not a fresh creation of every encounter with the world but instead the development of a reasonable habit. It is an action-oriented view of epistemology. And it is through our minimisation of accidents or mistakes that we move towards the best possible habits of interpretance.

    So yes, every encounter is the chance to make shit up in some random fashion. We need the power to hypothesise to get things started. But mindfulness is a state of established habit. We emerge as a self by building up the steadiness of a habitual point of view.

    So recognising stones as stones is the kind of habit that we develop. Now a "stone" is a concept laden with plenty of self-interest. We can do things with stones that we can't do with marshmallows, pebbles or cats. If I want to smash open an oyster, my mind will leap towards the idea of finding something that is enough like a stone. I won't look at a cat or marshmallow and feel I have the solution to the problem in hand.

    The whole point of pragmatism would be not to collapse epistemology into either of the usual categories of idealism or realism. Both those presume the knowing self just exists. What is in debate is whether the world also just exists as it is experienced.

    Pragmatism instead takes the psychological route of accepting that selves emerge as models of the world. So the world exists - in some concrete sense. And the mind emerges as a collection of interpretive habits. The less appreciated fact that follows is that the mind then exists by virtue of an epistemic cut - its ability to read the world as an umwelt or system of signs. Things do take an idealistic bent by the end as we very much live within our own psychic creation.

    Every "stone" I see is a token of some notion of "stone-hood". If I need to crack open an oyster, I will recognise stone-hood in my wife's golf club. She might then see something very different if she catches me messing around with her precious nine iron.

    Knowledge of arithmetic is then reflective of semiosis taken to a higher level of abstraction. Ordinary language developed to encode a collective social view of the world and hence a collective social conception of human selfhood. We are socially constructed through the habits of speech. We all learn to think of the world in the same essential way when it comes to stones, golf clubs, cats and marshmallows. Words are the way we structure a generalised human relation to the world and so arrive at a generic selfhood shared at a cultural level.

    But as humans, we have moved on to add a mathematico-logical level of semiosis to our sense of selfhood. We invented a language based on numbers - pure generic symbols. So this is a new epistemic game with its own set of rules. Ordinary language is meant to be all about living in the world as a self-interested tribe of humans. Symbolic language is the attempt to step outside that zone of obvious self-interest and talk about the world in a disinterested or objective fashion.

    So it is its own game. It relies on a strict separation of the notions of quality and quantity - the generality of some essence and then the particularity of the consequent acts of measurement. The scientific viewpoint, in short. Once I have a notion of stonehood as "a thing in itself, a quality of the world", then I can start counting individual stones.

    Of course, the essence of a stone is a hard to define thing. But the trick is that a mathematico-logical level of semiosis is based on an active rejection of any personal interest - golf clubs can't count as my desire to crack oysters is clearly "too subjective". Instead, objective knowledge has to be based on the quantification of the most universal kinds of measurable qualities - like size, shape, weight, density, structure, etc. So the right attitude to classify stones as stones is to establish constraints, like a stone has to fall within some band of weight, solidity, size, translucency, or other generalised "physical" properties.

    The obvious idea is that we are giving up our clearly self-interested view of the world to adopt one based on the most abstracted and unselfish possible point of view. Physics can't deny the essential facts of our stone - that it is a fragment of rock, worn reasonably smooth, and of a size that is between a pebble and a boulder. And from the definition of one stone, we can find other stones. Then we can apply the principle of identity and get right into all the arithmetical and logical operations which shift individuated things about in atomistically-deductive patterns.

    So epistemology exists on multiple levels of semiosis. And it is in recognising the self-interest inherent in an epistemic relation with the world that we can in turn construct a formally self-disinterested level of semiosis. Epistemology itself can be extremitised now so that we live with a dramatic contrast between our subjective knowing - as might be expressed through poetry, art, and other cultural forms - and our objective knowing, which is the business of science and maths.

    So we have actually constructed a deep conflict in which there are two paths to true knowledge, it appears. But again, the pragmatist will point out that we, as humans, are still having to give priority to actually having to live in the real world. Both the subjectivist and the objectivist have all their pretty rhetoric about their ways of knowing. Yet both are still bound by the fact that knowing is about acting, and all that results from having acted. So both the objective and the subjective extremes are going to be "found out" in practice.

    The habits that survive that test are the habits that did in some sense work. The selfhood that resulted was one adapted to "its" world. Knowledge wasn't either found or created in the process. But a state of knowing - a state of interpretance - could be observed to persist in a self-sustaining fashion. It did the job.
  • Do you believe there can be an Actual Infinite
    I didn't make any point regarding physical continuity (if space can even be called physical).MindForged

    What did you mean by space being "actually infinite" then?

    From the very beginning is took issue with the OP's assumption that any sort of actual infinity was impossible in virtue of pure logic (because, supposedly, contradictions crop up).MindForged

    The OP might not have been perfectly expressed but it did seem to be arguing from the famous paradoxes that arise from taking the maths "too seriously" as a physicalist.

    Now the usual line from the maths-lover is that the maths got fixed to resolve the problems. And my reply to that is: not so fast buddy. :)
  • Do you believe there can be an Actual Infinite
    That's not true, using an infinity is not the same as a singularity occurring in the theory.MindForged

    Yes. So what I am saying is you really want to be able to build "infinities" into your models, and you really want to avoid getting "infinities" back out.

    They are great if they can be just assumed in background fashion. They are a horror if that is what the calculation returns as its sum.

    But either way, these "infinities" have epistemic status rather than ontic. We realise that as backdrop assumptions, they are strong simplifications. And as calculational outcomes, we are quite within our rights to ignore them and create some kind of work-around.

    Space under relativity is treated as a continuum...MindForged

    That is way too simplistic. Relativity treats spacetime as a pseudo-Riemannian differentiable manifold. As a space, the continuity is about the ability to maintain certain general symmetries rather than any physical continuity as such.

    Black holes and wormholes can punch holes in the fabric - those nasty singularities - and yet still the general co-variance can be preserved with the right set of yo-yoing symmetries to take up the slack.

    So relativity took away the kind of simple spatial infinity presumed under Euclid/Newton and replaced it with something that still worked. Actual continuity was replaced by the virtual continuity of unifying symmetries ... plus now the stabilising extra of physical measurements of local energy densities. A bunch of discrete local values to be added to the model and no longer able to be taken for granted.

    But my point was that we still make assumptions (crucial, necessary ones) regarding the existence of infinity in the world as well (relativity and QM both do so), so the notion of an Actual Infinity isn't off the table.MindForged

    But you are talking about a very classical notion of infinity. And that is clearly off the table so far as modern physics would be concerned.

    As I said early on, the best way to characterise things now is that the interest lies in how classicality emerges. So it is the development of finitude from a more radical indeterminism which becomes the story we want to be able to model.

    To say the Universe is just "actually infinite" is hollow metaphysics - a way to avoid the interesting questions. What came before the Big Bang? Where does the Cosmos end? You seem to want to shrug your shoulders and say everything extends forever. That is what maths would say. So let's just pretend that is the case.

    But questioning these kinds of conventionalised notions of "the infinite" is precisely where current metaphysics needs to start. The answers aren't in. We are only just formulating a clear view of what we need to be asking.
  • Do you believe there can be an Actual Infinite
    As has been said a few times, several very solid theories make assumptions that include infinity.MindForged

    And you have been reminded a few times that these solid theories in fact depend on working around the infinities they might otherwise produce. So it ain't as simple as you are suggesting.

    The way to understand this is that modelling seeks the simplest metaphysical backdrop it can get away with. So it is a convenience to treat flatness, extension, coherence, or whatever, as "infinite" properties of a system. If you can just take the limit on some property, it becomes a parameter or a dimension - a basic degree of freedom that simply exists for the system. You don't have to model it as a variable. It is part of the ontic furniture.

    So it is for good epistemic reason that physical models appear to believe quite readily in the infinite. If you are going to have a line that extends, it might as well be allowed to extend forever without further question. That way it drops out of the bit of the world that needs to be measured and becomes part of the world that is presumed. As a degree of freedom, it is fundamental.

    But the history of physics is all about questioning the fixity of any physical degree of freedom. Everything has wound up being contextual and statistical. Newton said space and time were flat and infinitely extended. Einstein said spacetime instead has no fixed curvature or topology. You had to plug in energy density measurements at enough points to get some predictable picture of how it would in fact curve and connect. Newtonian infinity would then emerge as a special case - an exceptional balance point that is in fact impossibly unstable. Some kind of further kluge, like a cosmological constant, would be needed to give a gravitating manifold any actual long-term extension at all.

    So if we look at the actual physics, it does seek the "infinities" or taken-for-granted degrees of freedom which can become the "eternal" backdrop of a mechanical description. You've got to find something fixed to anchor your calculational apparatus to. So for good epistemic reasons, it seems that physics is targeting the continuous, the unboundedly extensible, the forever the same.

    But does it believe in them? Does it take them literally? Does it say they are metaphysically actual?

    By now, that would be a very naive ontology indeed. All the evidence says that nothing is actually fixed. It all just merely hangs together in a self-sustaining structured fashion.

    The mathematical notion of infinity is a very misleading one to apply in a physical context these days. The Euclid/Newton paradigm is old hat. Even in maths, geometry has become deconstructed as topology. Space is flat, lines are straight and change is linear only in the extreme case of a maximal constraint on the possible degrees of freedom. Instead of being fundamental, the perfect regularity and simplicity of a classical geometry is the most exceptional case. It requires a lot of explanation in terms of what removes all the possible curvature, divergence, and other non-linearities.
  • Do you believe there can be an Actual Infinite
    ...imagine as you get closer to the edge of the universe time slows down and right at the edge time stops. So it’s impossible to poke a spear through the edge of the universe because there is no space time in which to poke the spear.Devans99

    Continuing on the "resolution limit" approach now being taken, this would be modelled relativistically in terms of holographic event horizons. So you could imagine "poking your spear" into the event horizon surrounding a black hole, or across the event horizon that bounds a de Sitter spacetime.

    In a rough manner of speaking, your spear would suffer time dilation as you jabbed it into the black hole. It would start to take forever to get anywhere.
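
    To make that "takes forever" precise, the textbook Schwarzschild time-dilation factor is the relevant relation (a standard result, not anything special to the spear picture):

    ```latex
    \frac{d\tau}{dt} = \sqrt{1 - \frac{r_s}{r}}, \qquad r_s = \frac{2GM}{c^2}
    ```

    As the spear tip approaches the horizon at r = r_s, the factor goes to zero, so the distant observer assigns an unbounded coordinate time to that last stretch.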

    Or if you poked it across the event horizon that marks the edge of the visible universe, then it would disappear into the supraluminal realm that exists beyond.

    So relativity itself already tells us that there is a radical loss of the usual classical observables when we arrive at the "edge" as defined by the Planck constants of nature. There is a fundamental grain of being, a grain of sharp resolution, which the constants define. If we try to push beyond that, the customary classical definiteness of things begins to break down in ways the theory predicts. The distinctions that seemed fundamental dissolve away.
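
    To put rough numbers on that grain, here is a minimal sketch of how the three constants combine into the Planck scales. The constant values are standard SI figures hard-coded for illustration; everything in the snippet is my own assumption rather than anything from the thread:

    ```python
    import math

    # Illustrative SI values (rounded); assumptions for this sketch only.
    c = 2.998e8        # speed of light, m/s
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    hbar = 1.055e-34   # reduced Planck constant (quantum of action), J s

    # The three constants jointly fix a smallest meaningful length and time,
    # plus a characteristic mass.
    planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
    planck_time = math.sqrt(hbar * G / c**5)     # ~5.4e-44 s
    planck_mass = math.sqrt(hbar * c / G)        # ~2.2e-8 kg

    print(f"Planck length: {planck_length:.1e} m")
    print(f"Planck time:   {planck_time:.1e} s")
    print(f"Planck mass:   {planck_mass:.1e} kg")
    ```

    The point of the sketch is just that the grain is fixed jointly by the triad of constants; no one of them sets it alone.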

    The conventional way of thinking about spacetime is that it must exist in some solid and substantial fashion. It is just there. So the metaphysical issue becomes how can a backdrop begin and end? By definition, a backdrop just is always there ... everywhere. So spacetime simply has to extend infinitely to meet the criteria.

    But the emergent view turns this around. Spacetime as a definite backdrop becomes an emergent region of high coherence. And being bounded or finite is the kind of organisation that has to get imposed to create such a state of being. You need some concrete limit - like the speed of light, the strength of gravity, the fundamental quantum of action - to structure a world. The triad of Planck constants are the restrictions that together form up a Universe with a holographic organisation and a Big Bang tale of development.

    The Universe is essentially a phase transition. Like water cooling and crystallising, it has fallen into a more orderly, lower energy state. What changes things is not the magical creation of something new - like ice - but the emergence of further constraints that limit the system's freedoms. A solid is a liquid with extra restrictions, just as a liquid is a gas with emergent constraints.

    So what lies "beyond" any part of a universe is not simply more of the same. Nor is it something completely different. Instead, the distinction is one of resolving power. If the classical world is about a crystalline coherence, then beyond the edges of any patch of the coherent is simply ... the start of the incoherence.

    Crossing an event horizon is just that. It is imagining how things break down now that they are no longer integrated in the usual communicative fashion. Approach the edge and everything just dissolves towards a radical indeterminacy. What seemed definitely one thing or another becomes blurred and confused - a question no longer properly answerable.

    It is just like the edge of a cloud. At some point the fabric frays and it is not clear whether it is still largely cloud or now mostly sky. To argue that there has to be a definite answer - as in arguing about whether things are fundamentally discrete or continuous, finite or infinite - is to miss the point. That kind of constrained counterfactuality is the state that must emerge. It is the outcome and not the origin.
  • Do you believe there can be an Actual Infinite
    Physicists can give a very different answer to the binary question of whether spacetime is "fundamentally discrete" or "fundamentally continuous". They would say that quantum theory argues that it is neither. At base, it is vague or ambiguous. And then the classical binary distinction of discrete vs continuous is what emerges due to sufficient stabilising contextuality. You get a division into distinct events happening within a connected backdrop once a quantum foam has expanded and cooled enough for that to be the case.

    For example:

    While almost all approaches to quantum gravity bring in a minimal length one way or the other, not all approaches do so by means of “discretization”—that is, by “chunking” space and time. In some theories of quantum gravity, the minimal length emerges from a “resolution limit,” without the need of discreteness. Think of studying samples with a microscope, for example. Magnify too much, and you encounter a resolution-limit beyond which images remain blurry. And if you zoom into a digital photo, you eventually see single pixels: further zooming will not reveal any more detail. In both cases there is a limit to resolution, but only in the latter case is it due to discretization.

    In these examples the limits could be overcome with better imaging technology; they are not fundamental. But a resolution-limit due to quantum behavior of space-time would be fundamental. It could not be overcome with better technology.

    http://www.pbs.org/wgbh/nova/blogs/physics/2015/10/are-space-and-time-discrete-or-continuous/

    So the key shift in metaphysical intuition is to see reality as wholly emergent from raw potential. And that then means the infinite is always relative.

    The classical way of looking at it is that either the discrete is the fundamental - you start with some atomistic part and then are free to construct endlessly by the addition of parts - or the continuous has to be fundamental. You would start with an unbroken extent that you could then freely sub-divide into an unlimited set of parts.

    Note the presumption. It is all about a mechanical act, a degree of freedom, that can proceed forever without constraint. If you have a unit to get you started, there is nothing stopping you adding more units to infinity. Or if you have a line you can slice, there is nothing stopping you slicing it finer forever.

    It is a wonderfully simple vision of nature. But it is way too simple to match the material reality. So no matter how wonderfully maths elaborates on this naive constructionist ontology, we already know that it is too simplistic to be actually true.

    The alternative view is that individuation or finitude is context dependent. It is a resolution issue. Both the continuous backdrop and broken foreground swim into definiteness together. The more definite the one grows, the more sharply defined becomes the other.

    So it is like counting clouds in the sky, beginning with a thin mist. While everything is just a generalised mist, it is neither one thing nor the other - neither figure nor ground, object nor backdrop. It is sort of sky, sort of cloud, but in a completely unresolved and ambiguous fashion.

    Then the mist starts to divide and organise. It gets patchy. You start to have bits that are more definitely actual cloud, other bits that are actual sky. Keep going and eventually you have some classically definite separation. There is a nice tight fluffy white cloud that sticks out like a sore thumb against an empty blue background. The finitude and discreteness of the cloud emphasises the infinity and continuity of a sky that now goes on forever. You arrive at a state of high contrast. And it is difficult to believe that it could ever be any other way.

    Of course, physicists now know just how much of an idealisation this is. They even have the maths to model the actuality in terms of fractals. Real life cloud formations better fit a model which directly encodes the fact that individuation is a balance of a tendency towards discreteness and a tendency towards continuity. The holism of material systems means they have equilibrium properties, like viscosity.

    So in the connected world of a weather system, clouds are generally bunched or dispersed according to some generalised ratio. They never were these classical objects with definite edges marking them off from the continuous void that surrounds them. All along, they were just a watery transition zone with a fractal balance and hence a fractal distribution in space and time. If you want to model the actual world of the cloud, you have to accept that this grave-sounding metaphysical question - is the cloud discrete or continuous? - is pretty bogus.

    The actuality is that cloudiness is a propensity being expressed to some degree of definiteness. It can be in a state of high resolution, or low resolution, but it is always in some state of resolution - a balance between two complementary extremes. We imagine a reality that is polarised as either sky or cloud. Everything would have to be one or the other. Yet now even the maths has advanced to the point that we can usefully model a reality which is always actually in some fractional balance, always suspended between its absolute limits.
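
    If it helps to see what a fractional balance can mean operationally, here is a minimal sketch of a box-counting estimate on a toy cloud field. The correlated-noise "humidity" field and the threshold are my own illustrative assumptions (numpy and scipy assumed available), not a real atmospheric model; the point is only that the cloud/sky edge gets a non-integer dimension rather than a classically sharp one:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Toy "humidity" field: correlated noise, thresholded into cloud vs sky.
    rng = np.random.default_rng(0)
    field = gaussian_filter(rng.normal(size=(512, 512)), sigma=4)
    cloud = field > 0.0

    # The cloud/sky boundary: cells whose neighbour (down or right) differs.
    edge = (cloud != np.roll(cloud, 1, axis=0)) | (cloud != np.roll(cloud, 1, axis=1))

    # Box counting: how many boxes of side s contain any boundary at all?
    sizes = [2, 4, 8, 16, 32, 64]
    counts = []
    for s in sizes:
        blocks = edge.reshape(512 // s, s, 512 // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())

    # Slope of log N against log(1/s) estimates the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    print(f"estimated boundary dimension: {slope:.2f}")
    ```

    The fitted slope lands somewhere between 1 (a smooth curve) and 2 (a space-filling scribble), which is the quantitative sense in which the cloud edge is neither a discrete contour nor a continuous fill, but a balance of the two tendencies.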

    The next step for fundamental physics is to apply that holistic metaphysics to our notions of spacetime themselves. And that is certainly what a lot of quantum gravity theories are about. The traditional classical metaphysical binaries - like discrete vs continuous and finite vs infinite - lose their power as it is realised that they are the emergent limits and not the fundamental starting options. Instead, where things begin is with simple vagueness or indeterminism. You have a quantum foam or some other new model of a world before it gains any definite organisation via the familiar classical polarities.
  • Do you believe there can be an Actual Infinite
    But if one removed all physical mass and energy, both the visible and dark, wouldn't empty space simply be infinite vacuum?InfiniteZero

    No. An empty space is simply a matter field in its lowest possible energy state. This is now a central fact of cosmological thinking. It is what the holographic universe is all about.

    So an empty space is still full of the black body quantum radiation that is "generated" by its own event horizons. The universe at its heat death would still radiate internally with a Planck-scale jitter - a photon gas. The photons would be as cold as possible - within Planck reach of zero degrees - and so have wavelengths about the size of the visible universe. So about 32 billion lightyears in length. Unbelievably weak. Yet spacetime would always have this ineradicable material content there as part of what it is.
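
    As a rough order-of-magnitude check on that claim, here is a minimal sketch assuming the Gibbons-Hawking temperature of a de Sitter horizon, T = hbar*H / (2*pi*k_B), and a present-day Hubble rate. The particular constant values are my own assumptions for illustration:

    ```python
    import math

    # Illustrative constants (SI); order-of-magnitude only.
    hbar = 1.055e-34     # J s
    k_B = 1.381e-23      # J/K
    c = 2.998e8          # m/s
    H0 = 2.2e-18         # s^-1, roughly today's Hubble rate (assumed)
    ly = 9.461e15        # metres per lightyear

    # Gibbons-Hawking temperature of the de Sitter horizon
    T = hbar * H0 / (2 * math.pi * k_B)     # of order 1e-30 K

    # Wien's law gives the peak thermal wavelength at that temperature
    wien_b = 2.898e-3                       # m K
    peak_wavelength = wien_b / T            # metres

    hubble_radius = c / H0

    print(f"horizon temperature: {T:.1e} K")
    print(f"peak wavelength:     {peak_wavelength / ly / 1e9:.0f} billion lightyears")
    print(f"Hubble radius:       {hubble_radius / ly / 1e9:.0f} billion lightyears")
    ```

    The peak thermal wavelength comes out horizon-scale - within an order of magnitude of the Hubble radius - which is the sense in which the residual photon gas "fills" the very spacetime that generates it.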

    Of course, mathematically you could imagine actually empty spaces. Maths does that routinely. In fact it is the basis of how it goes about the job of conceiving of spaces - as devoid of material content.

    But physics tells us that spacetime and energy content are joined at the hip. Matter tells spacetime how to curve and spacetime tells matter how to move, as Wheeler famously put it. They are two faces of the one reality.

    And so the job for maths is to catch up with reality if it can. At the moment, the existence of this connection is one of the kluges that have to be inserted by hand to make the cosmology work as a scientific model. It would be the big advance to make it emerge as a mathematical prediction.

    Why is there this Planck-scale cut-off that prevents the universe from either being infinitely energy dense (as the quantum corrections to any material particle say it should be) or, alternatively, completely empty, as would be the case if the quantum jitter of spacetime itself only had a zero or infinitesimal contribution to make?
  • Do you believe there can be an Actual Infinite
    We take what mathematicians and logicians say seriously when we adopt the formal systems they create. That means that to use such systems we are committing ourselves to a particular kind of metaphysics. If you accept standard mathematics you cannot possibly claim that actual infinities are impossible in virtue of a contradiction. You might say that not every aspect of our particular universe can be infinitized, but there's no argument that the concept itself precludes instantiation in the world.MindForged

    There is a big difference between adopting the maths because it is a useful model and accepting it as the actual metaphysics. And it should be telling that the central problems of modern physics/cosmology revolve around finding ways to avoid the mathematical infinities, or singularities, that are contained in the current best models.

    That is why quantum physics has to be built on kluges like renormalisation that give a semi-arbitrary means of just cancelling away most of the infinite quantum contributions to bare particle properties. The formal maths returns the answer to any question as "the quantum corrections sum to infinity". And then the physicist says we will just introduce a cut-off factor that cancels away all that gross excess and leaves us with the exact sums that match observation.

    So the infinity-generating maths can be tamed by introducing heuristic constraints. After that, the maths really works well. But there is then no particular reason why you would think the maths represents a good model of the actual metaphysics.
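
    Here is a toy version of that move - not the real QED calculation, just a minimal sketch of how a logarithmically divergent expression can still deliver finite, cut-off-independent answers once you only ask about differences between observable quantities:

    ```python
    import math

    def bare_self_energy(m, cutoff):
        """Toy 'self-energy': integral of dk/(k + m) from 0 to cutoff.
        It diverges logarithmically as the cutoff is removed."""
        return math.log((cutoff + m) / m)

    m1, m2 = 1.0, 2.0
    for cutoff in (1e3, 1e6, 1e9):
        e1 = bare_self_energy(m1, cutoff)
        e2 = bare_self_energy(m2, cutoff)
        # Each bare term grows without bound, but their difference settles down.
        print(f"cutoff={cutoff:.0e}:  e1={e1:.3f}  e2={e2:.3f}  difference={e1 - e2:.5f}")

    print(f"cutoff-independent limit: {math.log(m2 / m1):.5f}")
    ```

    Each "bare" quantity blows up as the cut-off is pushed out, yet the difference converges to ln(2). That insensitivity to the cut-off is what lets the physicist shrug off the formal infinity while keeping the finite prediction.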

    It is the same everywhere you look in the physics. Particles are explained by symmetry maths. But the maths is too perfect usually. It sums to zero. Some other factor has to be added to the story to explain why there is a faint asymmetry in the mix such that not everything cancels away, leaving nothing. Matter and anti-matter can't be perfectly symmetrical otherwise all of one would annihilate all of the other, leaving no mathematicians or physicists.

    A Theory of Everything would aim to offer a completely mathematical description that did away with the various kluges that physics has been forced to develop to get rid of the pesky infinities and zeros. However, my view is that this in turn requires a different maths of infinity. The metaphysics of the maths would be what has to give.

    Reality is already telling us that now. :)
  • Epistemic Failure
    Thanks Tim. Nice to hear.
  • Do you believe there can be an Actual Infinite
    Statements like space or time maybe actually infinite... nonsense.Devans99

    I think “nonsense” is too strong. But there is certainly a real metaphysical question here. Our mathematical models lead to rather glib beliefs about infinity. And our current physics makes it a much more complex and interesting issue.

    Principally I’m talking about the discovery that reality is quantum and so individuation is contextual. For there to be some definite, located number of entities, there has to be an emergent limit on how many can exist within a given material extent. This is the holographic bound on information, or the light cone principle.
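
    To give that limit a number, here is a minimal sketch of the holographic bound - entropy no greater than the bounding area in Planck units over four - converted to bits. The constant values and the example radii are my own illustrative choices:

    ```python
    import math

    # Illustrative SI constants
    c = 2.998e8
    G = 6.674e-11
    hbar = 1.055e-34
    planck_length = math.sqrt(hbar * G / c**3)

    def holographic_bound_bits(radius_m):
        """Maximum information for a spherical region: S <= A / (4 l_P^2),
        converted from nats to bits."""
        area = 4 * math.pi * radius_m**2
        return area / (4 * planck_length**2) / math.log(2)

    ly = 9.461e15  # metres per lightyear
    print(f"1 metre sphere:       {holographic_bound_bits(1.0):.2e} bits")
    print(f"Hubble-radius sphere: {holographic_bound_bits(14.4e9 * ly):.2e} bits")
    ```

    A Hubble-radius sphere caps out at around 10^122 bits - a vast but resolutely finite number, which is the point being made here.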

    So, in practice, space and time are materially constrained. They may be modelled as infinite dimensions, unlimited. Yet once matter and energy are added to the picture, then things look actually quite different. You have an ontology which is about finitude emerging from ambiguity, rather than one which presumes an underlying continuity that can be infinitely divided - at no physical cost - as is the case with the ur-model of the mathematical number line.

    So infinity is a mathematically revered notion. Folk like to apply it to metaphysics as if it were true. But modern physics points to a very different ontology of actualisation now. The maths is out of date.
  • Are proper names countable?
    Still don’t get it? My point was about physical limitations on logically inspired notions. An infinite string has the problem that it can’t actually be said in less than infinite time.

    You reply by pointing out that this isn’t a problem if strings terminate in finite time. Way to go.
  • Are proper names countable?
    So you want to commit to the position that 0.999.... and 1 pick out two different proper names here? Cool.
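
    For anyone wanting the textbook reason the two names co-refer, the geometric series does the work:

    ```latex
    0.999\ldots = \sum_{n=1}^{\infty} \frac{9}{10^{n}} = 9 \cdot \frac{1/10}{1 - 1/10} = 1
    ```

    Which is just to say the two names pick out one and the same number.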