• What can I learn from Charles Sanders Peirce?
    Cheryl Misak has written very readable accounts of what Peirce is about and his legacy - Cambridge Pragmatism: From Peirce and James to Ramsey and Wittgenstein (Oxford: Oxford University Press, 2016).

    She sums it up in this YouTube lecture - https://youtu.be/nQuNWNjYcVY
  • Patterns, order, and proportion
    Now show me where I'm off the rails.tim wood

    No need.
  • Meta-ethics and philosophy of language
    As I construct it, the mental and physical are two different perspectives on exactly the same stuff.Pfhorrest

    I've heard every version of panpsychism. Talk of different perspectives on the "same stuff" remains Cartesian unless you can truly dissociate your position from a substance ontology and shift to a process ontology.

    In the one, stuff just exists. The goal of monism is achieved by granting that stuff some kind of fundamental duality (of properties, aspects, perspectives, whatever).

    In the other, stuff is a condition with a developmental origin. The duality that concerns us is something that isn't fundamental but must eventually emerge. So the monism, the unity, has to come from triadic closure.

    And that is what pansemiosis achieves as a model of reality.

    The account doesn't assume its conclusions by just granting the duality as a fundamental ingredient of nature. Instead it is based on a logic of development where a dualised state of affairs is what emerges due to a visible feedback relationship, an actual semiotic theory of how things are caused to be this way.

    So do you have a formal theory of how fundamental stuff came to be dual-aspect in the way you require? How did this state of affairs come about exactly?
  • Patterns, order, and proportion
    And the generator a model, or not? I know how I get to nature, but how do you?tim wood

    More pointless snark.

    The generator would be the "physical" process. So whatever nature is and how it counts as a generative process. (The Big Bang tells us it definitely counts as such.)

    We would then model that generative process. The model is a model, not the thing-in-itself.

    Where you may be getting constantly tripped up is that the Peircean systems perspective closes the loop. The best model of the process - of the thing-in-itself - is that it is itself a modelling process. That is how it generates something so rationally structured and lawful.

    As Peirce said, the Cosmos self-organised into existence as the inevitable expression of universalised concrete reasonableness. Rationality was the finality. (Hegel said much the same thing.)

    But anyway, you have to take all three steps to arrive back at the whole picture.

    First nature is nature - it looks like some kind of evolving and structure-producing process. Then we jam on our science hat and model that in good pragmatic/empirical fashion. Finally, the best possible theory of nature as a process turns out to be itself the very image of this pragmatic method. Nature is a triadic modelling relation.

    Semiosis is all about a "system of interpretance". And as such, it anticipated all the mysteries of quantum theory. It is exactly the metaphysics we have discovered as physics.

    But physics itself struggles to see that as it is still caught up too much in a conventional Cartesian framing of nature - the irresolvable duality of the observer and the observables. It is only when you start to get to a modern thermal decoherence story of quantum theory, or a quantum information one, that you start to move sideways into a systems metaphysics that works.
  • Meta-ethics and philosophy of language
    I'm actually very opposed to the Cartesian framing .... I support a panpsychist physicalism, like Galen StrawsonPfhorrest

    But panpsychism fails because it is just Cartesian dualism in thin disguise. It is an attempt to treat "mind" as a further substantial property of matter. And so a conflation of two mysteries rather than the explanation of either.

    The systems perspective leads to what Peirceans would call pansemiosis - semiosis as a universal Nature structuring process.

    Now we have a duality built around the scientific-strength concepts of information and entropy - a duality that is demonstrably two sides of the same coin. A single yardstick - physically anchored in the Planck scale - can measure what is happening on both sides of the "mental~physical" divide.

    So panpsychism has always been based on parody physics. Sorry to be harsh, but Whitehead is hand-waving his way through quantum metaphysics (even if - like Bergson and other "emergentists" of the time, he was also a proto-systems thinker in many respects).

    Pansemiosis is a reflection of where physics (and neuroscience) have actually arrived. The discovery of the unity of information and entropy that lies at the heart of the best existing theories of nature.

    Friston's Bayesian Brain model in neuroscience made the breakthrough in describing the "information processing" of the brain in terms of "physical" entropy dissipation. Likewise, the whole of fundamental physics wants to do the reverse by using information processing as the way to model physically entropic reality - such as all the stuff about holography or quantum information.

    ...see upthread about "direction of fit". They're each about the relationship between the world and our ideas of it, and differ in what function those ideas about it are meant to serve.Pfhorrest

    Yes, I saw that bit leaning towards a triadic systems logic. So we are more likely to agree on that.

    With "direction of fit", now we are working with the three things of two asymmetric poles of being and then their contrasting modes of interaction - the relationship that sustains the whole deal.

    My story would be that the "world" and the "self" arise as two sources of constraint on our individual action. The world is everything outside us that we can't simply wish to be different. We have to work with its concrete material possibilities. Then the "self" is the internalised social construct which stands for our cultural backdrop. It is a second set of constraints on our action - an encoded set of useful habits for navigating the hazards and opportunities of the world in a generally pro-social way.

    In a pre-Cartesian, pre-science, time, the two sets of constraints were largely mixed up as one. We viewed the physical world animistically and magically - as an extension of the cultural world. It is only with a hard Cartesian divide that the physical world and the spiritual world became two different things, and the old framework of "nature as culture, culture as nature" began to disappear.

    So there could be a lot of tidying up to do on that part of the argument, as well as some essential agreement, in my eyes. But the foundations are what matter as a first step.
  • Meta-ethics and philosophy of language
    It looks to me like we need something that’s compatible with ontological naturalism or physicalism, that honors the is-ought divide, and yet still allows for moral claims to be genuinely truth-apt and not mere subjective opinions — and none of the conventional options above satisfy those conditions.Pfhorrest

    The problem - as ever - is to accept a Cartesian framing of Nature in this fashion. Your opposition of res cogitans and res extensa.

    My systems science approach instead aims to show how Nature is a unity created out of its divisions. So if "mind" and "matter" appear to be the major division at play here, then this is telling us that Nature is the stable product, the balancing act, that arises out of these two complementary limits on possibility.

    So you don't actually have a "physicalist" account of Nature unless it includes its cognitive aspect as fully as its physical aspect.

    This is the same yin-yang, or dependent co-arising, insight as you get in Eastern philosophy. Reality arises from symmetry breaking or dichotomisation.

    What we call the physical becomes that aspect of Nature which has the least (but not none) of what we might want to call the mindful, the cognitive, the subjective, about it. And in matching fashion, what we call the mental is the aspect of Nature which has the least (but not none) of what we might want to call the objective, the inanimate, the brute materiality, about it.

    In morality, this then provides a holistic perspective where humans are part of the system of Nature. We have to discern the function of Nature - why it exists and where it is headed. And that becomes a baseline for making some judgement about our place within it.

    If we turn to science for an "objective physicalist" account of Nature as a system, the answer can still be mighty disappointing to most folk. :cool:

    No one much likes the ordinary Cartesian answer, where you are either left with Nature as a great big dumb machine offering up no moral imperatives at all, or you have to talk about the totally abstract imperatives (divine command) or the totally subjective imperatives (complete relativism) - the two poles of being that could organise the cognitive realm.

    But anyway, our best systems model of Nature is based on the Laws of Thermodynamics. Complexly organised nature exists to serve the function of the dissipation of entropy. So the baseline physics says we rightfully exist to the degree that - as local negentropic structure - we are organising things so as to accelerate the global imperative embodied by the Second Law of Thermodynamics.

    Now there is much more that flows from this baseline moral imperative. (Or "moral" imperative, as once realising it to be the case, we are then suddenly in the newly reflexive position of it becoming a choice.)

    For one thing, it is a statement of moral liberalism. Basically our natural existence can't break the laws of thermodynamics. But beyond that level of constraint, the Cosmos no longer cares how we go about achieving this general goal.

    Paint paintings or burn books. Both are suitably entropic activities. But in terms of our human capacity to grow negentropically towards our own collective social future, the two activities might turn out to have quite contrasting results.

    In summary, there is virtually zero moral literature that takes the perspective of a "systems physicalism". Even Peirce struggled to say much about ethics (largely because he was still conflicted by the severe religiosity of his own social environment).

    But note how it is a framework that gives equal voice to both ends of the lived spectrum of life - order and disorder, entropy and negentropy, co-operation and competition, simplicity and complexification.

    It is a large enough model of Nature to encompass the full dynamics in play. You don't have to exclude one half to have the other. Instead you start with both (as complementary extremes of being) and discover life as it is lived in the world arising inbetween.
  • The dirty secret of capitalism -- and a new way forward | Nick Hanauer
    First, successful economies are not jungles, they're gardens

    A great quote. Unfortunately neoliberalism may have had its own logic in paving the way for the move from human-scale economies to inhuman-scale ones. The shift from the "real world" economics of farms, factories and services to the new, more virtual seeming, world of finance economics. Naked global capital flows.

    And so "gardening" might still be going on. But at that higher "Davos man" level and not at a human community level, or even at the competitive nation state level.

    So going back to the past seems not an option. The world system has been financialised. It is a beast that must now find its next step up or simply collapse - the extinction event that would return any surviving humans to an old-fashioned community economics, and actual gardening if lucky. :grin:

    The nature of that next step is the interesting question. To side-step climate change extinction, it looks like we are relying on exponential technology. We have to get off fossil fuels and on to renewables, of course. In a general way, the virtual reality of global capital flows and national debts has to be reconnected to physical reality.

    Neoliberalism operated by streamlining the economic realm so that capital was disconnected from the material world in the form of human labour, energy inputs, and most of all, the actual longterm costs of the environment as a sink for the entropic waste that physical effort must produce.

    Freed of real world constraints, neoliberalism could run up a tab on this physical and social capital. That allowed the exponential creation of financial capital (in the form of credit, ie: debt). But now physics is catching up on that streamlined fiction. The entropic sinks that it relies on - a degraded environment, a literally heating climate - are costs coming due.

    The only way out it would seem is technological utopianism - a reconnect of some kind where the physical and informational aspects of being a human organism, the evolving Noösphere, find a new functional balance.

    This is the Singularity argument (of which I am always skeptical). But it is also quite exciting that there are now glimpses it could be an economic reality.

    Kartik Garda at the ATOM - https://atom.singularity2050.com/ - gives a lucid account of how tech is on the cusp of becoming an unstoppable deflationary force. Tech will drive the cost of everything (even harnessing energy or growing food) down to practically free. Almost all jobs can be automated, creating unbounded growth in per capita labour productivity.

    In this next phase of the economic system, Garda argues, the world not only can afford its MMT money printing and Universal Basic Income policies but is already having to pursue them to stave off the early stages of the coming great tech disruption.

    The central banks can't generate even a flicker of inflation these days, despite throwing trillions of debt into the maw of the beast since the GFC. And the reason is that tech deflation - the way automation and AI makes all physical products cheaper - is already a counter-force flowing at several percent of the total economy. Garda argues money printing has to become exponential just for its inflationary pressure to balance the exponential rate of tech deflation. And - with the pandemic - that is why the central banks have felt so free to do just that. The money printer goes brrrrrr....
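
    As a back-of-envelope illustration only - a minimal Python sketch with made-up numbers, not Garda's own figures - here is the shape of that arithmetic: if the deflationary drag from tech compounds exponentially, the monetary injection needed just to hold measured inflation near a target must compound along with it.

    ```python
    # Toy arithmetic with hypothetical numbers - an illustration of the
    # argument's shape, not Garda's actual estimates.
    inflation_target = 0.02     # desired net inflation (2% a year)
    tech_drag = 0.01            # assumed deflationary drag from tech (1% of GDP)
    tech_growth = 1.2           # assumed 20% yearly growth in that drag

    for year in range(1, 11):
        # To hold net inflation at target, the injection must offset the drag.
        required_injection = inflation_target + tech_drag
        print(f"year {year:2d}: tech deflation {tech_drag:5.1%} "
              f"-> offsetting injection ~{required_injection:5.1%} of GDP")
        tech_drag *= tech_growth
    ```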

    So yes. The existing system is broke. But it is no revelation that "free markets" always are part of something larger - a system of governance that encodes the current economic paradigm.

    A market of some kind is still required. It is the intermediating mechanism between the global constraints that govern (the co-operating society) and the local individual action that makes room for the competition that defines us all as self-interested selves.

    To be a "human" system, economics has to be aimed at maximising both our collective human identity and our own free expression of "being human" .... as currently defined in our ever-expanding developing human story. So a market - as a cohesive collective space populated by equally individuated actors - is always going to be the heart of the system.

    But the question is always: what grand flows is the market equilibrating?

    At base, it always has to be a balancing of energy expenditure and intellectual capital. The human adventure is about developing the savvy tricks - fire, tools, gardening, politics - that allow us to harness an ever increasing amount of the biosphere's physical capital.

    For the longest time - see Smil's Energy and Civilisation - this was just whatever physical capital that the sun grew. Hunter-gatherer, then agriculturalist. And then came the sugar rush of discovering that fossil fuels could sustain an exponential economic paradigm based on machines (and environmental sinks).

    The machine age - the industrial revolution - was pretty grim in many ways. Yet also liberating. There is always going to be good and bad. And overall the human lot became better. And while also unequal, an argument can be made that the inequality is merely a natural powerlaw expression of wealth distribution - what equality looks like in an exponentially growing system that, by statistical definition, has no mean.

    The machine age was focused on labour and capital. All our views on left and right, capitalism vs socialism, are founded on the tension between the factory workers and the factory owners - a step up in abstraction from the previous tension of farm labourers and land owners.

    The problem to solve was balancing lives at both scales. And post-WW2, this was somewhat sorted by the emergence of social democracies and corporate businesses. You had unions and welfare systems to build in protections for labour. You had "wrap around" corporate structures that were tied into a general notion of being "good citizens" - delivering a return on capital that recognised the need to balance the needs of shareholders and workers.

    But capital - as its own abstracted flow - wanted to be liberated from these socialised/physicalised constraints. And so along came neoliberalism as a tool to break down all the accumulated publicly-owned stores of capital - the railways or telephone systems which had citizens as the shareholders - and put them on the market.

    Financialisation could then take off as its own thing. It was a way to mine the world of its promised future growth by taking out leveraged derivative bets on tomorrow's income streams. And then eventually, just to mine the promises that fictional growth would surely occur.

    We now have that broke system where the central banks - the Fed in particular - have in fact had to socialise the actual asset markets. There is no price discovery in the stock markets if their prices are simply reflecting the largesse of trillions in debt "stimulus" (nor price discovery in terms of the true value of the US dollar that still underpins the world financial system).

    But the argument that is currently most believable to me - in this very shaky feeling time - is that we are never going to make a well-designed step backwards into any kind of Green utopianism. The gardening metaphor. That is impossible because thermodynamics is a ratchet - a flow that only has the one direction that spells "growth".

    So we have to hope for Tech utopianism to be true as an alternative. And actually act on that expectation. As Garda and others (like Jeff Booth, who wrote The Price of Tomorrow) say, the field of economic punditry is obsessed by the problems of yesterday and not yet seeing the solutions of tomorrow. Someone has to be brave enough to understand what wants to happen as the next step of this story and push it through into the institutions of governance that thus frame the collective space that is the market.

    It is a moment where we need revolutionary scale reforms to absorb the contrasting imbalances of unbridled debt creation and (possibly) exponential tech deflation. And of course, the costs of those ecological and environmental sinks have to be included in the grand accounting now. They must be monetised and be a factor in the newly-designed marketplace.
  • Patterns, order, and proportion
    Do you have a more specific name for your "Pattern Generator"?Gnomon

    Well, nature is the generator. So really I am talking about the long tradition within metaphysics and science that seeks an immanent and self organising, thus triadic, approach to the development of the structured reality we observe. This knits together systems science, cybernetics, Peircean semiotics, hierarchy theory, thermodynamics, etc.

    The key insight is that reality is the evolving product of top-down constraints interacting with bottom-up constructive degrees of freedom. Global constraints shape the local degrees of freedom to be what they are (the atomistic stuff that can construct). And local degrees of freedom then act to reconstruct the world that is the collective state of constraint forming them. Reality is a habit that works.

    So a detailed summary of how the many strands of thought now weave into a tight thermodynamic story can be found here for example - https://arxiv.org/pdf/1006.5505.pdf

    The key is the shift from a mechanical or Cartesian framing of Nature to a triadic framing that is thus large enough to include the idea that reality must evolve, develop or self-organise into being.

    So Nature is self-generative. It is always forming patterns for reasons. Even its randomness or indeterminism is a pattern - the one produced by the least amount of possible constraint on what is going on locally.

    Deus es machinatim wood

    A lazy insult.
  • Patterns, order, and proportion
    Any information can be encoded as a string of bits. We can then calculate the entropy of that string.Banno

    Hmm.

    So what is the entropy content of the decimal expansion of Pi? Is the resulting bit string all signal - that is minimally entropic? Or all noise - that is maximally entropic?

    It rather depends whether sender and receiver share the same decoding key - the pattern generator or mental construct used to encode the string of bits.

    If it is the algorithm for computing pi that was at work, then the string is all negentropy. Even if sent over a noisy channel, the receiver could fill in any gaps or errors by just doing the computation to double check. In fact, simply transmit the algorithm - the pattern generator - and have done with it.

    But if the receiver has a different model of the situation - a different theory about the pattern, a different mental construct - then a very different message might be read.

    The model in mind might be "this is a perfectly random decimal sequence". And yes, it then passes all the usual tests for being "patternless" - what we would expect to get by drawing numbers out of a hat by chance.

    So we have here exactly the same "information", and precisely the opposite conclusion as to the underlying "data generator" in play. And each model can confirm its interpretation as the proper one by the different kinds of measurement it chooses to employ.
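
    To make the contrast concrete, here is a minimal Python sketch (assuming the mpmath library is available for generating digits of pi). Measured statistically, the digit stream is about as entropic as a string can be; yet the handful of lines that generate it show that its algorithmic description - the pattern generator - is tiny.

    ```python
    # A minimal sketch, assuming Python with the mpmath library installed.
    from collections import Counter
    from math import log2

    from mpmath import mp

    mp.dps = 10_010                        # generate ~10,000 decimal places of pi
    digits = str(mp.pi).replace(".", "")[:10_000]

    # Statistical view: Shannon entropy per digit. A "patternless" decimal
    # string approaches log2(10) ~ 3.32 bits - maximally entropic noise.
    counts = Counter(digits)
    entropy = -sum((n / len(digits)) * log2(n / len(digits)) for n in counts.values())
    print(f"Shannon entropy: {entropy:.4f} bits/digit (maximum = {log2(10):.4f})")

    # Algorithmic view: the whole string came out of these few lines - the
    # pattern generator. Share the generator and the message is all signal;
    # any gaps in a noisy transmission can simply be recomputed and filled in.
    ```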
  • Patterns, order, and proportion
    I shall now give a definition of nature, in the hopes that you will endorse it or improve on it. Nature is that which underlies perception and understanding, describable in terms of perception and understanding and reasoning thereon, but not itself knowable (in the Kantian sense). In the practical sense, (again Kantian), as the matter that is perceived, reasoned upon, and understood, eminently knowable.tim wood

    If we talk at cross purposes, it is because you turn the original question about the ontology of patterns - are they real, and thus in what sense? - into an argument about epistemological foundations where I’ve already indicated my general agreement.

    Anything we can say about “nature” is going to be a model - a pragmatic business of constructing a general causal theory to be constrained by “the facts” as we then discover them (the facts being of course measurements predicted by our models, so leaving us in Kantian fashion, still on our side of the epistemic bargain).

    All this is completely accepted about the relation we would have with “nature” - our Umwelt.

    And then there is my point. Broadly there are the two metaphysical models in play here. You - consciously or unconsciously - appear only able to apply a reductionist perspective to things. I am saying that a holist has a larger four causes model that can “naturalise” formal and final cause too. They are fully part of the world being described (and so not left hanging as being supernatural powers, nor simply dismissed as mere human social constructions).

    How do you address the objection that because no two things are ever the same, that there is no pattern except in abstraction, which is a process of the mind of the one perceiving - creating, as it were - the pattern. That is, sez I, no mind, no pattern. No similarities anywhere anytime anyway, except in the mind of those who pick them out.tim wood

    This was the question you posed.

    So you seem to want to say that abstracting over the particulars is a mental process. The real world is some unpatterned state of affairs, a mereological collection of concrete individuals, and we then invent notions of universals by choosing to ignore all the individual differences by applying some arbitrary, socially constructed, rule.

    My reply is that nature itself is organised by abstracting over the particular. That is how the world develops its complex hierarchically structured form. Any collection of interacting individuals will fall into emergent patterns as they develop a temporal history or memory - become constrained by their own past. Lawful and predictable behaviour will result.

    So a pattern in nature is emergent form that serves some purpose. Although that purpose can be pretty humble and statistical. It can be just the finality of arriving at the collective, detail-forgetting, state of an equilibrium balance.

    In an ideal gas model, it doesn’t matter what the particles are doing. Their motions are random - in a way then described by a simple globally-constraining mean. The gas has a temperature and pressure. And the temperature and pressure are quite real things, aren’t they? They might emerge at the collective level. But they act on the world in a measurable and not abstract way.
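
    As a toy illustration of that point, here is a minimal Python sketch (assuming numpy is available): every individual particle velocity is drawn at random, yet the collective mean energy - the temperature-like quantity - is pinned down ever more tightly as the number of particles grows.

    ```python
    # A minimal sketch, assuming Python with numpy installed.
    import numpy as np

    rng = np.random.default_rng(0)

    for n_particles in (100, 10_000, 1_000_000):
        # Every velocity component of every particle is drawn at random.
        v = rng.normal(0.0, 1.0, size=(n_particles, 3))
        kinetic = 0.5 * (v ** 2).sum(axis=1)       # per-particle kinetic energy
        # Individual energies fluctuate wildly (the spread stays large), but
        # the collective mean - the temperature-like quantity - settles down.
        print(f"N={n_particles:>9}: mean energy={kinetic.mean():.4f}, "
              f"per-particle spread={kinetic.std():.4f}")
    ```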

    Again, my point was that even if an analysis of the situation in terms of four cause thinking says that any form or finality is mighty dilute in comparison to the kind of intentional twist we would give those metaphysical terms in relation to humans and their “minds”, there still is a need for a four causes model to account for what is going on. Nature actually forms its entropic patterns for causal reasons - such as achieving global equilibrium balances.

    To deny this “desire” is to make the Cartesian ontological error of treating mind and world as divided realms. To be quite comfortable with psychologising nature is just a normal step towards being a proper natural philosopher or systems thinker.

    A Cartesian thinks of matter as concrete stuff, and mind as an experiencing or rationalising stuff. A systems thinker would say instead that even matter is not as thus imagined (an idealised combination of material and efficient cause, hence little imperishable atoms). Why, our best physical theories confirm that particles are really waves. Or maybe quantum maps of potential. Or just informational constructs of some kind.

    Science has dematerialised the material now! Particles are events that only exist with any concreteness in the sense an act of measurement has been recorded. They are purely contextual in their being.

    So matter is no longer matter. And equally - as you are no believer in spooky soul stuff - mind is no longer mind. To now talk about Nature with either a super-physicalist rhetoric, or try to over-protect the use of mentalistic terms, is just a cultural exercise in boundary policing. It is preserving the Cartesian world view and not allowing in the clean air of new thought.
  • Patterns, order, and proportion
    Unless I'm missing something, this settles the question. Pattern is read into nature.tim wood

    No, it starts things. It accepts that any ontological enquiry is rooted in a pragmatic epistemology. We can only "know" the world via whatever modelling relation we find to be useful.

    It is a statement of epistemic humility. It begins an actual metaphysical-strength effort to talk about the "truth of reality" with an appropriate disclaimer.

    So it is why I can say "reductionism" is perfectly fine within its own (restricted) purposes. And why "holism" can be also "just a model" and yet be the model demonstrably closer to the "truth" because it models that reality in terms of all four Aristotelean causes. It treats formal and final cause as also "part of nature", whereas reductionism posits only material and efficient cause as "part of nature", leaving formal and final cause hanging in the air as "super natural".

    So a reductionist might claim that nature just doesn't contain its own forms, its own finalities. That becomes an ontological-strength claim they then need to support. You appear to be wanting to argue that.

    Or a reductionist might more humbly agree that reductionism chooses to be mute on the question of how form and finality play a part in reality because - for the purposes of pragmatic modelling - reductionism simply doesn't need to include the class of top-down causes. No ontological claim is made. The reductionist model already presumes an intelligent human with some goal in mind and an ability to construct a design. The necessary formal and final cause will be supplied by a "creating mind".

    And as I would then say, sure you can just model reality in terms of material and efficient cause, then call it quits. Meanwhile I'll go and join up with the guys who have the ambition of a full four causes model of reality. That is going to be the cutting edge of anyone actually still interested in metaphysics as a totalising inquiry into the nature of nature.

    For clarity I'm taking nature as that that is at the instant, and from one instant to the next is never the same.tim wood

    Well there you go. You are taking a basic reductionist modelling trick and convincing yourself that is then "the world" truly described. You presume an atomistic ontology and read that into everything you see - so don't really ever see all that is there.

    A systems perspective is holistic and so the whole idea of reality as a sequence of states - one damn instant after another - is clearly a wild over-simplification. A holist would see the same reality in terms of a dynamical flow, a process with structure.

    So while things may be different from one instant to the next (they MUST be if the holism presumes that local possibility always has a baseline (quantum) uncertainty), overall everything is being kept on track by a global flow - a generally constraining purpose, direction or finality.

    What you are saying is that you presume reductionism, and hence reductionism is what your argument must spit out.

    I am saying check your presumptions. Reductionism just isn't a large enough model if you want to do anything as ambitious as metaphysics.
  • Why is there something rather than nothing?
    Why is there something rather than everything?

    (Ie: What convinces you that you have started at the right end of the question?)
  • Patterns, order, and proportion
    I'll be upfront. I don't like Aristotelians.Gregory

    You are ranting against theists now. And all my arguments are atheistic.

    Anyway, you proposed a generator. That word means a person who generates.Gregory

    It is a mathematical term - https://en.wikipedia.org/wiki/Generator_(mathematics)

    You need desperately to put down the Aristotle and read some Freud on religionGregory

    The rant continues. You are unable to furnish an example of a natural pattern that wouldn’t have a generative process behind it. Case closed.
  • Patterns, order, and proportion
    Can you supply me with a single example of a pattern in nature for which it is scientifically accepted it has no generating process?

    As I have stressed multiple times here, even randomness and chaos can now be described as predictable patterns in terms of their generators.

    So it is not I who is invoking supernatural beings. Just you as a way of ducking the argument being made.
  • Patterns, order, and proportion
    For the rest, it appears to me that you read into nature whatever works for you. As practical science that seems about right.tim wood

    It is all models. What more are you hoping for here? Revelation? Faith?

    Now, however, I must ask you for a rigorous - and short - definition of pattern.tim wood

    In a general way, we are talking about a form or state of organisation that somehow looks habitual, repetitive, meaningful, deliberate, pervasive, ordered. And thus not its opposite - patternless: chaotic, accidental, arbitrary, lacking predictable structure.

    The presence of a pattern implies a pattern generator. A finality. There is some larger process that is placing constraints on irregularity or uncertainty.

    Thus a pattern does not simply exist as a result of meaningless accident, as you seem to want to suggest. It has to be generated by constraints imposed on otherwise free possibility.

    Where modern statistical mechanics gets us to is the realisation that even the random and chaotic patterns of nature are also the product of exactly this kind of causal set-up - an Aristotelean or systems causal story. So there is nothing in nature that escapes this causal ontology as even “raw chance” is being shaped into its completely predictable patterns - if you check my citation.

    There is always finality present in this sense. Even the random decay of a particle has a (Quantum) generator by virtue of the fact that we can observe its predictable statistical pattern.
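
    For instance - a minimal Python sketch, assuming numpy is available - each simulated decay time below is individually random, yet the ensemble traces out the completely predictable half-life pattern that a generator-based description captures.

    ```python
    # A minimal sketch, assuming Python with numpy installed.
    import numpy as np

    rng = np.random.default_rng(0)
    half_life = 1.0
    # Each decay time is individually random (exponentially distributed).
    decay_times = rng.exponential(half_life / np.log(2), size=1_000_000)

    for t in (1, 2, 3):                    # time measured in half-lives
        surviving = np.mean(decay_times > t)
        print(f"after {t} half-lives: surviving fraction = {surviving:.4f} "
              f"(predicted {0.5 ** t:.4f})")
    ```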

    If we are merely reading patterns into nature, then there would be no pattern generation machinery for science to discover and model. And really, what else defines nature than it is a pattern - a structure, a process, a system of dynamical generation or becoming?

    If you want to argue this is not the case, how does science manage to extract universal strength laws of nature? What is going on there?
  • "Turtles all the way down" in physics
    CERN physicists recently declared that according to their best estimates, the Universe ought not to exist at all, as the matter and antimatter really should have cancelled each other out.Wayfarer

    Yep. Almost all the matter and anti-matter - as mirror image states - did cancel each other away to leave the blazing sizzle of a cooling and expanding bath of photon radiation, the simplest possible form of being. But if you google CP violation, you will see that theory can predict a symmetry-breaking source of an underlying asymmetry that preserves a small fraction of matter. It has been observed with quarks. It just isn’t enough as yet. Other particles, like neutrinos, would have to contribute too.

    Early results have given physicists confidence skewed neutrinos can supply the missing amount of asymmetry. They just need more public money and a next generation detector to demonstrate that, natch. :wink:
  • Patterns, order, and proportion
    Which of his books talk about points and quantity?Gregory

    Peirce never wrote books as such. But his writings were voluminous. So there is no easy way in.

    I don't really get what you mean by points and quantity. But if you want to dig into the patterns of nature, you might be much better off with books on fractal geometry, scalefree networks, chaos theory, and those kinds of things. You need the science to give you the conventional story on self-organising patterns in general. Only with that kind of grounding could you see how this relates to the metaphysics developed by someone like Peirce.
  • Patterns, order, and proportion
    An overall - maybe residual - tendency? Life itself does not seem to be, to manifest, 'a "desire" to entropify.' That is, while I understand the tendency to disorder, I don't think it's quite all that simple. Do you care to assay a quick and simple definition of entropy, for present purpose?tim wood

    Don't be afraid of the T word. As a natural philosopher (cf: Stan Salthe), we can parse finality into the developmental stages of {tendencies {functions {purposes}}}. Or {teleomaty {teleonomy {teleology}}}.

    So when we are talking about desires when it comes to plate tectonics, river deltas, and other examples of natural dissipative structure, then clearly it is teleology of the dilute kind - a statistically-inevitable tendency of nature.

    Then once you have life and mind - systems that can construct informational models of their own worlds - you have now the further possibility of a localised desire for a function (like breathing), and for a purpose (like surviving).

    Formally, a simple natural system is just entropic. It serves no other purpose than to accelerate entropy and thus the extremely general desires of the second law of thermodynamics. But any living and mindful system is defined by its countering negentropy. It is in the business of producing local information - memory structures like genes and memes - that encode a local way of being that appears to swim counter to the generalised entropic flow, imposing its own ideas of order on the material world.

    Of course, life and mind only exist because, on the whole, they do in fact overall increase the world's entropy. So they don't transcend the limits imposed by the second law's desires. They instead live within those desires as local agents of the entropification process. We use our smarts to produce more waste heat than would otherwise be the case.

    So we - as living and thinking systems - are fully part of the great cosmic entropic flow. But being a part of that involves also being able to stand apart from it. To be local stores of negentropic form and finality and so break down "resources" - natural stores of negentropy - and speed their path to becoming waste heat.

    How do you address the objection that because no two things are ever the same, that there is no pattern except in abstraction, which is a process of the mind of the one perceiving - creating, as it were - the pattern.tim wood

    That is the principle I am basing things on. Pattern has objective existence in nature as that which can locally suppress uncertainty. So every locale would have fundamental uncertainty - as quantum theory has empirically demonstrated. But then the job of emergent constraints is to produce a localised statistical regularity.

    No two things could be exactly the same because the baseline of reality is just a pure uncertainty or vagueness (Peirce's tychism). But then a reality that develops generalised law or habits (Peirce's synechism) will constrain that uncertainty as much as it can be constrained. The statistical fluctuations will be reduced to the absolute minimum - ie: a Gaussian distribution.
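
    A minimal Python sketch (assuming numpy is available) of that claim: averaging many independent, individually irregular contributions squeezes the fluctuations down into the familiar Gaussian bell curve.

    ```python
    # A minimal sketch, assuming Python with numpy installed.
    import numpy as np

    rng = np.random.default_rng(0)

    # Each individual contribution is irregular and distinctly non-Gaussian.
    samples = rng.uniform(-1.0, 1.0, size=(100_000, 50))

    # Averaging over 50 contributions per trial constrains the fluctuations.
    averages = samples.mean(axis=1)

    # Crude check of the Gaussian shape: ~68% of values should fall within
    # one standard deviation of the mean.
    within_1sd = np.mean(np.abs(averages - averages.mean()) < averages.std())
    print(f"fraction within 1 sd: {within_1sd:.3f} (Gaussian expectation ~0.683)")
    ```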

    So sure, minds can read patterns into nature by learning to overlook individual differences and conceiving of the world in terms of some larger generalising abstraction.

    But the Peircean point is that this is how nature itself works. For real. It develops the habits of regularity that constrain local irregularity. Laws evolved in ways that make being even possible, by preventing absolutely everything from just happening in a radically incoherent fashion.

    Every brick that makes up a house is different. But that difference has to become trivial enough that as the houses get bigger, they don't start falling down.

    Nature is the same. Its own growth is a constraint on variety. It has to arrive at the most robust patterns of organisation just to exist as a persistent process of being. Or rather, becoming.

    As to A's four causes, I'm having some trouble identifying the final and formal causes. And until corrected, I'm thinking that those two do not occur in nature, except in the minds that entertain them. On thinking about it, none of the four in nature. We can describe in such terms, but in as much as the causes are really answers to questions, and nature asks no questions, then how can there be causes in this Aristotelian sense in nature?tim wood

    Aristotle's was a first clear attempt to dissect causality as a logical system. It may be a picture he read into reality - a metaphysical model. But a systems scientist has no trouble seeing it as the right model.

    Reductionist science was based on the Platonic/Cartesian trick of splitting off material/efficient cause from formal/final cause. It cleanly divided the world of the Real from the world of the Mind.

    Now that is a great model if you are a human wanting to impose some private desires on the world via the patterns of machines and engineering. You can build a science that is all about passive matter and the way it can be bent to serve your will.

    But here we are talking about what is really real. And that is a nature which is immanent and holistic - a product of all four causes. With no external help.

    Of course, you might find that metaphysical alternative arguable. And the first thing to protest is the idea that nature could have "a mind of its own" - as if finality still equals consciousness or spirit once you have actually shifted to a natural philosophy paradigm where nature starts out down at the maximally "mindless" state of having tendencies or habits. A teleomatic structure rather than a teleological one.

    Do you see the difference at play? Once you are signed up to standard issue Western metaphysics circa 1600 - reductionist science tied to Platonic/Cartesian dualism - then any hint of mindfulness in nature becomes the extraordinary problem to solve. And patterns are the famous Platonic bone of contention.

    But flip to a systems science or process philosophy paradigm and now the opposite is the focus. We are asking about where "mindfulness" ever actually ceases to be the case. On the local scale, even particles seem either weirdly quantum willful, or to be secretly following these abstract laws that someone wrote.
  • "Turtles all the way down" in physics
    The question is: could quarks be broken down in smaller pieces too? And those pieces of quarks, could they be further broken down, etc. etc. ad infinitum?

    Could there be no "bottom" to that stuff we call matter? Could it be "particles all the way down"?
    Olivier5

    Physics has shown that material particles only "break down" as far as their simplest possible symmetry states. So quarks exist as a mathematical limit on material symmetry breaking - the SU3 symmetry in their case. Electrons and neutrinos are the result of there being an even simpler accessible symmetry state - the U1 of electromagnetism (although the mechanism to get there is a little messy, as you need this other thing of the Higgs mechanism to break the intermediate step of the SU2 electroweak symmetry).

    So putting aside the technicalities, physics has flipped the whole issue. The mathematics of symmetry tells us what is the simplest possible ground state of material being. The nearest to a vanilla nothingness. A cosmic sea of U1 photons. The problem becomes more about how any complexity in the form of higher-level crud, such as quarks or Higgs fields, manages to survive, and thus give us a materiality that needs describing in the fashion of turtles stacked high.

    At this point, the conversation has to shift from a classical metaphysics to a quantum one. And here the floor of reality becomes the very possibility of being able to break a symmetry with a question.

    You want to know the simplest formally complementary pair of facts about the nature of something that might exist - like its location AND its momentum. Well sorry. Those are the logical opposites as measurements, so that is certainly the ground floor when it comes to asking something concrete and definite. You can't logically get simpler, or more binary. But because they are opposites, they can't both be measured with precision simultaneously. Exactness in one direction becomes complete uncertainty in the other.

    So again, we know in a mathematical way what constitutes the "smallest possible fragment of reality". A countable quantum degree of freedom.

    The mystery is more about how nature would begin the business of smashing its way down through a whole series of higher symmetry states - like SU10 or whatever else counts as the grand unified theory describing the Big Bang state - to arrive at its simplest achievable arrangement.

    That is the practical task before particle physics now. Recovering the story of when things were messy and complicated before they got reduced towards an idealised simplicity.

    It's a meaningful objection to the idea that CERN will find the answer to the OP anytime soonOlivier5

    CERN is all about recreating those earlier times when things were hot and messy. The everyday world around us has evolved to be about as primitive as it gets. Electrons can't decay because there is no simpler state they could achieve. But go back and higher symmetry states of matter can fragment in a vast variety of short-lived ways - short-lived as they too will want to reduce towards the simplest achievable state of being, like a U1 photon.
  • Patterns, order, and proportion
    Skip Hegel and jump straight to Peirce. :razz:

    But seriously, they are all on a continuum as process philosophers - talking about a reality that self-organises in this dialectical fashion. Being as becoming.
  • Patterns, order, and proportion
    I took the most basic example I could above. Take a blank white piece of paper. Does it have pattern? When exactly, once one starts drawing, does patterns start?Gregory

    Note that you are imagining a patternless state - a blank sheet - on which you then ... for some reason ... want to impose a pattern. And the pattern is then judged meaningful in light of that reason.

    So this is a very human-centric start point - subjective rather than objective. You supply the formal and final cause. And you need a blank and passive material ground on which to impose those designs.

    But pattern in nature is produced by stochastic self-organisation. Pattern emerges as free action or raw material possibility gets organised by the imposition of generalised constraints.

    So "objective" patterns have this natural logic. Their underlying meaning or finality is encoded by a statistical law - principally the laws of thermodynamics. Nature has a "desire" to entropify. Characteristic dissipative structures, like vortexes, erupt everywhere in nature where that is the form which best serves the purpose of entropification at that locale.

    This is a really good technical paper on the topic - The Common Patterns of Nature

    So the patterning of nature does have objective existence in that it embodies all four Aristotelian causes. The structures really do exist. And they do exist because they are functional. And they exist in a hierarchically complementary fashion. The patterns exist to the degree that they suppress or constrain the otherwise lawless or patternless ground of free material possibility that they organise.

    A vortex develops in a flow as a more efficient structure for serving the global purpose of statistical entropy. The vortex breaks the patternless symmetry of the flow - water molecules jostling in any old direction - and entrains them to the directional pattern of a localised rotation ... that allows everything now to get to that desired higher entropy state faster.

    The glugging bottle is a good example of this. Fill a soda bottle with fluid and tip it upside down. If there is no spin in the fluid, you get an inefficient glugging as air is having to get in while the fluid is trying to drain out. But if a vortex can develop, organising the draining fluid around a rising air channel, then the bottle empties in a flash.

    Coming back to your blank sheet of paper, you can see how this is a quite different "subjective" view of reality. All the final and formal cause is Platonically in your head. You want to make the patterns and find them meaningful. And to do that, you also need to manufacture a "world" that is matchingly stable and unresistant in the face of your pattern imposing.

    Nature itself starts as chaotically as possible. It is a fundamental source of instability - as by definition, that is the opposite, the vivid contrast, to what it then becomes when that patternless symmetry state gets broken by the emergence of a direction, a form, an organising structure.

    But a blank sheet of paper is at the other end of the spectrum to this in being engineered by humans as something that unresistingly will accept our marks. You can't draw a pattern on the surface of a stream. But you can make paper that has that quality of being maximally passive in terms of its material/efficient cause. It is the very definition of what most people think of as "material", or brute and inert, mindless and formless, matter.

    So what is illustrated here is that there is nature as it actually is - the world as a self-organising stochastic structure serving a generalised thermal purpose and (paradoxically) rooted in a fundamental material instability - and then the "world" as it is generally conceived as the passive material "other" to the active and willing human mind.

    Maths - as the science of patterns - has got rather screwed up by conflating the two paradigms. There is certainly the artificial "world" that humans can create by imposing their designs on a nature pacified - the forms we construct from piles of bricks or careful straight lines. If we have stable materials, then we are free to produce these engineered patterns that we find useful for our purposes.

    But then there is the still fairly recent turn towards the maths of actual natural patterns of nature. This became big news with the discoveries of chaos theory and non-linear dynamics. Yet the metaphysical significance of this has been slow to percolate.

    Which are the real patterns here? The ones we can (subjectively) impose on a suitably pacified nature, or the patterns which are (objectively) the only ones nature can arrive at to organise its instabilities to maximum effect?
  • Why does the universe have rules?
    If the laws we see in the universe are the only laws that a universe can have this gives fuel to the deterministic philosophy in which things have to/ will occur a certain way rather than completely by chance.Benj96

    The irony here is that "complete chance" must arrive at a lawful statistical conformity. If nature tried to do absolutely anything and everything all at once, almost everything would cancel out. Every zig left would get zeroed by a zag right ... leaving zero as the now stable outcome. In the same way, flip a coin and in the long run it must tend towards a stable 50:50 outcome ... along with a powerlaw distribution of excursions or runs of either heads or tails.
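
    A minimal Python sketch (assuming numpy is available) of that statistical inevitability: the running fraction of heads is dragged to 0.5, while the heads-minus-tails tally still wanders away from balance in excursions of very uneven length.

    ```python
    # A minimal sketch, assuming Python with numpy installed.
    import numpy as np

    rng = np.random.default_rng(0)
    flips = rng.integers(0, 2, size=1_000_000)     # 0 = tails, 1 = heads

    # The running fraction of heads is dragged towards the stable 50:50 outcome.
    for n in (100, 10_000, 1_000_000):
        print(f"after {n:>9} flips: fraction of heads = {flips[:n].mean():.4f}")

    # Yet the heads-minus-tails tally still wanders away from balance in
    # excursions of very uneven length - most short, a few extremely long.
    walk = np.cumsum(2 * flips - 1)
    returns = np.flatnonzero(walk == 0)
    excursions = np.diff(returns)
    print(f"returns to balance: {len(returns)}, "
          f"longest excursion: {excursions.max()} flips")
    ```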

    This metaphysical-strength insight is what is behind the deepest insights of fundamental physics such as the least action principle or the quantum sum over histories. If nature is freely exploring every possibility, it will find the shortest possible route - the "straight line" - between two energy states.

    So physics itself already explains the emergence of generalised law as just a result that if every possibility tries to get expressed, then most of it must self-cancel, leaving only that which can't be cancelled out of existence.

    The problem that fundamental physics has is then to explain why just about everything gets cancelled, yet not absolutely everything. And here symmetry breaking comes into play. A zig left has to be - for some reason of symmetry - a little more probable than a zag right. Some grain of difference has to exist that puts an ultimate floor under reality and its emergent statistical regularity.

    So laws can be simply the emergent constraints of the Cosmos - the regularities that even a chaos cannot avoid as it must conform to a statistical attractor.

    But laws are then tied to fundamental constants - some ultimate grain that prevents everything just cancelling to nothing.

    Is there an emergent story for them too? Probably. Why not? Especially if - like mathematical constants - they are simply ratios that emerge from a convergent series. A "growth of cosmic regularity" scaling factor. :smile:
  • Architectonics: systemic philosophical principles
    I think hylomorphic dualism - dualism of matter and form - would be more satisfactory from your viewpoint,Wayfarer

    Yes, the systems view is always going to be rooted in Aristoteleanism. That was the first deep cut.

    - last year I discovered an interesting paper by Marcello Barbieri, stating why he had resigned as Editor of the Biosemiosis - because he couldn't agree with the 'Piercian' orientation of biosemiotics.Wayfarer

    The group that Barbieri was part of were Peirce-lite. His beef - correctly - was they had a mentalistic approach to “interpretance” and hence meaning. And it needed to be understood in a physicalist sense.

    But Barbieri just signed up to the wrong group. Theoretical biologists like Rosen, Pattee, Salthe, had already arrived at a well worked out physicalist version of biosemiosis. And as Peirce’s forgotten writings emerged in the 1990s, the fact that what they were doing was “biosemiotic” became apparent.

    Barbieri was always very concerned to establish his own priority as the guy who gets it right. He wants to set his “code Biosemiotics” as the ultimate path to follow. So once he discovered Pattee existed, he had to tear him down too - https://link.springer.com/article/10.1007/s12304-009-9042-8

    I don’t find much deep about Barbieri myself. Whereas Pattee and Salthe have minds like razors.
  • Architectonics: systemic philosophical principles
    The ultimate ontology I have is one of a network of interactions which are simultaneously phenomenal experiences of and also physical behaviors of objects that are also all subjects (as covered more in the essay On the Mind) that only exist as nodes in that network, defined entirely by the interactions/experiences/behaviors they take part in.Pfhorrest

    I did jump around a bit to try to get some sense of where you arrived. Again - as with your panpsychic discussion in the mind section - my criticism would be that you are trying to talk around the obvious problems of conventional folk metaphysics rather than simply recasting everything from the unifying perspective that is offered by a true systems metaphysics.

    So - as is often the case - you may be leaning towards an organicist metaphysics. But you are using the language and concepts that were constructed so as to oppose that very tendency.

    Nodes in a network is both a useful mathematical conception - one that lends itself to systems modelling - and yet also further entrenches that fundamentally atomistic notion of objects and properties. It portrays a concrete realm of located entities, the nodes, and their linearised connections in "a space" - an a-causal void.

    Nothing is said in that metaphysics about why the stable locales could just exist, or how a space of uncertainties is being constrained to such a stark nothingness. And as I say, quantum theory should sensitise anyone to the cartoon nature of such thinking.

    My own approach to "mind" is based on modelling relations theory - another way of talking about Peircean pragmatics or enactive psychology.

    In some general sense, a system of constraints can be understood as a "pansemiotic" model of the reality it is shaping into being. The constraints are the mould that give shape to the raw material plasticity of the apeiron. So this would be the level at which a form of panpsychism or objective idealism (or even Whiteheadian process theology) has some bite.

    Mind = constraints. And it works in the sense that the systems view is about permitting rather psychological terms into the discussion. It is fine to talk about memory, information, finality, etc.

    But at the level of ultimate simplicity - the realm of particle physics and fundamental forces - there is no actual semiosis in the sense of a subject-forming, located, point of view. That only begins to happen with the development of organisms that actually have the memory mechanisms to own their own "models of the world" - a model that has them in it as the purpose-representing point of view.

    So sentience in any sense is a property only of life itself. It is a useful corrective to a "mindless" physics - a non-systems physics - to introduce psychological terminology as powerful metaphor. But it is then bad to get carried away by the success of such a move.

    Panpsychism stays stuck in Cartesian dualism because it accepts mind and matter as categories of substantial being ... with no actual necessary connection, just a pair of modes.

    The systems approach demands that any pair of things stand in the strict logical relation of a dichotomy. And this is what is made explicit by replacing the categories of mind and matter with the systems alternative of global constraints and local uncertainty. You can see how constraints on uncertainty must produce a stabilised persistence. They are two opposites that must act on each other so as to produce the third thing of an emergent actuality.

    So a network of nodes is taken as a pre-existing reality, a brute existence, that can then be a basis for emergent complexity - in that the necessary dichotomy of "parts in relation" has already formed itself into being.

    But the systems thinker has to drill down into that "better description" of base reality to tell the story of how a network of connections could itself have evolved. Which is where you have to switch over to a Peircean tale of constraints on uncertainty as the larger picture - the quantum reality that precedes the classical reality, so to speak.

    The interactions/experiences/behaviors are the most concrete things in existence, and the objects/subjects they are of are abstract constructions whose existence is like that of numbers and other abstract entities (as covered more in the essay on Logic and Mathematics).Pfhorrest

    But isn't this conflating experience and interaction - the world that is its own model, and the self that is a model produced by an organism? And isn't it also treating localised objects, rather than globalised or contextual constraints, as the "abstract", and some pattern of connectivity - those very constraints - as the "concrete"?

    This is still essentially trying to make dualism work.

    The systems view is triadic. You have a dichotomous separation that forms the complementary limits of being, and then the middle ground that emerges in between.

    The limits - being exactly the place where actual reality can never reach - become the abstract. Constraints and uncertainty - as global and local extremes - are by nature abstract. That is why we place natural laws, as constraints, beyond the world itself. And likewise, why chance or uncertainty is also placed outside the concrete determinism of reality.

    And then, sandwiched between these "abstract" bounds - absolute law (or Peircean synechism) to one side, absolute (quantum?) chance (or Peircean tychism) to the other - we find the third thing which is the "somethingness" that is the emergently actual, or emergently concrete.

    Can you elaborate on this? Because on my understanding of Whitehead, his view is quite similar both to mine and to what I gather is yours.Pfhorrest

    Whitehead is like Kant for me. I can't be arsed sifting the wheat from the chaff. They both have promising moments then go off track as they don't question the kind of object-oriented metaphysics I describe. They don't make a clean break with dualism to embrace a triadic systems logic.
  • Architectonics: systemic philosophical principles
    I would really love to hear you take on my whole bookPfhorrest

    ...that there are not so much different kinds of properties, much less different kinds of stuff, as there are what could crudely be called mental and material ways of looking at the same properties and the same objects, that are essentially both mind-like and matter-like in different ways, that distinction no longer really properly applying when we really get down to the details.

    http://www.geekofalltrades.org/codex/ontology.php

    I looked through to see what our sharpest point of divergence might be. I generally agree with what you say you stand against, but I don’t think you have arrived at the same thing that I would say I stand for.

    Not that that matters. But you might be interested.

    The way you express yourself in that cite feels a little confused, as it accepts an object oriented ontology where the mental and material would be two ways of a subject interacting with some thing and its properties.

    That may not be what you were thinking, but it is what you wrote.

    The systems theory approach I take stands against object metaphysics. As Peirce in particular made clear, reality is not a collection of objects but a process of manifestation. It boils down to an interaction where global constraints (or information/memory/sign/context/law/habit) reduce fundamental uncertainty (or entropy/vagueness/degrees of freedom).

    The outcome of this process of constraint on uncertainty is an emergent world that is full of object-like structure with property-like interactions. So you recover an object oriented reality as the emergent fact. Concrete stuff exists as abstracta suppress variety and leave behind a state of relatively definite facts.

    But at a deeper level, there is only a “materiality” in the sense of an Apeiron - a formless ocean of fluctuation or possibility. And there is then the “mind” that arises by imposing an order or regularity on this shapeless energetic potential.

    This was the ontology proposed by the first mathematically minded metaphysician, Anaximander. Peirce nailed it as a general logic of being. Quantum theory confirms it as scientific fact. And so does the more recent convergence of statistical thermodynamics and information theory.

    But as an organising idea, it remains pretty much completely outside the tradition of philosophy. I think that is why you might have a problem if your own project is to formulate an over-arching view by responding to the vast range of object oriented confusions baked into traditional philosophy. Even Whitehead is part of the problem, not part of the solution.

    Where we connect seems to be in our structuralism - structure being process crystallised or stabilised into a functional and self-reconstructing flow. Structure is where things develop to a point that a system does have an explicit divide between its constraints - its channels, switches, barriers and other informational order - and the surging uncertainty or plastic growth it directs into extending its own realm of stabilised order.

    I come from the science side of things - hierarchy theory, complexity theory, thermodynamics, etc - where this kind of organic structuralism is exactly what is being modelled. But most of this is new science made concrete only in the past 50 years. And even within science, this systems thinking is counter to a long standing object oriented metaphysical tradition. That’s why it’s not much heard about as a new deeper ontology.
  • Architectonics: systemic philosophical principles
    It's not world on the left, mind on the right, but mind-to-world on the left and world-to-mind on the right.Pfhorrest

    My approach to dichotomies treats them as the mutual limits on possibility so you are always talking about relative states of affairs. So "world" and "mind" are just complementary bounds on being. As humans, we have to develop a habit of reality modelling where our consciousness feels sharply divided into "a world" that then has an "us" in it - the "experiencing ego".

    So the world becomes defined as that part of being which has the least of that us-ness. And the us is the part which has the least degree of the world. The fact of that construction is shown by something as simple as finding your arm is dead after you slept on it.

    My reading of your diagram followed this pragmatic logic. We have to construct "the world" to construct our "selves". And vice versa. It is a two way psychic street. This contrasts with Cartesian dualism, where both world and mind are granted substantive reality. It is instead the basis of what Peirce called his objective idealism. Or Kant, his transcendentalism. (If you ignore the lingering religious leanings of both.)

    So I see your whole map as a map of the pragmatic effort to construct the reality of being. It is not about the Cartesian project of a mind-soul that knows the world in a passive but directly perceiving way. It is a pragmatic Peircean consciousness where we are refining the intellectual tools to work on both sides of this act of co-construction. One side of the knowledge map is focused on technical control over the appearance of the world. The other is focused on the technology for the making of a complex modern selfhood.

    It may or may not be what you had in mind explicitly. But it is what jumps out for me. And I've rarely seen something that makes as much sense. Is this diagram something you have published or plan to?

    I could quibble over details.

    For example, one of the striking things about geometry and algebra is that they are themselves an exact-seeming dichotomy. Every description in one language maps to a complementary description in the other. Michael Atiyah writes nicely about that. Algebra models relations as points in time and geometry as connections in space - https://people.math.umass.edu/~hacking/461F19/handouts/atiyah.pdf

    So maybe those two segments should be the same size - mirror images.

    And maybe you are saying that dynamics/calculus are geometry plus time, while harmonics/trigonometry are algebra plus space? So rather than four quadrants, you have two halves with their subset extensions.

    And does arts chop up the same way?

    On logic vs rhetoric, what I think the diagram gets right is that language is conventionally divided up into the three things of syntax, semantics and pragmatics. So it is neat that logic is syntax/semantics - the technology of argumentation with the least possible constraint in terms of pragmatic embeddedness, while rhetoric can be defined in contrary fashion as the technology of argumentation with the least possible constraint in terms of syntactical correctness.

    Was that a lucky accident or your conscious intention there?

    Another random point is that Maslow's psychological hierarchy of needs could be a useful way to structure the human side of the equation - the hierarchy that goes from basic survival needs to self-actualisation. Securing the physics of life - energy and integrity - and then continuing towards the sociology of free individual action.

    That might reorder the trades hexagon, for example. Or the ethical sciences. It seems to match the sciences hierarchy already.
  • Architectonics: systemic philosophical principles
    Hah. That is a pretty neat diagram. I hadn't seen it. And it makes a lot of sense to me in that it can be read from a modern systems-thinking perspective.

    The systems view is a triadic logic in which you have a dichotomy or symmetry-breaking, and then the hierarchy or triadic state of organisation that fixes a stable relation between those two complementary poles of being.

    So very simplistically, a rabble of warriors make a fighting mob. Then the organised thing of an army can develop as the mob starts to divide into leaders and followers. You get the emergence of the dichotomy of infantry and general. Each complements the other in that the infantry acts in the immediacy of the now - the best choices in the heat of battle. The general then acts with the long term view.

    This local~global hierarchical division brings stability and coherence. We can speak of the army as an organism, and even an organ system as it develops specialised branches like a reconnaissance force, logistics, artillery.

    You even get a thermodynamic divide. The soldiers are the entropy - the grunt energy. The general is the information - the abstract ordering.

    So check your diagram. Language stands opposed to trades as these kinds of complementary partners. To the one side is an exploration of the concrete particularity of the informational machinery that organises the human system. To the other is the entropy harnessing side of human life, the trades that sustain our physical growth and action.

    Then in the middle - between the particularity of human language and the particularity of human trades - you have the rising abstraction, which would be a "philosophical" account of semiotics and dissipative structure in general.

    That is, the edges bleed out into local concrete particulars. The middle swells towards a generality of view where language vs trades can be seen as a metaphysically abstract distinction.

    Arts vs physical sciences is another natural divide. The difference there seems to be about control over nature vs control over our social selves. Which again is an entropy vs information distinction. Nature is the energetic resource. Culture is the ideational resource.

    But mathematics rather than art is how we actually gain semiotic power over nature. That is the language employed. While art is the language for regulating cultural organisation - at least art in the broad sense of the communication of values.

    That would be why you have maths next to physical science as its language kin, and art next to ethical science as its language kin.

    Of course, that raises the whole issue of whether the humanities have got it right in focusing on value-driven thinking. It seems subjective and lacking mathematical rigour.

    But on the other hand, if part of the whole job of being an organism is to construct a purposeful identity, then the ability to regulate subjectivity - the informational dimension to being a human sociocultural organism - is as crucial as being able to regulate our supporting material conditions. The entropy dissipation.

    So again, the arts and ethical sciences would exist as domains giving concrete and particular expression to the construction of subjectivity (as opposed to the mathematical/physical science focus on the construction of objectivity). And then moving back towards philosophy in the centre, it would supply the generalisation of those modes of production. What does it mean for anything to be an organism or have a subjectivity, a point of view, a purpose to be?

    So your hexagon has two axes. There is the north/south of informational constraint at the top and entropic dissipation at the bottom - to use the general systems way of describing a reality organised by semiosis.

    And then east/west is the flip between the objective and the subjective - the world and the mind. In the systems view, these are two mutually emergent parts of the whole. Mind arises as the informational model - the set of habits - shaping material reality into being. (That is "mind" in the non-mystical sense of an organismic nature evolving a regulating general purpose.)

    The hexagon has maths and physical science both sitting on the right side of this axis - the objective. Or as I emphasise, the construction of the objective. Which is why you need two boxes as this construction has both an informational and an entropic element. It needs a language and it needs a physics. With those, humans no longer just live in nature, we can refashion nature as a matter of choice.

    And arts and ethical sciences sit on the side of subjectivity. Or as I emphasise the construction of subjectivity. And this again reflects the need for a language and a social machinery that can refashion the "mind" as a matter of choice.

    I've got to admit that this last step breaks down to the degree that art has become entertainment rather than group instruction. And to the degree that social organisation might be achieved through the communication of feelings rather than reasons.

    This side of the hexagon has suffered softer development compared to how far humans have moved in gaining semiotic control over nature. But on the other hand, we are, as a practical fact, good at manufacturing human minds via socialisation.

    So maybe "art" is about the language-like technology we have for deeply engaging in that production of subjectivity. And that is matched by the language-like technology - maths - that we have been perfecting for deeply engaging in the production of objectivity, the ability to fashion physical reality to our desires.

    You do squeeze in logic and rhetoric as the two hinge points. So maybe "arts" could be relabelled "propaganda"? :wink:

    Anyway, your diagram immediately makes more sense to me than anything else so far.

    It splits into metaphysically general halves - the general dichotomy of global informational constraints and local entropic degrees of freedom (the Peircean division of firstness and thirdness). And also into the metaphysically specific halves of how humans can harness this technology of semiotics to fashion the world vs fashioning the mind.

    And the connections between the four quadrants are achieved by the maths~physical sciences and arts~ethical sciences linkages. While in the centre, all paths cross through philosophy, so that the concreteness of specific detail achieved at the periphery of this cognitive empire is matched, along the further axis, by a uniting metaphysical generality of mechanism.
  • Architectonics: systemic philosophical principles
    Reading both the IEP article and the Architecture of Theories paper I referenced shows that Peirce's game was to impose a uniting classificatory scheme on knowledge creation that might reveal his pragmatism/semiotic to be the natural source of such unity.

    So the distinctive thing about the Peircean system of thought is its hierarchical triadicity - the semiotic of firstness, secondness and thirdness. And generally speaking, it is an organic conception as it is developmental - the hierarchical accumulation of constraints on uncertainty.

    It is this organicism - as opposed to the nominalism or mechanicalism of the usual approaches to hierarchical organisation - that could be the uniting glue, the functional pulse bringing knowledge creation to life as a living pragmatic exercise.

    So in the Architecture of Theories, Peirce says that too much of the history of philosophy seems like a haphazard bunch of cottage industries. They are not architecturally designed, starting from an understanding of epistemic basics - ie: pragmatism. His survey of the sciences then shows that - despite science supposedly being the application of a mechanical turn of mind - actually his own organic and developmental conception of existence is the deeper view being revealed. And so this is the architectural lesson that philosophy should be learning too if it wants to become properly systematic in its endeavours.

    The IEP article then talks about Peirce's efforts to impose just such an organic hierarchy on philosophy and science. The exercise goes awry partly because of a deep confusion between the two general kinds of hierarchical organisation - the compositional and the subsumptive. That was a confusion in Peirce's own work in my view. Stan Salthe has done the best clarification for my money.

    IEP notes this difficulty....

    The first thing to clarify is that the sub-ordinacy of philosophy to mathematics, or metaphysics to phenomenology, is not sub-ordinacy in the sense of embeddedness, i.e., philosophy is not a sub-branch of mathematics. Of course, embedded sub-ordinacy does occur in Peirce’s classification where, for instance, aesthetics is a sub-branch of Normative Science, just as ethics and logic are. However, ethics and logic are not sub-branches of aesthetics, even though they are sub-ordinate to it. So, what is the nature of the non-embedded sub-ordinacy of, say, philosophy to mathematics?

    Things get screwy because the relation could be that of the general to the particular, or the vague to the constrained.

    But a sense can be made of the hierarchy IEP describes where maths is the most general discipline in terms of being the most abstract level of rationalisation and philosophy is a concrete expression of that rationalising habit. We are in the realm of Platonic forms, but moving towards engagement with the world. Then science is the habit of rationalisation properly engaged with the world as empirical knowledge creation.

    Then philosophy itself would be divided into a bunch of threes. Philosophy is composed of the triad of phenomenology, normative science and metaphysics. Normative science then subdivides into the triad of aesthetics, ethics and logic. And logic divides into the triad of philosophical grammar, critical logic and methodeutic.

    Philosophy is divided into three orders: phenomenology, or the science of how things appear to us; the normative sciences, which study how we ought to act; and metaphysics, the study of what is real.

    So sure a familiar template is being hinted at - firstness, secondness and thirdness. The universal growth of reasonableness as the basis of all existence.

    But does the abstract reasonableness of maths really emerge before the concrete reasonableness of philosophy? Mmmm.

    And does the concrete reasonableness of philosophy start in its most general form with phenomenology - reasoning about the brute firstness of appearances - then develop via reasoning about the secondness of mediating interactions, and thirdness of rational habits as they must be expressed in nature itself?

    It can sort of work. But it is not especially convincing as it tends to mix up the two views of hierarchical order - the compositional and the subsumptive.

    However the point is that Peirce was looking for an architectonic unity in thought through the lens of a triadic organicism. That is the classificatory hierarchy he wanted to apply.

    The unity lies in the fact that reasonableness has just a single functional form. Within that, there are the three natural divisions of firstness, secondness and thirdness. But which is then the ground - the firstness of raw possibility or the thirdness of established regulative habit?

    If maths is ground to philosophy, or phenomenology ground to metaphysics, in which of these senses exactly?

    And is aesthetics ground to logic, or philosophical grammar ground to methodeutic, in either sense really?

    That is why I say his architectonic works best for showing there is a unity of method that spans all reasonable human inquiry. And thus for diagnosing problems like scientific nominalism or philosophical monism where the full triadicity of a systematic approach is not being applied, leaving the discourse - the community of inquiry - stunted.
  • Architectonics: systemic philosophical principles
    The rest of what you've described of Peirce's sounds like an epistemology.Pfhorrest

    How is an OP on the architectonic structure of theories not an epistemological question?

    The surprise might be that there is a single general answer here rather than a bunch of unrelated methods, each perhaps applicable only to their own domain. But there you go.

    people here have called my approach "architectonic", a word I wasn't previously very familiar with,Pfhorrest

    Your approach is more a classification scheme - an Aristotelian exercise in categorisation.

    Architectonic is about a general functional structure to the act of inquiry, not describing the way the flow of human inquiry then breaks up into reasonably distinct domains.

    Though the two are of course connected as the various constraints might be loosened or tightened to cause the uniting flow to fragment.

    For example, some can say subjective feelings are a reasonable form of inductive evidence. If I believe it simply feels morally right to be vegan, then that is what goes. Others demand objective evidence.

    The larger view then says all evidence is subjective - the nature of measurement is a choice we make in constructing our theories. But then inquiry can aspire to objective evidence in that it meets the constraint of being the most generally shareable. It is a little hard to calibrate my sense of right against yours. But we can both read the same numbers off an instrument dial.

    So the unitary structure can produce local variety. Do you want to talk about the general principles of inquiry - which are as deep as nature itself - or the cultural variation that organises itself into academic domains?

    It would still be an interesting exercise to capture that variety with the fewest number of categorical distinctions. Science - being well organised - does have its familiar reductionist hierarchy. Philosophy instead tends to organise itself by its human applications and its dialectical oppositions, as you note.
  • Architectonics: systemic philosophical principles
    Lists don’t really cut it if what you are reaching for is an account of a functional architecture. You need to understand the structure of the flow. And it is a flow that caps out in an enfolding hierarchical organisation.

    All systems philosophy - Aristotle, Hegel, Peirce - is triadically structured. So it is irreducibly complex or holistic. That is, it's bootstrapping. The whole is shaping the parts even as the parts are constructing the whole.

    But to start somewhere simple, you could consider Peirce’s division of natural reasoning into the three steps of abduction, deduction and inductive confirmation.

    So you start the loop by spotting some fruitful guess - a hypothesis. You have some vague sense of what the purpose for framing a question might be, and a vague idea about what could be the right type of answer.

    Every philosophical or scientific inquiry is going to start like that, right? A question that seems worth answering, even if the question itself is still hazy in its exactness. And likewise a sense that it is answerable as it has the right shape for an answer to fit.

    Next comes the fleshing out of some exact model or theory that makes the hypothesis precise. Deductive reason can draw out a variety of logical consequences - how things should be if the core idea is correct.

    Then third comes induction - the empirical bit. The formal model spells out the measurements that are sufficiently binary to give a thumbs up or thumbs down. Evidence gathers as you compare the predictions of the theory against the apparent facts of the world.

    So you do all that and find the model works or the model fails. And keep going back around the loop until the model is refined in a way that feels satisfactory.

    This threshold is thus defined by your purpose. What is the ultimate goal? And how private and subjective is that versus how publicly shared and objective?

    That in a nutshell is a modelling relationship which is both prescriptive enough, and flexible enough, to cover reasoned inquiry in all its possible variety.
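
    Just to make the shape of that loop concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration - the hidden "world" is a made-up linear law, and abduction is reduced to blind guessing, which badly undersells Peirce's point - but it keeps the three-step structure visible: guess a model, deduce its consequences, measure the surprise, and go round again until the error feels satisfactory.

```python
import random

# Toy sketch of the abduction-deduction-induction loop. The "world" here is a
# hidden law y = 3x + 2 plus noise - purely an illustrative stand-in.

def world(x):
    return 3 * x + 2 + random.gauss(0, 0.1)

def abduce():
    # Abduction reduced to a blind guess at slope and intercept - the hunch.
    return random.uniform(-5, 5), random.uniform(-5, 5)

def deduce(model, xs):
    # Deduction: draw out the logical consequences of the guess.
    a, b = model
    return [a * x + b for x in xs]

def induce(predictions, observations):
    # Induction: compare predictions against the apparent facts (mean squared error).
    return sum((p - o) ** 2 for p, o in zip(predictions, observations)) / len(predictions)

xs = [1, 2, 3, 4, 5]
observations = [world(x) for x in xs]

best, best_error = None, float("inf")
for _ in range(10000):              # keep going back around the loop...
    model = abduce()
    error = induce(deduce(model, xs), observations)
    if error < best_error:          # ...until the model feels satisfactory
        best, best_error = model, error

print(best, best_error)
```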
  • Entropy, diversity and order - a confusing relationship in a universe that "makes""
    Topics to make one gag or snide, right? Spewed by none other than Peirce.javra

    But isn't evolution a balancing of the competitive and the co-operative? That's what ecology says.

    Peirce's religious excesses are what they are. To be taken in context.

    More pertinently, the question concerning the disparity between IT’s model of entropy and the thermodynamic model of entropy has not been answered clearly, if at all.javra

    What disparity? They are formally complementary modes of description.

    See Wiki - https://en.wikipedia.org/wiki/Entropy_in_thermodynamics_and_information_theory
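
    For reference, the textbook bridge that article spells out: with the same probability distribution over microstates, the Gibbs/Boltzmann and Shannon expressions differ only by Boltzmann's constant and a change of logarithm base.

    $$ S = -k_B \sum_i p_i \ln p_i, \qquad H = -\sum_i p_i \log_2 p_i, \qquad S = (k_B \ln 2)\, H $$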
  • Entropy, diversity and order - a confusing relationship in a universe that "makes""
    Because of this, until I stand corrected, I’ll be addressing entropy as the terrain which we do our best to model.javra

    But it is only the differences that we would experience or measure. And "entropy" talk is about imputing the mechanism.

    Time has a thermodynamic arrow. Entropy - measured as disorder - has a tendency to increase. The terrain seems to have this constant slope downwards.

    So are we being propelled down this slope by the hand of some global force? Or are we stumbling down this slope due to the local vagaries of chance? Entropy thinking is a claim about the imagined mechanism.

    When considering the metaphysical issue of identity: It can be argued that the universe’s identity as a whole is currently not maximally ordered, being instead fragmented into multiple, often competing, identities – residing within the universe, and from which the universe is constituted – whose often enough conflicting interactions results in a relative disorder, or unpredictability, and, hence, uncertainty.javra

    Yes. The universe at the age and temperature we live in right now is in the process of transiting from one extreme to the other. So you have this fragmentation that ranges from simple identities to complex ones.

    A mountain is an entropy dissipating structure. A monkey is too. Different grades of complexity can evolve as bits of the universe are hotter than other bits and provide the energy that allow the localised accumulation of information in the form of entropy-producing superstructure.

    But at the beginning of time, such variations from place to place were minimal - quantum level fluctuations around the maximal possible heat density. And at the end of time, they will again become minimal. But now as quantum level fluctuations around the minimal possible heat density.

    So complexity of entropic identity is just a passing stage we are having to live through at the moment.

    On the other hand, when considering the cosmos’s identity as a whole: increased entropy will simultaneously result in an increased order of the cosmos’s being as a whole - this till maximal entropy is obtained, wherein the identity of all parts of the cosmos vanish so as to result in a maximally ordered, maximally harmonious or cohesive, and maximally homogeneous identity of the universe. From this vantage, increased entropy leads to increased order (namely, relative to the universe as whole).javra

    It is a kind of exchanging of one form of order for another. Or one kind of disorder for another. And that is why talk about order vs disorder tends to drop out of the conversation. As concepts, they become too simplistic.

    At the beginning, the Universe is all potential, all becoming. At the end, it is all spent, all become. So something has been wasted to get there. Or has something been achieved?

    We humans can project our value systems on to the scientific facts either way. The accepted scientistic view is to see it as a journey arriving at meaningless waste. You prefer to read it as achieving some ultimate good state - call it Nirvana.

    I say it is what it is. And the remarkable fact looks to be that we count as a high point of that fragmented identity which is the universe in the middle of its grand transition from universalised everythingness to universalised nothingness. We exist at the time of maximal somethingness. This is the time when local complexity - informational densities - can be its own thing.
  • Architectonics: systemic philosophical principles
    One of the amazing things about ideas though, especially philosophical systems, is that they are perspectival; every well thought out idea is a perspective on the world and generates a view on other ideas connected to it.fdrake

    What distinguishes architectonics is that it is speaking to the unity of this view taking. And it is an anti-nominalist, and hence systematic, meta-theoretic position. Perspectives are connected to their consequences by some optimising relation. So you can't have philosophy as just a bunch of disconnected views.

    And if philosophy does develop into a network of neighbourhoods as you say (which is a fact), then I would say there are two principal sources of variety. One would be a community agreement about the optimisation value in play - beauty, truth and good would be three of the familiar choices. A notion of least action or information reduction would be the more scientific pick.

    The other thing that happens is that every "well-formed" perspective has to contain the possibility of its dialectical "other". You say idealism, I say realism. You say system, I say atomism. So rather than a loose network connection - a flat many to many relation - you have opposed schools of thought adopting complementary perspectives (that, architectonically, ought really be fused into the unity of a single hierarchical systems account).

    So a "perspective" is a pragmatic modelling relation. You start with the thing-in-itself. Some messy set of impressions about "reality" - the explanandum - you want to get your nut around. To get beyond this immediate subjective response - to transcend it - you have to create some kind of perspective. You have to step outside reality and form a model, an explanatory account, some generalised framework of such phenomena.

    But in doing that, the perspective then allows you to generate predictions about concrete particulars. Now to the other side of the messy phenomenology, so to speak, you construct a detailed image of a bunch of answering measurables. You no longer experience the mess as the mess but as an atomised collection of known particulars.

    A warm furry ball becomes understood as a "cat" because it has all the right details, like "those pointy ears" and those "retractable claws". The perspective is tied to empirical consequences. And it is tied to them by some optimising rule. The perspective works in some sense that meets the goal of reducing confusion about what might be the case concerning "the world".

    So.... MODEL >>> "messy world" <<< MEASURABLE FACTS

    If the messy world had no unity or systematic regularity itself, we would never be able to extract such a relation. And - architectonically - progress in philosophy would be about moving towards the system of model and measurement that does the best universalising job of clarifying all messy impressions.

    For that reason, being truthful, honest, precocious, exploratory and recognising limitation and fallibility is much more important than doctrine; care how you generate your perspective and the rest will take care of itself.fdrake

    Yes, the problem is that because every theory is defined in terms of the type of facts it imagines, then it is easy enough to get trapped into a self-satisfying loop. If I think all cats have pointy ears, then I might identify a Tasmanian devil as a cat.

    But rather than stressing a set of "ideal human inquirer" values, discussing modelling (or perspective taking) at a meta-theoretic level would produce its own philosophically general cautions.

    For instance, a "good" model achieves the maximum possible information reduction. It gets things so right that it needs the least effort when it comes to measuring the facts. Our brains do this when they learn to recognise "cat-like" configurations of features. The answering act of measurement is itself a gestalt reaction rather than a laborious listing of atomised details. You could boil down the conformity to a single number between 0 and 1, as recognition technology might do.
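
    A toy sketch of that "single number between 0 and 1", since it is easy to show in code. The features, weights and threshold below are all made up for illustration - real recognition systems learn them rather than hand-code them - but the shape is the point: many atomised details get squashed into one gestalt degree of conformity.

```python
import math

def cat_likeness(features):
    # Invented weights for illustrative features; a negative weight counts against cat-ness.
    weights = {"pointy_ears": 2.0, "retractable_claws": 2.5,
               "warm": 0.5, "furry": 1.0, "barks": -3.0}
    score = sum(weights[f] for f in features if f in weights)
    return 1 / (1 + math.exp(-(score - 2.0)))   # logistic squash into (0, 1)

print(cat_likeness({"warm", "furry", "pointy_ears", "retractable_claws"}))  # ~0.98
print(cat_likeness({"warm", "furry", "pointy_ears", "barks"}))              # ~0.18
```

    Note that the Tasmanian devil worry above is then a worry about the weights being wrong, not about the form of the score.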

    And the reason for wanting to move away from human-centric criteria is that - for Peircean architectonics at least - the ultimate revelation is that epistemology is ontology. The modelling relation not only is the "mental" algorithm that discovers the underlying unity of nature, it is the very way that nature produces "material" unity in itself.

    To stress the qualities of the philosophical mind when confronted with the mysteries of the brute world is to stay stuck in the Cartesian framing that both Kant and Peirce were intent on transcending. It is halting that progress towards a fully unified "view of everything". And as I say, with Peirce, the epistemic modelling relation becomes itself the best model of cosmic evolution. The reason anything definite could come to exist, such that it would be amenable to our attempts to decode its grand patterns.

    It was that ultimate flip in viewpoint that he was getting at here....
    The Architecture of Theories By Charles S. Peirce
    https://arisbe.sitehost.iu.edu/menu/library/bycsp/arch/arch.htm
  • Architectonics: systemic philosophical principles
    In this thread I'm interested to hear if other people have their own core principles that they think entail all of their positions on all of the different philosophical sub-questions, and if they think that there are common errors underlying all of the positions that they think are wrong.Pfhorrest

    The point of architectonics would be to get to the root of what "knowing" could even be. What would be its natural, and hence inescapable, organisation? It is the meta-philosophic question.

    And Peirce, with his pragmatism and semiotics, nailed it. To know is to be in a modelling relation. It is about forming a world predicting machinery - a rational engine with a useful goal - that is cashed out by the answering measurements it expects to find. And in being able to form such definite expectancies, the model can also be confounded by its mistakes or surprises. It can be wrong. So it is driven by the feedback of its own mispredictions to improve its modelling.

    So the argument is that nothing else could properly constitute knowledge. There is the one general architecture - even if there might then be a variety of actual models of that architecture, such as Peircean semiotics, Bayesian reasoning, Rosen's modelling relation, Grossberg's adaptive resonance networks, etc.

    It doesn't matter much what the philosophical sub-question is. Knowing is a modelling relationship. It is how nature designs our own minds. It is how we would have to build a knowing machine.

    That being so, it would be almost impossible not to be using it. The errors would arise more from thoughts about what are the right ultimate goals of some act of trying to know. And then also from a failure to understand the natural limitations of pragmatic inquiry as a system of model construction.

    For example, the realist minded might believe that modelling delivers truth. The measurements that arise from predictions count as cold hard undeniable facts. But Kantian architectonics already showed that, as modellers, we can't transcend the model. We can predict X - X being some judgement of the senses. I can easily tell you that leaf is green, not red, because I'm not colourblind. But we know from science now that wavelengths don't have colour. Or at least we know how to point a light meter at a source and read off numbers that relate to some theoretical model.

    So if the goal is to know "the truth", that simply misunderstands the nature of the modelling relation. It is an error. But on the other hand, most people just want "truth" of a pragmatic kind - enough to serve some purpose they have in mind. They can be satisfied they have "the facts" because responding to the world in that fashion doesn't result in unwanted surprises.

    There are many other aspects of this pragmatic machine. You could discuss the different kinds of modelling forms - the logics - it might employ. Some would be too simple for some purposes, others too complex. It would be an error in some sense to use one when the other is better suited.

    But if we have a common goal in mind, then it does become a competition of what works best to model in such a way that we can demonstrate minimum surprisal.

    (Of course you have to then agree to make definite predictions in a form that is comparable. Much bad philosophy avoids naming observable consequences and instead predicts vague feelings or frank unobservables. Yet it still apes the architectonic form which claims: I have a model, and this counts as evidence ... to me.)
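
    If it helps, "minimum surprisal" can be given a completely concrete toy reading. Below, two invented models assign probabilities to the same invented run of observations; the surprisal of each observation is -log2 of the probability the model gave it, and the model that racks up less total surprisal is the one that "works best" for the shared purpose.

```python
from math import log2

observations = ["rain", "rain", "dry", "rain", "dry", "rain"]

model_a = {"rain": 0.7, "dry": 0.3}   # bets on rain
model_b = {"rain": 0.3, "dry": 0.7}   # bets on dry

def total_surprisal(model, data):
    # Each observation costs -log2(p) bits of surprise under the model.
    return sum(-log2(model[x]) for x in data)

print(total_surprisal(model_a, observations))   # ~5.5 bits
print(total_surprisal(model_b, observations))   # ~8.0 bits
```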
  • Entropy, diversity and order - a confusing relationship in a universe that "makes""
    But to rephrase things in as simpleton a fashion as I can currently produce: The entropy of given X within the universe leads to disorder relative to given X (its permanency, or identity, or determinacy steadily ceasing to be), but simultaneously leads to greater order in respect to the universe itself as a whole. Entropy thereby simultaneously increases disorder and order relative to parts and to everything, respectively. Is that about right?javra

    Sorry, I don’t think I follow. Entropy is a measure of where some system X might be on a spectrum between maximal order and maximal disorder - if we are speaking very simply.

    So a pack of cards might be completely ordered in terms of suit value - ranked in sequence that has zero uncertainty from that point of view. Or it might be completely disordered in being so well shuffled you couldn’t guess what came next at a level better than chance. Or it might be somewhere in between in its shuffle so that you could still guess one card would follow the next in sequence to a degree.
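
    To put rough numbers on that spectrum (a back-of-envelope illustration, nothing more): entropy here can be read as the number of bits needed to pick out which arrangement the deck is actually in.

```python
from math import factorial, log2

# A deck known to be in perfect suit-and-rank order has one possible arrangement,
# so zero bits of uncertainty. A thoroughly shuffled deck could be in any of 52!.

print(log2(1))              # 0.0 bits - no surprise at all
print(log2(factorial(52)))  # ~225.6 bits - maximal surprise about the sequence
```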

    Reductionism likes to emphasise that random local action will always arrive at a perfectly shuffled deck. An arrangement that offers the least predictability. Mindless nature can have an entropic arrow simply because of unmotivated statistics.

    But I was countering this kind of happy metaphysics by saying it builds in presumptions - like that nature just comes with brute degrees of freedom in the way our imagination supplies us with these handy decks of cards and bags of balls that constitute a reality already pre-atomised.

    So what is determinate - the concealed presumption in the OP - is that a bag of identical balls can just exist. The balls don’t fluctuate through all kinds of possible identities, just as the deck of cards doesn’t muck about in any fashion and just passively lets you shuffle it.

    But we know from fundamental physics that any notion of countable particles disappears as you reach the energy densities of the Big Bang. Standard notions of entropy counting cannot apply in any simple fashion. And the same applies at the Heat Death in a different way.

    So entropy is a modelling construct - and all the better for the fact that this is not disguised. The mistake was to talk about energy as if it were something substantial and material - a push or impulse. And now people talk about entropy as a similar quantity of some localised stuff that gets spread about and forces things to happen.

    This seems to be what you have in mind here, but I’m not sure. My point was about how the entropic/informational approach to physics can free you from one sided materialistic conceptions. A fuller systems metaphysics is implicit in the maths once you get past the usual introductory examples.
  • Entropy, diversity and order - a confusing relationship in a universe that "makes""
    Just popped back to check an old post. Nice to see a few metaphysics threads going. :)
  • Entropy, diversity and order - a confusing relationship in a universe that "makes""
    You’ve made use of both notions. How do you make sense of them in manners devoid of equivocation? Hopefully I’m missing out on something here.javra

    Hi Javra. As Shannon made clear, these would be physically complementary perspectives. The information and the dynamics. But also, that fact gets confused because reductionist science wants to still strip its metaphysics down to a world devoid of meaning. So we have the paradox that information theory winds up counting noise rather than signal. A bit might well have significance, but information theory just locates it as an atomistic position - a bare material absence or presence.

    So again, reductionism gives a useful first order model of reality. But it begs the question as soon as you want to do real philosophical work. Almost anyone trying to be scientific about metaphysical questions finds the whole discussion going off the road, as standard science is designed for modelling a world that already has its global constraints (its laws) and local constants (its atomistic grain) baked in as unexamined presumptions.

    A systems view is based on all four Aristotelean causes. Reductionist science wants to account for the world only in terms of material/efficient causes as its atomistic variables. So that is what still frames the discussion whether we are counting entropy in terms of informational bits or dynamical degrees of freedom. The holism is collapsed and hidden in the fact that information and dynamics are united by "the Planck scale", where "it" is "bit", and vice versa.

    You would have to crack open the machinery of the Planckscale - the triad of constants that are the speed of light, the strength of gravity, and the uncertainty of the quantum - to find where the deeper holism has got stuffed. (Clue: G and h are in a reciprocal relation to define fundamental location vs fundamental action, and c then scales the interaction to give you an emergent direction for temporal evolution.)
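
    For reference, these are the standard Planck combinations being alluded to (written with ħ rather than h); how much metaphysical weight the reciprocal placement of G and ħ can bear is of course the contentious part.

    $$ \ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6\times10^{-35}\ \text{m}, \qquad t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.4\times10^{-44}\ \text{s}, \qquad m_P = \sqrt{\frac{\hbar c}{G}} \approx 2.2\times10^{-8}\ \text{kg} $$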

    So the informational bit and the dynamical degree of freedom are not an equivocation but the same thing seen from its two possible directions. The informational angle stresses the formal/final half of the systems view - that which speaks to a capacity to constrain action to a location such as to make definite some atomistic degree of freedom. And then the dynamical angle speaks from a material/efficient perspective where such a degree of freedom simply exists ... in some brute fashion as a given. The constraints acting to make this so are extra to the model.

    And then equivocation of a kind does arise when the constraints-based production of a bit is taken for granted as likewise a brute material fact with no systematic history. This is what happens when physics seems to say reality is made of bits, as if it were a material rather than a meaningful limitation that has created a “material” possibility.

    Yet even as an equivocation it is a useful one for founding models of semiotic complexity. It is a huge fact - one that legitimated the whole exercise - to be able to show that there is an irreducible physics of symbols. The old Platonic division between matter and idea does actually reduce to a Planckscale commonality. There is a baseline size to counterfactual definiteness. A bit of noise or entropy is the same size as a bit of signal or negentropy when you drill down to the simplest possible level of material description. And then having a fundamental basis for the measurement of cosmic simplicity, you can do what reductionist science is so good at doing - add levels of more complex systems modelling on top. Like chemistry, biology and sociology.

    Discovering the equivalence of Boltzmann entropy - dynamical degrees of freedom - and Shannon information entropy was an epochal move. And what united them was Planck scale physics.

    Physics can now recover a full systems perspective from that. As it is doing with its information theoretic turn and attempts to recast quantum theory in the language of contextual constraint (decoherence, etc).

    A simpler way to put it might be that information theory is seeking its least meaningful quantity - the bit that could be countably present because it could be countably absent. Dynamical degrees of freedom are likewise the least form of material action that is countably present vs countably absent. And because reality is a system, based on an interaction between laws and actions, constraints and possibilities, regulation and dynamics, the search for the smallest scale of definite existence - a grain of being - arrives at the same place whichever route you take.

    The further complication - the third rung issue I cite - is that the actual Universe only arrives at this physical limit of counterfactual definiteness at the end of time. It is the great fact that evolves.

    Or equivalently, if you unpack the machinery of the Planckscale maths, the end of time is also the biggest and flattest possible state of things. A cold and even void very definitely exists in a way that was not the case at the beginning, when all you could say was there was a state of indeterminate potential.

    So it takes three steps back to see the wholeness.

    Step one creates the reductionist view of an atomised ground. Reality is composed of bits. And both the informational and the dynamical perspective arrive at a counting system to handle that.

    Then step two is to see that information and dynamics are the two complementary halves of the one deal. The maths of the Planckscale encode the fundamental largeness of reality as much as its fundamental smallness. A reciprocal relation is what is baked in, but rarely highlighted.

    Then step three is to see that this very distinction - of maximal largeness and smallness, or order and disorder, spatiotemporal extent and local energy density, and other ways of describing it - themselves are a feature that has to emerge via a process of development. Crisp counterfactuality is where things arrive as they cease to change at the end of time. It is only when things get very cold in a very big world that even quantum fluctuation arrives at its residual level.

    An observer of the Heat Death could look around and be sure that there is just nothing happening, in the most extreme possible fashion. The cosmos still expands at lightspeed. And that creates event horizons that must radiate. So material dynamics is in full play. But it is equally devoid of informational difference. It is so homogenous that it is just an eternalised nothing.

    The glass is both completely full and completely empty, and so its counterfactuality is expressed not just locally but globally. If a Heat Death photon represents some hope of an energetic disturbance, a local perturbation, well, it has now been stretched so that a single wave beat spans the visible universe and thus can do no work inside that event horizon.

    Thankfully we exist because the universe had to cross over from one kind of simplicity to the other. At the Big Bang, there was no stable counterfactuality in terms of global informational constraint or local dynamical degrees of freedom. At the Heat Death, the two are united by a local~global homogeneity. Halfway through the story, there is an abundance of stars and chill vacuum. There are many localised gradients where energy densities can bleed into heat sinks. The grand equilibration process is still in complex unfolding. Reductionist science has eons before its celestial accountancy is redundant.
  • Entropy, diversity and order - a confusing relationship in a universe that "makes""
    How do we approach order in a world whereby everything is both qualitatively the same (energy) but also qualitatively different (mass, time, space etc)?Benj96

    What you are drawing attention to is that “disorder” is a relative claim. The question becomes “disordered in relation to what kind of expectation, meaning, purpose or constraint?”

    So a more general definition of entropy would be grounded in an information theoretic perspective. What about this world counts as a degree of uncertainty or surprise in relation to my simplest model of it as a system?

    You can see that your first system - 20 identical balls - is already a highly constrained or ordered one as you have somehow managed to reduce all possible surprise as to the colour of the balls. Surprise is minimised. Your world is completely predictable on that score.

    A truly entropic situation would be if the balls could randomly take on any colour at any time. Even as you grouped them, they could switch colour on you. Or split, merge, be in multiple places at once, etc.

    So note how the standard mental image of an entropic system already smuggles in an atomising assumption - some stably countable degree of freedom like a particle that itself is already in a highly negentropic state of constraint. The particle and its qualities are made as homogenous as possible so that - by contrast - a chosen variable like location becomes maximally surprising. The thing you have the least information about, the least control over ... until you get grouping and impose order over that too.
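
    As a crude way to see how the bookkeeping depends on what has already been fixed (the numbers here are invented): fix the colour of all 20 balls in advance and the colour uncertainty is zero bits; let each ball independently be any of, say, five colours and it jumps to 20 × log2(5) bits.

```python
from math import log2

n_balls, n_colours = 20, 5

h_fixed = n_balls * log2(1)          # colour pre-constrained: 0.0 bits
h_free = n_balls * log2(n_colours)   # colour left to swing free: ~46.4 bits

print(h_fixed, h_free)
```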

    Of course, treating physical systems as if they were systems of particles - an ideal gas confined in a container and sat in a heat sink - is a useful model. If you are doing practical thermodynamics here on the warm surface of a planet floating in a cosmic heat sink with a temperature of 2.7 K, then the statistics of bags of marbles pitches things at a suitable level.

    But once you want to apply the concept of entropy to the Universe itself as a system, then you have to recognise this habit of including negentropic assumptions in your metaphysical accounts.

    Take the Big Bang to Heat Death story of a Universe that starts off hot and constrained and becomes cold and spread out. In a broad sense, nothing changes as the positive contribution to entropification in terms of a disordering of position is matched by a negative contribution in terms of an increase of resulting gravitational potential. If the universe was just a bunch of balls spilling out, then a gravitational gradient wanting to clump them all back becomes an ever swelling constraint on their apparently unconstrained kinetics.

    Of course, that in itself is way too simplistic a model of the actual universe, as it presumes that the Big Bang and Heat Death can be modelled in terms of countable degrees of freedom - definite material particles with a defined location and energetic state, and therefore a matchingly undefined degree of surprise as to the locations or energies they might have.

    In the Big Bang, any such degree of freedom is maximally indeterminate. The quantum uncertainty of any claim for identity is as high as it could be. So - relative to that accountancy point of view - the Big Bang was a chaos that became increasingly ordered by a process of spatiotemporal expansion. What got constructed was a developing heat sink that started to make particles - as localised energy densities - countable elements. After a while, the chaos got sorted into collections of quarks and electrons with their identities constrained by fundamental symmetry breakings.

    Then at the other end of the story, you have the Heat Death which - to our best knowledge - will be a state of immense order and uniformity ... measured from a relative point of view.

    At the Heat Death, you will be left with an empty vacuum that continues to radiate with only a zero point quantum energy. All particles will have been swallowed up by black holes that then themselves eventually evaporate. The contents of this world are black body photons with a wavelength of the width of the visible universe - the de Sitter horizon. Or an uncountable number of photons with a temperature within a Planck’s hairsbreadth of absolute zero.
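
    For scale, the standard Gibbons-Hawking estimate of that residual horizon temperature, taking a Hubble rate of the rough order we measure today, is

    $$ T_{\text{dS}} = \frac{\hbar H}{2\pi k_B} \sim 10^{-30}\ \text{K} \qquad (H \sim 10^{-18}\ \text{s}^{-1}) $$

    which is the sense in which it sits within a hairsbreadth of absolute zero.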

    So again, like the Big Bang, essentially a nothingness without a point of view. But still some kind of transition from a hot everythingness of an ur-potential to the chill emptiness of a generalised spatially structured void.

    Thus using entropy models to describe the evolutionary trajectory of systems such as the universe is tricky and fraught. But for quite understandable reasons. We have to make three shifts in our point of view to arrive at a point of view that is actually “objectively” outside the totality of the thing we want to describe.

    The first rung of the modelling is the standard entropy story. We have a bag of balls, a die with a fixed number of faces, an ideal gas with a defined number of identical particles. We are creating a world that is completely ordered or constrained in a way that, by contrast, leaves other aspects completely free or random. A world of degrees ... of freedom. So this is an internalist dichotomy. We stand inside a world where the contrast is between what we are certain of - some number of balls - and what we are matchingly uncertain about - their possible location.

    A second rung of modelling would be to recognise that this state of affairs is only relative to that constructed point of view. It could be otherwise. We could be certain about the location of the balls - clumped in this group - but uncertain as to their identity. So now your counting of entropy/surprise/disorder is relative to what you decide to fix vs what you leave to swing free. If you are imagining a system as a bag of balls spilling out freely, well, what about the gravitational pull that is a countering quantity of negentropy?

    Like cosmologists do, you would have to step up to a viewpoint where the creation of spacetime - as the great heat sink being manufactured to absorb what now looks to be some initiating Big Bang quantity of located energy - is also a thing to be counted in the final balance.

    Then from there, you need to step up to a third rung that achieves a viewpoint completely outside the system in question. If the Universe isn’t just a messy dispersion of degrees of freedom, nor even the orderly construction of a vast heat sink void, then you have to have an evolutionary tale that combines the local and global scales of what is going on in holistic fashion.

    Now you arrive at a picture where the very distinction you seek - order vs disorder - has to emerge into being. At the beginning of time - the Big Bang - order and disorder are radically indistinguishable as there is just an absolute (quantum/Planckian) potential. And at the end of time, you have the opposite of that. The Heat Death is the final maximal dispersion of that potential into the everlasting and unchanging definiteness that is an infinite void with a single temperature and an undifferentiated holographic glow of de Sitter radiation. Both locally and globally, there is maximal uniformity across all possible locations, along with a maximal number of those possible locations where something could have been different.

    So at the beginning of time, nothing could be counted as distinctive variety - individual bits of information or degrees of freedom. Everything was a hot quantum blur of potential. A quantified account can only be imputed retrospectively by the countable variety - in terms of a quantity of energy/a quantity of space - that we observe around us now.

    And at the end of time, the number of energy bits (Heat Death photons) and number of spatial bits (Planck scaled distances) will be matchingly infinite in number. So uncountable for the opposite reason of being in unlimited abundance and hence offering zero distinctiveness once more. A chill blandness of differences (radiation) that can’t make a difference (to the cosmically prevailing temperature).

    Standing on the third rung right outside the system that is the universe, we now see a transition from unlimited potential to unlimited difference (that also, matchingly, makes no meaningful difference).

    Each view of the situation can be correct. So the standard bag of marbles modelling works fine within its own limits. But also each enfolds the other as a succession of larger views. And the largest view is radically unlike the standard, or even the second tier relativistic models used mostly in cosmology.

    It is only when you get to quantum holographic-type models of the universe - de Sitter horizons, etc - that you start tracking everything that is emerging: marbles with some countable identity (surprising or otherwise) to have, along with countable locations that give them some place (surprising or otherwise) to be.
  • Monism
    You don't leave monism for a monistic-y anti-monism. You leave the very idea of a rational fixed-point.csalisbury

    The other ontic choice is to motor past dualism to arrive at the irreducible triadic complexity of a developmental or process view of "existence". You arrive at a better rational fixed-point than either monism or dualism.

    The problem with dualities - like mind vs matter - is that they don't meet the formal criteria of a dichotomy. Therefore they never really convince.

    But a full metaphysical strength dichotomy meets the definition of being "mutually exclusive/jointly exhaustive". You end up with two poles of being that are formally complementary. They are in fact mathematically reciprocal and thus mutually justified.

    Take a classic like discrete~continuous. And understand them as complementary limits of what could be the case - so processes which are about heading in directions rather than states of existence.

    To be discrete would be rationally defined as 1/continuous. And to be continuous would be similarly defined as 1/discrete. Each is the measure of the other. The more definitely you have the one, the more definitely you don't have the other. But you always still have to have both to have anything!
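
    Written out schematically (just a notational sketch, with D and C standing for whatever measures of discreteness and continuity you choose), the claim is a reciprocal pair:

        \[ D = \frac{1}{C}, \qquad C = \frac{1}{D} \quad\Longrightarrow\quad D \cdot C = 1 \]

    Each can only shrink towards zero at the price of the other growing without bound, which is the sense in which you always still have to have both.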

    So this is the trick which gets you past mere duality. You have a triadic story of a separation into two extremes that is the third thing of an interactively self-defining process.

    To be discrete is to measurably lack any evidence of continuity. And so a state of discreteness can only be as definite as that pragmatic metric. You might claim discreteness "for all practical purposes". And that rational position then harbours within it the "other" that is the continuous - rendered now as vague or indeterminate possibility.

    So - as CS Peirce said - the full logic of existence is developmental. And it starts with a "monism" of the completely vague or indeterminate. It then breaks rationally towards matched and reciprocal poles of being. Then step back from that and you can see how the whole forms a system, an interacting structure of being, a sign relation.

    The familiar duality of matter and mind just doesn't cut it. It compares apples and oranges. Matter is supposed to be talking about the fundamentally simple. But so is mind. And we know that mind is better understood as a complex embodied semiotic process - a modelling relation. It ain't another species of substance - a psychic stuff to rival the material stuff, setting up a disconnected duality of monisms.

    However if we want to get at some basic duality that works as a formal dichotomy, we can find it in the modern contrast between entropy and information. One stands for uncertainty or indifference. The other stands for certainty and meaning. And physics finds them to be reciprocal in a way that can be measured. The more you have of one, the less you have of the other. And at the Planck scale, they become fused. Order and disorder look like the same thing. It is indeterminate which you have.
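
    In its plain Shannon form (a toy sketch, not the Planck-scale physics), that reciprocity is exact: for a system with a fixed number of states, information as departure-from-maximum-uncertainty and entropy trade off bit for bit.

        # Shannon-style sketch of the entropy~information trade-off:
        # with H_max fixed by the number of states, I = H_max - H, so
        # sharpening the distribution lowers entropy by exactly the
        # amount that the information (negentropy) rises.
        from math import log2

        def entropy(p):
            return -sum(x * log2(x) for x in p if x > 0)

        H_max = log2(4)   # four possible states
        for p in ([0.25, 0.25, 0.25, 0.25],   # maximally uncertain
                  [0.70, 0.10, 0.10, 0.10],   # partly constrained
                  [1.00, 0.00, 0.00, 0.00]):  # fully determined
            H = entropy(p)
            print(f"entropy = {H:.2f} bits, information = {H_max - H:.2f} bits")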

    So monism equals ontological reductionism. And dualism arises by recognising that any ontic distinction - no matter how universal - has to arise as a dialectical contrast to its "other". If you individuate in some ontic direction, you also - reciprocally, measurably - have to be just as definitely leaving some other place behind.

    Is everything stasis? Well, can't see any flux right now. Is everything chance? Well, can't see any necessity right now. Etc, etc.

    So symmetry breaking involves moving towards a metaphysical limit by demonstrably leaving behind its metaphysical other. Limits can only exist if they are opposed. And being opposed, they have to be the third thing of being holistically related. That gives reality a perfectly rational irreducible complexity. You have to have a triadic, or hierarchical, story to give an intelligible account of existence.

    This is a very fixed point. :)

    But it includes your own epistemic distinction of the one vs the many. Pluralism is just another complementary extreme. If the whole is defined by achieving the limits of cohesion or integration, then the parts are defined by achieving the counter limit of being incoherent or differentiated.

    This again will seem a problem. The instinct remains to protest that it has to be either all about the integration or the differentiation.

    But instead, a triadic worldview says what we should hope for is a functional balance. States of affairs can only exist if they persist. And they can only persist if they find a complementary balance. The global cohesion and the local differentiation must in some sense be forming a feedback loop. More of one results in more of the other. You have a system that freely grows to become both more unified and more diverse at the same time, due to the very relation causing their existence.

    And again, maths and science now give us robust formal models of exactly this - which match what we observe in nature. We have all the maths of fractals, dissipative structures, scale-free networks, constructal theory, etc, that tells us this triadic/developmental ontology maps to the world as we know it.
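
    As a minimal illustration of the scale-free case (a toy preferential-attachment growth model with arbitrary demo parameters): the one growth rule simultaneously produces globally integrating hubs and a long differentiated tail of sparsely connected nodes.

        # Toy Barabasi-Albert style growth: each new node attaches to one
        # existing node chosen with probability proportional to its degree.
        # The result is a few big hubs (global integration) plus a long tail
        # of degree-1 nodes (local differentiation) - a scale-free signature.
        import random
        from collections import Counter

        random.seed(1)
        N = 2000
        degree = {0: 1, 1: 1}   # start from a single linked pair of nodes
        targets = [0, 1]        # each node listed once per link end it owns

        for new in range(2, N):
            old = random.choice(targets)   # degree-weighted pick
            degree[new] = 1
            degree[old] += 1
            targets.extend([new, old])

        hist = Counter(degree.values())
        for k in sorted(hist)[:5]:
            print(f"degree {k}: {hist[k]} nodes")
        print("largest hub degree:", max(degree.values()))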

    Triadicism has won. Structuralism is in again. The news is just taking a while to filter out.