• The mind and mental processes
    Isn’t the notion of semiotic encoding an atemporal
    concept?
    Joshs

    I don’t follow your questions.

    Sure, a serial code has to be communicated one discrete step at a time. But that is the structural limitation that also allows unlimited combination - chains of symbols as long as you like to stand for some state of meaning.

    So in being physically reduced to a string of informational bits, a state of informational constraint can be constructed that is itself atemporal in its effects. Genes can crank out the same protein at any time of choosing - Pattee’s point about rate independent information, if that was your issue. Likewise, a Shakespeare passage can be delivered at any time or place, and the time it takes to read shouldn’t bear on the actual interpretation.

    But what question are you asking?

    If we begin with a pattern, an ensemble of elements organized with a particular relationship one to another, and observe this pattern transform itself as a new whole from one moment to the next such that each new configuration is similar but not identical to the previous pattern ( and each element has also changed its sense and role with respect to the ensemble), can this be considered a semiotic process?Joshs

    What example do you have in mind here? Chinese whispers?
  • The mind and mental processes
    I think the question is, why can't a (super impressive, say mammal-imitating) neural network type machine be a zombie, just like a similarly impressive but old-style symbolic computer/android?bongo fury

    Howard Pattee – my favourite hierarchy theorist and biosemiotician (along with Stan Salthe) – wrote this on how even the question of living vs nonliving can be applied to machines. It all starts from a proper causal definition of an organism – one that clearly distinguishes it from a computer or other mechanical process.

    Artificial Life Needs a Real Epistemology (1995)

    Foundational controversies in artificial life and artificial intelligence arise from lack of decidable criteria for defining the epistemic cuts that separate knowledge of reality from reality itself, e.g., description from construction, simulation from realization, mind from brain.

    Selective evolution began with a description-construction cut, i.e., the genetically coded synthesis of proteins. The highly evolved cognitive epistemology of physics requires an epistemic cut between reversible dynamic laws and the irreversible process of measuring initial conditions. This is also known as the measurement problem.

    Good physics can be done without addressing this epistemic problem, but not good biology and artificial life, because open-ended evolution requires the physical implementation of genetic descriptions. The course of evolution depends on the speed and reliability of this implementation, or how efficiently the real or artificial physical dynamics can be harnessed by non-dynamic genetic symbols.

    https://www.researchgate.net/publication/221531066_Artificial_Life_Needs_a_Real_Epistemology

    Possibly apokrisis is following that reading, and saying that, paradoxically, consciousness happens as the organism strives to avoid it.bongo fury

    Where does the idea of a zombie even come from except as "other" to what popular culture conceives the conscious human to be?

    Everything starts to go wrong philosophically once you start turning the complementarity to be found in dialectics – the logical unity that underwrites the logical division of a dichotomy – into the false dilemmas of reductionism.

    Reductionism demands one or other be true. Dialectics/semiotics are holistic in that they say existence is about the production of dichotomous contrast. Symmetry-breaking.

    So brain function is best understood in terms of future prediction that seeks to minimise an organism's need for change - as how does an organism exist as what it is unless it can homeostatically regulate the tendency of its environment to get busy randomising it?

    You are "you" to the extent you can maintain an identity in contrast to the entropifying actions of your world - which for humans, is both a physical environment, and a social or informational environment.

    We can be the eye of the storm because we are the still centre of a raging world that revolves around us. That contrast is what we feel as being a self in a world.

    The neural trick to achieving this is a modelling relation which cancels away the changes the world might impose on us - all its unplanned accidents - and thus imposes on the world our "self" that is the world exactly as we intend it to be.

    The baseline has to first be set to zero by cancelling everything that is currently happening to the level of "already processed" habit. And from there, attentional processes can mop up the unexpected - turning those into tomorrow's habits and expectations.

    This is the basis of Friston's Bayesian brain. Neuroscience has got to the point that semiosis can be written out in differential equations. So Pattee's call for a proper epistemology of life and mind is being answered.

    For other reasons, I doubt this will lead to conscious computers. But it at least grounds the next step for neural networks.
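
    To make that concrete, here is a toy sketch of what "semiosis written out in differential equations" looks like at its most stripped down. This is my own illustration, not Friston's actual formalism: a single internal estimate is nudged in proportion to precision-weighted prediction error until the input has been predicted away.

```python
# Toy predictive-processing sketch (illustrative only, not Friston's equations):
# a single internal estimate mu tracks a sensory signal by gradient descent
# on precision-weighted prediction error.
def track(signal, mu0=0.0, precision=1.0, lr=0.1):
    mu = mu0
    trace = []
    for s in signal:
        error = s - mu                # prediction error: input minus expectation
        mu += lr * precision * error  # discrete analogue of dmu/dt ~ error
        trace.append(mu)
    return trace

# A constant input is soon predicted away: the error shrinks toward zero.
estimates = track([5.0] * 50)
final_error = 5.0 - estimates[-1]
```

    The point of the toy is only this: the "computation" is the progressive cancellation of input, not the construction of an output display.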

  • The mind and mental processes
    They say there is no language instinct , but rather innate capacities for complex cognitionJoshs

    Sounds like an argument over whether a donut is a cake or a biscuit. :lol:

    The linguistic wars talk past the issue in being hand-wavingly simplistic. From the semiotic point of view, what really mattered in language evolution was the development of a vocal tract which imposed a new kind of serial motor constraint on the standard hierarchical or recursive architecture of frontal lobe motor planning.

    Tool use had already started the process because knapping a flint demands a “grammar” of chipping away at a rock in serial fashion to achieve a pictured goal. Dexterity is about breaking down sophisticated intent into a long sequence of precision actions.

    Is making a hand axe a “tool instinct” or “complex cognition”? Or is it really an intersection of nature and nurture where other things - like an opposable thumb, a lateralisation of hand dominance, a bulking up of prefrontal motor planning cortex - all combine so as to impose a strong serial demand on the brain’s general purpose recursive neural circuitry?

    Language likewise would have most likely evolved due to the “lucky accident” of changes to the vocal tract imposing a new kind of constraint on social vocalisation. In my view, Darwin’s singing ape hypothesis was right after all.

    Homo was evolving as an intensely social tool-using creature. Vocalisation would have been still under “emotional” limbic control - the hoots and grunts chimpanzees use to great communicative effect. And even today, human swear words and emotional noises are more the product of anterior cingulate habit than prefrontal intent. Emitted rather than articulated.

    But something must have driven H. erectus towards a sophisticated capacity for articulate vocalisation - sing-song noises - requiring the connected adaptations of a radical restructuring of the vocal tract and a matching tweaking of the brain’s vocal control network.

    The big accident was then that a serial constraint on hierarchical motor planning could be turned into a new level of semiotic encoding.

    Genes are likewise a serial constraint on hierarchical order. A 1D DNA sequence can represent a 3D protein. A physical molecule can be encoded as a string of bits. This was the lucky semiotic accident that allowed life to evolve.

    Language became the “genes” for the socially constructed human mind because once vocalisation was strait-jacketed into sing-song sequences - proto-words organised by proto-rules - it became a short step to a facility for articulation becoming properly semiotic. An abstract symbol system that could construct shareable states of understanding.

    So while Chomsky’s disciples work themselves into a lather over specific instinct vs general “cognitive complexity”, as usual the interesting story is in the production of dialectical contrast.

    It was how vocalisation came to be dichotomised into a serial constraint on hierarchical action that is the evolutionary question. And then how this new level of encoding blossomed into the further dialectic that is the human self in its human world.

    Language transformed the mentality of Homo sapiens in social constructionist fashion. Again, this was widely understood in about 1900, yet almost entirely forgotten by the 1970s or so. The computer as a metaphor for brain function had completely taken over the public debate.

    You became either a Pinker having to claim a language faculty was part of the universal hardware, or a Lakoff claiming it was just another app you might not choose to download.
  • The mind and mental processes
    Why can't this happen in the darkbert1

    But as I pointed out, the modelling relation approach to neural information processing says the brain’s aim is to turn the lights out. It targets a level of reality prediction where its forward model can cancel the arriving sensory input.

    Efficiency is about being able to operate on unthinking and unremembered automatic pilot. You can drive familiar busy routes with complete blankness about the world around you, as all the actions are just done and forgotten in a skilled habitual way.

    So like the fridge door, the light only comes on when a gap grows between the self and the world in this holistic self-world modelling relation.

    The input-output computer model has the opposite problem of treating the creation of light as the central mystery. All that data processing to produce a representation, and yet still a problem of who is witnessing the display.

    The modelling relation approach says both light and dark are what are being created. It is about the production of a sharp contrast out of a state of vagueness - the Jamesian blooming, buzzing, confusion of the unstructured infant brain.

    So the question of why there is light is answered by the reciprocal question of how there can be dark. And the answer in terms of how the brain handles habitual and attentional level processing is just everyday neuroscience.

    The input-output model of data processing can’t produce light because it can’t produce dark either. It is not producing any contrast at all to speak of.
  • The mind and mental processes
    You pissing on Pinker and others like him doesn't make your arguments more convincing.T Clark

    I gave Pinker a fairly favourable review of his Words and Rules when I reviewed it for the Guardian. But I wasn’t impressed much by the Language Instinct. And I found How the Mind Works too trite to read.

    So my view was that he was fine so long as he stuck close to his research. But he was just a bandwagon jumper when it came to the culture wars of the time.

    I think I might have reviewed Damasio too for the Guardian. I did for someone.

    There are a ton of books I could recommend. Some are even quite fun like Tor Nørretranders' 1991 book The User Illusion.
  • The mind and mental processes
    Thanks.

    Not my intentionTom Storm

    Great. But talking of “true believer snippets” sure sounds that way.
  • The mind and mental processes
    So you don’t invest an effort in either the telling detail or the big picture, yet you are happy to stand to one side and make condescending noises.

    Right. gotcha. :up:
  • The mind and mental processes
    That doesn't sound like anything I read in Pinker's book.T Clark

    When was it written? :smile:

    I got into the socially constructed aspects of the human mind just a few years before evolutionary psychology came rolling in over the top of everyone with its genocentric presumptions about the “higher faculties”.

    So there are parts of the Chomskyian school I am sympathetic to - such as its structuralist bent. And then other parts where it misses the boat in classic “evolved mental faculties” fashion. Go back to the 1920s and Vygotsky and Luria laid out the socially constructed nature of these.

    That is why I keep insisting on semiotics as the unifying view. Nothing can make sense until you realise that genes, neurons, words and numbers are all just increasingly abstracted versions of the one general self-world modelling relation.

    Life and mind have a single explanation. And it even explains social, political and moral structure.

    There is a big prize at the end of this trail. And it ain’t something so trite as explaining the explanatory gap in everyone’s “consciousness” theories.

    My objection to your approach is that it presumes that a lot of patient detail will assemble some secure understanding about “how the brain works”.

    But the problem is so much bigger. It is about understanding the deep structure of the very thing of an organism. You can’t even see what counts as the right detail without having the right big picture.
  • The mind and mental processes
    I think I have some idea what he's talking about, but I didn't dig in to it in my response to him.T Clark

    Let me try again in even simpler terms using the concepts of computational processes.

    The brain models a self~world relation. That is why consciousness feels like something - the something that is finding yourself as a self in its world.

    This is all based on some embodied neural process. The brain has to be structured in ways that achieve this job. There is some kind of computational architecture. A data processing approach seems fully justified as the nervous system is built of neurons that simply "fire". And somehow this encodes all that we think, feel, see, smell, do.

    Neuroscience started out with a model of the nervous system as a collection of reflex circuits - a Hebbian network of acquired habit. In neurobiology class, we all had to repeat Helmholtz's pioneering proof of how electrical stimulation of a dead frog's spinal nerve would make its leg twitch (and how electric shocks to a live rat's feet could train it in Skinnerian conditioning fashion).

    So psychology began as an exploration of this kind of meat machine. The mind was a set of neural habits that connected an organism to its world as a process of laying down a complexity of reaction pathways.

    But then along comes Turing's theory of universal computation that reduces all human cognitive structure to a simple parable of symbol processing – the mechanics of a tape and gate information processing device. In contrast to a story of learnt neural habits that are biologically embodied, the idea of universal computation is deeply mathematical and as physically disembodied and Platonic as you can get.

    But the computational metaphor took off. It took over cognitive psychology and philosophy of mind, especially once computer scientists got involved and it all became the great artificial intelligence scam of the 1970s/1980s. The mind understood as a symbol processing machine.

    The computational paradigm boils down to a simple argument. Data input gets crunched into data output. Somehow information enters the nervous system, gets processed via a collection of specialised cognitive modules, and then all that results – hands starting to wave furiously at this point - in a consciously experienced display.

    So good old fashioned cogsci builds in Cartesian dualism. Computationalism of the Turing machine kind can certainly transform mechanical inputs into mechanical outputs. But only in a disembodied syntactic sense. Semantics – being excluded from the start – are never recovered. If the organism functions as a computer, it can only be as a mindless and purposeless zombie.

    But even while the symbol processing metaphor dominated the popular conception of how to think about neurobiology, the earlier embodied understanding of cognition puttered along in the background. Neural networkers, for example, continued to focus on machine architectures which might capture the essence of what brains actually do in relating a self to a world - the basic organismic or semiotic loop.

    In data processing terms, you can recognise the flip. Instead of data in/data crunched/data outputted, the organismic version of a computational cycle is based on making a prediction that anticipates a state of input so that that input can in fact be cancelled away. The computational task is homeostatic – to avoid having to notice or learn anything new. The ideal is to be able to deal with the world at the level of already learnt and honed unthinking reflex. To simply assimilate the ever changing world into an untroubled flow of self.

    Of course, life always surprises us in big and small ways. And we are able to pick that up quickly because we were making predictions about its most likely state. So we have a machinery of attentional mop-up that kicks in when the machinery of unthinking habit finds itself caught short.

    But embodied cognition is the inverse of disembodied cognition. Instead of data input being turned into data output, it is data output being generated with enough precision to cancel away all the expected arriving data input.

    For one paradigm, it is all about the construction of a state of mental display – with all the Cartesian dualism that results from that. For the other paradigm, it is all about avoiding needing to be "consciously" aware of the world by being so well adapted to your world that you already knew what was going to happen in advance.

    Erasing information, forgetting events, not reacting in new ways. These are all the hallmarks of a well-adapted nervous system.
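
    The habit~attention cycle just described can be caricatured in a few lines of code. This is a deliberately crude toy of my own (all names and numbers are illustrative, not from any published model): output is generated to cancel the expected input, attention fires only on the residual surprise, and each surprise gets assimilated into tomorrow's habit.

```python
# Crude toy of the "output cancels input" cycle (my own illustration):
# habitual prediction is subtracted from the arriving input, and only the
# residual surprise gets attended to - then absorbed back into habit.
def perceive(inputs, threshold=0.5, adapt=0.5):
    habit = 0.0               # current expectation - the "forgotten" baseline
    surprises = []
    for x in inputs:
        residual = x - habit             # forward model cancels the expected
        if abs(residual) > threshold:    # attention kicks in only on surprise
            surprises.append(x)
            habit += adapt * residual    # today's surprise, tomorrow's habit
    return habit, surprises

# A steady world goes "dark"; a jump in the input is noticed a few times,
# then assimilated and ignored again.
habit, noticed = perceive([1.0] * 5 + [4.0] * 5)
```

    Note what the loop erases: most of the input never registers at all, which is the point being made above about a well-adapted nervous system.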

    Of course, this biological realism runs completely counter to the standard cultural conception of mind and consciousness. And this is because humans are socially constructed creatures trying to run a completely different script.

    It is basic to our sociology that we represent ourselves as brightly self-aware actors within a larger social drama. We have to be feeling all these feelings, thinking all these thoughts, to play the role of a "self-actualising, self-regulating, self-introspecting" human being.

    We can't just go with the flow, as our biology is set up to do. We have to nurture the further habit of noticing everything about "ourselves" so that we can play the part of being an actor in a human drama. We have to be self-conscious of everything that might naturally just slip past and so actually create the disembodied Cartesian display that allows us to be selves watching selves doing the stuff that "comes naturally", then jumping in with guilt or guile to edit the script in some more socially approved manner.

    So there is the neurobiology of mind which just paints a picture of a meat machine acquiring pragmatic habits and doing its level homeostatic best not to have to change, just go with its established flow.

    And this unexciting conception of the human condition is matched with a social constructionist tradition in psychology that offers an equally prosaic diagnosis where everything that is so special about homo sapiens is just a new level of social semiosis – the extension of the habitual mind so that it becomes a new super-organismic level of unthinking, pragmatic, flow.

    But no one writes best-sellers to popularise this kind of science. It is not the image of humanity that people want to hear about. Indeed, it would undermine the very machinery of popular culture itself – the Romantic and Enlightened conception of humans as Cartesian creatures. Half angel, half beast. A social drama of the self that you can't take your eyes off for a second.

    So if you have set your task to be the one of understanding the science of the mind, then you can see how much cultural deprogramming you probably have to go through to even recognise what might constitute a good book to discuss.

    But my rough summary is that circa-1900s, a lot of people were getting it right about cognition as embodied semiosis. Then from the 1950s, the science got tangled up in computer metaphors and ideology about cognition as disembodied mechanism. And from about 2000, there was a swing back to embodied cognition again. The enactive turn.

    So you could chop out anything written or debated between the 1950s to 2000s and miss nothing crucial. :razz:
  • The mind and mental processes
    So - the brain's function is as an optimally effective predictor of future events. I guess the question then is "but why?"T Clark

    I think Darwin’s book gives you the obvious answer.

    One source said the minimization of free energy, whatever that means. Is that the kind of thing you're talking about.T Clark

    That means reducing uncertainty or error.

    Obviously, the answer is to do both.T Clark

    Yep. But you can waste years reading the wrong books if your sources are seeking mechanical parts to fit a story of mechanical wholes. I’m just giving you a heads up.

    Maybe I want to talk about cells and not semiotics.T Clark

    Whatever you believe is a wise investment of your energy I guess.
  • Mathematical universe or mathematical minds?
    Einsteinian relativity is what expulses time from physics.Metaphysician Undercover

    Relativity united time and space in a way that made more general sense. And we now wait for quantum physics to catch up with the rest of the class.
  • The mind and mental processes
    a continuing question I have (which may be of relevance to mental processes) is the idea that the world has no intrinsic properties and that humans see reality in terms of neutrally generate matrix of gestalts.Tom Storm

    Thank goodness cogsci eventually took its enactive or ecological turn about 20 years back. This is what semioticians were referring to as the construction of an Umwelt - the mental model that is of a self in its world. And yes, gestalt psychology was also saying the same before the computationalists crashed the party and made a noisy mess.

    But such modelling is the opposite of neutral. It is supremely self-interested in that the construction of an organismic selfhood is what anchors the whole exercise.
  • The mind and mental processes
    I'm still working on my response to him.T Clark

    While you are at it, I would add that the scientifically grounded approach would be to be able to say why some "this" is a more specified version of "that" more general kind of thing. So if the mind is the specific example in question, to what more fundamental generality are you expecting to assimilate it?

    So if you are saying the mind is some kind of assembly of component processes, then what is the most general theory of such a "thingness". I would say rather clearly, it is a machine. You are appealing to engineering.

    Likewise others are trying to argue that mind is a particularised example of the more general thing that is a substance.

    And I am arguing that mind is a particularised example of the more general thing that is an organism. Or indeed, if we keep digging down, of a dissipative structure. And ultimately, a semiotic relation.

    So clarity about ontology is critical to seeing you have chosen an approach, and yet other approaches exist.

    Cutting to the chase, we both perhaps agree that the mind isn't simply some variety of substance – even an exotic quantum substance or informational substance. But then do you think biology and neurobiology are literally machinery? Aren't they really organismic in the knowing, striving, intentional and functional sense?

    In simple language, an organism exists as a functioning model of its reality. And it all depends on the mechanism of a semiotic code.

    The genes encode the model of the body. The neurons encode the model of the body's world. Then words encode the social model of the individual mind. And finally numbers have come to encode the world of the human-engineered machine.

    So it is the same functional trick repeated at ever higher levels of organismic organisation and abstraction.

    Organismic selfhood arises to the degree there is a model that is functionally organising the world in play.

    So - contra Pinker - language may not create "thought", but it does transform it quite radically. It allows the animal mind to become structured by sociocultural habit. Humans are "self consciously aware" as social programming exists to make us include a model of the self as part of the world we are functionally engaged with. A higher level viewpoint is created where we can see ourselves as social actors. Animals just act, their selfhood being an implicit, rather than explicit, aspect of their world model.

    Anyway, the point is that we want to know what is the "right stuff" for constructing minds. It ain't exotic substances. It ain't mechanical engineering. But what holds for all levels of life and mind is semiosis - the encoding of self~world models that sustain the existence of organismic organisation.
  • The mind and mental processes
    I made a mistake.T Clark

    :razz:

    Well at least you can add the general constraint of ruling out all discussions of mind that make appeals to the notion of it as some kind of fundamental substance. That would rule out the usual suspects.

    You are appealing to a metaphysics of localised process. I am saying go one step further and employ a metaphysics of global function.

    But you could rule that out too and only allow the metaphysics you have chosen in advance, I guess.
  • Whither the Collective?
    This all gets much easier when you understand societies are organised by the win-win dynamic of competition~cooperation. As democratic theory tries to make clear, the collective system has the aim of balancing individual local freedom against global social constraint. And to do this effectively, it has to strike this balance over all scales of the social collective.

    A mature society is thus a competition of interest groups or social institutions. As an individual, you will sense the balance change as you move between spheres of influence. In your own home, you have the most freedom. At the most abstract levels of social institution - in court, in parliament, in church - they are places where you then feel the most constrained by the "collective will".

    It is just nature doing its thing. Evolving a rational hierarchical order. Except that now it is humans having a hand in the design of the general political/economic system. And that is where it all starts to go off the road when folk pretend that a hierarchically organised system of competition~cooperation doesn't need to apply to them. Or their family. Or their otherwise defined in-group.

    The elemental physics and biology of it all doesn’t much support a collectivist outlook.NOS4A2

    So in fact the elemental physics and biology does say nature has its particular evolutionary order. It is not a secret to anyone familiar with social science.

    Communism was a failed dream as it didn't implement the right model. It failed to appreciate the importance of free institution building at every level of society. A democracy constituted of interest groups is just a more robust way of developing an intelligent balance of competition and cooperation in a society.

    Of course democracies are running into their own inverse problem of fetishising the atomistic individual.

    Look for states that are proud of being social democracies. They get the "collective of interest groups" balance that is the Hegelian ideal.
  • The mind and mental processes
    As I've noted many times, Enrique's posts on scientific subjects are pseudo-science - incomprehensible mashups of buzzwords and jargon that don't really mean anything.T Clark

    This is accurate.

    As far as I know, there is no evidence to show or reason to believe that quantum effects affect mental phenomena directly. Just because quantum particles and mental processes are in some sense mysterious to some people, that doesn't mean there is any connection.T Clark

    In fact living organisms arise at the point in nature where molecular machinery can be used to harness quantum effects. So life may not be quantum in @Enrique's emergent sense, but it is quantum in that it employs its classical structure to exploit the energetic possibilities of the "quantum realm".

    An example is the respiratory chain that powers every cell. Basically it is a ladder of quantum tunnelling formed by a series of iron-sulphur clusters embedded at precisely spaced distances in a protein matrix. A "hot" electron enters the chain at one end and is safely milked of its energy in about 15 steps before finally being whisked away by an oxygen atom acceptor.

    So a nonlocal quantum effect jumps the electron down the waiting pathway. But the classical physics is what forms the pathway. And the information about the design of the pathway is ultimately contained in the genes.

    Thus the biology isn't "made of quantumness". It is instead the opposite thing of (semiotic) information ultimately using its control over classical molecular structure to exploit the quantum realm for energetic advantage.
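
    For a feel of why the precise spacing matters: electron tunnelling rates in proteins fall off roughly exponentially with the gap between redox centres, something like k ≈ k0·exp(−βr), with β in the region of 1.4 per ångström. The numbers below are textbook-scale and purely illustrative, not measured values for any particular chain.

```python
import math

# Illustrative exponential distance-decay of electron tunnelling in protein.
# k0 and beta are textbook-scale placeholder values, not measured data.
def tunnelling_rate(r_angstrom, k0=1e13, beta=1.4):
    return k0 * math.exp(-beta * r_angstrom)

# Doubling a ~10 angstrom spacing collapses the rate by a factor of ~exp(14),
# i.e. over a million-fold - hence clusters must sit at tightly fixed spacings.
ratio = tunnelling_rate(10.0) / tunnelling_rate(20.0)
```

    So the quantum effect does the jumping, but it is the classically fixed geometry - specified by genetic information - that makes the jump both possible and safe.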

    The same is true of other ubiquitous biological components. Enzymes rely on quantum level control of chemical reactions. So the brain is "quantum" just because it is made of cells that have to do energetic things.

    And brains are also quantum because the same biology is used in sensory transduction – how the receptors that interface between the nervous system and the world can actually physically "handle" the information coming at them in the form of electromagnetic, chemical, or mechanical energy.

    So quantum biology is now a thing. But it means the opposite of saying that life and mind arise from the quantum. Instead life and mind arise from being able to use classical machinery to harness useful quantum effects ... with an overarching functional purpose in mind.
  • The mind and mental processes
    I want to talk about the mind by looking at several specific processes - thought, language, instinct, maintenance of bodily homeostasis - through the lenses of psychology, linguistics, and cognitive science.T Clark

    A way to sharpen your approach would be to look at the issue through the eyes of function rather than merely process.

    You have started at the reductionist end of the spectrum by conceiving of the mind as a collection of faculties. If you can break the mind into a collection of component processes, then of course you will be able to see how they then all "hang together" in a ... Swiss army knife fashion.

    And as kids, didn't we all covet a Swiss army knife with the most tools - including the spike for getting stones out of horses' hooves - only to find they are useless crap in reality. :grin:

    Instead, think about the question in terms of the holism of a function. Why does the body need a nervous system at all? What purpose or goal does it fulfil? What was evolution selecting for that it might build such a metabolically expensive network of tissue?

    Can you name this essential function yet? Can you then see how it is implemented in neurobiology?

    Your process approach will lead towards the Swiss army knife brand of cogsci - the brain as a set of modules or cognitive organs.

    A functional approach leads instead to "whole brain" theories, like the Bayesian Brain, where the neurobiology is described in holistic architecture terms.
  • Mathematical universe or mathematical minds?
    I have, in fact, done a fair amount of research involving Moebius (or linear fractional) transformations (that crop up in these physics discussions), but going the pure analysis direction in the complex plane rather than the geometries coming off the Riemann sphere.jgill

    So what are your thoughts here when one direction looks to track the "deep maths" of Nature and the other choice may be just unphysical pattern spinning? What do we learn if this is the case?

    But am I right that you argue the complex plane has lessons in terms of the physics of chaos - patterns of convergence~divergence?

    I couldn't get much out of your 1970s paper, but I was thinking about how your fixed point paper illustrates my point about the dichotomy of rotation~translation in that the complex plane seemed marked by patterns of curl. Or convergence and divergence over all scales, as you would expect in a chaotic system.
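
    For concreteness, the fixed points in question can be computed and classified directly. Here is my own toy sketch (not taken from the papers under discussion): a fixed point z* of a Moebius map f(z) = (az + b)/(cz + d) is attracting, repelling or indifferent according to whether |f'(z*)| is below, above or equal to 1 - the convergence~divergence distinction in miniature.

    ```python
    # Toy illustration: fixed points of a Moebius (linear fractional)
    # transformation f(z) = (az + b)/(cz + d), classified by |f'(z*)|.
    import cmath

    def mobius_fixed_points(a, b, c, d):
        """Solve f(z) = z, i.e. c z^2 + (d - a) z - b = 0."""
        if c == 0:
            # f is affine: one finite fixed point (plus the point at infinity)
            return [b / (a - d)] if a != d else []
        disc = cmath.sqrt((d - a) ** 2 + 4 * b * c)
        return [((a - d) + disc) / (2 * c), ((a - d) - disc) / (2 * c)]

    def classify(a, b, c, d, z):
        """|f'(z)| = |ad - bc| / |cz + d|^2: <1 attracting, >1 repelling, =1 indifferent."""
        m = abs(a * d - b * c) / abs(c * z + d) ** 2
        return "attracting" if m < 1 else "repelling" if m > 1 else "indifferent"

    # f(z) = 1/(z + 2) has one attracting and one repelling fixed point
    for z in mobius_fixed_points(0, 1, 1, 2):
        print(z, classify(0, 1, 1, 2, z))
    ```

    Iterating such a map converges towards the attracting point and away from the repelling one - the local curl and flow the post describes.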

    In a maximally turbulent system, you have the fixed points of vortexes appearing over all available physical scales. And the deep naturalness of this scalefree or powerlaw behaviour was rather the point of the Franks paper I cited previously.

    So - in reference to the OP - it seems hard not to be interested in the maths that has some connection to reality, even if the ideology of maths is that it is perfectly free to chase pure pattern for its own sake.

    And coming from the other direction, Nature itself has no choice but to be structured and fall into self-constraining patterns. Mathematical regularity of some kind must emerge.

    But I was reading this yesterday, which warns that the field of maths might indeed be shortchanging itself with some of the metaphysical choices it makes ... yet equally, one could argue that the power of maths lies in the fact it puts itself outside the reality it means to describe by making its "non-intuitionist" choices ... like the ones about infinities and empty sets that outrage the more metaphysically inclined.

    Gisin has traced the problem of the block universe to an unexpected source: mathematics itself. He notes that a century ago, mathematicians were split about how to describe numbers whose decimal digits trail off into infinity.

    On one side, led by David Hilbert, were those who thought every such real number was a “completed object” that exists in its entirety, timelessly, even though it has infinite digits. On the other side were ... “Intuitionistic mathematics.” In intuitionistic mathematics, numbers are created over time, with digits materializing in succession.

    Spoiler: Hilbert’s side won. “Time was expulsed from mathematics” and as a byproduct, from physics, too, writes Gisin (Gisin, 2020a). But, he wondered, what would happen if physics were re-written in the language of intuitionistic mathematics? Would time become “real” again?

    Gisin asks us to consider “chaotic” systems, in which two almost-but-not-quite-identical starting points evolve to wildly different end points. A classic example is the weather. ... whether you are talking about the weather, the evolution of the entire universe, or just your choice of what to have for dinner tonight, it is distressing to think that the future is already fully determined.

    Of course, physics is not required to make us comfortable. But Gisin points out that intuitionistic mathematics could offer a natural way out of the deterministic lockup. In the intuitionistic view, numbers—like the values of pressure, wind speed, humidity, and so on—do not have definite values from the get-go, but rather develop over time, with randomly-generated digits unscrolling as time passes. This mathematical treatment allows for a universe in which time actually flows, events truly happen, and randomness and chance are injected moment by moment.
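
    The "digits unscrolling as time passes" picture can be made concrete with a toy sketch (my own illustration of the idea, not Gisin's formalism): a number that exists only as the finite prefix of digits generated so far.

    ```python
    # Toy sketch of an intuitionist "unfolding" real: digits are created
    # over time rather than existing all at once as a completed object.
    import random

    def unfolding_real(seed=None):
        """Generator: each call to next() creates one more decimal digit."""
        rng = random.Random(seed)
        while True:
            yield rng.randint(0, 9)

    digits = unfolding_real(seed=42)
    prefix = [next(digits) for _ in range(10)]
    # At any moment only a finite prefix exists; the number is never "completed".
    print("0." + "".join(map(str, prefix)))
    ```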

    https://www.templeton.org/wp-content/uploads/2021/11/Time-by-Kate-Becker-1.pdf

    So there's a thought. Zoom in on your complex plane with its pattern of curl, and do you start to lose any sense of whether some infinitesimal part is diverging or converging? Is even a maximally curved part of the map now definitely one or the other. So same as the real number continuum. Zoom in and are your cuts a point on one side or the other? Bring in the observer (or observational scale) – as physics must - and the intuitionist position has a lot going for it.

    Seeing as I'm throwing out references, here is another that might interest you - a short, then longer version, of Peirce on the continuum debate. One can ask again whether maths made the right pragmatic choice even if Peirce's is the metaphysically correct one. And so everyone is right and wins a prize. :grin:

    https://cesfia.org.pe/villena/zalamea_peirce_translation.pdf
    https://uberty.org/wp-content/uploads/2015/07/Zalamea-Peirces-Continuum.pdf

    ... from the very beginning of their investigations, the pathways of Cantor and Peirce are opposite to one another; while Cantor and many of his successors in the 20th century try systematically to delimit the continuum, Peirce tries to unlimit it—to bring it nearer to a supermultitudinous continuum, not limited in size, truly generic in the transfinite, never totally actualizable.
  • A Taxonomy of Ideas
    From hierarchy theory and Peircean semiotics, you can make an argument that the ur-dichotomy of dichotomies is the dyad of the local~global and the vague~crisp.

    The local~global is the synchronic view of a natural system - the hierarchy theory view where all quantification is in terms of the reciprocal limits of local vs global scale.

    Then the vague~crisp is the diachronic view - the developmental axis that describes the emergence of this basic scale distinction from “out of nothing”. So you start off in a pre-quality world where all is just vague. Then the breaking of that symmetry in terms of the start of a local~global distinction or counterfactuality is how you proceed towards the crispness of a fully realised scale dichotomy.

    This would be the taxonomy of natural systems. It says everything starts in the simple potential of a vagueness, an ungrund, a Firstness, a raw potential, a quantum foam. And then this vagueness becomes dialectically transformed into a crisp and definite somethingness by the symmetry breaking which is a growing contrast in physically-expressed scale. You get a local and global boundary on possibility that creates the Thirdness of the concrete spectrum of actuality found inbetween.

    This is the general framework into which you can then drop all the usual metaphysical-strength dichotomies like matter~form, integration~differentiation, accident~necessity, etc.
  • Mathematical universe or mathematical minds?
    There's more magic in complex analysis than in complex arithmetic.jgill

    It would help to familiarise yourself with the current metaphysical debate over Penrose’s argument. Here is one excellently clear paper….

    http://cejsh.icm.edu.pl/cejsh/element/bwmeta1.element.ojs-issn-2451-0602-year-2018-issue-65-article-439/c/439-462.pdf

    Penrose [sees complex number structures] as the primary fabric of the Universe. … Penrose uses this designation in regards to the mathematical structure of his main research focus in quantum gravity and twistor theory (Penrose, 1967).

    The main idea that Penrose articulates concerning the complex structures is that the results that require arduous computations with the use of the real structures are obtained “for free” as the complex structures come into play.

    Penrose does not hesitate to push this magic to the extreme as he openly states:
    Nature herself is as impressed by the scope and the consistency of the complex-number system as we ourselves, and has entrusted to these numbers the precise operations of her world at its minutest scales (Penrose, 2005, p. 73).

    The article then goes on to cover the gory mathematical details.

    What is key is that the complex numbers build in something essential to physical existence itself - something so basic that all more complex structure pops out of it “for free”, whereas getting the same outcomes from real numbers is arduous, as they lack the right dimensionality to describe the dimensionality that the Universe actually has.

    As a cosmic Darwinist, this tells me that we live in the Universe with the particular dimensionality that is the simplest way to produce richness, and that is why physics finds that complex number magic is matchingly the root description of Nature.

    The reals are too simple to generate a complex world. But the complex numbers have the minimal complexity that can then be the basis for generating all the complexity that ensues.

    The hypothesis I am pursuing is that it all ties together in rather obvious fashion with Noether’s theorem and Newton’s twin conservation principles - the dimension-defining dichotomies of rotation and translation.

    Dimensionality is a system defined by the local and global. And such a system can be broken towards its two dialectical extremes - the spin that defines localisation, the straight line or flat geometry that defines the globality of unbounded translations.

    If your number system has to have units that speak to both unit 1 rotation and unit 1 translation as their scale-anchoring identity operation, then what else would you expect your number system to look like as the simplest possible representation of such a unity of opposites?

    So this is why particle physics is cashed out in stories of spin symmetry, and why Penrose pushes so hard with his hylomorphic/conformal view of spacetime geometry.
  • Mathematical universe or mathematical minds?
    Many share your jaundiced view.

    So I'm reading Penrose, and all of sudden he explodes into excitement like a schoolgirl, fawning over complex numbers because they are "magical" and perform "miraculous" things, further spilling exclamation marks in the surrounding paragraphs about how he's only scratched the surface of "number magic!"

    https://www.physicsforums.com/threads/are-complex-numbers-magical.68277/
  • What are the "parts" of an event?
    Something seems to be missing here though, not convinced either will do.jorndoe

    Perdurantism requires you to believe the block universe interpretation of special relativity is true. So as an account of change, it doubles down on object oriented ontology. It leaves no room for quantum nonlocality/contextuality, material accidents, or finality - all the other things a holist would want to find in their metaphysics.
  • What are the "parts" of an event?
    An object would be a structure that imposes stability. An event speaks to the notion of a structure that instead effects a meaningful change … from one objectified state to some other.

    So in terms of parts, an event would have to have beginning and end points. Temporal parts rather than spatial parts - if what you seek is the lack of change that in turn accounts for the change.

    An object is composed of parts to the degree they could be elements located elsewhere. An event is composed of parts to the degree they could be located elsewhen.

    All this suggests that ontology ought to take an integrated spatiotemporal view of “things”. A systems science or process philosophy view.
  • Mathematical universe or mathematical minds?
    Well, the history of science has proved that whatever complex concepts mathematicians created, they finally came to be applicable in the mathematics of physics or even to directly describe an empirical context. Take for instance the classical example of complex numbers.

    Turning this on its head, there are those who argue instead that the complex numbers are more fundamental than the reals because they embed the seed of non-commutativity that Nature needs to physically exist.

    Metaphysics requires symmetry breaking or asymmetry to get a Cosmos going. And complex numbers make the order of operations matter in a way that is "physically realistic". The reals are just too simple in that they lose this grit which eventually forms the pearl.

    So this is a bit of a parable that shows it isn't an either/or situation. Maths and reality are in a dialogue as far as our epistemic endeavours go. In this case, the mathematical intuition was that the reals had to be fundamental, so "complex number magic" became "a surprise". But if we had instead started from more metaphysical considerations – the needs of a world formed by symmetry breaking - then complex numbers might have come first, and the reals be considered the "less fundamental" afterthought.

    We can talk ‘as if’ there really is an evolution of order but the meaning of such a notion vanishes within the physical stance.Joshs

    I can go along with just about any criticism of Dennett, but this could be harsh to physicalism which after all now founds itself on deeply holistic principles like general covariance and least action. There is a finality, a Darwinism, in effect that selects for the cosmic structure that best "hangs together".

    Again, this is where we can look for the robust connection between "maths as invented" vs "maths as real". We are merely creatures making models. But the structures that are useful to describe are the ones by which the Cosmos must inevitably structure itself. So we do a good or bad job in that regard.

    Apokrisis wrote a fair bit in previous threads about the gap between the dependence of biological and psychological phenomena on semiotic codes and algorithms vs the absence of the concept of semiosis in physics. One is left with either a kind of dualism in which semiosis appears out of nowhere in living systems or a pan-semiotics inclusive of physics, requiring an updating of meta-theoretical assumptions in physics.Joshs

    I would use the term pansemiosis as a synonym for dissipative structure theory and hierarchy theory. That is, all these are attempts by the biologically-inclined to root their biology in a physics that is triadically complex rather than monistically simple.

    So semiosis is dependent on the extra thing of an encoding mechanism - genes, neurons, words, numbers. That is something completely new to Nature - and yet also already "existent" in Nature as that which is antithetical. Symbols have their unbounded power over physics because they essentially zero the cost of regulating that physics. Semiosis transcends that which it seeks to control by placing itself outside the material cost of doing business - or at least by making the cost so tiny in comparison to the returns that it drops out of the equation.

    Once you have a mechanism for constructing proteins, you can make any protein at all. Useful ones, useless ones. It's all the same. And thus the proteins you make become a meaningful choice.

    Nature “in the raw” lacks this self-transcendence. It just self-organises. And we can call that pansemiosis because it is the step that paves the way for semiosis proper as its "other".

    This approach rids us of the gap between normative claims (manifest image) and the empirical world it addresses (scientific image).Joshs

    Yep. Semiosis does the extra thing of imposing its imagined regulative possibilities on a world that has the clear possibility of being regulated.

    But this gets confusing because we need both to model the world "as it actually is" and then to construct our widest range of possible worlds to impose upon it.

    Which is the science and maths supposed to track? Well, it sort of does both if we can disentangle the fundamental view from the applied view.

    And yet by the same token, it would serve no point to actually sever the connection between our manifest and empirical worlds as that is the pragmatic connection being nurtured.

    So the game is to divide, and then to connect. Semiosis is about constructing the reality we wish to live. That starts down at the genetic level for life. Termites shape their worlds into the world that best befits termites. The result is neatly spaced mounds with great air-conditioning, etc. And humans take that to an anthropic extreme with the world they build for themselves.

    So epistemology requires a clean break into the subjective and the objective as the step towards its next level of reality construction. Give humans a chance and they would anthropomorphise not just a single planet but the entirety of the Cosmos.

    It can't happen. But if it did, it would be Nature playing out the logic of pansemiosis.

    For Rouse, normativity is a property of systems of material nature rather than a mind split off from nature.Joshs

    Actually this was the phrase that got me perusing this thread. :up:

    It is essentially what I am saying. Maths is both the free invention of our minds and the inescapable organisation of any Cosmos. It takes this kind of clean break - this dichotomy that defines two complementary limits – to ground the actual business of semiosis, which is to continue the self-organising evolution of the natural world.

    So we need a theory of the world (as it really is) and a theory of the self (as it ideally would be). From the interaction of the two, we get whatever we get.

    And to get this clean division of theory, we need the meta-theory that can see this as a pragmatic co-production. A theory of the self especially needs a material grounding - as the current sad state of the world shows. And our theory of the world is likewise rather lacking in its material holism, its reliance on dissipative order, etc. The science and maths we favour is on the reductionist side. Short-term in its horizons, simplistic in its interactions.

    In terms of the OP, we have a functioning balance of invented~discovered that was good enough to deliver the industrial age based on a fossil fuel "free lunch". That created a stage in the Hegelian advance of history. That manifested a certain concrete reality.

    What comes next is its own interesting question. But the semiotic view says that to say anything intelligent, we have to focus on the fact that this is about a self~world modelling relation.

    We built ourselves up to our current position on a hierarchy of genes, neurons, words and numbers. Words delivered humans as social selves. Numbers delivered humans as technological selves. Does further progress require a new level of semiotic mechanism - one still more abstract or advanced - as words and numbers seem to trap us in the kinds of self~world structures they are able to create?
  • The Ultimate Question of Metaphysics
    For example, the fixed point behaviour that anchors renormalisation in quantum field theory.
  • The Ultimate Question of Metaphysics
    You do write the most interesting posts. This is related to catastrophe theory as well.jgill

    Thanks. Catastrophe theory was both one of my earliest intellectual thrills and disappointments. It seemed to promise so much and yet deliver so little. It had little practical application and just stood as a signpost to the realisation that nonlinearity is more generic than linearity in nature.

    I've wondered whether fixed points (attracting, repelling, indifferent) have any metaphysical properties. Stanislaw Lem's ergodic theory of history presents a counterpoint to the butterfly effect in chaos theory: certain social movements are so strong that minor fluctuations have little to no effect on large scale outcomes.jgill

    My systems science approach is predicated on global constraints that produce local stability. So fixed points emerge due to top-down acting constraints on possibility.

    The tricky bit is then that the local degrees of freedom thus created have to be of the right kind to rebuild the whole that is creating them. It is a cybernetic loop where the system maintains its structure in a positive feedback fashion.

    So fixed points are important as the emergently stable invariances of a physical system. The symmetries that anchor the structure of the self-reconstituting whole.

    This is the guts of physical theory. Lorentz symmetry gives you the “fixed point” behaviour of spacetime, and the Standard Model gauge group gives you the invariances which in turn define the “inner space” structure of particle interactions.
  • The Ultimate Question of Metaphysics
    Most of those systems iterate a single complex function.jgill

    Natural growth processes are crudely modelled by iterative functions in that the functions build on their own history of accidents. Some arbitrary set of initial values is plugged into the equation and some larger pattern may emerge. It is all completely exact and formal - apart from the fact that some human has plucked the starting conditions out of the air and then run an eye over the results and found the output "exciting" for some reason. That part is completely informal - outside it being woven into a system of scientific modelling.

    As with cellular automata, the mathematician sees a pattern emerge from the algorithm and finds it striking because it is a pretty pattern. Maybe even a suggestive pattern. Possibly even what looks like a pair of butterfly wings that might seem to stand as a good model of bistability in a natural system.

    It all gets very exciting - a la Wolfram - because it seems to say that (heuristically tuned by some fiddling about to find the lucky equations), maths is showing how simplicity produces "lifelike" natural complexity.
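
    The logistic map x -> r·x·(1 - x) is perhaps the simplest such trinket, and makes the point concrete (a minimal sketch of my own): the iteration itself is exact and formal, while the choice of r and the starting value is the informal human contribution that makes the output look "exciting".

    ```python
    # Minimal sketch: the logistic map, an exact iteration whose "interesting"
    # parameter values are picked informally by the human observer.
    def iterate_logistic(r, x0, n):
        x, history = x0, []
        for _ in range(n):
            x = r * x * (1 - x)
            history.append(x)
        return history

    # r = 3.2: the orbit settles onto a two-point cycle - a bistable pattern
    cycle = iterate_logistic(3.2, 0.3, 1000)[-2:]

    # r = 4.0: deterministic chaos - two starts differing by one part in a million
    hist_a = iterate_logistic(4.0, 0.300000, 50)
    hist_b = iterate_logistic(4.0, 0.300001, 50)
    max_gap = max(abs(p - q) for p, q in zip(hist_a, hist_b))
    print(cycle, max_gap)
    ```

    The same formal rule yields a tame bistable "butterfly" at one parameter value and unpredictable wildness at another; which result gets celebrated is the informal part.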

    But it is then so easy to skip over the many steps needed to start using these sparkling new toys as actual scientific models. I had close experience with this when I was involved in the debates over how to apply "chaos" models to neuroscience in the 1990s. It was disappointment with the ratio of hype to insight that pushed me onwards to hierarchy theory, biosemiosis and the larger story of dissipative systems.

    In short, the problem with deterministic chaos and other "exact" algorithmic approaches is that the formality gets abused by the informality of their interpretation. Pretty patterns get cherry picked. Worse still, the fact that complexity appears to emerge "magically" in supervenient fashion becomes weaponised by reductionist metaphysics. It is used to confirm old atomistic prejudices about how the world "really works".

    But algorithmic complexity is merely mechanical complication, not true organic complexity. It is all bottom-up construction and lacks top-down evolving constraints. It exists in a frictionless and sterile world that has no final cause, even in the most basic form of a thermodynamic imperative.

    So when I talk about chaos in the natural world sense, I am indeed not thinking it starts and stops in the reductionist trinkets generated by iterative functions. I am clear that these toy systems offer useful tools and arguments. But their shortcomings are just as visible.

    You have mentioned symmetry breaking several times in posts. I know practically nothing of it, but it seems to somewhat parallel the fundamental notion of chaos theory, sensitive dependence on initial conditions. How do you perceive it? Does it resonate in metaphysics?jgill

    Symmetry breaking is a huge subject - especially as I've spent the past year really trying to figure out my own view of how it all holds together from a systems science or holist perspective.

    But a short answer on this specific question is that spontaneous symmetry breaking has sensitivity because what we are saying is that a system is so poised that absolutely any perturbation would tip its state.

    Take the usual examples of a pencil balanced on its point, or Newton's dome with a ball perfectly balanced on the apex of a frictionless hemisphere. The pencil and ball are objects in a state of symmetry, being at rest with no net force acting on them, so they should never move. But then we also know that the slightest fluctuation - a waft of air, the thermal jiggle of their own vibration, even some kind of quantum tunnelling – will be enough to start to tip them. The symmetry will be broken and gravity will start to accelerate them in some "randomly chosen" direction.

    So metaphysically, this is quite complex. Some history of constraints has to drive the system to the point that it is in a state of poised perfection. The symmetry has to be created. And that then puts it in a position where it is vulnerable to the least push, that might come from anywhere. The sensitivity is created too. The poised system is both perfectly balanced and perfectly tippable as a result. The situation has been engineered so randomness at the smallest scale - an infinitesimal scale - is still enough to do the necessary.
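
    The poised pencil's sensitivity can even be put in numbers. A toy model of my own (linearising the inverted pendulum as theta'' = (g/L)·theta): shrinking the perturbation by nine orders of magnitude only delays the fall logarithmically, while the perturbation's sign still picks the direction of tipping.

    ```python
    # Toy model: unstable equilibrium of a balanced pencil, linearised as
    # theta'' = (g/L) * theta. Any perturbation grows exponentially.
    import math

    def tip_time(theta0, g_over_L=9.81, threshold=0.5, dt=1e-4):
        """Semi-implicit Euler: integrate until the tilt reaches `threshold` rad."""
        theta, omega, t = theta0, 0.0, 0.0
        while abs(theta) < threshold:
            omega += g_over_L * theta * dt
            theta += omega * dt
            t += dt
        return t, math.copysign(1, theta)

    # A perturbation a billion times smaller only delays the fall a few seconds;
    # its sign decides which way the pencil falls.
    print(tip_time(1e-3), tip_time(1e-12), tip_time(-1e-12))
    ```

    This is the engineered sensitivity described above: the balance guarantees that an arbitrarily small and arbitrarily sourced fluctuation is always enough to do the necessary.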

    All this is relevant to the OP - as the Big Bang is explained in terms of spontaneous symmetry breaking. And thus the conventional models have exactly this flaw where the existence of the "perfect balance" - a state of poised nothingness - is just conjured up in hand-waving fashion. And then a "first cause" is also conjured up in the form of "a quantum fluctuation". Some material act - an "environmental push" - tips the balance, as it inevitably must, as even the most infinitesimal and unintentional fluctuation is going to be enough to do the job of "spontaneous" symmetry breaking.

    The sensitive dependence on initial conditions is unbounded - and hence becomes helpfully something we don't even need to be talking about when pondering the "cause" of the Big Bang. A fluctuation was surely there at the beginning to tip the inflaton field down its potential well, or whatever. But as an efficient cause, it becomes the most minor and random of events. Any other fluctuation would do just as well as the nudge that set things rolling.

    Anyway, again we have the "exact mathematical models" that are indeed used very productively to model the creation of the Universe. One can write the various differential equations that generate some particular inflaton potential to explore. The Higgs, the dilaton, the self-interacting, the massive scalar. The pencil is poised. It must surely tip. We can generate a bunch of theoretical patterns and argue about how closely the observables match the latest CMB data.

    But from a metaphysics point of view, there is so much to add about what is going on behind the models - the assumptions that have to be built in as their motivation.

    Such as how does nature arrive at a generalised state of critical instability - a pencil balanced on its point? And how does that relate to the fact that nature also needs some unintentional fluctuation - even if it is infinitesimal - to start the game going at some actual point in time?

    This is where I bring in the contrary view where fluctuation is unbounded and symmetry states emerge as the constraint of fluctuation to some infinitesimal (Planckian) grain. You start with the absoluteness of an everythingness - chaotic or scalefree fluctuation. And then a state of global order crystallises as a generalised constraint of fluctuation to some single universal scale (the scale scaled by the three Planck constants).

    So you wind up with a quantum vacuum - a thermal equilibrium state where everything might be fluctuating, but all the fluctuation is compressed to a minimal effective scale. The vacuum as a whole is decohered to a state of simple looking classical Lorentzian symmetry.

    A ground has been created. And that becomes in turn its own next level of order-production in the form of the gauge excitations – the standard model particle content - that are the further "symmetry breakings" of the Cosmos as an expanding~cooling heat sink structure.
  • The Ultimate Question of Metaphysics
    On the other hand assuming there is a territory requires a leap of faith and the assumption of an archimedic point which ultimately leads to some sort of foundationalism. Every foundation leads to problems because there is no way it reveals itself.Tobias

    Pragmatism simply says we take that leap of faith - form a hypothesis - and test it. Our opinions of what is foundational then emerge from that process of engagement.

    Furthermore, we know that this process of reasoned enquiry is forming our subjective self as much as it is forming our opinion of what constitutes the outside world.

    This is the feature, not the bug, of map-making. Contra idealism, we - as subjects - don't exist beyond the pragmatic modelling relation we form with the world. I am me in terms of the habitual view I come to take of the world with which I interact. My semiotic Umwelt is a running model of "me" in the "world".

    So the subjective is entangled in the objective when it comes to reasoned inquiry. And that is a good thing. It is how I as an ego, with will, purpose and creativity, exist along with the worldview I am productively constructing.

    Epistemology somehow got hung up on Cartesian doubt. It divided folk into naive realists and mystically-inclined idealists.

    Peircean pragmatism is based on a sounder psychology. We engage with reality on the basis of revisable belief. And our own subjectivity is a product of that constructive engagement. Our choices about what are the "right" ways to frame reality emerge from a debate starting at that point.
  • The Ultimate Question of Metaphysics
    Don’t forget dark energy. You haven’t budgeted for that.
  • The Ultimate Question of Metaphysics
    Your presentation focuses on real world chaotic behaviors that can be approached probabilistically or statistically...jgill

    That must be the first part that Wiki calls an interdisciplinary scientific theory then.

    ...not with a more precise iterative tool.jgill

    That must be the branch of maths that specialises in exact algorithms which also famously can't in fact be applied to the real world (without scientific heuristics) due to the SDIC/butterfly effect.

    But even your chaotic maps are only interesting if they exhibit local uncertainty paired with global order. The Lorenz strange attractor caused excitement as a model for that reason. Organisation out of chaos. The trajectories or orbits were focused in ways that gave them a fractally constrained dimensionality.
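
    That pairing - local uncertainty within global order - is easy to exhibit numerically. A rough sketch (simple Euler stepping at the standard sigma = 10, rho = 28, beta = 8/3 parameters; an illustration, not a careful integration): nearby orbits diverge by many orders of magnitude, yet both remain confined to the bounded attractor.

    ```python
    # Lorenz system, crude Euler integration: local divergence of nearby
    # orbits (the butterfly effect) within a globally bounded attractor.
    def lorenz_step(state, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 20.0)
    b = (1.0 + 1e-9, 1.0, 20.0)   # perturbed by one part in a billion
    for _ in range(10000):        # integrate to t = 20
        a, b = lorenz_step(a), lorenz_step(b)

    separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    print(separation)  # vastly amplified, yet both orbits stay bounded
    ```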

    However, you are not here to discuss metaphysics. Sorry to interrupt!
  • The Ultimate Question of Metaphysics
    What is the general definition of chaos? It is some maximal imagined state of disorder or unpredictability. Pure wildness. A state with the least constraint or structure possible, right?

    Mathematical models then track the growth of wildness as constraints are systematically removed. Franks talks about this. A major step is from Gaussian to Powerlaw regimes. The first is merely random. It still has a variance and a mean. The second is chaotic. It now has no constraints on its variance or its mean. Or actually, it does in fact have a geometric mean to precisely define its distribution.

    So in going as far as we can go to remove constraints, we still arrive at some last constraint that can't be removed. Even the most chaotic system has this necessary residual structure.

    And then remember I wrote a whole sentence and not merely half a sentence....

    Because maths tells us that chaos must have structure as free possibility becomes its own system of constraints.apokrisis

    So chaos in nature - in the real world that is the subject of the OP - has this explanation. It is a general characteristic of free growth processes and dissipative structures that they attract to a powerlaw distribution. And this is because they build on themselves, preserving information in the way Franks describes.

    Earthquakes, weather systems, turbulence and branching processes in general, are self-organising as they become the context of their own further growth. This is known as preferential attachment or the Matthew effect - the rich get richer, the poor get poorer.
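
    Preferential attachment is a one-loop algorithm. A minimal Polya-urn-style sketch of my own (an illustration of the rich-get-richer mechanism, not a full network model): each new unit joins an existing site with probability proportional to that site's current size, so the system's history becomes the context of its own further growth.

    ```python
    # Rich-get-richer sketch: sites grow in proportion to their current size,
    # so early accidents get amplified into a heavy-tailed size distribution.
    import random

    def preferential_attachment(n_units, n_sites, seed=0):
        rng = random.Random(seed)
        sizes = [1] * n_sites                 # every site starts with one unit
        for _ in range(n_units):
            # choose a site weighted by its current size: history feeds growth
            site = rng.choices(range(n_sites), weights=sizes)[0]
            sizes[site] += 1
        return sorted(sizes, reverse=True)

    sizes = preferential_attachment(10000, 100)
    print(sizes[:5], sizes[-5:])  # a few giants, a long tail of minnows
    ```

    Each individual placement is an accident, yet the accidents gather a weight that shapes all subsequent accidents - the statistical order described above.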

    So even when the rain falls in the hills and starts to carve out trickles, then channels, then rivers, in the landscape, we can predict the mathematical qualities that the drainage network will have. We have a yardstick of "pure randomness" against which to measure its scalefree-ness or fractal dimensionality. Every individual event might be an accident, and yet as accidents combine, they gather a weight. A flow. And that has its own necessary statistical order.

    Thus getting back to the metaphysical argument of the thread, chaos is still a structured state. Even if the local events constituting some world are deemed "total accidents", there will be some form of coherent global structure that emerges from the fact that all these accidents are in interaction.

    Everythingness gets reduced to somethingness because a weight of events builds its own global history of constraints. And Franks' paper talks about the steps towards the most minimally constrained possible distributions – which happen to be the powerlaw regimes that in fact best describe nature in the large. Nature as a cosmic dissipative structure, expanding and cooling - or tumbling into the very heat sink it is constructing.

    I don't know if this is an assertion based on Peirce's views or on something else. If I recall correctly, though, he thought that chaos would result in structure through the development of what he called "habits" which it seems consist of actions or patterns which have already taken place.Ciceronianus

    Peirce did foundational work on probability theory, but in the Gaussian regime and not the Powerlaw regime as far as I know. Cauchy distributions were around in his day, of course.

    But to get back to the point I am actually making, I meant to draw attention to the way "vagueness", "chaos", "everythingness", "quantum foam" are all attempts to conceive of a state of Apeiron - of unbounded or unconstrained potential.

    And all these states are then being conceived in terms of locality and independence – disordered fluctuation. But then dialectically, buried within that definition is the "other" which is what happens when there develops a history of interaction. The pendulum now swings towards global co-dependence. Every new event is adding to, or subtracting from, some collective weight of past action.

    This is of course directly Peircean. The firstness of tychic possibility leading to the secondness of individual events and then the thirdness of synechic or continuous habit. Global order emerges out of free possibility as a collection of accidents has to arrive at some self-stable pattern that embodies its "flow".

    Thus immanence can explain how somethingness arises from everythingness. And with the revolutions of "nonlinear" maths and physics of the second half of the last century, that became "a scientific fact". :grin:
  • The Ultimate Question of Metaphysics
    Was even Plato a Platonist? But the Timaeus traverses the right questions.

    If I have to use labels, then I am a structural realist, with dissipative structure and symmetry breaking being the mathematical meat of that position.
  • The Ultimate Question of Metaphysics
    In what sense is the question "Why is there a state of structured order instead of some wild material chaos?" significantly less problematic than the question "Why is there something instead of nothing?"Ciceronianus

    Because maths tells us that chaos must have structure as free possibility becomes its own system of constraints.
  • The Ultimate Question of Metaphysics
    We may be able to theorize that "existence is evolutionary"; we may be able to ascertain a tendency toward organization. I have problems thinking of that as explaining why there is something rather than nothing, however.Ciceronianus

    Why do you keep framing this as a problem of "something rather than nothing" when that has already been agreed as a self-contradicting metaphysics?

    There is something. Therefore nothingness was never going to be the general case. And if its ontic role is reduced to being some stage in a general evolutionary trajectory, everyone usually agrees nothing can come from nothing. On the other hand, it doesn't seem problematic to posit a general nothingness as the ultimate cosmic future. We already know from the Big Bang that the Cosmos certainly appears to have the creation of an eternalised Heat Death void in mind.

    So drop the "something rather than nothing". It's the first thing to get chucked out here.

    As the OP stated:
    But as something does actually exist rather than nothing this to me proves that nothing is actually impossible to exist.Deus

    Does the universe exist in order to evolve, or does the evolution take place because it exists?Ciceronianus

    Does matter exist without a form? Does form exist without being enmattered?

    Holism is about the hylomorphic unity of substantial actuality. You are talking like a reductionist in wanting to make matter and form two different species of thingness rather than the complementary aspects of the one holistic thing.

    The super-explanation I was thinking of, which I think is the goal of the question necessitated by the form of the question (why something instead of nothing) would be an explanation along the lines of "there's something because the universe was created for a reason."Ciceronianus

    And opposed to the transcendence of the reductionist is the immanence of the holist. The question becomes why something and not everything? Why a state of structured order and not some wild material chaos? The answer is then found within.

    The universe was not created for a reason - as if there were some higher power it needed to answer to. Instead the universe emerged as a persisting stable structure because it discovered reason. It was organised by the inevitability of a rational or logical structure. The cosmos is itself the expression of evolutionary reasonableness.
  • The Ultimate Question of Metaphysics
    Accepting this, I still don't understand what assuming "nothing", whatever that is meant to mean, as--seemingly--an alternative to existence or persistence, does beyond supporting a belief that something in the nature of a super-explanation...Ciceronianus

    What is wrong with a super-explanation here? The "why anything" question seems perfectly reasonable to me – once you get beyond the beginner level of "something rather than nothing" as the two ontic categories you feel are being opposed.

    There must be a super-explanation, or reason or purpose, for everything, because otherwise (instead) there would be nothing.Ciceronianus

    We have two traditions of super-explanation being opposed here.

    Atomism or reductionism is based on the assumption of brute material existence in eternal voids. You seem to be speaking for that. Things just are forever. There is no reason or purpose to be found. In terms of what is real, the mantra is that form and finality are categories which fail to exist in nature.

    But then you have holism or the process view. Existence is evolutionary and thus form and purpose are real. Somethingness is the "reasonable" constraint on chaotic everythingness. An overarching principle organises creation - even if it is just the Darwinian imperative to persist as a structure of entropy dissipation.

    Atomism requires either transcendence or mutism from its adherents. Either they must find their answer to "why anything" in a creating god, or they must stifle the "why" question itself.

    Holism seeks its answer in self-organising immanence. Concrete existence arises because absolute freedom imposes its own organising constraints. Existence is thus a persistent statistical pattern. The emergent sum over all possibilities.

    That is where our physical inquiry has led to. It is the metaphysics of quantum field theory. So why not just accept it?
  • The Ultimate Question of Metaphysics
    It strikes me that the question, as stated, should never arise. Why assume that "something" requires an explanation because it exists rather than or instead of nothing?Ciceronianus

    But doesn’t finding that things only persist rather than exist make it harder to take such an a-causal stance on the matter?

    Reality seems more a process - a developing structure - than just some eternal set of material objects.

    So existence might seem a brute fact, but persistence requires a contextual explanation.



    Thanks. I checked over my old notes and was reminded that Fernando Zalamea did these analyses of how CT connects to Peirce (more than Hegel).

    https://cesfia.org.pe/villena/zalamea_peirce_translation.pdf
    https://uberty.org/wp-content/uploads/2015/07/Zalamea-Peirces-Continuum.pdf
  • The Largest Number We Will Ever Need
    Is there a finite number (Nmax) such that no calculations ever in physics will exceed that number?Agent Smith

    Yep. The maximum entropy or information content of the visible universe is 10^123 k. So if you wanted to number every individual degree of freedom that exists for all practical purposes, there’s your number.

    At least it is the current best stab. See https://mdpi-res.com/d_attachment/proceedings/proceedings-46-00011/article_deploy/proceedings-46-00011-v2.pdf?version=1612356633
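    The order of magnitude can be roughly recovered with a back-of-envelope Bekenstein–Hawking calculation: the entropy of the cosmic (de Sitter) horizon, in units of Boltzmann's constant, is the horizon area divided by four Planck areas. Plugging in round values for the horizon radius and Planck length (my figures, not taken from the linked paper) lands in the 10^122–10^123 ballpark:

```python
import math

R_H = 1.4e26      # rough de Sitter/Hubble horizon radius, metres
l_p = 1.616e-35   # Planck length, metres

# Bekenstein-Hawking horizon entropy in units of k:
# S = A / (4 l_p^2) = (4 pi R^2) / (4 l_p^2) = pi (R / l_p)^2
S = math.pi * (R_H / l_p) ** 2

print(f"S ~ 10^{math.log10(S):.0f} k")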
  • On whether what exists is determinate
    More generally, I like to argue for the view that whatever we believe to exist (even things existing unobserved) exists in a determinate manner - meaning that if we encounter a previously-unseen celestial object, we will know what kind of thing it is. But that still leaves room for the assertion that the kind of reality the world has outside of the mind of the observer is indeterminate. That means that it is not non-existent, but it's also not strictly speaking existent. At best it has a kind of presumptive existenceWayfarer

    Aren’t you just mixing up epistemology with ontology?

    But semiotics can fix it. All life and mind is semiotically self determinate. It comes with a model - some form and purpose encoded in DNA/neurons/words/numbers - that does determine the kind of thing it intends to be. Or at least limits the extent of material accidents so as to be able to function as that thing.

    Then the wider physical and chemical world is at best pansemiotic. It has no coded representation constraining its being. But it does have an environment, an informing context, that again shapes it to be the kind of thing it is for some general reason.

    So a river or a mountain or a star all have that kind of environmental determinacy.

    A landscape or any other natural structure has to reflect the constraints of the laws of thermodynamics. Disposing of flows becomes the purpose shaping its being, and the forms that result are inescapable patterns.

    So there is a ton of ontological-level determination in the universe. Every star, every river, every sand dune, looks much the same just because some geometries are maximally efficient and can build themselves by material accident. They don’t need an epistemology - some encoded blueprint or instruction set.
  • The Ultimate Question of Metaphysics
    What do you mean by A and not-A?litewave

    I’m talking in the context of quantum field theory and its path integral formalism, particle fields and creation-annihilation operators. Or in very simple terms, if there is a fluctuation - anything at all - then it is just as likely to go left as go right. And doing both, it cancels out doing anything at all.

    So when describing a quantum vacuum ground state, any fluctuation has some probability. But these virtual particles are created in mutually cancelling pairs. An electron must generally be accompanied by a positron. And so while they both may briefly exist, they both also just as fast wipe each other out.

    It is just built into the quantum view that everything is possible, and yet generally it all self cancels to nothing because the only way to exist as some particular object is to break a symmetry. And yet that very symmetry - if I can go left, then I can also go right - then comes back at you to swallow you up.

    This gives you your quantum ground state - an everythingness that is a nothingness. From there, you can start to figure out how anything concrete could persist for any kind of time at all.

    And that becomes what cosmology is all about - the messy complexity that saw one in a billion protons failing to be annihilated by a matching antiproton in the first split second of the Big Bang, and so creating a universe as something more interesting than a spreading and cooling bath of radiation.