Comments

  • Soundness
    Really I'm just fishing for any resources that might detail the concept of soundness more fully to satisfy an intellectual itch I have.Moliere

    No good resources spring to mind. I don't think this is a well-explored question.

    But the answer seems obvious enough. Soundness is based on a mechanical or atomistic ontology. You have some bunch of stable parts. The parts can be rearranged according to some set of rules with nothing essential being changed about those parts. There is no leakage or spoilage as the parts get shuffled around. And any rule-based shuffle of the parts is an open possibility. No patterns or states that can be accessed by those syntactic rules are off-limits.

    So it is classic mechanical thinking. All the accidents are ruled out at the level of the atomistic parts. There is closure right there. And then the combinations of those parts are made the opposite - as open and unconstrained as you like. Provided the combinations follow the rules. Which you can freely create. And so even the rules have no rules. Syntax is arbitrary. Or at least the only meta-rule is that the rules are capable of being followed. There is a soundness requirement in that. Which in turn refers you back to the notion of a part. A part, being atomistically stable, guarantees that timeless syntax, abstract rules, can be the case. The determinism is grounded in that assumption. Soundness thus follows as any dynamics or change can only be caused by "the rules".
  • Senses
    I can see we are not aware of the totality of what is going on around us by our evolved senses but what we sense is still grounded in the physical world.Nils Loc

    But what I am challenging would be that presumption that "the physical world" is the ground - the thing-in-itself to be truthfully represented by a brain experiencing that world.

    Sure. I am still arguing realism and not idealism. There is a world. It will constrain our wishes. Evolution would favour a relation with it that is "accurate", or "truthful", or in some sense "grounded". But the relation is in fact only a pragmatic one, a semiotic one.

    The brain hasn't evolved to be a model of the world in some objective and disinterested sense, but quite the opposite. It has evolved to be an Umwelt - a model of the world with the individuated self in it. And this makes a huge metaphysical difference.

    The relationship is grounded in that forming of a sharp self~world difference. The subjectivity of sensation makes evolutionary sense because the construction of a subjective point of view is the primary goal.

    Fundamentally any body of knowledge that represents a good account of how the world behaves is validated by the senses, otherwise something else is going on, like supernatural monkey business.Nils Loc

    Again, you seem to want to argue that an objective view of the world would be somehow the design goal here. So you would judge the evolution of sensation against that criterion.

    But that presumption is completely unhelpful for understanding the psychology of perception. What evolves is subjective experience - the ability to model a world with a self as its centre. And from there, we can see that the ability to ignore the world - to push it into the background and deal with it automatically as not worth our attention - is one of the driving evolutionary criteria.

    The more I can guess right, the less I need to actually check. The more I can forget or ignore, the wiser I am getting as a "reality modelling system".

    An enactive or ecological approach to perception stresses how little "reality modelling" the brain has to do in fact. All the information is out there all the time, just lying around. So the work the brain has to do is mostly about forming a sense of self - arriving at the constantly changing attentional viewpoint which is "me ignoring pretty much everything going on so I can execute whatever little wish or fancy that constitutes my highly subjective point of view".
  • Senses
    I individuate, therefore I am. ;)
  • Senses
    Are there possibly other universes or dimensions in which the physical world utilises different physical phenomenons that are beyond our depths of understanding?Susu

    I'd suggest that the first thing to understand is that senses work upon differences - gradations, distinctions, and invariants under movement - in an environment.StreetlightX

    I agree with SX. We don’t see the physical world as such. We just seek its informative differences. So there is already the further thing of a self that is being constructed as part of sensation. And the senses evolve in such a way as to find the means to create “physical” differences even where there are none really.

    A good example is hue perception or colour discrimination. Yellow and red aren’t objectively physical qualities that we had to evolve to see. Instead they are differences we manufacture to make the basically similar look violently different ... at a glance.

    A tiny change in reflected wavelength provokes an abrupt change in hue response. And this then allows us to break up a cluttered world - one reflecting light across a very compressed band of the spectrum - into all the multicolour glory of a rose garden.

    Seeing colour is simply a fast way of seeing surfaces that are likely to identify some object, like a leaf or fruit or petal. The “physics” being seen is the fact that an object tends to be made of the same stuff and thus has its own characteristic reflectance. So the most fractional differences in reflectance can become extremely informative. The shape of the object leaps out because its hue makes it distinctive.

    Shape also pops out when there is the contrast of movement. A leaf will flutter and be seen as a coherent change.

    So ultimately, we are wired to understand the world as a collection of objects of interest against backgrounds of indifference. Sensation is attention. We are seeking a primal differencing in terms of what matters to us. It is not the physics of the world that is experienced, but the contrast that is a world with us in it.

    And so we evolve powerful mechanisms for homing in on coherent objects that pop out against generally indifferent backgrounds. When it comes to hue perception, the fact that it isn’t about objective physics becomes really strongly apparent. Colour is just a shortcut for identifying non-moving shapes in a cluttered visual environment.
  • Heidegger's vision of philosophy in 1919
    Philosophy is from this perspective necessarily personal, and yet it is personal in a way that seeks the living impersonal.macrosoft

    The last fantasy is that insincere or emotionally closed conversations are the way to do things. This fits the image of philosophy as turning the crank on the argument machine.macrosoft

    Some refreshing positivity here. :)

    I agree that existence constantly strikes me as a marvel. The universe feels so alive when you can see it through the eyes of process philosophy and systems science. We should bury ourselves deep in nature.

    But then I like a good argument, a good machine, as well. Nature has its pattern. Rational artifice has its pattern too. What's not to like in the end?
  • Heidegger's vision of philosophy in 1919
    But I'm suggesting a strange thing, that physics time is (at least for human cognition) derivative from a more basic experience of time.macrosoft

    But how do we experience time? I mean I understand the neuroscience of it. But I'm not getting how you think we experience it in any pre-theoretical sense. How do you think an animal "experiences time"?
  • Heidegger's vision of philosophy in 1919
    It's to some degree a random walk and in other ways dialectical necessity.macrosoft

    Statistics always expresses patterns. And really, there are only two statistical patterns ruling nature. Either Gaussian - the single-scale bell curve kind - or fractal, the log/log scalefree kind that is in fact more primal because it has one fewer linear constraint.

    A random walk expresses fractal intermittency. It resembles nature - a nature understood in dissipative process terms - far more accurately than the single-scale Gaussian picture.
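
    A minimal sketch of that contrast, using only Python's standard library (purely illustrative, not a claim about any particular natural system): Gaussian samples stay pinned to a single characteristic scale, while Pareto (power-law) samples keep throwing up fluctuations of every scale.

        import random

        random.seed(1)
        n = 100_000

        gauss = [abs(random.gauss(0, 1)) for _ in range(n)]      # single-scale, bell-curve fluctuations
        pareto = [random.paretovariate(1.5) for _ in range(n)]   # scale-free, power-law fluctuations

        # The largest Gaussian excursion is only a few times the mean;
        # the largest Pareto excursion dwarfs its mean - the signature of fractal intermittency.
        print(max(gauss) / (sum(gauss) / n))
        print(max(pareto) / (sum(pareto) / n))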
  • Heidegger's vision of philosophy in 1919
    Meaning time is the time of intelligibility, the time it takes to read this sentence.macrosoft

    Physics would have its own version in the holographic and lightcone structure of the Universe. It takes time to arrive at a state of coherence across a spatial interval. If the sun dematerialised right now, it would take about eight minutes to discover that its light and gravity had gone.

    So there is a baked in causal issue that defines cogency. If something happens way over there, it takes a time for it to have any effect over here. It takes time to observe a change or read that difference.
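
    A back-of-the-envelope check on that eight-minute figure (Python, with textbook values for the Earth-Sun distance and the speed of light):

        AU = 1.496e11   # metres - mean Earth-Sun distance
        c = 2.998e8     # metres per second - speed of light (and of gravity's influence)
        print(AU / c / 60)   # ~8.3 minutes before the sun's disappearance could register here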
  • Heidegger's vision of philosophy in 1919
    I am down with chance being fundamental. Is there chance without time?macrosoft

    I missed this. Chance would also evolve in character as part of the growth of Cosmic regularity.

    So in the beginning, chance has some really wild and violent form. The initial conditions - a Peircean firstness or vagueness, today's quantum foam - are rather absolute chance in being fluctuations without limits. No constraints yet act. And so that defies our ordinary notions of chance where constraints exist to make the randomness comprehensible.

    If we talk about chance in the usual sense, we mean put 9 red balls and 1 black ball in a bag. What are the odds of pulling a black ball out? We are talking of an atomistic system in which there are already a set of comparable components confined to the same shared spacetime. That comes way later in the cosmic tale. In the beginning, there is neither vacuum nor particles as yet. The distinction doesn't even apply as there are no gaps between things, and no things for there to be gaps. It's a vanilla bath of radiation hosting every possible particle fluctuation mode.

    But as the Big Bang cools~expands, you get things condensing out and getting lumpy. Mass and energy decouple. Chance now appears on the stage in a more definite way as there are particles weaving independent histories. It means something that one hits another, or instead misses. There are events that fix the past as a memory. A pattern of accidents can accumulate to shape what can happen as a further concrete step of the story.

    Someone just took out the black ball. The chances of taking out a second have just dropped to zero forever now. Chance has taken on its familiar constrained complexion.
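
    To spell out that familiar, constrained sense of chance (a trivial Python sketch of the bag example):

        bag = ["red"] * 9 + ["black"]
        print(bag.count("black") / len(bag))    # 0.1 - the odds before any draw

        bag.remove("black")                     # someone takes out the black ball
        print(bag.count("black") / len(bag))    # 0.0 - an accumulated accident now constrains every future draw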

    So chance may be fundamental. But also it evolves to arrive at its presently understood character. A Peircean metaphysics really pulls out the rug from under what we think we know. However it is the metaphysics making the best sense of modern cosmology.
  • Heidegger's vision of philosophy in 1919
    Fascinating. Does time get into the picture here?macrosoft

    Time develops. And its character is thermodynamic. So time is not a background dimension but a measure of a sum of changes.

    In the beginning, there is no organised passage of time because there is no accumulating register of changes. Nothing is being fixed in the cosmic memory. There is just a yawning vagueness, a sea of fluctuation that is neither coming nor going. It is neither a timeless eternity nor some kind of eternal present, as it is less than either of those contrasting possibilities.

    If this sounds poetic, well of course. But it is also what physics rather predicts. If you wind the world we know back to the first Planck scale instant of the Big Bang, then you arrive at a limit state that is so hot and dense that it dissolves into a quantum foam of black holes and wormholes. Spacetime is so heavy it collapses at every point into pure gravity fluctuations. Time curls into tiny balls along with space so that the fluctuations are temporal wormholes. Causality breaks down as endings can't be distinguished from beginnings.

    So we have a timeless jostle. Then the symmetry of that breaks. We don't know how exactly, but we know that it does and that the symmetry breaking has to take the one available Standard Model pattern. A history can begin in which there is a before (the time when the scale factor was smaller and hotter) and an after (the time when the scale factor will be larger and colder - all the way to the complementary limit defined by the entropic cosmic heat death).

    So time has a thermal direction. Or thermo-spatial direction if you include both the cooling and the spreading. And change can be measured against that directional backdrop. Where you are right now has a number, a clock, as it is simply the general average cosmic energy density - the temperature of the cosmic microwave background (CMB).
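
    In standard cosmology that clock can be written down directly - the background temperature simply tracks the inverse of the scale factor (a textbook relation, quoted here only for concreteness):

        T(a) = T_0 / a, with T_0 ≈ 2.725 K today (taking a = 1 now)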

    At the Heat Death, time runs all the way down to the point where change ends. You still have a faint rustle of quantum fluctuations - the black body photons emitted by the cosmic event horizons - however the temperature of that is as close to absolute zero as physically possible. There is no longer any possibility of a meaningful change, either locally or globally.
  • Heidegger's vision of philosophy in 1919
    So the sign is just pure meaningful being, the primary 'atom.' My speculative mind is there. The only question is not the 'consciousness' for which the sign exist (because we don't need that along this line of thought), but something like time for the play or alteration of the signs. The sign-stream rushes forward with memory.macrosoft

    That gets it. It is the atomism of the event that is the sign. Flashes of actuality that then weave the collective history.

    So we do want to search for crisp atomism as the basis of our ontology. But we find it not in atomistic matter - the usual answer. The passively existent answer. We find it being conjured into being as the emergent product of a context of constraints reacting with a ground of naked possibility. A physical event is the answer to a question that was asked. The quantum physicist interrogates with their measuring apparatus - has "it" happened yet? At some point, the sign is given. History branches in definite fashion. There is an updated context that requires the posing of some different question.

    So yes, history is memory. Memory is information. Information is the record that constrains future possibility. Yet still, the other half of the deal is entropy or informational uncertainty. Material surprise. The naked ground is probabilistic and contributes its capacity for the accidental.

    Nature asks the question - has that damn particle decayed yet? And the decay is spontaneous - within the constraints imposed on it. (See the quantum Zeno effect.) Nature actually is an observer waiting for the sign the event has happened. The question does have a meaning as an answer has to be given. The fact doesn't just passively exist. A dynamic has to play out.

    Again, Peirce is the rare metaphysician who got it because he made chance or tychism as fundamental as law or synechism. His view of probability was propensity-based. He was way out on a limb in accepting spontaneity as real and creative, not merely a convenient modelling fiction.

    So there is a play. Events have to manifest. And there is a flow. The answers weave a collective memory. There are even the atoms - definite events. But metaphysically, they have the quality of signs - in the full Peircean sense.
  • Heidegger's vision of philosophy in 1919
    What makes sense to me (in a speculative mode) is that there is no 'I' and no 'world' but just the signs. The 'I' and the 'world' are just two frequent signs that refer to still other signs.macrosoft

    The 'I' and the 'world' are just further signs in a grand linguistic play of signs. But pan-semiosis would be an actual model of ontology and not merely an acknowledgement of epistemic limitedness. So really, this would be two different conversations.

    I like what you have been saying on Heidegger. I am in full agreement with the psychological accuracy of distinguishing between the kind of consciousness which is a biological being in the world - the enactive, embodied, ecological, etc, understanding of mind - and then the socialised, linguistic, second-order structuring of experience that comes once phenomenology is carved up by language.

    Only humans have dasein of this form. And where it becomes ontologically significant could be the degree to which it takes existence in general - cosmic existence - to some kind of dialectical, end of history, extreme. We have gone beyond just words to numbers. We speak the language of pure Platonic forms.

    An animal is just buried in its little world, its umwelt, in a direct, pragmatic and unthinking way. An organism entropifies. Sure it may have dasein in that there is a running modelling relation in which the world is comprehended as a (neural/experiential) system of signs. But that is a completely particular kind of relation. Task-specific and highly situational. Not at all a general one. That is why I wouldn't rush to give it ontically general significance - like talking about spirit, or soul, or consciousness, as any kind of metaphysical stuff.

    But through language, humans came to socially objectify themselves as psychological subjects. That was a first detachment, a first step away from the embedded particularity of neurobiological dasein. And then through maths and logic - completely abstracted symbol play, drained of embodied semantic content to leave just a naked syntax - we have opened up the possibility of grasping something completely general about existence. We can put our hands on mathematical-strength forms or patterns. We can release the mechanistic and technological possibilities that the Universe also happened to contain as potentials.

    Now humans of course make pretty pedestrian uses of what seem rather exalted capabilities. And we will probably always do so. Yet still, something new has been manifested. And it seems a key project for philosophy to make sense of that. What do we really think about machines - after we learn that they might in some proper sense stand as existence's other natural pole of being? The mechanistic and computational represents some kind of end state or limit. That would be a fact that still mostly inspires ambivalence.

    To give the best example of what I mean, I am thinking about how the Standard Model of particle physics has turned out to rest on the Platonic-strength necessity of permutation symmetry. As the Cosmos developed organisation by cooling~expanding, it had to become fragmented locally by a cascade of symmetry-breakings. It had to bump down the levels of the permutational symmetries, from the most complex to the most simple.

    The Big Bang started out in some very large and confused geometry. Let's call it E(8), SU(5) or SO(10), as the question is still open. But then it boiled its way down to SU(3)×SU(2)×U(1) - the strong and electroweak forces - with the electroweak sector breaking in turn to leave the residual U(1) of electromagnetism.
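
    For concreteness, one commonly cited breaking chain - only an illustration of the pattern, since the grand-unified step remains conjectural:

        SO(10) → SU(5) → SU(3) × SU(2) × U(1) → SU(3) × U(1)_em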

    So - as Ontic Structural Realism says - through a mathematical system of signs, a mathematical language for relating to dasein - we, as highly particular biological creatures, have come to grasp something absolutely general and necessarily true about the physical world. Reality turns out to have this hard and mechanistic formal face to it. Only these permutation symmetries are logically possible. And that is a constraint so objective that it always lay in wait as the future of any Cosmos. Chaos thought it could do what it liked. Randomness was its destiny. But permutation symmetry already spelt finitude. The ultimate shape of the future was an inevitability. The Heat Death of the Universe was foretold.

    This is a bit of an excursion into the big picture. But I want to demonstrate where the relation between epistemology and ontology may lie. That question is pretty confused. And it is what Peircean semiotics makes clear, in my view.

    So the big picture is Aristotelean - reality as a hylomorphic interaction of matter and form, action and constraint, possibility and necessity. Everything rests on a duality - or more properly, a dialectic or dichotomy. In the "beginning" is just a chaos of everythingness. A vagueness. And then that symmetry of fluctuations gets broken so that it becomes crisply organised into a global aspect - an informing weight of history and direction - and a local aspect, the now atomised and fragmented collection of material components or degrees of freedom, which are all the further accidents waiting to happen.

    To us humans, living in the Cosmos right when it has got nearly as large and cold as it ever will be, but with still enough fuel to spark some local fires, it seems we exist in a world of reductionist construction. We live in that era of the medium-sized dry goods where matter exists stably as solids in a void. And so a mechanical or technological mode of action - the constructive mode - can have its fullest expression. We humans are at the apex of that. Biology rests on the possibilities of constructive action, local choices. If we pick up a rock and move it, it will still be there a hundred years later most likely. Humans have then continued along this path of constructive causality to invent machines of the most absolute kind - like computers. Mathematical machines.

    But then - through the sciences of cosmology and fundamental physics - we can now grasp the particularity of the era that has informed our dasein. We have been opened up to its more general or objective mathematical-strength underpinnings. We can see the actual forms that impose a structuring necessity on "everything".

    And at this point, ontology becomes semiotic. We see that the duality we are always grasping after is not the trite mind~world relation (a very particular biological dasein or umwelt), but a dualism of entropy and information (or chaos and order, matter and form, flux and logos, apeiron and peras, etc).

    So semiotics deals with epistemology. At the level of biology, it becomes pretty clear that "mindfulness" is just about the particularity of an embedded thermodynamic relation. An organism exists with the sole cosmic purpose of breaking down entropic gradients. Dasein boils down to that. An organism's umwelt or system of signs is really something pretty physical - a collection of on/off regulatory switches.

    We think of signs as marks - indelible scratches that can then become the material subject of a mindful interpretation. But really, a sign in the biologically primitive sense is a switch - a logic gate - that can be thrown. It is a bit of machinery or syntax that can be inserted into the material flow of the world so as to start to control that world with stored information.

    At the level of biology, the fact that "mindfulness" is purely pragmatic is nakedly visible. An enzyme is a message from the genes to the cell. It says turn on this, switch off that. That is dasein as mechanistic action. It is all about the imposition of constraints, not some exploration of intellectual freedoms.

    And when humans invented language, it too was ultimately a means of sociocultural regulation. It was the mechanistic framework which could be dropped over the top of the psychological animal to establish an appropriately detached notion of self as a social actor, keeping a close eye on the wants and impulses of the beast within.

    So semiotics gives us the duality that works because it is a properly interactive one. We can see why the trick would exist. The whole of the Cosmos only exists because there is this fundamental duality between entropy and information, local material action and global formal constraint. The Big Bang couldn't have happened any other way except that it would become organised by the constraints of permutation symmetry. Confusion can't stay confused as it has to start cancelling much of itself out. Just as every baby must become organised to have the psychological truth of its embodied dasein. Regularity must emerge as habits form and enduring mechanism arises.

    So there would be the story of what constitutes the particular dasein of being a conscious human. We can dissect that in the now standard psychological way to bring out the semiotic levels involved. The arc of regulating machinery from membranes and genes, to neurons and muscle fibres, to words and even numbers. On the epistemological side, this is a story of the embedded semantics dropping away and pure syntactical mechanism becoming fully realised. An enzyme is as dedicated as a lock and key. But technology is as general purpose as mathematical form permits it to be.

    And then there is the story of dasein as an ontology of semiosis. The world itself arises as some kind of interaction between information and entropy ... as the most primal constructs we can apply to its description. (Of course, we never escape our epistemological situatedness to talk about the thing in itself in some naive realistic fashion. Ontology is only about the commitments we are prepared to risk our necks by. That too is already taken as read by the post-Kantian Pragmatist.)
  • Heidegger's vision of philosophy in 1919
    One thing maybe you can answer? Does the sign just exist in your view? Or does it exist for a subject? In a speculative frame of mind, it seems that we just have a flow of signed-sensation, with the subject being a recurrent theme of that flow. Clearly the flow of signs is motivated, directional, even motivated toward self-knowing, though only perhaps indirectly. What do you say?macrosoft

    This is a difficult area now. We would have to distinguish between two stories - the fairly uncontroversial and scientific bio-semiotic one, and the rather more metaphysically speculative pan-semiotic one.

    The biosemiotic one is just regular science. Creatures form a picture of the world - with themselves in it. So we are only talking about the signs they experience which betoken the world as it is optimally understood by them.

    When I see a red light at a crossroads, I see a place where I should stop. There is danger in continuing. The danger is real. But the sign is psychological you would say. And if I see a dark cloud, I know to read that as a feature of the world promising rain. The conceptual essence of there being a cloud for me is this meaning. And then we can quarrel forever about the reality of "a cloud" as some actual object or entity that would deserve being named and taken as a habitual sign of anything in particular.

    But if you want to continue on - like Peirce - then everything would only "exist" to the extent it forms a sign or mark that can be read by the world in some sense. So everything that could count as an actual event - something definitely happening, something that is a positive fork in a developing history - would be semiotic. It would be information. A fact. Meaningful in terms of a context that "observes".

    Again, this might just sound like a redescription of physics. Things are possibilities, then they actually happen. Big deal.

    However the semiotic view does underwrite the new information/entropy distinction that has arisen in physics, along with the contextual or holistic causality implied by QM and the rest. Classical reality is the emergent umwelt - the actuality that mediates between the realm of "pure material possibility" and the realm of "global contextual order".

    All sorts of things were possible the moment before that atom decayed. But then it did decay and a degree of freedom was definitely used up. The history of the universe was forever changed. And so the "real world" is the world of all those accumulating marks - each mark being the intersection of some story about global laws and other constraints, and some story about local possibilities or degrees of freedom.

    So it is the triadic logic that connects, not the dualistic story of subjects and objects. What is real emerges as the substantive middle ground of actuality. And what has been related is a global information-bearing context and a local action-producing propensity. The umwelt becomes the (somewhat metaphorical) place in between where the signs or definite marks are getting written.
  • Heidegger's vision of philosophy in 1919
    The difference might whether one's motivations are primarily scientific or spiritual.macrosoft

    Yep. Peirce speaks to me as a scientist. I could never get into the German idealist and Naturphilosophie tradition even though it gets oh so close to the same thing a lot of the time. (Peirce was very inspired by Schelling, btw).

    However, if you are going to be a holist and process thinker, I believe it is unavoidable that you will end up favouring immanence over transcendence. And so the idea of "spirit" is going to lose all its bite by the end.

    A big part of that is that monism also has to give way to an irreducible triadicism. And to make Geist or somesuch the monistic foundation is already to begin with something too developed. It is a dualism willing to give up its material aspect but insisting on some kind of residual mental aspect.

    The Peircean alternative is that "in the beginning" is just the vagueness of a potential. It is a start that is as devoid of spirit as of matter. And then your opposing limits of the real can emerge from that. You can have mind and world develop as synergistic limits to being. You can have causality divided in Aristotelean fashion so that there is both global finality and local action - constraint and freedom - as the vague begins to achieve a more concrete or substantial state of definition.

    So the Peircean model is clean. It doesn't impose any emotional motif on the initial conditions. And that then justifies the emergence of mind, spirit, or whatever you want to call the informational aspect of the deal, as the proper partner to the material, entropic, etc, part of the deal.

    You don't have to sneak the forbidden into how you start the story. You can be content with the way it emerges as a necessary conjunct of the good old physicalist stuff.

    I'd say that Heidegger was ultimately a philosopher concerned with how to live in the world not as a scientist first but as a man.macrosoft

    And so I'd reply that Peirce's insight is that reality itself is "scientific". It arises by ... the universal growth of reasonableness.

    My hand is mine because I can usually make it do what I want without having to think about it. 'I' am the stuff that doesn't not resist my will.macrosoft

    Exactly. The self at the centre of things is merely the sum of all that is found to be not part of the world. It is a fluid development built on a process of othering. The self is just the other "other" that arises in opposition to "the world" (and thus - against dualism - is wholly dependent on that "world").
  • Heidegger's vision of philosophy in 1919
    But we aren't stuff. We are more like the open space for meaning. Nor are we separate from the world. We are being-in-the-world-with-others. We are the world itself being in itself, something like that. We are the future as possibility acting in the present but having a past that grounds all this.macrosoft

    Have you come across semiotics as a sharper way to make sense of this - being as an Umwelt or sign relation?

    What we are looking for is a process metaphysics that works. We don't want either mind or matter to exist in some substantial and primordial fashion. Instead we want a metaphysics where both are co-emergent in a proper sense. And a problem with phenomenology would be this "we" that is doing the being-in-the-world, etc. That makes it sound like consciousness is the primordial stuff or primordial ground.

    A pragmatic/semiotic metaphysics instead focuses on the interpretive relation that forms an Umwelt. A world of experience arises which mediates between the "self" that is implicit in the development of habits and dispositions, and the "world" that then represents all the recalcitrant facts that stand in opposition to this pole of intentionality.

    So this is quite a psychologically realistic view. All organisms are agents forming their view of the world - experiencing it as an organised system of signs. But it can also be a physically realistic metaphysics as our best understanding of physics already demands that it be "organismic" in having historically developed habits, dispositions and even (thermodynamic) intents.

    Modern physics now relies on information theory to account for why reality is atomistically fragmented into "degrees of freedom". A particle is essentially "a sign" of something that could happen. We know it was there because we record the event - the mark it leaves.

    So in a sense that semiosis can make precise - which information theory can measure - we do now have a worldview, a metaphysics, which is founded on "meaning making". And it can apply both to psychological science and physical science.
  • Elon Musk on the Simulation Hypothesis
    What the basic problem is that as the Computer has an effect on what it is modelling, it's actions make it a subject while the mathematical model, ought to be objective. Sometimes it's possible stll to give the correct model and the problem of subjectivity can be avoided, but not with negative self reference.ssu

    I agree with this but would also point out how it still doesn't break with the reductionist presumption that this fact is a bug rather than a feature of physicalist ontology.

    So it is a problem that observers would introduce uncertainty or instability into the world being modelled and measured. And being a problem, @Michael and @SophistiCat will feel correct in shrugging their shoulders and replying that coarse-graining can ignore the fact - for all practical purposes. The problem might well be fundamental and ontic. But also, it seems containable. We just have to find ways to minimise the observer effect and get on with our building of machines.

    I am taking the more radical position of saying both biology and physics are fundamentally semiotic. The uncertainty and instability is the ontic feature which makes informational regulation even a material possibility. It is not a flaw to be contained by some clever trick like coarse graining. It is the resource that makes anything materially organised even possible.

    Self-reference doesn't intrude into our attempts to measure nature. Nature simply is self-referential at root. In quantum terms, it is contextual, entangled, holistic. And from there, informational constraints - as supplied for instance by a cooling/expanding vacuum - can start to fragment this deep connectedness into an atomism of discrete objects. A classical world of medium-sized dry goods.

    The observer effect falls out of the picture in emergent fashion. Although human observers can restore that fundamental quantum holism by experimental manipulation, recreating the world as it is when extremely hot/small.
  • Is it always better to be clear?
    As for number 3, I don't know if it's even possible to be completely unambiguous. That being said, I see no reason to strive to be as least ambiguous as possible.Purple Pond

    Without doubt clarity is basic. Unless you view philosophy as some branch of light entertainment, then it is about critical thinking applied to life and the world. And critical thinking is about producing counterfactually-clear statements. The ideal is to offer the choice: it is this, or it is that.

    The world in itself is rife with ambiguity and possibility. That is what inquiry deals with foundationally. We don't need to add to the confusion. It is the already freely available resource.

    Critical thinking is then the skill of parting the seas in the most economical fashion. You want to ask the question that delivers the most information. And that then leads to the familiar bivalent pattern of logic. We seek the option that can be true because it could have been false, yes because it could have been no, present because it could have been absent.
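
    Shannon's measure makes that precise: a yes/no question pays off most when the answer could genuinely go either way. A minimal Python sketch (the 0.9 case is just an arbitrary lopsided example):

        from math import log2

        def bits(p_yes):
            # Expected information from a yes/no question answered "yes" with probability p_yes
            return -(p_yes * log2(p_yes) + (1 - p_yes) * log2(1 - p_yes))

        print(bits(0.5))   # 1.00 bit - the maximally informative, counterfactually balanced question
        print(bits(0.9))   # ~0.47 bit - a question whose answer was nearly a foregone conclusion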

    So sure, reality itself might not be black and white. It may actually be vague or ambiguous as a fact. But philosophical inquiry still seeks to impose the counterfactuality which will deliver the most information about what might be the case.

    Of course, there really is a lot of modern "philosophy" that wants to be some kind of light entertainment or poetry instead. It is characterised by its love of paradox - the apparent confounding of crisp counterfactual thought. If you can't top the clarity of the thinkers that came before you, then spinning pretentious confusion can seem like an excellent career alternative. Eventually whole academic schools of this kind will emerge.

    So clarity is an algorithm we apply to messy existence. It is a procedure designed to maximise the information we have about what is the case. The entirety of modern civilisation has been built on the trick of counterfactual rigour.

    And yes, the deepest thinkers realise that binary logic can go too far. Reality may actually be vague or indeterministic in some degree. That understanding then becomes part of the logical clarity if they know what they are doing (I'm thinking CS Peirce here).

    But then the very existence of a model of clarity, an exemplar of critical thought, brings with it the fertile ground for mischief making and cultural vandalism. If you can't beat the clarity of what has already been said, then there is the academic plan B of celebrating confusion at enormous wordy length instead.
  • Elon Musk on the Simulation Hypothesis
    I am not sure what that business with instability is about,SophistiCat

    I explained in this post how biology - life and mind - is founded on the regulation of instability.

    Biology depends on molecules that are always on the verge of falling apart (and equally, just as fast reforming). And so the hardware of life is the precise opposite of the hardware suitable for computing. Life needs a fundamental instability as that then gives its "information" something to do - ie: create the organismic-level stability.

    So from the get-go - down at the quasi-classical nanoscale of organic chemistry - semiosis is giving the biophysics just enough of a nudge to keep the metabolic "machinery" rebuilding itself. Proteins and other constituents are falling together slightly more than they are falling apart, and so the fundamental plasticity is being statistically regulated to produce a long-running, self-repairing, stable organism.

    The half-life of a microtubule - a basic structural element of the cell - is about 10 minutes. So a large proportion of what was your body (and brain) this morning will have fallen apart and rebuilt itself by the time this evening comes around.

    This is molecular turn-over. All the atoms that make you you are constantly being churned. So whatever you might remember from your childhood would have to be written into neural connections that have got washed away and rebuilt - more or less accurately, you hope - trillions of times.
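
    The arithmetic of that churn is stark (a rough Python sketch, assuming simple exponential turnover at the quoted ten-minute half-life):

        half_life_min = 10
        hours_elapsed = 12
        fraction_surviving = 0.5 ** (hours_elapsed * 60 / half_life_min)
        print(fraction_surviving)   # ~2e-22 - essentially none of this morning's microtubules last until evening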

    The issue is then whether this is a bug or a feature. Machinery-minded folk would see molecular turnover as some kind of basic problem that biology must overcome with Herculean effort. If human scientists are going to reverse-engineer intelligence, the first thing they would want to do is start with some decently stable substrate. They wouldn't look for a material that just constantly falls apart, even if it is also just as constantly reforming as part of some soupy chemical equilibrium.

    But this is just projecting our machine prejudices onto the reality of living processes. We are learning better now. It is only because of soupy criticality that the very possibility of informational regulation could be a thing. Instability of the most extreme bifurcating kind brings with it the logical possibility of its control. Instability produces uncontrolled switching - a microtubule unzipping into its parts, and also rezipping, quite spontaneously. All you need then is some kind of memory mechanism, some kind of regulatory trick, which can tip the soupy mix in a certain direction and keep it rebuilding just a little faster than it breaks down.

    So this is a fundamental metaphysical fact about reality. If you have radical instability, that brings with it the very possibility of stabilising regulation. Chaos already plants the seeds of its own ordering.

    An engineer wants solid foundations. Machines need stable parts that won't immediately fall apart. But life and mind want the opposite. And so right there you have a causal and metaphysical-level difference that artificial mind or artificial life has to deal with.

    Silicon switches are just not the right stuff as, by design, there is minimal chance of them entropically falling apart, and even less chance that they will negentropically put themselves back together.

    Yet every part of every cell and neuron in your body is doing this all day long. And knowing how to do this is fundamental to the whole business of existing as an informational organism swimming in a flow of environmental entropy.

    Life and mind can organise the material world, bend its erosive tendencies to their own long-term desires. This is the basic scientific definition of life and mind as phenomena. And you can see how machine intelligence or simulated realities are just not even playing the game. The computer scientists - playing to the gullible public - haven't got a clue of how far off they are.

    Well, the idea behind the simulation hypothesis is that (a) there is a general, all-encompassing order of things, (b) any orderly system can be simulated on a computer, and possibly (c) the way to do it is to simulate it at its most fundamental level, the "theory of everything" - then everything else, from atoms to trade wars, will automatically fall into place. All of these premises can be challenged, but not simply by pointing out the obvious: that computers only follow instructions.SophistiCat

    You see here how readily you recapitulate the "everything is really a machine" meme. And yet quantum physics shows that even material reality itself is about the regulation of instability.

    Atomism is dead now. Classicality is emergent from the fundamental indeterminism of the quantum realm. Stability is conjured up statistically, thermodynamically, from a basic instability of the parts.

    The simulation hypothesis takes the world to be stably classical at some eventual level. There is some fixed world of atomistic facts that is the ground. And then the only problem to deal with is coarse-graining. If we are modelling the reality, how much information can we afford to shed or average over without losing any essential data?

    When it comes to fluid turbulence, we know that it has a lot of non-linear behaviour. Coarse-graining can miss the fine detail that would have told us the process was on some other trajectory. But the presumption is that there is always finer detail until eventually you could arrive at a grain where the reality is completely deterministic. So that then makes coarse graining an epistemic issue, not ontic. You can choose to live with a degree of imprecision in the simulation as close is good enough for all practical purposes.

    That mindset then lets you coarse-grain simulate anything. You want to coarse-grain a model of consciousness? Sure, fine. The results might look rather pixellated, not that hi res, as a first go. But in principle, we can capture the essential dynamics. If we need to, we can go back in and approach the reality with arbitrary precision .... because there is a classically definite reality at the base of everything to be approached in this modelling fashion.

    For engineers, this mindset is appropriate. Their job is to build machines. And part of their training is to get some real world feel for how the metaphysics of coarse-graining can turn around and bite them on the bum.

    But if we are talking about bigger philosophical issues, then we have to drop the belief that reality is actually some kind of physical machine. Its causality is irreducibly more complex than that. Both biology and physics tell us that now.
  • Elon Musk on the Simulation Hypothesis
    ...according to mainstream science, our cosmic neighborhood is not dominated by BBs.SophistiCat

    According to mainstream science, we ain’t a simulation either. We were talking about Musk’s claim which involves “enormous probabilistic resources”. The BB argument then becomes one way that the claim blows itself up. If it is credible that some "boring substrate" generates simulated realities, then the simulation we are most likely to inhabit is the one that is the most probable in requiring the least of this probabilistic resource.

    The fact that this then leads to the BB answer - that the simulation is of a single mind's single frozen moment - shows how the whole simulation hypothesis implodes under its own weight.

    I'm just pointing out the consequences of Musk's particular line of argument. He doesn't wind up with the kind of Matrix style simulation of many fake minds sharing some fake world in a "realistic way" that he wants.

    And even if the "substrate" of that desired outcome is some super-intelligent race of alien mad scientists building a simulation in a lab, then I'd still like to know how the actual architecture of such a simulation would work.

    As I said, one option essentially recapitulates idealism, the other substance dualism. And both outcomes ought to be taken as a basic failure of the metaphysics. We can learn something from that about how muddle-headed folk are about "computational simulation" in general.
  • Elon Musk on the Simulation Hypothesis
    Consciousness happens because of ordinary (even if complex) physical processes. If these processes can happen naturally then a sufficiently advanced civilization should be able to make them happen artificially.Michael

    Sure. Nature produced bacteria, bunny rabbits, the human brain. This stuff just developed and evolved without much fuss at all.

    Therefore - in principle - it is not impossible that, if we wait long enough and let biology do its thing, the Spinning Jenny, the Ford Model T and the Apple iPhone will also just materialise out of the primordial ooze.

    It's a rational extrapolation. Sufficiently severe evolutionary pressure should result in every possible instance of a machine. It's just good old physics in action. Nothing to prevent it happening.
  • Elon Musk on the Simulation Hypothesis
    But if you are only concerned about one mind, then you can maybe bracket off/coarse-grain some of the world that you would otherwise have to simulate.SophistiCat

    Sure. Just simulating one mind in its own solipsistic world of experience is the easy thing to imagine. I was asking about the architecture of a simulation in which many minds are sharing a world. How could that work?

    And also the Simulation Hypothesis generally asks us to believe the simplest compatible story. So once we start going down the solipsistic route, then a Boltzmann brain is the logical outcome. Why would you have to simulate an actual ongoing reality for this poor critter when you could just as easily fake every memory and just have it exist frozen in one split instant of "awareness"?

    Remember Musk's particular scenario. We are in a simulation that spontaneously arises from some kind of "boring" computational multiverse substrate. So simulating one frozen moment is infinitely more probable than simulating a whole lifetime of consciousness.

    I'm just pointing out that half-baked philosophy ain't good enough here. If we are going to get crazy, we have to go the whole hog.
  • Elon Musk on the Simulation Hypothesis
    Says the man who keeps saying that it's impossible in principle for a machine to be conscious?Michael

    What I keep pointing out is the in principle difference that biology depends on material instability while computation depends on material stability. So yes, I fill in the gaps of my arguments.

    See .... https://thephilosophyforum.com/discussion/comment/68661

    I've been talking about using biological material rather than inorganic matter so the above is irrelevant.Michael

    It can't be irrelevant if you want to jump from what computers do - hex code to throw a switch - to what biology might do.

    If you want to instead talk about "biological material", then please do so. Just don't claim biology is merely machinery without proper support. And especially not after I have spelt out the "in principle" difference between machines and organisms.
  • Elon Musk on the Simulation Hypothesis
    I suppose that if you were only simulating one mind, you could make your simulation domain smaller than if you were, say, simulating the entire population of the earth.SophistiCat

    I see the problem as being not just a difference in scale but one of kind. If you only had to simulate a single mind, then you don't even need a world for it. Essentially you are talking solipsistic idealism. A Boltzmann brain becomes your most plausible physicalist scenario.

    But how does it work if we have to simulate a whole collection of minds sharing the one world? Essentially we are recapitulating Cartesian substance dualism, just now with an informational rather than a physicalist basis.

    It should be telling that the Simulation Hypothesis so quickly takes us into the familiar thickets of the worst of sophomoric philosophy discussions.
  • Elon Musk on the Simulation Hypothesis
    We don't know how, but we know it happens in us.Michael

    If you can't say anything to bridge this explanatory gap then you can't claim anything "in principle" here. That's pretty straightforward.

    I'm not denying that we can take a biologically inspired approach to "computation". Neural network approaches already do.

    But you can't offer a Turing Machine example - hex code to operate a switch - and freely extrapolate from that. You have to show that biology is in principle doing that kind of computation.

    And as I said - and as you have ignored - we know enough about biology to see that it relies on material instability, while TMs, and machines in general, rely on material stability.

    So biology is essentially relational. It is about informational constraints on material dissipation. The overall organisation is emergent.

    While computation is essentially dualistic. The software is informationally isolated from the material hardware needed to implement it. Where biology is about an intimate sensitivity to the material conditions of its being, computing is the precise opposite - the ability to completely disregard those material conditions.

    If you are wanting to make "in principle" claims, then that basic difference is where you have to start.

    Computation is nothing more than rule-based pattern making. Relays of switches clicking off and on. And the switches themselves don't care whether they are turned on or off. The physics is all the same. As long as no one trips over the power cord, the machine will blindly make its patterns. What the software is programmed to do with the inputs it gets fed will - by design - have no impact on the life the hardware lives.
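
    A toy illustration of that indifference (Python, purely schematic): one kind of "switch" composed into a "meaningful" pattern, where the switch itself does exactly the same thing whatever we take the pattern to mean.

        def nand(a, b):
            # The switch: it has no stake in its inputs or outputs
            return 0 if (a and b) else 1

        def xor(a, b):
            # A "meaningful" pattern is just an arrangement of the same indifferent switch
            c = nand(a, b)
            return nand(nand(a, c), nand(b, c))

        print([xor(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 0]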

    Now from there, you can start to build biologically-inspired machines - like neural networks - that have some active relation with the world. There can be consequences and so the machine is starting to be like an organism.

    But the point is, the relationship is superficial, not fundamental. At a basic level, this artificial "organism" is still - in principle - founded on material stability and not material instability. You can't just wave your hands, extrapolate, and say the difference doesn't count.
  • Elon Musk on the Simulation Hypothesis
    Getting back to the OP, the interesting thing is this idea of a simulation that would somehow be all our consciousnesses, plus the world we think we share. Is anyone stopping to think what this would entail?

    What even is the hypothesis?

    Is there one fake world and then somehow a whole lot of fake minds having private thoughts, feelings and understandings of it?

    Or is there only one fake mind and that mind is the entire world as such, any others appearing in this world being merely fake furnishing?
  • Elon Musk on the Simulation Hypothesis
    Give a computer a Hex code of 000000, have it add FF, and the result is 0000FF. This is the hex code for blue, and it tells the computer to turn on the blue lamps that each make up part of a pixel.

    Only in our scenario that biological computer isn't told to turn on a blue light but to activate the parts of its "brain" that are responsible for bringing about a blue colour experience.
    Michael

    Love it. A computer can be programmed to operate a light switch. Therefore a conscious computer is possible. [Hands wave furiously.]

    So how is it that neural firing would "look blue"? How is this little trick achieved? What is it that we know "in principle" here that would warrant your extrapolation?
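
    For what it is worth, the hex arithmetic in the quoted example really is that mechanical (a quick Python check, offered only to underline the point):

        colour = 0x000000 + 0xFF
        print(f"{colour:06X}")   # 0000FF - the code conventionally wired to the blue sub-pixels
        # Nothing in the arithmetic "looks blue"; the link to blue lamps is an external convention.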
  • Elon Musk on the Simulation Hypothesis
    The problem is not that you are talking nonsense. It is that you don’t even know it’s nonsense.
  • Elon Musk on the Simulation Hypothesis
    But, suffice to say, that the sense that the domain of empirical experience is in some sense a simulation, is quite trueWayfarer

    That’s like saying the eye is like a camera. It might get the conversation started, but then you have to get serious.

    Take for instance the evidence from sensory deprivation experiments. Without a world forcing the brain into some kind of stabilising state of interpretation, then experience and thought just fall apart.

    There is no Cartesian theatre, no running simulation, that is a consciousness standing apart from the world. The idea of the mind as a stable entity, a soul stuff, is what underpins the naivety of the computationalists.

    Neurology depends on its own instability being regulated by its running interaction with a world. It becomes constrained by its environment to have an appropriate level of fixed or habitual response.

    So the simulation story is just dualism warmed over. Sensory deprivation reveals that being in a definite and organised state of mind is not about a concrete act of world simulation but an enactive state of world interpretation. The infinite garbled possibility the dreaming mind can conjure up is stabilised by whatever the available regularities of the environment happen to be.
  • Elon Musk on the Simulation Hypothesis
    Strewth. So life on earth began when a sperm met an ovum and organisms arose.
  • Elon Musk on the Simulation Hypothesis
    The article I linked to explains that biological computers can do this.Michael

    Sure. You can build a Turing machine out of anything. Even meat, or string and tin cans. So long as it is eternally stable and entropically unlimited. That is rather the point.

    Meanwhile over here in reality, a very different game is going on. I’m asking you to focus on that.

    I don't think it's controversial to think that a sufficiently advanced civilization can create biological computers that function somewhat like the human brain, complete with consciousnessMichael

    If you don't find it controversial then you might want to question how well you understand the biology of brains, and indeed the biology of biology.

    A) Machine - stable parts.

    B) Life - unstable parts.
  • Elon Musk on the Simulation Hypothesis
    So what if we considered biological computers running these simulations instead?Michael

    Again, first show that “running a simulation” is something a biocomputer could even do. Then we are still left with the basic point that a simulation is not a reality as it is divorced from any material consequences due to being founded on an artificial stability.

    Biology arises because material instability - criticality - offers a suitable foundation for the informational regulation of that instability. That is the whole deal. So using meat to imitate stable computational hardware is missing the point of what actually defines life and mind. If it is perfect for Turing computation, you have ruled out the very thing - the material instability - which life and mind exist to regulate.
  • Elon Musk on the Simulation Hypothesis
    You made some marks appear on my screen - 2, 100, 1,000,000. And so the party started.

    Numbers stand for acts of counting. Some set of marks to be scratched or instances to be recorded. The efficiency of a notation shouldn’t fool you into thinking that symbols don’t need grounding. Every act of reference is also a physical event.
  • Elon Musk on the Simulation Hypothesis
    I don't understand the relevance of your questions.Michael

    Huh? You asked me “what about biological computers?”

    Well. An example if you please.
  • Elon Musk on the Simulation Hypothesis
    Point to information that exists without a physical mark then.
  • Elon Musk on the Simulation Hypothesis
    Computation does rely on being able to produce a frictionless world. But yes. My point is that that is in the end a thermodynamic fiction.

    There is always a cost attached to every time a symbol is written, a gate is switched. The cost is simply being made the same for any such informational action. And as small as possible.
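
    The textbook floor on that cost is Landauer's bound (quoted as context; the figures are just the standard room-temperature numbers):

        from math import log

        k_B = 1.380649e-23   # J/K - Boltzmann constant
        T = 300              # K - roughly room temperature
        print(k_B * T * log(2))   # ~2.9e-21 joules: the minimum dissipation per irreversible bit operation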

    Likewise, your car engine will always eventually wear out. The hardened parts will erode with use. I like the fact that car design has reached the stage where all the parts have been strengthened to the exact degree that they will all tend to fail about the same time.

    So the mechanical is about stepping outside the usual entropic deal - the world of self organised flows like rivers, plate tectonics and solar flares - to control what is going on with rigid material form and imposed systems of informationally operated switches, gates and timing devices.

    And yet all that machinery still erodes. Friction can be minimised but never eradicated. Dissipation wins in the end.
  • Elon Musk on the Simulation Hypothesis
    Yeah. If we are talking about neural network architectures, then we are starting to talk about legitimate attempts to follow the path of biological realism. And I doubt you would find neural networkers spending a lot of time worrying about whether we are figments of a matrix simulation.
  • Elon Musk on the Simulation Hypothesis
    If life and mind are defined by information that has material consequences, then be suspicious of all claims that talk about plays of information without material consequence.

    A pattern running on a computer is just syntax. Symbol processing. It still takes an actual biological being to read the pattern as having meaning and thus wanting to act on it in some way. The material consequences are what give a modelling relation with the world any semantics.

    If you think that life and mind are just essentially machines, then you will be forever insensitive to the chasm that in fact exists between the biological and the mechanical. Life is based on the physics of dissipative structures. And the mechanical is defined by its insensitivity to entropic reality.

    The parts constituting a machine are essentially dead in being fixed and stable. The parts constituting an organism are essentially unstable - poised to fall. And that is how regulating information can actually insert itself into the material equation and determine which way the instability will fall.
  • Concepts and Apparatus
    Standard semiotics. Rosen’s modelling relation in a nutshell. The concept is the theory, the generality, and what it has to produce is the act of measurement, the particular example, the sign that the concept is true of the world about which it makes a modelling claim.