Comments

  • There Are No Identities In Nature
    Wilden too puts the whole issue in terms of difference, although he doesn't employ the vocabulary of intensive/extensive: "There are thus two kinds of difference involved, and the distinction between them is essential. Analog differences are differences of magnitude, frequency, distribution, pattern, organization, and the like. Digital differences are those such as can be coded into distinctions and oppositions, and for this, there must be discrete elements with well-defined boundaries."StreetlightX

    I can agree with Wilden. It is when you start pulling in Deleuze and "aesthetics" and other such baggage that it loses analytic clarity and becomes a romantic melange of allusions.

    So accepting Wilden as a valid starting point, I will focus on the further things that could be said from a (pan)semiotic point of view.

    The key thing is that reality itself is digital in being marked. To talk about analog difference is already to talk about a reality that is constrained in particular material ways. If the weather is a pattern of magnitudes - the pressure high here, low there - then already the world is divided against itself, expressing a proto-negation.

    So a pure analog state would have to be a completely bland state, one characterised by its intensive or bulk properties. It would be like the early state of the Universe when all that existed was a thermalising bath of radiation - a featureless state with the same pressure and energy density and rate of action everywhere. The Big Bang was the least possible marked state of being - a spreading ocean with no discernible texture. The only change was the change of becoming steadily larger and cooler - a change that could only be appreciated if one was standing god-like outside everything that was happening.

    Yet even the radiation-dominated era of the early Big Bang had some digital structure. Action was confined to three spatial dimensions. It was also confined to a single temporal one in the sense that all action had to flow entropically downhill - to flow uphill would be neg-entropic!

    So contra your position, existence has to start with the digitisation of the analog - a primal symmetry-breaking. Or as I say, to make proper sense of this, we have to introduce the further foundational distinction of the vague~crisp. We have to reframe your LEM-based description in fully generic dichotomy-based logic.

    So now we get to a Peircean, Gestalt or Laws of Form level of thinking where both event and context, figure and ground, particular and general, atom and void, are produced together, mutually, when a symmetry is foundationally broken. In the beginning was a vagueness, an apeiron, a quantum roil, a firstness of pure qualitative fluctuation. Then this state of unformed potential was broken, marked by its most primal distinction. In Big Bang theory, we have a reciprocal relationship between an extensive container and its intensive contents - an expanding spacetime and a cooling ocean of radiation.

    This is the really difficult bit to get. But it means that the reductionist instinct to make one aspect of being prior or more foundational than its "other" is always going to mislead metaphysical thought. Does the digital precede the analog, or the analog precede the digital? The whole point of an organic and pansemiotic conception of this kind of question is to focus on how each brings its other into concrete being. To be able to make a mark is to reveal the possibility that there is a ground to accept that mark. So before anything happens - before there is any kind of difference, analog or digital - there is only the vagueness of a potential. And then when something happens, the digital and the analog would be what co-arise as the two aspects of being which such a symmetry breaking reveals.

    Now we start to get into the difficulties with your view. As I say, the purely analog - if it is to make dialectical sense - would have to be the least digitally marked kind of state that still has definite material being.

    So it would be like the earliest state of the Universe - a featureless and homogeneous realm of the cooling~expanding. All distinctions - all negations or differences that make a difference (to someone) - would be pushed to the margins of this generic state. It would only be a god-like observer, free to take a position outside the totality of this material existence, who could make remarks like "This Universe is colder/larger than it just was, and it is cooling/expanding at rate x rather than rate y or z." Or heading the other way in scale, remark "This Universe is featureless, except when we get down to the quantum grain, we can see it still has a residual fluctuating freedom that again is an active negation of its generalised state of constraint."

    But then of course the actual Big Bang went through its further symmetry-breaking phase transitions and matter condensed out of the radiation bath. This - in dichotomistic fashion - cleared the vacuum of energy in a way that made it the other of "the void". So now we still seem to be in an analog realm, but now one with a lot more possibilities for local magnitude differences. Mass is gravitationally clumping. A new level of action is starting to play out.

    The radiation era was already digitally-broken - it had generic counterfactuality in that it only had three spatial dimensions and a single entropic gradient, etc. But now the matter-dominated era was starting to get really broken. There existed mass that could have any contingent rate of motion between the limits of rest and lightspeed. Greater digital constraint - the marking of the extremes of speed as two crisply opposed limits - had just bred new analog variety in the fact that mass could travel at any rate on the spectrum of rates thus revealed.

    So you should be getting the picture. If we actually check in with the physics, we can see how analog~digital is a drama being played out in which both emerge together out of a primal symmetry-breaking. And then both evolve together as symmetry-breakings become the ground - the vaguer preconditions - for further symmetry-breakings which render the presence of the analog and the digital ever-crisper. Both aspects of nature are being strengthened because that is how the mutuality of dichotomous development works. The blacker the pencil, the whiter the paper it marks.

    Of course analog and digital were terms created for the late machine age and so are being dropped into a world with a very long history of becoming crisply developed in its dualistic fashion. If we look around the world of sensible objects, we see it sharply divided in terms of the continuous and the discrete, the part and the whole, the form and the matter, the flux and the stasis, the chance and the necessity, etc. That is physically how it is for us, being creatures that necessarily depend on the Universe having reached its high point of material complexity - sorted into stuff like heavy element planets bathed in the steady energy flux from a star fixed at an optimal distance.

    So what Wilden describes is the epistemic cut that underlies the further adventure that is life and mind in the cosmos. He is no longer talking about the material world in and of itself - the topic of pansemiosis. He is not talking about analog and digital in that general physicalist sense. He is now talking about symbolic representations of that materiality. And also perhaps, the evolution of that symbolism - which begins in the analogic simplicity of the iconic and indexical, and terminates in the digital crispness of the properly symbolic.

    If we are to talk about analog or iconic representation as opposed to being, then we are talking about machines like old-fashioned wax cylinders where a needle - driven by making noises into a tube - produces a wriggling groove. And then when the energy relation is reversed - the cylinder is cranked to wiggle the needle and cause the tube to utter noise - we get a playback of a trace.

    Crank the cylinder too fast or too slow, and we can have proto-negation - a funny playback that is a difference in kind in being a fictional representation rather than a realistic one. But generally, the analog representation is un-digital in being still so closely connected - as close as it can possibly be - to reversible physics.

    There is a symmetry-breaking - a one way expenditure of energy to make the recording and reduce dynamical reality (a sound of a band of minstrels singing down the tube) to an enduring negentropic memory trace. But it is a symmetrical symmetry-breaking, a shallow one, not a deep and asymmetry-producing symmetry-breaking (like a dichotomous symmetry-breaking). As I say, just turn things around so the groove drives the needle rather than the needle carving the groove, and you get back the memory you created as a dynamical performance of sound. The minstrels sing once more.

    So analog representation, or analog signal processing and analog computation, arises as the most primitive, least broken, form of memory-making. The triadic semiotic trick is all about a living/mindful system being able to internalise a view of the world - code for a set of world-regulating constraints using the machinery of a symbolic memory. And analog representation is the simplest version of that new trick. It sticks a machine - like a wax cylinder recorder - out into the world. And then exploits the physical asymmetry of a rotating cylinder and a dragging sharp point to construct a trace - a linear mark encoding a sequence of energy values.

    Just being able to switch the direction of the energy flow - from the needle to the cylinder versus from the cylinder to the needle - is all the digitality needed. On/off, forward/backward, record/playback. Semiosis at the lowest level boils down to the physical logic of the binary switch.
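    The wax-cylinder picture can be put in a few lines of code. This is my own toy sketch (the class and the sample values are invented for illustration): the groove is a purely analog trace - an ordered sequence of amplitudes - and the only digital element is the binary choice of which way the energy flows.

```python
# Toy model of the wax cylinder: the trace itself is "analog" (a linear
# sequence of energy values); the only digitality is the binary switch
# between the two directions of energy flow - record versus playback.

class WaxCylinder:
    def __init__(self):
        self.groove = []  # the analog trace carved by the needle

    def record(self, sound):
        """Energy flows from needle to cylinder: carve the groove."""
        self.groove = list(sound)

    def playback(self):
        """Reverse the energy flow: the groove now drives the needle."""
        return list(self.groove)

cylinder = WaxCylinder()
minstrels = [0.0, 0.4, 0.9, 0.3, -0.2]  # arbitrary sample amplitudes
cylinder.record(minstrels)

# Flip the switch and the memory replays as dynamics:
assert cylinder.playback() == minstrels  # the minstrels sing once more
```

The point of the sketch is how little digital machinery is involved: one binary distinction (which method you call) suffices to turn a reversible physical trace into a primitive act of representation.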

    So the point is that even analog devices are digital from the get-go. What we mean by analog in this context is that they cross the semiotic Rubicon by the least possible distance. They are devices that can do "representation", but of a kind so thin or materially direct that we wouldn't call it properly symbolic, just basically iconic, or at most, indexical.

    I hope you can see how - in ignoring the fine print of a definition of analog - you have produced a great confusion in so loosely applying the analog~digital distinction to the world in general, the ontic thing-in-itself, rather than honouring its technical epistemic meaning as a way to clarify our thinking about rate independent information - the semiotic mechanism by which life and mind forms memories or representations of the world.
  • There Are No Identities In Nature
    As I commented on elsewhere in reply to Pierre, the analog is not some kind of unknowable 'thing-in-itself' which is simply 'vague'; the analog has qualities which are knowable, but simply in a different mode than that of the digital.StreetlightX

    I didn't say the analog equates to the vague, so your reply is mostly off the point.

    I said to call the thing-in-itself anything is to take a theoretical stance. And so it is the epistemic (not ontic) vagueness that we aim to pierce here. And the way we pierce it is by forming up some robust dichotomy as our best guess as to what could be the case. Doing this by employing a dichotomy ensures that whatever is the case in regard to the thing-in-itself, it must logically lie somewhere within the limits we have thus rigorously defined.

    And so one such dichotomous inquiry might be to ask: is the thing-in-itself continuous or discrete (or, in your less crisp lingo, analog or digital)?

    So again, in explaining the epistemic cut to MU, I was talking epistemology rather than ontology (the clue was in the "epistemic cut").

    Bateson himself speaks of how analog communication works....StreetlightX

    Yes. So iconic or indexical rather than symbolic. But as I said, I don't think you are working with a well defined dichotomy in talking about analog vs digital. They are not a reciprocal pairing in the way a proper dichotomy like discrete and continuous are. That is why you want to call them contrasting modes or levels of communication or representation. There is some fudging going on there that makes for weak metaphysics.

    Of course you could always pause to examine this point, tidy it up.

    At it's base, this is what 'aesthetic' means: relating to space and time, as with Kant's 'transcendental aesthetic'.StreetlightX

    Whoa! Is that really what you have been meaning by "aesthetic"? Forgive me for thinking you were using it in the more usual sense... viz:

    Aesthetics (/ɛsˈθɛtɪks/; also spelled æsthetics and esthetics also known in Greek as Αισθητική, or "Aisthētiké") is a branch of philosophy dealing with the nature of art, beauty, and taste, with the creation and appreciation of beauty.[1][2] It is more scientifically defined as the study of sensory or sensori-emotional values, sometimes called judgements of sentiment and taste.[3] More broadly, scholars in the field define aesthetics as "critical reflection on art, culture and nature."

    Gilles Deleuze is the philosopher who has perhaps attended to the specificity of analog differences with the most care, referring to them as differences of 'intensity' as opposed to digital differences of 'extensity', noting how the former necessarily underlie the latter:StreetlightX

    Yeah. That passage reads like gibberish to me so you might have to put it into your own words.

    I understand what intensive and extensive properties mean in the standard physics context. I completely don't get your attempts to argue that they somehow reflect an analog~digital distinction.

    How are energy or volume "digital" and not physically continuous in their extensibility?

    How are bulk properties like melting point and density "analog" when they have a value that doesn't change in continuous fashion?

    And how do intensive properties underlie extensive properties when instead an intensive property is formed by the ratio of two extensive properties (as in density being a ratio of mass and volume)? Is a ratio more basic than that which composes it?
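    The ratio point can be made concrete with toy numbers (my own illustration, not from the thread): scale up a sample and the extensive properties scale with it, while the intensive property - being a ratio of two extensives - stays fixed.

```python
# Toy illustration: an intensive property (density) as the ratio of
# two extensive properties (mass and volume).
mass, volume = 10.0, 2.0      # extensive: they scale with the sample
density = mass / volume       # intensive: a ratio of two extensives

# Double the sample: the extensive properties double...
mass2, volume2 = 2 * mass, 2 * volume
density2 = mass2 / volume2

assert (mass2, volume2) == (20.0, 4.0)
# ...but the intensive ratio is unchanged.
assert density2 == density == 5.0
```

Which is the textbook physics usage being defended here: the intensive value is derivative of, not prior to, the extensive quantities that compose it.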

    This is some baffling shit here.

    I can see how you/Deleuze might be driving at a substantialist ontology - one that takes existence to be rooted in the definiteness of material being. And so the inherent properties of substance would seem more fundamental than the relational ones.

    But that is quite a different kind of ontology to a triadic process one where - as in hylomorphism - formal constraints shape up or individuate material potential so as to produce the middle-ground actuality we know as substantial being.

    This field is intensive, which is to say it implies a distribution in depth of differences in intensity ...StreetlightX

    Again, this is unscientific horseshit. By its very definition, an intensive property is constant throughout the substance in which it is said to inhere. It can't vary in intensity without some further reason to make it so - what I would call a further informational constraint, and which you would thus have to call "discrete/digital knowledge" in the position you are advancing.

    So again, to turn back to our eternal debate, any metaphysics based on modeling relations - itself premised on discrete, digital knowledge - is derivative of a more primal aesthetic ground out of which it is born.StreetlightX

    Again this seems a weird definition of aesthetic. Even if we go now with this being a reference to necessary Kantian intuitions about Euclidean space, I think most agree that Kant screwed this bit up. And it is hardly primal, or non-conceptual/non-digital, to project onto space the idea that it is flat and infinite in a dimensioned, countable, Euclidean-maths way.

    Psychology shows that we do dichotomise spatial relations in a fairly primal and inductively-learnt fashion - a posteriori knowledge. We learn that everything we see is generally large when it is close at hand and small when it is far away .... if it is the kind of thing with a normally constant size. And spatial distance in turn relates to time and energy. If it looks close, we can probably get to it quite soon with not too much effort.

    So an embodied sense of being in the world is built up from these kinds of exploratory learnings. They are the dichotomies of experience rather than the antinomies of pure reason. :)

    So I agree with your general urge to take an enactive or embodied approach to epistemology here. Biosemiosis is indeed foundational to linguistic or mathematical semiosis. And a lot of philosophy does go in the other direction in presuming a physics-free disembodied rationality. That is why computers seem so ... deep ... to so many. They are disembodied rationality, pure syntax, brain in a vat digitalism, personified.

    And I get the general thrust of what you mean about the digital distinction. Biologists are embracing Peircean semiotics because it gets at the basis of how - in Pattee's words - rate independent information (a digital code/memory) can constrain rate dependent dynamics (the Newtonian realm of "analog" or continuously state-determined material processes).
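    Pattee's distinction can be caricatured in a few lines. In this sketch (a toy model of my own devising - the setpoint, variable names and numbers are all invented), the stored value is rate-independent: it does not change however fast or slow the dynamics run, yet it constrains where the rate-dependent process settles.

```python
# Toy model: rate-independent information (a stored "symbolic" setpoint)
# constraining rate-dependent dynamics (a continuous relaxation process).
setpoint = 37.0   # the memory: fixed, rate-independent information
x = 20.0          # the dynamical state, evolving step by step
rate = 0.1        # how fast the dynamics run (rate-dependent part)

for _ in range(200):
    # The continuous process relaxes toward whatever the memory says:
    x += rate * (setpoint - x)

# The dynamics settle where the stored constraint dictates.
assert abs(x - setpoint) < 1e-6
```

Change `rate` and the trajectory changes, but the endpoint does not - which is the cartoon version of a code constraining a dynamics rather than participating in it.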

    So these are the important points. Dissipative structure can be regulated by a machinery of memory. And this is how bodies are formed, individuals are individuated, autonomy arises.

    But Deleuze seems mostly to be mangled Prigogine. And Prigogine, while a genius, was also working at the level of rate dependent dynamics. He wasn't about the larger semiotic story of the epistemic cut and rate independent information. So to make Prigogine your departure point is - as with autopoiesis or dynamical systems theory - to strike out with only half the whole story.
  • There Are No Identities In Nature
    We cannot assume a proper A and not-A relation between the analog and the digitalMetaphysician Undercover

    I agree that analog~digital is probably not a proper dichotomy. They are terms that arose early on in the development of signalling technology. And so it is a little blurry whether analog - in being iconic, a direct representation of its material source - is the opposite of symbolic, or merely proto-symbolic.

    Retrospectively, we could tidy this up and find a way to define digital as 1/analog, and analog as 1/digital. But really, that is a reason I would rarely talk about analog and digital as a crucial metaphysical distinction. Discrete and continuous is simpler to understand as a rigorous dichotomy. Likewise matter and symbol. But analog~digital is a little ambiguous in comparison.

    I do not claim that we need to start in certainty, this is more like what you imply. You imply that if a thing is different from A you can establish the logical certainty of not-A of that thing...Metaphysician Undercover

    Correct. I say it is essential to by-pass uncertainty and begin with a confident positive assertion - just state an axiom or premise which has the logical form of the LEM. But that positive start is what you then seek to test. Does the guess work out in fact?

    So this is the mistake, these two, discrete and continuous, are not properly opposed and therefore are not mutually exclusive, as you imply. We have discrete colours, red, yellow, green, blue, within a continuous spectrumMetaphysician Undercover

    Given colour experience is the most unreal of mental constructions, this example is already off to the worst possible start.

    The world is not coloured red, yellow, green or blue, nor any mix of these primary hues. That much we know from basic psychophysics.
  • There Are No Identities In Nature
    Yes, I guess that is the big question. But even if the real turned out to be (that is if we could know without question that it definitely was) discrete, is it reasonable to think that discreteness could consist in absolutely precise boundaries between the fundamental units? That would seem to evoke Leibniz' Monadology.John

    Well quantum theory says reality is fundamentally uncertain - so fundamentally vague. The discrete and the continuous would then be emergent in being the crisply complementary limits on that basic indeterminism. So it is not a question of whether reality is particle-like or wave-like. Instead those are the bounding alternatives. And which you see becomes a point of view. The observer conjures up the wave or the particle, depending on the type of measurement he chooses.

    So if this is the actual ontology of the world, then it is only reasonable that it is reflected in our ideas about logic too. A deep logic is going to go beyond emergent features like continuous and discrete to connect with the indeterministic or vague.

    Perhaps all three have their different places and functions in the 'grand scheme'? I'm guessing though, that you see the other two as being subsumed and augmented by pragmatic metaphysics?John

    The grand scheme of pragmatism is triadic. So logic has three levels - firstness, secondness and thirdness. Or to talk about it more psychologically, the first thing of pure monistic quality, the dyadic thing of a reaction or relation, and the third thing of mediation or habit - a hierarchically structured relation in which a memory becomes the context generally shaping events.

    Now the familiar model of logic - as encoded in the laws of thought - is all about secondness or dyadic relations between particular things and events. It is a logic of the particular, in short. It presumes the world already exists as a crisp state of affairs, a set of individuated facts. And it takes a nominalist view on abstracta or laws or any other kinds of transcendent regularity.

    It is this logic of the particular that AP-types instinctively seize on to do any metaphysics. You see that in TGW and his furrowed brow when modal logic gets challenged. The only logic that computes is the stuff which ordinary logic courses spend all their time teaching - the logic that is splendidly mechanical and a valued tool in a society that values the making of machines.

    Then we have the larger logic of Pragmatism which comes out of the long tradition of organicism and holism. This now adds in a logic of vagueness and a logic of dialectics or symmetry breaking - the firstness and thirdness in Peirce's scheme. (He also distinguished these two categories as the complementary principles of "tychism" - or absolute chance - and "synechism", or generalised continuity.)

    So now we have a logic founded in vagueness or indeterminism. Nature creatively sports possibilities. Already the principle of sufficient reason - an axiom of ordinary logic - is denied. Fluctuations can happen without limit.

    But then that unbounded and chaotic firstness contains within it the seeds of its own self-regulation. Because while indeed "everything can happen", everything that is then contradictory is going to cancel itself out. So just in trying to be completely chaotic, already firstness is on the way to being self-limiting. And anyone who knows quantum field theory will recognise Feynman's path integral or sum-over-histories logic at work here. This isn't some bit of wild-eyed metaphysics. It is exactly how physics has come to make sense of the world in the past 50 years.

    Then we go to the other thing of the dialectic or the dichotomy. This is now a logic of generality. This is how we reason to extract the plausible limits on existence itself. So as with this thread, as we abstract away the particulars, that leaves always the duality of thesis and antithesis - two possible extremum principles, both of which seem equally "true".

    So the LEM is for reasoning about particulars. An individuated thing or event has to be logically one thing or another. If it is A, then it is not not-A, and vice versa. Negation seems fundamental in this context. You have to reduce reality to descriptive binaries - and then hold one of the two options true, the other false. And as I say, as a model of secondness, a logic of particulars, it works really well. It makes for splendid machines. And even societies that think and act like machines.

    But then the dichotomy is the basis for a logic of generality. Now - following its rules requiring a separation of vague possibility into crisp actuality via a dichotomising process of mutual exclusion/collective exhaustion - we always will arrive at complementary poles on being. We have two alternatives - and both must be "true" in the sense of being ultimate bounds on possibility.

    You can head towards the two poles of "the discrete" and the "continuous", but you could never go past them - as how can the discrete be more discrete than the discrete? And you never really leave either behind either as the only way to know you are headed towards discreteness is because it is measurable - plainly visible - that you are headed away still from continuity. And vice versa. So formally, mathematically, the dichotomy encodes the asymmetry of a complete symmetry-breaking. It describes a reciprocal or inverse relation where the way to make one end bigger (or truer, more dominant, more real, more fundamental) is to make the other end smaller.

    We see this in familiar things like infinities and their reciprocal, the infinitesimal. What is the number line except an infinity of infinitesimals? That is why a number line can be both continuous and discrete at the same time - unlimitedly countable. It encodes an uncertainty relation at its base. Neither the continuous or the discrete are fundamental, merely emergent. It is the idea of the infinitesimal difference that reciprocally allows the construction of the unboundedly continuous (when it comes to counting). The infinitesimal = 1/infinity, and vice versa.
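    The reciprocal relation can be checked mechanically with a finite stand-in for the argument (my own illustration, using exact rational arithmetic): let the step size play the infinitesimal and the step count play the infinity - the smaller the one, the larger the other, with their product always exactly 1.

```python
from fractions import Fraction

# Finite stand-in for "the number line is an infinity of infinitesimals":
# step = 1/n plays the infinitesimal, n plays the (growing) infinity.
for n in (10, 1_000, 1_000_000):
    step = Fraction(1, n)   # the ever-smaller difference
    count = 1 / step        # the ever-larger count of steps covering [0, 1]
    assert count == n       # step = 1/count and count = 1/step
    assert step * count == 1  # the reciprocal relation holds exactly
```

Shrinking one end of the dichotomy just is growing the other - the inverse relation the text describes, here in countable miniature.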

    So metaphysics got going when it discovered this logic of generality or dialectical reasoning. Ancient Greece spilled out a whole set of logical dichotomies that underpin pretty much all of the science and thought that has happened ever since.

    Now PoMo - showing the Hegelian roots of its Marxist leanings - has flirted a lot with this dialectical reasoning. So at least it knows about it. But mostly it uses dialectics to generate a play of paradox. It points out that two opposite things always seem true about nature. However instead of saying, well yes of course, and that is what leads on to the Peircean thirdness of habit or hierarchical organisation, it treats that fact as some source of deep ambivalence. PoMo is - politically - anti-hierarchical. And so it prefers to conclude that the inevitability of dichotomies is instead a sign that we should somehow return to the vague source of things - the radical uncertainty in which things would be again freest.

    It might sound like it is a good thing to return to vagueness like this. But it isn't true vagueness - PoMo just doesn't have a tradition in that regard. Instead it is just another version of AP's notion of existence as an essentially random collection of events, a state of affairs composed of already individuated being.

    OK, PoMo does have some concerns about how individuation comes about in fact. But it has no logic of vagueness to work with. Its grasp of logic on the whole is sketchy and not central to its concerns. It actually quite likes the idea of Romantic irrationality as its alternative to the patently mechanical mindset of AP.

    So that is why I say Pragmatism is the only brand of metaphysics that both does pursue logic with rigour and has a large enough model of logic to talk about the whole of existence.

    AP has tunnel vision. It only wants to apply the logic of the particular with sterile relentlessness. PoMo has ADHD. It is all over the shop as to what logic really is. Only Pragmatism (as defined by Peirce) uses a formally holistic logic that comprises three elements in interaction - a logic of vagueness, a logic of particularity, and a logic of generality.

    Though of course Peirce wasn't the end of the story. He was only a solid beginning. Our ideas about symmetry-breaking and hierarchy theory are much more mathematically developed these days. And quantum theory is rubbing our noses in the reality of indeterminism. So we can be a lot sharper about defining both vagueness and generality now.
  • There Are No Identities In Nature
    Digitization just is the introduction of precise boundaries.John

    That's right. But then there is still the issue of how they can be imposed on the world - the issue of human measurement.

    And then - where this gets radically metaphysical - there is the post-quantum issue of measurement in general.

    So through semiotics, we come to explain human understanding of the world as a triadic sign relation. And then it now seems as though the world itself is ontically pan-semiotic - a system that self-referentially measures itself into being in some concrete sense. The universe has to observe itself to "collapse the wavefunction" and have a digitally-crisp, atomistic, mechanically-determined, state of being.

    Of course we then call that classical world, that realm of continuous Newtonian dynamics, our analog reality in contrast with the digitality of our symbolic representations of that world.

    But quantum theory has re-introduced the basic metaphysical dichotomy - is existence continuous or discrete (or indeed, beyond that, indeterministic)? - at base.

    So we know how in epistemic fashion we impose intelligible order on the world in a way that makes it pragmatically measurable. But even while arriving at a fully working theory of that - as in biosemiosis - up pops the holographic bound in fundamental physics and other pansemiotic questions about how the Universe solves its own measurement problem. Where does it stand so as to resolve its own indeterminacy in globally self-referential fashion?

    Given this seems to be a debate about Analytic metaphysics vs PoMo metaphysics, as usual I would say only Pragmatic metaphysics has the proper resources to answer these kinds of questions properly. :)
  • There Are No Identities In Nature
    Accordingly, anything you might say about this analog existence, this continuum, is based only in this assumption. So in order to say anything true about the continuum, your assumption of a real existing continuum must be first validated, justified. Only by validating this assumption does the nature of the continuum become intelligible. To simply assume a continuum, and say that it is of an analog nature, and completely other than the digital, is just an assumption which is completely unjustified, until it is demonstrated why this is assumed to be the case.Metaphysician Undercover

    The semiotic relation is triadic. And this insertion of an extra step - an epistemic cut - is what gets you past this kind of problem.

    So the analog thing-in-itself is vague. It only comes to be called a continuum in crisp distinction to the digital or the discrete within the realm of symbolisation or signification. It is a logical step to insist the world must be divided into A and not-A in this fashion. And then in forming this strong, metaphysical-strength, dichotomy of possibility, it can be used as a theory by which pragmatically to measure reality. We can form the counterfactually-framed belief that reality must be either discrete or continuous, digital or analog, and then test reality against this self-describing theory.

    So the situation is the reverse of the one you paint. We don't need to begin in certainty. Instead - as Peirce and Popper argued with abductive reasoning, as Goedel, Von Neumann and others demonstrated with symbolic reflexivity in general - it can all start with a reasonable guess. We can always divide uncertainty towards two dialectically self-grounding global possibilities. The thing-in-itself must be either (in the limit) discrete or continuous. And then having constructed such a sharply dichotomised state of metaphysical certainty - a logical either/or - we have the solid ground we need to begin to measure reality against that idea of its true nature. Pragmatically, we can go on to discover how true our reasoned guess seems.

    And in Kantian fashion, we never of course grasp the thing-in-itself. That remains formally vague. But the epistemic cut now renders the thing-in-itself as a digitised system of signs. We know it via the measurements that come to stand for it within a framework of theory. And in some sense this system of signs works and so endures. It is a memory of our past that is certain enough to predict our futures.

    So the assumptions here begin in a discussion of existential possibility. If anything exists - in the spatiotemporally-extended sense that we think of as "the world" - then metaphysical logic says there are two options, two extremum principles, when it comes to how that world has definite being. Either it must be continuous or discrete, connected or divided, integrated or differentiated, relational or atomistic, morphism or structure, flux or stasis, etc, etc - all the different ways of getting at essentially the same distinction when it comes to extended being.

    And having identified two complementary limits on being - terms that are logically self-grounding because they are seen to be both mutually-exclusive and jointly-exhaustive - we can be as certain as we can be of anything that reality, the vague thing-in-itself, must fall somewhere between the two metaphysical limits thus defined. Exactly where on this now crisply-defined spectrum is what becomes the subject of measurement.

    Note that this dichotomy itself encodes both the digital and the continuous in being like a line segment - a continuous line marked by two opposing end-points.

    So anyway, the very idea of the analog~digital is based on the more primal dichotomy of the continuous~discrete - a way of talking about reality in general. But with the analog~digital, we are now drawing attention to the general semiotic matter~symbol dichotomy - the step up in material complexity represented by life and mind.

    The analog~digital dichotomy has sprung up in computation and information theory as an ontological basis for a technology - an ontology for constructing machines rather than growing organisms. And yet, in retrospective fashion, it has now become a sharper way of getting at the essence of what life and mind are about - the semiotic modelling relation that organisms have with worlds. The analogy of the code is very useful - not least because it brings so much maths with it.

    But in a sense, the analog~digital dichotomy also overshoots its mark. It leads to the idea that modeler and modeled actually are broken apart in dualistic fashion - like hardware and software. And this leads to the breakdown in understanding here - the questions about how a continuous world can be digitally marked unless it is somehow already tacitly marked in that fashion.

    So once we start to talk about the Kantian "modeler in the world", the first step is to make this essential break - this epistemic cut - of seeing it as the rise of the digital within the analog. Material events gain the power of being symbolic acts. But then we must go on to arrive at a fully triadic model of the modeling relation. And so attention returns to the middle thing which is the informal acts of measurement that a model must make to connect with its world.

    This is the focus of modern biosemioticians like Pattee, Rosen, Salthe and many others like Bateson, Wilden, Spencer-Brown, and so on. What is it that properly constitutes a measurement? What is it that defines a difference that makes a difference?
  • Dennett says philosophy today is self-indulgent and irrelevant
    Questions about individuation and flourishing have an obvious logical basis in common wouldn't you say? And flourishing is a pretty practical issue too. To know what it is would be to know how to do it.
  • The intelligibility of the world
    You think consciousness is amazing, but I think Life is also amazing, and we know that Life is a physical process. It is a physical process we are beginning to understand rather well, but if you look at the physical theory that explains it, there is no mention of "say, a force particle/wave or a matter particle". It is a theory of replicators subject to variation and selection. But look - a "physical" theory of abstract objects!tom

    Except biologists themselves would say it is physics regulated by something further - symbols or information.

    The two are of course related in some fashion. But you seem to be talking right past that issue - questions like how a molecule can be a message.
  • Dennett says philosophy today is self-indulgent and irrelevant
    Many discussions about modality are confused because they don't differentiate between modal systems, don't understand the difference between epistemic and deontic modality, and so on. Modal logic itself cannot tell us about the nature of possibility, but again, a logic is a mathematical object, not a metaphysical thesis.The Great Whatever

    Sorry but modal logic bypasses the essential issue of individuation. It treats possibility as countable variety and not indeterminate potential, from the get-go.

    This is largely due to the very nature of maths of course - being the science of the already countable. Give a man a hammer, etc.
  • Dennett says philosophy today is self-indulgent and irrelevant
    Thus someone like Gilbert Simondon, for example, will write the from the perspective of individuation, "at the level of being prior to any individuation, the law of the excluded middle and the principle of identity do not apply; these principles are only applicable to the being that has already been individuated; they define an impoverished being, separated into environment and individual. …StreetlightX

    This is important. Imagine how actually useful modern metaphysics would be if it were generally focused on the central question of individuation rather than being - dynamical development rather than static existence.
  • Thesis: Explanations Must Be "Shallow"
    Or perhaps a metaphysician/scientist can or has deduced the law of gravity from a more general law (gravity is just an example, not at all my interest here). Then this "law" is itself either deduced from yet a more general "law" or itself has "just because" status. Infinite regress or bust, in other words. Hence the "shallowness if explanation."who

    But isn't what really happened that Newton made a successful simplifying generalisation? So for a start, technically, it was an induction rather than a deduction.

    Newtonian gravity made the generalisation that instead of just some things falling towards other things, everything had exactly the same propensity to fall together. And then to go beyond that Newtonian generalisation would require an even more complete generalisation - like general relativity, and after that, quantum gravity.

    But while this seems like a regress - with no end in sight - you have to take into account that generalisation can only continue so long as there are local particulars to be mopped up in this fashion.

    Newtonian gravity mopped up all the different ways objects fall by saying all mass had the same basic attractive force, so the only local difference to mention is the amount of mass in some spot. Then GR mopped up that kind of particularity in saying mass and energy were both the same general stuff, and a simpler, more general, way to model attraction was positive spacetime curvature, which handled local differences in momentum. QG would take the mopping up to a logical conclusion in putting all the different physical forces on the one quantum field theory footing.
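    The first step of that mopping-up can be put in a few lines of toy code: under Newton's law, the gravitational force on a falling body scales with its own mass, so that mass cancels out and every body accelerates identically. The constants below are standard textbook values; the function name is just for illustration.

    ```python
    # Newton's generalisation: acceleration toward the Earth depends only on
    # the mass in "some spot" (the Earth), not on the body doing the falling.

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_earth = 5.972e24   # mass of the Earth, kg
    r = 6.371e6          # radius of the Earth, m

    def acceleration(m_falling):
        """a = F/m = (G * M * m / r^2) / m - the falling mass cancels."""
        force = G * M_earth * m_falling / r**2
        return force / m_falling

    print(acceleration(0.001))   # a feather-sized mass
    print(acceleration(1000.0))  # a boulder - same answer, roughly 9.8 m/s^2
    ```

    The point of the sketch is that `m_falling` appears in the force and then divides straight back out, which is exactly the "everything has the same propensity to fall together" generalisation.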

    So what I am saying is that the inductive explanatory regress is self-limiting. It will halt at the point where it runs out of local particulars to generalise away. That is what founds a notion of a theory of everything. It is an asymptotic approach to a limit on explanation.
  • Reality and the nature of being
    The Big Bang was apparently a singularity - a planck-length point of existence that contained anything and everything that could have ever became.Albert Keirkenhaur

    That is a common misconception - that the Big Bang starts from some particular point of spacetime and then expands to fill the whole of that spacetime.

    Instead, the Big Bang is itself the development of spacetime and so where it all "starts from" is not a location but instead a scale - the Planck scale.

    The question then is what kind of thing is the Planck scale?

    And it is at this point you have to think beyond familiar classical concepts like spacetime and energy density. Quantum theory says at the Planck scale, these two things are at unity in some way that adds up to the most radical kind of uncertainty about anything existing.
  • Reality and the nature of being
    As we know energy can not be created or destroyed, but simply re-used. So one wonders how it could possibly be that energy itself even exists at all. it's really quite the puzzle..Albert Keirkenhaur

    Does physics say the Big Bang started in a high state of energy or a maximum Planck-scale state of quantum uncertainty?

    Once the uncertainty started to sort itself into the complementary things of a fundamental action happening in a spacetime - a classical kind of realm with a thermally-cooling "contents" in a thermally-spreading "container" - then we could of course talk about one aspect of this system as being the energy, the matter, the negentropy, etc. But that is a retrospective view from the point of view of classical ontology. And can such concepts be secure in talking about the "time" when everything was maximally quantum?
  • General purpose A.I. is it here?
    In a very abstract way Chaitin shows that a very generalized evolution can still result from a computational foundation (albeit in his model it is necessary to ignore certain physical constraints).m-theory

    I listened to the podcast and it is indeed interesting but does the opposite of supporting what you appear to claim.

    On incompleteness, Chaitin stresses that it shows that deriving maths by machine-like or syntactic methods is a pure-maths myth. All axiom forming involves what Peirce terms the creative abductive leap. So syntax has to begin with semantics. It doesn't work the other way round as computationalists might hope.

    As Chaitin says, the problem for pure maths was that it held the view that all maths could be derived from some finite set of axioms. Instead, creativity says axiom production is what is infinitely open-ended in potential. So that requires the further thing of some general constraint on such troublesome fecundity. The problem - for life, as Von Neumann and Rosen and Pattee argue mathematically - is that biological systems have to be able to close their own openness. They must be able to construct the boundaries to causal entailment that the epistemic cut represents.

    As a fundamental problem for life and mind, this is not even on the usual computer science radar.

    Then Chaitin's theorem is proven in a physics-free context. He underlines that point himself, and says connecting the theorem to the real world is an entirely other matter.

    But Chaitin is trying to take a biologically realistic approach to genetic algorithms. And thus his busy beaver problem is set up in a toy universe with the equivalent of an epistemic cut. The system has a running memory state that can have point mutations. An algorithm is written to simulate the physical randomness of the real world and make this so.

    Then the outcome of the mutated programme is judged against the memory state which simulates the environment on the other side of the epistemic cut. The environment says either this particular mutant is producing the biggest number ever seen, or it's not, in which case it dies and is erased from history.

    So the mutating programs are producing number-producing programs. In Pattee's terms, they are the rate-independent information side of the equation. Then out in the environment, the numbers must be produced so they can be judged against a temporal backdrop where what might have been the most impressive number a minute ago is now instead a death sentence. So that part of the biologically realistic deal is the rate-dependent dynamics.
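    For what it's worth, the bones of that mutate-and-judge loop can be sketched in a few lines. This is only a toy stand-in, not Chaitin's actual metabiology model: the genome encoding, the interpreter, the mutation scheme and the survival rule are all invented here for illustration.

    ```python
    import random

    # Toy genome: a list of small integers "interpreted" as a number-producing
    # program. This stands in for Chaitin's self-delimiting programs.
    def run(genome):
        """The rate-dependent side: execute the genome to produce a number."""
        n = 1
        for gene in genome:
            n = n * gene + 1  # any monotone toy interpreter will do
        return n

    def mutate(genome):
        """A point mutation on the rate-independent, informational side."""
        g = genome.copy()
        g[random.randrange(len(g))] = random.randint(1, 9)
        return g

    def evolve(steps=1000, seed=0):
        random.seed(seed)
        champion = [1] * 8          # a deliberately feeble starting program
        best = run(champion)
        for _ in range(steps):
            mutant = mutate(champion)
            score = run(mutant)
            if score > best:        # the environment's verdict: survive...
                champion, best = mutant, score
            # ...otherwise the mutant "dies and is erased from history"
        return best

    print(evolve())
    ```

    The epistemic cut in the sketch is the `if score > best` line: the judging environment only ever sees the produced number, never the informational genome that produced it.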
  • The intelligibility of the world
    Language is also obviously constrained by actuality, by the nature of what is experienced. It also comes to constrain that experience; it is a reciprocal or symbiotic relation between perception and conception. For me that natural primordial symbiosis consists in the reception of, response to and creation of signs, and I suspect apokrisis would agree.John

    Yep. Symbiosis is a good way to think about it. It all has the causal interdependency that an ecological perspective presumes.
  • The intelligibility of the world
    So, you are going to bypass this problem by ignoring it and go on to more answerable problems? Then you are not answering the question at hand. The naked primal experience is at hand.schopenhauer1

    You forget that I was addressing the OP, not the Hard Problem.

    But we've talked about the Hard Problem often enough. I agree that there is a limit on modelling when modelling runs out of counterfactuality. And this reinforces what I have been saying about intelligibility. To be intelligible, there must be the alternative that gets excluded in presenting the explanation. And once we get down to "raw feels" like redness or the scent of a rose, we don't have counterfactuals - like how red could be other than what it is to us.

    But up until the limit, no problem. Or all Easy Problem.

    And then - challenging your more general "why should it feel like anything?" - is my response. If the brain is in a running semiotic interaction with the world in a way that it is a model of being in that world, then why should it not feel like something? Why would we expect the brain to be doing everything that it is doing and yet there not be something that it is like to be doing all that?

    Of course it requires a considerable understanding of cognitive neuroscience to have a feeling of just how much is in fact going on when brains model worlds in embodied fashion - by orders of magnitude the most complex knot of activity in the known Universe. But still, the Hard Problem for philosophical zombie believers is why wouldn't it be like something to be a brain in that precise semiotic relation to the world? Answer me that.

    Panpsychism is a different kettle of fish. It just buries its lack of explanatory mechanism as far out of sight as possible. It says don't worry folks. Consciousness is this little glow of awareness that inhabits all matter. And that is your "explanation". Tah, dah!
  • General purpose A.I. is it here?
    You will have to forgive me if I find that line to be a rather large leap and not so straight forward as you take for granted..m-theory

    Only because you stubbornly misrepresent my position.

    So, to quote von Neumann, what is the point of me being percise if I don't know what I am talking about?m-theory

    Exactly. Why say pomdp sorts all your problems when it is now clear that you have no technical understanding of pomdp?

    Here is another video of Chaitin offering a computational rebuttal to the notion that computation does not apply to evolution.m-theory

    Forget youtube videos. Either you understand the issues and can articulate the relevant arguments or you are pretending to expertise you simply don't have.
  • General purpose A.I. is it here?
    This does not make it any clearer what you mean when you are using this term.
    Again real world computation is not physics free, even if computation theory has thought experiments that ignore physical constraints.
    m-theory

    Again, real world Turing computation is certainly physics-free if the hardware maker is doing his job right. If the hardware misbehaves - introduces physical variety in a way that affects the physics-free play of syntax - the software malfunctions. (Not that the semantics-free software could ever "know" this of course.)

    We don't have a technical account of your issue.
    It was a mistake of me to try and find a technical solution prior I admit.
    m-theory

    :-}
  • General purpose A.I. is it here?
    I don't really have time to explain repeatedly that fundamentally I don't agree that relevant terms such as these examples are excluded from computational implantation.m-theory

    Repeatedly? Once properly would suffice.

    This link seems very poor as an example of a general mathematical outline of a Godel incompleteness facing computational theories of the mind.m-theory

    Read Rosen's book then.

    Perhaps if you had some example of semantics that exists independently and mutually exclusive of syntax it would be useful for making your point?m-theory

    You just changed your wording. Being dichotomously divided is importantly different from existing independently.

    So it is not my position that there is pure semantics anywhere anytime. If semantics and syntax form a proper metaphysical strength dichotomy, they would thus be two faces of the one developing separation. In a strong sense, you could never have one without the other.

    And that is indeed the basis of my pan-semiotic - not pan-psychic - metaphysics. It is why I see the essential issue here the other way round to you. The fundamental division has to develop from some seed symmetry breaking. I gave you links to the biophysics that talks about that fundamental symmetry breaking when it comes to pansemiosis - the fact that there is a convergence zone at the thermal nano-scale where suddenly energetic processes can be switched from one type to another at "no cost". Physics becomes regulable by information. The necessary epistemic cut just emerges all by itself right there for material reasons that are completely unmysterious and fully formally described.

    The semantics of go was not built into AlphaGo and you seem to be saying that because a human built it that means any semantic understanding it has came from humans.m-theory

    What a triumph. A computer got good at winning a game completely defined by abstract rules. And we pretend that it discovered what counts as "winning" without humans to make sure that it "knew" it had won. Hey, if only the machine had been programmed to run about the room flashing lights and shouting "In your face, puny beings", then we would be in no doubt it really understood/experienced/felt/observed/whatever what it had just done.

    Again I can make no sense of your "physics free" insistence here.m-theory

    So you read that Pattee reference before dismissing it?

    And again it is not clear that there is an ontic issue and the hand waving of obscure texts does not prove that there is one.m-theory

    I can only hand wave them if you won't even read them before dismissing them. And if you find them obscure, that simply speaks to the extent of your scholarship.

    I did not anticipate that you would insist that I define all the terms I use in technical detail.
    I would perhaps be willing to do this I if I believed it would be productive, but because you disagree at a more fundamental level I doubt giving technical detail will further or exchange.
    m-theory

    I've given you every chance to show that you understand the sources you cite in a way that counters the detailed objections I've raised.

    Pomdp is the ground on which you said you wanted to make your case. You claimed it deals with my fundamental level disagreement. I'm waiting for you to show me that with the appropriate technical account. What more can I do than take you at your word when you make such a promise?
  • The intelligibility of the world
    You're saying that logic constrains thinking, and that is false, because you are making logic, which is a passive tool of thought, into something which actively constrains thought.Metaphysician Undercover

    A tool is an effective cause. A logical constraint is a formal cause. So you are confusing your Aristotelean categories here.

    But logic is not a "passive tool of thought"; on the contrary we cannot think cogently without it.John

    I agree. It is the structural grounding that makes it even possible to act in a "thoughtful" way.

    Of course you can go back before the development of formal language, and even grammatical speech, and argue that animals think without this "tool".

    Yet in fact if you check the very structure of the brain, it is "logical" in a general dichotomistic or symmetry-breaking sense. It has an architecture that is making logical breaks at every point of its design.

    It starts right with the receptive fields of sensory cells. They are generally divided so that their firing is enhanced when hit centrally, and their firing is suppressed by the same stimulus hitting them peripherally. And then to balance that, a matching set of cells does the exact reverse. This way, a logically binary response is imposed on the world and information processing can begin.
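    That centre-surround logic is simple enough to caricature in code. The sketch below is a toy, not a physiological model: the 0.5 surround weighting is an arbitrary stand-in for a real difference-of-Gaussians receptive field profile.

    ```python
    # Toy 1-D centre-surround receptive field. A cell is excited by a
    # stimulus at its centre and inhibited by the same stimulus falling
    # on its immediate neighbours; the off-centre cell does the reverse.

    def on_centre(stimulus, i):
        """Excited by light at position i, inhibited by light at i-1 and i+1."""
        centre = stimulus[i]
        surround = stimulus[i - 1] + stimulus[i + 1]
        return centre - 0.5 * surround

    def off_centre(stimulus, i):
        """The mirror-image cell: the exact reverse response."""
        return -on_centre(stimulus, i)

    flat = [1.0, 1.0, 1.0]  # uniform illumination
    edge = [0.0, 1.0, 0.0]  # a local difference

    print(on_centre(flat, 1))   # centre and surround cancel: no response
    print(on_centre(edge, 1))   # a difference registers: strong response
    print(off_centre(edge, 1))  # the complementary cell signs the reverse
    ```

    The uniform field producing zero output is the point: what gets passed up the line is never raw magnitude but a logically binarised difference.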

    Then even when the brain becomes a big lump of grey matter, it still is organised with a dichotomous logic - all the separations between motor and sensory areas, object identity and spatial relation pathways, left vs right hemisphere "focus vs fringe" processing styles, etc.
  • Ignoring suffering for self-indulgence
    If you care about suffering, you will do something about it.darthbarracuda

    But while I arguably can't help but care about my suffering, why should I "have to" care about yours? So phrased this way, you already presume empathy as a brute fact of your moral economy?

    For me (and I think for most everyone else who isn't lacking in compassion and empathy - i.e. sociopaths, psychopaths, selfish individuals, most politicians, etc.), it seems wrong to ignore someone who just broke their leg down the block and is screaming in pain...darthbarracuda

    So yes. There is something bio-typical and evolutionarily advantageous about empathy. We can even point to the neurochemistry and brain architecture that makes it a biologically-unavoidable aspect of neurotypical human existence.

    But what then of those who are wired differently and lack such empathy? Is it moral that they should ignore such a situation, or exploit the situation in some non-empathetic fashion? If not, then on what grounds are you now arguing that they should fake some kind of neurotypical feelings of care?

    So in general I think there really is no other position to take other than to accept that those who are worse-off than we are should be sought out and helped to the best of our abilities - in other words, if the cost of us helping them is reasonably lower than the relief the victim experiences, we have a moral obligation to do so.darthbarracuda

    But that can't follow if you begin with this notion of "I care". It doesn't deal with the people who don't actually care (through no fault of their own, just bad genetic luck probably exacerbated by bad childhood experience).

    So to justify a morality based on neurotypicality is not as self-justifying as you want to claim. A consequence of such a rigid position is clearly eugenics - let's weed the unempathetic out.

    Of course we instead generally take a more biologically sound approach - recognise that variation even in empathy is part of a natural spectrum. Degrees in the ability to care are neurotypically normal. Where intervention is most justified is in childhood experience - get in there with social services. And also consider the way that "normal society" in fact might encourage un-empathetic behaviours. Then for the dangerously damaged, you lock them away.

    So to make care central, you have to deal with its natural variety in principled fashion - as well as the fact that this is essentially a naturalistic argument. Is is ought. Because empathy is commonplace in neurodevelopment, empathy is morally right.

    This leads to uncomfortable/guilty conclusions that I think modern ethicists have made an entire speculative field out of to try to mitigate: essentially much of modern ethics ends up being apologetics for not doing enough, or being a lazy, selfish individual, i.e. justifying inherent human dispositions as if they are on par with our apparent moral obligations.darthbarracuda

    From a psychological point of view, getting out and involved in ordinary community stuff is the healthy antidote to the deep pessimism that an isolationist and introverted lifestyle will likely perpetuate.

    So it is quite wrong - psychologically - to frame this in terms of people being lazy and selfish (as if these were the biologically natural traits). Instead, what is natural - what we have evolved for - is to live in a close and simple tribal relation. And it is modern society that allows and encourages a strong polarisation of personality types.

    The good thing about modern society is that it allows a stronger expression of both introversion and extraversion - the most basic psychodynamic personality dimension. And then that is also a bad thing in that people can retreat too far into those separate styles of existence.

    ....and most of all the complete abandonment of one's own personal desires in order to help others.darthbarracuda

    So from one extreme to the other, hey?

    I think you have to start with the naturalistic basis of your OP - that we neurotypically find that we care about the suffering (and happiness) of others. And then follow that through to its logical conclusions. And this complete individual self-abnegation is not a naturalistic answer. It is not going to be a neurotypically average response - one that feels right given the way most people feel.
  • The intelligibility of the world
    Could someone explain to me what is wrong with the homuncular approach? People speak as if this is some big fallacy, but until the homuncular approach is proven wrong, why should we be afraid of it?Metaphysician Undercover

    Infinite regress. An explanation endlessly deferred is an explanation never actually given.
  • General purpose A.I. is it here?
    Agency is any system which observes and acts in it's environment autonomously.m-theory

    Great. Now you have replaced one term with three more terms you need to define within your chosen theoretical framework and not simply make a dualistic appeal to standard-issue folk ontology.

    So how precisely are observation, action and autonomy defined in computational theory? Give us the maths, give us the algorithms, give us the measurables.

    The same applies to a computational agent, it is embedded with its environment through sensory perceptions.m-theory

    Again this is equivocal. What is a "sensory perception" when we are talking about a computer, a syntactic machine? Give us the maths behind the assertion.

    Pattee must demonstrate that exact solutions are necessary for semantics.m-theory

    But he does. That is what the Von Neumann replicator dilemma shows. It is another example of Godelian incompleteness. An axiom system can't compute its axiomatic base. Axioms must be presumed to get the game started. And therein lies the epistemic cut.

    You could check out Pattee's colleague Robert Rosen who argued this point on a more general mathematical basis. See Essays on Life Itself for how impredicativity is a fundamental formal problem for the computational paradigm.

    http://www.people.vcu.edu/~mikuleck/rosrev.html

    I also provided a link that is extremely detailed.m-theory

    The question here is whether you understand your sources.

    Pompdp illustrates why infinite regress is not completely intractable it is only intractable if exact solutions are necessary, I am arguing that exact solutions are not necessary and the general solutions used in Pomdp resolve issues of epistemic cut.m-theory

    Yes, this is what you assert. Now I'm asking you to explain it in terms that counter my arguments in this thread.

    Again, I don't think you understand your sources well enough to show why they deal with my objections - or indeed, maybe even agree with my objections to your claim that syntax somehow generates semantics in magical fashion.

    I can make no sense of the notion that semantics is something divided apart from and mutually exclusive of syntax.m-theory

    Well there must be a reason why that distinction is so firmly held by so many people - apart from AI dreamers in computer science perhaps.

    To account for the competence of AlphaGo one cannot simply claim it is brute force of syntax as one might do with Deepblue or other engines.m-theory

    But semantics is always built into computation by the agency of humans. That is obvious when we write the programs and interpret the output of a programmable computer. With a neural net, this building in of semantics becomes less obvious, but it is still there. So the neural net remains a syntactic simulation not the real thing.

    If you want to claim there are algorithmic systems - that could be implemented on any kind of hardware in physics-free fashion - then it is up to you to argue in detail how your examples can do that. So far you just give links to other folk making the usual wild hand-waving claims or skirting over the ontic issues.

    The Chinese room does not refute computational theories of the mind, never has, and never will.
    It is simply suggests that because the hardware does not understand then the software does not understand.
    m-theory

    Well the Chinese Room sure felt like the death knell of symbolic AI at the time. The game was up at that point.

    But anyway, now that you have introduced yet another psychological concept to get you out of a hole - "understanding" - you can add that to the list. What does it mean for hardware to understand anything, or software to understand anything? Explain that in terms of a scientific concept which allows measurability of said phenomena.
  • The intelligibility of the world
    Then use "sense" or basic perception if experience is too vague or too complex a notion for your material cause.schopenhauer1

    You miss the point. No matter how we might refer to dasein or whatever, in pointing to it, we are already constructing a conceptualised distance from it. We are introducing the notion of the self which is taking the view of the thing from another place.

    So even phenomenology has an irreducible Kantian issue in thinking it can talk about the thing in itself which would be naked or primal experience. Any attempt at description is already categoric and so immediately into the obvious problems of being a model of the thing. You can't just look and check in a naively realistic way to see what is there. Already you have introduced the further theoretical constructs of this "you" and "the thing" which is being checked.

    Oh come now. A baby or animal doesn't have brute fact experiences? It only becomes experience through some sort of linguistic filter? Blah.schopenhauer1

    Again, to talk about animals having just brute fact experiences is a convincing theoretical construct, but still essentially a construct.

    How do we imagine what it is like to be an aware animal? Using reason, we can say it is probably most closely like ourselves in our least linguistic and self-conscious state - like staring out the window in a blank unthinking fashion. So we can try to reconstruct a state that is pre-linguistic. It doesn't feel impossible.

    But the point of this discussion is that it is humans that have a social machinery for structuring experience in terms of a logical or grammatical intelligibility. We actually have an extra framework to impose on our conceptions and our impressions.

    This is why there is an issue of how such a framework relates to the world itself. Is the machinery that seems epistemically useful for structuring experience somehow also essentially the same machinery by which the world ontically structures its own being? Is logic an actual model of causality in other words?

    You have to explain that better to be relevant in the conversation.schopenhauer1

    Or you have to understand better to keep up with the conversation. Definitely one or the other. :)
  • General purpose A.I. is it here?
    Semantics cannot exist without syntax.
    To implement any notion of semantics will entail syntax and the logical relationships within that syntax.
    To ground this symbol manipulation simply means to place some agency in the role of being invested in outcomes from decisions.
    m-theory

    Great. Now all you need to do is define "agency" in a computationally scalable way. Perhaps you can walk me through how you do this with pomdp?

    A notion of agency is of course central to the biosemiotic approach to the construction of meanings - or meaningful relations given that this is about meaningful physical actions, an embodied or enactive view of cognition.

    But you've rejected Pattee and biosemiotics for some reason that's not clear. So let's hear your own detailed account of how pomdp results in agency and is not merely another example of a computationalist Chinese Room.

    How as a matter of design is pomdp not reliant on the agency of its human makers in forming its own semantic relations via signs it constructs for itself? In what way is pomdp's agency grown rather than built?

    Sure, neural networks do try to implement this kind of biological realism. But the problem for neural nets is to come up with a universal theory - a generalised architecture that is "infinitely scalable" in the way that Turing computation is.

    If pomdp turns out to be merely an assemblage of syntactic components, their semantic justification being something that its human builders understand rather than something pomdp grew for itself as part of a scaling up of a basic agential world relation, then Houston, you still have a problem.

    Every time some new algorithm must be written by the outside hand of a human designer rather than evolving internally as a result of experiential learning, you have a hand-crafted machine and not an organism.

    So given pomdp is your baby, I'm really interested to see you explain how it is agentially semantic and not just Chinese Room syntactic.
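    To make the Chinese Room worry concrete, here is a minimal sketch of the machinery being debated - a belief update for the classic two-state "tiger" toy POMDP. This is a generic textbook illustration, not m-theory's own system, and all the specifics (the state space, the 0.85 accuracy figure) are assumptions chosen for the example. The point it illustrates is exactly the one above: every semantic ingredient - what the states, observations and probabilities *mean* - is hand-specified by the human designer, while the "agent" only crunches numbers over those givens.

```python
# Minimal two-state POMDP belief update (the standard "tiger" toy problem).
# States: tiger behind the left door (0) or the right door (1).
# Observations: hear the tiger on the left (0) or the right (1).

# Observation model P(obs | state): listening is 85% accurate.
# This number, like everything else here, is the designer's choice -
# the semantics are built in from outside, not grown by the agent.
P_OBS = [
    [0.85, 0.15],  # state = tiger-left
    [0.15, 0.85],  # state = tiger-right
]

def belief_update(belief, obs):
    """Bayes rule over the designer-given model: b'(s) ∝ P(obs | s) * b(s)."""
    unnorm = [P_OBS[s][obs] * belief[s] for s in (0, 1)]
    total = sum(unnorm)
    return [x / total for x in unnorm]

belief = [0.5, 0.5]        # maximally uncertain prior
for _ in range(3):         # hear the tiger on the left three times
    belief = belief_update(belief, 0)

print(belief)              # belief now heavily concentrated on tiger-left
```

    Nothing in this loop constructs a sign for itself; the mapping from "observation 0" to a tiger's growl exists only in the builder's head, which is the hand-crafted-machine-versus-organism distinction being pressed here.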
  • The intelligibility of the world
    How is the panpyschist that different from a pragmatic semiotic theorist if both take experience as a brute fact?schopenhauer1

    I would put "experience" in quote marks to show that even to talk about it is already to turn it into a measurable posited within a theoretical structure.

    So the main difference is that you are taking experience as a brute fact. Essentially you are being a naive realist about your phenomenological access. Qualia are real things to you.

    I would take qualia as being the kinds of facts we can talk about - given a suitable structure of ideas is in place.

    Your approach is illogical. Either it is homuncular in requiring a self that stands outside "the realm of brute experience" to do the experiencing of the qualia. Or the qualia simply are "experiential", whatever the heck that could mean in the absence of an experiencer.

    My way is logical. It is the global structure of observation that shapes up the appearance of local observables. And these observables have the nature of signs. They are symbols that anchor the habits of interpretation.

    So in talking about qualia - the colour red, the smell of a rose - this is simply how pixellating talk goes. It is something we can learn to do by applying a particular idea of experience to the business of shaping up experience's structure. If I focus hard in the right way, I can sort of imagine redness or a rose scent in a disembodied, elemental, isolated, fashion as the qualia social script requires. I can perform that measurement in terms of that theory and - ignoring the issues - go off believing that a panpsychic pixels tale of mind is phenomenologically supported.
  • The intelligibility of the world
    Well, that is not sensation, that is the structure in which sensation works within, not the sensation itself.schopenhauer1

    So you say. But good luck with a psychology which is not focused on a structure of distinctions as opposed to your panpsychic pixels.
  • The intelligibility of the world
    I don't get how logic is sensation then. I'm all ears.schopenhauer1

    It is the structure of sensation. And sensation without structure feels like nothing (well, like vagueness to be more accurate).

    So if the world is logically structured, then that is the structure sensation needs to develop to be aware of the world.

    And the world itself must be logically structured as how else could it arrive at an organisation that was persistent and self-stable enough for there to be "a world", as opposed to a vague chaos of disorganised fluctuations?
  • The intelligibility of the world
    Also, I think you might find interest in at least some of what the analytics have to say, particularly Koslicki, Loux, Lowe and Tahko (hard-core hylomorphist neo-Aristotelians).darthbarracuda

    Any secondary literature that talks about my primary interests - Anaximander, Aristotle and Peirce - is going to be interesting to me. And the secondary literature around Aristotle is of course vast. He is the context for metaphysics, so every camp has to have something to say on that.

    But we have strayed away from the OP.

    The speculative/contentious point that I make there is the one that is represented by Anaximander and Peirce, rather than Aristotle. And that is that the Cosmos is intelligible because it itself represents a creative process that can be understood as the bootstrapping development of intelligibility.

    So as a metaphysical position, it is "way out there". :)

    But also, it is a holistic way of thinking about existence which is pretty scientific now.

    So systems science or natural philosophy is an Aristotelean four causes tradition that indeed detours through German idealist philosophers like Schelling. And then Peirce makes the connection between symbol and matter as the way to operationalise the four causes in the way modern science can recognise. Formal and final cause become top-down constraints that shape bottom-up material and effective freedoms. And constraints become the symbolised part of nature - the information that is the memory of a system or dissipative structure.

    So the intelligibility of nature is a consequence of nature itself being a fundamentally semiotic or "mind-like" process. That is why Peirce described existence as the generalised growth in reasonableness.

    But calling it mind-like is really only to stress how far out of Kansas we are when it comes to standard issue reductionist realism which only wants to acknowledge a reality born of material and efficient cause. So calling it mind-like isn't to invoke a phenomenological notion of mind, nor the dualist notion of mind, but instead semiotics' own idea of mindfulness, which is quite different in its own way metaphysically.
  • The intelligibility of the world
    The late E.J. Lowe, Jonathan Schaffer, Tuomas Tahko, Ted Sider, Susan Haack, Michael J. Loux, the late David Lewis, Peter van Inwagen, Timothy Williamson, Amie Thomasson, Sally Haslanger, David Chalmers, Kit Fine, D. M. Armstrong, Trenton Merricks, Eli Hirsch, Ernest Sosa, Daniel Korman, Jaegwon Kim, etc.

    The analytics.
    darthbarracuda

    Yep. Most of those I would be in deep disagreement with. But that is because they represent the reductionist and dualistic tendency rather than the romantically confused one.

    That is why I am a Pragmatist. As I said, reductionism tries to make metaphysics too simple by arriving at a dichotomy and then sailing on past it in pursuit of monism. The result is then a conscious or unwitting dualism - because the other pole of being still exists despite attempts to deny it.

    You read Heidegger, Husserl, the idealists?darthbarracuda

    Not with any great energy. I'm quite happy to admit that from a systems science standpoint, it is quite clear that the three guys to focus on are Anaximander, Aristotle and Peirce. Others like Kant and Hegel are important, but the ground slopes away sharply in terms of what actually matters to my interests.
  • The intelligibility of the world
    Also, contemporary realist metaphysics is largely concerned with ontology and not with the broader metaphysical stories.darthbarracuda

    Again, who are you talking about in particular?

    It's far more conservative than your version of metaphysics, with the only notable things I can think of being discussions of supervenience, grounding, causality and semantic meaning.darthbarracuda

    What you might be talking about just keeps getting muddier to me.
  • The intelligibility of the world
    I'd still like to know what you think are examples of bad metaphysics.darthbarracuda

    It's hard to be particular because the ways of expressing the generalised confusion of romanticism are so various. But anything panpsychic like Whitehead, or aesthetic like SX cites. I don't mind theistic approaches because they stick to a Greek framework of simplicity and so can deal with the interesting scholarly issues - right up to the point where God finally has to click in.
  • The intelligibility of the world
    What is this particular way? The semiotic trifold?darthbarracuda

    That is what I argue is the most penetrating model of it, yes.
  • The intelligibility of the world
    What legitimate differences are there between your conception of metaphysics and theoretical physics?darthbarracuda

    As I've already said, I see metaphysics and science as united by a common method of reasoning - the presumption the world is intelligible because it is actually rationally structured in a particular way.

    So the only possible other choice - given that method has become so sharply defined and unambiguous - is whatever is its sharp "other". And I am afraid we do see that other showing its Bizarro head and claiming to be doing Bizarro metaphysics (and also crackpot science, of course).

    Nobody pays you to think about the world, they pay you for results that can be applied to the economy in some way, and everyone's gotta pay the bills.darthbarracuda

    That is sadly true on the whole as I say. Even philosophy and fine art courses push the modern marketability of the critical thinking skills they teach.

    But still, if we are talking about who is best equipped to do metaphysical-strength thinking these days, that is a different conversation.
  • The intelligibility of the world
    I don't really understand what you have in mind when you say "romanticism" or "PoMo". Do you not appreciate Spinoza, Descartes, Husserl, Heidegger, etc? Only some? Only those who aren't easily fitted into your pragmatism?darthbarracuda

    All celebrated figures are celebrated for some reason. So I wouldn't dismiss anyone or any movement out of hand. But yes, I am saying something much stronger than merely that romanticism does not fit easily with rationalism. I'm saying it is the maximally confused "other" of rationalism.

    And pragmatism - if understood properly - is the best balance of the realist and idealist tendencies in philosophy. So it already incorporates phenomenology, or the irreducibility of being in a modelling relation with the world, in its epistemology.

    Science - as a method - isn't naive realism or even bald empiricism. It is rational idealism. It is a method that starts by accepting that knowledge is radically provisional, and then works out how to proceed from there.

    Well, yes and no. If measurement is the only way of understanding the world (what I see as empiricism), then either is must be shown how philosophy utilizes measurement, or it must be seen with skepticism.darthbarracuda

    Do you think philosophy could have got going if philosophers were blind, deaf and unfeeling? Of course measurement is already involved in having sensations of the world.

    The point of philosophy is that ideas and perceptions are so biologically and culturally entangled with each other in ordinary life. So as a method, it works to separate these two aspects of the modelling relation from each other. It started by showing sensation (biological measurement) could be doubted, just as beliefs (cultural ideas) could be doubted.

    Then eventually this evolved into science where acts of measurement - turning an awareness of the world into numbers read off a dial - became the "objective" way to operate. But calling measurement objective is a little ironic given that it is now so completely subjective in being dependent on understanding the world only in terms of dial readings. Science says, well, if in the end there is only our phenomenology, our structure of experience, then let's make even measurement consciously a phenomenological act.

    Usually philosophy utilizes things like counterfactual reasoning, thought experiments, etc. Other fields use these as well. These are generally "fuzzy" in their nature, though. When a philosopher thinks up something like, let's say, Neo-Platonism, it's extremely abstract and fuzzy.darthbarracuda

    If we are talking about metaphysics, there is nothing fuzzy about its reasoning method. The dichotomy or dialectic says quite simply that possibility must divide into either this or that - two choices that can be seen to be mutually exclusive and jointly exhaustive.

    The only thing "fuzzy" is that people then take up different positions about the result of this primary philosophical act. You can treat a dichotomy as a problem - only one possibility can be true, the other must be false - and so pursue monism. Or the opposite is to embrace the triadic holism that resolves the division - adopt the hierarchical view where dichotomies are differentiations that then also result in integration. In splitting vague possibility apart into two crisply complementary things, you arrive at the basis of an existence in which the contrasts can mix. The world is everything that can stand between two poles representing mutually-derived extremum principles.

    In other words, a constraint is a totally different kind of thing from a zebra. The latter is studied by biologists, the former (as it is-itself) the metaphysician.darthbarracuda

    WTF? Have you ever taken a biology class? Are you so completely unaware of the impact that science's understanding of constraints has had on metaphysics? Next you will be saying Newton and Darwin told us a lot about falling apples and finch beaks, and contemporary philosophy shrugged its shoulders and said "nah, nothing to see here folks".

    I'm referring to contemporary realist analytic metaphysics.darthbarracuda

    It's true that those employed in philosophy departments struggle to produce anything much that feels new these days. The real metaphysics of this kind is being done within the theoretical circles of science itself. The people involved would be paid as scientists.

    Yet starting with Ernst Mach, there is a real tradition of encouraging a useful level of interaction. And analytic types fit in pretty well as interpreters, critics and synthesisers. At the bleeding edge of ideas, any academic boundaries are in practice rather porous.

    I think you may just have an idea that science is somehow basically off track and you need a metaphysical revolution led by philosophers to rescue it.

    So instead you see a world where science charges along, and metaphysicians look more like sucker fish hitching a ride, picking off some crumbs. And because it doesn't match your preconception, you read that picture wrong.
  • The intelligibility of the world
    There's different methods within this broad "scientific" account you presented. If you're an astronomer, you'll use a telescope. If you're a microbiologist, you'll use a microscope. If you're a chemist, you'll use a thermometer and a plethora of other expensive equipment; same goes for practically any scientific field.darthbarracuda

    Yes, the business of measurement is various.

    But I thought you were saying there are other methods of seeking intelligibility itself - methods that aren't just the general method of scientific reasoning.

    Again, my position is that the world is intelligible - it actually is structured in terms of constraints and freedoms, global rules that shape local instances.

    And so it is not surprising that once human thinking aligns with that - once that is our conscious method of inquiry - then we find the world to be surprisingly easy to make sense of.

    And on this score, science is just applied metaphysics. It is a historical continuation of a method to its natural conclusion. Science has just taken the intelligible categories of Greek metaphysics - the dichotomous questions like is existence atomistic or is it holistic - and polished up the mathematical expression of the ideas, and the ability to then check them through a process of supporting measurements.

    You can rightfully point out that the purpose for even thinking this way about existence is a further matter of complication.

    The point about metaphysical/scientific reasoning is that it is meant to be dispassionate. It is meant to be the view of reality that transcends any particular human or social interests. By replacing gods, spirits, customs and values with a naked system of theory and measurement, the thought was that this would allow the Cosmos to speak its own truth, whatever that might be. We would see its reality unfiltered.

    But of course it is really difficult in fact to suppress all our own natural interests when investigating the world. It is obvious that even science embeds a strong human interest in gaining a mechanical/technological control over material existence. So science, in practice, is not as dispassionate as it likes to pretend.

    But still, the reasoning method is designed to let the Cosmos speak for itself as much as might be possible. It is objective in offering ways to take ourselves out of the equation as much as we let it.

    So then, on that score, scientific reasoning conjures up its own Romantic other. If cosmological reasoning - the kind that targets intelligible existence - has the goal of being dispassionate, then of course that opens the door to the notion of a counter-method based on being humanly passionate in trying to answer the same questions.

    So everything reason does, Romanticism would want to do the opposite.

    Instead of objectivity, let's have maximum subjectivity. Instead of careful measurement of the world, now any imagined idea about the world is good enough. Instead of the formal mathematical expression of ideas, let's try opaque poetic grandiloquence. Instead of expecting global intelligibility, let's expect global incoherence.

    So it is an inevitable part of rationality's success at developing itself into a tight self-supporting methodology that it should also, automatically, produce its Bizarro world other.

    I guess on that score, science could be said to have room for only the one method, while modern philosophy - having less culturally patrolled boundaries - certainly has room for the two.

    But that is my analysis of the variety of methods that might exist in philosophy. I haven't heard what other methods of "reasoning" you have in mind when it comes to the standard issue approach of intelligibility-seeking metaphysics.

    The point being made, though, is what exactly is the subject matter of philosophy, in particular metaphysics, that makes it a legitimate attempt to understand the world, and why this subject matter is usually unable to be studied by more..."mainstream" science.darthbarracuda

    So it is important to you that there be a difference? Are you seeking to erect a cultural fenceline even if it need not exist? This is what I find weird about your stance.

    Or I guess not. It is daunting if it is the case that to do metaphysics in the modern era requires one to actually have a deep knowledge of science and maths as well. That's a lot of work.

    There aren't really any "discoveries" within metaphysics, just explanations of what we already see on a day-to-day basis.darthbarracuda

    Nope. That seems an utterly random statement to me. Do you have an example of current metaphysics papers of this kind?
  • The intelligibility of the world
    ...there seems to be more than one method of understanding the world.darthbarracuda

    So apart from "scientific" reasoning - a process of guessing a general mechanism, deducing its particular consequences, then checking to see if the behaviour of the world conforms as predicted - what are these other methods? Can you explain them?

    To say the world is intelligible is to say it is structured in terms of local instances of global rules. And so any method is going to boil down to seeking the global rules that can account for local instances. Where's the variety there?
  • The intelligibility of the world
    Here's a definition of self-organization I came across at BusinessDictionary.com: "Ability of a system to spontaneously arrange its components or elements in a purposeful (non-random) manner, under appropriate conditions without the help of an external agency."

    There are a number of questionable issues here.
    Metaphysician Undercover

    So this is an example of how science does think through its metaphysics. As already said to you in other threads where you have rabbited on about the nature of purpose, a naturalistic systems view demystifies it by talking about final cause in terms of specific gradations of semiosis.

    {teleomaty {teleonomy {teleology}}}.

    Or in more regular language, {propensity {function {purpose}}}.

    So we would have a mere physico-chemical level of finality as a propensity, a material tendency. A bio-genetic level of finality would be a function, as in an organism. And then a psycho-linguistic level of finality would be that which we recognise in a thinking human.

    See: http://www.cosmosandhistory.org/index.php/journal/article/view/189/284
  • The intelligibility of the world
    But the traditionalist account of intelligibility was such that it conveyed the sense of a complete, (if you like illuminated) understanding, in the sense of there no longer being any shortcoming or gap between the understanding and the thing understood.Wayfarer

    The Greeks were naturally stunned at finding that mathematical arguments have the force of logical necessity. If we take certain geometric axioms as unquestionable truths, then a whole bunch of incontrovertible results follow deductively.

    It was literally the creation of a machinery of thought. And rather than some spiritual illumination, it was a Philosophism (as a precursor to Scientism). :) Plato was the Dawkins of his day to the degree that he reduced the world to a literal abstraction. A perfect triangle or perfect sphere was something real and substantial that could be grasped via the rationality of the mind - and as an idea, acted to form up the imperfect matter of the world.

    So this worshipful approach to the awe of mathematical reason - the demonstration that axiom-generated truths looked to explain the hidden regularity of nature - was understandable as a first reaction. But we've since also learnt that maths is only as good as the assumptions contained in its axioms. So maths itself is no longer quite so magical, just pragmatically effective. Yet also our connecting of maths to the world via the scientific method has developed so much that the essential wonder - that existence is intelligible in this pragmatic modelling fashion - persists.

    It is no longer amazing that the Cosmos is intelligible - it has to be, just to exist as a self-organised state of global regularity. But it is amazing that we can really get at that structure through the dynamic duo of maths and measurement.

    Or where it becomes less amazing again, we should qualify it by mentioning that humans naturally favour the knowledge that pays its own way in terms of serving humanity's most immediate interests. Which is where Scientism and reductionism come in - the narrower view of causation that produces all our technology (including our political and economic "technology").

    Neither philosophy nor science is a big fan of holism. The great metaphysical system builders like Peirce and Hegel are held in deep suspicion. Neither AP nor PoMo likes grand totalising narratives. The idea that reality might be a reasonable place - actually driven by the purpose of becoming organised - is as unfashionable as it gets ... because society wants the machine thinking that creates the machines it is now dependent upon. He who pays the piper, etc.
  • The intelligibility of the world
    I think 'intelligible' traditionally relates to ordinary speech, not to philosophical discourse, and means that we can make out what the person is trying to communicate.andrewk

    Given this is a philosophy board and the OP was clearly meaning to apply the philosophical usage, talking instead about issues of ordinary language comprehension is an unhelpful sidetrack.

    I'll post the Wiki definition if it helps....

    In philosophy, intelligibility is what can be comprehended by the human mind in contrast to sense perception. The intelligible method is thought thinking itself, or the human mind reflecting on itself.

    Plato referred to the intelligible realm of mathematics, forms, first principles, logical deduction, and the dialectical method. The intelligible realm of thought thinking about thought does not necessarily require any visual images, sensual impressions, and material causes for the contents of mind.

    Descartes referred to this method of thought thinking about itself, without the possible illusions of the senses. Kant made similar claims about a priori knowledge. A priori knowledge is claimed to be independent of the content of experience.

    So the metaphysical surprise is that reality is logically structured. It appears to conform to the laws of thought. The world seems to operate with order and reason - regulated by formal/final cause or abstract rational principles.

    Traditionally, this seemed such a surprise that it was mystical. A transcendent cause of order seemed necessary because nature itself is naturally messy, with an ever-present tendency towards disorder.

    But now - through science and maths - we have discovered how structure in fact arises quite naturally in nature through fundamental principles of thermodynamic self-organisation. Disorder itself must fall into regular patterns for basic geometric reasons to do with symmetries and symmetry-breakings.

    So the intelligibility of the Cosmos is far less of an issue these days. We have things like selection principles and least action principles that explain the emergence of order even from randomness.