Comments

  • Do numbers exist?


    I do of course agree with your point that 2i is a quantity of two i's, like 2 apples is a quantity. So the question reduces to asking exactly what a quantity is. @tim wood brought up the idea of quantity a while back so I asked him what a quantity is, and so far I have not gotten an answer.

    By saying that the magnitude of i is 1, what I meant wasn't that there was a single i, an answer to how many 'i's are there - but that a vector that starts at the origin in the complex plane and points to i has length 1. This allows for there to be real numbers of 'i' as the 'number of i's in a complex number, so to speak.

    More generally, consider a complex number like z=4+pi*i. Pi isn't exactly an answer to 'how many i's are there in z?', since its interpretation is severed from counting numbers in a few ways. The first way it's severed is that z is not a multiple of i in anything like the counting number sense (there are pi-4i i's in z, since z/i = pi-4i), so we cannot chunk z into i-sized bits through division. An 'i-sized bit' doesn't even make sense, as imaginary numbers don't enter into the notion of size for complex or imaginary numbers.**

    The second way the interpretation of the magnitude of z is severed from that of a real number or fraction is that z has two senses of magnitude inherent in it. There's the real part and the imaginary part (which individually work exactly like real numbers and usual counting with respect to 'how many' questions, to the extent that irrational numbers can be said to answer 'how many' questions), or there's the polar form of radius and angle - either way, two descriptors of magnitude are required to specify the quantity ('number'), rather than the single one for scalars. Polar form and Cartesian form also differ in interpretation: polar form pairs an unbounded quantity (the radius) with a bounded one (the angle), while Cartesian form uses two unbounded quantities (the magnitudes of the real and imaginary parts). They mean different things even though they are just different ways of talking about the same thing (naming complex numbers).

    There's also the wrinkle which you already mentioned about the tension between irrational numbers (which are implicated in the magnitude of complex numbers in both directional and radial senses) as magnitudes and fractions as answers to 'how many x go into y' questions. Even the Gaussian integers have this problem (such as 1+i having magnitude 2^(1/2)).

    Actually looking at 'numbers', even in relatively simple cases like these, shows that there's no single sense of magnitude or quantity implicated within them - even if there are formally equivalent representations.

    **the closest approximation to this in the complex plane being dividing a complex number z=x+iy by r=sqrt(x^2+y^2), yielding u=z/r; u has magnitude 1, which is the same magnitude as i - all this says is that u lies on the unit circle and z can be recovered by scaling u by r.
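
    Since the claims above are all computable, here's a quick numeric sketch in plain Python (the values just follow the z=4+pi*i example):

    ```python
    # A quick numeric check of the magnitudes discussed above.
    import cmath, math

    z = 4 + math.pi * 1j       # z = 4 + pi*i

    # 'How many i's are in z?' Division gives pi - 4i, not a count:
    print(z / 1j)              # (3.14159...-4j)

    # Cartesian form: two unbounded magnitude descriptors.
    print(z.real, z.imag)      # 4.0 3.14159...

    # Polar form: an unbounded radius and a bounded angle.
    r, theta = cmath.polar(z)
    print(r, theta)            # 5.0862... 0.66577...

    # The footnoted normalisation: u = z/r sits on the unit circle, |u| = |i| = 1.
    u = z / r
    print(abs(u), abs(1j))     # 1.0 1.0
    ```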
  • Do numbers exist?


    Perhaps this is pedantic, but even in terms of rotations in the complex plane, i does have a couple of associated quantities with its notion of multiplication. It represents an anti-clockwise rotation of 90 degrees and a magnitude of 1 in terms of the size of complex numbers. Even though it doesn't represent the same kind of quantity as scalars, there are two associated magnitudes when it's thought of in terms of rotation and scaling of a point's position in the complex plane. You probably already know this.

    Another good example, which I believe you brought up, is that the integers mod p, where p is prime, form a field under multiplication mod p and addition mod p - the algebraic structure (with 0 removed wherever it's mathematically appropriate) which is most similar to folk intuitions of how numbers should work. But the idea of division as multiplication by the multiplicative inverse mod p is nothing like the idea of division present in intuitions for fractions (rationals) and reals (rationals + irrationals).

    The reals (excluding weird stuff about 0) under multiplication and addition in the usual sense satisfy modern intuitions about what it means to be a number. When those intuitions are formalised, it turns out that there are other structures which aren't commensurate with folk intuitions that nevertheless satisfy the axiomatisation of a field inspired by those folk intuitions.

    Another wrinkle is introduced by the idea of an isomorphism. Say we have the set of numbers {0,1,2,3,4,5,6}, and addition and multiplication are equal to their remainders upon division by 7 (this is the integers mod 7). We have seven elements, and since they satisfy the rules for addition and multiplication and consist of numeric symbols, it would be a stretch not to call the elements of this structure numbers.

    However, relabelling 0=A,1=B,2=C,3=D,4=E,5=F,6=G, the laws of arithmetic and the sense of equality being equality in remainder when divided by 7 produce curious statements like:

    A*x = A where x is in {A,...,G}
    A+x = x where x is in {A,...,G}
    B*F = C*G, which is equivalent to 1*5 = 2*6 = 12 ≡ 5 (mod 7)
    C^-1 = E (which states that 2*4 = 4*2 ≡ 1 (mod 7))

    If presented with the {A,...,G} representation of these numbers, someone who hasn't been trained in mathematics probably will not recognise the manipulations as equivalent to manipulations of integers modulo p. Are they still numbers? Mathematically they're equivalent to numbers.
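
    For anyone who wants to check the relabelled statements, here's a short sketch in plain Python (the labels follow the 0=A,...,6=G assignment above):

    ```python
    # Verify the relabelled mod-7 statements above (0=A, 1=B, ..., 6=G).
    p = 7
    labels = "ABCDEFG"

    def add(a, b): return (a + b) % p
    def mul(a, b): return (a * b) % p
    def inv(a):  # multiplicative inverse mod p (exists for a != 0 since p is prime)
        return next(x for x in range(1, p) if mul(a, x) == 1)

    # A*x = A and A+x = x for every x: A plays the role of 0.
    assert all(mul(0, x) == 0 and add(0, x) == x for x in range(p))

    # B*F = C*G, i.e. 1*5 = 2*6 = 12 ≡ 5 (mod 7).
    assert mul(1, 5) == mul(2, 6) == 5

    # C^-1 = E, i.e. 2*4 = 4*2 ≡ 1 (mod 7).
    assert inv(2) == 4 and mul(2, 4) == 1

    # Relabelled, the arithmetic looks nothing like school arithmetic:
    print(labels[1], "*", labels[5], "=", labels[mul(1, 5)])   # B * F = F
    ```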

    You could do the opposite trick by labelling the symmetries of a square as numbers. Four reflection symmetries, three rotational symmetries... Further tricks by identifying usual group structures (like the integers under addition modulo 7 while ignoring multiplication) as their corresponding symmetric group representations...

    Really all this says is that 'what is a number' and 'do numbers exist' are to some extent independent from the concerns of doing mathematics. What matters is that the math functions as it's set up to.

    Another way of putting it: would a change in the ontological status of numbers change either the truth or the sense of 1+1=2? What about the ontological status of rotation groups: would what you believe about the ontological status of rotations change anything about the idea that if you rotate an object 90 degrees 4 times, you may as well have not rotated it at all? I don't think so, in either case.
  • I am an Ecology


    Been busy, there will be a reply. I've been working a little bit on mathematical details for this 'native generation of parameters' using entropy. I'm hoping you'll find it more interesting than my criticisms.
  • I am an Ecology
    @apokrisis

    I'm not going to respond to anything quantum or differential-geometric unless you think it's essential. Things are involved enough as it is.
  • I am an Ecology
    The possibility of something else having happened. The existence of the oak is a constraint on the existence of other trees, shrubs, weeds, that might have been the case without its shade. Without the oak, those other entropifiers were possible.

    So excuse me for being baffled at your professed bafflements in this discussion. I mean, really?

    I didn't doubt that you understood the 'ecology 101' folklore of how biomass flows and how niches are distributed in the canopy-forest floor trophic network. Why I asked was to see how you used your dictionary of concepts to explain the trophic network in those terms.

    Again, you claim that I'm hand-waving and opaque, but just read the damn words and understand them in a normal fashion.

    So the oak becomes the dominant organism. And as such, it itself can be host to an ecology of species dependent on its existence. Like squirrels and jackdaws that depend on its falling acorns. Or the various specialist pests, and their own specialist parasites, that depend on the sap or tissue. Like all the leaf litter organisms that are adapted to whatever is particular to an annual rain of oak leaves.

    The oak trophic network is the primary school level example. You can pick away at its legitimacy with your pedantry all you like, but pay attention to the context here. This is a forum where even primary school science is a stretch for many. I'm involved in enough academic-strength discussion boards to satisfy any urge for a highly technical discussion. But the prime reason for sticking around here is to practice breaking down some really difficult ideas to the level of easy popularisation.

    I'm not in the business of asking you to describe a simplified trophic network in the usual way it's described and then saying 'aha, it was too simple' - that'd be an empty rhetorical strategy. Again, what I wanted you to do was use your concepts in a way which clarified their meaning in a simplified trophic network. I take it you agree that a generalised theory of entropy has to be able to instantiate to real world examples, otherwise it's a metaphysics divorced from the reality it concerns.

    It's fun, it's professionally useful, I enjoy it. I agree that mostly it fails. But again that seems more a function of context. PF is just that kind of place where there is an irrational hostility to any actual attempt to "tell it right".

    I thought my responses were precisely demands to 'tell it right' from your perspective. This is commensurate with when you say:

    So bear in mind that I use the most simplified descriptions to get across some of the most subtle known ideas. This is not an accident or a sign of stupidity. And an expectation of failure is built in. This is just an anonymous sandbox of no account. My posts don't actually have to pass peer review. I don't have to worry about getting every tiny fact right because there are thousands ready to pounce on faint errors of emphasis as I do in my everyday working life.

    I'm not in the business of playing peer-review level criticism to your ideas, I don't think my comments have been like that.

    So it is fine that you want that more technical discussion. But the details of your concerns don't particularly light my fire. If you are talking about ecologies as dissipative structures, then I'm interested.

    More technical discussion = apo specifies what his terms mean and how they work in the contexts he describes. I think you'll agree that the style of the post I'm currently replying to is quite different from your usual subsumption of a problem phenomenon to your dictionary of concepts.

    For me, diversity just falls out of a higher level understanding of statistical attractors - https://arxiv.org/abs/0906.3507

    It's an interesting paper. Though it doesn't provide any explicit links between biodiversity and systems that internalise the constraints they use. It looks at specific entropy measures for various spaces, then derives maximal entropy distributions subject to constraints. Take the binomial example: it's a discrete distribution with constrained counts. What you get out of the analysis in the paper is that when you assume a partitioning structure with 2 bins, look at sums of Bernoulli trials, and constrain the mean to a constant, the binomial distribution comes out as the maximum entropy one.

    This is a nice link between entropy and the binomial. However, certain configurations of the binomial are themselves entropy maximising (within the binomial family, entropy peaks at p = 1/2) - so there's a qualitative distinction between the entropy maximisation occurring in the space of distributions and the entropy maximisation occurring on the maximum entropy distribution that's picked out. Similarly with the space of distributions: the degrees of freedom in the space of distributions are essentially infinite, the degrees of freedom in terms of applied constraints are 1, and the degrees of freedom within the binomial formula are also 1 since the sum is constrained.
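
    A little sketch of that second, within-family maximisation (plain Python; n = 10 is an arbitrary choice):

    ```python
    # Shannon entropy of Binomial(n, p) as p varies: a maximisation *on* the
    # picked-out distribution, distinct from maximisation over the space.
    from math import comb, log

    def binom_entropy(n, p):
        h = 0.0
        for k in range(n + 1):
            pk = comb(n, k) * p**k * (1 - p)**(n - k)
            if pk > 0:
                h -= pk * log(pk)
        return h

    n = 10
    for p in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(p, round(binom_entropy(n, p), 4))
    # Entropy peaks at p = 0.5 and falls off symmetrically on either side.
    ```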

    This goes some way to addressing the 'transduction of entropy'. Through a single calculation you end up with a relation between two different entropy concepts and three different degrees of freedom concepts. The caveat is that the application of the Lagrange constraints narrows the application of the results to pre-specified parameter spaces, so an initial justification that a system cares about those constraints (and cares about entropy maximisation) would have to be provided.

    While actually measuring network flows is a vain dream from a metaphysical viewpoint. Of course, we might well achieve pragmatic approximations - enough for some ecological scientist to file an environmental report that ticks the legal requirement on some planning consent. But my interest is in the metaphysical arguments over why ecology is one of the "dismal sciences" - not as dismal as economics or political science, but plagued by the same inflated claims of mathematical exactness.

    Inflated claims of mathematical exactness are a problem across any science whose subject matter is difficult in an epistemic sense. The empirical humanities, including medicine, are actually waking up to this fact at the minute, see the replication crisis.

    OK. Degrees of freedom is a tricky concept as it just is abstract and ambiguous. However I did try to define it metaphysically for you. As usual, you just ignore my explanations and plough on.

    But anyway, the standard mechanical definition is that it is the number of independent parameters that define a (mechanical) configuration. So it is a count of the number of possibilities for an action in a direction. A zero-d particle in 3-space obviously has its three orthogonal or independent translational degrees of freedom, and three rotational ones. There are six directions of symmetry that could be considered energetically broken. The state of the particle can be completely specified by a constraining measurement that places it to a position in this coordinate system.

    So how do degrees of freedom relate to Shannon or Gibbs entropy, let alone exergy or non-equilibrium structure? The mechanical view just treats them as absolute boundary conditions. They are the fixed furniture of any further play of energetics or probabilities.

    I'm not sure what you mean by boundary conditions, but I'm guessing it's something like 'background assumptions required for the formation of a measure'.

    The parameters may as well be the work of the hand of God from the mechanical point of view.

    I appreciate that you are attempting to find a sense of 'becoming relevant' of parameters, and I think the paper you linked about maximum entropy distributions is a step in the right direction. But I don't think it's appropriate to treat parameters as 'God given', as you put it.

    If you want to mathematise something, it'll have a bunch of assumptions of irrelevance so that it fits on a page. EG, when you look at something solely in terms of a binomial distribution, you care about counts of stuff - not how the counts became relevant. A phenomenological description of what's happening in a system is always useful and should be a mandatory preparatory measure for a couple of reasons. Maybe you'll see some dialectical correspondence in this:

    (1) It expresses the model building intuitions and the purported significance of included terms and the irrelevance of excluded ones.
    (2) It allows the relation of the mathematisation to the imaginative background of the phenomenology that derived it.

    So I say degrees of freedom are emergent from the development of global constraints. And to allow that, you need the further ontic category or distinction of the vague~crisp. In the beginning, there is Peircean vagueness, firstness or indeterminism. Then ontic structure emerges as a way to dissipate ... vagueness. (So beyond the mechanical notion of entropy dissipation, I am edging towards an organic model of vagueness dissipation - ie: pansemiosis, a way off the chart speculative venture of course. :) )

    Anyhow, fill in the blanks yourself. When I talk of a rain of degrees of freedom, as I clarified previously, I'm talking of the exergy that other entropy degraders can learn how to mine in all the material that the oak so heedlessly discards or can afford to be diverted.

    The oak needs to produce sap for its own reasons. That highly exergetic matter - a concentrated goodness - then can act as a steep entropy gradient for any critters nimble enough to colonise it. Likewise, the oak produces many more acorns than required to replicate, it drops its leaves every year, it sheds the occasional limb due to inevitable accidents. It rains various forms of concentrated goodness on the fauna and flora below.

    Instantiating it:

    Oak community has X number of species dependent solely on its existence to exist.
    Oak community has Y number of species whose numbers are reduced solely by the oak community's presence, relative to what would happen without it.

    These are degrees of freedom in the first sense.

    Species in X have a network of flows. Oaks removed, X goes to 0.
    Species in Y have networks of flows. Oaks removed, Y probabilistically increases.

    Flows:

    Complete degradation of the network consisting of X; inputs to X are reassigned to other networks.
    Total throughput in Y increases if Y has species which were constrained by species in X - since the input to Y increases if it is a function of the input to X.

    Total throughput - a sum-like variable - is assumed constant so long as the trophic network is stable or permits immediate recolonisation of destroyed niches with the same efficiency, and so long as concentration of flows does not degrade the ecosystem - decreasing degrees of freedom in the first sense.

    If the energy freed by removing X's effects is distributed evenly among functional roles, degrees of freedom in the second sense increase a lot. If it is instead concentrated, degrees of freedom remain roughly constant. Degrees of freedom in the second sense - something like the exponential of the flow entropy.

    Measurement - variables
    X and Y can be identified without error, but inclusion in study can miss some out.
    Total throughput - two measurements required to detect change, likely noisy, nodes in study can miss some out.

    Expected behaviour -
    Entropy maximisation - requires that distributional changes resulting from X's removal increase entropy in the functional sense. Occurs through a function of total throughput and the proportions of it obtained by new niches.
    Generalised entropy maximisation - has occurred if distributions in the pre-removal of X era are shifted closer to derived maximal entropy distribution with entropy maximising parameters.
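
    A toy numerical version of the above, with invented flow values (using the exponential of the flow entropy - the 'effective number of flows' - as the second sense of degrees of freedom):

    ```python
    # Toy sketch of the instantiation above; all flow values are invented.
    from math import exp, log

    def flow_entropy(flows):
        # Shannon entropy of the flow proportions; exp(H) is the
        # 'effective number of flows' (degrees of freedom, second sense).
        total = sum(flows)
        return -sum((f / total) * log(f / total) for f in flows if f > 0)

    # Flows into niches: the first three belong to network X, the rest to Y.
    with_oak = [5.0, 3.0, 2.0, 4.0, 4.0, 2.0]

    # Oak removed, X's flows reassigned to Y under two scenarios:
    spread_evenly = [0, 0, 0, 4.0 + 10/3, 4.0 + 10/3, 2.0 + 10/3]
    concentrated  = [0, 0, 0, 14.0, 4.0, 2.0]

    for name, flows in [("with oak", with_oak),
                        ("even redistribution", spread_evenly),
                        ("concentrated", concentrated)]:
        h = flow_entropy(flows)
        print(f"{name}: H = {h:.3f}, effective flows = {exp(h):.2f}")
    # Even redistribution preserves more effective flows than concentration.
    ```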

    Does this sound like a transcription of the canopy-floor ecosystem into your abstract register?

    If so: there's rather a lot of counterfactuals there. Especially to assume without evidence.

    Anyway, when I talk about degrees of freedom, my own interests are always at the back of my mind. I am having to balance the everyday mechanical usage with the more liberal organic sense that I also want to convey. I agree this is likely confusing. But hey, it's only the PF sandbox. No-one else takes actual metaphysics seriously.

    So an ontology of constraints - like for instance the many "flow network" approaches of loop quantum gravity - says that constraints encounter their own limits. Freedoms (like the Newtonian inertias) are irreducible because constraints can make reality only so simple - or only so mechanically and atomistically determined. This is in fact a theorem of network theory. All more complicated networks can be reduced to a 3-connection, but no simpler.

    So in the background of my organic metaphysics is this critical fact. Reality hovers just above nothingness with an irreducible 3D structure that represents the point where constraints can achieve no further constraint and so absolute freedoms then emerge. This is nature's most general principle. Yes, we might then cash it out with all kinds of more specific "entropy" models. But forgive me if I have little interest in the many piffling applications. My eyes are focused on the deep metaphysical generality. Why settle for anything less?

    Looking at how your background conceptions apply to the real world is an excellent way of revealing conceptual and practical problems in your metaphysics. It isn't settling for less.

    Surely by now you can work out that a degree of freedom is just the claim to be able to measure an action with a direction that is of some theoretical interest. The generality is the metaphysical claim to be able to count "something" that is a definite and atomistic action with a direction in terms of some measurement context. We then have a variety of such contexts that seem to have enough of your "validity" to be grouped under notions like "work", or "disorder", or "uncertainty".

    So "degree of freedom" is a placeholder for all atomistic measurements. I employ it to point to the very fact that this epistemic claim is being made - that the world can be measured with sufficient exactness (an exactness that can only be the case if bolstered by an equally presumptuous use of the principle of indifference).

    Hurrah, it was a placeholder. I understood what you meant!

    Then degree of freedom, in the context of ecological accounts of nature, does get particularised in its various ways. Some somewhat deluded folk might treat species counts or other superficialities as "fundamental" things to measure. But even when founding ecology more securely in a thermodynamical science, the acts of measurement that "degrees of freedoms" represent could be metaphysically understood as talking about notions of work, of disorder, of uncertainty. Ordinary language descriptions that suddenly make these different metrics seem much less formally related perhaps.

    Could you comment on my attempt at instantiating your concepts to the canopy-floor ecosystem example?

    That is the reason I also seek to bring in semiosis to fix the situation. You complain I always assimilate every discussion to semiotics. But that is just because it is the metaphysical answer to everything. It is the totalising discourse. Get used to it.

    Why do you think semiotics is the totalising discourse? I'm quite suspicious of the claim that there are genuine totalising discourses; attempts to reduce reality to one type of thing fail for precisely the same reasons systems science became so popular (perhaps with some irony resulting from the view of everything as a system).

    You keep demanding that I cash out concepts in your deeply entrenched notions of reality. I keep replying that it is entrenched notions of reality that I seek to expose. We really are at odds. But then look around. This is a philosophy forum. Or a "philosophy" forum at least. Or a philosophy sandbox even. What it ain't is a peer review biometrics journal.

    What kind of description would satisfy your desire for a better 'ontic development' of my presumptions?

    You keep complaining that I'm attacking your concepts solely because they're not biometrically sound. This is the same kind of thing as saying that I have a mechanist's vantage point on ecology. The reason I'm using pre-developed entropy measures is to highlight the ambiguity in your presentations of the concept. The purpose was to get you to describe how stuff worked in your view without the analogising.

    So I can add to my apokrisis dictionary: what's a vague-crisp distinction when it's at home? And what's the epistemic cut?
  • I am an Ecology
    @apokrisis

    Will reply more later, it's late and I'm hooked into some wires.

    Now back to your tedious demand that I explain ecology 101 trophic networks with sufficient technical precision to be the exact kind of description that you would choose to use - one that would pass peer review in your line of work.

    I didn't want you to engage in some kind of organic/mechanical translation exercise, I wanted you to give some specifics of how your concepts act in an ecosystem (or a classical representation of one). Which you did, along with giving an interesting reference. I wanted you to be technically precise with your use of terms - good that you did this.

    I'm sure you will have a bunch of nit-picking pedantry welling up inside of you so I will leave off there.

    I certainly have more sympathy for your view when you do attempt to cash it out in the example. I'm not just looking for why you're wrong, I'm looking for how you could be right.
  • Level III Multiverse again.


    Yes. If you're going to say a result is established in physics, and is obvious, it should come with a reference to either the paper or a popular science article that establishes it, or a description of the text which suggests it.*

    It isn't at all obvious that @andrewk and @fishfry's comments regarding measure zero sets have been addressed, but I am taking it on faith that you know the derivation or where to find it. The alternative is that you don't know the derivation or the behaviour, or a reference to either, and your incredulity at our questions is unfounded.

    *if requested.
  • Level III Multiverse again.


    There are measure zero sets in the Gaussian distribution (and in Gaussian random fields) since their (finite-dimensional) distributions are absolutely continuous with respect to the Lebesgue measure, tho. :(
  • Level III Multiverse again.


    An ergodic random field whose harmonic oscillator coefficients span {1,2,3,4,5,6} with a Gaussian distribution centred on 3.5. The field is in a state of superposition and decoheres for infinite time. What is the probability that the decoherence branch with initial condition "4" will be encountered in the multiverse?

    Can you provide a reference to the derivation?
  • Cut the crap already
    If you look at the way everyone is responding in the thread, the majority are hurt, the majority are saying hurtful things. If you look at the forum in general, you'll be able to find bad behaviour and general disagreeableness from every regular poster including the mods. Dismissiveness, uncharitable interpretation, selective engagement, rudeness, aspersions on personal character based on philosophical predilection. Almost everything has the potential to get heated. Everyone has the potential to get pissed.

    There are a few things that are clear:
    (1) Mods probably have to have a high degree of tolerance for heat in debates.
    (1a) mods should also be allowed to heat up.
    (2) Mods should curate posts that are offensive for little to no reason, unjustifiably charged or poorly constructed.
    (2a) mods should curate each other's posts for the same reasons and reprimand/correct each other, talk about disagreements.

    Flames and pointedness in discussion, ridicule - these are fair game. Most of the good discussions on here are filled with this kind of thing (eg me and apo in SLX's recent thread, I'm no different, and I started it!). Perhaps it's a shame, but it is the reality.

    If a mod acts to censure or censor opinions which aren't their own unjustifiably, in a consistent pattern over time, this should be brought to the attention of the rest of the staff who can make an informed judgement. Nevertheless, personal standards for posts and etiquette will differ from person to person - so will whether they decide to delete a thread.

    If there is such a problem, a consistent pattern of moderation bias with Timeline (or Hanover, but Hanover's been a mod much longer), I'm sure it will leave some traces on the website. I'm sure it would be noticed and discussed among the staff. It would eventually be found out and Timeline would have their position revoked. If it isn't found out, it's a mark on the forum, and we would expect to lose whatever posters know of it and care.

    Give the new mod time. If it turns out their behaviour as a moderator will be unduly influenced by either their personal opinions or their philosophical ones, you'll turn out to be right in the end.
  • I am an Ecology
    This is just for interest. A thing I noticed about the ecosystem flow networks is that when you analyse them in terms of flow proportions and add the nodes for the 'system source' and 'energy loss' - the energy loss node is accessible from every other node but can't be escaped. This makes the energy loss node an absorbing state. When energy is transferred around the network over time, it's very likely that this induces a dissolution of the flow network; the flows concentrate on the energy sink. This is one possible configuration where the ascendency tends to 1.

    One way of interpreting this is that the flow network in the ecosystem is metastable (figuratively). I think it's quite unlikely that there's another attractor in the flow network when there's a single absorbing state. Another is that if we looked at the evolution of the flow network over time (internal time to the network would be the application of its transition matrix to an initial state vector, external time being a temporal sequence of flow networks), the sink node would have to be re-integrated (have an outflow) to the remainder of the flow network so that the dynamics of the flow network don't dissolve - more generally, it has to become a subgraph that acts as both a source and a sink to the remainder of the network. However, this new ecosystem would also have the universal sink as an absorbing state... It's probably a stretch, but this has a nice interpretation in terms of the diversification of losses into inputs to new nodes - new niches, new connections, new flows. A further stretch, but one I quite like, is that the fact of the non-dissolution of a flow network necessitates the diversification of its losses. More prosaically, it will be forced to recycle in new ways.

    Can draw diagrams if someone is interested.
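
    In lieu of diagrams, here's a toy sketch of the absorbing-state point (all transition values invented):

    ```python
    # Toy sketch: a 3-node flow network plus a universal 'energy loss' sink.
    # Rows are outflow proportions per node (a row-stochastic matrix).
    import numpy as np

    P = np.array([
        [0.0, 0.6, 0.2, 0.2],   # producer
        [0.1, 0.0, 0.6, 0.3],   # consumer
        [0.3, 0.3, 0.0, 0.4],   # decomposer
        [0.0, 0.0, 0.0, 1.0],   # energy loss: reachable from all, inescapable
    ])

    state = np.array([1.0, 0.0, 0.0, 0.0])   # a unit of energy at the producer
    for t in (1, 5, 20, 100):
        print(t, np.round(state @ np.linalg.matrix_power(P, t), 4))
    # The mass piles up on the sink: the network 'dissolves' unless the sink
    # is re-integrated as a source (given an outflow back into the network).
    ```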
  • I am an Ecology


    This is an example of our different interests. You presume parameter spaces can be glued together. I'm concerned with the emergent nature of parameters themselves. You have your idea of how to make workable models. I'm interested in the metaphysics used to justify the basis of the model.

    So it just gets tiresome when your criticism amounts to the fact I'm not bothered about the details of model building for various applications. I've already said my focus is on paradigm shifts within modelling. And core is the difference between mechanical and organic, or reductionist and holist, understandings of causality.

    I think you're painting my objections as mechanistic and reductionist because I haven't adequately communicated what my objections to your use of 'entropy in general' are. This boils down to two ideas: one is that entropy is a parametrised concept, and the other is that you provide little information on how different senses of entropy are at work in your ideas of it.

    We can discard the discussion of thematisation errors, since it doesn't seem to interest you. We also have more than enough to talk about.

    My motivations in pointing out different senses of entropy aren't supposed to be aimed at demonstrating either the impossibility or futility of generating a generalised conception of entropy. Rather, they are pointing out difficulties in generalisation which I believe are relevant. Another purpose of using them as examples, as well as ascendency, is to show that there are different senses of entropy as they apply to different systems. This isn't to say there aren't general laws or a sufficient description of translation of one type of entropy to another, it's to say that there is a need to make the hows of entropy transfer between constitutive systems in a complex one a part of your accounts.

    Let's take a tangent on the idea of transduction. Transduction is the change of one type of energy to another of a qualitatively different but quantitatively related sort. Hold a ball out at arm's length and suspend it against gravity. The position the ball is kept at relative to the floor imbues it with potential energy. When the ball is released, the potential energy is converted into kinetic energy as the ball falls. The description of the dynamics in terms of energies is equivalent to the description in terms of velocities and accelerations. There's also an equation that links potential and kinetic energy for the motion of the ball.
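
    (For reference, that equation is just conservation of energy: m*g*h = (1/2)*m*v^2 for a ball of mass m dropped from height h, so it reaches the floor at speed v = (2gh)^(1/2) - a quantitative link between two qualitatively different forms of energy.)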

    In an ecosystem, it's unfair to expect that there will be a complete description of its energetic and entropic dynamics. I've never had the mechanical hope that they can be specified like this. Regardless, when we talk about the transfer of energy between one functional stratum of an ecosystem and another, a description of the transduction of energy is the right kind of account. This could form part of a series of equations - which are likely to be too precise to capture all relevant stuff - or part of a historical description of the procedures constituting the transduction - which is likely to have an insufficient understanding of the specifics.

    This is probably easier to understand in the context of economics. To a first approximation, there are two kinds of empirical analysis in economics. One is based on the mathematical analysis of trends and betting strategies, the other is based on a historical description of what happened and can utilise mathematics in a less central role. Future prospects are analysed in either the predictive distribution of a mathematical model or the qualitative necessities or high probabilities of event occurrence (EG, the tendency of the rate of profit to fall in Marx, or Say's Law).

    James Simons and Marx are good examples of each school.

    I'll take an example of how you describe the dynamics of entropy and complexity in a system.

    Canopy succession is an example. Once a mighty oak has grown to fill a gap, it shades out the competition. So possibilities get removed. The mighty oak then itself becomes a stable context for a host of smaller stable niches. The crumbs off its feeding table are a rain of degrees of freedom that can be spent by the fleas that live on the fleas.

    'So possibilities get removed' - how? Which possibilities are closed?
    'The crumbs off its feeding table are a rain of degrees of freedom that can be spent by the fleas that live on the fleas' - the canopy opens up, what degrees of freedom does this create? How do those degrees of freedom get turned into degrees of freedom for certain organisms? Which organisms? What properties do those organisms have? How do the 'degrees of freedom' in the 'crumbs' relate to the 'smaller stable niches', in what manner do they 'rain'? In what manner are they 'spent'?

    When I actually try and analyse your descriptions of how things work, there are so many ambiguities - this is an essential part of your posting style, in which every procedure of 'entropy transduction' is named but not specified. Your key terms: constraint, entropy, degrees of freedom, possibilities, information, symmetry breaking, dissipation; these only obtain their sense in your posts through the background knowledge of their analogies in different contexts. When it actually comes to describing the hows - procedural system dynamics - the approach you take is holistic but empty of content.

    Over the course of this discussion, I've probably read 8 or so hours' worth of papers and I still don't have a clue how you actually think things work. When you cite things, they don't actually flesh out the procedural descriptions in your posts. You want to be giving an account of how entropy flows over systems and how it changes each component and introduces new components. The general structure of your posts in this regard is to substitute in a concept related to entropy in an unclear manner and use the web of associations between the varied concepts of entropy and their contexts to flesh out the rest.

    The purpose construct validity has in my argument is to highlight this fact. The way you use the key terms in your posts isn't cashed out by the background references. You analogise too much and specify too little.

    I like two key points. Natural systems are irreducibly complex because they feed off their own accidents. The traditional mechanical view wants to separate the formal laws constraining systems from the material accidents composing systems. But nature includes the accidental or spontaneous in the very thing of forming its laws, or constraining regularities.

    Great! Makes sense! Parametrisations and contextualised procedural descriptions are ways of studying the behaviour of ecosystems. Mathematical models of them aren't complete pictures either; they're supposed to elucidate certain behaviours in parts of ecosystem behaviour. If someone actually believed a particular mathematical model was a perfect description of all of the ecosystem dynamics it was applied to, they'd almost certainly be wrong.

    Regardless, the relationship of parameters in ecosystems and how subsystems become parametrised isn't something you can offload to the literature through your analogical web. Talking about these things is what it means to talk about the dynamics of ecosystems. You neglected to mention that one of the subsections of the paper you quoted from Ulanowicz is titled:

    The Return of Law to Ecology?

    which then goes on to analyse the dynamical properties of ascendency in a partly mathematical and partly phenomenological (in the scientific sense) manner.

    So good luck to any science based on a mechanistic metaphysics that presumes accidents are simply uncontrolled exceptions that can be hidden behind a principle of indifference. Yet also the universe does have lawful regularity. It incorporates its accidents into the habits it then forms.

    Again yes, I broadly agree with this. Ulanowicz' use of networks - not just quantities - to analyse the behaviour of ecosystems is exactly the kind of mathematical analysis that makes sense in this context. Graphs and networks are highly generalised and already have notions of 'flow', 'resistance' and 'variation' so long as their edges are weighted. The specific manifestations of the graphs in applied mathematical modelling of ecosystems are pretty bad at predicting their future except sometimes in the short term. So, if you want to think of ecosystems as flow networks, there's a lot of abstract generality there to exploit.

    So, we don't have any irreconcilable methodological disagreements. I don't think this is a matter of mechanism vs organicism or reductionism vs holism. The particular beef I have is that you provide poetic descriptions of systems behaviour without fleshing out the details, and this destroys the credibility of the accounts.

    So, could you please provide a procedural re-description of:

    Canopy succession is an example. Once a mighty oak has grown to fill a gap, it shades out the competition. So possibilities get removed. The mighty oak then itself becomes a stable context for a host of smaller stable niches. The crumbs off its feeding table are a rain of degrees of freedom that can be spent by the fleas that live on the fleas.

    In a manner that answers the questions I have:

    Which possibilities are closed?
    What degrees of freedom does this create? How does it create rather than destroy them? How do those degrees of freedom get turned into degrees of freedom for certain organisms? Which organisms? What properties do those recipient organisms have? How do the 'degrees of freedom' in the 'crumbs' relate to the 'smaller stable niches', in what manner do they 'rain'? In what manner are they 'spent'? How does one set of degrees of freedom in the canopy become externalised as a potential for the ecosystem by its deconstruction and then re-internalised in terms of a flow diversification?

    Being able to translate out of the abstract register to a specific example is essential to ensure the validity of your concept applications. When I tried with various entropy notions I couldn't; I still can't in terms of ascendency.
  • I am an Ecology


    You're getting more interesting as I press you more. Thanks for the more detailed reply.
  • I am an Ecology
    A last thing, looking at ascendency as a measure of a dissipative system.

    Ulanowicz' use of 'autocatalysis' can signify a growth or amplification of some aspect of the system. This, analogically, relates to dissipative systems. It doesn't necessarily mean the measure will have the 'ampliative' property in terms of necessitating a growing measure over time (as was seen empirically). Nor will the measure necessarily decrease over time (again, empirically, that U-shape is weird). The imaginative background of the measure suggests that it will behave in that manner - however it doesn't.

    There might be an avenue of studying the system as dissipative in terms of how a unit of energy is distributed around the network. It would be possible to define a probability distribution on each node for the probability that a given quantum of energy leaves that node and goes to another, and then add a node for 'wasted energy' which is connected as a sink to all other nodes. This would probably introduce a directed Markovian graph in some sense dual to the system. The iterations of this Markovian graph may have a steady state (equilibrium distribution), and the distance of a particular graph from that steady state could be studied.

    But there isn't a theory of dissipation built into the ascendency formula from the get go.
  • I am an Ecology


    It is pretty clear that I’m talking about dissipative structure. And so is ascendency. So is Salthe, Kay and Schneider, Bejan and the many other cites I offered. But your critique amounts to me failing to relate dissipative structure theory to some mundane measure of species diversity.

    Ulanowicz' ascendency can be applied to any ecosystem network parametrised in terms of flow exchange, it need not be applied to a dissipative network. To see this, you can do two things: (1) look at the formula and see there is no temporal component, and (2) do some reasoning about the behaviour of the ascendency. For a definition of ascendency for reference - it is the average mutual information of flows contributing to and coming from a node, scaled by the total ecosystem throughput.
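
    Spelled out for a toy flow matrix (a sketch of that definition; the flow values are invented, and the log base is a convention):

    ```python
    # Ascendency A = TST * AMI for a toy flow matrix T, where T[i][j] is the
    # flow from compartment i to compartment j (values invented).
    import numpy as np

    T = np.array([
        [0.0, 8.0, 2.0],
        [1.0, 0.0, 6.0],
        [3.0, 1.0, 0.0],
    ])

    TST = T.sum()              # total system throughput
    out_i = T.sum(axis=1)      # total outflow of each node
    in_j = T.sum(axis=0)       # total inflow of each node

    # Average mutual information of the flow structure.
    AMI = sum((T[i, j] / TST) * np.log2(T[i, j] * TST / (out_i[i] * in_j[j]))
              for i in range(3) for j in range(3) if T[i, j] > 0)

    print(f"TST = {TST}, AMI = {AMI:.4f} bits, ascendency = {TST * AMI:.4f}")
    # Note there's no time variable anywhere: the index is atemporal, as said.
    ```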

    It isn't actually so clear cut that ascendency is cashed out in terms of dissipative systems. This would occur if the ecosystem network had an outflow, so that the total throughput decreased over time. Or alternatively, if the ecosystem had an inflow so the total throughput increased over time. Whether ascendency corresponds to or can detect the dissipation of energy of an ecosystem reliably or its inflow depends not just on the total throughput - which is increasing or decreasing over time in dissipative systems - but upon how changes in total throughput manifest (or don't manifest) in changes of flow concentration or equalisation.**

    The argument that ascendency has a natural interpretation in terms of dissipation doesn't just deal with the definition of ascendency - Ulanowicz doesn't actually describe it as a measure of a dissipative structure - he describes it as a measure of 'growth of an autocatalytic system'. He has intuitions that 'more developed' ecosystems will have a higher ascendency. He explicitly constructed it as an atemporal index to allow the comparison of ecosystems over time or other indices. So the behaviour of ascendency over time (or some other index) is the thing which may or may not reflect whatever assumptions you have about dissipative systems more generally.

    That it generalises to other indices which represent gradients in other quantities is put forward in the paper he defines it in:

    The process of eutrophication for example is characterised by a rise in ascendency which is due to an overt increase in the activity of the system which more than compensates for its concomitant decrease in its developmental status. — Ulanowicz

    So what Ulanowicz said - the prediction of rising ascendency on an increasing eutrophication gradient - is empirically falsifiable since it depicts a trend in the fuzzy concept of 'ecosystem development' using a trend in an entropic measure.

    Without lingering too long on the fact that that isn't actually correct: there's an observed upside-down U shape in ascendency (increase then decrease) over a eutrophication gradient, though since the paper detailing that doesn't do an error analysis it's still up for debate - he has to at least engage with the relative strengths of the terms in the formula. He does.

    And apparently none of my criticisms are relevant since my concerns are merely 'epistemic' or 'reductive'. Really? When it turns out that your equivocations aren't sound and you gloss over too much, it's fine because I don't care about the 'ontic' status of ecosystems? I think I care about that quite a lot. I've actually highlighted different notions of complexity relevant to ecosystems, and presented and analysed the majority of the terms you've used in your posts. Furthermore, I've spelled out what your equivocations imply for the understanding of ecosystems, then how this hand-waving disables productive inquiry regarding dynamical systems and informational relations. And when possible, I've looked at the empirical results relating to the infodynamics of ecosystems. Doesn't that sound like sound ontic inquiry to you?

    In contrast: you usually forgo procedural descriptions, which are the glue that binds the interrelation of composite systems. You should be detailing the relationships between different subsystems in your posts procedurally, not metaphorically. You don't do this but pretend to; you cite formulas and a wealth of background literature without ever cashing out what you say in their terms. You sample bites from them and imply everything will go your way. Even now, because the single hole in my responses to you is that I haven't explicitly engaged with any dissipative systems material, you focus on that as if everything I've said wasn't relevant for another reason. So:

    we have to take on trust that you are being faithful in using and interpreting the measures. We have to take it on trust that your descriptions elucidate the relational character of different systems in a manner consistent with your references. And we have to take it on trust that your interpretations of your references are well informed and valid.

    I don't think that trust is vindicated any more.
  • I am an Ecology


    So "disorder" is just an application of the principle of indifference. A messy system is one in which the details don't matter. Gone to equilibrium, the system will be generic or typical.

    Except no. The principle of indifference is an idea of equidistribution over a finite set of states. A die is the model case of the principle of indifference. Roll a 1 - same probability as a 6. Even with a charitable interpretation of this, it destroys any idea of there being degrees of disorder. Even Boltzmann entropy, which explicitly uses equiprobability of microstates within a macrostate, still constrains the application to a specific macrostate, not a generalised definition.

    The principle of indifference cannot be extended as equiprobability to countable or continuous state spaces - this is because a uniform distribution cannot exist on infinite sets of outcomes. To see this, imagine a sphere. Imagine a particle trapped in it, wandering about in a random walk. The probability that it occupies a given volume within the sphere is proportional to that volume - the proportion of the set volume purported to contain the particle to the sphere's total volume. If you instead want an equal probability on every possible point in the sphere, you end up with something that isn't even a probability density (roughly, equal probabilities on the points either sum to 0 or to infinity, never to 1). Specific configurations having 0 probability is consistent with the volumetric idea, however.

    Regardless, ignoring the elisions and providing lots of charity, this constrains entropy to maximised entropy and destroys relative degrees of disorder and order. It also doesn't work for entropy in continuous processes - only for continuous processes which have mappings to discrete outcome spaces. EG, think of a partition of the sphere above, each subset with a probability proportional to its size. Their sizes sum to the volume since it's a partition - but it's no longer a probability distribution indexed to the sphere, it's indirectly related through the partition. The limiting process as set volume tends to 0 doesn't produce a density.

    I doubt you would want this in 'modelling' applications, as it's well known that uniform distributions aren't necessarily entropy maximisers. Neither are they necessarily heavy-tailed distributions, which you usually imply as playing a pivotal role in the emergence of entropy over stratified hierarchical systems. They, not surprisingly, depend on the parametrisation and possibly a partitioning procedure of the system in question.
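
    A sketch of that last point (plain Python): on outcomes {1,...,6} with a constrained mean, the maximum entropy distribution is an exponentially tilted one, and it is only uniform when the constrained mean happens to be 3.5.

    ```python
    # Maximum entropy on {1,...,6} subject to a mean constraint: the solution
    # is a Gibbs distribution p_k ∝ exp(lam*k); lam = 0 (uniform) only if mean = 3.5.
    from math import exp

    KS = range(1, 7)

    def gibbs(lam):
        w = [exp(lam * k) for k in KS]
        z = sum(w)
        return [wi / z for wi in w]

    def mean(p):
        return sum(k * pk for k, pk in zip(KS, p))

    def solve_lam(target, lo=-5.0, hi=5.0):
        for _ in range(100):          # bisection; mean is increasing in lam
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if mean(gibbs(mid)) < target else (lo, mid)
        return (lo + hi) / 2

    for mu in (3.5, 4.5):
        print(mu, [round(x, 3) for x in gibbs(solve_lam(mu))])
    # mean 3.5 -> all 1/6 (uniform); mean 4.5 -> non-uniform, tilted toward 6.
    ```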
  • I am an Ecology


    I was typing something in anticipation of this before you responded, funnily enough. I have ontic concerns too. This is why I'm looking to add things to my reductionist utility belt.

    You would do well to pay more attention to your procedural descriptions - looking at how entropy transfers between 'levels' of organisation. I'll give an example:

    (1) Describing the dynamical system of ATP metabolism in terms of inputs and outputs.
    (2) Defining an entropic measure, or necessary features of an entropic measure, relevant to this metabolism.
    (3) Giving a relational mechanism (not just a name for it) between this metabolism and the energetic dynamics ('metabolism') of the system which uses ATP metabolism as a constituent structure. This could perhaps be achieved through averaging energy consumptions of cells and what energy sources give the fuel for the ATP metabolism - a link between biomass and derived energies.
    (4) Give a relationship between the entropic measure in (2) and the energetic measure in (3). Like the function of analogising absence of entropy with exergy.
    (5) Looking at the distribution and behaviour of derived energies on the 'top level' of analysis, also including an instantiation of the relational mechanism (maybe something like making the energy flow to one organism or its population proportional to the conversion of biomass to ATP).
    (6) Derive an entropy measure, or characterise one, relevant to (5). Relate it to the one in (4) in such a way that the product of (3) can be parametrised in accordance with it.

    That you don't do this can be related to the error of thematisation detailed above. The specific dynamics of the 'given freedom' - the behaviour of flows within the ecosystem - is glossed over; there aren't any procedural specifications from the metabolism of ATP to the 'metabolism' of flows in an ecosystem. And what is glossed over would be an excellent contribution to the account of the 'modelling relation', as you call it, between the two, and would be a very productive idea in terms of generalised entropy. This is the kind of work required in generalisation, not the automatic conversion-by-handwaving from entropy in one dynamical system to another of different parametrisation and scope. I think you usually handwave this by calling it 'coupling', too (forgetting that coupled systems have shared parameter spaces). Lo and behold, when you put a bit of work in, you can see literal coupling where your figurative sense of coupling was implicated, and it looks like there's some way to take the intersection of parameter spaces such that each individual system's parameter space has a non-empty intersection with enough of the rest to make a connected network of flows.

    How is this for a compromise: you keep using your terms the way you do but put more effort into relations and procedural dynamics between different ontic energy/entropy regimes and how they describe systems.

    Productive dynamical inquiry = problematising change in relations and relations of change.
  • I am an Ecology


    What else do you expect if you take the attitude that we are free to construct metrics which are valid in terms of our own particular interests?

    This is a lot of flip-flopping Apo. At the start of the thread you advised me to read about various entropy measures for context. Especially ascendency. You must believe there's some good stuff in the entropic measures, that they provide a link between the 'generalised' entropy that you want to speak about and particular contexts. Now when I point out that their aggregation would have poor construct validity, they're no longer serving as instantiations of 'generalised' entropy and caring about their behaviour is too concerned with irrelevant detail.

    You also wrote a description of entropy and work in ATP production and digestion, so you do care about instantiations to contexts. But apparently you don't care about the specifics insofar as they resist easy unification into a single sense of entropy. Salthe is quite the opposite, he goes from the specific to the general then back again. Entropies are contextualised, and their parametrisations are given some respect:

    A point of confusion between H and S should be dealt with here. That is that one must keep in mind the scalar level where the H is imputed to be located. So, if we refer to the biological diversity of an ecosystem as entropic, with an H of a certain value, we are not dealing with the fact that the units used to calculate the entropy— populations of organisms—are themselves quite negentropic when viewed from a different level and/or perspective. More classically, the molecules of a gas dispersing as described by Boltzmann are themselves quite orderly, and that order is not supposed to break down as the collection moves toward equilibrium (of course, this might occur as well in some kinds of more interesting systems). — Salthe

    Moreover, the distinctions between different systems are interpreted in terms of differences between their individual energetic behaviour. Exceptions are noted. Salthe's method is, essentially, attempting to form a conceptual space whereby entropy can be discussed in general, by seeing what is general within the specific contexts entropy arises in. He then takes the generalisations and applies them to specific contexts to re-present and contrast the nascent sense of entropy in general with the specifics. This is achieved by the description of entropy in a highly abstract register which is constituted from a web of analogies, citations and examples (but little mathematics, which is unfortunate).

    If this sounds familiar to anyone, essentially Salthe is a phenomenologist of entropy. I have no problem with this. What I do have a problem with is taking the nascent sense of generalised entropy and treating all phenomena, especially systems, as addenda to the nascent sense. In Heideggerian terms, this is an error in thematisation: Salthe gets to speak like this since he thematises entropy and writes about entropy. You don't get to speak like this about stuff in general unless you subordinate all inquiry to the inquiry of entropy. The disclosive character of Salthe's discourse has the thematisation of entropy as a necessary constituent. He is an essayist on entropy.

    You turn metaphysically laden discussions (read: almost all discussions) into discussions about your metaphysical system. The space of problems you engage with is not thematised in terms of the problems - apparently intentionally on your part. This is a major methodological error; it occludes the nature of the problem and questions related to it in a fog of analogised and half-formed notions. It is a method subsisting in obfuscation of the original character of problems, constituted by analogic metaphorical language disclosing the wrong problematic as a theme.

    Recall this is a supposed discussion of ecosystems and people/societies as ecosystems. Instantiations of entropic measures for ecosystems are apparently 'too narrow', despite being part of the topic when viewed from an information-systems perspective. Also note that degrees of freedom means something different for both measures, and neither one can be said to have boundary conditions in the literal sense or constraints in anything like the sense that we agreed upon in terms of state spaces.

    I agree we can do just that. We can describe the world in terms that pick up on some characteristic of interest. I just say that is not a deep approach. What we really want is to discover the patterns by which nature organises itself. And to do that, we need some notion about what nature actually desires.

    It is not a deep approach, granted. But your attempts to generalise entropy could not be called a deep approach to anything but entropy. I think you, perhaps like Salthe, though my verdict is still out on his sins, should attempt non-metaphorical procedural descriptions of processes more often. You did a good thing by talking about entropy and work in ATP metabolism - I would commend it more if it were more relevant to entropy and information in ecosystems. You generally characterise that as a wee system 'giving freedom' to a big one. The ecosystem measures are about the big one. Salthe has a means of problematising this kind of misapplication or mislocation - a hierarchy of problem classes relevant to systems. You should be dealing with the 'given freedom' rather than the 'freedom giving' from below.

    So you are down among the weeds. I'm talking about the big picture. Again, it is fine if your own interests are narrow. But I've made different choices.

    You are a holist, and have holist concerns. Don't be hating on the weeds for being narrow; be hating on the picture for painting weeds as insufficiently general. They implicate systems of all orders of abstraction, and you seem to have forgotten this commitment.
  • I am an Ecology


    It's not metaphorical if infodynamics/semiosis is generalisable to material dissipative structures in general.

    The bridges between your contextualised uses of entropy are metaphorical. I'll give an apokrisis-like description of a process so you can see what I mean. It will be done through a series of thermodynamically inspired haikus, just because.

    Organisms lay
    Jumbled up yet striving for
    Their own mouths to feed

    Each cell binds a gap
    Consuming a gradient
    Where to find new food?

    Digging in the weeds
    An old domain of feasting
    For new specialists

    Symmetry broken
    Entropy from exergy
    Degrees of freedom

    So it is not me waving my hands. It is you demonstrating a deaf ear to context. I am careful to distinguish between the part of what I say which is "normal science" and the part that is "speculative metaphysics". And the speculative part is not merely metaphor because the project would be to cash it out as concrete theory, capable of prediction and measurement.

    I don't actually care about this part. Whether what you're doing is science or not doesn't interest me. Nor do I think it's relevant to our disagreement - from my perspective anyway.

    I agree that may also be a tall order. But still, it is the metaphysical project that interests me. The fact that you repeatedly make these ad hom criticisms shows that you simply wish not to be moved out of your own particular comfort zone. You don't want to be forced to actually have to think.

    I thought you'd give me more charity than this. I read quite a few papers on ascendency so I could understand it. If researching something you've referenced to assess its merits is being afraid of thinking, I'll remain a coward. Ascendency is a cool concept, btw.

    A lot of your focus seems to be on how do we specify richness or complexity or healthy biodiversity. And you instinctively think in terms of counting niches or something else "structurally concrete".

    I don't. So far as I'm aware, there's no comprehensive measure of biodiversity or ecosystem complexity; too many different things can be relevant to quantifying them.

    A lot of your focus seems to be on how do we specify richness or complexity or healthy biodiversity. And you instinctively think in terms of counting niches or something else "structurally concrete".

    I took two examples to show what happens when you let one slide into another. They were both measures of an aspect of ecosystem complexity. They were designed to measure different things. A unified sense of entropy must contain both, and sliding over their differences isn't sewing them together.

    I'm not suggesting you 'look at the formulas and find a master one', the thing I cared about was that the measures of entropy in terms of ascendency and relative abundance meant different things - they summarise different aspects of the behaviour of the ecosystem.

    So here, aren't you assuming that we can just count species and have no need to consider the scale of action they might represent? One might be a bacterium, the other an elephant. Both might be matched in overall trophic throughput. We would expect their relative abundance to directly reflect that fact rather than a species count having much useful to say about an ecosystem's healthy biodiversity or state of entropic balance.

    It would be interesting if Shannon biodiversity was related to ascendency. There's a trivial link in terms of node counting - and the number of nodes (species) has an influence on ascendency (though not necessarily on its numerical value). Regardless, high entropy states of abundance don't have to reflect high entropy states of flows. I imagine flow disorder is likely to be less than relative abundance disorder if they were scaled to the same unit (not sure how to translate the units though), since there's probably going to be a primary producer om-nom-noming the sun.

    I read a bit of Salthe today. As I've said previously, I don't mind the project of trying to find a unification of entropy. Salthe has a nice article talking about four different notions of entropy; so far as I can tell, he contributes a way of speaking about entropy using similar words in all the contexts. He also doesn't seem to like 'pragmatism', or what I think is better called 'instrumentalism' in the sciences - a focus on measurement and mathematisation. What I've got from reading his writing was the occasional 'I can see through time' moment regarding entropy.

    I understand that you need to be able to use the following terms to talk about anything in this vista of generalised entropy: 'degrees of freedom', 'energy', 'entropy', 'information', 'order', 'disorder', 'dynamics', 'entropic gradient', 'exergy', 'boundary condition'. Salthe seems to use the words much as you do.

    The way he uses 'boundary condition' as a metaphor is probably the strongest concept I've seen from him so far, at least insofar as a generalised parametric approach to entropy goes. It gave me a picture like this:

    Dynamical system 1 outputs a set,
    Dynamical system 2 takes that set as literal boundary conditions.
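
    To make that picture concrete, here's a toy sketch in Python - both maps are invented for illustration, and only the wiring between the two systems is the point:

    # A toy rendering of the picture above. The dynamics are made up;
    # what matters is that system 1's output becomes system 2's
    # boundary condition.

    def system_1(x0, steps=100):
        # some dynamics - a logistic map, purely for illustration
        x = x0
        for _ in range(steps):
            x = 3.7 * x * (1 - x)
        return x  # 'outputs a set' (here: a single number)

    def system_2(boundary, steps=10):
        # different dynamics, pinned to system 1's output
        xs = [boundary]
        for _ in range(steps):
            xs.append(0.5 * xs[-1] + 0.1)
        return xs

    trajectory = system_2(system_1(0.2))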

    I never said I couldn't see the meaning behind the handwaving. This is roughly consistent with the poetic notions of degree of freedom you and Salthe espouse - the boundary condition allows for the evaluation of specific trajectories, and is thus a 'constraint', and so a loss of 'freedom'.

    Maybe I'd be more comfortable with what you're saying if you used scarequotes like I do.
  • I am an Ecology


    I'm baffled that you say Shannon entropy came before Boltzmann's entropy.

    It didn't. Shannon's entropy came after. By throwing in 'original' there I meant Shannon's particular application of entropy to signals and strings.

    It is fine that science does create simpler indexes. I've no problem with that as a natural pragmatic strategy. But also, with Shannon and Boltzmann, it became clear that informational uncertainty (or configurational degrees of freedom) and entropic material degrees of freedom (or countable microstates) are two sides of the same coin. The mathematics does unite them in a general way at a more abstract level.

    It's a stretch between Shannon Biodiversity and Gibbs entropy - there's no equivalent notion of macrostate other than the vector of relative abundances within an ecosystem.

    You can keep re-stating that a proper scientist would use the proper tools. You can reel off the many kinds of metrics that reflect the simpler ontology of the reductionist. You can continue to imply that I am somehow being unscholarly in seeking to consider the whole issue at a more holistic level - one that can encompass physicalist phenomena like life and mind. And indeed, even culture, politics, economics, morality and aesthetics.

    I'm not saying you're being unscholarly because you're looking for commonalities in entropy. I'm saying that your application of entropic concepts has a deaf ear for context. Just like sliding straight from Shannon Biodiversity to Boltzmann and back, the 'macrostate' of an ecosystem parametrised solely in terms of its relative abundances isn't anything like a macrostate in a thermodynamic system.

    You've shown that you can keep it quite well contextualised. Your post on ATP and work extraction to my reckoning has a single working concept of entropy in it - that of its duality to exergy, work extraction. Then you slide into a completely different operationalisation of entropy:

    Then from an infodynamic or pansemiotic point of view, constraints become the informational part of the equation, degrees of freedom are the dynamics. In the real material world, the configuration can be treated as the knowledge, the structure, that the organismic system seeks to impose on its world. The constraints are imposed by a mind with a purpose and a design. The degrees of freedom are then the entropy, the dynamics, that flow through the organism.

    Going from ATP being used to fuel an organism straight to a 'global' sense of infodynamics and signals/signs in pansemiosis. It works only when you wave your hands and don't focus on the specifics. When what before was concrete becomes metaphorical, then what was metaphorical becomes concrete. Just like how you slingshot about with what the degrees of freedom are.

    I want you to make substantive posts in terms of a unified sense of entropy. I don't want you to achieve that through handwaving and equivocation.

    But I know what I'm about so I'm only going to respond to your critique to the degree it throws light on the connecting commonality, the linkages to that more holistic worldview.

    This is why every time you talk about anything tangentially related to entropy, systems or complexity you make all issues an addendum to your worldview. This is why what we're talking about has almost no relation to the OP.

    Does that remain the case now that information theory has been tied to the actual world via holographic theory?

    I'm not going to pretend to know enough about cosmology to comment.
  • I am an Ecology


    Shannon's is strictly broader than Boltzmann's, since it allows for non-equidistribution. Gibbs and Shannon are almost equivalent; or rather, it can be said that Gibbs entropy is Shannon entropy applied to a distribution over the microstates of a macrostate, where the microstates need not be equiprobable.
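
    For concreteness, the textbook relation (nothing here beyond the standard definitions):

    $$ S_{\text{Gibbs}} = -k_B \sum_i p_i \ln p_i \quad\longrightarrow\quad S = k_B \ln \Omega = S_{\text{Boltzmann}} \quad \text{when } p_i = 1/\Omega \text{ for all } i $$

    Drop the $k_B$ and read the $p_i$ as symbol probabilities and you have Shannon's H; the reduction to Boltzmann happens exactly when the distribution is uniform, which is the non-equidistribution point above.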

    I said I didn't understand it very well because I don't know what conclusions people are going to draw by blurring the boundaries between them.
  • I am an Ecology


    Will wait a bit to see what you do, and to digest the post.
  • I am an Ecology


    Will you be giving a series of replies? Should I wait?
  • I am an Ecology


    If my responses meet your standard of 'intelligent discussion', feel free to respond at this point.
  • I am an Ecology


    If instead you really want to say that entropy is simply whatever act of measurement we care to construct as its instrumental definition - that there is no common thread of thought which justifies the construct - then how could you even begin to have an intelligent discussion with me here?

    Funnily enough, it's precisely the common thread between different notions of entropy that makes me resist trying to come up with a catch-all definition of it. This is that entropy is a parametrised concept when it has any meaning at all. What does a parametrisation mean?

    I'll begin with a series of examples, then an empirical challenge based on literature review. Shannon entropy, Boltzmann entropy, ascendency, mutual information - these are all functions from some subset of n-dimensional real space to the real line. What does this mean? Whenever you find an entropic concept, it requires a measure. There's a need to be able to speak about low and high entropy arrangements for whatever phenomenon is being studied.

    So - find me an example of an entropy in science that isn't parametrised. I don't think there are any.

    Examples: ascendency as an entropy is a mapping from n-dimensional real space to the real line, where n is the number of nodes in the ecosystem network. Shannon Diversity is a mapping from n-length sequences of natural numbers to the real line, where n is the number of species in an ecosystem. Gibbs entropy is the same in this sense as Shannon Diversity. From thermodynamics to ecological infodynamics, entropy is always something which is spoken about in degrees, and when qualitative distinctions arise from it, they emerge from differences in degree - differences in values of the entropy.
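
    To make the 'mappings' point concrete, a minimal Python sketch (the function names are mine) of two of these maps into the real line - they share a functional form but their inputs mean different things:

    import numpy as np

    def shannon_diversity(p: np.ndarray) -> float:
        """p: a vector of relative abundances, summing to 1."""
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    def gibbs_entropy(p: np.ndarray, k_B: float = 1.0) -> float:
        """Same functional form, different inputs: p is a distribution
        over the microstates of a macrostate; with the physical k_B the
        output even carries units."""
        p = p[p > 0]
        return float(-k_B * np.sum(p * np.log(p)))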

    You said you didn't mind if I believed your descriptions of entropy are purely qualitative - the problem is that they are not purely qualitative. You speak about entropic gradients, negentropy, entropy maximisation without ever specifying the entropy of what, or how the entropy is quantified - or even what an entropy gradient is. Never mind what 'entropification' is, but more on that later... Anyway, back to the commonalities in entropy definitions.

    So a commonality is that they are mappings from some space to the real line. But what matters - what determines the meaning of the entropy - is both what the inputs to the entropy function are and how they are combined to produce a number. To speak of entropy in general is to let the what and the how vary with the implicit context of the conversation; it destroys the meaning of individual entropies by attempting to unify them. The unification has poor construct validity precisely because it doesn't allow the what and the how of the mapping to influence the meaning.

    So when you say things like:

    So the normal reductionist metaphysical position is that degrees of freedom are just brute atomistic facts of some kind. But I seek to explain their existence. They are the definite possibilities for "actions in directions" that are left after constraints have had their effect. So degrees of freedom are local elements shaped by some global context, some backdrop history of a system's development.

    In a trivial sense, degrees of freedom are local elements shaped by some global context. You index to the history of a system as if that gives 'its' entropy a unique expression. You can see that this just isn't the case by comparing the behaviour of Shannon Entropy and ascendency - they have different behaviours, they mean different things, they quantify different types of disorder of an ecosystem. And after this empty unification of the concept of entropy, you give yourself license to say things like this:

    So evolution drives an ecology to produce the most entropy possible. A senescent ecology is the fittest as it has built up so much internal complexity. It is a story of fleas, upon fleas, upon fleas. There are a hosts of specialists so that entropification is complete. Every crumb falling off the table is feeding someone. As an ecology, it is an intricate hierarchy of accumulated habit, interlocking negentropic structure. And then in being so wedded to its life, it becomes brittle. It loses the capacity to respond to the unpredictable - like those either very fine-grain progressive parameter changes or the out of the blue epic events

    'Evolution drives an ecology to produce the most entropy possible' - could be viewed in terms of Shannon Entropy, Exergy, Functional Biodiversity.

    'A senescent ecology is the fittest as it has built up so much internal complexity' - could be viewed in terms of Shannon Entropy, Exergy, Functional biodiversity.

    'It is a story of fleas, upon fleas, upon fleas' - is now apparently solely a network-based concept, so it's a functional biodiversity.

    'There are hosts of specialists so that entropification is complete' - this makes sense in terms of numerical biodiversity - relative abundances.

    'Every crumb falling off the table is feeding someone.' - this makes sense in terms of functional diversity, like ascendency.

    'As an ecology, it is an intricate hierarchy of accumulated habit, interlocking negentropic structure'

    And when you say negentropy, you mean configurational entropy - except that means it's no longer about ascendency at all.

    'And then in being so wedded to its life, it becomes brittle. It loses the capacity to respond to the unpredictable - like those either very fine-grain progressive parameter changes or the out of the blue epic events'

    I mean, your 'it' and 'unpredictable' are ranging over all available entropy concepts and all possible perturbations to them. You can look at the example of applying ascendency and exergy along a eutrophication gradient to see that such breadth generates inconsistencies.

    Then switching to that informational or negentropic side of the deal - the tale of life's dissipative structure - the degrees of freedom become the energy available to divert into orderly growth. It is the work that can be done to make adaptive changes if circumstances change.

    Now the degrees of freedom are solely a concept of exergy and available energy? Jesus man. The problem here isn't just that you're equivocating on a massive scale, it's that changes in different entropy measures mean different things for the dynamics of a system.

    Life is spending nature's degrees of freedom in entropifying ambient energy gradients. And it spends its own degrees of freedom in terms of the work it can extract from that entropification - the growth choices that it can make in its ongoing efforts to optimise this entropy flow.

    I could find two references for 'entropification' - and neither of them is in an ecological context; both concern a process for estimating the orderliness of errors in statistical models. One of them is an applied example, the other looks at it in terms of stochastic geometry. I mean, there's no clear sense of entropification to have. It could refer to either of them, but you probably want it to resemble exergy the most here. And through some analogy to thermodynamics, you'll think this has an accessible meaning. How does entropification work?

    You earlier say:

    So the degrees of freedom are the system's entropy. It is the through-put spinning the wheels.

    This relies upon the exponentiation of a particular entropy measure. As you saw, this idea isn't a unified one - and unification produces terrible construct validity. The degrees of freedom are something of the what of entropy, not the how. You can use the how to look back at the what, but not without context.

    Every process described in your post is a placeholder. It reminds me of my favourite sentence I've ever read in academic literature. It is a howler:

    During the search phase, subtask relevant teabag features become attentionally prioritised within the attentional template during a fixation.

    This is supposed to serve as an example of how different features of an object become relevant and become looked at for a while through the progression of a task. What they actually did was take a description of the process in general:

    During the search phase, subtask relevant features become attentionally prioritised within the attentional template during a fixation.

    And then thought 'this would be much clearer if we substituted in teabag':

    During the search phase, subtask relevant teabag features become attentionally prioritised within the attentional template during a fixation.

    How loosely you treat entropy makes almost everything you say a subtask relevant teabag feature. It is an example from a promised theory which has not been developed.

    Edit: the authors of subtask relevant teabag features actually did develop a theory, though.
  • I am an Ecology


    It's actually a lot of work just to research the background to what you're saying. So I think I have to break up my responses into a series.

    A univocal sense of entropy would require the instrumental validity of its applications. This is similar to saying that a measurement of how depressed someone is has to take into account the multiple dimensions of depression - the affective, behavioural and psychomotor components. Validity has three aspects: construct validity, content validity and criterion validity. And these have a few sub-aspects.

    Construct validity: the variable varies with what it's supposed to. If you asked a depressed person how much they liked football, and used that as a measurement of how depressed they are, this measurement would have low construct validity. Construct validity splits into two forms: discriminant validity and convergent validity. Discriminant validity is the property that the variable does not vary with what it's not supposed to - the football depression scale I alluded to above has low discriminant validity, since it would be sensitive to the wrong things. It also has low convergent validity: if its correlation with a real measure of depression was computed, it would be very low. Convergent validity is then the property that a measure varies with what it's supposed to vary with. I think that convergent validity of a group of measures (say, measures of depression absence, happiness, contentment) consists in the claim that each can be considered a monotonic (not necessarily linear, as in correlation) function of the others.

    Content validity: the variable varies in a way which captures all aspects of a phenomenon. The football scale of depression has essentially no content validity, a scale of depression when the psychomotor effects of depression are not taken into account has more, a scale of depression which attempts to quantify all effects of depression and does it well has high content validity.

    Criterion validity: the variable varies with outcomes that can be predicted with it. Imagine if someone has taken a test of depression on the football scale and a good scale. Now we administer a test of 'life contentment', high scores on the good depression scale would generally occur with low scores on the life contentment scale. Scores on the football scale will have little or no relationship to the life contentment scale measurements.
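
    A made-up simulation of the football-scale example, if it helps - the scales and noise levels are all invented; the point is only how criterion validity shows up as correlation with an outcome:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    depression = rng.normal(size=n)                      # the latent construct
    good_scale = depression + 0.3 * rng.normal(size=n)   # tracks the construct
    football_scale = rng.normal(size=n)                  # tracks nothing relevant
    contentment = -depression + 0.3 * rng.normal(size=n)

    corr = lambda a, b: float(np.corrcoef(a, b)[0, 1])
    print(corr(good_scale, contentment))      # strongly negative: criterion validity
    print(corr(football_scale, contentment))  # near zero: none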

    So you asked me if I can provide a univocal definition of entropy. I can't; nevertheless, I insist that specific measures of entropy are well defined. Why?

    Measures of entropy are likely to have high construct validity - they measure what they're supposed to. Let's take two examples, ascendency and Shannon biodiversity:

    The ascendency is a property of a weighted directed graph. The nodes on such a graph are relevant ecological units - such as species groups in a community. The weights in the graph are measurements of the transfer between two nodes. Let's take an example of wolves, rabbits and grass and construct a food web, assuming a single species for each, no bacteria, etc...

    Wolves: have an input from rabbits.
    Rabbits: have an input from grass and an output to wolves.
    Grass: has an input from the sun and an output to rabbits.

    Assume for a moment that the energy transfer is proportional to the biomass transfer. Also assume this ecosystem is evaluated over a single day. Also assume that the wolves extract half as much biomass from the rabbits as the rabbits do from the grass, and the rabbits extract half the energy from the grass that the grass does from the sun; and that grass extracts '1' unit from the sun (normalising the chain).

    Then:

    Transfer(Sun,Grass)=1
    Transfer(Grass,Rabbits)=0.5
    Transfer(Rabbits,Wolves)=0.25

    Denote transfer as T. The ascendency requires the computation of the total throughput - the sum of all weights, here 1.75. We then need the average mutual information (AMI). This is defined as:

    $$ \text{AMI} \;=\; \sum_{i,j} \frac{T_{ij}}{T_{\cdot\cdot}} \log\!\left(\frac{T_{ij}\, T_{\cdot\cdot}}{T_{i\cdot}\, T_{\cdot j}}\right) $$

    Where $T_{\cdot\cdot}$ is the total throughput, $T_{i\cdot}$ is the total of the flows going from i to others, and $T_{\cdot j}$ is the total of the flows going from others to j. I'm not going to compute it, since the actual value won't provide anything enlightening here - it won't help elucidate the meaning of the terms. The average mutual information, roughly, is a measure of the connectivity of the graph, but weighted so that 'strong' connections have more influence on the AMI than 'weak' ones.

    The ascendency is then defined as:

    $$ A \;=\; T_{\cdot\cdot} \times \text{AMI} \;=\; \sum_{i,j} T_{ij} \log\!\left(\frac{T_{ij}\, T_{\cdot\cdot}}{T_{i\cdot}\, T_{\cdot j}}\right) $$
    What does this measure? The diversity of flows within a network. How? It looks at the proportion of each flow in the total, then computes a quantification of how that particular flow incorporates information from other flows - then scales back to the total flow in the system. It means that the diversity is influenced not just by the number of flows, but by their relative strength. For example, a network consisting of one huge flow with the rest negligible would get an ascendency much closer to that of a single-flow network than a simple count of flows would suggest - incorporating an idea of functional diversity as well as numerical biodiversity. Having one incredibly dominating flow means 0 functional diversity.

    The ascendency can also be exponentiated to produce a measure of the degrees of freedom of the network. Having one incredibly dominating flow means 0 AMI, so 0 ascendency, so the exponential of the ascendency is:

    $$ e^{0} = 1 $$

    I.e. one 'effective degree of freedom'. Ulanowicz has related this explicitly to the connectivity of digraphs in 'Quantifying the Complexity of Flow Networks: How many roles are there?'. It's behind a paywall, unfortunately. If an ecological network has flows equidistributed over the organisms - each receiving an equal portion of the total flow - then it will have the same effective degrees of freedom as the number of nodes (number of organism types) in the network. When an unequal portion of total flow is allocated to each, it will diverge from the number of nodes - decreasing, since there's more functional concentration of flow in the system.
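
    For anyone who wants the arithmetic anyway, a minimal Python sketch of the AMI, ascendency and effective degrees of freedom for the grass-rabbit-wolf chain above (the node layout and variable names are mine):

    import numpy as np

    # Nodes: 0=Sun, 1=Grass, 2=Rabbits, 3=Wolves; T[i, j] = flow from i to j.
    T = np.zeros((4, 4))
    T[0, 1], T[1, 2], T[2, 3] = 1.0, 0.5, 0.25

    tst = T.sum()                               # total throughput: 1.75
    out_, in_ = T.sum(axis=1), T.sum(axis=0)    # T_i. and T_.j
    mask = T > 0
    ami = float(np.sum(T[mask] / tst *
                       np.log(T[mask] * tst / np.outer(out_, in_)[mask])))
    print(tst * ami)     # ascendency
    print(np.exp(ami))   # effective degrees of freedom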

    Numerically, this would be equal to the exponentiated Shannon Biodiversity index in an ecosystem when the species are present in equal numbers. To see this, the Shannon Biodiversity is defined as:

    $$ H \;=\; -\sum_{i=1}^{n} p_i \log p_i $$

    Where each $p_i$ is the proportion of the i-th species in the total. This is a numerical comparison of the relative abundance of each species present in the ecosystem. It obtains its maximum value when each species has equal relative abundance, and its exponential is then equal to the number of species in the ecosystem. Look at the case with 2 species each having 2 animals: p is constant along i, being 0.5, so the Shannon Biodiversity is -2*0.5*log(0.5) = log 2, and its exponential is 2.

    Critically this 2 means something completely different from the effective degrees of freedom derived from the flow entropy. Specifically, this is because there are equal relative abundances of each species rather than equal distribution of flow around the network. The math makes them both produce this value since they are both configurational (Shannon) entropies - and that's literally how they were designed to work.
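
    To see the coincidence in miniature - the network below is contrived precisely so the numbers match:

    import numpy as np

    # Two species in equal abundance: exp(Shannon) = 2.
    p = np.array([0.5, 0.5])
    print(np.exp(-np.sum(p * np.log(p))))     # 2.0

    # A two-node cycle with equal flows also gives exp(AMI) = 2 - the
    # same number, answering a different question (flow roles, not headcount).
    T = np.array([[0.0, 0.5],
                  [0.5, 0.0]])
    tst = T.sum()
    out_, in_ = T.sum(axis=1), T.sum(axis=0)
    mask = T > 0
    ami = np.sum(T[mask] / tst *
                 np.log(T[mask] * tst / np.outer(out_, in_)[mask]))
    print(np.exp(ami))                        # 2.0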

    If we were to take both of these measures individually and assess them for content validity, they'd probably be pretty damn good. This is because they were derived in different constrained situations to be sensitive to different concepts. They adequately measure flow diversity and relative-abundance biodiversity. If you take them together, you can see they will only necessarily agree when both the flows and the numbers are distributed equally among all species in the network. This means low construct validity for a sense of entropy attempting to subsume both. It just won't capture the variability in both of them.

    I'm being reasonably generous here: when the degrees-of-freedom notion from ascendency theory was applied across a eutrophication gradient - which I assume you will allow as an entropic gradient - the ascendency degrees of freedom varied in an upside-down U shape from low to high eutrophication. So it doesn't have to agree with other (more empirically derived) concepts of 'flow concentration' (more nutrients going to plants, less oxygen in the water, a possible drop in diversification). I.e. the middle ground between low and high eutrophication had the highest ascendency degrees of freedom, not either extreme.

    I think this is actually fine, as we already know that 'intermediates' are likely to be closer to equidistribution of flows than extremes, so long as they contain the same species. The paper puts it this way:

    In the light of these results, the network definition of eutrophication (Ulanowicz, 1986) does not appear to accord with the gradient in eutrophication in the Mondego estuarine ecosystem. Rather, it would seem more accurate to describe the effects of eutrophication process in this ecosystem in terms of a disturbance to system ascendency caused by an intermittent supply of excess nutrients that, when coupled with a combination of physical factors (e.g. salinity, precipitation, etc), causes both a decrease in system activity and a drop in the mutual information of the flow structure. Even though a significant rise in the total system throughput does occur during the period of the algal bloom and does at that time give rise to a strong increase of the system ascendency, the longer-term, annual picture suggests instead that the non-bloom components of the intermediate and strongly eutrophic communities were unable to accommodate the pulse in production. The overall result was a decrease in the annual value of the system TST and, as a consequence, of the annual ascendency as well.


    Of course, if you've read this far, you will say 'the middle state is the one furthest from order, so of course it has the highest degrees of freedom' - which is the opposite intuition to the removal of dominant energy flows 'raining degrees of freedom' down onto the system. This just supports the idea that your notion of entropy has poor construct validity.

    Your notion of entropy has very good content validity: since you will take any manifestation of entropy as data for your theory of entropy, it of necessity involves all of them. However, since we've seen that the construct validity when comparing two different but related entropic measures of ecosystem properties is pretty low, your definition of entropy has to be able to devolve to capture each of them. And since they measure different things, this would have to be a very deep generalisation.

    The criterion validity of your notion of entropy is probably quite low, since your analogies disagree with the numerical quantity you were inspired by.

    This is just two notions of entropy which have a theoretical link and guaranteed numerical equality on some values, and you expect me to believe that it's fruitful to think of entropy univocally when two similar measures of it disagree conceptually and numerically so much? No, Apo. There are lots of different entropies, each of them is well defined, and it isn't so useful to analogise all of them without looking at the specifics.

    Edit: If you want me to define entropy univocally, it's not a game I want to play. I hope the post made it clear that I don't think it's useful to have a general theory of entropy which provides no clarity upon instantiation into a context.

    So about the only thing I can say is that:

    Entropy = something that looks like Shannon Diversity.
  • I am an Ecology


    The point of that post was to highlight that there isn't a univocal sense of entropy, yet.
  • I am an Ecology


    Entropy is absolutely well defined. It's just defined in different ways. There are multiple entropies. They mean different things. They have different formulas. They can relate. The way you use entropy probably isn't well defined yet; it has shades of all of the ones I detailed in both posts, and to speak univocally about entropy as you do is to blur out the specificities required in each application. The same goes for degrees of freedom.
  • I am an Ecology


    It literally took me an hour to disambiguate the different relevant notions of entropy and degrees of freedom that bear some resemblance to how you use the terms, and I still forgot to include a few: firstly, that degrees of freedom can be looked at as the exponential of entropy measures; secondly, that entropy can be thought of as a limitation on work extraction from a system, or as a distributional feature of energy.

    I wouldn't be doing my job properly if I accused you of playing fast and loose with terms while playing fast and loose with them myself.
  • I am an Ecology


    First up, I'm not bothered if my arguments are merely qualitative in your eyes. I am only "merely" doing metaphysics in the first place. So a lot of the time, my concern is about what the usual rush to quantification is missing. I'm not looking to add to science's reductionist kitset of simple models. I'm looking to highlight the backdrop holistic metaphysics that those kinds of models are usually collapsing.

    This is fine. I view it in light of this:

    To sum up, no doubt we have vastly different interests. You seem to be concerned with adding useful modelling tools to your reductionist kitbag. And so you view everything I might say through that lens.

    Which is largely true. What I care about, in the questions I've asked you, is how the metaphysical system you operate with instantiates into specific cases. You generally operate at a high degree of abstraction, and discussion topics become addenda to the exegesis of the system you operate in. I don't want this one to be an addendum, since it has enough structure to be a productive discussion.

    To help you understand, I define degrees of freedom as dichotomous to constraints. So this is a systems science or hierarchy theory definition. I make the point that degrees of freedom are contextual. They are the definite directions of action that still remain for a system after the constraints of that system have suppressed or subtracted away all other possibilities.

    What does 'dichotomous to constraints' mean? There are lots of different manifestations of the degrees-of-freedom concept. I generally think of it as the dimension of a vector space - maybe calling a vector space an 'array of states' is enough to suggest the right meaning. If you take all the vectors in the plane, you have a 2-dimensional vector space. If you constrain the vectors so that the sum of their components is specified, you lose a degree of freedom, and you have a 1-dimensional vector space. This also applies without much modification to random variables and random vectors; the vector spaces are just defined in terms of random variables instead of numbers.
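
    A minimal sketch of that dimension counting, assuming nothing beyond linear algebra - one linear constraint on the plane removes one degree of freedom:

    import numpy as np

    # Vectors in the plane: two degrees of freedom. One linear constraint
    # (here: the components must sum to a fixed value) removes one of them.
    constraints = np.array([[1.0, 1.0]])     # x + y = c
    dof = 2 - np.linalg.matrix_rank(constraints)
    print(dof)                               # 1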

    I think this concept is similar to, but distinct from, ones in thermodynamics - though it has been a while since I studied any. The number of microstates a thermodynamic system can occupy is related to (though not the same as) its degrees of freedom, and a particular degree of freedom is a way in which the system can vary. A 'way in which something can vary' is essentially a coordinate system - a parametrisation of the behaviour of something in terms of distinct components.

    There are generalisations of what degrees of freedom means statistically when something other than (multi)linear regression is being used. For something like ridge regression, which deals with correlated inputs to model a response, something called the 'effective degrees of freedom' is used. The effective degrees of freedom is defined as the trace of the projection matrix from the response space onto the vector space spanned by the model terms (the trace being something like the size of that matrix). Without the ridge penalty, this is equal to the vector-space/dimensional degrees of freedom above.
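
    A sketch of that calculation with a made-up design matrix - the effective degrees of freedom are the trace of the hat matrix, computable from the singular values of X:

    import numpy as np

    # Ridge effective degrees of freedom: trace of X (X'X + lam I)^-1 X',
    # which equals sum_j d_j^2 / (d_j^2 + lam) for singular values d_j of X.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 5))
    d = np.linalg.svd(X, compute_uv=False)

    def edf(lam):
        return float(np.sum(d**2 / (d**2 + lam)))

    print(edf(0.0))    # 5.0: no penalty recovers the plain dimension count
    print(edf(10.0))   # < 5: shrinkage spends degrees of freedom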

    Effective degrees of freedom can also be defined in terms of the exponential of a configurational entropy, in a different context. So I suppose I should talk about configurational entropy.

    Configurational entropy looks like this:

    $$ H \;=\; -\sum_{i} w_i \log w_i $$

    Where the $w_i$ are a collection of numbers between 0 and 1 such that $\sum_i w_i = 1$. They are weights. The weights are included to allow for conditional entropies and the like. The 'degrees of freedom' can also mean the number of terms in this sum, which is equal to the number of distinct, non-overlapping states that can obtain - like proportions of species in an area of a given type: how many of each divided by how many in total. If the $w_i$ are treated as proportions in this way, it works the same as the Shannon Entropy. Shannon Entropy is a specific case of configurational entropy.

    The Shannon Entropy is related to the Boltzmann entropy in thermodynamics in a few ways I don't understand very well. As I understand it, the Boltzmann entropy is a specific form of Shannon entropy. The equivalence between the two lets people think about Shannon entropy and Boltzmann entropy interchangeably (up to contextualisation).

    Then there's the manifestation of entropy in terms of representational complexity - which can take a couple of forms. There's the original Shannon entropy, then there's Kolmogorov complexity. Algorithmic information theory takes place in the neighbourhood of their intersection. The minimum length of a description required to produce a string (Kolmogorov) and the related but distinct quantity of the average negative logarithm of a probability distribution (Shannon) are sometimes thought of as being the same thing. Indeed there are correspondence theorems - stuff true of Kolmogorov implies stuff true of Shannon and vice versa (up to contextualisation) - but they aren't equivalent. So:

    The theoretical links between Shannon's original entropy, thermodynamical entropy and representational complexity can promote a vast deluge of 'I can see through time'-like moments when you discover or grok things about their relation. BUT, and this is the major point of my post:

    Playing fast and loose with what goes into each of the entropies and their context makes you lose a lot. They only mean the same things when they're indexed to the same context. The same applies for degrees of freedom.

    I think this is why most of the discussions I've read that include you as a major contributor are attempts to square things with your metaphysical system, described in abstract rather than instantiated terms. 'Look and see how this thing is in my theory' rather than 'let's see how to bring this thing into my theory'. Speaking about this feels like helping you develop the system. Reading the references, however, does not.

    I'll respond to the rest of your post in terms of ascendency, but later.
  • Level III Multiverse again.


    Do you have any references on what the measure preserving transformation is? I mean, if we're speaking about ergodicity, it has to be the ergodicity of a measure preserving transformation. Another way of putting it is what is a 'step' in the 'trajectory of the universe' defined as? And how can it be established as ergodic?

    Another thing - how can ergodicity be used to show not just that the long-term probability of set visitation is nonzero, but that its arrival time is finite? There's a distinction here in terms of finite Markov chains: having an infinite expected arrival or 'revisitation' time excludes a state from being ergodic (and thus the chain from being ergodic in terms of all states).
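
    For a finite irreducible chain the expected return times are finite and recoverable from the stationary distribution (Kac's formula); a minimal sketch with an invented transition matrix:

    import numpy as np

    # Kac's formula: E[time to return to state i] = 1 / pi_i, where pi
    # is the stationary distribution of the chain.
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
    pi = pi / pi.sum()
    print(1 / pi)       # expected return times: [1.2, 6.0]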
  • A question about time measurement
    @Metaphysician Undercover

    By metaphysical necessity, I mean the metaphysical necessity of a proposition. By the metaphysical necessity of a proposition, I mean that it's something true which is not contingent. Something that must be the case of necessity, and cannot change. I'm sure you can see that 'the physical laws will not change' is implied by 'the physical laws cannot change' - and in the latter statement is the expression of what I mean by metaphysical necessity of physical law. I don't think it holds. I don't think it's necessary for the clock to function as it does, and I don't think it's required for reciprocating the error rate in terms of seconds/seconds to get how many seconds are required for amassing a single second of error.
  • A question about time measurement


    Take tom's example, that it has now been proven that the earth is getting further from the sun, and the year is getting longer. That difference is so slight that people in the past would never have noticed it. They would do projections into the future, extrapolations as you do, without realizing that every year the length of the error grows by the tiniest amount. After a very long time, this tiniest amount multiplies into a larger amount. What if something similar is the case with the caesium frequency? This is just one example, of one possibility, but have you considered this possibility, that the error is cumulative?

    The possibility of error in the measurement of the year - based upon the assumption that the Earth has a constant elliptical orbit - isn't what made that measurement flawed. What made the measurement flawed was that there actually was an error in it, induced by the Earth getting further away from the sun. The possibility of error does not invalidate a measurement; the actuality of error does. And 'the actuality of error' consists in the claim that 'the actual quantity ascribed in the measurement error analysis is wrong' - not that it's possibly wrong. Of course it's possibly wrong; scientific knowledge is fallible. That it's possibly wrong gives no reason to reject it.

    Perhaps I misunderstand what you mean by metaphysical necessity of physical law, but I do believe that if you want to extrapolate the way that you do, you need some principles whereby you can argue that what was observed to be the case for one month will continue to be the case for 100 million years.

    I actually did this. I made a case that the error rate would be the same for the same measurement process in 100 million years. There are things that would make atoms behave in different ways, like if all their protons decay (which is possible). If there were no protons, there'd be no caesium or strontium atoms, and no optical lattices, so no caesium clocks. If something like that were predicted to happen within 100 million years, the claim that 'the measurement error of the clock would be the same in 100 million years' would have some evidence against it. So I quoted you some stuff about the chronology of the universe - the stelliferous era, the one which we are in now, is predicted to have the same atomic physics through its duration. The end of the stelliferous era will be in about 1000 more universe lifetimes, much much longer than 100 million years. This is a matter of being consistent or inconsistent with physical theories, not one of their possibility of error. There's just no good reason to believe that atomic physics will change in a meaningful way in 100 million years. It's a tiny amount of time on the scale of the universe's chronology - a minuscule fraction of the lifetime of the stelliferous era, which we are in and will still be in.

    Instead of focussing on what we can believe evidentially about the actuality of the laws of nature changing, you instead internalised the laws of nature to scientific consensus - claiming that the laws of nature change because of changes in science. In some trivial sense this is true; laws are descriptions of patterns in nature, and if our descriptions change, the linguistic formulation of patterns changes, or new patterns are given descriptions. General changes in scientific consensus imply nothing in particular about the measurement error analysis of that clock. Changes in the operation of nature might, if they influence the patterns atomic physics is concerned with in a meaningful way. Notice might, not will: to establish that changes in the operation of nature will invalidate the error analysis, a flaw has to be found in the error analysis. Not the possibility of a flaw - that is a triviality, since scientific thinking is fallible - but the establishment of a particular flaw in the error analysis.

    And in this, you provide the claim that the behaviour of oscillations between hyperfine states has been observed for one month, therefore measurement error analysis based on that month's observations cannot be used to calculate an error rate extending beyond the month. Or maybe not beyond the month - you've been admittedly imprecise on exactly how 'the data was gathered in a month' actually changes the error analysis, saying you have no idea how 'it was gathered in a month' invalidates the quantification of error in the measurements.

    In general, this argumentative pattern is invalid. I have generalised here because you have not provided, and cannot provide, a way in which the duration of the data gathering for the paper influences the derived error rates. So, if the principle is 'we cannot say that the error is less than the data gathering duration because of a possible multiplicative effect on the error due to changes in physical law' - which is still imprecise, as it provides no translation of uncertainties between quantities of different dimensions (like temperature and time) - we end up in a situation I detailed a bit earlier, but will provide more detail on now.

    (1) You read the temperature from the thermometer at time t. Say that the duration of your observation was 1 second.
    (2) There is a possible error associated with the thermometer and its error analysis which can multiply the error in an unbounded fashion.
    (3) After 1 second, you do not know the temperature in the room since the error is possibly so large.

    Try as you might, there isn't going to be any way you can establish the constancy of the laws of nature within a second through an a priori argument. All we have are perceptions of regularity, and the fact that stuff seems to work the same way across terrestrial timescales in the real world. If this were something that could be settled a priori, Hume's arguments against it, the Wittgenstein-Kripke analogues in philosophy of language, and the whole problem with grue and blue wouldn't be there. It's always going to be possible that there's a huge unaccounted-for error in the thermometer, therefore we don't know the temperature in the room on the thermometer's basis.

    I would like to think you would also believe that this argument form is invalid, since it leads to the complete absurdity that it's impossible to form opinions based on measurements. Just substitute in 'measuring process' for thermometer and index a quantity instead of 'temperature', the argument works just the same.

    And again, this is an issue independent of whether it's appropriate to ask the question 'how many seconds are required to make the caesium clock produce an error of 1 second' - that already assumes the clock is functioning, or would be functioning in the manner it did in the experiment, for that time period. Counterfactually: if same process, same measurements, same errors. You can answer that question with a simple algebraic operation - taking the reciprocal. If my pulse has an error of 0.1 seconds per second, then it takes 10 seconds for my pulse to accumulate 1 second of error.

    At this point, you said taking the reciprocal and saying the clock has amassed that error assumes the clock is working for that long. In a trivial sense it does - since if the clock didn't function for that long it would have a different amassed error but not a different error rate. Unless, for some reason, you undermine the measurement process of the clock by saying it requires the constancy of the laws of nature...

    In that case, we end up in the absurd position that a*10^x per k error rate isn't the same as (b*a)*10^x per b*k - which is an error in basic arithmetic.
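
    Spelled out, the identity being appealed to is just the scale-invariance of a ratio:

    $$ \frac{a \times 10^{x}\ \text{s of error}}{k\ \text{s elapsed}} \;=\; \frac{(b\,a) \times 10^{x}\ \text{s of error}}{(b\,k)\ \text{s elapsed}} \qquad \text{for any } b > 0 $$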

    Edit: when I say there's no good reason to believe atomic physics will change in 100 million years, I mean that there's no good reason to believe that operation of nature relevant to atomic physics will change, not that the scientific understanding of atoms won't change in that time period. It will, it will get more expansive and more precise. If we're still even alive as a species by that point, ho hum.
  • A question about time measurement
    @Metaphysician Undercover

    Actually, that seems to be exactly what fdrake was claiming.

    Well, we had an argument over whether the metaphysical necessity of physical law was required for the measurement to be accurate at that point. I tried to argue that that was a category error; you tried to argue that I required it. Whether in 100 million years the clock has the same error rate depends on whether the physical laws would change. One way of preventing the change conceptually would be the application of necessity to physical law. I tried to argue that that would be sufficient but not necessary: what is relevant is whether the laws would in fact change, not whether they could or must - a contingent fact, rather than the possibility of its negation or its elevation to necessity.

    The quantification of the error in terms of 1 sec/100 mil years and its equivalence to the stated error rate in the paper is a separate issue. If you want to treat it as a separate issue now, that's fine with me - to me that looks like progress. Since you were arguing as if the metaphysical necessity of physical law was required for scaling the error to an equivalent rate, I argued that it wasn't.

    So we had this super-discussion of the necessity of physical law - neither of us believed that it was necessary. But yeah, if you want to talk about the scaling of the error rate without, in my view, muddying the waters with all this talk of the metaphysical necessity of physical law, I'd be interested in chatting about it again.
  • I am an Ecology
    @apokrisis

    This isn't necessarily a flaw in your thinking. It could be determined by me not having read the things you've read. Further: the complaint that the background research doesn't cash out in exactly the terms you present it is thephilosophyforum.com first world problems, since you actually base things on research and provide references when prompted.
  • I am an Ecology


    I'm suspicious of the confidence you have in this biosemiotic/thermodynamic/entropic system, which crops up almost everywhere you post. You usually make statements based on decreases/increases in entropic or related quantities. You present your interpretation of biosemiosis, thermodynamics and systems as fundamentally dynamical systems of information exchange as if it were a sure thing, and use it to justify whatever quantitative variations you expect.

    I ask you for references and derivations to see if there's anything supporting the application of your system to the problem at hand. When I went digging, ascendency doesn't have any clear relationship to 'degrees of freedom' as you use the term; it manifests as the behaviour of configurational entropy - which is not a monotonic function of the degrees of freedom in the sum. Neither is the joint entropy of a Markov network, introduced by normalising flows with total energy (density), a function of 'degrees of freedom' - which is the degree of the network. You can compute effective numbers of species or effective degrees of freedom based off statistical summaries like the Shannon Biodiversity or the exponential of the ascendency - but since they are monotone functions of the input entropic measure, predictions and requirements for the exponentiated quantity must be cashed out in terms of predictions and requirements on its inputs.

    Your posts which use your underlying biosemiotic/thermodynamic/entropic system are a fuzzy bridge between this research background and the discussed problems. While it's commendable to base your reasoning and metaphysical systems on relevant science, it isn't at all clear how your cited literature and background concepts manifest as critiques of, solutions to, or reframings of the problematics you engage with. Especially when you use fuzzy notions like 'degrees of freedom' and hope that your meaning can be discerned by reading thermodynamics, some ecological semiotics, applications of information theory to ecological networks, and dissipative-systems literature.

    Whenever you've presented me literature (or someone else in a thread I've watched), I've gone 'huh that's interesting' and attempted to research it, but I've never managed to make the bridge between your research and your opinions. Too many quantitative allusions that aren't cashed out in precise terms.
  • I am an Ecology


    Well, that didn't have any quantitative analysis, so I did some digging. It seems appropriate to assume that trees that canopy very high have a 'large' energy flow into them. If you removed all the trees from an area, the energy would instead flow into plant species on the ground. The total energy coming into that area could probably be assumed to be constant. This doesn't immediately reduce the overhead - the 'unspent energy' (not necessarily degrees of freedom) Ulanowicz uses - since it depends on the relative apportioning of the solar energy to ground species: how much of the total goes to each. The total can go down while the proportions remain unchanged, so the entropy and joint entropy can remain unchanged too, and therefore so can the ascendency and overhead.

    So while the destruction of a canopy in an area could lead to changes in overhead, it doesn't have to.

    edit: see here to play about with the maths yourself, see if I'm right.
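
    In lieu of the link, a minimal Python sketch of the invariance claim - the flow matrix is arbitrary, and what is invariant under scaling the total is the (dimensionless) AMI, and with it any ratio built from the flow proportions:

    import numpy as np

    def ami(T):
        # average mutual information of a flow matrix T[i, j] = flow i -> j
        tst = T.sum()
        out_, in_ = T.sum(axis=1), T.sum(axis=0)
        mask = T > 0
        return float(np.sum(T[mask] / tst *
                            np.log(T[mask] * tst / np.outer(out_, in_)[mask])))

    # An arbitrary canopy-ish flow structure; only the proportions matter.
    T = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.5],
                  [0.0, 0.0, 0.0]])
    print(ami(T), ami(0.25 * T))   # identical: scaling the total leaves AMI alone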
  • I am an Ecology


    I imagine that approaching ecosystems through network analysis would have a lot to say about this: i.e. more biodiverse ecosystems have more nodes that support certain cycles such that the failure of a few of these nodes would not lead to the failure of those cycles as a whole; and moreover, that such robustness also has a catalytic effect - the more robust a network, the more chance for the development of further nodes, etc, etc

    What makes you think that more biodiverse ecosystems 'have more nodes that support certain cycles such that the failure of a few of these nodes would not lead to the failure of those cycles as a whole?'

    I can think of a simplified example of it. Say you have 100 wolves and 100 rabbits (only), and wolves only feed on rabbits. The Shannon Biodiversity of that system is -(100/200 * log(100/200) + 100/200 * log(100/200)) = -2*0.5*log(0.5) = log 2. Elmer Fudd comes in and kills 50 rabbits and 50 wolves: the wolves aren't gonna die out, the rabbits aren't gonna die out.

    Now say you have 100 wolves and 50 rabbits. The Shannon Biodiversity there is -((50/150)*log(50/150) + (100/150)*log(100/150)) = -((1/3)log(1/3) + (2/3)log(2/3)) = log 3 - (2/3)log 2, which is less than log 2. Elmer Fudd comes in and kills 50 wolves and 50 rabbits. The wolves then starve to death.

    Though, if you started off with 2 wolves and 2 rabbits, the Shannon Biodiversity would be log2 still, and Elmer Fudd would destroy the ecosystem.
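
    The same numbers, if you want to check them (natural logs throughout):

    import numpy as np

    def shannon(counts):
        p = np.asarray(counts, dtype=float)
        p = p / p.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    print(shannon([100, 100]))   # log 2 ~ 0.693: maximal evenness
    print(shannon([100, 50]))    # ~ 0.637: less even
    print(shannon([2, 2]))       # log 2 again: same index, far more fragile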

    The idea of biodiversity in your post needs to be sharpened a bit in order for this to apply I think.

    I agree it's clunky as well, but the necessary vocabulary is kinda hard to pin down, I think. I think part of the problem is the fungibility of these terms: what may once have been a non-reflexive variable ('in-itself') may become reflexive ('for-itself'), and vice versa - the only way to find out which is which is to actually do the empirical study itself, and find out what exactly whatever little patch of nature under consideration is in fact sensitive to ('sensitive to' being perhaps a better phrase than 'what nature can see'). So when you say later on that:

    That's a much better way of putting it. No anthropomorphism, less conceptual baggage (like finding all the parameters nature 'cares' about). Also links directly to perturbation.
  • I am an Ecology


    Canopy succession is an example. Once a mighty oak has grown to fill a gap, it shades out the competition. So possibilities get removed. The mighty oak then itself becomes a stable context for a host of smaller stable niches. The crumbs off its feeding table are a rain of degrees of freedom that can be spent by the fleas that live on the fleas.

    See, I can imagine what you mean by degrees of freedom, but - and this is a big but - I don't think it can be used so qualitatively. So when you say:

    But if the oak gets knocked down in a storm or eaten away eventually by disease, that creates an opening for faster-footed weed species. We are back to a simpler immature ecosystem where the growth is dominated by the strong entropic gradient - the direct sunlight, the actual rainfall and raw nutrient in the soil.

    The immature ecology doesn't support the same hierarchy of life able to live off "crumbs" - the weak gradients that highly specialised lifeforms can become adapted to. It doesn't have the same kind of symbiotic machinery which can trap and recycle nutrients, provide more of its own water, like the leaf litter and the forest humidity.

    It's ambiguous what the degrees of freedom refer to. Are you talking about niches? Is the claim that when the oak falls, there are fewer niches? More niches? More configurational entropy?

    An immature ecology is dependent on standing under a gushing faucet of entropy. It needs direct sunlight and lots of raw material just happening to come its way. It feeds on this bounty messily, without much care for the long-term. Entropy flows through it in a way that leaves much of it undigested.

    But a senescent ecology has built up the complexity that can internalise a great measure of control over its inputs. A tropical forest can depend on the sun. But it builds up a lot of machinery to recycle its nutrients. It fills every niche so that while the big trees grab the strongest available gradients, all the weak ones, the crumbs, get degraded too.

    What makes a gradient strong or weak? Can you cash this out in terms of a thermocline? Say you have water near a shore at 10 Celsius; 1m nearer the shore, it's 20 Celsius. Compare this to a shift of 5 Celsius over 1m. Is that an entropic gradient? What makes it an entropic gradient? What makes it not an entropic gradient? How is it similar to the configurational entropy of niches?

    I have in mind a procedure where you do an imaginary counting exercise of how many niches are available, then assume something about the proportion of organisms in each niche, and then it turns out that when an oak dies there's more configurational entropy - a combination of changes in occupation probability and changes in the number of terms in the sum. Decreases can result from fewer degrees of freedom (number of bins/configurations), or from a less uniform distribution of entities into bins. Or both.
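
    That procedure in miniature - the occupation weights are invented, natural logs again:

    import numpy as np

    def H(w):
        w = np.asarray(w, dtype=float)
        return float(-np.sum(w * np.log(w)))

    print(H([0.5, 0.5]))          # ~0.693: two niches, even occupation
    print(H([0.9, 0.1]))          # ~0.325: two niches, skewed occupation
    print(H([1/3, 1/3, 1/3]))     # ~1.099: a third niche raises the ceiling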

    In terms of cashing out your ideas in the specifics, your post was pretty weak. Can you go through one of your 'entropy calculations' using the oak/canopy example?