• schopenhauer1
    10.9k
    @StreetlightX

    Ecologies don't need a telos. That they exist, flourish and work as a system is a well known fact. Humans, though, have the ability to justify why they put more people into the world - why they need more people to grow, maintain themselves, and die.
  • Galuchat
    809
    So, is describing human political economy in ecological terms a category error?
  • apokrisis
    7.3k
    a conservative ecology would be precisely a senescent one, one that, yes, acknowledges the need for 'community' and so on, but that doesn't valorize the changes that such community fosters (correlatively, a philosophy of individualism lies on the other side of the spectrum). The 'best' ecosystems are precisely those perched halfway between immaturity and senescence, insofar as they can accommodate change in the best way.StreetlightX

    Senescent is probably a bad word choice by Salthe as he means to stress that a climax ecology has become too well adapted to some particular set of environmental parameters. It has spent all its degrees of freedom to create a perfect fit, and so that makes it vulnerable to small steady fine-scale changes in those parameters outside its control - something like coral reefs collapsing as we cause changes in ocean temperature and acidity. Or else the perturbations can come from the other end of the scale - single epic events such as an asteroid strike or super-volcano.

    So evolution drives an ecology to produce the most entropy possible. A senescent ecology is the fittest as it has built up so much internal complexity. It is a story of fleas, upon fleas, upon fleas. There is a host of specialists so that entropification is complete. Every crumb falling off the table is feeding someone. As an ecology, it is an intricate hierarchy of accumulated habit, interlocking negentropic structure. And then in being so wedded to its life, it becomes brittle. It loses the capacity to respond to the unpredictable - like those either very fine-grain progressive parameter changes or the out of the blue epic events.

    So senescence isn't some sad decaying state. It is being so perfectly adapted to a set of parameters that a sudden breakdown becomes almost inevitable. Coz in nature, shit always happens. And then you struggle if you are stuck with some complicated set of habits and have lost too much youthful freedom to learn some new tricks.

    One can easily draw economic and political parallels from this canonical lifecycle model. And it seems you want to make it so that conservatives equal the clapped out old farts, neoliberal individualists equal the reckless and immature, then the greeny/lefties are the good guys in the middle with the Goldilocks balance. They are the proper mature types, the grown-ups.

    Well I'd sort of like to agree with that, but it is simplistic. :)

    The left certainly values the co-operative aspect of the human ecosystem, while the greens value the spatiotemporal scope of its actions.

    So conservatives certainly also value the co-operative aspects of society, but have a more rigid or institutionalised view. The rules have become fixed habits and so senescent even if a good fit to a current state. The left would instinctively take a sloppier approach as it would seem life is still changing and you need to still have a capacity for learning and adaptation in your structures of co-operativity.

    Then conservatives also value the longer view of the parameters which constrain a social ecology. Like greens, they are concerned for the long-term view - one that includes the welfare of their great-grandchildren, their estates, their livestock. But while greenies would be looking anxiously to a future of consequences, conservatives - in this caricature - are so set in their ways by a history of achieving a fit that the long-view is more of the past. It is what was traditionally right that still looks to point to any future.

    But then conservatives may be the ones that don't rush into the future immaturely. The stability of their social mores may actually encode a long-run adaptiveness as the result of surviving past perturbations. Lefties and greenies can often seem the ones who are in the immature stage of being too eager for the turmoil of reforms, too quick to experiment in ways that are mostly going to pan out as maladaptive, too much the promoters of a pluralist liberal individualism, too quick to throw history and hierarchy in the bin.

    So as usual, the science of systems - of which ecology is a prime one - could really inform our political and economic thinking. It is the correct framework for making sense of humans as social creatures.

    But once we start projecting the old dichotomous stereotypes - left vs right, liberal vs conservative - then that misses the fact that a system is always a balancing act of those kinds of oppositional tensions.

    And we also have to keep track of what is actually different about an ecology and a society. An ecology lacks any real anticipatory ability. It only reacts to what is happening right now as best it can, using either its store of developed habits to cope, or spending the capital of its reserve of degrees of freedom to develop the necessary habits.

    But a human society can of course aspire to be anticipatory. It can model the future and plan accordingly. It can remember the past clearly enough to warn it of challenges it might have to face. It can change course so as to avoid perturbations that become predictable due to long-range planning.

    And the jury is actually out on how an intelligent society ought to respond. On climate change, the conservatives were at first the ones worried about its potential to disrupt the story of human progress. Then the neoliberal attitude took over where the strategy for coping with the future is to rely on human ingenuity and adaptability.

    One view says tone everything down as we are too near the limit. The other says if shit is always going to happen - if not global warming, then the next overdue super-volcano ice age - the imperative is to go faster, generate more degrees of freedom. The planet is just not ever going to be stable, so to flourish, planned immaturity beats planned senescence.

    Both views make sense. As does the further view that of course there must be a balance of these two extremes. The right path must be in between.
  • fdrake
    6.6k


    Can you give a mechanical explanation of how an ecosystem spends degrees of freedom - also degrees of freedom of what?

    An example would also be good.
  • apokrisis
    7.3k
    Canopy succession is an example. Once a mighty oak has grown to fill a gap, it shades out the competition. So possibilities get removed. The mighty oak then itself becomes a stable context for a host of smaller stable niches. The crumbs off its feeding table are a rain of degrees of freedom that can be spent by the fleas that live on the fleas.

    But if the oak gets knocked down in a storm or eaten away eventually by disease, that creates an opening for faster-footed weed species. We are back to a simpler immature ecosystem where the growth is dominated by the strong entropic gradient - the direct sunlight, the actual rainfall and raw nutrient in the soil.

    The immature ecology doesn't support the same hierarchy of life able to live off "crumbs" - the weak gradients that highly specialised lifeforms can become adapted to. It doesn't have the same kind of symbiotic machinery which can trap and recycle nutrients, provide more of its own water, like the leaf litter and the forest humidity.

    So the degrees of freedom are the system's entropy. It is the through-put spinning the wheels.

    An immature ecology is dependent on standing under a gushing faucet of entropy. It needs direct sunlight and lots of raw material just happening to come its way. It feeds on this bounty messily, without much care for the long-term. Entropy flows through it in a way that leaves much of it undigested.

    But a senescent ecology has built up the complexity that can internalise a great measure of control over its inputs. A tropical forest can depend on the sun. But it builds up a lot of machinery to recycle its nutrients. It fills every niche so that while the big trees grab the strongest available gradients, all the weak ones, the crumbs, get degraded too.

    So the degrees of freedom refers both to the informational and entropic aspects of what is going on. Salthe is explicit about this in his infodynamic account of hierarchical complexity - the forerunner of what seems to have become biosemiosis.

    The degrees of freedom fall out of the sky as a rain of entropy, an energy source that would welcome being guided towards some suitable sink. Life then provides that negentropic or informational structure. It becomes the organised path by which sunlight of about 6000 degrees is cooled to dull infra-red.

    Then switching to that informational or negentropic side of the deal - the tale of life's dissipative structure - the degrees of freedom become the energy available to divert into orderly growth. It is the work that can be done to make adaptive changes if circumstances change.

    A weed can sprout freely to fill a space. It is green and soft, not woody and hard. It remains full of choices in its short life.

    An oak wins by trading that plasticity for more permanent structure. It grows as high and strong as it can. It invests in a lot of structure that is really just dead supporting wood.

    So degrees of freedom have a double meaning here - which is all part of the infodynamic view of life as dissipative structure.

    Life is spending nature's degrees of freedom in entropifying ambient energy gradients. And it spends its own degrees of freedom in terms of the work it can extract from that entropification - the growth choices that it can make in its ongoing efforts to optimise this entropy flow.

    So there is the spending of degrees of freedom as in Boltzmann entropy production. Turning energy stores into waste heat. And also in terms of Shannon informational uncertainty. Making the choices that remove structural alternatives.

    An immature system is quick, clever and clumsy. A senescent system is slow, wise and careful. An immature system spends energy freely and so always seems to have lots of choices available. A senescent system is economic with its energy spending, being optimised enough to be mostly in a mode of steady-state maintenance. And in knowing what it is about, it doesn't need to retain a youthful capacity to learn. Its degrees of freedom are already invested in what worked.

    Which then brings us back to perturbations. Shit happens. The environment injects some unpredicted blast of entropy - a fresh rain of entropic degrees of freedom - into the system. The oak gets blown down and the weeds get their chance again.
  • Deleteduserrc
    2.8k
    @fdrake
    But then something happens when a variable in the system can relate to that cycle by, to paraphrase Csal, by 'reflexively taking it's own parameters as a variable that can be acted upon': so humans will cultivate food so that we don't have to deal with - or at least minimize the impact of - cycles of food scarcity and die out like wolves with too few deer to prey on. This is the shift from the 'in-itself' to the 'for-itself', where the implicit becomes explicit and is acted upon as such. And this almost invariably alters the behavior of the system, which is why, I think, the two descriptions of the 'X’wunda trade system' (quoted by Csal) are not equivalent: something will qualitatively change if the system itself 'approaches itself' in Friedman's way.StreetlightX

    So first: yeah, the system will be changed if it relates to itself as a system.

    Quick example, from here. (In this case the system becoming self-aware would have negative effects, but of course with different examples it could have positive effects. Either way though, a qualitative change.)

    The author is talking about gri-gri, a sub-Saharan belief/magic system which purports to make individuals immune to gunfire.

    Gri-gri comes in many forms – ointment, powder, necklaces – but all promise immunity to weaponry. It doesn’t work on individuals, of course, although it’s supposed to. Very little can go grain-for-grain with black powder and pyrodex. It does work on communities: it makes them bullet proof.



    The economists Nathan Nunn and Raul Sanchez de la Sierra wrote a paper analyzing the social effects of gri-gri: Why Being Wrong Can Be Right: Magical Warfare Technologies and the Persistence of False Beliefs [...]

    The paper argues that gri-gri encourages resistance on a mass scale. Beforehand, given a mix of brave and cowardly, only a small percentage of a village would fight back. If you want to have any hope of surviving, then you need everyone to fight back. Gri-gri lowers the perceived costs of said resistance, i.e. no reason to fear guns when the bullets can’t hurt you. Now everyone fights, hence, gri-gri‘s positive benefits. Moreover: since more people are fighting, each gri-gri participant also raises the marginal utility of the others (it’s better to fight together). And, since there are highly specific requirements for using the powder (if you break a certain moral code it doesn’t work), gri-gri also probably cuts down on non-war related crimes. Take group-level selection: the belief in and use of gri-gri will thus allow any given village to out-compete one without gri-gri. After a time, these will either be replaced by gri-gri adherents (hence spreading it geographically), or they’ll adopt gri-gri themselves (also spreading it).

    So despite gri-gri appearing 'irrational', its adoption by a group is eminently rational. So why not keep the real rational benefits, but drop the irrational veneer?

    "[imagine that] the state sends a researcher into the village. “We’re sorry,” he says. “We were so stupid to mock you. We totally understand why you do this thing. Let’s explain to you what’s actually going on, now that we have an economic translation.”

    The researcher explains that, in fact, gri-gri doesn’t work for the individual, but it has the net-positive effect of saving the community. “Give up these childish illusions, yet maintain the overall function of the system,” he exhorts. A villager, clearly stupid, asks: “So it works?” The man smiles at these whimsical locals. “Oh, no,” he sighs. “You will surely die. But in the long run it’s a positive adaptation at the group level.”

    No one would fight, of course. The effect only comes from the individual. If he doesn’t think he can survive a bullet, then it’s hard to see how you’re going to make him fight. “But people fight better in groups, don’t you see?” stammers the exasperated researcher. That’s true as far as it goes, but it’s also no revelation. I trust that at least a couple of those villagers have brawled before. “Fighting six guys alone vs. fighting six guys with your friends” is a fast lesson with obvious application. Still didn’t make them go to war before the introduction of gri-gri. If that didn’t work, why do you think “time for some #gametheory” will convince anyone?


    So I agree, but the question of whether the two descriptions of the X'Wunda are equivalent is another thing entirely. I mean in one sense it's obvious they're not equivalent, otherwise they would be the same description. But do they both describe the same thing?

    My mistake was to differentiate between the 'in-itself' and the 'for-itself', when the germane Hegelian distinction would be the one between the 'in-itself' and the 'for-us' (that is, 'for us rational observers observing the system').

    Importantly, for Hegel, the in-itself and the for-us are the same thing. It's not a matter of noumenal core and phenomenal presentation, but of acting and knowing. The noumenal/phenomenal distinction casts things in terms of a transcendental knower who reaches out toward (hidden noumenal) being. (You could also conceptualize it as a knower not reaching toward, but being affected by, the diffracted rays of a noumenon.)

    Hegel, as you know, holds instead that knowing is itself a type of acting (and so also a type of being). Any given type of knowing will unfold, over time, as a series of actions. In doing so it will create a pattern observable to a different, meta-level, knower.

    But it's not as though the description of the meta-knower is 'true' while the experience of the object-level knower is false. The patterns the meta-knower observes are themselves driven by the internal logic of the object level-knower. If the object-level knower spoke the meta-language, it would not act the same way, and the object-level (as it existed) would disappear.

    So the idea would be: there is indeed a hidden order - a rational in-itself - to how things unfold. It's not a projection by us. It's already there, as long as there's someone to look. But for that order to be there (were someone to look), the order itself has to be 'looking' at something different.

    In short: Both descriptions of the X'wunda example are correct, and both refer to the same thing. You can't reduce one to the other, because in reducing the object-level to the meta-level rational one, you lose the object-level altogether. If you don't have the object-level, the meta-level description doesn't refer to anything. (This is why Hegel's so concerned with pointing out that the truth is the process as a whole, not simply the result.)

    And then my broader idea (I guess kind of Schellingian?) is that 'nature' itself 'knows' in some way, and that that knowledge drives it to act as it does. The way in which nature knows is itself (in part) those patterns and parameters we observe, but it can't itself know those patterns (otherwise it'd be a human). It knows something else, so to speak.

    I suppose, then, we both agree that it's a matter of emergence, though I'm not sure we're thinking of how that happens in the same way (though maybe we are.)
  • schopenhauer1
    10.9k
    Yes, and the more important question we should be asking is why we put more people into the world in the first place. For what - to grow, maintain, and die? At least ecologies and biomes can't control the absurd nature of continuing to continue. Humans can.
  • apokrisis
    7.3k
    Cheer up Schop. Take the long view. Either humanity will work out what it is about or your wish will be granted. You can wait 50 years surely?
  • schopenhauer1
    10.9k
    Cheer up Schop. Take the long view. Either humanity will work out what it is about or your wish will be granted. You can wait 50 years surely?apokrisis

    So you're saying that through our destructive use of natural resources we will die out. Why would we not just intentionally choose not to add more absurd instrumentality - growth, maintenance, death?
  • apokrisis
    7.3k
    What do you mean? Either we do blow ourselves up, or we do find a long-run ecological balance.

    Well, I was just trying to cheer you up. I realise there is in fact a third option where human ingenuity does get used to keep the game going in ever more extravagant fashion. Rather than changing ourselves to fit nature, many people will quite happily go along with changing nature to fit us.

    This is the anthropocene. Once we have artificial meat, 3D printed vegetables made from powdered seaweed, an AI labour force and nuclear fusion, who cares about rain forests and coral reefs? Rent yourself some VR goggles and live out all that old-time stuff if you are sentimental. Meanwhile here is an immersive game universe where you can go hunting centaurs and unicorns.

    So probably bad luck. We likely have enough informational degrees of freedom to beat nature at its own game.
  • Streetlight
    9.1k
    AFAIK the mechanisms that link biodiversity to stability are still being researched, so it's far from 'settled science'.fdrake

    I imagine that approaching ecosystems through network analysis would have a lot to say about this: i.e. more biodiverse ecosystems have more nodes that support certain cycles such that the failure of a few of these nodes would not lead to the failure of those cycles as a whole; and moreover, that such robustness also has a catalytic effect - the more robust a network, the more chance for the development of further nodes, etc, etc (I realize on reflection that we have different go-to intuitions with these kinds of subjects - you tend to focus on spatio-temporal specificity - as per the papers you've linked (and the discussion re: gene expression previously) - while I like to 'go abstract' and think in terms of mechanism-independent structure; it's interesting!).
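    A toy sketch of that redundancy intuition (hypothetical species names and a deliberately crude rule, not a fitted food-web model): each consumer survives as long as at least one of its prey species does, so redundant 'nodes' buffer the feeding cycle against single failures.

```python
def cascade(prey_of, extinct):
    """Propagate extinctions: a consumer goes extinct once all its prey are gone.
    prey_of maps consumer -> set of prey; basal species have no entry."""
    extinct = set(extinct)
    changed = True
    while changed:
        changed = False
        for consumer, prey in prey_of.items():
            if consumer not in extinct and prey and prey <= extinct:
                extinct.add(consumer)
                changed = True
    return extinct

# Redundant web: each consumer has two prey species (hypothetical names).
redundant = {"fox": {"rabbit", "vole"}, "owl": {"vole", "mouse"}}
# Sparse web: each consumer depends on a single prey species.
sparse = {"fox": {"rabbit"}, "owl": {"vole"}}

print(cascade(redundant, {"rabbit"}))  # fox survives on voles
print(cascade(sparse, {"rabbit"}))     # the extinction cascades to the fox
```

    The same initial failure wipes out more of the sparse web, which is the 'more nodes supporting the cycle' point in miniature.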

    Do you mean the time series obtaining a local maximum through 'optimisation' or do you mean an ecological model obtaining a local maximum through optimisation? The relationship of the latter to an ecological model is more a matter of model fitting and parameter estimation than how a parametrised mathematical model of an ecology relates to what it models. The parameters are 'best in some sense' with respect to the data.fdrake

    Yeah, I could have been more clear here: I guess I have something in mind like an ecosystem - or local 'patch' - fluctuating around its carrying capacity or something similar. I mean, clearly carrying capacity isn't something that the system is 'aiming at': it doesn't tell itself 'ok, we're going to try and fluctuate around this point', but, like regulatory chemical reactions, it just 'falls out' of the dynamics of the system.

    I personally wouldn't like to think about the 'modelling relation' between science and nature in terms of the 'for-itself' acting representationally on the 'in-itself'. Just 'cos I think it's awkward.fdrake

    I agree it's clunky as well, but the necessary vocabulary is kinda hard to pin down, I think. I think part of the problem is the fungibility of these terms: what may once have been a non-reflexive variable ('in-itself') may become reflexive ('for-itself'), and vice versa - the only way to find out which is which is to actually do the empirical study itself, and find out what exactly whatever little patch of nature under consideration is in fact sensitive to ('sensitive to' being perhaps a better phrase than 'what nature can see'). So when you say later on that:

    I think ecology has some complications that aren't present in simpler relationships between model and world. I'm not sure I could make a list of them all, but there's always a difficulty in measuring properties of ecosystems precisely in a manner useful for modelling. It isn't the same for chemistry.fdrake

    I think perhaps the 'problem' is that ecology exhibits precisely a higher degree of the fungibility between the implicit/explicit sensitivity than chemistry does. This is what makes it more complex.
  • Streetlight
    9.1k
    That blog post was fascinating! I keep wandering back to the psychoanalytic point where the cheating husband's relationship with his lover only 'works' insofar as he is married: were he to leave his wife for the sake of his lover, the lover would no longer be desirable... Of course the psychoanalytic lesson is that our very 'subjective POV' is itself written into the 'objective structure' of things: it's not just window dressing, and if you attempt to discard it, you change the nature of the thing itself.

    And I think this slipperiness is what makes it so hard to fix the status of a 'parameter': if you want to make a parameter 'work' (i.e. if you intervene in a system on that basis), you will cause changes - but that doesn't mean the system is 'in-itself' sensitive to such parameters: only that, through your intervention you've made it so.
  • fdrake
    6.6k


    Canopy succession is an example. Once a mighty oak has grown to fill a gap, it shades out the competition. So possibilities get removed. The mighty oak then itself becomes a stable context for a host of smaller stable niches. The crumbs off its feeding table are a rain of degrees of freedom that can be spent by the fleas that live on the fleas.

    See, I can imagine what you mean by degrees of freedom, but - and this is a big but - I don't think it can be used so qualitatively. So when you say:

    But if the oak gets knocked down in a storm or eaten away eventually by disease, that creates an opening for faster-footed weed species. We are back to a simpler immature ecosystem where the growth is dominated by the strong entropic gradient - the direct sunlight, the actual rainfall and raw nutrient in the soil.

    The immature ecology doesn't support the same hierarchy of life able to live off "crumbs" - the weak gradients that highly specialised lifeforms can become adapted to. It doesn't have the same kind of symbiotic machinery which can trap and recycle nutrients, provide more of its own water, like the leaf litter and the forest humidity.

    It's ambiguous what the degrees of freedom refer to. Are you talking about niches? Is the claim that when the oak falls, there are fewer niches? More niches? More configurational entropy?

    An immature ecology is dependent on standing under a gushing faucet of entropy. It needs direct sunlight and lots of raw material just happening to come its way. It feeds on this bounty messily, without much care for the long-term. Entropy flows through it in a way that leaves much of it undigested.

    But a senescent ecology has built up the complexity that can internalise a great measure of control over its inputs. A tropical forest can depend on the sun. But it builds up a lot of machinery to recycle its nutrients. It fills every niche so that while the big trees grab the strongest available gradients, all the weak ones, the crumbs, get degraded too.

    What makes a gradient strong or weak? Can you cash this out in terms of a thermocline? Say you have water near a shore, it's 10 Celsius. 1 m nearer the shore, it's 20 Celsius. Compare this to a shift of 5 Celsius over 1 m. Is that an entropic gradient? What makes it an entropic gradient? What makes it not an entropic gradient? How is it similar to the configurational entropy of niches?

    I have in mind a procedure where you do an imaginary counting exercise of how many niches are available, then assume something about the proportion of organisms in each niche, then check whether it turns out that when an oak dies there's more configurational entropy - it's a combination of changes in occupation probability and the number of terms in the sum. Decreases can result from fewer degrees of freedom (number of bins/configurations) or a less uniform distribution of entities into bins. Or both.
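    To make that counting exercise concrete, here's a minimal sketch (made-up niche occupancies, purely illustrative) of the two competing effects: fewer bins lowers the maximum possible entropy, while more uniform occupancy pushes the entropy up toward that maximum.

```python
import math

def config_entropy(occupancy):
    """Shannon entropy of organisms distributed over niches (bins)."""
    total = sum(occupancy)
    return -sum((n / total) * math.log(n / total) for n in occupancy if n > 0)

# Hypothetical mature patch: eight niches, occupancy skewed toward dominants.
mature = [500, 120, 60, 30, 15, 8, 4, 2]
# After the oak falls: only three niches, but near-even occupancy.
immature = [300, 250, 200]

print(config_entropy(mature))    # bounded above by log 8
print(config_entropy(immature))  # bounded above by log 3
```

    Depending on the numbers you assume, either effect can dominate - which is exactly why the qualitative claim needs cashing out.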

    In terms of cashing out your ideas in the specifics, your post was pretty weak. Can you go through one of your 'entropy calculations' using the oak/canopy example?
  • fdrake
    6.6k


    I imagine that approaching ecosystems through network analysis would have alot to say about this: i.e. more biodiverse ecosystems have more nodes that support certain cycles such that the failure of a few of these nodes would not lead to the failure of those cycles as a whole; and moreover, that such robustness also has a catalytic effect - the more robust a network, the more chance for the development of further nodes, etc, etc

    What makes you think that more biodiverse ecosystems 'have more nodes that support certain cycles such that the failure of a few of these nodes would not lead to the failure of those cycles as a whole?'

    I can think of a simplified example of it. Say you have 100 wolves and 100 rabbits (only). Wolves only feed on rabbits. Shannon biodiversity in that system is -((100/200)*log(100/200) + (100/200)*log(100/200)) = -log(1/2) = log 2. Elmer Fudd comes in and kills 50 rabbits and 50 wolves: wolves aren't gonna die, rabbits aren't gonna die.

    Now say you have 100 wolves and 50 rabbits. Shannon biodiversity there is -((50/150)*log(50/150) + (100/150)*log(100/150)) = (1/3)log 3 + (2/3)log(3/2) = log 3 - (2/3)log 2 < log 2. Elmer Fudd comes in and kills 50 wolves and 50 rabbits. The wolves then starve to death.

    Though, if you started off with 2 wolves and 2 rabbits, the Shannon biodiversity would still be log 2, and Elmer Fudd would destroy the ecosystem.
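    The arithmetic above can be checked directly (using the natural log; the base only rescales the numbers):

```python
import math

def shannon_diversity(counts):
    """Shannon diversity H = -sum(p_i * log p_i) over species proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

print(shannon_diversity([100, 100]))  # log 2 ~ 0.693
print(shannon_diversity([100, 50]))   # log 3 - (2/3) log 2 ~ 0.637, below log 2
print(shannon_diversity([2, 2]))      # log 2 again, despite tiny populations
```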

    The idea of biodiversity in your post needs to be sharpened a bit in order for this to apply I think.

    I agree it's clunky as well, but the necessary vocabulary is kinda hard to pin down, I think. I think part of the problem is the fungibility of these terms: what may once have been a non-reflexive variable ('in-itself') may become reflexive ('for-itself'), and vice versa - the only way to find out which is which is to actually do the empirical study itself, and find out what exactly whatever little patch of nature under consideration is in fact sensitive to ('sensitive to' being perhaps a better phrase than 'what nature can see'). So when you say later on that:

    That's a much better way of putting it. No anthropomorphism, less conceptual baggage (like finding all the parameters nature 'cares' about). Also links directly to perturbation.
  • fdrake
    6.6k


    Well that didn't have any quantitative analysis. So I did some digging. It seems appropriate to assume that trees that canopy very high have a 'large' energy flow into them. If you removed all the trees from an area, then the energy would flow into plant species on the ground. The total energy coming into that area could probably be assumed to be constant. This doesn't immediately reduce the overhead - Ulanowicz's term for 'unspent energy' (not necessarily degrees of freedom) - since it depends on the relative apportioning of the solar energy to ground species - how much of the total goes to each. The total can go down while the proportions remain unchanged, so the entropy and joint entropy can stay the same too, and therefore so can the ascendency and overhead.

    So while the destruction of a canopy in an area could lead to changes in overhead, it doesn't have to.
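    A rough sketch of that scale-invariance point, with a toy flow matrix and my own made-up numbers. `flow_stats` is a hypothetical helper returning the per-unit-flow average mutual information and its residual; Ulanowicz's extensive ascendency and overhead are these multiplied by total throughput.

```python
import math

def flow_stats(T):
    """AMI and residual (overhead per unit flow) for a flow matrix T[i][j]."""
    n, m = len(T), len(T[0])
    tst = sum(sum(row) for row in T)                          # total system throughput
    out = [sum(row) for row in T]                             # row sums T_i.
    inn = [sum(T[i][j] for i in range(n)) for j in range(m)]  # column sums T_.j
    ami = sum((T[i][j] / tst) * math.log(T[i][j] * tst / (out[i] * inn[j]))
              for i in range(n) for j in range(m) if T[i][j] > 0)
    joint = -sum((T[i][j] / tst) * math.log(T[i][j] / tst)
                 for i in range(n) for j in range(m) if T[i][j] > 0)
    return ami, joint - ami

# Toy 3-compartment network (made-up flows).
T = [[0, 8, 2],
     [1, 0, 5],
     [3, 1, 0]]
half = [[x * 0.5 for x in row] for row in T]  # total halved, proportions kept

print(flow_stats(T))
print(flow_stats(half))  # identical: the information terms are scale-invariant
```

    Note that the extensive ascendency (total throughput times AMI) does scale with the total, so whether 'overhead stays the same' depends on whether you mean the extensive or the per-unit quantity.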

    edit: see here to play about with the maths yourself, see if I'm right.
  • apokrisis
    7.3k
    Your questions seem off the point so I’m struggling to know what you actually want.

    If you have a professional interest, then there is a big literature. Maybe start with https://www.jameskay.ca/about/thermo.html

    Rod Dewar and Rod Swenson also. I’ve mentioned Stan Salthe and Robert Ulanowicz. Charlie Lineweaver is another. Adrian Bejan might be the strongest in terms of generic models.

    I’ve not been close to the research for 20 years and I was always only really interested in the qualitative arguments. Also the quantitative support wasn’t exactly slam dunk. Measuring ecosystems is not easy.

    But for instance, one line of research involved thermal imaging of rainforests and other ecosystems. The hypothesis was that more complex ecologies would stick out by having a cooler surface temperature. They would extract more work from the solar gradient.

    Is that the kind of experiment you have in mind?

    Here’s a presentation with references at the end as well as charts of data - https://hyspiri.jpl.nasa.gov/downloads/2011_Symposium/day1/luvall%20hyspiri%20ecological%20thermodynamics%20may%202011%20final.pdf
  • fdrake
    6.6k


    I'm suspicious of the confidence you have in this biosemiotic/thermodynamic/entropic system which crops up almost everywhere you post. You usually make statements based on decreases/increases in entropic or related quantities. You present it as if your interpretation of biosemiosis, thermodynamics and systems as fundamentally dynamical systems of information exchange is a sure thing, and use it to justify whatever quantitative variations you expect.

    I ask you for references and derivations to see if there's anything supporting the application of your system to the problem at hand. When I went digging, ascendency doesn't have any clear relationship to 'degrees of freedom' as you use the term; it manifests as the behaviour of a configurational entropy, which is not a monotonic function of the number of degrees of freedom in the sum. Neither is the joint entropy of a Markov network, introduced by normalising flows by total energy (density), a function of 'degrees of freedom' - the degree of the network. You can compute effective numbers of species or effective degrees of freedom from statistical summaries like the Shannon biodiversity or the exponential of the ascendency - but since these are monotone functions of the input entropic measure, predictions and requirements on the exponentiated quantity must be cashed out as predictions and requirements on its inputs.

    Your posts which use your underlying biosemiotic/thermodynamic/entropic system are a fuzzy bridge between this research background and the discussed problems. While it's commendable to base your reasoning and metaphysical systems on relevant science, it isn't at all clear how your cited literature and background concepts manifest as critiques of, solutions to, or reframings of the problematics you engage with. Especially when you use fuzzy notions like 'degrees of freedom' and hope that your meaning can be discerned by reading thermodynamics, some ecological semiotics, and applications of information theory to ecological networks and the dissipative-systems literature.

    Whenever you've presented me literature (or someone else in a thread I've watched), I've gone 'huh that's interesting' and attempted to research it, but I've never managed to make the bridge between your research and your opinions. Too many quantitative allusions that aren't cashed out in precise terms.
  • fdrake
    6.6k
    @apokrisis

    This isn't necessarily a flaw in your thinking. It could be determined by me not having read the things you've read. Further: the complaint that the background research doesn't cash out in exactly the terms you present it is thephilosophyforum.com first world problems, since you actually base things on research and provide references when prompted.
  • apokrisis
    7.3k
    First up, I'm not bothered if my arguments are merely qualitative in your eyes. I am only "merely" doing metaphysics in the first place. So a lot of the time, my concern is about what the usual rush to quantification is missing. I'm not looking to add to science's reductionist kitset of simple models. I'm looking to highlight the backdrop holistic metaphysics that those kinds of models are usually collapsing.

    And then a lot of your questions seem to revolve around your definition of degrees of freedom vs mine. It would be helpful if you explained what your definition actually is.

    My definition is a metaphysically general one. So it is a little fuzzy, or broad, as you say.

    To help you understand, I define degrees of freedom as dichotomous to constraints. So this is a systems science or hierarchy theory definition. I make the point that degrees of freedom are contextual. They are the definite directions of action that still remain for a system after the constraints of that system have suppressed or subtracted away all other possibilities.

    So the normal reductionist metaphysical position is that degrees of freedom are just brute atomistic facts of some kind. But I seek to explain their existence. They are the definite possibilities for "actions in directions" that are left after constraints have had their effect. So degrees of freedom are local elements shaped by some global context, some backdrop history of a system's development.

    Thus I have an actual metaphysical theory about degrees of freedom. Or rather, I think this to be the way that holists and hierarchy theorists think about them generally. Peirce would be the philosopher who really got it with his triadic system of semiosis. Degrees of freedom equate to his Secondness.

    A second distinctive point is that I also follow semiotic thinkers in recognising an essential connection between Boltzmann entropy and Shannon uncertainty - the infodynamic view which Salthe expresses so well. So this is now a quantification of the qualitative argument I just gave. Now biosemiotics is moving towards the possibility of actual science.

    Theoretical biologists and hierarchy theorists like Howard Pattee in particular have already created a general systems understanding of the mechanism by which life uses codes to harness entropy gradients. So the story of how information and dynamics relates via an "epistemic cut" has been around since the 1970s. It is the qualitative picture that led to evo-devo. And it is the devo aspect - the Prigogine-inspired self-organising story of dissipative structures - that has become cashed out in an abundance of quantitative models over the past 30 years. I assume you know all about dissipative structure theory.

    So what we have is a view of life and mind that now is becoming firmly rooted in thermodynamics. Plus the "trick" that is semiotics, or the modelling relation.

    The physico-chemical realm already wants to self-organise to dissipate energy flows more effectively. That in itself has been a small revolution in physical science. What you call configuration entropy would seem to be what I would call negentropy, or the degrees of freedom spent to create flow channelling structure - some system of constraints. And in the infodynamic (or pansemiotic) view, the negentropy is information. It is a habit of interpretance, to use Peirce's lingo. So we have the duality of entropy and information, or a sustaining flow of degrees of freedom and set of structuring constraints, at the heart of our most general thermodynamical description of nature.

    Reductionist thinking usually just wants to talk about degrees of freedom and ignore the issue of how boundary conditions arise. The thermodynamics is basically already dead, gone to equilibrium, by the time anything is quantified. So the boundary conditions are taken as a given, not themselves emergently developed. For example, an ideal gas is contained in a rigid flask and sitting in a constant heat sink. Nothing can change or evolve in regard to the constraints that define the setting in which some bunch of non-interacting particles are free to blunder about like Newtonian billiard balls. But the dissipative structure view is all about how constraints can spontaneously self-organise. Order gets paid for if it is more effective at lowering the temperature of a system.

    So thermodynamics itself is moving towards an entropy+information metaphysics. The mental shift I argue for is to see dissipative structure as not just merely a curiosity or exception to the rule, but instead the basic ontological story. As Layzer argues, the whole Big Bang universe is best understood as a dissipative structure. It is the "gone to equilibrium" Boltzmann statistical mechanics, the ideal gas story, that is the outlier so far as the real physical world is concerned. The focus of thermodynamics has to shift to one which sees the whole of a system developing. Just talking about the already developed system - the system that has ceased to change - is to miss what is actually core.

    So physics itself is entropy+information in some deep way it is now exploring. And then biology is zeroing in on the actual semiotic machinery that both separates and connects the two to create the even more complex phenomenon of life and mind. So now we are talking about the epistemic cut, the creation of codes that symbolise information, capture it and remember it, so as to be able to construct the constraints needed to channel entropy flows. Rivers just carve channels in landscapes. Organisms can build paths using captured and internalised information.

    Only recently, I believe the biosemiotic approach has made another huge step towards a quantitative understanding - one which I explained in detail here: https://thephilosophyforum.com/discussion/comment/105999#Post_105999

    So just as physics has focused on the Planck-scale as the way to unify entropy+information - find the one coin that measures both at a fundamental level - so biology might also have its own natural fundamental scale at the quasi-classical nanoscale (in a watery world). If you want to know what a biological degree of freedom looks like, it comes down to the unit of work that an ATP molecule can achieve as part of a cell's structural machinery.

    To sum up, no doubt we have vastly different interests. You seem to be concerned with adding useful modelling tools to your reductionist kitbag. And so you view everything I might say through that lens.

    But my motivation is far more general. I am interested in the qualitative arguments with which holism takes on reductionism. I am interested in the metaphysics that grounds the science. And where I seek to make contact with the quantitative is on the very issue of what counts as a proper act of measurement.

    So yes, I am happy to talk loosely about degrees of freedom. It is a familiar enough term. And then I would define it more precisely in the spirit of systems science. I would point to how a local degree of freedom is contextually formed and so dichotomous to its "other" of some set of global constraints. Then further, I would point to the critical duality which now connects entropy and information as the two views of "a degree of freedom". So that step then brings life and its epistemic cut, its coding machinery, into the thermodynamics-based picture.

    And then now I would highlight how biophysics is getting down to the business of cashing out the notion of a proper biological degree of freedom in some fundamental quantitative way. An ATP molecule as the cell's universal currency of work looks a good bet.

    I'm sure you can already see in a hand-waving way how we might understand a rainforest's exergy in terms of the number of ATP molecules it can charge up per solar day. A mature forest would extract ATP even from the tiniest crumbs dropping off the table. A weedy forest clearing would not have the same digestive efficiency.

    So I've tried to answer your questions carefully and plainly even though your questions were not particularly well posed. I hope you can respond in kind. And especially, accept that I just might not have the same research goals as you. To the degree my accounts are metaphysical and qualitative, I'm absolutely fine about that.
  • fdrake
    6.6k


    First up, I'm not bothered if my arguments are merely qualitative in your eyes. I am only "merely" doing metaphysics in the first place. So a lot of the time, my concern is about what the usual rush to quantification is missing. I'm not looking to add to science's reductionist kitset of simple models. I'm looking to highlight the backdrop holistic metaphysics that those kinds of models are usually collapsing.

    This is fine. I view it in light of this:

    To sum up, no doubt we have vastly different interests. You seem to be concerned with adding useful modelling tools to your reductionist kitbag. And so you view everything I might say through that lens.

    Which is largely true. What I care about in the questions I've asked you is how the metaphysical system you operate with instantiates into specific cases. You generally operate at a high degree of abstraction, and discussion topics become addenda to the exegesis of the system you operate in. I don't want this one to be an addendum, since it has enough structure to be a productive discussion.

    To help you understand, I define degrees of freedom as dichotomous to constraints. So this is a systems science or hierarchy theory definition. I make the point that degrees of freedom are contextual. They are the definite directions of action that still remain for a system after the constraints of that system have suppressed or subtracted away all other possibilities.

    What does 'dichotomous to constraints' mean? There are lots of different manifestations of the degrees of freedom concept. I generally think of it as the dimension of a vector space - maybe calling a vector space an 'array of states' is enough to suggest the right meaning. If you take all the vectors in the plane, you have a 2 dimensional vector space. If you constrain the vectors to be such that their sum is specified, you lose a degree of freedom, and you have a 1 dimensional vector space. This also applies without much modification to random variables and random vectors, only the vector spaces are defined in terms of random variables instead of numbers.
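    That dimension-counting picture can be made concrete: the degrees of freedom left after imposing linear constraints is just the original dimension minus the rank of the constraints. A toy sketch (numpy; the example mirrors the fixed-sum case above):

```python
import numpy as np

n = 2  # vectors in the plane: two degrees of freedom
# One linear constraint: fix the sum x + y to a specified value.
constraints = np.array([[1.0, 1.0]])
remaining = n - np.linalg.matrix_rank(constraints)
print(remaining)  # one degree of freedom left: a line in the plane
```

    Each independent constraint removes one dimension from the space of admissible states.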

    I think this concept is similar to but distinct from ones in thermodynamics, but it has been a while since I studied any. The number of independent ways a thermodynamic system can vary is sometimes called its degrees of freedom, and a particular degree of freedom is a way in which the system can vary. A 'way in which something can vary' is essentially a coordinate system - a parametrisation of the behaviour of something in terms of distinct components.

    There are generalisations of what degrees of freedom means statistically when something other than (multi)linear regression is being used. For something like ridge regression, which deals with correlated inputs to model a response, something called the 'effective degrees of freedom' is used. The effective degrees of freedom is defined as the trace of the matrix that projects the response onto the vector space spanned by the model terms (the trace being something like the matrix's size). When the data is uncorrelated, this equals the vector-space/dimensional degrees of freedom above.
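    A sketch of that definition for ridge regression, where the 'hat' matrix is X(XᵀX + λI)⁻¹Xᵀ (the data here is randomly generated purely for illustration):

```python
import numpy as np

def ridge_effective_dof(X, lam):
    """Effective degrees of freedom: trace of the ridge hat matrix
    X (X'X + lam*I)^-1 X'."""
    p = X.shape[1]
    hat = X @ np.linalg.inv(X.T @ X + lam * np.eye(p)) @ X.T
    return float(np.trace(hat))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
print(ridge_effective_dof(X, 0.0))   # no penalty: recovers the dimensional count, 3
print(ridge_effective_dof(X, 10.0))  # shrinkage: strictly fewer effective dof
```

    With zero penalty the hat matrix is an ordinary projection and its trace is the number of model terms; as the penalty grows, the effective count shrinks continuously below that integer.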

    Effective degrees of freedom can also be defined in terms of the exponential of a configurational entropy, in a different context. So I suppose I should talk about configurational entropy.

    Configurational entropy looks like this:

    H = -Σ_i p_i log(p_i / q_i)

    Where the p_i are a collection of numbers between 0 and 1 such that Σ_i p_i = 1. They are weights. The q_i are included to allow for conditional entropies and the like. The 'degrees of freedom' can also mean the number of terms in this sum, which is equal to the number of distinct, non-overlapping states that can obtain. Like proportions of species in an area of a given type - how many of each divided by how many in total. If the p_i are treated as proportions in this way (with the q_i set to 1), it works the same as the Shannon Entropy. Shannon Entropy is a specific case of configurational entropy.
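    The 'effective number of species' reading mentioned earlier - the exponential of the entropy - can be sketched like this, with made-up proportions:

```python
import math

def shannon_diversity(proportions):
    """Shannon entropy of a set of species proportions."""
    return -sum(p * math.log(p) for p in proportions if p > 0)

even = [0.25] * 4                  # four equally common species
skewed = [0.85, 0.05, 0.05, 0.05]  # one dominant species
print(math.exp(shannon_diversity(even)))    # ≈ 4.0: every species counts fully
print(math.exp(shannon_diversity(skewed)))  # well under 4
```

    Since exp is monotone, any prediction about the effective number of species is exactly a prediction about the underlying entropy - which is the point about cashing out exponentiated quantities in terms of their inputs.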

    The Shannon Entropy is related to the Boltzmann entropy in thermodynamics in a few ways I don't understand very well. As I understand it, the Boltzmann entropy is a specific form of Shannon entropy. The equivalence between the two lets people think about Shannon entropy and Boltzmann entropy interchangeably (up to contextualisation).

    Then there's the manifestation of entropy in terms of representational complexity - which can take a couple of forms. There's the original Shannon entropy, then there's Kolmogorov complexity. Algorithmic information theory takes place in the neighbourhood of their intersection. The minimum description length required to represent a string (Kolmogorov) and the related but distinct quantity of the average of the negative logarithm of a probability distribution (Shannon) are sometimes thought of as being the same thing. Indeed there are correspondence theorems - stuff true about Kolmogorov implies true stuff about Shannon and vice versa (up to contextualisation) - but they aren't equivalent. So:

    The theoretical links between Shannon's original entropy, thermodynamical entropy, and representational complexity can promote a vast deluge of 'I can see through time'-like moments when you discover or grok things about their relation. BUT, and this is the major point of my post:

    Playing fast and loose with what goes into each of the entropies and their context makes you lose a lot. They only mean the same things when they're indexed to the same context. The same applies for degrees of freedom.

    I think this is why most of the discussions I've read including you as a major contributor are attempts to square things with your metaphysical system, but described in abstract rather than instantiated terms. 'Look and see how this thing is in my theory' rather than 'let's see how to bring this thing into my theory'. Speaking about this feels like helping you develop the system. Reading the references, however, does not.

    I'll respond to the rest of your post in terms of ascendency, but later.
  • apokrisis
    7.3k
    I'll find time to respond to your post later. But it is a shame that you bypass the content of my posts to jump straight back to the world from your point of view.

    You make very little effort to engage with my qualitative argument. Well none at all. So it feels as though I'm wasting my breath if you won't spell out what you might object to and thus show if there is any proper metaphysics motivating your view, or whether you just want to win by arguing me into some standard textbook position on the various familiar approaches to measuring entropy.

    Perhaps I'll let you finish first.
  • Streetlight
    9.1k
    But it is a shame that you bypass the content of my posts to jump straight back to the world from your point of view.apokrisis

    >:O
  • apokrisis
    7.3k
    Why the sudden interest in Nick Lane and Peter Hoffmann? Couldn't possibly be anything I said.
  • Streetlight
    9.1k
    Sudden? Lol. Lane's been on my reading list since the book came out, and Hoffman's a nice complement to that. I fully admit my theoretical promiscuity though - I even have Ulanowicz and Salthe coming up soon! But I'm still sousing in the sweet, sweet irony of your comment : D
  • apokrisis
    7.3k
    Sousing? You really do have a tin ear when it comes to your ad homs. It absolutely spoils the effect when you come across as the hyperventilating class nerd.
  • Streetlight
    9.1k
    If you say so, buttercup.
  • apokrisis
    7.3k
    Hmm. Just not convincingly butch coming from you. And more importantly it has no sting. You've got to be able to find a real weakness to pick at here. Calling me buttercup once again ends up saying more about your life experience than mine.
  • fdrake
    6.6k


    It literally took me an hour to disambiguate the different relevant notions of entropy and degrees of freedom that bear some resemblance to how you use the terms, and I still forgot to include a few: firstly, that degrees of freedom can be looked at as the exponential of entropy measures; and secondly, that entropy can be thought of as a limitation on work extraction from a system, or as a distributional feature of energy.

    I wouldn't be doing my job properly if I accused you of playing fast and loose with terms while playing fast and loose with them myself.
  • apokrisis
    7.3k
    So already we agree that the notion is ill-defined? It is a fast and loose term in fact. Just like entropy. Or information. Maybe this is why I am right in my attempt to be clear about the high-level qualitative definition and not pretend it has some fixed low-level quantitative measure.

    But I'll keep waiting until you do connect with what I've already posted.
