• frank
    15.8k


    Did you not read the passages you quoted?
  • Banno
    25.1k
    Yes, Frank.
  • Banno
    25.1k
    I suppose, given enough rope, one might see the dark room problem as this very misunderstanding of "surprise"; that the technical term is being confused with the common sense.
  • Banno
    25.1k
    The better one can interpret the sensory input of their environment, the better they can adapt for survival or whatever is needed to live as long as possible.Caldwell

    ...and here "better" means less surprising?

    This is the part I've not been able to get a handle on: why is minimising surprise the very same as living longest?

    What is it about minimising free energy that is the same as fitting in to one's environment?
  • Caldwell
    1.3k
    I suppose, given enough rope, one might see the dark room problem as this very misunderstanding of "surprise"; that the technical term is being confused with the common sense.Banno
    No, there's no confusion as to what they mean by surprise. I provided the quotes where you could see this. The issue being raised is that there's then an anomaly between how we expect agents to behave -- they should head to the dark room (in a manner of speaking) -- and how agents actually behave -- they don't seem to avoid surprises, or improbable (unlikely) events.

    What is it about minimising free energy that is the same as fitting in to one's environment?Banno
    If free energy is the difference between expectation/prediction and actual sensory input, then the lower the free energy, the better the agent's prediction of its environment.
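
    A quick numerical way to see this (just a toy sketch; the Gaussian "models" and the numbers are invented for illustration and are not Friston's actual formalism): the agent whose predictions sit closer to the real sensory stream accumulates less average surprisal, which is the loose sense of "lower free energy" above.

    ```python
    # Toy illustration (not Friston's formalism): two agents predict a sensory
    # signal with Gaussian models; the better-tuned model accumulates less
    # average surprisal (-log probability) over the actual inputs.
    import math
    import random

    def surprisal(x, mean, sd):
        """Negative log-probability of x under a Gaussian prediction."""
        return 0.5 * math.log(2 * math.pi * sd**2) + (x - mean)**2 / (2 * sd**2)

    random.seed(0)
    environment = [random.gauss(20.0, 2.0) for _ in range(1000)]  # actual sensory inputs

    good_model = {"mean": 20.0, "sd": 2.0}   # predictions close to the environment
    poor_model = {"mean": 15.0, "sd": 2.0}   # systematically wrong predictions

    for name, m in [("good model", good_model), ("poor model", poor_model)]:
        avg = sum(surprisal(x, m["mean"], m["sd"]) for x in environment) / len(environment)
        print(f"{name}: average surprisal = {avg:.2f}")
    # The good model reports the lower average surprisal: a smaller gap between
    # prediction and input, i.e. lower "free energy" in the loose sense above.
    ```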
  • Banno
    25.1k
    If free energy is the difference between expectation/prediction and actual sensory input, then the lower the free energy, the better the agent's prediction of its environment.Caldwell
    Yes, you are right, and I agree. Lower free energy implies adaptive fitness.

    This was what brought on my confusion:
    In fact, adaptive fitness and (negative) free energy are considered by some to be the same thing.

    And to get that, we also need that adaptive fitness implies lower free energy. It must just be a matter of getting used to a new way of seeing adaptive fitness.

    Cheers.
  • TheMadFool
    13.8k
    If biological systems, including ourselves, act so as to minimise surprise, then why don't we crawl into a dark room and stay there?Banno

    :chin: In a world where clarity is visually defined, darkness represents a state of not knowing or unknowing; the dark room is precisely why we possess a startle response. When the sun dips below the horizon and night creeps in, unpleasant surprises are just around the corner. I thought we already passed that waypoint many hundred thousand years ago, and that's the reason we have the startle reflex - we react faster, buying us time for fight/flight! I dunno.
  • I like sushi
    4.9k
    I think @apokrisis did a good job of answering this.

    People will assume 'surprise' to mean the everyday 'surprise' unless the technical term is outlined.

    In simpler terms it merely refers to differing from the norm. Something I find interesting about the 'Free energy principle' - in terms of the cognitive neurosciences - is how this plays off Inhibition of Return (IOR) in terms of awareness and attention.

    But why is minimising surprise the very same as living longest?Banno

    Because 'minimising' doesn't mean 'eradicating'? I don't really understand this question unless you took 'minimising' to mean 'reducing to nothing'. It might also be a case of conflating the term across 'evolution', 'information theory' and actual 'physical energy'?
  • TheMadFool
    13.8k
    Claude Shannon's definition of information in terms of entropy doesn't gibe with surprise viewed as having something to do with free energy minimization. Right?

    Are these two the same thing though?
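
    They seem related but not identical, as far as I can tell. Using the standard textbook definitions (the weather distribution below is made up purely for illustration): Shannon's entropy is the expected surprisal of a source, while the "surprise" in the free-energy story is (a bound on) the surprisal of the sensory data under the agent's own generative model.

    ```python
    # Standard information-theory sketch (hedged: Friston's "surprise" is the
    # surprisal of sensations under the agent's model, not the source entropy).
    import math

    p = {"sunny": 0.7, "rain": 0.25, "snow": 0.05}   # invented predictive distribution

    def surprisal(prob):
        """Shannon surprisal in bits: how surprising a single outcome is."""
        return -math.log2(prob)

    entropy = sum(prob * surprisal(prob) for prob in p.values())  # expected surprisal

    for outcome, prob in p.items():
        print(f"surprisal({outcome}) = {surprisal(prob):.2f} bits")
    print(f"entropy (average surprisal) = {entropy:.2f} bits")
    # Entropy is the long-run average of surprisal, so the two quantities belong
    # to the same family without being the same thing.
    ```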
  • TheMadFool
    13.8k
    Most of the posts here seem therefore off topic.Banno

    I'm sure @Alexandre Harvey-Tremblay has something to say about this. He's of the opinion that if something, anything, isn't mathematizable, it's nonsense. I tend to agree, but in a broader sense - if the mathematics can't be rendered into ordinary language without weirdness à la quantum physics, then the mathematics must be nonsensical, right? It's only fair to think/say so, no?

    In all likelihood this is a case of poor analogy. A dark room is the quintessential state of unknowing - imagination runs wild and what happens is activation of a fear-driven explore mode and possibilities, possibilities, and more possibilities; in other words, uncertainty. Put simply, a dark room = information; it's overflowing with surprises.

    Shocking!
  • Kenosha Kid
    3.2k


    The pertinent parts of the article:

    Free energy, as here defined, bounds surprise, conceived as the difference between an organism’s predictions about its sensory inputs (embodied in its models of the world) and the sensations it actually encounters.

    and

    Under the free-energy principle, the agent will become an optimal (if approximate) model of its environment. This is because, mathematically, surprise is also the negative log-evidence for the model entailed by the agent. This means minimizing surprise maximizes the evidence for the agent (model). Put simply, the agent becomes a model of the environment in which it is immersed.
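
    For what it's worth, the standard way that relationship is usually written (this is the textbook variational form, not necessarily the article's exact notation; here s is sensory data, m the agent's model, and q an approximate posterior over hidden causes ϑ):

    \[
    F(s,q) \;=\; \underbrace{-\ln p(s \mid m)}_{\text{surprise}} \;+\; \underbrace{D_{\mathrm{KL}}\big[\,q(\vartheta)\,\|\,p(\vartheta \mid s, m)\,\big]}_{\ge\,0} \;\;\ge\;\; -\ln p(s \mid m),
    \]

    so minimising $F$ pushes down an upper bound on surprise, which is the same as pushing up the log-evidence $\ln p(s \mid m)$ for the model.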

    This is the notion of surprise I am (and, I think, we are) talking about. The brain models its environment based on past sensory input. When it finds something surprising, i.e. something the model could not predict, it rewards itself with a hit of dopamine. On an evolutionary scale, sure, we may have evolved the above to maximise evidence and so optimise that model. However, the answer to the question stands. There are other, better, more factual reasons why we fear dark caves.

    However I can tell that you have a very precise idea of how the mode of discussion should go, so I'll make this post my last. But for the record, the article does answer your question (read to the end).
  • apokrisis
    7.3k
    @Banno I think @apokrisis did a good job of answering this.I like sushi

    Nice of you to say so, but it was a hurried reply to a confused OP. I can do better. :smile:

    First, it is obvious that to be able to predict the world with minimal error is going to be a way to live longer. Or even more importantly - from a true Darwinian perspective - maximise your reproductive success. So that “mystery” is easily dealt with.

    Then the actual Darkened Room issue.

    Perhaps it is clearer in Friston’s more recent Markov Blanket reformulation of his arguments, but an enactive/semiotic approach to cognition is all about the coupling of the organism to its world by a cybernetic feedback loop of action and sensation.

    An organism’s actions on the world are a source of certainty. It is like a hypothesis that you intend to test: you can at least be certain of what you plan to do in terms of acting on the world.

    The world itself is then the source of surprise. While you act with the certainty of an intention that is going to make some change to the world, the world is coming back at you the other way as the cause of any sensory uncertainty.

    The trick is then to act in ways that only increase your certainty about the sensations you will experience. If the certainty of your actions effectively reduces the uncertainty of your sensations, then the two sides of the equation are tightly coupled in a way that optimises your ability to exist in the world.

    It is all you have to do. Minimise the surprises that would otherwise stop you smoothly meeting your needs as a living organism. Zero surprise means every wish is being effortlessly met. Sensory prediction error is used to calibrate habits of action. You are winning to the degree your plans for your future don’t encounter the unexpected.

    But an organism lives in the world. It exists because it can tame environmental uncertainty through its actions. It can feed itself, protect itself, reproduce itself, etc. It can act in ways that reduce the world’s uncertainty.

    So it doesn’t need to retreat to the refuge of a darkened room to escape the environment’s capacity to surprise. That move might seem to remove the source of sensory uncertainty, but it would also remove the certainty represented by the organism’s store of habits of action. The whole system of cognition would collapse. As it does in sensory deprivation conditions.
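
    A crude way to picture that coupling (a toy sketch only; the variables and update rates below are invented for illustration and are nothing like Friston's actual equations): perception revises the prediction using the error, while action nudges the world so that sensation comes to match the prediction.

    ```python
    # Toy action-perception loop in the spirit of the post (heavily simplified).
    # Perception updates the prediction from the sensory error; action changes
    # the world so that what is sensed comes to match what is predicted.
    import random

    random.seed(1)
    world = 30.0        # hidden environmental state (say, ambient temperature)
    prediction = 22.0   # what the agent currently expects to sense

    for step in range(1, 11):
        sensed = world + random.gauss(0.0, 0.3)   # noisy sensory input
        error = sensed - prediction               # prediction error (proxy for "surprise")

        prediction += 0.3 * error                 # perception: revise the model of the world
        world -= 0.2 * error                      # action: nudge the world toward the prediction

        print(f"step {step:2d}: sensed = {sensed:5.2f}, error = {error:+5.2f}")
    # The error shrinks step by step toward the sensor-noise floor: the certainty
    # of action soaks up the uncertainty of sensation without the agent having to
    # hide in a dark room.
    ```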
  • apokrisis
    7.3k
    When it finds something surprising, i.e. that the model could not predict, it rewards itself with a hit of dopamine.Kenosha Kid

    Not really. Finding your lost keys might be a pleasing surprise. A sudden increase in your world certainty. Spotting the lurking tiger is something different, a sudden increase in your world uncertainty.

    A dopamine hit locks in a goal state. You get tunnel focus on the natural next action of grabbing your keys. Dopamine fixes a habit of action - hence is associated with addiction.

    But an increase in uncertainty leads to a drop in serotonin and an increase in noradrenaline. You get hit by neuromodulators that cause you to cast around anxiously for some better predictive model of the world.

    So surprise is information uncertainty. But if you are looking for lost keys, you at least know they are somewhere and what you want them for. The unpredictable bit is where they will show up as an environmental sensation. The dopamine happiness is about being immediately back on track in a surprise minimised world.

    A lurking tiger is a much greater source of uncertainty. You didn’t predict it and you are not sure what is the best thing to do about it. The sensory surprise of seeing it doesn’t spell the end of your state of uncertainty but the start of it.
  • dimosthenis9
    846
    If biological systems, including ourselves, act so as to minimise surprise,Banno

    But we obviously don't, and neither do animals. Empirical observation makes this crystal clear. Somehow the article seems to alter the common definition of "surprise" so as to make it sound similar to "danger". Not all surprises are bad. And we humans especially are curious creatures by nature, so we mostly seek out surprises instead of avoiding them.

    But why is minimising surprise the very same as living longest?Banno

    It isn't.

    For me it is just one more example of how many people love "problems" and "paradoxes": creating them out of nowhere and complicating things unnecessarily, only so as to turn up later as "saviours" offering the "solution". Or maybe just to give their name to a "problem" or "paradox"? Who knows.
    I'm not talking about you here, but about the author of the article.

    Sorry, but I see no "dark room problem" at all here.
  • Kenosha Kid
    3.2k
    Not really. Finding your lost keys might be a pleasing surprise. A sudden increase in your world certainty. Spotting the lurking tiger is something different, a sudden increase in your world uncertainty.apokrisis

    Indeed:

    There are other, better, more factual reasons why we fear dark caves.Kenosha Kid

    and

    there is no general rule of aversion to surprise, nor is one needed to explain why people don't run at spikes, off cliffs, or into animal enclosures.Kenosha Kid

    The problem is in trying to model all human behaviour according to one general rule when in fact it is an interplay between many physical processes evolved at different times in different environments, some overriding. Our fear of lurking tigers _is_ quite different from our innate curiosity for the novel, and should be treated as such.
  • unenlightened
    9.2k
    As I recall, I was quite content in my darkened room until I was expelled from it by a nightmare squeezing that left me beached on a bloody sheet gasping for breath. Breath was the second surprise. Darkened rooms are unavailable for a longer lease than about 9 months. Thereafter, minimising surprise involves seeking out surprise, aka novelty, in order to familiarise oneself with it. I think this is known as "learning".

    Always keep a-hold of Nurse - for fear of finding something worse! — Hilaire Belloc

    Good advice, but impossible in the long run.
  • dimosthenis9
    846
    minimising surprise involves seeking out surprise, aka novelty, in order to familiarise oneself with it. I think this is known as "learning".unenlightened

    Well said.
  • Hanover
    12.9k
    The article was too long, but am I correct in interpreting that it says an experiment with mice showed the mice tried to avoid surprise, so that finding was theorized to be the driving force for all animal behavior, but when they looked at how animals actually behaved in the world, their theory proved shitworthy?
  • Kenosha Kid
    3.2k
    Thereafter, minimising surprise involves seeking out surprise, aka novelty, in order to familiarise oneself with it.unenlightened

    Did you hear about the vegan who tried to eat all the cows so there'd be no more cows to eat?
  • Hanover
    12.9k
    If we adapt to environment A, we will avoid B if we're not adapted to it because we'll not compete well there. That's why tigers don't find a nice warm cave to compete with the bats.
  • frank
    15.8k
    But an increase in uncertainty leads to a drop in serotoninapokrisis

    Serotonin is an H&N chemical, so that's expected.

    Dopamine got the nickname “the pleasure molecule” based on experiments with addictive drugs. The drugs lit up dopamine circuits, and test participants experienced euphoria. It seemed simple until studies done with natural rewards—food, for example—found that only unexpected rewards triggered dopamine release. Dopamine responded not to reward, but to reward prediction error: the actual reward minus the expected reward. That’s why falling in love doesn’t last forever. When we fall in love, we look to a future made perfect by the presence of our beloved. It’s a future built on a fevered imagination that falls to pieces when reality reasserts itself twelve to eighteen months later. Then what? In many cases it’s over. The relationship comes to an end, and the search for a dopaminergic thrill begins all over again. Alternatively, the passionate love can be transformed into something more enduring. It can become companionate love, which may not thrill the way dopamine does, but has the power to deliver happiness—long-term happiness based on H&N neurotransmitters such as oxytocin, vasopressin, and endorphin. It’s like our favorite old haunts—restaurants, shops, even cities. Our affection for them comes from taking pleasure in the familiar ambience: the real, physical nature of the place. We enjoy the familiar not for what it could become, but for what it is. That is the only stable basis for a long-term, satisfying relationship. Dopamine, the neurotransmitter whose purpose is to maximize future rewards, starts us down the road to love. It revs our desires, illuminates our imagination, and draws us into a relationship on an incandescent promise. But when it comes to love, dopamine is a place to begin, not to finish. It can never be satisfied. Dopamine can only say, “More.” -- The Molecule of More: How a Single Chemical in Your Brain Drives Love, Sex, and Creativity--and Will Determine the Fate of the Human Race, Lieberman
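
    The "reward prediction error" in that passage is just the signed difference between what arrived and what was predicted; a trivial sketch with made-up numbers:

    ```python
    # Reward prediction error as described in the quoted passage
    # (the numbers are invented for illustration).
    expected_reward = 5.0   # what the system predicted
    actual_reward = 8.0     # what actually arrived

    rpe = actual_reward - expected_reward   # positive: better than expected
    print(f"reward prediction error = {rpe:+.1f}")
    # A fully expected reward gives an error of zero, which is the book's point
    # about why the novelty-driven thrill fades once reality matches prediction.
    ```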
  • god must be atheist
    5.1k
    If biological systems, including ourselves, act so as to minimise surprise, then why don't we crawl into a dark room and stay there?Banno

    Dark rooms are threatening. They promise surprise. We don't know if there is a mountain bear in the cave, we don't see the scorpions and the snakes. Darkness does not decrease the surprise element; it increases it. We are not in control, because we don't see, or don't see the details well enough.

    We sleep in the dark because we are basically defenseless in both the dark and in our sleep. So we combine the two, marry the two, and get two birds stoned under one hat.
  • SophistiCat
    2.2k
    Here's an article that attempts to provide a summation of the thinking around this problem: Free-energy minimization and the dark-room problemBanno

    When I read that sentence I immediately thought of Friston (who is indeed the lead author). Sean Carroll had a podcast with him, where they touched upon the dark room (non-)problem, among other things. It's pretty complicated stuff (at least for someone with no relevant background) that's hard to grasp without getting into some details of information theory, probability, Markov blankets and all that. People shouldn't jump to conclusions based on a short paraphrase.

    It may be worth mentioning that the idea of prediction error (surprise) minimization and predictive processing in general has been kicking around in cognitive science for some time. Other notable people actively working on it are Andy Clark (of The Extended Mind) and Jakob Hohwy. Friston's particular contribution is in bringing the Helmholtz free-energy approach to bear on the problem, and then trying to extend it beyond cognitive science to living systems in general.

    The problem is in trying to model all human behaviour according to one general rule when in fact it is an interplay between many physical processes evolved at different times in different environments, some overriding.Kenosha Kid

    Sure, but also keep in mind that there can be multiple subsystems that can be described by that model, of varying complexity and operating concurrently on different timescales.
  • TheMadFool
    13.8k
    But at first sight this principle seems bizarre. Animals do not simply find a dark corner and stay there. — Linked Article

    This is typical of AI logic. Very much like antinatalism, which, as per @180 Proof, amounts to (paraphrasing) "destroying the village to save the village". Another instance is negative utilitarianism's riddle of whether we should kill everybody to reduce suffering. AI "thinks" exactly like this, but the problem is:

    For every complex problem there is an answer that is clear, simple, and wrong. — H. L. Mencken

    There seem to be two ways of minimizing surprises:

    1. To reduce possibilities. This is the dark room problem. AI (false) "solution".

    or

    2. Anticipate (correctly) which possibilities will actualize. One needs a good model of reality to do this. Human (non-AI) (real) solution.

    So whoever thought of the dark room problem is basically switching between AI logic and human logic - the dark room is a valid solution for an AI, but not for a human, nor, I suppose, for other animals.

    :chin:
  • TheMadFool
    13.8k
    Some insects and even animals prefer the dark...they scurry along until they're in the shadows. The dark room problem = Insect logic! = AI logic :chin:
  • NOS4A2
    9.3k
    Good article.

    To me, the problem lies in the utilization of these principles.

    From an information theory or statistical perspective, free-energy minimization lies at the heart of variational Bayesian procedures (Hinton and van Camp, 1993) and has been proposed as a modus operandi for the brain (Dayan et al., 1995) – a modus operandi that appeals to Helmholtz’s unconscious inference (Helmholtz, 1866/1962). This leads naturally to the notion of perception as hypothesis testing (Gregory, 1968) and the Bayesian brain (Yuille and Kersten, 2006). Indeed, some specific neurobiological proposals for the computational anatomy of the brain are based on this formulation of perception (Mumford, 1992). Perhaps the most popular incarnation of these schemes is predictive coding (Rao and Ballard, 1999). The free-energy principle simply gathers these ideas together and summarizes their imperative in terms of minimizing free energy (or surprise).

    The computational theory of mind and the conflation of brains and creatures invariably leads to such problems.
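
    For anyone who hasn't met the predictive coding mentioned in that passage, here is a one-level caricature (invented numbers; nothing like the full hierarchical scheme of Rao and Ballard): the model sends a prediction down, only the residual error comes back up, and the estimate absorbs it.

    ```python
    # Minimal flavour of predictive coding (a one-level caricature with invented
    # numbers). The model predicts the next input, only the unpredicted residual
    # is passed on, and the estimate is corrected by that residual.
    signal = [2.0, 2.1, 1.9, 2.0, 6.0, 2.05, 1.95]   # sensory stream with one "surprise"
    estimate = 0.0                                   # top-down prediction
    learning_rate = 0.5

    for x in signal:
        error = x - estimate                 # bottom-up: only the unpredicted part
        estimate += learning_rate * error    # top-down model absorbs the error
        print(f"input = {x:4.2f}, prediction error = {error:+5.2f}")
    # Errors on routine inputs shrink quickly; the outlier (6.0) produces a large
    # error, i.e. "surprise" in the technical sense used in the article.
    ```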
  • TheMadFool
    13.8k
    It's kind of a paradox.

    An insect crawls to a dark spot, in a room or outdoors, for the reason that it'll escape notice and thus face no nasty surprises (predators)! The insect does have an accurate model of reality - predators, quite literally, everywhere. The AI solution, "surprisingly", is a good one. Silly humans!

    Where's the paradox?, you ask.

    Well, the darkness in which an insect hides is, from the vantage point of another living organism, full of surprises. Juxtapose that with the concealed insect's state of fewer surprises.
  • apokrisis
    7.3k
    The problem is in trying to model all human behaviour according to one general rule when in fact it is an interplay between many physical processes evolved at different times in different environments, some overriding. Our fear of lurking tigers _is_ quite different from our innate curiosity for the novel, and should be treated as such.Kenosha Kid

    But Friston is creating a mathematically general theory of the modelling relation that distinguishes all bios from all a-bios. He is giving neuroscience its own proper physicalist foundation - Bayesian mechanics - to wean it off the Universal Turing Machine formalisms that want to treat the brain as a representing and simulating computer.

    When he talks of surprise, it is as a technical term within a new mathematical structure. He brings together many existing information theoretic concepts - surprisal, mutual information, free energy - under the one general set of equations. So the theory is broad enough to cover the mind of a bacterium as much as a human.

    First you find the common base principles of what a biotic modelling relation with the world is all about. Then you can start to worry about the complexities of the specific implementations.
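
    For reference, the usual textbook forms of the quantities he gathers together (these are the standard information-theoretic definitions, not Friston's full Bayesian-mechanics derivation; s is sensory data, ϑ hidden causes, q an approximate posterior):

    \[
    \begin{aligned}
    \text{surprisal:}\quad & -\ln p(s) \\
    \text{entropy:}\quad & H(S) = \mathbb{E}_{p(s)}\!\left[-\ln p(s)\right] \\
    \text{mutual information:}\quad & I(S;\Theta) = H(S) - H(S \mid \Theta) \\
    \text{variational free energy:}\quad & F = \mathbb{E}_{q(\vartheta)}\!\left[\ln q(\vartheta) - \ln p(s,\vartheta)\right] \;\ge\; -\ln p(s)
    \end{aligned}
    \]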
  • NOS4A2
    9.3k


    Perhaps the probability of being surprised in conditions where a creature is unable to use its senses overrides the probability of being surprised in conditions where it can.
  • TheMadFool
    13.8k
    Perhaps the probability of being surprised in conditions where a creature is unable to use its senses overrides the probability of being surprised in conditions where it can.NOS4A2

    :ok: To stay in the dark is to level the playing field. Yes, I can't see (you) but neither can you (see me)! Plus, I could be a predator for all you know, vice versa of course :grin: The darkness is brimming with possibilities - food/as food.

    It appears that the idea/point is not to reduce surprise but to increase it. :chin: