The better an organism can interpret the sensory input from its environment, the better it can adapt for survival, or do whatever is needed to live as long as possible. — Caldwell
No, there's no confusion as to what they mean by surprise. I provided the quotes where you could see this. The issue being raised is that there's an anomaly between how we expect agents to behave -- they should head to the dark room (in a manner of speaking) -- and how agents actually behave -- they don't seem to avoid surprises, or improbable (unlikely) events. I suppose, given enough rope, one might see the dark room problem as this very misunderstanding of "surprise": the technical term being confused with the common-sense one. — Banno
If free energy is the difference between expectation/prediction and actual sensory input, then the lower the free energy, the better the agent's prediction of its environment.

What is it about minimising free energy that is the same as fitting in to one's environment? — Banno
Yes, you are right, and I agree. Lower free energy implies adaptive fitness.

If free energy is the difference between expectation/prediction and actual sensory input, then the lower the free energy, the better the agent's prediction of its environment. — Caldwell
In fact, adaptive fitness and (negative) free energy are considered by some to be the same thing.
If biological systems, including ourselves, act so as to minimise surprise, then why don't we crawl into a dark room and stay there? — Banno
But why is minimising surprise the very same as living longest? — Banno
Most of the posts here seem therefore off topic. — Banno
Free energy, as here defined, bounds surprise, conceived as the difference between an organism’s predictions about its sensory inputs (embodied in its models of the world) and the sensations it actually encounters.
Under the free-energy principle, the agent will become an optimal (if approximate) model of its environment. This is because, mathematically, surprise is also the negative log-evidence for the model entailed by the agent. This means minimizing surprise maximizes the evidence for the agent (model). Put simply, the agent becomes a model of the environment in which it is immersed.
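The claim that minimizing surprise maximizes model evidence can be illustrated with a minimal numerical sketch. The distributions below are invented for illustration: expected surprise (the cross-entropy between the environment's true statistics and the agent's predictions) is lowest precisely when the agent's model matches the environment — which is the sense in which the agent "becomes a model" of its world.

```python
import math

# A toy environment: the true distribution over two observations.
# (These probabilities are assumed values, purely for illustration.)
environment = {"light": 0.8, "dark": 0.2}

# Two candidate agents, each embodying a predictive model of the environment.
good_model = {"light": 0.8, "dark": 0.2}   # predictions match the environment
poor_model = {"light": 0.3, "dark": 0.7}   # mismatched expectations

def expected_surprise(env, model):
    """Average surprise, -log p(o | model), under the true environment.

    This is the cross-entropy H(env, model); it is minimized when the
    model's predictions equal the environment's statistics, so lowering
    average surprise is the same as raising average log-evidence.
    """
    return -sum(p * math.log(model[o]) for o, p in env.items())

print(expected_surprise(environment, good_model))  # ≈ 0.500 nats
print(expected_surprise(environment, poor_model))  # ≈ 1.035 nats
```

The well-matched model incurs less average surprise, so an agent that adjusts its model to reduce surprise is driven toward the environment's actual statistics.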
Banno I think apokrisis did a good job of answering this. — I like sushi
When it finds something surprising, i.e. that the model could not predict, it rewards itself with a hit of dopamine. — Kenosha Kid
If biological systems, including ourselves, act so as to minimise surprise, — Banno
But why is minimising surprise the very same as living longest? — Banno
Not really. Finding your lost keys might be a pleasing surprise. A sudden increase in your world certainty. Spotting the lurking tiger is something different, a sudden increase in your world uncertainty. — apokrisis
There are other, better, more factual reasons why we fear dark caves. — Kenosha Kid
there is no general rule of aversion to surprise, nor is one needed to explain why people don't run at spikes, off cliffs, or into animal enclosures. — Kenosha Kid
Always keep-a hold of nurse - for fear of finding something worse! — Hilaire Belloc
minimising surprise involves seeking out surprise, aka novelty, in order to familiarise oneself with it. I think this is known as "learning". — unenlightened
Thereafter, minimising surprise involves seeking out surprise, aka novelty, in order to familiarise oneself with it. — unenlightened
But an increase in uncertainty leads to a drop in serotonin — apokrisis
If biological systems, including ourselves, act so as to minimise surprise, then why don't we crawl into a dark room and stay there? — Banno
Here's an article that attempts to provide a summation of the thinking around this problem: Free-energy minimization and the dark-room problem — Banno
The problem is in trying to model all human behaviour according to one general rule when in fact it is an interplay between many physical processes evolved at different times in different environments, some overriding. — Kenosha Kid
But at first sight this principle seems bizarre. Animals do not simply find a dark corner and stay there. — Linked Article
For every complex problem there is an answer that is clear, simple, and wrong. — H. L. Mencken
From an information theory or statistical perspective, free-energy minimization lies at the heart of variational Bayesian procedures (Hinton and van Camp, 1993) and has been proposed as a modus operandi for the brain (Dayan et al., 1995) – a modus operandi that appeals to Helmholtz’s unconscious inference (Helmholtz, 1866/1962). This leads naturally to the notion of perception as hypothesis testing (Gregory, 1968) and the Bayesian brain (Yuille and Kersten, 2006). Indeed, some specific neurobiological proposals for the computational anatomy of the brain are based on this formulation of perception (Mumford, 1992). Perhaps the most popular incarnation of these schemes is predictive coding (Rao and Ballard, 1999). The free-energy principle simply gathers these ideas together and summarizes their imperative in terms of minimizing free energy (or surprise).
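The variational idea the article describes — free energy as a quantity the system can actually evaluate, which bounds the surprise it cannot — can be shown concretely. The toy generative model below is invented for illustration: a binary hidden state, a binary observation, and an approximate posterior q. Free energy always sits at or above the surprise (negative log-evidence), and touches it exactly when q equals the true posterior.

```python
import math

# Toy generative model (assumed values, for illustration only):
# hidden state s ∈ {0, 1}, observation o ∈ {0, 1}.
prior = [0.5, 0.5]             # p(s)
likelihood = [[0.9, 0.1],      # p(o | s=0)
              [0.2, 0.8]]      # p(o | s=1)

def free_energy(q, o):
    """Variational free energy F = E_q[log q(s) - log p(o, s)].

    F = surprise + KL(q || p(s|o)), so F >= -log p(o), with equality
    when q is the exact posterior.
    """
    return sum(q[s] * (math.log(q[s]) - math.log(prior[s] * likelihood[s][o]))
               for s in range(2))

o = 0                                                          # observed datum
evidence = sum(prior[s] * likelihood[s][o] for s in range(2))  # p(o)
surprise = -math.log(evidence)

# Exact posterior p(s | o) by Bayes' rule.
posterior = [prior[s] * likelihood[s][o] / evidence for s in range(2)]

print(surprise)                    # ≈ 0.598 nats
print(free_energy(posterior, o))   # equals the surprise: bound is tight
print(free_energy([0.5, 0.5], o))  # ≈ 0.857: a worse q gives a looser bound
```

Perception, on this scheme, is the adjustment of q to lower F (inference); since F upper-bounds surprise, driving F down is the tractable stand-in for minimizing surprise itself.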
The problem is in trying to model all human behaviour according to one general rule when in fact it is an interplay between many physical processes evolved at different times in different environments, some overriding. Our fear of lurking tigers _is_ quite different from our innate curiosity for the novel, and should be treated as such. — Kenosha Kid
Perhaps the probability of being surprised in conditions where a creature is unable to use its senses overrides the probability of being surprised in conditions where it can. — NOS4A2