• Apustimelogist
    915
    they are principally unverifiable — Wolfgang

    No more than any other scientific theory. I mean, the reason predictive coding (as a specific machine learning architecture) became popular is that machine learning architectures were designed that describe actual neural responses. So this theory can be empirically evaluated as much as any other scientific theory, in the sense that you can build models and test them.
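
    To make "build models and test them" concrete, here is a toy sketch of the kind of thing I mean - a single predictive-coding unit whose prediction-error trace can be compared against recorded responses. It is illustrative only, not any particular published model, and the "recorded" data below are synthetic placeholders.

        # Toy predictive-coding unit (illustrative sketch, not a published model).
        # A latent estimate mu is updated to explain an incoming signal via prediction error;
        # the error trace is then compared against (here: synthetic) neural responses.
        import numpy as np

        rng = np.random.default_rng(0)

        def prediction_error_trace(signal, lr=0.1, sigma=1.0):
            """Return the prediction-error time course of a one-unit predictive coder."""
            mu = 0.0                      # current prediction (latent estimate)
            errors = []
            for x_t in signal:
                eps = (x_t - mu) / sigma  # precision-weighted prediction error
                mu += lr * eps            # update the prediction to reduce the error
                errors.append(eps)
            return np.array(errors)

        # Synthetic stimulus: a step change the unit has to adapt to.
        stimulus = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
        model_errors = prediction_error_trace(stimulus)

        # In a real test, model_errors would be compared against recorded neural responses.
        # Here the "data" are just the model plus noise, standing in for that comparison.
        recorded = model_errors + 0.1 * rng.standard_normal(100)
        print(f"model-data correlation: {np.corrcoef(model_errors, recorded)[0, 1]:.2f}")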

    Now, Friston's free energy principle is a mathematical principle that is unfalsifiable and much more general than any specific theory about the brain...

    but specific kinds or families of predictive processing models for what neurons and the brain do are obviously testable, which is what I am talking about in the first paragraph.

    From “organisms behave as if anticipating” it does not follow that they literally minimize epistemic uncertainty. — Wolfgang

    But as I already explained, this is a crucial point of Friston's theory. He is not saying that organisms intentionally set out to minimize uncertainty. He is saying that in order to exist, they have to look as if they are doing that. It doesn't matter how that is achieved, which is why his theory generalizes across many domains. Evolutionary natural selection can be framed as free energy minimization.

    Sliding between these domains without a strict translation rule is a category error. — Wolfgang

    At the highest level of generality, Friston's theory is more of a general mathematical principle, one that can be shown to hold for complicated systems regardless of the specific way they are described.

    In this paper, he makes a great deal of effort to connect his principle to physics - statistical, Newtonian and quantum mechanics - to emphasize the generality of the description as applying to random dynamical systems, of which fundamental physics might be seen as special cases.

    https://arxiv.org/abs/1906.10184

    The relationship between free energy minimization and stochastic systems was established even before Friston began developing his idea, e.g.:

    https://scholar.google.co.uk/scholar?cluster=17970774975628711245&hl=en&as_sdt=0,5&as_vis=1


    So the category error you accuse him of doesn't hold because the theory is much more general than you suggest.

    subjective experience — Wolfgang

    This seems to be your main issue, which is fine, because these theories aren't meant to solve the hard problem of explaining subjective experience.
  • Wolfgang
    95

    You raise several important points, but I think we are still talking at different levels of description.

    When I say that such theories are not verifiable, I don’t mean that one cannot empirically test specific predictive-processing models or machine-learning architectures inspired by them.
    Of course, such models can be tested — and many have been.
    But that is not the same as testing a theory of consciousness that builds upon them.
    A model that reproduces neural activity or perceptual prediction errors is not thereby a model of experience.
    It models signal processing, not consciousness.

    You are right that Friston’s Free Energy Principle is mathematically unfalsifiable.
    But that is precisely the problem.
    It is so general that any system can always be re-described as “minimizing free energy.”
    This generality immunizes it against falsification and therefore removes it from the domain of empirical science.
    That is not a technical issue, but exactly the category error at stake:
    mathematics, physics, and semantics are being fused into a pseudo-unity, although they belong to different descriptive domains.

    When Friston says that organisms must appear to minimize uncertainty in order to persist, this is a semantic paraphrase of a mathematical tautology:
    Any system that continues to exist must stabilize its internal states within certain bounds.
    That is trivially true — but it says nothing about consciousness or intentionality.

    The fact that the principle can be applied to statistical, Newtonian, or even quantum systems only demonstrates its formality, not its explanatory power.
    A formalism that fits neurons, molecules, and galaxies alike cannot, without an additional epistemic framework, explain experience or cognition.
    It simply re-describes physical regularities in probabilistic language.

    Hence, the category error remains:
    Mathematical structure ≠ physical mechanism ≠ phenomenal meaning.
    There is no bridge law between these levels — only semantic correlation.

    And this also reveals the deeper contradiction:
    Friston and his followers often claim that subjective experience (qualia) is not the target of the theory.
    But to describe the brain as a predictive, inferential, or representational system is already to invoke the phenomenal domain.
    You cannot speak of “prediction” or “inference” without presupposing a model that experiences something to be predicted or inferred.
    To exclude qualia while describing perception is therefore not a modest limitation — it is a category mistake.
    It eliminates the very phenomenon that the terms presuppose.

    In the same sense, to speak of a unified brain theory is necessarily to speak of a theory of consciousness, because it aims to integrate perception, action, memory, and self-organization — all of which are phenomenally defined.
    A “unified brain theory” that excludes consciousness is like a “unified theory of music” that excludes sound: mathematically coherent, but phenomenologically empty.

    The Free Energy Principle may serve as a metaphorical ordering principle for self-organization, but it is not, and never was, a theory of consciousness.

    This raises the simple but essential question:
    What is Friston’s theory actually for?
    What does it allow us to know that we did not already know?
    If it is neither empirically testable nor conceptually coherent, it remains a formal metaphor — a kind of mathematical cosmology of life that explains everything and therefore nothing.
    A theory that can be applied to all systems explains no system in particular; it produces not knowledge, but only a symbolic sense of connectedness.

    A real theory of consciousness must instead explain the transition from physical stability to autocatalytic self-reference — the formation of a causal core in which internal states are not only maintained but recursively interpreted.
    Only at that level — the level of life itself — can consciousness arise.

    If, as you say, Friston’s theory is not intended to explain subjective experience, then it is simply not a theory of consciousness — and should not be presented as such.
    It might then be classified as a general systems formalism describing the dynamics of self-stabilizing structures.
    That is perfectly legitimate, but it places the theory outside the epistemic scope of what is usually meant by a theory of consciousness.

    In that case, it explains neither the emergence of experience nor the relation between neural activity and awareness; it only offers a high-level metaphor for persistence and adaptation.
    But that is not consciousness — it is life in probabilistic notation.

    And here lies the fundamental dilemma:
    If a theory claims to explain consciousness, it faces the category error I have described.
    If it does not, it becomes irrelevant to the problem it is most often cited for.

    Either way, the result is the same:
    the Free Energy Principle is not wrong, but misapplied — a powerful mathematical metaphor mistaken for a theory of mind.
  • Apustimelogist
    915
    This generality immunizes it against falsification and therefore removes it from the domain of empirical science. — Wolfgang

    I don't think this matters if you treat it in the proper sense as a conceptualizing framework. If you can have testable theories at a lower level, then there's no issue. It's like criticizing mathematics for being unfalsifiable when that's not the point of mathematics. Mathematics can be used as a tool for the purpose of describing scientific theories.

    I think at the core, you are thinking about Friston's theory in terms of subjective experience. This is not what it is about in any sense.

    The category error is yours for thinking a theory or principle is about something that it is not intended to be about.

    You cannot speak of “prediction” or “inference” without presupposing a model that experiences something to be predicted or inferred. — Wolfgang

    I don't think this is true. You can describe a single neuron as doing predictive coding, and I don't think most people believe we need to ascribe it experience. You might say "prediction" or "inference" is the wrong word because in your head those words are somehow connected to qualia, but that is just semantics that has no bearing on the validity of the models or on what they are intended to do.

    This raises the simple but essential question:
    What is Friston’s theory actually for?
    What does it allow us to know that we did not already know?
    If it is neither empirically testable nor conceptually coherent, it remains a formal metaphor — a kind of mathematical cosmology of life that explains everything and therefore nothing.
    A theory that can be applied to all systems explains no system in particular; it produces not knowledge, but only a symbolic sense of connectedness.
    — Wolfgang

    It is a conceptual framework in which you can give things a formal description. I would say the benefit is conceptualizing how the world works, just like philosophy in general. Philosophy doesn't necessarily provide us with new knowledge about the world, but people use it to organize their concepts of the world in a self-consistent way. As a mathematical tool, it provides one choice of how to describe the systems one is interested in, much as in physics there are usually different formulations of the same theory. None of the formulations predict anything different, but they are different perspectives on the same thing.

    And I will emphasize that it is "conceptually coherent" in your specific sense of the phrase if we are not talking about qualia, because that's not what it's intended to do, and it's not what most neuroscientific theories are about. We are dealing with the easy, not the hard, problems of consciousness. In that sense, any of these theories from neuroscience can be fully consistent theories of consciousness (in the easy sense). They are not trying to solve the hard problem.

    A real theory of consciousness must instead explain the transition from physical stability to autocatalytic self-reference... — Wolfgang

    I think it is worth noting that there is obviously a continuum between inanimate things and living things, with no strict dividing line. It is then unlikely that there can be a single, unique theory of life or of consciousness, because any such theory would have to make arbitrary distinctions about where living things end and non-living things begin. Clearly, a theory that fully encompasses the continuum must be maximally general, but that doesn't mean it is mutually exclusive of other theories of decreasing generality. We shouldn't be looking for a single unique theory that explains everything. We need a plurality of tools that describe phenomena at various levels of generality, from the highest to the lowest.

    If it does not, it becomes irrelevant to the problem it is most often cited for. — Wolfgang

    Again, the category error is yours: the theory can be used, and many papers have been written using it to construct models that are even compared to data. The error is in thinking that it is meant not for those things but for explaining phenomenal consciousness. The theory is being used, in those papers, for all the things it is good at. It is not being misapplied; you are mislabeling it.
  • Wolfgang
    95

    You are right that a formal framework can serve as a useful tool.
    Mathematics itself is not falsifiable — but it does not make empirical claims.
    The Free Energy Principle (FEP), however, is not presented as a mere formalism; it is promoted as a scientific account of how organisms, brains, and even societies maintain their organization by “minimizing free energy.”
    The moment such a statement is made, it leaves the purely formal domain and enters the empirical one — and therefore becomes subject to falsification.
    Otherwise, it is not a scientific framework but a metaphysical one.

    This is exactly the issue identified by Bowers and Davis (2012), who described predictive processing as a “Bayesian just-so story”:
    a framework so flexible that any observation can be redescribed post hoc as free-energy minimization.
    A theory that can explain everything explains nothing.
    It becomes a formal tautology — a mathematical language searching for an ontology.

    The same problem appears in the well-known “Dark Room Argument” (Friston, Thornton & Clark, 2012).
    If organisms truly sought to minimize surprisal, they would remain in dark, stimulus-free environments.
    To avoid this absurdity, the theory must implicitly introduce meaning — assuming that the organism “wants” stimulation, “prefers” survival, or “seeks” adaptation.
    But these are semantic predicates, not physical ones.
    Hence, the principle only works by smuggling intentionality through the back door — the very thing it claims to explain.

    Even sympathetic commentators such as Andy Clark (2013) and Jakob Hohwy (2020) have admitted this tension.
    Clark warns that predictive processing risks “epistemic inflation” — the tendency to overextend a successful formalism into domains where its terms lose meaning.
    Hohwy concedes that FEP is better seen as a framework than a theory.
    But that is precisely the point:
    a framework that lacks clear empirical boundaries and shifts freely between physics, biology, and psychology is not a unifying theory — it is a semantic conflation.

    Your second point, that terms like prediction or inference can be used metaphorically for neurons, simply confirms my argument.
    If those terms are metaphorical, they no longer describe what they literally mean;
    if they are literal, they presuppose an experiencing subject.
    There is no third option.
    This is the very category error I referred to: a semantic predicate (inference, prediction, representation) applied to a physical process, as if the process itself were epistemic.

    To say that Friston’s theory is “not about qualia” does not solve the problem — it reveals it.
    Once you speak of perception, cognition, or self-organization, you are already within the phenomenal domain.
    You cannot meaningfully explain perception without presupposing experience; otherwise, the words lose their reference.
    A “theory of consciousness” that excludes consciousness is a contradiction in terms — a map with no territory.

    You also mention a continuum between life and non-life.
    I agree.
    But the decisive transition is not a line in matter; it is the emergence of autocatalytic self-reference —
    the moment a system begins to interpret its own internal states as significant.
    That is not a metaphysical distinction but a systemic one.
    And no equation of free energy can account for it, because significance is not a physical magnitude.

    To compare FEP with mathematics therefore misses the point.
    Mathematics is explicitly non-empirical; FEP oscillates between being empirical and metaphysical, depending on how it is defended.
    That is precisely what renders it incoherent.

    Finally, if — as you and others claim — the theory is “not about subjective experience,”
    then it should not be presented as a theory of consciousness at all.
    Otherwise, it becomes exactly what I called it before:
    a mathematical cosmology of life that explains everything, and therefore nothing.

    References

    Bowers, J. S., & Davis, C. J. (2012). Bayesian just-so stories in psychology and neuroscience. Psychological Bulletin, 138(3), 389–414.

    Friston, K., Thornton, C., & Clark, A. (2012). Free-energy minimization and the dark-room problem. Frontiers in Psychology, 3, 130.

    Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.

    Hohwy, J. (2020). The Self-Evidencing Brain. Oxford University Press.
  • 180 Proof
    16.1k
    ↪Mijin The so-called “hard problem” of consciousness must first be characterized properly, because it contains two fundamental category errors.

    The first is the one already mentioned: it conflates two descriptive levels
    — the physical and the semantic — and then asks how one could possibly “give rise” to the other. This question is not profound; it is ill-posed.

    The second is subtler: it assumes that mind must arise from matter, when in fact it arises from life.
    If you reduce a physical system, you end up with particles.
    If you reduce a living system, you end up with autocatalytic organization — the self-maintaining network of chemical reactions that became enclosed by a membrane and thus capable of internal coherence.
    That is the true basis of life: the emergence of a causal core within recursive, self-referential processes.
    — Wolfgang
    :100:

    From there, consciousness can be understood evolutionarily, not metaphysically.
    At the neurophysiological level, one might say that in associative cortical areas, sensory inputs converge and integrate into dynamic wholes.
    Through recursive feedback between higher and lower regions, the system begins to form something like a mirror of itself.
    When these integrated representations are re-projected onto the body map, they generate what we call feeling — the system’s own state becoming part of its model of the world.

    In that sense, consciousness is not something added to matter, nor an inexplicable emergence; it is the self-reflection of an autocatalytic system that has become complex enough to model its own internal causality.

    Of course, this is not a “solution” to the hard problem in the usual sense — because no such final solution exists.
    But it offers a neurophysiological direction that might lead toward a satisfactory description:
    not a metaphysical bridge between mind and matter, but a consistent account of how recursive, life-based systems can generate the conditions under which experience becomes possible.
    More or less the 'non-reductionist physicalist, embodied functionalism' story I tell myself too.

    But you think consciousness is real.
    — bert1

    I hear people talking about it all the time. Just not very meaningfully. And certainly not at all scientifically.
    — apokrisis
    :up: :up:
  • apokrisis
    7.6k
    When you speak of a “bridge mechanism,” you already presuppose that there is a level of description where semantics becomes physics. — Wolfgang

    Sure. The molecule must be a message. But it is also a molecule. And there must be a message.

    The issue is not semantics becoming physics. It is semantics regulating physics in a way that builds more of the semantics. An organism that lives as it entropifies in some purposeful and self-controlled fashion.

    But semantic reference, intentionality, or subjective experience are not additional physical phenomena that arise through complexity. They are descriptions that belong to a different epistemic domain. — Wolfgang

    Now you have reified things in the way Ryle criticises. Or at least you don’t see that you have jumped a level of semiosis to talk about the socially-constructed level of mind. Animals are conscious in an immediate and embodied fashion. Humans add self-consciousness as they regulate their consciousness through a collective, socialised notion of what it is like to have a mind with semantic reference, intentionality and subjective experience.

    These kinds of thoughts have no way of forming in the mind of an animal. But they are the jargon that a self-regulating human is expected to employ to regulate their behaviour as part of a higher-level social narrative.

    So connecting consciousness to neurobiology is one thing. Connecting self awareness as a narrative habit that can then regulate human behaviour according to socially-evolved norms is another thing.

    This is why neuroscience would not talk much about “consciousness” but about attentional processing and predictive modelling and the other functional aspects appropriate to a neurobiological level account. If you want a model of the stuff you are talking about, call in the social constructionist. The issue is about how words and numbers organise the human mind, not how neurons and genes organise the biological level of mind.

    So when you say that biophysics “has already provided the bridge,” I would say: it has provided the conditions of correlation, not the transition itself. What you call a “bridge” is in truth an interface of perspectives, not a mechanism. — Wolfgang

    Read the link I provided and you will see you are off the mark. The issue was how information could even affect entropy flows. The critical finding was that all the various physical forces converge to the same scale at the quasi-classical nanoscale and so can be switched from one form to another at “no cost” by a semantic network of molecular machinery.

    A cell was once thought of as just a bag of autocatalytic chemistry - toss in enzymes at the right moment and watch the metabolism run. But now the model has completely changed to a biosemiotic one.

    Modern philosophy has thus taken on the character of a stage performance. When David Chalmers, with long hair and leather jacket, walks onto a conference stage, the “wow effect” precedes the argument. Add a few enigmatic questions — “How does mind arise from matter?” — and the performance is complete.
    Koch and Friston follow a similar pattern: their theories sound deep precisely because almost no one can truly assess them.
    — Wolfgang

    But I was there. I had lunch with Chalmers and Koch the day they launched their projects of “the hard problem” and “the neural correlates of consciousness”. I quizzed them and continued to do so. I agree that each was shallow.

    And likewise, I spent time with Friston when he was just “the statistics guy”. I could see he was in a completely different class of seriousness. Which is why I object to your OP that couldn’t tell the two apart.

    The field of “consciousness studies” attracts every kind of crackpot and chancer. Everyone reasons, well they are “conscious” so already they must be an authority on the subject. It is thus important to know who is doing the serious work.

    The second is subtler: it assumes that mind must arise from matter, when in fact it arises from life.
    If you reduce a physical system, you end up with particles.
    If you reduce a living system, you end up with autocatalytic organization — the self-maintaining network of chemical reactions that became enclosed by a membrane and thus capable of internal coherence.
    That is the true basis of life: the emergence of a causal core within recursive, self-referential processes.
    — Wolfgang

    Yes I agree we need to reduce to the point where life begins. But now you illustrate the mistake of talking just in terms of the physics - cells as bags of metabolism - and ignoring the semantics that must somehow be organising the chemistry.

    The biophysical surprise is that the interface that produces this “organic state of being” is so thoroughly mechanical. A story of molecular switches, ratchets, clamps and motors. And that this molecular machinery is “mining” the possibilities of physics at the quantum level while existing in the midst of a battering thermodynamical storm.

    If you are still thinking of bags of autocatalytic chemistry, you are stuck back in the 1980s, when the issue was how to square that new complexity-theory model of self-organising physics with the kind of informational mechanism that a genome is - one needing some kind of semantic bridge to unlock that type of physical potential. Information needed to do less work if physics could organise itself. But it still had to do some absolutely critical work.

    Hence biosemiosis. The search for how self-information could connect to dissipative structure. And the answer has turned out to be molecular machinery doing quantum biology.
  • Wolfgang
    95

    I am no longer willing to tolerate your arrogance and condescension.
    Your comments have crossed the line between discussion and self-display,
    and they make it evident that you lack the philosophical and epistemological competence
    to address these questions in any meaningful way.

    Anyone who uses terms such as semantics, physics, and biosemiosis in a single breath
    without understanding their categorical separation should first study the foundations of philosophy
    before accusing others of misunderstanding.

    This conversation is over.
    There is no point in continuing with someone who mistakes opinion for knowledge
    and anecdote for argument.

    I nonetheless wish you well in your further search for clarity —
    perhaps philosophy itself may help before you misrepresent it again.

    A brief afterthought.
    Those who have truly shared a table with Chalmers, Koch, or Friston rarely feel the need to say so.
    Serious minds tend to argue ideas, not acquaintances.

    Authentic conversation leaves no room for name-dropping — it reveals itself through clarity, not proximity.

    Your rhetoric suggests admiration without understanding,
    and a desire to belong where genuine thought begins by standing alone.
  • apokrisis
    7.6k
    :lol: :lol: :lol:
  • Apustimelogist
    915
    The moment such a statement is made, it leaves the purely formal domain and enters the empirical one... — Wolfgang

    Well, no, imo, because it's a description of what the maths says. It's directly analogous to variational principles of least action in physics, which don't in themselves have empirical content because they are formal tools used to describe lots of different things in physics.

    https://en.wikipedia.org/wiki/Action_principles
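
    For comparison, the least-action schema being referred to is, in its standard form,

        \delta S \;=\; \delta \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt \;=\; 0,

    which on its own predicts nothing; the empirical content only arrives once a particular Lagrangian L is specified. The analogy I am drawing is that the FEP works the same way, with a particular generative model playing the role that L plays here.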

    a framework so flexible that any observation can be redescribed post hoc as free-energy minimization. — Wolfgang

    But this is the point. Free energy minimization gives you a framework in which you can write down the equations describing the conditions for a system to maintain its own existence over time. That might not be interesting for a rock, but I think it's quite interesting for more complicated self-organizing systems. It's a framework for describing what complex self-organizing systems do, like choosing to describe physical systems as following paths of least action.
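
    To be concrete about what "the equations" are (this is the standard textbook form, not a derivation of my own), the quantity in question is the variational free energy, an upper bound on surprisal:

        F(q, o) \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
                \;=\; D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big] \;-\; \ln p(o)
                \;\ge\; -\ln p(o).

    A system that keeps F low over time is, formally, keeping its observations within the set compatible with its continued existence. The generative model p(o, s) is exactly the part that a specific, testable domain-level theory has to supply.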

    If organisms truly sought to minimize surprisal, they would remain in dark, stimulus-free environments.
    To avoid this absurdity, the theory must implicitly introduce meaning — assuming that the organism “wants” stimulation, “prefers” survival, or “seeks” adaptation. But these are semantic predicates, not physical ones. Hence, the principle only works by smuggling intentionality through the back door — the very thing it claims to explain.
    — Wolfgang

    False. A system will have wants or preferences, or will seek things out, because if it doesn't it dies. And the point of the FEP is that any system that continues to exist looks as if it is modelling its environment. Things like preferences are an inherent part of that model and are necessitated by it. They come for free. Nothing needs to be smuggled in.
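
    To spell that out in the standard active-inference formulation (my gloss, not a quotation from Friston): preferences enter the generative model directly, as a prior over observations p(o | C), and policies \pi are scored by an expected free energy that decomposes, approximately, as

        G(\pi) \;=\; -\,\mathbb{E}_{q}\big[ D_{\mathrm{KL}}[\, q(s \mid o, \pi) \,\|\, q(s \mid \pi) \,] \big] \;-\; \mathbb{E}_{q}\big[ \ln p(o \mid C) \big],

    i.e. (negative) epistemic value plus (negative) pragmatic value. A dark room only scores well for a creature whose p(o | C) makes darkness unsurprising; for most organisms it does not, which is the usual reply to the dark-room argument.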

    Hohwy concedes that FEP is better seen as a framework than a theory. — Wolfgang

    Yes, and Friston sees it the same way; I have been saying that too. At the same time, what brains do in terms of constructing models and fulfilling an organism's predictions is clearly a corollary of being complicated systems that need extremely sophisticated forms of self-regulation in order to continue to survive. And we can use more specific, testable models of predictive processing or something similar to describe what brains do.

    it is a semantic conflation. — Wolfgang

    Disagree; it is a rigorous mathematical framework whose central, general claims are provable. Obviously, with regard to brains, it makes no specific predictions. But as a unifying theory of self-organization, it does exactly what it says on the tin, and it's impossible for it to be any more precise empirically, because the notion of a self-organizing system is far too general to have any specific empirical consequences. Exactly the same goes for a "general systems theory". Nonetheless, this theory fundamentally describes, in the most general sense, what self-organizing systems do, and it gives you a formal framework for talking about them which you can flesh out with your own specific models in specific domains.

    If those terms are metaphorical, they no longer describe what they literally mean;
    if they are literal, they presuppose an experiencing subject.
    — Wolfgang

    As I've said before, this doesn't make the theories invalid. Predictive models of neurons are testable and can replicate single-neuron responses. They do exactly what they should, and do it effectively.

    The issue is that you want them to describe something else. And that's fine, but no theory in neuroscience has ever claimed to explain subjective experience, nor is it trying to. That's not the most interesting part of neuroscience.

    That is precisely what renders it incoherent. — Wolfgang

    Nothing incoherent. It's a mathematical framework you can use as a tool to describe self-organizing systems, and it can be put to effective use for many purposes. This is an interesting one I have cited before:

    https://arxiv.org/abs/2502.21217

    I take the point that on its own the FEP is not a theory of consciousness (and I am talking in the easy sense, NOT about subjectivity), because it is too general, but I think it can have an important role in the hierarchy of different ways of describing, and different theories about, what brains do as living, self-organizing systems. And I think the generality is a positive, because it necessarily acknowledges that you will never find strict boundaries or dividing lines between conscious and non-conscious, or living and non-living, and I don't think you can have a full account of these things without that acknowledgement.

    Edit: spelling and crossing out
  • apokrisis
    7.6k
    Free energy minimization gives you a framework in which you can write down the equations describing the conditions for a system to maintain its own existence over time. That might not be interesting for a rock, but I think it's quite interesting for more complicated self-organizing systems. It's a framework for describing what complex self-organizing systems do, like choosing to describe physical systems as following paths of least action.

    ...as a unifying theory of self-organization, it does exactly what it says on the tin, and it's impossible for it to be any more precise empirically, because the notion of a self-organizing system is far too general to have any specific empirical consequences. Exactly the same goes for a "general systems theory". Nonetheless, this theory fundamentally describes, in the most general sense, what self-organizing systems do, and it gives you a formal framework for talking about them which you can flesh out with your own specific models in specific domains.
    — Apustimelogist

    Precisely. It is not a theory of consciousness but a meta-theory of self-organisation. And one large enough to encompass the meta-theories of self-organisation that had arisen already in the different metaphysical settings of physicalism and semantics. So Wolfgang is comparing apples and oranges. :up: