You are right that a formal framework can serve as a useful tool.
Mathematics itself is not falsifiable, but then it makes no empirical claims either.
The Free Energy Principle (FEP), however, is not presented as a mere formalism; it is promoted as a scientific account of how organisms, brains, and even societies maintain their organization by “minimizing free energy.”
The moment such a statement is made, it leaves the purely formal domain and enters the empirical one — and therefore becomes subject to falsification.
Otherwise, it is not a scientific framework but a metaphysical one.
This is exactly the issue identified by Bowers and Davis (2012), who warned against "Bayesian just-so stories" in psychology and neuroscience:
frameworks so flexible that any observation can be redescribed post hoc as optimal inference, and in the present case as free-energy minimization.
A theory that can explain everything explains nothing.
It becomes a formal tautology — a mathematical language searching for an ontology.
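To make the point concrete, here is what the formalism actually says in its standard presentation (writing x for hidden states, o for observations, p(x, o) for the generative model and q(x) for the approximate posterior):

$$ F(q, o) \;=\; \mathbb{E}_{q(x)}\!\big[\ln q(x) - \ln p(x, o)\big] \;=\; D_{\mathrm{KL}}\!\big[q(x)\,\|\,p(x \mid o)\big] \;-\; \ln p(o) \;\ge\; -\ln p(o). $$

Because the Kullback–Leibler term is non-negative, F is simply an upper bound on surprisal, -ln p(o), and the inequality holds for any q and any p whatsoever. Everything about organisms, goals, or meaning has to be supplied by interpretation, not by the mathematics.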
The same problem appears in the well-known “Dark Room Argument” (Friston, Thornton & Clark, 2012).
If organisms truly sought to minimize surprisal, they would remain in dark, stimulus-free environments.
To avoid this absurdity, the theory must implicitly introduce meaning — assuming that the organism “wants” stimulation, “prefers” survival, or “seeks” adaptation.
But these are semantic predicates, not physical ones.
Hence, the principle only works by smuggling intentionality through the back door — the very thing it claims to explain.
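Stated formally, and only as a sketch of the standard reading: surprisal is -ln p(o), which reaches its minimum of zero when the model assigns the observed outcome probability one. An agent whose generative model predicts darkness with near certainty therefore attains

$$ -\ln p(o_{\text{dark}}) \;\approx\; -\ln 1 \;=\; 0, $$

the best score available, simply by staying put. The standard rejoinder, already offered in Friston, Thornton and Clark (2012), is that real organisms do not expect dark rooms because their priors encode the states they must occupy to survive; and that is exactly the implicit appeal to preference just described.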
Even sympathetic commentators such as Andy Clark (2013) and Jakob Hohwy (2020) have admitted this tension.
Clark warns that predictive processing risks “epistemic inflation” — the tendency to overextend a successful formalism into domains where its terms lose meaning.
Hohwy concedes that FEP is better seen as a framework than a theory.
But that is precisely the point:
a framework that lacks clear empirical boundaries and shifts freely between physics, biology, and psychology is not a unifying theory — it is a semantic conflation.
Your second point, that terms like prediction or inference can be used metaphorically for neurons, simply confirms my argument.
If those terms are metaphorical, they no longer describe what they literally mean;
if they are literal, they presuppose an experiencing subject.
There is no third option.
This is the very category error I referred to: a semantic predicate (inference, prediction, representation) applied to a physical process, as if the process itself were epistemic.
To say that Friston’s theory is “not about qualia” does not solve the problem — it reveals it.
Once you speak of perception, cognition, or self-organization, you are already within the phenomenal domain.
You cannot meaningfully explain perception without presupposing experience; otherwise, the words lose their reference.
A “theory of consciousness” that excludes consciousness is a contradiction in terms — a map with no territory.
You also mention a continuum between life and non-life.
I agree.
But the decisive transition is not a line in matter; it is the emergence of autocatalytic self-reference —
the moment a system begins to interpret its own internal states as significant.
That is not a metaphysical distinction but a systemic one.
And no equation of free energy can account for it, because significance is not a physical magnitude.
To compare FEP with mathematics therefore misses the point.
Mathematics is explicitly non-empirical; FEP oscillates between being empirical and metaphysical, depending on how it is defended.
That is precisely what renders it incoherent.
Finally, if — as you and others claim — the theory is “not about subjective experience,”
then it should not be presented as a theory of consciousness at all.
Otherwise, it becomes exactly what I called it before:
a mathematical cosmology of life that explains everything, and therefore nothing.
References
Bowers, J. S., & Davis, C. J. (2012). Bayesian just-so stories in psychology and neuroscience. Psychological Bulletin, 138(3), 389–414.
Friston, K., Thornton, C., & Clark, A. (2012). Free-energy minimization and the dark-room problem. Frontiers in Psychology, 3, 130.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.
Hohwy, J. (2020). New directions in predictive processing. Mind & Language, 35(2), 209–223.