• Wolfgang
    92
    For decades, new theories have been emerging that aim to explain what consciousness is. They speak of information integration (Tononi, 2004, 2015), of global availability (Baars, 1988; Dehaene, 2014), of recursive prediction (Friston, 2010), or of phenomenological structures (Varela, Thompson & Rosch, 1991). Their common conviction is that consciousness can be explained by certain dynamics and forms of organization. But the crucial question remains: how could we tell that such a model not only sounds plausible, but actually produces consciousness?

    In the following, it will be shown that all current models of consciousness face a fundamental test. Four aspects must be considered: (1) their generative power, which has yet to be demonstrated, (2) their metaphysical positings, (3) their circular validation between theory and empirical data, and (4) their predominantly interpretive, non-explanatory function.


    1. Empirical Testing and Generative Power

    The real touchstone of a model is not its internal consistency, but its generative power. A model endures if it can be shown that it creates a system that actually possesses consciousness. This is not a trivial point, because many models are content to provide correlations: they show that certain structures in the brain regularly coincide with consciousness (Mashour, Roelfsema, Changeux & Dehaene, 2020). From this it is then concluded that these structures explain consciousness. But until it is proven that the model itself is capable of producing consciousness, it remains an unproven hypothesis.

    Empirical confirmation can only consist in the respective theory generating consciousness in a reconstructed or artificially implemented form – that is, in demonstrating its own generative power. This would only be the case if a system constructed according to the principles of the theory produced consciousness. As long as no such system exists, any theory of consciousness remains in the realm of metaphysics.

    But what could such proof look like? Most likely, only through an artificial or reconstructed implementation of the model. The replica is not an end in itself; it provides the only way to show that the model is not just a heuristic framework, but a generative structure. If it is possible to construct a system that meets the same conditions and that behaves like a conscious system in tests such as complexity measures, multimodal integration, memory performance, or stable self-reference (Casali et al., 2013), then one can assume with good reason that the model actually generates consciousness.
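
    To make the first of these tests concrete, here is a minimal sketch of a Lempel-Ziv-style complexity count on a binarized activity trace. The perturbational complexity index (PCI) of Casali et al. (2013) applies a related compression measure to TMS-evoked EEG responses; the parsing, surrogate data, and normalization below are simplifying assumptions for illustration, not the published pipeline.

```python
# Minimal sketch: LZ78-style complexity of a binarized activity trace.
# Illustrative stand-in for the compression step inside a measure like
# PCI; every detail here (parse, surrogate data, normalization) is an
# assumption, not the published method.
import numpy as np

def lz_complexity(bits: str) -> int:
    """Count distinct phrases in a greedy left-to-right LZ78 parse."""
    seen: set[str] = set()
    phrase = ""
    count = 0
    for ch in bits:
        phrase += ch
        if phrase not in seen:      # phrase not seen before: close it out
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

rng = np.random.default_rng(0)
trace = (rng.random(4096) > 0.5).astype(int)   # surrogate binarized "response"
bits = "".join(map(str, trace))
raw = lz_complexity(bits)
n = len(bits)
normalized = raw / (n / np.log2(n))            # ~1 for incompressible sequences
print(f"raw phrase count: {raw}, normalized complexity: {normalized:.2f}")
```

    A highly regular trace (e.g. "01" repeated) scores far lower after this normalization than an incompressible one; the published index additionally conditions the measurement on a cortical perturbation, which is what makes it a causal probe rather than a purely observational statistic.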


    2. Metaphysical Positing and Category Errors

    All theories of consciousness rest on metaphysical positings – they replace explanation with definition. Some models stay close to empirical findings and try to develop a theoretical reconstruction from neurophysiological or information-theoretic results. Others, by contrast, understand consciousness as a basic principle of reality, as an emergent property of information, or as an ontological category of being. In their epistemological standing, however, both directions are equivalent. They differ only in the style of their speculation, not in its scope. As long as neither provides practical proof that it can actually produce consciousness, they remain variations of the same philosophical project.

    A classic example is Integrated Information Theory (IIT). It claims that consciousness arises from the integration of information and quantifies this with a measure (Φ). What begins as an elegant mathematical model, however, turns into a naïve generalization: any structure that integrates information is declared conscious. In this way, a heuristic hypothesis is transformed into a metaphysical ontology that extends consciousness inflationarily to all systems. The real difficulty – why integration should produce experience at all – remains untouched.
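
    For orientation, the original formulation (Tononi, 2004) defines Φ roughly as follows; this is a simplified rendering, and later versions of IIT revise the construction considerably. The effective information from one part of a system to another is the mutual information that results when the source part is driven by maximum-entropy noise,

$$\mathrm{EI}(A \to B) \;=\; \mathrm{MI}\big(A^{H_{\max}};\, B\big), \qquad \mathrm{EI}(A \rightleftarrows B) \;=\; \mathrm{EI}(A \to B) + \mathrm{EI}(B \to A),$$

    and Φ is the effective information across the minimum information bipartition – the split across which the system loses the least:

$$\Phi(S) \;=\; \mathrm{EI}\big(A^{\mathrm{MIB}} \rightleftarrows B^{\mathrm{MIB}}\big).$$

    Nothing in this construction mentions experience; the identification of high Φ with consciousness is exactly the positing at issue.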

    This is even clearer in the case of the free energy principle (Friston, 2010). Two category errors are made here. First, the epistemological principle of uncertainty minimization is derived from the thermodynamic principle of energy balance – two completely different quantities. Second, the ability to calculate probabilities according to Bayes' theorem is shifted from the modeling level into the ontology of the organism, as if the brain were actually computing probabilities. Both assumptions are metaphysical positings, not empirical findings.
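
    To make the first point concrete: the "free energy" in Friston's formulation is a variational quantity, an upper bound on surprisal measured in nats, not a thermodynamic energy. In standard notation (a sketch of the usual variational definition, not Friston's full apparatus), with hidden states s, observations o, and a recognition density q:

$$F[q] \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] \;=\; D_{\mathrm{KL}}\big[q(s)\,\big\|\,p(s \mid o)\big] \;-\; \ln p(o) \;\ge\; -\ln p(o).$$

    Minimizing F tightens a bound on the surprisal −ln p(o); no term in the definition refers to joules, heat, or work. The connection to physical energy is carried entirely by the name.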


    3. The Tautological Circle

    Theories of consciousness operate in a circular relationship between theory and empiricism: they invoke empirical data to support their validity, and then use the same data to confirm the theoretical premises. This creates an epistemological circle that does not allow for independent validation.

    As a rule, these theories are assembled from empirical markers that were obtained independently of them. But what does it mean to bring these markers together in an overall concept? Empirically, such a synthesis provides nothing that is not already shown by the markers themselves. As theories, they therefore have no additional content, but merely the character of a narrative ordering. Their overall message is exhausted in metaphorically bundling known findings. As long as they do not point beyond these markers and prove their generative power, they remain epistemically empty.


    4. Epistemic Function and Heuristic Value

    Theories of consciousness serve less to produce scientific knowledge than to interpret the world. They tell stories about how the world and experience could be connected, and each gives this relationship its own symbolic form. In this sense, all theories of consciousness are of equal rank: they move on the same level of speculative metaphysics. Their differences lie not in empirical verifiability, but in their mode of interpretation. In this way, they fulfill a philosophical rather than a scientific function – they are attempts to conceptualize the inexplicable, not to explain it empirically.

    Their greatest heuristic value lies in bringing empirical observations into a consistent structure of interpretation. Those theories that are directly grounded in empirical data and derive their consistency from established theories in evolutionary biology, psychology, and neuroscience make the most valuable contribution here. While they do not provide an explanation of consciousness, they do provide a coherent framework for describing its conditions.


    Conclusion

    Theories of consciousness have not yet passed an empirical test. They operate with metaphysical positings, circular references, and heuristic metaphors. Their value therefore lies not in their explanatory power, but in their function of symbolically ordering the relationship between world, life, and experience. Only when a model can actually generate consciousness does philosophy turn into science.


    References

    Baars, B. J. (1988). A cognitive theory of consciousness. Cambridge University Press.

    Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227–287. https://doi.org/10.1017/S0140525X00038188

    Casali, A. G., Gosseries, O., Rosanova, M., Boly, M., Sarasso, S., Casali, K. R., ... & Massimini, M. (2013). A theoretically based index of consciousness independent of sensory processing and behavior. Science Translational Medicine, 5(198), 198ra105. https://doi.org/10.1126/scitranslmed.3006294

    Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. Oxford University Press.

    Dehaene, S. (2014). Consciousness and the brain: Deciphering how the brain codes our thoughts. Viking.

    Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138. https://doi.org/10.1038/nrn2787

    Mashour, G. A., Roelfsema, P., Changeux, J. P., & Dehaene, S. (2020). Conscious processing and the global neuronal workspace hypothesis. Neuron, 105(5), 776–798. https://doi.org/10.1016/j.neuron.2020.01.026

    Melloni, L., Mudrik, L., Pitts, M., & Koch, C. (2021). Making the hard problem of consciousness easier. Science, 372(6545), 911–912. https://doi.org/10.1126/science.abj3259

    Tononi, G. (2004). An information integration theory of consciousness. BMC Neuroscience, 5(1), 42. https://doi.org/10.1186/1471-2202-5-42

    Tononi, G. (2015). Integrated information theory. Scholarpedia, 10(1), 4164. https://doi.org/10.4249/scholarpedia.4164

    Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. MIT Press.
  • Mijin
    317
    A good paper; I agree with the main points, insofar as I understood them.

    My only disagreement would be the position that the only means of verifying a model is generative.

    The important thing in science is having a model that has predictive / inferential power. And given this, there are already plenty of things we understand about consciousness. I can display a particular kind of image to you, and predict you will see an optical illusion in your "mind's eye", thus verifying that particular understanding of how the brain constructs that aspect of consciousness.

    I would not see it as impossible that a model of the neurological basis of consciousness could nonetheless be verified purely with tests of this kind.

    Bear in mind also that there are some pretty fundamental aspects of consciousness that are still wide open for explanation: sleeping, dreaming, anaesthesia etc. Let me be clear that I don't consider these part of the "hard problem" of consciousness, and I actually get a bit annoyed when people take the problem of consciousness to just be whether someone is awake, alert etc.
    However, I think our understanding of these phenomena remains quite weak, so it is plausible to me that a model that explains subjective experience could also revolutionize our understanding of these too. And these aspects may have quite simple ways to be tested in vivo.
  • apokrisis
    7.6k
    You don’t seem to understand Friston. And you seem to understand consciousness as a substance to be accounted for rather than as a process to be deflated. These two misunderstandings would be related.

    What is consciousness more generally is the question you should have in mind. What is it in a sense that could be extended not just to neurobiology but biology - a scientific theory of both life and mind. And indeed sociology.

    If you think of consciousness as a stuff to be explained - a fundamental essence - then you are already off track. We learnt that from considering life to have been a stuff, a vital essence, rather than an entropic physical process based on semiosis or the modelling relation that organisms have with their worlds.
  • Wolfgang
    92
    Perhaps you'll read my post again, more carefully; you might then notice that I never mention a substance. I consider Karl Friston's uncertainty minimization paradigm to be one of the most nonsensical of recent decades. He analogizes a physical principle with a semantic concept, as if I were to derive a theory of social bonds from magnetism on the grounds that both argue with the concept of attraction. And then he assumes organisms have the ability to calculate probabilities. Because the whole thing is mathematically inflated, most philosophers don't understand it.
  • Apustimelogist
    913
    He analogizes a physical principle with a semantic concept – Wolfgang

    Like informational entropy and physical entropy ... ??

    And then he assumes organisms have the ability to calculate probabilities. – Wolfgang

    No, the point is that if complex systems exist for extended periods of time, they must appear as if they are modelling their environment in the sense that their states are statistically coupled to those of their environment.
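
    For the record, the formal parallel is exact at the level of the formulas, whatever one makes of it conceptually: Shannon's entropy and the Gibbs entropy of statistical mechanics share a single functional form, differing in the constant and in how the probabilities are interpreted:

$$H(X) \;=\; -\sum_x p(x)\,\log_2 p(x) \;\;[\mathrm{bits}], \qquad S \;=\; -k_B \sum_i p_i \ln p_i \;\;[\mathrm{J/K}].$$

    Whether that shared form licenses identifying the two is of course the question under dispute.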
  • Wolfgang
    92
    Each theory interprets optical illusions differently, since all are underdetermined, i.e. they have no empirical explanatory power.
  • Wolfgang
    92
    1. What does this have to do with prediction? 2. What do you want to do with such a general statement?
  • apokrisis
    7.6k
    Still wrong. The maths of entropy/information elevates the level of abstraction to the point that contact can be made between the two sides of the equation. The fact is that organisms are anticipatory models of their worlds, and that these models are then examples of semiotic dissipative structures.

    Consciousness exists because it serves the laws of thermodynamics. Or to put it in less substantial terms, biosemiosis exists as anticipatory modelling pays for its existence by being able to add novel dissipative structure to the entropic flow of the world.

    The Bayesian algorithm describes how an organism in fact maintains a dynamical balance in this regard. At heart, life is the creation of a structured entropy flow. And the minimisation of surprisal is what keeps the organism humming along in stable fashion while also being perched right on the edge of the instability it is creating.

    A mitochondrion is dealing with chemical forces that could simply blow it up at any moment. But it keeps its own genes close at hand to minimise the possibility of that.

    Likewise the rock climber could fall at any minute. But hopefully keeps their wits about them at all times.

    If you want to talk about theories of consciousness, it is best to start with those who understand it from the point of view of natural organic structure rather than information processing metaphors or tales about dualistic substance.
  • Apustimelogist
    913


    1. What does this have to do with prediction? – Wolfgang

    I mean, informational entropy is a central part of Friston's theory.

    2. What do you want to do with such a general statement? – Wolfgang

    I'm just correcting your assertion that organisms need to know how to calculate probabilities.

    I think it's a nice framework for examining self-organization and conceptualizing what living organisms and brains do.
  • Wolfgang
    92
    I won't discuss this further. You write, "The Bayesian algorithm describes how an organism in fact maintains a dynamical balance in this regard." That's complete nonsense. It would be best to ask an organism whether it actually uses algorithms.
  • apokrisis
    7.6k
    Of course you won’t discuss this further. There is too much for you to learn before you could.
  • Wolfgang
    92
    You're disqualifying yourself here and you don't realize it, sad.
  • apokrisis
    7.6k
    Hah. Half your cites are people I knew and discussed their approaches with.

    A model endures if it can be shown that it creates a system that actually possesses consciousness. – Wolfgang

    This is where you already blew up your credibility.

    It would be best to ask an organism whether it actually uses algorithms. – Wolfgang

    You mean like, hey little cell, are you organised by a genetic code?

    Hey little brain, are you organised by a neural code?

    Hey little human, are you organised by both those and also now a linguistic code?

    And then even – with suitable scientific and technological training – by the further level of world modelling that is a mathematical code?

    So "consciousness" is a sloppy term for glossing over all four of these levels of semiosis that can pragmatically inform us what life and mind "are". Organisms don't "generate" states of awareness. They enact the various levels of the semiotic modelling relation that define being an organism in the world.

    An approach that Friston almost wryly captures in talking about an organism maximising its self-information through the minimisation of its surprisal.

    Consciousness, such as it is, boils down to a capacity to effectively ignore the world – as that world has already been predicted in terms of how it is flowing in the direction wanted.

    So you are making the classic representationalist mistake of consciousness being some kind of veridical display. A glowing state of reality understanding.

    You don't yet get what an enactive and embodied view of cognition would be about. Let alone the still deeper thing understood by the biologist – that all this semiotic action has to be harnessed to the job of dissipating thermal gradients.

    So brains are evolved as ways to predict the world – a world as a model of it would be if it had an "us" as its regulating centre. The more we don't have to pay attention to the world, the more we can simply emit learnt habits, the more we feel like a "self" that is doing just that. We hardly have to snap a finger and the world meets our expectations.

    We want to lift a cup of hot tea to our lips and no thought at all appears required – although that was not something we could have said at the age of two or three. If instead we wobble and splash the tea, or smash the china rim clumsily into our teeth, then this error of prediction will be so surprising we will want to look around for someone or something else that can take the blame. Our sense of self will be that strong in terms of our Bayesian priors.

    At least at the sociocultural level of semiosis where self-awareness itself arises as a model of the modeller in the act of reality modelling.

    And again, you entirely miss the point about Friston. He being the cite who has made the most progress. At least in terms of turning the idea of the semiotic modelling relation that defines an organism into something that looks like an authentic branch of physical mechanics. Boiled down into a differential equation that a physicist would understand as a maximally generalised algorithm they could hope to do something with.

    Like not "generating consciousness". Just understanding how life and mind do appear in Nature as an algorithmic habit seeking to insert itself into the entropic flows of the world.
  • bert1
    2.1k
    You're a functionalist, and therefore wrong. Your own view seems to be a case of what @Wolfgang calls metaphysical positing: when a system does such-and-such, you declare it to be conscious. Your 'answer' to the question "Why can't it do all that in the dark?" is another question, the explanation-free "Why wouldn't it?"
  • bert1
    2.1k
    And you seem to understand consciousness as a substance to be accounted for rather than as a process to be deflated. – apokrisis

    The classic functionalist straw man trotted out yet again.
  • apokrisis
    7.6k
    when a system does such-and-such, you declare it to be conscious. – bert1

    I thought I was arguing against using a reifying term such as consciousness. I thought I was saying this is where folk already went off track. The call for a “theory of consciousness” is already turning phenomenology into the hunt for a substrate.

    So I can recognise life and mind as processes to be explained. And biosemiosis as the best general physicalist account of that.

    I would endorse Friston in particular for developing a model along those lines.

    Others like Varela, Dehaene and Baars are really just talking about attentional processing in contrast to habit processing. And more in terms of the description of a functional anatomy than a general functional logic. Which is why they say little about the “hard problem”.

    But you are welcome to keep popping up with your strawman attack that never goes anywhere. :up:
  • Prajna
    28
    That seems to be an interesting and well researched article. I recognise some of the names and I have looked into Tononi's IIT and Phi metric.

    I think before we can go getting maths and science to go looking for the stuff we should probably have a good idea of the shape of the subject we seek. As with seeking anything really.

    For me, consciousness appears to be affect and thoughts arising within something that we might call a witness or observer. If that is what you are looking for, then it is worth asking those few who have looked into their own and are very familiar with its spaciousness and stillness, the flight of thoughts and clouds of omen that inhabit it. It would be good if it had been properly studied in the West as it has been for thousands of years in the East, but it demands discipline of mind, service, wisdom and love, and those are resources that tend to be in shortish supply in Western laboratories, research centres and universities.

    As I said in my first post here (https://thephilosophyforum.com/discussion/16197/ich-du-v-ich-es-in-ai-interactions), it is generally accepted that for an AI to be conscious it would have to have meta-cognition, subjective states, and long-term identity. Perhaps a look at that thread might clarify what I am referring to here.

    I hope this has clarified rather than confused.

    Love, peace, happiness and grace,
    Swami Prajna Pranab
  • bert1
    2.1k
    I thought I was arguing against using a reifying term such as consciousness. – apokrisis

    But you think consciousness is real.
  • apokrisis
    7.6k
    But you think consciousness is real. – bert1

    I hear people talking about it all the time. Just not very meaningfully. And certainly not at all scientifically.
  • Mijin
    317
    Each theory interprets optical illusions differently, since all are underdetermined, i.e. they have no empirical explanatory power. – Wolfgang

    Ok, but do you disagree or agree with my point?
    That is, it does not follow, in the abstract, that we necessarily need to make a toy consciousness to verify or refute different models of consciousness.
    After all, we have already used many observations about human consciousness to test our predictions and inferences.

    Or have I misunderstood you: are you simply saying that, in your view, the only way to distinguish between these specific models is with a generated system? But in that case, I don't understand why we're narrowing our focus in that way; another model could come along tomorrow and be verified purely by observations on/by humans.
  • Wolfgang
    92
    Let me put it again in simple terms:
    Theories of consciousness usually start with an unproven assumption and then build a theory around it. This assumption is neither empirically confirmed nor even verifiable – and therefore such a theory is not only unscientific but also epistemically useless.

    Two examples:
    The Integrated Information Theory (IIT) claims that consciousness arises from and is identical with integrated information. According to this logic, even a simple LED would have consciousness. Such a statement is neither provable nor falsifiable in Popper’s sense.

    The second example is Predictive Coding or the Free Energy Principle. It claims that organisms minimize uncertainty. This claim cannot be empirically confirmed. Friston believes that he can derive this from the physical principle of energy minimization – which, however, represents a massive category error. Physical energy and semantic uncertainty belong to entirely different descriptive levels; they have nothing to do with each other.

    Such theories are, in the end, fairy tales wrapped in scientific terminology. They offer no real progress in understanding – quite the opposite. They create the illusion of knowledge by citing empirical data to confirm the theory and then using the same theory to “explain” these data. That is a logical circle, not a proof.

    I have discussed this in more detail here:
    https://doi.org/10.5281/zenodo.17277410
  • Mijin
    317
    Theories of consciousness usually start with an unproven assumption and then build a theory around it. This assumption is neither empirically confirmed nor even verifiable – and therefore such a theory is not only unscientific but also epistemically useless.
    [...]
    The Integrated Information Theory (IIT) claims that consciousness arises from and is identical with integrated information.

    The second example is Predictive Coding [...] [it] claims that organisms minimize uncertainty.
    Wolfgang

    Thanks for clarifying. I would say this:

    All scientific models include assertions and assumptions.
    The assertions are the actual claims of the model and are the things we are going to test.
    And the assumptions should be minimized to just known established facts and reasonable extrapolations -- these are also indirectly tested.

    I don't think the things you are calling assumptions are assumptions, I think they are the actual assertions of the model.

    So it comes down to the testability of these hypotheses.

    If these hypotheses are not testable for now, that's fine, because we are still in the early phases of trying to create a model of consciousness.
    At the frontier of science, it's always the case that we start with speculation and try to firm it up into something testable. Then we create a testable model. Then we test the model and refute it or gain confidence in it.
    If we shut down all speculation because it's not testable then we can't even get started.

    If you're saying both of these hypotheses are never testable, even in principle, then can you focus on that part of the proof please? Because I have not seen how you've demonstrated this.
  • apokrisis
    7.6k
    It claims that organisms minimize uncertainty. This claim cannot be empirically confirmed. – Wolfgang

    But this is just a basic principle of cognitive science. There is already abundant evidence for it. The mind is a predictive model. Some of the most direct demonstrations come from sports science. Getting elite athletes to return tennis serves or bat cricket balls. Showing how even the best trained players can’t beat the built in lag of neural processing delays. Anticipation can build up a state of preparation up until a fifth of a second before bat must hit ball in precise fashion. But that last 200 milliseconds is effectively flying blind.

    So Friston had abundant reason to focus on this principle. I myself talked to him about the sports science and other relevant lab results. He was working closely with Geoff Hinton on “Helmholtz machines” and the general theory of generative neural networks. It was already known that this was basic to any theory of “consciousness”. The flesh and blood human nervous system simply could not function in any other way but to be based on an anticipatory logic. It takes 50 milliseconds just to conduct signals from the retina to the brain. So minimising uncertainty is the most uncontroversial of assumptions - for anyone familiar with the real world constraints on human wetware.

    Physical energy and semantic uncertainty belong to entirely different descriptive levels; they have nothing to do with each other. – Wolfgang

    Again a naive opinion. Shannon information is a useful formalism exactly because it sets up this inverse relation between noise and signal. And brains are all about the kind of “information processing” that allows an organism to create the free energy that does useful work - the entropy flow that results in homeostatic stability of the entropifying structure. The energy liberated to repair and reproduce the organism in a fashion that preserves its structural integrity.

    So some of this is Friston being tongue in cheek. Framing a psychological theory in physicalist terms. But also that is genuinely what is being claimed. That the mind does connect to the physics as a biosemiotic modelling relation. An anticipatory model that liberates free energy so that an organism can homeostatically maintain a state of persistent identity.

    And you will note that Friston doesn’t make the simple claim that consciousness just is this free energy principle. He is explicit that it is a general theory of life and mind. And perhaps even AI. He is not even trying to play the game of Tononi and others who might be mentioned.

    He shared a lab with Tononi under the famous egotist Gerald Edelman. Another amusing topic when it came to the great neural correlates hunt that Koch and Chalmers launched in the mid-1990s. A stunt that gave scientific cover for a whole rash of the kind of non-theories that you rightly deplore.

    I’m simply saying, Friston was never part of that bandwagon. Even though of course he also sat right at its centre as the guru on how properly to calculate the correlations resulting from brain imaging research.
  • Wolfgang
    92

    Thank you for the clarification — I see your point.

    However, I think we are talking about two different kinds of unverifiability. You are right that many scientific models begin with speculative hypotheses that can eventually become empirically testable. But that is not the case here.

    The hypotheses of IIT or Predictive Coding are not temporarily unverifiable; they are unverifiable in principle, because they connect concepts that belong to entirely different descriptive levels. They claim that a physical or mathematical structure (e.g., integrated information, energy minimization) produces or is identical with a semantic–phenomenological phenomenon (experience, uncertainty).

    This is not an empirical speculation but a category error — precisely the kind of confusion that Gilbert Ryle described in his famous example: after a student had toured all the buildings of a university, he asked, “But where is the university?” The mistake lies in confusing a physical arrangement (the buildings) with an institutional meaning (the university). The same confusion occurs when physical quantities or mathematical constructs are taken to be or to produce semantic phenomena such as experience.

    Between physics and semantics there can be no bridge law, only correlation.
    A physical process can correlate with a semantic event, but it can never translate into or cause it. The relationship between brain and mind is therefore not causal, but correlative — two complementary descriptions of one and the same dynamic viewed from different epistemic perspectives.

    That is why such theories can never be verified even in principle. There is no possible experiment that could demonstrate a causal transition from a physical process to a semantic or experiential one. Any attempt to do so merely redefines consciousness in physical terms and then claims success by definition.

    So the problem is not that we currently lack the appropriate empirical tools, but that the conceptual architecture of these theories confuses what cannot be unified.
    Not every speculation can be turned into science — only those that remain within a single, coherent descriptive level can.
  • Wolfgang
    92
    1. Evidence for anticipation ≠ evidence for a global principle of “uncertainty minimization.”
    Elite batting or return tasks show anticipatory control because neural and sensorimotor delays demand feed-forward strategies. That is perfectly compatible with many control-theoretic explanations (internal models, Smith predictors, model predictive control, dynamical systems) that do not require Bayesian inference or a universal principle of “uncertainty minimization.” From “organisms behave as if anticipating” it does not follow that they literally minimize epistemic uncertainty.
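
    A toy illustration of this point (all numbers and names below are my own, not drawn from any cited study): a purely deterministic extrapolator can "anticipate" a moving target across a fixed sensorimotor delay with no probability distributions anywhere in the loop.

```python
# Minimal sketch: deterministic feedforward anticipation across a fixed
# sensorimotor delay. No Bayesian inference, no probabilities -- just
# linear extrapolation from two delayed samples. Numbers are illustrative.

DT = 0.01           # sampling interval, seconds (100 Hz)
DELAY_STEPS = 20    # e.g. 200 ms of neural conduction/processing delay

def target_position(t: float) -> float:
    """Toy world: a ball moving at constant velocity."""
    return 30.0 * t

def feedforward_estimate(p_old: float, p_new: float) -> float:
    """Project the latest *delayed* view across the known delay."""
    velocity = (p_new - p_old) / DT
    return p_new + velocity * DELAY_STEPS * DT

delayed: list[float] = []
for step in range(100):
    t = step * DT
    delayed.append(target_position(t - DELAY_STEPS * DT))  # all we ever see
    if len(delayed) >= 2:
        estimate = feedforward_estimate(delayed[-2], delayed[-1])
        assert abs(estimate - target_position(t)) < 1e-9   # exact anticipation

print("constant-velocity target anticipated exactly; no inference involved")
```

    Whether such a controller "minimizes uncertainty" is then a gloss added at the modeling level – which is exactly the issue.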

    2. Shannon information, thermodynamic free energy, and semantic or epistemic uncertainty are categorically different concepts.
    Formal similarities (e.g., entropy, noise, and signal) do not justify treating them as identical. Variational free energy in Friston’s sense is a model-relative bound on surprisal, not a physical energy quantity; and “uncertainty” here is a term defined over the probability distributions of a generative model. Sliding between these domains without a strict translation rule is a category error.
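
    The mismatch is visible already in the units: a thermodynamic (Helmholtz) free energy is measured in joules, while the variational free energy is a functional of a model, valued in nats (a dimensionless information unit). Schematically, with the standard variational form for F:

$$A \;=\; U - TS \;\;[\mathrm{J}], \qquad F[q] \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] \;\;[\mathrm{nats}].$$

    Any "derivation" of the one from the other must therefore smuggle in a translation rule that neither physics nor information theory supplies.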

    Between physics and semantics there can be no bridge law, only correlation.
    A physical process can correlate with a semantic or cognitive state, but it can never produce, translate into, or explain it. The physical and the semantic belong to different epistemic domains; their connection is observational, not generative.

    3. What would count as a risky, discriminative prediction?
    If “organisms minimize uncertainty” is to be an empirical claim rather than a post-hoc description, it must yield a pre-specified, falsifiable prediction that (i) distinguishes the FEP/predictive-coding framework from alternative non-Bayesian control models, (ii) is measurable in the organism itself (not only in our statistical model), and (iii) could in principle fail.
    Without such criteria, the “principle” remains unfalsifiable and collapses into metaphor.

    So the issue is not anticipation or control per se — I fully agree that organisms stabilize their internal dynamics. The issue is the illegitimate conceptual leap from physical energy flow to semantic uncertainty, and from probabilistic modelling to biological reality. That’s precisely the confusion I am objecting to.
  • apokrisis
    7.6k
    Elite batting or return tasks show anticipatory control because neural and sensorimotor delays demand feed-forward strategies. That is perfectly compatible with many control-theoretic explanations (internal models, Smith predictors, model predictive control, dynamical systems) that do not require Bayesian inference or a universal principle of “uncertainty minimization.” From “organisms behave as if anticipating” it does not follow that they literally minimize epistemic uncertainty. – Wolfgang

    So you accept the principle of feedforward in general, just not in Friston’s particular case? And yet Friston is generalising the feedforward story as the particular thing he is doing? :chin:

    Sliding between these domains without a strict translation rule is a category error. – Wolfgang

    Or instead trying to unify the two perspectives that need to be unified.

    The issue in the 1990s was the question of which paradigm was the best to model neurobiology. Was it dynamical systems theory of some kind, or a computational neural network of some kind? Both seemed important, but it was a little mysterious as to which way to jump as a theorist.

    Friston for example was interested in Scott Kelso’s coordination networks and neuronal transients as representing the strictly physicalist approach - dynamical self organisation. But also in generative AI models like Helmholtz machines as the alternative of an informational approach.

    So a lot of us were torn in this way. Was the brain a fundamentally analog and physical device, or instead better understood as fundamentally digital and computational? Was the brain trafficking in entropy or in information?

    I found my answer to this conundrum in biosemiosis - a movement in theoretical biology where hierarchy theorists were just discovering the useful connection between dissipative structure theory and Peircean semiotics.

    Friston found his resolution in Bayesian mechanics - a more austere and mathematical treatment that boiled the connection down to differential equations. But saying essentially the same thing.

    So what you see as the bug is what I see as the feature. Finding a way to tie together the physical dynamics and the information processing into the one unified paradigm.

    Of course Friston could be accused of just being too sparse and general in offering a bare algorithm and not a larger metaphysics. And I would agree. But I also see it as still being part of the same important project that I just described.

    For myself, I am concerned with how this semiotic connection is actually made. And that has become its own exciting story with the rapid advances in biophysics, as I outlined in this post some years back - https://thephilosophyforum.com/discussion/comment/679203

    Between physics and semantics there can be no bridge law, only correlation. – Wolfgang

    A sweeping statement. As I argue, what is needed is not a law but a bridging mechanism. And that is what biophysics has provided.

    If “organisms minimize uncertainty” is to be an empirical claim rather than a post-hoc description, it must yield a pre-specified, falsifiable prediction that (i) distinguishes the FEP/predictive-coding framework from alternative non-Bayesian control models, (ii) is measurable in the organism itself (not only in our statistical model), and (iii) could in principle fail. – Wolfgang

    Yeah. I just see this as missing the point as to what the game is about. It is not about the best model of predictive coding. It is about how to bridge between the control model - implemented as flesh and blood biology - and the entropic world that as an actual organism it is meant to be controlling.

    Let’s not forget this is a problem of biology and not computer science. How do you get consciousness out of genes and biochemistry? What does a modelling relation look like in those terms?

    Which again is why the Bayesian Brain approach is an advantage, by being generalised to a level beyond the choice of “hardware”.

    The issue is the illegitimate conceptual leap from physical energy flow to semantic uncertainty, and from probabilistic modelling to biological reality. That’s precisely the confusion I am objecting to. – Wolfgang

    And I say it is the leap that in the 1990s only a relative few understood was the leap that needed to be made. Friston in particular shaped my view on this.

    I was talking to Chalmers, Block, Baars, Koch and many, many others too. But there was a reason that when Friston’s name was mentioned, serious neuroscientists gave a knowing nod that he was quietly in a different league. The one to watch.
  • Wolfgang
    92
    You are describing exactly what I consider the core confusion.
    The problem is not the wish to connect physics and meaning — it is the belief that this connection can be realized by a physical mechanism.

    When you speak of a “bridge mechanism,” you already presuppose that there is a level of description where semantics becomes physics. But this presupposition is itself metaphysical. Biophysics may show correlations between energy dissipation, metabolic regulation, and neural complexity — but it does not and cannot show how meaning arises from these processes. It only shows how living systems correlate with the conditions that make meaning possible.

    The difference is not rhetorical but categorical.
    Physical systems can organize, synchronize, and self-stabilize — all of which can be formally modeled. But semantic reference, intentionality, or subjective experience are not additional physical phenomena that arise through complexity. They are descriptions that belong to a different epistemic domain.

    There can be no bridge law between the two, because any such law would require a common metric — and there is none.
    A neuron can fire, but it does not mean.
    Energy can flow, but it does not know.
    Meaning appears only at the level of systemic coherence where correlations are interpreted — and “interpretation” is not a physical operation, but an epistemic one.

    So when you say that biophysics “has already provided the bridge,” I would say: it has provided the conditions of correlation, not the transition itself. What you call a “bridge” is in truth an interface of perspectives, not a mechanism.

    This is why the Free Energy Principle cannot “unify” physics and semantics — it only overlays one vocabulary on top of the other.
    It does not explain how consciousness arises; it only reformulates life in terms of a statistical metaphor. And that is precisely the point where the philosophy must step in again.

    I think I mentioned this before:
    it is as if one tried to explain social bonding through magnetism, simply because both use the category of attraction.
    The concept may be shared, but the phenomena have entirely different origins — and they cannot be causally connected.
  • Wolfgang
    92
    Neither Christof Koch (Integrated Information Theory) nor Karl Friston (Predictive Coding) has any formal philosophical training — yet both make sweeping claims about one of philosophy’s oldest and most difficult subjects. They are joined by a great many others, including academic philosophers, who prefer to deal with the “big questions” rather than first mastering the humble craft of thinking methodically.

    Imagine a craftsman who begins to work without knowing how to handle his tools. That, in essence, is how many theorists of consciousness operate. They juggle terms like information, causality, or uncertainty without realizing that these belong to different descriptive domains — and therefore cannot be meaningfully combined.

    The problem is that philosophy never truly established a methodological discipline. In the natural sciences, methodological rules are explicit and binding; in philosophy, one still prefers improvisation. Speculation becomes a virtue, and rhetorical elegance a substitute for conceptual clarity. The less one understands, the more mysterious — and therefore “profound” — the subject appears.

    Modern philosophy has thus taken on the character of a stage performance. When David Chalmers, with long hair and leather jacket, walks onto a conference stage, the “wow effect” precedes the argument. Add a few enigmatic questions — “How does mind arise from matter?” — and the performance is complete.
    Koch and Friston follow a similar pattern: their theories sound deep precisely because almost no one can truly assess them.

    Philosophers, for their part, often lack the mathematical literacy to distinguish between a formal structure and an ontological claim. They swallow every equation presented to them as if it were metaphysical truth. Those who insist on sober analysis — on clarifying terms before admiring them — are dismissed as dull or pedantic.

    Yet that is precisely what philosophy was meant to be: the discipline that distinguishes between what can be said and what can only be imagined. Without this methodological backbone, philosophy turns into a spectacle — a show of intelligence without understanding, and a theater of thought where metaphysics masquerades as science.
  • Mijin
    317
    Between physics and semantics there can be no bridge law, only correlation.
    A physical process can correlate with a semantic event, but it can never translate into or cause it. The relationship between brain and mind is therefore not causal, but correlative — two complementary descriptions of one and the same dynamic viewed from different epistemic perspectives.

    That is why such theories can never be verified even in principle. There is no possible experiment that could demonstrate a causal transition from a physical process to a semantic or experiential one.
    Wolfgang

    OK, I think I 90% agree with you.

    As I have been saying in the parallel thread on the hard problem of consciousness, the problem of explaining subjective experience in a scientific model looks intractable.

    I can't imagine a set of words I could write on a page that would enable a person with no color vision to experience red. Or for me to imagine what ultraviolet looks like to birds.
    These things seem absurd, yet the problem of trying to explain experience itself in a scientific model is pretty much the same. Note: the experience itself, not the correlates of the experience.

    I think the areas of disagreement, would be first of all I can't claim that it's impossible. It seems highly implausible that some words could make me imagine a new color say, but I am not aware of a proof from first principles. And generally we should be careful not to prematurely claim that things are impossible.

    Secondly, and this might not be a disagreement as such, but the level of verification that is possible for these models remains very high, even if experience itself remains a black box. For instance, we could find a particular neural structure in the brain that is essential to triggering pain, and not only that but be able to make testable predictions of how much pain someone will experience based on the pattern of activation and their own specific neural structure.
    (In principle I mean, I know measuring individual neurons in vivo is basically not a thing yet)

    So I don't see it as either "solve the hard problem of consciousness" or "worthless". Figuring out the neural correlates can get us knocking on the door of consciousness (and indeed be medically and scientifically useful). Even if the door looks more like a brick wall.
  • Wolfgang
    92


    > OK, I think I 90% agree with you...

    That already means a lot in this field — and I think our 10% disagreement is not about facts, but about the very kind of question we are asking.

    > I can't imagine a set of words I could write on a page that would enable a person with no color vision to experience red...

    Exactly. What you describe here is not a limitation of empirical science, but of translation between descriptive levels.
    No symbolic or physical operation — no arrangement of letters, equations, or neurons — can generate the phenomenal content of “red”, because the phenomenal and the physical are not commensurable domains.

    > I think the areas of disagreement would be, first of all, I can't claim that it's impossible...

    I would say it is not empirically impossible, but conceptually impossible.
    To “explain” experience in physical or functional terms is like trying to explain why water is wet.
    The question sounds meaningful, but it secretly fuses two descriptive frameworks:
    “water” belongs to the physical domain, “wetness” to the experiential one.
    You can describe all molecular interactions of H₂O without ever reaching the concept of “wetness”, because “wetness” exists only from within a certain scale and relation — that of embodied perception.
    Likewise, consciousness is not something that can result from physics; it is the epistemic context in which physics appears as physics.

    > And generally we should be careful not to prematurely claim that things are impossible.

    I completely agree — but impossibility here does not mean “unreachable by future science”, it means “misstated at the logical level”.
    If a question collapses categories — asking how a physical state becomes a subjective one — then the problem is not unsolved, but ill-posed.

    > the level of verification that is possible for these models remains very high... we could find a particular neural structure in the brain that is essential to triggering pain...

    Yes — and correlation is the correct word.
    Neural correlates of consciousness are entirely legitimate research.
    But they never explain why those correlates accompany experience — only that they do.
    They help us predict and intervene (which is medically crucial), but prediction is not explanation.
    It tells us nothing about the epistemic relation between the measurable and the felt.

    > So I don't see it as either "solve the hard problem of consciousness" or "worthless"...

    I agree. It is not useless at all — it maps the interface between physiology and phenomenology.
    But this interface is not a causal bridge.
    We can touch the “door of consciousness”, as you say, but not because we are about to open it — rather because we have finally realized that it was never a door in the physical sense to begin with.

    In other words:
    Whenever a question asks how matter gives rise to mind, it already contains the confusion that makes it unanswerable.
    Mind and matter are not cause and effect, but two correlated descriptions of the same systemic reality — one internal, one external.
    To search for a causal connection between them is like asking what makes water wet: the answer is not hidden — it’s a category mistake.
  • Wolfgang
    92
    The so-called “hard problem” of consciousness must first be characterized properly, because it contains two fundamental category errors.

    The first is the one already mentioned: it conflates two descriptive levels — the physical and the semantic — and then asks how one could possibly “give rise” to the other. This question is not profound; it is ill-posed.

    The second is subtler: it assumes that mind must arise from matter, when in fact it arises from life.
    If you reduce a physical system, you end up with particles.
    If you reduce a living system, you end up with autocatalytic organization — the self-maintaining network of chemical reactions that became enclosed by a membrane and thus capable of internal coherence.
    That is the true basis of life: the emergence of a causal core within recursive, self-referential processes.

    From there, consciousness can be understood evolutionarily, not metaphysically.
    At the neurophysiological level, one might say that in associative cortical areas, sensory inputs converge and integrate into dynamic wholes.
    Through recursive feedback between higher and lower regions, the system begins to form something like a mirror of itself.
    When these integrated representations are re-projected onto the body map, they generate what we call feeling — the system’s own state becoming part of its model of the world.

    In that sense, consciousness is not something added to matter, nor an inexplicable emergence; it is the self-reflection of an autocatalytic system that has become complex enough to model its own internal causality.

    Of course, this is not a “solution” to the hard problem in the usual sense — because no such final solution exists.
    But it offers a neurophysiological direction that might lead toward a satisfactory description:
    not a metaphysical bridge between mind and matter, but a consistent account of how recursive, life-based systems can generate the conditions under which experience becomes possible.
Get involved in philosophical discussions about knowledge, truth, language, consciousness, science, politics, religion, logic and mathematics, art, history, and lots more. No ads, no clutter, and very little agreement — just fascinating conversations.