Comments

  • Models and the test of consciousness

    I am no longer willing to tolerate your arrogance and condescension.
    Your comments have crossed the line between discussion and self-display,
    and they make it evident that you lack the philosophical and epistemological competence
    to address these questions in any meaningful way.

    Anyone who uses terms such as semantics, physics, and biosemiosis in a single breath
    without understanding their categorical separation should first study the foundations of philosophy
    before accusing others of misunderstanding.

    This conversation is over.
    There is no point in continuing with someone who mistakes opinion for knowledge
    and anecdote for argument.

    I nonetheless wish you well in your further search for clarity —
    perhaps philosophy itself may help before you misrepresent it again.

    A brief afterthought.
    Those who have truly shared a table with Chalmers, Koch, or Friston rarely feel the need to say so.
    Serious minds tend to argue ideas, not acquaintances.

    Authentic conversation leaves no room for name-dropping — it reveals itself through clarity, not proximity.

    Your rhetoric suggests admiration without understanding,
    and a desire to belong where genuine thought begins by standing alone.
  • Models and the test of consciousness

    You are right that a formal framework can serve as a useful tool.
    Mathematics itself is not falsifiable — but it does not make empirical claims.
    The Free Energy Principle (FEP), however, is not presented as a mere formalism; it is promoted as a scientific account of how organisms, brains, and even societies maintain their organization by “minimizing free energy.”
    The moment such a statement is made, it leaves the purely formal domain and enters the empirical one — and therefore becomes subject to falsification.
    Otherwise, it is not a scientific framework but a metaphysical one.

    This is exactly the issue identified by Bowers and Davis (2012), who described predictive processing as a “Bayesian just-so story”:
    a framework so flexible that any observation can be redescribed post hoc as free-energy minimization.
    A theory that can explain everything explains nothing.
    It becomes a formal tautology — a mathematical language searching for an ontology.

    The same problem appears in the well-known “Dark Room Argument” (Friston, Thornton & Clark, 2012).
    If organisms truly sought to minimize surprisal, they would remain in dark, stimulus-free environments.
    To avoid this absurdity, the theory must implicitly introduce meaning — assuming that the organism “wants” stimulation, “prefers” survival, or “seeks” adaptation.
    But these are semantic predicates, not physical ones.
    Hence, the principle only works by smuggling intentionality through the back door — the very thing it claims to explain.

    Even sympathetic commentators such as Andy Clark (2013) and Jakob Hohwy (2020) have admitted this tension.
    Clark warns that predictive processing risks “epistemic inflation” — the tendency to overextend a successful formalism into domains where its terms lose meaning.
    Hohwy concedes that FEP is better seen as a framework than a theory.
    But that is precisely the point:
    a framework that lacks clear empirical boundaries and shifts freely between physics, biology, and psychology is not a unifying theory — it is a semantic conflation.

    Your second point, that terms like prediction or inference can be used metaphorically for neurons, simply confirms my argument.
    If those terms are metaphorical, they no longer describe what they literally mean;
    if they are literal, they presuppose an experiencing subject.
    There is no third option.
    This is the very category error I referred to: a semantic predicate (inference, prediction, representation) applied to a physical process, as if the process itself were epistemic.

    To say that Friston’s theory is “not about qualia” does not solve the problem — it reveals it.
    Once you speak of perception, cognition, or self-organization, you are already within the phenomenal domain.
    You cannot meaningfully explain perception without presupposing experience; otherwise, the words lose their reference.
    A “theory of consciousness” that excludes consciousness is a contradiction in terms — a map with no territory.

    You also mention a continuum between life and non-life.
    I agree.
    But the decisive transition is not a line in matter; it is the emergence of autocatalytic self-reference —
    the moment a system begins to interpret its own internal states as significant.
    That is not a metaphysical distinction but a systemic one.
    And no equation of free energy can account for it, because significance is not a physical magnitude.

    To compare FEP with mathematics therefore misses the point.
    Mathematics is explicitly non-empirical; FEP oscillates between being empirical and metaphysical, depending on how it is defended.
    That is precisely what renders it incoherent.

    Finally, if — as you and others claim — the theory is “not about subjective experience,”
    then it should not be presented as a theory of consciousness at all.
    Otherwise, it becomes exactly what I called it before:
    a mathematical cosmology of life that explains everything, and therefore nothing.

    References

    Bowers, J. S., & Davis, C. J. (2012). Bayesian just-so stories in psychology and neuroscience. Psychological Bulletin, 138(3), 389–414.

    Friston, K., Thornton, C., & Clark, A. (2012). Free-energy minimization and the dark-room problem. Frontiers in Psychology, 3, 130.

    Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.

    Hohwy, J. (2020). The Self-Evidencing Brain. Oxford University Press.
  • Models and the test of consciousness

    You raise several important points, but I think we are still talking at different levels of description.

    When I say that such theories are not verifiable, I don’t mean that one cannot empirically test specific predictive-processing models or machine-learning architectures inspired by them.
    Of course, such models can be tested — and many have been.
    But that is not the same as testing a theory of consciousness that builds upon them.
    A model that reproduces neural activity or perceptual prediction errors is not thereby a model of experience.
    It models signal processing, not consciousness.

    You are right that Friston’s Free Energy Principle is mathematically unfalsifiable.
    But that is precisely the problem.
    It is so general that any system can always be re-described as “minimizing free energy.”
    This generality immunizes it against falsification and therefore removes it from the domain of empirical science.
    That is not a technical issue, but exactly the category error at stake:
    mathematics, physics, and semantics are being fused into a pseudo-unity, although they belong to different descriptive domains.

    When Friston says that organisms must appear to minimize uncertainty in order to persist, this is a semantic paraphrase of a mathematical tautology:
    Any system that continues to exist must stabilize its internal states within certain bounds.
    That is trivially true — but it says nothing about consciousness or intentionality.

    The fact that the principle can be applied to statistical, Newtonian, or even quantum systems only demonstrates its formality, not its explanatory power.
    A formalism that fits neurons, molecules, and galaxies alike cannot, without an additional epistemic framework, explain experience or cognition.
    It simply re-describes physical regularities in probabilistic language.
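The post-hoc character of such redescription can be made concrete with a toy example. The sketch below (all names and parameters are illustrative assumptions, not drawn from Friston's work) simulates an ordinary damped oscillator containing no inferential machinery whatsoever, then relabels its distance from equilibrium as "surprisal" under a Gaussian "generative model". The relabelling always succeeds, because any system that relaxes toward a fixed point trivially decreases this quantity:

```python
# Hypothetical toy: a plain damped oscillator, with no inferential
# machinery, redescribed after the fact as "minimizing free energy"
# under a Gaussian generative model centred on its equilibrium.
# All names and parameters are illustrative assumptions.
import math

def surprisal(x, mu=0.0, sigma=1.0):
    # Negative log-density of a Gaussian "generative model" p(x).
    return 0.5 * math.log(2 * math.pi * sigma**2) + (x - mu)**2 / (2 * sigma**2)

def damped_oscillator(x0=3.0, v0=0.0, k=1.0, c=0.8, dt=0.01, steps=2000):
    x, v = x0, v0
    trace = [surprisal(x)]
    for _ in range(steps):
        a = -k * x - c * v          # ordinary Newtonian dynamics, nothing more
        v += a * dt
        x += v * dt
        trace.append(surprisal(x))
    return trace

trace = damped_oscillator()
# Any system relaxing toward a fixed point shows decreasing "surprisal",
# so the redescription is guaranteed in advance, not informative.
print(trace[0] > trace[-1])  # True: stabilization alone yields the "minimization"
```

The point of the sketch is purely negative: if the vocabulary fits a spring as readily as a brain, the vocabulary by itself carries no explanatory weight.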

    Hence, the category error remains:
    Mathematical structure ≠ physical mechanism ≠ phenomenal meaning.
    There is no bridge law between these levels — only semantic correlation.

    And this also reveals the deeper contradiction:
    Friston and his followers often claim that subjective experience (qualia) is not the target of the theory.
    But to describe the brain as a predictive, inferential, or representational system is already to invoke the phenomenal domain.
    You cannot speak of “prediction” or “inference” without presupposing a model that experiences something to be predicted or inferred.
    To exclude qualia while describing perception is therefore not a modest limitation — it is a category mistake.
    It eliminates the very phenomenon that the terms presuppose.

    In the same sense, to speak of a unified brain theory is necessarily to speak of a theory of consciousness, because it aims to integrate perception, action, memory, and self-organization — all of which are phenomenally defined.
    A “unified brain theory” that excludes consciousness is like a “unified theory of music” that excludes sound: mathematically coherent, but phenomenologically empty.

    The Free Energy Principle may serve as a metaphorical ordering principle for self-organization, but it is not, and never was, a theory of consciousness.

    This raises the simple but essential question:
    What is Friston’s theory actually for?
    What does it allow us to know that we did not already know?
    If it is neither empirically testable nor conceptually coherent, it remains a formal metaphor — a kind of mathematical cosmology of life that explains everything and therefore nothing.
    A theory that can be applied to all systems explains no system in particular; it produces not knowledge, but only a symbolic sense of connectedness.

    A real theory of consciousness must instead explain the transition from physical stability to autocatalytic self-reference — the formation of a causal core in which internal states are not only maintained but recursively interpreted.
    Only at that level — the level of life itself — can consciousness arise.

    If, as you say, Friston’s theory is not intended to explain subjective experience, then it is simply not a theory of consciousness — and should not be presented as such.
    It might then be classified as a general systems formalism describing the dynamics of self-stabilizing structures.
    That is perfectly legitimate, but it places the theory outside the epistemic scope of what is usually meant by a theory of consciousness.

    In that case, it explains neither the emergence of experience nor the relation between neural activity and awareness; it only offers a high-level metaphor for persistence and adaptation.
    But that is not consciousness — it is life in probabilistic notation.

    And here lies the fundamental dilemma:
    If a theory claims to explain consciousness, it faces the category error I have described.
    If it does not, it becomes irrelevant to the problem it is most often cited for.

    Either way, the result is the same:
    the Free Energy Principle is not wrong, but misapplied — a powerful mathematical metaphor mistaken for a theory of mind.
  • Models and the test of consciousness
    The so-called “hard problem” of consciousness must first be characterized properly, because it contains two fundamental category errors.

    The first is the one already mentioned: it conflates two descriptive levels — the physical and the semantic — and then asks how one could possibly “give rise” to the other. This question is not profound; it is ill-posed.

    The second is subtler: it assumes that mind must arise from matter, when in fact it arises from life.
    If you reduce a physical system, you end up with particles.
    If you reduce a living system, you end up with autocatalytic organization — the self-maintaining network of chemical reactions that became enclosed by a membrane and thus capable of internal coherence.
    That is the true basis of life: the emergence of a causal core within recursive, self-referential processes.

    From there, consciousness can be understood evolutionarily, not metaphysically.
    At the neurophysiological level, one might say that in associative cortical areas, sensory inputs converge and integrate into dynamic wholes.
    Through recursive feedback between higher and lower regions, the system begins to form something like a mirror of itself.
    When these integrated representations are re-projected onto the body map, they generate what we call feeling — the system’s own state becoming part of its model of the world.

    In that sense, consciousness is not something added to matter, nor an inexplicable emergence; it is the self-reflection of an autocatalytic system that has become complex enough to model its own internal causality.

    Of course, this is not a “solution” to the hard problem in the usual sense — because no such final solution exists.
    But it offers a neurophysiological direction that might lead toward a satisfactory description:
    not a metaphysical bridge between mind and matter, but a consistent account of how recursive, life-based systems can generate the conditions under which experience becomes possible.
  • Models and the test of consciousness


    > Okay, I think I agree with you 90%...

    That already means a lot in this field — and I think our 10% disagreement is not about facts, but about the very kind of question we are asking.

    > I can’t imagine any words I could write on a piece of paper that would make a person without colour vision experience red...

    Exactly. What you describe here is not a limitation of empirical science, but of translation between descriptive levels.
    No symbolic or physical operation — no arrangement of letters, equations, or neurons — can generate the phenomenal content of “red”, because the phenomenal and the physical are not commensurable domains.

    > I think the points where we disagree are: first, I cannot claim it is impossible…

    I would say it is not empirically impossible, but conceptually impossible.
    To “explain” experience in physical or functional terms is like trying to explain why water is wet.
    The question sounds meaningful, but it secretly fuses two descriptive frameworks:
    “water” belongs to the physical domain, “wetness” to the experiential one.
    You can describe all molecular interactions of H₂O without ever reaching the concept of “wetness”, because “wetness” exists only from within a certain scale and relation — that of embodied perception.
    Likewise, consciousness is not something that can result from physics; it is the epistemic context in which physics appears as physics.

    > And in general we should be careful not to say things are impossible too easily.

    I completely agree — but impossibility here does not mean “unreachable by future science”, it means “misstated at the logical level”.
    If a question collapses categories — asking how a physical state becomes a subjective one — then the problem is not unsolved, but ill-posed.

    > The possible degree of verification of such models is still high... we might find specific neural structures that correlate with pain...

    Yes — and correlation is the correct word.
    Neural correlates of consciousness are entirely legitimate research.
    But they never explain why those correlates accompany experience — only that they do.
    They help us predict and intervene (which is medically crucial), but prediction is not explanation.
    It tells us nothing about the epistemic relation between the measurable and the felt.

    > So I do not see this research as a solution, but not as useless either...

    I agree. It is not useless at all — it maps the interface between physiology and phenomenology.
    But this interface is not a causal bridge.
    We can touch the “door of consciousness”, as you say, but not because we are about to open it — rather because we have finally realized that it was never a door in the physical sense to begin with.

    In other words:
    Whenever a question asks how matter gives rise to mind, it already contains the confusion that makes it unanswerable.
    Mind and matter are not cause and effect, but two correlated descriptions of the same systemic reality — one internal, one external.
    To search for a causal connection between them is like asking what makes water wet: the answer is not hidden — it’s a category mistake.
  • Models and the test of consciousness
    Neither Christof Koch (Integrated Information Theory) nor Karl Friston (Predictive Coding) has any formal philosophical training — yet both make sweeping claims about one of philosophy’s oldest and most difficult subjects. They are joined by a great many others, including academic philosophers, who prefer to deal with the “big questions” rather than first mastering the humble craft of thinking methodically.

    Imagine a craftsman who begins to work without knowing how to handle his tools. That, in essence, is how many theorists of consciousness operate. They juggle terms like information, causality, or uncertainty without realizing that these belong to different descriptive domains — and therefore cannot be meaningfully combined.

    The problem is that philosophy never truly established a methodological discipline. In the natural sciences, methodological rules are explicit and binding; in philosophy, one still prefers improvisation. Speculation becomes a virtue, and rhetorical elegance a substitute for conceptual clarity. The less one understands, the more mysterious — and therefore “profound” — the subject appears.

    Modern philosophy has thus taken on the character of a stage performance. When David Chalmers, with long hair and leather jacket, walks onto a conference stage, the “wow effect” precedes the argument. Add a few enigmatic questions — “How does mind arise from matter?” — and the performance is complete.
    Koch and Friston follow a similar pattern: their theories sound deep precisely because almost no one can truly assess them.

    Philosophers, for their part, often lack the mathematical literacy to distinguish between a formal structure and an ontological claim. They swallow every equation presented to them as if it were metaphysical truth. Those who insist on sober analysis — on clarifying terms before admiring them — are dismissed as dull or pedantic.

    Yet that is precisely what philosophy was meant to be: the discipline that distinguishes between what can be said and what can only be imagined. Without this methodological backbone, philosophy turns into a spectacle — a show of intelligence without understanding, and a theater of thought where metaphysics masquerades as science.
  • Models and the test of consciousness
    You are describing exactly what I consider the core confusion.
    The problem is not the wish to connect physics and meaning — it is the belief that this connection can be realized by a physical mechanism.

    When you speak of a “bridge mechanism,” you already presuppose that there is a level of description where semantics becomes physics. But this presupposition is itself metaphysical. Biophysics may show correlations between energy dissipation, metabolic regulation, and neural complexity — but it does not and cannot show how meaning arises from these processes. It only shows how living systems correlate with the conditions that make meaning possible.

    The difference is not rhetorical but categorical.
    Physical systems can organize, synchronize, and self-stabilize — all of which can be formally modeled. But semantic reference, intentionality, or subjective experience are not additional physical phenomena that arise through complexity. They are descriptions that belong to a different epistemic domain.

    There can be no bridge law between the two, because any such law would require a common metric — and there is none.
    A neuron can fire, but it does not mean.
    Energy can flow, but it does not know.
    Meaning appears only at the level of systemic coherence where correlations are interpreted — and “interpretation” is not a physical operation, but an epistemic one.

    So when you say that biophysics “has already provided the bridge,” I would say: it has provided the conditions of correlation, not the transition itself. What you call a “bridge” is in truth an interface of perspectives, not a mechanism.

    This is why the Free Energy Principle cannot “unify” physics and semantics — it only overlays one vocabulary on top of the other.
    It does not explain how consciousness arises; it only reformulates life in terms of a statistical metaphor. And that is precisely the point where the philosophy must step in again.

    I think I mentioned this before:
    it is as if one tried to explain social bonding through magnetism, simply because both use the category of attraction.
    The concept may be shared, but the phenomena have entirely different origins — and they cannot be causally connected.
  • Models and the test of consciousness
1. Evidence for anticipation ≠ evidence for a global principle of “uncertainty minimization.”
    Elite batting or return tasks show anticipatory control because neural and sensorimotor delays demand feed-forward strategies. That is perfectly compatible with many control-theoretic explanations (internal models, Smith predictors, model predictive control, dynamical systems) that do not require Bayesian inference or a universal principle of “uncertainty minimization.” From “organisms behave as if anticipating” it does not follow that they literally minimize epistemic uncertainty.
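A minimal sketch can illustrate this point. The following toy controller (all names, delays, and gains are illustrative assumptions, not taken from any cited work) intercepts a constant-velocity target despite a sensorimotor delay, using nothing but a deterministic forward model and proportional feedback; no priors, likelihoods, or free-energy bounds appear anywhere:

```python
# Hypothetical sketch: anticipatory interception without Bayesian machinery.
# A controller with a 100 ms sensorimotor delay tracks a constant-velocity
# target by linear extrapolation (a minimal deterministic forward model).
# All names and parameters are illustrative assumptions.

DELAY = 0.10      # sensorimotor delay in seconds
DT = 0.01         # simulation time step in seconds

def extrapolate(pos, vel, dt):
    """Deterministic forward model: no probability distributions involved."""
    return pos + vel * dt

def simulate(target_pos=0.0, target_vel=2.0, steps=100):
    hand = 0.5   # effector position
    gain = 0.5   # proportional feedback gain on the *predicted* error
    for _ in range(steps):
        target_pos += target_vel * DT
        # The controller only sees delayed sensory data, so it acts on a
        # feed-forward prediction of where the target will be after DELAY.
        delayed_obs = target_pos - target_vel * DELAY
        predicted = extrapolate(delayed_obs, target_vel, DELAY)
        hand += gain * (predicted - hand)
        # (no priors, no likelihoods, no free-energy bound anywhere)
    return abs(hand - target_pos)

print(simulate())  # small residual lag; anticipation without inference
```

The sketch behaves "as if anticipating" in exactly the sense observed in sensorimotor tasks, which shows that such behaviour underdetermines the choice between Bayesian and non-Bayesian control architectures.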

    2. Shannon information, thermodynamic free energy, and semantic or epistemic uncertainty are categorically different concepts.
    Formal similarities (e.g., entropy, noise, and signal) do not justify treating them as identical. Variational free energy in Friston’s sense is a model-relative bound on surprisal, not a physical energy quantity; and “uncertainty” here is a term defined over the probability distributions of a generative model. Sliding between these domains without a strict translation rule is a category error.
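For readers unfamiliar with the formalism, the standard variational decomposition makes the phrase "model-relative bound on surprisal" explicit (conventional notation, where q(s) is the recognition density and p(o, s) the generative model; this is textbook material, not a quotation from the comment):

```latex
\begin{aligned}
F &= \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o,s)\right] \\
  &= -\ln p(o) + D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s\mid o)\,\right] \\
  &\ge -\ln p(o) \qquad \text{(surprisal)}
\end{aligned}
```

Every term in this inequality is defined over the modeller's probability distributions; no quantity in it refers to thermodynamic energy, which is precisely the categorical gap the paragraph above identifies.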

    Between physics and semantics there can be no bridge law, only correlation.
    A physical process can correlate with a semantic or cognitive state, but it can never produce, translate into, or explain it. The physical and the semantic belong to different epistemic domains; their connection is observational, not generative.

    3. What would count as a risky, discriminative prediction?
    If “organisms minimize uncertainty” is to be an empirical claim rather than a post-hoc description, it must yield a pre-specified, falsifiable prediction that (i) distinguishes the FEP/predictive-coding framework from alternative non-Bayesian control models, (ii) is measurable in the organism itself (not only in our statistical model), and (iii) could in principle fail.
    Without such criteria, the “principle” remains unfalsifiable and collapses into metaphor.

    So the issue is not anticipation or control per se — I fully agree that organisms stabilize their internal dynamics. The issue is the illegitimate conceptual leap from physical energy flow to semantic uncertainty, and from probabilistic modelling to biological reality. That’s precisely the confusion I am objecting to.
  • Models and the test of consciousness

    Thank you for the clarification — I see your point.

    However, I think we are talking about two different kinds of unverifiability. You are right that many scientific models begin with speculative hypotheses that can eventually become empirically testable. But that is not the case here.

    The hypotheses of IIT or Predictive Coding are not temporarily unverifiable; they are principally unverifiable, because they connect concepts that belong to entirely different descriptive levels. They claim that a physical or mathematical structure (e.g., integrated information, energy minimization) produces or is identical with a semantic–phenomenological phenomenon (experience, uncertainty).

    This is not an empirical speculation but a category error — precisely the kind of confusion that Gilbert Ryle described in his famous example: after a student had toured all the buildings of a university, he asked, “But where is the university?” The mistake lies in confusing a physical arrangement (the buildings) with an institutional meaning (the university). The same confusion occurs when physical quantities or mathematical constructs are taken to be or to produce semantic phenomena such as experience.

    Between physics and semantics there can be no bridge law, only correlation.
    A physical process can correlate with a semantic event, but it can never translate into or cause it. The relationship between brain and mind is therefore not causal, but correlative — two complementary descriptions of one and the same dynamic viewed from different epistemic perspectives.

    That is why such theories can never be verified even in principle. There is no possible experiment that could demonstrate a causal transition from a physical process to a semantic or experiential one. Any attempt to do so merely redefines consciousness in physical terms and then claims success by definition.

    So the problem is not that we currently lack the appropriate empirical tools, but that the conceptual architecture of these theories confuses what cannot be unified.
    Not every speculation can be turned into science — only those that remain within a single, coherent descriptive level can.
  • Models and the test of consciousness
    Let me put it again in simple terms:
    Theories of consciousness usually start with an unproven assumption and then build a theory around it. This assumption is neither empirically confirmed nor even verifiable – and therefore such a theory is not only unscientific but also epistemically useless.

    Two examples:
    The Integrated Information Theory (IIT) claims that consciousness arises from and is identical with integrated information. According to this logic, even a simple LED would have consciousness. Such a statement is neither provable nor falsifiable in Popper’s sense.

    The second example is Predictive Coding or the Free Energy Principle. It claims that organisms minimize uncertainty. This claim cannot be empirically confirmed. Friston believes that he can derive this from the physical principle of energy minimization – which, however, represents a massive category error. Physical energy and semantic uncertainty belong to entirely different descriptive levels; they have nothing to do with each other.

    Such theories are, in the end, fairy tales wrapped in scientific terminology. They offer no real progress in understanding – quite the opposite. They create the illusion of knowledge by citing empirical data to confirm the theory and then using the same theory to “explain” these data. That is a logical circle, not a proof.

    I have discussed this in more detail here:
    https://doi.org/10.5281/zenodo.17277410
  • Models and the test of consciousness
    You're disqualifying yourself here and you don't realize it, sad.
  • Models and the test of consciousness
    I won't discuss this further. You write, "The Bayesian algorithm describes how an organism in fact maintains a dynamical balance in this regard." That's complete nonsense. It would be best to ask an organism whether it actually uses algorithms.
  • Models and the test of consciousness
1. What does this have to do with prediction? 2. What do you want to do with such a general statement?
  • Models and the test of consciousness
    Each theory interprets optical illusions differently, since all are underdetermined, i.e. they have no empirical explanatory power.
  • Models and the test of consciousness
Perhaps you'll read my post again, more carefully; you might then notice that I never mention a substance. I consider Karl Friston's uncertainty-minimization paradigm to be one of the most nonsensical of recent decades. He analogizes a physical principle with a semantic concept, as if I were to derive a theory of social bonds from magnetism on the grounds that both invoke the concept of attraction. And then he assumes organisms have the ability to calculate probabilities. Because the whole thing is mathematically inflated, most philosophers don't understand it.
  • Consciousness and events
    A lot of the confusion comes from us applying our everyday concepts like 'particle' or 'wave' and giving them an absolute, real meaning.

    A good (though imperfect) way to think about it is a water surface: Is it a wave or individual droplets? The question isn't really answerable until you measure it. The moment you put an object into it, a droplet sticks to it—you've made a measurement. The previous superposition of possibilities (wave/droplet) has 'collapsed' into a single, concrete result (one droplet).

    The crucial point that goes beyond the water analogy is that the mathematics describing this 'fuzziness' (the Schrödinger equation) is ultimately a probability calculation. It's a tool for our knowledge about the system (epistemic), not necessarily an exact description of a physical vibrating process. What happens 'in reality' before we measure is the subject of various interpretations of quantum mechanics. The most common one (the Copenhagen Interpretation) indeed says that the measurement itself brings about a definite state.
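The epistemic reading sketched above can be stated compactly in the standard formalism (textbook notation, added here only for illustration): the Schrödinger equation evolves the state vector deterministically, and probabilities enter only through the Born rule at measurement:

```latex
% Deterministic evolution of the wavefunction:
i\hbar\,\frac{\partial}{\partial t}\,\psi(x,t) = \hat{H}\,\psi(x,t)
% Probabilities enter only via the Born rule, applied at measurement:
P(x,t) = \left|\psi(x,t)\right|^{2}
```

On the Copenhagen reading mentioned above, only the squared amplitude is empirically accessible, which is why the formalism can be treated as a calculus of expectations rather than as a literal description of a vibrating medium.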
  • The Relationship between Body and Mind
I cannot give the whole philosophy here. Of course, evolution is a material process. We collect fossils and other material evidence. What seems immaterial to you are the theories formulated from such findings. Of course, theories are not material, but theories never are. After all, they have no ontological status; they are our mental creations.

    Don't be angry with me, but unfortunately my explanation has to end here.
  • The Relationship between Body and Mind
    Do you know anything other than matter? Why are we always looking for something we have never seen before? Maybe because we want to solve problems that we have created for ourselves through category errors.
  • The Relationship between Body and Mind
    No, there is no mind stuff, at least none has ever been proven. Everything is matter, we don't know anything else.
Mind and body are NOT two forms of the same stuff. They are two forms of description!
    You could also describe an individual from a sociological perspective. It is and always remains the same individual.
  • The Relationship between Body and Mind
    It links a mental state (e.g. pain) to a physiological state (specific electrochemical state). A physiological state would be meaningless without referring to a mental state.

Both are descriptions, one through psychology (mental state), the other through physiology (electrochemical state).
  • The Relationship between Body and Mind
    If there were an ontological relationship, body and mind would have to be ontologies. That would mean that we are dealing with two substances or entities, a body and a mind. Descartes could not find a mind anywhere. His conclusion was that it must be immaterial. My conclusion is that they are descriptions of one and the same thing. Let's call it an individual, an organism, a brain, whatever you like.
  • The Geometry of Thought
    Descartes' rules largely belong to the second dimension, his doubts, his cogito ergo sum and his self-reflection belong to the third dimension. He has nothing to do with the fourth dimension, since he reflects on the universal validity of reason and less on its possible relativity or limitations.
  • Why The Simulation Argument is Wrong
I have written two posts on this: one in which I reject the simulation argument because it rests on the false idea that consciousness can be introduced into any entity, and a second that describes our universe as the work of an experimenter, with the sole purpose of curbing our arrogance.
    https://medium.com/p/b2709f6c48bc
    https://medium.com/p/3b0eb3e66048
  • Quantum Physics and Classical Physics — A Short Note
    Everyone can interpret the world as they wish. I am presenting a point of view here that you can share or not. If you see it differently, that's OK.
If you want to understand my point of view (but you don't have to), it's best to read my epistemological article. There you will see that there are four levels of knowledge, and it depends on which one you want to argue on: https://www.dr-stegemann.de/erkenntnistheorie-anthropischer-relativismus/genetisch-relativistische-erkenntnistheorie/
    It is in German, but you can have it translated without any problem. It will also appear soon on my medium.com account: https://medium.com/@drwolfgangstegemann
  • Quantum Physics and Classical Physics — A Short Note
    As I have already said, we make an ontology out of the phenomenology of the world. Some people then believe that this is identical with an objective truth in the transcendent sense. For me, this idea is naive. Classical and quantum physics are therefore two ontologies that we have made out of two different phenomenologies.

    If we now argue from our operational level of knowledge, it seems as if the microworld is fundamental. But this statement can only be understood relativistically.

    Ultimately, we only transform the world into a modality that suits us. In our case, this is a neuronal one. In the case of a photon, it would be a photonic one. And that would look completely different to ours.
  • Quantum Physics and Classical Physics — A Short Note
    Whether it is an ontological emergence depends on us, because it is we who epistemically construct the world.
But regardless of that, it seems that this fine carpet of matter is fundamental at the smallest level and everything macroscopic 'unravels', but then evolves according to its own rules and acts deterministically (e.g. the gravity of planets).
    In any case, we will not be able to describe classical and quantum physics with the same terms and theories.
  • A model of everything
I don't know what you're getting at (I haven't read everything). Of course, consciousness is our window to the world and therefore the most important thing. But a theory of consciousness has to integrate everything, including the 1st and 3rd person perspectives, otherwise you can't measure anything. Both perspectives are usually correlated in studies.

Something general: the brain and our consciousness are such complex things that I believe you have to study the entire spectrum intensively for years in order to be able to make reasonably meaningful statements. I know that philosophy often leads you to spontaneously develop plausible thoughts. But that usually leads to pure speculation. I have discussed the methodical and methodological errors that can be made on this forum, but also on my Medium page.
  • A model of everything
    A theory of consciousness should include as many aspects, levels and perspectives as possible, including the 1st and 3rd person. If we do not take the 3rd person into account, we will not receive any empirical data about the brain and will not be able to correlate it with behavior or experience.
    It is extremely difficult to combine the many aspects due to the complexity.
    I think it is important, however, not to derive consciousness causally, because it already exists. So we 'only' have to find the mechanisms that maintain it.
On AI: how could a machine become conscious? Consciousness means (from the 1st person perspective) sensations that arise through sensors and the nerves leading to the brain, and the projection onto the body via the neuroendocrine system. I think it is almost impossible to recreate this, at least in the next few decades. We should not be too impressed by LLMs. They have nothing whatsoever to do with consciousness.
  • The 'hard problem of consciousness'
    Think logically: if they were stimulus-response machines, who would monitor the sensors? A homunculus, perhaps? The whole thing is self-organized. There is no red alarm light that lights up in case of danger. Life has to do everything itself. So it senses everything that is there inside and outside.
    You have to distance yourself from yourself and your oh-so-fascinating experience, otherwise it will be difficult to understand it.
  • The 'hard problem of consciousness'
    Suppose you know nothing about consciousness, but you examine a human organism and find that there are sensors and nerves. Do you then ask yourself what this is good for? The answer will be that it must have a function. Perhaps you then think that it is there so that these beings can sense what they are doing. So that they are not eaten in the next moment. Sensing is nothing other than consciousness. In our case, this has now become more differentiated, so that we experience entire dramas. This does not change the principle.
  • Limitations of the human mind
Imagine a fluid or graphene intelligence: how would they conceptualize the world? And who would be right, they or we? This inevitably leads to an epistemic relativism.
This states that we can know the world within our modality, but not in its 'objective' modality. For this we would need a 'neutral' observer from outside - something like God, who could know all subjective worlds.
Instrumentally we could therefore grasp the world step by step, but not epistemologically.
  • The Liar Paradox - Is it even a valid statement?
    I think you have to differentiate between pure logic and linguistic logic. The liar paradox is a linguistic construct - nothing more.
  • Do we live in a dictatorship of values?
Western thinking - of course it exists - has so clogged up people's brains that it takes an immense epistemological effort to rise above the worm's-eye view and build up the maximum distance needed to understand these things.
    The power of the factual is immense and prevents us from taking on other perspectives. It also prevents you from wanting to take on different perspectives and from wanting to learn. People resist any epistemological change. That is more than understandable.
  • Quantum Physics and Classical Physics — A Short Note
Maybe read my post here (https://thephilosophyforum.com/discussion/14959/epistemology-anthropic-relativism); then you will understand my epistemological position.
    Furthermore, I assume that we are the ones who interpret the world using our neural means. So there is no 'objective' ontology.
    The ontology that we are able to construct leads to the classical physics of the mesocosm. The microworld is 'too small' for us to be able to construct an ontology that is adequate for us.
Both classical and quantum physics are constructions. In classical physics the construction takes the form of an ontology; in quantum physics it remains phenomenology.
  • Quantum Physics and Classical Physics — A Short Note
By the transfer of the quantum world to the mesoworld I meant philosophy, not technology - structural realism, for example. Please read carefully!
  • Epistemology – Anthropic Relativism
    If you just label my arguments but don't address them further, further discussion makes no sense.
  • Epistemology – Anthropic Relativism
    I argue purely logically by saying that we are neuronally established in this world and act here exclusively in this modality. That means we sit in this neural bubble. There is nothing else for us, otherwise we could perhaps switch back and forth between two or more modalities.
    If there is no other 'world' for us, then there is nothing behind this world of ours, so there is no meta-physics.
    I understand that this argument destroys all dreams of a better, otherworldly world.
  • Epistemology – Anthropic Relativism
    I base myself on the fact that we are transforming the world and therefore there cannot be another (for us).
For me, metaphysics is that other world that does not exist for us, and therefore there is not and cannot be any metaphysics. We have to come to terms with the fact that we can only ever act neurally instantiated, so that every argument is ultimately always tautological. The only question is: can you justify it, or merely assert it?
  • Epistemology – Anthropic Relativism
    I provide an explanation for the fact that metaphysics does not exist