Comments

  • What is right and what is wrong and how do we know?
    Beautifully put - that’s a wonderfully clear way to mark the difference.

    Yes, in science, our language aims for mind-to-world fit: we adjust our beliefs to mirror what is. In ethics, the movement reverses - world-to-mind fit - we attempt to bring what is into alignment with what ought to be. But what’s fascinating is that this reversal isn’t arbitrary; it rests on a prior recognition that “what is” includes beings capable of suffering and flourishing. The “ought” is already latent in the “is,” waiting to be actualized through choice.

    In my view, this makes ethics not the negation of science but its completion. Science describes relations of fact; ethics transforms those relations through care. When I say “it is wrong to cause unnecessary suffering,” I’m not merely proposing a new state of affairs - I’m acknowledging a tension between the real and the realizable. Ethical language, in that sense, is a kind of creative realism: speech that doesn’t just reflect reality but helps it remember what it can become.

    So perhaps the two directions of fit meet halfway: science refines our understanding of interdependence, and ethics tells us how to live that interdependence responsibly. Both, in their own registers, are ways of aligning ourselves with reality - one by describing its order, the other by deepening its compassion.
  • What is right and what is wrong and how do we know?
    Yes - exactly. I agree that the problem lies less in ethics than in what we mean by realism. If we imagine moral facts as entities “lying about the place,” independent of all minds and cultures, then moral realism quickly collapses into absurdity. But if we reduce ethics to mere convention or preference, we lose the very distinction between right and wrong that moral language is meant to express. The challenge is to articulate a realism that isn’t naively objectivist yet isn’t dissolved into subjectivism.

    My own position sits closer to what some call experiential realism or intersubjective realism. Moral truths aren’t things but relations that hold across conscious beings. When I say “it is wrong to cause unnecessary suffering,” I’m not pointing to a property out in the world; I’m describing a stable pattern in the space of coexistence - a regularity in the way awareness relates to awareness. In that sense, moral statements can be true or false because they correspond to the real dynamics of sentient life, not to floating moral particles.

    This is why I keep returning to compassion. It isn’t a “fact” waiting to be measured, nor a mere sentiment; it’s the experiential disclosure of what sustains the relational field in which meaning, language, and value are even possible. If that field weren’t real, nothing else we call real - not science, not logic, not dialogue - could function, because all depend on trust, recognition, and shared intelligibility.

    So yes, I accept your distinction: realism about ethics needs rethinking. But rather than abandoning the word, I’d redefine it. Moral realism, for me, means this: that value is as intrinsic to the fabric of relation as curvature is to space-time. We don’t find moral “facts” lying about; we find ourselves already entangled in moral space.
  • What is right and what is wrong and how do we know?
    Excellent points, and I’m grateful for them - they go to the heart of what it means to speak of ethics as a real structure rather than a sentiment.

    You’re right that harm, conflict, or even cruelty can enlarge the field of meaning in certain ways. Tragedy, trauma, and loss often deepen awareness and generate profound transformations of self and culture. But I would distinguish between enlarging meaning and affirming value. Violence may broaden the narrative field, but it does so through negation - by showing what breaks when relation collapses. Compassion, by contrast, reveals what holds the field together. Both are revelatory; one is diagnostic, the other sustaining.

    To your question: “Why privilege one over the other?” - because only compassion can make coexistence coherent. Fear, domination, and cruelty can organize relations, yes, but only parasitically; they depend on the very trust and vulnerability they exploit. To be loved or feared are not symmetrical options, because fear corrodes the dialogical reciprocity on which understanding depends. In that sense, compassion isn’t just “nicer” - it’s structurally necessary for communication itself to remain possible.

    You’re also right that care can wound - that good intentions may be felt as intrusion. For me this is not a counterexample but part of the texture of compassion. Genuine care includes respect for autonomy and an awareness of its own fallibility. It isn’t perpetual agreement but sustained responsiveness: the effort to repair when our help harms, to keep the conversation open. The ethical relation is asymptotic rather than static - it’s the ongoing calibration of good within complexity.

    As for the gravity analogy, I take your point. Moral life is not predictable in the way physical law is; what I mean is that the consequences of ignoring compassion are as consistent as the consequences of ignoring gravity. We may not fall at a calculable rate, but civilizations and relationships collapse all the same. Over time, indifference erodes meaning as reliably as gravity pulls objects down.

    So yes - behaviour is less predictable than matter, but the pattern of what sustains or destroys meaning is remarkably invariant. We might say compassion is to coexistence what gravity is to structure: the invisible coherence that keeps the whole from flying apart.
  • What is right and what is wrong and how do we know?
    That’s beautifully put - I think our intuitions are indeed convergent. If relation is substance, then the universe is not a collection of things interacting but an interaction that gives rise to things. In that sense, Spinoza’s natura naturans and Rovelli’s relational ontology are saying the same thing: being is event, structure is process, substance is relation-in-motion.

    Where I would add a small inflection is here: if relation is substance, then the quality of relation - its affective tone - matters metaphysically. The moment we feel compassion, we’re not adding sentiment to a neutral network; we’re glimpsing the network’s self-recognition. The ethical isn’t an overlay upon the physical - it’s the physical come to consciousness of its own interdependence.

    So perhaps ecstatic naturalism describes the ontology, and compassion names its pathos: nature not only is relation but feels itself through sentient beings. When we care, the cosmos cares through us.
  • What is right and what is wrong and how do we know?
    Thank you - that’s an insightful connection, and I’m glad you mentioned Buber. You’re right: the conception of compassion I’m working with probably leans closer to Buber’s I-Thou than to Levinas’s infinite Other. Levinas emphasizes transcendence and asymmetry - an ethical height that forever exceeds comprehension. Buber, by contrast, stresses reciprocity and presence: the moment when two beings meet in mutual openness, each confirming the other’s reality.

    I see compassion as the living current that flows between these poles. It begins as Levinasian exposure - an encounter with the Other’s vulnerability that unsettles me - but it deepens, Buber-like, into a dialogical relation where both exist through the relation itself. Compassion is not self-sacrifice or self-assertion but the space between, the field of recognition that allows “I” and “Thou” to co-arise.

    As for the naturalistic aspect, yes - I mean that quite literally. Compassion is not a supernatural virtue but a biological and phenomenological constant: an evolved mode of attunement that makes coexistence possible. Our neural and hormonal architectures, our mirror systems and attachment circuits, are the physiological correlates of what Buber calls the “dialogical principle.” The ethical, in this sense, is the felt continuity of life with life.

    So perhaps my position could be described as dialogical naturalism: compassion as the empirical face of a metaphysical truth - the truth that relation precedes substance. Whether we speak the language of Levinas, Buber, or biology, the insight is the same: to exist is already to be with.
  • Comparing religious and scientific worldviews
    Science only works with what we can detect with instruments. Its conclusions are limited to that. It is mute about the basis of existence and key philosophical questions. - Punshhh

    Beautifully said - and I completely agree. Science excels at mapping what happens and how it happens, but not why anything matters. Instruments can register particles, forces, and correlations, but not value, significance, or moral responsibility. Those arise in the first-person field of experience that science must presuppose but cannot exhaust.

    For me, that’s exactly where philosophy begins - not in competition with science but as its horizon of intelligibility. Science describes the measurable; philosophy interprets the meaning of measurement. When I speak of Compassionism as a metaphysical condition, I’m not proposing an alternative physics but pointing to the fact that inquiry itself presupposes care: the desire to know, to reduce error, to communicate truth - these are all ethical acts. Even science rests on a covenant of trust and cooperation - the minimal compassion of minds working together in a shared world.

    So yes, science is mute about the basis of existence, but its very success depends on that silent ground: the lived, ethical, and relational world that gives data its sense. In that light, compassion isn’t opposed to reason - it’s the precondition of reason’s continuity. Without care for truth, evidence, or one another, even science would collapse into noise.
  • Comparing religious and scientific worldviews
    How does Vegan fit in? Vegan is…scientific? - DingoJones

    Veganism is based on evidence, reason and empathy. Evidence shows that other organisms are sentient, e.g. cows, dogs, cats, fish, octopuses, elephants, lions, meerkats, zebras, horses, monkeys, chimps, chickens, whales, dolphins, goats, ducks, lambs, turkeys, lobsters, etc. They all respond to pain, the way we respond to pain. They have sophisticated nervous systems. I empathise with the pain and distress of all sentient beings. That's why I am a vegan. Non-vegans cause suffering and death to 80 billion sentient land organisms and 1 to 3 trillion sentient aquatic organisms per year. Veganism is also better for the environment and for human health. Here is more information about reasons to go vegan: https://www.vegansociety.com/go-vegan/why-go-vegan
  • What is right and what is wrong and how do we know?
    I understand the appeal of the aesthetic-emotive stance very well. There’s a kind of honesty in admitting that our first contact with value is felt, not deduced. What you describe as intuition is, I think, the raw material of any genuine ethics: the moment when reality moves us before we have a theory about why.

    As for your question - whether I’m a moral realist - the answer depends on what kind of realism we mean. I’m not a metaphysical realist in the sense that “goodness” or “compassion” exist as freestanding entities somewhere in the universe. But I’m also not a pure subjectivist. My position is what might be called phenomenological or relational realism: values are not “out there” independent of minds, yet they are not arbitrary projections either. They arise in the space between beings, as the disclosure of what sustains or destroys relation.

    In that sense, compassion isn’t an invented rule but an encountered reality - the felt structure of coexistence itself. When I harm another, I don’t merely break a social convention; I diminish the field of meaning that connects us. The “realness” of ethics lies in that experiential invariance: wherever sentient beings coexist, the possibilities of care and harm appear as objectively distinct modalities of relation.

    So yes, I would say I’m a moral realist of a weak, experiential sort: ethics is not a cosmic property but a condition of intelligibility. We discover it the way we discover gravity - by noticing what happens when we ignore it.

    Your aesthetic approach, far from being opposed to this, may actually be its most authentic expression. Feeling and intuition are the first phenomenology; reason arrives later to articulate what we already know.
  • What is right and what is wrong and how do we know?
    Yes - beautifully put. Compassion does indeed presuppose chaos in the sense that it awakens in response to vulnerability, loss, pain, disease, injury, harm or disorder. Without a fracture, there would be no need for mending. In that way, compassion and chaos form a polarity rather than an opposition: compassion arises because there is chaos, yet it points beyond it.

    But I’d add a nuance. While compassion depends on suffering to manifest, it doesn’t depend on it to exist in principle. Even in a perfectly harmonious world - if such a thing could be - the relational openness that makes compassion possible would still be the same ontological structure, only without wounds to heal. What we call “chaos” is the circumstance that reveals compassion, not the ground that creates it.

    The yin-yang metaphor is apt if we take it dynamically: each side generates and limits the other. Chaos exposes finitude; compassion answers it. The two are rhythmically entangled, but not equal in aim. Chaos describes what is; compassion describes what can restore relation. In that sense, compassion is not the mirror of chaos but its transformation - the movement through which Being reclaims coherence from fragmentation.
  • What is right and what is wrong and how do we know?
    I love your questions. Thank you for asking them. These are exactly the questions that matter, and you raise them with admirable clarity.

    You’re right that “compassion is the more natural relational core” cannot be demonstrated in the same way one demonstrates an empirical law. It isn’t an evidential claim about frequency or dominance, but a phenomenological and pragmatic one about coherence. When we look at the range of human possibilities - cruelty, indifference, care - which mode most fully realizes the structure of relation itself? Only compassion recognizes the other as subject rather than instrument. Violence treats the other as object, thereby erasing relation. That erasure may succeed in practice, but conceptually it’s parasitic: to negate relation, it must first presuppose it.

    As for “better,” I don’t mean “better” by inherited theology but by existential intelligibility. Compassion is better not because a God commands it, but because it sustains the very conditions under which meaning, community, and dialogue can exist. I am not a Christian. Please see: https://www.evilbible.com and https://www.skepticsannotatedbible.com if you want to know why I am not a Christian. The moment we decide that conquest is equally valid, we undermine the shared world that makes any valuation - including the valuation of conquest - possible. Compassion, in that sense, is self-validating: it preserves the possibility of coexistence that all discourse presupposes. I am a vegan because of my compassion for all sentient beings. To say “better” at all implies that flourishing, not annihilation, carries weight. If we reject that, then we don’t just abandon compassion; we forfeit the basis for any normative distinction whatsoever. The nihilist and the sadist can live consistently only if they cease to ask why anything matters.

    So I see Compassionism not as an ungrounded belief but as the minimal metaphysical condition for an intelligible world: if meaning is possible, some form of care must already be operative. The Ouroboros image you mention captures this beautifully - yes, suffering and healing seem entwined, but the loop only closes through response, not indifference. Without compassion, the circle breaks into chaos.
  • What is right and what is wrong and how do we know?
    That’s a crucial question, and I agree that the record of our species reveals both tendencies in abundance: tenderness and atrocity, rescue and massacre. The human condition seems bifurcated between empathy and domination.

    My claim that the movement of concern discloses Being’s relational core isn’t an empirical generalization about what humans always do; it’s a phenomenological statement about what moral experience means when it occurs. The fact that many ignore or suppress this responsiveness doesn’t make it less primordial - it only shows that consciousness can close itself against its own depth. The possibility of cruelty presupposes the capacity for empathy, just as lying presupposes language. One can negate compassion only because one already stands within the sphere where the other’s vulnerability matters.

    Culturally and biologically, both impulses - aggression and care - have evolutionary roots. But phenomenologically, only care reveals relation as relation: the recognition that the other’s being concerns mine. Violence objectifies and thereby conceals that relation; compassion exposes it. In that sense, cruelty is not another “core” but a rupture, a refusal of disclosure. It flattens the encounter back into ontology without ethics.

    So when I say compassion is the more natural relational core, I don’t mean it is the statistically dominant behaviour, but that it reveals the more fundamental truth of coexistence. Empathy is what allows coexistence to appear as such; conquest denies that appearance. The ethical call is fragile, easily drowned by fear, ideology, or tribal conditioning - but its fragility is part of its meaning: Being’s openness is not enforced, only offered.

    In this light, Compassionism isn’t the claim that humans are compassionate, but that compassion names the deepest possibility of what it means to be. The conqueror and the caregiver are both human, but only the latter manifests what humanity is capable of when it fully hears its own ontological vocation.
  • What is right and what is wrong and how do we know?
    These are excellent questions - thank you for asking them.

    Yes, I do intend the scope of the ethical call to include non-human animals, and indeed all sentient life. Levinas himself remained primarily anthropocentric - his face of the Other presupposes language and mutual address - but if we take the “face” not literally but as the phenomenon of vulnerability, then any being capable of suffering already presents that summons. The cry of a wounded animal, even without words, calls us to responsibility in precisely the sense Levinas describes: it demands a response before reflection or ontology. In that sense, ethics extends wherever suffering discloses itself. I am a vegan because I care about all sentient beings.

    As for how ethics can “arise” in this way: I don’t mean that ethics emerges as a factual property within Being, but that in the event of encounter - when another’s vulnerability impinges on me - Being shows one of its fundamental modes: relational exposure. Ontology tells us what is; ethics tells us how being is with being. The claim that ethics is the “deepest disclosure” of Being is not empirical but phenomenological: it describes what experience reveals when we attend to its affective depth. We discover that to exist is already to be implicated in others’ existence. Ethics, then, is not an optional layer placed on top of ontology but the felt recognition that Being is never solitary substance but shared finitude.

    To put it less abstractly: when we encounter pain - human or non-human - we do not first deduce an ethical rule; we are already moved. That movement of concern is the disclosure of Being’s relational core. Demonstration, in the logical sense, is replaced here by revelation through encounter: what Levinas calls the “saying” prior to the “said.” The ethical moment is not inferred from what-is but given with what-is; it’s how Being manifests its own openness.
  • What is right and what is wrong and how do we know?
    Thank you very much for your excellent question. Levinas’s meontological move in Totality and Infinity is precisely what I had in mind when I spoke of the ethical and the ontological as “two inflections of the same opening.” For Levinas, ethics is first philosophy because it arises not within Being but before it - me ontos, beyond-Being. The face of the Other interrupts ontology’s self-enclosure; it calls me from a height I did not posit, demanding responsibility prior to any theoretical stance. In that sense, Levinas radicalizes Heidegger’s Geworfenheit (thrownness): I am not only thrown into Being but summoned beyond it.

    Where I diverge slightly is in emphasis. Levinas’s meontology can sound like a complete rupture - an absolute outside to Being. I read it, rather, as the self-transcendence of Being itself, its capacity to exceed its own totalization through the ethical relation. In other words, the ethical call is not alien to ontology but its deepest disclosure: Being showing itself as vulnerable and relational. The “firstness” of ethics is not chronological or hierarchical but modal: the primordial tone of existence as care, exposure, and obligation.

    If we hold these together - Heidegger’s ontological disclosure and Levinas’s ethical interruption - we glimpse a fuller picture: ontology opens the space for encounter; ethics keeps that space from closing into self-sufficiency. They are indeed two inflections of one openness - the event of meaning itself oscillating between comprehension and compassion.

    That’s the sense in which I see Compassionism (my own evolving framework) as both ontological and ethical: Being is never a neutral substrate; it is always already an appeal to alleviate suffering. The “beyond-Being” that Levinas names as meontology is, for me, the pulse of compassion within Being - the refusal of indifference that makes the universe intelligible at all.
  • What is right and what is wrong and how do we know?
    Thank you, Constance - your reply is characteristically rich and generous. I’ll try to clarify what I meant by “ethical phenomenologist avant la lettre,” and then engage the larger point about metaphysics, language, and sublation.

    By that phrase, I didn’t mean to credit Hitchens with inventing phenomenology before Husserl, but to suggest that - without a phenomenological vocabulary - he was already doing something akin to ethical phenomenology: describing the pathic structure of moral experience as it shows itself. His critique of the “God-concept” exposes how the affective dynamics of guilt, obedience, and awe shape lived consciousness. In that sense, he was tracing the phenomenology of moral emotions, not their ontology. Of course, he stayed at the ontic level, as you rightly note - within a naturalistic frame - but even there he revealed how language and power constitute the moral field. I saw in that an unthematized phenomenological insight: the “givenness” of moral experience before metaphysics.

    I completely agree that this doesn’t yet cross into Heidegger’s ontological project, where Being itself is what gives the horizon of moral and cognitive sense. But perhaps that’s where Hitchens’s critique is instructive by contrast: his limitation to the ontic makes visible the need for a deeper account of why such moral structures appear at all. In other words, his lack of ontology points us back toward ontology.

    Your reflections on metaphysics as the inner life of physics resonate strongly with me. The modern impulse to banish metaphysics misunderstands that physics already presupposes a metaphysical grammar: measurement, causality, time, identity. What Hegel calls sublation - the overcoming that preserves - captures precisely the movement I see in philosophy today: a self-surpassing language that still carries its inheritance. The physicist who speaks of “fields,” “vacua,” or “information” is already enmeshed in a metaphoric web that opens onto Being, whether acknowledged or not. In that sense, I share your view that metaphysics isn’t something beyond the empirical but the intelligible depth of the empirical itself.

    And yes, Derrida’s trace completes that thought: every act of meaning both posits and defers, revealing an openness that no closure can finally master. The very question, as you beautifully put it, is freedom - not the possession of answers, but the trembling of language at the edge of its own horizon. Philosophy lives in that interval where language interrogates itself.

    So perhaps where we converge is here: the ethical and the ontological are not two regions but two inflections of the same opening. The cry of the suffering child and the concept of Being are both ways the real addresses itself to us. Hitchens hears the cry and answers in moral outrage; Heidegger hears the silence of Being and answers in thought. Both responses, in their different registers, attest to the same fact: existence calls, and we must respond - whether with compassion, with questioning, or with both.
  • What is right and what is wrong and how do we know?
    Thank you, Constance - your response beautifully captures the Heideggerian intuition that any ethical relation presupposes a being who can be related. I agree that empathy or responsibility cannot arise in a vacuum; the one who responds must first be capable of self-holding - of sustaining their own openness to Being. In that sense, the Haltung De Gennaro discusses is indeed prior in the analytic order, even if, as you note, not necessarily in the historical or evolutionary one.

    Where I was pressing the “ought” as equiprimordial with Being, I didn’t mean that ethics could float free of ontology. Rather, I wanted to resist the tendency - especially in some readings of Heidegger - to treat the moral call as a secondary derivative of Dasein’s self-understanding. The cry of the wounded child doesn’t wait for us to complete an ontological analysis before it claims us. The affective disclosure of suffering and the ontological disclosure of being are simultaneous moments of one event - what Levinas might call the face of the Other breaking through ontology.

    So perhaps the priority I meant was phenomenological, not logical: in lived experience, the ethical summons arrives first, even if conceptually we can only make sense of it against the backdrop of selfhood and worldhood. The “holding” of oneself in Being and the “being claimed” by another’s vulnerability may be two aspects of the same existential structure - the self as simultaneously sustaining and exposed.

    Your point about the crucible of Dasein as the “only actuality there is” resonates deeply. Yet what strikes me is that this actuality - this singular locus of disclosure - is always already permeated by others. Even our most solitary “holding” is linguistically and affectively mediated. In that sense, the first cry of the artist or the infant is both self-affirming and world-summoning. It is a reaching outward that presupposes no prior metaphysics of self or society, only the raw openness of existence calling to itself through others.

    Perhaps ethics, then, begins not after ontology but as its trembling edge - where Being feels its own finitude and vulnerability through us.
  • Could anyone have made a different choice in the past than the ones they made?
    Thank you for your detailed reply. I think we’re largely aligned, though we diverge on a few interpretive nuances.

    On factorization, I accept your refinement: decoherence doesn’t magically bifurcate the universal state into sealed compartments but rather entangles subsystems such that coherence between them becomes practically lost. When I said “continuously factorizes the total state,” I meant this relational entanglement structure - the effective tensor-product decomposition that yields dynamically autonomous components relative to the measurement context. So yes, the split is local and conditional, not global. I like your phrasing that “only the systems that have measured the decohered state become entangled with it.” That’s a good corrective to the loose Everettian imagery.

    I share your relational preference over full-blown realism. My “ontologically generative” phrasing was intended in that same spirit: the ontology is not a collection of separate universes but a web of ever-evolving relational configurations. The structure of relations changes - new entanglement correlations come into being - even though the global amplitude distribution remains one evolving unity. So perhaps “structurally generative” would be the better expression.

    Regarding “observer” versus “interaction,” I agree completely. I used “observer” phenomenologically, but “interaction” avoids the mentalistic overtones. I’m wary of language that suggests intent or consciousness as a special causal category; it risks re-smuggling the old metaphysical privilege that quantum theory works so hard to dissolve.

    On free will, I think we converge on compatibilism but may use slightly different vocabularies. I take your point that “one’s own nature” could itself be construed as a second metaphysical causal chain, but I meant it more modestly: the organism’s integrated causal structure - its neural and psychological architecture - as distinct from an external intruder like the rabies virus. The point is not that one becomes “independent” of causality, but that causal efficacy remains internal to the system’s evaluative dynamics. That’s why the rabies example marks the boundary between responsibility and compulsion. Dualists, as you say, would complicate that further, perhaps imagining the pathogen interfering with both the physical and the mental “tracks,” but I see that as multiplying mysteries rather than explaining anything.

    Your description of sleep paralysis is fascinating - it’s a vivid phenomenological example of partial causal decoupling: consciousness active, motor output suppressed. From a naturalistic angle, it actually illustrates how finely tuned the causal layers of agency are: when one channel is interrupted, agency becomes experiential but not performative. It’s a transient epiphenomenal pocket, not a metaphysical clue, but I can see why it feels uncanny.

    As for your final point - that human choice doesn’t depend on branching - I fully agree. The experiential grammar of choice would be the same in any interpretation, whether Everettian, Bohmian, or GRW. My comparison between branching and agency was metaphorical: both involve local differentiation within a globally deterministic process. The analogy isn’t meant to make agency depend on branching, only to highlight the structural parallel between emergent autonomy in physics and in psychology.

    In that sense, I see compatibilist freedom and relational quantum ontology as reflections of the same deeper pattern: causal closure at the global level, emergent quasi-autonomy at the local. In my GENE Causal Self Model, those quasi-autonomous patterns are constituted by the interaction of Genes, Environments, Nutrients, and Experiences - a biological analogue to decoherence’s relational structure. Both describe complex systems that remain causally determined yet exhibit self-organizing agency through internal feedback loops. Determinism and autonomy, far from being opposites, are two perspectives on the same relational process.
  • Could anyone have made a different choice in the past than the ones they made?
    Thank you for your thoughtful and detailed reply. I appreciate your clarifications - especially on terminology.

    When I said the universal state vector differentiates, I didn’t mean that it “splits” or collapses in any literal sense. I agree that the universal wave function evolves unitarily. What I meant is that decoherence continuously factorizes the total state into dynamically autonomous subspaces. The evolution is singular, but its structure becomes increasingly partitioned as interference terms vanish. In that descriptive sense, decoherence is ontologically generative - it produces new relational structure within the universal state, even if not new “worlds” as discrete entities.
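    To make the “interference terms vanish” part concrete, here is the standard textbook toy picture I have in mind (ordinary decoherence theory, nothing interpretation-specific): for a system S entangled with an environment E,

    \[
    |\Psi\rangle = \sum_i c_i\,|s_i\rangle|E_i\rangle,
    \qquad
    \rho_S = \mathrm{Tr}_E\,|\Psi\rangle\langle\Psi| = \sum_{i,j} c_i c_j^{*}\,\langle E_j|E_i\rangle\,|s_i\rangle\langle s_j|.
    \]

    As the environment states become effectively orthogonal, \(\langle E_j|E_i\rangle \to \delta_{ij}\), the off-diagonal interference terms are suppressed and

    \[
    \rho_S \to \sum_i |c_i|^2\,|s_i\rangle\langle s_i|,
    \]

    which is what I mean by “dynamically autonomous subspaces”: one unitary evolution whose effective partition grows ever finer.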

    You’re right that Everett himself didn’t speak of sharply defined “branches,” and I share your caution about reifying them. Still, decoherence does create stable quasi-classical sectors whose internal histories no longer interfere. Calling them “branches” is shorthand for these dynamically independent histories. So when I said that “an event that leaves no macroscopic trace still differentiates the overall state,” I meant that every quantum fluctuation alters the total wave function’s structure, even if those alterations remain forever unamplified from our classical perspective.

    I also agree that quantum mechanics is not observer-dependent in the Wigner sense - nothing special happens because a conscious agent looks. My use of “observer” was relational, not Cartesian: any subsystem that records or correlates information functions as an “observer” relative to another. Within that relational framework, phenomenological perspectives arise naturally from entanglement structure, not metaphysical privilege.

    Regarding freedom and responsibility: yes, phenomenology remains intact. Even if the total evolution is deterministic, each branch still contains agents whose deliberative architectures causally mediate outcomes within that branch. That structure grounds a compatibilist sense of agency: one can be determined and yet meaningfully responsible insofar as choices flow from one’s own evaluative processes. Libertarian freedom, by contrast, would require causal independence from one’s own nature - an incoherent notion. In your rabies analogy, the external pathogen literally overrides the person’s cognitive structure, which is why we no longer ascribe responsibility. The contrast actually illustrates compatibilism rather than libertarianism.

    Stepping back, the parallel between branching and agency seems telling: both involve emergent autonomy within an underlying deterministic totality. The global state’s evolution may be seamless, yet locally it yields distinct, causally closed structures - worlds in one case, deliberating agents in the other. In both, the differentiation is real enough to sustain the lived grammar of choice, even if metaphysical freedom never enters the picture.

    In my own framework - the GENE Causal Self Model - I interpret such autonomy through the interplay of Genes, Environments, Nutrients, and Experiences. Each agent’s decisions are determined by the evolving configuration of these factors, yet within that causal web, reflective self-organization still emerges. Much like decoherent branches of the wave function, selves are dynamically distinct yet law-governed substructures of a single evolving whole.
  • What is right and what is wrong and how do we know?
    Thank you for such a rich and thoughtful elaboration. I deeply respect the metaphysical continuity you describe - from Husserl through Heidegger to Henry - and your insistence that all talk of “reality” already presupposes consciousness or Dasein as its horizon. I don’t reject that lineage; I simply ask a different kind of question within it.

    Where you pursue the whatness of Being, I’m drawn to the ought that emerges within Being: the affective and ethical textures through which existence discloses itself. When a child is burned or a slave is whipped, what matters first is not that Being manifests itself in suffering, but that suffering calls us to respond. This “call” is not derivative of ontology; it is equiprimordial with it.

    That’s why I invoked Hitchens - not as a metaphysician, but as an ethical phenomenologist avant la lettre. His critique may lack ontological precision, but it exposes the pathic structure of moral life: how conceptual fictions (like “God” or “sin”) can channel either empathy or cruelty. The moment consciousness awakens to that relation, philosophy is already in motion.

    So when you ask, “What is ethics?”, I would answer: ethics is the self-manifestation of Being as concern for the Other. It is not an add-on to ontology, nor a sociological derivative, but the dimension in which Being feels its own vulnerability. Perhaps that’s where our projects intersect - your ontological monism and my existential compassionism are two ways of naming the same intimacy between consciousness and value.
  • Could anyone have made a different choice in the past than the ones they made?
    Thank you for the thoughtful clarification. I think our main divergence lies in how we treat ontic status within the Everett framework.

    You’re right that Everett dispenses with counterfactual definiteness: only the total wave function is “real,” while definite outcomes are branch-relative. However, if every decoherence event differentiates the universal state vector, then by definition, each “unmeasured” quantum fluctuation still contributes to the branching structure of the multiverse. The fact that we only observe a subset of classical branches doesn’t mean the rest lack existence; it only means they are decohered beyond causal contact with us.

    So when I say “an event that leaves no macroscopic trace still differentiates the overall state,” I mean that decoherence is ontologically generative - the universe’s global wave function encodes every microscopic difference, even those never amplified to our classical level. From that global perspective, nothing “fails to happen”; it merely fails to be observable within our branch.

    As for responsibility, I agree that phenomenology remains intact. Even if the total state-space evolves deterministically, subjective deliberation and outcome differentiation are still structurally real within each branch - enough to preserve the experiential grammar of choice, if not libertarian freedom.
  • What is right and what is wrong and how do we know?
    I am right. You are wrong. Because I say so.

    Infallible!
    I like sushi

    If infallibility is self-declared, then I, too, am infallible - and I say you’re wrong. Now what?
  • Could anyone have made a different choice in the past than the ones they made?
    Thank you for the thoughtful engagement - I think we’re converging on several points while framing them differently.

    On Claim A, I accept that Norton’s Dome demonstrates classical indeterminism under non-Lipschitz conditions, though it’s a purely mathematical curiosity. In any physically realizable system governed by continuously differentiable dynamics, each decoherence event still alters the global quantum state. Even if that alteration remains thermodynamically undetectable within a local Lyapunov horizon, it nevertheless yields a distinct universal configuration in principle. My claim concerns this ontological divergence, not its empirical detectability.

    On Claim B, you’re right that the assertion depends on whether one accepts counterfactual definiteness. I was speaking from an Everett-style, decoherence-based ontology where every event contributes to a definite branch of the universal wave function. Under that framework, an event that leaves no macroscopic trace still differentiates the overall state of the universe. The difference need not be observable to be real.

    For Claim C, I’d refine “always matters” as follows: every quantum perturbation modifies the total wave function, but only some of those perturbations are amplified within our causal region into new classical structures. Others disperse or remain dynamically isolated, but they still shape the global state. “Trapped” and “amplified” are perspectival distinctions within one continuous evolution.

    On determinism and responsibility, I think we share the pragmatic view. Determinism doesn’t abolish agency; it merely redefines it as a complex causal process rather than an uncaused power. Responsibility survives as a social and ethical convention that regulates behaviour within the deterministic flow. To borrow my own GENE model language, deliberation and choice are emergent computations of Genes, Environments, Nutrients, and Experiences - not exemptions from causality but expressions of it.

    So when I speak of “choice” or “agency,” I mean the real-time process of deliberation that precedes action, not a metaphysical ability to have done otherwise. The phenomenology of choice remains intact, even if the universe’s total state never could have evolved differently.
  • What is right and what is wrong and how do we know?
    Your points about the historicity of thought and the phenomenological horizon are well taken. Yes, any talk of “God,” “the world,” or “the self” emerges from within language and culture, not from an Archimedean point outside them. But I think that is precisely why Hitchens’s critique retains philosophical force. His focus on moral consequences is not “idle talk”; it is an inquiry into how concepts shape lived reality.

    You call God a fiction born of thrownness into finitude. Very well, but fictions that shape moral life still have measurable effects. Whether “God” is a phenomenological boundary-concept or an anthropomorphic myth, the question remains: What does belief in this fiction do to sentient beings? Does it cultivate compassion, or sanctify domination? That is not a superficial question; it is an existential one.

    You say philosophy should proceed like a scientist suspending cultural assumptions. Yet even the phenomenological reduction cannot suspend the ethical field in which human beings suffer and act. “Value-in-being,” as you put it, is not discovered in neutral contemplation but in encounter - the face of the Other, to borrow from Levinas, not the mineral horizon of a geologist.

    When Hitchens challenges doctrines that justify eternal punishment or servitude, he is performing a kind of moral reduction: bracketing divine authority to see what remains of goodness once the threats are removed. That is philosophy doing its most basic work - clarifying the conditions of value and responsibility.

    So yes, we can follow Husserl into the indeterminacy of consciousness, or Heidegger into the openness of Being; but we must also follow the child burned at the stake, or the slave whipped in God’s name, into the concreteness of suffering. Otherwise, “pathos” becomes an aesthetic posture rather than an ethical response.

    If the “greatness” woven into existence means anything, perhaps it is precisely this - that consciousness is capable of compassion even without metaphysical guarantees. That, too, is philosophy, and it is not nihilism.
  • What is right and what is wrong and how do we know?
    I agree that philosophy must go deeper than empirical refutations or moral outrage - but Hitchens’s value lies precisely in the moral dimension that many technical philosophers neglect. He exposes how certain conceptions of God license cruelty and submission, and that critique operates at the level of moral phenomenology, not mere empiricism. When he asks “What kind of being would demand eternal praise under threat of hell?”, he isn’t just being cynical - he’s inviting us to examine the psychological and ethical structure of the “God-concept” itself.

    You ask what is “natural” versus “supernatural.” I’d say that distinction loses meaning if “God” cannot be coherently defined or empirically differentiated from nature. Once the supernatural ceases to have observable consequences, we’re left only with human moral experience - which is precisely where Hitchens situates his inquiry: in compassion, honesty, and the freedom to question.

    If “God” is a moral concept, then its worth must be judged by the moral outcomes it inspires. A concept that sanctifies fear, tribalism, or subservience fails on its own moral grounds. The greatness you mention may indeed be woven into the fabric of human existence - but perhaps what we call “God” is simply our evolving attempt to articulate that greatness in moral and existential terms. When the old metaphors harden into dogma, philosophy reopens the question.

    So I’d say: philosophy doesn’t replace Hitchens’s critique - it completes it.
  • What is right and what is wrong and how do we know?
    Thank you for sharing your observations. Given how self-contradictory the Bible is, I am not surprised that Christians can't agree about what is right and what is wrong.
  • Could anyone have made a different choice in the past than the ones they made?
    Thank you for your detailed reply.

    1. On decoherence, chaos and “everything matters”
    You’re right to insist that every physical event in principle influences the future state of the universe. But there are three separate claims mixed together here, and they need to be untangled:

    Claim A: “Every decoherence event must produce a macroscopically different future.”
    This is false as a practical claim. Mathematically, you can map a micro-perturbation forward, but most microscopic differences remain confined beneath the system’s Lyapunov horizon and are washed out by dissipation and averaging. Saying “it mattered in principle” is not the same as “it produced a distinct, observable macroscopic outcome.”

    Claim B: “If a quantum event didn’t cascade to macroscopic difference, then it didn’t happen.”
    This is a category error. An event’s occurrence is not defined by whether it produces long-range, observable divergence in weather on Mars. Decoherence can and does happen locally without producing macroscopic differences that survive coarse-graining. To deny the event happened because it didn’t alter the weather is to adopt a peculiar, counterfactual definition of “happened” that isn’t used in physics.

    Claim C: “Because chaotic systems amplify differences, microscopic quantum noise always matters.”
    Chaos gives sensitivity to initial conditions, not guaranteed macroscopic divergence from every tiny perturbation within any fixed observational timescale. Some perturbations are amplified quickly; many are damped or trapped inside subsystems and never produce a new, robust classical structure. So yes, everything is part of the state functionally, but that does not imply practical, observable macroscopic branching for every microscopic event.
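    To see Claim C’s logic in miniature, here is a toy numerical sketch (my own illustrative example with arbitrary parameters and a crude Euler integrator, not a model of real weather): two Lorenz-63 trajectories whose initial conditions differ by 10^-12. The gap grows roughly exponentially, but the time to reach order-one separation scales with the logarithm of the initial difference, so each extra order of magnitude of precision buys only a fixed increment of time below the horizon.

    # Toy Lorenz-63 sensitivity demo in pure Python.
    # Assumed, illustrative choices: sigma=10, rho=28, beta=8/3, Euler step dt=0.001.
    import math

    def step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return (x + sigma * (y - x) * dt,
                y + (x * (rho - z) - y) * dt,
                z + (x * y - beta * z) * dt)

    a = (1.0, 1.0, 1.0)
    b = (1.0 + 1e-12, 1.0, 1.0)  # same state, perturbed by 1e-12 in x
    for k in range(40001):
        if k % 5000 == 0:
            gap = math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
            print(f"t = {k * 0.001:5.1f}   separation = {gap:.3e}")
        a, b = step(a), step(b)

    The separation stays tiny for roughly (1/λ)·ln(1/δ₀) time units (about 30 here, with the Lorenz system’s largest exponent λ ≈ 0.9) before saturating at the attractor’s size: sensitivity, yes, but with a logarithmic delay.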

    2. On ensemble forecasting and pragmatic unpredictability
    Ensemble weather models show that small perturbations grow and forecasts diverge over days to weeks. That demonstrates sensitivity, not an omnipresent quantum-to-macroscopic channel that we can exploit or even detect in a controlled way. Ensemble perturbations used in practice are far larger than Planck-scale corrections; their convergence tells us about statistical predictability and model error; it does not prove ontic indeterminacy at the macroscale. In short: models are evidence of chaotic growth, not of routine quantum domination of weather.

    3. Interpretations of quantum mechanics - collapse, MWI, Bohmian, etc.
    Two helpful distinctions:

    Predictive equivalence vs metaphysics.
    Most mainstream interpretations (Copenhagen-style pragmatism, Everett/MWI, Bohmian/DBB, GRW-style objective collapse) make the same experimental predictions for standard quantum experiments. Where they differ is metaphysical: whether there is a literal branching reality (MWI), hidden variables (Bohmian), or real collapses (GRW/Penrose). That difference matters philosophically but not experimentally so far.

    Determinism vs practical unpredictability.
    MWI is best understood as deterministic at the universal wave function level (no collapse), while Bohmian mechanics is deterministic at the level of particle trajectories guided by the wave function. Both can produce the same Born probabilities for observable results. Objective collapse theories, if true, would introduce genuine stochastic events at the fundamental level. Superdeterminism attempts to recover determinism by postulating global correlations that undermine usual independence assumptions - but it’s philosophically and scientifically unattractive because it erodes the basis for experimental inference.
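    For readers keeping score, the deterministic/probabilistic split can be stated in two lines of standard quantum mechanics (common to all the interpretations named above):

    \[
    i\hbar\,\frac{\partial}{\partial t}|\Psi(t)\rangle = \hat{H}\,|\Psi(t)\rangle
    \qquad \text{(unitary, deterministic evolution)}
    \]
    \[
    P(i) = |\langle i|\Psi\rangle|^{2}
    \qquad \text{(Born rule for outcome } i\text{)}
    \]

    Every interpretation listed reproduces the second line; they disagree about what, if anything, interrupts the first.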

    So: yes, many interpretations are deterministic; some are not. But the existence of multiple empirically equivalent interpretations means the metaphysical verdict isn’t settled by current experiments.

    4. Functional robustness (brains, transistors, computation)
    Absolutely: brains and silicon devices exploit enormous redundancy and averaging to achieve robust classical behaviour despite quantum microphysics. That robustness is precisely why we can treat neurons as implementing computations without invoking exotic quantum effects. Inputs and boundary conditions matter: if an input to a brain were influenced by a huge amplification of a quantum event, your choices could track that influence, but that’s a contingent physical story, not a metaphysical proof of libertarian free will.

    5. About “happening”, counterfactuals and responsibility
    Two related points:

    Happening and counterfactual dependence.
    Whether an event “happened” should not be defined by whether it caused a macroscopic divergence millions of miles away. Physics generally treats events as happening if they leave local, causal traces (entanglement, records, thermodynamic irreversibility), not by whether they produce globally visible differences across light-years.

    Responsibility and determinism.
    Even if one accepts a deterministic physical description (whether classical or quantum-deterministic under MWI or Bohmian), that does not automatically dissolve ordinary moral responsibility. That’s the compatibilist position: responsibility depends on capacities, reasons-responsiveness, and the appropriate psychological relations, not on metaphysical indeterminism. Saying “my decision was set at the Big Bang” is metaphysically dramatic but doesn’t change whether you deliberated, had conscious intentions, and acted for your reason(s) - which are precisely the things our ethics and law respond to.

    6. About “pondering” and the illusion of choice
    You’re right to resist the crude conclusion that determinism makes choice an illusion. Choice is a process that unfolds over time; it can be broken into sub-choices and revisions. Whether decisions are determined or involve ontic randomness does not by itself answer whether they were genuinely yours. If you deliberated, weighed reasons, and acted from those deliberations, we rightly treat that as agency. Randomness doesn’t create agency; reasons and responsiveness do.

    We shouldn’t conflate three different claims: (A) that micro events in principle influence the universal state; (B) that such influence routinely produces distinct, observable macroscopic outcomes; and (C) that metaphysical determinism therefore undermines agency. In practice, decoherence + dissipation + coarse-graining mean most quantum perturbations don’t make detectable macroscopic differences. Interpretations of quantum mechanics disagree about metaphysics but agree on predictions. And finally, even in a deterministic physical world, agency and moral responsibility can still be meaningful because they hinge on capacities, reasons, and psychological continuity, not on metaphysical indeterminism.
  • What is right and what is wrong and how do we know?
    Did you watch the above video? I agree with everything he said in the video. Please note that I am talking about the Biblical God.

    Christopher Hitchens may not have been a professional philosopher, but I don’t think that diminishes the depth or value of his insights. What I find interesting about what he says about God is not technical philosophy but moral and existential clarity.

    He challenges the assumption that belief in God automatically makes a person moral, and he exposes the moral contradictions in many religious doctrines - especially those that sanctify cruelty, fear, or submission. He asks uncomfortable but necessary questions: If God is good, why does he permit suffering? If morality depends on divine command, does that make genocide or slavery good if commanded by God?

    Hitchens also reminds us that we can find meaning, awe, and compassion without invoking the supernatural. He combined reason, moral passion, and literary brilliance - showing that intellectual honesty and empathy can coexist.

    So, while he wasn’t a technical philosopher, he was a moral and cultural critic who made philosophy accessible and urgent - which, to me, is just as important.
  • Could anyone have made a different choice in the past than the ones they made?
    1. On Decoherence and Chaotic Amplification

    I appreciate your clarification. I agree that once decoherence has occurred, each branch behaves classically. My emphasis was never that quantum events never cascade upward, but that most do not in practice. Chaotic sensitivity doesn’t guarantee amplification of all microscopic noise; it only ensures that some minute differences can diverge over time. The key is statistical significance, not logical possibility.

    The fact that there are trillions of decoherence events per nanosecond doesn’t entail that every one creates a macroscopically distinct weather trajectory. Many microscopic perturbations occur below the system’s Lyapunov horizon and are absorbed by dissipative averaging. The “butterfly effect” metaphor was intended to illustrate sensitivity, not to claim that every quantum fluctuation alters the weather.

    So:

    Yes, chaos implies amplification of some differences.
    No, it doesn’t imply that quantum noise routinely dominates macroscopic evolution.

    Empirically, ensemble models of the atmosphere converge statistically even when perturbed at Planck-scale levels, suggesting the mean state is robust, though individual trajectories differ. (See Lorenz 1969; Palmer 2015.)
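    As a toy version of that ensemble point (my own sketch with an arbitrary perturbation size, standing in for the atmosphere - not Lorenz’s or Palmer’s actual setup), one can perturb a small Lorenz-63 ensemble and watch individual trajectories diverge while a long-run statistic stays robust:

    # Toy "ensemble forecast": 20 members with randomly perturbed initial x.
    # Illustrative perturbation size 1e-6; Lorenz-63 as a stand-in for weather.
    import random

    def step(s, dt=0.001):
        x, y, z = s
        return (x + 10.0 * (y - x) * dt,
                y + (x * (28.0 - z) - y) * dt,
                z + (x * y - (8.0 / 3.0) * z) * dt)

    random.seed(0)
    members = [(1.0 + random.uniform(-1e-6, 1e-6), 1.0, 1.0) for _ in range(20)]
    n = 40000
    zmeans = [0.0] * len(members)
    for _ in range(n):
        members = [step(s) for s in members]
        for i, s in enumerate(members):
            zmeans[i] += s[2] / n  # accumulate each member's time-mean of z

    xs = [s[0] for s in members]
    print(f"final x spread across members: {max(xs) - min(xs):.2f}")         # large: pointwise forecasts fail
    print(f"time-mean z, min..max: {min(zmeans):.2f} .. {max(zmeans):.2f}")  # narrow: statistics survive

    Pointwise prediction fails early; the statistical description survives - which is exactly what ensemble convergence does and does not show.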

    2. On Determinism, Ontic vs. Epistemic Randomness

    You’re right that we can’t know that randomness is purely epistemic. My point is pragmatic: there’s no experimental evidence that ontic indeterminacy penetrates to the macroscopic domain in any controllable way.
    MWI, Bohmian mechanics, and objective-collapse theories have made the same statistical predictions for every experiment performed so far. So whether randomness is ontic or epistemic is metaphysical until we have a test that distinguishes them.

    Even if indeterminacy is ontic, our weather forecasts, computer simulations, and neural computations behave classically because decoherence has rendered the underlying quantum superpositions unobservable.

    So I’d phrase it this way:

    The world might be ontically indeterministic, but macroscopic unpredictability is functionally classical.

    3. On Functional Robustness

    Completely agree: both transistors and neurons rely on quantum effects yet yield stable classical outputs. The entire architecture of computation, biological or digital, exists precisely because thermal noise, tunnelling, and decoherence are averaged out or counterbalanced.

    That’s why we can meaningfully say “the brain implements a computation” without appealing to hidden quantum randomness. Penrose-style arguments for quantum consciousness have not found empirical support.

    4. On Choice, Process, and Responsibility

    I share your intuition that a “choice” unfolds over time, not as a single instant.
    Libet-type studies show neural precursors before conscious awareness, yet subsequent vetoes demonstrate ongoing integration rather than fatalistic pre-commitment.

    Determinism doesn’t nullify responsibility. The self is part of the causal web. “Physics made me do it” is no more an excuse than “my character made me do it.” In either case, the agent and the cause coincide.

    Thus, even in a deterministic universe, moral responsibility is preserved as long as actions flow from the agent’s own motivations and reasoning processes rather than external coercion.

    5. Summary

    Decoherence → classicality; not all micro noise scales up.
    Chaos → sensitivity; not universality of amplification.
    Randomness → possibly ontic, but operationally epistemic.
    Functional systems → quantum-grounded but classically robust.
    Agency → compatible with determinism when causation runs through the agent.

    Quantum indeterminacy might underlie reality, but classical chaos and cognitive computation sit comfortably atop it.
    Responsibility remains a structural property of agency, not an escape hatch from physics.
  • Could anyone have made a different choice in the past than the ones they made?
    Thank you for your thoughtful and technically well-informed reply. Let me address your key points one by one.

    1. On Decoherence vs. Propagation of Quantum Effects

    I agree that quantum coherence is not required for a quantum event to have macroscopic consequences. My point, however, is that once decoherence has occurred, the resulting branch (or outcome) behaves classically, and further amplification of that quantum difference depends on the sensitivity to initial conditions within the system in question.

    So while a chaotic system like the atmosphere can indeed amplify microscopic differences, the relevant question is how often quantum noise actually changes initial conditions at scales that matter for macroscopic divergence. The overwhelming majority of microscopic variations wash out statistically - only in rare, non-averaging circumstances do they cascade upward. Hence, quantum randomness provides the ultimate floor of uncertainty, but not a practically observable driver of weather dynamics.

    2. On the “Timescale of Divergence”

    I appreciate your breakdown - minutes for human choice, months for weather, millennia for asteroid trajectories, etc. That seems broadly reasonable as an order-of-magnitude intuition under MWI or any interpretation that preserves causal continuity. What’s worth emphasizing, though, is that those divergence times describe when outcomes become empirically distinguishable, not when quantum indeterminacy begins influencing them. The influence starts at the quantum event; it’s just that the macroscopic consequences take time to manifest and become measurable.

    3. On Determinism and Randomness in Complex Systems

    I also agree that classical atmospheric dynamics is chaotic, and that even an infinitesimal perturbation can, in principle, lead to vastly different outcomes. However, that doesn’t mean the macroscopic weather is “quantum random” in any meaningful sense - only that its deterministic equations are sensitive to initial data we can never measure with infinite precision. The randomness, therefore, is epistemic, not ontic - arising from limited knowledge rather than fundamental indeterminacy.

    Quantum randomness sets the ultimate limit of predictability, but chaos is what magnifies that limit into practical unpredictability.
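
    A back-of-envelope way to see that magnification - every number below is an assumption chosen for illustration, not a measured value - is that a perturbation of size eps growing as eps·exp(lam·t) reaches macroscopic size L at roughly t ≈ (1/lam)·ln(L/eps), so the waiting time grows only logarithmically as eps shrinks:

    ```python
    # Back-of-envelope sketch (every number here is an assumption chosen
    # for illustration): a perturbation eps in a chaotic system grows
    # roughly as eps * exp(lam * t), so it reaches macroscopic size L at
    #   t ~ (1/lam) * ln(L / eps)
    import math

    lam = 1.0 / (5 * 86_400)  # assumed error e-folding every ~5 days, in 1/s
    L = 1.0                   # macroscopic scale (arbitrary units)

    for eps in (1e-6, 1e-20, 1e-35):  # ever-smaller initial perturbations
        t_days = math.log(L / eps) / lam / 86_400
        print(f"eps = {eps:.0e}: ~{t_days:.0f} days to macroscopic size")
    # ~69, ~230, and ~403 days: shrinking eps by 29 orders of magnitude
    # lengthens the time only about sixfold. The logarithm is why tiny
    # noise matters "eventually", while practical forecast error is
    # dominated by classical measurement uncertainty.
    ```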

    4. On Decision-Making Systems and Quantum Filtering

    I completely agree that biological and technological systems are designed to suppress or filter quantum noise. The fact that transistors, neurons, and ion channels function reliably at all is testament to that design. Quantum tunneling, superposition, or entanglement may underlie the microphysics, but the emergent computation (neural or digital) operates in the classical regime. So while randomness exists, most functional systems are robustly deterministic within the energy and temperature ranges they inhabit.

    5. Summary

    * Decoherence kills coherence extremely fast in macroscopic environments.
    * Chaotic systems can amplify any difference, including quantum ones, but not all microscopic noise scales up meaningfully.
    * Macroscopic unpredictability is largely classical chaos, not ongoing quantum indeterminacy.
    * Living and engineered systems filter quantum randomness to maintain stability and reproducibility.

    So while I agree with you that quantum events can, in principle, propagate to the macro-scale through chaotic amplification, I maintain that in natural systems like the atmosphere, such amplification is statistically negligible in practice - the weather is unpredictable, but not “quantumly” so.
  • Could anyone have made a different choice in the past than the ones they made?
    So no, you could not have made a different choice because that would have meant that you had different information than you did when you made the decision.
    — Harry Hindu

    I agree.
  • Could anyone have made a different choice in the past than the ones they made?
    Thank you for asking for a source. You’re right that quantum effects can, in principle, influence macroscopic systems, but the consensus in physics is that quantum coherence decays extremely rapidly in warm, complex environments like the atmosphere, which prevents quantum indeterminacy from meaningfully propagating to the classical scale except through special, engineered amplifiers (like photomultipliers or Geiger counters).

    Here are some references that support this:

    1. Wojciech Zurek (2003). Decoherence, einselection, and the quantum origins of the classical. Reviews of Modern Physics, 75, 715–775.
    Zurek explains that decoherence times for macroscopic systems at room temperature are extraordinarily short (on the order of 10^-20 seconds), meaning superpositions collapse into classical mixtures almost instantly.
    DOI: 10.1103/RevModPhys.75.715

    2. Joos & Zeh (1985). The emergence of classical properties through interaction with the environment. Zeitschrift für Physik B Condensed Matter, 59, 223–243.
    They calculate that even a dust grain in air decoheres in about 10^-31 seconds due to collisions with air molecules and photons - long before any macroscopic process could amplify quantum noise.

    3. Max Tegmark (2000). Importance of quantum decoherence in brain processes. Physical Review E, 61, 4194–4206.
    Tegmark estimated decoherence times in the brain at 10^-13 to 10^-20 seconds, concluding that biological systems are effectively classical. The same reasoning applies (even more strongly) to meteorological systems, where temperatures and interaction rates are vastly higher.

    In short, quantum coherence does not persist long enough in atmospheric systems to influence large-scale weather patterns. While every individual molecular collision is, in a sense, quantum, the statistical ensemble of billions of interactions behaves deterministically according to classical thermodynamics. That’s why classical models like Navier–Stokes work so well for weather prediction (up to chaotic limits of measurement precision), without needing to invoke quantum probability.
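
    To put the cited figures side by side - simple arithmetic on the decoherence times above; the millisecond and eddy timescales are my own rough assumptions:

    ```python
    # Simple arithmetic on the decoherence times cited above (Zurek 2003;
    # Tegmark 2000); the comparison timescales are my own rough assumptions.
    import math

    decoherence_s = 1e-20   # macroscopic object at room temperature (Zurek 2003)
    comparisons = {
        "neural integration (~1 ms, assumed)": 1e-3,
        "small atmospheric eddy (~minutes, assumed)": 1e2,
    }
    for name, t in comparisons.items():
        gap = math.log10(t / decoherence_s)
        print(f"{name}: {gap:.0f} orders of magnitude slower than decoherence")
    # ~17 and ~22 orders of magnitude: superpositions are gone long before
    # any dynamics that could amplify them even begins.
    ```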

    That said, I fully agree with you that quantum randomness is crucial to mutation-level processes in biology - those occur in small, shielded molecular systems, where quantum tunnelling or base-pairing transitions can indeed introduce randomness before decoherence sets in. The key distinction is scale and isolation: quantum effects matter in micro-environments, but decoherence washes them out in large, warm, chaotic systems like the atmosphere.

    Here are two images I created to help explain my worldview:
    [Image: The GENE Causal Self-Model infographic]

    [Image: Understanding Human Behaviour]
  • What is right and what is wrong and how do we know?
    I agree with Christopher Hitchens. Thank you very much for posting the video.
  • Could anyone have made a different choice in the past than the ones they made?
    Thank you very much for the fascinating links you posted. I really appreciate your thoughtful follow-up. I agree that we’re largely converging on the same view.

    Regarding Norton’s dome, I think it’s an interesting mathematical curiosity rather than a physically realistic case of indeterminism. It depends on idealized assumptions (e.g., perfectly frictionless surface, infinite precision in initial conditions) that don’t occur in nature. Still, it’s a useful illustration that even Newtonian mechanics can be formulated to allow indeterminate solutions under certain boundary conditions.

    As for the quantum–chaos connection, yes - Schrödinger’s cat is indeed the archetypal quantum amplifier, though it’s an artificial setup. In natural systems like weather, decoherence tends to suppress quantum-level randomness before it can scale up meaningfully. Lorenz’s “butterfly effect” remains classical chaos: deterministic, yet unpredictable in practice because initial conditions can never be measured with infinite precision. Whether a microscopic quantum fluctuation could actually alter a macroscopic weather pattern remains an open question - interesting but speculative.

    I agree with you that determinism is a great tool for agency. Even if all our choices are determined, they are still our choices - the outputs of our own brains, reasoning, and values. Indeterminacy doesn’t enhance freedom; it merely adds noise.

    On superdeterminism: I share your concern. It’s unfalsifiable if taken literally (since it could “explain away” any experimental result), but it remains conceptually valuable in exploring whether quantum correlations might arise from deeper causal connections. I don’t endorse it, but I don’t dismiss it either until we have decisive evidence.

    You put it well: the bottom line is that we mostly agree - especially that neither pure determinism nor indeterminism rescues libertarian free will. What matters is understanding the causal web as fully as possible.

    Thanks again for such a stimulating exchange. Discussions like this remind me how philosophy and physics intersect in fascinating ways.
  • Could anyone have made a different choice in the past than the ones they made?
    Quantum indeterminacy is irrelevant because at macroscopic levels all the quantum weirdness (e.g. quantum indeterminacy and superposition) averages out.
    — Truth Seeker
    Only sometimes, but not the important times. There are chaotic systems like the weather. One tiny quantum event can (will) cascade into completely different weather in a couple months (popularly known as the butterfly effect), so the history of the world and human decisions is significantly due to these quantum fluctuations. In other words, given a non-deterministic interpretation of quantum mechanics, a person's decision is anything but inevitable from a given prior state. There's a significant list of non-deterministic interpretations. Are you so sure (without evidence) that they're all wrong?

    Anyway, it's still pretty irrelevant since that sort of indeterminism doesn't yield free will. Making truly random decisions is not a way to make better decisions, which is why mental processes do not leverage that tool.
    — noAxioms

    Thank you for the thoughtful response. You raise a key point - that in chaotic systems, even minute quantum fluctuations could, in theory, scale up to macroscopic differences (the “quantum butterfly effect”). However, I think this doesn’t meaningfully undermine determinism, for the following reasons:

    1. Determinism vs. Predictability:
    Determinism doesn’t require predictability. A system can be deterministic and yet practically unpredictable due to sensitivity to initial conditions. Chaos theory actually presupposes determinism - small differences in starting conditions lead to vastly different outcomes because the system follows deterministic laws. If the system were non-deterministic, the equations of chaos wouldn’t even apply.
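
    A minimal sketch of that point - my own example; the logistic map here stands in for any chaotic system. The map is fully deterministic, so rerunning it reproduces the same orbit exactly, yet two orbits started 10^-15 apart decorrelate within about fifty iterations:

    ```python
    # Minimal sketch (my own example): the logistic map x -> r*x*(1-x)
    # with r = 4 is fully deterministic -- the same start always yields
    # the same orbit -- yet it is chaotic: a 1e-15 difference in the
    # starting point grows to order one within ~50 iterations.
    def orbit(x0, n=60, r=4.0):
        xs = [x0]
        for _ in range(n):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = orbit(0.3)
    b = orbit(0.3 + 1e-15)
    for i in (0, 20, 40, 60):
        print(f"step {i:2d}: |a - b| = {abs(a[i] - b[i]):.3e}")
    # Sensitivity, not indeterminism: orbit(0.3) == orbit(0.3) is always True.
    ```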

    2. Quantum Amplification Is Not Evidence of Freedom:
    As you already noted, even if quantum indeterminacy occasionally affects macroscopic events, randomness is not freedom. A decision influenced by quantum noise is not a “free” decision - it’s just probabilistic. It replaces deterministic necessity with stochastic chance. That doesn’t rescue libertarian free will; it only introduces randomness into causation.

    3. Quantum Interpretations and Evidence:
    You’re right that there are non-deterministic interpretations of quantum mechanics - such as Copenhagen, GRW, or QBism - but there are also deterministic ones: de Broglie-Bohm (pilot-wave), Many-Worlds, and superdeterministic models. None of them are empirically distinguishable so far. Until we have direct evidence for objective indeterminacy, determinism remains a coherent and arguably simpler hypothesis (per Occam’s razor).

    4. Macroscopic Decoherence:
    Decoherence ensures that quantum superpositions in the brain or weather systems effectively collapse into stable classical states extremely quickly. Whatever quantum noise exists gets averaged out before it can influence neural computation in any meaningful way - except in speculative scenarios, which remain unproven.

    So, while I agree that quantum indeterminacy might introduce genuine randomness into physical systems, I don’t see how that transforms causality into freedom or invalidates the deterministic model of the universe as a whole. At best, it replaces determinism with a mix of determinism + randomness - neither of which grants us metaphysical “free will.”
  • Could anyone have made a different choice in the past than the ones they made?
    Randomness entails a factor not under our control.
    — Relativist

    It's not just randomness that lies outside our control. We don't control the genes we inherit, our early environments, our early nutrition, or our early experiences. As we grow older, we acquire some control over our environments, nutrition, and experiences, but even then, we don't have 100% control.

    [Image: The GENE Causal Self-Model infographic]
  • Could anyone have made a different choice in the past than the ones they made?
    I think both, but I'm not a compatibilist.
    — bert1

    That's interesting.
  • Could anyone have made a different choice in the past than the ones they made?
    Even though it seems like you could have chosen differently, it is impossible to know you could have.
    — Relativist

    I agree that it is impossible to know with 100% certainty.