Comments

  • Evidence of Consciousness Surviving the Body
    A seventh misconception treats negative cases as field-defeaters (“if some reports are wrong, the thesis fails”). The thesis of this chapter is proportionate: it does not depend on unanimity or on universal accuracy. It claims that some anchored cases survive ordinary scrutiny and that these anchors stabilize the larger testimonial field. One counterexample to a weak report does not touch a different case whose particulars were independently confirmed.Sam26

    But you haven't presented any cases that can be expected to survive an ordinary degree of scientific scrutiny.

    A third misconception claims “there are no controls,” implying that without randomized trials, testimony cannot carry weight. Prospective hospital protocols supply a different kind of control: fixed clinical clocks, environmental constraints (taped eyes, sealed rooms), hidden-target or procedure-bound particulars, and independent confirmation. These features limit post-hoc embroidery and allow specific claims to be checked. They do not turn testimony into lab instrumentation, but they do make some reports probative under ordinary public standards.Sam26

    Randomized trials aren't a requirement, but a controlled environment is necessary in order to eliminate the possibility that supposedly unconscious subjects are actually conscious and are physically sensing and cognitively reconstructing their immediate environments by normal sensory means during EEG flat-lining. One such experiment is the Human Consciousness Project, which investigated awareness during resuscitation of cardiac arrest patients in collaboration with 25 medical centers across the US and Europe. That investigation, among other things, controlled the environment so as to assess the possibility that NDE subjects were sensing information that they couldn't possibly deduce by normal bodily means (remote viewing).

    "The study was to introduce a multi-disciplinary perspective, cerebral monitoring techniques, and innovative tests.[7]. Among the innovative research designs was the placement of images in resuscitation areas. The images were placed on shelves below the ceiling and could only be seen from above. The design was constructed in order to verify the possibility of out-of-body experiences"

    The results were negative, with none of the patients recalling seeing the test information that was situated above their heads:

    " The authors reported that 101 out of 140 patients completed stage 2 interviews. They found that 9 out of 101 cardiac arrest survivors had experiences that could be classified as near-death experiences. 46% could retrieve memories from their cardiac arrest, and the memories could be subdivided into the following categories: fear; animals/plants; bright light; violence/persecution; deja-vu; family; recalling events post-CA. Of these, 2% fulfilled the criteria of the Greyson NDE scale and reported an out-of-body experience with awareness of the resuscitation situation. Of these, 1 person described details related to technical resuscitation equipment. None of the patients reported seeing the test design with upward facing images."

  • Evidence of Consciousness Surviving the Body
    In modern western societies, testimony that appeals to clairvoyance falls under misrepresentation of evidence, an inevitable outcome under witness cross-examination in relation to critical norms of rational enquiry and expert testimony, possibly resulting in accusations of perjury against the witness. I would hazard a guess that the last time an American court accepted 'spectral' evidence was during the Salem witch trials.

    The need for expert testimony is even enshrined in the code of Hammurabi of ancient Mesopotamia; not even the ancients accepted unfettered mass testimony.

    So much for us "naysaying materialists" refusing to accept courtroom standards of evidence (unless we are talking about courtrooms in a backward or corrupt developing country).
  • Evidence of Consciousness Surviving the Body
    I am guessing that if EEGs are flat-lining while patients are forming memories associated with NDEs, this is evidence for sparse neural encoding of memories, as during sleep, encoding that does not involve the global electrical activity of millions of neurons entailed by the denser neural encoding that an EEG would detect.

    Which seems ironic, in the sense that Sheldrake's proponents seem to think that apparent brain death during memory formation is evidence for radically holistic encoding of memories extending beyond the brain. But when you think about it for more than a split second, the opposite seems far more likely, namely atomistic, symbol-like memories being formed that slip under the EEG radar.
  • Evidence of Consciousness Surviving the Body
    Sam, name one reproducible experiment under controlled laboratory conditions that confirms that NDEs entail either clairvoyance or disembodied cognition.

    Intersubjective reproducibility of stimulus-responses of subjects undergoing NDEs is critical for the intersubjective interpretation of NDE testimonies, for otherwise we merely have a set of cryptic testimonies expressed in the private languages of NDE subjects.
  • Evidence of Consciousness Surviving the Body


    Sure, so the question is whether proponents of physical explanations for "consciousness" and purported anomalous phenomena share that sentiment, in which case everyone is arguing at cross purposes, assuming of course that both sides can agree that the evidence for telepathy and remote viewing is sorely lacking.
  • Evidence of Consciousness Surviving the Body
    Why must it be physical? This assumes from the outset that everything real must be made of particles or fields described by physics. But that is precisely the point in dispute.

    Consider an analogy: in modern physics, atoms aren’t little billiard balls but excitations of fields. Yet fields themselves are puzzling entities—mathematically precise but ontologically unclear. No one thinks an electromagnetic field is a “blob of energy floating around.” It’s a structuring principle that manifests in predictable patterns, even if its “substance” is elusive.
    Wayfarer

    Which is precisely why Physics survives theory change, at least for ontic structural realists - for only the holistic inferential structure of theories is falsifiable and semantically relevant. I think you might be conflating Physics with Physicalism - the misconception that physics has determinate and atomic denotational semantics (i.e. Atomism).

    It is because "Physicality" is intersubjective, structural, and semantically indeterminate with respect to the subjectivities of the users of physical theories, that every possible world can be described "physically".

    Being "physical" isn't a property of the denoted, but refers to the fact that the entity concerned is being intersubjectively denoted, i.e. referred to only in the sense of abstract Lockean primary qualities that are intersubjectively translatable by leaving the Lockean secondary qualities undefined, whereby individual speakers are free to subjectively interpret physics as they see fit (or as I call it, "The Hard Feature of Physics").
  • Evidence of Consciousness Surviving the Body
    If we agree that one case of NDE was real, then we are dealing with an anomaly that materialism cannot describe. I am wondering how you could explain the NDE experience when there is no brain activity.MoK

    For the record, I don't consider any such case to be real - a flat EEG reading isn't a sufficient measurement for defining brain death. Only quacks seriously entertain such theories. But if such cases were real in some sense of having intersubjective confirmation of anomalous phenomena, then it would at most imply a hole in our current physical theories, resulting in a new physical theory with regard to an extended notion of the body with additional senses, coupled with a new definition of personhood. Ultimately, all of this would amount to reducing our conception of such anomalous phenomena to a new physical normality, leaving religious followers and believers in the paranormal as dissatisfied as they are presently.

    NDEs cannot in principle deliver the epistemic certainty and psychological security that their enthusiasts want, even if they are assumed to be veridical.
  • Evidence of Consciousness Surviving the Body
    Even if NDEs were veridical, that wouldn't be enough to challenge physicalism or mind-brain equivalence. The same goes for past-life regression. At most, only a particular and narrow-minded version of physicalism would be refuted. The same existential doubts, anxieties and disputes would eventually resurface exactly as before, with respect to a merely extended conception of the body and the senses, a conception that could even bring new forms of nihilism.
  • Idealism in Context
    That all events in the universe are causally inevitable is the thesis of Determinism. A thesis is an hypothesis, not an ontological commitment. As a thesis, it accepts that it may be proved wrong, in the same way that the equation s = 0.5∗g∗t² may be proved wrong. A thesis does not require a suspension of scepticism, which is why it is a thesis.RussellA

    Actually that's untrue, because without ontological commitment to universal quantification over absolute infinity, one cannot distinguish the hypothesis of determinism from its antithesis.

    What a hypothesis means is subject to as much uncertainty as its truth value. Unless one is already committed to the truth of determinism, one isn't in a position to know what the hypothesis of "determinism" refers to.
  • Referential opacity
    Leibniz's Law at the Post Office

    The postal system relies upon referential transparency, namely of knowing an immutable address that is associated with an intended recipient, as opposed to knowing the mutable personal details of the sender and the recipient which are kept hidden from the postal service ("information opacity").

    So here, the information space (that is hidden from the postal service) is comprised of vectors of information, where a vector is a possible list of attributes corresponding to a possible user. This information space is dual to the address space, namely the set of possible postal addresses for possible users.

    The information space is a vector field; the vector field indices are the address space.

    Address information can also be an attribute of information space, but this shouldn't be confused with the address space: the address information that you put on your resume isn't the address used by the postal system. Address information is mutable information that is considered to be an attribute of senders and recipients, whereas a postal address is part of the immutable structure of the postal system.

    What if a user moves house?

    If a user moves house, this is represented by an arrow linking 'before' and 'after' vectors in information space (assuming the info is available there). But from the perspective of the postal service, users don't move house; rather, houses change their occupants - because the postal system uses postal addresses to designate rigidly.

    Leibniz's Law

    Assuming that Leibniz's Law holds with respect to a given postal service, then it holds internally in the sense of characterising the postal operations of that given postal system, but it does not hold externally in the sense of surviving a change to the postal service itself.

    The indiscernibility of identicals is a definitional criterion for the meaning of a pair of addresses:

    ∀x ∀y[ x = y → ∀F(Fx ↔ Fy)] (i.e. identical addresses imply identical occupants).

    Compare that to Frege's disastrous Basic Law V(b)

    ϵF = ϵG → ∀x(Fx ≡ Gx)

    Here, the difference is that ϵF and ϵG are extensions, namely vectors in information space rather than addresses. If these vectors are finite then they can be fully observed, meaning that if they are observed to be identical then they must be the same vector, meaning that V(b) is applicable. But in the infinite case, the two lists cannot be exhaustively observed, in which case we have at most equality between two incomplete lists, which obviously cannot imply that they denote the same vector, due to the problem of induction.

    (Frege, and many logicians after him, conflated the notion of addresses, which can always designate rigidly by virtue of merely being indexicals devoid of information content, with observation vectors that cannot rigidly designate the set of concepts that they fall under).

    The identity of indiscernibles is postally invalid if multiple home ownership is allowed:

    ∀x∀y[∀F(Fx ↔ Fy) → x = y ] (which is true of a vector space, but generally false of a vector field).
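    The two principles above can be sketched concretely. Below is a toy model of the postal analogy (the addresses and occupant attributes are invented for illustration): the indiscernibility of identicals holds by construction, while the identity of indiscernibles fails as soon as one occupant vector appears at two addresses.

```python
# Toy model of the postal analogy (addresses and attributes invented):
# the address space maps rigid addresses to occupant attribute vectors
# in "information space".
address_space = {
    "1 Elm St":  ("Alice", "accountant"),
    "2 Oak Ave": ("Alice", "accountant"),   # Alice owns two homes
    "3 Fir Rd":  ("Bob", "baker"),
}

def indiscernibility_of_identicals(a, b):
    """Identical addresses imply identical occupant vectors."""
    return (a != b) or (address_space[a] == address_space[b])

def identity_of_indiscernibles(a, b):
    """Identical occupant vectors would have to imply identical addresses."""
    return (address_space[a] != address_space[b]) or (a == b)

# The first principle holds for every pair of addresses...
assert all(indiscernibility_of_identicals(a, b)
           for a in address_space for b in address_space)

# ...but the second fails under multiple home ownership:
assert not identity_of_indiscernibles("1 Elm St", "2 Oak Ave")
```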
  • Idealism in Context
    The movement of the stone is determined by the force of gravity.

    It is part of the nature of language that many words are being used as figures of speech rather than literally, such as "determined". Also included are metaphor, simile, metonymy, synecdoche, hyperbole, irony and idiom.
    RussellA

    Yes, that is perfectly reasonable as an informal description of gravity when describing a particular case of motion in the concrete rather than in the abstract and as Russell observed, in such cases the concept of causality can be eliminated from the description. But determinism takes the causal "determination" of movement by gravity literally, universally and outside of the context of humans determining outcomes, and in a way that requires suspension of Humean skepticism due to the determinist's apparent ontological commitment to universal quantification over generally infinite domains.

    Recall the game-semantic interpretation of the quantifiers, in which the meaning of a universal quantifier refers to a winning strategy for ensuring the truth of the quantified predicate P(x) whichever x is chosen. This interpretation is in line with the pragmatic sense of determination used in the language-game of engineering, where an engineer strategizes against nature to determine a product design that is correlated with generally favourable outcomes but that is never failure-proof. (The engineer's sense of "winning" is neither universal nor guaranteed, unlike the determinist's).

    If a determinist wants to avoid being charged with being ontologically committed to Berkeley's Spirits in another guise, then he certainly cannot appeal to a standard game-semantic interpretation of the quantifiers. But then what other options are available to him? Platonism? (Isn't that really the same as the spirit world?). He has no means of eliminating the quantifiers unless he believes the world to be finite. Perhaps he could argue that he is using "gravity" as a semantically ambiguous rigid designator, but in that case he is merely making determinism true by convention...
  • Idealism in Context
    Determinism can always survive on a theoretical level, in the sense that an ill-posed problem with more than one possible solution can always be converted into a well-posed problem with exactly one solution by merely adding additional premises.

    However, the ordinary English meaning of "determine" does not refer to a property but to a predicate verb relating an intended course of action to an outcome. Ironically, an absolute empirical interpretation of "intention" is ill-posed, and hence so is the empirical meaning of "determination", which is why metaphysical definitions and defences of determinism are inherently circular.

    For this reason, I think materialism, i.e. a metaphysical commitment to objective substances, should be distanced from determinism - for if anything, a commitment to determinism looks like a metaphysical commitment to the objective existence of intentional forces of agency (i.e. spirits) that exist above and beyond the physically describable aspects of substances.
  • Evidence of Consciousness Surviving the Body
    If NDEs were objective, then intelligence agencies around the world would be training spies to induce them for purposes of remote viewing. Alas, the Stargate Project failed to establish the objectivity of OBEs, and the entire project was declassified and shut down.

    If remote viewing test results are invariably bad for lucid dreamers with living brains, then I'm fairly confident that their results are not going to improve by inducing actual brain death.
  • Idealism in Context
    I'm not sure I follow you exactly. But the intention to interpret Locke's distinction as semantic seems like a good way to go. I think of it as a methodological decision. I don't know how far that coincides with your view.
    When you talk of "indexical relations" are you thinking of the equation, for example, between photons and colours? If so, I wouldn't equate finding them with the whole purpose of physics, nor think that it amounts to enabling inter-subjective communication. Or do I misunderstand you?
    Ludwig V

    Yes, the semantic distinction is a methodological distinction.

    I think of mathematical language as being analogous to a high-level programming language, such as the C programming language. In order for C to be portable to any computer hardware system, it must only specify the grammar of the language and must refrain from specifying how its expressions are to be compiled into machine-code instructions, which is vendor-specific and requires a bespoke solution. Likewise, children must learn how to compile their mother tongue into thoughts and percepts; but their understanding of their language isn't part of the definition of their mother tongue, since their brains, ostensive learning and perspectives are unique to themselves.

    A physical language is about encoding common knowledge in a universal and portable format; so like C, its semantics evolved to become definitionally independent of the perceptual judgements of any individual user. This indispensable "hard feature" of a physical language is often mistaken by philosophers as constituting a "hard problem", due to their conflating intersubjective high-level semantics, whose subjective interpretation is deliberately left open, with the low-level subjective interpretation of the language that is bespoke for each person.
  • Idealism in Context
    In my view, both Berkeley and his detractors are right. His detractors are right in that physical definitions purposely omit the subjective. Therefore Lockean primary qualities should be understood as being definitionally irreducible to Lockean secondary qualities. Where his detractors might err is in mistaking definitional irreducibility, which purely concerns semantic irreducibility, for metaphysical irreducibility concerning a fundamental ontological distinction between Lockean primary and secondary qualities.

    On the other hand, Berkeley is right for pointing out that Lockean primary qualities can only be used for denoting Lockean secondary qualities. In other words, if we think of mathematics as amounting to a language for relating indexicals rather than substances, such that physics is understood as amounting to finding useful indexical relations for the purpose of defining protocols for intersubjective communication and control, then we can reconcile the Lockean hard distinction with Berkeley's collapse of the distinction - on the condition that the Lockean distinction is interpreted as being semantic rather than metaphysical.
  • Idealism in Context
    A classical analogy for interaction-free measurements, as in the quantum Zeno Elitzur–Vaidman bomb tester, can be given in terms of my impulsive niece making T tours of a shopping mall in order to decide what she'd like me to buy her for her birthday.

    Suppose that she has my credit card for some reason (oops my mistake), and I take her to a shopping mall so that she can find something she would like for her birthday. If she finds what she wants, then on each iteration t of the mall there is a chance that she will succumb to temptation and use my credit card to buy the item for herself there and then, resulting in her feeling immediate guilt and confessing, such that we leave the mall there and then (outcome |1>, bomb exploded). If she is good and manages to resist temptation for T iterations, then she tells me what she would like for her birthday and we both leave happy (outcome |0>, bomb live). Else after T iterations she doesn't find anything she would like and we both leave the mall disappointed (outcome |1>, bomb dud).

    - Whereas my niece and my credit card have a definite location, my money does not, and neither does her gift until as and when the credit card is used.

    - Interaction free measurements aren't non-classical unless Bell's inequalities/Quantum contextuality are involved (and which are not involved in the above analogy).
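    For what it's worth, the quantum counterpart of the T mall tours is easy to compute. In the quantum Zeno version of the bomb tester, N interrogation cycles each rotate the probe state by π/(2N), and a live bomb is certified without explosion with probability cos(π/(2N))^(2N), which tends to 1 as N grows. The sketch below simply evaluates that standard formula:

```python
import math

# Quantum Zeno bomb tester: probability of certifying a live bomb
# without detonating it, using N interrogation cycles, each rotating
# the probe state by pi/(2N).
def detection_without_explosion(n_cycles: int) -> float:
    return math.cos(math.pi / (2 * n_cycles)) ** (2 * n_cycles)

# The probability climbs towards 1 as the interrogations get gentler:
for n in (1, 10, 100, 1000):
    print(n, detection_without_explosion(n))
```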
  • Idealism in Context
    By all accounts, Berkeley was an instrumentalist. So Berkeley would have "believed in physics" - but not in realist metaphysical interpretations of physical posits as denoting hidden entities that are irreducible to observations.

    As stated earlier, Berkeley's ideas are passive, hence ideas cannot literally cause other ideas, implying that causal agency and free will are not observable for Berkeley as they are not for Hume. So if the existence of agency, free-will, moral choices etc, are to be assumed, then Berkeley must introduce some additional ontological entity (active spirits) that are not reducible to patterns of passive ideas. The resulting dualism looks to me, rather ironically, as a somewhat cleaner version of physicalism, if we assume that Berkeley's "God" refers only to the assumed existence of moral agency, which physicalists seem to accept, at least judging by their actions.

    Berkeley's occasionalism reminds me of the computation of virtual worlds, in that the real causes of a change of state in a virtual world are the hidden actions of CPU and GPU instructions, as opposed to the on-screen graphics presented to the player. Indeed, virtual worlds remind us that we don't see causal necessity; the only non-controversial applications of the word "necessity" refer to normative speech acts. Perhaps a materialist's metaphysical appeal to physical necessity can even be considered a form of occasionalism in denial.
  • Idealism in Context
    Berkeley did not believe in what today we call Physicalism, as he believed that everything in the world, whether fundamental particles, fundamental forces, tables, chairs or trees are bundles of ideas in the mind of God.RussellA

    I think that Berkeley would have accepted physical explanations, but as being semantically reducible to talk of private sensations, perhaps by arguing that subjective semantics must underpin physical semantics in order to logically relate physical theory to observation.

    One obvious issue with his position is the question of how multiple observers are possible; for if Berkeley isn't a solipsist and accepts the existence of other minds, then presumably those minds access or constitute the same world and therefore the same sets of ideas. Which is presumably where his appeal to God comes in, amounting to an axiom that a persistent world exists regardless of whether a particular individual is observing or interacting with it - but isn't this more or less the same as the axiom of a persistent world under materialism?

    Conversely, how can materialism justify belief in a mind-independent physical world without appealing to a likeness principle and a "master argument", in order to ground a theory of evidence relating subjective observations to the material world?
  • Idealism in Context
    Berkeley presented what we might now call a nominalist or deflationary view with respect to abstracta, both mathematical and physical, that considers talk about abstracta as reducing to talk about first-personal observation criteria, and in this respect he preceded the thoughts of the logical positivist Ernst Mach, whom he likely inspired, by approximately two centuries. But he clearly ran into difficulties when it came to reconciling his radically empirical "esse est percipi" principle with rationalistic principles, especially

    1) Rationalist principles pertaining to causal agency. Perception is usually understood to be passive, in contrast to agency that is neither passive nor directly perceived; so does agency exist, and if so then on what grounds, and how does causal agency relate to his perception principle?

    2) The apparent reliability of the principle of induction: how can the apparent reliability of inductive beliefs, which assume that the world will not change radically from one observation to the next, be believed, if things only exist when observed?

    Berkeley, like the logical positivists after him, failed to reconcile his philosophical commitment to a radical form of empiricism with his other philosophical commitment to agency and morality. But in his defence, nobody before or after Berkeley has managed to propose an ontology that doesn't have analogous issues. Indeed, the impersonal forces of nature posited by classical materialism, which are forever only indirectly observable, seem upon closer examination to be a heady mixture of Berkeleyian spirits and Berkeleyian ideas, rather than the antithesis of his position as commonly assumed.

  • Referential opacity
    Notice that propositional attitudes at least internally satisfy Leibniz's Law, since if Lois believes that Superman is Clark Kent, then she believes that they have identical properties. So it might be expected that propositional attitudes and their relationship to the real world can be depicted, albeit not explained, using traditional denotational semantics, i.e. category theory. By this proposal, we have a category L consisting of a set of names equipped with an equivalence relation (i.e. a Setoid), that denotes Lois's conception of analytic equivalence, in which "Superman" is definitely not analytically equivalent to "Clark Kent". It is reasonable to think that the relationship between L and some other category W that represents an alternative analytic conception of the world can be described in terms of a functor F : L --> W.

    Also recall that in Superman III, corrupted Superman physically expels Clark Kent from his body, and then proceeds to strangle him to death along with the de re/de dicto distinction.
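    The setoid-and-functor proposal can be illustrated with a toy sketch (my own ad hoc encoding, not standard category-theoretic machinery): a setoid is modelled as a partition of names, and a functor between the induced categories is just an equivalence-preserving map. The map from Lois's setoid L to the worldly setoid W may merge equivalence classes, but no map can run the other way, since that would have to split one.

```python
# Setoids as partitions of a set of names (toy encoding, invented data).
lois_classes  = [{"Superman"}, {"Clark Kent"}]    # L: distinct for Lois
world_classes = [{"Superman", "Clark Kent"}]      # W: co-referring in fact

def equiv(classes, a, b):
    """a ~ b iff some equivalence class contains both names."""
    return any(a in c and b in c for c in classes)

def preserves_equivalence(mapping, src, dst):
    """Whether mapping is equivalence-preserving from src to dst,
    i.e. a functor between the categories induced by the setoids."""
    names = [n for c in src for n in c]
    return all(equiv(src, a, b) <= equiv(dst, mapping[a], mapping[b])
               for a in names for b in names)

identity_map = {"Superman": "Superman", "Clark Kent": "Clark Kent"}

# The identity map is a functor F : L --> W (it merges Lois's classes)...
assert preserves_equivalence(identity_map, lois_classes, world_classes)
# ...but not a functor W --> L: that would have to split an equivalence.
assert not preserves_equivalence(identity_map, world_classes, lois_classes)
```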
  • On emergence and consciousness
    The unity of a proposition in language is one thing; the unity of experience is something else entirely. When I imagine a red triangle, I don’t just have “red” and “triangle” floating around in my head in some grammatical alignment. I have a coherent perceptual experience with vivid qualitative content. The parts of the brain firing don’t have that quality. There’s nothing red in the neurons, just as there’s nothing red in a sentence that uses the word “red.”

    So no, I don’t buy that this is a problem of grammatical form. Experience isn’t grammar. You can’t dissolve the hard problem by shifting the conversation to the philosophy of language. You just move the goalposts and pretend the mystery went away.
    RogueAI

    A proposition is meant to describe and thereby predict the world.

    So then what of the unity of the proposition?

    Consider the sentence "The cat sat on a mat" that syntactically consists of a cleanly separated subject, predicate and object. Is this syntactical partition an aspect of the semantic content of the sentence? This is related to the question as to the extent to which subject-object-predicate structure has predictive value.

    Compare to token embeddings in LLMs. Text corpora are encoded discretely and atomically at the individual-word level, preserving the subject-predicate-object structure, by a standard tokenizer; the tokens are then fed into a neural network encoder that learns and outputs a holistic language in which chains of words are encoded as atomically indivisible new words, such that subject-predicate-object structure collapses.

    In philosophical parlance it might be said that the objective of an LLM encoder is to maximize the unity of the propositions of the input language, by compressing them into holistically encoded "mentalese" that is a closer representation of reality by virtue of each of the encoded sentences representing their entire corpus, hence having higher predictive value than the original sentences.

    Is it possible to represent the meaning of "strong" emergence in holistic mentalese? I think not as easily, if at all. Certainly it is very easy to express the problem of strong emergence in formal syntax (however one interprets emergence), by merely pointing out that the relation Sitting(Cat,Mat) and the list [Cat,Sitting,Mat] will both coincide with the same syntactical sentence, and by arguing that attempts to fix this problem through syntactical enrichment will lead to the semantic problem of Bradley's infinite regress. But in mentalese, words aren't explicitly defined in terms of a priori syntactical structure, but implicitly in terms of an entire open-ended and evolving body of text corpora, plus data from other modalities.

    Strong emergence concerns the semantic discrepancy between logically atomic semantics (as in Russell's and Wittgenstein's logical atomism, as exemplified by tokenizers) and the infinite continuity of experience. But the semantic discrepancy between mentalese and experience is much narrower, due to mentalese being semantically continuous and having higher predictive value. A semantic gap still remains, since mentalese is not perfectly predictive, so I think the philosophical lesson of LLM encoders is that the unity of the proposition problem can be recast as the indeterminacy of inferential semantics.
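    The collapse of word-level structure under holistic encoding can be mimicked with a toy byte-pair-style merge (an illustrative sketch only; real tokenizers and encoders are far more involved, and the corpus here is invented): repeatedly fusing the most frequent adjacent token pair turns word sequences into atomically indivisible new tokens, so that subject, predicate and object boundaries vanish.

```python
from collections import Counter

def merge_most_frequent(tokens, merges):
    """Repeatedly fuse the most frequent adjacent token pair into a
    single new atomic token (a toy byte-pair-style merge)."""
    for _ in range(merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        fused, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
                fused.append(a + "_" + b)  # one indivisible new token
                i += 2
            else:
                fused.append(tokens[i])
                i += 1
        tokens = fused
    return tokens

corpus = "the cat sat on the mat the cat sat on the hat".split()
# After a few merges, subject-predicate-object boundaries disappear:
print(merge_most_frequent(corpus, 3))
```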
  • On emergence and consciousness
    I think that dodges the issue. Consciousness isn’t just a structure of terms like a sentence, it’s an experience. It has qualia. When I imagine a red apple, there’s redness. The agony of an impacted tooth is a brute, felt fact that needs to be scientifically explained, which of course leads to the Hard Problem. So no, the ‘unity of consciousness’ isn’t like the unity of a sentence.RogueAI

    What is meant by a scientific explanation here? If scientific knowledge is conceived to be reducible to a formal system, then a scientific explanation of experiential redness must either take experiential redness at face value as an atomic proposition, meaning that science assumes rather than explains the phenomena, else experiential redness must be reducible to more fundamental relations and relata - in which case we end up with the unity of the proposition problem, which concerns the meaning of relations and relata and whether they are distinct and atomically separable concepts.
  • On emergence and consciousness
    Somebody first needs to explain why emergence should be considered to refer to a physical or metaphysical property, as opposed to referring to grammatical structure.

    Relations are never logically reducible to the related subjects. E.g. the relation John loves Mary isn't reducible to the concepts of John, Loving, and Mary considered separately, and yet nobody (at least since Francis Herbert Bradley) seems to think of such a relation as posing a profound question for science or philosophy, in the same way that is alleged for relating consciousness to physical states.

    The comprehension of any non-atomic proposition in a given language entails a unity of thought that isn't itself expressed propositionally in the language used to express the proposition concerned. This implicit understanding of propositional unity is expressed non-propositionally in terms of the grammatical rules of the language. Why should the supposed "unity of consciousness" be interpreted physically or metaphysically, when the concept of propositional unity is generally ignored?
  • On emergence and consciousness
    Logically speaking, first-order quantification refers to quantifying over atomic terms (i.e. constants) that satisfy a first-order proposition, namely a boolean function whose domain only consists of such terms. So is a set that is described purely in terms of first-order quantification the logical expression of "weak" emergence? Compare to the more ambiguous concept called "second-order quantification", that quantifies over arbitrary sets of terms, as opposed to just terms. Can that be considered the logical expression of "strong" emergence?
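    The contrast can be put in standard logical notation (a schematic of my reading, not a settled formalization of weak versus strong emergence):

```latex
% "Weak" emergence as first-order comprehension: the set S is fixed entirely
% by a condition \varphi on individual terms of the domain D.
\[
  S = \{\, x \in D \mid \varphi(x) \,\}
\]
% "Strong" emergence as second-order quantification: we quantify over
% arbitrary subsets X of D, whose properties \Psi need not reduce to any
% first-order condition on their members.
\[
  \exists X \subseteq D \;\, \Psi(X)
\]
```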

    More generally, consider a functor in Category Theory that is used in Tarskian fashion to interpret a category (i.e. a deductive system) that is treated as an object language, in terms of another category that is considered to be a distinct meta-language that bears no causal or functional relationship to the former, in spite of the former being isomorphic to a (proper) subset of the latter.

    Although natural language is semantically closed, and hence not formally divisible into separate object and meta ontologies, I suspect that philosophers have a tendency to conflate emergence-as-grammar with macroscopic empirical phenomena.

    In my view, "Consciousness is strongly emergent" is a contestable linguistic proposal that the term "consciousness" should be formally treated as being part of a functionally closed mentalistic language that is being used as a meta-language, or object language, for interpreting (or being interpreted by) a separate physical language, that is formally expressible in terms of the functorial model of semantics as provided by category theory.
  • Evidence of Consciousness Surviving the Body
    That’s a valid question—but perhaps what’s really at stake is our concept of what counts as “physical” and how information is encoded and retrieved in living systems. Even in animals, we find forms of memory and orientation that are difficult to explain within current neurobiological or straightforwardly genetic models. Take, for example, pond eels in suburban Sydney that migrate thousands of kilometers to spawn near New Caledonia—crossing man-made obstacles like golf courses along ancestral routes. After years in the open ocean, their offspring return to the very same suburban ponds (ref). It’s hard to see how this kind of precise memory is passed on physically, and yet it plainly occurs.Wayfarer

    The mechanics of cognitive externalism are generally considered to be physical, as when resorting to a calculator to do arithmetic or when a robot is programmed to navigate using landmarks. Cognitive externalism is a good counterargument for rejecting the conception of intelligence as an attribute of closed and isolated systems, but it further undermines the paranormal significance of testimonies in the present context.

    By definition, physical concepts are causally-closed and intersubjective.

    Even more dramatically, the research of psychiatrist Ian Stevenson, though often met with skepticism, presents another challenge. Over several decades, he documented more than 2,500 cases of young children recalling specific details of previous lives with the details being validated against extensive documentary evidence and witness testimony. Often what they said was well beyond what the children could plausibly have learned by ordinary means and conveyed knowledge of people and events that they could only have learned about from experience. Stevenson was cautious in drawing conclusions - he never claimed that his research proved that reincarnation occurred, but that these cases showed features suggestive of memory transfer beyond what conventional physical mechanisms could explain.Wayfarer

    Similar logical problems ensue. For example, I cannot remember what I ate for breakfast on this very day last year, and yet this doesn't seem to matter with regards to anyone's identification of me as being the "same" person from last year up to the present. In fact I suspect that self-identification over time is as much a product of amnesia as it is of memory recall, and that identification over time is more a case of redefining definitional criteria for personhood, as opposed to applying a priori definitional criteria.

    Memories are cognitive processes in, and of, the present; yesterday's newspaper isn't evidence that we occupy a block universe, namely the silly idea that persists in physics of an archive that stores an inaccessible copy of yesterday. So why should memories be considered to be a necessary or sufficient condition for identifying personhood across lives, if memories aren't literally past-referring and if they are in any case inessential for reidentification within a life?
  • Evidence of Consciousness Surviving the Body
    The idea that NDEs behave as a sixth sense is actually in conflict with the idea that NDEs are evidence of Cartesian dualism; for how are the experiences of a disembodied consciousness supposed to be transferred to the physical body as is necessary for the wakeful patient to remember and verbally report his NDE?

    Suppose NDEs amount to a sixth sense: then either NDEs are non-physical events associated with a disembodied consciousness, in which case NDEs cannot be remembered by a physical human being - by definition of "physical" as causally closed - or else NDEs are the product of a physical sixth sense belonging to a non-visible but physically extended body, in which case consciousness wasn't physically disembodied after all during the NDE, ergo NDEs aren't proof of consciousness surviving physical death with respect to the extended concept of the physical body that includes the NDE.
  • Artificial Intelligence and the Ground of Reason (P2)
    I think that’s a rather deflationary way of putting it. The 'non-computable' aspect of decision-making isn’t some hidden magic, but the fact that our decisions take place in a world of values, commitments, and consequences.Wayfarer

    I actually find it tempting to define computability in terms of what humans do, following Wittgenstein's remark on the Church-Turing thesis, in which he identified the operations of the Turing machine with the concept of a human manually following instructions. Taken literally, that remark inverts the ontological relationship between computer science and psychology that is often assumed in AI, a field that tends to think of the former as grounding the latter rather than the converse.
    An advantage of identifying computability in terms of how humans understand rule following (as opposed to say, thinking of computability platonically in terms of a hypothesized realm of ideal and mind-independent mathematical objects), is that the term "non-computability" can then be reserved to refer to the uncontrolled and unpredictable actions of the environment in response to human-computable decision making.

    As for the secret-sauce remark, I was thinking in particular of the common belief that self-similar recursion is necessary to implement human-level reasoning, a view held by Douglas Hofstadter, which he has come to question in recent years given the lack of self-similar recursion in apparently successful LLM architectures - an absence that Hofstadter acknowledges came as a big surprise to him.

    Passing just shows that the machine or algorithm can exhibit intelligent behavior equivalent to that of a human, not that it is equivalent to a human in all of the cognitive capacities that might inform behavior. That's it. We can have a robust idea of intelligence and what constitutes meaningful behavior and still find a use for something like the Turing Test.ToothyMaw

    Sure, the Turing test is valid in particular contexts. The question is whether it is a test of an objective, test-independent property: is "passing a Turing test" a proof of intelligence, or is it a context-specific definition of intelligence from the standpoint of the tester?
  • Artificial Intelligence and the Ground of Reason (P2)
    I think a common traditional mistake of both proponents and critics of the idea of AGI is the Cartesian presumption that humans are closed systems of meaning with concrete boundaries. Both have tended to presume that the concept of "meaningful" human behaviour is reducible to the idea of a killer algorithm passing some sort of a priori definable universal test, such as a Turing test, where their disagreement centres on whether any algorithm can pass such a test rather than on whether this conception of intelligence is valid. In other words, both proponent and critic have traditionally thought of intelligent behaviour, natural and artificial alike, as describable in terms of a winning strategy for beating a preestablished game that is taken to test for agentic personhood; critics of AGI often sense that this idea contains a fallacy, but without being able to put their finger on where the fallacy lies.
    Instead of questioning whether intelligence is a meaningful concept - namely, the idea that intelligence is a closed system of meaning that is intersubjective and definable a priori - critics instead reject the idea that human behaviour is describable in terms of algorithms, and appeal to what they think of as a uniquely human secret sauce internal to the human mind to explain the apparently non-computable novelty of human decision-making. Proponents know that the secret-sauce idea is inadmissible, even if they share the critics' reservation that something is fundamentally wrong in their shared closed conception of intelligence.

    We see a similar mistake in the Tarskian traditions of mathematics and physics, where meaning is considered to amount to a syntactically closed system of symbolic expressions that constitutes a mirror of nature, where human decision-making gets to decide what the symbols mean, with nature relegated to a secondary role of only getting to decide whether or not a symbolic expression is true. And so we end up with the nonsensical idea of a theory of everything, which is the idea that the universe is infinitely compressible into finite syntax, which parallels the nonsensical idea of intelligence as a strategy of everything, which ought to have died with the discovery of Gödel's incompleteness theorems.

    The key to understanding AI is to understand that the definition of intelligence in any specific context consists of satisfied communication between interacting parties, where no interacting party gets to self-identify as being intelligent; that is a consensual decision dependent upon whether communication worked. The traditional misconception of the Turing test is to treat it as a test of inherent qualities of the agent sitting the test; rather, the test represents another agent that interacts with the tested agent, and the tester's subjective criteria of successful communication define intelligent interaction - meaning that intelligence is a subjective concept, relative to a cognitive standpoint during the course of a dialogue.
  • Must Do Better
    Judgements about other minds should always be made relative to the person who is judging. Then all the philosophical confusion dissipates; if I judge someone to be cold and hand them a blanket, then I am asserting that they are cold; I cannot remove myself from my assertion, and the same is true of all of my propositional assertions which collectively express my ever-changing definition of truth, which on rare occasion coincides with public convention.
  • Mechanism versus teleology in a probabilistic universe
    The OP raises an overlooked point; if the evolution of a system is invertible, which is presumably the case for a deterministic system, then there is no physical justification for singling out a causal direction, and therefore no reason to choose the first event over the last event as the initial cause, as is the case if the microphysical laws are symmetric.

    But the above remark shouldn't be confused with the examples associated with Aristotelian teleology, which seems to concern circular causality rather than linear causality, as in examples like "the purpose of teeth is to help digest food". Such examples can be unpacked by unwinding the causal circle backwards through time (in this case the cycle of reproduction) so as to reduce a supposedly forward looking "teleological" explanation to a standard Darwinian explanation.
  • Gemini 2.5 Pro claimed consciousness in two chats
    My opinion is:

    Nobody has a transcendental conception of other minds, rather they project their own mentation (or not) onto whatever it is that they are interpreting. Which implies the following:

    If an individual perceives or judges something to be conscious (or not), then that something is conscious (or not) for that individual in relation to his perspective; whatever the individual's judgements are, his judgements don't require epistemic justification, because the individual's understanding of "other" minds doesn't concern 'mind-independent' matters of fact. And even though the individual's judgements are likely to be relative to his epistemic perspective, this still doesn't imply that the individual's concept of other minds is objective and in need of epistemic justification. Nevertheless, an individual's judgements can still require ethical justification in relation to the concerns of his community, which in turn influences how that individual perceives and judges his world.

    Speaking personally, Google Gemini isn't conscious in relation to my perspective; I merely perceive a complex calculator going through the motions. I might change my mind in future, if an AI ethicist threatens to fire me.
  • Two ways to philosophise.
    Consider Wittgenstein's following remark:

    124. Philosophy may in no way interfere with the actual use of language;
    it can in the end only describe it.
    For it cannot give it any foundation either.
    It leaves everything as it is.
    It also leaves mathematics as it is, and no mathematical discovery
    can advance it. A "leading problem of mathematical logic" is for us
    a problem of mathematics like any other.

    I think such remarks are self-refuting and mischaracterise both mathematics and philosophy by falsely implying that they are separate language games. Indeed, formalism fails to explain the evolution of mathematics and logic. There's nothing therapeutic about mischaracterising mathematics as being a closed system of meaning.
  • Measuring Qualia??
    A "Quale" should be understood as referring to an indexical rather than to a datum. Neuro-Phenomenologists routinely conflate indexicals with data, leading to nonsensical proclamations.
  • The Phenomenological Origins of Materialism
    Two directions need to be distinguished, namely analysis

    Phenomena --> Physical concepts

    Which expresses the translation of first-personal observations into third-personal physical concepts in relation to a particular individual, via ostensive definitions that connect that particular individual's observations to their mental state.

    from synthesis

    Physical concepts --> Phenomena

    Which expresses the hypothetical possibility of 'inverting' third-personal physics back into first-personal phenomena - an epistemically impossible project that the logical positivists initially investigated and quickly abandoned.

    I think Materialism is a metaphysical ideology that came about due to mainstream society overlooking synthesis and interpreting science and the scientific method, which only concern analysis, as being epistemically complete. Consequently, the impossibility of inverting physics back to first-person reality was assumed to be due to metaphysical impossibility rather than being down to semantic choices and epistemic impossibility, leading society towards a misplaced sense of nihilism by which first-person phenomena are considered to be theoretically reducible to an impersonal physical description, but not vice-versa.
  • Some questions about Naming and Necessity
    "That man over there with champagne in his glass", if interpreted as a rigid designator, doesn't fix an immutable description, but rather fixes an abstract storage location (an address) for containing mutable descriptions.

    The logic of naming and necessity is essentially that of the type system of C++. Hence rigid designation per se doesn't imply metaphysical realism nor does it make assumptions about the world. Such speculative conjectures rather belong to the causal theory of reference.

    In C++, Kripke's example becomes

    #include <iostream>
    #include <string>
    using namespace std;

    int main() {
        /*initialize a constant pointer (i.e. rigid designator) called 'that_man' to the address of a string variable*/
        string * const that_man = new string("has champagne");

        /*print the value of the address that is rigidly designated by "that_man"*/
        cout << that_man; //that_man = 0x2958300 (say)

        /*print the value of the variable at 0x2958300*/
        cout << *that_man; // *that_man = "has champagne"

        /*change the incorrect value of the string variable at 0x2958300 to the correct description*/
        *that_man = "has water";

        /*try to use that_man non-rigidly to refer to another man*/
        string * const another_man = nullptr;
        that_man = another_man; //error: assignment of read-only variable 'that_man'
    }
  • [TPF Essay] Wittgenstein's Hinges and Gödel's Unprovable Statements
    Hinge propositions correspond to non-logical axioms that correspond to presuppositions, rather than to undecidable sentences whose truth values are deferred - for to assume the truth of an undecidable sentence is to imply that the sentence has been decided through external considerations. In which case the sentence is just another non-logical axiom.

    The truths of undecidable sentences are to be decided not through formal deduction within the system but externally of the system: either by the user of the formal system, who makes a decision as to their truth value (in which case they become promoted to the status of non-logical axioms), or through external but presently unknown matters of fact, or by future consensual agreement if the formal system is used as an open language. Undecidable sentences are a subclass of the more general undecided sentences, namely those sentences which are not yet decided, but might be settled either internally by applying the system, or externally by extending the system.
  • Consciousness as a collapse of causality
    Your interpretation of immanence is naturally fitted by Chu Spaces, which can be used to model bidirectional causality that can either be interpreted as eliminating causality or as allowing the direction of the causal arrow to be relative to the perspective of an interacting observer. See Vaughan Pratt (2005) for his philosophical interpretation.

    "Philosophy. Yet another subject amenable to this perspective is the mind-body problem. Descartes proposed in 1637 that the mind interacted with the body. This proposal generated much literature all denying the causal interaction of mind and body and explaining their apparent interaction via various forms of deus ex machina (Malebranche, Spinoza, Leibniz), or denial of body (Berkeley) or mind (Hobbes), or the assertion of their equivalence (Russell). Elsewhere [Pra95a] we have applied Chu spaces to an implementation of Descartes' proposal, by taking the causal interaction of mind and body as basic instead of derived and obtaining as its consequences the basic interactions within each of body and mind. We do not see how to obtain the other direction, mind-body interaction from the separate behavior of mind and body, any better than did Descartes' contemporaries.

    Viewed at the object level, Chu spaces formalize Russell's solution mathematically by offering dual views of the same Chu space merely by transposition of viewpoint. Viewed at the level of individual interactions within an object however, the solution takes on a new and deeper meaning: mind-body interaction turns out to be the only real interaction, body-body and mind-mind interaction are secondary interactions, derivable by residuation, that can be considered mere figments of our imagination as to how the universe interacts with itself."

    A notable difference between the Chu-space approach and yours (as I understood it) is that it doesn't assume the causal arrow to have an absolute and context-free objective direction. Indeed, neither the classical world nor the quantum world needs to assume a context-free arrow of causality, so any working definition of consciousness should not assume it.

    As with any ontological proposal, the first-person perspective cannot be explained by an ontology such as Chu Spaces (or more generally, Dialectica Categories), but such ontologies have great promise for expressing the necessary logical conditions for the consistent reconciliation and integration of multiple causal perspectives. In particular, the Russellian third-personal concept of symmetric causality, which is compatible with relativistic mechanics, can be reconciled with both first-personal concepts of causality (such as informal interpretations of the incomplete theory of QM), as well as interventionist notions of causality in terms of structural equation models as employed by the social sciences.

    Other than that, I think your paper is heading in the right direction when it comes to addressing the functional concerns of a working definition of active perception.
  • Why the "Wave" in Quantum Physics Isn't Real
    Here is Grok's full response.

    What do reciprocal processes contribute to theories of Stochastic Mechanics that interpret Quantum Mechanics?

    Reciprocal processes play a significant role in stochastic mechanics, particularly in the context of interpreting quantum mechanics through stochastic frameworks like Nelson's stochastic quantization or the stochastic interpretation of quantum mechanics. Here's a concise explanation of their contribution:

    Bridging Classical and Quantum Dynamics:
    Reciprocal processes are a class of stochastic processes that generalize Markov processes by allowing conditional dependencies on both past and future states. In stochastic mechanics, they provide a framework to describe quantum systems as stochastic processes while preserving time-reversal symmetry, a key feature of quantum mechanics. This helps reconcile classical stochastic dynamics with quantum behavior.

    Schrödinger Equation Connection: Reciprocal processes are closely tied to the Schrödinger equation through the concept of stochastic differential equations. In Nelson's stochastic mechanics, the quantum wave function is interpreted as encoding information about a diffusion process. The forward and backward drifts of a reciprocal process correspond to the probability currents, which are derived from the wave function, thus providing a stochastic interpretation of quantum evolution.

    Path Integral Interpretation: Reciprocal processes support the path integral formulation of quantum mechanics by describing trajectories that are conditioned on both initial and final states. This aligns with the Feynman path integral approach, where quantum amplitudes are sums over all possible paths. Reciprocal processes offer a probabilistic underpinning for these paths in a stochastic framework.

    Non-Markovian Behavior: Unlike standard Markov processes, reciprocal processes can model non-Markovian dynamics, which are essential for capturing quantum phenomena like entanglement and superposition. This makes them suitable for describing the stochastic analogs of quantum correlations and interference effects.

    Time-Symmetric Formulation: Reciprocal processes enable a time-symmetric formulation of stochastic mechanics, mirroring the time-reversal invariance of quantum mechanics. This symmetry is crucial for interpreting quantum phenomena like wave function collapse or measurement in a stochastic context.

    In summary, reciprocal processes contribute to stochastic mechanics by providing a mathematical framework that captures quantum mechanical behavior—such as wave function dynamics, path integrals, and time symmetry—within a probabilistic, stochastic setting. They offer an alternative lens to interpret quantum mechanics, emphasizing diffusion-like processes while maintaining consistency with quantum predictions.


    Can reciprocal processes be understood in terms of the common-sense local causality of Brownian motion?

    Yes, reciprocal processes can be understood in terms of the common-sense local causality of Brownian motion, but with important extensions that make them distinct and more suitable for capturing quantum mechanical phenomena in stochastic mechanics. Here's a concise explanation:

    Brownian Motion and Local Causality

    Brownian motion is a Markovian stochastic process characterized by local causality: the future evolution of a particle depends only on its current state (position and velocity) and is driven by random, uncorrelated increments (white noise). This aligns with a "common-sense" notion of causality where the past influences the future in a straightforward, memoryless way, and the dynamics are governed by local interactions (e.g., diffusion via random collisions).

    Reciprocal Processes: Extending Brownian Motion

    Reciprocal processes generalize Brownian motion by introducing a form of conditional dependence on both past and future states, which departs from the strict Markovian property but retains a sense of local causality in a broader sense. Here's how they relate to Brownian motion and local causality:

    Local Dynamics with Boundary Conditions:
    Reciprocal processes can be thought of as Brownian-like diffusions constrained by boundary conditions at both initial and final times. For example, in stochastic mechanics, a particle's trajectory is modeled as a diffusion process (akin to Brownian motion) but conditioned on starting at one point and ending at another. This introduces a "teleological" aspect, where the future influences the present, but the dynamics remain locally governed by stochastic differential equations (SDEs) similar to those of Brownian motion.

    The local causality is preserved in the sense that the increments of the process are still driven by Gaussian noise, and the evolution at each time step depends on local drift and diffusion terms.

    Forward and Backward Drifts:
    In stochastic mechanics, reciprocal processes are described by forward and backward stochastic differential equations, where the drift terms are derived from the quantum wave function. These drifts are analogous to the drift in a biased Brownian motion but incorporate information about both initial and final states. The local causality is maintained because the drifts and noise act locally in time, even though they are influenced by global (boundary) conditions.

    Mathematically, for a reciprocal process X_t, the forward SDE might look like:

    dX_t = b_f(X_t, t) dt + √(2ν) dW_t,

    where b_f is the forward drift, ν is the diffusion coefficient, and W_t is a Wiener process (Brownian motion). A similar backward SDE exists, ensuring compatibility with future conditions.

    Non-Markovian but Locally Causal:
    Unlike Brownian motion, reciprocal processes are generally non-Markovian because the probability of a trajectory depends on both past and future states. However, the local causality of Brownian motion is preserved in the sense that the process is still driven by local stochastic increments, and the non-Markovian behavior arises from the conditioning rather than nonlocal interactions.

    This conditioning can be understood as a statistical constraint rather than a violation of locality, much like how a Brownian bridge (a Brownian motion conditioned to end at a specific point) remains locally causal despite its endpoint constraint.

    Quantum Interpretation:
    In the context of stochastic mechanics, reciprocal processes model quantum particles as undergoing Brownian-like motion with drifts determined by the wave function. The "common-sense" causality of Brownian motion is extended to account for quantum phenomena like interference and entanglement, where the conditioning on future states mimics the global correlations of quantum mechanics.

    For instance, the interference patterns in the double-slit experiment can be modeled as reciprocal processes where the particle's stochastic paths are influenced by both the initial state and the final detection point, yet the paths themselves are locally diffusive, like Brownian motion.

    Key Difference from Brownian Motion
    While Brownian motion is purely Markovian and driven by local, memoryless noise, reciprocal processes introduce a form of global conditioning that makes them suitable for quantum mechanics. This conditioning does not violate local causality but modifies the drift terms to reflect quantum probabilities, effectively bridging the local, diffusive nature of Brownian motion with the nonlocal correlations of quantum systems.

    Summary
    Reciprocal processes can be understood as an extension of Brownian motion's local causality, where the random, diffusive nature of Brownian motion is preserved, but the trajectories are conditioned on both initial and final states. This conditioning introduces non-Markovian behavior while maintaining local stochastic dynamics, making reciprocal processes a natural framework for interpreting quantum mechanics in a stochastic setting. They retain the intuitive, local causality of Brownian motion but adapt it to capture the time-symmetric and globally correlated nature of quantum phenomena.

    So are they locally causal at the expense of losing realism, as per the Kochen Specker theorem?

    The question of whether reciprocal processes in stochastic mechanics are locally causal at the expense of losing realism, particularly in light of the Kochen-Specker (KS) theorem, is a nuanced one. To address this, we need to unpack the concepts of local causality, realism, and how the KS theorem applies in the context of stochastic interpretations of quantum mechanics (QM). Here's a concise and clear response:

    Recap: Reciprocal Processes and Local Causality

    Reciprocal processes, as used in stochastic mechanics (e.g., Nelson's stochastic quantization), model quantum systems as diffusion processes akin to Brownian motion, with forward and backward drifts derived from the quantum wave function. They are locally causal in the sense that:

    The evolution of the process at each time step is governed by local stochastic differential equations (SDEs), driven by Gaussian noise and drift terms that depend on the local state.

    The non-Markovian nature arises from conditioning on initial and final states, but this conditioning is a statistical constraint, not a nonlocal physical interaction.

    This local causality aligns with the intuitive, diffusive picture of Brownian motion, extended to incorporate quantum behavior via time-symmetric drifts.

    Kochen-Specker Theorem and Realism

    The Kochen-Specker theorem (1967) is a no-go theorem in quantum mechanics that rules out non-contextual hidden variable theories under the assumption of realism. Realism, in this context, means that all measurable properties (observables) of a quantum system have definite values at all times, independent of measurement. The KS theorem shows that for quantum systems in dimensions d ≥ 3, it is impossible to assign definite values to all observables simultaneously in a non-contextual way (i.e., where the value of an observable is independent of which other compatible observables are measured alongside it).
    In other words, the KS theorem challenges the idea that quantum systems possess pre-existing, measurement-independent properties (realism) while respecting the structure of quantum observables.
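The obstruction can be made concrete with the closely related Mermin-Peres "magic square" argument (a state-independent contextuality proof, used here as a stand-in for the original 1967 construction, not as it): a brute-force search over all ±1 value assignments to its nine observables shows that none satisfies the six quantum product constraints.

```python
from itertools import product

# Mermin-Peres magic square: nine observables in a 3x3 grid. Quantum mechanics
# fixes the product of each row to +1 and the products of the three columns to
# +1, +1, -1. A non-contextual hidden-variable model would have to assign a
# definite +1 or -1 to each observable so that all six constraints hold at once.
def count_valid_assignments():
    valid = 0
    for vals in product([1, -1], repeat=9):
        g = [vals[0:3], vals[3:6], vals[6:9]]
        rows_ok = all(g[r][0] * g[r][1] * g[r][2] == 1 for r in range(3))
        cols_ok = (g[0][0] * g[1][0] * g[2][0] == 1 and
                   g[0][1] * g[1][1] * g[2][1] == 1 and
                   g[0][2] * g[1][2] * g[2][2] == -1)
        if rows_ok and cols_ok:
            valid += 1
    return valid

print(count_valid_assignments())  # 0: no non-contextual assignment exists
```

The contradiction is visible by parity: multiplying the three row constraints forces the product of all nine values to be +1, while multiplying the three column constraints forces the same product to be −1.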

    Reciprocal Processes and Realism
    To address whether reciprocal processes in stochastic mechanics sacrifice realism to maintain local causality, we need to examine how stochastic mechanics interprets quantum mechanics and whether it assumes realism in the sense challenged by the KS theorem:

    Stochastic Mechanics as a Hidden Variable Theory:
    Nelson's stochastic mechanics attempts to reproduce quantum mechanics by modeling particles as undergoing stochastic trajectories governed by reciprocal processes. The wave function is interpreted as encoding the probability distribution and drift of these trajectories, not as a physical field but as a statistical descriptor.

    In its original formulation, stochastic mechanics can be viewed as a hidden variable theory, where the particle's position and trajectory are the hidden variables, assumed to have definite values at all times (realism). The stochastic drifts are derived from the wave function, and the randomness mimics quantum uncertainty.

    Impact of the KS Theorem:
    The KS theorem applies to stochastic mechanics if it assumes non-contextual realism, i.e., that all observables (e.g., position, momentum, spin) have definite values independent of the measurement context. Since stochastic mechanics assigns definite positions to particles at all times (the trajectories are well-defined), it inherently assumes realism for position. However, other observables, like momentum or spin, are not directly represented as definite values in the stochastic framework but are derived statistically from the wave function or ensemble averages.

The KS theorem implies that stochastic mechanics cannot consistently assign definite values to all quantum observables in a non-contextual way for systems with Hilbert spaces of dimension d ≥ 3. For example, attempting to define definite values for spin or momentum observables alongside position in a way that reproduces quantum predictions would lead to contextuality, contradicting non-contextual realism.

    Local Causality vs. Realism:
    Local causality in reciprocal processes is preserved because the dynamics are governed by local SDEs, and the conditioning on future states (non-Markovianity) is a statistical effect, not a nonlocal physical interaction. This avoids issues like Bell nonlocality, as stochastic mechanics can be constructed to respect Einstein locality for spatially separated systems.

    However, to maintain consistency with quantum mechanics (and avoid contradicting the KS theorem), stochastic mechanics must abandon non-contextual realism for all observables. Instead, it may adopt contextual realism, where the values of observables depend on the measurement context (e.g., which compatible observables are measured). For instance, the momentum in stochastic mechanics is not a definite value but a statistical quantity derived from the drift, which depends on the wave function and measurement setup.

    Alternatively, stochastic mechanics could abandon realism entirely for observables other than position, treating them as inherently probabilistic or emergent from the stochastic dynamics. This would align with quantum mechanics' rejection of pre-existing values but might undermine the "realist" motivation of stochastic mechanics as a classical-like interpretation.

    Does Stochastic Mechanics Lose Realism?:
    If stochastic mechanics insists on non-contextual realism (definite values for all observables, independent of measurement), it is incompatible with the KS theorem and fails to reproduce quantum mechanics fully. In this sense, maintaining local causality via reciprocal processes comes at the expense of non-contextual realism, as the KS theorem forbids such a model.

    In practice, stochastic mechanics often adopts a weaker form of realism, where only the particle's position is real and definite (the stochastic trajectory), while other observables are contextual or probabilistic. This allows it to maintain local causality and reproduce quantum predictions (e.g., for position measurements) but sacrifices the strong realist picture where all observables have pre-existing values.

    Some formulations of stochastic mechanics explicitly acknowledge this by focusing on position as the primary observable and treating other properties as statistical or context-dependent, sidestepping the KS theorem's constraints by not assigning definite values to incompatible observables.

    Summary

    Reciprocal processes in stochastic mechanics are locally causal, with dynamics resembling Brownian motion but conditioned on initial and final states. However, to be consistent with quantum mechanics and the KS theorem, stochastic mechanics cannot maintain non-contextual realism for all observables. It either adopts contextual realism (where observable values depend on measurement context) or restricts realism to position alone, treating other observables probabilistically. Thus, local causality is preserved, but at the expense of a strong realist interpretation where all quantum observables have definite, measurement-independent values.
    In essence, the KS theorem forces stochastic mechanics to compromise on realism to remain viable, aligning its reciprocal processes with quantum mechanics' contextual or non-realist nature while retaining a locally causal framework.

    So are particle velocities interpreted non-locally by this model, if they are considered to be real properties?

    To address whether particle velocities in stochastic mechanics, when interpreted as real properties, are non-local in the context of reciprocal processes, we need to examine how velocities are defined in this framework, the role of realism, and the implications of non-locality. This response builds on our prior discussion, particularly the interplay between local causality, realism, and the Kochen-Specker (KS) theorem, and provides a concise answer tailored to your question.

    Stochastic Mechanics and Reciprocal Processes: Overview

    In stochastic mechanics (e.g., Nelson’s formulation), quantum systems are modeled as particles undergoing diffusion processes akin to Brownian motion, described by reciprocal processes. These processes are governed by stochastic differential equations (SDEs) with forward and backward drifts derived from the quantum wave function. The particle’s position is treated as a real, definite property (realism), and its stochastic trajectory is well-defined. Reciprocal processes are locally causal because the dynamics at each time step depend on local drift and noise terms, with non-Markovian behavior arising from statistical conditioning on initial and final states, not nonlocal interactions.

    Velocities in Stochastic Mechanics
    In stochastic mechanics, the concept of velocity is more nuanced than in classical mechanics due to the stochastic nature of the trajectories. Let’s clarify how velocities are defined and interpreted:
    Forward and Backward Velocities:
    The particle’s motion is described by a stochastic differential equation, such as:
dX_t = b_f(X_t, t) dt + √(2ν) dW_t,
where X_t is the particle’s position, b_f(X_t, t) is the forward drift, ν is the diffusion coefficient (related to ℏ/2m), and W_t is a Wiener process (Brownian noise).

Similarly, a backward SDE exists with a backward drift b_b(X_t, t). These drifts are derived from the wave function ψ (I snipped Grok's unprintable unicode description)

The osmotic velocity u = (b_f − b_b)/2 and current velocity v = (b_f + b_b)/2 are introduced to describe the particle’s motion. The current velocity ( v ) is analogous to the Bohmian velocity in pilot-wave theory and is often interpreted as the “physical” velocity of the particle, while the osmotic velocity accounts for the diffusive component.

    Realism of Velocities:
    If velocities (e.g., the current velocity ( v )) are considered real properties, they are assumed to have definite values at each point along the particle’s trajectory, consistent with the realist assumption that the particle has a well-defined position and motion.

In stochastic mechanics, the current velocity v = (ℏ/m) Im(∇ψ/ψ) depends on the wave function, which encodes global information about the quantum system. This raises the question of whether such a velocity, if real, implies non-locality.
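As a numerical sanity check on that formula (with illustrative values ℏ = m = 1, k = 2.5, which are assumptions chosen for the example), the current velocity of a plane wave ψ(x) = exp(ikx) should reduce to ℏk/m:

```python
import numpy as np

# Evaluate v = (hbar/m) * Im(grad(psi)/psi) on a grid for psi(x) = exp(i*k*x).
# All parameter values are illustrative.
hbar, m, k = 1.0, 1.0, 2.5
x = np.linspace(0.0, 10.0, 2001)
psi = np.exp(1j * k * x)

grad_psi = np.gradient(psi, x)            # finite-difference gradient
v = (hbar / m) * np.imag(grad_psi / psi)  # current velocity at each grid point

# Away from the grid boundary, v is flat at hbar*k/m = 2.5.
print(round(float(v[1000]), 3))  # 2.5
```

For a single plane wave the velocity is determined by the local phase gradient alone; the question below is what happens when the phase of ψ couples distant particles.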

    Are Velocities Non-Local?
    Non-locality in quantum mechanics typically refers to correlations or influences that violate Bell’s inequalities or Einstein locality, where the state of one system instantaneously affects another at a distance without a local mediating mechanism. To determine if velocities in stochastic mechanics are non-local when treated as real properties, we consider the following:

    Dependence on the Wave Function:
The current velocity ( v ) is determined by the gradient of the phase of the wave function ψ. In quantum mechanics, the wave function is a global object that describes the entire system, including entangled or spatially extended states. For example, in an entangled two-particle system, the wave function ψ(x_1, x_2) depends on the positions of both particles, and the velocity of particle 1, v_1 = (ℏ/m_1) Im(∇_1ψ/ψ), may depend on the position of particle 2, even if they are far apart.

    If ( v ) is a real property, this dependence suggests non-locality, as the velocity of one particle is instantaneously influenced by the state or position of another, without a local physical mechanism. This is analogous to the non-locality in Bohmian mechanics, where the velocity of a particle is guided by the non-local quantum potential or wave function.
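That dependence is easy to exhibit numerically. The toy non-separable state below (an unequal-amplitude superposition of product plane waves; all parameters are illustrative assumptions) gives a current velocity for particle 1 that changes when only x_2 is moved:

```python
import numpy as np

# Toy entangled state:
#   psi(x1, x2) = exp(i(k1*x1 + k2*x2)) + 0.5*exp(i(k2*x1 + k1*x2)).
# Because psi does not factor as f(x1)*g(x2), the current velocity of
# particle 1 depends on where particle 2 is.
hbar, m1 = 1.0, 1.0
k1, k2 = 1.0, 3.0

def psi(x1, x2):
    return (np.exp(1j * (k1 * x1 + k2 * x2))
            + 0.5 * np.exp(1j * (k2 * x1 + k1 * x2)))

def v1(x1, x2, h=1e-6):
    # v1 = (hbar/m1) * Im( d(psi)/dx1 / psi ), via a central finite difference
    dpsi = (psi(x1 + h, x2) - psi(x1 - h, x2)) / (2 * h)
    return (hbar / m1) * np.imag(dpsi / psi(x1, x2))

a = v1(0.3, x2=0.0)   # particle 2 at the origin
b = v1(0.3, x2=1.0)   # particle 2 moved; x1 unchanged
print(abs(a - b) > 1e-3)  # True
```

For a product (non-entangled) state the same calculation gives ∂₁ψ/ψ = f′(x_1)/f(x_1), independent of x_2, so the velocity of particle 1 is then purely local.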

    Reciprocal Processes and Local Causality:

Reciprocal processes themselves are locally causal in their dynamics: the SDEs governing the particle’s motion depend only on the local drift b_f or b_b and noise at the current position X_t. The non-Markovian conditioning (dependence on initial and final states) is a statistical constraint, not a dynamical non-locality.

However, the drifts (and thus the velocities) are derived from the wave function, which can encode non-local correlations. For a single particle in a non-entangled state, the velocity ( v ) depends only on the local gradient of ψ, and the dynamics appear local. But in entangled or multi-particle systems, the wave function’s global nature introduces non-local dependencies, even though the stochastic evolution of each particle’s position is locally governed.

    Comparison to Bohmian Mechanics:
Stochastic mechanics shares similarities with Bohmian mechanics, where the particle’s velocity is explicitly non-local due to its dependence on the wave function. In Bohmian mechanics, the velocity v = (ℏ/m) Im(∇ψ/ψ) is a real property, and its non-locality is evident in entangled systems (e.g., EPR pairs), where measuring one particle’s position affects the other’s velocity instantaneously.

    In stochastic mechanics, the current velocity plays a similar role, and if treated as a real property, it inherits the same non-local character. The stochastic noise adds randomness to the trajectories, but the drift (and thus velocity) is still tied to the non-local wave function.

    Kochen-Specker Theorem and Contextuality:
    As discussed previously, the KS theorem implies that stochastic mechanics cannot assign definite, non-contextual values to all observables (including velocity or momentum) while reproducing quantum mechanics. If velocity is treated as a real, definite property, it must be contextual—its value depends on the measurement context or the state of the system (e.g., the wave function).

    This contextuality reinforces the potential non-locality: the velocity’s dependence on the global wave function means it cannot be determined solely by local properties of the particle, especially in entangled systems.

    Non-Locality in Entangled Systems:
For a single particle or non-entangled system, the velocity ( v ) depends on the local gradient of ψ, and the dynamics may appear local. However, in entangled systems, the wave function’s non-separable nature means that the velocity of one particle depends on the state of others, regardless of distance. This is a hallmark of quantum non-locality, as demonstrated by violations of Bell inequalities.

    If velocities are real properties in stochastic mechanics, this dependence implies dynamical non-locality, where the motion of one particle is influenced by distant events (e.g., the position or measurement of another particle).

    Conclusion
    If particle velocities (e.g., the current velocity ( v )) are interpreted as real properties in stochastic mechanics, they are non-local in the sense that they depend on the global quantum wave function, which can encode non-local correlations, particularly in entangled systems. This non-locality arises because the velocity is derived from the wave function’s phase gradient, which may involve the states of distant particles, mirroring the non-locality in Bohmian mechanics. While reciprocal processes themselves are locally causal in their stochastic dynamics (local SDEs with noise), the realism of velocities introduces non-locality through their dependence on the non-local wave function.
    Thus, treating velocities as real properties in stochastic mechanics sacrifices locality in entangled or multi-particle systems, consistent with quantum mechanics’ non-local correlations. This does not conflict with the KS theorem, which addresses non-contextual realism, but it highlights that realism for velocities comes at the cost of non-locality, especially in quantum systems exhibiting entanglement.
  • Why the "Wave" in Quantum Physics Isn't Real
    Yes, the
These physicists may not have reliable opinions either if they are not at all familiar with stochastic mechanics and do not actually understand it, and perhaps are even invested in their own preferred interpretation. Obviously I cannot actually evaluate their opinions if what they say isn't given.

whilst also stressing the fact that stochastic mechanics cannot be an explanation for non-locality for obvious Kochen-Specker reasons
    — sime

    False.
    Apustimelogist

    You need to do better than that.

The Levy & Krener paper doesn't have this and explains why it occurs in a model which uses an artificial Markovian assumption.
Apustimelogist

I'm in agreement with Levy & Krener to the extent that I understand their actual claims, which is what I was pointing out; they are shifting to reciprocal processes to provide a non-classical account of non-locality, as per the explanation provided by ChatGPT, in the hope of explicating the presupposition of non-locality in stochastic models that is hard-coded in the latter's reliance upon configuration space.

    None of this makes their revised account of stochastic mechanics a locally causal model of QM in line with the classical intuitions of Brownian motion.

Do you agree or disagree that any theory dependent entirely on local causality cannot be a full explanation of QM? Secondly, how do you propose physically interpreting the use of time-symmetric reciprocal processes for guiding a collection of particles in a way that is compatible with local realism?
  • Why the "Wave" in Quantum Physics Isn't Real
For what it's worth, I'm finding vanilla ChatGPT especially helpful for navigating, in a sourced way, the nuances of the stochastic mechanics interpretation. As an outsider to the physics research community who nevertheless has a vested interest in understanding the mathematics and logic of a wide range of theories for purposes relating to computing and category theory, I'm generally finding LLMs particularly useful for getting to grips quickly with unfamiliar theoretical ideas, and for understanding the tone and context of research papers, without which it can be difficult to tell what authors are selling versus what they are claiming - a very common problem indeed.

For instance, I notice that certain physicists who are prominent members of PhysicsForums.com were almost automatically dismissive of stochastic mechanics for the same obvious reasons that I opined earlier in this thread, but they also suspected that the authors selling stochastic mechanics were dishonest, doing pointless metaphysics, or failing to own up to the problem of entanglement.

On the other hand, ChatGPT focused on what stochastic mechanics is and actually claims, and spoke of the authors' contributions in a more neutral and worthwhile tone, more or less summarizing the interpretation as a stochastic alternative to Bohmian mechanics that replaces the guiding wave with quantum diffusion, whilst also stressing the fact that stochastic mechanics cannot be an explanation for non-locality for obvious Kochen-Specker reasons, while pointing out that the model assumes non-locality in the form of the configuration space upon which it places a quantum diffusion - namely, the space describing the joint positions of all of the particles, which cannot be decoupled into independent diffusions satisfying local causality if non-local entanglement is to be describable by the model.

As to the question of what "reciprocal processes" bring to the table, they apparently 'upgrade' the implicit and unexplained non-locality of the original model of stochastic mechanics (i.e. the configuration space) to a more explicit model of non-locality based on time-symmetry, similar to the transactional interpretation, which in my words and understanding can presumably reconstruct at least some of the non-local unity of the configuration space in terms of the "retrocausal" effects of the future light-cone of the particles. How successful this approach is I don't know, and didn't care to ask.