Sorry for the very late reply. This will be a long post.
No, CFD (counterfactual definiteness) simply implies that physical quantities have definite values at all times. The de Broglie-Bohm interpretation is a perfect example of an interpretation which has CFD. MWI violates CFD because it assigns multiple values to hypothetical measurements, so it can be realistic and 'local' (although in a weird sense... after all, what is more nonlocal than a 'universal wavefunction' split at each measurement?). — boundless
Alright, fair enough!
So, in your view, if the particle configuration is definite at all times, how can you describe non-local correlations without a non-local dynamics/kinematics which involves some notion of simultaneity? — boundless
I had trouble formulating a reply to this. I don't have deep enough insight into these theories to make the strong statements I would like, nor are these theories and the surrounding literature as complete as one would want. I will just offer different perspectives which are incomplete, whether from lack of literature or my own ignorance.
Perspective 1:
The original stochastic mechanics by Nelson has an explicit non-locality issue: marginal probabilities of particles depend on velocity potentials related to other, spatially separated particles. I believe this is thought to be similar to the Bohmian issue.
In the non-locality section of his book Quantum Fluctuations, Nelson explicitly shows that in principle non-Markovian, as opposed to Markovian, diffusions resolve this issue (a pdf of the book can be found on the webpage below):
https://web.math.princeton.edu/~nelson/papers.html
And there is at least one variation of stochastic mechanics where non-Markovianity is used explicitly, and this eliminates the non-locality issue that was identified (the link below is a direct download of the pdf of the paper, 'Stochastic mechanics of reciprocal diffusions' by Krener and Levy):
https://math.ucdavis.edu/~krener/51-75/68.JMP96.pdf
Again, I don't have access to any real deeper insights into these theories and their further implications. All I know is that Nelson saw this non-locality problem and that it seems to be solvable in principle, specifically by dropping Markovianity.
I guess I might as well note that Nelsonian stochastic mechanics has two other major issues - incorrect multi-time correlations and something called the Wallstrom problem - but I think both can be regarded as more or less resolved, or resolvable, based on recent formulations and papers.
Perspective 2:
This is not stochastic mechanics but still a stochastic interpretation, based on mathematically demonstrating a very general correspondence between unitary quantum systems and indivisible stochastic ones:
https://arxiv.org/abs/2302.10778
https://arxiv.org/abs/2309.03085
In the following paper:
https://arxiv.org/abs/2402.16935
They argue their theory is causally local: analyzing it with Bayesian causal models, they find that measurements by different observers do not causally influence each other (Sections VII-VIII, near the end). Entangled stochastic systems do causally influence each other, but this is because their non-factorizable transition matrices have encoded their initial local interaction. It is just the nature of these systems that they will fail to factorize until a 'division event', because statistical information is encoded cumulatively in the transition matrix (in the words of the author). I have no idea how this perspective relates to the first, because they are simply different stochastic formulations of quantum mechanics. Perspective 2 is actually explicitly non-Markovian; but again, there is no explicit connection that I can see that would relate it to the issues in the first perspective, or vice versa.
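As a toy illustration of the kind of indivisibility involved (my own sketch, not taken from the papers, with a made-up Hamiltonian): build a transition matrix from a unitary as Gamma_ij(t) = |U_ij(t)|^2. It is a perfectly valid stochastic matrix at every time, yet the 0-to-2t matrix generally fails to factorize through the intermediate time t, so the process is indivisible rather than Markovian:

```python
import numpy as np

# Hypothetical 2-level Hamiltonian (Pauli-X); hbar = 1 for simplicity.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def unistochastic(t):
    """Transition matrix Gamma_ij(t) = |U_ij(t)|^2 with U(t) = exp(-iHt)."""
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return np.abs(U) ** 2

t = 0.4
G1 = unistochastic(t)        # transition matrix for 0 -> t
G2 = unistochastic(2 * t)    # transition matrix for 0 -> 2t

# Each column sums to 1: both are valid stochastic matrices.
print(np.allclose(G1.sum(axis=0), 1.0))   # True
print(np.allclose(G2.sum(axis=0), 1.0))   # True

# But the 0 -> 2t matrix is NOT the composition of two 0 -> t steps:
print(np.allclose(G2, G1 @ G1))           # False: indivisible / non-Markovian
```

The failure of `G2 == G1 @ G1` is the toy analogue of the statistical information being "encoded cumulatively" in the transition matrix rather than resetting at intermediate times.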
Perspective 3:
This is not specific to the stochastic interpretation but is an attempt to explain away non-local correlations in a way I find appealing. It has roots in various authors (e.g. Pitowsky, who will come up momentarily) but is perhaps best exemplified by the 1982 papers of the philosopher Arthur Fine:
https://scholar.google.co.uk/scholar?hl=en&as_sdt=0%2C5&q=arthur+fine+1982&btnG=
These establish the equivalence of Bell violations with the absence of a joint probability distribution returning the measured correlations as marginals. A recent generalization by Abramsky:
https://scholar.google.co.uk/scholar?cluster=12086196826892314859&hl=en&as_sdt=0,5
In the following paper, Abramsky discusses the contributions of Pitowsky:
https://scholar.google.co.uk/scholar?cluster=17313080888273101986&hl=en&as_sdt=0,5&as_vis=1
Pitowsky noticed that the Bell inequalities are actually a special case of Boole inequalities, which have roots in the work of George Boole in the 1800s:
Boole’s problem is simple: we are given rational numbers which indicate the relative frequencies of certain events. If no logical relations obtain among the events, then the only constraints imposed on these numbers are that they each be non-negative and less than one. If however, the events are logically interconnected, there are further equalities or inequalities that obtain among the numbers. The problem thus is to determine the numerical relations among frequencies, in terms of equalities and inequalities, which are induced by a set of logical relations among the events. The equalities and inequalities are called “conditions of possible experience”. — Pitowsky
For certain families of events the theory stipulates that they are commeasurable. This means that, in every state, the relative frequencies of all these events can be measured on one single sample. For such families of events, the rules of classical probability — Boole’s conditions in particular — are valid. Other families of events are not commeasurable, so their frequencies must be measured in more than one sample. The events in such families nevertheless exhibit logical relations (given, usually, in terms of algebraic relations among observables). But for some states, the probabilities assigned to the events violate one or more of Boole’s conditions associated with those logical relations.
A violation of Boole’s conditions of possible experience cannot be encountered when all the frequencies concerned have been measured on a single sample. Such a violation simply entails a logical contradiction; ‘observing’ it would be like ‘observing’ a round square. We expect Boole’s conditions to hold even when the frequencies are measured on distinct large random samples. But they are systematically violated, and there is no easy way out (see below). We thus live ‘on the edge of a logical contradiction’. An interpretation of quantum mechanics, an attempt to answer the WHY question, is thus an effort to save logic. — Pitowsky
The force of this perspective is basically that Bell-violating correlations may have a formal cause, not a physical one. The bizarre correlations could be formally entailed whenever certain statistical conditions are fulfilled, regardless of what system is being talked about. No information is actually being communicated across space between particles.
The question then becomes: what causes these joint probability distributions to be absent? According to Fine, non-commutativity.
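Fine's direction of the equivalence is easy to make concrete (a minimal sketch of my own): if a single joint distribution over the four observables exists, it is a mixture of the 16 deterministic assignments, each of which bounds the CHSH combination by 2; the quantum singlet correlations reach 2*sqrt(2), so no such joint distribution can exist for them.

```python
import itertools, math

# Enumerate all deterministic joint assignments of the four +/-1 observables
# (A0, A1 on one side; B0, B1 on the other). A joint probability distribution
# is a mixture of these 16 assignments, so its CHSH value is bounded by the max.
chsh_values = [
    a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
    for a0, a1, b0, b1 in itertools.product([-1, 1], repeat=4)
]
print(max(chsh_values))          # 2: the Bell/CHSH bound when a joint dist exists

# Quantum singlet correlations E(a, b) = -cos(a - b) at the standard angles:
a0, a1, b0, b1 = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
E = lambda x, y: -math.cos(x - y)
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(abs(S))                    # 2*sqrt(2) ~ 2.828: no joint distribution exists
```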
Now, many sources attest to the fact that non-commutativity and the associated uncertainty relations can be derived generically within stochastic systems, at least under certain conditions. In fact, this can be seen in the path integral formulation, where non-commutativity comes from the non-differentiability (due to stochasticity) of the paths. Normally people see these paths as computational tools (purely out of incredulity); in the stochastic interpretation they represent actual definite trajectories particles may take.
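A simple numerical illustration of how non-differentiability produces ordering sensitivity (my own sketch, not specific to any of the papers above): for a Brownian path, the discretized 'x dx' integral depends on whether x is evaluated at the start or the end of each increment, and the gap converges to the quadratic variation of the path; for smooth paths it would vanish. This is exactly the kind of ordering dependence that commutators encode.

```python
import random

random.seed(0)
T, n = 1.0, 200_000
dt = T / n

# Sample one Brownian path W_0, ..., W_n on [0, T].
w = [0.0]
for _ in range(n):
    w.append(w[-1] + random.gauss(0.0, dt ** 0.5))

# Discretize the integral of W dW two ways: evaluate W at the left endpoint
# of each increment (Ito convention) or at the right endpoint.
left  = sum(w[k]     * (w[k + 1] - w[k]) for k in range(n))
right = sum(w[k + 1] * (w[k + 1] - w[k]) for k in range(n))

# For differentiable paths the two would agree as dt -> 0; for Brownian paths
# the gap is the sum of squared increments, which converges to [W]_T = T.
print(right - left)   # close to T = 1.0
```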
Given that they are formally entailed, such correlations may occur in other areas with similar structures. In fact, it has been suggested that such non-local correlations are in principle possible in classical light: e.g.
https://arxiv.org/abs/2401.01615
Note that classical entanglement is well-established in classical optics, but it is usually only formulated in local "intrasystem" scenarios, as opposed to the non-local "intersystem" scenario proposed by the paper. Given that the setting is purely classical, the formal presence of non-commutativity, or of absent joint probabilities, may be sufficient to provide the central mechanism for Bell-violating correlations in that scenario or any other (e.g. in the social sciences, where they occur for probabilistic reasons, albeit less relevantly here since they have nothing to do with locality/nonlocality).
What's perhaps most interesting is that you don't need remarkably strange assumptions to get non-commutativity, or virtually all quantum predictions, out of stochastic systems.
For instance, the gist of the major Nelsonian stochastic mechanical assumptions is basically as follows: 1) particles follow paths given by Newton's law, perturbed randomly; 2) the diffusion is time-reversible, which can be derived in certain equilibrium contexts where an entropy over trajectories is maximized; and 3) the diffusion coefficient is inversely proportional to the particle mass. From these you can even reproduce the perfect spin (anti)correlations and Bell violations, as in the following dissertation and the paper published from it (the assumptions are listed in the dissertation):
https://scholar.google.co.uk/scholar?oi=bibs&hl=en&cluster=16239473886028239443
https://scholar.google.co.uk/scholar?cluster=15973777865898642687&hl=en&as_sdt=0,5
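For reference, the three assumptions can be written compactly in the standard notation of Nelson's stochastic mechanics (my own summary, so take the details with a grain of salt):

```latex
% Assumptions 1 & 3: random perturbation of the motion, with a
% diffusion coefficient inversely proportional to the mass
dx(t) = b(x(t), t)\,dt + dW(t), \qquad
\mathbb{E}[\,dW_i\, dW_j\,] = 2\nu\,\delta_{ij}\,dt, \qquad
\nu = \frac{\hbar}{2m}

% Assumption 2 (time-reversibility): forward and backward mean
% derivatives D, D_* enter symmetrically, and Newton's law holds
% for the mean acceleration
m\,a(t) = \frac{m}{2}\bigl(D D_* + D_* D\bigr)\, x(t) = -\nabla V(x(t))
```

From these ingredients Nelson recovers the Schrödinger equation, which is why the quantum predictions come out.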
Despite the fact that many would say it produces unphysical non-local correlations (obviously I have tried to argue, via Fine's theorem, that these may in some sense be a formal entailment that transcends physics), I think it's definitely relevant to ask why it is even possible for virtually all quantum predictions to be derived from such pedestrian assumptions in the first place. Why is it that indivisible stochastic systems with definite outcomes reproduce entanglement, decoherence and interference? It's kind of miraculous - if such non-locality should be impossible for particles with definite positions, why is this behavior even derivable?
Point 4:
My last point is about what you said regarding the simultaneity of relativity and preferred reference frames. My point would be that such issues are no reason to discount a stochastic interpretation, because these issues seem to be quite general. They occur in hydrodynamics, in relativistic Brownian motion, in thermodynamics. Markovian diffusions in general are known not to respect relativity and to exhibit superluminal propagation (mentioned in the second link below too). It seems that when you start talking about things like probability and randomness, their relation to relativity is just never straightforward, and so areas outside of quantum mechanics have been, or will be, grappling with this same kind of issue also: e.g.
https://scholar.google.co.uk/scholar?cluster=17685845957935258058&hl=en&as_sdt=0,5&as_ylo=2023
Relativistic fluid dynamics [1] is an important tool in the description of vastly different physical systems, such as the quark-gluon plasma formed in ultrarelativistic heavy-ion collisions [2] and accretion disks surrounding supermassive black holes [3]. Early models of relativistic hydrodynamics were constructed in the mid twentieth century by Eckart [4] and Landau and Lifshitz [5], but these were later found to possess unphysical behavior signaled by causality violation [6] and the fact that in such theories the global equilibrium state is not stable with respect to small disturbances in all Lorentz frames [7]. These issues are not inherent to the formulation of viscous fluids in relativity.
https://scholar.google.co.uk/scholar?cluster=16512488009491179103&hl=en&as_sdt=0,5
Before outlining our approach, a general remark might be in order. Usually, a diffusion theory intends to provide a simplified phenomenological description for the complex stochastic motion of a particle in a background medium (e.g., on a substrate [5, 30, 33, 34, 35] or in a heat bath [20]). Thus, there exists a preferred frame, corresponding to the rest frame of the substrate (or, more generally, the center-of-mass frame of the interaction sources causing the stochastic motion). It is therefore not expedient to look for Lorentz or Poincare invariant spatial diffusion processes (cf. Sec. 5 of Montesinos and Rovelli [39]). Accordingly, we focus here on discussing simple diffusion models that comply with the basic requirements of special relativity in the rest frame of the substrate.
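The Markovian superluminality is easy to see concretely (my own sketch, with made-up numbers): the heat kernel of an ordinary diffusion assigns nonzero probability outside any light cone for arbitrarily small times; in fact, at very short times almost all of the mass is outside it.

```python
import math

# Ordinary (Markovian) diffusion from a point source at x = 0 spreads as a
# Gaussian with variance 2*D*t. The probability of being found beyond
# |x| > c*t (outside the 'light cone', units with c = 1) is the Gaussian tail:
def mass_outside_cone(t, D=1.0, c=1.0):
    # P(|x| > c*t) = erfc( c*t / (2*sqrt(D*t)) )
    return math.erfc(c * t / (2.0 * math.sqrt(D * t)))

# Strictly positive for every t > 0, and approaching 1 as t -> 0:
# the diffusion propagates instantaneously.
for t in [1e-2, 1e-4, 1e-6]:
    print(t, mass_outside_cone(t))
```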
(a long time ago, I read of some versions of this interpretation which are Lorentz invariant. So, I guess that this kind of 'simultaneity' doesn't necessarily imply a rejection of special relativity. I don't remember however the details) — boundless
Hmm; to be brief, I feel the issue is very much up in the air and not simple. Skepticism isn't entirely unwarranted, imo. There certainly seem to be stochastic field theories that can reproduce relativistic predictions but apparently retain a preferred frame plus some of the Markovian superluminality.
Edit: just re-phrasing/clean-up; shouldn't change the content, but one additional point:
A paper with an interesting suggestion: if this kind of non-locality appears in classical optics, it should be compatible with Lorentz invariance:
https://scholar.google.co.uk/scholar?cluster=13776304742041840922&hl=en&as_sdt=0,5&as_vis=1
I can't comment on what the math says at all, but I am guessing the logic is that classical electrodynamics is already, in some sense, compatible with Lorentz invariance/covariance.