(I know you know what floating points are, but many readers will not)
They took a classical system, the three body problem but with black holes, and evolved it forward in time from t = 0 to some later time T. They then took the positions of the objects in the system at T and perturbed them by the Planck length. They then ran the system backwards in time, using the same equations, from the perturbed state at T. They looked at how often the backwards run failed to retrace the forward trajectory, and found this happened about 5% of the time.
What this says is, quoting the article:
"The movement of the three black holes can be so enormously chaotic that something as small as the Planck length will influence the movements," Boekholt said. "The disturbances the size of the Planck length have an exponential effect and break the time symmetry."
The original paper describes its methodology as:
The main idea of our experiment is the following. Each triple system has a certain escape time, which is the time it takes for the triple to break up into a permanent and unbound binary-single configuration. Given a numerical accuracy, epsilon, there is also a tracking time, which is the time that the numerical solution is still close to the physical trajectory that is connected to the initial condition. If the tracking time is shorter than the escape time, then the numerical solution has diverged from the physical solution, and as a consequence, it has become time irreversible. Only the systems with the smallest amplification factors will pass the reversibility test. However, by systematically increasing the numerical accuracy (decreasing epsilon), we aim to increase the tracking time of each system. An increasing fraction of systems will obtain a tracking time exceeding its escape time, thus gradually decreasing the fraction of irreversible solutions.
Key points:
(1) There's a numerical accuracy parameter. It arises because computers don't store numbers like pi exactly, the way we can write down symbols for them on paper; they store floating point approximations, and there's always some error. For example, if you can only keep 3 decimal places, the number 0.00099999 gets stored as 0.001 (or as 0, if you truncate rather than round): the tail is simply lost. Computers only have so many bits to represent a number with.
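To make that concrete, here's a quick Python illustration (nothing to do with the paper's code, just the generic phenomenon):

```python
import numpy as np

# A 32-bit float carries roughly 7 significant decimal digits.
x = np.float32(1.0)
tiny = np.float32(1e-8)                 # smaller than float32 can resolve next to 1.0

print(x + tiny == x)                    # True: the perturbation is simply swallowed
print(np.float64(1.0) + 1e-8 == 1.0)    # False: 64-bit floats can still see it

# The classic example: neither 0.1 nor 0.2 is exactly representable in binary.
print(0.1 + 0.2 == 0.3)                 # False
print(f"{0.1 + 0.2:.20f}")              # 0.30000000000000004441
```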
(2) The numerical accuracy parameter can be varied within the simulation. It encodes how precisely all the variables in the system are represented. If the numerical accuracy parameter for giving directions to your neighbour were 100 km, you could give them "accurate" directions within that error threshold just by telling them which city you live in.
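As a toy sketch of what "varying the accuracy" means (my own illustration, not the paper's method: they integrate the actual equations of motion, here I just iterate a standard chaotic map), Python's decimal module lets you set the working precision directly:

```python
from decimal import Decimal, getcontext

def logistic_orbit(steps=100, digits=16):
    """Iterate the chaotic logistic map x -> r*x*(1-x), carrying `digits`
    significant figures - a stand-in for the accuracy parameter epsilon."""
    getcontext().prec = digits
    x, r = Decimal("0.1"), Decimal("3.9")
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Same equation, same initial condition - only the working precision differs.
print(logistic_orbit(digits=16))   # coarse "epsilon"
print(logistic_orbit(digits=50))   # fine "epsilon"
# After ~100 iterations the two orbits typically disagree even in the first digit:
# chaos exponentially amplifies whatever error the finite precision introduces.
```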
(3) A "solution" of these equations defining the three body system is a numerical solution; all the calculus and numbers are represented by computer approximations using these error prone floating points and functions taking floating points and spitting out floating points. (The scarequotes are not to say this is an illegitimate procedure, the scarequotes are to distinguish a numerical solution from an analytic - pen and paper - one, the analytic one is exact).
They highlight this distinction in the discussion:
In the limit of infinite accuracy (epsilon → 0) we retrieve the microscopic time-reversibility of Newton’s equations of motion. In the presence of perturbations of size epsilon, whether numerical or physical, a fraction of systems becomes irreversible...
Super take home message: the laws as you'd write them down on paper are time reversible.
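You can check that reversibility directly on a system that isn't chaotic. A minimal sketch (mine, not the paper's setup): integrate a Kepler orbit with a time-symmetric leapfrog scheme, flip the velocity at the end, integrate the same number of steps again, and you land back on the initial state up to floating-point roundoff:

```python
import numpy as np

def accel(r):
    """Newtonian acceleration toward a unit mass fixed at the origin (G = M = 1)."""
    return -r / np.linalg.norm(r) ** 3

def leapfrog(r, v, dt, steps):
    """Velocity-Verlet / leapfrog: a time-symmetric integrator."""
    a = accel(r)
    for _ in range(steps):
        v = v + 0.5 * dt * a
        r = r + dt * v
        a = accel(r)
        v = v + 0.5 * dt * a
    return r, v

r0, v0 = np.array([1.0, 0.0]), np.array([0.0, 1.2])   # a bound, mildly eccentric orbit

r1, v1 = leapfrog(r0, v0, dt=1e-3, steps=20_000)      # forward in time
r2, v2 = leapfrog(r1, -v1, dt=1e-3, steps=20_000)     # flip velocity, run it "backward"

# Both differences are roundoff-sized: the dynamics retraces its own steps.
print(np.linalg.norm(r2 - r0), np.linalg.norm(-v2 - v0))
```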
(4) (1-3) are just maths; the next bit is physics. Something physically interesting would be if the accuracy of the numerical approximations sometimes broke down badly at a physical length scale, like, say, the diameter of an atom, or the Planck length.
(5) How to establish this? When they run the equations backwards from the end point - using the end point as a new start point - with a perturbation of the order of 10^-74 applied to the variables relevant to the system's motion (position, velocity, and acceleration at a time point, it looks like), they get a result which is statistically indistinguishable from the unperturbed one. There are only negligible differences, "micro differences" as the paper puts it: for every forward run, there is a collection of backward runs which are negligibly different from it.
(6) What if we put in an error threshold of the Planck length: run the system forward, perturb its state by the Planck length (approx 10^-32), and count what proportion of the time the system outputs a backward trajectory which is statistically distinguishable from the one we get by exactly retracing the steps of the forward run? Turns out this is 5%.
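Here is a toy version of that test (my sketch, not the paper's code; I use a fixed-timestep, softened integrator and made-up tolerances, where the paper systematically controls the numerical accuracy on real black-hole triples): run a random three body system forward, nudge the end state by a tiny delta, run it backward for the same duration, and count how often it fails to land back near the starting state.

```python
import numpy as np

def accel(r):
    """Pairwise Newtonian accelerations for three equal masses (G = m = 1),
    softened so the toy survives close encounters."""
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                a[i] += d / (d @ d + 1e-3) ** 1.5
    return a

def evolve(r, v, dt, steps):
    """Time-symmetric leapfrog integration."""
    a = accel(r)
    for _ in range(steps):
        v = v + 0.5 * dt * a
        r = r + dt * v
        a = accel(r)
        v = v + 0.5 * dt * a
    return r, v

rng = np.random.default_rng(0)
delta, dt, steps, tol = 1e-10, 1e-3, 20_000, 1e-3   # toy stand-ins, not the paper's values
trials, irreversible = 10, 0
for _ in range(trials):
    r0 = rng.uniform(-1, 1, (3, 2))                 # random positions in a box
    v0 = np.zeros((3, 2))                           # start from rest
    r1, v1 = evolve(r0, v0, dt, steps)              # forward in time
    r1 = r1 + delta * rng.standard_normal(r1.shape) # perturb the end state
    r2, _ = evolve(r1, -v1, dt, steps)              # flip velocities, run backward
    if np.linalg.norm(r2 - r0) > tol:               # did it fail to retrace itself?
        irreversible += 1
print(f"irreversible fraction: {irreversible}/{trials}")
```

Varying the size of the perturbation, or the accuracy of the integration, and re-measuring that fraction is the kind of curve the methodology quote above is describing.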
The paper does not discuss any QM effects, or whether there is anything physically meaningful about this result; the Planck-length result is merely suggestive of something significant without specifying what. As some speculation about what that might be: the paper shows that even if you represent a very chaotic system's dynamics exactly, any underlying "uncertainty" or "fluctuation" in the exact state variables will pull trajectories apart - and this turns out to happen 5% of the time for this three body system at the scale of the Planck length, which every material thing is presumably bigger than.
The paper's methodological take-home message seems to be more about measuring how chaotic systems are, numerically, using this percentage of irreversible trajectories as a function of the error threshold.
But fewer people would care about the paper if it didn't suggest (with plausible deniability in that typical academic way) that it has something to say about time irreversibility of physical/natural trajectories as opposed to time irreversibility of numerical algorithms representing them.
Edit: something this post didn't cover, and may wrongly suggest, is that the weird irreversibility can be blamed on the researchers' implementation/code. It can't. The time irreversibility is framed as a feature of the system (under the numerical algorithms), not a bug. The whole graph representing "what proportion of simulations turn out to be time irreversible at this error threshold" is a system property; the system (or the equations defining it) imbues the numerical approximations with properties like that.