Denis / All
As I have mentioned on my thread in the Philosophy of Science forum, probabilities are an acknowledgment that no cause is known for a range of outcomes, and therefore their function is to provide a description in the absence of an explanation.
In providing that description, I agree with tim wood that
Probability is a measure of information about a system. — tim wood
Determinist theory, and the maths underpinning the traditional Laws of Physics & Chemistry (as opposed to the principles emerging from QM), say that for every action from a precise start point there can be only one inevitable outcome. If you were to accept that premise for all aspects of existence, then everything in existence would be truly inevitable and we would all be acting out a fixed script.
The trouble is that this theory contradicts our living experience, and requires us to effectively deny both the reality of our lives and the experiences of every person who has ever lived, simply to uphold a doctrine which may or may not be true. That is a very big ask.
However, determinists will point out that there is a difference between our limited knowledge (and therefore our limited ability to predict outcomes) and the underlying inevitability of existence. That is an important distinction, but it doesn't rule out the possibility of truly random events instead of hidden causes/variables.
The Laws of Physics demand one outcome - not multiple outcomes from a given start point (which would be randomness) - because that single outcome is the basis of their explanations. So when multiple outcomes arise, there is either a missing factor or a lack of determinism.
In trying to resolve the unknown factors behind the differences in outcome, people may assume that the gap simply reflects known factors that are not being monitored. But that is an assumption, and where no factor can logically be applied to resolve a scenario, such a presumption seems to be at odds with the evidence. It is perfectly valid to look for missing factors, but it is also valid to consider that there may be genuinely non-determinist (spontaneous or random) factors in existence.
The narrower the range of potential outcomes, the more likely it would seem that a hidden, as yet unidentified mechanism is at work - but conversely, a very broad range of outcomes without any discernible pattern might also be evidence for a lack of determinism.
As has been pointed out, probabilities, like odds in a lottery, provide a generalised description rather than a firm prediction: the most extreme and unusual outcome could occur next time around. But the converse is also true: probabilities indicate likelihood, so if the statistical model were originally correct, the chances of the same extreme outcome occurring a second or third time would be extremely remote.
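To make that point concrete, here is a minimal sketch of how fast the odds of an extreme outcome repeating collapse under a model of independent draws. (The 6-from-49 lottery format is purely an illustrative assumption, not something from the discussion above.)

```python
from fractions import Fraction
from math import comb

# Odds of one jackpot in an assumed 6-from-49 lottery:
# one winning combination out of comb(49, 6) possible tickets.
p_jackpot = Fraction(1, comb(49, 6))

# If the statistical model is correct and draws are independent,
# the same extreme outcome repeating multiplies the improbability.
p_twice = p_jackpot ** 2
p_thrice = p_jackpot ** 3

print(comb(49, 6))       # 13983816 possible tickets
print(float(p_jackpot))  # ~7.2e-08
print(float(p_twice))    # ~5.1e-15
print(float(p_thrice))   # ~3.7e-22
```

So while the model cannot forbid a second jackpot, it does say that a repeat is many orders of magnitude less likely than the already-extreme single event.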
That is the dilemma when trying to apply probabilities to the origin of the first living cell.
Multiple examples of every single protein would be needed for nature to experiment and evolve. When the odds of a single protein occurring by chance are 1 in 10^366 (ie. more options than there are atoms in the universe), then yes, the likelihood of a second example occurring by chance is pretty well nil. Yet life emerged.
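The magnitudes behind that claim can be sketched as follows. (The ~10^80 atom count is a commonly cited rough estimate for the observable universe, added here for comparison; the 10^366 figure is the one quoted above, and treating each occurrence as an independent trial is an assumption of the sketch.)

```python
from fractions import Fraction

# The 1-in-10^366 figure quoted above for a single protein arising by chance.
p_once = Fraction(1, 10**366)

# Two independent occurrences multiply the improbability: 1 in 10^732.
p_twice = p_once ** 2

# A commonly cited rough estimate of atoms in the observable universe.
atoms_in_universe = 10**80

print(10**366 > atoms_in_universe)      # True: far more options than atoms
print(p_twice == Fraction(1, 10**732))  # True
```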
In other words, the problems are not confined to randomness; there are also examples of co-ordination that break the scientific models. So I don't agree with fishfry that
We are hard pressed to give even a single example of an ontologically random event. Most people will fall back on quantum events. The low-order bit of the femtosecond timestamp of the next neutrino to hit your detector is random because QM says it is. — fishfry
The traditional main source of examples of a lack of determinism is thought and consciousness. But as we have seen, the origin of proteins, plus the growing number of examples from the activities of molecules within living cells, point to other equally challenging issues. For instance (as again discussed in the Philosophy of Science forum), certain molecules display characteristics of problem solving which defy the logic of the Laws of Physics that should apply to them.
In terms of randomness there are also many examples from other disciplines, such as cosmology and its broad theories of origin, including the Big Bang. The evidence of the accelerating expansion of the universe says that the 'Big Bang - Big Crunch' explanation must either have had a start point 13.7 billion years ago (representing spontaneity without cause), or have involved a change to a previously eternal sequence - a change that had to be either spontaneous or random, ie. non-deterministic. (I confess that I got that principle from Finipolscie's books.)
The problem with QM is that we can't see what's happening at that level of existence, and can only describe what is observed. The extensive use of probabilities in QM is a way of trying to bring the perceived randomness of the observations into the deterministic fold - but they still represent a description rather than an explanation, because they are basically an admission that we don't have a cause for the different outcomes.