What are the following in your view?
1. Probability
2. Determinism
3. Non-determinism — TheMadFool
1. Probability expresses incomplete knowledge that we have about a system.
2. The exact same initial states in a deterministic system lead to the exact same outcome.
3. The exact same initial states in a non-deterministic system can lead to different outcomes.
What are they in your view?
Also read my previous two posts carefully, I think eventually it will click for you. I’m taking quite a lot of time to help you understand, so it would be fair if you took at least as much time to read and attempt to understand my posts. — leo
In your definition of non-determinism you concede that there is something you don't know, viz. the outcomes, and then you go on to say that probability is about incomplete knowledge. So it must follow that non-determinism is just probability. Or are you claiming that there's a difference that depends on what you're ignorant about - only the initial states or only the outcomes - so that probability would be a matter of ignorance regarding initial states, while non-determinism would be ignorance about outcomes despite having knowledge of the initial states?
If that's the case you're making then non-determinism can't be understood in any way because the outcomes will not exhibit any pattern whatsoever. In other words non-determinism is true randomness with every outcome having equal probability and that brings us to where we began - that non-determinism = probability. — TheMadFool
1.d)
If we know that the die is perfectly symmetrical, then combining that knowledge with our incomplete knowledge of the initial states and outcomes described in the previous paragraphs, we can conclude that 1/6th of the initial states lead to outcome ‘1’, 1/6th lead to outcome ‘2’, 1/6th lead to outcome ‘3’, and so on. This is the same as saying that each outcome has probability 1/6 of being realized, which is the definition of probability. This result isn’t obvious but it can be proven mathematically, offering us partial knowledge of the function f. — leo
Basically in non-deterministic systems there is irreducible probability even if you have complete knowledge of the system, whereas in deterministic systems the probabilities are only a sign of incomplete knowledge, and disappear when we have complete knowledge. — leo
So as you can see, it is not the case that in a non-deterministic system the outcomes will not exhibit any pattern whatsoever, it isn’t the case that a non-deterministic system is totally random. — leo
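A minimal sketch of the point in 1.d, in Python. The mapping from launch speed to face below is an invented stand-in, not leo's actual function f; it only illustrates how picking initial states deterministically, with no randomness anywhere, can still produce each face with frequency close to 1/6 once the map is sufficiently sensitive to the initial state.

```python
from collections import Counter

def face(speed):
    # Invented, purely illustrative map from an initial state (launch speed)
    # to a face: fully deterministic, but very sensitive to the input.
    return int(speed * 1000) % 6 + 1

# Enumerate initial states on a fine grid, chosen deterministically - no dice rolled.
states = [3.0 + k * 0.000137 for k in range(600000)]
counts = Counter(face(s) for s in states)

for f in range(1, 7):
    print(f, counts[f] / len(states))  # each relative frequency comes out near 1/6
```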
This is what I've been saying all along. Deterministic systems can behave probabilistically. — TheMadFool
1. In a deterministic system there's a well defined function that maps each initial state (I) to a unique outcome (O) like so: f(I) = O.
2. In a non-deterministic system there is no such function because there are more than one outcome e.g. initial state A could lead to outcomes x, y, z,...
You mentioned a "function" pf(I) = O but if memory serves a function can't have more than one output which is what's happening in non-deterministic systems according to you: one initial state and multiple outcomes. — TheMadFool
So, there's a difference between non-determinism and randomness but you have to admit that both can be described with mathematical probability. — TheMadFool
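A hedged Python sketch of the two definitions quoted above; the state names, outcomes and weights are invented for illustration. It also shows one possible reading of the "pf" mentioned above (a guess, not something confirmed in the thread): a map whose output is not a single outcome but a probability distribution over outcomes, which is a perfectly ordinary single-valued function.

```python
import random

# 1. Deterministic: each initial state is mapped to exactly one outcome.
def f(initial_state):
    # same input always gives the same output
    return sum(ord(c) for c in initial_state) % 6 + 1

# 2. Non-deterministic: the same initial state can lead to several outcomes.
#    The outcomes and weights below are purely hypothetical.
OUTCOMES = {"A": (["x", "y", "z"], [0.5, 0.3, 0.2])}

def evolve(initial_state):
    # not a function of the state alone: repeated calls can differ
    values, weights = OUTCOMES[initial_state]
    return random.choices(values, weights=weights)[0]

def pf(initial_state):
    # one possible reading of "pf": the state maps to a distribution over
    # outcomes, and that mapping is single-valued, hence a genuine function
    values, weights = OUTCOMES[initial_state]
    return dict(zip(values, weights))

print(f("A"), evolve("A"), pf("A"))
```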
No no this is where your confusion lies. What do you mean exactly by “behave probabilistically”? — leo
A die is deterministic and it behaves probabilistically. This probably needs further clarification because it looks like you're confused. — TheMadFool
A die is a deterministic system in that each initial state has one and only one outcome but if the initial states are random then the outcomes will be random. — TheMadFool
I explained carefully why saying that “the die behaves probabilistically” is at best meaningless and at worst a contradiction, and yet you’re saying I’m the one who is confused ... — leo
With your current understanding, you can’t explain why we can pick initial states deterministically and get outcomes with frequency 1/6 each. — leo
A variable has an event space, and that event space has a distribution. How you pick a value for the variable determines whether the variable is independent or dependent. An independent variable can be a random variable, and a dependent variable can depend on one or more random variables.
How we retrieve the values for the variable in an experiment (i.e. if it's a random variable or not) has no influence on the distribution of the event space of the variable, but it can introduce a bias into our results.
That the same variable with the same distribution can have its values computed or chosen at random in different mathematical contexts is no mystery. It's a question of methodology. — Dawnstorm
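A small Python sketch of Dawnstorm's distinction, under assumptions of my own: the event space {1,...,6} has a fixed uniform distribution regardless of how values are retrieved, but a biased retrieval procedure (the weights 3,1,1,1,1,1 are an arbitrary example) skews the observed frequencies without changing that underlying distribution.

```python
import random
from collections import Counter

faces = [1, 2, 3, 4, 5, 6]
true_dist = {face: 1 / 6 for face in faces}  # the distribution of the event space itself

fair = [random.choice(faces) for _ in range(60000)]  # unbiased retrieval
biased = [random.choices(faces, weights=[3, 1, 1, 1, 1, 1])[0]
          for _ in range(60000)]                     # biased retrieval (arbitrary bias)

print("event space distribution:", true_dist)
for name, sample in [("fair", fair), ("biased", biased)]:
    counts = Counter(sample)
    freqs = {face: counts[face] / len(sample) for face in faces}
    print(name, freqs)  # only the biased sample drifts away from true_dist
```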
we must resort to probability theory and it seems to work pretty well; too well in my opinion in that the die when thrown without knowledge of the initial states behaves in a way that matches theoretical probability. — TheMadFool
A. The usual way we throw the die - randomly - without knowing the initial state. The outcomes in this case would have a relative frequency that can be calculated in terms of the ratio between desired outcomes and total number of possible outcomes. It doesn't get more probabilistic than this does it? — TheMadFool
However, there's a major difference between A and B, to wit: the probabilities on a single throw of the die will be poles apart. In situation A, the probability of any outcome will be between 0 and 1 but will never be 1 or 100%, whereas in situation B every outcome will have a probability of 1 or 100%. — TheMadFool
In situation A the probability of one outcome is also 1 or 100% once the die is thrown; it is simply our incomplete knowledge that makes us say that any outcome is possible, but the outcome that is about to be realized is already determined. — leo
If you want we can focus on proving that, if you finally understand that this is the only way that we can make sense of what we observe, without invoking magic or randomness, without saying that our ignorance of the initial states somehow makes the die behave differently. — leo
When we have no knowledge of the initial states, the frequencies of the outcomes are often similar simply because we pick the initial states arbitrarily, and there are many more combinations of initial states where outcomes have a similar frequency, so we pick such combinations much more often. — leo
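A worked count behind this remark, assuming nothing beyond a comparison of outcome sequences: among all sequences of 60 throws, those with perfectly balanced face counts outnumber a heavily lopsided pattern by an enormous factor, which is one way to see why arbitrarily picked initial states land on near-uniform frequencies so often. The 60-throw figure and the particular lopsided split are only examples.

```python
from math import factorial

def multinomial(counts):
    # number of distinct outcome sequences with exactly these face counts
    total = factorial(sum(counts))
    for c in counts:
        total //= factorial(c)
    return total

balanced = multinomial([10, 10, 10, 10, 10, 10])   # 60 throws, every face 10 times
lopsided = multinomial([55, 1, 1, 1, 1, 1])        # 60 throws dominated by one face

print(balanced / lopsided)  # balanced sequences win by a factor on the order of 10^33
```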
Yes, I believe I wrote something to that effect in my reply to Harry Hindu but that was because I thought he claimed ignorance had some kind of a causal connection to randomness. Later in my discussions with him/her and you, I realized that ignorance of deterministic systems is not a cause of, but rather an occasion for, probability. I hope we're clear on that. — TheMadFool
Just as long as what we are clear on is that probabilities only exist in the system of your mind, not in the system of dice being rolled. Determinism exists in both systems. The idea of probabilities is a determined outcome of ignorant minds. — Harry Hindu
A. The usual way we throw the die - randomly - without knowing the initial state. The outcomes in this case would have a relative frequency that can be calculated in terms of the ratio between desired outcomes and total number of possible outcomes. It doesn't get more probabilistic than this does it?
B. If we have complete information about the die then we can deliberately select the initial states to produce outcomes that look exactly like A above with perfectly matching relative frequencies. — TheMadFool
The scenarios A and B in my previous post were meant to explain that deterministic systems can behave probabilistically, and I think they accomplished that purpose. — TheMadFool
A variable has an event space, and that event space has a distribution. — Dawnstorm
Bear in mind though that I don't mean deterministic systems are non-deterministic. I just mean that sometimes, as when we have incomplete knowledge, we can use probability on deterministic systems.
Considering we can use probability on non-deterministic systems too, it must follow that probability theory has within its scope both non-determinism and determinism, some part of which we're ignorant of. — TheMadFool
This is an obvious fact and doesn't contradict anything I've said so far. — TheMadFool
it would be wrong to expect that the observed frequencies will always match the theoretical probabilities we’ve come up with, it would be wrong to expect that if you throw the die a gazillion times you will always get 1/6 frequency for each outcome. — leo
It isn’t an obvious fact, it’s not easy to prove. — leo
That is correct. There's no contradiction. — Dawnstorm
The expected value for x, E(x) = P(x) * T = (3/6) * T, where P(x) is the theoretical probability of event x.
The law of large numbers states that (x1+x2+...+xn)/n will approach E(x) = P(x) * T
Note: my math may be a little off the mark. Kindly correct any errors — TheMadFool
What about the law of large numbers, which says exactly the opposite of what you're saying? The law of large numbers states that the average of the values of a variable will approach the expected value of that variable as the number of experiments becomes larger and larger. — TheMadFool
What could be more obvious than saying that if there are more ways for x to happen than for y, then x will happen more frequently, provided all outcomes are equally likely? — TheMadFool
You have a flawed understanding of expected value, it is 1*P(1)+2*P(2)+3*P(3)+4*P(4)+5*P(5)+6*P(6) = (1+2+3+4+5+6)*1/6 = 3.5 — leo
Are you in any way challenging the law of large numbers? People have come up with plenty of ‘laws’, are they always correct? — leo
A special form of the LLN (for a binary random variable) was first proved by Jacob Bernoulli.[7] It took him over 20 years to develop a sufficiently rigorous mathematical proof which was published in his Ars Conjectandi (The Art of Conjecturing) in 1713. — Wikipedia
Firstly, what’s not obvious is that there are more ways of x happening than y. — leo
That ‘law’ states that the average of outcomes will converge towards 3.5, not towards 1/6 times the number of trials (that wouldn’t make sense). — leo
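A quick Python check of the corrected statement, assuming only a fair six-sided die: the running average of the faces approaches the expected value 3.5, while the relative frequency of each individual face approaches 1/6; these are two different quantities, and neither is "1/6 times the number of trials".

```python
import random

random.seed(0)
n = 200000
rolls = [random.randint(1, 6) for _ in range(n)]

average = sum(rolls) / n                                             # law of large numbers: -> 3.5
frequencies = {face: rolls.count(face) / n for face in range(1, 7)}  # each -> 1/6

print(average)
print(frequencies)
```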
Your comments are basically about practical limitations and these can be safely ignored because, as actual experimentation shows, even a standard-issue die/coin behaves probabilistically. — TheMadFool
On the one hand, you say that practical limitations can be safely ignored, and on the other hand you wish to appeal to actual experimentation. You have to choose one. Practical limitations may not be important to the law of large numbers when it comes to an ideal die, but they're certainly vitally important to actual experimentation. That's a theoretical issue, by the way: the universe we live in is only a very small sample compared to the infinite number of throws, and what any sample we throw in the real world converges to is the actual distribution of the variable, and not the ideal distribution (though the sets can and often will overlap).
More importantly, though, since you're talking about determinism, you're actually interested in practical limitations and how they relate to probability. It's me who says practical limitations are unimportant to the law of large numbers, because it's an entirely mathematical concept (and thus entirely logical). Not even a universe in which nothing but sixes are thrown would have anything of interest to say about the law of large numbers.
I'd say the core problem is that without a clearly defined number of elements in a set (N), you have no sense of scale. How do you answer the question whether all the die throws in the universe are a "large number" when you're talking about a totality of infinite tries? If you plot out tries (real or imagined, doesn't matter) you'll see that the curve doesn't linearly approach the expected value but goes up and down and stabilises around the value. If all the tries in the universe come up 6, this is certainly unlikely (1/6^N; N = number of dice thrown in the universe), but in the context of an ideal die thrown an infinite number of times, this is just a tiny local divergence. — Dawnstorm
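A short simulation of that last point, with invented numbers: even if the first 100 throws all come up 6 (a freak but possible run), the running average starts at 6.0 and then settles back toward 3.5 as further throws accumulate, so the early run is only a local divergence.

```python
import random

random.seed(1)
streak = [6] * 100                                    # a freak run of 100 sixes up front
rest = [random.randint(1, 6) for _ in range(100000)]
rolls = streak + rest

total = 0
for i, roll in enumerate(rolls, start=1):
    total += roll
    if i in (100, 1000, 10000, 100000):
        print(i, total / i)  # 6.0 after the streak, then back toward 3.5
```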