• Rich
    3.2k
    Why don't you ruminate over the Dogma: "The Laws of Nature are Fixed". Sometimes it is necessary to pause and ruminate.
  • Rich
    3.2k
    If you don't understand it, don't worry about it.
  • apokrisis
    6.8k
Academic education is all about regurgitating for A's. – Rich

    Sure, at school, there often is too much stress on regurgitation at the expense of teaching critical thinking.

    But the question here, on a philosophy forum, is are you able to demonstrate a capacity for critical thinking?

    You have your own faith to peddle. Morphogenetic fields, holographic quantum mind projection and other routine New Age babble.

    What people are pointing out to you is the big difference between critical faith and uncritical faith. If you accept no method of fixing belief, then you didn't even learn that lesson at school.
  • fdrake
    6k


I don't actually believe that it's necessary for them to be fixed. If you look at the start of the universe, it's predicted that the four fundamental forces of nature unify. Having a unification of gravity, the strong force, the weak force and the electromagnetic force is a wildly different reality from our usual gravity + strong + weak + electromagnetic, or gravity + strong + (electroweak), which is sometimes used.

    If the laws of nature were not fixed I would be very interested in finding out how they change and whether it's predictable. I would love to see an experiment or theory which, say, found different values for the cosmological constant for different periods of the universe.

    However, I think there is good evidence that nature behaves in a roughly constant way over large time scales.

Dogmatism isn't really a function of a person's beliefs, it's a function of HOW they believe them. I try to find evidence for and against my beliefs, which is why I decided to challenge you on this: to see if there was any 'cause for concern' in some substructure of my beliefs. As a reward I got a few interesting thoughts about the non-constancy of nature's laws over long time scales, and a few 'arche-fossil 101' arguments to use against QM vitalists. I learned some stuff from talking to you. This is a very non-dogmatic viewpoint. You also probably assume that I dismissed Sheldrake immediately. I didn't - I did a bunch of reading a few years ago, found his ideas not cogent and not relevant, and gathered evidence for those beliefs.

So, have your beliefs shifted at all? Have you learned anything? I don't think you have, since I don't think you spent time trying to understand the arguments I made OR why I disagreed with you in the first place. That's dogmatism; try to avoid it. I hope you did learn something, though.
  • Rich
    3.2k
However, I think there is good evidence that nature behaves in a roughly constant way over large time scales. – fdrake

We have evidence for only a short period of time, and as Sheldrake relates, even a very short period may be too long.

When we discuss the nature of nature, I prefer evidence over stories. Stories tend to be biased toward pre-ordained goals. If nature is living, then everything is evolving. It's possible. Whether or not it is testable, I don't know, but we do know that scientific theories and experimental results are constantly changing.

Nothing I believe is dogma. My beliefs are always changing because I seek change. I am certainly not going to entertain any dogmas of science.
  • fdrake
    6k


No, we have evidence over a very long period of time. We, roughly, have a paradigm in physics called quantum mechanics which has been around for just under 100 years. The contents of this theory allow predictions of particle behaviour over long time scales - for example, an explanation of the slow beta decay of carbon-14 based on the low probability of observing sufficiently energetic W bosons (which is why radiocarbon dating works). Theory can, and does, make predictions for times before and after the development of the theory. It would be a terrible theory if it couldn't.
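    The radiocarbon-dating point above can be sketched numerically. This is my own illustration, not anything from the thread: the function names and the commonly quoted half-life value are my choices.

    ```python
    import math

    # Half-life of carbon-14 in years (commonly quoted value).
    C14_HALF_LIFE_YEARS = 5730.0

    def c14_fraction_remaining(age_years: float) -> float:
        """Fraction of an original carbon-14 sample left after age_years.

        Exponential decay: N(t) = N0 * 2^(-t / half_life).
        """
        return 2.0 ** (-age_years / C14_HALF_LIFE_YEARS)

    def age_from_fraction(fraction: float) -> float:
        """Invert the decay law to estimate a sample's age from the measured
        fraction of carbon-14 remaining -- the core of radiocarbon dating."""
        return -C14_HALF_LIFE_YEARS * math.log2(fraction)

    print(c14_fraction_remaining(5730.0))  # one half-life -> 0.5
    print(age_from_fraction(0.25))         # two half-lives -> 11460.0 years
    ```

    The point being: a theory written down in the twentieth century makes checkable claims about samples tens of thousands of years old.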
  • Rich
    3.2k
No, we have evidence over a very long period of time. We, roughly, have a paradigm in physics called quantum mechanics which has been around for just under 100 years. – fdrake

100 years is a very, very short period of time, and all quantum mechanics provides is a probabilistic equation as well as an Uncertainty Principle. This doesn't really give us much to roll with.

    https://www.researchgate.net/post/Why_cant_Schrodingers_equation_be_used_with_high_accuracy_for_atoms_that_are_different_from_hydrogen_atoms
  • fdrake
    6k


    You probably didn't read most of my posts in the thread fully, but uncertainty principles occur in lots of contexts. Every time you have a sequence of records over time there is a derived quantity which has an uncertainty principle associated with it. Anyway:

    Just because there are current avenues for improvement or further research in a field doesn't make all the predictions of a field wrong. Quantum mechanics has been amazingly successful in producing semiconductors, radiocarbon dating techniques... If you've never read Isaac Asimov's 'The Relativity of Wrong' it's an excellent read:

The young specialist in English Lit, having quoted me, went on to lecture me severely on the fact that in every century people have thought they understood the universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern "knowledge" is that it is wrong. The young man then quoted with approval what Socrates had said on learning that the Delphic oracle had proclaimed him the wisest man in Greece. "If I am the wisest man," said Socrates, "it is because I alone know that I know nothing." The implication was that I was very foolish because I was under the impression I knew a great deal.

    My answer to him was, "John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together."

    It actually has given us a lot to roll with. Quantum advances have been partially responsible for Moore's Law of computational power growth along with PET scans, radiocarbon dating...
  • Rich
    3.2k
Just because there are current avenues for improvement or further research in a field doesn't make all the predictions of a field wrong. – fdrake

    Never said this. It is possible that the cause of inaccuracies may be the evolution of the universe itself.

    Quantum physics is fine For All Practical Purposes (FAPP) as are Newton's Laws where applicable. It doesn't mean that they aren't slowly evolving. Given that there is no evidence that the equations are precise and unchanging, it is a leap of faith to say otherwise.

Now really, this can't go much further. You want to believe in unchanging laws of nature; I am not here to convince you otherwise. I am just saying there is no evidence. We all have a choice in what we believe and what we don't.
  • fdrake
    6k


Claiming that the laws are changing also requires evidence. To be consistent with current physics, all changes must lie within experimental error for all experiments (otherwise the change would have been noted). The idea that the laws change with time thus has no support, since every observation which could have supported it is also consistent, by construction, with a constant-laws-of-nature solution. You would also end up with some crazy things, like 'unchanging laws governing the time evolution' of the changing laws.

A photon has energy equal to h*f, where h is Planck's constant and f is its frequency. Assume that this law holds before some time t, and that after t we instead have a new constant i. This induces a discontinuous jump. But this could also be modelled as an 'unchanging law of nature' by having the energy equal h(x)*f, where h is a function of time x such that h(x) = h for x < t and h(x) = i for x >= t. If the constant changed continuously, you could likewise obtain an 'unchanging physical law' by specifying a continuous functional form for h, and you could estimate it precisely in experiments by measuring photon energy and dividing by frequency - the law as it stands makes this a straight-line graph; the modified law would instead have h(t) as the graph, still an 'unchanging physical law'.
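    This re-absorption of a 'changing constant' into a fixed rule can be sketched in code. Everything here is my own illustration: the switch time, the hypothetical post-switch constant, and the function names are all invented for the example.

    ```python
    # Sketch of the point above: a 'changing law' E = h(t) * f can always be
    # rewritten as one unchanging law whose 'constant' is a function of time.

    T_SWITCH = 100.0        # hypothetical time t at which the constant jumps
    H_BEFORE = 6.626e-34    # Planck's constant before t (J*s)
    H_AFTER = 7.0e-34       # hypothetical new constant i after t (invented)

    def h(t: float) -> float:
        """Time-dependent 'constant': one fixed rule covering both regimes."""
        return H_BEFORE if t < T_SWITCH else H_AFTER

    def photon_energy(frequency: float, t: float) -> float:
        """E = h(t) * f -- an unchanging functional law, even though the
        measured 'constant' differs before and after T_SWITCH."""
        return h(t) * frequency

    # Estimating the 'constant' at any time: measure energy, divide by frequency.
    f = 5.0e14  # an arbitrary optical frequency in Hz
    h_est_before = photon_energy(f, 50.0) / f
    h_est_after = photon_energy(f, 150.0) / f
    print(h_est_before, h_est_after)
    ```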

In order not to be in a scenario equivalent to one with unchanging laws, you would require the changes in natural law to behave in a completely patternless manner, essentially adding a huge-variance noise term to every single physical law. This is already falsified, since, say, Planck's constant can be measured very precisely!
  • Rich
    3.2k
Claiming that the laws are changing also requires evidence. – fdrake

    I never said this. What I did say is that science is constantly changing which may be attributable to underlying universal evolution. Something to ruminate over. Really, take some time to think about it.
  • fdrake
    6k


    What other interpretations can there be for your statement:

    It is possible that the cause of inaccuracies may be the evolution of the universe itself.

    ?
  • Rich
    3.2k
    Ruminate.
  • fdrake
    6k
    I suppose that means you can't defend your assertions any more.
  • Rich
    3.2k
    ok. I'm good with that.
  • Cavacava
    2.4k
    Pennies pushed, not flipped.
  • Jeremiah
    1.5k


    Like most people, you have a horrible understanding of what probability is. Probability is the frequency of possible outcomes. Whether or not that is a result of predetermination or "chance" is irrelevant.
  • TheMadFool
    13.8k
Like most people, you have a horrible understanding of what probability is. Probability is the frequency of possible outcomes. Whether or not that is a result of predetermination or "chance" is irrelevant. – Jeremiah

    Can you teach me the correct understanding of probability?
  • fdrake
    6k
    Mathematically probability doesn't have to resemble either 'the long term frequency of events' or 'a representation of a subjective degree of belief and evidence for a proposition'. Its definition is compatible with both of these things.

To understand what it's doing, we need to look at what a random variable is. A random variable is a mapping from a collection of possible events and rules for combining them (called a sigma algebra) to a set of values it may take. More formally, a random variable is a measurable mapping from a probability space to a set of values it can take. Intuitively, this means that any particular event that could happen for this random variable takes up a definite size in the set of all possible events. A random variable X is said to follow a probability measure P if, for any set of values A, the probability that X lands in A is the measure of the collection of events which make X take values in A.

    There's nothing in here specifically about 'frequency of possible outcomes', since for continuous* probability spaces the probability of any specific outcome is 0.

    Fundamentally, all probability is an evaluation of the size of a set with respect to the size of other sets in the same space.

    *specifically non-atomic probability measures
  • Jeremiah
    1.5k


    You have some aspects right and others wrong. The definition of probability is the frequency of possible outcomes of repeated random events (random in this context means that all events have an equal chance of being selected).

Think of probability as a ruler, and we are using it to measure possible outcomes, in the same way you might measure a length of string. Now, there is a true frequency of occurrences for those possible outcomes, which is every bit as objective and real as the length of the string, and, like the length of the string, we lack the ability to measure the true value. We can approximate the length of the string, but our methods and tools are not fine enough to find the true length of the string. The same holds true with probability.

    For the given possible outcomes, there is a true frequency of occurrences, which we measure and approximate with a "ruler" we call probability. The fact that those occurrences occur by contingent causation is irrelevant to that measurement, as that is not what we are measuring. We are measuring the frequency of possible outcomes, which does have a true value - even if we can only approximate it.
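    The 'ruler' analogy above can be sketched as a simulation: the empirical frequency of an outcome approximates a true underlying probability, and the approximation sharpens with more repetitions (the law of large numbers). The true probability 0.3 and the trial counts here are arbitrary choices of mine.

    ```python
    import random

    random.seed(0)  # fixed seed so the sketch is reproducible

    TRUE_P = 0.3  # the 'true length of the string': P(success), chosen arbitrarily

    def empirical_frequency(n_trials: int) -> float:
        """Estimate TRUE_P from repeated random trials -- the 'ruler' measurement."""
        successes = sum(random.random() < TRUE_P for _ in range(n_trials))
        return successes / n_trials

    # More trials -> a finer measurement of the same underlying value.
    for n in (100, 10_000, 1_000_000):
        print(n, empirical_frequency(n))
    ```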
  • Jeremiah
    1.5k
To understand what it's doing, we need to look at what a random variable is. A random variable is a mapping from a collection of possible events and rules for combining them (called a sigma algebra) to a set of values it may take. More formally, a random variable is a measureable mapping from a probability space to a set of values it can take. Intuitively, this means that any particular event that could happen for this random variable takes up a definite size in the set of all possible events. It is said that a random variable X satisfies a probability measure P if the associated size of an event (E) which induces a set of values from the random variable has probability P(E makes X take the set of values). – fdrake

    This sounds more like deviation. Did you quote that from somewhere or are those your own words?
  • fdrake
    6k
    It's a summary of the mathematical definition of random variables and probability in terms of measure theory. Here are a few references.

When mathematicians and statisticians speak about probabilities, they are secretly speaking about these. The definitions are consistent with both frequentist (frequency, asymptotic frequency) and Bayesian (subjective probability) philosophical interpretations, and also with more general notions of probability where probability distributions represent neither, as in Bayesian shrinkage and frequentist regularization approaches.
  • Jeremiah
    1.5k


    I actually asked you if you wrote it or quoted it.
  • fdrake
    6k
    I wrote that post.
  • Jeremiah
    1.5k


    And what are your qualifications for making such an assessment?
  • fdrake
    6k
    I'm a statistician.
  • Jeremiah
    1.5k


I am a 4th-year statistics major, and to me it sounds like you are talking about deviation and not a probability distribution.
  • fdrake
    6k
    Have you taken a class in measure and probability theory?
  • Jeremiah
    1.5k


    I have taken some probability mathematics and still have more to go.

However, probability was defined in intro to stats, and has been echoed through all my courses, as the frequency of possible outcomes from a repeated random event. I am very clear on that; it is in my textbooks, it is defined and used that way in academic papers, and it is not a hard concept to grasp.

    Also, if you're going to give a reference, then give a direct reference, and not a "it is somewhere in that general direction." Statistics is the second degree I am working on; writing was my first, and I know that that is a poor citation.
  • fdrake
    6k


    I did not intend you to feel intimidated or patronised, so please try to be less aggressive.

    Also, if you're going to give a reference, then give a direct reference, and not a "it is somewhere in that general direction." Statistics is the second degree I am working on; writing was my first, and I know that that is a poor citation.

    My posts aren't intended to be parts of academic papers, that would be boring. But if it helps, here is a description of some of the references I gave:

The first two links, in 'here' and 'are', have the definition within the first two pages. The last is literally a whole course on measure-theoretic probability; the definition, and intuitive explanations thereof, is contained in the section labelled 'Random Variables'. It goes from the intuitive notions you will already have met to the formalistic notions you will see if you take a course in measure-theoretic probability.

    Yes, a non measure-theoretic definition of probability is what is used in introductory stats courses and wherever the measure theoretic properties are irrelevant. Probability being 'the frequency of possible outcomes from repeated random events' is the frequentist intuition of probability. It is arguably incompatible with the use of Bayes' Theorem to fit models because of 1) the existence of a prior distribution and 2) the interpretation of population parameters as random variables.

    Measure-theoretic probability governs the elementary things in both of these approaches - random variables and probability distributions - and so clearly implies neither.
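    The Bayesian half of that contrast - the population parameter treated as a random variable with its own distribution - can be sketched with the standard Beta-Binomial conjugate update. This is my own minimal illustration; the prior and data values are invented for the example.

    ```python
    # With a Beta(a, b) prior on a success probability p, and k successes
    # observed in n binomial trials, Bayes' theorem gives the posterior
    # Beta(a + k, b + n - k) -- the standard conjugate result. The parameter
    # p is itself a random variable here, which is what frequentist
    # 'frequency of outcomes' intuition struggles to accommodate.

    def beta_binomial_update(a: float, b: float, k: int, n: int) -> tuple:
        """Return posterior Beta parameters after k successes in n trials."""
        return (a + k, b + (n - k))

    def beta_mean(a: float, b: float) -> float:
        """Mean of a Beta(a, b) distribution."""
        return a / (a + b)

    # Uniform prior Beta(1, 1); observe 7 successes in 10 trials.
    post_a, post_b = beta_binomial_update(1.0, 1.0, 7, 10)
    print(post_a, post_b)             # -> 8.0 4.0
    print(beta_mean(post_a, post_b))  # posterior mean 8/12 = 0.666...
    ```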