• Virtual Physics...Ergo, Virtual Any Damn Thing That Fancies You
    it enables magic (fantasy games) and über-advanced technology (superhero and space genres)TheMadFool
    ... and jiggling!

    Kidding aside, I dunno that the link you imagine exists, or that the link that does exist is all that interesting.

    When a player character shoots lightning from their fingertips, the physics part, if any, is to make the lightning look like lightning - meaning to make it look like real-world lightning, or maybe more likely like what the player expects lightning-from-fingertips to look like based on what they've seen in other visual works of fiction.

    The magic part is how the lightning gets into the fingertips, and how it gets out of them without causing damage. Neither of these is being visualized, though - because it happens off-screen, because the player has no expectations that need be met, because what is happening is that something is not happening, whatever. So there's nothing much for the engine to do here.

    In short, there's no overlap to speak of. A similar point applies to cartoon physics, with the creators' physical experience and intuition taking the place of the software's physics engine.

    I'm not necessarily saying that there are no counter-examples, but off the top of my head, I can't think of any.
  • Virtual Physics...Ergo, Virtual Any Damn Thing That Fancies You
    Visualizing different laws of nature is straightforward, as demonstrated by ^ cartoon physics.

    Visualizing different mathematics and logics is vastly less straightforward. The closest I can think of is what works like Escher's do with geometry, but I'd call that bending, not breaking, the familiar rules.

    I mean, to visualize 1+1=3, say, you'd need to show something along the lines of there being an apple and adding another apple and not in any way adding yet another apple, but still ending up with three apples. I can abstractly consider, but not concretely imagine, that occurring. And being able to imagine something rather seems like a prerequisite for being able to create an image of that something.

    AFAIK, the usual explanation for this difference in kind is that some of the ways our universe works are hardwired into our brain, meaning that short of upgrading our minds to superior hardware, this is one limitation we have to live with.

    Considering how amazingly adaptive the brain is, though, I'd not dismiss the importance of nurture in this regard. Maybe a human raised in a world in which 1+1=3 would cope just fine, and be conversely completely incapable of imagining a 1+1=2 world. Unfortunately, the human body would cope not at all, so we're back to separating mind from body. *shrug*
  • The biological status of memes
    The deeper problem for me is that such fuzzy definitions, applied to core concepts like "alive" or "having being" seems to not only uproot bivalence, but the Law of the Excluded Middle entirely, and then what's left of your logical systems?Count Timothy von Icarus

    Fuzzy concept, fuzzy logic. I take your point that the more fundamental the concept, the more uneasy it makes us to think that way. But that has everything to do with the way we think, or like to think, and nothing to do with the way the things we think about actually are.
  • The biological status of memes
    The human mind likes binary propositions. Nature does not.

    At the very least, "the property of 'being alive'" has fuzzy edges. And reconsidering it in terms of a matter of degrees may well be more worthwhile altogether. Of course, that immediately runs into trouble of its own. Is a human being more or less alive than a single one of its constituent cells; is an ant colony more or less alive than a single one of its constituent ants?

    All of this is to be expected, though. "Life" is an emergent quality of complex systems, and those are invariably difficult to pin down.
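
    To make "a matter of degrees" concrete, a toy scoring sketch (Python, with criteria and numbers that are entirely invented for illustration) might look like this:

        # A made-up "degree of aliveness" in [0, 1], instead of a yes/no predicate.
        # The criteria and the example scores are purely illustrative.
        def aliveness(metabolism, reproduction, autonomy):
            return (metabolism + reproduction + autonomy) / 3

        print(aliveness(1.0, 1.0, 1.0))  # a bacterium, say: 1.0
        print(aliveness(0.0, 0.9, 0.1))  # a virus: ~0.33
        print(aliveness(0.1, 1.0, 0.2))  # a meme, on a generous reading: ~0.43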

    Still, none of this makes the concept "meaningless" or even any less meaningful than otherwise.

    IMHO. :)
  • IQ and Behavior
    Doesn't this mean that, in some sense, random throws of a die or a coin exhibits greater [genius-level] intelligence than an actual intelligent being?TheMadFool
    Pure trial-and-error is less efficient than design, of course, but it can be more effective - because it has no qualms about exploring the entirety of the solution space.
    The concept [of evolvable hardware] was pioneered by Adrian Thompson at the University of Sussex, England, who in 1996 used [a field-programmable gate array] to evolve a tone discriminator that used fewer than 40 programmable logic gates, and had no clock signal. This is a remarkably small design for such a device, and relied on exploiting peculiarities of the hardware that engineers normally avoid. For example, one group of gates has no logical connection to the rest of the circuit, yet is crucial to its function.
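
    To make the contrast concrete, here's a toy Python sketch - not Thompson's experiment, just an invented one-dimensional "fitness landscape" - in which greedy, design-like improvement gets stuck on the nearest peak, while unprincipled random sampling happily stumbles onto the better one elsewhere:

        # Invented fitness landscape: a broad local peak near x=2 (height 3)
        # and a narrow global peak near x=9 (height 5).
        import random

        random.seed(0)

        def fitness(x):
            return max(3 - abs(x - 2), 5 - 10 * abs(x - 9), 0)

        def hill_climb(x=0.0, step=0.1, iters=1000):
            # Greedy, "designed" improvement: only ever accept a better neighbour.
            for _ in range(iters):
                best = max((x - step, x + step), key=fitness)
                if fitness(best) <= fitness(x):
                    break  # stuck on whichever peak was closest
                x = best
            return round(x, 3), round(fitness(x), 3)

        def random_search(iters=1000):
            # Pure trial and error: sample the whole interval, keep the best.
            best = max((random.uniform(0, 10) for _ in range(iters)), key=fitness)
            return round(best, 3), round(fitness(best), 3)

        print("hill-climbing: ", hill_climb())    # lands on the local peak (~2)
        print("random search:", random_search())  # all but certain to find the global peak (~9)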
  • Nature vs Nurture vs Other?
    It's much easier to gain the traits to become rich in a rich country than in a poor one.magritte
    Heh, I was only thinking of contrasting the two contexts, and here you are combining them. Interesting idea!
  • Nature vs Nurture vs Other?
    You have changed the question [...]magritte
    That was rather the point. Does it make a difference? Why or why not? The answers to those new questions may provide insight into the original one.
    Wealth comes from other people. That brings in the environment, both physical and social.magritte
    Who are traits inherited from, if not other people? Where are traits acquired from, if not the environment? Those sound like similarities to me - am I missing something?
  • Nature vs Nurture vs Other?
    This might be a worthwhile approach: Pick a definition, then replace the common element in that definition by something else, then look for a third type in that new context. For example, if "nature" is "inherited traits" and "nurture" is "acquired traits", the common element is "traits", which, off the top of my head, I'll replace with "wealth". So is there a third way to become wealthy, besides inheriting and acquiring? FWIW, I'd categorise my earlier suggestion of "destiny" as "imposed", which overlaps partly with either and fully with neither of the original pair, IMO. Mind you, neither "imposed traits" nor "imposed wealth" makes a whole lotta sense to me... *shrug*
  • Precision & Science
    The speedometer is both accurate and precise.TheMadFool

    In a thought experiment, you can have such a thing as a perfect speedometer, and use it to perfectly determine relative speeds, and use those to test models against each other, as long as their predictions differ at all.

    In the real world, a speedometer can't be perfect, only better or worse than another speedometer. To be able to test models against each other, their predictions need to differ by enough to overcome those imperfections.
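
    In Python terms, with made-up numbers, the criterion is simply that the predictions have to differ by (much) more than the instrument's imperfection before the comparison tells us anything:

        # Two models can only be told apart when their predictions differ by
        # (much) more than the instrument error. All numbers are invented.
        def can_discriminate(prediction_a, prediction_b, instrument_error, k=3):
            return abs(prediction_a - prediction_b) > k * instrument_error

        pred_a, pred_b = 222222.226, 222222.196           # m/s, ~30 mm/s apart

        print(can_discriminate(pred_a, pred_b, 3.0))      # speedometer good to ~3 m/s: False
        print(can_discriminate(pred_a, pred_b, 0.003))    # good to ~3 mm/s: True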

    Suppose the actual velocity is [...]TheMadFool
    In the real world, there's no point in supposing such a thing, because the only way we can find out is to measure it. In a thought experiment, there may be a point - but thought experiments can't confirm theories, only falsify hypotheses that are internally inconsistent.
  • Nature vs Nurture vs Other?
    Can, yes, I reckon so.

    When one interprets the terms more broadly, they simply partition the whole, along the lines of "nature" standing for "all things internal" and "nurture" for "all things external". Then, there's nothing else, so no.

    When one interprets them more narrowly, though, introducing additional behavioural drivers makes sense. The first one that came to my mind is "destiny", seeming sufficiently distant from the core senses of both "nature" and "nurture" to be considered distinct. I suspect the thought process may have gone from "natural" via "non-natural" to "supernatural". I definitely do not mean to suggest that stepping outside the ordinary is necessary, however.
  • Precision & Science
    Re-reading the recent posts, I think any remaining confusion comes down to theory versus application, more than anything else. The concept of "precision" comes into it on both those levels, and it means fundamentally the same thing on both of them - but what it means specifically depends on the specific context.

    To illustrate, let's consider everyone's favourite thought experiment, flipping a coin.

    Theory: The simplest model, let's label it "Alpha", says that there are only two outcomes, heads and tails, and that they have the same probability, Ph = Pt = 50%. Well, actually, there is a third outcome, in which the coin balances on its rim. So in model "Bravo", we treat the coin as a cylinder with radius R and thickness T, and say that the probability for that third outcome depends on those new inputs, Pr = f(R, T), and that the two original outcomes remain equally likely, Ph = Pt = (100% - Pr)/2. But actually, a cylinder has at least two further equilibrium positions, in which it balances on a point along one of the lines at which the rim and the faces meet. So in model "Charlie"...

    Application: Flip a coin, repeat N times, count how often each outcome occurs. The ratio Nh/N measures the probability Ph for heads, et cetera.

    Now, which model is more precise, Alpha or Bravo? A case can be made either way. Alpha predicts Ph to be 50%, which is perfectly precise in the sense that no source of imprecision is included in this model. It's not 0.5, precise to 1 sigfig, or 0.500, precise to 3 sigfigs, but 1/2, the ratio of two integers.

    Bravo, by contrast, expresses the probabilities in terms of physical properties that have to be measured. Those measurements are necessarily imprecise, and because imprecise inputs yield imprecise outputs, this model's numerical predictions cannot be perfectly precise. Bravo is a less precise model than Alpha, in this sense.

    However, treating the coin as a three-dimensional cylinder with thickness T is closer to reality than treating it as a two-dimensional disk with thickness zero. So Bravo can be thought of as approximating reality, and Alpha can be thought of as approximating Bravo, for a typical coin. Being only approximations, neither prediction should be considered precise, but it's reasonable to expect Bravo to be less imprecise than Alpha, in that sense.

    On the applied side, how precise are those measured probabilities? For one thing, a ratio like Nh/N isn't quite the same as that 1/2 above, because the numerator and denominator aren't integers in quite the same sense. As N gets large, miscounting becomes inevitable, so a result like 12345/23456 shouldn't be thought of as perfectly precise any longer. If we estimate the uncertainty to be on the order of 100, say, we can employ scientific notation to write that as (1.23*10^4)/(2.35*10^4) to make that point.

    For another thing, by design, this is about chance, and so there's always a chance the measured probabilities won't agree with the theoretical predictions regardless of whether the model is good or bad. For N=2, there're four simple outcomes - heads then heads again, heads then tails, ... - and half of them are best explained by a model that says "the coin keeps doing the same thing". Fortunately, such flukes get less likely as N gets large - unfortunately, that means that measurements can't avoid both types of imprecision at once.
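
    A quick simulation of the "Application" step makes both kinds of imprecision visible - a tiny N risks flukes, a large N shrinks but never eliminates the scatter around 50%. A Python sketch, taking Alpha as the "true" model:

        import random

        random.seed(42)

        def estimate_ph(n):
            # Flip a fair coin n times (model Alpha), return the measured Nh/N.
            heads = sum(random.random() < 0.5 for _ in range(n))
            return heads / n

        for n in (2, 100, 10_000, 1_000_000):
            print(f"N = {n:>9,}: Nh/N = {estimate_ph(n):.4f}")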

    TLDR, lots of stuff may be thought of as imprecision, but doing so may provide little insight.
  • Precision & Science
    The difference between Newton and Einstein, their theories to be "precise", manifests as differences in the precision of the outputs of the respective formulae of Newtonian velocity addition and relativistic velocity addition.TheMadFool
    Agreed, but with reservations. We can "parametrise" the speed summation equation like this in general:

    v = gamma * (v1+v2)

    According to Newton, gamma = 1. According to Einstein, gamma = 1 / (1 + v1v2/c^2). It's instructive to consider how Einstein's expression behaves as v1 and v2 approach 0 on the one hand - gamma approaches 1, the Newtonian limit - and the speed of light on the other hand - gamma approaches 1/2, which then keeps v from ever exceeding c.
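
    A quick numeric check of those two limits (Python, with c = 299,792,458 m/s; the speeds are arbitrary examples):

        C = 299_792_458.0  # speed of light, m/s

        def gamma_einstein(v1, v2):
            return 1.0 / (1.0 + v1 * v2 / C**2)

        # A car, a fast probe, 0.9c, and c itself; the combined speed never exceeds C.
        for v in (30.0, 3.0e5, 0.9 * C, C):
            g = gamma_einstein(v, v)
            print(f"v1 = v2 = {v:14.3e} m/s  gamma = {g:.9f}  combined v = {g * 2 * v:.6e} m/s")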

    And if one thinks of the Newtonian, constant value as an approximation, either of the Relativistic expression or of reality, then this introduces an imprecision into the output of the equation that is disconnected from the imprecision of the inputs of the equation.

    This, I believe, is not how physicists typically think about it, though. One reason is that while plenty of physical models are explicitly constructed like that, in this case it would be more of a retcon. More importantly, to be considered sound, those models must themselves supply a means of estimating the magnitude of the imprecision they contain. For Newton, you have to step outside the model to come up with such an estimate.

    You'd miss it completely if you maintain that significant digits preclude higher precision in the output than in the inputs.TheMadFool
    Precisely. In F = m*a, the imprecision in F is the combined imprecision in m and a, both of which need to be measured. In v = gamma * (v1+v2), the imprecision in v is the combined imprecision from taking gamma to be a constant and from the straight summation of v1 and v2, which again need to be measured. The only way not to "miss it completely" is for the parametric contribution to be the dominant one, which in practice means either Relativistically high speeds, or high precision in measuring those speeds, or ideally both.
  • Precision & Science

    Okay, I think I see now what you're grappling with. The point is this one:

    A) Low-precision version of the experiment

    Data
    • v1 ~ 111.110 km/s (speed of the first probe, as measured by a stationary observer)
    • v2 ~ 111.113 km/s (speed of the second probe, ditto)
    • v12 ~ 222.222 km/s (speed of the first probe, as measured by the second probe)
    • v21 ~ 222.219 km/s (ditto, vice versa)

    Theory
    • vo = v1+v2 ~ 222.223 km/s (old model)
    • vn = (v1+v2) / (1 + v1v2/c^2) ~ 222.223 km/s (new model)

    The measurement tools used in this version are precise to a few m/s, which shows up as noise at the level of the 6th sigfig. Using more sigfigs in the computations would be pointless and misleading. The measured values and those derived from the old and new models are all close enough to each other to be considered identical. We've simply confirmed both models, lacking the power to discriminate between them.

    B) High-precision version of the experiment

    Data
    • v1 ~ 111.111114 km/s
    • v2 ~ 111.111112 km/s
    • v12 ~ 222.222198 km/s
    • v21 ~ 222.222194 km/s

    Theory
    • vo ~ 222.222226 km/s
    • vn ~ 222.222196 km/s

    Now we're using tools precise to a few mm/s, and so increase our working precision to 9 sigfigs. This extra precision is what allows us to say that there is a non-negligible difference (~30 mm/s) between the predictions made by the old and new models, and to meaningfully compare the experimental data with either one. The data disagrees with the old and agrees with the new model, which is strong confirmation of the latter.
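
    For the record, a Python sketch of that comparison - it computes the old and new predictions from the measured v1 and v2 and asks whether the ~30 mm/s gap between them is resolvable at each version's instrument precision:

        C = 299_792.458  # speed of light, km/s

        def old_model(v1, v2):
            return v1 + v2

        def new_model(v1, v2):
            return (v1 + v2) / (1 + v1 * v2 / C**2)

        for label, v1, v2, tool in (
            ("A, tools good to ~3 m/s ", 111.110, 111.113, 3e-3),        # tool precision in km/s
            ("B, tools good to ~3 mm/s", 111.111114, 111.111112, 3e-6),
        ):
            gap = old_model(v1, v2) - new_model(v1, v2)   # km/s
            print(f"{label}: model gap = {gap * 1e6:.1f} mm/s, resolvable: {gap > tool}")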

    If you're still not quite comfortable with sigfigs, remember that they're merely a shorthand for how much error there is in a value. Maybe the readout of the low-precision tool used 9 figures, and gave us v1 as "111,109.876 m/s". There's nothing wrong with reporting that as "111.109876 km/s, with a margin of error of 3 m/s", say. It's just more verbose and "not the done thing" in this context.

    Happy? :)
  • Precision & Science
    1. Am I correct about what I said about Newton? Had his measurements for mass and distance been more precise (had more decimal places) than what was available to him, he would've realized that the formula was wrong.TheMadFool
    Unlikely, I'd say.

    What one learns in school about the Scientific Method is that when a new practical result turns out to contradict the old theoretical system, what scientists do is throw away the old system and replace it with a new one.

    What happens in the real world is a lot messier, because there are always a bunch of possible reasons for such discrepancies. Maybe the result was a fluke. Maybe there was a systematic error in how it was obtained. Maybe it doesn't show us a single effect, but how various effects interact, and the old theory works fine for the primary one but doesn't apply to each of the secondary ones, or one of the theories that do apply to the secondary ones is the one that's dodgy, or some of those other theories don't even exist yet because this is the first time this effect has shown up. Or, or, or.

    For an illustration, imagine aliens living on our Moon using a high-precision optical telescope to observe a cannon firing on Earth, and noticing that the cannonball's trajectory doesn't quite match Newtonian predictions. Do they need to invent Relativity? A far likelier explanation is that they've not properly accounted for atmospheric effects like drag, given that their Lunar environment doesn't have much of an atmosphere.

    For an example, have a look at Pioneer anomaly @ wikipedia.

    So that's one good reason not to give up on a theory at the first sign of trouble. Another one is that until there's a new theory, you use the old one, whether or not you know it to be flawed. In the traditional interpretation, in which theories can be true or false, that's a bit distasteful - but in the modern interpretation, in which models can only be better or worse approximations, there's nothing wrong with it.

    With all that in mind, what would Newton have done with those high-precision measurements? It's not like he was in a position to go ahead and come up with Relativity himself: None of the theoretical groundwork that Einstein built on was in place at the time, not least because the bulk of it was ultimately built on Newtonian foundations in turn. Realistically, it would have made little difference, other than to make him suspect that some other effect, like the atmospheric drag in my illustration or the thermal recoil in the Pioneer example, comes into play at some point.

    2. Why can't the output of a formula not be more precise than the input?TheMadFool
    Did you not like my earlier explanation?
    The general proof again needs statistical methods, no doubt. For the specific case of a multiplication like F = ma, though, just think of the inputs as the length and width of a rectangle, and the output as its area. If the length is known perfectly, and the width has an uncertainty of 10%, say, then the area will have an uncertainty of 10% as well. Vice versa, if the length has the 10% uncertainty, and the width is known perfectly, same result. So when both the length and the width have a 10% uncertainty, it should be clear that the area now has an uncertainty of more than 10%.onomatomanic

    What is of concern to me is why an entirely new model needs to be built from scratch simply to explain a more precise measurement if that is what's actually going on?TheMadFool
    Part of the problem may be that you're thinking in terms of individual measurements. Think in terms of datasets instead:

    [Image Y6JfhtE.png: two datasets, a low-precision one fitted by a straight blue line and a high-precision one fitted by a curved green line]

    The upper dataset is low-precision, and can be "explained" as the blue line, which is straight. The lower dataset is high-precision, and must be explained as the green line, which is curved. The old model was quite good, in the sense that it predicts parameters (offset and slope) for the straight line that put it in the right place. But straight lines are all it can do, so it's not good enough for the higher-precision data. The new model is better, in the sense that it can do what the old model can do, plus predict curvature parameters. Still, the old model remains better in the sense that it's less cumbersome to work with, so it makes sense to keep using it whenever either the line doesn't curve or the needed precision isn't high. (Hm, that actually worked out even nicer than I anticipated!)
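
    For what it's worth, the same story can be told in a few lines of Python with made-up data - a gently curved "reality", sampled once with large and once with small scatter, fitted by a straight line and by a curve:

        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(0, 10, 50)
        truth = 1.0 + 2.0 * x + 0.05 * x**2   # slightly curved "reality" (invented)

        for label, noise in (("low-precision ", 2.0), ("high-precision", 0.05)):
            y = truth + rng.normal(0, noise, x.size)
            straight = np.polyval(np.polyfit(x, y, 1), x)   # old model: offset + slope
            curved = np.polyval(np.polyfit(x, y, 2), x)     # new model: + curvature
            rms_straight = np.sqrt(np.mean((y - straight) ** 2))
            rms_curved = np.sqrt(np.mean((y - curved) ** 2))
            print(f"{label}: straight-line misfit {rms_straight:.3f}, curved misfit {rms_curved:.3f}")

    With the large scatter, the straight line does essentially as well as the curve; with the small scatter, it visibly can't keep up. That's the low-precision/high-precision contrast from the figure, in numbers.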
  • Precision & Science
    GR is not even able to approach this problem.Verdi
    Do you mean that our mathematical methods and computing resources are insufficient to apply GR to certain classes of problems, or that the model itself is less powerful than Newtonian mechanics? If what you mean is that for a given investment of effort, Newtonian methods will more often than not yield better results than Relativistic methods, then we're saying the same thing in different ways.
  • Precision & Science
    B) If m = 2.1 and a = 3.1, F = 2.1 × 3.1 = 6.5 [ I dropped the 1 after 5]

    My precision in B is greater than my precision in A.
    TheMadFool
    Yes. It gets a bit trickier when the inputs aren't of the order of magnitude of 1, which is to say, aren't between 1 and 10:

    C) If m = 20.1 and a = 30.1, F = 605

    3 sigfigs in the inputs, so 3 sigfigs in the output. That the figures are in different places (hundreds, tens, and ones; instead of tens, ones, and tenths) doesn't matter. This is one of the reasons why people like to use scientific notation:

    C') If m = 2.01*10^1 and a = 3.01*10^1, F = 6.05*10^2

    Back to not tricky at all. :)
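
    Incidentally, scientific notation is exactly what a computer's "e" format produces, which makes the sigfig bookkeeping automatic - same 3 sigfigs each time, wherever the decimal point happens to sit (Python, with the value from C plus two made-up variations):

        # ".2e" keeps 1 digit before and 2 after the point: 3 sigfigs in every case.
        for value in (6.51, 605.01, 0.00060501):
            print(f"{value:<12} -> {value:.2e}")  # 6.51e+00, 6.05e+02, 6.05e-04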

    If so, my question is does Newton's and Einstein's theories differ in this respect? Put differently, is Newton's theory less precise than Einstein's?TheMadFool
    I don't quite know how to answer that - and as you've seen, others have responded in quite different ways - which shows that it's quite a good question. It seems to me that it depends more on how the theories are interpreted than on the theories themselves, ultimately.

    Put simply and imprecisely: Newtonian mechanics fails for Mercury because it uses Euclidean geometry; General Relativity holds for Mercury because it uses non-Euclidean geometry, aka "the curvature of space(-time)".

    The traditional interpretation of this discrepancy would be that each theory makes that assumption about the actual nature of actual space. In this interpretation, the fact that precise measurements of Mercury disagree with the Newtonian prediction tells us that its assumption was wrong, and therefore that the theory as a whole was fundamentally wrong. The imprecision is small, so the prediction is quantitatively quite good. But while convenient, that's not really the point - the way it describes the situation qualitatively is no good. So its being imprecise for once means that it was wrong all along.

    On the other hand, the fact that the measurements agree with the Relativistic prediction confirms its assumption. Which does not, of course, rule out that other measurements will say otherwise. For the present, the theory remains "unfalsified", and its assumption about the actual nature of actual space remains in the running for being actually true.

    This is probably how Newton would have thought about it, and possibly how Einstein would have thought about it at least some of the time.

    The modern interpretation differs, unsurprisingly. One way to put it might be to say that it treats both models (the new label is somewhat tied to the new interpretation) as applying to distinct and equally hypothetical worlds, in which their respective assumptions hold by definition. What the measurements taken in the real world tell us is that Einstein's hypothetical world is a better approximation of ours than Newton's. Nevertheless, in the vast majority of practical situations, the disagreement between the two approximations is negligible. The fact that Newton's approximation is discovered to be non-negligibly imprecise under certain circumstances simply tells us not to rely on it in those sorts of circumstances. And the fact that Einstein's approximation holds up doesn't mean that it ceases to be an approximation, just that we've not yet achieved the precision or encountered the circumstances under which it, too, buckles. So both models are considered, a priori, to be precise within their hypothetical worlds and imprecise in the real world. Newton's model is lower-precision than Einstein's, but also lower-effort. Pick whichever fits a given situation, and don't worry about that elusive concept called "truth".
  • Precision & Science
    By that standard, Ptolemaic astronomy isn't wrong, it's just less precise than Kepler.T Clark
    Quite. Unfortunately, it's less precise while also being more effort. So as a model, it's objectively worse, and there is no situation in which it would be preferable to use it. But I take your point. The standard is the one that modern physics applies to itself, primarily, and applying it outside of that domain can be a bit absurd.
  • Precision & Science
    The relevant point is that the output is never going to be more precise than the inputs.onomatomanic
    The general proof again needs statistical methods, no doubt. For the specific case of a multiplication like F = ma, though, just think of the inputs as the length and width of a rectangle, and the output as its area. If the length is known perfectly, and the width has an uncertainty of 10%, say, then the area will have an uncertainty of 10% as well. Vice versa, if the length has the 10% uncertainty, and the width is known perfectly, same result. So when both the length and the width have a 10% uncertainty, it should be clear that the area now has an uncertainty of more than 10%. Is that good enough? :)
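
    If it helps, here's the rectangle argument as a few lines of Python, reading "10% uncertainty" in the simple worst-case sense of the value ranging over plus or minus 10%:

        length, width = 10.0, 5.0
        nominal = length * width

        def area_spread(l_err, w_err):
            # Smallest and largest possible area, relative to the nominal one.
            lo = length * (1 - l_err) * width * (1 - w_err) / nominal - 1
            hi = length * (1 + l_err) * width * (1 + w_err) / nominal - 1
            return f"{lo:+.1%} to {hi:+.1%}"

        print(area_spread(0.10, 0.00))  # width exact:    -10.0% to +10.0%
        print(area_spread(0.00, 0.10))  # length exact:   -10.0% to +10.0%
        print(area_spread(0.10, 0.10))  # both uncertain: -19.0% to +21.0%

    The proper statistical treatment for independent errors adds the relative uncertainties in quadrature - roughly 14% here, rather than the worst-case 19% to 21% - but the qualitative point is the same: the output can't be more precise than the inputs.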

    Give me a crash course on signficant figures.TheMadFool
    Let's write the earlier result like this, for the sake of illustration:

    000 006.060 126 000 +/- 0.000 5

    The leading zeros are insignificant, in that dropping them doesn't affect the value. Ditto for the trailing zeros. And the "126" portion is also insignificant, in that it's below the "certainty threshold" we're specifying. The remaining figures are the significant ones, and counting how many of them there are is a useful shorthand for the value's precision. "6.06" has 3 sigfigs, "6.060" has 4, which is why they don't mean quite the same thing (in this context, this is a convention that need not apply in others).
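
    A tiny Python helper (my own invention, nothing standard) makes the shorthand mechanical - given a value and an explicit margin of error, it keeps just the figures above the threshold, using the convention that the quoted margin is half a unit of the last reported place:

        import math

        def to_sigfigs(value, margin):
            # Keep decimals down to the place whose half-unit equals the margin.
            decimals = max(0, -math.floor(math.log10(2 * margin)))
            return f"{value:.{decimals}f} +/- {margin}"

        print(to_sigfigs(6.060126, 0.0005))  # 6.060 +/- 0.0005
        print(to_sigfigs(6.060126, 0.005))   # 6.06 +/- 0.005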
  • Precision & Science
    Interestingly enough, Newton wasn't wrong. It was simply not precise enough for large bodies. You can take the theory of relativity and reduce it down to Newton's equation for regular sized bodies. It is evidence that certain equations are useful for particular scales, but breakdown in others.Philosophim
    A quibble.T Clark
    Depends on who you ask.

    In the context of modern physics, it's pretty much the heart of the matter. Newtonian mechanics isn't false, and Relativity isn't true. Both are simply models, and it's not even as simple as that Einstein's model is unequivocally better than Newton's.

    Models approximate reality. Newton's model doesn't approximate it as well as Einstein's, so it's worse in that sense. But it's also considerably lower-effort, which is a point in its favour. Choosing a model to apply is like choosing a tool to use: The optimal choice depends on the job at hand.
  • Precision & Science
    Say, m = 2 kg, a = 3 m/s2

    F = ma = 2 × 3 = 6 Newtons of force.

    Now, if I measure the mass more precisely e.g. 2.014 kg and I do the same thing to acceleration, a = 3.009 m/s2 what I get is

    F = 2.014 × 3.009 = 6.060126 Newtons
    TheMadFool
    I'd normally not comment on this, outside of grading homework, but since precision is what this thread is about: Your last line is slightly problematic. A better version looks like this:

    F = 2.014 kg × 3.009 m/s² = 6.060 N

    I re-added the units, but never mind that. The relevant point is that the output is never going to be more precise than the inputs. Here, both of the inputs are precise to 4 "sigfigs" ("significant figures", which is similar to but more inclusive than the "decimal places" you touched on in the OP), so the output will be precise to 4 sigfigs at most. The additional numerals "126" are arithmetic artifacts, and contain no physically meaningful information.

    The reason including them is potentially harmful, and not merely pointless, is that a number like "6.060" contains an additional piece of information in this context. Namely, it implicitly tells you the precision of the value, by how many sigfigs it gives. An explicit equivalent for "6.06" is "6.06 +/- 0.005". For "6.060", it's "6.060 +/- 0.0005". And for "6.060126", it's "6.060126 +/- 0.0000005". And that claim clearly can't hold here.

    Unsurprisingly, what I just said is itself imprecise. Properly, combining the uncertainties in the input values into an uncertainty in the output value takes statistical methodology. And when it matters, that's what the professionals do, too. And then you get results along the lines of "6.0601 (-0.0007)(+0.0008) N", where the numbers in the parentheses specify the interval within which the true value is expected to fall with a given confidence, like 50% or 90%.
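
    A sketch of that statistical methodology, done by brute force in Python: assume an uncertainty for each input - here, half a unit in its last quoted figure, which is my assumption rather than a given - wiggle the inputs accordingly, and read off a confidence interval for the output:

        import numpy as np

        rng = np.random.default_rng(0)
        samples = 100_000

        # Assumed input uncertainties: half a unit in the last quoted figure.
        m = rng.normal(2.014, 0.0005, samples)   # kg
        a = rng.normal(3.009, 0.0005, samples)   # m/s^2
        F = m * a                                # N

        lo, mid, hi = np.percentile(F, [5, 50, 95])   # 90% confidence interval
        print(f"F = {mid:.4f} ({lo - mid:+.4f})({hi - mid:+.4f}) N")

    With symmetric assumed inputs, the interval comes out symmetric; asymmetric results like the "(-0.0007)(+0.0008)" example typically need asymmetric input uncertainties or a less linear formula.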

    End of tedious aside. :)
  • Bias inherent in the Scientific Method itself?
    I don't think static vs. dynamic is a good distinction to describe the situation.T Clark
    Just to clarify, the static-versus-dynamic contrast is what I am concerned with; "describe the situation" isn't. So saying that view A is more static than view B could be like saying that children can hear higher frequencies than adults: True, but describing children as "people who can hear high frequencies" would be silly.
  • Bias inherent in the Scientific Method itself?
    To account for this, our models tend to become less and less static over time.onomatomanic
    I don't know if that's true or not.T Clark

    Well, let's take a step back, then. Would you agree that what I'll call a naive worldview - that of a child or a caveman, say, developed on the basis of unaided senses and common sense - will be more static than what I'll call a modern worldview - developed on the basis of modern equipment and insight? This appears obvious to me, as things that seem simple at the scale of the unaided senses invariably turn out to be complicated at other scales.

    Like, in the naive worldview, still air is going to be either just that, or nothing at all. In the modern worldview, what our thermal sense perceives as its temperature stems from the motion of its molecules. When a dynamic element is present but hidden, the naive and modern views must exclude and include it, respectively.

    Same for the ground. Naively, we can treat it as permanent. The caveman doesn't expect his cave's opening to close up overnight, or to lead someplace else tomorrow than it did today, and quite rightly so. (And if he did expect that, it stands to reason that he'd not have agreed to become a caveman in the first place. :P) Meanwhile, the modern worldview, with its greater scope, touches on rock formation and erosion and so forth. So again, a dynamic element is present but hidden.

    I think two examples suffice to illustrate this point, so I'll stop here. You mentioned catastrophic changes, on top of those gradual ones, and I agree that those may well be accounted for in the naive worldview. But the modern one accounts for them too, in different terms, so we may as well call that one a draw.

    Ergo, less naive equals less static. Are we on the same page this far?
  • Bias inherent in the Scientific Method itself?
    We're now very comfortable seeing evolutionary processes in language and culture and science itself.Srap Tasmaner
    Yes, I expect that statement was what triggered my meme connection, it just took a while to sink in - thanks again! The nice thing about memetics is that it has an information-theoretical aspect, which means it's not just a conceptualization but has predictive power, just like genetics. Potentially, anyway.

    Maybe this is the real story, some continual swing back and forth between the two poles.Srap Tasmaner
    Science definitely has its fashions, just like any other branch of culture. So on occasion, you're going to see a less dynamic model coming into and a more dynamic model going out of fashion. Once one model is accepted as the mainstream one, though, it doesn't seem plausible for it to be replaced in that role by a less dynamic one at a later point. After all, the reason for its success(ion) will have had a lot to do with its ability to account for subtleties that its predecessor couldn't, and I find it difficult to reconcile that with "less dynamic".

    Admittedly, as I'm using "dynamic" with a meaning that takes it close to "progressive", there may be a bit of circularity in that reasoning. :P

    Anyway, what I'm suggesting is that there's a long-term trend from static to dynamic, but with smaller-term back-and-forth fluctuations superimposed on it, and that those are what you picked up on.

    Traditional philosophy was 'top-down' in its approach - it conceived of the world as an ordered whole (which is the meaning of the term 'cosmos') and tried to discern the nature of that order through reason and observation. Modern science and philosophy tends to be bottom-up, that is, reductionistic, and also to try to restrict itself to observable cause-and-effect relationships and principles.Wayfarer
    Nice, I'd not properly considered that distinction in this context. Seems to me that it raises the analogous issue - when one asks the questions with a top-down mindset, are the answers one arrives at likely to mirror that mindset, and vice versa? Unlike with my static/dynamic contention, it seems self-evident that this must indeed be so, though - close to the point of tautology, even.

    I'm not sure what you mean by the bias in the scientific method. Do you mean a bias in the scientific approach to nature? Don't you think that this approach is biased by definition? Namely, being scientific?Verdi
    No, this is explicitly not what I mean - cf my second post in this thread. When a bias exists by definition, it's at best wanted, and at worst unwanted but apparent to the user. "My" bias is one that is quite a bit more insidious, as it involves a domain transition - a quality of the general approach (the Scientific Method) potentially "infecting" the specific models generated by that approach.

    It may be a bit akin to the quantum effect famously demonstrated in the double-slit experiment, when the act of measurement impacts the outcome of that measurement. And it may be a bit like when crime scene DNA turns out to belong to the crime scene or lab techs. (Or it may not, heh.)

    In the realm below the Moon moving objects come to rest, unless powered by an energy source. In fact, all moving objects come to rest ultimately.Verdi
    That's my point! The way everyday objects move hasn't changed - sooner or later, they tend to stop - nor has the everyday way we observe this - we look at them. But the way we think about what we see has changed. When we try to slide a thing across a plane and it doesn't go as far as we'd like it to, we no longer think "the thing stopped moving because that's just what things do", but "the thing would have kept moving but for too much friction". Science's standard answer to why the latter view is better than the former is that it has more explanatory and predictive power. And I'm in no way questioning that. But I'm wondering if there's something else there, namely, that what I refer to as the more "dynamic" view is subtly more attractive to a scientific mind, because the Scientific Method by which that mind operates is in turn more dynamic than more traditional approaches.
  • Bias inherent in the Scientific Method itself?
    Don't you think Dawkins's selfish gene and meme view on evolution is a rigid static approach, or model? The model is closely connected even to a dogma: the central dogma of biology. Even questioning this model is considered blasphemy in the church based on this dogma, inhibiting progress in science. The Lamarckian view is a priori dismissed.Verdi
    Hm. Either my understanding of Dawkins's formulation of memetics is very flawed, or yours is. Here's mine:

    The basis for the so-called "Central Dogma" was that in genetics, a first-generation gene has dual functionality. On the one hand, in developmental terms, it acts as "blueprint" for a first-generation expression. On the other hand, in reproductive terms, it acts as "source copy" to a second-generation gene's "target copy". Assuming that that's the full picture, there is then no information flow from the first-generation expression to the second-generation gene, which dismisses the Lamarckian view a priori, just as you say.

    In memetics, memes do not have dual functionality. A first-generation meme again acts as blueprint for a first-generation expression... but there is no "other hand". Reproduction happens when a host encounters the first-generation expression and turns that into a second-generation meme. The information flow clearly does involve the expression, so the dogma clearly does not hold.

    For example, let's say the first-generation meme is a melody in my head. The first-generation expression is me whistling it. The second-generation meme is you listening to and memorizing it. And if I whistle it while moving away from you, so that what you're listening to is Doppler-shifted down by an octave, then the second-generation meme won't match the first-generation meme, because of something that happened to the first-generation expression only.
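
    In code (a toy Python sketch, with all the names invented), the contrast between the two information flows looks like this:

        def next_gene(gene, expression):
            # Reproduction copies the gene itself; the expression plays no part.
            return gene

        def next_meme(meme, expression):
            # Reproduction copies whatever expression the host encounters.
            return expression

        melody = "C-E-G"
        whistled = melody.lower()            # something happens to the expression

        print(next_gene(melody, whistled))   # C-E-G : the change doesn't carry over
        print(next_meme(melody, whistled))   # c-e-g : the change does carry over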

    I've never considered the label "Lamarckian" at the level of genes or memes, as opposed to that of organisms, so I dunno that it applies entirely - but it's got to come close, surely.

    "Continues in its state" seems pretty static to me.T Clark
    Yes. But as stated at the outset, my usage of the labels is primarily relative. "Continuance" is a somewhat less static natural state than "rest".

    Then, in the late 1920s, Edwin Hubble observed cosmological red shifts and concluded that the universe is expanding after all.T Clark
    Then the theory of plate tectonics was developed. After that, the idea that the continents can move is part of our fundamental understand of the world.T Clark
    Are you suggesting that the change I'm talking about is less a binary contrast between un-scientific and scientific approaches, and more an ongoing process that takes place within science just as much? If so, the point is well taken.

    How about this for an alternate explanation, without referencing the Scientific Method directly: For the sake of simplicity, our null hypothesis tends to be that a situation is static when nothing suggests any different. But as our observational prowess increases, we increasingly notice dynamic behaviours at unfamiliar scales. To account for this, our models tend to become less and less static over time.
  • Bias inherent in the Scientific Method itself?
    Hang on, I just made another connection. Namely, that it may be fruitful to re-consider the various approaches and views and models mentioned as memes, in Dawkins's original sense of the term. Because if an evolutionary theory is thought of that way, then it may end up applying to itself. That takes the amount of "meta" to a whole 'nother level, clearly. The only question being whether one's brain is capable of operating at that level, heh.
  • Bias inherent in the Scientific Method itself?
    Can you give an example?Verdi
    Not sure I follow in turn. The pseudo-scientific ideas mentioned in the OP (like creationism) and the pre-scientific idea about rest being a more natural state than motion were meant to be just that. What, specifically, is it about them that doesn't work for you?

    Newton's 'clockwork universe' is not dynamic in the way we now expect nature to be, with galaxies and even matter itself 'evolving', if that's the right way to put that.Srap Tasmaner
    Spot-on, IMO. That's why I felt a bit uneasy about extending the contrast to Newtonian mechanics, despite its being so closely linked to "dynamics", at least the way a physicist would use the term. So maybe not an ideal choice on my part. I already mentioned one alternative I considered, "progressive", in the OP, and why I didn't stick with it.

    And it's even possible to see change over time as predictable, 'empires rise and fall', that sort of thing, which has a static vibe to it.Srap Tasmaner
    Another excellent point. When one re-interprets change (A -> B) as but one of the phases of a cycle (A -> B -> C -> ... -> Z -> A), then the dynamic quality of the former is subsumed in the static quality of the latter. That makes it more palatable, which may well have contributed to the prevalence and prominence of this thought pattern.

    But I still think you're right that there's something different about the modern view, and I still think it's probably Darwin. I just can't put my finger on it.Srap Tasmaner
    I think it's more about the cumulative effect of multiple paradigm shifts, than about any single one of them. Darwin for biology. Quantum mechanics for physics, replacing a deterministic with a probabilistic worldview. Gödel for mathematics, upsetting the comfortable assumptions about completeness and consistency taken for granted to that point. Because any single one of them can be thought of as correcting a mistake, even if the mistake was a massive one, and the correction correspondingly so. Which then allows one to think that now that the mistake is corrected, one is on firm ground. But when such major corrections keep on coming, at some point it sinks in that at best there's no way to tell how far away that firm ground is, and at worst there's no such thing at all.
  • Bias inherent in the Scientific Method itself?
    Thanks!

    I don't know what you mean when you say that science is dynamic vs. static.T Clark

    My basic contention is that scientific models have a tendency to be less static than their non-scientific counterparts, such as the pre-scientific ideas of the past and the pseudo-scientific ideas of the present that address the same questions.

    I suppose the most straightforward example of the former is the Newtonian take on motion - that, without dissipative effects like friction, a body, once in motion, will stay in motion - replacing the Classical take - that the natural state of a body is to be at rest.

    Those mechanical senses of "static" and "dynamic" weren't quite the ones I originally had in mind, though. Instead, I was thinking in terms of "unchanging" and "changing", because those are the ones that are directly mirrored in the unscientific and scientific approaches themselves: The one typically starts with a fixed idea one has faith in, for one reason or another. The other, ideally, follows whatever works best (as in, makes the most predictions that come the closest to what actually happens, and the like) wherever it leads.

    Any clearer now? :)
  • Bias inherent in the Scientific Method itself?
    True, scientific models are biased toward qualities like quantifiability and reproducibility, in the sense that models without those qualities are bad models in and of themselves, from a scientific point of view, and are therefore not readily entertained.

    The case is different for the bias in question, though, in that there's no apparent reason why a static model should automatically be worse, in that sense, than dynamic models. Indeed, another one of those qualities towards which scientific models are transparently biased is simplicity, and static models are arguably simpler than dynamic ones, all else being equal (which, of course, it never is).

    That suggests that "my" bias really ought to be considered a flaw, and not a feature like the rest. I find the argument made in the first reply in the locked thread more on point - that while it may be a flaw in the abstract, the natural inclination of humans goes the other way, and so as long as the effect of the former is weaker than that of the latter, which seems a safe assumption, there's no reason to consider it a flaw in practice. If there's anything to it in the first place, of course.

    PS: Incidentally, in my experience science forums typically do allow such discussions, either in a dedicated philosophy forum or in an off-topic area. Which is why this rule caught me out.