• I am an Ecology


    Can you give a mechanical explanation of how an ecosystem spends degrees of freedom - also degrees of freedom of what?

    An example would also be good.
  • Welcome to The Philosophy Forum - an introduction thread


    Welcome Believenothing! The forum is an excellent way to procrastinate while learning things, so long as you read up on what people say.
  • A question about time measurement


    This evidence supports my claim that not only is the extrapolation possibly wrong, it is probably wrong. I do not claim to know anything about what the error actually is.

    :o

    I think this has gone on long enough.
  • A question about time measurement


    How much less precise is the error than stated?
  • How do political scientists mathematize the political spectrum?
    How do you mathematize it? One way it's been done is this. You ask people a bunch of questions which distinguish between different political beliefs, producing a score on a Likert Scale. The Likert scores can then be given as a representation of your political beliefs, or alternatively you can minimise the distance to a set of ideologies/people who believed them to find who you're politically 'close' to. The political centre in this case is just 'not belonging to any extreme', so is a range of middling scores on the questions.
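
    A minimal sketch of the 'minimise the distance' idea, in Python. The respondent's answers and the ideology prototypes below are made up purely for illustration; a real survey would have many more items and empirically derived profiles.

    ```python
    import numpy as np

    # Five questions answered on a 1-5 Likert scale.
    respondent = np.array([2, 4, 1, 5, 3])

    # Hypothetical 'prototype' answer profiles for a few ideologies.
    prototypes = {
        "ideology_A": np.array([1, 5, 1, 5, 2]),
        "ideology_B": np.array([5, 1, 4, 2, 4]),
        "ideology_C": np.array([3, 3, 3, 3, 3]),  # the 'middling' centre
    }

    # Euclidean distance to each prototype; the smallest tells you who you're politically 'close' to.
    distances = {name: float(np.linalg.norm(respondent - p)) for name, p in prototypes.items()}
    print(distances, "->", min(distances, key=distances.get))
    ```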
  • I am an Ecology
    Lots to think about here.



    In ecological or evolutionary terms, one can think of this in terms of robustness: robust ecosystems, those that can best handle 'perturbations', are also those that can best accommodate diversity and change; in evolution, phenotypic robustness actually allows for a maximum of genotypic change, change that cannot be 'seen' by natural selection because it takes place below the level at which selection can exert pressure on it. I've not studied the ecological analogs of this (perhaps @fdrake will have more to say), but I can only imagine the same applies.

    Biodiversity itself can have a regulatory effect. I think the most extreme example of this is a monocultural crop. If a field consists of a single crop everywhere in it, perturbation through disease can quickly wipe out the whole crop. Diversifying land use in the field can increase both single crop yields and the stability of the crop to disease and other externalities like climate change. There's a nexus of articles on Wiki about similar topics, surrounding polyculture and agro-ecology. This paper is about biodiversity and stability but asks the questions in terms of scope changes (local,regional,global biodiversities) and spatial biodiversity (link totally not biased since it's my old boss' paper). In the latter paper, you can see the effect of fortuitous/unfortuitous ways of thinking about space and locality methodologically (which I mentioned in terms of zonation).

    AFAIK the mechanisms that link biodiversity to stability are still being researched, so it's far from 'settled science'.

    I should add that thinking about methodological constraints in the same manner as ecological realities as I did in the boundary post is very heterodox and probably needs to be taken with a grain of salt.

    Next post:

    The question of parametrisation is fascinating to me - like, what is the exact status of a 'parameter'? Is it simply 'epistemic', 'merely' a way to gain a handle on things? But it can't be merely that, because it has to in some way 'track' a real change occurring in the 'thing/process' itself. So what exactly is happening when you see an 'optimization' of a parameter along a certain dimension in a time series?

    Do you mean the time series obtaining a local maximum through 'optimisation' or do you mean an ecological model obtaining a local maximum through optimisation? The relationship of the latter to an ecological model is more a matter of model fitting and parameter estimation than how a parametrised mathematical model of an ecology relates to what it models. The parameters are 'best in some sense' with respect to the data.
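
    A minimal sketch of what 'best in some sense with respect to the data' means in practice: least-squares parameter estimation for a simple growth model against a synthetic time series. The model, the 'true' parameter values and the noise level are all invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, n0):
        """Logistic growth: carrying capacity K, growth rate r, initial size n0."""
        return K / (1 + ((K - n0) / n0) * np.exp(-r * t))

    t = np.linspace(0, 10, 50)
    rng = np.random.default_rng(0)
    observed = logistic(t, K=100, r=1.2, n0=5) + rng.normal(0, 3, t.size)  # noisy 'data'

    # curve_fit returns the parameter values minimising squared error against the data -
    # 'best' here just means best in the least-squares sense.
    (K_hat, r_hat, n0_hat), _ = curve_fit(logistic, t, observed, p0=[80.0, 1.0, 1.0])
    print(K_hat, r_hat, n0_hat)
    ```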

    Also @csalisbury:

    My intuition - probably along the lines of Csal's distinction between the 'in-itself' and the 'for-itself' - is that most parameters are 'emergent';

    But then something happens when a variable in the system can relate to that cycle by, to paraphrase Csal, 'reflexively taking its own parameters as a variable that can be acted upon': so humans will cultivate food so that we don't have to deal with - or at least minimize the impact of - cycles of food scarcity and die out like wolves with too few deer to prey on. This is the shift from the 'in-itself' to the 'for-itself', where the implicit becomes explicit and is acted upon as such. And this almost invariably alters the behavior of the system, which is why, I think, the two descriptions of the 'X’wunda trade system' (quoted by Csal) are not equivalent: something will qualitatively change if the system itself 'approaches itself' in Friedman's way.


    I personally wouldn't like to think about the 'modelling relation' between science and nature in terms of the 'for-itself' acting representationally on the 'in-itself'. Just 'cos I think it's awkward. Will give an example: if you plant a monoculture and it gets destroyed by disease, when the 'in-itself' of the crop gets destroyed, we can say it's because of the 'for-itself' of the vulnerability of the crop to disease in our way of thinking about it. The crop's vulnerability to disease acts as a pattern in nature and a pattern in thought, and there's some kind of functional equivalence of terms. Even if nature sees only the individual plants and their inter-relations, this 'crop through iterated conjunction' still works like the 'crop' which satisfied the properties of monocultures. But this aversion of mine might be because I don't understand Kant very well. Could either of you map the distinction for me insofar as it relates to ecological models?

    Of course you can ask how a certain process 'knows' if the level is too high or too low, but it's all just mechanism: because these systems are 'looped', the end product itself influences the rate at which that product is produced. Thus - at another analytic level - the usual alternating-periodic 'sine wave' pattern of certain predator-prey cycles, which I'm sure you're well, well familiar with:

    I think what allows the aggregation of prey/predators in the model to work like something in nature is exchangeability. Let's take wolves and rabbits: the specifics of the wolves don't matter too much, since availability of food and the amount of food required operate on each wolf individually in the same way as they operate on the group (scaled up). Rabbits are the same; the specifics don't matter too much beyond their need to get food, how much food there is and how many predators there are. A way of putting this might be 'the individual is an aggregate of size 1' in these circumstances.
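
    A minimal sketch of that 'sine wave' pattern: the classic Lotka-Volterra equations, where 'rabbits' and 'wolves' enter only as aggregates. The parameter values below are arbitrary illustrations rather than estimates from any data.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5  # prey birth, predation, conversion, predator death rates

    def lotka_volterra(t, y):
        rabbits, wolves = y
        drabbits = alpha * rabbits - beta * rabbits * wolves
        dwolves = delta * rabbits * wolves - gamma * wolves
        return [drabbits, dwolves]

    sol = solve_ivp(lotka_volterra, (0, 40), [10.0, 5.0], t_eval=np.linspace(0, 40, 400))
    # sol.y[0] (rabbits) and sol.y[1] (wolves) oscillate out of phase - the
    # alternating-periodic pattern referred to above.
    print(sol.y[:, :5])
    ```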

    Methodologically, I suppose, the ecological question is always: does the system see itself in the way I'm describing? And if not, how careful must I be with respect to the conclusions I'm trying to draw with my data? And of course one can relate all of this to Heidegger's 'ontological distinction' and the so-called horizon of intelligibility where beings appear as beings, and animals which are 'without world' etc etc. I think a really interesting project would be to try and think these two things together, but I'm not ready to pursue that here! And yeah, all of this should indeed be linked to your other question: "How does nature learn what to care about?"

    I think ecology has some complications that aren't present in simpler relationships between model and world. I'm not sure I could make a list of them all, but there's always a difficulty in measuring properties of ecosystems precisely in a manner useful for modelling. It isn't the same for chemistry.

    An example: the Haber process. It works so long as there's air, hydrogen, a catalyst, and a cooling procedure. The terms in the description of the process aren't abstractions, they're the real thing. The algorithm works on real inputs (air, hydrogen) and produces real outputs (ammonia). Why it works might be conceptually laden, but procedurally the description it embodies is equivalent to the described, if that makes sense. I don't think the same is true of ecological parameters.
  • A question about time measurement


    Would you agree that while the clock is going, its error rate will be as stated?
  • A question about time measurement


    What reduces the accuracy of the measurement from its purported value?
  • I am an Ecology
    I guess it's better if I try and detail the kinds of boundaries that ecosystems have.

    Natural boundaries:

    Spatial - subtended land area.
    Temporal - duration since inception, events can insert different regimes of biomass accumulation (think of an opportunistic shrub's series after a forest fire) and otherwise disrupt ecological flows.
    Functional - what the ecosystem does, what flows constitute and regulate it, what perturbations disrupt and change it.

    Can view ecotones as 'sharp' spatial/functional/temporal boundaries and ecoclines as 'fuzzy' spatial/functional/temporal.

    Generalisations/composites

    Communal/community based - the composition of organisms in an ecosystem is often a flow regulator and flow-type distinguisher (eg: biomass going from predator to prey species being a distinct flow category from soil gradients and plant community density/composition gradients despite the possibility of their coupling like Yellowstone), can have an abstract boundary in terms of not functioning the same once perturbed far enough away from its current state. Communities are emergent properties of organism/physical arrangements that are spatio-temporally subtended and functionally active and constrained. The action of a community in an ecosystem can be coupled to the subtended areas and dissolve ecosystems entirely (changing their dynamics irrevocably, think non-endemic crop-parasite behaviour), or promote the growth and stability of the ecosystem in general (wolves of Yellowstone a good example here too).

    Zonation - variation of a community or assemblage's properties or its organismal composition along a spatial/temporal gradient. Can be the relationship of the spatial distribution of an organism to an ecological gradient over space.

    Non-natural/methodological

    Operational Zonation - picking out relevant areas for study of a particular theme, can coincide with natural boundaries but need not.

    Curtailing - picking out relevant flows and processes in an ecosystem to study it.

    ____________________________________________________________________________________

    All of these have the idea of parametrisation in common. A quantity varies, a change occurs. Certain ranges of changes are compatible with current ecosystem behaviour (perturbative stability of a state within an amount of perturbation), certain ones aren't ([localised] extinction, uninhabitability, niche destruction). Operational specification of ecological parameters can be fortuitous or occlusive in the process of revealing ecological dynamics; for an example see the discussion on edge effects and whether the increases in biodiversity towards ecosystem edges are illusory, unique to ecosystem operation at the boundary, or a result of habitat patch overlap.

    Nature seems to care about the parameters since we can study ecosystems using them and learn things, but I don't think nature 'sees', say, the distinction between altitude's effect on the spatial distribution of soil bacteria (propensity-to-change) and the functional form we specify. Nor the specific way we measure ecological parameters.

    Another question entirely is the generative process that gives rise to the appropriate parameter spaces for studying ecological dynamics. How does nature learn what to care about?
  • A question about time measurement


    The word 'refuting' doesn't appear in any of my recent posts in this thread. My last response to @Metaphysician Undercover is an attempt to detail how his position undermines our ability to know pretty much anything.

    Also, stop with the chicken-caesar word salad.
  • Time and such




    The video series I linked titled 'Gamma' from Sixty Symbols on YouTube has a worked example of how to deal with the relativity of simultaneity.
  • A question about time measurement


    Observing a thermometer is observing a measurement. Observing a watch is observing a measurement. Observing a radiocarbon dating procedure is observing a measurement. Observing the number of rings on a tree and dividing it by a rate is observing a measurement. Looking back through geological time based on the stratification of soil and rock deposits is a measurement. Every psychology experiment which elicits variables from subjects is a collection of measurements. Every sequencing of genes and study of their change or population genetic calculation based on real data is a measurement.

    The world is so much more realistic when you restrict the knowledge of it to anecdotal evidence, which you can't form anyway since anecdotal evidence consists in records of experience or generalisations thereof that are not confined to the same time period as their generation.

    You be trolling.
  • A question about time measurement


    A baby is born at 10pm in New York. Someone looks at their watch. Since the measurement process took a second, we can't justifiably say the baby's been born at 10pm. When you look away from a thermometer after checking the temperature, you can't justifiably say what temperature it is. You can't justifiably say the dinosaurs were around millions of years ago. You can't date trees based off their rings. All of geological history may as well be a myth, all of evolutionary theory has to be thrown away, every single measurement or calculation ever that was done must be discarded because it can't be justified since it's an extrapolation. Measurement error analysis is impossible, every psychological experiment ever done is bunk, every piece of anecdotal evidence is in even worse standing. The fabric of our social life disappears - we can no longer learn and generalise based on our experiences.

    You don't live in this world. No one does.
  • I am an Ecology


    True, true. I guess it's more that living things have a 'dedicated' 'in-built' hereditary system (even though it's not the only hereditary system that living things have - i.e. the epigenetic, behavioural and symbolic systems charted by Jablonka and Lamb), whereas ecologies are more modular and not fixed by any particular system like that of DNA.

    There's also the question of (eco)system boundaries. Hereditary mechanisms are embodied at the organismal level but operate above and below it; prosaically, nature has its own notions of scope. Ecosystems are no different, and their boundaries can even be distinct ecological units, arising from both the continuous variation of landscape properties (such as soil moisture content) and discrete variation in terms of presence/absence of communities. At the level of population genetics, you can obtain continuous variation as a result of discretising gene-flow regulators like mountain ranges and archipelagos.

    But, studying genetic and phenotypic variation along one side of a mountain range doesn't necessarily make use of the mountain range as a gene-flow regulator since the methodological assumption of studying one side of it pre-individuates the gene flows on either side and picks one. Unless it generates a hybrid zone, in which case what was once a continuous variation from base-species to its genetic modifications is reflexively re-introduced to the process at a later time in its development (interbreeding of 'transitional' species). Prosaically, nature unfolds in terms of the continuous, the discrete and their inter-relation. And what is a boundary for one analysis is an irrelevance for another.

    As an interesting side note, this emphasis on perturbations and transient dynamics in ecosystem theory is finding an expression in differential geometry and topology, and the view of ecosystems as dynamical systems with flows seems to be serving as a basis for ecology's mathematisation at a theoretical level (like what happened with population genetics and statistics).

    The distinct features of flows in population genetic terms and flows in ecological terms could serve as a poetic inspiration for treating an organism as an ecosystem, but nothing is really gained from this taxonomy that wasn't already pregnant in the idea of the organism as composite system embedded in composite systems.

    Edit: though maybe it's a useful pedagogical tool to get people thinking about humans in less individualistic terms!
  • A question about time measurement


    Say that the radiocarbon dating of a dinosaur fossil took a month, is it then illegitimate to claim that it's more than a month old?
  • The experience of awareness


    It isn't what philosophy has always sought to do, though. The empirical character of phenomenology let it internalise that kind of dialogue between liminal phenomena and pre-developed theory.
  • Sociological Critique


    To make it easier, if you give me a sequence of 10 moves, I'll tell you whether it's losing or not. E.g.: denote by W a day that you work and N a day that you don't; you could give me a string like WWWNWWWNWW, and I'd tell you whether it's losing or not.
  • Sociological Critique


    I don't think it's feasible to keep believing in the universality of economic rationality when there are plenty of scenarios which don't contain it. We can play a game that doesn't contain it if you like.

    You are now called Toby; Toby has chronic fatigue syndrome. The rules of the game are as follows:
    (1) A move is whether you decide to go to work on a given day.
    (2) If you become too tired, you will have to spend some time out of work to recover. Becoming too tired is a function of the hours worked within a time period.
    (3) If you don't work enough, you will be fired.
    (4) You lose when you are fired or when you become very ill from working too much.

    This is an incomplete-information game in several senses: you don't know the rules fully (only enough to make moves), you don't know the probability distribution of outcomes, and you don't know the utility function or expected loss of your moves.

    Make a move, and I'll tell you whether its expected value with the hidden utility is positive, negative or 0.
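
    To make the structure concrete, here's a minimal sketch of this kind of game in Python. The thresholds stand in for the hidden rules and are invented for illustration; the player only ever submits move strings.

    ```python
    # Hidden rules: the player never sees these constants, only the verdicts.
    FATIGUE_LIMIT = 4      # consecutive work days before becoming too ill
    MIN_WORK_PER_WEEK = 3  # fewer work days than this in any 7-day window gets you fired

    def evaluate(moves: str) -> str:
        """Return 'lose (ill)', 'lose (fired)', or 'still in the game' for a string of W/N moves."""
        streak = 0
        for day, move in enumerate(moves):
            streak = streak + 1 if move == "W" else 0
            if streak > FATIGUE_LIMIT:
                return f"lose (ill) on day {day + 1}"
            window = moves[max(0, day - 6): day + 1]
            if len(window) == 7 and window.count("W") < MIN_WORK_PER_WEEK:
                return f"lose (fired) on day {day + 1}"
        return "still in the game"

    print(evaluate("WWWNWWWNWW"))
    ```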
  • Sociological Critique


    I was under the impression that game theory is based on mathematics, so, eventually (given a sufficiently complex calculus) all games could be modeled to understand what actions would produce the maximum amount of utility to all participants. Since you seem to know more about this than I do, then I figure you must be right in highlighting the complexity of various games and imposed constraints on participants. But, again it seems that the underlying premise to render such a conclusion as sound would be that every participant is acting in their own or collective self-interest, no?

    Acting in someone's self interest isn't actually a clear thing game theoretically unless strategies can be discussed. To 'act in your self interest' is to make a move or sequence of moves which increases your utility. I.e., you need to be able to evaluate the utility quantitatively to speak clearly in this sense. This is why it works better for simple games than for complex ones.

    There's a big difference between 'collective self interest' and 'self interest', if you read the wiki-page on cooperative game theory, you can see that self interest is simply the interest of what counts as a player. There's not necessarily a sense of subjectivity implicit in the game, even. You can consider estimating a line of best fit a game where nature tries to give you the worst possible data for the estimate and you need to make the best possible guess (given a loss or utility function).

    Yes, I understand that modeling a situation often requires more than the 2D analysis we're talking about, or rather 3D analysis bounded by time; but, as I understand it, there are no hard limits imposed by any situation that wouldn't allow a sufficiently complex calculus to be devised to account for all externalities arising from interactions in the market.

    Games typically have finite numbers of moves; prices of stuff in the market are determined by buying and selling games; time is continuous in financial time series models; therefore there are infinitely many moves made. If you require a calculus that can be written down and computed by hand, that doesn't always exist for Bayesian games. It could be done with numerical approximation, though. Hard limits on the situation correspond to constraints imposed by the game theoretic model which do not actually obtain in a relevant manner - such as the symmetry of gains and losses of actions which is typically assumed, and the role that the negation of that assumption plays in prospect theory.

    What makes you say that? Again, is there a hard limit imposed by a theorem or such that would prohibit said modeling to occur?

    Well, self interest can mean the interest of everyone in the game, you, the self interest of nature... And what self interest means game theoretically only makes sense in terms of calculable payoffs and costs. Finding a 'utility function' for life in general is doubtlessly impossible.
  • I am an Ecology


    The reproductive behaviour of organisms can also be considered as part of an ecosystem though. This is why colony collapse disorder for bees is terrifying, no mo' bees is no mo' trees.

    The image of ecological succession in terms of discrete developmental stages of the distribution of plant matter over an area is outdated. The most dated bit of it is the idea of ecological climax, which contains within it a sense of ecological equilibrium (self-regulating/homeostatic interdependence); there's no evidence for this. The preferred view atm is one of dynamism and flux, focussing on the possible disturbances and potentials for the ecosystem rather than on an arbitrary categorisation of stages of plant development. What can be said of the wolves in Yellowstone park? Should they be called part of the series?

    As a historical note, the idea of succession actually predates the idea of ecosystem. Ecosystem as a concept was proposed to solve some of the conceptual problems associated with plant succession:

    It is now generally admitted by plant ecologists, not only that vegetation
    is constantly undergoing various kinds of change, but that the increasing
    habit of concentrating attention on these changes instead of studying plant
    communities as if they were static entities is leading to a far deeper insight
    into the nature of vegetation and the parts it plays in the world. A great part
    of vegetational change is generally known as succession, which has become
    a recognised technical term in ecology, though there still seems to be some
    difference of opinion as to the proper limits of its connotation; and it is the
    study of succession in the widest sense which has contributed and is contributing
    more than any other single line of investigation to the deeper knowledge
    alluded to.
    — Tansley

    You can read the amazing article from Tansley, where the word 'ecosystem' comes from, here.
  • Sociological Critique


    If they are preallocated is a big 'if'. Rarely are things so clear or obvious in the real world, which you bring up later in your post. Which, leads me to believe that acting selfishly will almost always be what is best for the individual and group of individuals (Is there a theorem for that? I think the Nash Equilibrium only holds given that premise, otherwise the game falls apart, I think.).

    When you say 'is there a theorem for that', it has to be specific to a game or class of games. There's no 'theorem for that' for games which display most of the features of political/economic discourse or activity. There isn't even a guarantee of Nash equilibrium in this kind of context.

    This isn't even starting to mention the asymmetric information problem. But, what I gathered from my short stint at one course of game theory at college, is that even when asymmetric information problems are avoided by having a game of guaranteed rewards or more formalized conditions (the market)

    Trying to model 'the market' in terms of game theory is not usually done in a manner that represents the complexities of the market. If someone agrees that the Black-Scholes equation is useful - or more generally continuous time modelling of financial time series - this is no longer representable as a game with a finite number of actions without losing information. Fluctuations of the market in continuous time are generated per unit time through the activities of humans.
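
    For reference, here's the Black-Scholes call price as a minimal Python sketch - an example of the kind of continuous-time model meant here, whose price fluctuations occur at every instant rather than as a finite list of moves. The input values below are arbitrary.

    ```python
    from math import log, sqrt, exp
    from scipy.stats import norm

    def bs_call(S, K, T, r, sigma):
        """European call price: spot S, strike K, maturity T (years), rate r, volatility sigma."""
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

    print(bs_call(S=100, K=105, T=1.0, r=0.02, sigma=0.25))
    ```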

    is that utility is maximized even more by self-interested behavior. This is again because, at the very fundamental level, self-interested behavior is rational and ought to be done. So, the system is constantly self-reinforcing.

    In a tautologous sense, you can define self-interest as maximising your utility function. But to say that this necessarily contains all the features of 'rational self interest' in something close to the Randian or non-empirical economic models sense for all games just isn't true. Secret Hitler makes people behave collegially since they share winning conditions for the group and they cannot defect. The interest of the individual is equivalent to the interest of their group here.

    The subject is constrained by the rules of the game they are depicted in. This entails that the sense of game-theoretic rationality for Secret Hitler has the exact same kind of justification to be the 'primordial sense of self interest' as any (most, really) other game theoretic conception of human activity, including the rational-self-interest Bayesian-dutch-book super capitalist investor-God. It's only the pop-cultural amalgamation of the old Cold War game theory + the neoliberal economic subject that makes us believe the subject in their use of games is more primordial or even more representative of human subjectivity than someone playing Secret Hitler.
  • The experience of awareness
    I first started reading phenomenological literature when I was 18. I was impressed with the apparent ability to vary something, say a lightbulb, in terms of shape, size, consistency, constitutive elements in order to derive its necessary properties in our sensory manifold, pace Husserl. A friend at university introduced me to Heidegger, and suggested I read Being and Time. The thing that made me want to read it was the friend's observation, which was (paraphrased and shortened):

    The mode of engagement with an object characterised by intellectual variation of its sensible properties does not derive necessary sensible properties of the appropriate kind as the necessity is of a justificatory rather than perceptual character. It is imposed rather than implicated in the perceptual object.

    And that completely blew me away. If Heidegger's method of thought could in some manner get around the problem - providing the right kind of entailment or suggestiveness in the description of phenomena to their fundamental constituents - it was something worth studying. So I spent a year or so reading through Being and Time and secondary/tertiary literature, writing rough notes on the sections in the first part (no temporality). It helped a bit with the problem, as instead of focussing on particular objects or areas of study, it took (what was allegedly) the entire environment of a person and implicated a kind of hierarchy of concepts (say, tools->signs->language->propositional-as-structures, hermeneutic-as-structures with anxiety->my self as mine->facticity->horizonal temporality->originary temporality for a rough discretisation of the book) which were holistically implicated in each other. In a certain sense, the broadness of scope allowed a big chunk of what mattered to stay near the concerns of the analysis (the obsession with 'thematisation' for those who know Heidegger).

    So after studying Heidegger for a while I turned to Merleau-Ponty and Levinas, who apparently noticed that intersubjectivity and the body respectively appear in incredibly impoverished forms in Heidegger's analysis. It rang quite true, the Other in Heidegger is mostly a normative-linguistic structure that distracts us from our own lives, and the body is little more than the vessel for Dasein.

    Levinas remained a pure phenomenologist, but implicated in his phenomenology is a kind of limiting process and the revealing of my limits that allows a place to be other (and be other than me). Merleau-Ponty's (early) methodology is far more radical however, as it studies perception as the body varies, using case studies from brain damage and amputees.

    Hubert Dreyfus places the breakdown of everyday phenomena as a disclosive frontier for phenomenology; like, you're playing your guitar, a string breaks, for a second or two you're treating your guitar like an object that doesn't make much sense (oh shit, it broke), much different from the flow state of playing it. Merleau-Ponty does a similar thing with perception - what happens when the body varies, how can we speak of the sensation of the phantom-limb and the kick-in-the-nads in the same breath?

    What Merleau-Ponty and Levinas showed, methodologically, was that phenomenology cannot just proceed from the every-day to thematise any (perceptually derived) concept and its ontological ground, we have to take a methodological breakaway to unusual circumstances of our being to study its structures comprehensively; and treat that heterogeneity with both intellectual and practical respect. There can be no privilege of 'internal' ontological inquiry over the 'external' ontical conditions that constrain it.
  • Time and such


    This and this are almost entirely maths free descriptions of special relativity's use of the speed of light as 'cosmic speed limit' and how it has a consequence of time dilation. This is a series of videos that culminate in a calculation of 'what speed do the photons in a torchlight on a moving cart travel at?' intended for entirely lay audiences.
  • Sociological Critique


    One thing that game theory does in an analysis is ascribe an abstract opponent. This can be 'nature' or another player. This can be generalised to cooperative games, where groups of players can form coalitions and solutions (strategies) of the game are ways of allocating resources (payoffs) or costs (losses) to groups. Further, the analysis can include leaving allied groups and making new allied groups. The assumption that self interest generates optimal payoff in general really only applies to games of coalition of size 1; self interest becomes interest of one's coalition if they are pre-allocated (like in Secret Hitler). If coalitions are not pre-allocated, self interest can take the usual 'optimal payoff for me' form (which can include overheads of coalition joining to avoid greater losses), but also the group form if leaving the coalition has opportunity costs close to losing the game (or sufficiently bad for the player).
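
    A minimal sketch of the allocation idea in cooperative games: a characteristic function assigns a value to every coalition, and the Shapley value is one standard way of dividing the grand coalition's payoff among the players. All numbers below are made up for illustration.

    ```python
    from itertools import combinations
    from math import factorial

    players = ("A", "B", "C")
    # v(S): the payoff a coalition S can guarantee itself on its own.
    v = {frozenset(): 0, frozenset("A"): 1, frozenset("B"): 1, frozenset("C"): 2,
         frozenset("AB"): 4, frozenset("AC"): 5, frozenset("BC"): 5, frozenset("ABC"): 9}

    def shapley(player):
        """Average marginal contribution of `player` over all orders of coalition formation."""
        n = len(players)
        others = [p for p in players if p != player]
        total = 0.0
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                S = frozenset(subset)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v[S | {player}] - v[S])
        return total

    print({p: shapley(p) for p in players})  # allocations sum to v(ABC) = 9
    ```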

    Things are crazy complicated when an individual player wins, coalitions aren't disjoint (disjoint meaning a person can only be a member of one player group) and aren't pre-allocated - like in the Game of Thrones board game or Risk. Things are even more complicated when there are group winning conditions, the conditions aren't monotonic (monotonic meaning that containing a winning player implies that group wins, as in Secret Hitler), and there are overlapping coalitions. Also, each player (or coalition) can possibly make multiple moves at once, has incomplete information on allies and enemies, and can choose from infinitely many possible moves (think about adjusting a tax rate)... The latter of which begins to resemble real life more than Fuck You Buddy.

    It's worthwhile to remember that the development of game theory was principally done at the RAND corporation (name is significant) during the Cold War, which developed 'the delicate balance of terror' and mutually assured destruction as a Nash equilibrium to prevent nuclear holocaust. The historical context for its first developments is the paranoid spying and technocratic policy planning of the Cold War, so it isn't surprising that the subject which plays games in the old game theory is self interested, isolated and amoral.

    The idea of applying game theory to large social structures seamlessly is pretty bad - the kind of games that begin to approach the complexity of real-world diplomacy are analytically intractable, mathematician speak for 'this can't be solved exactly, only approximately', and so resist pithy formulation to evince claims in essays. A rule of thumb when someone justifies something using game theory is to look at the assumptions for the game and see how distorted a vision of politico-economic life it requires. That said, the numerical analysis of games to inform policy decisions lends itself very well to technocratic powers attempting to keep things as they are, like the aforementioned US think tank, but it's also present in the UK under the guise of neoliberal public choice theory.
  • A question about time measurement


    (1) The error analysis is correct.
    (2) The derived error rate is approximately 3*10^-16 seconds per second.

    Do these require metaphysical necessity and unchanging physical laws?
  • A question about time measurement


    It's known that the error rate for that clock is about 3*10^-16 seconds per second. This implies the error rate is 3*k * 10^-16 seconds per k*second.

    I never made any claim of the necessity of any physical law, in fact if you read through my posts you'll see that I said I was sympathetic to the view that they can change. However, that they can change doesn't entail they will change in a way that destroys the accuracy of the clock. So, tell me when they will change, and how they will change so that the accuracy of the clock is destroyed.

    I also said the following to you and @tom

    Edit 2: making this explicit, if the clock stopped working entirely, of course it wouldn't provide a precise measurement of the second. If it stopped working in a more subtle way, say a variation in the laws of physics relevant to the functioning of the clock, then it may stop working entirely or degrade in performance. Otherwise, so long as it functions in accordance with the set up in the paper, it will have that error rate.

    The clock doesn't work with metaphysical necessity. That it works isn't conditional on the necessity of physical laws. The calculation of the error rate depends solely on the physical process that constitutes the clock and the measurements it generates. So, if the physical process were to stop - if someone took a sledgehammer to the experimental apparatus - the clock would stop. If all protons had already decayed, there couldn't be a clock. What is required to invalidate the error analysis of the clock is to show that the physical process in it will change in a manner that affects the clock, or alternatively find an error in the paper's error calculation.

    The error rate in terms of 'how many years would it take for a single second of error to accrue' is equivalent to the original 3*10^-16 ish seconds per second error rate. It is not an extrapolation. Let's look at the google definition:

    extrapolation: the action of estimating or concluding something by assuming that existing trends will continue or a current method will remain applicable.

    So yes, the error rate of the clock remaining the same with changing background conditions requires that the physical process that constitutes it doesn't change in a way which renders the analysis inapplicable. It isn't an extrapolation to say if nature keeps working as it does then the clock will.

    Nor is it an extrapolation to translate the error rate to a different numerical scale. Saying that the clock will be there in 100 million years? That might be an extrapolation.

    You want to make it an extrapolation, so tell me how and when the physical process constituting the clock will change, in a manner that makes the error analysis inadequate.
  • A question about time measurement


    Noo.... An extrapolation is an extension of an analysis outside the data range for which it was estimated. Say the error rate is K seconds per second, then you can scale K by a constant to obtain an error rate in terms of years, trillions of years, a googol of years. This is estimating a parameter then expressing the value of that parameter on a different numerical scale.

    You may as well say that it's an extrapolation to go from 1 femtogram to 2.20462e-18 pounds!
  • A question about time measurement


    It isn't an extrapolation, it's a rounding of the error rate translated to a timescale that denotes the sheer precision of the measurement to a lay audience. See tom's post.


    It seems to me that what you are implying is that expressing the extraordinary accuracy of the atomic clock in terms of time scales that a non-technical audience might better understand is not the same as claiming the clock will still exist in 100,000,000 years?

    Precisely. There's one necessary and sufficient condition for the clock not to work in accordance with that error rate. That's for the process in the clock that measures the oscillations to change. Not the possibility of its change or the necessity of its change - that it will change.

    Edit: or alternatively that the error analysis in the paper isn't accurate!

    Edit 2: making this explicit, if the clock stopped working entirely, of course it wouldn't provide a precise measurement of the second. If it stopped working in a more subtle way, say a variation in the laws of physics relevant to the functioning of the clock, then it may stop working entirely or degrade in performance. Otherwise, so long as it functions in accordance with the set up in the paper, it will have that error rate.
  • A question about time measurement


    Would you be happy with 6*10^-16 seconds per 2 seconds? How about 9*10^-16 per 3? You can scale the error like that all you like, it still represents the same error rate. If you ran the clock for 3*10^15 years, of course you're going to get an error on it: what matters is that it's the same error as predicted by the analysis of the process.

    Also, since you know the error rate, you know when it's going to have accumulated a second of error, so it can be re-calibrated. (subtract a second from the display, or a year from the display...)
  • A question about time measurement


    Ok. If the measurement error analysis in the paper isn't wrong, that means the 1 second in 100 million years isn't wrong. Since that corresponds to an error rate of about 3 * 10 ^ -16, which was derived within the month. The unit of the error rate is in seconds per second... Take the reciprocal, voila!
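
    The arithmetic, as a short sketch (365.25-day years assumed):

    ```python
    error_rate = 3e-16                              # seconds of error per second of operation
    seconds_run_per_second_of_error = 1 / error_rate
    years_per_second_of_error = seconds_run_per_second_of_error / (365.25 * 24 * 3600)
    print(years_per_second_of_error)                # roughly 1e8, i.e. about 100 million years
    ```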
  • A question about time measurement


    Help me out a bit.

    (1) Beliefs about nature and methods for deriving them are fallible.
    (2) The laws of nature are non-necessary.
    (3) The laws of nature can change.
    (4) The laws of nature will change.
    (5) The laws of nature only describe properties of thought.
    (6) Measurement error is a property of human engagement with a phenomenon.
    (7) Historically, ways of measuring time have been less accurate than described.
    (8) Scientific beliefs can change.
    (9) An observed pattern of nature cannot be assumed to arise or perpetuate in the controlled conditions which generate it.
    (10) Scientific knowledge is incomplete.

    Are all elements of your post.

    Can you tell me how their combination entails:

    (11) The measurement error analysis of the caesium-133 clock and the optical lattice clock are wrong.

    ?

    Edit: if I've missed a vital element, please add it to the list!
  • A question about time measurement


    OK, then how do you make 1) consistent with dark energy and dark matter? These are enormous features of the universe which cosmologists admit that they do not understand. How can you say that the current understanding is correct, when the consequence of that understanding is the need to posit all of this mysterious substance?

    Basically correct. If you want to talk about dark energy, you have to be able to accept solutions to Einstein's field equations as correct and the web of theory and experiment around them. Dark energy only makes sense as a concept on the background of the acceleration of the expansion of the universe; and is contained in a few explanations of it.

    If your claim is that the universe is the way that it is, regardless of how we understand it, then how is this relevant? What we are discussing is our capacity to measure the universe, specifically to measure time. So the fact that time is how time is, is irrelevant to our discussion of our efforts to measure time.

    Your argument so far has been based on an equivocation between the beliefs of scientists and the practice which generates them (usually called science) and the phenomena they study (usually called nature). If a pattern is observed in nature, and it becomes sufficiently theorised and experimentally corroborated, it will be a scientific law. Note that nature behaved that way first; the scientists adjusted their beliefs and inquiries to track the phenomenon.

    You want to have it so that the changes in the beliefs of scientists over the ages imply that nature itself has changed over that time. This is a simple category error. You keep attempting to justify the idea that assigning a small measurement error to an optical lattice clock is unjustified because the laws of nature possibly will change. Besides being an invalid argument - the laws of nature would have to change, not just possibly change, in order to invalidate the current error analysis of the clock - you're using the above equivocation to justify it.

    You thus have to show that the laws of nature (read - how nature behaves) will change in a way that invalidates the error analysis of the clock within 100 million years.

    It's very suspicious to me: this is something you could have understood by reading the papers thoroughly and researching what you didn't know to a sufficient standard to interpret the results, but instead you're attempting to invalidate a particular error analysis of a clock either by the cosmological claim that the way nature operates will change in some time period, or by undermining the understanding that scientists have of reality in general. Engage the papers on their own terms, show that the laws of the universe will change (not will possibly change), or stop seeing nails because you have a hammer!
  • A question about time measurement


    Before Newton thought of ma=mg implies a=g, objects with little difference in air resistance fell in the same way. Before Schrodinger's equation, atoms were already probability clouds. Before the understanding of planetary accretion, the Earth formed. Reality behaves in a manner accordant with discovered physical laws because those laws describe what happens, and if they have errors - their description contradicts observation -, they are expanded, discarded or re-interpreted. The representation of a pattern in nature in scientific terms has a certain correspondence with what happens, because that's precisely what it means for something to be a physical law. They are discovered through human activity, that doesn't mean they are constrained to human activity.

    The claim that the laws won't change in that time is based on 1) that the current understanding of things is basically correct and 2) that this current understanding entails that the universe will be much the same for that time period.

    Even if science is wrong, that doesn't mean nature will change. Nature does not change to accommodate the beliefs of scientists. The scientific description of patterns in nature may change when previous descriptions are found incorrect or novel phenomena are studied.

    You need to establish not only that the laws of nature can change - in the strong sense, that nature itself will change -, but that it will change in a manner that makes the error estimates for the optical clocks incorrect. As yet you've not. So tell me, why will the measurement process inside of the clocks change?
  • A question about time measurement


    I understood you to be making the claim that the laws of the universe can possibly change. I'm not making the claim that it's impossible for them to change. I'm making the claim that they won't change in any meaningful way for 1000 times longer than the current age of the universe.

    Why would you think that because it's possible for the universe's laws to change, that they will?
  • A question about time measurement


    What scientists believe about dark energy has absolutely no bearing on whether the laws of the universe will change in a given time period. Coming to know more about the laws of the universe may reveal the reason for all the 'missing matter', but this novel disclosure has no bearing on whether the laws will change - only what the laws are believed/known to be. With that in mind:

    Can you make a positive argument that the laws of the universe will change within 100 million years? Can you establish that the measurement process going on inside an atomic clock or an optical lattice clock will degrade? When will it degrade? How will it degrade?

    Also, it just doesn't follow that, because an estimate of something was derived from a month's work of science, any error measurement derived from it is curtailed to a month. Such a temporal localisation of knowledge removes the validity of all measurements, not just temporal ones. You read a thermometer - the thermometer's measurement is only observed now - therefore we don't know what temperature it is when we look away.
  • A question about time measurement


    I did a bit of background reading on Smolin. From what I understand he advocates the view that the laws of physics change over time. I'm sympathetic with this view, since the different regimes of energy distribution at different stages of the universe's development give rise to markedly different topologies of physical law. By this I mean, at the start of the universe there is theorised to be a unification of the electroweak, gravitational and strong forces, and spatio-temporal variations in the ambient levels of energy unfold a universe with distinct forces and distinct length-scales for their activity.

    However, the universe will still be in the same regime of energy distribution for billions of years, and there is no good reason to believe that the laws will change in this time. The mere possibility of the laws changing evidently does not impede scientific discovery and theory-forming over the different stages of the universe's chronology. I had a similar discussion with Rich once, the accuracy of measuring red-shift in photons coming to Earth gives excellent evidence of the constancy of universe over very large time scales.

    Indeed, approximately 10^14 - (13.8 * 10^9) ≈ 10^14 years need to pass in order for the regime of distribution of energy to change in any meaningful way. That's thousands of times longer than the current age of the universe...

    Any account of the study of physical phenomena must allow for the ability of physics to probe the beginning and the end of the universe, the near instantaneous (10^-18 of a second) to the universal (10^100 years) time scales with reasoned argument and mathematical precision. It must also allow the ability to assign errors and upper/lower bounds to these predictions and measurements.

    If your metaphysical speculation is inconsistent with the sheer scope of our ability to study the universe, so much the worse for your metaphysical speculation.

    Edit: another way of thinking about it - the contingency of physical law doesn't do anything to the laws revealed, other than requiring accounts for their formation and end. (and thus replacement by other laws)

    Edit 2: and why would the process of measurement embodied by the optical lattice or caesium clock change anyway?
  • A question about time measurement


    The laws of physics have been shown to operate over all observed parts of the universe - and thus back in time more than that. It isn't a stretch to assume if no one destroys the clock or the measuring mechanism, or turns it off, that the process operating within it that measures time will have that error rate.
  • Sometimes, girls, work banter really is just harmless fun — and it’s all about common sense


    Yes, one is an expression utilising a shared background of institutionalised prejudice to furnish its acceptance and is acceptable, and one is an expression utilising a shared background of institutionalised prejudice to furnish its acceptance and is not acceptable.