Comments

  • A question about time measurement


    Don't you see the distinction between:

    The clock only ran for a month, so the error is at least a month.
    The clock ran for a month and had an error of 5.4*10^-17 in that period.

    What allows the extrapolation of the error - and thus statements like '1 second in 100 million years' - is that the clock had a certain error which accrued over a month. The measurement error gives precisely 'how much it changes over time'. If the clock degraded, then the error would change. The degradation would occur in the instruments of measurement - what tracks which quantum state the clock is in - not in the oscillations between two quantum states; the latter is well-understood variation anyway (like variations due to gravity).

    Time, on the other hand, is slightly different. A unit like the second can only be defined with periodic change that has to be regular, just like for length. But without a timepiece that is already regular, we can't determine whether the periodic phenomenon we're using to measure time is regular or not. See...?

    Ok, how do you account for the ability to assign measurement errors to clocks? What does your skepticism actually do?
  • A question about time measurement


    There doesn't seem to be a law that clearly demonstrates the true regularity of any physical process. Every clock is imperfect. All we've done is postpone the moment when our clocks accumulate enough error to be noticeable. While this may be acceptable for living in seconds, minutes, hours, days, months or years, we can't ignore it in doing science, where accuracy is vital.

    You're always going to have measurement error in experiments. An error of 5.4*10^-17 in a second is ridiculously precise. As the optical lattice clock paper noted at the end, this level of precision allows all kinds of new experiments. Demanding zero measurement error before anything can be demonstrated through experiment has the opposite effect to 'accuracy is vital for the progress of science', since it completely undermines every single experiment ever done.
  • Demonstration of God's Existence I: an Aristotelian proof


    Any argument that purports to prove the existence of God could be called a demonstration of God's existence. That doesn't mean the person detailing the argument believes the conclusion of the argument, nor does it mean that the demonstration is successful.
  • Demonstration of God's Existence I: an Aristotelian proof


    It doesn't pretend to prove God. The point of the thread is to examine the argument. If the OP was filled with ridicule of atheists and had 'checkmate atheists' at the end, maybe your responses would be more appropriate.
  • Demonstration of God's Existence I: an Aristotelian proof


    I'd rather see what conception of God is engendered by the assumptions than attempt to shoehorn in a totally irrelevant conception for the sole purpose of refutation.
  • Ethos
    There's a blogger who links abstract properties of societies to real-life impediments in the UK. The way he writes is exemplary, worthy of study and mimicry.
  • A question about time measurement
    They ran for a period of a month, and they got out of phase by 2.8 x 10^-17 seconds. That doesn't mean it's only proven to be stable for a month. Quite the contrary, the error is so low in a month that it's negligible.
  • A question about time measurement


    I think you're mistaking the one measurement for another. Can you cite the passage?
  • A question about time measurement


    Honestly I don't understand literally everything in the paper. I trust their error analysis. If you really want me to translate the error analysis in the paper to a more convenient form I could try, but not now.
  • The Facts Illustrate Why It's Wrong For 1% To Own As Much As 99%


    Well yeah, the ratio of the crime rate among those subject to the intervention to the per capita rate in the general community (excluding those subject to the intervention) is a good statistic for that. It doesn't need a 100% success rate to be a good thing, nor to be shown to be a good thing.

    Yes, obviously poverty and inequality can be temptations for immoral behaviour. That's what statistics show. But that kind of inequality has nothing to do with 1% owning 99% and the others owning just 1%. It has to do with whether the 99% have their basic needs met.

    I think this is a relevant difference. There are different statistics to measure these things. The overall level of inequality in terms of wealth possession is something that could be decreased through wealth redistribution measures, funded in a variety of ways (in the UK and US, actually enforcing their fucking tax laws on multinationals would go a long way, not that there are enough staff to do it in the UK for some reason :(). So this is the kind of inequality that would be measured by the 99% and the 1% sharing equal amounts of money.

    Another way of measuring inequality, somewhat maximin-inspired, would be to survey the expenditures of those in the lowest 5%, looking at the proportion of their total income devoted to necessities. Another way of finding this threshold would be to construct a budget from local prices for housing, food, electricity etc., then look at communities which would struggle, on average, to obtain these things.

    I'm of the opinion that the latter measure, and measures inspired by ratios of living expenses (or minimal living expenses) to incomes more generally, are much more sensitive to deprivation, and provide metrics for evaluating improvement in targeted communities. A very high proportion of total income spent solely on basic sustenance on average would make a community a good candidate for intervention. Targeting the worst off areas to incentivise investment in small businesses there (tax incentives without their abuse), providing community education, organising community policing from those within the community and neighbourhood watches and doing whatever can be done to increase the healthcare of those in the areas (like needle banks in areas with heroin problems).

    The stats will tell you where those target areas for intervention are. They'll also give you feedback on policy effectiveness, alongside personal interviews (paid, of course) with those who engage and do not engage with the intervention measures.
  • The Facts Illustrate Why It's Wrong For 1% To Own As Much As 99%


    Now we probably agree in practice if not in principle. However, you're still going to be hostile to the idea that inequality and poverty contribute to these temptations and that their relationship can be understood statistically, I bet.

    Also, say there's an intervention in a community with the aim of making it have less crime. Some kind of moral education initiative, would you say the success of the initiative could be measured by how the crime rate per capita behaves over the next few years? Also whether and how many of those people who were instructed in the initiative committed crimes?
  • The Facts Illustrate Why It's Wrong For 1% To Own As Much As 99%
    @Agustino

    No, because I don't have enough time, and good enough results can be achieved by other means. But understanding what those means are involves understanding the root of the problem. In this case, the root is moral - so the moral aspect has to be addressed first.

    Do you see a role for statistics in finding problem areas in the organisation of a society that contribute to people making immoral decisions?
  • The Facts Illustrate Why It's Wrong For 1% To Own As Much As 99%
    @Agustino

    No, of course, I'm going to bet on him coming from the place with high crime rate and high population vs the low crime rate and low population one. But that's because I don't have knowledge - it's my own ignorance of what is actually the case that forces me to resort to using statistics. If I actually knew what was the case, I wouldn't bother with stats. But I need to take a decision in the absence of knowledge - so then I'm concerned with stats and forms of hedging my risks.

    And do you propose to interview every member of a society in order to analyse its aggregate properties?
  • The Facts Illustrate Why It's Wrong For 1% To Own As Much As 99%


    One of the reasons the statistical approach to gas behaviour works is that it took something incredibly complicated with loads of variables - the individual trajectories of gas molecules - made a few simplifying assumptions, like no particle interaction, then derived statistical properties based on the simplification. I don't think it's particularly contentious to say that people, and the systems they create through their interaction, are far more complex than any gas. This is then an excellent motivation for using statistical summaries to get information about individuals in societies - what promotes and constrains their behaviour.

    The simplifications inherent in creating summary statistics are largely either subpopulation based - you estimate a property of a population based on a sample - or inherent to the calculation; like aggregating 'crimes per capita' over a country despite there being local variations in crime rates per capita.

    I don't see it working that way. I will totally ignore that 90% of business start ups fail, and simply ask myself what makes a business successful, if it's the right time for their particular business, if I can do something to make them successful apart from capital and what's the upside vs the downside.

    Ignoring base rates about their own business or businesses they like is probably one of the distinguishing features of entrepreneurs and investors. They condition on their own exceptionalness and believe it with great vigour.

    Regarding the last point: even if something has a 90% chance of success, if the potential upside on success is a 10% return and the downside on failure is a -100% return, it would be stupid to invest, because the expected value is negative.
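    The arithmetic is a one-liner; a quick sketch of that hypothetical bet, using the numbers from the example:

```python
# Hypothetical bet: 90% chance of +10% return, 10% chance of -100% return.
p_success = 0.9
expected_return = p_success * 0.10 + (1 - p_success) * (-1.00)

# Negative expected value: on average, you lose 1% per bet.
print(round(expected_return, 4))  # -0.01
```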

    So you're quite happy to average over a poorly estimated distribution of an individual's success to calculate an expected return, but as far as betting on whether someone who just committed a crime is a member of a place with high crime rate and a high population or low crime rate with a low population... That's a no go. Riiiiiiight.
  • The Facts Illustrate Why It's Wrong For 1% To Own As Much As 99%
    Well, multiple things. Firstly and most importantly, society is made of individuals. Individuals aren't like gas molecules - they have free will. That doesn't mean their behaviour may not be predictable, but it is still marked by the presence of free will and hence moral responsibility.

    Individuals are far more complicated than gas molecules therefore we can't treat them as random and use their aggregate properties to study their properties as a whole?

    Second of all, when we're interested in what's possible for an individual, just like when we're interested in what an individual gas molecule does - we don't use a statistical analysis. We only use a statistical analysis when we're interested in the behaviour of all individuals taken together.

    No... We're interested in what individuals do, and what things constrain and promote their behaviour. This just goes to show you didn't understand why I brought in conditional probability. Say you're deciding whether to give money to a new start-up. If you're savvy, you'll know that 90% of business start-ups fail. Then you'll see what their business model is, see what their ingoings and outgoings are, evaluate how likely the business is to expand - see what evidence there is that the start-up is doing the things that make a business successful, and how long that is likely to continue. If the latter appears to outweigh the initial high failure rate - if they impress you and provide good evidence that they're a good investment - you'll give them the money. This is placing a bet on what you believe to be good odds: a conditional probability.

    Conditional probability gives you an indexing of general societal properties to individual endeavours and contexts. Everyone still has free will. It just so happens that, say, those in areas with higher crime rates tend to choose to commit criminal acts. The underlying reasons for that can be analysed, and individual motivation plays a part. What also plays a part is their context. Think people would stand on the street selling drugs and in the line of fire if they were the child of a rich businessperson? No, of course you don't.

    Probabilistic summaries, properties of aggregates are entirely consistent with the capacity of individuals to make decisions. We've been over this a few times now.
  • The Facts Illustrate Why It's Wrong For 1% To Own As Much As 99%


    Ok. What's wrong with doing this Aug:

    I gave you exactly the same example. The movement of gas molecules is deterministic. But yet we use statistics to assess certain gas properties - why? Because we're interested in the behaviour of the society as a whole - WHICH CAN BE MODELLED AS RANDOM, although quite evidently it is not random in reality, but merely approximates what we would identify as random behaviour.

    Also:

    "Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component"

    You literally just skim-read the wikipedia article to find the first thing you could say to me that looked like a counterpoint. In order to estimate parameters in a model, a noise term is added, which means the measurements are treated as random variables. This is why physicists spend so long dealing with measurement error - quantifying random uncertainty in their measurements of deterministic systems.
  • The Facts Illustrate Why It's Wrong For 1% To Own As Much As 99%


    You're saying that statistics only works when the thing it's applied to is random. I gave you a counter-example. Parameter estimation is used to assess the accordance of theoretical prediction of deterministic systems with their experimental behaviour. You're just wrong on this one I'm afraid.
  • The Facts Illustrate Why It's Wrong For 1% To Own As Much As 99%


    No. Physical models are fit to data which are generated deterministically. The parameter-fitting process still assumes the data are indistinguishable from random data. Just look at the least-squares estimation used in the high-school physics experiment to determine F=MA. More generally, experimental physicists use B-splines to quantify novel trends when there isn't a parametric model to fit to the data. All of this happens against the backdrop of measurement imprecision, which is assumed to be random. Hence, while the Higgs boson was still a fresh discovery at CERN, the media kept reporting the buzzword that it was a '5 sigma result'.
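    A minimal sketch of what 'fitting a deterministic law through random measurement noise' looks like - a toy least-squares fit of F = m*a with made-up data, not any actual experiment's analysis:

```python
import random

random.seed(0)

# Deterministic law F = m*a, observed through random measurement noise.
true_m = 2.5
data = [(a, true_m * a + random.gauss(0, 0.1)) for a in range(1, 11)]

# Closed-form least-squares slope through the origin: m_hat = sum(a*F) / sum(a*a).
m_hat = sum(a * f for a, f in data) / sum(a * a for a, _ in data)
print(m_hat)  # close to 2.5 despite the random noise
```

The deterministic law is recovered precisely because the noise is treated as random and averaged out by the fit.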

    The use of statistics to quantify things and learn about them does not depend on your interpretation of Aristotelian metaphysics. If it is incompatible in your eyes, so much the worse for your personal metaphysics.
  • The Facts Illustrate Why It's Wrong For 1% To Own As Much As 99%
    @Agustino

    Yeah I want to flip the tables, since there is free will when it comes to human behaviour, hence why there is moral agency and responsibility for one's actions.

    "I believe all these facts limit my agency therefore they are false" - exactly why I stepped away in the first place. You don't actually care about patterns in the territory. Or about the methodological principles which bring them out.
  • The Facts Illustrate Why It's Wrong For 1% To Own As Much As 99%
    No, it doesn't mean it IS that, but it can be modelled as that. That's a big big difference. The map is not the territory. I'm interested to get to know the territory, not approximations on the map.

    The statistics are part of the map. They're like signposts, signalling and quantifying relationships in the territory.
  • The Facts Illustrate Why It's Wrong For 1% To Own As Much As 99%
    Supplementary post: declaring that some statistics don't apply to individuals is essentially flipping the table we're sitting around to discuss this. Every part of the discussion depends on properties of aggregates of people - specific statistics - and their relationship to each other. Making some parts of statistical thinking 'off limits' has the effect of removing the relevance of everything we're discussing to the individual. A prosaic way of putting it would be that the aggregate properties of people provide base rates for their actions, and their individual choices in effect modify the base rate up or down. This is why there are people who don't commit crimes in very high-crime areas, yet it's still a high-crime area despite those individuals' choices.

    The way in which aggregate properties of societies constrain individuals is exactly what we're discussing. To say that the statistics don't apply on an individual level is to change the terms of the analysis - committing a pernicious category error.

    I hope you surprise me by re-evaluating your position on how statistics are relevant for ascertaining what it's like for people to live in societies.
  • The Facts Illustrate Why It's Wrong For 1% To Own As Much As 99%


    You're completely forgetting the application of base rates. Also your claim that a phenomenon has to be 'truly random' to have statistics applied to its study is just false. Model fits from experiments in physics follow the same principles as ones which are used to model asset returns and goods prices.

    So, for base rates: if you're healthy and don't smoke, your probability of cancer is a downward adjustment from the base rate - but still pretty close to it. P(you get cancer given you're healthy) is proportional to P(you're healthy given you get cancer)*P(cancer base rate). In a similar manner, P(you commit a crime given that you're in a ghetto) is proportional to P(you live in a ghetto given you commit a crime)*P(crime base rate). I'm not conflating 'different notions of statistics' at all - you're ignorant of how to manipulate probabilities. If you were a mine worker, your probability of lung cancer would be an upward adjustment of the base rate.
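    A toy version of the base-rate adjustment via Bayes' rule - all the numbers here are made up purely for illustration:

```python
# Made-up rates, for illustration only.
p_cancer = 0.01                # base rate P(cancer)
p_healthy_given_cancer = 0.30  # P(healthy lifestyle | cancer)
p_healthy = 0.60               # P(healthy lifestyle)

# Bayes' rule: P(cancer | healthy) = P(healthy | cancer) * P(cancer) / P(healthy)
p_cancer_given_healthy = p_healthy_given_cancer * p_cancer / p_healthy
print(p_cancer_given_healthy)  # 0.005: a downward adjustment from the 0.01 base rate
```

The conditioning adjusts the base rate down without abolishing it - which is the whole point about statistics still applying to individuals.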

    The group of products has a change in price, which is also modelled stochastically (like with a GARCH process for financial assets) when it's modelled at all. Same notion of statistics.

    You getting cancer given that you're healthy is (generates, really) exactly the same type of random variable as you getting cancer.

    Of course, none of this will be convincing to you, that's why I stepped away.
  • The Facts Illustrate Why It's Wrong For 1% To Own As Much As 99%


    I understand it quite well, statistics apply to individuals insofar as they support your arguments, and they do not apply to individuals to the extent that they do not support your arguments.
  • The Facts Illustrate Why It's Wrong For 1% To Own As Much As 99%
    , @Baden

    Right, and we discussed in what regards he is right and in what regards he isn't. He's not right that if 1% owns 99% of the wealth it's necessarily bad. That 1% of the wealth left may be enough - hypothetically - for the 99% to be able to have a decent life.

    What actually happened in the discussion was that I left when you started saying statistics can't be applied to individuals. There's no such thing as an income distribution, 1% can't own the money of 99% because they're statistics and don't apply to individuals. The ratio of the minimum wage to big mac index doesn't mean anything for people's lives because statistics can't be applied to individuals. The amount of money lost per year in a country due to tax avoidance doesn't matter because it's a statistic and statistics don't apply to individuals.
  • Demonstration of God's Existence I: an Aristotelian proof


    I think it's a stretch to say that Aristotle's metaphysics is a good description of what humans believe as a matter of common sense.
  • Demonstration of God's Existence I: an Aristotelian proof


    I try to suspend all common sense when asking for clarifications on metaphysics.
  • Demonstration of God's Existence I: an Aristotelian proof


    I didn't understand the role of 'Therefore, both exist' in the post. Is it because acorns and lighters have some 'real potential' that they can be said to exist?
  • Demonstration of God's Existence I: an Aristotelian proof


    What's the difference between potency and actuality here? I'm genuinely asking 'cos I'm curious, not for some 'destruction through Socratic method' of the original argument.
  • Demonstration of God's Existence I: an Aristotelian proof
    @charleton,@darthbarracuda

    Honestly? The interesting thing about this thread isn't whether there is a God or isn't one. It's in what metaphysical assumptions generate that conclusion, and how those metaphysical assumptions are justified. Only on the basis of analysing the argument in terms of its metaphysical background can it be established as sound and valid anyway.

    So when you read something like:

    3.) A potential cannot be actualized except by something already actual.

    There's a wealth of metaphysical background that could be explored, and its relationship to the argument would be the next step of analysis. Saying that it's 'just false' is completely uninteresting and you don't learn much from that.

    More interesting responses to (3) might be:
    a) do potentials actualise themselves?
    b) in light of (a), is it true that material objects change through the 'actualisation of potentials by actuals' or 'the actualisation of potentials' in general?
    c) what does it mean to be actual and what does it mean to be a potential? are potentials actual? material? corporeal? incorporeal? immaterial?
    d) can potentials be included in hierarchies?

    Yadda yadda. Being an atheist doesn't just mean you rebut theistic arguments on the internet, as if dialogue was a competition to be the most right, it means you have to reject theological baggage in how you think.
  • A question about time measurement
    A cliffnotes version of the conclusion: errors in measuring the number of oscillations of atoms or lattices between different quantum states within a given duration are then translated into errors in time measurement.
  • A question about time measurement
    I did some googling for you. @Metaphysician Undercover

    Here's a paper that does measurement error analysis for a type of atomic clock.

    Here's one that does measurement error analysis for a modern optical lattice clock.

    Here's the wikipedia page on the adoption of the atomic clock standard.

    Measurement error estimates are in general obtained from making repeated measurements. When there are multiple components to the measurement error, as in the error analysis for atomic clocks, individual component errors can be obtained by varying one component independently of the others. The errors are then usually combined through the square root of the sum of their squares, or the square root of the sum of their squared percentage errors.
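    The combination in quadrature is simple to state in code; a sketch with made-up component values, not figures from the linked papers:

```python
from math import sqrt

def combine_errors(components):
    """Combine independent error components in quadrature:
    the square root of the sum of their squares."""
    return sqrt(sum(e ** 2 for e in components))

# Hypothetical fractional-frequency error components for a clock.
total = combine_errors([3e-17, 4e-17])
print(total)  # approximately 5e-17
```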
  • A question about time measurement
    Every clock has a measurement error associated with its time. This is literally a quantification of how accurate the clock is. For the caesium-133 clock, this is an error of 1 second in 100 million years. The reason the atomic clock was switched to over the mean-solar-day definition was that it was more accurate: it had less measurement error and variability.
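    For a sense of how a figure like '1 second in 100 million years' relates to a fractional error, here's a rough conversion with an illustrative fractional error (not a value from the cited papers):

```python
# Rough conversion: fractional error -> years until the clock drifts by 1 second.
fractional_error = 3.2e-16              # illustrative value only
seconds_per_year = 365.25 * 24 * 3600   # ~3.156e7 seconds

years_to_drift_one_second = 1 / (fractional_error * seconds_per_year)
print(round(years_to_drift_one_second / 1e6))  # ~99, i.e. about 100 million years
```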

    The cycles of caesium atoms don't differ in any meaningful way. That's kinda the point. They're regular enough to make a measurement of time to the tune of 1 second of error in 100 million years.

    Accuracy = precision of measurement. Precision of measurement = small measurement error. The absence of measurement error is impossible; all that matters is whether it is low enough to make good measurements. If a new time-measuring device is more accurate, like this one, which won't accrue a second of error until the universe doubles in age from now, then definitions can be made with respect to the more accurate clock.

    This is why the second standard based on the Earth's rotation around the Sun was rejected: it was demonstrably less precise. But - but - we keep leap seconds, leap days etc. so that we stay calibrated with the Earth's rotation around the Sun, since we don't want to reject the solar year and its monthly/daily/hourly divisions and come up with a new manner of organising time...

    This is also why that particular number of oscillations of the caesium atom was chosen: it was incredibly close to the then-current definition of the second but measured far more precisely.
  • A question about time measurement
    @Metaphysician Undercover

    Convention privileges a measurer of time as a definer of the second. Then other ways of measuring time are calibrated to it.

    What's the time where you are MU?
  • A question about time measurement


    The entire point of calibrating measurements of time is that there is a privileged time-measurer and other measurements of time are calibrated through their relationship to the privileged one. This is then what it means for two time-measurers to be in accord. If they are out of accord, they can be corrected. If the privileged one behaves in an unexpected way, it will be changed.

    This is because the conventional definition of time with respect to the rotation of the Earth around the Sun is slightly different from the conventional definition of time with respect to the oscillations of a caesium atom. The introduction of the leap second is thus precisely an attempt to calibrate the atomic-clock second with the fraction-of-a-year second. This is so that we can keep the conventional organisation of time in terms of hours, days, months and years, and not reinvent the wheel purposelessly.

    If you like you could become an advocate of year definitions without leap seconds.
  • A question about time measurement
    @TheMadFool

    How do you check the accuracy of your watch? You must compare it to some standard clock, say A. The same question applies to A too and so on...ad infinitum. We can never be sure of the accuracy of a clock.

    Except no, because this isn't an infinite regress. It stops at whatever measurement of time is conventionally accepted as the definition. The duration of a second now means:

    "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom."

    And so comparisons ultimately derive from this one.
  • The Facts Illustrate Why It's Wrong For 1% To Own As Much As 99%
    These are just statistics and don't apply to the actions of individuals.
  • A Robert De Niro Theory of Post-Truth: ‘Are you talking to me?’


    I'm just grateful my polemic did something for once. ;)
  • Demonstration of God's Existence I: an Aristotelian proof


    Well, I want to understand the logic by which these hierarchies are constructed. Depending on how successive elements in the hierarchy are allowed to relate to each other, there can be violations of the sense of logical priority suggested in the OP.

    Say we allow two partial orders, and we say that an element is prior to another iff it is prior under at least one of the partial orders. So take the cup and gravity. The cup is logically prior to the action of the earth's gravity on it, but the action of the earth's gravity on the cup is also determined by the cup's mass and therefore by the cup. The cup's resting on the table is 'notionally after' the idea of gravity keeping it there, but is 'physically prior' to the theory of gravity, since that theory is instantiated in terms of the cup's mass and the earth's. Then cup<gravity in the first sense, but gravity<cup in the second sense. This means the constructed hierarchy is not a hierarchy at all, since gravity and the cup are distinct and hierarchical orderings are antisymmetric.
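    The antisymmetry failure can be checked mechanically; a toy sketch of the cup/gravity example under the 'prior under at least one ordering' rule:

```python
# Two toy priority orderings over {"cup", "gravity"}, as sets of (before, after) pairs.
notional = {("cup", "gravity")}   # the cup is 'notionally prior' to gravity
physical = {("gravity", "cup")}   # gravity is 'physically posterior' to the cup

# Combined rule: x is prior to y iff x is prior under at least one ordering.
combined = notional | physical

def antisymmetric(rel):
    # A hierarchy (partial order) requires: never both x<y and y<x for distinct x, y.
    return all((y, x) not in rel or x == y for (x, y) in rel)

print(antisymmetric(notional))  # True
print(antisymmetric(combined))  # False - the combined 'hierarchy' isn't one
```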

    This means there have to be restrictions on the number or types of relation which are used in constructing these hierarchies, otherwise there are going to be examples that show a hierarchy cannot be constructed using those concepts.

    The purpose of my questioning was twofold. First, I wanted to know whether the book had any recipe for constructing such hierarchies after the fact, or whether it contained any metaphysical justification for how things will always stand in some relation of derivative causal power. Second, I wanted to see whether constructing these hierarchies is self-consistent.

    It also just isn't the case that there are no infinite hierarchies, without precluding certain classes of objects. The natural numbers are ordered by < but there is no greatest element, thus there is an interminable sequence. I suppose this is why there is a restriction to material objects standing in hierarchies (but then why would the inclusion of an immaterial being in the hierarchy be allowed?).