Comments

  • Infinite casual chains and the beginning of time?
    This is distressing. I'm beginning to agree with MU . . .jgill

    It's not that bad! There are lots of equivalence relations!
  • Infinite casual chains and the beginning of time?


    Did you see my reply here? Link.

    I remember a physics prof in university spending maybe 15 minutes scolding me when I said something like "the uncertainty principle says we can't know...", they responded "The uncertainty principle has nothing to do with how much we know about particles, it's not about our knowledge of the particles, it's about the particles" - not that exactly since it was a lot of years ago now, but that was definitely the gist. They were pretty mad at the suggestion it was epistemic, and their research was quantum theory, so I trust 'em.

    Edit: I have another story like that which is pretty funny. We had an analysis lecturer that was extremely eccentric, and one of the masters theses they were willing to supervise was on space filling curves. They handily included a "picture of a space filling curve in a subset of the plane", which was just a completely black square. I asked another prof if the eccentric prof actually wrote out code to draw the space filling curve, since it was the kind of thing he'd do if he could. The other prof got pretty angry and said "Computers can't do that, it's noncomputable, the construction relies upon the axiom of choice!".
  • Political Correctness
    or do we need to take a survey on that?Kaarlo Tuomi

    I think it went through a few phases. AFAIK:

    It was originally a leftist political in joke, about strict discipline in adhering to one's organisation's line. Like how Leninist organisations approach disagreement (they are not pluralist in any way). This was in the 1970's I think. It marked a contrast between old left tactical orthodoxy and new left tactical pluralism.

    Then people noticed that leftists were using it somewhere between an in joke and a weapon. So it became a slur towards the left. The left are those people that have to be politically correct. This is 1980's, when the old left was dying and globalisation + finance friendly ideology was sweeping the globe. In this context, it's a way of gainsaying left criticism or viewpoint as being against freedom of speech.

    After that, because the political advocacy of the New Left was less about class structure (the unions were dying and being undermined by the globalisation of supply chains), left struggles were more about identity and socialisation issues; race, gender, sexual preference and discrimination against those groups. It marked a shift from class struggle politics on the left to what pejoratively gets called the "culture war", which is "fought" mostly on the terrain of discourse/speech acts.

    That environment made it particularly fecund as a criticism of left "identity politics", because any anti-discriminatory intervention (like hate speech legislation) can be framed as against "freedom of speech". You see what they're doing now? You can't even say that you're ordering a chinky!

    We're still in that environment, so it's still a useful pejorative for left activity that focuses on "policing" speech - or from a left angle, changing culture to be more inclusive by changing how we relate to each other socially.

    So you get the bizarre situation where grumpy class focussed leftists find they dislike political correctness because it's proxied with identity politics and that allegedly filled the vacuum of class struggle, "fiscally conservative" (neoliberal) liberals dislike it because they don't want social changes to "freedom of speech", and anyone who appreciates its history as a leftist injoke and knows that it was an anti-hate speech legislation joke/weapon in its most common usage is going to react negatively to the ideological clusterfuck that's now its meaning.
  • Natural and Existential Morality
    Thread in a nutshell

    Is the fact/value distinction isomorphic to the map/territory distinction when considering descriptions of evaluations of valenced (pleasant/unpleasant) events?

    @Pfhorrest says no, because there are oughts in the territory; our bodies' sensations and their congruent expectations to be satisfied. These are based on descriptive content, and can thus be synthesized into heuristics that accurately describe the oughts in the territory.

    @Kenosha Kid says yes, because oughts in the territory can only ever be mapped, so the fact/value distinction says they are devoid of the theoretically synthesised imperatives the investigation seeks to produce. Even if they concern morals.

    Do facts about values say those values are right? No.
    Do facts about values say those values are ours in a qualified way? Yes.

    If there's a universal core of oughts that applies to everyone - a privileged flavour derived from necessities of human functioning by an intellectual synthesis - it seems @Pfhorrest wants to say these are true since they describe the deep structure of our oughts, and they are binding because they are actually occurrent. @Kenosha Kid comes in at this point and says that because they are descriptions, you can't get behind the map of our oughts to get at the territory of any universal principles of morality without it ceasing to be a map.

    Two different flavours of immanence accusing each other of different sorts of transcendence ("You can't get behind the map!" says K to Pf, "You can't get outside the theory-laden!" says Pf to K), each relying on precisely what is accused.
  • Political Correctness
    You made me to fill a questionnaire which you then explained in quite detail, so...ssu

    All I've done in this thread is:
    (1) Say that the question was leading and uninformative.
    (2) Try to explain why any interpretation of it suggested so far is fraught.
    (3) Give worked examples of how questions can be misleading.
    (4) Give a generic description of the error in survey design this question makes: it's a binary choice where almost all the information recorded in the Approve/Disapprove answer is determined by the framing the respondents and subsequent interpreters bring to the survey, rather than by the constraints the survey designer placed on plausible interpretations of the question's substantive content.

    If it was a question like the hate speech one @Maw brought up, I wouldn't be reacting like this, as hate speech has much more definite content.

    Survey questions like that are like contracts with an audience of devils. You answer "approve", the audience brings whatever fine print they like, and anything plausible is equally justified based solely on the question response (not also on the fine print/justification narrative for interpreting it in a given way).
  • Why aren't more philosophers interested in Entrepreneurship?
    If the question though is why there aren't more entrepreneurial efforts to promote philosophy, the answer probably is that philosophy simply doesn't sell.Hanover

    It creates value in forms invisible to balance sheets. That leaves it worse off than merely not selling.

    "What actionable insights does philosophy give you?"
    uhhh... - philosophy graduate
    "Hey can you monitor this production chain and assess if the desired tolerance is violated?"
    Sure, boss! - engineering graduate
  • Political Correctness


    That's an awful lot of opinion to form from that statistic, eh? You even brought hats. What you've done is fit that statistic in with your previous conceptions, rather than use the statistic itself. You provided all the implicit characterisation of political correctness. Just as the respondents to the survey were asked to.

    That is exactly the point I was making. That's all you can do with this statistic. And that's what you did.
  • Political Correctness


    So what conclusions do you draw from that bit of data?
  • Political Correctness
    How something is vague matters a lot regarding its usefulness.

    So what on Earth are you talking about?ssu

    I would like to live in a world where people could afford to be as unaware as you are that questions can be leading or loaded, and intentionally or negligently made that way. When you answer "Approve/disapprove" or "Yes/no" to a question, if the question conjures a certain framing with its usual interpretation, a constellation of yes and no answers transfers the framing assumed by the interpreter to the respondent (or the broader sample).

    "Should Scotland be independent?"
    "Should Scotland leave the union?"

    There was a fight over that one. There are reasons questions are asked the way they're asked.

    Besides, political correctness is far better defined as those terms above: using language that avoids offending members of particular groups in society.ssu

    With the "political correctness" question, the framing is supplied by whoever interprets the statistic. That goes against basic survey design principles; you should do whatever you can to ensure there is only one plausible interpretation of what the question concerns when its purpose is to elicit a binary choice on the matter.

    "Should Scotland be independent?" frames the yes answer as positive.
    "Should Scotland leave the union?" frames the yes answer as negative.

    Let's go through the questions I asked you:

    Do you believe consensus building is always of vital importance in political dispute resolution?
    (Yes/No)

    Has there ever been a situation in which consensus building was not of vital importance?
    (Yes/No)

    Do you think that every human has a right to express their viewpoint?
    (Yes/No)

    Do you think Naziism is a viewpoint?
    (Yes/No)
    fdrake

    If you say "no" to 1, that suggests you think alternatives to consensus building - power plays - are sometimes appropriate. The expected answer was no. I put the second one in in case you'd answer "Yes" to the first one, you're more likely to answer based on specifics if you're primed on specifics.

    I was seriously expecting you to think that Naziism is a viewpoint, because it's a perspective someone can take on some matters.

    If I changed the questions to: "Do you think every human has a right to express their belief system?"
    and
    "Do you think Naziism is a belief system?"

    I'm guessing you'd answer yes to them now.

    If you answer anything positive about Naziism - like approving of Nazis expressing their viewpoints, which the questions engender - and if you simultaneously believe that power plays are sometimes necessary in politics, a reader of those responses will often be left with the impression that the respondent (you) approved of Naziism in some way and approved of using power plays in politics. With that "approves of Nazis in some vague manner" priming in place, the reader is primed to interpret the power plays as violent.

    The questions you ask on surveys can engender their answers by being phrased in a leading way. You can get that effect if you include a pejorative in the question - and make no mistake, political correctness mostly functions as a pejorative.

    People responded saying they did not approve of (what the pejorative applies to), and what does it apply to exactly? Well, that's left to the interpreter. Just like someone who would read the above and conclude you were pretty far right and believed in violent direct action.

    If you expect all of these common associations to follow a syllogistic structure (like the one you're demanding I articulate), that's simply not how making leading questions works.

    The purpose of a survey question should be to elicit someone's opinion on a matter; what that "political correctness" one did is leave any interpreter free to fill in the blanks about what the respondents' opinions concerned, however they like.

    Or to put it another way: let's grant that it concerns something vague; now you're filling in the specifics in your head - despite it being acknowledged as vague! Bad question, bad usage of the question. But it was designed to be used that way, I imagine.
  • Political Correctness
    What I'm saying that many statistics are vague. Yet that vagueness doesn't mean the statistic is useless.ssu

    You go on interpreting the poorly designed survey question in accordance with whatever political worldview you think it confirms then...
  • Political Correctness
    I don't know who fdrake is so no, why would you need my employer?ssu

    Either to inform them that some idiot doesn't think Naziism is a viewpoint or has Nazi sympathies and sometimes approves of violent direct action.
  • Political Correctness
    Even that does tells a lot: progressive blah.ssu

    Yes. And it's entirely up to the survey interpreter to decide what is meant by progressive blah. Is hate-speech-mitigating law progressive blah?

    Headline demonstrating bias: "Liberals finally seeing the end of their worldview, 88% of Native Americans don't believe in political correctness"

    Are race+gender non-discriminatory hiring practices part of progressive blah?

    "Liberal narratives of inclusive hiring no longer desired by public, (blah high)% of people no longer approve of political correctness"

    It's really really easy to twist that survey result however an interlocutor wants to. And that's exactly what's been seen in this thread. You're all approving of the vague statistic, Nos approves of it for different reasons, you think Maw dislikes it because it goes against their worldview, you think I dislike it because it goes against my worldview... That's not an us problem, that's a methodology problem. It's a shit question.
  • Political Correctness
    As if people wouldn't differ just on what "is a problem" or what "extremism" or "an act of terrorism" is. No, at the present you either have to have a unified World view about everything or otherwise it's meaningless.ssu

    Fill out this survey please:

    Do you believe consensus building is always of vital importance in political dispute resolution?
    (Yes/No)

    Has there ever been a situation in which consensus building was not of vital importance?
    (Yes/No)

    Do you think that every human has a right to express their viewpoint?
    (Yes/No)

    Do you think Naziism is a viewpoint?
    (Yes/No)

    Oh, and if you wouldn't mind, please give me your employer's email address...
  • Infinite casual chains and the beginning of time?
    My understanding is that, for example, the Planck length is the length at which our current theories of physics break down and may no longer be applied. So that we can't sensibly speak of what might be happening below that scale. We can't say that reality is continuous or discrete; only that our current theories only allow us to measure to a discrete limit.fishfry

    But I am still confused about your conclusion. You're saying that a situation is ontological if there's no knowledge I could have that would settle the matter. Whereas I seem to mean something different. There's what we can know, and there's what really is. Two different things.fishfry

    I think I break the terms up differently from you.

    To my understanding, you treat all that a mathematical model of something says as epistemic. Because a mathematical model is knowledge of a thing, what it predicts about that thing is knowledge of that thing. I agree with that. And I'm inclined to take another step; if a mathematical model of something is good, I'll accept what it concludes as if it were the thing. I treat good mathematical models as representational knowledge; and they represent the thing. Part of representation to me is being able to stand in for the thing when considering it.

    So when the theory says; "it doesn't matter how much you sample about (blah), the variance of (blah) has a lower bound", I tend to treat it as being about (blah), rather than about our knowledge of (blah).

    With statistics and averages, I'm less inclined to do this. A lot of randomness in statistical models is epistemic, and thus it can in principle be reduced by sampling. I'm happy treating that as a fact about how the model's estimates relate to the sample, not a fact about the model the sample is being used to estimate things from. EG; samples of heights of people in Wales having a variance that can be arbitrarily reduced (in principle) by more sampling (epistemic) vs sampling from a signal in time space and that sampling strategy inducing a lower bound on the error in the frequency space regardless of the specifics of the sampling strategy (aleatoric).
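
    Here's a quick sketch of the epistemic half of that in Python (numpy) - the Welsh-heights numbers are made up, it's only meant to show that the uncertainty in the estimated mean shrinks as the sample grows, because that uncertainty lives in the sample rather than in the population:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical population of heights (cm); the true parameters are fixed.
    true_mean, true_sd = 170.0, 10.0

    for n in [10, 100, 10_000, 1_000_000]:
        sample = rng.normal(true_mean, true_sd, size=n)
        # Standard error of the estimated mean: the "epistemic" uncertainty.
        # It goes to zero as the sample size grows.
        std_error = sample.std(ddof=1) / np.sqrt(n)
        print(f"n={n:>8}  estimated mean={sample.mean():7.2f}  standard error={std_error:.4f}")

    The population spread itself never shrinks; only the uncertainty about the estimate does, which is what makes it reducible in principle.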

    But with aleatoric randomness, I'll treat it as a fact about the model. It doesn't matter what samples you do to estimate stuff from the model, the thing is gonna be random. I don't have a "physical intuition" or a "physical meaning" associated with these uncertainty principles when predicated of signals; as if I knew what they implied about reality. Some people seem to do this, interpreting an uncertainty principle about the modelled thing (blah) as meaning (blah) really is a distribution. Maybe it is, it seems useful to think that, but all I wanted to do here by calling it "aleatorically random" was to say that the uncertainty associated with an uncertainty principle is a model property rather than a sample property; it's about the (representation of) the thing, rather than about samples of the (representation of) the thing. In other words, its application is invariant under sampling, so it's about the model.

    If you wanna call it "epistemic" because it's some content of the model, and don't want to let the model "stand in" for the thing like I'm inclined to, that's fine with me.
  • Am working


    Not a problem :up:
  • Political Correctness


    It isn't a surprise that over 80% of a group dislike a nebulously defined pejorative. Does disliking political correctness entail anything about your opinion on any of the following:
    (1) Censorship of hate speech in media.
    (2) A cosmopolitan attitude.
    (3) Equality of opportunity.
    (4) Race/sex/gender indifferent hiring practices.
    (5) Politeness.
    ...
    I could go on.

    When someone talks about "political correctness", they usually cannot articulate precisely what it is. It's usually an "excessive version of (undefined allegedly progressive blah)", and everyone dislikes unspecified undefined allegedly progressive blah when it is excessive.

    The survey designers may as well have asked "Do you dislike things which you think are bad?"

    It does not take any effort whatsoever to specify the term. I am immediately suspicious when a survey designer asks a loaded question because they're literally trained not to do that.

    Do you like excessive profit motive from oil companies?

    Of course you don't.
  • Am working


    I've changed your name. Has the password reset email turned up yet?
  • Donald Trump (All General Trump Conversations Here)
    Anyone else got any explanation that makes sense given the facts?tim wood

    The fog makes it impossible for anyone inclined to believe Trump to hold him accountable for a mistake. Take a true believer in his real politics (racist-nationalist populism + corporate handouts + repealing welfare programs): somewhere in the fog they will find a narrative that suits them. Even better, if anyone points out a flaw, there'll be a flipflop statement or reframing to substitute in! Avoid ever having to think about why you believe what you believe! Make America Great Again!
  • Infinite casual chains and the beginning of time?
    Cantor's work arose directly from physical considerations. This point should be better appreciated by those who dismiss transfinite set theory as merely a mathematical abstraction.fishfry

    Do you have a source for this? I'd love to see the connection.
  • Infinite casual chains and the beginning of time?
    Well this isn't so bad. I seem to recall that Heisenberg uncertainty comes ultimately from Fourier analysis or some such. The idea seems to be referenced here.fishfry

    I had this in a signals processing/wavelets class a while back. There's a standard proof here.

    The momentum-space representation of a wavefunction is the Fourier transform of its position-space representation. There's a theorem in signal processing called the Gabor limit that applies to dispersions (variances) of signals: the product of the dispersion of a signal in its time domain representation and the dispersion of that signal in its frequency domain representation is at least (1/4pi)^2. Math doesn't care that time is time and frequency is frequency; it might as well be position and momentum. The Gabor limit applied to the position-space and momentum-space representations of a wavefunction turns into the Heisenberg uncertainty principle for the position + momentum of wavefunctions.

    It's illustrated in the link you provided: if you Fourier transform a Gaussian with variance σ², you get a Gaussian with variance proportional to 1/σ²; the product of the two variances is strictly positive. If you scale the original distribution by k, the Fourier transformed distribution will be contracted by 1/k. Contractions in transform space are dilations in original space. When dilations in time result in contractions in frequency, it isn't so surprising that the product of the "overall scale" (variance) of time and frequency has a constant associated with it.
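
    Here's a rough numerical sketch of that trade-off in Python (numpy) - discrete approximations, so the numbers only come out close to the continuous ones, and the grid choices are arbitrary:

    import numpy as np

    def dispersion(x, signal):
        """Variance of x under the normalised density |signal|^2."""
        w = np.abs(signal) ** 2
        w = w / w.sum()
        mean = (w * x).sum()
        return (w * (x - mean) ** 2).sum()

    t = np.linspace(-50, 50, 2**14)
    dt = t[1] - t[0]
    f = np.fft.fftshift(np.fft.fftfreq(t.size, d=dt))

    for sigma in [0.5, 1.0, 2.0, 4.0]:
        g = np.exp(-t**2 / (2 * sigma**2))       # Gaussian "signal" in the time domain
        G = np.fft.fftshift(np.fft.fft(g))       # its frequency domain representation
        product = dispersion(t, g) * dispersion(f, G)
        print(f"sigma={sigma:4.1f}  time x frequency dispersion product = {product:.5f}")

    print("Gabor bound (1/(4 pi))^2 =", (1 / (4 * np.pi)) ** 2)

    Widening the Gaussian in time narrows it in frequency and vice versa; Gaussians saturate the bound and other signals land above it, and sampling more finely doesn't change that, which is the sense in which it isn't epistemic.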

    It isn't an epistemological limit.

    In statistical modelling, there's a distinction between epistemic and aleatoric randomness. Epistemic randomness is like measurement error, aleatoric randomness is like perturbing a process by white noise. One property of epistemic randomness is that it must be arbitrarily reducible by sampling. Sample as much as you like, the uncertainty of that product is not going to go below the Gabor limit. That makes it aleatoric; IE, this uncertainty is a feature of signals that constrains possible measurements of them, rather than a feature of measurements of signals. There is no "sufficient knowledge" that could remove it (given that the principle is correct as a model).
  • Bannings


    Yes. The guy keeps using sockpuppets to copy paste verbatim from his blog. The blog content is far right (we're talking creative euphemisms for a Jewish conspiracy far right), as if sockpuppeting and advertising weren't enough. He's also posted some of that far right content before.
  • Bannings
    Banned @Bruno Campello. @BrunoCampello, for sockpuppeting and advertising.
  • Causality, Determination and such stuff.


    If we got into it we'd be discussing your philosophical system rather than the thread topic.
  • Causality, Determination and such stuff.


    I dunno. I'd hesitate to say prediction = causation in any way. You have to do a lot of work to interpret the estimated parameters of a statistical model causally.

    Say you're studying lung cancer rates observed in hospitals within countries, and you have country level data, hospital indicator and smoking status of the individual as predictors. If you took an individual that was a non-smoker, then made them a smoker (taking all else as equal in the background, or propagating correlations between smoking rates and the other variables into the prediction), you'd get something close to a causal interpretation of increased risk. If you took an individual that lived in Scotland, then put them in England, you'd get another change in risk. Does that mean this individual moving to England suddenly gets an increased lung cancer risk as soon as they cross the border?
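
    A toy version of that in Python (numpy + scikit-learn), everything simulated and every number invented - I've added an unmeasured exposure that's more common in "Scotland" as one way a country coefficient can end up nonzero:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 50_000

    # Toy world: smoking causes cancer directly; an unmeasured exposure (more common
    # in "Scotland") also causes it. The fitted model never sees the exposure, so the
    # country indicator soaks up its effect.
    scotland = rng.binomial(1, 0.5, n)
    smoker = rng.binomial(1, 0.3, n)
    exposure = rng.binomial(1, 0.1 + 0.3 * scotland)
    logit = -4.0 + 1.2 * smoker + 1.0 * exposure
    cancer = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    model = LogisticRegression().fit(np.column_stack([smoker, scotland]), cancer)

    baseline = [[0, 1]]        # non-smoker living in Scotland
    made_smoker = [[1, 1]]     # same person, now a smoker
    moved = [[0, 0]]           # same person, now "in England"

    for label, x in [("baseline", baseline), ("made a smoker", made_smoker),
                     ("moved to England", moved)]:
        print(f"{label:16s} predicted risk = {model.predict_proba(x)[0, 1]:.4f}")

    Both flips move the predicted risk, but only the smoking one corresponds to something that would change the person; the country coefficient is mostly standing in for the unmeasured exposure, so reading it as "crossing the border raises your risk" is exactly the mistake.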
  • Causality, Determination and such stuff.
    n. Not a definition of nonlinear in the strictly mathematical sense. And what is "small"?. For example:jgill

    Wrote a post trying to explain some chaos concepts a while ago. Since you're a meteorologist I'd guess you probably already know it and are making a point regarding chaos being a buzzword most of the time, but just in case.
  • Are there any philosophical arguments against self-harm?
    I don't see good reasons against "self harm" in this sense, if it strikes a chord with you and does no harm to others.

    The only avenue of attack I see against it is the question: "Why are you only yours?", and that way lies madness, jackboots and Philosophy in the Bedroom.
  • Causality, Determination and such stuff.
    Worth reading for context. A deterministic model of the Galton box; no randomness involved. Against Galton's idea that the collisions of balls with pins were "independent accidents" summing together (hence the normal distribution). Against independence, because it seems that how a ball bounced at the last pin and the pins before it influences how it bounces at the next pin.

    On reflection, I think that on topic arguments shouldn't really be concerning themselves with randomness vs determinism here; it's more regarding whether it's appropriate to consider the Galton box as an example of bounces following bounces as a matter of logical necessity in the wild. The central question for whether it's determined in that logical sense is whether a real Galton box can sensibly be modelled with infinite precision inputs. I don't think it's fundamentally about whether deterministic mathematical models are successful in providing insights about 'em (they are), it's regarding the relationship of a real Galton box to the "input completely specified => unique trajectory" implication.

    I wanna have my cake and eat it too, really. I'd like to insist that "input completely specified => unique trajectory" applies to real Galton boxes since deterministic models of them work, but that nevertheless real Galton boxes do not have a specifying mechanism that enables anyone to do anything to them that pre-specifies any ball's initial conditions to sufficient precision for that mechanism's actions to collapse the outcome set to a unique hole for any given ball.

    I also wanna insist that that isn't "just an epistemic limitation", it's built into the box that it cannot be manipulated in that way. Maybe for other boxes you can.
  • Causality, Determination and such stuff.
    -But the universe (or 'everything' or 'the one' or ' the total totality' etc )cannot be treated as a closed system where 'everything' is such and such at state 0, determining unique states at t=xcsalisbury

    Think you're mostly right. "Open" and "closed" don't mean quite that though. A closed system is one that isn't subject to any external net force or matter/energy transfer. Once the balls in the box are set in motion, it's a closed system (to a good approximation).
  • Causality, Determination and such stuff.
    The salient point is that determinism is not found in classical physics but assumedBanno

    I think it's true that the hypothetical "if initial state is completely specified, then trajectory is completely specified" is true of systems like the Galton box I linked; if that's all someone means by determinism, I think it holds of the box. If they additionally assert "the initial state is completely specified in this Galton box", I don't think it holds of the box. At least, there's room for doubt.
  • Heraclitus Weeps For Us, Democritus Laughs At Us
    Indexicals, man, sometimes folly is funny, sometimes folly is sad.

    Also, something can be funny and sad at the same time.
  • Causality, Determination and such stuff.
    "We see that to give content to the idea of something’s being determined, we have to have a set of possibilities, which something narrows down to one – before the event".StreetlightX

    That role seems to be played by the initial conditions. For a given initial condition, there's a guaranteed outcome. For a range of initial conditions, there's a range of outcomes. Imprecise specification of an initial condition gives a range of outcomes consistent with (determined by? @Kenosha Kid) the range of inputs concordant with the imprecisions. The sleight of hand that makes determinism seem to be a system property is the specification of an initial condition with sufficient precision - as if that specification were done externally to the dynamics of any actual Galton box.
  • Causality, Determination and such stuff.
    For any given initial location X of the ball in a Galton box, there will be some positive number delta(X) such that a measurement of initial conditions with error less than delta(X) can predict the outcome with certainty.andrewk

    That's a possible feature of how the balls are input, no? If you have a robotic arm capable of placing balls to arbitrary precision - like we can on paper by specifying an initial condition - then that's going to hold. If the causal process that puts the balls into the Galton box does not, by design, constrain them in that manner - effectively evolving a volume of initial conditions forward through the box - then the output pattern is going to be close to binomial (on left vs right hole transitions) or approximately normal (on the horizontal coordinate of the box base), assuming the sample of initial conditions isn't really weird in some way.

    The contrast is between:

    (1) The Galton box is deterministic because there is a hypothetical arbitrary precision mathematical model of it that allows perfect prediction for every input trajectory that doesn't result in unstable equilibrium. Complete specificity of initial conditions. Does not actually occur in actual Galton boxes.
    (2) The actual operation of the Galton box doesn't have that. Vagueness of initial conditions - a distribution of them.



    The initial conditions of each ball in the Galton box are not specified to arbitrary precision, they're kinda just jammed in. So for the above Galton box and initial condition specification (kinda just jamming it in), and for any particular bead, it's true that we can't predict its trajectory. That sits uneasily with the hypothetical claim that we could if only it were specified to sufficient precision; that "if only" means we're no longer talking about the above box.
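
    Here's a sketch of that contrast in Python (numpy), using a toy deterministic bounce rule rather than any physical model - the ball goes left or right at each pin according to the binary digits of its initial position, so there's no randomness inside the box at all:

    import numpy as np

    rng = np.random.default_rng(2)
    n_pins = 12
    burn_in = 20      # read digits deep in the binary expansion, for sensitivity

    def final_bin(x0):
        """Deterministic rule: each bounce is a fixed function of the initial position."""
        x = x0
        for _ in range(burn_in):          # pre-iterate the doubling map
            x = (2 * x) % 1.0
        rights = 0
        for _ in range(n_pins):
            rights += int(x >= 0.5)       # bounce right iff the current digit is 1
            x = (2 * x) % 1.0
        return rights

    # (1) Initial condition specified exactly: the outcome is fixed and predictable.
    print("exact x0=0.371234 ->", final_bin(0.371234), "every single time")

    # (2) Ball "jammed in" somewhere within a hair's width of the same spot.
    jammed = 0.371234 + rng.uniform(0, 1e-4, size=20_000)
    bins = np.array([final_bin(x) for x in jammed])
    print("spread over 20000 jammed-in balls:", np.bincount(bins, minlength=n_pins + 1).tolist())

    Same rule both times; the roughly binomial pile in the second case comes entirely from nothing in the set-up actually fixing x0 to that many digits.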
  • Causality, Determination and such stuff.
    we can be quite sure that a third measurement won't be at 0.7T, for instance, unless something other than gravity was acting.Kenosha Kid

    So regarding the measurement error thing. Wanted to make my argument more precise.

    Two main points:
    (1) Laplace's demon does not take error terms' interpretations seriously.
    (2) The existence of error terms in a model breaks the deterministic relationship between the observed values of measured quantities in those models.

    Say you're testing a linear relationship; the theory says that the following relationship holds between two quantities:

    y = a + bx

    where x and y are both measurable in the lab. You do an experiment, and there's always individual level noise and measurement imprecision. In a situation like that, you modify the model to include an error term ε:

    y = a + bx + ε

    The relationship which is being studied is y = a + bx; the individual level errors are assumed not to be part of the causal structure being analyzed. Nevertheless, when you make the measurements, there is individual level variation. Its causal structure is unmodelled, it's assumed to be noise. But as part of the model that noise stands in for all other causal chains in the environment which influence the measurement.

    Perhaps I'm wrong in this, but I think that from the perspective of Laplace's demon, it's imagined that Laplace's demon knows y = a + bx holds, but must also know the entire causal structure that yields the ε to contribute to the measurements as they do. Laplace's demon knows why the ε that makes every measured pair (x, y) deviate slightly from y = a + bx takes the value that it does. But then Laplace's demon knows with complete specificity the behaviour of unspecified, unknowable causal chains. Being unspecified and unknowable is part of ε's role in the model structure. Such causal chains are not part of the causal relationship between x and y being studied, but they're part of the causal chain in the experiment linking the observed x and y.

    The status of that "unknowable, unstructured variation" is part of every model as soon as it ceases to be a theoretical idea and comes to obtain estimated parameters. What I'm trying to highlight is that the structure of interest - y = a + bx - loses its determinism (in the sense that intervention yields response) as soon as that ε gets involved. Even in the most precise measurements, it is possible that individual level variation explains the entire observed relationship; it can just be made vanishingly unlikely. That possibility mucks with characterising Laplace's demon's knowledge from how we use physical law; the demon is at best an unrealistic idealisation from it, one that forgets how the error term works. Every experimental model that involves an error term breaks the metaphysical necessity of the deterministic relationship contained within it, insofar as it purports to explain the observed data.
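
    A minimal sketch of that set-up in Python (numpy), with the linear form written as y = a + bx as above and every number invented:

    import numpy as np

    rng = np.random.default_rng(3)

    a_true, b_true = 1.0, 2.5      # the relationship the theory asserts: y = a + bx
    n = 200

    x = rng.uniform(0, 10, n)
    eps = rng.normal(0, 1.0, n)    # "individual level variation": everything unmodelled
    y = a_true + b_true * x + eps  # what the lab actually records

    # Ordinary least squares fit of the measured pairs (x, y).
    b_hat, a_hat = np.polyfit(x, y, deg=1)
    residuals = y - (a_hat + b_hat * x)

    print(f"fitted a = {a_hat:.3f}, fitted b = {b_hat:.3f}")
    print(f"residual spread = {residuals.std(ddof=2):.3f}")

    The fitted line recovers the "law" only up to the noise, and the residuals are the model's stand-in for every causal chain it doesn't describe - the part Laplace's demon is supposed to account for case by case.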
  • What's been the most profound change to your viewpoint
    Some ideas that stick with me from various places, all of them have changed how I've thought about stuff.

    Statistics: data generating process, model uncertainty, Box's quote ("All models are wrong but some are useful"), causal probabilistic models, interaction effects, population, model, indicator, generative model, network theory.

    Mathematics+Logic: formal language, flow, trajectory, pushforward and pullback, necessary and sufficient conditions, invariant, parameter.

    Psychology+sociology: nomological network, construct, operationalisation, dimensional emotion models, self concept, two systems theory, framing effect (and other cognitive biases), active inference, Ramachandran's experiments.

    Philosophy: self model, assemblage, population thinking, thetic/pre-thetic intentionality distinction, ampliative inference, the distinction between a statement of fact and the role it plays in a discourse, embodied cognition, transduction, individuation, condition of possibility, extended mind thesis, semantic externalism, simulacra, the idea that a difference can look different (or not even exist) depending upon what side of it you're on, regional ontology.

    Ecology+biology+systems theory: umwelt (Uexkull), developmental landscape, ring species, zonation (related to (de)territorialisation), feedback (and feed forward), good regulator principle, metastability, ecocline, preferential attachment.

    Quotes:

    Whitehead: "Every philosophy is tinged with some secret imaginative background, which never emerges explicitly into its train of thinking"

    Debord: "Everything that was once directly lived has receded into representation"

    Marx: "What chiefly distinguishes a commodity from its owner is the fact, that it looks upon every other commodity as but the form of appearance of its own value. A born leveller and a cynic, it is always ready to exchange not only soul, but body, with any and every other commodity, be the same more repulsive than Maritornes herself."

    Marx: "Since gold does not disclose what has been transformed into it, everything, commodity or not, is convertible into gold. Everything becomes saleable and buyable. The circulation becomes the great social retort into which everything is thrown, to come out again as a gold-crystal. Not even are the bones of saints, and still less are more delicate res sacrosanctae, extra commercium hominum able to withstand this alchemy. "

    Ted Hughes (Crow on the Beach):

    "He grasped he was on earth.
    He knew he grasped
    Something fleeting
    Of the sea’s ogreish outcry and convulsion.
    He knew he was the wrong listener unwanted
    To understand or help –

    His utmost gaping of his brain in his tiny skull
    Was just enough to wonder, about the sea,

    What could be hurting so much?"
  • Causality, Determination and such stuff.
    for instance, unless something other than gravity was acting.Kenosha Kid

    Exactly! The majority of the time something else is in play too. When we imagine a system, it's usually determinate - it has a system of equations that describe its evolution. Then you try and measure its parameters and that sneaky ε crops up at the end of the line. In the lab it's measurement error. Outside of controlled circumstances, it's pretty much everything. That epsilon is individual level variation.
  • Causality, Determination and such stuff.


    Wrote a comment about randomness as internal to systems a long time ago here.

    But epistemology isn't everything.Kenosha Kid

    Knowledge is everything to Laplace's demon. If such a state of knowledge is impossible in some circumstances, Laplace's demon couldn't function as described in those circumstances. We already know it is impossible in most circumstances.

    The difference is that states corresponding to some measurement (e.g. momentum) are multi-valued, which means that trajectories through phase space are also multi-valued.Kenosha Kid

    Say we've got the following system:

    (1) A time variable t ranging from 0 to T.
    (2) A normal distribution N(μ(t), σ(t)) whose parameters evolve deterministically with t.
    (3) A sampling operator S(t); what it will do is generate a sample from the distribution at time t.

    If we were to observe that process, the measurements would come from the sampling operator, not from the deterministic evolution of the distribution. Even if the evolution of the probability over time is fully deterministic, that doesn't tell us that the sample paths (which are the events which actually happen) arise from a deterministic law. It's a baby and bathwater thing - yes, the distributions of time varying random processes can evolve deterministically in time, but no, that does not entail that the events which happen due to them are determined. It's more apt to weaken "determined" to "constrained" regarding the observed events of deterministically evolving probabilistic laws, I think. A deterministic time evolution of a probability function still only constrains its realised sample paths. Laplace's demon has absolutely nothing to say about the sampling operator, only the time evolution of the distribution it acts upon.
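
    Here's that three-part system sketched in Python (numpy); the particular rule for the spread is just my filler, anything deterministic would do:

    import numpy as np

    T = 10.0
    times = np.linspace(0.0, T, 11)

    def sigma(t):
        """Deterministic time evolution of the distribution's spread."""
        return 1.0 + 0.5 * t

    def sample_path(seed):
        """The sampling operator: at each time point, draw one value from N(0, sigma(t)^2)."""
        rng = np.random.default_rng(seed)
        return np.array([rng.normal(0.0, sigma(t)) for t in times])

    # The distribution at every t is fixed in advance by sigma(t) - fully deterministic.
    # The realised sample paths are not: two runs of the sampling operator disagree.
    print("path 1:", np.round(sample_path(seed=7), 2))
    print("path 2:", np.round(sample_path(seed=8), 2))

    Same deterministic evolution of the distribution at every time point, different events at every time point.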
  • Causality, Determination and such stuff.
    That is what is questioned by the Del Santo article.Banno

    It's also true that a fair die has probability 1/6 of landing on each side. Randomness as a physical property of systems rather than an epistemic limitation on them is something people really resist. But I think this is tangential to the determinism spoken about in the OP article.
  • Causality, Determination and such stuff.
    If you couple this arbitrarily small but nonzero uncertainty to a chaotic time-evolution, it is true you cannot predict the outcome of an event. But it is true because you could not specify the initial conditions exactly. I'm at a loss as to what can we learn from this?Kenosha Kid

    One way of phrasing Laplace's demon is "If the initial state within an arbitrary system was completely specified, it would evolve into a unique state at any given time point". It's an implication; complete specification of initial state => complete specification of trajectory.

    But we live in a world where complete specification of the initial state is practically impossible for almost all flows over time. That is, it is a fact that they are not completely specified for the most part.

    So the determinism of Laplace's demon is a hypothetical; if complete specification of input state, then complete specification of output state.

    That determinism of implication doesn't hold for almost every phenomenon because we know it's practically impossible to completely specify the input state that led to its emergence. Its antecedent is false, so it is useless as an implication; it ceases to apply. What remains of Laplace's demon as an ontological thesis when it's rendered merely a hypothetical? We can't feed almost every system into its defining implication. So what systems are left?
  • Causality, Determination and such stuff.
    Isn't that right?Banno

    Think so! I was writing in the context of the claim: "If there were no Laplace's demon, then our deterministic models wouldn't work", the point being it doesn't matter for deterministic model functioning if there is a Laplace's demon or not.