Comments

  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


    If (a), the generation of this initial given (call it X) was then necessarily to some extent limited or bounded (hence, determined) by an end – for the sake of which it was generated – which, as end aspired toward, could not have been generated by God prior to God’s very first, intentional generation (i.e., his generation of X). Here, then, God was himself to some degree limited or bounded (determined) by his actively held intent (telos or goal or aim), an intent held by him which he did not create and which he did not instantaneously realize. Therefore, God was not - and thereby is not - omnipotent.

    I do agree that analytic definitions of the God of classical theism are contradictory, but I wasn't able to follow this reasoning.

    It seems to me that:
    1. If God is omnipotent then God can do anything God wants to do.
    2. God only does the things God does want to do.

This pair of claims is totally consistent with omnipotence as classically defined.

If I follow, you're saying:
    1. What God does do is determined by God's desires.
    2. God's desires are properties of God, and such properties are necessary.
    3. God didn't create God's properties, so God is constrained by God's uncreated desires, which cause God to only do what God wants to do.

    Another way to phrase this is to say that God's omnibenevolence contradicts God's omnipotence by acting as a constraint on God's actions, since God can only perform good acts. Since God's property of omnibenevolence is necessary, this precludes God from some actions.

    This has generally not been taken as a true contradiction because an agent's only doing what that agent wants to do doesn't seem to constrain what an agent is metaphysically capable of doing.

But we can reject that counterargument. However, this example only seems to outline problems with the coherence of the definition of omnipotence in play, and we don't need to reject that solution to pose the same problem in other terms.


    Consider:

    "If God is omniscient then God cannot forget anything and cannot create a truth that God does not know. Thus, God is constrained and not omnipotent."

    Or:

    "God can/cannot create a rock so heavy that God cannot lift it."

Plantinga argued that these turn out not to be real contradictions. The first is logically equivalent to "if there is a truth, God knows it." The second is logically equivalent to "God can lift all rocks." God only doing good things based on God's desires is equivalent to "all of God's actions are good and God only does what God wants to do," which is the same as "God is omnibenevolent and God can do or not do anything God desires."

    I don't see how God having necessary/uncreated desires contradicts "God can do or not do anything God wants to do," which is the definition of omnipotence.

However, I think there is indeed a real problem, and it's one of self-reference. Any proposition stating a truth about what God does or doesn't do entails some constraint on what God does or doesn't do, but the trait of omnipotence is supposed to mean that God faces no such constraints. Omnipotence itself refers to control over the truth value of all propositions, but the excluded middle implies that no such control can be absolute. I don't see how that's relevant to the OP though. The God of philosophical theism is a weird entity dreamed up by the constraints of analysis, not the only possible conception of the divine.
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


    By propensity I mean the propensity interpretation of probability: https://plato.stanford.edu/entries/probability-interpret/#ProInt

    Although the logical interpretation may be more apt for the original example.

    Exactly. Frequentism is what underlies actualism, a form of determinism.

    Imagine that you're rolling a die at a craps table. You'll say that the 5 has a 1/6 chance of appearing face up. This is an assessment of logical possibility. We have to be careful about what we say after the die has landed. If it was a 5, we know it's possible that the 5 could appear face-up because it did! But could the 2 also appear face up? Logically, you can't have more than one side of the die face up. If the 5 appeared, it isn't possible for any other number to be face-up. So what happened to the other possibilities? Where did they go? What exactly are those other possibilities?

    One way to look at it is to say those other possibilities are information we possess about how the universe works. We use that information to make predictions. But we can back off of imagining that those other possibilities have some ontological implications. They don't. They're just the result of our analysis.

This is just the axiom in temporal logic that whatever has already happened has necessarily happened. It in no way entails that future events are necessary. And it doesn't entail that probability is frequency. Anyhow, if probability IS frequency then probability is NOT subjective in any case; it's not about "our information," but a fact about the world.

But for probability to be fully synonymous with frequency it seems like you also need eternalism, the claim that all events already exist at all times, so that the probability of an event's occurring can be based on its frequency throughout all times. Why? Because before an event has occurred at least once, such a view, sans eternalism, would be stuck saying the probability of that event was 0, since it has never shown up in a population before. But then the probability somehow changes to 100% upon the outcome's first occurrence. However, we generally say that if a thing occurs with probability = 0 then it is contradictory to say it also occurs. IDK, there could be a workaround here but I imagine it'd be convoluted.
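
    As a toy illustration of that worry (my own sketch in Python, not anything from the thread): a strict "probability = observed relative frequency" rule assigns an unprecedented outcome probability 0 right up until its first occurrence, and then the estimate jumps discontinuously.

    ```python
    import random

    random.seed(3)  # arbitrary seed so the run is repeatable

    outcomes = []

    def freq_estimate(event):
        """Relative frequency of `event` among the outcomes observed so far."""
        return outcomes.count(event) / len(outcomes) if outcomes else 0.0

    # Watch the "probability" of rolling a 5 sit at 0.0 and then jump the
    # moment the first 5 appears in the record.
    for roll in range(1, 11):
        outcomes.append(random.randint(1, 6))
        print(f"after roll {roll}: P(5) = {freq_estimate(5):.2f}")
    ```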

Frequentism does not entail eternalism though. There are plenty of ways to embrace frequentism and not rope yourself into determinism and eternalism. Generally, frequentism is explained in terms of possible worlds for this reason, or it is represented as merely an epistemological method for discovering propensities.

So sure, probability is frequency and future events are necessary if you take those claims as axiomatic, but I don't think there are good reasons to accept such propositions, because I have never observed anything to make me think that future events exist before they occur.

When we say that a bullet to the head has the potential to cause brain damage, this reflects experience with brains and gunshot wounds. It's fully possible for a person to receive a GSW to the head and suffer no brain damage. It happens all the time, especially in suicide attempts where they just end up blowing their faces off. Again, you have to take it case by case.

This is dancing around the point though. Are you aware of any cases where a .50 BMG round passed through the brain of an individual and they didn't suffer brain damage? Is there a single case where a relatively large solid object goes through the brain and there is no biologically significant result? It's prima facie unreasonable to claim that, if such an event occurred and was well documented, the medical and scientific community would simply shrug and say, "well, there are outliers out there, all we can know is probabilities." Same thing if someone one day walks through a solid wall or begins floating through the air. Cause is there even if there is an attempt to banish it to the background.


Anyhow, in your view is it possible to meaningfully talk about the probability that Biden wins the 2024 election? Does it make sense to say that aggressive anti-Chinese rhetoric by US politicians increases the probability of war? Or, because these are one-time events, is it impossible to say anything about them?
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


Knowledge is about something, no? So it's necessarily tied to ontology.

    "Correlation does not imply causation," does not imply that causation doesn't exist. Medicine does not say, "smoking doesn't cause cancer, bullets to the head don't cause brain damage, etc., all we can know is that previous samples of groups of people who have been shot in the head have a higher incidence of brain damage."

The entire reason you go out and compare the mean incidence of lung disease for smokers against the mean in some control population is because you think there is something about smokers that gives them a greater propensity for developing lung disease. Even eliminativists re: "cause" allow that a complete description of a phenomenon will show how past events evolved into future ones, i.e., why the group of smokers tended to end up with lung disease more often.

    There are all sorts of ways to explore cause, do-calculus and the like, which are employed heavily in medicine.

If you don't believe in propensities, then you have absolutely no grounds for defining the classes whose frequencies you compare in many cases. Take your example: if I notice smokers have higher rates of lung disease, why shouldn't I just assume that the frequency with which "all people" get lung cancer is actually higher than I thought? Why posit smokers as a class?

    In the sciences, classes are often defined by frequencies of some observed variable themselves. If I flip a coin and it comes up heads 100 times in a row, and I don't believe in propensities, then I should just say that the probability of a coin coming up heads has changed, rather than positing that the coin is rigged. Indeed, what grounds would I have for saying the class of rigged coins and the class of coins are two different classes?
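
    To put rough numbers on the coin example (a sketch of my own, with made-up figures): under the hypothesis that the coin belongs to the ordinary "fair coin" class, 100 heads in a row has probability 0.5^100, and it's that vanishingly small number, not the bare observed frequency, that licenses positing a separate class of rigged coins.

    ```python
    # How surprising is a run of 100 heads if the coin really is fair?
    p_run_given_fair = 0.5 ** 100
    print(f"P(100 heads | fair coin) = {p_run_given_fair:.3e}")  # ~7.9e-31

    # A strict "probability = this coin's observed frequency" reading would
    # instead just report that its probability of heads is now 100/100.
    print("observed frequency of heads:", 100 / 100)
    ```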
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


I think probability can be taken one of two ways: it's either an assessment of some number of iterations (so we toss the coin 100 times, it comes up heads once, so we say it has a 1% chance of coming up heads). This assessment has to be considered in the light of the data from which it came.

    The other way to assess probability is to examine the logical possibilities. Look at the coin and determine how it's weighted. If it's evenly weighted, there's logically a 50% chance it will come up heads.

Exactly, the former being frequentism and the latter propensity. There is also subjective/Bayesian probability.

    Frequentism has problems with all one-off events. What was the probability of Donald Trump winning the election in June 2016? If probability is frequency then it was already 100%. But then what is the chance that Joe Biden wins in 2024? Does it not exist? Do probabilities only exist for one-off events after the event? Or are we forced to posit eternalism, that all events exist eternally, so that there is some frequency for one-off events we can reference?

And what does this say about descriptions of quantum mechanics that are inherently probabilistic? At the start of the universe, T0, no quantum events had occurred. So there existed no frequency through which to define quantum system probabilities. And yet, presumably, we think the universe had physical laws from the beginning.

    More importantly, we generally don't think that past frequency, of itself, possesses causal physical powers. We don't say a coin flip is 50/50 because past flips have been so. A coin flip isn't "50/50 because the frequency of coin flips is 50/50," that's a vicious circle.

We say a coin flip has these probabilities because of the attributes of coins. But in that case, frequency is just a useful way to observe propensities and discover them, in which case it is absolutely fine to apply probability to one-off events. And indeed cosmology would be impossible otherwise, as nothing could be said about the likelihood of different hypotheses.

We can't use the iterative form of probability either, because by definition, the universe is a one-off. However it is, it had a 100% chance of happening that way because the assessment is 1/1.


How does this not apply to all natural phenomena? Every event we observe only occurs at one time, in one place, in one way. I don't see how it doesn't generalize. Sure, you can claim that some phenomena belong together in some sort of relevant equivalence class, but at the same time there is always the counterargument that you're looking at the wrong type of equivalence. If you say all coin flips belong in the same class then it seems to me like you have to beg the question and assume that the universe behaves the same way vis-à-vis flipped coins at all times, in all places, otherwise the class wouldn't be valid.

    Generally, we go in the reverse order. We see that coins have attributes such that, wherever we flip them, they come up 50/50, and assume their properties cause this distribution. Invariance across space and time for multiple classes then justifies the idea of "physical laws."


    When we saw that the curvature of space and the conditions in places in the universe that were very far away from ours seemed very unlikely given an eternal universe, we developed the Big Bang Theory. Over time, a great deal of evidence was gathered that supports the Big Bang Theory. But by your logic, I don't get why we shouldn't have seen the facts that caused us to posit the Big Bang in the first place, shrugged, and said "probability can't be applied to cosmology, whatever universe exists, exists with p=1, so there is actually nothing to explain here in terms of likelihood." And I don't see how this stops at just cosmology.

How is the analogy to the Boltzmann Brain problem not apt? You could use the same counter for that problem and say: "thermodynamics isn't really about probabilities because there is actually just one universe that has one series of microstates, not many possible microstates. We are either merely a Boltzmann Brain or we are in a legit Boltzmann Universe, it is one or the other with p=1, because there is just one universe. Thus, the mere Boltzmann Brain isn't actually more likely than the Boltzmann Universe."

But if you buy that, I don't see how it doesn't generalize to all arguments from statistical ensembles, making the entire scientific enterprise invalid. Every paper using statistics, every significance test would be bunk. Frequency can't tell you that two samples are different unless you believe that differences in frequency can be defined in terms of something other than just the frequencies you happen to observe.
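
    For what it's worth, here is the kind of ensemble reasoning I have in mind (a toy sketch of my own, with invented incidence numbers): a permutation test comparing two observed rates only means anything if the group labels are taken to pick out classes with different underlying tendencies, rather than just being bookkeeping on the frequencies already seen.

    ```python
    import random

    random.seed(0)

    # Invented data: 30/100 cases in group A, 12/100 in group B.
    group_a = [1] * 30 + [0] * 70
    group_b = [1] * 12 + [0] * 88
    observed_gap = sum(group_a) / 100 - sum(group_b) / 100

    # Permutation test: shuffle the pooled outcomes and see how often a random
    # split of the same 200 observations produces a gap at least this large.
    pooled = group_a + group_b
    hits = 0
    trials = 10_000
    for _ in range(trials):
        random.shuffle(pooled)
        gap = sum(pooled[:100]) / 100 - sum(pooled[100:]) / 100
        if gap >= observed_gap:
            hits += 1

    print("approximate p-value:", hits / trials)  # tiny: the split looks non-random
    ```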


    I will grant though that the argument is more compelling if you accept that the universe can be explained mathematically, and more so if you believe the universe and its component parts essentially are the mathematical object that describes it.
  • Why should we talk about the history of ideas?


    I never said all historical arguments can be justified in this way, and certainly not that they're all good. You can use any method to make bad arguments. I said they make sense in certain contexts.

I'm not going to defend the idea that "people used to believe x," is itself an argument, although it might be interesting and tangentially related to an argument. But that hardly means all arguments from history lack weight or relevance unless you invoke some sort of overarching project vis-à-vis the history of ideas.

Use the frequentism example. You can't just "argue on the merits of Bayesianism or propensity," if your interlocutors are firmly entrenched dogmatists who keep saying "but look, frequency IS probability just like a triangle is a three-sided shape. It's what the word means, it's an analytical truth." Something has to be done to address the foundations of the dogma. This is particularly true for ethics, where, for example, it used to be the norm to support nonvoluntary, painful medical treatment to "cure" homosexuality. You'll note that people often refer back to earlier treatment of homosexuals when addressing contemporary issues with transgender individuals because it makes for a good argument from analogy as well (another reason to bring up history). People have a very hard time seeing past their dogma, that is the nature of dogmatists, but a trip through history can show how the seemingly necessary (e.g. probability defined as frequency) is actually contingent.

    No doubt it also helps for emotional appeal that the main champions of frequentism as dogma were eugenicists; it's logos and pathos and ethos after all.

Or take: "n/0 has to be undefined or else bad things will happen." This could be met with "no, n/0 = ∞, and genius polymaths x, y, and z agreed. But more importantly, people did math fine all the time back then despite the problems you listed, so clearly it isn't the problem you say it is. DAX and other popular data analysis languages use n/0 = ∞ for legit reasons. Take the limit of 1/0.000000...01 and tell me what it is!"

I'm not going to defend the position that n/0 = ∞, but obviously there is a pragmatic argument that can be bolstered by the history of making division by zero undefined, because it shows that the problems being fixed don't really affect many applied uses of arithmetic anyhow, and even that n/0 = ∞ was better for some applied use cases.
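
    On the "take the limit" point, a quick sketch (mine, not the original poster's): the quotient 1/x grows without bound as the divisor shrinks toward zero, and IEEE 754 floating point actually adopts the n/0 = ∞ convention for nonzero n. Python's built-in float raises ZeroDivisionError instead, but numpy, if you happen to have it installed, follows the IEEE behaviour and just warns.

    ```python
    # The quotient blows up as the divisor approaches zero from above.
    for k in range(0, 301, 60):
        x = 10.0 ** -k
        print(f"1 / 1e-{k} = {1.0 / x:.3g}")

    # IEEE 754 division by zero returns signed infinity for nonzero numerators.
    import numpy as np

    print(np.float64(1.0) / np.float64(0.0))   # inf (with a RuntimeWarning)
    print(np.float64(-1.0) / np.float64(0.0))  # -inf
    ```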
  • Why should we talk about the history of ideas?


    There's plenty of reasons to go into the history of ideas. Off the top of my head:

    It's a good way to rebut appeals to contemporary authority or appeals to popular opinion. Granted, such appeals generally appear on lists of common fallacies, but that doesn't negate the fact that they still carry weight in many contexts.

    It's also a good way to argue against dogmatic view points in some contexts. If I'm arguing for x, and my interlocutor's response is that x cannot be true because of y, where y is some widespread, dogmatically enforced belief that I think is false, then it makes perfect sense to explain how y came to be dogmatically enforced. For one, it takes the wind out of appeals to authority and appeals to popular opinion if you can show that the success of an idea was largely contingent on some historical phenomena that had nothing to do with valid reasons for embracing that idea.

There might be valid reasons for supporting x. But the evidence for x might also not be particularly strong. Perhaps, in our opinion, the evidence for x is far weaker than that for y, z, or even q. Yet if x has somehow contingently won approval from relevant authorities and/or popularity for some historical reason, this is necessarily going to influence how people judge x against other competing positions.

    That's just how people are, and not without reason. If 99 doctors out of a 100 tell you x is absolutely true, you should look twice at your reasons for buying into y before you eat the horse dewormer or whatever. But sometimes dumb ideas also get very popular.

    The book Bernoulli's Fallacy is an excellent example of this sort of argument. It demonstrates some core issues with frequentism, but it also spends a lot of time showing how frequentism became dominant, and in many cases dogmatically enforced, for reasons that have nothing to do with the arguments for or against it re: statistical analysis.

The history of an idea can also show where a tradition went wrong in ways that simply looking at where the tradition is today can't.
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


There is an echo here of other threads - you seem to presuppose time. If the next moment everything is different, there is nothing to say that it is 'next', rather than any other configuration where everything is different. Time has no meaning unless there is continuity and change, that produces 'succession'.

    You don't need to presuppose time, or strictly four dimensions. I mentioned Floridi's maximally portable ontology thinking of just this objection, but avoided going into detail because I figured it'd make the post too long.

Of course a universe doesn't need "time," but it needs difference. Imagine even the simplest toy universe, consisting of just a one-dimensional line. Obviously points on the line have to vary from one another in some respect (their coordinates) or else you have no line. Such universes also vary in length unless there is some reason they are necessarily infinite; you can have discrete or continuous models as well. But for any of them to contain any information, for them to describe anything, you need variance between somethings, otherwise everything is indistinguishable from everything else, making such a universe contentless. Even a point can't exist as a point if it isn't a point relative to some other point or a coordinate system.

    Time is the dimension over which change occurs in our observable three dimensional universe. But we can posit n dimensions and the problem doesn't change. It doesn't collapse if we extrapolate from the Holographic Principle and suppose our world is two dimensional, nor does it go away if we posit all the dimensions of M Theory. I simply use time because it's more familiar and the way we commonly define physical laws due to how we experience the world, and because the world that we exist in obviously does have time.

    Plus, any observer looking at n dimensions might really be in a reality where more or fewer observable dimensions exist depending on where you are in that reality, even us: https://journals.aps.org/prd/abstract/10.1103/PhysRevD.21.2167

    Anyhow, it seems to me like the idea that time exists shouldn't be controversial when discussing empirical arguments about the world.
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


    You seem to be drawing probability (with the word 'unlikely") and possibility ("necessary") into it.

Yes, because there is a connection. Take the normal argument for Fine Tuning. If the constants of our universe and its initial entropy are such that the odds of their occurring are significantly less than 1 in 10^10^123, then it doesn't make sense to assume such things have occurred by chance. You don't bet against a coin that has come up heads for 5 hours of flips, because it is obvious that the coin isn't fair given the result. Hence, the Fine Tuning Argument has been taken seriously to date.

The counter to the Fine Tuning Argument is this: "sure, our world looks unfathomably unlikely. This seemed even more true back when we thought the universe was eternal and that we lived in a Boltzmann Universe (i.e., a universe where, due to incredibly unlikely random thermodynamic fluctuations, everything moved just so, so as to create the visible universe out of heat death). However, we keep learning more about the world. For example, we developed the Big Bang Theory, which gets around the Boltzmann Universe's problem. Perhaps we can fully explain exactly why physical constants have the values they do and why entropy was so low in the early universe. Problem solved, Fine Tuning will get explained."

My point is that the argument above still fails even if you appear to have such explanations, and even if it seems like you can define our universe with mathematical certainty. Why? Because there are combinatorially, unfathomably more ways that a mathematically describable universe could come to briefly appear to be the object you think you've discovered when creating such a "complete physics," and yet actually be some sort of different universe with different laws, or, much more likely, no laws at all. Unless you can prove the necessity of the laws, underdetermination makes it more likely that you're actually in a universe that lacks such laws, and that this will be revealed at any moment as order breaks down.

    Given that this doesn't happen, that the coin always comes up tails, and given that we reject that we just sprang into existence, we seem justified in assuming some sort of selection process or rational principle that is ontologically primitive at work in reality. Various conceptions of God fit this role.

Science assumes the world is rational because it must. When a bedrock theory is falsified by some observation, we don't just declare "ugh, guess it was another Humean miracle." We assume that we either had something wrong originally, that we got the observation wrong, or that such an event is explained by a deeper law. However, this assumption isn't based on any necessity, since even if any N-dimensional universe can be described mathematically, that in no way entails that information about any partial slice of said universe should let you know anything about what other parts look like. However, a law-like universe is exactly the type of object where data about a slice of it tells you everything about the whole (or at least brackets what the whole can look like probabilistically).

    The multiverse does not solve this problem at all. Indeed, I'd argue that it makes it significantly worse by making structural realism more compelling. You're trading the very low likelihood of physical constants having the values they do for the even lower likelihood that the combinatorial possibilities for universes that exist in said multiverse just happen to be those that are governed by this sort of law.
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


    I really don't see how that follows. If the universe develops teleologically why does that entail that God is guided by the same goals? I don't even see how this necessarily applies to God's immanent activities and properties.

B seems to imply that having goals necessarily implies a lack of agency. I don't think I follow. Surely one isn't free if one's behavior is arbitrary. The ability to rationally develop one's own goals and the ability to have second and nth order goals about one's own desires are both generally taken as prerequisites for freedom. How does this not rule out all free will? If it does, why does doing what I want to do entail a lack of freedom?
  • "All reporting is biased"


    ProPublica has had pretty good coverage of wage theft and labor issues in general. Their finding that US employers steal more money from employees each year by not paying them for hours worked or for legally required overtime than the combined value of all the thefts and robberies in the country each year made the rounds on NPR and the larger papers.

    https://www.propublica.org/topics/labor

The problem is that this sort of coverage just gets slammed as "left-wing propaganda." And, to be fair, I have also seen interest pieces about the struggles of business owners that seem to have real merit slammed as "right-wing propaganda," although less often.

Actually, government reporting, for example from the Consumer Financial Protection Bureau, tends to put out some of the best stuff. The CFPB had a whole big report on hidden fees and dishonest pricing showing that the median American household pays a quite meaningful share of its income to companies that have tricked it into handing over its financial information so the company can essentially steal from the family, taking its money in "exchange" for goods it had no intent to buy or marking the price of a service up by over 100% with hidden fees.

    It always cracks me up how American culture basically teaches you that it's your responsibility to not allow large companies to steal from you. Just one example of a good report I've seen from the Feds recently. I think the vast majority of the public hates this behavior, which is endemic, and would love to see something done about it. The problem is that media companies themselves use these practices...



They aren't all on the same level. The argument that they are is also self-undermining. If I believe everything I read is as biased as North Korean state media, why should I believe a person when they say that everything is that biased? And why bracket it to the amorphous category of "journalism"? Scientific journals wield plenty of influence, and they are influenced by politics, so shouldn't we include them too? Yet I wouldn't put flat earth websites on a level with geology textbooks.

Some outlets allow more editorialization in their news than others. Some allow a wider selection of voices in their editorial spaces than others. For example, back when Tom Ashbrook led NPR's "On Point," he used to have people from conservative think tanks (e.g., Cato, AEI, etc.) probably more often than liberals (although he often paired them together). Outlets can also be very biased in what they cover, even if the coverage of what they do cover meets some standards of rigor. It's a gradation.

    A lot is done to muddy the waters, but there is indeed a stark difference between real journalistic enterprises and those, often state run (or those run at a loss by the very wealthy), that are run solely in order to advance the interests of a single group, and which have no qualms with simply making up and publishing falsehoods.

    The waters just get intentionally muddied by propagandists. If you can't make yourself more credible, it helps to just make everyone else less credible.


There are also many different kinds of bias. People tend to think of the big picture "left vs right," divide, but in different sub-areas there are all sorts of different splits that can become very heated and lead to bias in coverage. For example, there are allegations of academic reporting bias in niche publications over language acquisition methods, or there used to be a good deal of bias and censorship vis-à-vis alternative theories in quantum foundations until the late-90s. These are real divisions that lead to censorship and bias, but the left-right divide doesn't map to them at all as they tend not to be politically salient arguments. That doesn't stop people from being fanatic about them though (e.g., people losing their shit over "whole language" vs "phonics," when it seems fairly obvious to most people that kids learn to read using both).
  • The Argument from Reason


    Incompatibility makes it impossible to have immutable axioms which would be applicable to all systems.

    Absolutely. That's why the pivot is to just think in terms of all the possible coherent systems. There isn't one set of immutable axioms but rather a landscape of systems as your new fixed objects. At least that's how I've seen the conception developed in some cases.

    But, as I understand it, while numbers tend to get grounded in quite abstruse work within set theory that there is less general confidence in, they can also be grounded using category theory. Barry Mazur has some relatively approachable stuff on this, although I certainly don't get all of it.

Timelessness remains either way; mathematics is eternal, not involved in becoming, in most takes at least. This, I think, may be a problem. Mazur had an article on time in mathematics but it didn't go that deep. But I recently discovered Gisin's work on intuitionistic mathematics in physics, and that is quite interesting and sort of bound up with the philosophy of time. The Nature article seems stuck behind a paywall, but there is this Quanta article and one on arXiv.

    https://arxiv.org/abs/2011.02348

    https://www.quantamagazine.org/does-time-really-flow-new-clues-come-from-a-century-old-approach-to-math-20200407/
  • The Argument from Reason


    That's what makes it reductionist. You can set aside the first person perspective, and with it, the reality of existence, by treating it as a model, or a board game, as if you were surveying the whole panorama from outside it - when you're actually not.

    Exactly. However, the problem of whether or not consciousness can actually be fit into such a physicalist model, which is something such models need to be able to explain if they are to be satisfactory to most people, seems like a separate problem from the one Lewis is pointing out. If we assume that physical systems, as described per physicalism, can indeed produce first person experience, then Lewis' argument doesn't seem to work.

    I don't think abstraction is a particularly hard problem for the physicalist either. Tropes and universals can be described in mathematical, computable terms.

So my point would be that the argument just doesn't seem to add much. Obviously it is true that physicalism is deeply broken if it is unable to ever explain the most obvious fact of existence, first-person experience. No extra argument is really needed if you can prove that an ontology has a giant "the world we experience" hole in it. However, since no system can currently "explain everything," and since plenty of previously mysterious phenomena have successfully been explained in physical terms, I don't think this is a KO of physicalism either. It certainly doesn't work the way arguments that supervenience, as presented in popular forms of physicalism, is incoherent work. If accepted, those do seem to "KO" at least popular varieties of physicalism. That seems to be the type of argument Lewis is going for, but I don't think it works.

    Have you ever encountered Bertrand Russell's A Free Man's Worship?

    No, I generally tend to steer clear of primary sources for Russell, at least on that sort of thing, because I find him to be one of the most uncharitable, self-assured philosophers out there and it rubs me the wrong way. I'm familiar with the vision from Stace's "Man Against the Darkness," though. I find it sort of funny in a way, because for the Stoics and many early Christians the fact that the world did move in such a law-like way was itself evidence of the divine Logos, not an argument against the divine.
  • The Argument from Reason


Sure, computation has been operationally defined since Turing's "On Computable Numbers With an Application to the Entscheidungsproblem," and Church's introduction of the Lambda Calculus, and their findings re: the Church-Turing Thesis were later extended conceptually to physical systems more generally with the discovery that relatively simple cellular automata could simulate a Universal Turing Machine. The computable numbers, as Turing says, "may be described briefly as the real numbers whose expressions as a decimal are calculable by finite means."
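
    As a concrete anchor for the cellular automaton point (my own minimal sketch, not anything from Turing or the thread): Rule 110 is the standard example of a very simple elementary cellular automaton that was later proven capable of simulating a universal Turing machine.

    ```python
    RULE = 110
    # Map each 3-cell neighbourhood, read as a number 0-7, to the next cell state.
    TABLE = {i: (RULE >> i) & 1 for i in range(8)}

    def step(cells):
        """Advance one generation; `cells` is a list of 0/1 with fixed 0 boundaries."""
        padded = [0] + cells + [0]
        return [TABLE[(padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1]]
                for i in range(1, len(padded) - 1)]

    # Evolve a single live cell and print the characteristic Rule 110 pattern.
    row = [0] * 31 + [1]
    for _ in range(16):
        print("".join("#" if c else "." for c in row))
        row = step(row)
    ```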

    However, an operational definition is not what is wanted when discussing the ontological underpinnings of the universe in the same way that explaining matter as "the amount the spring on a spring balance stretches when something is placed upon it," fails to adequately explain matter in all its aspects.

In any event, even the operational definition is widely acknowledged not to be fully rigorous, as the term "algorithm," on which the Church-Turing thesis hangs, itself lacks a rigorous definition.

The problem gets even more dicey when one starts talking about the physical instantiation of computation. There is considerable debate and myriad different camps vis-à-vis the question of how to define computation in this respect. Central to this debate is the problem that, if pancomputationalism is true, and every physical system is a computer, then the core thesis of computational theory of mind, that the brain is a computer, becomes trivial (this is why pancomputationalists have jumped on IIT, although to my mind I don't get how IIT doesn't imply panpsychism, which Tegmark at least seems prepared to bite the bullet on).

On the other hand, permissive mapping accounts of physical computation allow virtually every system to be "computing" every possible computable function (Putnam and Searle make this argument), making the concept of computation itself trivial. I for one can certainly see how these issues can motivate people to cast in with semantic explanations of computation, even if I'm not ready to join them.

There is certainly no widely accepted definition of what makes a physical system a computer, and in this aspect the current operational definition itself is sorely lacking. I am not aware of many attempts at a theoretical definition, maybe because Leibniz seems to have done as good a job as possible off the bat? Although I also think the shadow of Platonism in mathematics makes it difficult to work on defining an abstract process that necessarily takes "steps" (time) and which is itself defined by recursion (e.g. Gödel's early work on computation).



    Sure, check out his original paper "On Computable Numbers With an Application to the Entscheidungsproblem." The term "computer" in Turing's day referred to a person who computed figures for their job. He makes specific references to people in crafting the idea of a TM. For example, the requirement that the machine's memory be finite is justified by "the fact that the human memory is necessarily limited." Turing is specifically idealizing what a human being does when "computing" figures with a pen and paper. For more detail you can also check out: https://plato.stanford.edu/entries/church-turing/#MeanCompCompTuriThes

    I agree with the keys and streetlight metaphor but I also think pancomputationalism does get at an essential element of how the world works. I am just not convinced that Turing's definition is appropriate for what we wish to describe, in part because continua do appear to exist in physics, although a fully discrete universe certainly hasn't been ruled out.

    It's worth noting that Turing's claim is not that "anything that follows law-like behaviors or instructions must be computable." His claim is merely that a Universal Turing Machine can compute all functions that any Turing Machine can compute and this statement is paired with an argument for why the UTM is a good definition for effective computation, while this definition is bolstered by providing an example of an uncomputable number. This does not imply that there cannot be machines that can do things UTMs cannot do.

The reason I mention the above is that, even if it were satisfactorily confirmed that some elements of physics are surely uncomputable via a UTM, I do not think this would be a death blow to pancomputationalism. Rather, we might be able to adopt some sort of new formalism to describe such uncomputable law-like behavior, and we'd likely give it some sort of new name like "super computation," although hopefully it'd be something more clever than that.
  • The Argument from Reason


    Quite a few pancomputationalists also seem to embrace ontic structural realism, that the universe is the mathematical structure describing it, so I'm not even sure if it makes sense to talk about a physical computer and a non-physical program, as the distinction seems to collapse.

    I think the whole concept of pancomputation suffers from the fact that computation itself is poorly defined. The most common explanations make reference to "what Turing Machines do," because that's the easiest way to describe computation, but then Turing Machines are themselves an attempt to define what human beings do when carrying out instructions to compute things. But then human consciousness is also explained in terms of computation, making the whole explanation somewhat circular.
  • The Argument from Reason

I feel like Platonism is so heavily ingrained in mathematics that even those trying to run from it can find themselves simply lapsing into it from another direction. Part of this has to do with specialization in academia, IMO. If, as a scholar focused on mathematical foundations, you're generally not able to do much work or teaching in other fields, the fields where a Platonist would say numbers are instantiated, then your field, by definition, places you in a silo where your experience of mathematics is necessarily "floating free of the world."

    Rather than a set of immutable numbers, which seems less defensible today, we can have a set of possible, contextually immutable axioms, which define a vast, perhaps infinite space of systems. The truths in the systems are mutable, because there are different systems, but then there is a sort of fall back, second-order Platonism where the existence of the systems themselves, and relations between them, are immutable.




    To go back to Bateson's initial quote, what would a numberless measurement of length, for example, be?

    Couldn't this be accomplished by simply referencing objects' extension in relation to one another? Indeed, this is how our measurement systems tend to work. We take an arbitrary phenomenon and use it as a base and describe other phenomena in terms of their relation to the base. I wouldn't agree that a ratio is essentially a number either, as a ratio is necessarily a comparison between things, be they discrete entities or parts of a whole.

    But more to the point on animals having some ability to conceive of numbers, I'm not sure if that demonstrates too much in either direction. Human nature seems to produce a strong tendency to want to think of things in terms of discrete objects. We have some good reason to think this tendency is the result of evolution, since it causes a great deal of difficulty in trying to conceptualize how the world appears to actually work at very large or very small scales. That is, the discrete object view appears to work only at the scales relevant to evolution. It also makes it hard for us to conceive of continua, hence the endless appeal of the Eleatic Paradoxes. However, mathematics also shows us that this conception of numbers is much shakier than was originally thought. I feel like there is support for the supposition that the illusion of discreteness is just a useful survival trick as much as for the idea that innate numeracy denotes the existence of numbers "out there, sans mind."

    how can one have numbers in the complete absence of discrete amounts of givens - i.e., of quantities?

    Imagine a continuum, for example a line, of finite length. Our line has an uncountably infinite number of points but also a finite length. Take some section of the line, arbitrarily, and compare how many lengths of the section fit within the whole. There are sections of the line that exist such that the line can be broken into n segments of equal length, where n is a natural number. No initial discreteness required, right? All that is required is that the points of the line differ from each other in some way; then we can define this difference in reference to a given segment's length relative to the whole to produce numbers for a coordinate system.
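
    A toy version of that construction (my own sketch; the lengths are floats purely for convenience, which is admittedly already a numeric idealization): pick an arbitrary sub-segment as the unit and "measure" the whole by repeatedly laying the unit along it and seeing how many copies fit.

    ```python
    def times_it_fits(whole, part):
        """Count how many end-to-end copies of `part` fit inside `whole`."""
        count, remainder = 0, whole
        while remainder >= part:
            remainder -= part
            count += 1
        return count, remainder

    line = 1.0        # the whole line, with arbitrary (unitless) extent
    section = 0.25    # an arbitrarily chosen sub-segment of it

    n, left_over = times_it_fits(line, section)
    print(n, left_over)   # 4 copies fit with nothing left over: the "number" 4
    ```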

    I've always found the reverse argument more interesting, the claim that numbers are essential for reality, or at least our understanding of it.

1. We ought to have ontological commitment to all and only the entities that are indispensable to our best scientific theories.
    2. Mathematical entities are indispensable to our best scientific theories.
    3. Therefore, we ought to have ontological commitment to mathematical entities.

That's a brief summary of the Quine-Putnam Indispensability Argument. In response to this, I know some folks actually have made some headway in describing areas of physics without reference to numbers, although it isn't exactly pragmatic to do so. Anyhow, if some hitherto unformulated version of logicism is true, and numbers are reducible to logos, it seems to me like this argument is moot (and that the concept of logos spermatikos ends up beating out divine nous as a better explanation of "how things are," IMHO).
  • The Argument from Reason


I meant to respond to this when the thread first came out because I am working on a different sort of argument from reason. I do not think this argument works against common, highly nominalist versions of reductive physicalism.

In most versions of physicalism, which tend to embrace the computational theory of mind (still seemingly the most popular theory in cognitive science), a belief is just an encoding of the state of the external environment. This encoding exists within a system that can be defined as an agent. Agents need not be conscious; they simply need goals and a set of possible behaviors to decide between when attempting to actualize those goals. Decisions on how to act given some goal x and some set of beliefs y can be described in computational terms. This "set of beliefs" is represented as a database of atomic propositions, a "knowledge base," and the behavior selection process can be described well enough through backward chaining searches on the knowledge base.
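
    A minimal sketch of that picture (my own toy example, not taken from any of the literature mentioned): a handful of atomic facts, a few Horn-clause rules, and a backward-chaining query that "decides" whether a goal-relevant belief holds.

    ```python
    facts = {"hungry", "food_visible"}

    # rules: conclusion -> list of alternative premise sets
    rules = {
        "should_eat": [["hungry", "food_is_safe"]],
        "food_is_safe": [["food_visible", "food_smells_ok"]],
        "food_smells_ok": [["food_visible"]],  # a crude heuristic "belief"
    }

    def prove(goal):
        """Return True if `goal` follows from the knowledge base via backward chaining."""
        if goal in facts:
            return True
        return any(all(prove(p) for p in premises)
                   for premises in rules.get(goal, []))

    print(prove("should_eat"))   # True: the agent acts on its goal
    ```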

Under this view, a belief is true just in case the representation of the environment of which the agent is a part (the world) corresponds to the actual environment. It's that simple. The claim that "no belief is rationally inferred if it can be fully explained in terms of nonrational causes" is simply the result of a misinterpretation of what "rational" should be taken to mean. The world is rational because it obeys a set of rules that govern how it progresses from state to state, rules which can be fully described mathematically, perhaps even fully described computationally. Future states are deducible from current ones. All beliefs are thus the product of a rationally describable set of steps, essentially a "program running on a quantum computer," as it is often put. States in a program are all logical consequences of prior states, so "rational." Assuming "rational" to mean "the result of an agent's beliefs," as Lewis does, is arguably begging the question.


The encodings inside the physical system making up the agent don't have to fully describe the world as it is to be true. Indeed, organisms cannot encode all the data they are exposed to without succumbing to entropy (Terrence Deacon's "Towards a Science of Biosemiotics" has a good explanation of this). This means that a true belief is just a representation of the world that is in some ways isomorphic to actual states of affairs in the world. A belief can be consistent with many states of affairs and is true just in case it corresponds to the actual state of affairs.

    Thinking through computable toy universe examples, there doesn't seem to be any reason why a toy universe can't contain a subsystem that instantiates the logical computations that (allegedly) result in the creation of agents (and conscious agents at that). Parts of the universe simply interact such that the agent subsystem comes to represent a compressed, partial description of the universe within itself. These descriptions are the knowledge base, which is what it uses to compute ways of achieving its goals (goals which generally include maintaining homeostasis and reproduction, with these goals being explained by reference to natural selection).

    Beliefs then are just other names for physical subsystems within a physical agent, e.g. patterns of neuronal activity. Such beliefs are created due to physical causal mechanisms. Belief, the verb, is just the description of what the enumeration of these physical belief subsystems "feels like" to a conscious agent.

Knowledge, then, is justified true belief. Beliefs are justified if the methods employed by the agent to vet their beliefs have proven themselves to be successful in the past (inductive support) and if any deduction used in vetting/creating these beliefs is sound. Since AI can already build proofs, I don't think there should be too much argument that causal processes can be used to cross-check soundness.

This view works regardless of how consciousness arises, or even if it is eliminated, because agents are not defined in terms of possessing a first person perspective, but rather through having goals. First person experience is thought to be totally described, somehow, by physical causal processes anyhow, so adding it won't change anything.

Given this view, I don't see how this argument works at all. Lewis seems to conflate the proposition that "the universe and causal forces are meaningless," as in, "devoid of moral or ethical value and describing nothing outside themselves," with agents' beliefs necessarily also being "meaningless," as in "the beliefs must not actually be in reference to anything else." The first sense of the term "meaningless" is not the same as the second. A toy universe with just a bunch of floating balls and a Pac-Man that tries to eat them can be meaningless in the first sense while the Pac-Man could have meaningful representations of the locations of the balls encoded within itself, provided there is some medium for interaction (e.g. light waves bouncing around to hit the Pac-Man's eyes).

    Basically, a lack of external reference does not imply a lack of internal reference.
  • Ukraine Crisis


    Wagner is just one of the "private militaries" that exist within Russia. There are actually a large number of "volunteer," forces that are not integrated into the MoD structure and are under the control of quasi-military leadership. Wagner and the Kadyrovite Chechens are the most visible of these in the West, in part due to significant efforts to market themselves online, but far from the only ones. Wagner itself had some other armed groups (ironically, explicitly neo-Nazi ones) folded into it prior to the war for example.

    Wagner had, however, grown into the most potent such force due to its ability to recruit from prisons and then its willingness to carry out costly frontal attacks that the military balked at despite apparently heavy pressure from Putin and his clique.

    Having your own military group is a weird sort of credibility thing in Russia. Strelkov's cred, while waning fast, comes from his prior control over independent forces in the Donbas for example. Some of these are the result of prior political crises in Russia. Minority leaders essentially rule as warlord vassals of Putin, the prime example being Kadyrov. These groups are less threatening because they don't pull support from the Russian majority, unlike Wagner.

    While most groups are small compared to Wagner, which was at times 10-20% of Russian combat forces in Ukraine, they add up. The MoD was talking about bringing 20,000 in as contract troops by 7/1 and another 25,000 by August, essentially filling out an entire corps equivalent, which would be a significant share of combat forces. A significant number of these would be assigned to an army based out in the Far East, which is where they tend to be from.

    You also have groups active in the Donbass since 2014, foreign groups (small but they add up), South Ossetians (apparently just took horrific losses plugging a gap in the defensive line), the DNR, LPR, etc. Just the MoD, Kadyrovites, Wagner, LPR, and DNR meant 5 independent armies in the war- it being such a shit show is sort of explained by this.
  • Ukraine Crisis


    Maybe. Or maybe he did receive support? The air assets brought against Wagner were not considerable, rotary wing craft held at low levels. It would be interesting to see which branch the pilots were with. Gerasimov hasn't been seen or heard since the rebellion. Shoigu has only been seen in a bit of stock footage type shots that could have been from any time. Putin also hasn't been visible.

    After Putin goes on TV and specifically references 1917, then allows the rebel leader and the rebels to carry off their military gear to Belarus, no punishment, you'd think he'd at least try to do some show of strength thing where he is seen with Shoigu, showing that he hadn't folded to Wagner's demands. But that hasn't happened, allowing room for people to speculate that he did have to capitulate on command of the MoD.

    But even if that was the case, why wouldn't he reverse course after the crisis was averted to show he hadn't capitulated?

    It's pure speculation, but I get the feeling that this might have been the last straw for the military leadership and that they may be dictating ultimatums at this point.

    The thing is, a lot of these guys are quite nationalist, and they also have a sunk cost with the war. They absolutely do not want chaos behind the lines, Wagner taking Moscow, etc., because they want to win, but they also want more control over operations, more independence, more freedom to promote leaders based on merit, and freedom to go after anyone whose corruption is hurting the war effort, even if they are part of the top FSB-dominant clique.

    Pure speculation, but if we start seeing guys who could formerly act with impunity getting arrested and leadership shifts, that would be my suspicion of why. This would be a bad thing for Ukraine (more competent and possibly hard line leadership), although it could also precipitate another crisis, as empowering the military means taking power from those who are used to having it.
  • Ukraine Crisis


    The coup was always stoppable if it was just Wagner (and a fraction of Wagner at that). It's just that it was quite unclear if it was stoppable before they marched into central Moscow, or at least caused serious damage in the outskirts.

    4,000 men with no logistical support could be beaten, especially as defections would become increasingly likely if pardons were offered while no sign of a larger rebellion emerged. But they could also tank the war effort in Ukraine and cause an incredible amount of damage. The Wagner column was about the size of the garrison in Mariupol, and they caused Russia a very hard time even with half of them retreating and extremely loose rules of engagement from the air. You can't really retake your own city that same way.

    My guess would be that Prigozhin hoped more people would bandwagon aboard, since dissatisfaction with Shoigu is apparently widespread in the military. He's an outsider for one, and, as the war continues to go poorly, the military is upset that it remains subservient to the FSB and other former KGB elements, even as it grows in power and ability to challenge their dominance in the state (60+% of Putin's ministers have some prior tie to the intelligence services.)

Notably, after a reshuffle coup over removing Shoigu, there has been no show of unity, a video of Shoigu and Putin together, etc. No strong sign to show the coup demands weren't met. All they have released is a video of what looks like stock footage with no sound. Even Russian milbloggers are skeptical. It's a tough position because Putin can't remove Shoigu without looking weak, but then the military is also likely clamouring for his dismissal considering the absolute shit show to date. Notably for public opinion, Wagner left cheered as heroes and MoD forces moved in to jeers and catcalls.

    If Shoigu goes and Prigozhin keeps some meaningful part of Wagner for his "duties" in Belarus, then the reshuffle coup is successful, and the best predictor of a coup in political science, bar none, is a previously successful coup.

I imagine they will try to MacArthur Shoigu, leave him the title but strip all power, but he sort of has to go along for that to work.
  • The Andromeda Paradox


    Right, I didn't mean RQM has direct relevance to this particular issue; IMO that is explained quite well by the arguments laid out in the quotes in my first post. I meant that the ontological picture painted by RQM flows very nicely with the conception of local becoming existing without any universal serial ordering.

The Andromeda Paradox and the results of modified Wigner's Friend experiments are similar, despite being different areas of physics, in that they paint a picture of a world where observers do not seem to be able to point to an absolute, observer-free context for grounding claims about states of affairs. However, this picture is able to do that without making claims about the necessity of consciousness for existence or a truly absolute relativism, because it can be consistent with a sort of ontic structural realism where knowledge about the relations that generate observations is possible, at least in theory.
  • Joe Biden (+General Biden/Harris Administration)


    There is an argument to be made that you cannot deprive the sibling of a politician of their liberties, but I don't think this precludes a law where anyone in high office has to report accepted job offers or contracts above a certain $ threshold for immediate family members to an oversight agency as soon as they become aware of them.

    Then the independent agency can investigate any that seems sketchy. It would probably have some level of records request power. It doesn't need to always be doing "investigations," which cast an air of wrongdoing. Rather they would just check in on relations between an official and an entity who was paying their immediate family members as a sort of net to catch wrongdoing or the potential perception of it.

    But then these sorts of deals should just be illegal for officials themselves and their spouses, as they already are for lower government officials. You can't have the heads of the executive branch recuse themselves the way judges and Congressmen can, but you can at least assure the public that some level of oversight exists.

    Because it isn't always obvious wrongdoing. Some politicians don't care about their kids lol. Stalin rejected a deal to exchange his son for a captured German field marshal, saying simply that he would not trade a field marshal for a lieutenant. When he heard his son had attempted suicide, trying to shoot himself in the heart and missing, his first response was allegedly, "see, I told you he can't do anything right." Great guy.
  • Joe Biden (+General Biden/Harris Administration)
    I find it hilarious how both parties will make endless hay over intimations of corruption and yet, despite common ground on this issue, they refuse, again and again, to pass a law that would bar elected and appointed federal officials from doing these things. An unpaid municipal commission member, a state employee at a regulator in almost every state, a regular federal employee, and a military officer would all be guilty of crimes if they engaged in the actions of Thomas (absolutely huge "gifts" and major business dealings without recusal), Clinton (settling a tax evasion case with UBS in a manner generally seen as very advantageous for the bank, with her husband then being paid $1.5 million to give a speech to them), or Scalia (same as Thomas, but "only" $100,000 worth of gifts), etc. Yet the fact is that these things aren't even illegal for those at the top.

    That is, even taking actions that could create the perception of corruption, or taking any gift over $20, etc., is illegal in most states, for everyone from elected officials down to the lowest-level town employees.

    In any of the jobs I've held at the state, local, or federal level the actions of Thomas, Scalia, or Clinton would be illegal. In many states, having an elected or appointed official leave for a job with a company they had just recently been regulating is illegal; many states have lifetime bans in these cases.

    It's disturbing to me that partisans simply jump in to defend their side or attack the other each time, and that there has been no real pressure to fix these issues. Likely it has to do with the fact that so many members of Congress would find themselves in violation of such a law.

    The Hunter Biden situation actually seems mild in comparison to other cases, because a person generally isn't expected to be able to police their children's business dealings. That said, does anyone really think a gas company had a legitimate reason for giving a self-described out-of-control drug addict hundreds of thousands of dollars a year for a part-time gig in an industry he had no real experience in?
  • Ukraine Crisis


    Exactly, I was just assuming he might have allies in the MOD who he knew would jump in if he made it that far and that was part of the decision-making process.

    I imagine the whole "if only the wonderful Tsar knew what the recalcitrant boyars were doing" schtick around Putin would vanish pretty much instantly if it ever looked like he was about to lose power. It honestly feels like a parody sometimes.
  • Ukraine Crisis


    Today has shown that in one thing Putin was more successful than he probably expected - he wanted to have a politically apathetic population and that is exactly what he got... He could be deposed and nobody would bat an eye, no jumping on the tanks for him.

    Well, he's had centuries of help. Serfs, the vast majority of the population, didn't get their freedom until around the time the US ended slavery, and they went on making redemption payments for their "purchase" from the nobility into the twentieth century.

    The February Revolution was precipitated by riots over living conditions, but the removal of the Tsar was a palace coup. The October Revolution was a small cadre, one that could easily have been overwhelmed if people had been willing to fight, taking control with essentially a shrug. The "people" didn't drive the revolts the way the sans-culottes did in France across multiple uprisings, or the Egyptians did in 2011, etc. Not that incredibly vicious struggles didn't start later, but at first it was a big "meh."

    ---

    Surprised the shit out of me. I figured the Rubicon had already been crossed, no going back.

    The bulk of Wagner abandoning the effort for amnesty or routing if there was stiff resistance seemed entirely possible, but not a deal.

    It would not surprise me if the Wagner commanders' assessment was that even breaching the defenses of the hastily assembled, poorly equipped forces in Moscow was unlikely. IDK how many soldiers Wagner still has, how many came on the march, and how many had any stomach for a fight, but it wouldn't surprise me if that number was quite low. Plus they lack any logistics for their heavy equipment.

    That said, they absolutely could have held out in Rostov forever, or in Moscow if they made it into the city. Russia's PGM shortage and Wagner's AA would make bombing impossible without leveling the city, and Russia doesn't have the forces anywhere to take a city of a million-plus in urban fighting, let alone forces to spare. I figured he would sit tight in Rostov and hold the threat of ending the war over Putin.

    When the column kept moving with limited air strikes I actually started to think maybe he had coordinated with parts of the military to launch the coup, but it seems like their air force is just spent.
  • Ukraine Crisis


    Yeah, although he faces a tough choice. He needs to empower competent leaders to turn the war around and stop the problems that led to this, but any such competent, ambitious leader just saw that a division or two at their command would be enough to send the leadership fleeing as they grab Moscow.

    I mean, the whole reason he propped up a rival second army in the first place was fear of the military moving against his FSB-oriented government. If this prompts a move to put someone more capable in control of the MoD, that's its own sort of risk (particularly if they are charismatic, but then good leaders often are). It's not hard to see how this might have played out if someone else had marched on Moscow with more loyal forces, an army of "veteran heroes and citizens" rather than prisoners.

    The flip side is that putting someone competent in charge and empowering them could also help Russia win some objectives worth celebrating and also save Putin.
  • Ukraine Crisis


    Unclear. Prig ruling Russia as a warlord is in some ways potentially scarier than Putin, but doesn't seem like a likely final outcome even if Putin is killed by his guards or something dramatic like that.

    Still, he might exert significantly more control over events for a while, and he has generally been more hardline about "mobilizing" to win the war. That said, that could just be posturing to court nationalists. His statements indicate a willingness to end the war, especially his painting it as the work of recalcitrant oligarchs.

    Best case is a quick deposing of Putin and some sort of unity government and the withdrawal of Russia to at least the 2014 borders (although they recently lost land they've held since 2014). The problem though is that Ukraine will want to keep pushing while nationalists don't want to lose the Donbas or especially Crimea. Ukrainian attacks might help to unify the country around these, although it could also weaken the nationalists as they sink their efforts in holding Ukrainian land without the support of the whole Russian state.

    Worst case is some sort of large defection to Wagner, but not enough to stop the MOD from still retaining their own army. Then you have a civil war, which isn't in anyone's interest, likely not even Ukraine's, at least not compared to a peaceful resolution. Given all the fault lines in Russian leadership, ambitions, grievances, etc. it seems like such a war might quickly degenerate into a multifaction struggle, like their last civil war. Wars with more factions tend to last longer because negotiation is harder with 6 parties than two, let alone 30+.

    If that happens, who knows? Hell, it's unclear if it would even end with one unified Russia rather than multiple states in a stalemate. Plus there is the issue of their nuclear arsenal. A Russian MOD plane was allowed to take off with its transponder on and fly through Ukrainian airspace earlier; I imagine that is about securing the weapons. Hopefully they have the good sense to destroy them if they look to be at risk of being taken. Hell, if I were in charge of them and saw a real civil war opening, I'd certainly give the order to dynamite them; nothing good could come from their use as leverage.
  • Ukraine Crisis


    From the pics it looks like all Rosgvardiya; essentially police. Outside of parking vehicles on the road they seem to have been unwilling to engage so far.

    In other news, the Belarusian opposition fighting in Ukraine, who are by far and away the largest volunteer force involved in the war, released a national address telling Belarusian soldiers not to carry out orders to support the regime and to ignore the Russian "civil war," and focus on their oaths to Belarus. They claim that they will be arriving with the coordinated support of elements of the Belarusian military itself and will be deposing the current leadership (whose dictator appears to have fled the country already).

    Like I said, even if Putin can contain Wagner within a week or two, that might be enough time to bring Lukashenko's dictatorship crashing down, and it's hard to see how Russia continues the war with Ukraine at that point. I suppose they could do a larger scale mobilization, both conscription and economic mobilization, but that just seems like it would fan the still smoldering coals of revolt.

    https://www.youtube.com/watch?v=XLA3nNS8i8g

    There is also the fact that minorities, who get a very low level of services from the Russian state, have been massively disproportionately hit by the conscription to date and have been increasingly vocal in their criticism of the state. Leaders there, of factions not in power, could certainly use this as an opportunity to push for independence. And once the first-mover collective action problem is solved, much more decisive moves can be made at a lower cost. Sort of "if everyone does it, they can't stop us all."

    The Far East is increasingly less economically dependent on Russia. Russia needs it for resource extraction, but the people there, like the peoples of Central Asia, increasingly see China as a better model of rule and as a partner who can offer far more (e.g., Belt and Road). Tajikistan's leader said something to the effect of "I want to be a Deng, not a Putin."

    In the run-up to the Ukraine invasion, all of the "Stan" nations saw unprecedented unrest, and this, along with the revolt in Belarus, likely helped motivate the invasion. Putin saw Central Asia being pulled out of Russia's orbit and into China's, and needed to make Russia look like a stronger partner. Leaders or would-be leaders in autonomous regions have some reason to think they would do better as independent states that are clients of China than as part of Russia, and if Russia can't stop a Wagner convoy to Moscow, can it really stop minorities from driving home with their weapons and declaring independence?

    But Beijing might see a bigger prize in playing kingmaker for the next Russian leader and try to hold that sort of thing back. Independence is probably a last ditch option to invoke if a liberal democratic regime looks to be winning in Russia (a liberal Russia that is part of the EU and/or NATO with a huge land border with China is a strategic nightmare for them).

    The Russian Far East had just 20,000 Chinese nationals living in it in 2000. As China took over resource extraction this rose to 700,000 by 2014, about 12% of the population of that huge area. The Russians then asked China to stop publishing figures on this due to nationalist backlash about Chinese "colonization" of Russia, but Chinese investment has kept growing, spiking with sanctions as other nations' nationals left in droves. Chinese nationals might be more like 15-20% of the population now.

    The population of the Far East is also not majority Russian; the largest ethnicities are those indigenous to the region, followed by ethnic Mongolians, and another large contingent is made up of minorities from the western half of the empire (Ukrainians, Jews, Chechens, etc.) whose ancestors were forcibly deported there by the Russian state. The first two of these groups are arguably culturally a good deal closer to China than to Russia.

    Which is all to say that something in Belarus could radically shift things, and seems likely, but there might also be follow-on revolts. Hell, maybe it would be good for long-term world peace if Xi could trumpet some expansion in the north and accept that as his glory instead of going for Taiwan. China does still occasionally claim Russian land up there, and it has a history of annexing land from former Soviet states since the dissolution of the USSR (doing so with all of its former-Soviet neighbors at one point or another).
  • Ukraine Crisis


    Seems that is the plan. I figured they would stay in Rostov but they are already halfway there and have only faced resistance from the air. This hasn't slowed them yet and there appears to be footage of several aircraft being downed by their AA while presumably trying to attack.

    The barricades being thrown up around Moscow suggest they think it's a possibility. Even at slow IFV road speeds, the frontal advance party is only about 6 hours away, so it's not exactly far. And having municipal buses parked to block the roads suggests better options aren't available.

    The whole military is on the front and can't really pack up and leave without letting Ukraine advance. Plus, ground commanders might want to see how this shakes out from afar while they have a plausible reason to do nothing.

    Luka apparently fleeing is in a way more escalatory because Belarus has had a much stronger dissident movement and a much weaker military. I imagine this must seem like a golden opportunity there. But if something happens there, it hits Putin's credibility hard. Even if he fixes the current crisis, he can't conceivably "liberate" Ukraine and Belarus at the same time.
  • Ukraine Crisis
    And to think, everyone made fun of Kojima saying the plot of Metal Gear Solid was unrealistic...
  • Ukraine Crisis

    Yes, but they punished protestors by conscripting them to the front, so it seems the lesson wasn't totally learned.



    Also, Putin has historically been able to shore up his power by military ventures. He benefits from a quagmire as much as he would a victory, just in terms of eliminating any hint of democracy in Russia.

    Not so much this time though. His plane left Moscow, and neither he nor Medvedev has done a public video to counter rumors they have fled the capital, the way Zelensky and many other leaders have done in similar situations in the past.


    Also, Lukashenko, who I assume is in a good place to know the security situation, appears to have fled Belarus. His private plane shot off to Turkey overnight.

    And it doesn't look good. The pictures of the barricades around Moscow are all manned by Rosgvardiya, Putin's internal strong-arm force. But these guys are police, there for keeping order and rounding up dissidents. They lack heavy arms.

    They were used for the initial invasion of Ukraine and disintegrated on contact with a military force with tanks and IFVs. All they have is trucks and small arms. They're also used to acting with impunity as the regime's self-serving muscle. Their morale is suspect.

    The only large-scale desertions in the war I am aware of are from the Rosgvardiya taking losses, then packing up and going home against orders. We know of at least some of these because there were 700+ court cases over them, probably a way to try to shore up discipline. Maybe the military forces are elsewhere, but these guys are unlikely to slow down the advance, especially if it is picking up regular army forces on the way, as Wagner claims.
  • The Andromeda Paradox
    BTW, Relational Quantum Mechanics handles this sort of "paradox" quite well, even if we consider that any full theory must deal with gravity and attendant issues from relativity at extremely small scales.

    If discrete objects do not exist, and things only exist in how they interact (i.e., relations exist, persistent objects with properties at all times do not), then the idea that all becoming is local doesn't seem strange at all. The conception seems to work well with Wheeler's "many fingered time," and arguably also with his more provocative "it from bit."

    The idea that things only exist in their relations works quite well with the holographic principle as well.

    Having not spent too much time diving into RQM, it does seem, at first glance, like it would be quite compatible with Floridi's maximally portable ontology of bare, essential ontic difference, or with various quantum variants that have been proposed. Certainly, it seems to fit well with ontic structural realism, but it does so in a way that avoids requiring a multiverse, which in turn avoids a theory where it still seems like the vast majority of human observers should be Boltzmann Brains.

    Interactions are ontologically primitive, not "stuff." These (mostly) occur locally, although one also needs a way to explain (apparent?) non-locality. Best of all, RQM seems to deal not only with this "paradox" quite well, but also with experimental results suggesting it is fundamentally possible for two observers to observe different facts about a system, leaving no room for a bedrock "objective" world outside of one that is posited based on philosophical inclinations, but which we have to accept as completely inaccessible to all observers.
  • Ukraine Crisis
    lol, I guess I was right about Putin speed running the Russian Revolution.

    Before this all blew up, I did find myself asking: how were the overwhelming majority of Russia specialists so surprised by how poorly Russia's military functioned? Why did the scale of the rot elude them? The Afghan National Army style ranks of ghost soldiers and ghost vehicles who only exist on paper to funnel payments, the sold off fuel, the broken down vehicles abandoned a day into a major invasion? Virtually no one expected it.

    Why not? We're told Putin's regime is essentially the fusion of organized crime with elements of the former KGB. Journalists claim to have uncovered vast money laundering operations for Putin and his inner circle. Why wouldn't the rot spread downwards?

    Basically, why should we be shocked to learn that Putin's rule was as hollowed out as the military?

    One should recall that neither the February nor October Revolutions were primarily mass uprisings. To be sure, there were riots precipitating action in February, but the removal of the Tsar was a palace coup and the rise of the Bolsheviks was accomplished by a small cadre of armed partisans (smaller than Wagner) and the total apathy of the population. It wasn't the might of the Bolsheviks that won, but that the conscript army and even much of the officer corps initially had no interest in defending the government.

    Wagner represents an existential threat only to the degree that the main bulk of the military is unwilling to fight them. If Russia really is mobilizing just a small cadre of loyalist, reliable "special forces" to deal with them, that shows a lack of faith in the military. You can't do an urban assault against veterans of a Verdun-like battle without either destroying the city or taking huge losses. If they jump to shelling their own city, I doubt it goes well for morale, to say the least. So they are in a pickle, because the Russian military is in no place to lay siege to even a Ukrainian city the size of Rostov against a force the size of Wagner, let alone to do it quickly, with tight ROE, no indiscriminate use of force, and no logistics set up.

    Wasn't the reliance on a private army a telling sign? Isn't arresting anti-war and dissident activists/protestors and then sending them to the front, where they gain leadership experience and a chance to radicalize your army, almost always a bad idea?

    Then again, no one saw the USSR's collapse coming until it happened. Not that this spells collapse, but you can't argue it's a good sign for Putin.
  • The Andromeda Paradox


    I don't think it's a paradox at all. It's only a paradox if one assumes the absolute Newtonian serial time must exist. It's consistent with local becoming.

    A false dichotomy is often set up by advocates of eternalism in the physics literature between some sort of absolute, serial ordering of time and all moments existing at all times in an eternal "block universe." These arguments rely more (arguably entirely) on philosophy than on scientific support, since the conjecture is arguably unfalsifiable.

    The "Andromeda Paradox" has been a popular vehicle for this, as has the "Twin Paradox." Neither of these are actually paradoxical given an assumption of local becoming, and no reference to rods or clocks is needed to ground our sense of time, although the mathematics involved being abstruse might be why rejections of the paradoxes, which have been around for a century, are less well known than the paradoxes.

    We'd probably see a resurgence of interest in these explanations if the evidence for quantum scale time irreversibility hadn't come out at the same time as the Higgs boson discovery, thus overshadowing it. It's not the end of the debate, but we certainly have a universe that appears to run differently forwards in time as opposed to backwards, at both large scales and quantum ones. This isn't at all surprising IMO, since all empirical evidence suggests time only goes in one direction and decoherence/collapse only occur in one direction.


    We have seen that SR rules out the idea of a unique, absolute present: if the set of events that is simultaneous with a given event O depends upon the inertial reference frame chosen, and in fact is a completely different set of events (save for the given event O) for each choice of reference frame in inertial motion relative to the original, then there clearly is no such thing as the set of events happening at the same time as O. As Paul Davies writes (in a variant of the example given by Penrose above), if I stand up and walk across my room, the events happening “now” on some planet in the Andromeda Galaxy would differ by a whole day from those that would be happening “now” if I had stayed seated (Davies 1995, 70).


    From these considerations Gödel concludes that time lapse loses all objective meaning. But from the same considerations Davies concludes, along with other modern philosophers of science, that it is not time lapse that should be abandoned, but the idea that events have to “become” in order to be real. "Unless you are a solipsist."

    As I argued in Chap. 3 above, events “exist all at once” in a spacetime manifold only in the sense that we represent them all at once as belonging to the same manifold. But we represent them precisely as occurring at different times, or different spacetime locations, and if we did not, we would have denied temporal succession...


    ...in each case we are presented with an argument that begins with a premise that all events existing simultaneously with a given event exist (are real or are determined), and concludes that consequently all events in the manifold exist (are real or determined). But the conclusion only has the appearance of sustainability because of the equivocation analysed above in Chap. 3. If a point-event exists in the sense of occurring at the spacetime location at which it occurs, it cannot also have occurred earlier. But if the event only exists in the sense of existing in the manifold, then the conclusion that it already exists earlier—that such a future event is “every bit as real as events in the present” (Davies), or “already real” (Putnam)—cannot be sustained. Thus, far from undermining the notion of becoming, their argument should be taken rather to undermine their starting premise, that events simultaneous with another event are already real or already exist for it in a temporal sense. For to suppose that this is so, on the above analysis of their argument, inexorably leads to a conclusion that denies temporal succession.

    This, in fact, was Gödel’s point. As mentioned in the introduction to this chapter, he had already anticipated the objection that the relativity of time lapse “does not exclude that it is something objective”. To this he countered that the lapse of time connotes “a change in the existing”, and “the concept of existence cannot be relativized without destroying its meaning completely” (Gödel 1949, 558, n. 5). As we saw in Chap. 3, however, the sense in which events and temporal relations “exist” in spacetime is not a temporal sense. This would amount to a denial of the reality of temporal succession.

    So the root of the trouble with the “layer of now” conception of time lapse is a failure to take into account the bifurcation of the classical time concept into two distinct time concepts in relativity theory. The time elapsed for each twin—the time during which they will have aged differently—is measured by the proper time along each path. The difference in the proper times for their journeys is not the same as the difference in the time co-ordinates of the two points in some inertial reference frame, since they each set off at some time t1 and meet up at a time t2 in any one ...

    We may call this the Principle of Chronological Precedence, or CP. As can be seen, it presupposes the Principle of Retarded Action discussed in Chap. 4, according to which every physical process takes a finite quantity of time to be completed. Note that so long as CP holds for the propagation of any physical influence, it will not matter whether light or anything else actually travels with the limiting velocity.


    As Robb showed in 1914, this means that—restricting temporal relations to these absolute relations only—a given event can be related in order of succession to any event in its future or past light cones, but cannot be so related to any event outside these cones (in what came to be called the event’s “Elsewhere”). There are therefore pairs of events that are not ordered with respect to (absolute) before and after, such as the events happening at the instants A and B on Robb’s Fig. 6.1. The event B, being too far away from A for any influence to travel between them, is neither before nor after A.



    For example, B could be the event on some planet in the Andromeda Galaxy that Paul Davies asked us to imagine, in the Elsewhere of me at the instant A when I am considering it. It is true that by walking this way and that I could describe that event as being in the past or in the future according to the time coordinate associated with the frame of reference in which I am at rest. But that event is not present to me in the sense of being a possible part of my experience. It bears no absolute temporal relation to my considering it...

    All the events I experience, on the other hand, will be either before or after one another, and therefore distinct. In fact, they will occur in a linear order. They will lie on what Minkowski called my worldline.


    There is nothing unique about my worldline, however. On pain of solipsism, what goes for me goes for any other possible observer (this is the counterpart in his theory to Putnam’s “No Privileged Observers”). Thus if we regard time as constituted by these absolute relations, time as a whole does not have a linear order: not all events can be ordered on a line proceeding from past to future, even though two events that are in each other’s elsewhere (i.e. lying outside each other’s cones) will be in the past of some event that is suitably far in the future of both of them. In this way, all events can be temporally ordered, even if not every pair of events is such that one is in the past or future of the other. This is Robb's "conical order." In the language of the theory of relations, it is a strict partial order, rather than a serial order.

    In a paper of 1967 the Russian mathematician Alexandrov showed how the topology of Minkowski spacetime is uniquely determined “by the propagation of light or, in the language of geometry, by the system of the light cones”, noting the equivalence of this derivation to Robb’s derivation on the assumption of chronological precedence.

    The Reality of Time Flow: Local Becoming in Modern Physics
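
    As a rough sanity check on the scale of the shift in the quoted Davies example, the standard textbook relativity-of-simultaneity relation (not taken from the book itself) gives:

    \[
    \Delta t \;\approx\; \frac{v\,d}{c^{2}} \;=\; \frac{v}{c}\cdot\frac{d}{c}
    \;\approx\; \left(4.7\times10^{-9}\right)\times\left(2.5\times10^{6}\ \text{yr}\right)
    \;\approx\; 0.012\ \text{yr} \;\approx\; 4\ \text{days},
    \]

    taking a walking pace of v ≈ 1.4 m/s and d ≈ 2.5 million light-years to Andromeda. A slower stroll gives Davies' "whole day" as the right order of magnitude.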
  • The Conservation of Information and The Scandal of Deduction


    "Information" is very tough term because it is defined loads of different ways. I suppose here I should have used "Kolmogorov Complexity," in every instance here. This is a measure of how many bits it takes to describe something (really how many bits a computer program would need to be to produce an output of a full description).

    So, that said, I would think that the "heat death" scenario, where the universe is in thermodynamic equilibrium, would have the greatest complexity and take the most bits to describe: a description of its macroproperties is compatible with a maximal number of possible microstates, all of which must be excluded by specifying additional information to pin down the actual state.



    The context would be algorithmic information theory, or Kolmogorov complexity. The common claim is that the shortest possible description of the universe, at whatever time that shortest description obtains, call it T(Short), gives you complete information about all the states of the universe, because one can simply evolve T(Short) into any other state. Thus, no new information in the algorithmic sense is ever added.

    The problem here, from my perspective, is that T(Short) + evolution produces all states, not just a single state of the universe. To use T(Short) in a "program" (conceivably the universe evolving itself as a quantum cellular automaton) that outputs a specific different time, T(Diff), requires some way to halt the evolution at T(Diff) and output the current state; otherwise the evolution just runs forever. Doing that requires at least some additional specification of which state to stop at, whether a step index or a description of T(Diff) to match against, and that extra specification is information not contained in T(Short). So such an evolution scheme does NOT entail that the universe never gains algorithmic complexity.

    If we allow that any program that produces an output of x counts as the shortest description of x, even if it outputs other things as well, then the shortest description of every item is a simple program that combinatorially enumerates all possible combinations of values for any object encodable in a string of length n. This seems like nonsense.
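
    A minimal sketch of why that reading trivializes description length (Python, purely illustrative): a single constant-size generator eventually emits every binary string, so if "eventually outputs x, among other things" counted as a description of x, every string would share the same tiny "description."

    from itertools import count, product

    def all_binary_strings():
        """Yield every finite binary string, shortest first."""
        for n in count(0):
            for bits in product("01", repeat=n):
                yield "".join(bits)

    # This fixed program eventually outputs any target string, but it is
    # useless as a description of that string: it outputs everything else
    # too, and picking the target back out requires extra information
    # (the target itself, or its index in the enumeration).
    target = "101101"  # arbitrary example string
    for i, s in enumerate(all_binary_strings()):
        if s == target:
            print(f"target appears at index {i}")
            break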

    I imagine the confusion comes from certain pancomputationalist conceptions of the universe, where the universe is essentially a quantum computer, and thus the assumption is that every moment that exists is an output, so that T0, the start of the universe, entails a unique output for all states. I don't think this is a useful way to use an analogy to computation in nature. If we accept that every system state should be considered an "output," then it follows that very simple closed systems, given infinite time, compute all computable functions, so long as they instantiate any of the basic cellular automata capable of universal computation.

    A related problem is that, even if you have a unique identifier for something, that doesn't mean the identifier actually allows you to construct it. E.g., "the coprimes of the centillionth factorial" uniquely describes a set of integers, but it doesn't give you a means of constructing that set, and even if doing so is straightforward enough, it would take a lot of resources because it's a huge computation.
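
    A small-scale analogue of that point (Python; an arbitrarily chosen small factorial stands in for the centillionth, which is hopelessly out of reach): the description stays a one-liner, but producing the actual set costs real computation that grows with the size of the object described.

    from math import gcd, factorial

    def coprimes_below(n, limit):
        """Integers in [1, limit) sharing no common factor with n."""
        return [k for k in range(1, limit) if gcd(k, n) == 1]

    # "The integers below 1,000 coprime to 20!" is a few dozen characters,
    # but constructing the set means running gcd against a 19-digit number
    # a thousand times. Scale the factorial up and the description barely
    # grows while the work explodes.
    n = factorial(20)
    print(len(coprimes_below(n, 1000)))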

    Thus, it's unclear if the claim that "deterministic computation cannot create new information" (reduce uncertainty in this case, usually framed as Shannon information or something similar) makes any sense. If we were given the set of coprimes in the example above, our uncertainty as to the elements' identities would appear to be reduced. Being able to know a set of facts S with 100% certainty given information x + deduction is not the same thing as knowing S, because real computation doesn't occur in a Platonic realm where all logical consequence exists eternally. Even abstractly, computation should be thought of as requiring steps (time), as a stepwise enumeration of logical consequence, rather than being identical to the relationships it describes.

    Mazur's "When is One Thing Equal to Some Other Thing?" sort of gets at this same thing, but from a different angle and I don't understand category theory so he loses me midway through.

    https://www.google.com/url?sa=t&source=web&rct=j&url=https://people.math.harvard.edu/~mazur/preprints/when_is_one.pdf&ved=2ahUKEwjn-pvitdn_AhVskokEHfqrDawQFnoECBoQAQ&usg=AOvVaw3_EzAdB0Ll98eIYYV4dLzi
  • The Conservation of Information and The Scandal of Deduction
    This seems somewhat related to the (seemingly) paradoxical fact that you can increase the algorithmic complexity of a set by removing elements.

    For example, it's very easy to uniquely define "the natural numbers up to 100,000." However, if you randomly chop out a third of them, the description has to get significantly longer to uniquely specify the resulting set.
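
    A rough way to see this numerically, using compressed size as a crude stand-in for description length (Python; zlib is only a proxy, since Kolmogorov complexity itself isn't computable):

    import random
    import zlib

    N = 100_000

    def compressed_size(members):
        """Compressed size in bytes of a membership bitmask over 1..N."""
        mask = bytes(1 if i in members else 0 for i in range(1, N + 1))
        return len(zlib.compress(mask, 9))

    full = set(range(1, N + 1))
    random.seed(0)
    removed = set(random.sample(sorted(full), N // 3))
    thinned = full - removed  # randomly chop out roughly a third

    print("naturals up to 100,000:      ", compressed_size(full), "bytes")
    print("with a random third removed: ", compressed_size(thinned), "bytes")
    # The full range compresses to almost nothing ("1 through 100,000"),
    # while the thinned set needs something on the order of its raw
    # entropy to record exactly which elements survived.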

    Specifying a unique time-like slice of the universe might likewise require more information than specifying all such slices. However, if we accept this, it seems to beg the question re: eternalism and eternally existing truths, since it implies that a description of "all the moments in the universe" is a real thing that physics studies, as opposed to just the current universe being the only such actual time-like slice. It seems like a sort of deeply embedded backdoor Platonism.

    If you assume past moments no longer exist and that future ones do not yet exist, i.e., that local becoming is real, then "specifying T2 takes more information than simply specifying all states of the universe past and future" seems to be equivalent to saying "the universe has gained information."
  • The Conservation of Information and The Scandal of Deduction
    Lots of good points here, I'll try to get to them all eventually.



    I thought it might be most immediately applicable to physics. The claim that the algorithmic complexity of a deterministic universe doesn't increase is fairly common as is the claim that a non-deterministic universe would have an increase in information.

    However, such "non-deterministic" universes aren't thought to be entirely random, both because QM isn't random and because there isn't much you can say about a universe that is totally random, given that no state bears any necessary relationship to any others. Generally this "randomness" is bracketed by probabilities assigned to outcomes.


    If your "random" outcomes are discrete (or observably discrete), then you can just create a program that says "starting with the initial conditions at T0, produce every possible universe." That is, you can turn a non-deterministic Turing Machine into a deterministic one by having it brute force its way through all combinatorially possible outcomes provided there are finitely many. This being the case, in what sense could quantum interactions actually create new information? If a program can describe the possible outcomes of all such interactions, and we're allowed to say that a state S1 that evolves into another state S2 is identical with the second state (S1 = S2), then it seems like there shouldn't be a distinction.

    Whereas if the information content of the universe is said to be infinite (it usually is not), then analogies to computation really aren't appropriate in the first place.

    IDK, it seems like a sort of Platonic hangover haunting the sciences, eternal relations and all.



    Thus the initial conditions of a deterministic universe 'contain' all the information of every moment of its evolution, and time is the running decompression algorithm that 'expands' the initial equation.

    Right, that's exactly the normal position and what is meant by "conservation of information." It's a Laplace's Demon type conception of the universe. The problem I see is that, in the world of experience, logical consequence doesn't appear to exist eternally, rather it rolls out in a stepwise fashion. My throwing a ball at a window entails that the window will break, but the window doesn't break until the ball reaches it. A description of the ball and window before and after impact aren't the same thing unless you allow for a sort of "outside of time" computation to exist. It seems to me that committed nominalists should reject that sort of conservation of information.
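
    A toy version of the ball-and-window point, offered only as an illustration of stepwise consequence rather than a claim about physics: a one-line deterministic rule (elementary cellular automaton Rule 30) "contains" its whole future, yet exhibiting the state at step t still requires actually doing the t steps, since no general shortcut for this rule is known.

    def rule30_step(cells):
        """One step of Rule 30 on a fixed-width row with periodic boundaries."""
        n = len(cells)
        return [
            # Rule 30: new cell = left XOR (center OR right)
            cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)
        ]

    def state_at(t, width=64):
        """The state at step t is implicit in rule + initial row, but
        producing it means running all t steps."""
        row = [0] * width
        row[width // 2] = 1  # a single live cell in the middle
        for _ in range(t):
            row = rule30_step(row)
        return row

    print("".join("#" if c else "." for c in state_at(20)))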
  • Philosophy is for questioning religion


    That's a tough question. Religion does attempt to explain the transcendent, and that is where religious inquiry tends to focus today, but historically religion also tried to explain nature and metaphysics in the way that science and philosophy do today. "What exists and why? How was the world created and where is it going? How was first person experience created and how does it end, or does it end at all?"

    There is a mix of all the subdisciplines: metaphysics, ontology, causation (e.g., as a result of the Divine Will), substance/essence, etc.

    Religion also seeks to explain and enumerate ethics. Philosophy of mind often gets into the mix (e.g., Neoplatonism and the World-Soul - Soul relationship). Epistemology also ends up intertwined with religion, but not quite to the same degree. Same for aesthetics.

    Organized religion also sometimes fulfills roles more often fulfilled by the state these days.

    This makes religion tough to categorize. It is fluid, filling holes left by the absence of other institutions. It is also ubiquitous, existing in all human societies. At the highest level, I would say it is the organizing principle for human societies. Religion defines "what life is about." Religion creates meaning and purpose.

    Other disciplines can fulfill some of these roles. The state can try to fill them with nationalism, science can answer questions about what the world is, and philosophy can fill most (all?) of these roles. I think it's noteworthy, though, that when these other disciplines try to fill these roles they tend to become more "religious-like," i.e., more all-encompassing, more dogmatic, more defensive about criticism, etc.
  • Philosophy is for questioning religion


    You are correct. Although both terms are used, it's more often "natural selection" in terms of initial domestication and "selective breeding" in terms of ongoing efforts. You see both in the self-domestication literature, even for humans. I like to think of "selective breeding" as a subtype of natural selection guided by intelligent agents who are aware of how the breeding fulfills their goals, since humans are part of nature, if that makes sense? The division seems somewhat artificial. Archaic man didn't know a lot of things, but presumably they knew what they were doing when breeding docile animals to each other, even if the initial self-domestication happened without human intentionality. That children resemble parents seems to have been understood since the beginning of history.

    To circle back, I don't think markets as a whole would be attractors, the attractors would show up in phenomena like general inflation or deflation rates. Individual prices are generally determined by individual vendors based on intentional analytical reflection about their business, but in a period of general inflation the behavior of most vendors is slowly attracted towards a common % increase in the price level across a given sector. The common behavior doesn't undercut the fact that individuals planning price changes are being very intentional. I haven't seen as much work making this claim for the phenomena of market equilibrium (harder to define quantitatively), but I assume someone has made that connection.

    The point about the need to include institutional agency to explain social-historical phenomena is too far afield for this topic. I will make another thread about it when I have time and can dig out old sources so it doesn't seem like total speculation (or at least not just my own speculation lol).
  • Philosophy is for questioning religion


    That intentionality effects genetic selection. If dog breeding isn't natural selection, then either humans are supernatural or magic, or else all mutualism, parasitism, and symbiosis is not natural selection.
  • Philosophy is for questioning religion


    One is tempted by the analogue with a strange attractor, after ↪Count Timothy von Icarus, but even a strange attractor is rhythmic and predictable compared to the path of even a simple institution, or with the unpredictable events of a lifetime.

    Take any pivotal life decision, be it moving to a distant city or committing to a partner or accepting a job offer. Everything changes, unpredictably, as a result of the decision. Because of this, while there may be a pretence of rationality, ultimately the decision is irrational. Not in the sense of going against reason, but in the sense of not being rationally justified. It is perhaps an act of hope, or desperation, or sometimes just whim.

    And this not only applies to big choices, but to myriad small choices. Whether you have the cheese or the ham sandwich had best not be the subject of prolonged ratiocination.

    Most of our choices are not rationally determined; and this is usually a good thing, lest we all become Hamlet

    Yes, that is exactly the sort of analogy I have in mind. Attractors have been invoked as a more rigorous description of the mechanism of apparent "natural teleologies," in some cases.

    Attractors can be found in complex systems filled with conscious agents. For example, businesses all make decisions about pricing separately, based on a rational assessment of their costs and profit margins, yet a general industry-wide inflation rate emerges as an attractor. Gradual changes in grammar and linguistic fluctuations have also been identified with attractors and spontaneous organization. What we choose to say and how we choose to say it is often something we focus on, an action with intentionality, and yet our individual choices are shaped by the larger dynamics of contemporary language. People simply don't talk the same way they did 40 or 50 years ago, even the same individuals. Phrases come and go. Foot traffic also follows patterns of spontaneous self-organization.
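
    A toy sketch of that kind of attractor (Python; an invented minimal model, not a claim about real pricing): each firm sets its own price change from its private cost assessment plus some attention to what the industry did last period, and the dispersion of individual decisions collapses onto a common industry-wide rate.

    import random

    random.seed(1)

    N_FIRMS = 200
    COST_GROWTH = 0.05   # shared 5% cost shock per period
    IMITATION = 0.5      # weight firms put on last period's industry average

    # Each firm starts with an idiosyncratic guess about how much to raise prices.
    changes = [random.uniform(0.0, 0.10) for _ in range(N_FIRMS)]

    for period in range(1, 11):
        prev_avg = sum(changes) / N_FIRMS
        changes = [
            # own (noisy) cost assessment blended with the observed industry average
            (1 - IMITATION) * (COST_GROWTH + random.gauss(0, 0.01))
            + IMITATION * prev_avg
            for _ in range(N_FIRMS)
        ]
        spread = max(changes) - min(changes)
        print(f"period {period}: avg {sum(changes) / N_FIRMS:.3f}, spread {spread:.3f}")
    # Every decision is made firm by firm, yet individual price changes are
    # pulled toward a common rate near the shared cost growth: the attractor
    # shows up at the industry level, not inside any single firm's reasoning.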

    Looking for market level attractors in the behaviors of a single firm or individual is simply looking in the wrong place. It's like trying to explain state change or turbulence by looking at a single molecule. You won't ever see the bigger picture looking at one individual, but the bigger picture is still shaping what the molecule does in a profound way.

    I get that people find the "intelligent composite entities" thesis metaphysically dubious. It flies in the face of both our focus on the individual and our tendencies towards reductionism. But if the theory holds no water, why is it that models of higher-level market data, models in international relations that treat states as the deciding subjects, etc. are all far more predictive of future observations than attempts to predict state behavior, future market prices, changes in consumption, etc. using analysis strictly of individuals? Why does the logic of an electoral system (e.g., winner-take-all, first-past-the-post) predict so well, across cultures and times, whether it will produce a two-party or multiparty system?

    At least in IR, which I am most familiar with, psychological assessments of individual leaders are considered a dubious means of predicting state behavior, and are thought to be most relevant for autocracies and least relevant for liberal states. This is what you would expect under the "state as an emergent agent" thesis. You can't chart the path of an individual based on market attractors, but things like swings in regional housing prices or surging structural unemployment in a given field obviously shape individuals' decisions about where to move or which vocation to enter. When tons of GIs buy homes in suburbs due to incentives shaped by intentional government policy, that's individual life choices producing an output guided by an institution's explicit rationality.

    Unfortunately, natural selection is the most well known complex systems process, and, for partly philosophical (and arguably dogmatic) reasons, it is drilled into students that natural selection does not involve final causes. Animals don't evolve because they want to. Evolution was supposed to be the great answer to questions of design and intentionality.

    The problem with this view is that natural selection can be found everywhere, in all sorts of systems, and these systems often involve conscious agents. The creation of dog breeds is an example of natural selection, the environment shaping a species' genetic traits, that can only be explained in terms of the intentions of human agents. Likewise, when cultural norms affect how humans mate, we have agents' rational decision-making involved in selection.

    There are arguments for natural selection in business survival, language, etc. All involve conscious entities and intentional decisions.

Count Timothy von Icarus
