Comments

  • The Fine-Tuning Argument
    I've been reading some more on the topic. An extensive review of fine-tuning for life in fundamental physics and cosmology is given by the young cosmologist Luke Barnes: The fine-tuning of the universe for intelligent life (2012) (this rather technical article served as the basis for a popular book coauthored by Barnes). He frames his article as a polemic against Victor Stenger's popular book The Fallacy of Fine-tuning: Why the Universe Is Not Designed for Us (2011), which goes beyond the ostensible thesis of its title and argues that the purported fine-tuning of the universe is not all it's cracked up to be. Barnes is a theist (as far as I know), and Stenger was, of course, one of the New Atheists, so there may be an ideological aspect to this debate. But in his academic writing, at least, Barnes stops short of making an argument for God, and having read this article (and Stenger's response), I am more persuaded by his case - as far as it goes.

    One thing caught my attention though. While discussing the fine-tuning of stars - their stability and the nucleosynthesis that produces chemical elements necessary for life - Barnes writes:

    One of the most famous examples of fine-tuning is the Hoyle resonance in carbon. Hoyle reasoned that if such a resonance level did not exist at just the right place, then stars would be unable to produce the carbon required by life. — Barnes (2012), p. 547

    He then includes this curious footnote:

    Hoyle’s prediction is not an ‘anthropic prediction’. As Smolin (2007) explains, the prediction can be formulated as follows: a.) Carbon is necessary for life. b.) There are substantial amounts of carbon in our universe. c.) If stars are to produce substantial amounts of carbon, then there must be a specific resonance level in carbon. d.) Thus, the specific resonance level in carbon exists. The conclusion does not depend in any way on the first, ‘anthropic’ premise. The argument would work just as well if the element in question were the inert gas neon, for which the first premise is (probably) false. — Barnes (2012), p. 547

    Barnes credits this insight to Smolin's article in the anthology Universe or Multiverse? (2007). Oddly, he himself does not make the obvious wider connection: the same argument could just as easily be applied to every other case of cosmic fine-tuning. For example, it could be similarly argued that the lower bound on the permissible values of the cosmological constant exists to avoid a re-collapse of the universe shortly after the Big Bang. We know that the universe did not collapse; the additional observation that, as a consequence, intelligent life had a chance to emerge at a much later time is unnecessary to reach the conclusion with regard to the cosmological constant. And yet, in this and other publications Barnes insists on referring to every case of fine-tuning (except for carbon resonance, for some reason) as fine-tuning for life.

    So why talk about life in connection with cosmic fine-tuning? Why would someone who objectively evaluates the implications of varying fundamental laws and constants of the universe - which is what Barnes ostensibly sets out to do - single out fine-tuning for life as a remarkable finding that cries out for an explanation? Well, one could argue that life is the one thing all these diverse cases of fine-tuning have in common. And the fact that the universe is fine-tuned for some feature (in the sense that this feature exhibits a sensitive dependence on fundamental parameters) to such a great extent is inherently interesting and demands an explanation.

    To this it could be objected that the target seems to be picked arbitrarily. Picking a different target, one could produce a different set of (possibly fine-tuned) constraints. Indeed, in the limit, when the target is this specific universe, the constraints are going to be as tight as they could possibly be: all parameters are fine-tuned, and all bounds are reduced to zero. Is this surprising? Does this extreme fine-tuning cry out for an explanation? Certainly not! Such "fine-tuning" is a matter of necessity. Moreover, even excluding edge cases, one could always pick as small a target in the parameter space as one wishes; it then becomes a game of Texas sharpshooting.

    Another objection is that life, being a high-level complex structure, is going to be fine-tuned (again, in the sense of being sensitive to variations of low-level parameters) no matter what. In fact, any such complex structure is bound to be fine-tuned. (Physicist R. A. W. Bradford demonstrates this mathematically in The Inevitability of Fine Tuning in a Complex Universe (2011), using sequential local entropy reduction as a proxy for emerging complexity.) So if there is something generically surprising here, it is that the universe is fine-tuned to produce any complex structures.

    It seems then that, objectively speaking, whatever it is that the universe is trying to tell us, it is not that it is fine-tuned for life. What then would be a legitimate motivation for framing the problem in such a way? One such motivation can be found in the observer selection effect in the context of model selection in cosmology, where it is also known as the weak anthropic principle: out of all possible universes, we - observers - are bound to find ourselves in a universe that can support observers. Thus fine-tuning for life (or more specifically, for observers) is offered as a solution, rather than a problem. Of course, this requires a scenario with a multitude of actual universes - in other words, a multiverse. Barnes considers existing multiverse cosmological models in his paper and finds that, whatever their merits, they don't solve the fine-tuning problem; if anything, he contends, such models make the problem worse by being fine-tuned to an even greater extent.

    So we come back to the question: Why do people like Barnes consider fine-tuning for life to be a problem in need of a solution? I think that theologian Richard Swinburne, who was perhaps the first to formulate a modern FTA, gave a plausible answer: we find something to be surprising and in need of an explanation when we already have a candidate explanation in mind - an explanation that makes the thing less surprising and more likely. And God, according to Swinburne, presents such an explanation in the case of intelligent life. So there is our answer: the only plausible reason to present fine-tuning for life as a problem is to make an argument for the existence of God (or something like it), and anyone who does so should deliver on that promise or risk appearing disingenuous.
  • Site Improvements
    I can see why Paul would stop frequenting the site. The nasty nature of the destruction of the older site and change over to this one must have been hard on him but I am very glad that he appears to have received a duly deserved financial reward for all his work.John Doe

    IIRC he hadn't been active on the old site in the last years before its demise, usually emerging into public view only to put out some fire.
  • Classical Music Pieces
    Speaking of Bohemians, I like this piece from Ma Vlast by Smetana, and particularly this old mono recording by Rafael Kubelik with the CSO.



    This recording was produced with a single microphone suspended high above the stage. When I listen to it, I imagine soaring like a bird above the endless green expanses, with waves of music rising warmly towards me, mixed with the moist aroma of the earth.
  • About skepticism
    It seems to me that there is nothing deeper to this argument than your rather idiosyncratic language. Usually, to say that I "reject" a proposition does not commit me to a stance of absolute certainty, it just means that I consider the proposition to be dubious at best. Which means that, in order to be performatively consistent with my own beliefs, I'll live my life as if I believed the proposition to be false, just as you said of yourself.
  • The Fine-Tuning Argument
    Yes, after I posted this I thought about it a bit more and realized that this wasn't actually making sense. I think I understand where Collins is coming from. Fine-tuning comes up in the context of model selection in particle physics and cosmology, but the logic there is somewhat different from that in the FTA. What happens, roughly, is that we start with a general mathematical form of the action or dynamical equations, based on general symmetry considerations or some such, in which the constants and boundary values are free parameters. This is where the problem of distributions over potentially infinite domains comes up as well. Various best-fit estimates involve marginalizing over parameters, which often results in integrals or sums over infinite domains, such as

    p(D) = ∫ p(D | θ) p(θ) dθ, with the integral taken over the entire (possibly infinite) range of the parameter θ.

    The normalizability challenge can then be answered with considerations such as that the applicability of the model is limited to a finite range of parameter values (e.g. up to the Planck scale), as well as considerations of "naturalness" (another can of worms that we need not get into).
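    To see how a cutoff enters such an estimate, here is a minimal numerical sketch (the one-parameter model and its Gaussian likelihood are invented for illustration, not taken from any specific physical theory). The marginal likelihood under a uniform prior truncated at a cutoff picks up a factor of one over the prior width, so the result depends directly on where we choose to cut the range off:

```python
import math

def likelihood(theta):
    # Invented stand-in likelihood: a unit Gaussian peaked at theta = 0.
    return math.exp(-theta**2 / 2) / math.sqrt(2 * math.pi)

def marginal_likelihood(cutoff, n=100000):
    """Marginalize over theta with a uniform prior on [-cutoff, cutoff].

    The prior density is 1/(2*cutoff); without a cutoff it would have to
    be zero everywhere, and the integral would be undefined.
    """
    h = 2 * cutoff / n
    total = 0.0
    for i in range(n):  # midpoint-rule integration of likelihood * prior
        theta = -cutoff + (i + 0.5) * h
        total += likelihood(theta) * h
    return total / (2 * cutoff)

# The integral of the likelihood saturates at 1 once the peak is covered,
# so the marginal likelihood falls off as 1/(2*cutoff): the "evidence"
# depends on the arbitrarily chosen prior range.
for cutoff in (10.0, 100.0, 1000.0):
    print(cutoff, marginal_likelihood(cutoff))
```

    Doubling the cutoff roughly halves the result once the likelihood is fully covered, which is one way of seeing why the choice of prior range is not an innocent technicality.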

    The bottom line is that in physics we are not agnostic about at least some general physical principles, and more often we are working with quite specific models with known properties and limitations, which can inform the choice of probability distributions. Whereas in the most general case of the FTA we are agnostic about all that. Any form of a physical law, any value of a fundamental constant represents an epistemically possible world, which we cannot discount from consideration.
  • The Fine-Tuning Argument
    I've been reading Robin Collins's extended treatment of the FTA in his 2009 The teleological argument: An exploration of the fine-tuning of the universe. For fairness's sake, I would like to revisit the normalizability and coarse-tuning objections to the probabilistic FTA, which I have talked about here, and which Collins addresses head-on.

    To recap, the normalizability objection draws attention to the fact that a uniform probability distribution, which the Principle of Indifference compels us to adopt, is undefined on an infinite domain; thus, for those fine-tuned parameters for which the range of epistemically possible values is unbounded, we cannot even have prior probabilities (this objection has been made by a number of critics of the FTA). The coarse-tuning objection points out that, even if the normalizability problem were somehow dealt with, we would end up in a strange situation where the width of the life-permitting region of a "fine-tuned" parameter doesn't actually matter, as long as it is finite; it could be 10^10^10 times as wide, and this would not make any difference to the argument. (@andrewk has pointed out that the metaphor of "fine-tuning," which comes from analogue radio and instrument dials, breaks down here.)
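    Both objections can be seen in a toy calculation (all numbers are made up; `w` stands for a hypothetical width of the life-permitting region, and `L` for a cutoff imposed on an otherwise unbounded parameter range):

```python
# Toy illustration of the normalizability and coarse-tuning objections.
# 'w' is a hypothetical width of the life-permitting region; 'L' is a
# cutoff imposed on an otherwise unbounded range. Numbers are invented.

def life_probability(w, L):
    """Probability of a life-permitting window of width w under a
    uniform prior on the truncated range [-L, L]."""
    return w / (2 * L)

narrow = life_probability(1e-10, 1e30)  # a "finely tuned" window
wide = life_probability(1.0, 1e30)      # a window 10^10 times as wide

# Both probabilities vanish as the cutoff is removed (L -> infinity),
# and both are "astonishingly small" for any large finite L: the actual
# width of the life-permitting window does no real work in the argument.
for L in (1e10, 1e20, 1e30):
    print(L, life_probability(1e-10, L), life_probability(1.0, L))
```

    The probability goes to zero as the cutoff is removed (the normalizability problem), and for any large finite cutoff it comes out tiny whether the window is narrow or enormously wide (the coarse-tuning problem).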

    Collins makes a number of arguments in favor of considering finite ranges in parameter space. I have already mentioned his concept of an "epistemically illuminated" (EI) region, which has an affinity with intuitive arguments made, for example, by John Leslie in his 1989 book Universes, where he at one point compares fine-tuning to a bullet hitting a lone fly on the wall: even if other, remote areas of the wall were thick with flies, he says, this would not make the pin-point precision of the hit any less surprising. I am not convinced by such analogies; I suspect that they trade on intuitions that work in familiar human surroundings but break down in the vastness and strangeness of modern physics, especially when it comes to highly counterintuitive infinities. (For example, when we imagine bullets randomly hitting broad targets, we don't really imagine infinite targets with uniformly distributed probability; rather, we probably have in mind something like a very wide normal distribution, which is nearly flat within a finite region in front of us but drops off to virtually zero far away from us.) In any case, if the analogy is justified, there ought to be a rigorous statement of the argument that vindicates it, and I still haven't seen any, which makes me distrustful.

    Perhaps the most interesting argument that Collins makes is that we are not justified in considering unbounded ranges for physical constants, because all our scientific theories have a limited domain of applicability (for fundamental physics it is mostly the energy scale; our current theories are low-energy effective field theories). If we deviate too far from the actual values of physical constants, the very models where these constants appear break down; in those remote parameter regions we will need some different physics with different constants. This is a good point that I haven't considered in relation to the FTA, nor have I seen it addressed by FTA critics. However, my objection to this argument, as well as the less formal arguments for EI regions, is that limiting the range of epistemically possible values cannot be justified from within the Bayesian epistemic model used in the probabilistic FTA. In particular, this move doesn't sit well with already highly problematic indifference priors, which are inversely related to the size of the range. It follows that specific, finite probabilities with which we operate depend on these squishy, ill-defined choices of EI regions. Moreover, the limitations of EFTs are only contingent, and only apply to their boundary conditions and perhaps constants, but not to the mathematical form of their laws.

    Although he puts the most effort into defending the idea that the size of epistemic parameter ranges is necessarily limited, Collins also considers the possibility of using non-standard probability theories, perhaps those without the requirement of countable additivity, and thus not suffering from the paradoxes of uniform distributions over infinite domains*. As I said earlier, I am generally sympathetic to this idea: I am not a dogmatic Bayesian; I recognize that Bayesian epistemology is not a perfect fit to the way we actually reason, nor is it a given that Bayesianism is the perfect reasoning model to which we must aspire. An epistemic model based on something other than classical Kolmogorov probability? Sure, why not? However, such a model first has to be developed and justified. And the justification had better be based on a large number and variety of uncontroversial examples, which is going to be highly problematic, because we simply lack intuitions for dealing with infinities and infinitesimals.

    And that is a general problem for arguments of this type, which also include Cosmological arguments: at some point they have to appeal to our intuitions, be they epistemic or metaphysical. But the contexts in which these arguments are invoked are so far removed from our experiences that intuitions become unreliable there.

    * I have also thought of another solution that he could propose to address the challenge of infinite domains, along the lines of his epistemically illuminated regions: epistemically illuminated probability distributions, which, he could argue, would be non-uniform (most likely, Gaussian).
  • The Fine-Tuning Argument
    This is exactly the point I have been trying to make.Rank Amateur

    No, not even close. The only point that you've managed to make in this discussion, and which you keep repeating over and over, as if it wasn't stupidly obvious, is that you know that you are right, and those who disagree do so only because they are prejudiced. We get it. You can stop repeating it and leave, since it is obvious that you have nothing else to say. Take Wayfarer with you, too.
  • The Fine-Tuning Argument
    Personally I am conflicted about the multiverse notion. I feel that it's probably true, not because of the FTA but rather because I just wonder - if there's one spacetime, why shouldn't there be more?andrewk

    It should be kept in mind that historically, "multiverses" in cosmology were not proposed as a solution to the fine-tuning problem. For example, the kind of bubble multiverse that results from chaotic or eternal inflation is just a generic prediction of inflationary cosmology, which has its own raison d'être, not directly connected with fine-tuning.

    Some cosmologists argue that, as an added bonus, such multiverses solve the fine-tuning problem, but there is no settled opinion on this. There is, for example, an ongoing argument over whether such a solution commits the "inverse gambler's fallacy."
  • The Fine-Tuning Argument
    I would answer 1 in 6, but that just means that that's what I'd use in calculations about what to bet, in most situations. It's not a truth claim. It's not a fact. It's an assumption I make to help in decision-making.andrewk

    And it's the right way to think of probabilities, in my opinion - at least in this context. After all, we are interested in beliefs (such as God beliefs), and how new evidence (such as the evidence of fine-tuning) affects them. After you learn this new evidence, and taking into account what you already know, what would be your most rational bet? That is just the kind of probabilistic reasoning that Collins and some others attempt to pursue.
  • The Fine-Tuning Argument
    there is no metaphysical reason why we should consider the laws to be fixedSophistiCat
    Isn’t there an empirical reason, namely, that they always are? Could it ever be F=MA (most of the time)?Wayfarer

    Teleological arguments deal with counterfactual possibilities. Empirically, fundamental constants are just what they are (most of the time) - that is why we call them constants. The FTA considers the possibility of them being something other than what is empirically observed. I am just saying (and I am not being particularly original here) that we could, with the same justification, vary fundamental laws as well.
  • The Fine-Tuning Argument
    From that abominable start, does it get any better? Are there any pages that are less naive and worth reading?andrewk

    It does, I think. It's been a while since I read his exposition of the argument; the latest, most complete version, according to him (which I haven't read, but intend to), is his entry in The Blackwell Companion to Natural Theology: The teleological argument: An exploration of the fine-tuning of the universe. I agree, such half-arsed analogies as that do more harm than good to the argument, but Collins does make a brave attempt to justify his conclusion with sound reasoning, and I like that about him. Not being satisfied with faith or unexamined intuitions and prejudices, he plays the philosopher's game, delving into the hows and whys, laying bare his reasoning and making himself vulnerable to criticism. Which is a whole lot better than blandly stating, like @Rank Amateur does, that the design inference is "obvious," and if you don't agree, it's your own damn fault, you closed-minded atheist.

    Of course, I agree with @Moliere that, at the end of the day, it still comes down to subjective judgments, whether they are based on epistemic or esthetic or ethical criteria (they are all closely related, anyway). But it seems to me that a teleological argument, especially one as technically specific as the FTA, is amenable to analysis, so we shouldn't be satisfied with gut feelings.

    Take @Rank Amateur's favorite card deck analogy, for example. It is just the kind of toy example where Bayesian analyses (which Collins favors) shine. And it is instructive to consider. I won't bore you with formulas (which are elementary, anyway), but the idea here is that the canonical ordering of cards in a deck is far more likely to be the result of a deliberate action (whether because it was just removed from its factory packaging or because someone deliberately arranged it in order) than of a random shuffling. But we know this because we know something about decks of cards, how they are labeled and handled, and about people and their habits and preferences. We have some rational expectations, or priors, in Bayesian lingo, which are based on our experiences of the world.
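    For what it's worth, the elementary Bayesian calculation behind the deck intuition can be sketched as follows (the prior for deliberate arrangement is a made-up illustrative number; the point is only that any non-negligible prior swamps the 1/52! likelihood of a random shuffle producing the canonical order):

```python
import math

# Likelihoods of observing the canonical factory ordering:
p_order_given_shuffle = 1 / math.factorial(52)  # one permutation out of 52!
p_order_given_design = 1.0                      # a deliberate arranger gets it right

# Illustrative prior: deliberate arrangement is deemed very unlikely a
# priori, but not astronomically so -- this is where background knowledge
# about decks and people enters. The number itself is invented.
p_design = 1e-6
p_shuffle = 1 - p_design

# Bayes' theorem: posterior probability of design given the canonical order.
posterior_design = (p_order_given_design * p_design) / (
    p_order_given_design * p_design + p_order_given_shuffle * p_shuffle
)
print(posterior_design)  # effectively 1: design wins despite the tiny prior
```

    The calculation only goes through because we can justify all three inputs - both likelihoods and, crucially, the prior - from our experience with decks of cards and with people.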

    The case of fundamental laws and constants is disanalogous to this example in a number of ways. One, already mentioned, is more technical, having to do with normalizability. But more importantly, we can have no rational expectations with regard to fundamental laws and constants - that is just what makes them fundamental. (By contrast, we can potentially make some predictions, or even precise calculations, for the spring constant in Hooke's law.) There is nothing in our experience that could give us any clue as to why they have the structure and the values that they have.

    Of course, we could still hypothesize that some supernatural entity made the laws and constants just so. And of all the ways a supernatural entity could have done it, it made the laws and constants compatible with the existence of life in the universe - lucky us! Perhaps then of all conceivable supernatural entities - what are the chances?! - the one that is responsible for the fundamental laws and constants of our universe is a kind of personal, anthropomorphic being that had both the power and the intention to make the world suitable for humans. In any case, it seems that, as @jorndoe points out, this supernatural being is exquisitely fine-tuned!
  • The Fine-Tuning Argument
    And therein lies a problem: there is no uniform probability distribution on an infinitely wide interval. But if not uniform, then what? If a uniform distribution of epistemic probability could be justified by the principle of indifference (sort of - there's a lot of controversy over the principle of indifference, and over epistemic probability for that matter), there surely cannot be any justification for a non-uniform distribution.
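    The non-existence claim here is elementary: a uniform density on an infinite interval would have to be some constant c, and no value of c can make the total probability come out to 1.

```latex
% A uniform density on the whole real line would be a constant c >= 0, but
\int_{-\infty}^{\infty} c \, dx =
  \begin{cases}
    \infty, & c > 0 \\
    0,      & c = 0
  \end{cases}
% so the normalization condition \int p(x)\,dx = 1 cannot be satisfied.
```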

    Robin Collins attempts to address the problem with so-called "epistemically illuminated regions," if I remember correctly. That is, he suggests that instead of taking the largest conceivable range as the total probability space, which would be infinite for many of the fine-tuned parameters, we should only look at finite ranges that we have actually examined for life fitness ("epistemically illuminated"). The problem here is that for at least some of these parameters we have no trouble examining the entire infinite range. We could (and probably should) vary all so-called parameters simultaneously and thus end up not with a single range but with a multidimensional parameter space. However, even though it might be analytically and technically difficult, nothing in principle prevents us from theoretically examining this entire infinite space for life fitness. If we do, and it turns out that the life-supporting regions in this space have a combined measure of zero, that would undermine Collins's probabilistic analysis. It seems unsatisfactory to bet your entire analysis on such a contingency.

    If we then allow the laws themselves to vary (and there is no metaphysical reason why we should consider the laws to be fixed while relaxing constants and boundary conditions), we run into an even more severe problem: the "collection" of all possible laws is too big to even form a set (since it will be as "big" as the set of all sets - a paradoxical concept, as Russell famously demonstrated), and so no measure theory - and thus no probability - could possibly be applied here.

    But why stop there? Who said that there have to be laws? They are not a metaphysical necessity. So, as long as we allow constants, boundary conditions, and then laws to vary, we should also consider possible universes in which laws do not operate, or operate inconsistently. That broadens the space (which is not a space!) inconceivably.

    Is this a fatal blow to the FTA? Well, it is to Collins's Bayesian argument, but frankly, I think there are better reasons than that to reject Collins's argument.
  • The Fine-Tuning Argument
    Yes, I am aware of Robin Collins's argument. Maybe we'll get to him, but I was rather hoping to engage proponents of FTA directly. I could talk about Collins's argument (I'll need a refresher), but I wouldn't want to just talk to myself. I don't think his argument works, but he is one of the few to take up the defense of the FTA seriously, and if he is wrong, his failure is instructive.

    jeeprs... go thumb your dog-eared collection of quotes and moan about the evils of atheistic materialism somewhere else. You don't seem to understand what we are talking about.
  • The Fine-Tuning Argument
    I am finding it hard to understand why you don’t see the issue here. The customary post-Enlightenment attitude to this matter has always been that as the Universe was not ‘God’s handiwork’ [i.e. the consequence of intentional creation], then the only alternative was that life arose by chance.Wayfarer

    You are equivocating on two meanings of "chance"; depending on the meaning, the "chance/design" dichotomy is either obviously true or obviously false, but in no case is it profound or relevant to our topic.

    If "chance" means unintentional, accidental, then it translates into "The universe and everything in it is either the consequence of intentional creation or it is not" - a truism. If "chance" means random, lacking any pattern, then it is obviously false, since most people before, during and after the Enlightenment had at least some idea of the universe as a fairly orderly place. As such, it is to be expected that the universe was always constituted in such a way that life would be possible in it at least at some time and in some places. This is in no way a novel, unexpected finding. Neither is the idea that the universe could conceivably have been constituted otherwise.

    None of this has much to do with the notion of fine-tuning, which specifically has to do with sensitive dependence of life on certain "parameters" of physical models.
  • The Fine-Tuning Argument
    Now this has to be explained.Antony Latham

    Why? Can you explain your reasoning? This is one of the things I would like to clear up in this discussion. Is this fine-tuning surprising? Is it unexpected? If so, what are your expectations and what are they based on?

    Occam's razor leads me more to the more parsimonious solution - design.Antony Latham

    How do you figure?

    The probability of all the needed conditions is on the order of 52!Rank Amateur

    How do you calculate the probability - not of the card deck permutation, of course, but of the universe being life-supporting? Show your work, please.

    As an example - in the classic thought experiment:Rank Amateur

    Explain your reasoning in this thought experiment. What if the card order was not the canonical order - would your answer be different? Why?
  • The Fine-Tuning Argument
    Despite the mathematical rigor that the argument requires (i.e. the Universe needs a specific set of constants, each of which needs to be "tweaked" to a specific number), the argument rests on the probabilistic absurdity of using a sample size of one.Maw

    Yes, when considering the probability of fine-tuning we clearly cannot appeal either to observed statistics (we have just one sample) or to theory (theory gives us fixed values, not distributions). Probability in this context is usually understood as epistemic probability. The modal reasoning goes something like this: for all we know, the constants could have been different, and since we have no reason for favoring any one value over another, we end up with a uniform probability distribution (the principle of indifference).

    I guess I don't see much difference between FTA and other forms of the teleological argument -- is that an unfair characterization, in your view?Moliere

    It's a type of teleological argument, or argument from design. Among other examples, probably the best known are those having to do with biological design (e.g. Paley's watch analogy). And like other teleological arguments, it seems to have a lot of intuitive appeal for some people, and yet when the argument is viewed skeptically, it turns out to be surprisingly hard to even give it a rigorous formulation, and few even try.
  • The Fine-Tuning Argument
    In the book Just Six Numbers, Martin Rees offers the argument that six numbers have to be what they are in this universe so that we can live in it. But there's more. If those numbers were other than they are, then the universe itself would probably be short-lived. Stepping into very plausible conjecture, he argues that perhaps there have been lots (and lots (and lots...)...) of Universes that weren't quite right for us that came and went, until one came along that would support us.

    Conjectural but reasonable physics, or a supernatural being that presides over it all. What seems most reasonable to you?
    tim wood

    Depending on the level of modeling and level of detail, there may be other "fine-tuned" numbers - some are briefly discussed in the SEP article referenced in the OP, more can be found elsewhere. (There are 20-30 constants in the Standard Model of particle physics, and then there are relativistic constants and cosmological initial conditions.)

    But before we go looking for a solution, we need to establish motivation: is there a problem to be solved? Is there something that cries out for an explanation? That is far from obvious. That the universe is suited for life is a truism. That the universe could have been otherwise (at least as a conceivable possibility) is also pretty uncontroversial. The only sticking point is this alleged fine-tuning - sensitive dependence of life-permitting conditions on certain parameters of fundamental physics.

    Strictly speaking, they are not parameters - in the theories where these numbers are found they are givens, brute facts that go along with equations and other postulates of those theories. But if we treat them as free parameters, as knobs that we can turn this way and that (and why should we?), then there is a sensitive dependence - although just how sensitive is also not so clear - see above.

    In physical cosmology and particle physics (inflationary cosmology, string theory, quantum field theory), where the problem to be solved is theory choice, anthropic reasoning and (no-)fine-tuning considerations appear in the context of typicality and naturalness (see sections 4.4 and 5 in the SEP article for an overview). These are rather controversial epistemic criteria, but at least here we understand what the stakes are and what the reasoning is. Whereas when the problem of fine-tuning is stated outside such a context, it is not even clear why it is a problem in the first place.
  • Reviews of new book, Neo-Aristotelian Perspectives in the Natural Sciences
    Fair enough, but my point still stands, since @StreetlightX insists that the rot of Plato's political and social ideology and the historical context that supposedly shaped his views infects all varieties of modern Platonism-so-called.
  • Reviews of new book, Neo-Aristotelian Perspectives in the Natural Sciences
    If perchance you could lay out their 'Platonic take on humanity', perhaps we might go from there.StreetlightX

    Why should I? And who are you quoting? Just a bit earlier you were telling us how all Platonists, even modern Platonists-about-this-or-that, were all thoroughly compromised at their metaphysical foundation by Plato's "shitstain" (now that is an actual quotation). Well, I am not seeing how that comes about. Perhaps you could make an argument that Frege's mathematical Platonism is really crypto-Fascism?
  • Reviews of new book, Neo-Aristotelian Perspectives in the Natural Sciences
    When you take, say, Bigelow and Pargetter's arguments for structural universals and demonstrate convincingly how they have a maleficent apology of the nationalistic slave-owning patriarchy of Classical Greece built into them, then perhaps we will have something to talk about. Until then it's just so much sophistry.

    No argument from me.
  • Reviews of new book, Neo-Aristotelian Perspectives in the Natural Sciences
    I don't really believe that these ideas are separable from 'Plato's views'StreetlightX

    I do, and apparently so do other philosophers who took up the idea. I am not buying this primordial taint line.

    *PD: "The social conflicts of the fourth century, the greater dependence on slavery, after a decline at the end of the Peloponnesian War, made [Plato's] attempt to justify and rationalize the social relationships of the polis comprehensible. Difference had invaded and disrupted the city, and was acknowledged and almost despaired of by Euripides. Plato's response to the presence of difference was to look even more deeply inward and to justify the differences within the city in terms of an attribute of the citizen, logos. The Greek male human being thus reconstructed his notion of the world; the dominance of the citizen, the philosopher, was justified not in terms of autarkeia, but rather in terms of inevitable and natural superiority. The contradictory position of women, in which they were both objects of exchange necessary for the reproduction of the city, and outsiders, bestial and irrational, was also rationalized in a new way. Women were associated with the body, which was inferior to the mind; thus they, like the body, served the soul, the head, the philosopher, the male".StreetlightX

    Or perhaps Plato's attitude towards women was simply due to his preference for boys. You know, I have little regard for such speculative sociopsychology.

    One may, but then one has no adequate plan for creating an individual. Where does the other information (the things you wish to abstract away) come from? Remember, the role of the ideal is to explain the intelligibility of the individuals we observe.Dfpolis

    The role of the ideal is to identify an essence of an individual thing, separating it from other, inessential qualities, but what that essence is in any particular case is arguable. One may claim that the essence of humankind is not bound up with race or gender, just as when we identify some object as a chair, say, we abstract away a lot of the things that would be required to create the individual chair, like its precise shape and size and material and manufacturer. Or something like this. You should rather take this up with a competent Platonist.
  • Reviews of new book, Neo-Aristotelian Perspectives in the Natural Sciences
    Again, I have no interest in defending Plato's own views (I am, frankly, not all that interested in what his views were, although I do have some idea along the lines of the gloss that you give here). Progenitors and namesakes of ideas don't own the ideas. Newton, who by all accounts was a very disagreeable person and had some wacky ideas, doesn't own Newtonian mechanics. And Plato doesn't own Platonism, which is what was originally at issue here. And although, again, I have little sympathy with the philosophy, I also dislike this uncharitable smear. You literally Godwined the discussion!

    If there is an ideal, an exemplar human being, then that exemplar is male or female, of some particular race, introverted or extroverted, attracted to men or women, masculine or feminine in demeanor, etc.Dfpolis

    Not necessarily. One can abstract away all these details, leaving only essentials. Whatever those essentials may be, they may reasonably exclude all the things that you list here. I am not going to play Plato's advocate here - there are plenty of good ones out there (present company excluded, unfortunately); I am only calling for charity and intellectual honesty.
  • Reviews of new book, Neo-Aristotelian Perspectives in the Natural Sciences
    Positing a Platonic idea or exemplar implies, for example, that some individuals are more human (better reflect the exemplar) than others. This can only foster prejudice and injustice.Dfpolis

    Well now, I am not a Platonist by any stretch, but this is unfair. The most obvious Platonic take on humanity would be that some individuals are closer to the human ideal (which, I suppose, would be Christ in Christian neo-Platonic philosophy). That's not so "fascist," is it?
  • Reviews of new book, Neo-Aristotelian Perspectives in the Natural Sciences
    And I would say it gives you more of a problem admitting the principle of least action does reduce to a holistic position which takes finality seriously as part of the fundamental workings of the Cosmos.apokrisis

    I don't really see a problem here. A time-reversible, deterministic system (which is the context in which the principle of least action is operative) can equivalently be evolved forward from the initial state or backwards from the final state using instantaneous laws of motion. We are more used to thinking in terms of unfolding forward in time, but there is no time asymmetry in such systems. So if you think that this feature is remarkable, you don't even need to appeal to the "holistic" principle of least action (PLA)* - it is already evident in the "atomistic" differential formulation.
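    To make the time-symmetry point concrete, here is a toy numerical sketch (my own illustration, not anything from the thread): a harmonic oscillator integrated with the leapfrog scheme, which is exactly time-reversible. Running forward from the initial state and then backward from the final state (by flipping the velocity) recovers the initial state to within roundoff - the same local law of motion works in either direction.

    ```python
    # Toy illustration: a time-reversible, deterministic system can be evolved
    # forward from the initial state or backward from the final state.
    # System: dx/dt = v, dv/dt = -x (harmonic oscillator), leapfrog integrator.
    def leapfrog(x, v, dt, steps):
        """Kick-drift-kick leapfrog: exactly time-reversible up to roundoff."""
        for _ in range(steps):
            v_half = v - 0.5 * dt * x
            x = x + dt * v_half
            v = v_half - 0.5 * dt * x
        return x, v

    x0, v0 = 1.0, 0.0
    xT, vT = leapfrog(x0, v0, dt=0.01, steps=1000)   # forward from the initial state
    xb, vb = leapfrog(xT, -vT, dt=0.01, steps=1000)  # backward: flip velocity, re-run
    print(abs(xb - x0), abs(vb + v0))                # both ~0: initial state recovered
    ```

    Nothing about the scheme privileges the initial state over the final one; "initial" and "final" are just labels for the two boundary conditions.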

    The PLA is not really about the "finality" (the final state determines the path in the differential formulation as well). And it has least to do with Aristotelian final cause, which is bound up with anthropomorphic, psychological categories of goals and intentions. The PLA is made possible by the particular nomological structure that describes the system. Such structures - constraints - are characterized by redundancies where knowing some limited information about the system, such as the boundary conditions and the laws of motion or the action, allows one to determine everything else about that system.**

    If there is tension here, it is the tension between the perspective of individual causal powers and dispositions on the one hand, and the nomological/covering-law perspective on the other. The former, "atomistic" perspective has its attractions, but it can obscure the global structure. Indeed, all this talk about a particle or a ray knowing, feeling, wanting, this recourse to anthropomorphic teleology comes from assuming the local dispositional perspective and losing sight of the global nomological one. Ironically, I think that the dispositional view is more closely associated with the classical, pre-scientific philosophy, whereas the nomological view mainly emerges during the Enlightenment and the following scientific revolution, which is when the PLA was first formulated and developed.

    * Or, more accurately, stationary or extremal action.

    ** Another, mathematically related example is the Gauss theorem, which relates the distribution of a vector field on a closed surface to the distribution inside the volume bounded by the surface; I remember being mildly surprised by this result as an undergraduate - it's as if the surface "knows" about what is inside. Of course, as one gets a better feel for mathematics - and the mathematical structure of physical laws - such results become less surprising.
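    For what it's worth, the Gauss theorem "surprise" can be checked numerically in a few lines (an illustrative field of my own choosing, over the unit cube): the outward flux through the surface equals the volume integral of the divergence, as if the surface "knew" the interior.

    ```python
    # Numerical check of the Gauss (divergence) theorem on the unit cube,
    # for the illustrative field F = (x**2, y*z, x*y*z), div F = 2x + z + x*y.
    import numpy as np

    def div_f(x, y, z):
        return 2 * x + z + x * y

    n = 64
    h = 1.0 / n
    c = (np.arange(n) + 0.5) * h          # cell/face midpoints on [0, 1]

    # Volume integral of div F (midpoint rule)
    X, Y, Z = np.meshgrid(c, c, c, indexing="ij")
    volume_integral = div_f(X, Y, Z).sum() * h**3

    # Outward flux through the six faces; the x=0, y=0, z=0 faces contribute 0
    U, V = np.meshgrid(c, c, indexing="ij")
    flux = 0.0
    flux += np.ones_like(U).sum() * h**2  # x = 1 face: Fx = x**2 = 1
    flux += V.sum() * h**2                # y = 1 face: Fy = y*z = z
    flux += (U * V).sum() * h**2          # z = 1 face: Fz = x*y*z = x*y

    print(volume_integral, flux)          # both 1.75: surface "knows" the interior
    ```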

    Again, I thought you were arguing against four causes modelling. And now you are championing it under the permissive banner of pluralism.apokrisis

    Not really. I mean, if you have to dig up that antique, you may as well derive this lesson from it. It's not such a good fit though: as I understand it, Aristotelian causes are supposed to be complementary rather than alternative - they all have their roles to play, with the final cause taking center stage.
  • Reviews of new book, Neo-Aristotelian Perspectives in the Natural Sciences
    Interesting? Or entirely paradoxical for reductionist meaphysics?apokrisis

    Scientific and other analytic explanations tend to be reductionist, in the sense that they fit phenomena or concepts into some theoretical framework. In doing so they reduce: they demonstrate that one thing is nothing other than another, presumably simpler or more tractable or more familiar or otherwise more theoretically attractive thing. In that sense the Newtonian, Lagrangian and Hamiltonian formulations of classical mechanics are equally reductionist, and the same goes for alternate formulations of quantum mechanics.

    If there is a lesson to derive from the four causes it is this pluralism of explanations - and that would be a genuine counter to reductionism. Rather than arguing for one framework as the only metaphysically correct one, the emphasis can be placed on the fact that there are these alternate frameworks that are sometimes exactly equivalent (and the interesting question to consider is how that comes about), but in any event offer different instrumental and conceptual possibilities.
  • Reviews of new book, Neo-Aristotelian Perspectives in the Natural Sciences
    This. It's irritating how much time in philosophy is spent looking backwards, trying to give an "Aristotelian" or "Kantian" or whatever gloss to every idea, even if it means doing ridiculous mental contortions and completely emptying the idea of any substance. And it looks about as convincing as the efforts of the Bible Code cranks. It's as if they fear that without establishing such a noble pedigree they won't be taken seriously. And yet if you look at the really interesting and relevant discussions of causation, for example, during the last half-century or so, you will hardly find a mention of the famous four causes.

    Desire? Selects? Maybe my memory is off, but I think Feynman described the "quantum event" as taking all possible paths, all but the shortest cancelling each other out. If you've got space for "desire" or something "selecting" please make clear how that can be: where it is, how it is, how it workstim wood

    This is typical stone soup. Nothing whatever is gained by appeals to "desire" or "foreknowledge." Yes, variational approaches in physics have this interesting property that the path taken appears to be explained by the final state, rather than the other way around. But superficial anthropomorphism only gives the appearance of an explanation, all the more so because it is equally (and just as ineffectually) applicable to any situation. There are deeper and more interesting ways to make sense of such alternate explanatory frameworks.
  • Environmental Alarmism
    What do you think?Marchesk

    I think you should more critically evaluate your sources and not lump together science-fiction and actual science.
  • Is there anything paradoxical about statements that are true but unbelievable?
    In natural language, as opposed to formal logic, when we say "it's either this or that" we usually imply that all presented alternatives are live possibilities. Interpreted this way, the statement mentioning a unicorn is simply false, since (we assume) a unicorn is never a live possibility. But if you interpret "or" as a formal disjunction, then there is nothing wrong with the statement. It's just not how people usually talk (if they are being serious*). So I think your unease comes from conflating these two senses of "or."

    * Imagine a scene: you and a friend come across a horsebox and wonder who is inside: a horse, a cow, a donkey... You decide to have a friendly wager: you bet on a cow and your friend bets on a horse. At this point you both clearly hear neighing from inside the box. Your friend smirks and says: "Well, it's either a horse or a unicorn!"
  • Is there anything paradoxical about statements that are true but unbelievable?
    My instinct is that this is an elementary philosphical problem, but I've not so far succeeded in finding any treatment of this question.Rupert

    It's either an elementary misunderstanding of disjunction or a lack of a sense of humor.
  • Donald Trump (All General Trump Conversations Here)
    My point is not that he'll care, it's that I don't get why you feel the need to protect him. As far as I remember, you supported Kasich who despised Trump and certainly isn't defending him now. This is not necessarily a progressive vs conservative issue. Many traditional conservatives are as critical of him as anyone else. So, why have his back?Baden

    To right-wingers, supporting those in power takes priority over conscientious reflection - that is what makes them right-wingers. That is why 90% of Republicans support Trump. I would like to think that all those people don't really approve of what he says and does, because that would be just too sad. They stand behind him because he is the leader.
  • Time is real?
    So in this experiment, a decision to raise a screen effects whether a particle goes through a double slit as a particle or a wave IN THE PAST.Devans99

    Did you actually read the article?

    Most physicists think the answer is a resounding "no." No, we cannot kick back with retrocausality. Or, at the very least, the experiment, whether conducted across a lab or across galaxies, doesn't support the idea of time travel.gizmodo.com

    The article is written at a very basic layman level and with lots of journalistic flourish, but there are links to more in-depth explanations at the bottom.
  • Relational Proof
    Thank you for your patient explanations. It will take me more work to completely follow your Hilbert system derivation, but I trust that it is sound.

    However... I am compelled to resist some more. So we can formally derive the variable swapping rule in at least some formal systems. This is all well and good, but I don't think that this is relevant. Because before we can interpret the formulas in the OP in any system, we have to do some preliminary interpretation, which includes parsing the symbols x and y as arbitrary (fungible) variable names. So that variable swapping rule? We have already helped ourselves to it before we even settled on a formal system for interpreting the expressions. To then prove what we already assumed using the axioms and rules of a particular system is unnecessary and question-begging.
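    For what it's worth, the derivability conceded above - that the swap goes through in at least some formal systems - is a one-liner in a proof assistant. A sketch in Lean 4 syntax, with p an arbitrary binary predicate (my own illustration):

    ```lean
    -- ∀x ∀y, p x y entails ∀y ∀x, p y x: instantiate both quantifiers
    -- and re-generalize in the opposite order.
    example {α : Type} (p : α → α → Prop) (h : ∀ x, ∀ y, p x y) :
        ∀ y, ∀ x, p y x :=
      fun y x => h y x
    ```

    Which, if anything, only underlines the point: the proof is trivial precisely because the machinery for handling bound variables was set up in advance.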

    What do you think?
  • Relational Proof
    By meta-language I mean the implicit convention, under which, among other things, the strings x and y are interpreted as the names of variables. The meta-language is what enables us to parse the formulas in the OP without knowing anything about the particular logical system in use; without the implicit assumption of the meta-language convention andrewk's own reply to the OP would not make sense.
  • Relational Proof
    Why are they not identical? It is a commonly accepted rule of the meta-language, i.e. the formal or semi-formal language that is used to write logical expressions, regardless of the particular logical system being used, that variable substitution can be used salva veritate. The LHS of the above formula can be transformed into the RHS using only the rules of the meta-language, with no reference to any logical axioms or theorems.
  • Relational Proof
    Syntactically equivalent, as opposed to logically equivalent - thank you, that is what I was getting at. The premise and the conclusion in the OP problem are syntactically equivalent, unless I am missing something.

    ∀x.∀y.p(x,y) ≡ ∀y.∀x.p(y,x)
  • Relational Proof
    The fact that notation has no bearing on meaning doesn't seem like something you would need to prove, no? A postulate, maybe - though not as an axiom but as a meta-rule.
  • Relational Proof
    How is the conclusion different from the premise (other than lexically)? What am I missing?
  • Carnap and the Meaninglessness of Metaphysics
    A meaningless debate might go something like this

    "of shcrik in the water too"

    "gavagai"

    I have no idea what those terms mean. It is purely nonsense.

    So given that standard I'd likely say there isn't such a thing, insofar that the words have meaning.
    Moliere

    It is true that accusations of "meaninglessness" (as well as some others, such as "incoherency") are often thrown around rather loosely. But, returning to the topic of the thread, you need to remember that Carnap was a positivist, and so he had stringent and, perhaps to our ear, rather idiosyncratic criteria of meaningfulness.

    But let's not nitpick vocabulary. I think the idea in this particular instance is that some debates just lack substance and worth. Some - in fact, probably many - questions that have been mainstays of philosophy, and metaphysics in particular, are pseudo-questions.

    My own approach when it comes to questions of ontology, debates over realism vs. nominalism, etc. is to ask, What is at stake? Why is this important? What difference in our worldview would one position make vs. the other? If it seems to me that nothing substantial is at stake, except perhaps minor differences in language, then I judge such questions to be - let's say "worthless," if you don't like "meaningless."
  • Carnap and the Meaninglessness of Metaphysics
    You did so on the grounds that anti-metaphysical statements are meaningless. You even stated as much in the first sentence of the previous post.Marchesk

    Well, no, not unless you believe that metaphysical questions are necessarily as vague and pointless as the one I was criticizing. My most charitable take on metaphysics is that it is a search for and a critical analysis of framing - and that is not meaningless.

    what motivates the questioningSophistiCat

    The difference between the individual things we perceive, and our universal talk about them.Marchesk

    That makes no sense, no matter how many times you say this. Come on, Marchesk, you are not even trying.

    (2) what it is that you actually want explained, and (3) what kind of an explanation you require.SophistiCat

    (2) Whether there is something in the world which matches or supports our universal talk.

    (3) An argument for something in the world or in our concepts that explain the universal talk.
    Marchesk

    That is still much too vague. There are many ways in which such a question could be cashed out: we could analyze our language, starting with universal talk and perhaps going on to causal talk (which is one of the directions this conversation has taken). We could analyze our psychology/cognition - and here there is also a variety of approaches. We could talk about "the world" (i.e. the intended objects of our universal talk) - and here the possibilities are too many to number. We could also talk about the interrelationship between all these spheres, which broadens the scope to a truly unmanageable size.

    (4) There have been at least 4 possible answers given to this question: nominalism, conceptualism, moderate realism (Aristotle), and realism (Platonism).Marchesk

    There are so, so many more ways to address the general topic "universals" - at least until you frame the question better than you have done so far. But in any case, to paraphrase Crispin Wright, identifying your position with one of the above labels accomplishes about as much as clearing one's throat.


    I was wondering, by the way, what it is that you were trying for with your programming analogy. A class in object-oriented programming (OOP) is not a good analogy for the general idea of universals. In OOP two objects with the same functional properties are not necessarily instances of the same class. Indeed, being an instance of a particular class is itself a property, which can be directly queried in languages that support reflection. That would not make sense with universals: being a member of a class is not a property that is distinct from the sum of properties that defines that class. Being a member of the class of blue things is exactly the same as being blue (which is the point that @Snakes Alive already made).

    I guess you were looking for some causal, generative account of differences and similarities between things. But I am afraid that such an overly general approach is not going to be a productive direction for inquiry; you need to bring more focus to it. (And turning back to OOP for a moment, a slightly better but still imperfect analogy for universals would be an interface or a completely abstract class, which defines "phenomenal" properties of objects. But interfaces are not generative: conceptually, they are used to abstract properties from existing things or describe hypothetical properties that may or may not exist.)
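    To make the reflection point concrete, here is a toy Python sketch (class names invented for the example): two objects can agree on every observable property and still be instances of different classes, and class membership is itself a queryable fact - unlike membership in the "class" of blue things, which just is being blue.

    ```python
    # Two classes with identical "phenomenal" properties.
    from typing import Protocol, runtime_checkable

    class BlueChair:
        color = "blue"

    class BlueCup:
        color = "blue"

    a, b = BlueChair(), BlueCup()
    same_property = a.color == b.color   # True: identical observable property
    same_class = type(a) is type(b)      # False: class is a further, queryable fact
    is_chair = isinstance(a, BlueChair)  # True: reflection on class membership

    # An interface-like structural type, by contrast, classifies by
    # properties alone - closer to (though still not the same as) a universal.
    @runtime_checkable
    class Blue(Protocol):
        color: str

    both_blue = isinstance(a, Blue) and isinstance(b, Blue)  # True for both
    print(same_property, same_class, is_chair, both_blue)
    ```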
  • Carnap and the Meaninglessness of Metaphysics
    Well, I explained why your question makes no sense, but alas, all you can think of is poisoning the well. Never mind, I think others here do a much better job of making this discussion substantive and interesting than you do.