Comments

  • Do you believe there can be an Actual Infinite
    The resolution of singularities is in part due to the precedence of them turning out to be the result of mistakes in our models. — MindForged

    Singularities are nasty beasts, and there's a better reason for eschewing them than past experience: singularities blow up your model in the same way that division by zero does (division by zero is one instance of singularity); they produce logical contradictions.
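
    A minimal illustration of the sort of contradiction I mean: the schoolbook "proof" that 2 = 1, whose only illicit step is the division by a - b, which is zero:

    \begin{align*}
    a &= b \\
    a^2 &= ab \\
    a^2 - b^2 &= ab - b^2 \\
    (a + b)(a - b) &= b(a - b) \\
    a + b &= b \quad \text{(dividing by } a - b\text{)} \\
    2 &= 1 \quad \text{(taking } a = b = 1\text{)}
    \end{align*}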

    Of course, singularities are not the only sort of infinities that we deal with. As you said, if we use modern mathematical apparatus, then it is exceptionally hard to get rid of all infinities. A few have tried and keep trying, but it's a quixotic battle.

    As for the objection "it's just math, it's not real," then my next question is: what is real? Where and why do you draw the boundary between your conceptual mapping of the world and what you think the world really is? Is there even any sense in drawing such a distinction? Are three apples really three, or just mathematically three? If they are not really three, then what are they really?
  • Self-explanatory facts
    OK, I see now that your position is deeply embedded in Aristotelian metaphysics, which holds no attraction for me. Thanks for taking the trouble to explain it though.
  • The Death of Literature
    Nobody expects Quentin Tarantino or Ryan Gosling to have anything particularly interesting to say about the world, but they do expect that of JM Coetzee and Hilary Mantel. — andrewk

    The age of the serious writer as a public intellectual carrying wisdom and moral authority is even shorter than the age of print: it began roughly in the mid-19th century in the Western world and is on the wane now. I think you are wrong about Tarantino and Gosling, though, given our celebrity culture.
  • The Death of Literature
    The 19th century was the golden age of print (or, more precisely, the late 18th to the early 20th century), and, coincidentally or not, that is also when the novel became "serious literature." By print I mean not so much the physical medium as what has come to be associated with it: the relatively long, sequential read, which includes "literature" as well as non-fiction books and magazine and newspaper articles of nontrivial size. It is contrasted with audio-visual and multimedia entertainment, reference, social media, Internet browsing, forums like this, etc. (The latter two are on the way out, by the way.)

    So literature, or print, as we conceive of it now, is actually a relatively recent and brief phase in the history of human civilization. Already, if we group together all the new forms that came to prominence in the 20th-21st centuries, this new age is comparable in length to the age of print.
  • Self-explanatory facts
    How can something essentially inadequate to a task perform the task? — Dfpolis

    Explain, please.
  • Self-explanatory facts
    Now, my question is the following: how would you attack this argument, in a way other than denying (P2), i.e. that there exists a series of all grounded facts? — Philarete

    Would you consider just dropping the PSR? It's difficult for me to see what the attraction of an unrestricted PSR is, Della Rocca's arguments notwithstanding.
  • Why do athiests have Morals and Ethics?
    Why do athiests have Morals and Ethics? — AwonderingSoul

    Have you tried Google? I just highlighted "Why do athiests have Morals" in your title, right-clicked, and selected the option to search Google. (You misspelled "atheists," of course, but Google is clever enough to correct the misspelling.) The very first page of results contains several responses from card-carrying (literally!) Atheists.
  • Possible Worlds Talk
    My understanding is along the lines of what @Snakes Alive said (I think). For a modal realist like Lewis possible worlds serve as a reductive explanation of (one type of) modality, but that is a minority view. For the rest, possible worlds talk is just that - talk. It's a metaphorical interpretation of (some) modalities. It neither explains (in the way Lewis's realism does) nor replaces modality - it's just an informal and intuitive language. Whenever possible worlds language is used, you can replace it with the appropriate formalism.
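
    For a concrete sketch of that last point (in the standard Kripke-style notation, with w_0 the actual world and R an accessibility relation between worlds):

    \[ \Diamond P \iff \exists w\,(R(w_0, w) \wedge P(w)) \qquad\qquad \Box P \iff \forall w\,(R(w_0, w) \rightarrow P(w)) \]

    Read left to right, the modal operators are the formalism; read right to left, the quantification over worlds is the "possible worlds talk" that glosses them.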
  • Knowledge without JTB
    Well, that is excellent news. Tell me, do you believe JTB is the best description for knowledge in a non-general sense? I know you can justify it, but I'm curious as to whether you believe it. — Cheshire

    I am ambivalent about it. The advice that I gave you about seeing how it works in a philosophical context is the advice I would take myself. I haven't read enough, haven't burrowed deep enough into surrounding issues (partly because I didn't find them interesting) to make a competent judgement.

  • Knowledge without JTB
    I'm not so much interested as how its used 'in language', but rather how it's used in reality. — Cheshire

    Knowledge is a word, and language use is its reality. It's not like there is some celestial dictionary in which the "real" meanings of words are inscribed once and for all. Knowledge is what we say it is. So one way to approach the question is to do as linguists do when they compile a dictionary: see how the word is used "in the wild." Philosophers and other specialists extend the natural language in coining their own terms, which they can do in ways that narrow the colloquial meaning or diverge from it. However, it is considered a bad and misleading practice to diverge too far, in effect creating homonyms.

    While @javra attempted a conceptual justification of JTB knowledge, I'll stick to natural language for a moment. How much does JTB knowledge differ from common-sense knowledge? One thing you can say about the JTB definition is that, at first glance, it does not appear to be an operational definition (this parallels both your critique and @javra's notes above). If you wanted to sort various propositions into knowledge and not-knowledge, you could plausibly use the first two of the JTB criteria (setting aside for a moment legitimate concerns about those two), but you cannot apply the criterion of Truth over and above the criterion of Justification. For how do you decide whether a proposition is true, if not by coming up with a good justification for holding it true?

    But think about what happens when we evaluate beliefs that we held in the past, or beliefs that are held by other people. They are Beliefs, and they could be Justified as well as they possibly could be, given the agent's circumstances at the time. And yet, considering those beliefs from your present perspective, you could judge their Truth differently. And since it would not be in keeping with common sense to call false beliefs "knowledge," it seems that there is, after all, a place for the Truth criterion.

    And before you object, I mean to say especially philosophers, when I say people. My primary reason for making JTB a target is just because it's so well guarded from criticism and taught as if were a law of thought; when as Gettier showed in nearly satirical fashion the emperor has no cloths. — Cheshire

    Well, how familiar are you with contemporary epistemology? Even from a very superficial look, it is hard to see where you got this idea - see for instance SEP article The Analysis of Knowledge.
  • Knowledge without JTB
    I greatly appreciate the charitable read and I agree. So long as JTB isn't meant to actually describe the real world and is only maintained for the purpose of an exercise I suppose I no longer object. Thank you for the reference to Gettier; I'm aware my arguments or causal assertions must appear quite naive.

    Do you think you could produce an example of these two different types of knowledge? The general and the technical?

    I suppose I'm agreeing with Gettier in a sense, but avoiding his objection. He's saying hey your system doesn't work because it can produce mistaken knowledge. I'm saying some knowledge is mistaken. — Cheshire

    Yes, Gettier's counterexamples are where all three of the JTB criteria seem to be satisfied, and yet the result doesn't meet our intuitive, pre-analytical notion of knowledge. Your examples are where our intuitive notion of knowledge does not meet the JTB criteria. How damaging are such attacks? That totally depends on the context.

    Like I said, if the goal was to just give an accurate account of how the word "knowledge" is used in the language, you probably can't do better than a good dictionary, together with an acknowledgement that such informal usage is imprecise and will almost inevitably run into difficulties with edge cases like Gettier's.

    But philosophers define their terms in order to put them to use in their investigations, so I think the best way to approach the issue is not to latch onto one bit taken out of context, but see what work that JTB idea does in actual philosophical works. Maybe the JTB scheme is flawed because it doesn't capture something essential about knowledge, or maybe the examples that you give just aren't relevant to what philosophers are trying to do. I haven't done much reading in this area myself - I am just giving what I hope is sensible general advice on how to proceed.
  • Knowledge without JTB
    The theory of knowledge that serves as the foundation of philosophy is flawed. — Cheshire

    This reminds me of Russell's famous conundrum: "The present king of France is bald."

    Anyway, the most charitable reading of your post suggests that you are dissatisfied with the JTB theory of knowledge because it does not fully reflect the way the word "knowledge" is used in natural language (in English, at least). This would have been a valid objection if an English dictionary gave "justified true belief" as the only definition of the word "knowledge." Like many words, the meaning of "knowledge" as exemplified by actual use is heterogeneous and will not be captured by a single, compact definition. But JTB was not meant to serve as a general definition - it was to be a technical definition for use in analytic epistemology. So we can talk about whether it is a useful definition (and many have challenged it before you, most famously Gettier).
  • Deities and Objective Truths
    God states that killing is wrong, Gob states that it is not. — Joe Salem

    I think that this controversy should be resolved in the traditional way: single combat.
  • Law of Identity
    You are confusing terms of language, or written symbols, with entities that are designated by them. You have essentially reproduced the confused argument of the OP.
  • In defence of Aquinas’ Argument From Degree for the existence of God
    I was specifically addressing OP's understanding and presentation of the argument, and contrasting it with Aquinas's. I agree with you and others that Scholastic philosophy carries with it a load of metaphysical baggage that makes it a non-starter for many. And that, if we want to address that philosophy - whether to uphold it or to dispute it - we need to take it on its own terms (as best as we can make out those terms).
  • The argument of scientific progress
    The explanandum of a cosmological argument is not the sum of the physical features of the first cause. For that, cosmological arguments are usually content to defer to science. If anything, some of these arguments present an overly confident view of science. For example, proponents of the Kalam cosmological argument, such as W. L. Craig, insist that cosmologists have already settled the scientific question of whether the universe has a beginning in time (which he identifies with the "Big Bang"), whereas in reality the question remains open.

    Nothing that future science could add to its picture of the early universe could address the problem that cosmological arguments claim to raise and resolve. The only resolution that could satisfy proponents of a cosmological argument is one that proves the first cause to be necessary in the appropriate sense (depending on the type of the argument). But such a resolution could hardly be expected from science. Science tells us what is (the brute fact), not what must be. Only logic or metaphysics can claim to do the latter.

    At this point I recommend that you actually take a closer look at these arguments, because I get the impression that you have a very vague idea of what they are saying. The SEP has an extensive introduction: Cosmological Argument.
  • In defence of Aquinas’ Argument From Degree for the existence of God
    The "existence" is based on scholastic realism's belief in the extra-mental existence of universals. Once that is made explicit, the significance of the "proof" as a proof evaporates. It remains, however, as an artifact of a certain kind of thinking. The presentation of the "proof" as a proof without making its realist underpinnings clear (if known - a material qualification), is simply fraud.tim wood

    It's unfair to the original argument, because, presented the way @Samuel Lacrampe presented it, it fails miserably. His version of the argument simply says that there must be a maximum of actually realized goodness, and that is what we call God, which is wrong for several quite obvious reasons.
  • The argument of scientific progress
    Every time I hear the cosmological argument or, in recent years gaining popularity, the kalām argument, it's generally based around a fundamental flaw in which it assumes properties of the first cause in order to call it god or gods without there ever being any support through the premisses about the properties of that first cause. If that first cause is, let's say an "anti-universe", a negative mass and energy that reach a fulcrum point that balance over into a burst that we would then call big bang, then the first cause is just a pile of negative energy and mass, not a god. But those arguments are used as arguments for god, which is by any standards around, a pure fallacy. — Christoffer

    I don't know which examples of cosmological arguments you have in mind, but the ones I am familiar with mainly trade on the one feature of the first cause that cannot be denied (short of denying the existence of the first cause): its being the first, uncaused cause. This is what is supposed to make it metaphysically special, elevating it above any natural cause that we know or can hypothesize. Everything else that is said about that first cause more or less flows from that.
  • Law of Identity
    I should also clarify that modern formal logic is not quite the same thing as traditional Aristotelian logic - not just because it can have different laws (axioms), but because it is a different thing conceptually. Although it is possible to reconceptualize traditional logic in the modern paradigm, it wouldn't be what people used to think of as "Aristotelian logic."
  • Law of Identity
    Has Aristotelian logic been subjected to the same critiques as Euclid's geometry. In other words is there a non Aristotelian logic to be derived by a critical examination of it's axioms? — jlrinc

    Yes and no. "Yes" in the sense that, just as with geometry, we now know of more than one logic. "No" in the sense that we did not find other geometries by proving that some Euclidean axioms are wrong, and neither did we find other logics by proving that some axioms of the Aristotelian logic are wrong.

    Nothing is wrong with Euclidean geometry, and nothing is wrong with Aristotelian logic. It's just that at some point we decided that the concept of "logic" doesn't have to be limited to Aristotelian logic, and just as there is now a generalized concept of "geometry" that covers any number of geometries (including both familiar, practical geometries and completely abstract, made-up ones), there is a generalized concept of "logic" that covers any number of logics. We have also found that the same logic can be axiomatized differently, i.e. two different axiomatic systems can have all the same implications.
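
    To illustrate the last point: classical propositional logic, for example, is generated by modus ponens (plus substitution) from Łukasiewicz's three axiom schemas

    \begin{align*}
    & P \rightarrow (Q \rightarrow P) \\
    & (P \rightarrow (Q \rightarrow R)) \rightarrow ((P \rightarrow Q) \rightarrow (P \rightarrow R)) \\
    & (\neg P \rightarrow \neg Q) \rightarrow (Q \rightarrow P)
    \end{align*}

    but exactly the same theorems come out of other axiom sets (Meredith even found a single axiom that suffices), and out of natural-deduction systems, which have inference rules and no axioms at all.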
  • Law of Identity
    Regarding the Law of identity "a is a" is it wrong to argue that a is not a because one a is on the left side of the copula and the other a is on the right side, and having different properties they are clearly not identical — jlrinc

    Being on the RHS or the LHS is not a property of a, but a property of the sentence "a is a".
  • If the dinosaurs had not gone extinct
    This is absolutely necessary for the evolution of human like intelligence. — yatagarasu

    Again, you are only considering a narrow neighborhood of Homo sapiens when judging what is necessary to fulfill a broad requirement. This is a myopic view, as demonstrated by the great variety of adaptations and numerous convergences that can be seen in life on Earth. Besides, as I said, you are underestimating the potential for variety within a lineage; this is why I brought up birds, some of whom, by the way, move about over much wider ranges than humans ever did for most of their existence - without the benefit of sweat glands. Fish didn't have sweat glands either, and yet here we are.
  • If the dinosaurs had not gone extinct
    Maybe not if the tape was rewound, or another planet. — Marchesk

    Yeah, I have no definite opinion on this myself. Was there always a potential niche for a land-dwelling, mid-size generalist with highly developed cognitive faculties, just waiting to be filled? I am no Gould or Conway Morris, but I think that even coming from the best theoreticians, any conjecture on alternative evolutionary history would be very speculative.

    One interesting perspective on this question comes from the field of thermodynamics, of all things. Our universe (or at least the part of it that we inhabit), on average, has a rather low entropy. It is a far-from-equilibrium thermodynamic system. A system that is at thermodynamic equilibrium is static; nothing interesting happens there. A system that is only slightly disturbed will evolve towards equilibrium in a fairly orderly fashion - this is what we learn in high school and undergraduate physics. But in far-from-equilibrium systems more interesting things can happen, such as spontaneous formation of persistent energy-dissipating structures. (This goes against the common stereotype that equates entropy with disorder!)

    It is said that, looking from the global thermodynamic perspective, things like stars and planets and complex chemistry - and life - play the role of energy-dissipating, entropy-producing subsystems that spontaneously form as part of the dynamics of a far-from-equilibrium system that is our universe. As one might expect, some of these subsystems are more efficient at producing entropy than others. According to some approximate calculations, it turns out that the more complex things, such as living organisms, are more thermodynamically wasteful, producing more entropy per unit mass than, for example, star cores. And human brains are right there near the top of the hierarchy as super-efficient entropy generators. So perhaps there is something to the infamous "great chain of being," after all! Or, more seriously, perhaps it is the Second Law of Thermodynamics that pushes the universe at this point of its evolution to form more and more complex structures, up to and perhaps beyond intelligent life (all to hasten its eventual heat death...)

    I think the meaning of the dinosaurs going extinct where the big ones occupying all the niches that kept mammals to a small size. — Marchesk

    An alternative history could have dinosaurs diminished (to more than just the bird lineage), but not extinct, sharing space with mammals; think, for example, of how marsupials now coexist with placental mammals. Or perhaps dinosaurs could eventually produce a highly intelligent species. If they could produce something as un-dinosaur-like as birds (and some birds are pretty intelligent!), why not?
  • If the dinosaurs had not gone extinct
    I find it very unlikely something like Humans would have evolved without the extinctions of the dinosaurs. What type of reptilians do they even suggest would have led to humans evolving? Two of the biggest adaptations that led humans to evolving the way they did was the brain and the stamina humans have. (our ability to generate a thin layer of sweat) I don't see how these would develop in a world dominated by massive reptiles. — yatagarasu

    Also that our ancestors came out of the trees. I don't know that the Velociraptor line would have gone to the trees for long enough to develop the kind of hands we have. — Marchesk

    Two things. First, your mistake here is that you are considering a tiny counterfactual neighborhood of human evolutionary history. It is unreasonable to suggest that sweat glands or arboreal habitat are a sine qua non for evolving human-like intelligence in any species, just because these factors (allegedly) played an important role in the evolution of human intelligence.

    Second, dinosaurs are not extinct. Look out the window and you'll likely see some. When you think of dinosaurs, you might have a picture of comic-book giant reptiles in your imagination; if so, you are seriously underestimating the potential for variety in that lineage. Also, what @Bitter Crank said: if you didn't know better, would you expect fish to evolve into something like us?
  • How do we justify logic?
    Somehow, for the 3rd or 4th time, you have skipped over the core of the answer: Thinking about reality is correct when it preserves the truth of what we know of reality (is salve veritate) -- and preserves that truth, not accidentally, but in virtue of the processed followed (i.e. essentially). This is an operational, goal-oriented definition.

    It is amazing that, while noting that I said, "essentially, not accidentally," you seem unable to grasp what essential note is required. Just so you do not miss it again the essential note is truth preserving (salve veritate),

    I am not discussing any "them" such as rules, but the definition of correct thinking. — Dfpolis

    I am sorry, I haven't been closely following this entire exchange, but this just sounds like a wordy way of saying that the correct way of thinking is the way of thinking that is correct ("preserves the truth of what we know of reality," etc.) Not terribly illuminating.
  • Site Improvements
    Removing the lounge from the front page is fine, I think. This way the forum appears less chatty.

    Incidentally, this is just the sort of thread that belongs in the lounge (if it belongs anywhere), and not on the front page.
  • The Fine-Tuning Argument
    I've been reading some more on the topic. An extensive review of fine-tuning for life in fundamental physics and cosmology is given by the young cosmologist Luke Barnes: The fine-tuning of the universe for intelligent life (2012) (this rather technical article served as a basis of a popular book coauthored by Barnes). He frames his article as a polemic with Victor Stenger's popular book The Fallacy of Fine-tuning: Why the Universe Is Not Designed for Us (2011), which goes beyond the ostensible thesis of its title and argues that the purported fine-tuning of the universe is not all it's cracked up to be. Barnes is a theist (as far as I know), and Stenger was, of course, one of the crop of the New Atheists, so there may be an ideological aspect to this debate. But in his academic writing, at least, Barnes stops short of making an argument for God, and having read this article (and Stenger's response), I am more persuaded by his case - as far as it goes.

    One thing caught my attention though. While discussing the fine-tuning of stars - their stability and the nucleosynthesis that produces chemical elements necessary for life - Barnes writes:

    One of the most famous examples of fine-tuning is the Hoyle resonance in carbon. Hoyle reasoned that if such a resonance level did not exist at just the right place, then stars would be unable to produce the carbon required by life. — Barnes (2012), p. 547

    He then includes this curious footnote:

    Hoyle’s prediction is not an ‘anthropic prediction’. As Smolin (2007) explains, the prediction can be formulated as follows: a.) Carbon is necessary for life. b.) There are substantial amounts of carbon in our universe. c.) If stars are to produce substantial amounts of carbon, then there must be a specific resonance level in carbon. d.) Thus, the specific resonance level in carbon exists. The conclusion does not depend in any way on the first, ‘anthropic’ premise. The argument would work just as well if the element in question were the inert gas neon, for which the first premise is (probably) false. — Barnes (2012), p. 547

    Barnes credits this insight to Smolin's article in the anthology Universe or Multiverse? (2007). Oddly, he himself does not make the obvious wider connection: the same argument could be just as easily applied to every other case of cosmic fine-tuning. For example, it could similarly be argued that the lower bound on the permissible values of the cosmological constant is set by the requirement that the universe not re-collapse shortly after the Big Bang. We know that the universe did not collapse; the additional observation that, as a consequence, intelligent life had a chance to emerge at a much later time is unnecessary to reach the conclusion with regard to the cosmological constant. And yet, in this and other publications Barnes insists on referring to every case of fine-tuning (except for carbon resonance, for some reason) as fine-tuning for life.

    So why talk about life in connection with cosmic fine-tuning? Why would someone who objectively evaluates the implications of varying fundamental laws and constants of the universe - which is what Barnes ostensibly sets out to do - single out fine-tuning for life as a remarkable finding that cries out for an explanation? Well, one could argue that life is the one thing all these diverse cases of fine-tuning have in common. And the fact that the universe is fine-tuned for some feature (in the sense that this feature exhibits a sensitive dependence on fundamental parameters) to such a great extent is inherently interesting and demands an explanation.

    To this it could be objected that the target seems to be picked arbitrarily. Picking a different target, one could produce a different set of (possibly fine-tuned) constraints. Indeed, in the limit, when the target is this specific universe, the constraints are going to be as tight as they could possibly be: all parameters are fine-tuned, and all bounds are reduced to zero. Is this surprising? Does this extreme fine-tuning cry out for an explanation? Certainly not! Such "fine-tuning" is a matter of necessity. Moreover, even excluding edge cases, one could always pick as small a target in the parameter space as one wishes; it then becomes a game of Texas Sharpshooting.

    Another objection is that life, being a high-level complex structure, is going to be fine-tuned (again, in the sense of being sensitive to variations of low-level parameters) no matter what. In fact, any such complex structure is bound to be fine-tuned. (Physicist R. A. W. Bradford demonstrates this mathematically in The Inevitability of Fine Tuning in a Complex Universe (2011), using sequential local entropy reduction as a proxy for emerging complexity.) So if there is something generically surprising here, it is that the universe is fine-tuned to produce any complex structures.

    It seems then that, objectively speaking, whatever it is that the universe is trying to tell us, it is not that it is fine-tuned for life. What then would be a legitimate motivation for framing the problem in such a way? One such motivation can be found in the observer selection effect in the context of model selection in cosmology, where it is also known as the weak anthropic principle: out of all possible universes, we - observers - are bound to find ourselves in a universe that can support observers. Thus fine-tuning for life (or more specifically, for observers) is offered as a solution, rather than a problem. Of course, this requires a scenario with a multitude of actual universes - in other words, a multiverse. Barnes considers existing multiverse cosmological models in his paper and finds that, whatever their merits, they don't solve the fine-tuning problem; if anything, he contends, such models make the problem worse by being fine-tuned to an even greater extent.

    So we come back to the question: Why do people like Barnes consider fine-tuning for life to be a problem in need of a solution? I think that theologian Richard Swinburne, who was perhaps the first to formulate a modern FTA, gave a plausible answer: we find something to be surprising and in need of an explanation when we already have a candidate explanation in mind - an explanation that makes the thing less surprising and more likely. And God, according to Swinburne, presents such an explanation in the case of intelligent life. So there is our answer: the only plausible reason to present fine-tuning for life as a problem is to make an argument for the existence of God (or something like it), and anyone who does so should deliver on that promise or risk appearing disingenuous.
  • Site Improvements
    I can see why Paul would stop frequenting the site. The nasty nature of the destruction of the older site and change over to this one must have been hard on him but I am very glad that he appears to have received a duly deserved financial reward for all his work. — John Doe

    IIRC he hadn't been active on the old site in the last years before its demise, usually emerging into the public view to put out some fire.
  • Classical Music Pieces
    Speaking of Bohemians, I like this piece from Ma Vlast by Smetana, and particularly this old mono recording by Rafael Kubelik with the CSO.

    This recording was produced with a single microphone suspended high above the stage. When I listen to it, I imagine soaring like a bird above the endless green expanses, with waves of music rising warmly towards me, mixed with the moist aroma of the earth.
  • About skepticism
    It seems to me that there is nothing deeper to this argument than your rather idiosyncratic language. Usually, to say that I "reject" a proposition does not commit me to a stance of absolute certainty, it just means that I consider the proposition to be dubious at best. Which means that, in order to be performatively consistent with my own beliefs, I'll live my life as if I believed the proposition to be false, just as you said of yourself.
  • The Fine-Tuning Argument
    Yes, after I posted this I thought about it a bit more and realized that this wasn't actually making sense. I think I understand where Collins is coming from. Fine-tuning comes up in the context of model selection in particle physics and cosmology, but the logic there is somewhat different from that in the FTA. What happens, roughly, is that we start with a general mathematical form of the action or dynamical equations, based on general symmetry considerations or some such, in which the constants and boundary values are free parameters. This is where the problem of distributions over potentially infinite domains comes up as well. Various best-fit estimates involve marginalizing over parameters, which often results in integrals or sums over infinite domains, of the generic form

    \[ P(D \mid M) = \int_{-\infty}^{\infty} P(D \mid \theta, M)\, p(\theta \mid M)\, d\theta. \]

    The normalizability challenge can then be answered with considerations such as that the applicability of the model is limited to a finite range of parameter values (e.g. the Planck scale), as well as considerations of "naturalness" (which present another can of worms that we need not get into).

    The bottom line is that in physics we are not agnostic about at least some general physical principles, and more often we are working with quite specific models with known properties and limitations, which can inform the choice of probability distributions. Whereas in the most general case of the FTA we are agnostic about all that. Any form of a physical law, any value of a fundamental constant represents an epistemically possible world, which we cannot discount from consideration.
  • The Fine-Tuning Argument
    I've been reading Robin Collins's extended treatment of the FTA in his 2009 The teleological argument: An exploration of the fine-tuning of the universe. For fairness's sake, I would like to revisit the normalizability and the coarse-tuning objections to the probabilistic FTA, which I have talked about here, and which Collins addresses head-on.

    To recap, the normalizability objection draws attention to the fact that a uniform probability distribution, which the Principle of Indifference compels us to adopt, is undefined on an infinite domain; thus, for those fine-tuned parameters for which the range of epistemically possible values is unbounded, we cannot even have prior probabilities (this objection has been made by a number of critics of the FTA). The coarse-tuning objection points out that, even if the normalizability problem were somehow dealt with, we would end up in a strange situation where the width of the life-permitting region of a "fine-tuned" parameter doesn't actually matter, as long as it is finite; it could be 10^10^10 times as wide, and this would not make any difference to the argument. (@andrewk has pointed out that the metaphor of "fine-tuning," which comes from analogue radio and instrument dials, breaks down here.)
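
    In sketch form: a uniform density p(x) = c on the whole real line cannot be normalized,

    \[ \int_{-\infty}^{\infty} c \, dx = \begin{cases} \infty & \text{if } c > 0 \\ 0 & \text{if } c = 0, \end{cases} \]

    so no choice of c yields a total probability of 1. And on a finite range of width W, a life-permitting window of width w gets probability w/W under a uniform prior; the argument runs the same whether w/W is tiny because the window is narrow ("fine-tuning") or merely because W is stupendously large ("coarse-tuning").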

    Collins makes a number of arguments in favor of considering finite ranges in parameter space. I have already mentioned his concept of an "epistemically illuminated" (EI) region, which has an affinity with intuitive arguments made, for example, by John Leslie in his 1989 book "Universes"; at one point Leslie compares fine-tuning with a bullet hitting a lone fly on the wall: even if other, remote areas of the wall were thick with flies, he says, this would not make the pin-point precision of the hit any less surprising. I am not convinced by such analogies; I suspect that they trade on intuitions that work in familiar human surroundings but break down in the vastness and strangeness of modern physics, especially when it comes to highly counterintuitive infinities. (For example, when we imagine bullets randomly hitting broad targets, we don't really imagine infinite targets with uniformly distributed probability; rather, we probably have in mind something like a very wide normal distribution, which is nearly flat within a finite region in front of us but drops off to virtually zero far away from us.) In any case, if the analogy is justified, there ought to be a rigorous statement of the argument that vindicates it, and I still haven't seen any, which makes me distrustful.

    Perhaps the most interesting argument that Collins makes is that we are not justified in considering unbounded ranges for physical constants, because all our scientific theories have a limited domain of applicability (for fundamental physics it is mostly the energy scale; our current theories are low-energy effective field theories). If we deviate too far from the actual values of physical constants, the very models where these constants appear break down; in those remote parameter regions we would need some different physics with different constants. This is a good point that I hadn't considered in relation to the FTA, nor have I seen it addressed by FTA critics. However, my objection to this argument, as well as to the less formal arguments for EI regions, is that limiting the range of epistemically possible values cannot be justified from within the Bayesian epistemic model used in the probabilistic FTA. In particular, this move doesn't sit well with the already highly problematic indifference priors, which are inversely related to the size of the range. It follows that the specific, finite probabilities with which we operate depend on these squishy, ill-defined choices of EI regions. Moreover, the limitations of EFTs are only contingent, and only apply to their boundary conditions and perhaps constants, but not to the mathematical form of their laws.

    Although he puts the most effort into defending the idea that the size of epistemic parameter ranges is necessarily limited, Collins also considers the possibility of using non-standard probability theories, perhaps ones that drop the requirement of so-called countable additivity and thus do not suffer from the paradoxes of uniform distributions over infinite domains*. As I said earlier, I am generally sympathetic to this idea: I am not a dogmatic Bayesian; I recognize that Bayesian epistemology is not a perfect fit to the way we actually reason, nor is it a given that Bayesianism is the perfect reasoning model to which we must aspire. An epistemic model based on something other than classical Kolmogorov probability? Sure, why not? However, such a model first has to be developed and justified. And the justification had better be based on a large number and variety of uncontroversial examples, which is going to be highly problematic, because we simply lack intuitions for dealing with infinities and infinitesimals.

    And that is a general problem for arguments of this type, which also include cosmological arguments: at some point they have to appeal to our intuitions, be they epistemic or metaphysical. But the contexts in which these arguments are invoked are so far removed from our experiences that intuitions become unreliable there.

    * I have also thought of another solution that he could propose to address the challenge of infinite domains, along the lines of his epistemically illuminated regions: epistemically illuminated probability distributions, which, he could argue, would be non-uniform (most likely, Gaussian).
  • The Fine-Tuning Argument
    This is exactly the point I have been trying to make. — Rank Amateur

    No, not even close. The only point that you've managed to make in this discussion, and which you keep repeating over and over, as if it wasn't stupidly obvious, is that you know that you are right, and those who disagree do so only because they are prejudiced. We get it. You can stop repeating it and leave, since it is obvious that you have nothing else to say. Take Wayfarer with you, too.
  • The Fine-Tuning Argument
    Personally I am conflicted about the multiverse notion. I feel that it's probably true, not because of the FTA but rather because I just wonder - if there's one spacetime, why shouldn't there be more? — andrewk

    It should be kept in mind that historically, "multiverses" in cosmology were not proposed as a solution to the fine-tuning problem. For example, the kind of bubble multiverse that results from chaotic or eternal inflation is just a generic prediction of inflationary cosmology, which has its own raison d'être, not having directly to do with fine-tuning.

    Some cosmologists argue that, as an added bonus, such multiverses solve the fine-tuning problem, but there is no settled opinion on this. There is, for example, an ongoing argument over whether such a solution commits the "inverse gambler's fallacy."
  • The Fine-Tuning Argument
    I would answer 1 in 6, but that just means that that's what I'd use in calculations about what to bet, in most situations. It's not a truth claim. It's not a fact. It's an assumption I make to help in decision-making. — andrewk

    And it's the right way to think of probabilities, in my opinion - at least in this context. After all, we are interested in beliefs (such as God beliefs), and how new evidence (such as the evidence of fine-tuning) affects them. After you learn this new evidence, and taking into account what you already know, what would be your most rational bet? That is just the kind of probabilistic reasoning that Collins and some others attempt to pursue.
  • The Fine-Tuning Argument
    there is no metaphysical reason why we should consider the laws to be fixed — SophistiCat
    Isn’t there an empirical reason, namely, that they always are? Could it ever be F=MA (most of the time)? — Wayfarer

    Teleological arguments deal with counterfactual possibilities. Empirically, fundamental constants are just what they are (most of the time) - that is why we call them constants. The FTA considers the possibility of them being something other than what is empirically observed. I am just saying (and I am not being particularly original here) that we could, with the same justification, vary fundamental laws as well.
  • The Fine-Tuning Argument
    From that abominable start, does it get any better? Are there any pages that are less naive and worth reading? — andrewk

    It does, I think. It's been a while since I read his exposition of the argument; the latest, most complete version, according to him (which I haven't read, but intend to), is his entry in The Blackwell Companion to Natural Theology: The teleological argument: An exploration of the fine-tuning of the universe. I agree, such half-arsed analogies as that one do more harm than good to the argument, but Collins does make a brave attempt to justify his conclusion with sound reasoning, and I like that about him. Not being satisfied with faith or unexamined intuitions and prejudices, he plays the philosopher's game, delving into the hows and whys, laying bare his reasoning and making himself vulnerable to criticism. Which is a whole lot better than blandly stating, like @Rank Amateur does, that the design inference is "obvious," and if you don't agree, it's your own damn fault, you closed-minded atheist.

    Of course, I agree with @Moliere that, at the end of the day, it still comes down to subjective judgments, whether they are based on epistemic or esthetic or ethical criteria (they are all closely related, anyway). But it seems to me that a teleological argument, especially one as technically specific as the FTA, is amenable to analysis, so we shouldn't be satisfied with gut feelings.

    Take @Rank Amateur's favorite card deck analogy, for example. It is just the kind of toy example where Bayesian analyses (which Collins favors) shine. And it is instructive to consider. I won't bore you with formulas (which are elementary, anyway), but the idea here is that the canonical ordering of cards in a deck is far more likely to be the result of a deliberate action (whether because it was just removed from its factory packaging or because someone deliberately arranged it in order) than of a random shuffling. But we know this because we know something about decks of cards, how they are labeled and handled, and about people and their habits and preferences. We have some rational expectations, or priors, in Bayesian lingo, which are based on our experiences of the world.
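
    For the curious, here is that elementary calculation in code form - a minimal sketch, with the prior and the "design" likelihood simply made up for the illustration:

        from math import factorial

        # Hypotheses: the deck was deliberately arranged vs. randomly shuffled.
        # Illustrative prior (made up): a deck we pick up is deliberately
        # ordered 1% of the time.
        prior_design = 0.01
        prior_chance = 1 - prior_design

        # Likelihood of finding the deck in the canonical factory order:
        # near-certain under "design" (another illustrative assumption),
        # 1 in 52! under a fair shuffle.
        likelihood_design = 1.0
        likelihood_chance = 1 / factorial(52)

        # Bayes' theorem: posterior probability of deliberate arrangement.
        posterior_design = (likelihood_design * prior_design) / (
            likelihood_design * prior_design + likelihood_chance * prior_chance
        )
        print(posterior_design)  # ~1.0, but only given those priors

    The overwhelming posterior is driven entirely by inputs that we can supply only because we know how decks of cards come into the world - which is exactly what we lack in the case of fundamental constants.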

    The case of fundamental laws and constants is disanalogous to this example in a number of ways. One, already mentioned, is more technical, having to do with normalizability. But more importantly, we can have no rational expectations with regard to fundamental laws and constants - that is just what makes them fundamental. (By contrast, we can potentially make some predictions, or even precise calculations, for the spring constant in Hooke's law.) There is nothing in our experience that could give us any clue as to why they have the structure and the values that they have.

    Of course, we could still hypothesize that some supernatural entity made the laws and constants just so. And of all the ways a supernatural entity could have done it, this one made the laws and constants compatible with the existence of life in the universe - lucky us! Perhaps then of all conceivable supernatural entities - what are the chances?! - the one that is responsible for the fundamental laws and constants of our universe is a kind of personal, anthropomorphic being that had both the power and intention to make the world suitable for humans. In any case, it seems that, as @jorndoe points out, this supernatural being is exquisitely fine-tuned!
  • The Fine-Tuning Argument
    And therein lies a problem: there is no uniform probability distribution on an infinitely wide interval. But if not uniform, then what? If a uniform distribution of epistemic probability could be justified by the principle of indifference (sort of - there's a lot of controversy over the principle of indifference, and over epistemic probability for that matter), there surely cannot be any justification for a non-uniform distribution.

    Robin Collins attempts to address the problem with so-called "epistemically illuminated regions," if I remember correctly. That is, he suggests that instead of taking the largest conceivable range as the total probability space, which would be infinite for many of the fine-tuned parameters, we should only look at finite ranges that we have actually examined for life fitness ("epistemically illuminated"). The problem here is that for at least some of these parameters we have no trouble examining the entire infinite range. We could (and probably should) vary all of these parameters simultaneously and thus end up not with a single range but with a multidimensional parameter space. However, even though it might be analytically and technically difficult, nothing in principle prevents us from theoretically examining this entire infinite space for life fitness. If we do, and it turns out that the life-supporting regions in this space have a combined measure of zero (as they would if, say, they formed a lower-dimensional surface in the parameter space), that would undermine Collins's probabilistic analysis. It seems unsatisfactory to bet your entire analysis on such a contingency.

    If we then allow the laws themselves to vary (and there is no metaphysical reason why we should consider the laws to be fixed while relaxing constants and boundary conditions), we run into an even more severe problem: the "collection" of all possible laws is too big to even form a set (since it will be as "big" as the set of all sets - a paradoxical concept, as Russell famously demonstrated), and so no measure theory - and thus no probability - could possibly be applied here.

    But why stop there? Who said that there have to be laws? They are not a metaphysical necessity. So, as long as we allow constants, boundary conditions, and then laws to vary, we should also consider possible universes in which laws do not operate, or operate inconsistently. That broadens the space (which is not a space!) inconceivably.

    Is this a fatal blow to the FTA? Well, it is to Collins's Bayesian argument, but frankly, I think there are better reasons than that to reject Collins's argument.
  • The Fine-Tuning Argument
    Yes, I am aware of Robin Collins's argument. Maybe we'll get to him, but I was rather hoping to engage proponents of FTA directly. I could talk about Collins's argument (I'll need a refresher), but I wouldn't want to just talk to myself. I don't think his argument works, but he is one of the few to take up the defense of the FTA seriously, and if he is wrong, his failure is instructive.

    jeeprs... go thumb your dog-eared collection of quotes and moan about the evils of atheistic materialism somewhere else. You don't seem to understand what we are talking about.
  • The Fine-Tuning Argument
    I am finding it hard to understand why you don’t see the issue here. The customary post-Enlightenment attitude to this matter has always been that as the Universe was not ‘God’s handiwork’ [i.e. the consequence of intentional creation], then the only alternative was that life arose by chance. — Wayfarer

    You are equivocating on two meanings of "chance"; depending on the meaning, the "chance/design" dichotomy is either obviously true or obviously false, but in no case is it profound or relevant to our topic.

    If "chance" means unintentional, accidental, then it translates into "The universe and everything in it is either the consequence of intentional creation or it is not" - a truism. If "chance" means random, lacking any pattern, then it is obviously false, since most people before, during and after the Enlightenment had at least some idea of the universe as a fairly orderly place. As such, it is to be expected that the universe was always constituted in such a way that life would be possible in it at least at some time and in some places. This is in no way a novel, unexpected finding. Neither is the idea that the universe could conceivably have been constituted otherwise.

    None of this has much to do with the notion of fine-tuning, which specifically has to do with sensitive dependence of life on certain "parameters" of physical models.