Comments

  • Why should we talk about the history of ideas?

    -- but for some reason even you hedge here and don't advocate it --

    My reticence isn't so much due to the fact that I find speculative history to be wrong-headed or hopeless, but rather that it's almost impossible to advance such a theory in a convincing and well-supported manner while also keeping the argument short. It's also the type of thing where coming up with criticisms is much easier than coming up with convincing positive arguments, and I wouldn't want to get sidetracked defending the particular merits of any one theory when the question is more about the merits of speculative history and how it interacts with philosophy in general.

    (2) Some people hear "the Enlightenment" and think, "Greatest wrong turn in history, still sorting out the mess it made," and some people think "Finally! That's when we got on the right path, the only trouble is staying on it."

    The "we went off on the wrong track here," type arguments aren't necessarily without their merits, but these tend to be arguments about how there is some "truth" or "ideal" out there and how we can discover/actualize it, rather than being an attempt to describe progress as a whole.


    @Isaac's suggestion is, I believe, that there is no 'objective' context to recover to understand the Enlightenment; however you describe that context, before and after, is going to be shaped and colored by the story you're telling about it.

    This is certainly true to some degree. There is no one objective frame in the first place because different people have different opinions within their own eras, and oftentimes these are diametrically opposed. The same people also view the same events differently at different times. Nor are trends ever absolute; every period of "romanticism" has its rationalists, and every period of "rationalism" has its romantics.

    That said, I don't think this leaves us unable to analyze intellectual history at all. We can observe that Renaissance thinkers "rediscovered" classical culture in an important way. We can spot major swings in US culture when comparing the 1950s and 1960s and be quite confident in describing real differences in trends. The problem is often one of degree: we can overemphasize some trends, and so on. There are different reasons for turning to history, so how deeply we explore nuance is something that gets determined on a pragmatic basis.

    Moreover, the problem only seems so intractable if we insist on seeing history in terms of agents and intent, a stage where individuals are running the show. "How can you privilege this or that voice? How can you be sure this description of events isn't self-serving?" These are certainly valid questions, but questions about individuals' original intent are only paramount if we think man is firmly in the driver's seat of history, that there is one proper unit of analysis in human affairs: the individual. I think this is a major mistake.*

    There is plenty of work in the social sciences to suggest that institutions have goals that aren't continuous with those of the individuals that compose them, that organizations exhibit emergent forms of intelligence in problem solving, and that "group minds" are a useful way of understanding some emergent behaviors. From ant colonies, to lymphocytes, to neurons, we see patterns in how complex systems work, and these seem to apply to human social organizations. So, just as no one ant knows what the hive is doing, the same can be true for us vis-a-vis history.

    This is what Hegel gets most unambiguously correct. He is at once an early progenitor of complexity studies and still on the bleeding edge in his willingness to follow it to its logical conclusions. What "a man" thinks doesn't drive history in the long run, but rather what "mankind" thinks. We are but accidents of a social "substance," i.e., "Spirit." His teleological claims about where Spirit is headed are less supportable.

    Seen from above, the various threads of philosophy over the years look akin to the "terraced deep scanning" performed by lymphocytes as they dynamically explore a massive sample space in an attempt to solve problems. Some areas get explored more thoroughly than others, some lines of inquiry receive more resources at one time, and multiple lines work in parallel.

    IMO, the problem of sorting out bias is not the central problem when considering history, although it is a real one. The larger problem is that we're in the role of a neuron having to explain what the entire brain is doing, or a fish being asked to explain the behavior of its school. This is why we have needed to build such a massive apparatus of data collection and analysis, and so many separate fields of inquiry in the social sciences. Our narratives are akin to neuronal action potentials or honeybee dances; they're the way individual components of the system talk to one another.

    However, this doesn't doom civilization's attempts at self-knowledge any more than a human being's mind being the work of small components precludes us from having a sort of emergent, imperfect self-knowledge. Sure, a sole neuron is never going to understand the brain alone, but then the neuron doesn't work alone either. History writ large is communicated, and its information processed, by systems of people, not individual people in isolation. I think the correct analogy is a perceptual system, a mind mulling something over, not a map that gets pieced together by individuals.

    But if this is true, then science and philosophy are also not a mapping process, but more akin to group cognition. In the big picture they are just another link in a great chain of systems whereby being encodes being, representing itself to itself. This chain continues in ever higher levels of emergence, from the most primitive genomes, to nervous systems, to language, to cultures, and upwards, with each system undergoing its own form of natural selection and evolution in a sort of fractal recurrence.

    To what end? We can consider that the earliest life didn't "understand" the universe so that it could survive, but rather survived because it somehow encoded relevant information about the environment into its structure. In life, "knowledge" pre-dates goals (as only makes sense; you have to know something to have goals). But goals aren't irrelevant to survival in intelligent life; they just take time to emerge. We as individuals have goals, and organizations have goals, but my guess is that we have yet to reach a point where the highest-order organizations we are a part of can have goals.

    And perhaps goals undergo their own sort of selection process?



    But the calculus changes here if you recognize that all you have the option of doing is comparing stories (and what they present as evidence for themselves) to each other; it's obvious with history, but true everywhere, that you don't have the option of judging a story by comparing it to what it's about, 'reality' or 'what really happened'. Comparing stories to each other might give some hope of 'triangulating' the truth, until you remember that this triangulating process is also going to be shaped and colored by narrative commitments, just like the material we're trying to judge.

    Exactly. Sort of like how the visual cortex doesn't work with any of the original light waves that are the "subject" of sight, and the components of the auditory cortex don't have access to, or communicate with, sound waves. Narratives are the action potentials of history.


    *(Interestingly it is also a mistake that human beings make almost universally vis-a-vis nature, both:

    A. Early in the development of civilizations - i.e., animism is ubiquitous, seeming to occur across cultures until a civilization develops some form of philosophy that starts to look for abstract principles that determine how nature works. E.g., "the river floods or doesn't flood because it wants to, the rock falls because it wants to," etc.

    B. Early in human development. Research shows that young children are far more likely to describe events (including those involving only inanimate objects) in terms of agency than adults.

    This is an interesting parallel. Do people in a more advanced society need to retread the mental routes their ancestors have taken to reach the same developmental stages? Or maybe it is a coincidental similarity?)
  • Why should we talk about the history of ideas?
    Having gotten distracted by the minutiae of justification, I would just offer up that how someone sees the relevance of history in philosophy likely depends on their philosophy of history.

    Should we consider philosophical questions largely in isolation or should we be thinking in terms of a larger picture, e.g. "where is human thought coming from and where is it going?" Are there patterns in philosophical thought such that we can see where we might be headed from where we have been?

    Is the "Great Conversation," the canon of philosophy, simply a collection of influential works that happen to cite one another, or is it an example of an unfolding dialectical process? Do we continue to study the works of Plato, Kant, etc. because there is something truly great about the primary sources? Or do we keep going back to reshape their work for our times, in a way "reshaping," the history itself? If the latter, is there any discernible pattern to how this is done?

    Is "philosophy... [its] own time apprehended in thoughts?"

    For skeptics of the speculative attitude, I'll just throw this quote out there on speculative history more generally and add a bit more below:


    Speculative philosophy of history, then, stems from the impulse to make sense of history, to find meaning in it, or at least some intelligible pattern. And it should not surprise us that at the heart of this impulse is a desire to predict the future (and in many cases to shape it). By any standards, then, this branch of philosophy of history is audacious, and there is a sense in which the term ‘speculative’ is not only appropriate but also carries derogatory implications for those historians and others who insist on a solely empirical approach to the past, i.e., on ‘sticking to the facts’...

    To others, however, it is a worthwhile undertaking because it is so natural to a reflective being. Just as at times one gets the urge to ‘make sense’ of one’s own life, either out of simple curiosity about its ‘meaning’, or through suffering a particularly turbulent phase, or because weighty decisions about one’s future are looming, so some are drawn to reflect, not on themselves, but on the history of their species – mankind.


    Whether speculative philosophy of history is worthwhile or, instead, a fundamentally flawed exercise, it is surely an understandable venture. Firstly, attempts to discover a theory or ‘philosophy’ of history are intrinsically interesting because they try to make sense of the overall flow of history – even in some cases to give it meaning. And there is a sense in which to do particularly the latter is to offer answers to the question, ‘what is the point of life?’ (not yours or mine, but human life in general.)

    The importance of such a question is either self-explanatory or nil, depending on an individual’s assumptions. Some see it as the ultimate question to be answered, whereas others see it as symptomatic of an arrogant anthropomorphism which demands that ‘life, the universe, and all that’ be reduced to the petty model of merely human dimensions, where intention and reason are seen as the governing principles. But that individuals differ in this way is exactly the point, in the sense that speculative philosophy of history raises the issue directly into the light of argument, allowing us to examine our initial assumptions regarding the value or futility of such ‘ultimate’ questions.

    For example, one might ask sceptics whether they at least accept the notion that, on the whole, ‘history has delivered’ progress in the arts, sciences, economics, government, and quality of life. If the answer is "yes," how do they account for it? Is it chance (thus offering no guarantees for the future)? Or if there is a reason for it, what is this ‘reason’ which is ‘going on in history’?

    Similarly, if the sceptics answer ‘no’, then why not? Again, is the answer chance? Or is there some ‘mechanism’ underlying the course of history which prevents overall continuous progress? If so, what is it, and can it be defeated?


    From M.C. Lemon's Philosophy of History

    Additionally, if we believe science tells us true things about the world then presumably we believe that at least one human project does undergo progress. To be sure, we don't think it always gets things right, but we also tend to think that a biology textbook from 2020 should be much closer to the truth than one from 1960 and that one from 1960 gets more right than one from 1860.

    If we can make progress here, such that human beliefs hew closer to the truth over time, why not in other areas? Why not in some or all areas of philosophy?

    Humans are goal-driven and can accomplish their goals. Indeed, a big trend now is to ground the emergence of meaning in an "essentially meaningless" physical reality in the goal-oriented nature of life itself. Groups of humans also accomplish intergenerational goals, e.g., the great cathedrals rose over several generations. Whole nations have at times been successfully mobilized to accomplish some goals, e.g., the standardization of Italian and German out of several dialects in the 19th century, or the revival of Hebrew as a spoken language after 2,000+ years. This being the case, what stops us from recognizing a broader sort of global "progress," or a narrower sort of progress in some areas of philosophy?

    If the reason progress is impossible is that "progress" can't be statically defined long term, is there any pattern to how we redefine "progress" over time?

    I don't want to derail the thread; if we want to have a thread on the philosophy of history we can, and these questions are more rhetorical. Obviously the relevance of history changes depending on how you answer them. There is an argument to be made that focusing on arguments in isolation is akin to putting all your effort into finding the best way to walk and making the most accurate maps, while completely ignoring the question of where you are walking from or to, and why.

    Just as an example, the co-option of Peirce by the logical positivists is relevant re: questions on ontology writ large if we see logical positivism largely as a reaction against the influence of Hegel. The move wasn't entirely reactionary though; it didn't go back to mechanism, but instead moved to an empiricism so radical that I honestly find it closer to idealism than today's popular mechanistic accounts of physicalism. In this, and many other ways, it is more a synthesis of Hegel with other contradictory strands in philosophy. Hegel was sublated and subsumed within the new "logical positivism," and this helps us see why logical positivism was born containing the seeds of its own destruction. A set of implicit contradictions was there from the beginning, just waiting for folks like Quine to make them manifest.

    If ideas and theories don't simply "evolve" via natural selection towards truth (cf. the Fitness vs. Truth theorem), but rather advance through a dialectical model, then history is a good deal more "active" in how thought develops in all contexts. Saying these turns are "necessary" might be a bridge too far, but they also aren't as contingent as in a "natural selection-like" theory of how knowledge progresses. Something like an attractor in systems theory might be a better way to conceive of the dialectical advance, maybe blended with the idea of adjoint modalities and the way a proof about one object serves as a proof about others (a key intuition in the development of category theory, from what I understand).

    Obviously the above example would need a lot more fleshing out than I want to put into it to be compelling, and I certainly don't want to presuppose Wayfarer was thinking anything like this. Not all speculative history need be quite so Hegelian.
  • Ukraine Crisis


    I'm sure on paper everything is looking just dandy for Ukraine.

    Not really. They've had millions of people leave, which is a drag on the economy; continued missile attacks act as a check on investment; and they are facing multiple huge ecological disasters from damage to critical infrastructure during the war.

    Their air defenses seem to be holding up better than the prior leaks suggested, particularly re: longer range missiles, but their SHORAD is obviously quite limited. Their initial offensive push stalled, and they had to switch tactics, because they were unable to keep the skies clear ahead of advances.

    They also still have less artillery and fewer fire missions, even if they do seem to be winning the war of attrition re: artillery due to better counter-battery radars, PGMs, and longer-range artillery. The artillery numbers for both sides are likely lower than stated because rear-area losses are less likely to be photographed and they've likely worn out a great many pieces. I've always thought one of the best things, if not the best thing, the West could do is simply stand up large-scale shell and gun production until Ukraine has a 3-5:1 advantage. That is what is needed to advance against prepared defenses without air support.

    The Ukrainian air force is also extremely depleted and there is simply no way for them to maintain the sortie rate of their remaining fighters. If they aren't shot down, they are going to crash from overuse.

    The F-16, which appears to be on the way, fixes some of these issues because there is a huge supply of them, but they will face attrition and will be very constrained in what they can do because Russian air defenses are still plenty strong enough to play defense. Certainly, they will be very helpful, but it's also not like they can operate in the ground-attack role in most instances, because SEAD can only be achieved for short windows by firing off HARMs at any radar signature.

    Modern attack helicopters might be the next thing they receive, but these won't be fully effective until SHORAD assets get attrited down, something that isn't happening because there isn't anything in the sky to waste MANPADS on.

    The only good news is that they are getting closer to parity on vehicles, which means a 50/50 loss ratio during offensive operations doesn't necessarily doom an attrition based offense. That and the absolute shit show that is Russian politics.

    They obviously lack an effective command structure and are still reliant on sprawling geographic commands and brigade-level organization. This is partly because of their absolutely insane logistical challenges, the result of using a ton of different equipment from different countries. Thus, you have a brigade that uses just Czech equipment, etc., to make that easier. But by most accounts they have a very hard time doing complex operations, especially offensive ops, because artillery brigades aren't working organically with other elements.

    IDK, the middle of a war is not the time to do a reorg, but this obviously shows the limits of brigade-centric doctrine (and even more so the limits of Russian battalion tactical groups). It's sort of a lesson on the advisability of moving away from division-centric thinking, which I think comes from taking the wrong lessons from the GWOT, which was not a peer conflict. Ukraine really needs a way to pull off coordinated corps-level ops and there is a very long road to that.

    The inability of either side to conduct successful large ops is a weird thing. Communications equipment has come a very long way since 1950 and it should make this easier.

    In China's initial, spectacularly successful offensive against US-led UN forces, they were coordinating operations between two field armies of six and three corps respectively, each corps comprising three divisions and ancillary attached supports. They conducted complex and highly effective maneuver operations against a UN force with a huge firepower advantage using, incredibly, largely just small arms and mortars and almost entirely man-hauled supply lines.

    Obviously MacArthur being an absolutely atrocious commander was a determining factor in how badly the UN forces fared, but it still required excellent coordination by the Chinese, who could not use aerial reconnaissance either. I mean, the Chinese offensive was, IMO, the worst out-and-out rout the US military has experienced in its entire history, by a solid margin.

    Maybe it is that surveillance and recon have advanced more relative to comms and this makes large ops harder? And obviously China had been at war non-stop for almost half a century then, so they also had an extremely veteran force.
  • Ukraine Crisis
    1689353356793739.png

    Not the best translation; "working" in particular should generally be "operating" instead.

    Verified by major outlets later: https://www.npr.org/2023/07/14/1187644890/russian-general-fired-for-being-critical

    This is the same event; aliases are sort of a common thing (Strelkov is Girkin), people go by their patronymics sometimes, etc.

    One interesting thing is how the tone of pro-war Russian bloggers has shifted re: the "musical chairs" shuffling of Russian commanders on the ground. In general, early in the war it seemed like efforts to pin blame on relieved commanders were quite successful, with charges being brought against a few for having maintained Afghan National Army style "ghost soldiers" and "ghost vehicles." These are essentially people or things that exist only on paper, so that funds and materials for them can be embezzled, the materials sold off, etc. Pro-war folks seemed pretty accepting of the shake-ups.

    I think that makes sense; bad performance suggests bad leadership. But now it seems that these are increasingly being taken in a more negative light.

    Also not surprising considering what happened with the last guy who began complaining about a lack of shells :rofl: . Although a behind-closed-doors rant and a stream of social media videos pouring out an avalanche of invective and gay slurs against the head of the MoD and the chief flag officer are, IMHO, kind of different beasts lol.

    BTW, the memes that came out of the Prigozhin meltdowns are pretty hilarious. As are the pictures of his "assets" that got turned over to him, which show a bunch of bricks of white powder next to all the firearms (including a gold-plated handgun), stacks of roubles and USD, and gold bars. If you put it in a movie, people would say it is too on the nose.

    Here are the Warhammer and Lord of the Rings versions.

    warboss-prigozhin-v0-XTAFb9-AVvj-HA5t-T9j-JU6-Vb7-E8d-O-7g-I-coa-Iq7-Hn5-Y.jpg
    66c.png
  • Ukraine Crisis


    The decline isn't just in tanks though. The sortie and fire mission rates have plunged too, which is fairly easy to verify from satellites, allowing that when either side accidentally shells its own positions it's not always easy to sort out the attribution.


    1688670116724628.png
    1688670350589427m.jpg

    Having taken what it considered unacceptably high losses, the UA appears to have given up on a maneuver-warfare breakthrough and is focusing on attriting artillery, likely in the hope that a low-morale force will rout once it no longer enjoys superior firepower. This has at least been more successful than the maneuver-warfare ops.

    On another note, the sortie rate on the few surviving Ukrainian MiGs is absolutely ridiculously high. I saw a picture and they look like Frankenstein monsters. I suppose this is what happens when you have plenty of mechanics but no planes.
  • Why should we talk about the history of ideas?


    I don't buy into strictly verificationist epistemology because it's self-defeating and no one actually goes by those standards for most beliefs.

    For example, presumably we can agree that "in 1986 the New York Mets won the World Series, and have sadly not won it since," even if we don't follow baseball. This isn't a testable claim; we can't go back to 1986, and, while Darryl Strawberry and Keith Hernandez were great, I doubt they still have championship baseball skills we can verify.

    Now people did observe the games, but presumably plenty of us didn't and we still believe we can verify the claim from records. Of course, we have videos, which people used to generally accept as a sort of gold standard of evidence (seeing is believing), but it's increasingly easy to fake that sort of thing convincingly (and it could be done before).

    But obviously we don't doubt many facts based on records just because the claims can't be repeatedly tested. Which is just as well, because how do we know the results of most experiments? Records. Necessarily, most people don't have the time or resources to even begin replicating any substantial share of all experiments across the sciences.

    I feel quite confident in some fairly distant historical claims. That Saint Augustine "had the intention to make Neoplatonic thought coherent with post-Nicene Christianity" seems plenty certain. He left more work behind than anyone else from antiquity, thousands of pages that pass textual analysis as to their authorship. He seems to be making earnest attempts at what the claim says he is doing (he certainly convinced a lot of people); we have surviving transcripts of his comments at various councils and other letters referencing him. It's hard to think of another candidate theory, arrived at by abductive reasoning, that explains all that writing. Maybe something more specific would be hard to verify, but this seems more sure than plenty of dubious findings in peer-reviewed journals based on observations it's impractical to replicate.

    The other thing is that "the best way to ensure true future beliefs is to subscribe to verificationism" isn't a claim that can be verified by verificationism.
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason
    On a different note, could we say that the description of any rational (law-like) possible universe is equivalent to just the beginning of a description of an infinite number of possible irrational universes that just happen to act like a given law-like universe up to some arbitrary number of states? And so we could say that an analog of Cantor's Diagonal Argument applies here, in that a combinatorial concatenation of all law-like universes simply defines one unique irrational universe (one that jumps around, essentially "simulating" each rational universe in turn). Sort of a diagonal "Kolmogorov complexity of the infinite" argument. I don't think there is a bijection if these sets can exist.

    In this case, there would be infinitely more irrational possible universes. Not convinced this works, but it seems plausible.
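    As a rough sketch of how the cardinality point might be formalized (my own framing, not a worked-out version of the concatenation argument above), assume a "universe" is an infinite sequence of states drawn from a set S with at least two elements, and that the law-like universes are countable because each is specified by some finite law:

    ```latex
    % Sketch only: universes as infinite state sequences, law-like ones countable.
    \[
    \mathcal{U} = S^{\mathbb{N}} \ (\lvert S\rvert \ge 2), \qquad
    \mathcal{L} = \{U^{(1)}, U^{(2)}, U^{(3)}, \dots\} \subseteq \mathcal{U}
    \ \text{(the law-like universes, enumerated)}
    \]
    \[
    \text{Diagonal universe: } D = (d_1, d_2, d_3, \dots), \quad
    d_n \in S \setminus \{u^{(n)}_n\} \ \Rightarrow\ D \ne U^{(n)} \text{ for all } n
    \]
    \[
    \text{Hence } D \notin \mathcal{L}, \qquad
    \lvert\mathcal{U}\rvert = \lvert S\rvert^{\aleph_0} \ge 2^{\aleph_0} > \aleph_0 \ge \lvert\mathcal{L}\rvert
    \]
    ```

    If those assumptions hold, no enumeration of law-like universes exhausts the space, and the irrational universes strictly outnumber the law-like ones; whether this is the right way to model "universes" is of course the open question.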



    If I had a good answer for that I'd be publishing my landmark philosophical treatise, but I'm at a loss.
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


    Agreed. That gets to the unreasonableness of denying PSR in many everyday contexts. But generally we don't feel the same way about violations of PSR for seemingly "eternal" truths. "Why do the Golden Ratio and Pi have the values they do?" Well, we can explain that in terms of other ratios and numbers, but we generally are fine with there being no "cause" behind the explanation. 2+2 is equivalent to 6-2 in some way, but we don't tend to say 2+2 causes 4.

    IDK, there are plenty of ways to deny that mathematical truths are eternal, and I'm open to those. But it does seem much more plausible that these sorts of Platonic truths exist in some sort of acausal way. So there is an argument from analogy that could be made that an eternal universe is sort of like 2+2 = 4, it's a truth without beginning or end.

    I don't think the analogy works. What would be satisfying is something that doesn't work by analogy, but rather carries the same sort of necessity as simple mathematical or logical truths. This is what I take Hegel to be attempting in the Science of Logic, starting from consciousness, but it's also fairly impenetrable.
  • Deductive Logic, Memory, and a new term?


    You might consider this point: formal theories of communication and computation are extremely similar and in some cases indiscernible. Computation requires communication. In the canonical model of computation in natural systems, cellular automata, a cell's neighbors need to "signal" the cell so that it "knows" which state to adopt. In the Turing Machine model, the head needs to scan the tape at each step to run the quintuple. This is communication.

    Memory can be seen as simply the communication between a past state of a machine and its future self. This same sort of communication can even be achieved via recursion in a Markov Chain IIRC.

    I will try to find the paper I found laying this out. But, if computation is deduction occurring in a step-wise fashion over time, which it seems to be to me, then we can think of memory and recursion as a sort of communication.
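    To make the cellular automaton picture concrete, here is a minimal sketch (my own illustration, not taken from the paper I mentioned): each cell's next state is computed solely from the states "signaled" by its neighborhood, and the evolving row is the only channel through which the machine's past state communicates with its future self, i.e., its memory.

    ```python
    # Elementary cellular automaton: computation as local communication.
    RULE = 110  # any rule number 0-255 works; 110 is the classic example

    def step(cells):
        """Advance one time step; each cell 'listens' to itself and its two neighbors."""
        n = len(cells)
        out = []
        for i in range(n):
            left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
            signal = (left << 2) | (center << 1) | right  # the 3-bit "message" received
            out.append((RULE >> signal) & 1)              # look up the rule table
        return out

    row = [0] * 31
    row[15] = 1                  # a single live cell; the row is the machine's memory
    for _ in range(15):
        print("".join("#" if c else "." for c in row))
        row = step(row)          # past state communicated forward to the next step
    ```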
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


    Sorry if I wasn't clear before. But yeah, that's the basic problem I see. If things can start to exist, having not existed at any prior point, then it seems like things could start existing at any time, and anything should be able to start to exist in this way, not just the Cosmic Inflation state preceding the Big Bang.

    I always figured this is one of the reasons why many cosmological theorists start with some sort of vacuum state or "the laws of physics" existing without beginning, instead of positing an initial first time they exist, but I might be wrong.

    I am not entirely clear on the history of this sort of argument, but I gather that it was taken seriously and used to justify the once-popular view of an eternal, static universe, before evidence of expansion and the Big Bang began to pile up. "Start states" seemed to open the door for supernaturalism. But the shift towards accepting the Big Bang didn't seem to rekindle interest in the problem, perhaps because it's intractable and there is not much new to say. IDK, you still see it in Big Bang to Big Crunch to Big Bang appeals or Black Hole Cosmology, where the universe never had a start state but oscillates or regresses eternally.



    One could just as well argue that the universe specialises in black holes

    This is actually one of the theories I find more interesting. What if the inside of every black hole singularity is a Big Bang, the Big Bang simply being our name for a specific white hole? Universes can have all sorts of traits, which are somewhat random, but only universes that tend to produce black holes "reproduce." Perhaps this fixes the Fine Tuning Problem: we exist because universes like ours produce more black holes, and natural selection works on universes. It's intriguing at least because the mathematics of models around black holes jibes with observations we'd expect were this the case, although to date this "Black Hole Cosmology" is empirically indiscernible from the position that the Big Bang is unique. This perhaps just gets us to an infinite regress, but it's a neat idea.
  • Why should we talk about the history of ideas?
    On the original topic, IIRC Aristotle and Darwin are the two most cited individuals in biology. Why do you think this is the case? They obviously aren't cutting edge. Even if I am recalling this inaccurately, I certainly see Galileo and Newton brought up all the time; Plato might be the most cited person in the Springer Frontiers books I've read, despite it being a series of books on the "cutting edge" of scientific issues.

    I feel like it's a mix of tradition, appeal to authority, appealing to well known, canonical thinkers, and a desire to ground theories in some sort of foundation. The history gives a nice ready made structure for a literature review, but it also seems to do more. There is a sense in which all theories are arguments and the rejoinders become important.

    I'll come back later but just briefly:



    First, we might have to agree to disagree on arguments. In general, I think that if you agree with the logic being employed, accept the inference rules, etc., if the argument is valid, and if the premises are all true, the argument should generally be persuasive. The general "type" of argument doesn't tend to make it more or less persuasive to me, and I guess that is the big difference here: the idea that some types are inherently less persuasive.

    I won't hold that this is absolutely the case. Gödel's proof of God works off pretty innocuous axioms, but it doesn't tend to convince people, to give just one example.

    The claim isn't that they are specific to history, it is that history is at the further end of a spectrum.

    I agree that history is on the "more difficult to verify and falsify," side of the spectrum. I do think many questions of scientific and philosophical history are actually closer to the middle of this spectrum than many of the questions in social sciences though.

    I mean, there are all sorts of reasons why any firm raises prices. Core concepts in economics are about how complex systems work in the aggregate, and this makes falsification very difficult because we admit that exceptions do exist and that other factors can overwhelm one sort of relationship.

    International relations is even more fraught. It is in many cases easier to make a plausible argument for the core reasons why a given war occurred, or at least rule out many explanations, than it is to elucidate a common principle by which wars tend to occur.

    But I also think some degree of progress gets made despite these issues. However, in these areas it's more about assigning probabilities to explanations than establishing certainty.
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason
    It might violate logic if you assume that before the first moment of existence of everything there was another moment of existence of nothingness.

    I don't see how emptiness, an empty set or void, is precluded by logic. But, moreover, I never proposed such an assumption. The question is why uncaused entities, relations, etc. must necessarily all exist for the first time at the exact same moment (and then continue to exist indefinitely).




    A state prior to the absolute first state" is a nonsensical construction, which you seem to be insisting on inserting into Jabberwock's model instead of grasping the model that Jabberwock is trying to convey to you.

    No, I get that there is no state prior to the first state in the way Jabberwock frames things, and I see nothing wrong with that in terms of coherence. I actually accept that way of looking at things (with a caveat). I didn't say "your model is nonsense," I said "why should I accept your claim that it would be illogical to question your model?"

    The way this exchange started was not a disagreement over whether uncaused entities are coherent. It was a disagreement over whether it is necessary that, if uncaused entities exist (i.e., they exist without existing in any prior state), they can only exist for the first time simultaneously, at the exact same moment.

    Something starting to exist when it did not exist prior to its first moment of existence is something coming from nothing.

    Maybe the phrasing is bad here, but I explained it in detail earlier. When I say "something from nothing," I am not talking about a progression of states of nothing, a series of empty states with no variables, followed by a series of states with variables that is continuous with the empty states. I am talking about the fact that entities that can be described by variables exist in some first state, despite not existing in any prior state, and so there is "nothing" causally prior to them.

    I get that we are not talking about a progression of moments. There is no time where there exists a "nothing." If nothing exists, then nothing changes, so there is no relevant time dimension. But the "first state" does not occur outside of time. It is a state existing within time. The first state is a state containing entities, per Jabberwock, which I think makes sense. These entities exist in the first state and exist in no prior state. So, entities can exist without having existed in any prior state.

    This being the case, I am left wondering why it follows that entities can exist without existing in any prior state, but only in the first state that any entity exists. If entities can exist uncaused (having existed in no prior state), then it seems like they should be able to exist uncaused in any state, given the normal definition of "state," which is just "a description of what exists in a system."


    If a set of uncaused entities can come to exist at some first state, why can't other uncaused entities exist for the very first time at any later state? This is where the definition seems to be doing all the heavy lifting, because a state is then also defined as "everything that exists," to preclude more than one uncaused system, and "states are such that they only progress from other states, except for the first state," to preclude additional uncaused entities. But all it normally means for states to "progress" from one another is that they are ordered. That they are ordered does not mean that the descriptions of later states must be entailed by earlier ones. But the definition now seems to also include the caveat that, outside the first state, all future states are entailed by prior ones, precluding anything else that is uncaused. I'll buy that this might describe our universe, but I don't see how it's illogical to reject this model.


    This isn't the usual definition of states. You can have toy model universes that are random and they can have states.
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


    Yeah, I got that part. If I accept your definition I accept your conclusion, because your conclusion is contained in the definition. I understand why your conclusion flows from your definition. The question is, why should I accept your definition? Something starting to exist when it did not exist prior to its first moment of existence is something coming from nothing. I am not sure how the position just stated violates some core principle of logic.

    If anything, the claim that the universe has no cause is the claim that violates a commonly held "rule of thought," the Principle of Sufficient Reason. But I will allow that not everyone agrees that PSR should be taken as axiomatic and that it remains controversial. However, I do think it's telling that the only context where I can recall seeing people deny PSR for the external/physical world is the topic of First Cause.

    If the universe is a brute fact then it is a brute fact, not a deduction that it is illogical to question. I don't rule out that the position is true; I only deny that it's illogical to question it, and I've yet to see or hear of a proof, from any commonly accepted first principles, showing why PSR should apply everywhere except to the birth of the universe.

    For what it is worth, I also don't think the claim that the universe began uncaused is illogical in any sense either, I just think it presents problems.
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


    Yeah, this still seems empty. All the work is done by definitions I'm not inclined to accept. I am left at a loss about why it is "illogical" to disagree with you. This seems reminiscent of the claim that it is "meaningless to talk about anything causing the Big Bang because by definition nothing caused the Big Bang."

    If the universe has a first state then it does not exist without beginning or end; it is not eternal. If, as you say, states contain entities, then entities came to exist acausally. If entities began to exist for no reason, then there is no constraint on entities coming into existence for no reason, and it is not the case that, as you say, things always progress "from something to something." If an entity exists for no reason, then its initial existence involves "something from nothing."

    If there was a "birth of the universe" then that is an event, an occurrence, a thing that happened,etc. There aren't technical terms, I mean them just in the normal sense. If something coming from nothing can happen, then it can happen again because if something can begin to exist with no prior conditions then no prior conditions are relevant to it. If this the case, as you seem to accept, then the claim that "something always comes from something," is simply in contradiction with the claim "that everything came from nothing."

    What exactly in logic makes the universe existing as a brute fact necessary? What formal system do we use to prove "things began to exist, all at once," by logical necessity?

    I don't even think it's a bad option to claim that the universe simply began and we can know nothing as to why, at least compared to the other options. This might be my preferred take. I don't know why there is an impetus to first claim the universe is a brute fact, though, and then claim this is somehow logically necessary.
  • The US Economy and Inflation
    Glad to see not everyone is buying into the "wage-price spiral" argument. Rising wages are no doubt part of the cause of inflation, particularly for some sectors, but you don't have record profit margins and declining real wages during a period where inflation is driven by wage growth.

    I think it was the third quarter of 2022 that America's 500 largest companies had almost double the profit margin that they had through the 60s and 70s.

    Rather, the most reasonable explanation I can think of is that:

    First, the pandemic causes a huge supply shock, one that has aftershocks due to China's repeated, draconian shutdowns. This puts supply-side pressure on prices.

    At the same time, people can't consume services in the same way due to COVID restrictions. Demand for durable goods soars at the same time supply is hit hard, so there is a demand-side inflationary push as well.

    Service workers who were laid off in droves find new production jobs, which are expanding rapidly due to the shift towards durable goods consumption. At the same time, Baby Boomers begin to retire more rapidly due to the pandemic, both because of job conditions and because of safety fears, as it is primarily an illness that is dangerous for the elderly. Migration also slows way down, hurting the labor supply. When things reopen, tons of service jobs are hiring at once but the labor force is smaller. This kicks off wage hikes, and indeed for a while, even with inflation, the bottom 80% of earners are doing better than they have in decades.

    This is the initial inflation, but why don't we reach equilibrium quickly? I would argue that in the labor market we did reach equilibrium. Real wage growth slowed and then actually reversed. Did we really have a labor shortage with a declining price for labor?

    So why the continued inflation? I think the answer lies in the surge in corporate profits. The pandemic forced many industries to raise prices and made people accept large price hikes. At the same time, the short period of real wage growth and the stimulus left households with more money to spend, making demand less elastic. Companies now had an excuse to raise prices that consumers would buy, and because prices had not surged like this in recent memory, consumer habits were lax. First, some firms hiked prices because they had to in order to survive, but this then solved the collective action problem of who raises prices first in an oligopoly.

    Huge market concentration over the past decades makes this worse. When four firms control 80% of the meat market and over half the value of the average grocery cart goes to six conglomerates, it is much easier for oligopolies to realize monopoly profits once their collective action problem is solved. Obviously the war in Ukraine hit energy supplies too.

    Then you have the housing market which goes up for a whole different set of reasons, but adds to all this.

    If this is the case, or at least a close approximation, fiscal policy should play a larger role. We should be taxing those who benefited from the windfall monopoly-like profits to reduce aggregate demand, instead of using a brute-force tool like rate hikes (of course, you might still do hikes; very low rates appear to increase inequality long term in a corrosive way). We should also be looking at market share and trust busting with renewed vigor.




    Ageism has nothing to do with it. Some economic policies hurt some classes and help others; that is the nature of the beast. Accumulating $32 trillion in debt hurts younger adults and children. Having almost half the Federal budget be transfer payments to seniors (universal basic income in the form of Social Security and universal healthcare in the form of Medicare) necessarily hurts other classes. There is only so much to go around. But this doesn't necessarily make "think of the children" a good way to look at monetary policy.

    We gave seniors these benefits largely because of the assumption that, once someone reaches a certain age, they can no longer work in many professions. Social Security was created in the context of the 1930s, when lifespans were much shorter and most work was manual labor. The problem being addressed was widespread senior poverty.

    The programs worked. Today seniors are the group least likely to live in poverty. Children are the most likely group to live in poverty. During the pandemic we experimented with a child tax credit that gave families about 35% of the average Social Security payment per month, per child. But we also ran astounding deficits during this period.

    The arguments for children deserving support, namely that they cannot support themselves, seem at least as good as the arguments for seniors. The argument that it is an investment is even better. However, we can't afford current spending levels, so it's not like this is a real option unless we dramatically hike revenues or cut spending. To the extent transfers to seniors crowd out funding for other classes (veterans' benefits, support for children, even ideas like reparations for slavery), they "hurt" those classes.

    To my mind, seniors already have the largest share of wealth of any group, a much larger share of wealth than their parents did at their age, receive the lion's share of all transfer payments, and hold most high offices at the federal level, so I wouldn't be particularly worried about them as a class per se, at least not more than any other age group. I would be worried about a particular subset of seniors, those with inadequate savings for a basic standard of living. But the easiest way to help them is with targeted transfer payments, not monetary policy. Monetary policy is inherently a blunt tool and needs to focus on the big picture.

    That many seniors don't have adequate savings is, to my mind, definitive proof that our national pension system is simply inadequate. It is highly unlikely that current generations will live through the high rates of growth and wage growth that Baby Boomers did, and even many Baby Boomers have trouble covering their retirement. That points to a fundamentally structural issue.

    We have traditionally prioritized older generations over younger ones on the assumption that economic growth and technological change will mean that younger generations end up far better off than prior ones. That assumption no longer appears to be true: real wages have stagnated for half a century (falling for the lower quintiles), life expectancy is declining for babies born today, and growth hasn't returned to old levels.

    Plus, IDK if inflation even does hurt seniors more as a class. That's the common wisdom, but they own the most real estate and equities relative to their share of the population by a solid margin, and these are gaining value faster than inflation.
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason
    If there can be a "first state" at one point, then there was a system for which no prior states existed. That seems fine.

    But then the definition of a state is supposed to somehow preclude the possibility that there could ever be more than one system without prior states. That doesn't seem to flow from your definition. If an entity can exist in a "first moment," and we can say nothing meaningful about why, then any number of entities can exist in a first moment. Nor can we say much about these entities' properties or whether they can interact.

    If one system can have no prior states, why not others? Even if we say there can be no "last states," the definition doesn't suggest that there must be one and only one "first state" for one and only one system. Nor am I aware of a definition of "system" that precludes systems from interacting.

    Then there is the other issue of events. If we adopt one of the more eliminative views on cause, then what we call events is really just the transition from state to state. For a Newtonian universe, we can think about 3D slices cut across the time dimension. An event is then simply a description of some phenomenon we experience that can be described by some components of a state, a subset. The event has a starting time and an ending time, and it exists as just the relevant subset of components of the states from the start time to the end time.

    Now, the states we observe don't evolve in just any way. They evolve based on regularities that can be described by mathematics; our "laws of physics" are at least an approximation of these regularities. However, if a first state, a particular arrangement of variables, occurs due to no prior states, why does it then follow that the variables cannot shift their values randomly, as opposed to in accordance with their normal regularities, at any other time? More importantly, why should we define a state, a set of variables describing a system at some instant, as only a "state" when there are multiple states and states evolve such that regularities dictate that evolution?

    I see no reason why I cannot have a model universe where the values of the variables describing S1 do not entail the values of variables at S2.
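    As a concrete (and purely illustrative) version of such a model universe: the states below are ordered and each is a complete assignment of values to variables, yet nothing about S1 entails S2, because the transition is drawn at random.

    ```python
    import random

    random.seed(0)  # makes the run reproducible; the point is S2 isn't derivable from S1
    VARIABLES = ["x", "y", "z"]

    def next_state(_previous):
        """Ignore the previous state entirely and draw fresh values at random."""
        return {v: random.randint(0, 9) for v in VARIABLES}

    history = [{v: 0 for v in VARIABLES}]     # S1: an arbitrary "first state"
    for _ in range(4):
        history.append(next_state(history[-1]))

    for t, state in enumerate(history, start=1):
        print(f"S{t}: {state}")
    # The states are ordered and each fully describes the system at its step,
    # but knowing S1 tells you nothing about what S2 had to be.
    ```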

    For the definition to solve the problems, we need the definition of a state to be: "the variables describing a system at a given moment, but only in cases where the evolution of states is dictated by mathematically describable regularities, except in the case of the first state. Further, to be a state, it must exist in a system that does not interact with any other systems (this is required to avoid a second 'first state' for some other system occurring, and then the new system interacting with our original)." That seems like an ad hoc definition aimed at "defining away" the problem.

    Imagine if there was empirical evidence to support the existence of multiple Big Bangs, our universe being the result of some sort of cosmic merger. Would this entail that such a universe had no states?

    I guess what would convince me of your point is if you could show that only one uncaused system can exist by necessity and that system's prior states necessarily have some sort of entailment relationship with their future ones. Otherwise the fix seems ad hoc. For it to be convincing that something is "true by definition," the definition needs to be necessary in some way, entailed by other premises we accept, or the disagreement needs to be about popular word usage.
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


    IDK what your definition of state is. I figured you were talking about states in terms of physics, since physics is relevant to the cosmological argument. In physics a state is simply a set of variables describing a system at a given moment. Systems come into being and go out of being all the time in physics. However, they are all, to some degree, arbitrarily defined. We can give systems a definitive definition because we are the arbiters of what a system is, but that subjectiveness isn't helpful here.

    Anyhow, defining the problem out of existence doesn't seem compelling. "An uncaused event can occur, but only once because of how we've defined our terms." It's a weak tautology IMHO. If all events can be described by physical state changes (a core premise of physicalism) then the line between "event" and "state transition" seems weak.

    It's essentially akin to the claim that "talking about what came before the Big Bang is meaningless." Is it? I don't know, people don't seem to have any trouble thinking of something as causally prior to the Big Bang. Indeed, it is now popular scientific opinion that there was something before the Big Bang, Cosmic Inflation. And it seems likely that we will find out more about events prior to the Big Bang and potentially events prior to Cosmic Inflation. Hence the effort to redefine the "Big Bang" from its original scientific description into "time 0, the point before which we can claim that talking of cause is meaningless." IMO, this effort seems doomed as the interval between the earliest processes we think existed and the Big Bang as originally described continues to increase.

    The uncaused has no limits; no cause can dictate its occurrence. What principle can explain why the uncaused can only be prior to the causal? I don't think definition does it. States transition causally, but it's easy to imagine uncaused state transitions and even to build such things in toy universes.
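    As another purely illustrative toy model (in the same spirit as the sketch a few posts up): states evolve by a fixed causal rule, except that at one arbitrary step a new, uncaused entity simply appears in the state description; nothing in the ordinary definition of a state rules this out.

    ```python
    import random

    random.seed(1)
    state = {"a": 1}                 # the "first state" contains a single entity
    history = [dict(state)]

    for t in range(2, 8):
        state = {k: v + 1 for k, v in state.items()}  # the usual causal regularity
        if t == 5:                                    # nothing in S4 entails this
            state["b"] = random.randint(0, 9)         # an uncaused newcomer appears
        history.append(dict(state))

    for t, s in enumerate(history, start=1):
        print(f"S{t}: {s}")
    # Every state is still a well-defined description of the system at its step;
    # after appearing uncaused, "b" simply joins the causal evolution.
    ```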


    ---

    As to infinite regress, it solves the cosmological argument, and potentially the rationality problem.

    Take Penrose's idea that the conditions of the late universe might be such that they result in another Big Bang. This makes the universe cyclical, without beginning or end. There is no start time to consider. Black Hole Cosmology is another such theory, one which I could see becoming popular with the right evidence, that might deny the existence of any "first state."

    Now, if we can also show that the random cannot be eternal, cyclical in this way, this seems like it could explain rationality to some degree in terms of necessity, although IDK how such an argument would work exactly; I could be convinced though.
  • Why should we talk about the history of ideas?


    You keep using strawmen. If I say, "we can be justified about some historical facts and narratives," you respond with "so, you don't get that people can disagree over historical facts and narratives?"

    If I say, "we must sometimes rely on the authority of institutions and base our beliefs on trust because it is impossible for one person to conduct more than a minute fraction of all experiments in the sciences," you respond with "so you always blindly trust authority?"

    No, I never implied anything of the sort. "Some x are y" is not equivalent to "all x are y," nor is it refuted by "some x aren't y." Actually, I agree with you more than you seem to think re: why we have reason to doubt some facts more than others and why we need to be open to revising our beliefs. TBH, this is a very frustrating trend in all our exchanges. I appreciate your effort and I think you often bring up a bunch of good points about credulity and justifications for knowledge, but there is always this move to bleed out any nuance and turn things into binaries.

    I will give you a specific example. I claimed the history of ideas is sometimes useful in explaining theories and making arguments about them. I said that this is the reason why introductions to a theory usually begin with a historical overview.

    The question was about verifying the narratives in textbooks on the history of ideas. Are you suggesting that such evidence troves exist for all ideas.

    Obviously not. This is yet another "some x are y" being taken as "all x are y."


    I think our main points of disagreement come down to:

    A. I agree with your reasons for doubting historical narratives. However, I don't think the problems you point out are at all specific to history. A zeitgeist (paradigms) colors how people interpret empirical evidence; political pressures shape how scientific data is reported, or whether it is reported at all. Culture influences science; e.g., the role of culture/norms is the best explanation for why the replication crisis is such a massive problem for sociology but not as much for other social sciences (with some social sciences not faring any worse than the "hard" sciences).

    Trust in both individuals/institutions and in the process of scholarship is just as essential for science. We're counting on others to call out cherry picking, fake data, etc. E.g., there is only one LHC; if you do not go to CERN you cannot observe super high energy physical reactions firsthand. Even if you do visit CERN, you cannot vet if they are doing what they say they are without a ton of specialized knowledge and permission to inspect the LHC in detail. By contrast, for some historical issues, a wealth of easily accessible data exists.

    Point being, the degree to which we must rely on trust is variable, and I haven't seen a good argument for why historical claims necessarily require more trust than many scientific claims. To head off another binary, I am not saying all historical claims can be backed up; we can also have relative degrees of certainty about them.

    The point that "anyone can make up historical claims," is trivially true for science as well (see Flat Earthers). I would absolutely agree that the sciences, in general, tend to have a better peer review process, and higher barriers to entry. It is harder to convincingly fake a scientific paper due to the unique vocabulary that fields employ, but this is a problem of degree IMO.

    The question of how "hard science," "soft science," and research-focused humanities differ in terms of justification is a very interesting one, but outside the scope here. My brief take is that as you get into very complex systems, e.g. international relations, quantitative analysis becomes increasingly less convincing due to the nature of the data involved, making documentary evidence more relevant but also forcing us to look probabilistically at claims. Arguments are sometimes negative, and it often isn't hard to show that some historical narratives are highly unlikely, even if it is impossible to show that just one is right; good history often does this.

    Yes. Again, I've no clue what point you're trying to make here.

    The point above. I would be convinced by your arguments if you could show me why claims about the history of some idea are specially unknowable such that: "We cannot make any compelling arguments about the history of ideas, why a theory was adopted, etc."
    Could we do this self-checking with the argument of the OP regarding post enlightenment thought?

    In almost every post ITT I have said "some historical arguments aren't good." This is the same reduction to a binary of all/none.

    I don't know where you're headed by providing these hyper-specific examples which are not illustrative of the form in general.

    Ok, now we're getting somewhere! You agree that some beliefs about the development of a theory can be justified? Sometimes these are helpful for proving a point. In which case, our disagreement is simply a matter of degree. My argument is simply this: "if the history of an idea is sometimes relevant, and if we can sometimes have justified beliefs about the history of ideas, then sometimes arguments made from the history of an idea are relevant. Whether we accept or reject the argument should be based on the data supporting the premises and on whether the conclusion actually follows from the premises."

    I take it we will disagree on how difficult it is to support some of these premises, but no matter.


    B. I think that knowing why a theory was adopted is central to the scientific project. If we cannot know why a theory developed, science is in big trouble precisely because there is a sociological element to the project.


    Baffling.

    IDK, you said the difference between historical claims and scientific ones was that the latter used empirical facts. I was just pointing out that this isn't the difference between the two, that historical arguments are also based on empirical facts. I offered simple examples to show that, presumably, you do accept some historical facts, which could be used in premises.

    I asked how historical arguments are different and you made an appeal to logic and empirical facts. But historical arguments can be put into valid logical forms and they are often based on empirical facts, so this doesn't seem like a difference in kind. Nor is it clear that all scientific empirical claims are easier to verify than many historical fact claims.

    See:
    And to emphasise, this is not the case with arguments relying on basic rules of thought and empirical observation.

    History is so open to interpretation that virtually any theory can be held without issue. Not so with empirical facts, not so with informal logic (not so with formal logic either but that wasn't my point).

    I would ask though, why does science have so many fewer narratives? It seems to me like the reasons are largely social. This is why I mentioned Quine, Kuhn, and holism. Most theories are underdetermined. There are, what, 9 major competing interpretations of quantum mechanics, all with identical empirical predictions? QM isn't unique in allowing the possibility of multiple interpretations; it's the way that science is practiced that closes off the proliferation of alternate explanations. This is why the history of ideas is so relevant in the sciences.



    That's why I asked for clarification about it originally. See your posts below:

    For a logical argument to have persuasive force it is only necessary that I agree with the rules of logic. I could not, of course, but it's not a big ask.

    For an argument from analogy to have persuasive force, like the one you presented, I'd need to already agree that the situations are, indeed, analogous....


    Exactly. It has persuasive force. If we just swap out all the premises for letters and produce a long, non-obvious, logical argument that, say, if A > B and B > C then A > C, that has persuasive force. I can look at that and think "yes, that's right, A is greater than C in those circumstances." I've been persuaded by the presentation. The longer and more complex the argument, the more likely it is to draw out entailment from believing one logical move on other logical moves. I'm persuaded by the argument that I must accept the entailment, regardless of whether I accept the premises.

    What I found weird was the claim that "[if] I'm persuaded by the argument that I must accept the entailment, regardless of whether I accept the premises," which seemed to imply that the logic alone was persuasive. But a valid argument with false premises isn't persuasive. I didn't, and still don't, really know how to take the claim that: "For a logical argument to have persuasive force it is only necessary that I agree with the rules of logic." That doesn't seem true, and most of your posts seem to be arguing that you have to accept the premises of an argument to be persuaded, which I would agree with 100%.

    Example: is anything flawed with the logic of the following?

    We should accept well-justified historical facts as true premises.
    "Einstein created the theory of thermodynamics," is a well-justified historical fact.
    Thus, we should accept "Einstein created the theory of thermodynamics," as a true premise.


    I don't think so, per common rules of inference anyhow. But we shouldn't find it persuasive because a key premise is not true. So, I don't get how your argument is about logic rather than the ability to justify a certain class of premises.

    As I pointed out, that two things are analogous is normally itself a premise. Was the claim then that analogies cannot be put into a formal format?

    IDK, I take it you meant "if there are lots of premises and a valid argument I can accept the conclusion even if some premises are false?" That makes sense, especially when we're working with claims we assess in terms of probability of their being true.

    Also, why couldn't you put a historical argument into a long formal argument? You could absolutely take all your fact claims about history and assign them to letters and put them into a valid statement. All your arguments about the merits of historical claims have been about the ability to justify their factualness. That is, you reject the premises, so the argument isn't persuasive, so I didn't understand the digression into logic re: analogies and historical claims.
  • Why should we talk about the history of ideas?


    Given modus ponens as an inference rule. (And thus not a theorem.)

    Actually constructing arguments requires some system of deduction, not just the definitions of the logical constants.

    What is the relevance here? (And don't say it's a non sequitur, I thought we decided about those :rofl: )

    I was responding to the claim that good arguments are just those arguments where "all that is necessary is that one agrees with the rules of logic." That and the stranger claim that, if an argument is in a valid form, we should be persuaded by it and "must accept the entailment, regardless of whether [we] accept the premises."
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


    I fail to see how calling it something different changes the problem. Why should the uncaused and wholly unexplainable manifest in just one convenient way? Why can you have an uncaused first state but not an uncaused last state, a sudden uncaused end?

    If a universe can blink into existence for no reason then it seems it can blink out of existence for no reason. In which case, maybe we should just assume the world, including ourselves and our memories, just began to exist in the past second, since that gives the universe less time to have vanished into the uncaused void from which it came?

    IMO, an infinite regress seems more appealing. Such an infinite regress doesn't really require or specify the God of any existing religion either, so if I have to bite the bullet either way...

    Or there are ways to avoid the infinite regress through logical necessity. For example, ontological arguments for God, or something like Behmenism, where the world is blown into existence by the force of contradiction, the necessity of resolving contradictions being a sort of dialectical engine (Hegel being an example of the latter). If successful, these avoid infinite regress. Whether they're actually successful is another question though.

    Anselm's and Gödel's ontological arguments have the dubious distinction of solid staying power in the face of many talented minds trying to find a definitive way to put them to rest, while simultaneously convincing likely not one person to change their mind on the issue. This makes me think the pragmatic use of ontological arguments is even more dubious than the rest of the philosophy of religion.
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


    Agreed. I'd just add that, if we go back to 1, we are still left with the problem of why an uncaused thing happened once, but hasn't happened since. If random, uncaused events happen, shouldn't they happen all the time? You can't really argue that this isn't a problem because uncaused events are "unlikely," since how could they have any sort of factor determining their likelihood while still remaining uncaused?

    Maybe they do occur and we don't notice them, but since the birth of the universe was a pretty big event, leaving lots of evidence, it doesn't seem to work to say all the other uncaused events are too small to see. Nor does it work to say "oh, uncaused events would necessarily occur outside our universe." Why? The uncaused has no causal precedent; uncaused events can occur wherever and whenever.

    But any argument about God stemming from this problem seems not unlike the Fine Tuning Argument, but even harder to quantify, so I don't see it having legs.
  • Why should we talk about the history of ideas?


    What? Why would people have to be 'in' on anything? Are you honestly having this much trouble understanding the concept of disagreement among epistemic peers? Some theories are popular, others aren't. Is that such a challenging concept for you?

    That's a pretty weak strawman. We're not talking about disagreements about scientific theories. What I wrote:

    I've always thought that these reviews [of the origins of scientific theories] were done so that the student could follow the development of a position. Knowing which alternatives to a theory have been considered and rejected is key to understanding a theory because, especially for a novice, the dominant theory of the day is always going to look underdetermined by the evidence they are aware of. It's also true that knowing why a given element was added to a theory gives you much better insight into how to think about that part of the theory. If some constant was added simply because the mathematics for some project wasn't working out, it's good to know it.

    I've yet to come across any radically different versions of how thermodynamics, etc. were developed. Even books like Becker's "What is Real?" with a serious axe to grind still give the same essential outline for how QM developed.

    But per your view, how can we actually know why a scientific theory was advanced or why others were rejected?

    The question of why a model was abandoned, or why a constant was added is someone's opinion. Someone's theory. Again, from your perspective (you agree with the textbook - or trust the institution) that all seems really solid, but it's not the history that's done that, it's your belief in the authority of the person presenting it. The theory might have been discarded for reasons other than those the textbook claims, the constant might have been added for more rigorous reasons in someone's view but others disagreed (the ones writing the text book)

    IDK, when Einstein says he added the Cosmological Constant to have his theory jibe with the then widely held view that the universe was static, I think that is a good reason to believe that is why Einstein added the Cosmological Constant.

    Really, how?

    The pioneers of quantum mechanics published papers throughout their lifetimes, conducted interviews, were taped during lectures, and wrote memoirs, all describing how the theory evolved. In many cases, their personal correspondences were made available after their death. Most of this is even free.

    Now tell me where I can get access to a free particle accelerator and a YouTube tutorial on how to properly use it, so I can observe particle physics findings firsthand.

    You're confusing empirical facts for narratives about the motivations, socio-political causes, zeitgeist,... As above, empirical facts are quite easy to persuade others of since we generally share means of verification.

    That Einstein added the Cosmological Constant to fit current models is an empirical fact. That in 1492 Columbus sailed the ocean blue is an empirical fact. That the Catholic Church harassed advocates of heliocentrism is an empirical fact. People have had sensory experiences of those things and reported them.

    Most facts we accept aren't easy to verify personally. You can read about chimpanzee behavior extensively, but how easy is it to go and study chimps in the wild? When was the last time you wanted to learn something and ran a double-blind clinical study?

    Do you replicate the experiments after you read a scientific paper? No. Then you're trusting the institution publishing it and its authors, right?


    and trust

    Plenty of people don't trust the scientific establishment. This cannot be a good criterion for justification.
  • Why should we talk about the history of ideas?


    Yes. And I've countered that point several times now, but you're still stuck at the beginning. It's not the same because not all methods are so open, not all methods are so narrowly shared. There are entailments resulting from denying a common form of logic, or an empirical fact that are uncomfortable and which are not necessary when denying some interpretation of history.

    Simply put if I say, "the ball is under the cup" and then I show you the ball you could still deny my theory, but you'd have to bring in a mass of other commitments about the possibility of illusion, not trusting your own eyes, ... Commitments you wouldn't like.

    Sure. And denying that we can trust the standard fare of physics textbooks re: the origins of relativity or thermodynamics also comes with a lot of commitments. You'd have to assume a lot of people were "in" on a misrepresentation and that they had all coordinated to keep to the same narrative across a wide array of texts, including falsifying and circulating the papers of the original people involved.

    The "ball under the cup," example is rather lacking. Many phenomena explored by contemporary physics can only be observed using fantastically expensive equipment. Findings aren't deducible from mathematics. If you find results in contemporary physics credible you are either did the experiment yourself or you are relying on the authority of others and processes like peer review there too.

    Your average person is in a much better position to vet whether a science textbook is telling them the truth about the history of quantum mechanics than they are to go out and observe entanglement and test Bell's inequalities. When was the last time you read something about cosmology and fired up your giant radio telescope to verify it?

    I do actually agree with you in a limited way though. There is a real difference with some aspects of history, where the number of people who are motivated to develop their own interpretations is larger, the degrees of freedom for interpretation greater, the barrier to entry in advancing one's own theories (somewhat) credibly is much lower, and the ulterior/political motivations for advancing some arguments much greater. I don't buy that this is any reason to assume total nescience is at all rational though.

    I'm sure to someone with your... how do I put this politely... confident way of thinking, the Facts™ of history probably are all written in stone and no doubt all these alternative interpretations are more of those 'conspiracy theories' your priesthood of disinformation experts are working so hard to cull. I can see how the argument I'm trying to make just won't mesh with some mindsets. It may be an impasse we can't bridge.

    :rofl:


    Maybe you could write a book on this topic and lay out your arguments systematically? Texts in science and philosophy almost universally review such history, and they're wasting a lot of time, right? So, you could radically change pedagogy for the better.

    The only downside would be that neither you (nor I, for my small role in spurring you on to the project) could ever get credit for the idea. The fact of our contributions would be lost to the shifting sands of history, unable to be verified.
  • How Does Language Map onto the World?


    ... metaphysical frameworks, such as idealism and panpsychism, which were derided as baseless nonsense by the positivists of the past, are back in new forms.

    This is sort of funny in the context of language, given that Russell's theory requires propositions to be relevant explanatory entities that exist outside space and time, and yet which we somehow "grasp."



    This question seems to be phrased in a leading way though. Most popular forms of idealism do not deny the existence of an external world or express skepticism towards its existence. Even if I bought into something like Absolute Idealism, or Kastrup's idealism, I think I'd still have to pick the top choice.

    That said, I agree that idealism seems more popular here than in philosophy at large.



    Excellent point. And the reason this works with spoken/written language so well is because we recognize that, when someone speaks to us or writes to us, they are trying to communicate. So, we don't have the problem of some sort of latent infinity of possible meanings existing within finite beings. Rather, we have the recipients' recognition of "the source of this incoming stimuli is an attempt to communicate, what could they want to specify?" and language allows us to rapidly narrow down the possibilities.

    Thus, possibilities are "out there." This can be true even as respects our own thoughts, internal monologue, etc., because the mind works by communicating with itself; neurons are constantly communicating as much as computing.

    Still, it seems to me like meaning is in some ways constructed too. Just going off cognitive science research into the topic, it seems like different, quite independent systems get used for processing different aspects of language. When asked to visualize something, the same system used for processing sight gets used; when asked to imagine hearing something, we use a different system. These systems work in parallel, and what makes it to conscious awareness is regulated both by unconscious processes, which seek to prioritize certain signals, and by executive function, i.e. what we are paying attention to.

    It's quite possible to listen to someone just enough to get by in terms of "playing a language game," to respond in ways that don't give any offense, while barely gleaning any meaning from what you hear. On the flip side you have guided visualization, where we are intentionally meditating on another's words. The levels of meaning that seemingly unfold can vary. We can read the same passage twice and get different levels of understanding from it, either because we are paying closer attention to it or because we have new relevant knowledge/experiences that help us interpret the message. So, it seems like the recipient "brings something to the table."

    IMO, philosophy of language has been badly hampered by foundationalism. Language is an evolved capacity that itself evolves. It is used to do many different types of things. Sometimes it is referring to real objects in the world, sometimes it is used in a social game, sometimes it is expressing propositions (whatever the nature of propositions). Obviously names have causal histories, obviously language is established by social norms, etc. I don't understand why people work so hard to try to reduce it to just one of these things.
  • Why should we talk about the history of ideas?


    lol, you responded at the same time I was editing.

    Yeah, it's not a good example for the point, since it's affirming the consequent the way many people might read it, with "must" read as "if" instead of "iff." The point is that you can have an argument of the form:

    If and only if a then b.
    b
    Thus, a.

    This is not affirming the consequent. But it's a shit example of that because "will" and "must" can be taken as "if" or "iff," so I'll change it.
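    For contrast, here are the two forms side by side in standard notation (nothing here beyond what is described above):

    $$\text{Invalid (affirming the consequent):}\quad a \rightarrow b,\;\; b \;\;\therefore\;\; a$$
    $$\text{Valid:}\quad a \leftrightarrow b,\;\; b \;\;\therefore\;\; a$$

    The second form goes through only because the biconditional includes the $b \rightarrow a$ direction; the first does not, which is exactly the gap between "if" and "iff."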
  • Why should we talk about the history of ideas?


    I edited it to phrase it better and not leave it open to interpretations of just affirming the consequent.
  • Sleeping Beauty Problem
    I had an idea for a rather dark version of this:

    Beauty gets cloned with all her memories on a given flip, such that each Monday and Tuesday has a 50% chance of resulting in a new clone being created.

    On Wednesday, all the Beauties wake up in an identical room with identical memories. The real Beauty is given breakfast and allowed to leave her room and enjoy the magical castle until Sunday, when the experiment will be rerun.

    The clone Beauties are killed by the first person who comes to the door.

    The real Beauty retains her memories of her free weekends.

    Now, let's say the experiment has been running for a long time, three years. A lot of other Beauties have likely been cloned and killed by this point. But, if you're the real Beauty, and you consistently think you are the real Beauty when you wake up, then you have indeed been right about that fact for three years straight. So, when you wake up next time, how worried should you be that you'll be killed?

    Based on (an admittedly simple) Bayesian take, Beauty should be increasingly confident that she is the real Beauty with each passing week. The whole idea is that repeated trials should move the dial in our probability estimates. And yet, this doesn't seem right, no?
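    FWIW, here is a minimal Python sketch of one way of modeling the setup (the per-day 50% clone chance and the "count awakenings across many weeks" framing are my own illustrative assumptions, not the only way to formalize it):

    import random

    # Each week: Monday and Tuesday each carry a 50% chance of producing a clone.
    # On Wednesday the original plus any clones wake with identical memories;
    # clones are then killed. We estimate what fraction of all such awakenings
    # belong to the original Beauty.
    def fraction_original(weeks: int = 100_000, seed: int = 0) -> float:
        rng = random.Random(seed)
        original, total = 0, 0
        for _ in range(weeks):
            clones = sum(rng.random() < 0.5 for _ in range(2))  # Mon + Tue flips
            total += 1 + clones  # everyone with these memories wakes on Wednesday
            original += 1        # exactly one of them is the original
        return original / total

    print(fraction_original())  # ~0.5

    Nothing in that estimate depends on how many weeks the original has already survived, which is why the naive "I've been right for three years straight, so keep updating upward" move looks suspect: the evidence available on waking is identical every week.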
  • Why should we talk about the history of ideas?

    My issue wasn't really with the use of history per se, but with how it was or wasn't connected to other points being made, which would hold for any sort of obiter dicta in a post. I left in the detail that it was a specifically historical point as an opening for defending a different view of what sorts of connections between points are required in an argument.

    Seems fair to me. Although that doesn't seem like a "type" of argument vis-à-vis the history of ideas, except in the sense that it falls into the type of "bad," as respects having a bad connection between its premises and conclusions, or simply being a non sequitur rather than an argument.

    But that makes perfect sense. There are cases where the history of an idea is quite relevant and others where it isn't, and you can present history that isn't even relevant to the idea at hand.

    I think something like SR/GR is a good example because it's a theory that is very much defined by the failures of other contemporary theories, and because there were other consistent models that were arguably rejected for contingent reasons (e.g., if we allow that objects grow and shrink depending on their velocity, we can avoid some of Einstein's conclusions).

    I'll just add that when people have been discussing a set of topics for a very long time, seemingly tangential points get brought up because of some broader, and at times inapposite, context. This seems inevitable; you see it in old letters between patristic theologians as well, where some footnote has to explain why x is remotely relevant to some prior point because of some previous exchange two years earlier. I will agree that this sort of thing is entirely unhelpful for anyone following along.



    This, @Srap Tasmaner, might serve as an example of the costs of engagement. Why am I having to expend time countering an interpretation of an argument that a five year old could see was wrong? Why hasn't that interpretation been silently ruled out by all parties in this thread on the grounds that we're not stupid? We shouldn't be here.

    lol, I in no way interpreted the original post as ruling out arguments from historical examples in general. Hence why I only replied to Srap Tasmaner re: the examples of frequentism and the common practice of explaining the historical conditions surrounding the emergence of scientific theories.

    My response to you was what it was because you have repeatedly made the claim that the reason arguments involving history aren't valid is because "you can select just the history that proves your point." My point was that this can be claimed against all inductive arguments; that is, you are using an argument re: analogies and the history of ideas that generalizes.

    Take your latest restatement:

    My point is that history alone has no such force since it is inevitably selective. Thousands of things happened in the past, so pointing to A and B as precursors of C doesn't do anything because the argument would be in your choice of A and B not in the mere fact of their near contemporaneity to C.

    You made the same argument for why historical analogies, in general, don't work. My response shifted because you appear to be making the wider claim that arguments from historical example do not work because cherry picking is possible (otherwise, why are historical examples related to the history of an idea and those used in analogies somehow unacceptably vulnerable to cherry picking?)

    How does your argument in the quote above not apply to my example about the Laffer Curve? You could absolutely claim that I am cherry picking. My dataset has only three examples, all from the same country, in roughly the same era. However, taxes have been cut across human history, and presumably sometimes revenue went up after taxes were cut. Arguments about cherry picking are arguments against the truth of a premise; they are arguments about the applicability of the data. That cherry picking is possible is not a good argument against the use of any historical examples, nor is it specific to one type of historical example.

    Do you see how I could have taken things like: "thousands of things happened in the past, so pointing to A and B as precursors of C doesn't do anything," as arguments against induction in general?

    If I was accused of cherry picking re: the Laffer Curve, I could counter that the effects of tax cuts in the US, in the modern era, are more relevant than the wider population of all tax cuts across history. This would be to say that the Reagan, Bush, and Trump tax cuts are more closely analogous vis-a-vis a consideration of what tax policy should be in the US today.

    And to emphasise, this is not the case with arguments relying on basic rules of thought and empirical observation. There are not, in those cases, a myriad of narratives to freely choose from. One might well argue against a tenet of modern physics by claiming maths is flawed, but one would be rightly wary of the commitments that would entail. Not so with historical analysis. I can easily say "No, things did not happen that way" and I'm committed to absolutely nothing else as a result. It's a free pass to disagree.

    But you argue right above that I have no good reason to trust a physics textbook as to why certain theories were adopted because:

    Again, from your perspective (you agree with the textbook - or trust the institution) that all seems really solid, but it's not the history that's done that, it's your belief in the authority of the person presenting it.

    What is different here? Presumably I trust an institution because they have a track record of producing truthful information. People can, and do, fake their data. Governments produce fake economic figures. And even if we're not talking about fake data, it is completely possible to cherry pick any empirical data, whether it be historical case studies for an IR paper or which medical studies you include in a meta-analysis.

    So, we cannot trust any sort of historical narrative because the person presenting it might be lying or cherry picking, and yet for some reason we can trust some empirical data that other people present to us because...

    If, as you say, "the question of why a model was abandoned, or why a constant was added is someone's opinion," and unverifiable, based solely on authority, then science is in a very rough place...




    You also seemed to be making the much stronger claims that:

    >Argument from analogy is not a good form of argument because anyone can disagree with whether the analogy fits.

    For an argument from analogy to have persuasive force, like the one you presented, I'd need to already agree that the situations are, indeed, analogous...

    [Analogies'] merits are contingent on the interlocutor already agreeing with the point it's supposed to be demonstrating to them. What's the point in demonstrating to someone a point they already agree with?

    In an analogical argument, "x is to y as a is to b..." is a premise. It was not clear to me how your point that "your interlocutor must agree that your premises are true for them to accept the argument" is unique to analogies. This is what I meant by: "the same critique can be leveled at any argument."

    Further, the claim that someone must "already agree with you" for an analogy to be successful goes too far. People can be, and often are, unaware of all the entailments of premises they accept as true. A good analogy can be persuasive and informative if your audience is listening in good faith.


    >That some other types of argument (I'm not sure which) aren't vulnerable to this sort of disagreement over premises, so long as a person accepts the rules of logic?

    For an argument from analogy to have persuasive force, like the one you presented, I'd need to already agree that the situations are, indeed, analogous... For a logical argument to have persuasive force it is only necessary that I agree with the rules of logic. I could not, of course, but it's not a big ask.

    Can you see my confusion? What arguments aside from those using allegedly a priori, self-evident premises, are not vulnerable to having the premises challenged?

    Exactly. It has persuasive force. If we just swap out all the premises for letters and produce a long, non-obvious, logical argument that, say, if A > B and B > C then A > C, that has persuasive force. I can look at that and think "yes, that's right, A is greater than C in those circumstances." I've been persuaded by the presentation.

    Only if the premises are true. Let's look:

    "If 3 is greater than 9 and 9 is greater than 100 then 3 is greater than 100."

    Convinced?

    The longer and more complex the argument, the more likely it is to draw out entailment from believing one logical move on other logical moves.

    The longer and more complex an argument, the less feasible it is for a human being to ever work through its validity, develop a truth table, etc. Hence why we rely on computers so heavily with long logical statements. In general, we want to compress our logical statements down as much as possible or put them in CNF for easy computation.


    I'm persuaded by the argument that I must accept the entailment, regardless of whether I accept the premises.

    This is just a baffling statement and I'm going to assume you meant something else by it, like "an argument can be valid without being sound." When an argument is valid, it does not mean that any entailments it enumerates are true or should be accepted.

    >If it is Monday, then Grover Cleveland is the President
    >It is Monday
    >Thus, Grover Cleveland is the President (proposed entailment/conclusion)

    This is a logically valid argument.
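    To make the validity/soundness gap concrete, here is a minimal Python sketch (the helper names are mine and purely illustrative). It brute-forces the truth table for the form above, which is the kind of check a computer can do, and which says nothing about whether the premises are actually true:

    from itertools import product

    def implies(p: bool, q: bool) -> bool:
        return (not p) or q

    def is_valid(premises, conclusion, n_vars: int) -> bool:
        # Valid iff no truth assignment makes every premise true and the conclusion false.
        for assignment in product([True, False], repeat=n_vars):
            if all(prem(*assignment) for prem in premises) and not conclusion(*assignment):
                return False
        return True

    # p = "it is Monday", q = "Grover Cleveland is the President"
    premises = [lambda p, q: implies(p, q), lambda p, q: p]  # "if p then q", "p"
    conclusion = lambda p, q: q                              # "therefore q"

    print(is_valid(premises, conclusion, n_vars=2))  # True: the modus ponens form is valid
    # A truth table can certify validity; it cannot tell you that the first premise
    # is false in the actual world, i.e. it says nothing about soundness.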
  • God and the Present


    Sure, time is emergent in that it's the dimension in which change occurs in three dimensional space. Aristotle noted this when rebutting Zeno's Paradox of the Arrow as a fallacy of composition. Sure, if we consider any frozen moment in the path of an arrow, from when it leaves the bow string to when it hits the ground, the arrow isn't moving in any of the frozen moments. However, that doesn't imply motion doesn't exist. Rather, time is the dimension through which changes in location take place. A universe with no change has no (observable) time dimension.

    Now, there is an argument that, ontologically, a toy universe, or our real universe, either has three dimensions or four, regardless of whether change exists. You can imagine a four dimensional universe where nothing changes; it's just that it would be observably indistinguishable from a three dimensional one for anyone inside said universe (ignoring that we arguably can't imagine an observer who doesn't experience time). Some formulations of cosmology for our universe add an extra dimension for the very early phase that "disappears" shortly after the Big Bang.
  • God and the Present


    Presupposing time doesn't seem like an issue for empirical science. After all, we observe time and use it to define all sorts of phenomena. The unfortunately common claim that "physics shows that time is illusory," is quite misleading. A more appropriate restatement would be "physics shows that the passage of time relevant to some present moment is illusory." That is, virtually no one denies the existence of a relevant time dimension re: Minkowski Space-Time (time is right in the name).

    But even the restatement goes too far. It'd be more accurate to say that "many physicists agree with philosophical interpretations of empirical findings in physics that suggest that the passage of time is illusory." Obviously there aren't empirical findings from some experiment where time has been stopped or run in reverse for us to observe. I am of the opinion that the evidence for these philosophical interpretations is far too weak to justify a blanket denial of any relevant present. I don't even think this is a majority opinion in physics writ large, but it seems like it is a majority opinion for those who work in cosmology and publish works of popular science, and for philosophers of time.

    I would imagine that, just as in philosophy, where you specialize changes how you see the issue. I'd be willing to bet that a survey would find that people who specialize in statistical mechanics are far less likely to believe in eternalist interpretations than those who work in cosmology.
  • Why should we talk about the history of ideas?
    BTW, I'll agree that not all background is useful. I don't think freshmen neuroscience majors should have to learn about Freud, even though they often do in some cursory fashion. However, I also don't think you can satisfactorily explain introductory quantum mechanics or SR/GR without going into the history of Newtonian physics. Often, new theories are defined in terms of old ones.


    The modern conception of physicalism was defined in terms of popular dualist and idealist theories this way. Defining physicalism in terms of causal closure and the denial of any sui generis forces makes no sense if it isn't explained in the context of its competitors.
  • Why should we talk about the history of ideas?


    For a logical argument to have persuasive force it is only necessary that I agree with the rules of logic.

    No, this is profoundly misunderstanding what logic alone can do for us. Logic just tells you that, if the premises of an argument are true, then the conclusion follows. Logic generally can't tell you anything about whether the premises are true. Most arguments are claims about states of affairs/matters of fact. You can't argue anything "just from logic," except (maybe) "a priori truths" that can be grasped from pure deduction alone (which plenty of people don't think exist).

    "All historical arguments are good arguments.
    Wayfarer's argument, which sparked this thread, is a historical argument.
    Thus, the Wayfarer's post is a good argument," is deductively valid.

    And there isn't one set of "the rules of logic" for people to agree to either. There is a fairly well agreed upon set of principles for classical logic, and there are the widely accepted "laws of thought," but these don't allow you to phrase many of the arguments people want to make (i.e. arguments about modality, quantifiers, etc.), nor does everyone agree on them. Mathematics has not proven deducible from logic to date, and so even proofs don't "only require that [you] agree to the rules of logic." Hence, either logical pluralism or logical nihilism is the norm, with some folks still holding out hope that some One True Logic reveals itself.


    You can apply logic to parts of an inductive argument, but such an argument necessarily includes claims about past states of affairs/past observations. If I say "cutting taxes won't result in higher government revenues per the Laffer Curve, because we have seen 3 major tax cuts since 1980 and each time revenues have fallen instead of increasing," that is of course an argument relying on historical fact. In many claims about the world, I would argue that deduction's primary role is to ground the statistical methods used to analyze past observations. People can always argue that past observations are in error, fake, poorly defined, etc.

    You can put historical arguments into the form of deductively valid syllogisms. It doesn't mean they will be convincing or true.


    To get back to the original point here: do you guys think most science textbooks waste the student's time by going through the history of how a theory came to be developed?

    Every in-depth treatment of GR/SR, quantum mechanics, or thermodynamics I've read starts with the history of the ideas in play. A survey of thermodynamics normally starts with Carnot, Clausius, and mechanism-based explanations of thermodynamics in terms of work. Most treatments will discuss the once widely held, but now thoroughly debunked, caloric theory of heat: what the theory was and which experiments ultimately led to the rejection of the theory and the positing of a new one.

    Likewise, almost every review of relativity of any depth starts with a summary of Newtonian physics and discussions of the theory of luminiferous aether.

    I've always thought that these reviews were done so that the student could follow the development of a position. Knowing which alternatives to a theory have been considered and rejected is key to understanding a theory because, especially for a novice, the dominant theory of the day is always going to look underdetermined by the evidence they are aware of. It's also true that knowing why a given element was added to a theory gives you much better insight into how to think about that part of the theory. If some constant was added simply because the mathematics for some project wasn't working out, it's good to know it.

    For example, if a multiverse version of eternal cosmic inflation becomes the dominant view in cosmology, I'd argue that it'd be good for students to know that the driving reason behind that theory's adoption was concerns over the Fine-Tuning Problem. Why? Because some people might find the FTP totally untroubling and so might question why some seemingly "philosophical" question led science to accept a vast landscape of unobservable phenomena. Likewise, mechanism was rejected because Newton's gravity acted at a distance; knowing this is relevant when Einstein's theory replaces Newton's because the historical reason for rejecting mechanism goes away and locality is seemingly back on the table.

    What would your preferred method of presentation be? Just presenting currently held facts and models? Talking about just experimental results and how they support or undermine a theory, without any reference to the history behind the experiment?

    I don't think this works. We collect and categorize data based upon our current theories. The historical context of an experiment determines how it is performed and how it is understood.

    Plus, old "debunked," theories have a habit of coming back in new forms. Wilzek's "Lightness of Being," spends considerable time look at old aether theories and why they were rejected because he wants to revive aspects of the theory in a new format, to explain space-time as a sort of aether, a metric field. Explaining the history lets the reader see how only certain parts of the old aether theory were inconsistent with experimental findings.
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


    Pretty much what you pointed out, avoiding "epistemic grandiosity." I think it's an argument for pragmatism, circular epistemology, and fallibilism. Demands for absolute certainty and for an absolute foundation go too far.

    And, as Hoffman develops his very similar argument, I think it also counsels looking closely at how our innate faculties might be acting as theoretical blinders. It's too much to get into in detail, but I think mechanical philosophy in particular might be something born out of how evolution shaped the human sensory system.

    Because our senses are vulnerable to illusions, we use one sense to "cross check" the veracity of another. E.g., when we see a flower we think might be fake, we touch it and smell it to help us decide. Mechanism and corpuscularism draw us in by giving us a "fundamental" model of reality that nicely coincides with an "image" of reality as bouncing balls we can simulate for ourselves with most of our senses (touch, vestibular, sight, hearing).

    It's worth noting here that we appear to use the same systems for imagining a phenomenon that we use for perceiving that same type of phenomenon. Galileo thought everything was particles in motion, but dog-Galileo might have argued for "fundamental scents."



    I'm not sure this is correct - for evil to exist, this seems to require free choice. How can something be evil if it is the necessary requirement for existence, built into it by the creator/evolution? The notion of predation, so much a part of the natural world of animals, must then imply that the natural world is evil. Do you subscribe to this? Manichaeism holds to this view. Earthquakes, fire and floods are built into how nature functions, how can they be evil? Are black holes evil?

    How can death and suffering exist without life? Something has to be alive in order for it to die, right? That's all I was getting at.

    I'm not sure this works for me. You talk about 'law-like'. But even in using the term 'laws' this implies a lawgiver - there's a prejudice built into the language

    Is there? When people talk about "the laws of physics," or "natural laws," I don't think they're generally presupposing any sort of "lawgiver." I don't see anything inconsistent with being an atheist and believing in "laws of physics." Indeed, many prominent atheists claim these are the only things they believe exist.

    Whether these laws are intrinsic, just a description of the way physical entities interact because of what they are, or extrinsic, e.g. the Newtonian view where laws are outside of physical entities and govern them, doesn't really make a difference. The claim is simply that there is invariance in fundamental aspects of nature. E.g. water doesn't dissolve skin one day out of every 1,000, we don't see solid objects passing through each other (people walking through walls), conservation of energy appears to hold in our experiments, etc.

    Do you think that's a controversial claim? To be sure, it's open to the critiques of radical skepticism, and the critique that the sciences regularly refine their descriptions of nature, but I feel like the claim that "Newton's law of gravity is well supported empirically and describes a uniformity in the world" is fairly well justified, even if we accept that it doesn't describe the phenomenon perfectly.

    IDK, if people don't believe in the law of universal gravity, don't think Maxwell's equations describe the behavior of something real in the world, I will grant that my argument won't have any real appeal for them.

    Could not many of our accounts of the world be more about us than the world itself?

    Sure, absolutely. I'm willing to say that is almost certainly the case. But if we can't say we are justified in claiming that the fundamental findings in the sciences actually correspond to something about the way the world actually is, then how is knowledge even possible? If we observe gravity working the same way throughout our lifetime, and yet we still think this might be some sort of unreal order imposed upon the world, what could possibly justify any knowledge claims about external reality?

    Why even posit a noumenal world in this case? If you take Kant that far, I'd argue you're better off going where Fichte and Hegel realized that thought led, to some form of idealism.

    Do these say anything about a creator or about purpose?

    Not obviously. But I'll refer you to the Fine-Tuning Problem and this post.

    Fine Tuning is considered a problem because it appears that life should only be possible by a fantastical set of "just so coincidences." The multiverse hypothesis from cosmic inflation purports to solve this problem by showing that, if some very large set of universes is created, then it is actually not surprising that we exist. Most universes can't support life, but a few can. Clearly, observers like us will only be in that smaller set of universes where they can exist.

    My point is that this sort of argument runs into the problem of then having to explain why the multiverse only creates certain types of universes, that is, ones with "physical laws." Why is that constraint relevant? Because if you don't specify that only those sorts of universes get created, then the number of random universes is far larger than the number of ones with describable laws. However, the random universes should also be able to create observers like us by pure chance, and even be observably indiscernible from universes that do have laws for long periods of time, by pure coincidence. It's sort of like how a program that randomly outputs English words is much more likely to produce a coherent page of text than one that randomly outputs letters (randomly outputting pixels might be a better analogy though).
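    A rough back-of-the-envelope version of that analogy, with every number an illustrative assumption rather than a measurement:

    import math

    # Treat a "coherent page" as one particular 300-word sequence. A generator
    # picking uniformly from a 50,000-word vocabulary must hit 300 choices;
    # a generator picking uniformly from 27 characters (letters plus space)
    # must hit roughly 1,800 characters (about 6 characters per word).
    log10_p_words = -300 * math.log10(50_000)    # ~ -1410
    log10_p_letters = -1_800 * math.log10(27)    # ~ -2576

    print(f"log10 P(page | random words)   ~ {log10_p_words:.0f}")
    print(f"log10 P(page | random letters) ~ {log10_p_letters:.0f}")
    # Both probabilities are absurdly small, but the word-level generator is over a
    # thousand orders of magnitude more likely to succeed, which is the sense in
    # which structured generators swamp fully random ones.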

    It's a problem akin to the Boltzmann Brain Problem, which is also an issue for multiverse theories, but more generalizable and perhaps less tractable. Unless there is some explanation of why only certain types of universes exist in the multiverse, the pivot to the multiverse doesn't seem to actually address the problem.

    It also works from a purely phenomenological perspective, unlike Fine-Tuning, because there are many more ways we can imagine that our world progresses from state to state.
  • God and the Present


    I'm not sure if relativity explicitly outlaws a universal now, or if it just means we could in principle never figure out which reference frame decides the universal now. That's something I'd like to hear an expert's opinion on tbh. It's a question I've had for a long time.

    It's generally taken that a universal now cannot exist, although this to some extent depends on how one defines their terms. In the context of SR and GR and how time is defined there, we do not have an absolute "now." Rather, it is generally argued that either becoming/simultaneity occurs locally or that all times exist within a "block universe." There is also a "growing block universe" where the past exists but the future does not, such that the four dimensional universe "grows." Such growth occurs locally however, with a "many fingered time." There is also the "crystallizing block universe," where multiple quantum possibilities grow outwards from any local "now" and only "crystallize" when there is wave function collapse. At this point, what a quantum system appears to do is "retroactively decide" which past it actually had, although there is considerable debate on how to interpret this appearance. You can look up the "quantum eraser" experiments for that sort of thing.

    [image]

    Or the crystallizing block:

    [image: crystallizing block universe animation]

    Not to confuse you, but we can still talk about "time-like slices," though these aren't Euclidean planes but hyperplanes that may have a curved surface. Basically, we can talk about a global slice, but not about global simultaneity. Simultaneity is defined locally. But GR/SR are also classical theories, so things get even dicier when you talk about quantum phenomena like entanglement or tunneling.
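    For concreteness, the standard special-relativistic bookkeeping behind "simultaneity is frame-relative" looks like this (a minimal sketch of the usual textbook formulas, not anything specific to the quote below):

    $$s^2 = -c^2\,\Delta t^2 + \Delta x^2 + \Delta y^2 + \Delta z^2$$

    Two events with $s^2 > 0$ are spacelike separated, in each other's "Elsewhere," and have no frame-independent time order. Under a boost with velocity $v$ along $x$,

    $$\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$

    so events with $\Delta t = 0$ in one frame generally have $\Delta t' \neq 0$ in another. The time a clock actually records along its own worldline is the proper time $\tau = \int \sqrt{1 - v(t)^2/c^2}\, dt$, which is why the twins mentioned in the quote age differently.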

    Anyhow, below is a good quote I already had pulled out, although it is a bit dense. The Great Courses has a good class on relativity as well if you're interested.

    We have seen that SR rules out the idea of a unique, absolute present: if the set of events that is simultaneous with a given event O depends upon the inertial reference frame chosen, and in fact is a completely different set of events (save for the given event O) for each choice of reference frame in inertial motion relative to the original, then there clearly is no such thing as the set of events happening at the same time as O. As Paul Davies writes (in a variant of the example given by Penrose above), if I stand up and walk across my room, the events happening “now” on some planet in the Andromeda Galaxy would differ by a whole day from those that would be happening “now” if I had stayed seated (Davies 1995, 70).


    From these considerations Gödel concludes that time lapse loses all objective meaning. But from the same considerations Davies concludes, along with other modern philosophers of science, that it is not time lapse that should be abandoned, but the idea that events have to “become” in order to be real. "Unless you are a solipsist."

    As I argued in Chap. 3 above, events “exist all at once” in a spacetime manifold only in the sense that we represent them all at once as belonging to the same manifold. But we represent them precisely as occurring at different times, or different spacetime locations, and if we did not, we would have denied temporal succession...


    ...in each case we are presented with an argument that begins with a premise that all events existing simultaneously with a given event exist (are real or are determined), and concludes that consequently all events in the manifold exist (are real or determined). But the conclusion only has the appearance of sustainability because of the equivocation analysed above in Chap. 3. If a point-event exists in the sense of occurring at the spacetime location at which it occurs, it cannot also have occurred earlier. But if the event only exists in the sense of existing in the manifold, then the conclusion that it already exists earlier—that such a future event is "every bit as real as events in the present" (Davies), or "already real" (Putnam)—cannot be sustained. Thus, far from undermining the notion of becoming, their argument should be taken rather to undermine their starting premise, that events simultaneous with another event are already real or already exist for it in a temporal sense. For to suppose that this is so, on the above analysis of their argument, inexorably leads to a conclusion that denies temporal succession.

    This, in fact, was Gödel's point. As mentioned in the introduction to this chapter, he had already anticipated the objection that the relativity of time lapse "does not exclude that it is something objective". To this he countered that the lapse of time connotes "a change in the existing", and "the concept of existence cannot be relativized without destroying its meaning completely" (Gödel 1949, 558, n. 5). As we saw in Chap. 3, however, the sense in which events and temporal relations "exist" in spacetime is not a temporal sense. This would amount to a denial of the reality of temporal succession.

    So the root of the trouble with the "layer of now" conception of time lapse is a failure to take into account the bifurcation of the classical time concept into two distinct time concepts in relativity theory. The time elapsed for each twin—the time during which they will have aged differently—is measured by the proper time along each path. The difference in the proper times for their journeys is not the same as the difference in the time co-ordinates of the two points in some inertial reference frame, since they each set off at some time t1 and meet up at a time t2 in any one ...

    We may call this the Principle of Chronological Precedence, or CP. As can be seen, it presupposes the Principle of Retarded Action discussed in Chap. 4, according to which every physical process takes a finite quantity of time to be completed. Note that so long as CP holds for the propagation of any physical influence, it will not matter whether light or anything else actually travels with the limiting velocity.


    As Robb showed in 1914, this means that—restricting temporal relations to these absolute relations only—a given event can be related in order of succession to any event in its future or past light cones, but cannot be so related to any event outside these cones (in what came to be called the event's "Elsewhere"). There are therefore pairs of events that are not ordered with respect to (absolute) before and after, such as the events happening at the instants A and B on Robb's "Fig. 6.1". The event B, being too far away from A for any influence to travel between them, is neither before nor after A.



    For example, B could be the event on some planet in the Andromeda Galaxy that Paul Davies asked us to imagine, in the Elsewhere of me at the instant A when I am considering it. It is true that by walking this way and that I could describe that event as being in the past or in the future according to the time coordinate associated with the frame of reference in which I am at rest. But that event is not present to me in the sense of being a possible part of my experience. It bears no absolute temporal relation to my considering it...

    All the events I experience, on the other hand, will be either before or after one another, and therefore distinct. In fact, they will occur in a linear order. They will lie on what Minkowski called my worldline.


    There is nothing unique about my worldline, however. On pain of solipsism, what goes for me goes for any other possible observer (this is the counterpart in his theory to Putnam’s “No Privileged Observers”).42 Thus, if we regard time as constituted by these absolute relations, time as a whole does not have a linear order: not all events can be ordered on a line proceeding from past to future, even though two events that are in each other’s Elsewhere (i.e. lying outside each other’s cones) will be in the past of some event that is suitably far in the future of both of them. In this way, all events can be temporally ordered, even if not every pair of events is such that one is in the past or future of the other. This is Robb’s “conical order.” In the language of the theory of relations, it is a strict partial order, rather than a serial order.
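
    To make the “conical order” claim concrete, here is a small sketch (illustrative coordinates, units with c = 1, not taken from Robb) checking that the absolute before-relation is irreflexive and not total: two events in each other's Elsewhere are incomparable, yet both precede a suitably late third event.

    ```python
    # The light-cone ("conical") order is a strict partial order: irreflexive
    # and transitive, but not total.  Units with c = 1; coordinates are (t, x, y, z).

    def strictly_precedes(a, b):
        """True iff event a is absolutely before event b (b lies in or on a's future cone)."""
        dt = b[0] - a[0]
        dr2 = sum((b[i] - a[i]) ** 2 for i in (1, 2, 3))
        return dt > 0 and dt**2 >= dr2     # later in time and causally reachable

    A = (0.0, 0.0, 0.0, 0.0)    # "here-now"
    B = (0.0, 5.0, 0.0, 0.0)    # in A's Elsewhere: same t, too far away
    C = (10.0, 2.0, 0.0, 0.0)   # sufficiently far in the future of both A and B

    assert not strictly_precedes(A, A)                                   # irreflexive
    assert not strictly_precedes(A, B) and not strictly_precedes(B, A)   # A and B incomparable
    assert strictly_precedes(A, C) and strictly_precedes(B, C)           # yet both precede C
    print("conical order behaves as a strict partial order on these events")
    ```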

    In a paper of 1967 the Russian mathematician Alexandrov showed how the topology of Minkowski spacetime is uniquely determined “by the propagation of light or, in the language of geometry, by the system of the light cones”, noting the equivalence of this derivation to Robb’s derivation on the assumption of chronological precedence.

    The Reality of Time Flow: Local Becoming in Modern Physics
  • God and the Present


    You might be interested in this similar line of thought:

    One might think that philosophy ought to begin with the concept of “beginning” itself. Yet for Hegel such a concept is, paradoxically, too complex to serve as the real beginning of thought. The concept of “beginning” (Anfang) is that of “a nothing from which something is to proceed” (SL 73/1: 73 [181]). It thus takes for granted from the start that what is being thought is the beginning of something yet to emerge...

    Hegel’s account of being begins not with a full sentence but with a sentence fragment: “being, pure being, without any further determination” (SL 82/1: 82 [193]). In this way, Hegel indicates through his language that what we are to focus on is not a determinate subject of discourse or “thing” nor a predicate of some assumed thing (such as the “Absolute”) but rather utterly indeterminate being. Such being is to be thought of not as existence or nature but as sheer being as such—what Hegel calls “indeterminate immediacy.”

    Such being is abstract, but it is not a mere illusion for Hegel... At this point, Hegel confronts us with the first of many surprising paradoxes: for he maintains that by virtue of its utter indeterminacy pure being is actually no different from nothing at all: “being, the indeterminate immediate, is in fact nothing (Nichts), and neither more nor less than nothing” (SL 82/1: 83 [195]). Of all Hegel’s statements in the Logic, this is the one that has perhaps invited the most ridicule and elicited the greatest misunderstanding. In Hegel’s view, however, it is trivially true: pure being is utterly indeterminate and vacuous and as such is completely indistinguishable from sheer and utter nothingness. This is not to say that we are wrong to talk of pure being in the first place. There is being; it is all around us and is, minimally, pure and simple being, whatever else it may prove to be. Insofar as it is pure being, however, it is so utterly indeterminate that logically it vanishes into nothing. Presuppositionless philosophy is thus led by being itself to the thought of its very opposite.

    This nothing that pure, indeterminate being itself proves to be is not just the nothingness to which we frequently refer in everyday discourse. We often say that there is “nothing” in the bag or “nothing” on television when what we mean is that the specific things we desire are not to be found and what there is is not what we are interested in... By contrast, the nothingness Hegel has in mind in the Logic is the absolute “lack” or “absence” of anything at all, or sheer and utter nothing. It is not even the pure void of space or the empty form of time, but is nothing whatsoever... as the sheer “not.”

    Being and nothing are utterly different from one another but collapse logically into one another because of the indeterminate immediacy of their difference.

    From this, we get the dialectical move where the initial posit, being, sublates (negates, while incorporating parts of) its opposite (which emerges from the original concept itself). So we get becoming: the process through which whatever has being continually passes into non-being. The being of the present forever passes into the non-being of the past, while the future has yet to become.

    Being and nothing thus both prove to be absolutely necessary and to be endlessly generated by one another. Yet neither has a separate stable identity apart from its vanishing since logically each vanishes straight away into the other.

    What Hegel’s philosophy shows... is that logically, purely by virtue of being “being,” being turns out to be “becoming.” Becoming is thus what being is in truth: immediacy as the restless vanishing and reemergence of itself.

    Thus, we are always in the one place, that of becoming, the same "now." There is another progression that likewise defines how we are always in the same "here," and so always in the same "here and now."

    https://phil880.colinmclear.net/materials/readings/houlgate-being-commentary.pdf

    I can totally understand why people don't like this sort of thing, and the Logic is a beast, but I find it pretty neat. Houlgate's full commentary on the first part of the Logic is also fairly accessible, given that it is a commentary on perhaps the most inaccessible thing ever written.
  • Why should we talk about the history of ideas?

    It seems you're only looking at history through the lens of one who already agrees with the points you want to make. From that perspective, of course history looks like it supports your position, it's confirmation bias, not compelling argument.

    Sure, you're making an argument. This objection can be leveled at all forms of argument, and so it seems to be trivial. "You're only looking at the entailments of that proposition that support your argument," "you're only bringing up analogies that support your argument," "you're only discussing x scientific model that supports your argument," etc., etc.

    Again, it makes an argument from analogy. I fail to see how it makes a good one; other than by it coming to a conclusion you already happen to prefer. As a step in a rational argument it doesn't seem to contain any data. "They used to do that with homosexuals" is an empty argument without your interlocutor already agreeing that homosexuals and trans people share the same status... and if they agreed on that, there'd be no argument in the first place. You couldn't argue against the incarceration of child molesters by saying "they used to do that to heretics". It was wrong to do it to heretics, it's right to do it to child molesters. The argument is in the case, not the history.

    You fail to see how it's a good analogy because you think trans people are more similar to child molesters than to homosexuals, or because you disagree with the shift to wider acceptance of homosexuality? Or are you just making the point that it's possible for someone to disagree with any analogy, regardless of its merits, and that it's also possible to make bad analogies? (This seems trivial to me.) Or is it that arguments from analogy are inherently flawed? (This just seems wrong.)

    I don't see the broader point here. It's possible to write bad proofs and it's possible to believe that good proofs don't work. This objection seems like it applies to any form of argumentation.


    Anyhow, I was merely trying to give some examples where the history of ideas may be relevant, not even making an argument. Frequentism jumped to my mind simply because I think Bernoulli's Fallacy is a good book, even if I don't buy all the arguments. It uses the historical rise of frequentism to both order and elucidate its mathematical arguments. Thus, you're not just seeing that "people thought about x differently in the past," but you're seeing a mathematical argument for why frequentism doesn't work in all cases, paired with examples of where prior thinkers went wrong and how that has influenced current dogma.

    You can't argue anything if your opponents are 'firmly entrenched dogmatists'. I suspect they would disagree and therein lies the problem.

    You can say the same thing about a syllogism. That someone could reply to "all men are mortal, Socrates is a man..." with "you can't know that all men are mortal!" doesn't amount to much, no?

    Why is an argument from the history of an idea particularly bad?
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


    I'll have to think about that more. It seems to me that the "end" does not exist until it is actualized. Thus, God's desire is prior to the existence of the end. Or, if God is eternal, God desires the end simultaneously with all times, while the end only occurs within created time. Sort of like how, if I build a house, my idea of a house and my desire to have a house exist before I have built the house. To be sure, the "idea" of my end exists before I start building it, but I don't know if it works to say the unactualized end must exist before an agent can desire it (at least not for God; I suppose that is true in Platonism for people).

    Much like the universe is not, traditionally in the West, of itself an aspect of God but instead is God's creation

    This is definitely true of modern folk religion, which has tried to separate the realms of science and religion, but I don't think it's traditionally true. For example, in Neoplatonism the One still emanates nature (barring versions with a Demiurge, e.g., Gnosticism). In Christianity, God is often seen as continually causing the world to come into being. In the Confessions, Saint Augustine has God "within everything but contained in nothing," like "water in a sponge," and Origen likewise has God involved in sustaining being. Eckhart has a conception of God that gets likened to Pantheism, although I don't think this is entirely accurate; it's more Pantheism + more traditional Trinitarian conceptions. You have Spinoza in the Western tradition, Böhme and Hegel's self-generating God/Absolute, and Berkeley, for whom God is responsible for all our sensations. Nature itself was suffused with God before becoming "disenchanted," as Adorno puts it.

    Not super relevant to the topic at hand, but I think it would be interesting to unpack why this strong tradition of seeing God involved in sustaining all things, filling all things, came to decline in favor of the "divine Watchmaker," or a God who mostly doesn't act in the world and only sometimes intervenes, and who always does so supernaturally.
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


    As presuppositions go, I don't see overwhelming evidence that the world we think we know is rational or ordered. Humans impose reason and order because we are pattern seeking machines. One could just as well argue that the universe specialises in black holes and chaos and kills most of the life it spawns, often with horrendous suffering. Life on earth is one of predation - for many creatures to eat, suffering and death are required. Why would a universe be designed to produce such chaos and suffering and a natural world which wipes out incalculable numbers of lifeforms with earthquakes, fires and floods? Why would a universe of balance have within it so many meaningless accidental deaths in nature, along with endless horrendous diseases and concomitant wretchedness?

    Sure, "rationality" as a whole is an amorphous term. I was thinking more specifically in terms of "is it likely for a universe to evolve from state to state, such that past states dictate future ones?" That is, that the Principle of Sufficient Reason holds. That the PSR is a fair assumption for our world has no doubt been challenged, but I think those challenges still represent a small minority viewpoint. And that makes sense to me; after all, we don't see pigs materialize out of thin air, second moons appear in the night sky, or a chopped carrot with one half turned to dust, etc. There are law-like ways to describe the behavior of the universe at both the macro- and microscales. That's the sort of rationality I'm getting at, one which I believe tracks fairly closely with the Stoic and Patristic conceptions of the Logos Spermatikos.

    We can imagine consciousness without the PSR. We can think up toy universes similar enough to ours where the PSR might not hold but first-person experiences can still exist. However, there is a strong argument to be made that the PSR, or at least a world that is "mostly law-like," is essential for freedom. I think that connection to arguments for God could be explored to some benefit. That said, it's probably more common to think that the PSR somehow precludes freedom, so maybe this wouldn't be a very successful argument. I absolutely disagree with that interpretation, but it's certainly a common take.


    A definition in those terms assumes a time dimension, of course, but we could redefine it more abstractly in terms of dimensions only.

    Death, suffering, chaos, etc. all only make sense in terms of living things, so those issues seem posterior to life existing; they belong more in the bucket of "the problem of evil."


    Plantinga has a brilliant mind, but his brilliance is very limited by his nescience with respect to 'the' scientific picture and naturalistic perspectives. Unfortunately Plantinga is only able to present straw men to attack with the EAAN. Admittedly the EAAN can be highly effective as an apologetic that maintains others in a state of nescience similar to that of Plantinga.

    BTW, 100% agree on this. He's one of the greatest logic choppers of a generation, but it seems like he's thrown that talent into areas where it is just less relevant to what people care about. To be sure, there is some interesting stuff in the philosophy of religion, but it seems very rare for it to actually change people's opinions or even influence theology much. This, to me, is one of the weirder things about ontological arguments. There is a fair share of sophisticated analysis on Anselm's and Gödel's "proofs of God" that concludes both that they are valid deductions and that they are completely unconvincing.

    I do wonder, though, if people who come to believe that reality is mathematical would tend to put more stock in such ontological arguments. It seems like they should, because they don't see mathematics as merely a practical tool, but for some reason I doubt it moves the needle much.
  • What Are the Chances That This Post Makes Any Sense? A Teleological Argument from Reason


    I've caught up on the thread now. I don't really want to get into a discussion of the fine tuning argument, because I've spent the past 15 years arguing with (mostly) Christian apologists and I'm pretty bored with discussing it at this point. My thinking is that the appearance of fine tuning to the universe gets us (at best) to recognition that there are things we aren't in an epistemic position to be able to explain.

    We can speculate, and there are lots of speculative attempts at explanation, and not much strong reason to choose among speculations or even decide that anyone yet has speculated in a way that is somewhat accurate. I lean towards there being a multiverse (in line with Guth's thinking on eternal inflation) as being relatively parsimonious, but I don't lean that way nearly strongly enough to think it is worth spending any time arguing for it.

    That's a fair conclusion. I'm not super gung-ho about this argument beyond finding it interesting. I actually dislike most philosophy of religion, because I find that it's an area where one's ontology, epistemology, logic, ethics, mereology, etc. all tend to be relevant. This forces authors into question begging to make their papers manageable and tends to shift rebuttals towards attacks not really related to the original thesis. It's almost like you have to start most papers in the field with a list that begins "given we assume 1, 2... 117, then it follows..."

    Plus, I don't recognize the God of classical philosophical theism in any real religious traditions I can think of.

    That said, I think arguments like Plantinga's, if successful, do more than just show us our epistemic limits. If your theory of the world is self-defeating, if there is a contradiction in your justification for having true beliefs, it's worth looking at how you can avoid this problem.

    For example, with Hempel's Dilemma, I think the key takeaway is not so much that physicalism doesn't work, but rather that we shouldn't dismiss any theories because they don't "seem" to be physical, as what counts as physical is itself continually redefined and refined as we build knowledge.

    One point I would raise in the context of speculating about goddish minds as an explanation is, "What reason do we have to think that it is metaphysically possible for a mind to exist without supervening on some sort of information processing substrate?"

    That's a good point. How can a mind understand something like, say, the current state of the Earth, without somehow containing all the gradations of difference required to specify such a thing? If God is a unity, without distinction, and yet God knows the world, it would seem like God knows the world in a way that is indescribable using the language of mathematics, or at the very least our existing concepts of information.

    A sort of diagonal inverse of that point is that, if we buy into the computational theory of mind or integrated information theory, it doesn't seem like the idea of a sort of cosmic intelligence is at all precluded.

    Anyhow, I think the original argument, perhaps fixed up a bit, is most relevant to people who embrace the idea of a multiverse precisely because they think it somehow "fixes" the Fine Tuning Problem.

    It seems like, by moving to the multiverse concept, you've made things much worse, exacerbating the very problem we want to solve. We've moved from the problem of our single, observable universe being extremely combinatorially unlikely (but still only finitely unlikely) to the problem of why one particular sort of multiverse-generating mechanism exists, one that produces only certain types of universes out of an infinite number of possibilities.

    This alone might not be enough to take the bloom off the rose of the multiverse, but combined with the problems of explaining the Born Rule in a coherent fashion in a multiverse context and the problem of observers having any coherent identity through which to actually frame the theory, it might be. I've personally become less and less enamored of it over time.

    Max Tegmark's Mathematical Universe Hypothesis seems particularly vulnerable to this attack because it posits that all mathematical objects exist. Though I don't know how much this matters, since people have already pointed out that it also makes Boltzmann Brains the overwhelming majority of observers.

Count Timothy von Icarus
