Comments

  • A -> not-A


    Put differently, the notion of validity assumes a truth-functional context where truth and form are entirely separable. Yet when we think deeply about inferences themselves, such as modus ponens, truth and form turn out to be less separable than we initially thought. When we stop merely stipulating our inferences and ask whether they actually hold in truth, things become more complicated.

    Aren't truth and form mixed together in any tautology or contradiction? We wouldn't want to exclude those though, right?

    It seems you could do without it too. I hadn't really given it much thought.
  • A -> not-A


    A premise is defined as an analytic truth. It cannot be false, regardless of its synthetic falsity. If C means "Cows bark," it is irrelevant if they don't for the purposes of formal logic.

    An analytic truth is true by definition, e.g. "bachelors are unmarried men." Premises need not be analytic or considered so.
  • The dismal state of economics.
    I wish I had students this interested when I was still teaching economics.

    I hate to recommend Hegel, because even the relevant work here, The Philosophy of Right (his most accessible), is not at all an easy read. So, I'll just throw out a book I like on the application of Hegel to the 20th century, Axel Honneth's Freedom's Right. I recommend it because it addresses just these issues. Hegel was a big early fan/adopter of Adam Smith, but you could also argue that he was a proto-Keynesian who saw how the state might need to intervene to counter recessionary forces and long-term unemployment caused by sectoral shift.

    PR is mostly about ethics and morality, but it covers the economy in the section on civil society. For Hegel, social institutions (marriage, guilds/unions, markets, the justice system, churches, etc.) objectify morality for their society and ultimately serve to maximize freedom (in the long run, through a process akin to natural selection). They also shape our identities and, through this, help us both to be free, and, crucially, to not want to use our freedom to take away others' freedom.

    For example, people are more willing to embrace social welfare programs and wealth redistribution if they feel like they share a common identity with the primary beneficiaries of such policies. This is why high levels of migration have proven challenging in the West and have undermined support for the welfare state. People often see migrants from the developing world as "others;" they do not feel like they share an identity. Research bears out Hegel's intuition here. (This is not a judgement on the appropriateness of such identification, just what appears to be true.)

    Hegel would say that the state, working with and through other institutions, ideally works to resolve this sort of issue by building a common identity so that people want to cooperate and maximize each other's well-being, freedom, and flourishing. Basically, changes in identity serve to change people's social welfare function such that members of a society prefer what is better for others (and we could even argue that, at the limit, this might match what people would advocate for if behind Rawls' veil of ignorance).

    But Hegel also has a notion of freedom that hews much closer to the ancient and medieval view of freedom, so how this looks is a little different than one might expect (IMHO this is actually a good thing).
  • A -> not-A


    Yes, I think that's fair to say. We particularly care about this sort of thing in the sciences, the large focus on "correlation versus causation" for instance, or as respects states of affairs.

    Interestingly, Aristotle and St. Thomas do make some recourse to causes in discussing mathematics. Even though mathematics is the understanding of form abstracted from matter, they include a form/matter distinction within mathematical entities. The essence (or "what-it-is-to-be") of a triangle is its form, and the form determines what is true of all triangles. But particular triangles vary according to their dimensions, and this variance is attributable to their "matter," which is termed "intellectual matter" due to these abstractions existing in the mind (ens rationis). We might say the lines are the "material" that compose a triangle. And we might say that triangle is a genus with different species, e.g. "isosceles," with there being things that are true of all isosceles triangles by virtue of their species form.

    For example, the Pythagorean theorem can be explained in terms of formal cause, whereas the values for a, b, and c will be explained in terms of material cause.
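
    The distinction can be sketched in code (a toy illustration of my own, not anything in Aristotle or Aquinas): the theorem fixes the relation among the sides of every right triangle, while the particular values of a and b are the variable "matter."

```python
import math

# The theorem (the "formal" element) fixes the relation a^2 + b^2 = c^2
# for every right triangle; the particular side lengths (the "material"
# element, in the Scholastic sense sketched above) vary case by case.
def hypotenuse(a: float, b: float) -> float:
    return math.hypot(a, b)

print(hypotenuse(3, 4))   # 5.0
print(hypotenuse(5, 12))  # 13.0
```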

    Whether this distinction is useful is another matter. In the mathematics they had available, it seems like it could be helpfully explanatory, but whether one wants to try to bother reforming the concepts for modern mathematics will probably largely depend on if one thinks the rest of the metaphysics backing it is worth developing.

    They do have very interesting philosophies of mathematics though, particularly the potential/actual distinction as Aristotle applies it to the notion of the infinite in physics.


    * On Aristotle's metaphysics, everything except God has both act and potency, and so admits of an analogous form (actuality)/matter (potentiality) distinction.
  • A -> not-A


    I would think
    'that it is the case' is a matter of giving an argument
    but
    'why it is the case' is a matter of exposition, not argument

    I think that is likely often true, but it also seems possible in some cases to construct a syllogism that addresses the "why" (as well as syllogisms that do not seem to address it). Since I just shared some of the relevant sources in another thread, I have them on hand:

    Knowledge of the fact (quia demonstration) differs from knowledge of the reasoned fact (propter quid demonstrations). [...] You might prove as follows that the planets are near because they do not twinkle: let C be the planets, B not twinkling, A proximity. Then B is predicable of C; for the planets do not twinkle. But A is also predicable of B, since that which does not twinkle is near--we must take this truth as having been reached by induction or sense-perception. Therefore A is a necessary predicate of C; so that we have demonstrated that the planets are near. This syllogism, then, proves not the reasoned fact (propter quid) but only the fact (quia); since they are not near because they do not twinkle, but, because they are near, do not twinkle.

    The major and middle of the proof, however, may be reversed, and then the demonstration will be of the reasoned fact (propter quid). Thus: let C be the planets, B proximity, A not twinkling. Then B is an attribute of C, and A-not twinkling-of B. Consequently A is predicable of C, and the syllogism proves the reasoned fact (propter quid), since its middle term is the proximate cause.

    From Aristotle's Posterior Analytics I.13.

    Aquinas relates this to causes (although his concept of "cause" is Aristotle's four causes, so they might still be invoked in mathematics)

    I answer that it must be said that demonstration is twofold: One which is through the cause, and is called demonstration "propter quid" [lit., 'on account of which'] and this is [to argue] from what is prior simply speaking (simpliciter). The other is through the effect, and is called a demonstration "quia" [lit., 'that']; this is [to argue] from what is prior relatively only to us (quoad nos). When an effect is better known to us than its cause, from the effect we proceed to the knowledge of the cause. And from every effect the existence of its proper cause can be demonstrated, so long as its effects are better known to us (quoad nos); because since every effect depends upon its cause, if the effect exists, the cause must pre-exist.

    From St. Thomas' Summa theologiae I.2.2c.

    Now, I do think this is probably something that has to stay to one side of form. It was long considered part of "logic," but this is logic interpreted broadly as the study of "good reasoning" (even rhetoric was sometimes lumped in with logic on curricula). However, when it comes to more amorphous debates like pluralism and the "correct logic" vis-a-vis some subject matter, it seems possible that an argument could be advanced that states that a certain sort of logic is "correct" because of the nature of the subject matter, in which case content (matter) would inform form (which I guess it always does, just not in a way everyone can agree upon).
  • A -> not-A


    I don't know what you mean.

    In natural language, predication is often not totally univocal, but is also not totally equivocal. There is a vagueness problem. For example, we might say that "lentils are healthy," or "running is healthy." These are true statements. And we might also say "Tones is healthy." Yet you would not be "healthy" in the same way that lentils are. However, neither is the usage totally equivocal. We call lentils "healthy" precisely because (normally) they promote the health of human beings, i.e. the same "health" we refer to in "Tones is healthy."

    Perhaps we could dismiss this as just a case of equivocation in disguise, but I don't think so.

    I know that people have tried to formalize this sort of thing; I am not particularly well-versed in how though. My understanding is that no attempt has proven particularly popular because they do not seem to fully capture how analogous predication is used.

    It's sort of like how, as far as I am aware, there is no popular formalization of the distinction between quia and propter quid demonstrations (i.e. demonstrating "that something is the case" vs. demonstrating "why it is the case"). I don't think most people would deny that they're different (although some would); rather, it seems that the difference must rely entirely on the arguments' content, not their form (i.e. an issue of material logic).
  • A -> not-A


    I mean sure, if you want to collapse "moving quickly on feet" and "operates" into a single term. You could cover other equivocations of "run" as well and have a single term cover "flows," "seeks elected office," "is a candidate for winning," etc.

    But if you want to use your terms in any sort of a broad fashion, or if you want to make things simple, you can simply demand that the terms be disambiguated.

    Plus, these are just obvious examples, relying on equivocity. When it comes to analogous predication it will not be so simple to use such a solution.

    And at any rate, prior to recent advances in robotics, it was true that only animals ran in the proper sense of the word.
  • A -> not-A



    I suppose accusations of hypocrisy are nearby. "Your anti-racism is itself a form of racism." "Your anti-capitalism materially benefits you." "Your piety is actually vanity." Generalize those and instead of saying, hey here's a case where the claim is A but it's really ~A, you say, every A turns out to be ~A. Now it's a rule.

    I am not sure about this one. The person is not arguing that A is actually ~A. Presumably, they believe real piety exists, just that this person doesn't possess it. They are arguing that what the person claims is A is actually just B. Perhaps, "anti-racism is racist," is closer to the mark, but again, I think this is still more of the same, a claim that what is presented under the term A is actually B. Presumably the person who earnestly makes such claims normally believes that one can be actually anti-racist without being racist (normally by being "colorblind.")

    And on second thought, about the first use case, I think that often, when people argue that the other party is accidentally implying the falsehood of their own position, the issue will also be unclear terms. Not always, sometimes people do refute themselves. But such arguments might not settle the issue even if both sides are acting in good faith, not because one party rejects the form, but rather the content.

    On a related note, although not the case here, I think a lot of the "gotcha" puzzles that involve presenting good formal arguments alongside what appear to be faithful natural language translations of them, which are nonetheless either clearly wrong or at least not obviously right, involve equivocation. I think these are particularly disarming because, at least in my experience, the basics of form are taught while ignoring the possibility of vagueness, which is a problem because arguments can fail in three ways: invalid form, false premises, or unclear terms.

    I didn't really get a straightforward introduction to this risk until being led to the ol' "three acts of the mind" in historical treatments of logic curricula. Although perhaps my experience is not typical.

    Edit: Just for an obvious example:

    Everything that runs is an animal.
    My refrigerator is running.
    Therefore, my refrigerator is an animal.

    Works great formally if you're allowed to use "R" for "that which runs" in both premises.
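
    For what it's worth, the validity of the form (and the fact that the flaw lies entirely in the equivocal "R") can be checked by brute force over truth assignments, treating the instantiated syllogism propositionally (a minimal sketch of my own, not from the thread):

```python
from itertools import product

# Instantiate the syllogism for one individual (the fridge):
#   P1: R -> A   ("everything that runs is an animal")
#   P2: R        ("my refrigerator is running")
#   C:  A        ("my refrigerator is an animal")
# Validity: no row of the truth table makes both premises true
# while the conclusion is false.
def is_valid(premises, conclusion, n_vars=2):
    return all(conclusion(*row)
               for row in product([True, False], repeat=n_vars)
               if all(p(*row) for p in premises))

p1 = lambda r, a: (not r) or a   # R -> A
p2 = lambda r, a: r              # R
c  = lambda r, a: a              # A
print(is_valid([p1, p2], c))     # True: the form is fine; "R" is the problem
```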
  • A -> not-A


    Well, while I think Srap has a good point about our being able to live without A→~A in most situations, I think it is important that statements like "nothing is true," are able to entail their own negation—that logic captures how these claims refute themselves.
  • A -> not-A


    Probably unfortunately relevant to us Americans' imminent election (and the last one, and 2000... a pattern emerges). However, I think this would be an issue of unclear/disputed terms—equivocity re "winning"—not a case of A→~A.

    Interestingly, it's another example involving stipulated rules, just as with conventional self-reference. In the thread on logical nihilism I was thinking out loud about the older distinction between formal and material logic. This goes back to Aristotle, who discusses pure form in the Prior Analytics and the "matter" of discourse in the Posterior Analytics. The latter discusses the ways in which subject matter shapes discourse.

    I think it is at least possible that one might be able to ground the selection of different consequence relationships in the relevant subject matter in a way that is rigorous, preserving the intuitions of both pluralists and monists.

    John Poinsot (John of St. Thomas) and later CS Peirce (who took a lot from the Scholastics) could lay the groundwork for this with their well developed theory of signs (including attention to the unique aspects of stipulated signs and sign systems). The sign relation is irreducibly triadic. There is always the object signified joined to an interpretant by a sign vehicle. Yet in cases of self-reference in stipulated signs systems the object is the sign vehicle.

    Obviously, this isn't true in every sense. When we read "this sentence is false," there is a sense in which the paper or screen is the object, light acts as the sign vehicle, and we are the interpretant. Yet in the universe of the stipulated system, taken by itself, we have collapsed the necessarily triadic relationship into a dyadic one. The result, apparent "true contradictions."

    And this would also bear out some of the intuitions of the post-modern semiotics that grew out of Saussure, which collapses the triadic relationship, while at the same time allowing us to at least plausibly overcome their more radical and destructive (destructive to notions of truth and meaning) theses by demonstrating how these are limited to a specific area of discourse and not all sign relations. Maybe.

    Surely a hard sell, since "freeing the sign" has been bound up in notions of human freedom and flourishing in that tradition.
  • A -> not-A
    It may seem bizarre at first glance that a valid argument could have at least one premise that is necessarily false, but I think it is fairly intuitive if one thinks in terms of truth-preservation. If the premises were true, it would preserve truth. But the "truth" of a false premise cannot be preserved.

    And it's a good thing that it is valid because we often can reason from necessarily false conclusions in valid arguments to identifying false premises. This example is simple, but sometimes self-refutation is not simple.
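
    Assuming the material conditional, both points can be verified mechanically (a small sketch of my own, not from the thread):

```python
# Material conditional: "p -> q" is false only when p is true and q false.
def implies(p, q):
    return (not p) or q

# "A -> ~A" is not a contradiction: it is true when A is false.
table = {a: implies(a, not a) for a in (True, False)}
print(table)  # {True: False, False: True}

# The argument "A -> ~A, A, therefore ~A" is valid: no assignment makes
# both premises true and the conclusion false, so truth-preservation
# holds (vacuously, since the premises are never jointly true).
counterexamples = [a for a in (True, False)
                   if implies(a, not a) and a and not (not a)]
print(counterexamples)  # []
```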
  • Why Religion Exists
    Consider the cultural context in which these theological developments emerged. They responded to societal pressures, power struggles, or intellectual debates, serving adaptive functions such as social control, emotional regulation, and cognitive resolution. Emphasizing divine sovereignty and predestination reinforced social hierarchies and authority, while the promise of salvation for a select few provided emotional comfort and hope. The notion of an unfathomable God resolved cognitive dissonances surrounding evil, suffering, and uncertainty.

    But the Reformation didn't reinforce hierarchies of authority. It often led to their apocalyptic breakdown, e.g. the siege of Münster. The "adaptive evolution" kicked off and sustained wars that were cataclysmic even by the standards of 1914-1945. The Thirty Years War alone killed almost two and a half times the share of the German population as both World Wars combined. At the outset of the French Wars of Religion, France had a population about a third smaller than Syria's in 2011. Consider how apocalyptic the war in Syria has been, with 600,000 dead. The wars in France killed 3-4+ million. Some areas of Europe were essentially depopulated on a scale only matched by the Black Death. Traditional authority was challenged throughout the period, by popular peasant revolts, by challenges to the Holy Roman Emperor and Papacy, and even in a related regicide in England.

    The rationalization that this all fits a general scheme of adaptive behavior to maximize well-being seems sort of post hoc. I mean, would any development in religion not fit the theory? What would be an example of a possible falsifying development in world religion?
  • Why Religion Exists


    I am thinking in particular of a person I know who I've discussed this with at length, but one can certainly find this in some forms of Reformed theology (certainly not all, some in the Calvinist tradition even came round to a sort of universalism through Barth), and you can find it in some forms of Manualist Catholicism from what I understand (a tiny minority view today, no doubt, but at one point more influential).

    Historically, this arises out of the pivot to the univocity of being. If God is one being among many, then any freedom for creatures is a subtraction from divine sovereignty. At the same time, if analogous predication of God is no longer an option, it starts to look like only total equivocity is left. This is indeed Hume's argument, although it seems to simply be ignorant of the analogical option that prevailed for millennia (even in Pagan philosophy) and which was still the norm in the theologies embraced by the churches of most Christians in Hume's time (and today).
  • Logical Nihilism


    I'm thinking this (very consistent!) holding onto the LNC is a part of why these developments have taken so long to be achieved.

    Perhaps partly, but I think the other big factor would be that it is not actually easy to remove the LNC and not end up with triviality. You seem to have to get rid of disjunctive syllogism, reductio arguments, or disjunction introduction, and on many early attempts to understand this, all three.

    But these all make sense, e.g. disjunctive syllogism intuitively seems right, so if a contradiction lets us prove anything from it, the contradiction seems to be the problem.

    So even if people further back in history wanted to remove it, they couldn't without making everything true.
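
    The route to triviality mentioned above can be made concrete (a toy check of my own, assuming the material conditional): the classical derivation of "explosion" uses exactly disjunction introduction plus disjunctive syllogism, and truth-functionally (A & ~A) -> B holds under every assignment.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Classical "explosion" from a contradiction:
#   1. A        (premise)
#   2. ~A       (premise)
#   3. A v B    (disjunction introduction, from 1)
#   4. B        (disjunctive syllogism, from 3 and 2)
# Truth-functional check: (A & ~A) -> B is a tautology, so asserting a
# contradiction classically commits you to everything.
explosion_is_tautology = all(implies(a and not a, b)
                             for a, b in product([True, False], repeat=2))
print(explosion_is_tautology)  # True
```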

    For my part I think the metaphysics of truth ought to be set to the side for purposes of the question -- I'd say if our metaphysics of truth can't accommodate our logic then it's our metaphysics that are in error.

    First part, I'd agree, although I think it will end up being relevant if arguments for nihilism (or a pluralism bordering on nihilism) are made from the assumption of relativism and deflation re truth (which I suppose are metaphysical positions of a sort, but can be presented as "anti-metaphysical"). However, I don't think one needs any sort of in-depth metaphysical theory to say, "good reasoning has something to do with leading to truth and logic is meant to model/enhance good reasoning." Normally, the move to define "correct logics" in terms of natural language, or in the more common sense formulation of "good reasoning" seems like a way to get at this without having to make any metaphysical commitments. Normative views of logic accomplish the same thing. I just think that if we interrogate the normative views, we end up finding some notion of truth further back (maybe not, it's irrelevant to the pluralism debate anyhow).

    The second part doesn't make sense to me. On this view, if we accept using truth in a model as truth for pragmatic purposes in logic we should dismiss non-relative truth in metaphysics. But I don't see how there is any connection here. The first move is a pragmatic bracketing of a thorny question, not producing an answer to that question.


    It seems to me logic is a bit like math (while not being reducible to math) in the way that it can be developed or "discovered".

    Yup, which is why I imagine they have very similar sorts of debates.

    Anyhow, it seems possible to both affirm and deny pluralism/monism in the terms laid out without contradiction, since there is equivocation in the "subject matter of logic." I don't think it's particularly implausible to say something like:

    "If you are interested in logic primarily as an abstract formal system, there are no correct logics, but there are uninteresting ones. (A sort of nihilism). If you are interested in logic as good reasoning this answer is less obvious, but there are clearly many incorrect logics, since it is not good reasoning to affirm everything, almost everything, or almost nothing."

    I think a difficulty, even in published articles, is equivocating on just this issue relative to one's opponents. It's one thing to disagree with how they define the subject matter of logic, but obviously another to use arguments based on one such definition to attempt to refute a position based on another.

    Early in this thread I mentioned the older distinction between formal and material logic. This distinction is similar, although not identical, to claims that consequence might vary by domain. I think the furthest advances in material logic in Poinsot, CSP, etc. do offer at least a plausible explanation of why exactly consequence might vary when we move to consider signs (which of course introduce self-reference), particularly stipulated signs systems. This is relevant if the very point in question is if logic is about reasoning by beings or about stipulated sign relations.

    Whereas if one conceives signs from the post-modern perspective that grew out of Saussure, then it might seem obvious that formal relations are the only thing to consider. So, one could frame the debate in terms of the proper understanding of signs I think, and probably argue towards either position depending on how one understands signification.
  • A -> not-A


    Common sense wise, yes, but Plato has Socrates make an argument that relies on notions of contrariety and on our accepting death as the polar opposite of life (e.g. as darkness is the absence of light).

    A lot of scholars think this argument is meant to be bad and to have this hole in it (i.e. that death is not a straightforward negation of life). I tend to agree. Socrates chains several arguments and only one is really good (more or less the same argument used against forms of reductionism to this day), and then suggestively breaks into an interlude where he tells his interlocutors that they shouldn't give up on reason if they happen to discover that arguments they once embraced turn out to be bad ones. I honestly don't love the philosophy of the dialogue as far as Plato goes, but the execution is brilliant.
  • Why Religion Exists


    I suppose one way to "cope" with a lack of meaning could be to actually uncover the true meaning of life, how to "be a good person," or "live a good life," etc. :grin:

    I think most active religions would readily acknowledge that people often come to them precisely because they are suffering from lack of meaning, existential angst, moral anguish, or just plain suffering, so there is agreement on this point. But I think they might object to "cope" as the term.
  • Why Religion Exists


    Yes, I did think of that, and I agree with you that it might be plausible in some contexts. Animism is the norm both in early cultures and early childhood, e.g. "the river floods because it wants to." And there is a clear path from this to positing supernatural entities.

    But the idea that this is an "adaptive coping mechanism," then makes no sense in terms of some later religious developments, because they make the world both terrifying and unintelligible, the result of an unfathomable God who is beyond all human notions of good and evil, totally obscured by total equivocity. In these extreme voluntarist theological contexts God has also revealed that God intends to consign most of humanity to eternal torment, saving a small remnant, based on "His own good pleasure," for reasons "beyond human comprehension." And to top it all off, God has predestined everything, including our own acts and thoughts, according to his unfathomable will, which is, as mentioned, beyond all human comprehension (save miraculous illumination).

    This is not only not reassuring, it makes man entirely helpless, and it makes all of reality bottom out in the completely unintelligible and unfathomable. Through the obsession with divine sovereignty, all of existence becomes a pantheistic expression of the divine will, which is itself beyond comprehension.

    It seems to me like the opposite of a coping mechanism. "Nightmare fuel," would be a better term.

    But my understanding of ancient Near Eastern afterlives is that they are not necessarily all that much better. If you're a slave in this life you go on to be a slave for eternity. I suppose this explains the purpose of one's life at least, to be a slave (whereas in the aforementioned theology God's reasons for making man are inscrutable).
  • A -> not-A


    There is, in this case, a veneer of logic over what could scarcely be considered rational argumentation. If this appearance of rationality serves any purpose, it must be to mislead, hence abusive, eristic, sophistical, non-cooperative.

    Heh, describes how these threads normally go.
  • A -> not-A


    Well, if you've been taught that a contradiction has a truth table that is always false and you think you have identified something that is necessarily/always false, it seems possible to conflate the two.

    I mean, obviously people do confuse this quite often, I can recall several threads, so I figure it's something like that, similar mental buckets.
  • A -> not-A


    Yeah, but if you affirm that "death" is equivalent with "not-life," you'll be stuck affirming Plato's argument for the immortality of the soul in the Phaedo, which in turn implies that you may be reincarnated for innumerable lifetimes where you have to debate these same topics before finally achieving henosis and completing the process of exitus and reditus. That's a pretty rough commitment to have to make.



    But I can't say what is the source of the mental block in people who don't understand that "A -> ~A with A" is contradictory

    It doesn't seem that hard to understand to me. If people don't use formal logic often, then the most common thing to do is to translate into natural language. In natural language, we don't say that a falsehood implies anything. There is a relevance condition on consequence. Thus, to say "if my dog is alive then my dog is not-alive" is to say something that seems necessarily false—false regardless of whether the dog is alive, dead, or never existed. And "necessarily false" is in the same mental bucket as "contradiction."


    Or, if something like the OP is framed in terms of self-reference, e.g. "nothing is true," with assertoric force, it would be a self-refuting statement, which also is in the same vein. A Catch-22 would be similar, and people might even call it contradictory in common parlance.
  • A -> not-A


    Yeah, I actually thought of a more concrete one we see on this forum: "it is true that nothing is true."

    Normally this is just the claim "nothing is true," made with assertoric force. Same for "it is true (I know) that knowledge is impossible."

    This is still self-reference though I guess.
  • A -> not-A


    True, but if, pace Frege, we assume assertoric force, then to claim "A → ~A" along with "A" is to contradict oneself, and therein I think lies the confusion.
  • A -> not-A


    Thanks, that's an interesting one.



    The first premise is false though. We are only affirming a contradiction if we affirm A and ~A.



    In this one, 2 is false. It is possible to have a valid argument that has some true premises and a true conclusion without all the premises necessarily being true.

    It is indeed contrary to intuition that A→~A should be true if A is false. And affirming that, if you are incorrect, then "your being correct implies that you are incorrect," is pretty tortured, I'd agree.


    :up:

    I cannot think of a way to frame this as a real example outside of self-reference, and removing A→~A would solve that.
  • Animalism: Are We Animals?


    I suppose it depends on how you want to define persons. Harry Frankfurt had an influential paper that defined persons as entities that have second-order volitions, i.e. the effective desire to have a certain desire. See: https://philosophy.tamucc.edu/notes/frankfurts-theory

    But by this criterion, personhood is something that has to be fostered and developed. On the Platonic view, we might always be persons in some sense, but we become more fully persons and more fully ourselves when reason unifies the person (as opposed to being a seething composite of conflicting passions and appetites).

    This doesn't require that we are only persons when conscious though, because reason reaches down and "trains" (to use the imagery of the Phaedrus) the passions and appetites. So, we might consider how Aristotle thinks of virtue as a habit. It is something we can train ourselves to or that we can have trained into us. And this indeed is consistent with contemporary science on habit.

    So, even though many of our choices are reflexive or less than fully conscious, it can still be the case that our very reflexes are shaped by our more intentional choices. The same is true of "the environment," since we can shape the sort of environment we find ourselves in.

    I think this is also very relevant for the free will debate, because it shows how, even if conscious, self-aware action built on proper understanding only represents a minority of our acts, it can nonetheless shape the course of our entire lives. A good example might be joining the Marines or the priesthood. Now, this might be done impulsively, but if it is done very intentionally, with full understanding, it is also a way to "lock in" one's environment in a way that is conducive to what one truly thinks is best.

    That said, I think animalism gets at something important, which is that the human is a specific sort of biological organism, and facts relative to this shape "the human good." So, while it seems to be the case that the "human good" will always be filtered through culture and social practice (we are a social animal), it is also true that facts outside or prior to culture and social practice shape this good. An understanding of the human as animal can be very important here.

    And I like animalism because it isn't reductive. I have a lot of overlap with folks like Sam Harris, who think that science can tell us things about the "human good." However, I think Harris' analysis goes off the rails by being overly reductive, reducing persons to brains and brains to neurons. In many cases though, what we care about in promoting the human good is not neuroscience, but more general principles that allow us to shape the world and understand it and ourselves.

    Man as animal gets at this, although I can also see how it can be too limiting. If man is "just an animal," we might start to think that the human good is just sensations of well-being, not the development of the person, the development of freedom and self-determination, etc. Yet the development of self-determination, I'd argue, is key to the human good and to ensuring well-being (e.g. the citizens of Brave New World have an unstable well-being because it will collapse if their economy is disrupted and they cannot get their drugs and entertainment, whereas figures like Boethius, Socrates, St. Ignatius, etc. can be sublime while sitting in prison cells awaiting death).
  • Why Religion Exists
    At its core, the Evolutionary Coping Mechanism Theory posits that as cognitive abilities increase, so does awareness of mortality and uncertainty. This heightened awareness triggers existential anxiety, prompting species to develop coping mechanisms. Religion and science emerge as two primary responses, evolving through cognitive, social, environmental, and cultural interactions.

    Cognitive development plays a crucial role in this process. As species' cognitive abilities advance, they become increasingly aware of their own mortality and the uncertainty surrounding their existence. This awareness sparks existential anxiety, driving the development of coping mechanisms. Social learning theory supports this notion, suggesting that individuals learn behaviors and beliefs through observation and interaction with others (Bandura, 1977).

    Religion, in this context, serves as an initial coping mechanism. It provides meaning, purpose, and reassurance in the face of uncertainty. Terror Management Theory (Greenberg et al., 1997) substantiates this claim, demonstrating that religion mitigates existential anxiety by offering a sense of control and significance. Religious beliefs and practices provide a framework for understanding the world, alleviating fears and anxieties associated with mortality.

    Well, I see one problem here:

    How does this explain, say, Calvinism where man has to be constantly worried about whether or not he is elect or destined to eternal damnation? Generally, in this religion, one has absolutely no ability to determine whether one will be saved or not, and one also knows that the overwhelming odds are that one is destined for eternal torment. There are also, traditionally, no ways to know for sure if one is truly elect.

    Or how does it explain the many early religions in which the Gods are largely capricious and cruel? I am not sure how believing in an extremely powerful sky rapist who likes transforming into animals before committing his infamies is "reassuring."

    I find this particularly unconvincing as respects "afterlife" beliefs because many ancient visions (and the dominant modern vision) of the afterlife seem significantly more unpleasant than just ceasing to exist.
  • I do not pray. Therefore God exists.
    Following the events of The Brothers Karamazov, Ivan Karamazov has a conversion experience and becomes a priest (he got better from the syphilis and insanity :grin: ). Years later, an atheist intellectual of much the sort that Ivan used to be moves to Ivan's village from St. Petersburg. One day, Ivan gets to talking apologetics with the man. The man says that he believes in science and logic, and that neither can show that God exists.

    Ivan says, "well, if God does not exist everything is permitted, so I won't control myself and I'll sleep with your wife."*

    "You can't do that!" the atheist replies.

    He was inducted into the catechumenate the very next day and was baptized into the church the next Easter.

    * We should note the implied premise that if God exists, everything is not permitted.
  • A -> not-A


    Obviously the universe is the result of God accidentally contradicting Himself by making a mistake, which He cannot do, being infallible. Explains everything!

    31aZU8CRsML._UC256,256_CACC,256,256_.jpg

    "Yes."
  • A -> not-A


    Aristotle and Euclid use contradiction in reductio demonstrations all the time. If we have a valid argument with a conclusion we know to be false, then we have warrant to reject a premise or assume that at least one is false.

    However, I do agree that the common analysis that, if ~A is true, then A→~A doesn't sit well with common-sense intuitions about consequence. The truth table is also liable to look confusing because it varies with how the premises are laid out, but I think it isn't confusing once properly understood.
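    The oddity can be checked mechanically. A minimal Python sketch (my own illustration, not from the thread; `implies` is a hypothetical helper standing in for the material conditional) enumerates the classical truth table and shows that A→~A comes out true exactly when A is false, i.e. it has the same truth value as ~A on every row:

    ```python
    # Material implication: A -> B is false only when A is true and B is false.
    def implies(a, b):
        return (not a) or b

    # A -> not-A agrees with not-A on every row of the truth table.
    for a in (True, False):
        assert implies(a, not a) == (not a)
        print(a, implies(a, not a))
    ```

    This is just the classical equivalence of A→~A with ~A; the counter-intuitive feel comes from reading "→" as a natural-language "implies" rather than as the material conditional.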
  • Animalism: Are We Animals?
    This seems like a topic where it will be very easy to fall into discussing something trivial through equivocation, however, I couldn't help commenting on:

    According to the animalism account of our most fundamental nature, we are not

    immaterial souls or egos (Descartes; Foster 1991);
    material bodies (Thomson 1997; Williams 1957);
    body-soul complexes (Swinburne 1984);
    bundles of mental states (Hume; Rovane 1998; S. Campbell 2006);
    material simples (Chisholm 1978 [1989]; Lowe 1996, 2001);
    parts of brains (Puccetti 1973; McMahan 2002);
    persons materially constituted by, but nonidentical with, animals (S. Shoemaker 1999; Baker 2000; Johnston 2007); or
    nothing at all (Unger 1979a,b; cf. Unger 1990).

    lol, I'm going to have to read that one.

    But to add something substantive, I'll clarify with an example. I think Robert Wallace's reading of Plato and Hegel (which I like) would claim that what a human "essentially is" is not defined by being a certain sort of animal precisely because we have access to the transcendent through reason/"the rational part of the soul." We can ask of things "is this truly good?" or "is this truly true?", and so transcend the given of what we already are in becoming more fully self-determining (not that Wallace addresses this, so maybe he wouldn't agree). Yet this is not to say that a human isn't an animal from the perspective of biology. I don't think Kierkegaard addresses this, but it seems like you could draw something similar out of his work.

    On St. Augustine, I would just point out that he and thinkers of his time normally simply refer to animals as "brutes," which denotes living, mobile beings without reason. I don't see Augustine taking issue with man as animal from a biological point of view. After all, the same "breath/soul" (Hebrew ruach) that describes life in animals in the Bible and which is given to the animals in Genesis is also used to describe what God puts in man and man's life. Both are created from the dust.

    What Augustine is often concerned with is precisely the ways in which we can degenerate into mere animals. And this is not uncommon for his period. I would tend to agree with Philip Cary that a defining feature of the ancient/medieval and modern splits is:

    - Modern man worries about becoming a machine.
    - Ancient/medieval man worries about degenerating into a brute.

    Given the relative lack of security, access to education, food, actually useful medical care, etc. in the ancient world, I think this fixation makes a lot of sense, and it helps explain why freedom for the ancients is so often reflexive freedom over the self or to unify the self (i.e. for reason to rule the animal parts of the soul). And, in defense of the ancients, I do think we might be giving ourselves too much credit when we worry more about the former than the latter.

    I also think Aristotle splits the difference here very effectively by recognizing these types of concerns while tying the human good to our essence as a particular sort of animal (in the biological sense).
  • Welcome to The Philosophy Forum - an introduction thread


    That's how I see it, also. I don't think it makes sense to say we understand single facts. I can know many facts, but not understand how they are related. This spherical thing is a baseball. This long, thin, tapering thing is a bat. That mound of dirt is called the pitcher's mound. That's three facts that have no obvious connection. Many more facts can be added without any obvious connections.

    This seems to fit with the intuition that things are also not fully intelligible in themselves (partially intelligible, yes, or else we would need to understand everything to understand anything). As far as I know, this idea really starts to come into focus in the (Neo)Platonic, Aristotelian, and Stoic synthesis of the Patristic philosophers, with the idea of all finite things or concepts only being intelligible as part of a whole (for them, the Christian Logos/Christ), e.g. St. Maximus, but also some figures before him. Even "two" is not intelligible without a concept of number, the other numbers, magnitude/multitude, etc. (and for the Aristotle-influenced Patristics it also only exists where contingently instantiated or in the subsistent unity of the Logos).

    Hegel would be someone who pushes this idea particularly far, which is interesting because he lived in a period when atomism was somewhat ascendant (although maybe more so in Britain).

    What I find interesting is how this idea seems pretty strong for the medievals but then sort of gets lost. I suppose a similar idea is that things are (in part) defined by their relations and so we can't have a metaphysics where things are just their building block parts.

    But to roll this forward to the present day, I think this would be largely consistent with information theoretic conceptions (or at least some of them).



    I took a class on the philosophy of AI not that long ago and it revolved almost entirely around the processes you could use to structure atomic propositions relative to some agent, with desires just represented as a certain sort of atomic belief that needs to be made true (and with action determined by other atomic beliefs about how to make the desire proposition true).

    It was interesting, but I couldn't help thinking that this seemed to be structuring the model of intelligence around what is easy to model and not how thought actually works.
  • A -> not-A


    ChatGPT is just slamming text together that tends to flow together. Something with P and ~P is going to cause it to talk about contradictions because that's where the text usually appears.

    It is bad at logic and uncommon programming languages like Prolog. It is actually surprisingly good at Java and Python, but still sometimes comically bad. I am waiting for the first financial meltdown of a sizeable business based on people using ChatGPT to code spreadsheets or dashboards.

    Although I hear they are working on some that will do math well. I'll believe it when I see it.
  • US Election 2024 (All general discussion)


    I wish I had your confidence. I've been stuck waiting with time to kill all day and been feeling an increasing sense of doom looking at the analysis. Nate Silver's op-ed in particular.

    Bizarrely, polling suggests Democrats will do better in the places where they need to do well to win if turnout is low.
  • Logical Nihilism


    There is a lot of interesting stuff in there, and I really wish they had it on a platform that made it easier to slice and dice the data, because you could also look at the correlations by specialty area and I think that's almost as interesting.

    The one trend I find amusing is that as one goes further back in time for historical specialties philosophers get less and less confident that "philosophy makes progress."

    It would be interesting to do a similar survey of amateur philosophy spaces and compare. Idealism, epistemic nihilism, and anti-realism for instance are much more popular here it seems.

    I would expect some big variance on key philosophers:

    bga2uj15zrl1blg0.png

    Aristotle and Kant make sense for people who go into academia. I imagine Nietzsche would be vastly more popular writ large. Judging from what bookstores carry, he is far and away the most popular.
  • Logical Nihilism


    Well I should note that the quotations I shared are from some other places as well, I was collating them because I discovered that (almost) the same question had been answered several times before.

    But as far as I can tell, they are all generally saying the same thing (feel free to search for "logical pluralism" though), which jibes with my understanding of the question.

    So it remains that logical monism is an act of faith rather than a conclusion.

    But the view that there are multiple correct logics, or none, wouldn't require an act of faith?
  • A -> not-A


    Sure. I am just referring to the truth table

    aifu1aomfbbupyft.png
  • A -> not-A


    Well, in the intuitive natural language context I think people would simply want to reject the entailment. E.g., "But that my dog is alive doesn't entail that he is dead."

    It could be more interesting in an instance of self-reference. E.g. A is the proposition "this proposition is false." This would, as far as I can see, be a case where intuition would actually tell us that if A is true it entails that A is false.

    I cannot think of any concrete examples where we wouldn't simply dismiss it as gibberish though.
  • A -> not-A


    I can't parse that.

    Understandable, there is a typo there. I mean the conclusion column is true regardless of the truth value of A.

    In this case we don't need to appeal to the fact that the premises are inconsistent. If the logic includes modus ponens, then the example is valid, even if the logic does not include explosion.

    Indeed.
  • A -> not-A
    Let's put something in for A, say "Sue is sitting."

    1. That Sue is sitting implies that Sue is not sitting.
    2. Sue is sitting.
    Therefore, Sue is not sitting.

    A truth table will tell you this (the whole statement) is true if Sue is sitting or if she isn't sitting.
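    Validity here can also be checked by brute force: classically, an argument is valid iff no assignment of truth values makes all the premises true while the conclusion is false. A minimal Python sketch (my own illustration, with a hypothetical `implies` helper for the material conditional) checks the Sue argument this way:

    ```python
    # Material conditional: A -> B is false only when A is true and B is false.
    def implies(a, b):
        return (not a) or b

    # Argument: A -> not-A, A, therefore not-A.
    # Valid iff every valuation making both premises true also makes the conclusion true.
    valid = all(
        (not (implies(a, not a) and a)) or (not a)
        for a in (True, False)
    )
    print(valid)  # True: the premises are never jointly true, so validity holds vacuously
    ```

    This makes Priest's complaint concrete: the argument passes the orthodox test of validity only because no row of the table ever satisfies both premises at once.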

    I just quoted Priest so I have him on hand:

    The notion of validity that comes out of the orthodox account is a strangely perverse one according to which any rule whose conclusion is a logical truth is valid and, conversely, any rule whose premises contain a contradiction is valid. By a process that does not fall far short of indoctrination most logicians have now had their sensibilities dulled to these glaring anomalies. However, this is possible only because logicians have also forgotten that logic is a normative subject: it is supposed to provide an account of correct reasoning. When seen in this light the full force of these absurdities can be appreciated. Anyone who actually reasoned from an arbitrary premise to, e.g., the infinity of prime numbers, would not last long in an undergraduate mathematics course.

    Lewis wrote a lot about this too.

    I mean, you can always just laugh at these and ignore them too; there is always a judgement-call element in logic anyhow.
  • Logical Nihilism


    I don't think it's that hard to define at all. Some posters in this thread seemed to pick it up intuitively. Aside from B&R's book, they have shorter articles, and this question has been answered succinctly in many places. Their argument is roughly that the intuitive/informal notion of logical consequence is multiply-realizable (granted it is more technical in its details).

    I find this version of pluralism quite plausible. The most obvious example of ambiguity in natural language is propositions about the future, which, given some (fairly popular) assumptions, are indeterminate, rather than true or false. And this is one that has a long pedigree, being discussed since antiquity (arguably being endorsed by Aristotle from the very outset on the readings of many commentators).

    One could also argue that the intuitive/informal notion of logical consequence is irrelevant. I think that's a tough argument to make, but it's a possible one. What is bizarre to me is claiming that this must be the case and that anyone who disagrees has utterly failed to understand the topic and logic as a whole.

    The best way of summarising the difference between monism, pluralism, and nihilism is as follows:

    Monism: there is only one true logic.

    Pluralism: there are at least two true logics.

    Nihilism: there are no true logics.

    Whether one is a monist, pluralist, or nihilist will depend a lot on what one takes a logic to be about and whether logics have to satisfy certain properties, like being universal, normative (capturing "rules of good thought"), and so on. Certain kinds of nihilism have a lot in common with certain kinds of pluralism (Aaron Cotnoir's nihilism is very close to a view of pluralism called logic-as-modelling, for example).

    This is a typical response, but based on some takes in this thread, typical answers to this question are all way out in left field.


    Active academic philosophers and logicians have been surveyed on this BTW:

    https://survey2020.philpeople.org/survey/results/4858?aos=37

    As Chalmers notes in the paper on this, "pluralism" was the most popular write-in option, so this overstates the commitment to classical logic, but not entirely.

Count Timothy von Icarus
