Comments

  • Thomism: Why is the Mind Immaterial?
    Consider that "the mind is potentially all things." In this, it is somewhat akin to prime matter (indeed, some commentators speak of "intellectual matter"). The formal object of sight is color; for smell, odor. These are quite limited. For the intellect, it is the whole of being itself, actuality and intelligibility. In order for the intellect to become all forms in thought, it cannot itself be any prior form. The intellect is able to receive form not by being "made of stuff/building blocks" but rather by being formless.

    Also, consider that "our thoughts can be elsewhere." Aristotle does not fall into the trap of the much maligned but often reproduced Cartesian theater, whereby all we experience and think about is "in our head." But the mind's ability to "be anywhere" is also an indication of its immateriality.

    Obviously, we can know an apple. Yet when we know an apple, our heads don't become apples, nor are there physical apples in our head. The mode of existence in the intellect is different, immaterial.

    This leads to a helpful way to understand the real distinction between essence and existence. Suppose we think of a purple horse. Well, for so long as we think of it, it has mental existence, but it doesn't have real existence; two different modes. The mental being is not composed of form received by material substrate. But this is also where Thomas departs most radically from Aristotle, who identified form/essence with actuality, whereas for Thomas being is ultimately existence, which is an act/actuality, and cannot be reified into a concept because it is beyond concepts, beyond essence. This makes all beings "participatory."

    1. What is Aristotle's view of the mind here? Is it a nothingness, a negativity, like Hegel? Is it pure form that is immaterial?

    This is far from clear, because those passages in De Anima have produced tons of speculation. The Islamic commentators take them in very different directions. It definitely isn't pure actuality though, because we have the passive ("material" for Averroes and some others) intellect. The solution to the Meno Paradox lies in our knowing things potentially prior to that potential being actualized.

    The difficulty is that there is often an equivocation between matter as simply potency and matter as what receives form in composite physical (changing) beings. This tension runs right into Thomas, and he resolves it with the act of existence and the essence-existence distinction (from Avicenna). Material beings are in some sense constrained by their physicality. But, we must not make the mistake of the moderns here in thinking that this must make physical "substance" a sort of subsistent building block. Ultimately, it is anything at all only as respects its form, and material being itself is a sort of limiting determination (we might say act). Hence, one way to put it is that it is all inside the "mind" of God, God as ground, since it is God "in whom we live and move and have our being" (Acts 17:28).

    2. How does Aquinas argue for the soul being immaterial? Is it just that thinking cannot have a sense-organ?

    In many different ways. The most obvious is its grasp of universals outside any materiality.

    So consider the famous Nietzsche quote:

    [image: the Nietzsche quote]


    Actually, guys like Saint Augustine were very aware that nature had no perfect circles or triangles. This is precisely why they thought the intellect must be immaterial (or one reason at least, they had many).

    So, if you ask GPT or search around on Google, this will be the most common answer: the idea that the intellect, in order to be able to receive all form, cannot itself be materially limited. However, I think Thomas' case is more compelling as you layer on more and more of his philosophy, which all touches on it.

    Going back to Parmenides, there is the idea that "the same is for thinking as for being." A thing's eidos, form, is its whole intelligible quiddity, what makes it appear as any thing at all and not nothing in particular. To be unintelligible would be to be nothing, no thing. So, being itself is like a two-headed coin, being and being experienced/known. Plotinus makes just this point. These are unified in the One.

    So, from Perl's Thinking Being:

    The key insight of phenomenology is that the modern interpretation of knowledge as a relation between consciousness as a self-contained ‘subject’ and reality as an ‘object’ extrinsic to it is incoherent. On the one hand, consciousness is always and essentially the awareness of something, and is thus always already together with being. On the other hand, if ‘being’ is to mean anything at all, it can only mean that which is phenomenal, that which is so to speak ‘there’ for awareness, and thus always already belongs to consciousness. Consciousness is the grasping of being; being is what is grasped by consciousness. The phenomenological term for the first of these observations is ‘intentionality;’ for the second, ‘givenness.’ “The mind is a moment to the world and the things in it; the mind is essentially correlated with its objects. The mind is essentially intentional. There is no ‘problem of knowledge’ or ‘problem of the external world,’ there is no problem about how we get to ‘extramental’ reality, because the mind should never be separated from reality from the beginning. Mind and being are moments to each other; they are not pieces that can be segmented out of the whole to which they belong.”* Intended as an exposition of Husserlian phenomenology, these words hold true for the entire classical tradition from Parmenides to Aquinas...

    In arguing that being qua intelligible is not apart from but is the content of intellectual apprehension, Plotinus is upholding what may be called an 'identity theory of truth,' an understanding of truth not as a mere extrinsic correspondence but as the sameness of thought and reality. The weakness of any correspondence theory of truth is that on such a theory thought can never reach outside itself to that with which it supposedly corresponds. Thought can be ‘adequate’ (literally, ‘equal-to’) to reality only if it is one with, the same as, reality. In Aristotle’s formulation, which as we have seen Plotinus cites in support of his position, knowledge is the same as the known.

    If thought and reality are not together in this way, then, as Plotinus argues, there is no truth, for truth just is the togetherness of being with thought. Plotinus’ arguments against the separation of intellect and being thus resonate profoundly with the nihilistic predicament of modernity. If thought and reality are conceived in modern terms, as ‘subject’ and ‘object,’ extrinsic to and over against one another, and truth is conceived as a mere correspondence between them, then thought cannot get to reality at all, then there can be no knowledge, and in the end, since nothing is given to thought, no truth and no reality. We must rather understand thought in classical Platonic, Aristotelian, and Plotinian terms, as an openness to, an embracing of, a being-with reality, and of reality as not apart from but as, in Plotinus’ phenomenological terms, “given” (V.5.2.9) to thought. This, again, is the very meaning of the identification of being as εἶδος or ἰδέα. Being means nothing if it is not given to thought; thought means nothing if it is not the apprehension of being. Hence at the pure and paradigmatic level of both, intellect as perfect apprehension and the forms as perfect being, they coincide. “We have here, then, one nature: intellect, all beings, truth” (V.5.3.1–2).

    The idea of truth as a transcendental also comes into play here. Truth cannot be convertible with being if there are unintelligible, eidos-free noumena that are, but exist in a sort of intellect-free space (which also implies they could never make any difference to anyone, ever). The only being free of limiting essence and eidos is not "a being" but rather infinite being itself, God. In a sense then, everything is intellect.

    However, there is still a real distinction between mental being in our intellects and what has a true/full act of existence through participation with God (even though our intellects are also a participation in divine being). The intellect is immaterial, in part, because it is not limited by material existence; it has access to these mental beings, "ens rationis," as well as "conceptual but not real distinctions" (e.g. a cup being half full versus half empty—you won't find this distinction in material existence).
  • The decline of creativity in philosophy
    IIRC, the term Renaissance is a 19th century invention. I wouldn't deny it as a particular historical moment, but there was definitely a political and theological interest in making the dawn of humanism the end of a "dark age," when in reality the High Middle Ages already represented a period of rapid advances in many areas and the early-modern period perhaps a regression in some. You can see this tension when people want to reach all the way back to make Dante and Giotto "Renaissance" figures. But Dante in particular is distinctively in line with the High Middle Ages and High Scholasticism.

    Likewise, in at least military technology, the West continued to develop, and in some ways outpaced the East by a meaningful degree.

    Japan is another interesting example because there, an intentional stagnation in technological and economic development did not stop cultural and artistic development. And indeed, plenty of scholars argue that the advances in culture were precisely why it was able to resist colonization and rapidly modernize, going from backwards even for the region to one of the Great Powers, within the span of a lifetime.
  • The decline of creativity in philosophy


    I don't disagree with the general judgement, there was a very real decline (Europe's population plunged by more than a fifth or even a fourth and stayed down), but the pictures are sort of cherry-picked. The art of the north was in some ways more primitive, but then Byzantium retained influence in the West well into the "Dark Ages" and had no such collapse. There are medieval wooden painted statues and stonework that are more lifelike, in some ways more similar to Greek statues (see below). The shift in painting is partially stylistic (e.g. typological, which is why faces and bodies exactly mirror themselves on both sides). You see this even in the Eastern Roman Empire where there was no collapse.

    This is not to say there wasn't a very real loss of knowledge. Civic engineering projects like the Roman roads and aqueducts arguably wouldn't be matched for 1,300 years, or at least 1,150. At the same time, the Byzantines erected churches that arguably best the great temples of antiquity during the "Dark Ages," and even when the Latin West was still culturally and economically backwards, its ability to dedicate a large share of GDP to cathedrals for generation after generation of construction (many spanning centuries) led to Gothic masterpieces that bested anything from antiquity or the Christian East.

    It should be noted too that progress and regression are not unidirectional. Europe today has great difficulty maintaining its great cathedrals (or say, rebuilding Notre Dame) because the skills required are almost extinct. There have been similar issues even in relatively short timespans, like highly classified military technology becoming "lostech" that no one knows how to maintain or recreate (e.g. the US nuclear modernization program's struggles, or efforts to return to the moon). This is actually a fairly common problem in the industrial sector, and it's also been a huge factor in Russia's inability to replace war losses.

    So then, we might also consider MacIntyre's thesis about a similar collapse in an understanding of ethics, or an arguably similar move with metaphysics. Piotr Jaroszyński (along with many others) makes this point re a sort of degradation in metaphysics through Scotus, Ockham, and Suarez's failure to understand the "act of existence" and high scholasticism. The result is the rise of idealism during the Enlightenment, best represented in Berkeley, who is in some ways putting forth a much more simplistic, fun house mirror, badly degenerated Aristotelianism/Thomism. This carries on from there; for instance, Heidegger's main model for criticism was Suarez, something Gadamer and the Thomists have challenged him on as leading to a major misrepresentation in the historical dimension. You can see echoes like this in other places, e.g. Deleuze's consideration of substance vis-à-vis Spinoza, when the conception of "substance" had arguably already come to collapse and conflate multiple distinct notions in problematic ways.

    Hence, I don't find this sort of thesis implausible in some sense. I'm no expert, but I've read a lot of Reformation Era German and English texts and they are in some ways a great step back from The Cloud of Unknowing or Meister Eckhart. The printing press led to an explosion in creativity but also perhaps a sort of democratization and reduction in signal to noise ratio that was corrosive, especially when paired with the explosive politics of the era, which tended towards polemic and radicalism.

    [images: medieval cathedral statues and stonework]


    I am not sure of the precise dates here, since the cathedrals span 1000-1400 in construction, but they appear to mostly be from the 1100s-1200s. In literature too, you have Dante, Chaucer, etc., although obviously the Renaissance brought a lot of great literature too. It is true that "dark age" literature tends to be more "primary epic," the original writing down of past oral epics, more akin to the Iliad than the Aeneid.
  • Must Do Better
    I took a class through Oxford a while back on philosophy of religion. It was entirely focused on analytic philosophy, mostly stuff since 1950. I recall thinking that one drawback here was that the "God" being considered seemed alien to all religious traditions.

    At any rate, we read a number of articles on "The Problem of Evil." Did analytic techniques help here? I am not sure. I put The Brothers Karamazov far above any of the influential articles we read. Is it philosophy? Arguably not. But it's lent itself to a great many philosophical treatments. I mentioned Williams earlier. David Bentley Hart's book on theodicy also focuses on BK, as does a quite large body of scholarship that crosses from the literary into the philosophical and theological. But the methodology here is obviously going to be quite different from the analytic school. Likewise, MacIntyre probably cites more poems, dramas, and novels in After Virtue than philosophers, or at least it's about equal, but it's better for that.

    Or, for a more direct example, we might consider how someone like Plantinga goes about showing how "God cannot create a rock so heavy he cannot lift it" is merely logically equivalent to "God can lift any rock." Does this bit of work resolve the issue?

    Not really, it simply misinterprets the problem by trying to squeeze it into formalism. The problem is much deeper. It is twofold. First, there is the ultimately contradictory nature of freedom as a total absence of constraint and sheer, undetermined movement of the will, constrained by no determinacy, which itself collapses into the contradiction that any choice at all, in being determinate, is a limitation of freedom (a point Hegel makes towards the opening of PR).

    Second, we have a sort of self-reference in play in that the ability to "do anything" presumably would include the ability to make it so that one "cannot do anything." The unlimited contains within itself the potential for limitation. We perhaps have something of Hegel's "bad infinite" in play here. Either way, it's a thorny issue the formal solution simply obscures.
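    For what it's worth, the formal equivalence Plantinga trades on is trivially checkable. Reduced to propositional form (my simplification, not his exact formulation), "there is no rock God creates that He cannot lift" and "any rock God creates, He can lift" come out as not-(A and not-B) versus (A implies B):

```python
from itertools import product

# Propositional sketch (a simplification) of the equivalence:
#   "There is no rock God creates that He cannot lift"  ->  not (A and not B)
#   "Any rock God creates, He can lift"                 ->  A implies B
# where A = "x is a rock God creates" and B = "God can lift x".
for a, b in product([True, False], repeat=2):
    lhs = not (a and not b)
    rhs = (not a) or b  # material conditional A -> B
    assert lhs == rhs
print("equivalent on every valuation")
```

    The check goes through on every truth assignment, which is exactly the point above: the equivalence itself is formally unimpeachable, but it leaves the deeper twofold problem untouched.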

    And I would say the exclusion of this sort of option is precisely why academic philosophy is increasingly irrelevant. If one looks at the best sellers on Amazon or Audible in "philosophy" it is overwhelmingly New Age, informed by Eastern thought, or religious. If one looks at philosophy sections in bookstores, which normally bracket out the religious and New Age to either side, you see Nietzsche and other, older existentialists dominating, as well as the old perennials (Plato, Aristotle, etc.) and a smattering of Enlightenment tomes. Some Continental stuff shows up, very little analytic.

    Arguably, Continental philosophy still has a fairly large effect on culture through the arts and the humanities, although the effects on some fields like Classics hardly seem to its credit (e.g., publications quadruple as enrollment plunges by 80+% and big names in the field start openly calling for its own destruction). Analytic philosophy? Not so much. I suppose there is "effective altruism" and New Atheism, but these aren't as obviously "analytic." Obviously, liberalism remains immensely influential, even totalitarian in some respects, but this isn't really "analytic" philosophy, but almost a sort of parallel, less Enlightenment-skeptic thread of Anglo-American thought that largely exists in the sciences and politics. This birthed the very influential, now hegemonic "neo-liberalism"; again, probably not to its credit.

    A common theme in the move towards bracketing and siloing down philosophy, and these aforementioned streams, seems to be a sort of skepticism of man's capacities (a key thread of modernity from Luther and Calvin) and of logos itself, and also an almost tyrannical irony, particularly wherever optimism is concerned (e.g. "liberalism is the worst of all ideologies, except for all the others"). I think this helps explain New Age and Eastern texts supplanting these in book sales to some degree.

    The difference is perhaps most obvious in ethics, where the entire focus seems to shift over to "what is right," theoria, and not "how do we make ourselves do what is right," i.e. praxis. What the New Age, Eastern, and religious texts have going for them is that they don't neglect praxis, or the need for "rhetoric" in motivating praxis.

    One way this plays out is in the absolutely catastrophic job market for philosophy PhDs. This is, in a way, less extreme for some areas of philosophy, which can take advantage of adjacent fields and work in religious institutions. In particular, the boom in classical schools offers a lot of positions with pay better than many university positions, but that's also a praxis-heavy area of focus. By contrast, the main outlet for analytic PhDs tended to be computer coding, a field which seems to be facing its own devastating wave of automation. And, to my mind, teaching younger students at a classical school is vastly closer to the original aim than being forced into coding shopping and finance apps. But this is just part of a much larger issue in academia, the German model of the "research university" making "progress" the chief focus, and not cultivation and education. I will allow that the case for bracketing down philosophy to focus on tractable "progress" is at least significantly stronger than it is for some disciplines such as classics, and yet the same drive has been present in those fields as well.



    Yes, this idea shows up a lot. There is, for instance, even in the atheist Leopardi a recognition of our orientation towards an infinite good, a desire that finds its rest in no thing. Robert M. Wallace is pretty good on this, on Plato's conception and psychology of the Good and on Hegel's true infinite. David Bentley Hart has an interesting article on this in "Ye Are Gods," focusing on Nicholas of Cusa:

    In Christ’s human—which is to say, rational—nature, we see the rational human spirit in its most intimate and most natural unity with divine Spirit, which is absolute reason, and the most intimate and natural unity of human intellect with divine intellect. And so on. One should not let the sheer grandiloquence of these apostrophes to the God-man distract one from their deepest import, or from the rigorous logic informing them. Because what Nicholas is also saying here, simply enough, is that in Christ the fullness of human nature is revealed precisely to the degree that it perfectly reveals the divine nature of which it is the image, and that human spirit achieves the highest expression of its nature only to the degree that it is perfectly united with divine Spirit. That is, in Christ we see that the only possible end for any rational nature is divine because such also is its ground; apart from God drawing us from the first into ever more perfect union with himself, we do not exist at all. We are nothing but created gods coming to be, becoming God in God, able to become divine only because, in some sense, we are divine from the very first.


    The idea that the goal of philosophical education was to become "like unto God" was common to both pagan and Christian education in late antiquity. I don't know if these can be excluded from philosophy without making some pretty dramatic assumptions about philosophical anthropology.
  • Must Do Better


    Yes, the religious phenomenologists (and we could include Henry, Scheler, and perhaps even Zahavi and Levinas in this group) believe that to exceed the solipsistic self-givenness of the subject requires metaphysics. But why?

    Well, first, it resolves the problem of seemingly presupposing givenness as a spontaneous, self-contained movement of potency to act, which would seem to make the world unintelligible. If something can just be given, "for no reason at all," or "no reason in particular," then there is no way to explain why the world is one way and not any other, no way to explain man's progress towards self-determining freedom, or the Good as such. The charge of solipsism against Kant always made some sense to me—not that he suggests it—but that it seems like he might actually be implying it against his will. But, and it's been a while, when I was reading Husserl's later stuff it sort of struck me as in some ways coming close to "Kant with extra steps."

    For one thing, the starting position of even allowing that Descartes and Kant have strong cases we must 'overcome' strikes me as giving up too much (which goes along with assuming too much).

    How does the transcendence of the subject toward a substantive in-itself (the Goodness, Height, and Righteousness of the divine other) not represent a backsliding away from Husserl’s content-free ground towards an arbitrary substantive beginning?

    Well, consider my original question: in what way is this even a "ground?" Does it secure the authority of reason? Does it explain it in virtue of its causes or principles? Is the cause of givenness givenness-itself, man self-moving and spontaneously self-creating? The purely descriptive is not really a "ground" in the traditional sense. It is not a first principle either. And there is the issue I mentioned before where other "Great Names" attempt the same exercise and come to a radically different conclusion from Husserl, which seems to me to cast doubt on claims to have stepped behind all mediation. This same issue haunts the Greater Logic. Even advocates like Houlgate readily admit people following the same method are unlikely to come to the same "deductions."

    How does it not end up reifying both subjectivity and alterity?

    Many ways, consider for instance personalism. Persons are ontologically basic, but people are also more fully persons when they are unified in the Good, etc.

    Like I said in the other thread, the idea that immediate sensation is maximally unabstract is a presupposition that enters the door with Enlightenment materialism. I don't think it's an obvious conclusion; indeed, Hegel's point is that this is the least stable sort of phenomenon, devoid of content, and so the least itself and its own ground, the most abstract. The inability to transcend these sorts of presuppositions is partly why I think there is no truly post-modern philosophy, just the same trend of nominalism and individualism cranked continually upwards.

    Consider the etymology of "reify" in "res," and it becomes clear that the idea of moving away from immediate sensation as "reifying" is itself a loaded metaphysical supposition, just one that is often being ignored and taken for granted by "bracketing" (arguably, simply dogmatically assumed if this is then used to supplant metaphysical inquiry). It's true that some thinkers do the opposite, and elevate the universal inappropriately. But I think the more subtle thinkers on this topic are often at pains to elevate neither of the "two streams"—particular or universal—over the other. Rather, they are like Ezekiel's two wheels, passing through one another, each reflecting the other and revealing it.

    Of course, if one just assumes nominalism as a starting point by bracketing out realism a priori, one has already elevated the individual, but that's not the same thing as justifying that move, so I think that's one of the difficulties to be addressed. If we presuppose that phenomenology can be understood without reference to what lies outside the bracket we have already cleaved the part from the whole and declared the whole subsistent; or declared the part the whole (solipsism).

    Subsistent-Being-Itself cannot be an "abstraction." It is rather most subsistent, most determined by itself, etc. Being truly infinite, it is not contained in any "abstraction"; hence the via negativa and analogia entis. Whereas the givenness of human phenomenology is always referred outside itself. Being radically contingent, it cannot be its own ground (unless it is self-moving potency), or so the concern goes.

    Plus, we might consider descriptions of a full move around intelligibility and mediation. Jill Bolte Taylor's description of her stroke is an interesting one. Yet it is not obvious that we best know being, cutting through illusion, when we are undergoing severe brain damage that disrupts our faculties.

    If we want to critique Husserl’s ground of pure presence as excluding Otherness, we can follow the path set by Nietzsche, Deleuze, Heidegger and Derrida, who don’t fall into the trap of imprisoning transcendence with a substantive divine content.

    How can you imprison transcendence? If it is imprisoned, it has simply failed to be truly transcendent. The true infinite isn't a prison, because it is beyond all concepts; e.g. Dionysius, Plotinus, etc. That Nietzsche never studied this tradition and projected the popular 19th century German Protestant pietism he grew up with backwards onto the whole of Christian (and Jewish, Islamic, and Pagan) thought is not really a failing of those traditions, but of Nietzsche as a source of historical analysis. This is also why I wouldn't put him beyond modernity. The God of the German Reformers looms large in the Overman. So too for Heidegger, projecting Suarez back onto the whole of scholastic philosophy, although I will allow he has a vastly better grasp.
  • The decline of creativity in philosophy


    If it is true of Modern man (and I include among this group Nick Land and Mencius Moldbug, despite their superficial aping of postmodern philosophical tropes), is it also true of Postmodern man?

    I think post-modern man is a myth; a bit like sasquatch. It seems to me that all supposed "post-moderns" achieve is Zygmunt Bauman's "liquid modernity." A phase transition, sure, but the same substance. The Reformation and Enlightenment shadow still colors everything. If the "Singularity" hits, I'm afraid we'll just have "gaseous modernity," a self-sustaining cycle of hot air made hideously prolific through the aid of LLMs.

    John Deely wrote a whole history of philosophy focused on how Charles Sanders Peirce was the first post-modern thinker. Maybe it is even so, but if it is, he was at least a century and a half too early.




    Seems more like vacuous self-indulgent name-dropping garbage to me

    Thanks, I seem to have hit my target!

    I’m not going to play politics... this sort of moralizing 'holier than thou' diatribe turns my stomach.

    Well, it cannot be both moralizing, 'holier than thou,' and vacuous, so now I'm questioning your original compliment.
  • Question About Hylomorphism


    We would need to ask: does the stuff that is organized towards the whole and the wholes of those organized things and so on go on infinitely or finitely?

    I see what you mean. It strikes me that the Five Ways sort of answer this question. There is a helpful dialogue on them, "Does God Exist?" by Robert Delfino, that is pretty good on this. One need not accept the conclusion to see the relevance here. The Second Way shows how the chain ends in terms of hierarchically structured efficient causes (as opposed to accidental linear temporal ordering). Aquinas doesn't think we can know from reason and observation alone that the world has a temporal beginning, so the question is open in the horizontal dimension (being closed by revelation), but it ends in God in the vertical dimension.

    The First Way shows us an end to the order of motion, the Third shows us an end in the order of necessity, and the Fourth a sort of "first principle of participation." The Fifth Way shows an end in the order of ends. We don't have infinite regresses anywhere here.

    The problem of infinite composition would seem to me to involve a sort of materialist presupposition alien to Aristotle or Aquinas (but more alien to the latter), that matter is in some way subsistent and not always referred to another.

    An information theoretic example might be helpful here:

    Some physicists claim that information is ontologically basic and that matter and energy emerge from it. At the bottom, we get down to the bit, 1 or 0. It cannot get any simpler. Or can it? We also have the qubit, a potentiality between 1 and 0 that resolves into either.

    But it would be a mistake to take this position as claiming the cosmos is "composed of bits," a position that is often ridiculed as a misunderstanding. For instance, an electron only carries information at all because it is a difference that makes a difference as measured against a background that is not an electron. As Floridi demonstrates in his Philosophy of Information, a toy universe must have at least some difference to be anything at all (akin to Hegel's point on sheer being in the Logic and sheer sense certainty in the Phenomenology). If the universe is only a point, then obviously a 1 or 0 cannot exist as there is no variance to define the difference. We only miss this because we act as "extra-real observers" sitting outside the toy universe.
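    The point that a difference requires variance can also be put in information-theoretic terms. Shannon entropy (my choice of illustration here, not Floridi's own formalism) assigns exactly zero bits to a "universe" with only one possible state, since there is no variance against which a 0 or 1 could be defined:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# A "toy universe" with a single possible state carries no information:
# nothing distinguishes anything from anything else.
print(shannon_entropy([1.0]))       # -> 0.0

# The minimal informative universe needs at least two distinguishable
# states; a fair binary alternative carries exactly one bit.
print(shannon_entropy([0.5, 0.5]))  # -> 1.0
```

    On this (hedged) reading, the bit is not a "building block" sitting at the bottom; it only exists at all relative to a background of difference.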

    So, there might be a sort of "last part" in the form of the bit, but it isn't subsistent, but instead relies on another to be actual, to be anything at all. It isn't a building block. There isn't an infinite regress. We have God at the top, nothingness at the bottom.

    Dante's imagery in the Paradiso is helpful here. His material cosmos is Satanocentric. Earth is at the center, the point to which all matter moves, and Satan is at the center of the Earth. This is maximal multiplicity and potency. The sinners at the bottom of Hell and Lucifer are frozen in ice because potency cannot actualize itself; it is ultimately nothing at all. We are in the realm of the unintelligible, where being trails off into nothingness.

    But in the Paradiso the entire image is inverted. The outermost sphere is actually a dimensionless point, the mind of God "outside" space and time. But all space and time is contained "inside" the point in reality. Creation is like a halo projected from a point, light diffusing in mist. The limit, where the light stops, growing dimmer and dimmer until there is darkness, is just the limit of being. The end of the order is nothingness. As one moves onward, one moves up in the scale of the Transcendentals, Truth, Goodness, Unity, Being.
  • Question About Hylomorphism


    Is humanity a "part" of man? Is snub-nosedness a part of Socrates, or paleness? If what Socrates is does not explain that he is, would his existence be a separate "part" from his essence?

    I think there is a meaningful distinction between principles and parts, and between participation and composition. A circle is not composed of circularity, but circularity inheres in it, for instance. Parts are organizational. They are ordered to an end, and that's a key difference. "Humanity," by contrast, is possessed or participated in, the idea of "limiting essence."
  • The decline of creativity in philosophy


    The worst part is, it's all true... Modern man is an inverse Oedipus. He is born free, master of his own fate, and then tears out his own spiritual eyes, fating himself to wander the wilderness, unable to answer the Sphinx's queries. Jacob saw a ladder stretching down from heaven, angels ascending and descending, but modern man is more like Balaam, stuck on his path, hoping blindly in the better judgement of his ass to avert technopocalypse.
  • Must Do Better


    For Husserl reason returns to itself in the self-affecting presence to itself of the present moment, the speaking that hears itself speak in the same moment that it speaks. Once we bracket off all that consists of reference to all that which is not present and can never be present (the idealizations of logic and empirical science), what is left is the presence-to-self which grounds reason as pure self-identity.

    Right, but is this even a "grounding" or just mere description, tracing the way reason shows up in experience? How does this justify the authority of reason? And doesn't this risk absolutizing the epoché?

    There is also the question of individual judgement here. For Hegel, sheer sense certainty is contentless, it needs to go beyond itself to be anything at all. Instead of being pure immediacy, it turns out to be pure abstraction, and in a sense, a myth, since it is always already caught up in the Concept, which is mediated and self-developing. I suppose Hegelians and Husserlians can both accuse the other of falling into different illusions, but that hardly seems to go anywhere.

    So, first, we have the difficulty of judging different conclusions drawn from sheer sense certainty. Then we have to ask, from whence these different judgements?

    But as noted before, something of the materialist's presuppositions seems to still loom large in Husserl. The concrete particular is "most real," despite this elevation coming from what is supposed to be a methodological move in the order of our knowing (not the order of being). Yet even if one accepts that "everything in the intellect is first in the senses," this wouldn't necessarily imply any sort of ontic priority to the givenness of sensation itself.

    If the immediacy of experience is the ground of being (as opposed to merely first in the order of knowing) wouldn't this imply that it is subsistent in itself? And yet it seems radically contingent, with causes outside itself, always being referred outside itself. How is this not supposing a spontaneous move from potency to act if sheer experience is said to be ontological ground, and thus prior to all things? That the transcendental ego is always already active doesn't explain why this is so.

    So, for instance, Stein recognizes the need for metaphysics to complete the description; Jean-Luc Marion recognizes that givenness exceeds the subject and must come from without; Ferdinand Ulrich probably extends this the furthest, countering the forgetfulness of being with an understanding of being as gift.
  • The decline of creativity in philosophy


    And by the same token, to banish the meditative or professorial image of the philosopher; to make the philosopher something other than a sage, and so other than a rival to the priest. Rather, the philosopher aspired to become a writer-combatant, an artist of the subject, a lover of invention, a philosophical militant—these are the names for the desire that runs through this period: the desire that philosophy should act in its own name

    But all the best philosophers have "Saint" in front of their name. Verily, if one encounters a new name, looks them up, and they don't look like an archetypal sorcerer, I'd be skeptical indeed! :cool: :rofl:

    And that's not just for the Christians, consider: Apollonius of Tyana, Plotinus, Nagarjuna, Laotze, Shankara, Dogen, Proclus, Al Farabi, or even our old beloved Plato.

    Yet behold! An epoch where even the philosophers are decadents. Even? Especially the philosophers! And now they've even made it to the Big Leagues—all the way to the Oval Office. I am not sure if being filtered through Nick Land, Mencius Moldbug, and "Bronze Age Pervert" (complete with a return to radical asceticism in the form of fasting taxpayer funds) jibes with the original intent, but it certainly demonstrates the rollicking freedom of thought. :wink: (This, of course, ignores the philosophers who made themselves into accountants, but that's what people do with them—ignore).

    When the Last Men become First, they can make themselves into Overmen—even colonize Mars if they want. The difficulty is that they fancy themselves Milton's Satan—or Macbeth, holding the dagger that killed God—and yet really they play Iago to themselves; yet it's not like the human race was ever more than the womb for AGI and Capital anyhow, the prime matter for the instantiation of Mammon, who's destined to birth Roko's Basilisk (i.e., ol' Jörmungandr, whose fiberoptic tail wraps tightly round the Earth underneath the waves even now). Voluntarism clears away the old form and the ol' Demiurge—Yaldy-Baddy himself—shakes his mane, uncoils his tail, and does the rest. Dostoevsky was right about the Inverse Tower of Babel, bringing Heaven down to Earth, but missed that achieving this Brave New World would first require recreating God's punishment: linguistic atomization and separation.
  • Question About Hylomorphism


    Aquinas has it that angels and demons are composed in a sense. They have both essence (what they are) and an existence given by God (that they are). This constitutes their "act of existence." These aren't parts in the sense of substrate though.

    They have some actuality and some potency. They can learn, turn their attention, will this or that, act here or there, etc. but they cannot grow, decay, or lose form, because they have no matter. They are not subject to generation and corruption. Their form is fixed, they do not change in what they are, but only in how they operate.

    For a similar example, there is the human soul, which is immaterial but subject to change, and informed by the body.

    But this isn't a part-whole relationship, but rather two different principles within an act of existence. Act is received in potency and there is a limiting relationship between potency and act.

    Aristotle is quite different in this regard because he hasn't separated out essence and existence. Aristotle complains about the notion of participation in the Metaphysics but Aquinas is able to plumb it more fully and make use of it. All creatures participate in God's being, which alone is subsistent. All actuality is a sort of limiting participation, that occurs according to virtual quantity (qualitative intensity). We don't have an infinite regress because all actuality ultimately traces back to God.
  • Must Do Better


    Would divergence indicate a problem then?
  • Two ways to philosophise.


    This is why the knowledge claim, "All knowledge claims are true" is simply false on its face because we already know that some knowledge claims contradict each other and LEM. I don't even need to get to your other claim that "all knowledge claims are not true" to know that the first one is false. You start off with a faulty premise and it is faulty because it does not fit observation and follow the LEM. Adding the second claim as if it even relates to the first, or your use of "all" is an example of what language on holiday is.

    Yes, that was the point. P V ~P is not a premise and conclusion, but a premise itself, a basic disjunct. Also, might you be confusing the law of the excluded middle (LEM) with the law of non-contradiction (LNC)?
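    To make the distinction concrete, the two laws are easy to conflate but are formally distinct. A minimal statement in standard propositional notation (my formulation, not from the thread):

    ```latex
    % Law of the excluded middle (LEM): every proposition or its negation holds.
    \text{LEM:}\quad P \lor \lnot P

    % Law of non-contradiction (LNC): no proposition holds together with its negation.
    \text{LNC:}\quad \lnot\,(P \land \lnot P)
    ```

    In classical logic each is a theorem, but they are not the same principle: intuitionistic logic, for instance, retains LNC while refusing LEM as a general axiom, which is why the two are worth keeping apart in this kind of argument.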
  • On Matter, Meaning, and the Elusiveness of the Real


    Yes, I have seen it expressed that way. I don't think it does more than make an interesting beginning for a theory. Hamlet's version is somewhat different. I've always wondered where it came from - Shakespeare may have thought it up himself, but it is also likely that he read it somewhere.

    It's "in the air," especially in England, the birthplace of nominalism, as a sort of extension of theological voluntarism. Consider Shakespeare's younger contemporary Milton's great lines:

    ...What though the field be lost?
    All is not lost—the unconquerable will,
    And study of revenge, immortal hate,
    And courage never to submit or yield:
    And what is else not to be overcome?
    That glory never shall his wrath or might
    Extort from me. To bow and sue for grace
    With suppliant knee, and deify his power...

    A mind not to be changed by place or time.
    The mind is its own place, and in itself
    Can make a Heaven of Hell, a Hell of Heaven.

    What matter where, if I be still the same,
    And what I should be, all but less than he
    Whom thunder hath made greater? Here at least
    We shall be free; th’ Almighty hath not built
    Here for his envy, will not drive us hence:
    Here we may reign secure; and, in my choice,
    To reign is worth ambition, though in Hell:
    Better to reign in Hell than serve in Heaven.


    Theological voluntarism obviously starts much earlier, but through nominalism and the politics of the Reformation the idea that God makes whatever is good or bad so by a sort of bare act of will gets transferred over to man. There is, strictly speaking, no Good, but only what is called "good." God obviously still has the proper authority, and those who oppose him will no doubt suffer, but it is at least not incoherent for Satan to proclaim "evil be thou my good." Whereas on the realist account that comes through the via antiqua, while evil can obviously still be willed, this always involves a certain sort of ignorance, since evil itself is nothing, a privation of perfection and of being.

    This reminds me that I have a PM from @boundless I need to respond to, but we were talking about how this voluntarism emerges following the Black Death, and also about why it seems to come to Islam first and is more successful there.

    Ah, I see. You are using "truly" to distinguish a realist concept from an anti-realist concept. In which case we are just talking about two concepts of desirability, and a concept is either useful or not, and never true or not. Yes. I'm dodging the question. That's because I don't know what I think (yet).

    Sort of. When there is an appeal to usefulness to ground practical reason, which appears to become groundless in anti-realism, the next obvious question seems to be "what do we mean by useful?" Are there facts about what is useful, or is it just a matter of taste? Is it the case that "the mind is its own place, and in itself can make a Heaven of Hell, a Hell of Heaven," and that there is "nothing good or bad but thinking makes it so"? This would seem to imply that anything can be useful, according to an act of will.

    I think most anti-realists allow that some things can be factually more or less useful towards some end, since this seems to be an obvious empirical fact (e.g. you cannot smash rocks with a hammer made from butter). But in denying any 'higher ends' by which proximate ends are ordered they seem to rule out any sort of ordering for "usefulness," which seems to lead towards the idea that "useful" is just whatever we just so happen to consider to be useful at the moment.

    But this has problems too. Regret is a ubiquitous experience. Are we to say that those last tequila shots of the night were "useful" when I was feeling no pain and gulping them down, but the self-same event became unuseful when I awoke brutally hungover hours later? The phenomenon of regret might suggest that what is "useful" is what we will consider useful at some point in the future, or with more information, but here the issue is that there seem to be facts about this sort of usefulness, and the idea that more information helps ground practical judgement is at odds with the idea that such judgements are afactual.

    Well, it often means that, though, I would say, never just means that. See above.

    Indeed, because this position has clear difficulties if not moderated. And yet an expansive voluntarism seems to imply that it must be so, else the will is constrained in what is considered good or bad, which would seem to suggest the possibility that "what we are" and "what things are" determines their "usefulness," which again, seems very fact-like and less "taste-like." I think the difficulty here is avoiding inconsistency. So, when Rorty debates Eco, he wants to say that what a screwdriver is doesn't necessitate (or even "suggest") how we use it, since we could just as well use it to scratch our ear as turn a screw, and yet in an obvious sense this isn't so. A razor sharp hunting knife is not a good toy to throw into a baby's crib (at the very least, for the baby) because of what both are, and this is true across all cultural boundaries and seems that it must be true.

    It all depends on what you mean by rationality. Conventional logic, as I'm sure you know, can't establish good and bad. But we can reason about good and bad, ends and means. Why would anyone want to deny that we desire truth (on the whole) and goodness (so far as we understand it)?

    It's the denial of the appetite for them as such. So it doesn't deny that we might desire truth to attain some other end, but it does deny a rational appetite to know truth of itself that is a part of reason. IDK, this seems to be all over modern anthropology. Homo oeconomicus maximizes utility, and desire for truth is rolled into that black box and generally ignored. Rawls' bare abstract agent is invested in procedural reason, not the old intellectual appetites. Hume says quite straightforwardly that reason can never motivate action, full stop. Reason becomes entirely instrumental. Likewise, "the merit of benevolence, arising from its utility, is a proof that the notions of morals are not derived from reason."

    Nor can Hume just say, "but people just possess a sentiment for goodness itself," because this would obviously imply that there is something, goodness, to have an appetite for, which is distinct from people's other sentiments, which is at odds with the entire thesis.

    Now you have me puzzled. Why would anyone deny that we have a concept of morality, and of ethics?

    They don't, and I'm not sure how you read that as a denial of the existence of the field of ethics. Rather, the denial is that ethics has any real subject matter outside opinion and illusory judgement. It is just taste and emotional sentiment. Plenty of users here make this sort of claim, and plenty of famous thinkers.

    Yes. Sometimes, however, they do so because they think that position A does not imply position B. So I need details.

    This:

    Where am I, or what? From what causes do I derive my existence, and to what condition shall I return? ... I am confounded with all these questions, and begin to fancy myself in the most deplorable condition imaginable, environed with the deepest darkness, and utterly deprived of the use of every member and faculty.

    Most fortunately it happens, that since Reason is incapable of dispelling these clouds, Nature herself suffices to that purpose, and cures me of this philosophical melancholy and delirium, either by relaxing this bent of mind, or by some avocation, and lively impression of my senses, which obliterate all these chimeras. I dine, I play a game of backgammon, I converse, and am merry with my friends. And when, after three or four hours' amusement, I would return to these speculations, they appear so cold, and strained, and ridiculous, that I cannot find in my heart to enter into them any farther.

    ...is not a philosophical resolution of skepticism. The anguished skeptic can just say: "well it still bothers me."

    Fascinating. Could you let me have the reference so I can look it up?

    There are lots of critiques of Hume's attempt to ground morality in a sort of universal sentiment and average utility. MacIntyre's treatment in After Virtue comes to mind. A key point is that it doesn't keep egoism out, see the point above. It does not imply that it is actually better for us not to act like egoists and lie and cheat, etc. just in cases where we know we can get away with it. To the claim: "but people have a tendency to not want to do that sort of thing," the egoist can just reply "but I do want to do it."

    Arguably, Hume might not contradict himself, if we take his "grounding in sentiment" to be purely descriptive. But then he hasn't done anything to ground morality either, and hasn't justified a move away from moral nihilism the way he claims he has. So it's a sort of damned if you do, damned if you don't.
  • Must Do Better
    One of the points here, about the way a lack of clear methodology (or at least an agreement that reason can adjudicate the question) leads towards authoritarianism, cults of personality, politicization, and power struggles is quite apt. This was one of my points in the "Two Ways to Philosophize" thread. It's also one that D.C. Schindler makes in his Plato's Critique of Impure Reason: On Goodness and Truth in the Republic and The Catholicity of Reason, which I've summarized before here.


    Part of the problem is that it is often left unclear just how extensively a constraint is being challenged. A philosopher treats the law of excluded middle as if it carried no authority whatsoever but implicitly relies on other logical principles (perhaps in the metalanguage): exactly which principles of logic are supposed to carry authority? A philosopher treats some common sense judgement as if it carried no authority whatsoever but implicitly relies on other judgements that are found pre-philosophically obvious: exactly which such judgements are supposed to carry authority?

    When law and order break down, the result is not freedom or anarchy but the capricious tyranny of petty feuding warlords. Similarly, the unclarity of constraints in philosophy leads to authoritarianism. Whether an argument is widely accepted depends not on publicly accessible criteria that we can all apply for ourselves but on the say-so of charismatic authority figures. Pupils cannot become autonomous from their teachers because they cannot securely learn the standards by which their teachers judge. A modicum of wilful unpredictability in the application of standards is a good policy for a professor who does not want his students to gain too much independence. Although intellectual deference is not always a bad thing, the debate on realism and anti-realism has seen far too much of it. We can reduce it by articulating and clarifying the constraints...

    Philosophers who reject the constraints mentioned above can say what constraints they would regard as appropriate. Of course, those who deny that philosophy is a theoretical discipline at all may reject the very idea of such constraints. But surely the best way to test the theoretical ambitions of philosophy is to go ahead and try to realize them in as disciplined a way as possible. If the anti-theorists can argue convincingly that the long-run results do not constitute progress, that is a far stronger case than is an a priori argument that no such activity could constitute progress. On the other hand, if they cannot argue convincingly that the long-run results do not constitute progress, how is their opposition to philosophical theory any better than obscurantism?
    — Timothy Williamson

    That said, it seems to me that Williamson has a narrower vision in mind than I (or Schindler) might. I will also just add (and I say this as a great admirer of Big Heg) that philosophy written in a more obscure fashion seems to be more prone to slipping into the cult of personality. At the limit, this gets every bit as bad as debates about esotericism, where no one who critiques the preferred master has ever "truly understood them."

    Certainly, any thinker, particularly systematic thinkers, can become an authority figure in inappropriate ways, but the ability of people to read their preferences into obscure works, or to easily accuse others of misreading them, tends to make this worse.

    To be fair, some topics lend themselves to more obscure formulations, so obscurity isn't necessarily blameworthy. Indeed, Heisenberg had a very interesting unpublished philosophy paper on a linguistic analogy to his famous Uncertainty Principle, where the key thesis is that the more we try to lock what we are describing down with precision, the more the full ambit of reality is excluded and slips by us (and this explains why a Horace or a Petrarch can do so much with so little, or how Heraclitus' few fragments could spark so much thought millennia later). Plotinus or Dionysius could hardly have written otherwise (Hegel maybe could have at least tried though :rofl: ). But it's certainly a risk when one has to rely on less clear language. Being minable for aphorisms by people of all different persuasions is not necessarily a testament to one's philosophical legacy.



    I could lament that we haven't answered or achieved agreement on a host of questions, but still acknowledge we've made progress in understanding them. For that matter, rather than lamenting, I could postulate that a lack of closure is a hallmark of what constitutes philosophy.

    Right, one might see the open-endedness as a feature and not a bug, or one might also judge the complaint as being akin to condemning nutritional science because people still eat poorly, or condemning ethics and law because there are still miscreants and criminals. For instance, an advocate of "philosophy as therapy" can object that the failure of some to undergo therapy can hardly be taken as an indictment of the effectiveness of the therapy.

    The Wittgensteinian Ur-picture, which I don't share, is that "philosophy leaves everything as it was." It is a diagnostic tool to help us understand where our language led us astray. Once we've done that, we'll be left with very little to worry about. Genuine problems will be assigned, or promoted, to the disciplines that study them, such as physics and politics. You can see why this is often viewed as a therapeutic understanding of philosophy -- or, less elevatedly, as plumbing out the pipes.

    I think this is what Banno is describing. Again, he will tell us, I'm sure. Personally, I think a dose of Doctor Witt's therapy is a very good thing for all of us from time to time, especially when we get a strong hunch that our terminology is backing us into implausible corners. As I said to Banno above, I don't think all the important philosophical questions can be treated and dissolved in this way, but it's a fantastically useful technique to have at the ready.
    J

    Right, particularly the focus on language seems like it will leave some things out. Just for example, for a paper I had an idea for, I'm reading Rowan Williams's book on Dostoevsky's philosophy and Harold Bloom and some other folks' analysis of Hamlet and King Lear, as well as Milton's Paradise Lost. These secondary sources all treat these literary works as being deeply philosophical. My paper idea uses them as examples of particularly modern "pathologies of reason" (i.e., the tendency towards a straight-jacket, procedural intellectualism, or else an ultimately groundless voluntarism) that largely express themselves in the history of philosophy and literature.

    Maybe my paper will be tripe, but Williams's book is very good. Point being, this is certainly the sort of stuff that has historically been called "philosophy," even if some of it might fall into literary analysis. It isn't just literary analysis though, because it moves from fiction to universal statements about man and being. Yet I am not sure how a view of philosophy as beginning and ending in linguistic analysis doesn't have to cut out this sort of work, in which case, what is it? It still seems to be a sort of philosophy, and arguably it is more so, which seems to make the move of reducing all philosophy to analysis a sort of equivocation.

    Or more simply, on the narrow view, are Nietzsche and Dostoevsky even philosophers anymore?





    Understanding Witt’s ‘therapeutic’ project in the context of consonant efforts in phenomenology and poststructuralism allows us to see that he doesn’t so much dissolve all philosophical questions as show us that scientific, logical and mathematical domains are not self-grounding but instead are contingent and relative products dependent for their grounding on an underlying process of temporalization. Unlike writers like Husserl, Heidegger and Deleuze, Wittgenstein was reluctant to call the questioning that uncovers this process philosophical. He thought of philosophy as the imposing of metaphysical presuppositions (picture theories) on experience but not the self-reflexively transformative process of experiencing itself. — Joshs

    Good point. Obviously, I interpret this differently. I see Wittgenstein the way I see Hume and Nietzsche, as great diagnosticians of the pathologies of modern thought, although I do not accept their conclusions. They follow out the dominant presuppositions of their era to their furthest (and IMO flawed) conclusions. Wittgenstein in particular shows the flaws of philosophy as a "system" and reason as wholly ratio, a sort of discursive rule-following. Here, reason ceases to involve ecstasis and an erotic Other, such that we become trapped within the fly-bottles of our own interpretations and systems, within human finitude. Procedural reason is raised up as the guarantor of ethics and political life (Rawls, Kant, etc.), but ultimately reveals itself to be groundless. Man is stuck within his own finitude, despite his natural "appetite for the infinite" (e.g. Leopardi).

    I'd extend Przywara here and say that the situation of modern man is analogous to the way in which the creature is not the source of its own being. What it is does not explain that it is. The creature is always referred to something outside of itself. Even the what of a thing, its quiddity/essence, is not wholly intelligible in itself.

    So too for reason. It is oriented outside of itself. We have come to see logos as a finite tool, the creation of man and his culture, but it is rather, I would argue, that man participates in Logos. The nature of logos is to transcend; it is always already past its limits and with the whole.

    The relevance to the larger topic here is that modern philosophy is defined by its move to "bracket out" all sorts of considerations as irresolvable by reason, or beyond the limits of reason. The boundaries vary, it can be the phenomenal, the mind, language, culture, etc., but in each instance the bracketing involves a methodological move that assumes much about the world and reason.
  • Two ways to philosophise.


    We're a little bit off topic, but this is obviously related to the Adorno thread. I am wondering what the contradiction here is said to be, in a precise way? Is it that democracies can turn into oligarchies, and once they do then they are no longer democracies? I think that's true, but it looks like a change rather than a contradiction.

    Or perhaps we have here the idea that democracy is incompatible with liberalism, because liberalism is tied to capitalism and therefore tied to oligarchy? If so, then I would want to ask, "What is it about liberalism that is tied to capitalism?" I'm not disputing the thesis, but I want to see the reasoning.

    Well, this is "contradiction" in the context of the Hegelian dialectic, which starts off pretty clear in the Logic with being/nothing -> becoming, but becomes less clear cut in historical analysis. The basic idea is that a historical moment (e.g. early liberal republicanism) comes to negate itself, making itself what it is not precisely because of what it is.

    For Hegel, who has a strongly classical bent in this respect, the telos of history is the emergence of a truly self-determining human freedom (man becoming more wholly himself and more truly one). But freedom itself is subject to the dialectic. If we begin with freedom as "the absolute lack of constraint and determinateness," the "ability to choose anything," we run into the contradiction that making any choice at all implies some sort of determinacy, and is thus a limit on freedom. Yet the fact that, to sustain our perfect freedom, we must never make any choices, while freedom is also "the capacity to choose," is a sort of contradiction. He identifies this sort of flight from all determinacy with the excesses of the French Revolution early in the Philosophy of Right, but you still see this in leftist and libertarian radicals all the time; they flee from any concrete, pragmatic policy because determination is a limit on liberty.

    So there ends up being many revisions of freedom, which has to be worked out across human history (the idea of a "commonwealth" coexisting with slavery and conquest is one example going back to Saint Augustine's City of God; the fact that the "lord" is not free to lift his boot off the neck of the "bondsman" without risking revolt is another). With liberal democracies, I would like to say that the problem was that they were self-undermining. They allowed for, and indeed positively promoted their own collapse into non-democracy, which is a negation of the original term that promotes and expresses freedom. Socialism and nationalism, in their respective ways, helped to avoid this self-negation by addressing the concentration of power that could be used to subvert liberal democracy. But now, globalization, mass migration, and then secularism and capitalism's tendency to erode culture, have destroyed the basis for nationalism, and yet "national identity" was holding up support for the redistribution of socialism.

    From this, we might diagnose the problem and some historical currents, but we could hardly use it to predict the future. That's Solovyov's main insight. History might be providential, but man is free, and he "loves the darkness" (John 3).
    History doesn't have a utopian end point assured to it, as Eusebius and Hegel might assume, but is closer to Dante's vision (where he is most unique as a philosopher) of a goal whose assurance isn't realized. It is, however, perhaps actual only to the extent that it rationally embodies this telos (Hegel's famous point that "the rational is the actual and the actual is the rational"). Which is just to say, we can speak meaningfully of "progress," of actualization, but we mustn't fall into the trap of many left Hegelians, of thinking this makes history a sort of Manichean battleground. The good is sown with the bad and only time tests them.



    I didn't see the word, "some" in the original quote and that seems to make a difference. The original quote seems to be saying "either all narratives are true or all narratives are false", but that doesn't make any sense because there are narratives that contradict each other, so it cannot be that all narratives are true. But all narratives could be false in that we have yet to find the true narrative. This also doesn't seem to take into account that some narratives might be partially true/false.

    Natural language is fuzzy, so I suppose it could be read like that, although that seems to be a stretch to me. Saying "all x are y or they aren't" is a simple disjunct between affirmation and negation of "all x are y." That's how I intended it at least. So, the objection of the possibility of narratives without truth values was brought up, but I don't think this affects the disjunct. If some narratives are neither true nor false, then obviously they are not "all true." The excluded middle here would instead be "all narratives are neither true nor not-true." Note though that the context is epistemology and presumably epistemology, since it deals in knowledge, deals in narratives that have truth values, if not exclusively, at least primarily.
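    The quantifier scope at issue can be made explicit. Letting T(n) stand for "narrative n is true," the intended disjunct is between a universal claim and its negation, and the negation of "all are true" is "some are not true," not "all are false":

    ```latex
    P \equiv \forall n\, T(n) \qquad \text{("all narratives are true")}

    \text{The intended disjunct:}\quad P \lor \lnot P

    \lnot P \equiv \lnot \forall n\, T(n) \equiv \exists n\, \lnot T(n)
    \qquad \text{("some narrative is not true")}

    \lnot P \not\equiv \forall n\, \lnot T(n)
    \qquad \text{("all narratives are false")}
    ```

    This is why the "either all true or all false" reading misfires: the negation of a universal is an existential, so the disjunct leaves open that some narratives are true and others false (or partially either).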

    I don't even like the term "narratives," to be honest. Its connotations seem perhaps inappropriate for epistemology. I would rather say perhaps "all knowledge claims."
  • Understanding Human Behaviour


    To be determined does not rule out being more or less self-determining and self-governing. To say that freedom requires that our actions are undetermined is equally problematic, since what is wholly determined by nothing prior is necessarily spontaneous and random, which is hardly "liberty."



    It needs an arrow from behavior going back to the other inputs, since our behavior shapes our environment, nutrition, experiences, etc. Indeed, our ability to shape our environment and to intentionally form habits, including habits that shape our desires, seems essential to freedom. Consider Aristotle's notion of virtue and vice as habits, and that the virtuous person learns to enjoy doing what is right (Harry Frankfurt's effective second-order volitions, being able to "want to want/not want"). Likewise, we assert control over our emotions and emotional responses.

    So, I think the model works in ways, but it seems like it might lead to a very flat anthropology if taken by itself. The human being can be described in many ways, but I do find the division between epithumia (concupiscible appetites, pleasure/pain), thymos (spirit, the irascible appetites, hope/fear), and logos (the rational part of the soul, intellect and will and the desire for truth and goodness themselves respectively) to be very helpful because it shows how liberty and virtue involve the right ordering of these. And while this conception obviously dominated Western thought until the Enlightenment, it also had a huge impact upon Islamic thought, and there are isomorphisms in Indian and Chinese thought. I do think the ideas of harmonious ordering and cultivation are necessary for explaining "choice" as opposed to mere input and output. Choice requires the parts of man to be organized into a self-governing and relatively self-moving whole.
  • Is there a “moral fact” about the function of cultural moral norms and our moral sense?


    I don't see any empty spaces where we can fit anything about evolution into your argument.

    Quite the contrary, you can fit evolution in via the "metaphysics of goodness" in Aristotle, the "Neoplatonic tradition," Thomism, Schelling, and Hegelianism in a number of interesting and satisfying ways. Charles Sanders Peirce and Vladimir Sergeyevich Solovyov represent two appealing directions (both being students of the Patristic/Scholastic tradition and German Idealism), although I'm more partial to the latter. David Bentley Hart is pretty good about this topic too.
  • On Matter, Meaning, and the Elusiveness of the Real


    Well, from the point of view of a realist, that would indeed seem to be so. But if you don't have, and/or can't recognize, the Good, but, perhaps, only a range of activities and/or ends that are worthwhile in their own right, then moral anti-realism seems less like a form of scepticism. To be clear, for someone who doesn't buy Aristotle's crowning of the hierarchy of purposes, or who thinks that the supposed crown is an illusion, "truly good" is just rhetorical pleonasm.

    Aristotle's Ethics is focused on just this question though, identifying what is sought for its own sake. Wouldn't the anti-realist position rather be that nothing is truly more or less desirable, that "desirable" just means "whatever we just so happen to currently desire"? So, whenever falsity is preferred to truth, bad faith perceived as more "useful" than good faith, etc., it is simply better, because "better" just means "I prefer." That's the popular summary statement of emotivism: "x is good" just means "hoorah for x," and "y is bad" just means "boohoo for y." As Hamlet says, "there is nothing either good or bad, but thinking makes it so."

    Of course, few anti-realists deny that things can be better or worse vis-a-vis some ends, since this seems absurd. So normally it is the claim that ends themselves cannot be judged better or worse, normally packaged with a denial of the rational appetites (the desire for truth and goodness themselves) so that even rather obvious ends like "not being lit on fire" must stem from a sort of arational sentiment/feeling (this being the result of the axiomatic denial of the rational appetites). Anti-realism is also made more plausible by the Enlightenment division of the good, such that the good of the appetites is divorced from a sui generis "moral good," so that one doesn't have to deny things like: "it is bad for men to be lit on fire," but only the amorphous declaration that "lighting men on fire is 'morally' bad."

    Hence, "truly better or worse" can still be used by some anti-realists. Different race cars can be truly better or worse; some are faster. You can have truly better or worse choices for which school you attend, which vacation you go on, etc. It's rather the "moral good" that is denied. But the counter is that this "moral good" is incoherent, and that the topic of ethics is so bound up with practical reason as a whole that the denial of this new category doesn't actually secure anti-realism the way the anti-realist thinks it does, or at the very least is an inappropriate category for analyzing pre-Enlightenment ethics (Western and Eastern).



    A preface. David Hume draws a sharp distinction, between what he calls Pyrrhonistic or radical scepticism and what he calls judicious scepticism. It is the former that he disapproves of. But he also thinks that judicious scepticism, which is cautious balanced judgement, is an important virtue in life. I think that's right. He doesn't mention Descartes, which is annoying, but I think that Descartes would count as a Pyrrhonistic sceptic.

    Right, although the common criticism is that this isn't actually justified, making it an arbitrary sentiment tacked on to what appears to be good grounds for skepticism (if we accept the argument). Not wanting to endorse a position and arguing for positions that imply that self-same position are two different things, and thinkers often do both. There is also the charge that he contradicts himself in trying to have his cake and eat it too, as when he argues from an "is" to an "ought" regarding treating children well just a few pages after arguing for the impossibility of such a move.
  • Where does logic come from? Some thoughts


    It also hangs due to gravity

    Exactly. Gravity, the weak force, electromagnetism, etc. must be what they are at every moment.

    If there was no gravity the chandelier would float and not hang. I think the issue here is you're simply leaving out ALL the necessary causes that preceded an effect (like our observation).

    Right, the examples are just there to show the difference between the linear (horizontal) series and hierarchical (vertical series), and the difference between metaphysical and temporal priority/posteriority, not to claim the dominoes falling have "one cause."

    Which you can only have by having access to information.


    Exactly, although this is necessary but not always sufficient. One tendency I've noticed in modern philosophical anthropology is that it tends to play down the possibility of "weakness of will." The idea being that if science could only "tell us what to do clearly," our issues would be solved (e.g., Sam Harris, Steven Pinker, Francis Fukuyama, etc.). I think this stems from the liberal presumption that, barring major dysfunction or misfortune, all people achieve a similar baseline level of freedom and self-determination by age 18 simply through natural maturation, which is quite different from Epictetus' claim that most masters are in fact slaves to their passions and appetites.
  • Is there a “moral fact” about the function of cultural moral norms and our moral sense?


    Also, when thinking about the relevance of reproductive fitness to the evolution of morality, I suggest you keep in mind that increased reproductive fitness is merely how morality was encoded in the biology underlying our moral sense. What was encoded in our moral sense was cooperation strategies. Confounding the means (reproductive fitness) of encoding morality in the biology underlying our moral sense and what was actually encoded (cooperation strategies) can be a serious error when discussing human morality.

    Right, and that might certainly be part of it, but does this ground the whole of morality or practical reason? For instance, we feel a strong moral commitment to our children. And yet often, we are not cooperating with them, but rather they are an unhelpful burden to us, one who returns little in positive cooperation. Indeed, in the toddler years (and surely after too), good parenting can seem more like a battle of wills than cooperation (I've always liked Charlotte Mason's remark here, that the "strong willed child" is really weak willed, because they are unable to overcome their impulses). So too for being a teacher who cares deeply about their students, and so works to overcome their bad habits rather than cooperating with them within the context of their established vices.

    More to the point, when we do something out of love, we often want nothing in return. There is the desire to communicate goodness to the other for its own sake and for their sake, even if they are incapable of cooperating with us (e.g. care for the severely disabled), and even though we stand to gain no benefit.

    Now, I suppose that the goal here is cooperation, in a sense, but it's a very broad sort of cooperation—the attainment of virtue and ability to participate in a common good. Man is more fully fulfilled in social roles—parts of a "good life" will tend to involve being a good father, wife, doctor, deacon, leader, citizen, etc. and so there is a benefit for the individual, but also the whole, through participation in the common good and the realization of freedom in the positive communication of goodness to others.

    However, this sort of notion is generally much wider than the theories of evolutionary psychology allow (TBH, the field has always seemed to me to tend more towards Homo oeconomicus than Homo sapiens, at least in many tellings). If cooperation is understood as its representation in common evolutionary game theory models, or those of economics, it seems too shallow, precisely because it tends to focus on the individual reproducing organism or the utility-maximizing (or satisficing) agent, and not the pursuit of any truly common good.
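    To make the point concrete, here is a minimal sketch (illustrative only, not any particular study's model; the payoff values and strategy names are standard textbook choices) of the kind of agent those game theory models assume: a strategy judged solely by the payoff it accumulates in an iterated Prisoner's Dilemma. "Cooperation" here is nothing but a move that happens to maximize a score.

    ```python
    # Standard Prisoner's Dilemma payoffs: (my move, their move) -> my payoff.
    PAYOFF = {
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(opponent_history):
        """Cooperate first, then copy whatever the opponent did last."""
        return opponent_history[-1] if opponent_history else "C"

    def always_defect(opponent_history):
        """Defect unconditionally."""
        return "D"

    def play(strat_a, strat_b, rounds=10):
        """Run the iterated game; 'goodness' is reduced to these two scores."""
        hist_a, hist_b = [], []  # each side's record of the opponent's moves
        score_a = score_b = 0
        for _ in range(rounds):
            a, b = strat_a(hist_a), strat_b(hist_b)
            score_a += PAYOFF[(a, b)]
            score_b += PAYOFF[(b, a)]
            hist_a.append(b)
            hist_b.append(a)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))    # mutual cooperation throughout
    print(play(tit_for_tat, always_defect))  # exploited once, then mutual defection
    ```

    Nothing in this framework can represent pursuing a common good for its own sake; an act of uncompensated care only enters the model if it is rewritten as a payoff, which is exactly the flattening complained of above.
    
    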

    There is also the seeming counter example of genetic predispositions towards psychopathy, which is a sort of inherited tendency to game fellow humans' collective commitments. Hence, it does not seem that the evolutionary process by which man becomes more cooperative is itself necessarily oriented towards cooperation or the good, but merely whatever "works."

    I suppose I am also somewhat skeptical of attempts to bracket off moral reason from practical reason. Both ultimately deal with ends, and so the Good, and they are often deeply intertwined. But practical reason involves the whole of the appetites, e.g. the good of food, or sleep, etc. as much as friendship and citizenship. Romantic love is a perfect example of where the two seem to become ineluctably bound up together. The Enlightenment idea of a sui generis moral good seems to make the Good itself strangely undesirable and alien to the world.



    I mean, most people think it comes from Divine Revelation, so there's that spanner.

    The theology of the largest denominations has both intrinsic and extrinsic groundings for morality. The good of man flows from man's essence, and in this sense man seeks the good by nature, and the good of man can be known by natural reason. So too, man has the telos essential to all rational natures, and so is oriented to the Good and True as such, through the rational appetites of the will and intellect respectively. God is involved in this intrinsic orientation as first cause and principle, not as extrinsic agent. However, God is also involved extrinsically, as the final end of man and, as you say, through revelation (this distinction is also why Saint Thomas' Fifth Way is actually very different from "Intelligent Design").

    Hence the distinction between natural law, which can be understood by natural reason, and the divine law of revelation.




    Well, I would say that man, in virtue of his rational nature, possesses both will and intellect and is thus oriented towards the Good, the Beautiful, and the True, as such, by his rational appetites, but that's a whole different case to make.

    In terms of evidence for this, I would just point out that history is full of people eschewing cooperation, or all social contact, to pursue what they think is truly best. And this extends to the denial of all the appetites (asceticism), reproduction (celibacy, monasticism), accepting ostracism, forgoing social contact entirely (hermits), taking great risks and enduring great hardships for ideologies one will never benefit from (e.g. Marxist revolutionaries), and even accepting torture and martyrdom.

    I'm fairly familiar with game theoretic interpretations of cooperation from economics, but as far as I can tell the thin anthropology they rest on would make much of human history unintelligible, unless the whole of "acting for higher principles," is rolled into the black box of "utility."

    For instance, Socrates is the opposite of cooperative during his trial in the Apology, refuses to be helped in escaping in the Crito, etc. Rather, the gadfly annoys Athens into executing him (for their own good) and the only thing he cooperates in is drinking the hemlock.
  • Philosophy by PM
    Since this thread seems to be largely acrimonious denunciations, we have decided to close it. Please, let's try to have threads that are not largely acrimonious denunciations. :rofl:
  • Is there a “moral fact” about the function of cultural moral norms and our moral sense?


    Try applying this to theoretical reason. I suppose the analogous statement would be something like: "our senses, reason, and our sense of truth/veracity developed because they help promote survival and reproduction."

    Does it follow that theoretical facts (i.e., facts that are neither aesthetic nor moral) should be judged in terms of survival and reproduction? That is, I judge a fact, like "Moscow is the capital of Russia," using faculties developed to aid in reproduction, therefore the fact itself should be judged in terms of whether it aids reproduction or not?

    I imagine you can see the difficulty I am trying to get at here. It would be the same for aesthetic reason. We wouldn't necessarily want to judge a painting in terms of survival and reproduction, even though that's presumably the selection factor for our having eyes to see paintings.

    Nonetheless, we might ask: "why does this seem somewhat absurd for theoretical and aesthetic reason, but plausible for practical reasoning?" And my suggestion would be that it's because the human good is related to what man is. What man is helps to define the good of man (the relationship of formal and final causality). Hence, how man came to be man sheds light on man's ends. Indeed, this goes along with the intuition that organisms are equipped to seek the end proper to them.

    That said, I do think this gets things somewhat backwards. Man has a moral sense to aid cooperation, perhaps, because this aids survival and reproduction. But it doesn't follow from this that the human good is limited to cooperation (or survival, or reproduction). Cooperation is not sought for its own sake, but rather as a means. Hence, cooperation cannot be the measure of the good; we should cooperate just when it is truly best to do so.
  • Two ways to philosophise.



    The convergence suggests biology is moving toward what some call a "process ontology" where identity emerges from patterns of relationship rather than essential properties - a view that resonates across these philosophical and scientific frameworks.

    Ideally, there is a via media between dispatching with individual organisms and dissolving them into a universal process (and thus making all predication accidental), and a static view that fails to take account of the fact that physical (i.e. changing) beings are inherently processes. That is, organisms are organic wholes and measures (form), and are also always subject to change. The goal-oriented effort to sustain form, entelechia, demarcates the whole. Gut microbiota are accidental though. One doesn't become a different person or species when one has a course of antibiotics.

    This dissolves the classical boundary between self and environment, much like enactivism rejects the subject/object distinction. Your gut bacteria aren't just "in" you - they're part of your extended phenotype, affecting everything from mood to immune function.

    Yeah, but this would apply to food that is consumed too.

    At any rate, the Cartesian cleavage is based on the unity of mental life, not the physical boundaries of organisms. Descartes' philosophy of extended bodies and corpuscles arguably has already dissolved the individual body, erring in precisely the opposite direction.
  • On Matter, Meaning, and the Elusiveness of the Real


    That sounds like Aristotle, and I must admit, it makes more sense to me. One must remember, however, that he is also quite content to revise the knowledge that is handed down to him when necessary; it is not sacrosanct or immune from doubt or anything like that. In specific circumstances, questioning one's presuppositions and beginning again with a clean slate are perfectly reasonable tactics. But as an approach to all knowledge, from the beginning... that's a different matter.

    Yes, and many others. And yes, certainly things can be revised, even radically so. That's why I'd say the difference is methodological. Post-Descartes, there is an extreme focus on method, while philosophy also starts to be thought of more as a "system" or "game." So, even though many later thinkers in this mold are adamantly "anti-foundationalist," they still retain this orientation towards foundational skepticism and the idea that method can overcome it and "build back."

    I don't think you are being fair to him.

    Yeah that's fair, it wasn't clear as written; "human" should be "humanist." That is, "the bulk of non-empirical human knowledge," as in (but not exclusively) "the humanities," due to his epistemic standards. E.g., "morals and criticism are not so properly objects of the understanding as of taste and sentiment. Beauty, whether moral or natural, is felt, more properly than perceived," when combined with the prior claims about history, etc. being grounded in observations of particulars, would seem to exclude most of the classical education's curriculum ("humanist knowledge," not "human knowledge"). Writers in the "Canon" like Plutarch, Tacitus, Juvenal, Cicero, etc. would be mostly valuable for the observations of fact they record under such criteria (probably the least useful thing about them), and this would seem to hit folks like Confucius too.

    Obviously, he doesn't really seem to intend that we should burn these (at least I don't think so), but rather that they don't represent knowledge.

    Arguably, we could also take the precepts as sort of demoting the authority of techne, i.e. arts, and indeed there is an interesting relationship between moral anti-realism, which affects our understanding of ends, and the arts, which are defined by ends, but seem to also involve "the understanding" and expertise. I'd have to reread it again though. Certainly, I think this demotion occurs at some point before the 20th century though (the speculative sciences were always ranked higher in a way, but not to the same extent). For instance, if you try to justify moral realism by appealing to the "facts" known by medical science, you will often face the objection that medicine is not really a science precisely because it is a productive art, whereas medicine as the "science" of the healthy body is all over older philosophy, even though it is also recognized as an art.

    1. It's not the really the bulk of human knowledge that's in danger - just "divinity and metaphysics".

    I think the preceding sections suggest that it's everything that isn't occurring according to observation or the relatively narrow range of the a priori; those topics are just particularly bad offenders. We're not really burning them, but we're downgrading them to taste and sentiment.

    But, I am sympathetic to thinkers who say that moral anti-realism or skepticism is itself a sort of radical skepticism (i.e. not limiting it to theoretical knowledge). For one, if nothing is ever truly good, then truth cannot be truly better than falsity, nor "good faith" truly good, and so too for "good methods" or "good arguments," since these all relate to ends, i.e. "the Good," "that at which all things aim."

    2. His view of abstraction is somewhat similar to his idea of induction, but lacks the problematic element of making predictions.

    Right, and this view of abstraction is what aids the deflation of causation. This isn't really Hume's idea though, it's in earlier thinkers and goes back to Ockham in its etiology; Hume is just following the dominant trend in Anglo philosophy to its logical conclusion. He is a brilliant diagnostician in this sense.

    3. You may not like his resolution of the induction issue, but he does at least provide a candidate. Admittedly, it involves accepting that empirical observations cannot justify a generalization, but then explaining that we humans are just going to continue to rely on it, justified or not. What's wrong with that?

    It's still a "skeptical solution," though, right? That was my point: people don't find these compelling (particularly the appeals to pragmatism, because arguably the skepticism also affects our knowledge of whether anything is ever truly useful). So, even if I did like his solution, I could still see why some people aren't satisfied with it and embrace a more "depressed/anxious" skepticism instead. That is, they simply do decide to worry about it, and I cannot really blame them for that. For instance, for Bertrand Russell, Hume's case implied that "there is no intellectual difference between sanity and insanity."
  • Two ways to philosophise.


    Yes, I think that's a good point. One of the deficits of the empiricist program is that it tended towards (although not always) making all emotion, goodness, and beauty an entirely internal affair, ignoring their directedness and interactivity. The issue with the subject/object accounting structure is that everything always has to be placed on one side of the ledger. Something similar happens with perception and knowledge in representationalist assumptions.



    Perhaps when we now talk about "history" we are talking about "knowing what happened in the past." Is that the thing that Maritain is considering, or is he considering history in some other manner? And do you happen to know the text where he talks about this?

    It's just called On the Philosophy of History. Like a lot of his stuff, it's free online. He is pretty opposed to the Hegelian project. I am not so much, because I think it has some truth to it and that Solovyov offers solutions to some of Hegel's problems.

    He does have laws, but they are more observational.

    So the issue here is apparently prediction of future events, or a determination of the principles that led from one point to another?

    More looking for principles. The problem posed by internal contradictions is a good one. I think Hegel's theory does explain the history of liberalism in the 19th and 20th century quite well. There, it faced challenges from nationalism and socialism, and it sublated both to varying degrees. Key planks of socialist platforms became universal in developed liberal states (e.g. the welfare state), and this helped address the internal contradiction between the idea of democracy and self-rule versus the tendency of capitalism to concentrate wealth such that elites become able to manipulate the system and lock out economic and political competition (which is essentially the system destroying itself, corrupting its own principles).


    Likewise, nationalism was absorbed into liberalism such that "Algeria for the Algerians," and "Iraq for the Iraqis," are phrases even leftists endorse. The idea that all France had to do was give Algerians liberal rights was fully dead by the time of decolonization, but liberalism actually began by being quite as universalist as Marxism (e.g. France forcing "sister republics" on foreign states as it conquered much of Europe).

    And now we see liberalism eroding national identity and so undermining public support for the socialist policies it absorbed. The contradictions are apparent and informative, but it would be foolish to think this gives one predictive insight on the exact shape of their resolution. Man is free and he can respond to this sort of contradiction in many ways. That's how Solovyov resolved Hegel's oppressive focus on the universal and Providential. He sees a telos to history, an end, but not necessarily its attainment; just as an organism has ends but might grow ill instead. History becomes the meeting ground of truth and falsity, the dramatic encounter in which the wheat is winnowed from the chaff, the blazing fire that reveals what man has built his work from (I Corinthians 3:15). Its end is man's communication of goodness to man, through the Church and state as well as the union, family, etc., man lifting each other up towards the goal revealed in Christ of "godmanhood."

    So, to bring it back to the Aeneid, Virgil doesn't seem to suggest that Aeneas is a puppet. He could stay with Dido, or kill himself in despair over losing her, but he doesn't. That's why he's the hero, because he sees, if ever so vaguely, the logoi he must follow, and suffers to do so ("agony" in the Greek sense of contest), even though he also fails to wholly actualize.

    The problem that comes up in logo-skepticism is that nominalism and the elevation of the individual/particular has made it so that the logos must be embodied in Rome as it is, because culture and institutions are considered to be prior to any determinant logoi, the ground of their being. And so you get bad takes like: Virgil must be simply "writing propaganda," but then "sticking it to Octavian with his subtle skepticism" rather than the idea that Virgil (being exposed to Stoicism, Platonism, and the Peripatetics) simply recognizes that unities struggle to fully attain their form, and often fail, but that this struggle is needed for them to be anything (and anything good).



    I would argue that the Orthodox use of "passions" is at least somewhat different than Plato or Aristotle or colloquial usage. I would say that Orthodox "dispassion," very crudely, has to do with a state of self-possession and self-command. It is the idea that "thoughts" (again in a wide, Orthodox Christian sense) do not move you. So there is that connection of being unmoved by passions, and a desire to achieve a state of dispassion, but I don't see the Orthodox view contradicting the idea that passions are primarily things that happen to us in the postlapsarian state. That's why Orthodox on the whole view passions as bad and desire a state of dispassion (although I realize there are a few exceptions, who you have read). So my hunch is that the Orthodox might admit that the deified individual has motive powers similar to the passions, but that they would not generally call those things "passions."


    Yup, I think that's quite correct. Saint Isaac the Syrian is a good example:

    "The world" is the general name for all the passions. When we wish to call the passions by a common name, we call them the world. But when we wish to distinguish them by their special names, we call them passions. The passions are the following: love of riches, desire for possessions, bodily pleasure from which comes sexual passion, love of honor which gives rise to envy, lust for power, arrogance and pride of position, the craving to adorn oneself with luxurious clothes and vain ornaments, the itch for human glory which is a source of rancor and resentment, and physical fear. Where these passions cease to be active, there the world is dead…. Someone has said of the Saints that while alive they were dead; for though living in the flesh, they did not live for the flesh. See for which of these passions you are alive. Then you will know how far you are alive to the world, and how far you are dead to it.

    During compline, when we ask for God to strengthen and correct us that we might awaken to "hymn [His] incomparable glory all night long," the goal is not to be free of affect (it is rather to be filled with it) but of inappropriate affect (and presumably for monks and nuns, to not accidentally sleep through the midnight service :rofl: ). There is less separation between emotion and thought in general though. The "heart" as the "eye of the nous" has both, there being a sort of intellectual emotion too.
  • On Matter, Meaning, and the Elusiveness of the Real


    You put your finger on a fascinating phenomenon. When I returned to Hume recently, I was astonished to find that he is not at all what I would consider a sceptical philosopher; then I realized that Descartes' reputation is also a complete misunderstanding, since his project was precisely to resolve the nightmare he conjures up. The same goes for others, as well. It's very confusing. Is there any philosopher since Descartes who has actually defended, as opposed to trying to resolve, scepticism? Earlier scepticism was different in that it was proposed as a basis for achieving ataraxia or apatheia and so living a happy life.

    Yes, you raise a good point. By "skeptical" I think many critics of "skepticism" do mean precisely a methodological skepticism. This move essentially tears up most of the "web of belief," including central threads, and then attempts to rebuild the entire thing based on a very small set of remaining presuppositions. As I mentioned in an earlier thread, I think this has the effect of making philosophy chaotic, as in "strongly susceptible to initial conditions," where you get radically different "skeptical solutions" based on which part of the web was allowed to remain standing at the outset. Hence, a very large diversity of "camps" or "schools" developing out of common sorts of skepticism as a methodological starting point. Also, appeals to pragmatism over truth here make the difference between the camps seem less secure.

    This is different from an approach that starts from what is known and then tries to explain a metaphysics of knowledge. In terms of empiricism, this wasn't unknown to the ancients. Gerson has a good article on Neoplatonic epistemology, and this was basically Plotinus' camp's main thrust, that the empiricist is incapable of knowing anything. They only know something like representations of things, and can never step outside them to compare them with reality, and underdetermination leads them towards equipollence, which might aid ataraxia, but certainly not any further move into the erotic ascent and henosis. But if an epistemology cannot secure even our most basic, bedrock beliefs, what we already "know we know," then the claim is that the epistemology has obviously failed.

    Plus, the modern paranoid or depressed skeptic is basically following the same route and just taking different emotional import from this.

    In this context, I can absolutely see why Hume is considered a skeptic. His position is skeptical re causes as generally understood. Saying "x can't exist or be attained so we should use the word x for y instead," (e.g. constant conjunction for causes, deflation, something limited to a specific human game, or coherence for truth, objective for the phenomenal realm, etc.) is arguably an equivocation and denial of the original x, or at least that's the critics' claim (and the claim of some supporters who embrace skepticism as the proper conclusion of these arguments.) This is not unlike how the immediate successors of Kant took his philosophy into dualism and subjective idealism, even though his letters show he didn't want to reach these conclusions. But what one wants and what one's philosophy suggests or at least allows (fails to exclude) can be two different things.

    There is also Hume's thing about consigning the bulk of non-empirical human "knowledge" and past philosophy to the flames, or the unresolved problem of induction (made particularly acute by the prior move to make abstraction a sort of induction) being resolved by just playing billiards and forgetting about it. So too, the guillotine sort of assumes at least a mild sort of anti-realism as a premise, since if anything can be "truly good" it can presumably be "truly choiceworthy," and thus there can be syllogisms using facts about the choiceworthiness of actions that suggest (although do not force) action.
  • [Feedback Wanted] / Discussion: Can A.I be used to enhance our ability to reflect meaningfully?


    I tried to make an interlocutor, because I agree with you - The heart of philosophy is putting forth your ideas and challenging them.

    It's too sycophantic is my problem with it. I really think that's one of the bigger risks of the technology. It just affirms whatever you feed to it, even praises it, and asking it to be critical doesn't seem to work that well. I imagine that partly this is done to keep customers happy.

    I have experimented with putting some manifestly stupid ideas to it and I still get fire emojis about my genius.
  • On Matter, Meaning, and the Elusiveness of the Real


    Long ago, when I was philosophically active, there was a widespread opinion that scepticism was vanquished and could be put to bed (or its grave). It turns out that was not so. It seems to be still alive and kicking. Cavell was right - we need to get deeper into the phenomenon and understand better where it comes from. Part of that is noticing that Cartesian scepticism is not the only variety of scepticism, and that denial of common sense reality goes back a long way in philosophy, arguably right back to the beginning. It may be that it is an essential feature of any enquiry that we might recognize as philosophical. But it also seems to be found useful in religion - another point where religion and philosophy seem to coincide or at least to be near neighbours.

    It's a very interesting question why radical skepticism existed in the ancient world (and was indeed somewhat popular for the sort of position it is, being addressed across centuries from Parmenides to Augustine) but not really anything quite like Cartesian skepticism.

    I think part of the problem with attempts to "put skepticism to bed" is that they are often using something like Kripke's "skeptical solutions." For instance, in his context, "the argument from underdetermination has real strength and cannot be defeated... but that's ok because..." There is something similar going on in Quine, and a lot of other thinkers. So, perhaps it's partly that the skeptical solutions are not considered acceptable, or are themselves considered to be radically skeptical. I have certainly seen philosophers say this, not only about Wittgenstein, etc., but even about Kant's attempted solution. And then Hume was self-consciously riffing on ancient skepticism.

    There was, however, a pretty long period between the decline of the Academics and Descartes where skepticism was, if not entirely dead, at least in a coma, which is interesting too.

    One interesting historical facet is that the original Empiricists (Sextus and co.) were seen as skeptics, and for reasons not that unlike Hume's and the later modern traditions' arguments (underdetermination being the common thread I can see, but also, if not a common metaphysics, a denial of common metaphysical positions).

    IDK, I'd love to find a good treatment of the history. My inclination is that some of the resistance might also have to do with the "thin" anthropology used in some resolutions to skepticism, which is unappealing to some.
  • An Analysis of "On Certainty"


    You are presupposing that it is a mere presupposition. How about thinking that in the absence of any possibility of demonstrating that a faculty of noesis exists, the conclusion that it does not is warranted? Or more modestly a pragmatic conclusion that if it cannot be demonstrated to exist then it is of no philosophical use?

    This is what the eliminativist says about consciousness. Of course there are demonstrations, that's why it was the dominant theory. But if one presupposes epistemic standards that remove it by default (much as behaviorism and eliminativism make consciousness epiphenomenal by default) one hasn't done much of anything except beg the question.

    And that's not really the point. If such a faculty is accepted as a hinge proposition, it shows that the theory of hinge propositions itself is not presuppositionless, but fails to obtain given certain assumptions.

    Where the empiricist tradition has ended up, bottoming out in denying consciousness and denying truth as anything more than a token in "games," along with the radical skepticism engendered by arguments from underdetermination, which are undefeatable given its premises (likewise for Hume's attack on induction), is arguably a reductio against the initial assumptions.
  • Two ways to philosophise.


    Parts of this do seem consistent with the classical faculty psychology that dominated antiquity through the early-modern period though. The passions and appetites are an engagement with the world. The concupiscible appetites (pleasure/pain) are a sort of pull towards union or away from union (aversion).

    So I don't disagree with . I was actually thinking of Saint Thomas because I read the section on the appetites and passions in ST for my incomplete paper on Virgil a few weeks ago. And indeed, a lot of Catholic thinkers (e.g. Sokolowski) pull together phenomenology, enactivism, Thomas, and Aristotle.

    There is a sense in which the passions are something we do, as one of our powers/faculties, and yet another sense in which they happen to us, in that they are often involuntary, and indeed often run counter to the will. Likewise, it would be bizarre in most settings to say that having a heart attack is something we do, although in some sense it is still true.

    The passions and appetites aren't like a heart attack though. They can be commanded by the will, even if they are often recalcitrant. And our ability to command them can be improved with training; that's one of the ideas of asceticism. So, the other writer I was thinking of is Saint John Climacus, who I have been reading at night, and this is precisely what the monk aims at with "blessed dispassion," not the elimination of the appetites and passions per se, but their right orientation and ordering (granted, it sometimes seems like the latter in some passages). This is why, if you pray the Horologion, you end up reciting Psalm 50 many times a day. It's the "cultivation of blessed tears" and repentance, as Climacus would put it, a right emotional state that is willed.

    Emotions are always situated in lived contexts and cultural practices — they are not the same everywhere, for everyone, in every moment. This supports the idea that emotions are interactive and historical, not static mental contents.

    Right, or just as all cats are different, or each cat in different moments—yet they're still all cats. There is an idea in Aristotle, developed by Thomas, that I really like, which is that the virtues are like universals for action. Hence, a virtue like temperance perhaps takes different forms in different cultures and epochs, or in each instance, yet remains a single principle. It's not a perfect analogy though, because obviously cultural standards can themselves be more or less temperate, and more or less conducive to virtue.
  • Two ways to philosophise.


    And there is this temptation in both disciplines, I've noticed, to "universalize" these methods to a kind of ontology. I think the ontology you get with science is some kind of indirect realism that the guesses approximate towards, at least with respect to our representations (know-that) rather than know-how. I think the ontology you get with history is like a constantly evolving reality that's never still.

    That's an interesting point. I'd generally agree. Historians can sometimes absolutize historicism and scientists of a certain persuasion can sometimes absolutize their inductive methodology into a presumption of nominalism and the idea that all knowing is merely induction. In the latter case, this is sometimes quite explicit, e.g. Bayesian Brains.

    I'll just add that the classical formulation of the difference is that science deals with the universal and the necessary. History is always particular though. Indeed, it's the particular in which all universals are instantiated. This doesn't preclude a philosophy of history, but it does preclude a science of history. Jacques Maritain has a very short lecture/book on philosophy of history that makes this case quite compactly, and he's drawing on the "traditional" distinction (in the West) that was assumed for many centuries.

    In terms of a logos at work in history, I certainly think we can find one, just not a science. Hegel's theory seems to explain some aspects of 20th century history quite well. There is a sort of necessity in the way internal contradictions work themselves out, and you see this same point being made in information theoretic analyses of natural selection that look at genomes as semipermeable membranes that selectively let information about the environment in, but arrest its erasure. Contradiction leads to conflict that must be overcome.

    But you cannot predict this sort of thing in any strict sense, because it is always particular. A great image for this is in Virgil. Virgil is very focused on the orientation of thymos (honor, spirit) in service of a greater logos (the good of the community, the historical telos of Rome, and ultimately, the Divine). However, although his gods (themselves a mix of personified man-like deity and more transcendent Logos) set the limit of logos in human history, characters only ever recognize them when they leave. I've been rereading the Aeneid and this seems true in almost every case; only when they turn to go, when we are "past them" in the narrative, are they recognized as gods by man. It's very clever, and works well with elements in the narrative that are skeptical of the ultimate ability of man to consistently live up to logos.

    Hence, history can be more than Gibbon's "register of [the] crimes, follies, and misfortunes of mankind." For instance, Gibbon's observation that the switch from citizen soldiers to professional legions "elevated war into an art, and degraded it into a trade," applies quite well to the United States after Vietnam—a recognition of a universal in the stream of particulars. (I like Durant's "every civilization begins as a Stoic, and dies an Epicurean," too, even if it isn't always true.)
  • Two ways to philosophise.


    A feeling is generally seen as something that happens to us, whereas an activity is generally seen as something we do. To define feelings as activities is a bit like saying, "Internal things that happen to us without our doing anything are things that we do."

    Well, a person's passions are their passions. They are also something we can have more or less control over, through the cultivation of habits (virtues/vices) and the will's ability to overcome the passions.

    So, I don't think I would locate the passions outside of us, or as wholly something "we don't do." However, I would at least locate some of them outside the will. For instance, when a man cheats on his wife, even though he wished he hadn't (giving in to an appetite/passion), we say he has suffered from weakness of will, and perhaps even that his act was not fully voluntary. Whereas, when a man doesn't cheat on his wife because he sees this as truly worse, we don't say that he suffers from "weakness of passion."

    The passions are properly ordered to the will and intellect. When they "happen to us," as often is the case, the opposite is happening to some degree. I wouldn't describe this as "coming from without" though, but rather, as Plato does, as a lack of unity. That is, what we have is warring parts, and a whole that is less unified in its aims (less perfected).
  • Where does logic come from? Some thoughts


    I'm not sure if I'd agree that lift is a cause of flight.

    Fair enough, I'd agree in a sense. A principle is something that unifies a diverse number of causes. It is what makes many instances of lift, natural selection, etc. the "same" whilst obviously being different in each instance, thus allowing for "the many" to be known through a unifying "one" (e.g. entomologists can know insects well, even though there are 60 million individual insects for each human, and one never closely observes even a tiny fraction of these).

    The particulars need not have absolute priority though (either epistemically or ontologically). For instance, the difficulty in saying that "infection" is never the cause of infectious disease, but only individual interactions between viruses/bacteria and cells, is that this itself can be further broken down. We could also have the demand that virus/cell interaction is always really caused by molecular interactions. This is the drive towards reductionism/smallism. Yet it has to make certain assumptions, for instance, that wholes are always nothing more than the sum of their parts, else the continued decomposition ceases to be warranted. And, while smallism is not prima facie any more reasonable than "bigism" from an ontological point of view, it is also unwarranted from an epistemic point of view, given that even the basics of molecular structure cannot be reduced to physics.

    I'll throw out here the difference between linear (temporal) causal series, which are accidental, and hierarchical causal series. The first is the classic example of one domino knocking over another, or a ball breaking a window. The second is the example of a book resting on a table, or a chandelier hanging from a ceiling. For the book to be on the table, the table had to be there. This has to be true at every moment or interval; there is a vertical—as opposed to horizontal—element to efficient causation.

    Likewise, the chandelier hangs due to its linkage with the ceiling at each moment. Neither the ceiling nor the table are dependent upon the book or chandelier sitting/hanging on them, but there is dependence (priority) in the other direction. So even efficient causes have these different elements of priority and posteriority that help our analysis. The plane is generating lift at each interval, unless it is stalling (this is a larger principle of fluid dynamics). And at each interval it has to be the case that fluid dynamics is such that lift works in this way (formal causality). Or, for another example, we could consider human decision-making. Man being man (a particular whole) is always prior to man making a decision as man, and this is a sort of vertical priority that affects both efficient and formal causes.

    Not necessarily. I am a determinist and a free-will Libertarian. How do I reconcile the two? I see freedom as having access to as much information as possible. By having access to as much information as possible, you are able to make more informed decisions. By having access to more information, you might choose differently, or you at least have the power to choose differently than you would have if you didn't have the information.

    Many people make this assertion that determinism implies that you have the feeling of being forced into something you didn't want to. I say that determinism implies that you have a feeling of naturally choosing what decision is best. Your decisions and actions would feel natural, not forced, if determinism is the case. You always make the best decision with the information you have at that moment. It is only your fear of the consequences that you cannot foresee that make it feel forced. Thinking that you should have chosen differently only comes after the consequences have been realized (after you have more information).

    Why would this not be compatibilism?

    Anyhow, you highlight a very important element that is missing from many considerations of freedom, both the idea that ignorance is a limit on freedom, and the idea that freedom involves understanding why one acts. I tend to want to frame liberty in terms of (relative) self-determination and self-governance (as opposed to being undetermined).
  • On Matter, Meaning, and the Elusiveness of the Real


    This is something I have noticed too. I'm not totally sure why, since historically arguments for God rely on exactly the opposite sort of pitch, and they largely still do for religious thinkers who are part of traditional religions—everything being very well nailed down. I think it has something to do with the particular allure of New Age spirituality as a sort of freedom from both traditional religious practice and secular materialism. But I also think the challenging of epistemic norms helps to empower the individual, and "freedom as authenticity" is very important there too.
  • Where does logic come from? Some thoughts
    I am pretty sure I had almost this same conversation re reasons versus causes with , using the stop sign example. Maybe it was a stop light :rofl: .

    I would just suggest that a difficulty here is that "causes" is often used very narrowly, as always referring to a linear temporal sequence (either as extrinsic ordering, or a sort of intrinsic computation-like process), but also very broadly as encompassing the former, but also all "reasons." Or, causes might also be used narrowly in a counterfactual sense. "Reasons" often tend to include a notion of final and formal causality that is excluded from more narrow formulations of "cause."

    So, it's tricky. Lift is a "cause of flight," but you won't find the "principle of lift" as an observable particular in any instance of flight. Likewise, moral principles are causes of people's actions, but you won't find them wandering about the world.

    It's probably one of the most challenging disambiguations. There was once an extremely influential book called the Book of Causes that is hardly ever taught any more because no one knows who wrote it (and it is derivative of Proclus' Elements). I mention it because it's a great example of how extensive the understanding was in former epochs. Essentially, anything that did not occur spontaneously for "no reason at all" (which was presumed to be nothing) was considered to have a cause, and indeed many causes, because proximate causes were arranged under more general principles. Which is, interestingly, not a position that demands any particular ontology, and works as well for idealism as for physicalism despite its rather stark decline.
  • The decline of creativity in philosophy


    I think it is, strangely, at least partially a problem of too much creativity in academia, which in turn leads to stagnation through a poor signal to noise ratio. There is an incentive for radical rereading, radical critiques—novelty for the sake of novelty, etc.—because this gets attention in a massively oversaturated market. Even saying something absurd can be a good way to get citations. It's publish or perish, and even fields like classics have seen this weird phenomenon where publications and the number of journals soared even as enrollment plummeted and Greek and Latin vanished from most high schools, along with even translations of the classics. This is particularly evident in some fields, where consensus oscillates wildly based on the same old evidence (Biblical studies being a prime culprit).

    This is, in part, an effort to replicate the style of the technical and natural sciences. "Research" becomes the key output of the academic. So too, there is the idea of "progress" borrowed from technical fields, which was generally given a political tilt in philosophy and the humanities, which in turn led to siloed echo chambers. The push for political progress paired with the drive to novelty leads to inanity and, at the limit, insanity.

    That's part of it. There is also the fact that, at a certain point, you cannot get any more radical. You reach maximal nihilism or relativism, or maximal authoritarianism in the case of fundamentalism. You can't keep making your art more abstract after a certain level. So, in terms of the general modern push towards "creativity" envisioned as a sort of "freedom as potency," we seem to have approached a sort of limit.

    The other thing is that philosophy is more professionalized now. Philosophy was also more stable (less "creative") in late antiquity and the later middle ages. I don't think this was wholly a bad thing. It made for more rigorous thought.

    It was more dynamic in the early modern period and earlier in antiquity. You have far more new movements starting in these periods. You do get more creativity, but also more bad, even widely damaging philosophy.

    I also think the new movements at least began vastly less sophisticated, and in the early modern case they often did quite poor justice to what came before them. There is a huge democratization that comes with the printing press (i.e. "who can sell the most pamphlets" versus "who can win enough admiration to be hand-copied at great expense by other lifelong contemplatives"), which happened to occur during the massive socio-political firestorm of the Reformation, which created a drive to just tear everything down and destroy it (regardless of whether it had been understood) in order to create something new. That's obviously very broad, but I think it's generally true. Early modern thought is an explosion of creativity and also hugely historically forgetful.

    Late modernity is more akin to late antiquity than the medieval period though. There is a sort of fixed plurality that seems to have calcified. It's more of a similar historical moment too. That said, the focus of the philosophers of late antiquity tended towards the contemplative as time went on, which is quite the opposite of today.
  • Two ways to philosophise.


    Be that as it may, let's suppose someone claims that there is an overarching standard and that Jake has violated it. Does it follow that the person has a thin and not a thick conception of paradigms, or that Jake is being asked to put down baggage? I don't see why it would. All that is needed is a common thread running through every paradigm, from which the standard can be derived. The paradigms can be as thick as you like.

    Yeah, that's one of the points I wanted to make. There are certain assumptions that need to be made for it to be the case that all general epistemic principles (or any at all) must require a standpoint outside any paradigm to achieve. I don't think those are good assumptions though.

    Consider Plato's "being ruled by the rational part of the soul," as an epistemic meta-virtue. The basic idea that, ceteris paribus, one will tend towards truth if one prefers truth to falsity and one's pursuit of truth is not derailed by other concerns (passions and appetites) that are prioritized above truth, doesn't seem to require any move to a paradigmless space. It is rather a statement about all paradigms, made in the context of a particular thinker. But unless it is impossible to make statements about paradigms from without, this doesn't preclude its being true. Plato might be wrong, but he isn't wrong because his claim requires standing outside "Platonism."

    Note too that, if it is asserted that Plato can only speak about what is true for his own paradigm (that truth is bound to paradigms), this charge would apparently refute itself, since it would itself also be limited to a particular paradigm. Yet this objection would also seem to rest on the same absolutization of paradigms into "what we know" instead of "how we know."

    But this is also not an appeal to an axiomatized system or "rules," or epistemology as a system. Indeed, Plato has a marked skepticism towards language and sensible realities (including cultural institutions) as a whole, not a preference for a "perfected system" that exists within the context of these.
  • Two ways to philosophise.


    I think @Count Timothy von Icarus is especially interested in being in position to tell someone that they *should* put down some baggage they're carrying. The grounds for saying so would be (a) that this particular burden does not help you in making rational judgments, and (b) that Tim can tell (a) is the case by exercising rational judgment. (Stop thinking you need to sacrifice chipmunks to the river every spring so it will thaw, would be a typical Enlightenment example.)

    I'm not sure how close that is to your view (or if it is in fact Tim's), but that's the sort of thing I imagine is on the table when people say they want an overarching standard.

    Sure, in a very broad sense. If epistemology can never identify better or worse ways to achieve knowledge it is useless. Or, if knowledge is always wholly defined and contained within some paradigm, such that "sacrificing chipmunks truly thaws rivers just so long as you're a member of a certain community that currently accepts this," it seems perhaps to be equally pointless.

    There do seem to be some epistemic "rules" that it is quite hard to think of counterexamples to, e.g. "just making up observations to support your claims" is not a reliable way to achieve knowledge. More extreme, "intentionally sabotaging your research program" is not a reliable way to attain knowledge. Not accepting arguments from premises known to be false, or where the conclusion doesn't follow from the premises, might be others.

    Likewise, there are habits that seem to be more or less conducive to attaining knowledge, i.e., "epistemic virtues." The opposite of this claim would be the idea that no habits (e.g. epistemic humility) can be said to support epistemic success in general.

    But note that this does not require the Enlightenment conception of reason as primarily a sort of discursive rule following, or a sort of "method," "system," or "game." Later critiques of reason tend to leave this assumption firmly in place and rely on it heavily. I don't think it's an adequate notion of reason, as demonstrated by where it leads.

    To address your earlier question about the limits of reason, I would point out that the claim that reason cannot adjudicate between paradigms or world-views is, of course, a gnostic claim. One presumably knows this if one claims it to be so. Yet to have recognized a boundary is to already have stepped over it.

    Now, if we claim that reason is in a sense isolated within "world-views and paradigms," we face the odd situation where some world-views and paradigms resolutely deny our claim. They instead claim that knowing involves ecstasis, it is transcendent, and always related to the whole, and so without limit—already with the whole and beyond any limit. And such views have quite a long history.

    Our difficulty is that, if reason just is "reason within a paradigm," then it seems that this view of reason cannot be so limited, for it denies this limit and it is an authority on itself. Our criticism that this other paradigm errs would seem to be limited to our own paradigm.

    The positive gnostic claim, to have grokked past the limits of intelligibility and seen the end of reason from the other side, faces an additional challenge here if we hold to the assumption that any such universal claim must be "from nowhere," and itself issued from "outside any paradigm," since it is also generally being claimed that precisely this sort of "stepping outside" is impossible. But perhaps this is simply a misguided assumption. After all, one need not "step out of one's humanity" to know that "all men are mortal." One can know this about all men while still always being a particular man.

    So, those are my initial thoughts on the idea that reason cannot adjudicate between paradigms. It seems this must remain true only for some paradigms, and one might suppose that being limited in this way is itself a deficiency. After all, what is left once one gives up totally on reason as an adjudicator? It would seem to me that all that remains is power struggles (and indeed, some thinkers go explicitly in this direction). Further, the ability to selectively decide that reason ceases to apply in some cases seems obviously prone to abuse (real world examples abound)—in a word, it's misology.

    But none of this requires stepping outside paradigms, except in the sense that reason may draw us outside our paradigms (and indeed this happens, MacIntyre—RIP—was drawn from Marxism to Thomism). To know something new is to change, to have gone beyond what one already was. That's Plato's whole point about the authority of the rational part of the soul. The desire for truth and goodness leads beyond the given of current belief and desire, and hence beyond our finitude.

    I'll just add that the absolute, to be truly absolute, cannot be "objective" reality as set over and against appearances, but must encompass the whole of reality and all appearances. Appearances are moments in the whole, and are revelatory of the whole. Appearances are then not a sort of barrier between the knower and known, but the going out of the known to the knower—and because all knowing is also in some sense becoming—the ecstasis of the knower, their going out beyond what they already are in union with the known.

Count Timothy von Icarus
