Comments

  • On Purpose


So, science is embedded in the society it operates in and takes on many of the values of that society. Sure, but you make it sound like some sort of conspiracy. The difficulty some scientists have in getting society to accept their well-studied and critical understanding of the world makes it hard to accept the claim that politics is unfairly hindering the inclusion of human values. It is exactly human values - money and power - that are muddying the water.

    How so? People can have ideological blinders and biases without being involved in anything like a "conspiracy."

Consider that in ancient Greece, power structures influenced thought. Socrates was executed after all. Others ended up in hot water. You see the same sort of thing in ancient Rome, or even more so in Islam. This was obviously true within the context of the Latin Church and the Eastern Churches as well. Have we really reached the one time in human history where thought has become free of this sort of ideological influence? Are liberalism and our dominant form of scientism truly "the clear light of reason one gets when superstition and error are removed?"

    Hardly, right? It doesn't seem like our era should be unique. It's just that ideology is more transparent when one lives within it, especially when it has "gone global."

I think your example makes a point exactly contrary to the one you seem to think it does. It is the human values embraced by classical liberalism that corrupt the process. It seems your problem isn't the exclusion of human values, it's the exclusion of the particular values you share.

Ha, well that was exactly the point I was trying to make. "Goodness, Beauty (and sometimes Truth) only exist in your head, as a privatized projection, a sui generis hallucination produced by the mysterious, but ultimately mechanistic mind," obviously isn't neutral. It is not a view that arose through sheer subtraction, i.e., just "stripping away old narratives and superstitions," to get to the "clear view of reason." It is itself an ideological construct, a particular tradition. And the motivations for it have been variously political, economic, religious, etc., as well as philosophical. The idea of freedom as primarily being "freedom from constraint," and "the ability to do anything" (i.e. freedom as power/potency) seems quite relevant here too (and it's a notion of freedom that comes out of early-modern theology, man being the image of a God who was sheer will).

    Does your whole argument rest on the basis of absolute, i.e. non-subjective, morality?

I'm not sure. Obviously, the conclusion that privatizes value and purpose, and renders it somehow "illusory," seems less flawed if it is somehow right (I don't think it is though). But I don't think my point really relies on making a judgement one way or the other, since we could also just say that we merely lack warrant to reach this sort of conclusion—that the conclusion is reached for other reasons, and that it relies on questionable presuppositions. I mean, a lot of ethicists in this tradition (e.g., Hume) don't argue for anti-realism so much as assume it, and then try to show that their assumption cannot be challenged (crucially, based on what they consider to be acceptable evidence). The common anti-realist move (which we can see on display in other threads up right now) is not to "prove" anti-realism, but rather to assert it and then to deny any counterarguments (arguably, to just set the bar for evidence arbitrarily high).

    That is: "I don't have the burden of proof because I am saying something doesn't exist." But with things as basic as truth and goodness, this seems extremely questionable to me. For one, the hardcore eliminativist can do the exact same thing with consciousness. They can claim it doesn't exist, and easily stonewall any attempt to demonstrate its existence. Does that mean we don't exist? Denying that anything is ever good or bad for us seems on par with this. I have never met, or even heard of, a single person who espouses such a view who actually acts like they have the courage of their convictions here. In that, it is like radical skepticism.
  • On Purpose


    The question of whether life, the universe, and everything is in any sense meaningful or purposeful is one that entertains many minds in our day. The mainstream view is probably that the Universe in itself is meaningless, and that whatever meaning we seek or see is projected or manufactured by us, as biological and social beings. The universe itself is kind of a blank slate, ‘atoms and the void’, in Democritus’ terms, constantly being re-arranged through energetic dynamics into a never-ending cascade of forms.

    I agree in principle, but I would question the exact way in which this is "mainstream." I don't think it was ever overwhelmingly popular as a position accepted by your average person on the street, or even a majority of people. It was dominant within the narrow silo of Anglo-empiricist philosophy and with some scientists, and I think even that is less true today than it was in the 20th century.

Still, I can see why it might be considered "mainstream," because 'something like it' seems to be a very common framing. That is: "when we put our 'scientist hats' on, we must suppose the world is purposeless and valueless. We focus on 'description'" (where "description" is axiomatically assumed to exclude value, which is privatized). This isn't true for all science though. No one expects medical researchers to do this, or zoologists, or even evolutionary biologists, let alone social scientists.

Originally, this sort of separation of value and purpose from a wholly mathematized world (which, of course, excludes value by definition, axiomatically) was a sort of "methodological bracketing." Yet it was quickly absolutized into a full-blown metaphysics, first a theistic one where all value and purpose comes from without, an external imposition (e.g. Divine Command Theory), and later atheist forms that tended to just leave all the old theologically justified assumptions in place, but then chop God out and put the human individual, social community, or a sort of panpsychic "will soup" in its place.

    I used to think that the reason we still tended to teach science in terms of 19th century metaphysics (ideas no longer popular in physics itself) was just inertia. The old model was intuitive and no one clear paradigm had come along to replace it, so it continued on by default. I now think there are serious problems with such a narrative.

First, the model isn't intuitive. It makes explaining the most obvious facets of our experience of the world impossible, dismissing most of human experience as in some way "illusory," and leaves all sorts of phenomena, particularly life and consciousness (quite important areas), as irresolvable mysteries. Second, one could easily argue that the very reason 19th century metaphysics hasn't been replaced in the teaching of chemistry, biology, etc. is that alternatives have been denied any inertia of their own by positive efforts to keep the old model in place, including censorship. One can see this in the EES controversy in biology, or in quantum foundations, etc. People get harassed out of their fields for challenging this paradigm, even though it is arguably a major factor in "science skepticism" and the emergence of an entire area of "new age 'science'" (precisely because it leaves so much out).

If one looks back to earlier epochs, one sees that shifts in the "scientific model" that predominates in societies, what C.S. Lewis calls the "backcloth," were often resisted for political and ideological reasons. I don't think our own era is any different here. A view that makes all questions of value and purpose "subjective" aligns with the hegemonic political ideology of our era by effectively privatizing all questions of value, all the way down to the level of metaphysics and "what science says is true." It's worth remembering here that the current model grows out of a particular theology.

Such a view, by making all questions of goodness, usefulness, beauty, etc. "subjective," also helps to support the anthropology assumed by classical liberalism. This thin anthropology ("utility" as a sort of black box which decides all intentional human action, but which cannot itself be judged, i.e., voluntarism) is hugely influential in contemporary economics and public policy. The entire global political and economic system is organized around such a view, and considerable effort is expended to make man conform to this view of him, to positively educate him in this role (e.g., highly consequential economic "shock treatments" aimed at privatization and atomization).

So, I think it's worth considering the exact way in which such a view is, and remains, "mainstream." It isn't so much a firmly held belief (although it is for a minority) as a sort of dogmatic position that is thought to be necessary for "modern society." The privatization of value (and its demotion to illusion) builds support for a particular political theory grounded in voluntarism, namely liberalism. One problem with this is that, if you privatize purpose, goodness, and beauty, you seem to face an inevitable slide towards privatizing truth as well, and making everything into a struggle for power. Indeed, there is a hidden voluntarism at the bottom of the model that suggests the Will to Power as the ultimate primitive.

This way of thinking made perfect sense in a world where observation and common experience guided inquiry. But in his Physics, Aristotle extended teleology into cosmology, famously asserting that heavy bodies fall because their “natural place” is the center of the earth. This kind of explanation—while meaningful in its own context—was ultimately, and rightfully, displaced by the rise of modern mechanics. Galileo showed that bodies do not fall because of their purpose, but due to forces and motions that could be described mathematically, without reference to final causes. Physics since then has largely dispensed with teleology (to the point where it was practically taboo!).

    Well, these are "inclinations" and "desires" in an analogous sense. They aren't meant to imply consciousness, only the way a thing's nature determines how it interacts with other natures. Prima facie, it is no more anthropomorphic than claiming that rocks and stars "obey" "natural laws." Arguably, the second is actually more anthropomorphic. And is one reflects on the language of "obedience" and "law" that dominates modern science, I think it's easy to guess the type of theology that originally motivated such a shift. Both imply an ordering. The latter just implies a wholly extrinsic, imposed order. First this order was imposed by God. Now it's more common (at least in secular academic philosophy) to see it as a product of man, either "the mind" or "human language," which imposes the order from without.
  • The Old Testament Evil


    Your argument seems to hinge on the idea that there were no children on earth during the Flood; but the very previous chapter, 5, outlines in detail the lineage as normal procreation and Noah is said to have three sons in chapter 6.

    Does it? It says Noah has his sons when he is 500 years old. His sons are all a century old when the Flood comes, when Noah is aged 600. Noah's sons are the last births mentioned in the text. If one reads this literally, I'm not sure how fair it is to make assumptions about human life cycles at these scales, particularly if one considers the radically different biology that is being suggested elsewhere.


Also, it is worth mentioning that these kinds of rejoinders, like Rashi’s, seem to fall prey to violating the principle of parsimony. Nowhere in the OT does it suggest remotely that there were no children or that the beasts were shapeshifters: you’d think it would mention that, or at least not mention things which imply the contrary.

    There is no "principle or parsimony" for reading historical texts that says: "stick to just one text." Really quite the opposite. We try to confirm things through as many traditions and texts as possible. I am not sure where Rashi got that idea though, if it might have been in an earlier tradition.
  • Assertion


There is no such thing as a language, not if a language is anything like what many philosophers and linguists have supposed. There is therefore no such thing to be learned, mastered, or born with.
    — Davidson, A Nice Derangement of Epitaphs

    Behold! The power of analytic philosophy!

    There are conventions, to be sure, but those conventions do not determine the meaning of an utterance - this is shown by your example, that any phrase can serve as a password.

    What's the reasoning here:

    P1: Any phrase could be used as a password.
    P2: ????
C: Therefore there are no languages to learn and linguistic conventions don't determine what words mean.

Prima facie, that's a ridiculous claim unless one runs back from the bailey to the motte in order to massively caveat it so as to make it an entirely different claim. That the claim can be written down, and anyone can understand it, would suggest there are such things as conventional meanings of words and languages, for instance.




Incidentally, I doubt whether using it as an example of a declarative sentence or of a statement or of a proposition or of a claim prevents it from being any of those. Indeed, it clouds the issue to take any clear distinction between any of those varieties of hot air for granted.

    On the other hand, names seem to stand apart as a different kind of hot air. No? (E.g. they seem to be generally simpler in semantic structure and function.) And I wondered whether considering the situation of using a name as an example of a name, and this not appearing to cause it to cease being a name, might lead you to reconsider your reasoning in the case of assertions.

    Perhaps I ought to have chosen a different analogy. Is a table not a table when presented as an example of a table?

    If I use it mostly as a chair, perhaps it ceases being a table. But then I'm hardly presenting it as an example of a table.

All fair points. I wasn't using "name" in a very specific or technical sense, just pointing out that in the case of examples or quotations it's often an assertion of some sort that isn't itself being asserted, but is instead nested within another assertion. I've always come at this from the way Aristotle uses grammar to help justify his embryonic notion of the Doctrine of Transcendentals. To say: "Theseus is standing" is also to claim that such is the case, to say that it is so, and thus true. And to say "a man," or "a duck," is to say "one man," or "one duck" (the theory of measure).

My thoughts then have generally tended to be that, even if there are some examples where assertoric force isn't implied, this is sort of irrelevant. Indeed, I'm not sure what questions it would be relevant for, because I normally see proposed "counterexamples" brought up in contexts where they amount to non sequiturs or nitpicking. I'd say that, even if assertoric force isn't always implied, it usually is, and that this is a basic function of language, and essential to how it works.
  • The Old Testament Evil


    You mean:


    It is highly implausible that there were no children, including babies not developed enough to even be capable of sinning yet, on the earth when God flooded it intentionally; and Him drowning these innocent children was a means towards His end of cleansing the earth (to start over with Noah). Thereby, He directly intentionally killed innocent persons and murder is the direct intentional killing of innocent persons; therefore, God committed murder.

    However, God is all-just and it is unjust to murder; therefore, this "God" who flooded the earth was not truly God Himself (viz., the purely actual, perfectly good creator of the universe).

As noted, the story only mentions men, most of whom are several centuries old. I don't see how it isn't selectively reading to add what you find "reasonable" to a story where animals don't predate one another, men live centuries, etc. Other people are implied, but God creates Adam seemingly as an adult capable of speech and reproduction. Since these others are seemingly also created, why not assume the same for them? Indeed, we might suppose that fertility worked differently in this epoch, because Adam appears to be young for centuries without birth control and only has three children (although many more in the tradition; Judaism is not Sola Scriptura). Generally, a principle was that if one adds something to the text that makes God appear evil, one has erred.

If you're allowed to add things from outside the text, then justification is easy. Rashi proposed that God had Saul kill the Amalekite animals as well because they practiced shape-shifting through demonic arts, for instance.

    If the question is: "how is a 20th century fundamentalist reading of the Bible consistent with Christian or Jewish 'neoplatonism?'" I think the obvious answer is it isn't. You're mixing traditions separated by millennia, with vastly different theologies. If you want to know how someone like Philo or Origen found the Bible consistent with their theology, you need to read them and their understanding of Scripture.
  • Moral-realism vs Moral-antirealism


    But by defining “bad” in this way, one is essentially equating moral terms with desires or emotions. That leads to non-cognitivism—a position that comes with many of its own issues.

Let's start here. I don't see how you're getting that. If some emotions and sensations are, ceteris paribus, bad, how does it follow that anything that is bad must only involve those emotions and sensations?

What's the actual argument that this follows?

    P1. Pain is bad.
    P2. ???
    C: All value reduces to pain and sentiment.

    For me, a fact is an aspect of the world, and statements that reflect facts must be descriptive in nature. The key word here is descriptive—that is, concerned with how the world is. So if we are to give morality the status of facthood, then a clear metaphysical and epistemological account must be provided.

Sure, and many are. I think Aristotle is a fine place to start, but you could look at Boethius, Dante, or even much Eastern thought for good examples. There is a very robust metaphysics of goodness in the Aristotelian, Neoplatonic, Christian, and Islamic traditions. Milbank's Theology and Social Theory is pretty good on how the tradition you are advocating itself emerged from a very particular theological position, as well as the need for liberalism to privatize most questions of value (this was originally justified as pragmatic bracketing, but was later absolutized into a metaphysical denial of Goodness, Truth, and Beauty).

But I think the tendency in Anglo-empiricist thought towards simply begging the question is floating about here too. You say "descriptive" as if saying something is descriptive somehow suggests that it doesn't relate to value. That is only true if one has already accepted that there aren't truths/facts about value. Saying facts are about "how the world is," and then expecting this to somehow make the case for anti-realism, only works if you already assume anti-realism is true. Otherwise, there are simply facts about what is good or bad for different creatures (which, prima facie, seems to have strong support). In what way is stomping on an infant, or any baby mammal, obviously not bad for it? That seems to be a hard case to make.


    If someone broke into your house, stole all you owned, and tortured you, would you accept: "there is no possible way to prove that anything I did was bad for you," as a response? Sure, you can appeal to such acts as violating norms or laws. But why have norms and laws if nothing is good or bad for anyone?
  • The Old Testament Evil


    I see. Can you respond, then, to the three examples I gave and explain how they are allegories and what they are allegories about?

Not briefly lol. Consider that the Noah story is rarely tackled in isolation and is normally considered alongside everything that has happened up until then, particularly the "Third Creation Narrative" of Genesis 5, which seems to be nothing but a bunch of "begats" but actually has a lot of interesting things going on.

    First, when considering what is meant to be taken "literally," as a sort of crime report or history:

    - The first two creation narratives, which are delivered back to back, while not necessarily contradictory, appear to contradict each other on the order in which creatures came into being when read as a sort of ordering report.

    - The above was often read as the difference between the creation of the forms of creatures, which are spoken into being in the first narrative (by the Logos, Christ, in the Christian tradition), and the creation of matter in the second narrative, where man is shaped from the dust and God's spirit is breathed into him (and presumably all living things), which was often taken as relating to the Holy Spirit. But things appear to happen in different orders in each, and in the first narrative man and woman are created seemingly simultaneously, not so in the second.

- The text does not concern itself with where wives for the sons of Adam come from or how there appear to be peoples aside from those mentioned specifically in the genealogies. If you're committed to an extremely literal reading, then you're also going to find it impossible to justify the claim that "God killed children in the Flood." They are only implied.

- There are indications that predation (between any animals) does not exist in the Garden, but also that it didn't exist after the Fall either, until man's poor guidance of the cosmos led to progressive degeneration.

- Aside from how literally we are supposed to take the slide into predation, some of the language seems to be metaphorical. Cain's generations produce a picture of the human race that is typical of a bronze age heroic era. His descendants all have names that reflect a sort of independence from God, and a sort of heroic brutality, and it is from these that we get the first city. But note that Cain himself is banished to the "Land of Wandering," presumably "to be a wanderer," but then turns "wandering" into a city, and his line gives birth to the arts, and seemingly war. Women are not mentioned at all in this set of generations.

- The generations of Seth are a sort of mirror image of the generations of Cain, but they do mention women, and the names, while similar (and identical in some cases), are slight but important variations of the earlier names. But we also see how long people lived, and if you read closely you'll see that, while death entered the world with the murder of Abel, no one dies a natural death in the Bible until the end of the generations. Noah is the first person born after any man appears to have died a natural death.

- Man's descent into wickedness then seems to be a response to the revelation of man's mortality and finitude, although it also obviously involves a conflict over "beautiful women" (and thus generation).

    -The "sons of God" are sometimes taken to be rebellious angelic beings, although I think the most convincing reading is that these are the sons of the line of Cain.

- The story is itself an inversion of similar Near Eastern stories (a lot could be said here).

- Noah, having heard that he is to let the animals go forth to be fruitful and multiply, without any divine instruction, immediately gets off the Ark and begins butchering all the "clean animals." If read literally, as all terrestrial life having to be descended from just one male and one female of each species, Noah is here driving all clean animals to extinction immediately after God had him save them.

- Noah repeats Cain and Abel's mistake of thinking God is "just like me and likes what I like." The text is quite ambivalent as to what God thinks about this act. Actually, God responds by saying he will not bother trying to start over again because man is evil from his youth. That is, "even simple Noah of the line of Seth is already killing and consuming and failing to be a good steward, and so it isn't worth trying to simply 'reroll' and hope man chooses the right course, because he won't," is one interpretation that suggests itself. But note that, if read literally, Noah has already engaged in a mass extinction event, and you'd have to assume that God recreated all livestock and game animals without the text telling us that.

So, whereas the story of the Amalekites is taken more literally by pretty much all the Church Fathers (as suggesting a war and the destruction of the Amalekites, although we know this doesn't happen, because there are Amalekites centuries later trying to exterminate the Jews in exile; which is not to say it isn't also read allegorically), the early Genesis stories often weren't read literally, because they are extremely stylized in this way (literal readings are bizarre without adding all sorts of extra details that aren't in the text), and it certainly seems like they aren't meant to be read as straightforward reports.

    But I won't go on about the Amalekites because others have already responded to you on that one.

As an aside, references to allegorical reading crop up even in the NT. Consider Saint Paul on the story of Hagar and Sarah, which he calls allegory (although what exactly he means by this is open to interpretation).



    I find this implausible for, e.g., Exodus where they are outlining rules. Rules are not usually meant metaphorically or allegorically.

    First century Jews, including the Alexandrian "middle Platonists" who were the forerunners of Plotinus (e.g. Philo), did find this plausible though. For instance, the idea that the dietary instructions are to direct one to deep Torah study (i.e., ruminating, as one only eats ruminants).

But of course, the law was taken quite literally by many. However, it's also a law for the type of society it regulated. It did not institute slavery, etc.; it merely introduced regulations for existing institutions, seemingly moderating them and setting limits on them.

God's exact purposes in giving the Mosaic Law could be discussed at length. It's worth noting that the admonitions in the prophetic writings to honor the spirit of the law above the letter are among the oldest writings in the Bible. I don't think even a literal reading is committed to the idea that the limits it sets on behavior (for a tribal Near Eastern people, mind you) are in any way meant to be the guidelines for a sort of cultureless, Enlightenment-style "ideal behavior for all rational agents for all time." They are largely concerned with limiting excess and guiding worship within a particular context. Part of the point seems to be that the Law is tailored to their current way of life, and they still fail to follow it. Recall that at this point the Hebrews have seen miracles non-stop for years and yet still constantly rebel against God any time the going gets tough (fairly realistic if you ask me lol).
  • Assertion


    Is it then not an assertion? Is a name not a name when it's an example?

It's an assertion about a "name," right? So if I say: "an example of a proposition is: 'The cat is on the mat,'" I am saying something like: "it is true that S is an example of P," but crucially, not asserting S.

There is an added difficulty with context though. For instance, in theory, any sentence could be specified as a password, signal, safe word, etc. Philosophy of language has sometimes tended towards totalizing this relationship, or absolutizing either the speaker's intended meaning or social meaning. But words do have a stipulated, conventional meaning that relies on limited context and is accessible to all speakers.
  • Moral-realism vs Moral-antirealism


    I think it is unfair to claim that these cases are facts that can be discovered through empirical sciences. While they strike us as merely descriptive propositions, there are implicit value prescriptions in the presumption of each case. For example, let us take the case that 'it is bad for the fox to have its leg mangled in a trap.' The truly descriptive proposition is 'having its leg mangled in a trap decreases the fox's probability of survival.' To say that this is 'bad' for the fox presumes that survival is something worth pursuing. The same presumption about the value of survival is present in the case of 'it is bad for people to be kidnapped, tortured and enslaved,' because these conditions increase the likelihood of death. So if one is to claim these as facts, then one must first accept certain presumed values, such as that survival is worth pursuing. Therefore, to merely use the words "good" or "bad" is to presume that they are meaningful terms and that they refer to some definition. Even in philosophical discussions, when we say an argument is "bad", what we really want to say is that this argument does not meet the criteria of logical coherence, which is already something we think worth pursuing (I will expand a bit on this later).

First, those examples don't rely on the goodness of survival per se. Presumably, even people who want to die don't want to be tortured and to undergo gratuitous suffering (leaving aside that people also don't generally want to die unless there is some other evil they consider worse than death, which they hope to avoid through death, or some good they hope to attain through their death, e.g. self-sacrifice). We don't have to "assume" suffering is bad. Experiencing it is enough. But I also don't see how you get around simply assuming it isn't bad. Why prefer one assumption over the other?

It seems pretty obvious that being maimed and extreme suffering are, at least ceteris paribus, bad for animals. I can think of few things more obvious, and aside from being "common sense," it's also something confirmed by medicine, veterinary science, zoology, psychology, etc. And it's certainly something known empirically, i.e., through the senses. One experiences suffering, and learns to recognize suffering in others (men and beasts), and, ceteris paribus, it is bad to suffer, no?

I am not sure if you avoid begging the question here in assuming that it is "unfair" to call these facts. Prima facie, they appear to be facts. That is, it seems like one of your initial premises in calling these "unfair" is: "there are no facts about values, so even facts that seem obvious, such as 'it is bad for school children to have lead dumped in their lunches,' (i.e. things virtually every competent adult recognizes and acts as if they were true) are not *really* facts because they involve value judgements." Indeed, you make this question-begging explicit below:

    is not valid on the ground that P1 is not true (at least without first examining the implicit value prescription i.e. avoid pain is good), and thus cannot be used to construct a valid argument.

    First, this wouldn't make it invalid, but rather unsound.

But this isn't really so much a counterargument against the obviousness of the factual status of at least some value claims, so much as it is simply assuming axiomatically that these examples could not constitute facts because they are value claims (because there are no facts about values), or that they must face some arbitrarily high standard of evidence to be justified, which would of course be assuming the very thing in question. That is: "anti-realism re values is true because anti-realism re values is true."

    Prima facie, "gratuitous suffering is bad for us" seems as obvious as, "water is wet," and your response is akin to: "you cannot just assume that water is wet." But we're not assuming, we've experienced water and suffering. We're talking about things where almost everyone says it is so, and essentially everyone acts as if it is so. The burden of proof then, should go in the opposite direction, just as it would for the person who denies that the external world or other people exist. In which case, what is the positive case for: "even ceteris paribus, it isn't true that being burnt alive is bad for men, dogs, etc. Rather, everyone (including the dogs, who try to avoid burning) has simply been deceived by an illusion?"

Yet just consider this: what every competent adult, or even healthy children (even toddlers), know about man allows them to know that absolutely no one is going to want you to slam their hand in a door repeatedly until all their bones are broken. That seems to be a rather obvious connection between what man is and what he thinks is good for him. And then we might consider the question: "do we really think man is so wholly ignorant about what is good for him that even these very obvious judgements are actually illusory?"

    Lastly, you might consider that being committed to such a rejection of values means rejecting a great deal of medicine, psychology, economics, etc. as not actually dealing with facts. Indeed, even the more theoretical sciences are still firmly grounded in value judgements, because they rest on standards of "good" evidence, "good" faith, and a preference for truth over falsity.

    I don't think you can just assume that there are things that are choice-worthy, and by observing that empirical sciences can be used as a tool to direct us towards these "things," conclude that empirical sciences discover moral facts. I'm not saying that these choice-worthy things are purely subjective. Take survival, for instance: it is something deemed worth pursuing by all humans, if not all animals. But just because we have the intuition and desire to survive does not mean "one must pursue survival" is a fact.

Is your claim that nothing is more or less choiceworthy, or that it is impossible for us to ever know what is more or less choiceworthy? Doesn't that strike you as an extremely radical claim? No one ever knows what is better or worse for them (because nothing really is better or worse for them)? Medicine can never inform us as to what is truly better or worse for us? Focusing on "survival" is a red herring. The point is merely that man can discover things about what is choiceworthy.

    If man cannot discover what is choiceworthy, what exactly is the point of philosophy? Surely it could not possibly help us to live better were this true, as we could never discover what is to be preferred.


    I see no problem with saying that the entirety of philosophy is based on the assumption that truth is worth pursuing (if I had to). The fact that the pursuit of truth is a subjective desire has no bearing on the validity of a person’s arguments. Ultimately, pursuing truth could just be simply an activity people choose to engage in, regardless of its deeper meaning. I take the same view with respect to morality: if morality is something inherent to human nature, then I will practice it (which I do, just fyi). But that does not automatically make morality a fact, and to claim that it is already presupposes that truth is worth pursuing. Therefore, I believe one can practice morality without regarding it as objective truth, just as one can practice philosophy without viewing it as objectively superior.

So even if you have good arguments here, it cannot possibly be "better" for me to agree with you, right? One should only agree with you if they just so happen to prefer to agree with you. Otherwise, there is no reason to prefer truth over falsity. It's an arbitrary preference.

    If that's the case, then validity is only tangentially relevant. A "good argument" is just whatever argument gets you what you currently want. It might be valid, it might not be. There are actually no facts about what is better or worse, so we should just pursue whatever feels best.

    Are there facts about what we will prefer to have chosen in the future though? It seems there are. Do these facts seem to tie to human nature, medicine, the sciences, etc? They certainly seem to. Hence, the denial of facts about what is better or worse actually seems to be itself arbitrary. It can be a fact that smoking will give me lung disease, and that I will greatly dislike having chosen to smoke, and a fact that, ceteris paribus, lung disease makes people claim to be unhappy, and yet there will be no fact of the matter as to whether this choice was truly better or worse for me? That's an odd proposition, and I'm not sure how it is maintained without simply assuming that there aren't *real* facts about values.

    This is, to say the least, problematic, and also prima facie hard to believe.

    You might also consider that the anti-realists' game can be successfully duplicated with truth as easily as goodness. "Prove, without any appeal to truth, and without assuming truth exists, that anything is really true." It will prove extremely difficult. Yet that's hardly an argument against truth. It could just as well be taken as a reductio in favor of it, because its denial leads to absurdity (and continual backdoor moves to bring it back in disguise).
  • Moral-realism vs Moral-antirealism


    ↪Astrophel
    I would say that a value is a prescriptive idea that makes its possessor believe everyone else ought to approve of and adopt it.

    Well, consider that:

    - It is bad for human beings to be lit on fire; and
    - It is bad for a bear to have its leg mangled in a bear trap

    ...are both statements about value, facts about what is bad for something, and yet neither is prescriptive. The prescriptive could be seen as derivative of such facts, since clearly we will prefer the better to the worse and want to achieve better ends and avoid worse ones.
  • Moral-realism vs Moral-antirealism


    One difficulty that jumped out to me is that, despite ending with the quote from Socrates, Greek ethics is not addressed. Neither is traditional Christian ethics, nor are the ethical philosophies of Islamic thinkers or any points further East. Instead, the analysis is of modern Anglo-empiricist thought, going as far back as Hume.

Hence, any judgement about realism is only going to apply to a narrow stretch of thought. More importantly, it's a stretch of thought that shares epistemic and ethical presuppositions that are challenged in other traditions. Of course, ethics is very broad, so this might not be as big of an issue depending on the audience.

For instance, we might find it pretty strange that no one noticed Hume's Guillotine across millennia of thought, until we realize that the Guillotine itself requires certain assumptions to work (assumptions that arguably beg the question re anti-realism). Given any robust notion of final causality, the status of the Guillotine is far less clear, but of course Hume himself is dealing with an extremely deflated notion of causation to begin with, because that's the tradition he inherited. Yet it's a tradition based on presuppositions we might be liable to doubt today.

Just for a quick example, the idea that ethics is about some sort of sui generis "moral good," a sheer "thou shalt" of duty (without reference to desire and what is "truly desirable" or "best for us"), is a product of Reformation theology (beginning with late medieval voluntarism and nominalism). It's alien to earlier Western ethics and to a great deal of contemporary ethics that doesn't follow the analytic tradition.

    So, anti-realism here would tend to mean a blanket denial of value tout court. There is no good or bad, period. But this seems difficult to maintain.

    From another post:

Consider the opposition's case (and it's worth noting that the opposition is quite diverse, running from New Atheists like Sam Harris to contemporary Thomists). It seems obvious that there are empirical facts about what is good for us. For instance:
    • It is bad for children to have lead dumped into their school lunches.
    • It is bad for people to be kidnapped, tortured, and enslaved.
    • It is bad for a fox to have its leg mangled in a trap.
    • It is bad for citizens of a country to experience a large-scale economic depression.

There are also empirical facts about values involving social conventions. E.g. "Garry Kasparov is better at chess than the average preschooler."

    It seems fairly obvious that the truth of such statements is something that we can discover through the empirical sciences, the senses, etc. To insist otherwise is to insist that medicine, veterinary science, biology, welfare economics, etc. never provide us with information about what is truly good or bad for humans or other living things.

    Now, the Humean will often try to counter here in two ways. First, they will try to move to universal maxims, with the Enlightenment assumption that ethics must be formulated in terms of universal maxims. So, they might claim: "ok, maybe you can reason from empirical observations to the fact that being lit on fire is bad for you, but you can hardly move from this to 'no one should light another on fire.'" But such a move simply defaults on the is-ought gap, since it allows that we can still reason from:

    P1. The effects of burning are bad for me (i.e. burning is not choice-worthy).
    P2. If I throw myself into the fire, I shall burn.
    C. I ought not choose to throw myself into the fire.

    The Humean might object that we need some sort of additional "ought premise" here, something along the lines of:

    We should choose what is truly better over what is truly worse. That is, we should choose what is truly choice-worthy.

This seems completely unnecessary to me, since to be (truly) better, i.e., to be (truly) more choice-worthy, simply is to be what ought to be chosen. Further, it certainly seems that empirical sciences such as medicine, veterinary science, etc. can at least sometimes tell us about what is truly choice-worthy. Someone committed to the Guillotine can, of course, object to this. They can claim that there simply are no "facts of the matter" about what is truly choice-worthy, or that such facts must always be epistemically inaccessible. Fair enough. I think that is a hard position to defend, but at least now the particular brand of anti-realism/skepticism that underlies the Guillotine is explicit.


    Here is the rest of the post: https://thephilosophyforum.com/discussion/comment/971888


Of course, someone from the modern Anglo-empiricist tradition might try to claim that ethics is properly only about a sui generis "moral good." Fair enough. But then they have to justify this distinction and explain what makes the "moral good" unique and discrete. Yet this tends to be extremely difficult to impossible (as your post helps to indicate). Indeed, the reason the anti-realist in the modern tradition has such a strong case is precisely because of this notion of a unique "moral good."

Just consider what it would mean to deny values if we weren't separating off a sort of discrete "moral value." If practical reasoning (about good and bad) is not distinct from moral reasoning (about good and evil) and we deny practical reason, then we are denying that truth can ever be truly "better" than falsity, that good faith argument is better than bad faith argument, that invalid argument and obfuscation of this is ever worse than clear, valid argument, etc. With all values taken away, argument, the search for truth, etc. seem to boil down to "whatever gets me whatever it happens to be that I currently desire."

It also seems that this would make us infallible as to what is truly best for us, as "truly best," or "better," would just mean "I currently prefer." Hence, things would change their practical value as we changed our minds about what we prefer. For instance, extra tequila shots late at night would be good when we were feeling no pain and desired them, and then the self-same event would become "bad" when we woke up hung over in the morning. Arguably, this destroys reason as a whole, not just practical, but theoretical and aesthetic as well.

    Also as I've written before re virtue ethics:

Hence, we can ask: “is it not true, at least on average, ceteris paribus, that it is better for people to be temperate instead of gluttonous or anhedonic, courageous instead of brash or cowardly, properly ambitious instead of grasping or apathetic, etc.?” A strong rebuttal of virtue ethics would need to show that these traits are not beneficial on average, or that we somehow equivocate on these terms when we move from culture to culture. Yet this does not seem to be an easy case to make. To be sure, the critic can point to instances where “bad things happen to virtuous people,” or vice versa, but everyone is exposed to the vicissitudes of fortune, and it is the virtuous person who is most able to weather bad fortune (and in an important sense, most self-determining and most free).

    It's worth noting here that forms of virtue ethics predominate not just in the West (Pagan and Christian), but also in the East (Islam, India, China, etc.) despite getting short shrift in many analytic treatments.

Now that's just bringing in another perspective, one I happen to be partial to. If you're interested, Alasdair MacIntyre's After Virtue is one classic, fairly recent case for this sort of idea, and it engages with the tradition you mentioned. Something to note here is that these theorists tend to agree with the case for error theory and various anti-realisms, but claim that the starting presuppositions that result in these conclusions are wrong. They also tend to point out that the open-endedness identified by Moore holds for theoretical and aesthetic reason as well, and follows points made by Plato and those following him re the transcendence, and thus "defenselessness," of reason.
  • Assertion


    ↪J Haha, no, I do (unironically) think a sentence is an assertion sign. Alright... a naming sign

At least most commonly. I am not sure, however, if other uses really get that far away from assertion. Of course, if "the cat is on the mat" is used as an example, it isn't being asserted, but if it's being used as an example, then it is going to be something we are asserting something about. So the "sentence" is really more of a name/variable that is having something predicated of it as an assertion.

I have always found the difference between declarative statements (i.e., the assertion of the self, "I," as an "agent of truth" who is responsible for the statement) and informational sentences (i.e., those that displace the agent/make them transparent) more interesting re assertion. For me, it's enough that "a man is standing" and "it is true that a man is standing" are the same thing in their most basic, common usage (just like "a man" or "a duck" is the same as "one man," and "one duck," etc.), and that it is implied that "exists" follows from "is true" or "one" or any basic assertion.
  • The Old Testament Evil


Origen isn't saying the wars didn't happen. He is speaking to how the Scriptures tie into the spiritual life, which is their main function (i.e. how they are meant to be read). Allegory doesn't nullify historical content. Nor do I think we are forced to choose between, on the one hand, young Earth creationism and the whole of terrestrial animal life being descended from animals on board the Ark, and, on the other, the denial of any historical content to the story.

Interpretation is multilayered. For example, Leon Kass's The Beginning of Wisdom is a good look at interpretation from the Jewish perspective. It considers the Flood and the expulsion from Eden on a number of levels. By contrast, treating the Bible like a witness report and then passing judgement on God or any of the other figures is alien to most of the traditions that use the text. Theology isn't the sort of thing where you pull out isolated passages and then try to make a statement about God or morality from them. It can, of course, be done this way. Psalm 14 can be taken as an endorsement of total depravity, etc. It just leads to wild inconsistencies because of both the hyperbole common to Biblical language and the nature of the text.
  • The Old Testament Evil


    Well, my initial point was merely that reading the Scriptures in English translation, without additional context (from the Jewish or Christian tradition) is going to be problematic. God's intentions are not always obvious, nor are the ways in which texts were meant to be interpreted. Many Jewish, Christian, and Islamic "Neoplatonists" do write about these at length, but it's a quite broad area of commentary.

    Just for example, Origen, the first Christian systematic theologian, one of the earlier Church Fathers (although also condemned long after his death, largely due to how others took his speculative theology), writes:

Unless those physical wars bore the figure of spiritual wars, I do not think the books of Jewish history would ever have been handed down by the apostles to the disciples of Christ, who came to teach peace, so that they could be read in the churches. For what good was that description of wars to those to whom Jesus says, “My peace I give to you; my peace I leave to you,” and to whom it is commanded and said through the Apostle, “Not avenging your own selves,” and, “Rather, you receive injury,” and, “You suffer offense”? In short, knowing that we do not have to wage physical wars, but that the struggles of the soul have to be exerted against spiritual adversaries, the Apostle, just as a military leader, gives an order to the soldiers of Christ, saying, “Put on the armor of God, so that you may be able to stand firm against the cunning devices of the Devil.” And in order for us to have examples of these spiritual wars from deeds of old, he wanted those narratives of exploits to be recited to us in the church, so that, if we are spiritual — hearing that “the Law is spiritual” — “we may compare spiritual things with spiritual” in the things we hear. (Homily 15)

    Or consider Saint Maximus in the Hundred Texts on Theology:

    53. Saul is the natural law originally established by the Lord to rule over nature. But Saul was disobedient: he spared Agag, king of Amalek [cf. 1 Sam 15.8-16, 13], that is, the body, and slipped downward into the sphere of the passions. He was therefore deposed so that David might take over Israel. David is the law of the Spirit — the law engendering that peace which so excellently builds for God the temple of contemplation.

    54. Samuel signifies obedience to God. So long as the principle of obedience exercises its priestlike office within us, even though Saul spares Agag — that is, the earthly will — yet that principle in its zeal will put him to death [cf. 1 Sam 15. 33]: it strikes the sin-incited intellect and puts it to shame for having transgressed the divine ordinances. (from The Philokalia, translated by Palmer, Sherrard and Ware, Vol 2, p. 150)

    Note however that these are not denials of the historical conflicts. All of creation is a revelation of God, as is history (although it is also the dramatic stage on which man's freedom and ruin plays out, and the cosmos is a fallen cosmos).
  • The Old Testament Evil
Certainly, some parts of Divine Law (revelation) are hard to understand in terms of Natural Law. God's command to the Israelites in the Book of Joshua, that they should destroy all of the Canaanites, is probably the paradigmatic example (Deuteronomy 7 and 20 have the commands, but their execution occurs in Joshua). As noted, there are later instances of this sort of thing.

Here, it is useful to recall that God's purposes in giving commands are not always apparent to us. For example, at first glance Genesis 22:2 appears to be demanding that Abraham sacrifice his son Isaac to God. We later discover that this was not God's true intent. God spares Isaac. The episode is perhaps a test of Abraham's faith, perhaps a way for God to reveal things about God's nature—readings vary quite a bit—yet the goal of the command is certainly not what it first seemed.

Likewise, the Canaanites are not all killed in the Book of Joshua. The prostitute Rahab and her family are spared on account of her righteousness (Hebrews 11:31, James 2:25, Joshua 2). Likewise, the Hivites fear God and attempt to make peace with Israel, rather than trying to destroy them. In turn, the Hivite cities are spared (Joshua 9). More to the point, those who are "destroyed" show up later, having obviously not been placed under the ban, else they wouldn't exist. Nor are the Amalekites actually extinguished, even after Israel secures its borders. One, Haman, shows up to play a very important role in the Book of Esther, which takes place much later.

    Thus, it is clear both that we do not always know the purpose of revelation and that we are called on to try to uncover these purposes through questioning. For example, consider how Abraham questions God about sparing any righteous souls who live in Sodom and Gomorrah (Genesis 18:16-33). Here, Abraham himself has experienced a seeming disconnect between Natural Law (his own grasp of the Good), and God's revelation of Providence re the destruction of those two cities. In turn, God offers new revelation, assuring Abraham that the righteous will not be punished (thus putting Natural Law and Divine Law back into harmony).

On the question of the command to "utterly destroy" the Canaanites, this is an interesting article: https://www.detroitcatholic.com/voices/did-god-command-joshua-to-utterly-destroy-the-canaanites#:~:text=In%20fact%2C%20it%20appears%20that,the%20win%20was%20only%20temporary . It talks about the use of hyperbolic writing in the Bible, which can cause problems in interpretation to modern eyes, particularly in a Sola Scriptura context that is alien to how the texts were originally received (e.g. Jewish sources outside Scripture shed light on these events). A similar problem shows up when Malachi 1:3 is taken literally and in isolation, and extrapolated into a sort of unexplained hatred of God for Esau (and so perhaps other individuals). Esau seems to do quite well in Genesis, and his descendant Job is one of the few people directly called righteous in the Old Testament, so there seems to be more going on here.



From the article:

    The command to utterly destroy these people seems pretty clear, and Joshua, after taking control of the land, said that he did everything the Lord commanded (Joshua 11:20-23). But a careful reader will notice something strange going on in these texts. The very people who were supposed to have been utterly destroyed are nevertheless still there in the Holy Land (Judges 1:8, 1:21, 2:21-23, etc.).

    Even more strange, there is a flip-flop that occurs regarding these peoples’ supposed obliteration — sometimes even in the same verse! For example, Joshua 10:20-21 says, “When Joshua and the men of Israel had finished slaying them with a very great slaughter, until they were wiped out, and when the remnant which remained of them had entered into the fortified cities, all the people returned safe to Joshua in the camp at Makkedah; not a man moved his tongue against any of the sons of Israel” (emphasis mine).

    How could these people be “wiped out” and a remnant still survive? Joshua 11:21 likewise says that Joshua wiped out the Anakim in the hill country, Hebron, Debir, Anab and all the hill country of Judah, “utterly destroying” both them and their cities. Yet, Joshua 15:13-15 says that Caleb once again had to drive out the Anakim in Hebron and Debir. How can the Lord command these people to be wiped out (Deuteronomy 7 and 20), Joshua fulfill this command (Joshua 11:20), and the people still be alive and well in the Holy Land? Something is at work behind these passages.

    If one compares the language used in Joshua and Judges with the conquest writings of other ancient cultures (i.e., Egyptian, Hittite, Akkadian, Moabite, etc.), you’ll find there are a lot of similarities. The recorded battles and reports of conquest by these nations often give exaggerated hyperbolic accounts about how their enemies were completely wiped out, utterly destroyed, without any survivors, much like in Scripture. In fact, it appears that this was once a popular stylized form of war rhetoric that was used in the ancient near east. When we read it, it sounds like the Israelites were commanded to totally annihilate these people, when it simply was commanded of them to fight and win, even if the win was only temporary.

    This raises another question: Why would God allow such rhetoric to be used in Scripture? Here is where things get interesting. First, God was speaking to the original audience in a way that they would understand. No one took these words literalistically, otherwise, Joshua would never have been said to fulfill them. Second, Scripture operates on more than just its literal historical meaning. It has other meanings as well. God not only writes with words, but he also writes with the events that the words describe. Therefore, the Old Testament provides spiritual lessons that apply to us today. In this regard, the war rhetoric used provides a solid allegorical lesson about Christ and our sanctification. As the early father Origen once wrote:

    “Would that the Lord might thus cast out and extinguish all former evils from the souls who believe in him — even those he claims for his kingdom — and from my own soul, its own evils; so that nothing of a malicious inclination may continue to breathe in me, nothing of wrath; so that no disposition of desire for any evil may be preserved in me, and no wicked word ‘may remain to escape’ (Joshua 8:22) from my mouth. For thus, purged from all former evils and under the leadership of Jesus, I can be included among the cities of the sons of Israel.”

    It's common in modern commentaries to see some sort of choice between allegory and history, as if the two were mutually exclusive. But this is often the result of a rigid understanding of how Providence "must" work. The same sort of thing shows up all the time with causality vis-à-vis free will, e.g. "God hardening Pharaoh's heart," or giving people over to their corrupted desires, etc. The entire dichotomy of natural, i.e., "man does it subsistently himself," and supernatural, i.e., "God magically forces it to happen," is alien to the tradition. More to the point, it treats God as one being among many. (On top of this, the Flood example has the added interpretive difficulty of the Nephilim.)

    So to the OP, I'd ask: would it absolve God if some false prophet had invented those commands and God simply allowed them to be carried out by omission? Or wouldn't God be just as guilty (by commission) for what God does to Israel and Judah (consider Lamentations) in using foreign peoples to destroy them? Or would God be guilty by omission for any of the myriad similar acts that occurred across the scope of human history? On this last question, Elie Wiesel's "The Trial of God" is quite good, and it's a short play that can be read in a single sitting (it is allegedly based on a real trial held for God during the Holocaust).

    Anyhow, if you treat God as a person, or as something like a very powerful, magical alien, then even the Binding of Isaac will seem very troubling indeed.
  • Must Do Better



    Don’t this:

    doesn’t require that there is something to be properly led to
    — Banno

    And this:

    that leads us into confusion, pseudo-questions, or circular debates
    — Banno

    Contradict each other?


    I'm not sure if it's a contradiction per se, but I would suggest that it's going to prove impossible to justify any standards while denying philosophy any purpose or ends. Consider:



    To call something misleading is to say it leads somewhere—but crucially, somewhere we didn’t intend, or that doesn’t fulfill the function we took ourselves to be engaging in. That’s not the same as saying there is a metaphysical end-point we ought to be led to; rather, it’s to say that a particular use diverts us from how the practice normally works or what it aims at internally.

    Intend:

    1. Have (a course of action) as one's purpose or objective; plan.

    2. Design or destine (someone or something) for a particular purpose or end.

    I don't think one can discuss "better or worse" while denying ends completely.

    I suppose though that in the following sentence @Banno is setting up some sort of dichotomy between "how the practice normally works" and some notion of a "metaphysical end-point?" Is this a real dichotomy though? It seems to me that we can deny or remain skeptical about a "metaphysical end-point," and still elucidate our own goals, and what our goals/ends ought to be.

    Further, I'm not sure if "how a practice normally works," allows us to speak of "better or worse." It merely tells us about what current practice is, and if we are deviating from it. Hume's Guillotine seems relevant here. Presumably, current practice could be deficient. Indeed, if we accept "deviation from current practice" as our standard re "good and bad" or "misleading", that would seem to perhaps suggest a sort of intellectual straitjacket, an inability to deviate from current practice, since our standard is just whatever practice just so happens to currently be. Likewise, a focus on current, consensus "internal aims" would seem to rule out radical critique a priori. Is "the goal of metaphysics" the sort of thing that ought to be democratized?

    Here, we might also consider that metaphysics very much was thought to have a sort of telos, maybe even in something like the sense Banno means to indicate here. So, to think of metaphysics in this way was (and still often is) "how the practice normally works." And yet Banno earlier sang the praises of Hume, Wittgenstein, etc. for challenging what then was "current practice." Yet for these efforts to have been improvements ("better"), the standard for their being "better" cannot be "whatever metaphysics just so happened to tend towards in that epoch" or "what its practitioners took to be its end." If that was the standard, they would be deviant, and so "bad metaphysics." Or, we end up with a sort of relativism where Neoplatonist metaphysics is good for Neoplatonists, Hegelian metaphysics good for Hegelians, anti-metaphysics good for anti-metaphysicians, etc., because each of these is "current practice" and aligns with "internal aims" within the context of those specific "games."

    Nor is the dominant "current practice" in metaphysics today something like the assumption that: "metaphysics is not discovering the deep structure of the world per se, but proposing better ways to conceptualize and systematize our thought and language." This isn't even the dominant view in analytic metaphysics. Introductory texts do not introduce the field as being primarily about language, but instead still tend to refer to "ontology, mereology, causality," etc., and not as linguistic terms, but as facets of the being. Yet it hardly seems like one can argue for a revisionist agenda in metaphysics while appealing to "current practice," or "current internal aims."

    @Banno I would just add that:

    if one doesn’t think there is any final “truth” about Being or substance or whatever at the end of the metaphysical road.

    ...would itself be a metaphysical truth claim if stated as a belief that is used to define how the discipline of metaphysics ought to proceed. It's not the sort of thing that can be assumed a priori in attempting to define metaphysics at least, or so it would seem, since it would beg the question against the bulk of the history of metaphysics, both Eastern and Western. Which is just to say, it's not the sort of thing that could be worked into the standards by which metaphysics is judged from the outset, but is itself related to the questions of metaphysics.

    Hence why I would say the goal is truth, and not merely what is currently believed to be true, or what one gains honors by claiming to be true.

    Also:



    To call something misleading is to say it leads somewhere—but crucially, somewhere we didn’t intend, or that doesn’t fulfill the function we took ourselves to be engaging in.




    “To call a metaphysical claim ‘misleading’” doesn’t require that there is something to be properly led to—it only requires that the claim presents itself as if there were. “Misleading” is a pragmatic evaluation of the function or effect of the claim, not necessarily a commitment to metaphysical realism or a teleology of inquiry

    Again, I don't think it's useful to have a binary dichotomy between "pragmatic evaluation" and some sort of "Aristotelian telos." "Pragmatism" itself requires some sort of end/goal to be pragmatic about. We'd have to unpack what you even mean by the latter, because I was only speaking of ends/goals in a very general sense; as a principle for judgement, which is to say, in the same sense that to say: "'[some]thing speaks for [or] against it...' presupposes a principle of speaking for and against. That is, [we] must be able to say what would speak for it."

    But it also seems that it must be a false dichotomy, since plenty of thinkers who deny Aristotle's notions of final causality nonetheless speak to a purpose of philosophy, such that "better or worse" is determined in terms of some sort of end. Indeed, with no end, it's hard for me to see how any outcome (i.e. end) for philosophy or discourse could be considered "better or worse."
  • What are the philosophical perspectives on depression?


    The tradition Dostoevsky comes out of recognizes acedia (despondency) and tristitia (sorrow) as part of the Eight (Seven in the West) Deadly Sins/Thoughts. This first comes up in the writing of Evagrius Ponticus, and makes it to the West through John Cassian. In the West, they get combined into sloth, but the modern usage of sloth is quite far from these, being primarily "laziness" rather than a lack of hope and a settled sadness. A lack of proper hope is considered a vice of the irascible appetites (hope and fear), while sadness has more to do with the concupiscible appetites (pleasure and pain).

    You can find a lot of stuff written about this.
  • Must Do Better


    It may help here to steal an idea from the study of the arts. There, you don't get an answer to the question what makes some novels or pictures, etc. better than others. What you do get is a collection of examples which have been widely accepted as good examples. The expectation is that you will not be limited to imitating them (although that might be a useful exercise). The expectation is that students will be enabled to create new work by developing a critical judgement from those examples. The examples are collectively known as the canon.

    True, there are various theories about what makes one work better than another, and students are taught these, or some of them. But they are taught as theories, subject to criticism. Again, the expectation is not that those theories will dictate what students will do. It is that those theories will be the basis of developing new ones.

    That's an idea I'm amenable to; I mentioned it in the current thread on aesthetics. However, I don't think it's what @Banno or the original article has in mind.

    Consider:

    Teleology is metaphysically extravagant and misleading. Galileo, Descartes, and Newton sought mechanical rather than final causes. Hume warned against inferring purposes from observed regularities. Darwin replaced natural teleology with natural selection. Wittgenstein urged philosophers to describe how things are used in practice, not to seek hidden purposes or essences. So today, to speak of ends in the Aristotelian sense is to reinvigorate a discredited metaphysical picture

    This is not the sort of claim one finds in the arts. To even call a metaphysical claim "misleading" instead of simply "ugly" is to suppose there is something to be properly "led to." One wouldn't be likely to find claims that Virgil or Statius' verse is "misleading" in the sense of what makes for good poetry, for instance. Perhaps a theory of what makes Virgil or Horace good poets could be misleading, but that's because it's already moved halfway back to aesthetics and so to philosophy. Whereas it wouldn't make sense to say that Milton or Keats has rendered Chaucer and Dante ugly, much less exposed them as "incoherent" or "meaningless" (claims that are quite common in the context of metaphysics). Nor do I think this difference is unwarranted. We might very well criticize "two worlds Platonism" or corpuscular mechanism as incoherent, misleading, or counterproductive (counterproductive because it occludes the progress of the sciences, for instance).

    Nor would one dismiss Michelangelo as extravagant and misleading because Rembrandt, Degas, and van Gogh have suggested something different to us. Nor would we expect to find an indictment of meter as "discredited" in virtue of the fact that some later famous poets have eschewed meter.

    Which is to say, I think the two are quite a bit different, art and philosophy. Yet if they aren't, then I would expect philosophical criticism to look more like art criticism (that is, from anyone advancing such a similarity).

    However, this is a tricky topic because the dominant theory vis-à-vis art today is that its quality is a matter of wholly subjective "taste." Yet plenty of thinkers argue quite the opposite, that a good education involves being properly oriented towards what is truly beautiful. Indeed, the nature of beauty itself is central to some metaphysics (mentioned in the post I referenced, for instance). Hence, the difficulty in using artistic criticism as a lens for metaphysics or philosophy more generally is that theories of art often already assume things about metaphysics and philosophy. For instance, it will do us no good to try to put to bed concerns over anti-realism in the original article by simply appealing to a field where anti-realism is dominant (although hardly a consensus).

    I think the notion of progress in the original article cuts against this comparison to some degree, as does the idea of metaphysics as centering around the improvement/betterment of language. Obviously, it cuts against the classical notion of metaphysics as a sort of knowledge, a science, as well (although classical theories tend to acknowledge a large role for beauty).

    Tricky stuff. I do agree that the separation might not be as broad as suggested. I'm currently reading Homer and the Tradition of Political Philosophy: Encounters with Plato, Machiavelli, and Nietzsche by Peter J. Ahrensdorf, and he makes a strong case for the confluence of art and philosophy. But in the end I cannot agree with the suggestion that our study will be like the art students', primarily about a doing and producing. I do think there is a real and meaningful distinction between the productive arts (including the "fine arts") and science and wisdom, and philosophy is heavier on the other side of this division.
  • Must Do Better


    Sure, it was pretty common to see the highest goal of philosophy as intelligentia (understanding), as above scientia (knowledge), wisdom (sophia) being a higher sort of virtue than science.

    At first glance, this notion might seem to be quite a bit different from what moderns of an analytic persuasion have in mind. I don't think it's completely dissimilar though. In both cases it suggests the superiority of the intuitive and holistic over the discursive and demonstrative. In pre-modern thought, the latter is normally seen as being in service to the former. In modern thought, a focus on discursive justification and method tends to crowd out the former (not that it isn't there, but its role in epistemology becomes decidedly muted).

    In the tradition that comes out of the Egyptian desert, there are three stages of progress. The first is praxis, largely defined by ascetic labors, study, and meditation. The next is theoria, the knowledge of the uncreated through created things (the third is "theology," which is irrelevant here). But theoria is precisely not instrumental knowledge. It's quite the opposite of the Baconian mastery of nature. The whole idea is that the holistic ordering of the cosmos is revelatory, and it is beings' relation to the whole that is grasped (a theme of some great naturalists as well). A "linear," diabolical (in the original sense of the term) thought process that is focused on the ends of the appetites and passions is contrasted with a spiral, and then circular, movement of the nous, which is contemplative and tends towards unity in both the movement of the mind and its object.

    Obviously, ancient and medieval Christianity might recall Hindu thought or Taoism more than modern Western thought in this respect. Yet in the modern move from "knowledge" to "understanding" I think there is still normally something of the intuitive and even aesthetic that hangs on, or even the notion that to understand, to contemplate, is an end in itself ("all men by nature desire to know.")

    Consider Wittgenstein in On Certainty on Moore attempting to win over the person who has been raised thinking that the world has only existed since they have been born:

    Remember that one is sometimes convinced of the correctness of a view by its simplicity or symmetry, i.e., these are what induce one to go over to this point of view. One then simply says something like: "That's how it must be."


    But, as I see it, the difficulty in later contexts is that the focus on justification and method makes explaining such a focus, let alone showing that it is choice-worthy, next to impossible. One can only be silent about it or attempt a sort of breakout from discursive justification, at least of the highly demonstrative sort.



    That quote fits nicely.



    I don't think it presupposes any robust sense of final causality to ask: "what is the purpose of philosophy?" or more specifically "what is the purpose of this particular area of philosophy?" How could we ever agree on methods if we do not consider what we want to accomplish (i.e. our end)?

    Imagine you are giving an introductory lecture on metaphysics. You tell your class: "Metaphysics is not discovering the deep structure of the world per se, but proposing better ways to conceptualize and systematize our thought and language.”

    And then a hand shoots up, and you decide to take a question and it's:

    "Professor Banno, can you please explain what makes some conceptualizations and systemizations of our language better than others?"

    It hardly seems adequate to say simply: "if you can't choose I'll decide" without offering an explanation. And if the next question is: "but what is the aim of even doing this?" I am not sure it's fair to dismiss that question as "loaded" or as somehow committing us to "Aristotelianism."

    To say: "'[some]thing speaks for [or] against it...' presupposes a principle of speaking for and against. That is, [we] must be able to say what would speak for it." That's Wittgenstein, On Certainty 117, not Ol' Slick Ari.

    Likewise, I hardly think one can invoke Darwin as eliminating the explanatory function of aims within the context of intentional human practices. Darwin didn't think he had shown that human science is without aims. A denial of final causality, misguided as I might find it, is still a flying leap away from the denial that human practices possess proper aims. This seems to be conflating final causality related to forms, and the entire notion of aims and goal-directedness. Now arguably, you do need the former to adequately explain the latter, but plenty of thinkers don't think you do.

    It would be problematic, for instance, if we had to say that medicine is just whatever it is that doctors just so happen to do, without respect to any "aim of medicine." For example, it would make identifying quacks difficult. More to the point, we'd face the difficulty that doctors themselves certainly do think their field has an aim. But so too for philosophers (this article being one example).
  • Must Do Better


    This is a misunderstanding. Physicalism is not a variety of ontological realism.

    IDK, it's normally offered up as the paradigmatic example. At least in most forms, it makes ontological claims that are independent of human experience, language, etc., i.e. realism. "Ontological realism is the view that there are objective facts about what exists and what the nature and categories of being are." By this definition at least, most forms of physicalism/materialism would qualify. I'm not sure what you have in mind though.

    Ontological realism just says we have the ability to declare what the world is made of, whether physicalism, idealism, or whatever.

    Sure, and "it's physical," or "its matter," are popular answers here. Whether these can escape Hemple's Dilemma is another question, but they make the claim.

    An example of a justification for ontological realism would be that God told us in some book that the world is his mind, so it's idealism. So though we don't have the means to verify that, we believe it because we believe everything in the sacred book by faith.

    Physicalism, for obvious reasons, isn't likely to have that kind of justification, but whatever justification a physicalist comes up with, it will come down to faith.

    Ok, well you initially said:


    Just be aware that some anti-realisms exist because of apparently insurmountable problems with the corresponding realism (no pun intended.) If one persists in being a hard ontological realist, for instance, it appears the basis is pure whim... or a kind of faith. There's no power to persuade.

    But now you're saying physicalism and materialism's ontological claims don't count as realism because they don't justify themselves by making an appeal to faith? So, physicalism isn't ontological realism because it isn't based on that sort of brute claim?

    If ontological realism is defined by such appeals, then sure, your point stands. But that's because it appears to be tautological.

    No, they are free. A hard ontological antirealist (like me), doesn't believe ontology is anymore than a sort of philosophical game. It has nothing to do with what it purports to be.

    Free of all presuppositions? Like a sort of Hegelian project? This does not seem true of the advocates of anti-realism I am familiar with. What is an example?
  • Must Do Better


    Just be aware that some anti-realisms exist because of apparently insurmountable problems with the corresponding realism (no pun intended.) If one persists in being a hard ontological realist, for instance, it appears the basis is pure whim... or a kind of faith. There's no power to persuade.

    This seems a bit much for me. Consider the most popular variety of ontological realism, physicalism. Is this based wholly on whim and faith? I should hardly think so. It might be, in many of its more popular forms, a deeply flawed ontology, but it's not because it is a brute assertion.

    Second, it's not as if anti-realists are free of their own epistemic and metaphysical presuppositions. It's just that they tend to take them from the tradition that birthed physicalism and scientism, so their primary targets tend to share them. However, if it's these starting points that lead to anti-realism being so strong, then arguably that's just an indictment of those starting points.
  • Must Do Better


    All language games involve ends, though which ends they involve isn't always obvious. I've had many a person tell me that "good arguments" are just those arguments that lead people to see things your way, or which convince them to do what you want. I find it curious when people who embrace such a view fault arguments for being merely rhetorical or aesthetic. Presumably, arguments can be as vacuous or invalid as we please, so long as they work, so long as they are "useful" (to us).



    You're welcome to advance your own at least. If metaphysics is not a science of being qua being, or any of its other various definitions, but instead a debate about language—about "proposing better ways to conceptualize and systematize our thought and language," surely there has to be some notion of the end this language is "better for." It certainly cannot be "better" at doing metaphysics in its traditional sense, and it hardly seems that it can be based on a "metaphysics of goodness," because switching the terms of the debate to philosophy of language (itself a move supported by metaphysical presuppositions no doubt) seems to have put that out of reach.

    However, if the very issues at hand are various forms of anti-realism, e.g. anti-realism re values (i.e. the very idea of anything being better or worse at all), anti-realism re truth (i.e. the very idea of anything ever being truly better or worse), anti-realism re linguistic meaning, etc. it seems to me that it will be impossible to appeal to "better or worse language," without begging the question re anti-realism. That is, anti-realism is itself not unrelated to the ends of philosophy, e.g., the post-modern "ontologies of violence," grow out of a certain sort of anti-realism and philosophy of language.



    But we keep discussing:
    - our language, as it
    - comes from a speaker, and as it
    - references a thing in the world.

    I mean every word in that last sentence.

    Many OP’s start from “laws in the universe” or “ways to philosophize” or “what is belief” or so many others, and we are back to grappling over language, speakers, and the world.

    I'm not sure this monomania is necessary. It takes a particular sort of philosophy of language to make all philosophy into philosophy of language. Such a philosophy is not without its own assumptions about ends and metaphysics either.




    I think Williamson is only demanding that philosophical theories succeed as theories, to some recognizable degree. Whether they make our lives better or worse or give us a warm fuzzy, he's presumably going to consider a separate question.

    I think that's right. "Succeed" how though? I'm not sure how one smuggles in ends by which to measure success without foreclosing on at least some claims of anti-realism (e.g. re the desirability of any ends). If one claims that success leads to knowledge and truth, and the point in question is anti-realism re knowledge and truth (either tout court, or within "science" or "metaphysics"), this just seems to beg the question (see my response to @Banno above).

    My point would be that some forms of anti-realism remove any grounds for considering their own position choiceworthy. Of course, some are forthright about this. Philosophy is ultimately sophistry, a contest of power, the will placed above the intellect. And the charge here is that, aside from begging the question, Williamson is simply engaged in the same dynamics, his standards a power play, although he lacks full self-awareness of this fact.

    I don't think the purveyors of the metaphysics of violence are wrong here, given some of their assumptions. The tricky thing for contemporary philosophy, including in the analytic space, particularly within the linguistic turn, is that it has already given them those assumptions.

    Edit: if you want a concrete example, just consider deflationary theories of truth, where truth is just the use of that token ("true") within a system/game. Williamson's claim that advances in formalism represent "knowledge of truth" might point in that direction, maybe not. I think full deflationism is far more popular here than in mainstream analytic thought. But if one accepts deflationism, particularly when paired with logical nihilism or a liberal logical pluralism (something some analytics are led towards, particularly through the elevation of philosophy of language), I think one will find the more "post-modern" theorists who descend from Nietzsche to be quite irrefutable, which will then cast doubt on the whole project.
  • Must Do Better


    The vision of philosophy being supported by other disciplines is certainly very interesting and makes an excellent change from the more traditional (and markedly unpopular outside philosophy) view that the role of philosophy is to police the other disciplines

    IDK, it seems to me that the opposite is generally the issue, at least in analytic "scientific" philosophy, where the "scientists" are absolute authorities within their discipline. This is equally problematic in that many disciplines are heavily affected by their own philosophical commitments, which are sometimes historically accidental or political, and yet they are far less likely to question these because they become transparent due to the focus on the particular subject matter. Economics is a fine example; the texts I've taught are filled with properly philosophical presuppositions about politics and philosophical anthropology. And smaller fields have more of a risk of becoming ideological echo chambers.

    This has a lot of consequences when scientists tend to be publishing many of the more philosophical best sellers. For instance, neuroscience's tendency towards reductionism (as opposed to, say, chemistry or physics) shows up when a number of neuroscientists publish popular philosophical works, and vice versa.
  • Must Do Better


    But when an argument settles a disagreement, one side agrees that the other was right. The disagreement isn't dissolved, but remedied.

    I think that's fair. Although, with Socratic dialectic, it's often that the sophist simply lapses into silence and refuses either to accept the argument or to continue the discourse. Obviously, Plato's dialogues are contrived, but this is a real enough outcome. This isn't so much "acceptance" or "defeat" as an interval leading up to "the pursuit of argument by other means" (to borrow from Clausewitz). After all, the story's arc is that those silenced by Socrates turn from discourse to power and simply have him killed, and there are undertones of violence throughout the dialogues, in the opening of the Republic for instance, with the idea being that reason is defenseless precisely because it points beyond itself (is ecstatic), and is ultimately grounded in what is sought for its own sake, as opposed to what is sought instrumentally (which points back to power and sophistry).

    Whereas the ultimate conclusion of the "ontologies of violence" that follow Nietzsche is that those who killed Socrates ultimately understood the nature of discourse and politics better than Socrates did.

    A comparison to the productive arts and their intersection with the natural sciences is useful here. There, techne, the ability of theory to lead towards the improvement of art, is the proof of theory. Yet this cannot be the case in the pursuit of what is sought for its own sake (and one sees this in attacks on "useless" scientific fields like quantum foundations, which are accused of being "too philosophical"). Nor does it really work that well in the social sciences, since the very desirability of what is produced is at issue in "politics," broadly speaking. So, while technology can play the role of objectifying theory (e.g. positive and negative charge are objectified in jumping your car, rewiring an outlet, etc.), this doesn't work for philosophy, leaving it more open to attack or to the descent into power—i.e. "sophistry" for Plato. And this is consequential, since philosophy continues to have a very large effect on the sciences and politics, although the academic discipline of "philosophy" has ceased to be the main source of this philosophy.

    Just for an example, the empirical results of economics could be fit to many different anthropologies. That it is instead fit largely to the anthropology of man as the atomic rational utility maximizer/satisfier is in many ways accidental, a view whose origins were heavily influenced by a certain theology and theory of law. Yet this is hugely influential, since economics has come to play a huge role in political science, politics, psychology, and philosophy itself, and I'd argue that it even influenced which sort of metaphysics is judged to be preferable and which ought to be taught (I don't think it's any accident, or sheer inertia, that the 19th century vision of the world as ensembles of "balls of stuff" has remained so dominant in education, for instance).

    If one wants to speak "truth to power,' one has to first demonstrate the desirability of truth over power. It's only then that "good argument' becomes "argument that leads towards truth," as opposed to "whatever argument convinces or silences." And I think it's tempting to rely on method to do this, and maybe it even works to some degree. An agreement on method is also an agreement on ends to some degree.

    Which is just to say that the failure of arguments to terminate in agreement or synthesis is not always an indictment of argument.



    Metaphysics is not discovering the deep structure of the world per se, but proposing better ways to conceptualize and systematize our thought and language.

    "Better" in virtue of what? The question of ends shows up here too.
  • A Matter of Taste


    Especially with a few topics whereby otherwise reasonable people with all the resources one could ask for -- professors, philosophers, academics, in a word "experts" -- that don't reach termination.

    Might this be a poor criterion though? Praxis is almost absent from the academy; it's been wholly privatized by the dominance of philosophies of secularism. But on the view that praxis is a necessary prerequisite for theoria, being a professional, reasonable, etc. isn't enough.

    I certainly think the perennialists often distort the traditions they appeal to in trying to make them uniform. Nonetheless, their point is not entirely without merit, and the convergence seems to me to be a sign of robustness, whereas a process that leads to endless fractal divergence bespeaks a sort of arbitrariness (particularly when the divergence occurs due to competing bare, brute fact claims or "givens").

    At least, from within the traditions of praxis themselves, this is exactly what is predicted, so in their own terms, this is not a great difficulty.

    I say "God, freedom, and immortality" as the obvious topics because Kant. And I disagree with Hegel where he speaks about having to be across a barrier to point to its limit. Like you note -- I know I'm mortal because I'm human. I don't have to know what it's like to be superhuman to know my limit.

    I am not sure if this is a good example of what Hegel is talking about though. Presumably, you know that which is not human, and that's "the other side." Hegel is also certainly not saying one must step to the other side of an issue to express uncertainty about it. He is in some ways a fallibilist after all. Hegel is speaking to gnostic pronouncements about the limits of knowledge. This isn't to proclaim something undecided, but rather to claim that one has decisively decided it.

    Consider the quote I shared in the other thread from D.C. Schindler's The Catholicity of Reason, which focuses on the major presumptions made by those who, out of "epistemic modesty," set hard limits on reason.

    First, he responds to the idea that we never grasp the truth, the absolutization of Socratic irony as the claim that "all we know is that we don't know anything (absolutely)."

    [Here], the scope is universal: one expresses a general reluctance to claim truth, “absolute knowledge,” in any particular instance. But note: this stance implies that the question of whether or not one’s ideas, in one case or another, are true in fact is, for all intents and purposes, irrelevant. The phrase “all intents and purposes” is particularly appropriate here because the stance willy-nilly absolutizes pragmatism.

    But there is an outrageous presumption in this: if pursuing the question of truth requires one to venture, as it were, beyond one’s thinking to reality, dismissing this question means resolving not to venture beyond one’s own thinking as one’s own, which is to say that one keeps oneself away from the world and in one’s own head [or perhaps language game] — which is to say, further, that one absolutizes one’s own ego over and against God, reality, others, whatever it may be, all of which is equally irrelevant to that ego.

    What reason does one have for dismissing the question of truth and suspending one’s judgment? While it could turn out in a particular case or another that suspending judgment is prudent, there can in fact be no reason at all for a universal suspension of judgment, insofar as accepting a reason as true requires suspending this suspension. It follows that this suspension is strictly groundless; it is a wholly arbitrary a priori, which claims preemptively that no statement will ever have a claim on one’s judgment without obliging oneself to listen to and consider any given statement. It may be that one opinion or another that one happens to hold is in fact true, but the suspension of judgment neutralizes its significance for me qua truth, again for no reason. I thus absolve myself of all responsibility: if I make no claim on truth, then truth never has a claim on me.

    pg.24

    The second idea he addresses is a sort of "bracketing" out of "epistemic humility."

    The second alternative above, namely, that I claim knowledge about things in a delimited area, but make no judgment one way or the other regarding anything outside the limits, is at least apparently less presumptuous than the first, ironically because it does indeed admit that some of its knowledge is true.

    The difficulty is in fact twofold. On the one hand, as we observed at the outset of this chapter, one can set limits in the proper place only if one is already beyond those limits, which means that to the extent that self-limitation is strictly a priori, and not the fruit of an encounter with what lies outside of oneself [or language], the limitation is an act of presumption: one is acting as if one knows what one does not in fact know. On the other hand, and perhaps more profoundly, to allow oneself judgment on one side of a boundary and at the same time to suspend judgment on the other side is to claim — again, in an a priori way, which is to say without any sufficient reason — that what lies on the other side does not in any significant sense bear on my understanding of the matter or matters lying on this side. But of course to make this claim without investigation and justification is presumptuous.

    It does not in the least do to insist, “But I am limiting my claims only to this particular aspect!” because this begs the very question being raised here...

    For example, one might isolate economics from politics as a closed system in itself, which is evidently misleading insofar as the “agents” of economic transactions are living members of communities whose choices inevitably reflect in a significant way the nature and structure of those communities. Perhaps less obviously, but with analogous implications, one might also separate politics from philosophical anthropology, anthropology from metaphysics, or metaphysics from theology. The problem will be there whenever one isolates a part from the whole in a way that excludes the relevance of the meaning of the whole to the meaning of the part, which is to say that one fails to approach the part as a part, i.e., as related to what is greater than it, and so one (presumptuously) makes it an absolute in itself.

    To avoid this presumption, one might first seek to attenuate one’s insistence on knowledge within the delimited sphere in light of one’s ignorance of the larger whole, which would seem to acknowledge at least in principle the significance of that whole. But in fact this is a retreat into what we showed above to be the greatest possible presumption, namely, the universal suspension of judgment. The only way to avoid the dilemma is in fact to achieve actual knowledge about the whole...

    pg. 24-26

    ...ironically, the more one insists on modesty in science, the more “impenetrable” one makes it, i.e., the more one makes it an absolute in itself and so unable to be integrated into a larger whole. To set any absolute limit not only keeps reason from exceeding a boundary, it necessarily also keeps anything else from getting in.

    pg. 28

    Or as Plato has the Stranger say in the Sophist: "the ultimate destruction of reason is the separation of one from all" (259e).

    And if we hop out of one fly bottle and into another, no matter what, then wouldn't that be nice to know that there is no "outside the fly bottle"?

    Well, ironically, on the relativistic view, one is only ever in a fly bottle if one has already placed oneself inside it.

    Nice. These are the sorts of judgments I'm thinking about here. What is it about eliminative materialism or austere behaviorism that makes them ugly


    I could opine at length about that one, but I'll suggest that one way to distinguish between paradigms is the extent to which they must reduce and demote aspects of human experience and being, as well as beauty, to illusion and error. "Two worlds Platonism" is rather famous for this. Yet radical empiricism might dismiss even more of experience. Post-modern theorists paint with their own monochrome brushes to dismiss quite a bit, to demote to "abstraction"—abstraction, which comes to have the ring, not of "intellectual apprehension," the "possession of form/actuality," but rather of "illusion and error." Solipsism and solipsistic paranoia are, of course, the extreme examples. Shankara and the Advaita Vedanta go so far as to break out the other side. Whereas, when reading some of the traditions that come out of phenomenology, I often think that it is a very different thing to recognize that "Atman is Brahman," as opposed to "Brahman is really just Atman."
  • A Matter of Taste
    I will add that, even in translation, late antiquity is a sort of golden age for style, and style helps, even if it doesn't make bad philosophy good. Philosophy is known for its often abstruse and unclear style, your Hegels and Kants. Even when clearer, it can be quite dry, as Aquinas often is. However, the later Greeks and Latins, who are unfortunately quite out of style themselves, had a developed culture of oratory and rhetoric that seems to come through in their writing.

    For instance, in On Prayer Origen writes:

    Good is one; many are the base. Truth is one; many are the false. True righteousness is one; many are the states that act it as a part. God's wisdom is one; many are the wisdoms of this age and of the rulers of this age which come to nought. The word of God is one, but many are the words alien to God.


    The same idea can be, and often was, delivered in much drier scholastic terms. This is pithy though.

    It's not all high oratorical style either; they have a knack for slipping between this and the conversational, or even the conspiratorial. Saint John Climacus is a master of passing between these modes. In Cicero, style arguably becomes a vice (as for Nietzsche).

    They also tend to spice up their tractates with interesting appeals to literature, poetry, history, myth, and Scripture. Virgil's poetry, for instance, is liberally employed by Saint Augustine, who still considers him "Our Poet."

    I don't know what happened as the centuries progressed. I suppose feudalism meant an end to the need for the sort of formal education that existed in antiquity, with its heavy focus on public speaking as the key tool for public life. Or maybe it was that the audience narrowed, often to other experts. But it definitely led to a decline in style, one that humanism reversed with guys like Erasmus.

    I'm not really sure what happened in between that "rebirth" and German idealism to make style what it was then...

    So, I guess I like people who can write in this way, not so much inspiringly, although that helps, but interestingly. Charles Taylor is a good example. He doesn't strike any high oratorical notes, but despite a great density of ideas he nonetheless writes more like a great historian, the opposite of dry or abstruse. Will Durant's philosophy stuff is like this too, and he is also pretty pithy.

    This can make a big difference. I don't know if I'd ever recommend reading Gibbon to learn about Rome, but he's worth reading for the prose and Enlightenment era philosophy splashed liberally within his commentary. Whereas I sometimes struggle with works when reading them feels tedious.

    That said, I don't really like polemical works, even when I agree with them. They certainly aren't the same thing, style (even oratorical) and polemic. Nagel, Lewis, and Frankfurt are good in this way, as recent examples. Augustine is a master. Chesterton is too good at it for his own good.
  • A Matter of Taste


    I'm currently reading David Bentley Hart's first book, The Beauty of the Infinite: The Aesthetics of Christian Truth. There, he takes on post-modern theorists instead of secular empiricists (the focus of his critiques in later works).

    It draws a lot on Hans Urs von Balthasar, who gave a huge role to aesthetics and drama in philosophy and theology. It also reminds me of D.C. Schindler's work, which engages a lot with post-modernism and follows von Balthasar and Ferdinand Ulrich a good deal (Ulrich himself largely being in dialogue with Aquinas, Hegel, and Heidegger). Hart is also drawing quite a bit on the Eastern Christian tradition, particularly Saint Maximus the Confessor and Saint Gregory of Nyssa. This makes sense, as their great spiritual text is the "Philokalia," the "Love of Beauty."

    If Beauty is taken as a transcendental (even as one parasitic on Goodness, or on Goodness and Truth), it applies properly to everything. Beauty is the going out of Being, and so of Goodness and Truth, in appearances, the meeting ground between us and the rest of being, the site of nuptial union between knower and known. It is how we encounter the world, how we are drawn outside ourselves in a sort of erotic ecstasis, and yet it is also what we try to become in communicating goodness to others (eros ascending, agape pouring down).

    But Hart is not universally critical of the post-moderns (nor are the others mentioned). They think they hit on real limits that are met when finite human reason is absolutized, such that the whole of philosophy and possible knowledge becomes defined by our systems and what we already are. When this occurs, Beauty ceases to lead us beyond our own finitude in pursuit of the Good and the True.

    Yet such a view cannot be demonstrated a priori. As you rightly ask, how does one "choose between camps?"

    Charles Taylor makes this case through a deep study of the cultural history of modern materialist secularism. One is not "forced to the facts" here; it is in many ways an aesthetic judgement. The vastness of the cosmos is said to be decisive evidence for absurdity, or the non-existence of God, and yet if the visible cosmos shrunk by half or even 90%, it's unclear what should change. Elder Ephraim of Arizona takes this vast scale to indicate the exact opposite, the obvious grandeur of God. This difference seems ultimately aesthetic, which is Taylor's point. A preference for the mechanistic, or against it, is ultimately a sort of taste. John Milbank makes a similar sort of historical argument in his influential Theology and Social Theory re the positive construction of the "secular."

    As Russell allowed, empiricism does not seem to tell us whether the world just started to exist moments ago, with our memories fully intact. Nor does it seem to rule out p-zombies or eliminativism. Likewise, rationalism can only justify itself given certain assumptions and shaky deductions. So too, the post-modern tendency to prize immediacy and difference is arguably itself an aesthetic presupposition. There is no "purely rational," as in a "deduction from set axioms," way to decide. It's a bit like Chesterton's madman, who is also "wholly rational:"

    The madman's explanation of a thing is always complete, and often in a purely rational sense satisfactory. Or, to speak more strictly, the insane explanation, if not conclusive, is at least unanswerable; this may be observed specially in the two or three commonest kinds of madness. If a man says (for instance) that men have a conspiracy against him, you cannot dispute it except by saying that all the men deny that they are conspirators; which is exactly what conspirators would do. His explanation covers the facts as much as yours. Or if a man says that he is the rightful King of England, it is no complete answer to say that the existing authorities call him mad; for if he were King of England that might be the wisest thing for the existing authorities to do. Or if a man says that he is Jesus Christ, it is no answer to tell him that the world denies his divinity; for the world denied Christ's.

    Nevertheless he is wrong. But if we attempt to trace his error in exact terms, we shall not find it quite so easy as we had supposed. Perhaps the nearest we can get to expressing it is to say this: that his mind moves in a perfect but narrow circle. A small circle is quite as infinite as a large circle; but, though it is quite as infinite, it is not so large. In the same way the insane explanation is quite as complete as the sane one, but it is not so large... Now, speaking quite externally and empirically, we may say that the strongest and most unmistakable mark of madness is this combination between a logical completeness and a spiritual contraction.

    The lunatic's theory explains a large number of things, but it does not explain them in a large way. I mean that if you or I were dealing with a mind that was growing morbid, we should be chiefly concerned not so much to give it arguments as to give it air, to convince it that there was something cleaner and cooler outside the suffocation of a single argument. Suppose, for instance, it were the first case that I took as typical; suppose it were the case of a man who accused everybody of conspiring against him. If we could express our deepest feelings of protest and appeal against this obsession, I suppose we should say something like this: "Oh, I admit that you have your case and have it by heart, and that many things do fit into other things as you say. I admit that your explanation explains a great deal; but what a great deal it leaves out! Are there no other stories in the world except yours; and are all men busy with your business? Suppose we grant the details; perhaps when the man in the street did not seem to see you it was only his cunning; perhaps when the policeman asked you your name it was only because he knew it already. But how much happier you would be if you only knew that these people cared nothing about you! How much larger your life would be if your self could become smaller in it; if you could really look at other men with common curiosity and pleasure; if you could see them walking as they are in their sunny selfishness and their virile indifference! You would begin to be interested in them, because they were not interested in you. You would break out of this tiny and tawdry theatre in which your own little plot is always being played, and you would find yourself under a freer sky, in a street full of splendid strangers..."

    Or it might be the third case, of the madman who called himself Christ. If we said what we felt, we should say, "So you are the Creator and Redeemer of the world: but what a small world it must be! What a little heaven you must inhabit, with angels no bigger than butterflies! How sad it must be to be God; and an inadequate God! Is there really no life fuller and no love more marvellous than yours; and is it really in your small and painful pity that all flesh must put its faith? How much happier you would be, how much more of you there would be, if the hammer of a higher God could smash your small cosmos, scattering the stars like spangles, and leave you in the open, free like other men to look up as well as down!"


    This is, of course, an aesthetic and rhetorical appeal. But crucially, it is not one that reduces philosophy to aesthetics all the way down. It does not deny the truth of its own position above any other, but rather denies that wholly discursive, procedural reason, mixed with sense data or not, can decide the issue. If we take MacIntyre's position, that reason always occurs within a tradition, which can be more or less consistent with itself, this does not mean that reason is limited within that tradition, nor that we have a different reason for each tradition. All that is required is the notion that traditions, models, language, the senses, etc. are all means of knowing—how we know—and not the sole or primary objects of our knowledge (i.e. what we know).

    Any philosophy that appeals to praxis as a prerequisite for theoria, and so any contemplative philosophy at all it would seem, needs to make such appeals, since the whole idea is that advancement in praxis must come first, and so must be motivated by a sort of promise. And so the rhetorical and aesthetic shall loom large. But it is not as if austere empiricism or post-modernism don't rely on such appeals.

    To bring up something I said earlier about the "limits of reason" in many contemporary philosophical camps—I would point out that the claim that reason cannot adjudicate between paradigms or world-views is, of course, a gnostic claim. One presumably knows this if one claims it to be so. Yet, as Hegel says, to have recognized a boundary is to already have stepped over it.

    Now, if we claim that reason is in a sense isolated within "world-views and paradigms," we face the odd situation where some world-views and paradigms resolutely deny our claim. They instead claim that knowing involves ecstasis, it is transcendent, and always related to the whole, and so without limit—already with the whole and beyond any limit. And such views have quite a long history.

    Our difficulty is that, if reason just is "reason within a paradigm," then it seems that this view of reason cannot be so limited, for it denies this limit and it is an authority on itself. Our criticism that this other paradigm errs would seem to be limited to our own paradigm.

    The positive gnostic claim, to have grokked past the limits of intelligibility and seen the end of reason (or immanence or presence) from the other side, faces an additional challenge here if we hold to the assumption that any such universal claim must be "from nowhere," and itself issued from "outside any paradigm," since it is also generally being claimed that precisely this sort of "stepping outside" is impossible. But perhaps this is simply a misguided assumption. After all, one need not "step out of one's humanity" to know that "all men are mortal." One can know this about all men while still always being a particular man.

    So, those are my initial thoughts on the idea that reason cannot adjudicate between paradigms (which suggests an aesthetic answer perhaps). It seems this must remain true only for some paradigms, and one might suppose that being limited in this way is itself a deficiency (one that is both rational and aesthetic). After all, what is left once one gives up totally on reason as an adjudicator? It would seem to me that all that remains is power struggles (and indeed, some thinkers go explicitly in this direction). Further, the ability to selectively decide that reason ceases to apply in some cases seems obviously prone to abuse (real world examples abound)—in a word, it's misology.

    Eliminative materialism or austere behaviorism might seem absurd, yet they are unassailable given their own presuppositions. Yet I'd maintain that they are ugly and small regardless of this consistency and closure. Nietzsche's thought has a certain beauty, Milton's Satan is inspiring, yet these also suffer from a certain smallness and ugliness. Absurdity is, in the end, not glory. So too the idea of a maze of fly-bottle-like games that thought is forever trapped to buzz about in.
  • [TPF Essay] Dante and the Deflation of Reason


    Oops, forgot that, it's in Looking East in Winter: Contemporary Thought and the Eastern Christian Tradition pg. 51-54
  • Must Do Better


    Well, since Descartes there has been the idea of discarding all past thought and, through the adoption of the proper method (plus supporting assumptions, since no methodology is presuppositionless [...except for Big Heg of course :cool: ]), solving philosophy and science. This means a lot of projects involve tearing down and restarting, which allows singular principles such as "meaning is use" to loom larger than they would when pieced together with past insights that stand up to scrutiny. Eco takes this on in his book on semiotics and contemporary analytic philosophy of language.

    So, this sort of thing is maybe a broader trend. But it does seem more pronounced in Anglo philosophy of language (even across different "camps" within it). For instance, the Routledge introduction to the topic doesn't mention anything before the 20th century or anything outside the analytic space (and virtually nothing outside the English language). Neither does the Oxford introduction, and having browsed some other tables of contents, this seems to be the norm. Philosophy of language starts with Russell or Frege (and barely extends outside English-language authors, if at all). Interestingly, the one title that bucked this trend was called something like "Philosophy for Linguists" instead.

    Now, is this an unfair criticism? After all, the introduction to phenomenology also ignores earlier works that might qualify (e.g., Augustine's De Trinitate, Hegel's Phenomenology, etc.), although it does at least touch on its scholastic roots. Yet I don't think this is the same thing. Phenomenology, for better or worse, is recognized as a particular discipline that is more or less defined by a recent tradition. "Philosophy of language" is, rather, presumably all philosophy has to say about language.

    And yes, the series I mentioned skew analytic and recent, but it's not like their epistemology texts don't mention Plato, Descartes, Kant, etc. So too for other topics like philosophy of mind or free will. Philosophy of language really struck me as an outlier, having checked out several titles.

    Yet this isn't just an issue in the analytic space. "Post-modernism" is, broadly speaking, in many ways worse. This is the sort of charge David Bentley Hart lays at the feet of Derrida, Deleuze, Heidegger, Lyotard, Levinas, and Nietzsche (and it certainly might apply to Adorno in some ways): that they are engaged in a rather aggressive and egregious version of what they criticize when they attempt to boil down the history of thought to something that can then be swept aside (e.g., Derrida as very much a structuralist when taking in the history of "metaphysics," or Deleuze's claim to have recognized and set the limits of immanence, paired with a conflation of Enlightenment philosophy's "transcendent" and the "transcendence" of theology, the latter of which exists in a world-historical dimension, as one plane).

    I'm most familiar with Nietzsche, and the charge certainly fits there: first, because I don't think anyone would want to use him as a historical authority or an authority on Plato (let alone Christian philosophy); but more damningly, because the tyranny of the same, the monochrome paintbrush, is relied upon heavily for the dismissal of vast tracts of thought. Kant at least confined himself to calling the bulk of prior thought "twaddle" in a private letter; not so for the Masters of Suspicion and Hume's library bonfire. There is certainly something of the Reformation-era iconoclasm here, as opposed to a transcending of modernity.

    This becomes particularly clear when one considers the vast difference between the Enlightenment attempt to have reason step outside history to grasp the eternal truths of being through a priori procedural reason and the idea that the Truth has broken into the world, in history as a Jewish Rabbi who was humiliated, tortured, and killed by humanity—and that Truth can be had because we have already been invited into it from without, not "always already" as a sort of (eternal) ontological co-constitution, but due to a distinct historical event during the reign of the Emperor Tiberius, at a distinct place, Golgotha outside Jerusalem. Even in the more mystical writings, e.g. Dionysius, the history of the cross is present, and his most famous commentators such as St. Maximus explicate him with the life of Moses and the Gospel, not a priori deductions.

    The point here re method is that an absolutization of method leads towards the endless "restarting" of the entire philosophical project, which also lends itself to a cheapening and forgetfulness of history, even as historicism becomes absolutized (indeed, the two are related). I have pointed out how this tends to make philosophy chaotic, "highly sensitive to initial conditions" (i.e., the new methodology and its presuppositions). This is, of course, not really "post-modern," but in a way the definition of modernity, which begins with a similar move, the Reformers' attempt to sweep away the history of the Church, theology, philosophy, etc. and to recover that mythic, original, untainted outlook—first the Church of the first century, later Western rational culture before the "Christian Dark Ages," or "philosophy before Plato—prior to metaphysics and presence." In a way, it is philosophy trying to turn itself into one of the very many sciences it has birthed, with a clear starting point in history and structure. But I'd argue that philosophy still contains all that it has birthed, and hence can never shrink itself down to become one of its own parts, since wisdom itself always relates to the whole.
  • Why are there laws of nature ?


    You could frame this in information-theoretic terms. Laws can be summed up very simply: they have a low algorithmic complexity. Whereas, if behavior were random, such a summary (data compression) would be impossible. But there are vastly many more ways to appear random than ordered, so order begs for an explanation, since it is prima facie unlikely given a non-informative prior.

    You could also think of it, in classical terms, as a matter of how much Laplace's demon would have to remember over and above the initial conditions.
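
    To make the compression point concrete, here is a minimal sketch in Python, using zlib compression as a rough, computable stand-in for algorithmic complexity (which is itself uncomputable); the sequence size and the i % 7 rule are just illustrative choices of mine:

    import os
    import zlib

    N = 10_000

    # "Lawlike" data: a simple rule generates every byte (low complexity).
    lawlike = bytes(i % 7 for i in range(N))

    # "Random" data: with high probability, no rule much shorter than
    # the data itself will generate it.
    random_data = os.urandom(N)

    for name, data in [("lawlike", lawlike), ("random", random_data)]:
        compressed = zlib.compress(data, 9)
        print(f"{name}: {len(data)} bytes -> {len(compressed)} bytes")

    The lawlike sequence compresses to a tiny fraction of its size (the compressed string is, in effect, the "law" plus boilerplate), while the random bytes barely compress at all. And since there are vastly more incompressible strings than compressible ones, a compressible world is exactly the sort of thing that is prima facie unlikely on a non-informative prior.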
  • The decline of creativity in philosophy


    Maybe a bit of an echo chamber effect too. Like I said, the "obligatory" paragraph about how the art critiques or combats racism/patriarchy/capitalism is very common. If these aren't included, then the description will be apolitical, or, if it's philosophical, it will be some form of extreme nominalism. I can't think of a single time I have encountered a description from a different political or philosophical direction though. That would actually be the shocking thing in context; something like an appeal to the Platonic solids as properly "Platonic" would be more outrageous than the excrement.
  • [TPF Essay] Dante and the Deflation of Reason
    On a related note, I came across a great explanation by Rowan Williams of why "love for all men equally" is not something that washes out particularity and freedom, but rather the greatest recognition of it:

    This may help us to make sense of one specific set of Maximian ideas clearly of great importance in the Centuries on Charity and rather open to misunderstanding in the contemporary intellectual context. Cent. I.17 and 25 touch on a theme that will recur several times in the text: the imperative to love all human beings equally, as God does. God loves human beings because of their nature: as we read later on,6 ‘Perfect love does not split up the single human nature, common to all, according to the diverse characters of individuals.’ At first sight, this may look like a recommendation to what we might think of as an impersonal sort of love, indifferent to the need of specific persons and reducible to benevolence towards humanity as a whole. This is in fact completely contrary to what Maximos argues: to love human beings in their nature is to be awake to the very particular things that make each of them more or less in tune with that nature and to respond accordingly.

    What matters is that we should not begin by assessing the claims of human beings to be loved on the basis of individual characteristics; love is not a reward if we understand it in the light of God’s love. And if we put this together with the repeated emphasis in the Centuries on what ‘dispassionate’ love means, the point becomes still clearer. Nothing is by nature evil or unlovable, because all things come from the loving will of God, embodying particular reflections of the one Logos in their diverse logoi, and thus have the potential for mutuality or reconciliation; but when we view them through the lens of passion, self-serving self-referential desire, we do not see things as they are, in their nature. The basic theme is familiar from Evagrios’s treatise On Thoughts 87 with its seminal distinction between angelic, human and diabolical awareness of things, where the angelic consciousness knows things in their essences and the initially ‘neutral’ human consciousness has to beware of slipping into the diabolical knowledge that sees things only in terms of their use to another self. Love must be grounded in the recognition that all things are what they are by nature in virtue of their participation in the Logos: nothing can take away their ‘entitlement’ to love, because they are all capable of growing through the exercise of their proper eros towards their destiny. All are struggling towards mutuality, the fullest possible action of reciprocally sustaining each other’s lives by the gift of their own. Our own love for any other person or indeed any other finite substance is rooted in our own longing to become ‘natural’, to be in perfect mutuality. My eros aligns itself with theirs.

    Passion-free eros is the desire that the other be itself – but not in quite the Levinasian sense of abjection before the other because this is rooted in an ontology for which there is no being-for-the-other abstracted from the pattern of mutual life-giving. Passion is thus what is fundamentally anti-natural, what seeks, consciously or not, to frustrate the natural desirous movement of all finite substances in concert. Maximos can put it even more vividly in the Centuries on Theology II.30, where he speaks of how my failure to grow as I should into my nature is a diminishing of Christ. And in defining passion as a moment of frustration or stasis, we are reminded of the crucial point that at no moment in time is any finite substance or agent yet fully natural. To love their nature is to love both what they already are as logos-bearing and to love the unknown future into which their eros is moving them – to love the ‘excess’ of their being, what Loudovikos would see as their ‘eucharistic’ future as perfected gift.8 All things are en route towards this future, and thus en route towards – as we put it earlier – a universal culture; and, to go rather beyond what Maximos himself says in so many words, this is to say that all things are always already on the way to language, to being understood and spoken, being present in the ‘priestly’ discourse of human beings who make connecting sense of the logoi of what they encounter.

    Loving what is true or real, free from the distortions of passion, is loving what is grounded in the Logos; hence the paradox asserted in Centuries on Charity III.37 – ‘he who loves nothing merely human loves all men’.9 To love what is ‘merely human’ must here mean loving simply what is contingent in this or that individual, what does not belong to their nature as related to God. Universal love is love for the individual as related to the infinite act that sustains it through its particular logos, its specific reflection of the one divine Logos. Proper Christian love thus ‘dispossesses’ itself of its object in more than one sense. Not only does it seek to see and know the object without passion (without self-referential desire), it recognizes that the true being of the object is always in relation to something other than the beholder prior to the seeing or registering of this particular other by the beholder. Thus there is always some dimension of what is encountered that is in no way accessible to or at the mercy of this particular beholder. It is in acknowledging this relatedness to a third that a relation of love involving two finite subjects becomes authentic and potentially open to the universal.

    What is in relation to the ‘third’ is precisely what exists in and by the action of that ‘third’, which is the nature of the subject in question, the project defined by infinite act that is now working through by its own particular mode of eros towards its ultimate purpose. If our love is conditioned by the specific point currently reached by the other subject, it will not be universalizable; it will not be love for the whole project, nature realizing itself through eros. It will be love for a fiction, for the unreal object that is just another finite substance or ensemble of finite substances conceived in abstraction from God and logos. We cannot properly love an unrelated object; if we start from that particular fiction, we rapidly come to regard the other as available for our possession because it is cut off from its ground in God/logos/nature. Our relation to it is no longer truly eros, because we have isolated it in our thoughts from its own desirous movement towards its natural place in the universal network of mutual gift. It cannot be gift to us any longer, and we cannot relate to it in gift-like mode. But if the relation is one of my eros communing with the eros of what I love – desiring the desire of the other, but not in competitive and exclusive mode – the possibility of that ‘eucharistic’ interrelation noted already is opened up to us.
  • [TPF Essay] Dante and the Deflation of Reason


    1. I agree with Leon (and Wayfarer I think), and had to think around this idea to move past it: "Utterances are acts, yet it is substances—things—that primarily possess being, and so it is people (and God) who primarily possess truth." I think you addressed this in your reply to Leon, but I mention it again because I think it should not just be restated, but expounded upon. It gets at something that is essential to understanding what truth is, and that modern thinking avoids. Truth is being, known in the person. Things have being regardless of whether any person knows them (perhaps only because God knows them, but that may be another topic). But the truth of things is in the person who knows these things. (I don't know if I said this clearly, or whether I simply got this idea from you in the first place, but I think this one-liner deserves more attention.)

    Agreed. I worried about the length of the introduction, but it would be a good place to introduce the similar argument for why freedom should primarily be thought of as possessed by persons, not actions.

    2. Here is another concept that I wished you spoke more about: "Hell is much more diverse than Purgatory and Paradise. It has more divisions… This is because the damned pursue multiplicity rather than the unifying" and "sin, which drives us downward and dissolves the person in multiplicity." Driving this home with more analysis and concreteness seems like it would really hammer home the fact of the modern deflation and flattening of what we know and how we know. I don't have much to offer (which is why I wished you said more!) but this struck me as an important insight again, deserving more attention.

    Yeah, there is something I like to call the "Anna Karenina Principle," based on the opening of Tolstoy's novel: "All happy families are alike; each unhappy family is unhappy in its own way." In terms of moral and intellectual virtues, there are many more ways to have bad habits than good habits, just as there are generally many more ways to do something wrong than correctly, e.g., many more ways to treat cancer in a way that is ineffective than to cure it, to break a car than to fix it, etc.

    Both virtue and vice are habits, yet we do not tend to think of vices in terms of “cultivation” because:

    A. It is not something we try to promote, although no doubt bad environments can promote it; and

    B. Since there are many more ways to not act in accordance with reason than there are to act in accordance with it, it is much easier to enter a state of vice than virtue (and this goes along with the persistent theme in many of the Patristics, that man’s nous has been “darkened” by the Fall and is in need of healing to achieve any move to a state of virtue).

    Given B, man is in need of aids to help him attain to virtue instead of vice, e.g., laws, rewards, honors, education, ascetic labors, etc. (and we might add here the holy sacraments and repentance). We do not, as modern political and economic theory often seems to assume, just become free by avoiding great misfortune and turning 18. Rather, as the philosopher-slave Epictetus points out of his own era, most masters are slaves to their own passions, appetites, and ignorance. As Saint Augustine says in The City of God: "a good man, though a slave, is free; but a wicked man, though a king, is a slave. For he serves, not one man alone, but what is worse, as many masters as he has vices" (note that Augustine is not here denying the need for political freedom in the way it might seem; he is quite concerned with it in the City, but also pessimistic about any true "commonwealth").

    We should note two caveats here. First, virtue as “action in accordance with reason,” can be read far too narrowly if we assume this is something like the Enlightenment project centered on the abstract “rational agent” for whom moral action consists in avoiding the violation of universal moral maxims suggested by procedural reason. Rather, to “act in accordance with reason,” is simply to act in accordance with what is known as good, to reach out towards the Good itself by attempting to actualize it in one’s own being. In doing “the right thing” we are avoiding action that is ultimately unjustifiable, and so, in a sense, unintelligible. Moral action is a sort of tendency towards Being (the actuality known by reason), whereas vice is a slide towards unintelligibility and nothingness (evil being a privation of the Good that is known by reason).

    Second, along with the point above re truth and freedom, it is primarily people, not actions, that possess virtue. A focus on action in isolation obscures an important element of freedom: that the virtuous person enjoys acting virtuously.

    Being able to act in accordance with reason allows us to respond better in all situations. As Aristotle has it, a virtue is in a way the universal vis-a-vis right action (a principle). It transfigures the multitude we face in fortune into a unity. Likewise, virtue insulates us from bad fortune, by making our happiness less dependent on external goods, while also making us more able to attain and share those goods we do require. The virtuous person is more self-determining because they are able to unify themselves in pursuit of the ends they see as truly best, rather than being led around by diffuse passions.

    The pursuit of virtue makes us free in another way as well. It allows us to have the desires we actually want to have, i.e., a second-order control over our own appetites and emotions. The eliminativist philosopher and writer R. Scott Bakker has a short story called Crash Space about a technology that allows people to have immediate control over their own emotions and desires through an app interface (using a cybernetic implant in the brain). The story, which is rather disturbing, shows how this capability could easily spin out of control into total disinhibition and violence, since what we desire—and so how we would choose to manipulate our own nervous system with the app—would depend on our prior manipulations of the app, leading to a runaway process whereby we lose all contact with what we truly think is best.

    As sci-fi as this scenario is, there seems to be a real-world parallel between it and recreational drug use, and the way vice leads to our desiring vice (or virtue to more virtue; "For whosoever hath, to him shall be given, and he shall have more abundance: but whosoever hath not, from him shall be taken away even that he hath" - Matthew 13:12).

    At any rate, what the story underscores is that there is some element of freedom in being able to “desire what you want to desire” and “feel how you want to feel.” However, this is only “freedom” if these choices are not limited by pre-existing ignorance and passions, but rather occur according to what we know as truly best. And indeed, this is exactly what habituation in the virtues accomplishes. The virtuous person comes to enjoy doing good. Their desires and emotions have become harmonized with their reason (making the person more fully one, and so more fully free).

    Peter Thiel actually had a surprisingly insightful comment on this in an interview on transhumanism: that, according to Orthodox Christianity, transhumanism doesn't go far enough, since "transhumanism is just changing your body, but you also need to transform your soul and you need to transform your whole self." Dante agrees, hence his being the first in history to pose the question of "transhumanization." "Dispassion" is not the absence of feeling in the Patristic tradition, but the regeneration and right orientation of the feelings: to be free of passions that force themselves on us from without, against our will, or as Bakker would put it: "to cease to be ruled by the darkness that comes before [the light of understanding]."

    But a key point here is that the unity of virtue doesn't equate to a "lack of freedom." It's actually the perfection of freedom. The slide into multiplicity and potency offers 'more options,' but if it's something that is ultimately worse, and chosen out of weakness of will or ignorance, then it isn't freedom. Really, it's the exact opposite. Growth in virtue makes us more fully unified, more fully ourselves, and more fully persons; it is precisely the destruction of a sort of "false consciousness" in sin, a false individuality, that personhood is realized.

    3. Last comment, and I have no idea how to accomplish what it asks, but if you could somehow secularize the language of the piece, I think more people could receive it, and even internalize the points and allow themselves to really challenge "modern" sensibilities and notions of reason. The piece needs the concept of sin. The piece needs the concept of God. But perhaps for sin it could refer to stunting one's own growth, or turning against one's self and self-defeating acts, or taking ignorance as if it was knowledge, or pride as something to be proud of... Instead of referring to "sin" refer to limit and the as-yet unperfected (unpurged)... maybe? For God, my only thought is what you often said, which is "Good" or "Truth" and "Beauty" and "Love", so maybe just use them more.
    It's not that such a revision would improve the piece; it just wouldn't turn away many who, I think, would benefit from really reading it.

    Yes, this is always a tension with Christian philosophy, and it's unfortunate. I could point out a bit more how much of what Dante says does not depend on explicitly Christian revelation. He has key Islamic commentators in Limbo with Aristotle and Plato because he read them and greatly appreciated them. What Dante is drawing on can be found, perhaps not as fully developed, in many strands of Pagan thought and certainly in Jewish and Islamic thought, and even to a surprising degree in Eastern thought.

    Will that help a committed secularist? I am not sure. The move to privatize all such thought in the domain of politics seems to have carried over into philosophy. There are the secularists who will instantly dismiss it as "woo" for mentioning God, but there are also those who will simply take it as something that ought to be bracketed into the category of "taste," and which thus cannot ground any philosophical ethics. I think a different sort of paper is needed for that issue. Christian philosophy cannot really be expected to do without "sin" any more than Hindu or Buddhist thought should be expected to drop their own terms.

    I'm going to print your piece out (in a large font for the old man's eyes) and share it with my father. He'll like it for many of its insights, but this great reference to transformation, Dante's "transhumanized," will be inspiring.

    Great, I hope he likes it!
  • Must Do Better
    There is perhaps a useful analogy here with public policy. In the 2000s and 2010s, there was a huge effort to "bring data to bear" in public administration and military affairs. The idea was that mathematical analysis could provide rigor for policymaking. You had your metrics, you managed to targets based on those metrics, you had continuous data collection and improvement, etc.

    The idea was in some ways similar. Use formal criteria and systems to make progress tractable, and to make nebulous issues more concrete. This was a core idea underpinning the Bush era education reforms, COIN efforts in Iraq and Afghanistan, and dealing with crime waves in inner cities (e.g. Citistat in Baltimore).

    The problem is that many of these efforts continued to show "progress" and had their praises sung right up until it could no longer be ignored that the emperor was wearing no clothes and that the "progress" was entirely illusory. The Anbar Awakening was a great victory for data-driven COIN right up until the fighters defected en masse to ISIS as soon as ISIS had momentum. Only then was it obvious that the pattern was simply following the time-tested behavior of teaming up with whichever side is currently winning a civil war. The ANA was making progress, until that progress was revised back every few years, right up until it collapsed without a fight. But people in the field doing the training could and did predict the outcome. Likewise, Baltimore and other cities hit all-time high crime levels, and rigorous testing and accountability did little for education metrics in the long run, leading to many of these reforms being rolled back.

    My point here is that defining progress in formal terms can sometimes prove illusory. I am not sure about the claim that we "know much more about truth than we did decades ago," unless it is heavily caveated, for instance.

    The amount of specificity and rigor needs to be appropriate to the subject matter, meaning it will only work in some cases. Clearly, it didn't work out great for combating insurgents or gangs, or teaching kids geometry. Might it work in some cases within philosophy? Surely, but it can also create a false sense of progress.

    One example of an area I have read about extensively is the "Scandal of Deduction." I am fairly confident that it cannot be explained in wholly formal terms, but plenty of papers bang their heads against the wall trying to do just that, because that's the methodology.

    The other issue is that people very quickly learn to game metrics. I don't think this is the only reason that they have failed in many cases in public policy though. At any rate, this is relevant in that the way progress gets defined will come to determine how people do philosophy, and you will invariably get a sort of "gaming" of trends in any field (unfortunately unavoidable; the impulse to novelty for novelty's sake seems endemic to how academia is set up for instance).
  • Must Do Better


    Of course, one area where you get a lot of specificity is in scientific terms and jargon, and a common charge against Continental philosophy is that it uses these in ways that seem to misunderstand the original usage, while also not clarifying any alternative usage, which is, so the charge goes, at best a misunderstanding and at worst obscurantism.

    Catherine Belsey covers this in her book on post-structuralism for instance, and in some cases the charge does not seem misplaced. The same goes for seeming misreadings of philosophical sources, if they are to be brushed away by appeals to there being no correct readings.

    So, the Sokal Affair and its capacity to be reproduced is obviously one of the concerns about rigor that comes up when obscurity reaches a certain level (and this can be true in the sciences as well). However, I think the concerns here can be overblown. Certainly, the high success rate of "Sokal Squared" is concerning, but you'd have to do a comparison with other fields to determine whether it isn't simply a larger issue unrelated to subject matter and style. Second, prima facie it would seem just as easy to do this sort of thing not only in "litcrit" but also in analytic philosophy, or even in the more abstract/theoretical areas of the natural sciences, so more convincing evidence would show some real variance there.

    Note, however, that some of the responses to this sort of thing seem deficient. For example, simply pointing to seemingly incoherent analytic or scholastic philosophy. This doesn't say much; presumably there can be bad scholastic philosophy, bad theoretical physics, etc.



    Right, here Bohm is talking physics, which is one of the places information theory has been most influential. The "difference that makes a difference" here is any physical difference at all, which, assuming physicalism, is simply any difference. Although the qubit also becomes basic here.

    But to return to my original point, it is simply that the concerns of the many phenomenologists who see a need for metaphysics aren't addressed by the response that act is posterior to difference because this seems to simply equivocate on the term actuality. Hence, their continued concerns.
  • Must Do Better


    Ok, I don't think that addresses my point at all though. A bit has to be a 1 or 0. I was just quoting the article. A non-actual difference on the classical use of "actual" is not a difference.



    In a bit (of information as in computer science), there is a difference between 0 and 1. It is a difference that does not make a difference.

    This is not true of information theory (or complexity studies, cybernetics, etc.), i.e., that a bit is a "difference that does not make a difference." I am not sure where you are getting that. Quite the opposite. The phrase "a difference that makes a difference" comes from Donald MacKay and Gregory Bateson, and is hugely influential in the application of information theory (probably used in every intro class to help define it). The whole idea is that the medium has to allow a measurable difference for it to contain information at all. You can abstract the bit from all physical media, but then you're still positing a difference to be aware of, one that makes a difference. If they didn't make a difference, then 0 and 1 should be considered identical on conventional views. The notion here is pretty similar to how Hegel thinks that sheer indeterminate being, being contentless, collapses into nothingness. There are parallels between Hegel, Spencer Brown's Laws of Form, and Floridi's work in the philosophy of information here.

    Shannon's original theory is in the context of receiving messages (or we could consider making measurements), and in either case a 1 has to be different from (discernible from) a 0 to be a 1 at all. This is why digital physics was lambasted. Bits cannot be "building blocks" that the cosmos is "made of," because they depend on context to be anything at all (and so we might say that they refer outside themselves for their being). For example, an electron measured against a background field that everywhere has exactly the same measured value as the electron would cease to be something different from the background. When physicists talk about the information-carrying capacity of particles, they are still talking about the difference measuring those particles makes as against not measuring them (the difference that makes a difference).
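
    To put the same point in a toy calculation (my own sketch, not anything from Bohm or Bateson): Shannon entropy, H = sum of p * log2(1/p), quantifies exactly this. A source whose 0 and 1 are genuinely discernible, equally likely alternatives carries one bit per symbol; a source that can only ever emit one symbol, where there is no difference to discern, carries nothing:

    import math

    def entropy(probs):
        # Shannon entropy in bits: H = sum(p * log2(1/p)) over nonzero p.
        return sum(p * math.log2(1 / p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))  # 1.0 bit: 0 vs. 1 is a difference that makes a difference
    print(entropy([1.0]))       # 0.0 bits: no discernible alternative, no information

    A "bit" that made no difference would, on this measure, simply carry no information at all.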

    This is what Bohm is speaking to. Hence, I don't think he is totally correct about the priority of difference; the two principles are arguably each required to make sense of the other, the bit always referring outside of itself. I suppose we might also ask whether the bit is always essence (form) as compared to other form (background), without accounting for existence, which must be prior to difference and similarity. Because, in a certain sense, the order of being itself could be considered an overarching similarity. This is certainly something in the Analogia Entis, but it seems true of the wholeness of Bohm's implicate order, or even perhaps Deleuze's plane of immanence (though no doubt he would probably want to deny that).

    Obviously, Deleuze uses terms differently. This is always a difficulty in translation. In the original sense of "virtual" (as first employed in scholasticism), for instance, it would not make sense to talk of something like a "virtual difference" prior to any actuality, because, in denying all actuality, you would just be talking about nothingness. "Virtual" has the same root as "virtue"; it's about a power (potency), but for a specific power to be a power it has to be potency shaped by some prior actuality, or else it is just the sheer potency of being "nothing at all." So, for instance, sight is the power to see; it is potential, but it also isn't totally indeterminate: it has a formal object, etc. Likewise, in the example of an embryo, if the virtual power were prior to all actuality, an embryo would be "nothing at all," or "nothing in particular," on the scholastic usage.
  • The decline of creativity in philosophy


    Gothic architecture was pretty amazing (and philosophical). They just lacked the technology to fully see it through.

    C.S. Lewis' The Discarded Image has some pretty neat stuff on how the Gothic cathedral is an image of the medieval cosmos.

    But I agree with the bolded. What we need is a revival where we build a Gothic cathedral on the proper scale, with a 3,000 foot spire! :rofl:

    Or not. That huge clock tower in Mecca was a cool idea, but it looks incredibly gaudy to me in context.
  • Must Do Better


    The true infinite can only be considered infinite to the extent that it is an endless repetition of the same finite quality.

    Oh no, that's literally the definition of the "bad infinite," the infinite that is defined in terms of the finite. And I don't think continual variation on the finite counts either. We'd be in a pickle if we were stuck with that:

    1. The finite is superseded precisely in the way that we have analyzed in 3.4 and 3.6. Hegel sums up the argument with his statement that “finitude is only as a transcending of itself” (WL 5: 160/GW 21:133,34/145). Finite qualities can be what they are by virtue of themselves, rather than being defined by their relation to others, only insofar as they go beyond their finitude. To the extent, then, that a quality fails – as it does at every moment of the “progress to infinity” – to transcend itself, to go beyond its finitude, it fails to be. (More precisely, I suggest: It fails to be “fully.” It is, but it isn’t real: It fails to be what it is by virtue of itself.) So finitude must be superseded, in order to be real.

    2. The spurious infinite, on the other hand, is superseded by the observation that infinity is only as a transcending of the finite; it therefore essentially contains its other and is, consequently, in its own self the other of itself. The finite is not superseded by the infinite as by a power existing outside it; on the contrary, its infinity consists in superseding its own self. (WL 5: 160/GW 21:133,36–2/145–146) Since an “infinity” that is over against and flatly opposed to the finite is limited by the finite and thus fails to be infinite, true infinity must include the finite by being the finite’s superseding of itself. To the extent that the finite transcends itself, the finite is, and to the extent that the finite transcends itself, infinity is. Rather than being, on the one hand, and arriving (or, in fact, not arriving) at the goal of pure freedom (and goodness), on the other, the finite something constantly comes (fully) into being by creating pure freedom and goodness, by transcending itself. Both the finite and the infinite come (fully) into being through, and thus they both are, the same process. Though infinity transcends, goes beyond, the finite, it does so not by replacing the finite with something totally different, something entirely “beyond” the finite, but by being the self-transcendence of the finite itself. The true infinite, the true “beyond,” is in the finite rather than opposed to or simply “beyond” it.

    Robert M. Wallace - Hegel's Philosophy of Reality, Freedom, and God

    Or, as Saint Augustine says, God is "within everything, yet contained in nothing," and "more inward to me than my innermost self."

    It is not just man that is self-moving, it is the world that is self-moving. And self-movement does not mean willing what one chooses to will. The movement is as much passive as it is active. One finds oneself in motion. One is thrown into situations.

    Sure, but I don't think this addresses the concern of the metaphysicians. What spontaneously moves itself from potency to act "for no reason at all" (is causeless) has no reason to be one way and not any other. To say being somehow generates its own regularities and intelligibility doesn't really address this issue, since for it to generate any particular sort of regularity would assume a sort of prior actuality. Nor does an evolutionary account really fix things because selection still requires some sort of prior actuality to work by, else anything "selects" as well as anything else.

    Intention and intuition, potency to act and action are not separated in poststructuralist thinking, except artificially. Repetition and difference are prior to this distinction.

    This seems like it might just be a translation error in concepts. At least in the traditional usage of the terms, it is incoherent to say there is repetition and difference prior to actuality, for then they wouldn't be "difference and repetition," since to be anything at all, anything determinate, is to be actual. The two couldn't be really distinct without being actual.

    There is a neat article on the physicist David Bohm's work on difference and similarity that might interest you:

    Difference is seen as more basic than similarity. The reason is that similarity presupposes difference which makes difference logically prior to similarity. In fact, similarity is a consequence of disregarding difference. In such a context difference becomes fundamental. It is therefore natural to ask if there are different kinds of basic differences, i.e. is there really only one difference, usually expressed as in a≠b? It is conceivable that two different objects comprise of two aspects of difference: one collective and one individual. The collective aspect refers to some collective totality, whereby different objects are different because they are differently contributing to the whole, or collective. One could say that each object is defined collectively by being different from all others in the shared context or collective. The individual difference then concerns a direct relation between two individuals. This difference is always used when some object is named, labelled, indexed to identify each object uniquely. A “collective” difference is then reflecting that objects are different in the sense that they, by the very being part of some whole or collection, are differently contributing to this whole. If they were not, they would not be different at all. If cardinality represents the collective aspect of difference, ordinality would represent the individual. It is hard to see any reason why these two aspects necessarily should be identical. This motivates a proposal of two basic kinds of differences where non-ordinality will imply indistinguishability. Another reason for a discussion of indistinguishables is that there are very few systematic attempts to deal with id:s (here after I use the shorthand id for indistinguishable)

    Georg Wikman - The Notion of Order in Mathematics and Physics. Similarity, Difference and Indistinguishability

    But any "difference that makes a difference" is of course actual, sheer potential itself being nothing at all. Difference presumably presupposes something to be different. It might be that one cannot have a 1 without some background to constitute a 0 (finite things refer outside themselves), but you still need to actuallyhave the 1 and the 0 to have a difference.

    I am not sure Bohm has this all correct though. To return to your reference to Hume, the difficulty in the idea of the "bundle" is that if each component were truly sui generis, there would be no bundle. You would have a bunch of wholly unrelated moments, and to even be aware of them as a bundle presupposes something that unifies them for consideration. Likewise, cognizance of difference seems to require comparison. On the other hand, anything that is anything (different or similar) is the same in possessing being. If participation in being is fundamental, as it seems it should be, then this is an overarching similarity. Or, if we follow Parmenides ("the same is for thinking as for being"), there is a fundamental similarity (and the same is true if all thought/being is predicated analogously).




    Well, one difficulty is perhaps a conflation between specificity and rigor. For instance, I love Robert Sokolowski's The Phenomenology of the Human Person, but one of my criticisms while reading it was that it didn't always specify what it was talking about as much as I would have liked. However, I came around on this, that this was actually a wise choice, in line with Aristotle's advice in the Ethics that we ought not demand greater specificity than our subject matter allows. Wittgenstein's appeal to a "family resemblance" is another good example (although it's funny to see this then sometimes transformed into an appeal to a sort of formal "concept of family resemblance"). Actually, I think this is one of the points Grayling (who is quite analytic) criticizes Wittgenstein on, being too vague in these ways.

    I don't think that charge is totally without its merits in some cases. The degree of specificity needs to be in line with the subject matter, and it is possible to err in either direction. But it is easy to mistake a lack of specificity for a lack of rigor. The drive towards reductionism and atomism is a sort of pernicious demand for specificity in some cases, often paired with questionable metaphysical assumptions.
  • Must Do Better
    This topic reminded me of one of the quotes I like on the value of historicism in philosophy, which has tended to play a larger role in Continental thought (although IMO, it hasn't always been a helpful one because it can be problematically absolutized):

    Only the abstract is non-historical. Philosophy is, or should be, an effort to think the concrete. That is why it cannot attempt to surmount the conditions of temporality by seeking out categories which seem to be exempt from history, as do mathematics and logic. It is true that any mind at any socio-historical perspective would have to agree on the validity of an inference like: If A, then B; but A; then B. But such truths are purely formal and do not tell anything about the character of existence. If metaphysics views its categories as intelligible in the same manner, it has really taken refuge in formalism and forsworn the concrete. That is why a metaphysics which conceives itself in this way has such a hollow ring to it...

    ...Let us now consider the second aspect of the sociology of knowledge, its positive contribution. For the impression must not be left that the social and historical dimensions of knowledge are simply a difficulty to be somehow "handled" by one who wants to continue to maintain the objective value of our knowledge. This would be to miss the very real contribution made by the modern historical mode of thought to our appreciation of what objectivity is. Here we may advert to the remarks made in connection with Kant's view that we can only be properly said to know things and that only phenomenal consciousness (a combination of formal category and sense intuition) apprehends things. To this we may add, with Dewey and the pragmatists, that action is also involved in the conception of a "thing."24

    Now with this in mind we may confer a very positive cognitional relevance on the social and historical dimensions of human existence. For if metaphysical categories like "being," "soul," "God," "immortality," "freedom," "love," "person," and so forth are to afford us the same assurance as phenomenal knowledge, they must be filled in with some kind of content – they must begin to bear upon something approximating a "thing." Now obviously this content cannot come from the side of sense intuition as such, which cannot exhibit these notions. It might come, however, from action of a superior kind. And here is where the social and historical dimensions become extremely relevant. For it is through his higher activity as a social and historical being that man gives a visible manifestation to the meaning creatively apprehended in these philosophical concepts. His grasp of himself as a trans-phenomenal being is weakened and rendered cognitionally unstable unless he can read it back out of his existence. Therefore, the historical process by which he creates an authentic human existence for himself is integral to the cognitive grasp of the transcendent dimension of the real.

    Kenneth Gallagher - The Philosophy of Knowledge

    Which also reminds me of:

    The foundation for such a view was already laid in that great law of "reflecting realities" expressed in the Mystagogia, according to which whole and part, idea and individual, ultimately the whole intelligible world and the whole sensible world, are formed in each other and in relation to each other.

    For the totality of the intellectual world appears mysteriously in sensible forms, expressed through the whole sensible world, to those who have the gift of sight; and the whole sensible world dwells within the intellectual, simplified by the mind into its meanings by the formative process of wisdom.... For the ability to contemplate intellectual realities through sensible ones, by analogy, is at once intellectual insight and a way of understanding the visible world by means of the invisible. It is necessary, surely, that both of these realms – which are ultimately there in order to reveal each other – should possess a true and unmistakable impression of each other and an indestructible relationship to each other.26

    This paragraph, which recalls for us the metaphysics of the whole and the part, would be enough in itself to purge Maximus of any reputation of unworldly spiritualism. Precisely as a mystic, he understands the limitations of pure thought, which of its own power embraces the object only through abstract concepts, not on the basis of experience.

    26. Mystagogia PG 91, 669 CD

    Hans Urs von Balthasar - Cosmic Liturgy: The Cosmos According to Maximus the Confessor

    In terms of the focus on method, there is perhaps a risk of going too far over to either side here, either the retreat into formalism (abstraction), or a sort of totalitarian empiricism. And I suppose the difficulty here is that the question of method relies to some degree on questions about being itself. It's a bit of a chicken-and-egg issue. For instance, the conclusions of the eliminativist do support their methodology, but then the methodology is also what leads to their conclusions, and I'm sure this applies in many cases.
  • The decline of creativity in philosophy


    When people talk about the death of art I don't think they tend to mean Picasso, but rather stuff like human excrement or menstrual blood thrown at a canvas with a paragraph on how it's attacking capitalism, the patriarchy, etc. attached. This might be provocative once, but as a trend it starts to look very "emperor's new clothes-ish."

    That said, I am a great appreciator of contemporary art museums and I think the frequency of such work is vastly overblown. There is a lot of good stuff out there that is very creative. However, it is true that a lot of this very creative stuff also has a seemingly obligatory paragraph about capitalism or patriarchy attached to it, and that does seem to be a bit of a straitjacket on much (but hardly all) contemporary art. Likewise, in drama there is a move towards the more interactive, self-guided experience ("Sleep No More" being the big example).


    yet what is totally obvious is that a feudal society simply doesn't employ artists as much as a more prosperous society that enjoys international trade and a high level of job specialization.

    I'm not sure if this is obvious. The work of artists and artisans tends to get replaced by mass production, guilds lose their political clout, and cottage industries go extinct. Museums will often recreate old interior decor, and what you find is spaces, even middle-class spaces, covered in handcrafted art, with the furniture itself often being decorative. When you walk around Pompeii, the interiors are floor-to-ceiling art. Today, the vast majority of art hanging on walls, rugs, furniture, clothing, etc. is mass produced, which of course includes a design element, but it is one design for thousands of copies. The only analogous spaces today tend to be the interiors of some types of church or temple.

    Which is partly to say, Marx certainly wasn't entirely wrong about the alienation from labor brought about by industrialization, and there is definitely a tendency in modern culture to equate value with the ability to generate volume, one that is at odds with the idea of beauty as the marker of quality for functional art. Some cultures, such as Japan, seem to have fallen for this a little less hard.

    Likewise, it's only through protectionism that industries like France's artisanal bakeries and cafes survive.

Count Timothy von Icarus
