• amoralism and moralism in the age of christianity (or post christianity)


    I can only say that I think that is a reading largely or wholly absent from the tradition. Aristotle's typology, for instance, has it that the furthest state of vice is precisely one where the person prefers vice and sees it as better. That's a theme in Plato too, who famously has Socrates argue in several places that no one ever knowingly does wrong (a point picked up by many Patristic authors).

    Do you think it is impossible to define prudence, intelligence, wisdom, gluttony, etc. without having to ground them in blame?

    I don't see why it should be. For temperance, for instance, there is presumably a mean vis-à-vis the satisfaction of some appetites that is healthy regardless of what people currently think it is. Likewise, it's particularly hard to see how the intellectual/doxastic virtues could even be defined in terms of blameworthiness. What makes the intellectual virtues virtues at all is that they lead towards understanding, not that they meet some sort of criterion of proper effort. If that were the case, everyone could become wise simply by "trying hard." And this is just as true for the practical virtues. Bad choices don't become good ones just because you tried your best.

    Courageous, temperate, loving, fortitude, prudent and faithful become transformed into assessments which are not the product of the application of an inner willpower, but involve behaviors which reflect how the situation makes sense to one, given one’s pre-existing means of understanding.Joshs

    Well, if you ground virtue in blame and layer this on, doesn't that mean that no one is ever more courageous, temperate, just, intelligent, wise, etc. than anyone else? They are all doing their best given their understanding, right?

    But doesn't that seem absurd? Certainly some people are wiser, more just, more intelligent, etc.
  • amoralism and moralism in the age of christianity (or post christianity)


    because the only way you could back it up is by giving examples of how your preferences benefit you..ProtagoranSocratist

    Sure you could. You could:
    A. Argue for moral anti-realism, nothing objectionable there.
    B. Point out that all decisions must thus be motivated by "pragmatism."
    C. Argue that individuals seeking their own preferences is what makes the world go round, drawing on liberal political economy, etc.

    You know, Ayn Rand, Gordon Gekko "greed is good." Or even a Sam Harris: "reasonable individuals want to maximize their well-being and that's why justice can be justified," or a Rawlsian elevation of the abstract chooser's reasonable self-interest vis-à-vis a wholly procedural "justice" as set over the presumed unknowability or irreducible plurality of the good.

    I did say "well-argued." The procedure is important. Whereas, even if you swamp an argument for race or sex segregation and unequal rights in peer reviewed citations and make no logical errors, I imagine that's still going to land you in hot water.
  • amoralism and moralism in the age of christianity (or post christianity)
    I disagreeProtagoranSocratist

    With what, the examples themselves? But my point isn't that those positions are correct, but that our current system allows students to pick between them as equally "right" alternatives. Whereas racism, sexism, Hitlerism, or even fundamentalism are not considered worthy of exploration by the "autonomous agent."

    All of the terms you listed above are truisms in that what they have in common is the assumption that the person they are describing has fallen below a norm of conduct.Joshs

    It seems to me like you're just absolutizing a certain (problematic) post-Enlightenment understanding of virtue here, such that virtue is really always grounded in deontological standards and variance from them. But I think that's a poor way of understanding it, and at any rate a dismissal of virtue ethics on the grounds that opposing modern views use its terms wrong doesn't seem like a fair criticism.

    This is most obvious in terms of the physical virtues. When we say that Tom Brady is dexterous, we don't mean "in comparison to doing his best," but "in comparison to every other person." He is excellent at throwing a ball where he wants it when he wants it. Fleet-footed Achilles isn't called such because he tries his best to run (presumably, all the men he chases down are trying their best to run away as well). He has arete (virtue, excellence) because he is the most excellent in his particular role as a warrior.

    And, all else equal, it is better to be healthy, strong, intelligent, wise, prudent, just, etc. than their contrary. That has nothing to do with "blame" per se. Although we might blame people for their vices, that is not why they are vices, or what defines them as vices.

    Of course, there is also the objection that strength, agility, courage, prudence, wisdom, fortitude, charity, etc. don't exist. But this seems absurd. It seems particularly absurd for the physical virtues because some people are clearly stronger, faster, healthier, etc., and yet it also seems rather absurd to say that this is so for the intellectual virtues, or for the practical virtues. Some people are particularly impulsive and rash, for instance. Whether they ought to be "blamed" for this is beside the point as far as prudence being a virtue is concerned.

    the television show "the good place"?ProtagoranSocratist

    This is a good example. The philosopher character is an extreme comic example of indecisiveness. It is not excellent to have this level of indecisiveness; that is what makes the character humorous. One need not "blame" him to think he could benefit from a change.
  • amoralism and moralism in the age of christianity (or post christianity)
    Relative to the perspective of the individual.praxis

    So consider a child who feeds their cat antifreeze because it looks like a fun drink (cats love antifreeze too). Is it thus truly good for the cat to drink antifreeze because all the individuals in question think it is so?

    The tiger enjoys a satisfying monkey hunt and meal—which is good.praxis

    This is simply changing the subject to what is good for the tiger. Again, is it false that it is "bad for the monkey to be eaten?"

    Is it a fact that it is "bad for women to be raped," or does this somehow become false and it is good for women if they are raped just in case you're a rapist? But not even most rapists believe this.

    Let me ask, on this view, how is it ever possible for someone to be wrong about what is good?
  • amoralism and moralism in the age of christianity (or post christianity)


    Relative in what sense? What exactly is: "All else equal, it is bad for a monkey to be eaten," relative to? Certainly not the tiger. To the extent that the tiger has beliefs, I don't imagine it thinks what it is doing is good for the monkey either.

    Or for: "having access to proper water and sunlight are good for my plant," if this is relative, in what context is it false?
  • amoralism and moralism in the age of christianity (or post christianity)


    To see what I mean about the modern paradigm, just consider what would happen if you wrote a paper in most schools claiming that:

    -ethics is wholly bunk and we should just act selfishly;
    -ethics comes from God by command and anyone who tries to justify it otherwise is kidding themselves;
    -ethics is a wholly formal, Kantian duty;
    -ethics is absolutely unknowable and everyone who says anything is unjustified.

    Any of these would probably lead to fine grades if they were well written and well argued. The only one that might get you in trouble is the divine command theory one, potentially on the grounds that the Bible and the Koran are not proper sources for a humanities class.

    This is very much unlike mathematics or even history, where to give multiple wholly contradictory answers will generally mean one is wrong. The paradigm seems invisible here because it rules out nothing.

    Yet the paradigm becomes visible if you consider what would happen if you wrote a paper justifying racism, sexism, defending Hitler, etc. A high school English student who justifies the society of A Handmaid's Tale might very well end up with a call home, particularly if the teacher thinks they are serious.

    But what this shows is that the process is the focus of moral education (Dewey, Rogers, etc.), although with certain bounds (no racism, etc.). Yet this isn't really an absence of ethics; it's just a particular dominant type that is taught.
  • amoralism and moralism in the age of christianity (or post christianity)




    My personal orientation to good and bad is that it's subjective 100% of the time: when the tiger eats the monkey, it's good for the tiger, bad for the monkey. The tiger gets nourishment, the monkey feels unpleasant and dies. The tiger can't be "morally wrong" because it can't question its behavior. However, this subjectivity gets extremely complex when you have humans who believe in free will and compatibilism.

    Right, but is it not a fact that "being eaten by a tiger is bad for monkeys?" It seems to me that this is obvious. What monkeys are tells us at least something of what is good for them.

    Likewise, is it not a fact that it is—at least all else equal—better for a human to be strong rather than weak, agile instead of clumsy, intelligent instead of dim-witted, courageous instead of cowardly, knowledgeable rather than ignorant, prudent instead of rash, possessing fortitude instead of being weak of will, healthy instead of sick, etc.?

    But who is saying that nothing is good or bad in any sense?ProtagoranSocratist

    My point was that the bracketing out of "moral goodness" as a wholly distinct "sort of goodness" comes from early modern Christian theology. Those who reject that theology need to justify using the distinction rather than just assuming it and going from there. Without such a distinction, anti-realism about values is presumably global.

    the subjectivity aspect of "good and bad" also goes beyond predation, especially when it comes to situational responses. Doing one thing in one situation will lead to positive results, and other times negative responses.ProtagoranSocratist

    Sure, I can help with an example: it is good to scoop up your own toddler when they fall and start crying. It is probably not good to run and scoop up a stranger's kid from the park. It might be good to bow to an elder if not doing so would grievously and needlessly insult them in some cultural context, and bizarre in another context. No serious moral realist theory fails to take account of such variances though.

    You also mention Plato: my understanding with him and other Greeks is they largely believed moral righteousness was correlated with the happiness that you feel, and that independent of the latter factor, that there was no basis for talking about morality or justice. However, the question becomes: to what extent can this be established objectively and scientifically. What behaviors lead to happiness, which ones lead to unpleasantness? I believe it's possible to answer this to a limited degree.ProtagoranSocratist

    That's more of Aristotle's framing, but yes, the good is desirable. But, since we are often wrong about what will be good for us, the focus is on what is truly desirable, not what merely appears to be desirable. So, the Republic centers on why justice (and freedom) are truly desirable.

    "Happiness" here is a translation of eudaimonia, which might be better rendered as "flourishing" or "blessedness."

    And yes, knowledge here might be "limited." As Aristotle points out, it is foolish to demand greater specificity than a topic allows. But as for what can be known, it seems that it is better to be: strong, healthy, agile, courageous, wise, prudent, magnanimous, loving, etc. than their contraries.

    What can be known "scientifically" and "objectively" is another matter. I have seen people define objectivity such that even physics is subjective because it is shaped by the mind. Aristotle thought ethics and politics were practical sciences. But if "scientific" means presupposing that values are subjective, then obviously this will be impossible, not because of any truth about the human good, but because "science" has been defined so as to exclude such truths regardless of whether they exist. The sciences are saturated in value judgements, however, so such a framing is always a performative contradiction.

    One could argue that the mere studying of moral philosophy could improve people's lives, but you would have to acknowledge that this lack of study in moral philosophy has more to do with people not wanting to do it more so than a systemic failure in education.ProtagoranSocratist

    Well, on the older view of ethics as the study of ends, how to live a good life, and how to be a good person, I think people do study this. The self-help industry is huge, wellness terminology has flooded our everyday speech, novels and media focus on these questions, etc. Explicit moral philosophy is banished from most curricula however because teaching any positive content is anathema to liberal individualism. However, I'd argue that students are indoctrinated into a particular ethics, a sort of therapeutic, individualist liberalism that oscillates between civic humanism and emotivism. It's just that this ethics pretends it isn't an ethics, and pretends that its forms of indoctrination are uniquely liberatory because its presuppositions are those of "pure reason" and possess an epistemic humility no prior systems possess (arguably, this is just another sort of dogmatism though).
  • amoralism and moralism in the age of christianity (or post christianity)


    Most pre-modern theories of "objective moral truths" (a modern way of putting it) are at least prima facie as "verifiable" as those of the relativist/emotivist. However, I think the big thing to realize here is that:

    A. Virtually all pre-modern ethics are virtue ethics (in the East as much as the West) and so they are framed fundamentally differently from the dominant modern moral theories (utilitarianism, deontology/Kantianism, and divine command theories) because they do not make "laws" their explanatory center point.

    B. They do not separate "moral good" as a sort of sui generis, wholly separate "sort of good" that is distinct from the good of "good food," "good evidence," "good argument," "a good baseball player," "a good car," etc. For instance, for Plato, the Good is present even in what merely appears good (and so in evils that people choose because they think they are good). But this is as true for later Christian thinkers like Aquinas; all things that are "good" in any way are so by participation in the Good. Indeed, for the medievals, "Good" is a transcendental property of being precisely because the distinction between goodness and being is conceptual, not real. Goodness is "being as (truly) desirable." It doesn't add anything over and above being (just as there is not the reality of a thing and, over and above it, its truth—yet of everything that is, it is true that it is). And yet, because they make a distinction between apparent and real goods, such a distinction is "objective" in a sense (but not Kant's sense, which would be rejected as incoherent).

    So, the older ethicist wants to say to the emotivist, Kantian, and utilitarian: "justify your claim that moral good is unique and unrelated to other goods." But the thoroughgoing relativist/anti-realist generally cannot justify this distinction because they don't think "moral goodness" is real in the first place.

    Thus, the real question is a more global "values anti-realism." Nothing is good or bad in any sense. Yet this is prima facie far less plausible. Is it not truly bad for a bear to have its leg crushed in a bear trap? Is it not truly bad, at least ceteris paribus, for human children to be lit on fire? The fact that these things are bad for these organisms seems directly tied to what they are, and seems verifiable. For them to be "unverifiable" we would have to say that the empirical facts of medical science, veterinary science, zoology, psychology, etc. are not really facts. Why aren't they real facts? If we just say they aren't real facts because they involve values and values aren't factual, then all we have done is beg the question and assume our conclusion as a premise.

    Second, global values anti-realism is straightforwardly self-refuting. If nothing is truly good or bad then there can be nothing truly "good" about good argument, good evidence, good faith, etc. Indeed, if nothing is truly better or worse (truly more or less desirable) then truth cannot be "better" than falsity. And so, when the anti-realist expects us to agree with them because what they say is true, they have no grounds for expecting this of us. "Good" just means "I like." So if I don't like anti-realism, I ought not affirm it.

    Likewise, if "good" means "I currently like...," then a late night tequila shot is "good for us" when we want it and becomes "bad for us" when we wake up hungover. Smoking is good for us... until we get lung disease and regret smoking. Etc. But this leads to a sort of global misology where we can never be wrong about anything and every decision we make is "good" (for us), which is absurd.

    Prima facie, virtue ethics is very plausible. A strong rebuttal to it needs to show that, all else equal, it is not better for man to be:

    Courageous instead of cowardly or rash.
    Temperate instead of gluttonous/licentious or anhedonic/sterile.
    Loving instead of wrathful or cold.
    Possessing fortitude instead of being slothful and unmotivated.
    Hopeful instead of fearful.
    Strong instead of weak.
    Agile instead of clumsy.
    Prudent instead of lacking in consideration.
    Wise instead of foolish.
    Faithful instead of recalcitrant.
    Etc.

    But prima facie, on average, it is better to possess those virtues rather than their corresponding vices. To be sure, a virtuous man might suffer from bad fortune, and a wicked man might benefit from good fortune, but fortune is by definition outside our control. Virtue is what makes us happiest with what is within our control. It also makes us more self-determining, less ruled over by our appetites and external causes, and so more free and self-governing (which is what allows us to flourish in spite of fortune). Hence, the anti-realist has to say the virtues aren't better for man, or else that they mean something different in every instance and don't really exist (a hard claim to make, since Eastern cultures have fairly similar core virtues).

    I think the plausibility of anti-realism rests on the modern framing of ethics. Alasdair MacIntyre has a famous book, After Virtue, about how this framing emerges. It comes from Reformation era theology. And he argues that it is what makes modern ethics incoherent. I think he is right at least in broad outline here.
  • On how to learn philosophy


    A few really good resources I can think of are:

    The Oxford Very Short Introductions - they can vary in quality, but they are generally quite good. They are on quite specific topics, which is helpful. The one on objectivity is especially good.

    The Routledge Contemporary Introductions to Philosophy - these are pretty good topical introductions. They are generally quite biased towards later analytic thought. However, since historical surveys are always incredibly thin, they're still better than attempting some sort of chronological slog (and I say this as someone who is not particularly hot on analytic philosophy).

    The Teaching Company - they do a lot of lecture series on various topics. The ones on mind-body philosophy and philosophy of science are particularly good, as is the one on the Platonic dialogues by Sugrue. The one on information theory is also neat, if not quite philosophy. They're hideously overpriced on their site but on Audible they aren't that expensive and I think a number are free with a membership at any given time. Or the site Wondrium had them fairly cheap and I discovered that if you do a trial and then cancel they give them to you at a steep discount.

    The Modern Scholar - Sort of the parallel to the Teaching Company. They have a good series on Plato and Aristotle, and one on speech act theory that I didn't get to finish that seemed good. The guy who does the Divine Comedy and other epic literature is great too (Timothy B. Shutt).

    And then for stuff I know is up online:

    I really like Eric Perl's Thinking Being for metaphysics, although being a survey it is quite broad. That one happens to be up on Google right now: https://afkimel.wordpress.com/wp-content/uploads/2018/06/eric-perl-on-metaphysics.pdf (IDK, maybe it isn't supposed to be, but it's been one of the top search results for ages now so I assume I'm not doing any harm by pointing it out).

    Pierre Hadot's Philosophy as a Way of Life on how ancient philosophy was practiced (very different from modern philosophy) is also interesting, and it's a collection of essays so you can dip into it: https://ascetology.wordpress.com/wp-content/uploads/2016/09/pierre-hadot-philosophy-as-a-way-of-life-spiritual-exercises-from-socrates-to-foucault-1.pdf

    I normally recommend Robert M. Wallace's Philosophical Mysticism in Plato, Hegel, and the Present because I think it's a really great introduction to the broadly "Platonic" tradition in quite accessible terms. If you ever get into Hegel (which I wouldn't at first lol), his book on Hegel is really great (so is Houlgate's commentary on the Logic and Gary Dorrien's Kantian Reason and Hegelian Spirit as an introduction on German Idealism, at least the first parts of the book).

    And lastly, Boethius' Consolation of Philosophy is a really beautiful book.

    Oh, and Fukuyama's Origins of Political Order is quite encyclopedic on theories of state development (political theory), although it is a bit of a "whig history" in favor of liberalism.
  • The integration of science and religion


    I'm only vaguely familiar with a few of those names. Wouldn't this diagnosis be broader though? People have seen "building a bridge back to the world," and "securing other minds" as a chief problem for philosophy since Descartes (rationalism). Grounding morality given an epistemology that starts from an enclosed agent equipped with a wholly discursive reason is a problem in Hume (empiricism) as well (e.g., ethics' collapse into sentimentalist anti-realism). The two flavors become fused in Kant. For Kant, other minds—of God or our fellow man—cannot be objects of speculative reason (knowledge) but are merely "postulates of practical reason," i.e., an assumption needed for the individual good will to will itself in a wholly formal, law-like manner, in accordance with what Kant says is the discursive, rule-following nature of reason. This is probably still the most important ethics in politics (through Rawls) and it is one that feels it must justify proper behavior despite our being cut off from the world and knowledge of goodness (hence, Rawls's elevation of procedural/formal justice over goodness).

    This is certainly the problem Kant's main successors saw as central to modern thought (e.g., Fichte, Schelling, Hegel). It shows up as a driving concern in a pretty diverse group of thinkers, from Kierkegaard, to Husserl, to Wittgenstein. It's just that the solutions are very diverse, from something of a step back towards participation in the Logos (Absolute) in later German idealism, to attempts to argue that language presupposes community, to the phenomenological project, to dissolving the subject entirely, to the anti-metaphysical solution of calling such concerns "meaningless" (in both its more dogmatic empiricist and pragmatist forms, from the Vienna Circle to Rorty or Dewey).

    The traditionalist response has actually built quite a bit on the post-modern deconstructions of the framing that leads towards solipsism. Charles Taylor uses Heidegger and Merleau-Ponty to deconstruct the modern "closed world system" for instance:


    “From within itself, the epistemological picture seems unproblematic. It comes across as an obvious discovery we make when we reflect on our perception and acquisition of knowledge. All the great foundational figures – Descartes, Locke, Hume – claimed to be just saying what was obvious once one examined experience itself reflectively. Seen from the deconstruction, this is a most massive self-blindness. Rather what happened is that experience was carved into shape by a powerful theory which posited the primacy of the individual, the neutral, the intra-mental as the locus of certainty. What was driving this theory? Certain ‘values’, virtues, excellences: those of the independent, disengaged subject, reflexively controlling his own thought processes, ‘self-responsibly’ in Husserl’s phrase. There is an ethic here, of independence, self-control, self-responsibility, of a disengagement which brings control; a stance which requires courage, the refusal of the easy comforts of conformity to authority, of the consolations of an enchanted world, of the surrender to the promptings of the senses. The entire picture, shot through with ‘values’, which is meant to emerge out of the careful, objective, presuppositionless scrutiny, is now presented as having been there from the beginning, driving the whole process of ‘discovery’.”

    A Secular Age

    I think what tends to unite traditionalists is the insight, borrowed from 20th century thought, that this paradigm is itself historically contingent, and that these problems did not plague earlier systems (and not because they failed to be "critical" in their use of transcendental arguments and scrutiny of reason).

    Yet, since their attention is turned backwards (many key figures are historians) they have been able to take genealogical critiques of modernity much further (Michael Allen Gillespie's "The Theological Origins of Modernity," Amos Funkenstein's "Theology and the Scientific Imagination," Brad Gregory's "The Unintended Reformation," Peter Harrison's "Some New World," as well as MacIntyre, Milbank, Taylor, and Schindler's larger project). Having always liked early modern history, and having engaged with plenty of sources outside this sort of genealogy, these seem extremely plausible to me, and some like Funkenstein and Milbank are intricately researched.

    Most of this work is fairly recent (from the 90s or later, although a few are earlier), but I'm still surprised that it hasn't spread as much outside this set. Perhaps it is because it is often quite technical, focused on a period most people don't pay much attention to, or maybe because it exacerbates the tension in a lot of continental/post-modern thought: what is said about the historical contingency of Enlightenment thought applies just as much to contemporary "post-modern" thought (and in ways that are more direct and explicit than a general acknowledgement of this allows), undercutting it in certain ways.

    seem to be clamouring for a counter-Reformation to the Enlightenment.Tom Storm

    Yes, but there are actually three sets in the traditionalist camp here.

    There are the largely Catholic "TradCaths" who look mostly as far back as Trent and to neoscholastic readings of Aquinas, and tend to want to move towards a pre-Vatican II early-modern philosophy. They think the nature versus supernature distinction is essential for explaining grace as gratuitous. They are ambivalent towards Plato.

    Against these there is a camp that looks back to ancient and early medieval, and Eastern Christianity more, with their own more "neoplatonic" reading of Aquinas. They also like Saint Maximus quite a bit. And here is where David Bentley Hart, de Lubac, Milbank, etc. would fit (and really Wallace, outside the Christian context). They reject the nature / supernature division entirely. They have been more successful in the Vatican and in theology and philosophy, although they don't have the same sort of popular cult following online. They love Plato.

    And then there is a sort of Neopagan and often Nietzschean traditionalism (e.g. Bronze Age Pervert). But, strangely perhaps, this crowd is quite close with a libertarian yet "Christian Nationalist" traditionalist camp (more Protestant), so I put them together. And these folks tend to hate Plato (except for the Guenon, etc. ones into Hindu traditionalism; but they like the Laws more than the Republic).
  • Understanding 'Mental Health': What is the Dialogue Between Psychiatry and Philosophy?
    Two famous books on this topic that are more on the critical side, Philip Rieff's "The Triumph of the Therapeutic: Uses of Faith After Freud" and Christopher Lasch's "Culture of Narcissism," come to mind. I would say the 10,000 foot takeaway there is that there are major risks/issues if psychology and the language of health/wellness come to define ethics and the philosophy of "living a good life" and "being a good person." The fact that patients are very much "customers" in the US context adds a particularly interesting wrinkle to the idea that the "therapeutic" can guide ethics and politics (I touched on this in the thread currently up on self-help).

    James Davison Hunter's "The Death of Character: Moral Education in an Age Without Good or Evil" has been less influential, but deals with the wider effect of the "therapeutic" ethos, along with Dewey's, Maslow's, and Rogers's profound effect on education.

    Hunter does a pretty good job showing just how far psychology has penetrated into cultural institutions, at least. For instance, he pairs texts prepared for Girl Scouts in the first half of the 20th century versus the 1980s, and a range of texts from the late 1700s to early 1900s versus those after 1970, showing how radically they have changed. The framing of ethical life is his focus, and this shifts radically towards the procedural ("values clarification"), while psychological terms like "self-esteem" and "self-actualization" take on a central role and an explicit moral framing of "character" largely vanishes (although, pace Hunter, I would argue this older framing had already badly atrophied since 1500, and was essentially incoherent by 1900, and this is why it was banished—after all, what does "don't cheat because it isn't morally good" even mean?). An interesting finding he documents is that different worldviews (broken into: expressivist/emotivist, utilitarian, civic humanist, conventionalist, and theistic) are the strongest predictors of ethical decision-making (or at least, given how this research is done, simulated decision-making); more so than race, class, sex, etc.

    Certainly, psychology played a very large role in philosophy and culture more broadly. A criticism I'd like to point out here is that psychology, like economics, is not metaphysically neutral. Aside from empirical work, it provides an interpretive lens through which data is interpreted, one based on ideals dominant in the field. This sort of philosophical backdrop, which one might describe better as a "world-view," "social imaginary," or even a "religion" (as in, the widest possible interpretive lens—God normally makes no appearance) obviously tracks with what is dominant in the culture, but even more so it tracks with what is dominant in the academy, which can drift quite far from the mainstream culture, or even between disciplines. But because the academy has had a huge influence on education (the key organ of socialization and indoctrination in Western society), there is a sort of feedback loop here, where the values of the academy make it into the wider culture. Our lexicon is chock-full of therapeutic terms today.

    Hence, you can get prior philosophical (or almost aesthetic) commitments driving the ship in some ways. This is perhaps most obvious in the situationist critique against personality, character, etc. in social psychology and sociology, which centered on a philosophy that wanted to dissolve the subject/individual (one might suspect, for largely philosophical reasons). Lo and behold, programs whose curricula were loaded with texts claiming that the illusory nature of the individual must be overcome also discovered that the individual was illusory. And yet, this area later became ground zero for much of the replication crisis, and some of the claims it made for things like "priming" are, in retrospect, the sort of thing that should have rung alarm bells in the same way claims of psychokinesis do. But it was in line with fashionable and politically relevant dogmas, so it didn't.

    What's also interesting here is how far different, related fields can diverge, social psychology and economics (which tends to absolutize the atomized rational actor) being prime examples. Social psychology might say the whole of economics is built on the fundamental attribution error.

    I guess a difficulty here is that the tendency in philosophy to want to "defer to the sciences" can miss the ways in which the sciences are themselves often built on particular philosophies. And the more dominant a philosophy is in a science, the more transparent it becomes.
  • Self-Help and the Deflation of Philosophy
    I find this connection striking. For instance, hedonism might become "common-sense" because one struggles to comprehend something of value outside one's own experience. Ethics is obviously just personal preference, if it is anything at all. It captures the vast spread of contemporary cynicism and scientism well. Was it something in this direction you had in mind with "anti-metaphysics?"GazingGecko

    I was thinking primarily of the empiricist movement away from any metaphysical theorizing, although the continental tradition has its own version of this. Hume is a good early example. But really it's an issue across modern thought that emerges as epistemology becomes "first philosophy," while at the same time that epistemology comes to absolutize the immediate as—if not "more real"—then at least "more secure" than the intelligible. (This is a bit of a generalization of course. This move to the sensible sometimes leads to a prioritization of what can be quantified, such that mathematics is "most real," but I think that this move towards mathematization is actually also grounded in the elevation of the "common sensibles" as laid out here). More broadly, the problems of modern epistemology have generally been "building a bridge back to reality" precisely because its starting assumptions include a buffered, if not solipsistic, agent. I'm not really sold on post-modern attempts to dissolve this agent either, since they still seem to fall into the same habit of absolutely prioritizing the immediate. On this, I think Hegel had a good point in the Phenomenology that the absolutely immediate, if taken to the limit, is itself contentless.

    Charles Taylor lays this out quite well in A Secular Age, and I found a good summary of the part I liked:

    The Closed World System (CWS; closed to "transcendence") he describes is the one most commonly held in the west today – a picture of individuals as knowing agents who build up their knowledge of the world by taking in information and forming mental pictures from which they build theories. An understanding of science often combines with this structure, and a series of priority relations tell us what is learned before what. Sense experience acts foundationally – “I must grasp the world as a fact before I can posit values.” In this CWS, any contact with the transcendent must come as an inference and “it is obvious that the inference to the transcendent is at the most extreme and most fragile end of a series of inferences; it is the most epistemically questionable.”

    Taylor uses the work of post-modern thinkers such as Heidegger and Merleau-Ponty to deconstruct these ‘master-narratives’ of modernity and to show how they are constituted by a “massive self-blindness” – the supposed neutrality of secularity actually appears to be bogus.

    https://mrlivermore.wordpress.com/2014/06/03/charles-taylor-secularity-and-miracles/

    This tends to isolate us from anything but surface appearances (indeed, consider the wide, continuing influence of the Kantian notion that all we have access to is appearances). This seems to lend itself quite well to Han's "death of the Other."


    I suspect the analytic stance is less pornographic than the ironic, at least in general. Yet, while introspecting, I can certainly see the allure even in the analytic. Only focusing on a narrow problem inside a big problem, breaking it down into conditionals and treating important questions like sterile puzzles has a strange comfort.GazingGecko

    Sterile, but also potentially solipsistic if we get stuck in an epistemology that never lets us outside our own heads. And I'm not sure if merely being "stuck inside language" is that much better.

    Not to put too fine a point on it, but the essence of sin for Saint Augustine was the incurvatus in se, the way the soul becomes bent in around itself. At the limit, it becomes like a black hole, wholly cut off from communion and the erotic ascent towards Being. Two trends in modern literature suggest this problem. The first is a sort of straitjacket intellectualism that is unable to transcend procedural reason, and so gets stuck in the cul-de-sac of skepticism, cut off from being. The second is voluntarism, the will becoming entirely its own object. The two pathologies reinforce each other though, since if one can know nothing else, the self must be the object of the will, while a will focused on itself will only direct the intellect outwards in terms of the desires of the self.

    For examples of the intellectualist strain, there is Hamlet, Ivan Karamazov, or Raskolnikov in Crime and Punishment, or the society of Brave New World (an extremely rational society that is focused on hedonism and economic growth because it sees nothing else to seek outside the immediacy of "utility"). For the voluntarist strain, there is Milton's Satan or 1984 (or maybe some of Ayn Rand's heroes, although less self-consciously)—sheer self-assertion and power. Those are sort of the limit cases in art, but I feel like much more mundane philosophies can get trapped in the dialectic that those literary works bring out in the extremes.



    The "help" offered is in line with those assumptions, less or more obviously, thus the egoic and managerial focus of self-help products.

    I don't disagree with you here really. A big point made by the authors of the "Manosphere" is that their transactional view of romantic relationships (and even friendship) and open embrace of "might makes right" is simply a more honest approach to what people already knowingly (if not as explicitly) embrace in the common framing of human life in terms of "market forces." I think this is a warped view of reality though (although it can be a self-fulfilling sort of view). The atomized utility maximizer of liberal economics is not an empirical fact, but an interpretive lens.



    Well, a common counter to this is that some ills seem to be decreasing. Violent crime has decreased (before tipping up again lately), although this seems to be largely attributable to an aging population and the fact that young people (who tended to commit most crimes) increasingly just don't socialize at all and anesthetize themselves with drugs and electronic entertainment instead. Teen pregnancies have gone down, but that's because pregnancies in general have collapsed and people are having sex, forming couples, and getting married less overall. Other ills, such as "deaths of despair" seem higher than ever though.

    I think it's telling that the great monster of our era has shifted from the serial killer of the Baby Boomer era, a sort of monstrous hedonism wed to human cunning who became a sort of cultural icon (e.g., anti-heroes like Hannibal Lecter), to the sheer inchoate rage (and terrible efficiency) of the spree killer.
  • The Preacher's Paradox
    the other hand, by calling the sermon "infection," I used a very vivid metaphor that perfectly aligns with my convictions: faith develops within a person, but begins with a seed (which enters from outside). And I emphasize this once again—faith develops within the subject!Astorre

    As a counterpoint, a book I really love, Robert Wallace's Philosophical Mysticism in Plato, Hegel, and the Present, argues, compellingly I think, that mysticism is a regular part of human experience, and that this is what the "Platonic tradition," rightly understood, is grounded in. Here, "faith" has more to do with loyalty to what is highest in us (including our experience of our own freedom), and trust in beauty, love, and truth that is directly and ubiquitously experienced. And so, part of the role of preaching is merely to awaken people to this, and to motivate them to recognize it and live into it.

    I just shared part of the introduction so I won't repost it here. I've shared some of the psychological and metaphysical grounding of this claim before.

    So, against the "closed world system," where the claims of the "mystic" or preacher are "maximally distal" from what can be known with confidence, Wallace argues that the divine is not only what is most immediate, but also what is most fully real.

    Now, with a "preacher" we are normally also talking about someone who is discussing, to at least some degree, revealed religion. Revealed religion is different, since it often involves historical claims and more distal metaphysical claims. But these are normally mixed with claims about this "generally accessible mysticism" and how to develop and live into it (although some religions lose track of this). I think the role of "knowing by becoming" (of which Boethius is such a great example) is an excellent example of how this works in practice. The relevant knowledge is in many cases a sort of self-knowledge.

    And indeed, for a lot of theologians the role of revelation, particularly historical, public revelation (as opposed to private), is precisely to elucidate those things not easily accessible by this sort of experience. But faith (trust) in these revelations is supported by the former sort of faith (loyalty); hence "have faith that you might understand" (Isaiah, Augustine, Anselm).

    I really love Wallace's book, but I think it also shows the limits of "natural theology." Aside from being unable to mediate disagreements, the larger issue is that, buffeted by skepticism and distraction, it only gets one so far. Particularly in our modern context, it seems like it could easily become a sort of sterile orientation towards the Good/Beautiful/True as mere "conceptual objects," the target of a "limitless desire for goodness" that is nonetheless unattainable, where union is always out of reach. I can think of no better image of this than Dante's Limbo, filled with the righteous Pagans who, though lovers of the Good in the abstract, are forever separated from the object of desire (and it is perhaps better here to take this as an image, and not as a theological statement about the fate of particular souls after death).


    I'm inclined to think that the whole point of religion/spirituality is the pursuit of wealth, health, and power.baker

    And yet so many religious texts devalue these, and so many key figures eschewed them and gave them up in life.
  • The integration of science and religion
    I forgot about a good quote I like on this topic:

    There if anywhere should a person live his life, beholding that Beauty.

    Plato, Symposium 211d

    The Platonic philosophical theology unifies us with ourselves, with each other, with the world, and with God, by explaining that a higher reality or God is present in this world and in us inasmuch as it inspires our efforts toward inner freedom, love, beauty, truth, and other ideals. These efforts give us a unity, as “ourselves,” that we can’t have insofar as we’re the slaves of our genes, hormones, opinions, self-importance, and so forth. For in contrast to our genes and so forth, which are implanted in us or are reactions to what surrounds us, efforts toward ideals like inner freedom, love, beauty, and truth are more likely to reflect our own choice. So that if anything reflects “us,” ourselves, and not just our surroundings, they do.

    So through ideals like inner freedom, love, beauty, and truth, something that’s “higher,” because it’s free and fully “us,” is in us. Since we often fall short of it and lapse into merely reactive or merely bodily functioning, we can call this higher self-determination, by contrast, “divine.” And there’s nothing that we know better or more directly than we know this inner choice that we make, to be either automatic and reactive or free and self-determining. So we have every reason to regard the choice as real, and our awareness of it as knowledge. And since “mysticism” is the name for the doctrine that we have direct knowledge of a higher reality or God, and this Platonic train of thought shows how we have such knowledge through awareness of our inner choices, it shows how mysticism in this sense is entirely rational. Since we often fall short of inner freedom, love, beauty, and truth, they have the “transcendence” that we expect of religion. They are inspiring as well as rational, “above” us as well as “in” us. But what’s remarkable is that because this transcendence is rational, it’s a feature not only of the higher reality that mysticism and religion celebrate but also of science. In fact, because science is one of the ways in which we choose to pursue truth and thus transcend our genes, hormones, favorite opinions, and self-importance, science is a part or an aspect of the higher reality that mysticism and religion celebrate...

    So rather than inherently conflicting with mysticism and religion, science is a part of the higher reality that mysticism and religion celebrate. Religion and science both transcend by seeking inner freedom and truth. It’s just that science, being restricted to what we can know by scientific methods, is narrower. It’s only one aspect of the transcendent freedom, love, beauty, and truth, the higher reality, that religion or religion in the making celebrates. This unusual way of understanding the relation between science and religion can free us from a good deal of mental fog and fruitless disputation.

    But the relation of science to religion isn’t the only familiar issue that the Platonic higher reality transforms. It’s probably evident from what I’ve said that the Platonic higher reality reveals an intimate connection between “fact” and “value.” A world in which there was no pursuit of values like love, beauty, and truth, or (as Plato puts it) “the Good,” would not be self-determining or fully “itself.” If being fully “itself” is the most intensive kind of reality, such a world would lack what’s most real. By directing our attention to the role of value in what’s most real, Platonism shows the limits of the “disenchanted” and “value-free” account of reality that we associate with scientific objectivity. Important though it is, the reality that science identifies is not the ultimate reality. The reality apart from itself that science in its normal activities identifies is not, in fact, the ultimate reality of which science itself, as a pursuit of truth and thus of self-determination, is an aspect. When science becomes aware of this ultimate reality to which it contributes, and which depends on values such as truth as well as freedom, love, and beauty, it becomes evident that the ultimate “fact” or reality is not actually independent of “value.”

    Robert M. Wallace - Philosophical Mysticism in Plato, Hegel, and the Present





    It seems to me that Indian thought avoids a lot of the problems that dominate Western discourse on this issue. As I see it, the West suffers from a sort of self-inflicted metaphysical wound that stems from the Reformation, that results in a truncated world-view and closed off epistemology that has approached solipsism at the limit. Indeed, so much of modern thought has been an attempt to escape this solipsism (or a sort of moral solipsism)—to build a bridge between us and the world—or else to learn to live as self-enclosed, contingent entities. In the midst of such a "crisis" (as it is often called), any bridge beyond the sensible becomes "a bridge too far."
  • The integration of science and religion
    Pavel Florensky, a priest, mathematician, scientist, and electrical engineer (sometimes called the "Russian Da Vinci"), approached this in a Christian frame through Sophiology. Come to think of it, I think Bulgakov, probably the biggest figure in Sophiology, started in political economy. Michael Martin's The Submerged Reality: Sophiology and the Turn to a Poetic Metaphysics is a pretty neat book on this (it also looks at Jacob Böhme, German romanticism, and Valentin Tomberg).
  • The Preacher's Paradox


    :up:

    This is similar to my thoughts, but since I had already written this earlier, I'll share it anyway:

    Inspired by Kierkegaard's ideas:Astorre

    I was reading Peter Harrison's "Some New World" recently, another genealogy of modernity, and one of his early chapters is on the radical changes in epistemic terminology due to the theological controversies that ended up driving the creation of the secular/naturalist/empiricist/exclusive humanist paradigm, which emerges, as he has it, as an evolution of Christian theology (as opposed to a rejection of theology; others, such as Taylor and Milbank, have made this same point).

    Three changes are particularly important. "Natural versus supernatural" emerges as a new distinction. "Faith" is redefined from something like "trust" (and, at a deeper level, a sort of positive illumination, one inclusive of knowledge) to something like "belief in the absence of knowledge."

    Of course, part of the reason faith must now be "belief without knowledge" is because knowledge also gets redefined. It becomes something more like "justified true belief," as opposed to "the mind's grasp of being," and "justification" itself radically changes its meaning. To quote an earlier thread:

    The history of "justification" as a theological term turned philosophical is itself telling here. To be "justified" was originally an internal process, a change in that which is justified. It meant "to be made righteous." With Luther, it is displaced to external divine judgement, an imputation. Then it ends up becoming a philosophical external imputation that devolves down to either the community or the individual. A "justification," of claims to be in contact with reality (in possession of knowledge) on the basis of appearances needs some metaphysics of how appearances relate to reality. If this linkage doesn't exist, I am not sure how justification ever falls into place or how truth would ever show up in our experience. But if justification is about the private and communal imputation of status in the first place, and not about a relationship between the knower and known, how could it ever bridge the gap?Count Timothy von Icarus

    These are all connected though. The idea of a wholly isolated and self-contained nature is also paired with a denial of any sort of contemplative knowledge, and eventually a denial that reason has any direct access to being (which leads towards reason becoming wholly instrumental and procedural, a computer).

    My point in bringing this up?

    What exactly doesn't Saint Paul know after being struck blind on the road to Damascus, being gathered up to the Third Heaven, etc.? What doesn't Ezekiel know, or Abraham? For them, any doubt certainly isn't framed in terms of Kierkegaard's dialectic of the subjective and objective, with the latter denoting an empirical consensus space centered around a mechanistic, purposeless world where God is absent except as a "transcendent" force reaching in. So, what exactly do they doubt?

    It seems to me that they might have claims to knowledge. That doesn't mean they are correct or that they lack doubts. However, their doubts might be different from our doubts if we inhabit the "closed world" of natura pura. At any rate, this "risk of being wrong" isn't particularly unique to religion.

    For instance:

    I was drawn to this topic by conversations with so-called preachers (not necessarily Christian ones, but any kind). They say, "You must do this, because I'm a wise man and have learned the truth." When you ask, "What if I do this and it doesn't work?" Silence ensues, or something like, "That means you didn't do what I told you to do/you didn't believe/you weren't chosen."Astorre

    This happens with fitness gurus all the time. Yet we normally don't think of "how to gain muscle" or "how to bench press more" as questions of faith. The same sort of thing might happen with creative writing, relationship advice, etc.

    The modern Western retooling of epistemology tends to wholly exclude contemplative knowledge, which is a core part of all pre-modern philosophy (Eastern even more than Western). This affects religion more than other areas, but it also affects how the physical world is viewed, aesthetics, politics, ethics, etc. These all risk becoming areas of "faith" because they aren't open to becoming a sort of reliable techne that justifies and objectifies itself in regular, reliable use. However, as the scientific anti-realists argue, this applies just as much to scientific theory (as opposed to technology).

    After all, while the elimination of contemplative knowledge was originally argued for on the grounds that people who appeal to it contradict one another, it seems to be a fact of history by this point that empiricism and instrumental reason have led to no more agreement in the relevant areas. Nor have modern ideologies (fascism, communism, liberalism) been particularly less violent or assertive in their dogmas. Indeed, arguably Buddhism, Hinduism, Islam, and Christianity, in at least many traditional forms, agree on more of importance than post-Enlightenment thought (which, of course, has many strands that wholly deny value any true reality, or truth, etc.).

    I think the paradox arises more from what Charles Taylor calls the "closed world system:"


    Modernity, according to Taylor, has developed very powerful versions of phase 2. These are ‘closed’ or ‘horizontal’ worlds, which leave no place for the transcendent (or ‘vertical’) – they even render it inaccessible or unthinkable. I will give a brief picture of the contemporary western CWS.

    The CWS he describes is the one most commonly held in the west today – a picture of individuals as knowing agents who build up their knowledge of the world by taking in information and forming mental pictures from which they build theories. An understanding of science often combines with this structure, and a series of priority relations tell us what is learned before what. Sense experience acts foundationally – “I must grasp the world as a fact before I can posit values.” In this CWS, any contact with the transcendent must come as an inference and “it is obvious that the inference to the transcendent is at the most extreme and most fragile end of a series of inferences; it is the most epistemically questionable.”

    Taylor uses the work of post-modern thinkers such as Heidegger and Merleau-Ponty to deconstruct these ‘master-narratives’ of modernity and to show how they are constituted by a “massive self-blindness” – the supposed neutrality of secularity actually appears to be bogus.

    Taylor explains the three aspects of a challenge to such an epistemological picture:

    1. Our grasp of the world can’t be accounted for in the simple terms of mental representations of outer reality – such representations only get their meaning for us from a more fundamental process of ‘coping’ with the world as bodily, social and cultural beings.

    2. This ‘coping’ activity is not primarily that of individuals, but is a social process which we are inducted into.

    3. We do not deal with objects as part of the coping process, but what are called by Heidegger pragmata – the focal points of our coping, and which therefore already come to us with meaning and relevance.

    The upshot of all these arguments is that they completely overturn the priority relations of foundationalist epistemology – as Taylor says, “there is no priority of the neutral grasp of things over their value”; things that are considered to be late and questionable inferences are seen to be part of our primary predicament, so that the sense that the divine comes as a remote inference is completely undercut by this challenge.

    “From within itself, the epistemological picture seems unproblematic. It comes across as an obvious discovery we make when we reflect on our perception and acquisition of knowledge. All the great foundational figures – Descartes, Locke, Hume – claimed to be just saying what was obvious once one examined experience itself reflectively. Seen from the deconstruction, this is a most massive self-blindness. Rather what happened is that experience was carved into shape by a powerful theory which posited the primacy of the individual, the neutral, the intra-mental as the locus of certainty. What was driving this theory? Certain ‘values’, virtues, excellences: those of the independent, disengaged subject, reflexively controlling his own thought processes, ‘self-responsibly’ in Husserl’s phrase. There is an ethic here, of independence, self-control, self-responsibility, of a disengagement which brings control; a stance which requires courage, the refusal of the easy comforts of conformity to authority, of the consolations of an enchanted world, of the surrender to the promptings of the senses. The entire picture, shot through with ‘values’, which is meant to emerge out of the careful, objective, presuppositionless scrutiny, is now presented as having been there from the beginning, driving the whole process of ‘discovery’.”

    https://mrlivermore.wordpress.com/2014/06/03/charles-taylor-secularity-and-miracles/

    I will just add to the Taylor quote that what is missing is any notion that such virtues need to be cultivated; they are generally treated as automatic, requiring no praxis. Kant's formal freedom is always there for all. There is no "knowing by becoming" or conformity to being required. Likewise, these virtues don't seem like they should rule out contemplative knowledge, but other axioms do rule it out.

    And this is how you get your tough questions for the preacher. "I did the formula, I said the rosaries, or sat on the mountaintop, etc. But the procedure didn't work. If the procedure didn't work, it is bunk, or at least ineffective for me." Such an objection is, where techne is the gold standard for knowledge, absolutely fatal (although perhaps it can be overcome if there is evidence that the "procedure" works for enough people). Yet the counterpoint from the preacher or sage is likely to be that the "procedure" is mere supporting praxis, and that one ought not expect it to work like a course of antibiotics, or changing a light bulb.
  • The problem of psychophysical harmony and why dualism fails


    That's true. I suppose physics is seemingly gradually moving away from "building block" models to more continuum models (eg quantum field theory). This is something I need to think more ontom111

    Yes, a new, popular way of looking at things is of seeing the universe as a sort of (quantum) computer. A veritable who's who of eminent physicists has embraced this view to some degree: Tegmark, Davies, Vedral, Landauer, Lloyd, Wheeler, etc. However, what exactly this means can vary considerably, because there are a great deal of open questions in the philosophy of information. Some still say information is a sort of subjective illusion projected over mere mechanism (more popular in biology), others say it is "emergent" from matter and energy, and still others say that matter and energy are emergent from information. The last also breaks down in different ways, from "It From Bit" participatory universes (Wheeler) to ontic structural realism where the universe just is a mathematical object (Tegmark).

    This opens up new pathways. In a certain sense, while computation can be decomposed into simpler Turing Machines, it is not reducible into building blocks. Different is different. This goes along with arguments that a process metaphysics does not require any sort of emergence at all (instead, we just have the nesting of processes).

    IMO though, Bickhard's appeal to process metaphysics is a bit too sanguine. It isn't a silver bullet. Process metaphysics tends towards a sort of universal monoprocess, and so it just ends up sliding towards another pole of the Problem of the One and the Many if we aren't careful. But more to the point, it still leaves mind fairly mysterious, just not contradictory perhaps. Terrence Deacon has some interesting stuff to say on this too (quoted here). His open-source paper on biosemiotics is pretty good on this topic and its relation to information theory (so big in physics now) too.

    The problem is that, while these new views open up new paths that the old "building block" metaphysics foreclosed on, they lead in a vast plurality of directions. What it means for a physical system to be "computing" any particular computation is an extremely fraught question, and arguably any system of enough complexity can be said to be computing any program below a certain Kolmogorov complexity. Add on to this the "subjective" element of information (or something like Jaynes' argument that entropy is itself in a sense subjective) and there are loads of possible interpretations here.
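
    To make the "fraught question" concrete, here is a toy sketch of my own (Python, with made-up names; it is only a caricature of the sort of triviality or mapping arguments often associated with Putnam and Searle, not anything from the discussion above): given a long enough run of pairwise-distinct physical microstates, one can always manufacture an interpretation map under which the system "implements" an arbitrary finite computation.

        # Toy illustration only (hypothetical code): any sequence of pairwise-distinct
        # physical states can be mapped step-for-step onto an arbitrary finite
        # computation, which is part of why "this system computes program P" is so
        # hard to pin down non-trivially.

        def cook_up_interpretation(physical_trace, computation_trace):
            """Return a state-to-state map making the physical trace 'implement' the computation."""
            if len(physical_trace) < len(computation_trace):
                return None  # not a long enough run of physical states
            if len(set(physical_trace)) != len(physical_trace):
                return None  # this naive construction needs pairwise-distinct microstates
            # Map the t-th physical state onto the t-th computational state.
            return {p: c for p, c in zip(physical_trace, computation_trace)}

        # A "rock" whose microstates merely drift, read as three steps of a binary counter:
        rock_states = ["s0", "s1", "s2", "s3"]
        counter_steps = ["00", "01", "10"]
        print(cook_up_interpretation(rock_states, counter_steps))
        # {'s0': '00', 's1': '01', 's2': '10'}

    The mapping is trivially cheap to construct; whether it counts as genuine implementation is exactly the open question, and answering it seems to require appeal to counterfactual or causal structure rather than to the recorded trace alone.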

    For me, I think this brings me back to the most obvious source of unity and multiplicity. We are ourselves and no one else. We are a unity, although as Plato's psychology suggests, we can be more or less unified. So, we need to explain how there are beings (plural). I think this might be possible in a process context by speaking in terms of systems that are more or less self-determining, unity and multiplicity being contrary opposites (of degree; like dark and light), not contradictory opposites. I am particularly fond of how Aristotle takes Plato's spot-on psychological insights and broadens them into a full-on metaphysical and physical principle. The problem is that, while many people in contemporary analytic thought are drawn to Aristotle on this issue, ethics, etc., they almost always use a deflated Aristotle who is penned in by the aesthetic commitments of our era. This is a shame, because I think a promising way to approach the Hard Problem is to flip the picture, and look at things from the top down, such that "matter" emerges from form, which is inherently intelligible (and here, Neoplatonic extensions of Aristotle, or even Hegel, are more helpful than "Aristotle the naturalist").

    On that point, I like David Bentley Hart's All Things Are Full of Gods quite a bit, or C.S. Peirce's Agapism. But I think that, in general, "naturalism" is a sort of theology stemming from the older natura pura, and by that I mean just that it is more of a "world view," both aesthetic and epistemic; sort of what Charles Taylor means by a "closed world system." It's a sort of built-in preference for the mechanistic, which I think you can see in the way people will find Max Tegmark's ontology, where the universe is just a mathematical object, and all such objects (and so pretty much all imaginable timelines) truly exist, to be plausible, but will find a much less expansive ontology where mind (intentionality and intelligibility) is fundamental to be totally implausible, because it is "anthropomorphic" or even "mystical" (used pejoratively). You can also see this in how people want to "escape mechanism" but then are only willing to stray as far as a reductionist formalism that ends up being very similar.

    I don't think "empirical facts" help to decide this sort of issue much. It's closer to a religion. People generally stick to what they were raised with, or they "convert." The issue is as much aesthetic as anything else. And unfortunately, the Hard Problem and psychophysical harmony are the sorts of issues where such a starting "frame" becomes essential to even framing the problem, making everything very hazy.
  • The problem of psychophysical harmony and why dualism fails
    Given the implausibility of perfect psychophysical harmony under dualism, monism seems the only coherent position left. Consciousness is not something added onto the physical world. It is the physical world itself, viewed from the inside.tom111


    Do you think such a "dual-aspect" monism resolves the problem of psycho-physical harmony? I have always thought the problem remains just as acute. Consider that, if behavior is wholly explicable in terms of mechanism, then it doesn't seem that "how phenomenal experience is" can ever be selected for by natural selection. And yet, all the ways in which consciousness seems to be set up as a "user interface" (e.g. Hoffman)—as problematic as that analogy might be—would seem to suggest a causal role for "volitional choice."

    More to the point, we seem to be forced to choose between panpsychism or strong emergence. Strong emergence arguably violates the tenets of physicalism on its own, and is arguably rolled out as a hand-waving panacea for the ills of materialism, but even leaving that aside, supposing we could explain it, whatever is "strongly emergent" is itself "fundamental" (irreducible). And so it would turn out that we have an irreducible, causally efficacious mental sort of being alongside the mental-less, merely physical. But that's just a restatement of dualism.

    Whereas, if we take the panpsychism route, we still need to address psychophysical harmony. Why does a glass of orange juice experience, but not like we do? Why do we experience as individuals, when ten people in a room make up a "physical system?" Why do strokes and drugs alter experience in the way they do when everything, including the molecules of drugs, "experiences?"

    Appeals to "complexity" don't work because, aside from being variously defined, a room with two people is more complex than a room with one person, and yet we don't tend to think it contains a single mind, but two. Indeed, the entire physicalist paradigm seems to have a difficulty with unity and multiplicity, in defining how a single bee is a "one." The older idea was to declare that the world was made up of subsistent building blocks (so your idea about form, consciousness being the product of arrangements of building blocks), but this sort of ontology has been critically undermined by advances in physics, and always had explanatory issues (e.g., gravity as a sort of occult force acting at a distance, an inability to explain mind, etc.).
  • Hume and legitimate beliefs


    So Hume's premises should be accepted over others because he is "doing psychology?" And it's not problematic that they are self-undermining because it's "psychology?"

    But surely past thinkers were just as much engaged in a psychology of knowing, so why are they all to be dismissed and Hume to be accepted on sheer assertion?
  • Hume and legitimate beliefs
    You expect a deductive logic all the way down.Banno

    I don't; I just think Hume's conclusions are a reductio ad absurdum against empiricism, while, famously, the dogma also isn't supportable by its own epistemic standards, in which case it isn't "humility," it's dogmatism. Just because an asserted dogma leads to skepticism and materialism doesn't make it "humility."
  • Hume and legitimate beliefs

    Of course Hume would agree, if not in those terms - he understands that his own philosophy is based in the same empirical and psychological habits it describes. He's not offering a proof of scepticism, he's mapping out, with humility, what can be deduced and what cannot.Banno

    What can or cannot be deduced, given the opening books of the Treatise are taken as true and infallible, as Hume himself takes them. He refers to his description of how the mind works and interacts with the world as "proofs" throughout the later parts of the book to justify his theses. Again, why is Hume's introspection here absolute, and almost all prior thinkers' introspection mere delusion, and why is it "humility" for Hume to have assumed this is so?

    How is simply asserting things as true, despite the fact that these very assertions imply that you cannot possibly know the truth of what you are saying, "humility" and not the very height of dogmatism?
  • Hume and legitimate beliefs


    He is attacking dogmatism.

    Which is ironic, because we are expected to dogmatically accept Hume's judgement on how the mind works, the exact contours and limits of introspection, and the nature of reflexive knowledge, as well as the causes of the act of understanding/ideas, while those self-same premises preclude Hume's knowing that his theorizing is true. It's very curious.

    Consider:

    (1) We cannot know causes, only constant conjunction.

    (2) We cannot know necessary connections.

    (3) All ideas are copies of impressions (essentially, caused by impressions).

    (4) Reason is entirely discursive ratio; there is no faculty of intellectus. Reason is not ecstatic, erotic, or unitive. It is wholly instrumental and calculative and also behaviorally inert.

    Note that none of these are knowable through constant conjunction and impressions. Hume might appeal to introspection, but why is that reliable given what he's said? More to the point, he is here disagreeing with the vast bulk of prior Western thought, which has a role for contemplative, unitive knowledge (and this role is even larger in Eastern thought). What makes Hume's introspection a source of truth and the sages of past eras' a sort of vapid delusion?

    As Étienne Gilson put it: "Hume could only prove that nothing can be known by knowing something he could not prove."
  • Every Act is a Selfish Act
    Until the communitarian comes to terms with the fact of our separateness, of our individuation, the communitarian Good can never be imagined in any other sense as individual, selfish desireNOS4A2

    According to who? And certainly, it can at least be imagined as such. One can say many things about the Neoplatonists, or, say, the Sufi poets, but that they lacked imagination is not one of them.

    He wants conformity to certain ancient ideals, to return us to ancient ways of life, and so on.NOS4A2

    Odd, I seem to recall the biggest communitarian movement of the past century or so doing things like dynamiting cathedrals to turn them into the world's largest swimming pool, massacring priests and monks, re-educating minorities out of Buddhism and Islam, etc., and trying to rebuild man in a radically new image.

    In general, when there is an appeal to ancient framings or norms, the idea is that they are better, not that they are merely old (although to be sure, some folks do tend towards tradition for tradition's sake, just as some see innovation as an end in itself).

    ↪Copernicus I don't know. It seems you are defining "selfish" in such a way that makes it meaningless, as there is no contrast to what "selfishness" is notHarry Hindu

    Bingo. But then it also seems to commit a fallacy of equivocation on this usage later on.
  • Every Act is a Selfish Act
    Anyhow, this reminds me of a common debate in contemporary analytic thought vis-á-vis their version of "Aristotle." The claim is that a focus on cultivating virtue is "selfish." This charge is leveled even more implausibly against Neoplatonists such as Plotinus, or early Christians, with the idea being that "becoming like God" is a "selfish goal."

    I think this is just a misunderstanding of older terms and concepts. If this "problem" was brought to the attention of these past thinkers, I think they would be perplexed. Surely the excellent person is a blessing to others, not a curse. To become like God is always to bless others, because the Good is itself diffusive and always relates to the whole, and it is the life of the sage and saint that is most desirable because it is the life that attains the greatest freedom and deepest joys. It's only in the context of an ethics already grounded in the dialectic of "goods that diminish when shared" that all inward pursuits become selfish.

    That is, it is precisely the epistemic presuppositions that absolutize the individual in solipsistic bubbles that make it impossible for the Good to be recognized as diffusive (because the "desirable" just becomes "whatever is currently desired by an individual"). It becomes impossible to know the Good (particularly in a naturalist frame where teleology is stripped out) and so what we really have is emotivism established by axiomatic presupposition, with the "Good" now demoted to a sort of procedural ideal for the allocation of an irreducible multiplicity of goods sought by individuals. But this isn't the result of logical necessity or any empirical finding; it simply flows from axiomatic epistemic assumptions.
  • Every Act is a Selfish Act


    This premise:


    If every action originates from the actor’s internal state, then no act can be wholly “selfless.”...

    The mind is inherently solipsistic

    ...seems to do all the heavy lifting. I'll allow that every intentional act involves desire. How could it not? But you seem to be arguing that:

    Desire is experienced by the self
    Therefore, action according to desire is always selfish.

    I don't think this follows though, at least not given the way the term "selfish" is normally used in moral discourse. This seems to be a case of equivocation to me. When we say that a person is being selfish we normally mean something like the Oxford definition:

    (of a person, action, or motive) lacking consideration for others; concerned chiefly with one's own personal profit or pleasure.

    We do not mean:

    "An action having any relation to the actor" (which is clearly all action insomuch as it is attributable to anyone or any thing).

    Or:

    "Any action that is desired by the actor." (Indeed, people often talk about the ills of "selfish desires").

    You seem to be using the term "selfish" in this second sense to argue that all action is selfish, and then moving back to the common usage later in the argument. So, even if we grant the solipsistic premises (which I wouldn't) this appears to be an equivocation.

    Indeed, the gold standard for rational moral action tends to be something like: "doing what is known to be truly best." Now, in a sense, what is "best" is always in our own interest in that what is better is more choice-worthy than what is worse. When people decry selfishness, what they mean is that people choose the worse over the better because they are myopically focused on the self as a sort of false consciousness or because they are ruled over by their passions and lower appetites, or else ignorant of what is truly best out of negligence, due to the prior two factors.

    Let me give one of my favorite examples. In the middle of the Purgatorio, Dante sets up the key issue of human life as the proper ordering of loves (drawing on Saint Augustine here). Sin results from loving what is less truly desirable more, from confusing merely apparent goods with what is truly good. To focus on finite, worldly goods (both physical goods, but also status, sexual partners, etc.) is to focus on goods that "diminish when shared." These are not wholly "false goods." They are truly desirable to some extent. But their proper function is to act as a ladder up towards higher goods (consider here Plato's Ladder of Love in the Symposium). Spiritual goods, by contrast (beauty, contemplation, etc.), are "enhanced when shared." The pursuit of goods that diminish when shared sets up a dialectic of competition, and this is where selfishness comes into play.

    Due to historical accidents in the development of Western theology and science, most modern ethics starts here, within this dialectic of competition. Ethics and politics become primarily about "the individual (the selfish) versus society." But this isn't the only way to look at it. Much prior ethics focuses instead primarily on the higher versus the lower, and the proper ordering of the appetites to what is understood as truly most desirable. This isn't "selfish" though, in that the Good always relates to the whole and is itself diffusive.

    The ultimate motivation, therefore, always resides within.

    I don't see how this follows. Do our desires and experiences leap from the aether uncaused? If not, then the "ultimate" terminus of our desires lies outside of us. We might become relatively more or less self-determining vis-á-vis our own desires and their ordering (as in Frankfurt's second-order volitions, for instance, the effective desire to have or not have other desires). Yet our desires have causes that lie outside of us. An appealing meal can stir desire in us because of what it is, not solely because of what we are.

    Again, the "selfishness" claim relies on the redefinition of "selfish" to "having any relation to the self at all." But eros primarily relates to the erotic other, and agape flows outwards from the self. Although both obviously relate to the self in some way, they are not centered on the self. Your redefinition is, interestingly, pretty much what Byung-Chul Han is talking about in "The Agony of Eros," the elimination of the other by an ever more inflated self. Yet to my mind, this just shows that solipsistic philosophy, due to its errors, leads towards selfishness.

    Psychological studies confirm this. Acts of charity, generosity, and volunteerism are correlated with activation in the brain’s reward centers (ventral striatum, medial prefrontal cortex). Helping others feels good, biologically. The altruist experiences hormonal reinforcement through dopamine and oxytocin — demonstrating that “good deeds” literally reward the doer.Copernicus

    I don't think this shows much. Vision always involves activity in the occipital lobe. Does this prove that light always relates solely to the self? Our brains are always involved in everything we do. Does this mean that everything we do and know is actually about the brain (and so really, the self)? But if this was so, it would undercut the very epistemic warrant we have for believing in neuroscience, etc. in the first place, since we would actually never have access to "brains" or "fMRIs" only our own selves.

    IDK, it seems to me that all this shows is that all intentional behavior involves desire and that all things desire the good. To show that all intentional action is selfish would require that the good, that to which all things strive, never extends past the self. Yet this hardly seems true, and if it has to be justified by presupposing solipsism, that seems problematic as well.
  • Is sex/relationships entirely a selfish act?


    It seems to me that positions like "in the real world it's might makes right," "nature is red in tooth and claw," "everyone is an atomized, self-interested utility maximizer," or a New Age "the world is love," etc., are more interpretive lenses than statements of empirical fact. They are ways of fitting empirical observations into a narrative. There are always plenty of counter examples, but these can always be read into the interpretive lens.

    There might be various justifications for such an interpretive lens, but they cannot come from an appeal to a "real" world as filtered through that same lens, or else the justification is circular. It's sort of like how everything from anonymous self-sacrifice for strangers to extreme self-harm can still be rolled into "self-interested utility maximization," but only at the risk of making such a statement a vacuous tautology.

    I think this is simply a pathological way of viewing the world, one hostile to human flourishing. Surely, it is better to be in a good marriage, based on love, than to be in a zero sum struggle for utility. That some people are able to paint everything in terms of "self-interest" is arguably just a sign of a sort of spiritual illness. This is precisely Dostoevsky's point in Crime and Punishment vis-á-vis the new social theories of his day.

    At any rate, Homo oeconomicus wasn't born from "empirical findings" but exists in liberal political-economy from its earliest days, where it was simply borrowed from Calvinism. It's an unfalsifiable grounding dogma used to make sense of observations. The irony then is that society seems to make many positive efforts to transform citizens into Homo oeconomicus.
  • Banning AI Altogether


    annoyingly, it uses a lot of em dashes, like I do myselfJamal

    For some reason it always puts spaces around em-dashes, which is a stylistic faux pas outside a few style guides (basically just AP), and so this is one way to tell between usages—also, it virtually never uses an em-dash for a single trailing clause, instead always bracketing a clause in them.

    Anyhow, it seems to me that the main risks of them are:

    Sycophantic confirmation bias; anything said to GPT in particular is "genius;"
    Hallucination of sources, which is bad for any information environment when they get repeated.

    It's gotten pretty good at citing popular authors, and so it's good for something like looking up where Socrates says something if you already know basically what he says, etc., although it tends to hallucinate bizarre paraphrase translations even of famous works. And it's not bad at giving lists of relatively important books on fairly specific topics. However, ask it for sources related to a narrow thesis and it routinely just hallucinates the thesis into tangentially related articles, sometimes summarizing them as saying the opposite of what they actually say, which is ironic considering they ripped Anna's Archive and fed it into these things, so the original papers are in there somewhere in many cases.
  • The value of the given / the already-given


    You might find the chapter "The Sage and the World" in Pierre Hadot's Philosophy as a Way of Life interesting as it gets at this topic, not just thankfulness for what we have, but a fuller appreciation of all things. As William Blake puts it:

    To see a World in a Grain of Sand
    And a Heaven in a Wild Flower,
    Hold Infinity in the palm of your hand
    And Eternity in an hour.


    I have also always loved this quote from the great Jewish Platonist Philo of Alexandria, with which he starts the next essay:

    Every person – whether Greek or Barbarian – who is in training for wisdom, leading a blameless, irreproachable life, chooses neither to commit injustice nor return it unto others, but to avoid the company of busybodies, and hold in contempt the places where they spend their time – courts, councils, marketplaces, assemblies – in short, every kind of meeting or reunion of thoughtless people. As their goal is a life of peace and serenity, they contemplate nature and everything found within her: they attentively explore the earth, the sea, the air, the sky, and every nature found therein. In thought, they accompany the moon, the sun, and the rotations of the other stars, whether fixed or wandering. Their bodies remain on earth, but they give wings to their souls, so that, rising into the ether, they may observe the powers which dwell there, as is fitting for those who have truly become citizens of the world. Such people consider the whole world as their city, and its citizens are the companions of wisdom; they have received their civic rights from virtue, which has been entrusted with presiding over the universal commonwealth. Thus, filled with every excellence, they are accustomed no longer to take account of physical discomforts or exterior evils, and they train themselves to be indifferent to indifferent things; they are armed against both pleasures and desires, and, in short, they always strive to keep themselves above passions … they do not give in under the blows of fate, because they have calculated its attacks in advance (for foresight makes easier to bear even the most difficult of the things that happen against our will; since then the mind no longer supposes what happens to be strange and novel, but its perception of them is dulled, as if it had to do with old and worn-out things). It is obvious that people such as these, who find their joy in virtue, celebrate a festival their whole life long. To be sure, there is only a small number of such people; they are like embers of wisdom kept smouldering in our cities, so that virtue may not be altogether snuffed out and disappear from our race. But if only people everywhere felt the same way as this small number, and became as nature meant for them to be: blameless, irreproachable, and lovers of wisdom, rejoicing in the beautiful just because it is beautiful, and considering that there is no other good besides it … then our cities would be brimful of happiness. They would know nothing of the things that cause grief and fear, but would be so filled with the causes of joy and well-being that there would be no single moment in which they would not lead a life full of joyful laughter; indeed, the whole cycle of the year would be a festival for them."

    On the Special Laws, 2, 44-48

    All of life a festival indeed! ("Citizens of the world" reminds me of Hegel's "being at home in the world"). I think part of this is that it is easier to appreciate what is when one is no longer striving for what is not yet, nor competing for what Dante calls "goods that diminish when shared" (physical goods, but also honors, status, etc.)

    I find Saint Maximos the Confessor's "threefold path" compelling here. It focuses first on the practice and cultivation of the virtues (practical philosophy), moves to deciphering and understanding the Logos through the world (natural philosophy/physics, but in a contemplative sense), and finally to "contemplation" proper (as "theology" in its original sense, the mystical experience).
  • Hume and legitimate beliefs
    I'll just leave it there, and see if it appeals to anyone else.unenlightened

    It made sense to me. This same basic problem shows up for any metaphysics of sheer difference. If everything is completely different in each "moment" then you don't have "different moments" in any sort of whole, but just sui generis, unrelated "beings" plural. I think this applies to any attempt to eliminate causality as well.

    I have a quote I like to use here, which maybe you will find helpful:

    Kant realized that Hume’s world of pure, unique impressions couldn’t exist. This is because the minimal requirement for experiencing anything is not to be so absorbed in the present that one is lost in it. What Hume had claimed— that when exploring his feeling of selfhood, he always landed “on some particular perception or other” but could never catch himself “at any time without a perception, and never can observe anything but the perception”— was simply not true. Because for Hume to even report this feeling he had to perceive something in addition to the immediate perceptions, namely, the very flow of time that allowed them to be distinct in the first place. And to recognize time passing is necessarily to recognize that you are embedded in the perception.

    Hence what Kant wrote in his answer to Hamann, ten years in the making. To recollect perfectly eradicates the recollection, just as to perceive perfectly eradicates the perception. For the one who recalls or perceives must recognize him or herself along with the memory or perception for the memory or impression to exist at all. If everything we learn about the world flows directly into us from utterly distinct bits of code, as the rationalists thought, or if everything we learn remains nothing but subjective, unconnected impressions, as Hume believed— it comes down to exactly the same thing. With no self to distinguish itself, no self to bridge two disparate moments in space-time, there is simply no one there to feel irritated at the inadequacy of “dog.” No experience whatsoever is possible.

    Here is how Kant put it in his Critique of Pure Reason. Whatever we think or perceive can register as a thought or perception only if it causes a change in us, a “modification of the mind.” But these changes would not register at all if we did not connect them across time, “for as contained in one moment no representation can ever be anything other than absolute unity.” As contained in one moment. Think of experiencing a flow of events as a bit like watching a film. For something to be happening at all, the viewer makes a connection between each frame of the film, spanning the small differences so as to create the experience of movement. But if there is a completely new viewer for every frame, with no relation at all to the prior or subsequent frame, then all that remains is an absolute unity. But such a unity, which is exactly what Funes and Shereshevsky and Hume claimed they could experience, utterly negates perceiving anything at all, since all perception requires bridging impressions over time. In other words, it requires exactly what a truly perfect memory, a truly perfect perception, or a truly perfect observation absolutely denies: overlooking minor differences enough to be a self, a unity spanning distinct moments in time.

    William Egginton - "The Rigor of Angels: Kant, Heisenberg, Borges, and the Ultimate Nature of Reality."

    It makes a bit more sense if you're familiar with Borges' story Funes the Memorious though. The basic idea is a guy who can remember everything perfectly. He can spend 24 hours remembering and basically relive the same day at will. Because of this, he finds language incredibly inefficient and confusing. Why speak of "clouds" when you can speak of "that particular cloud on the morning of November 8th, 1928?" And why speak of that same cloud early in the morning before the sun hit it as the same thing as the cloud once the sun illumined it? They can all be held absolutely distinct.




    The fact that skeptics don't act with the courage of their convictions has been pointed out since ancient Greece and India, but I am not sure this defuses the general challenge. The skeptic's point is not generally that we ought to go walk off a precipice, etc. In practice, they normally use the impossibility of justification "pragmatically" themselves. For example, if one thinks it is impossible to ever justify any claims about morality and ethics, it's pretty easy to use this skepticism selectively in self-serving ways. In many ways, absolute misology is less threatening than "pragmatic" misology. The former, a sort of absolute epistemic nihilism, is at least obviously ridiculous. The true ruin of reason lies in selective adherence. That's why I think appeals to "pragmatism" can often be the worst sort of solutions to skepticism.

    At any rate, this sort of skepticism comes up in plenty of places. The Boltzmann Brain and various variants, particularly those involving a "Many Worlds Hypothesis," are a good example where the conclusion of radical skepticism can come through surprising paths. But of course, these also tend to be self-undermining in that, if we think we are likely to be "randomly generated," we ought not trust this very belief itself, since random beliefs will not tend to be true given that there are always vastly more ways to be wrong than right.

    A genuine miracle is occurring — a supernatural violation of natural laws, or I am probably in a universe (within an infinite multiverse) where an extraordinarily improbable natural fluctuation — say, a “Boltzmann fish” scenario — has spontaneously produced the fish.

    If I think I probably live in a multiverse, which explanation would Hume think I should favor?
    RogueAI



    Ha, a good question! If you begin to see miracles, even deeply personal ones, you should probably just accept that these necessarily happen in some part of the wave function with a probability of 100%, so why not "here?" Likewise, quantum immortality is a reality, so you probably should concern yourself with ending up in one of the innumerable paradise timelines instead of the damnation ones (not that you can help it, both will inevitably happen, regardless of your sins or karma). It also follows that Jesus certainly did rise, "somewhere," so there is that.

    Lots of silly things result from Many Worlds. That it has become popular as a "solution" to the Fine Tuning Problem is, to me, sort of baffling. To my mind, positing that "everything possible happens" as a solution to the threat of life seeming vanishingly unlikely otherwise represents an essentially religious commitment to the aesthetic ideals of "naturalism."
  • Hume and legitimate beliefs


    But consider Russell's Turkey. The turkey knows from a lifetime of experience that every morning the nice man comes to feed him at sunrise. This correlation has never failed; it is as regular as the rising sun. And yet on Thanksgiving morning, when the man comes for the turkey, he isn't planning on feeding him...

    The doubt is justified on similar grounds. Might we be like the turkey? You might "remember" the sun always rising, but in virtue of what do you know that your memory is reliable? Plus, given Hume's disjoint bundle anthropology, the reliability of memory is perhaps more open to doubt.

    Can you explain that further?Fire Ologist

    Hume gives a very specific narrative of how the mind and consciousness work, from which the skeptical arguments follow. But such a narrative itself presupposes the reliability of his own memory and introspection, induction, and indeed his own knowledge of a causal relationship between impressions and idea formation, as well as knowledge of a cause-like computational role for reason.

    And consider his objection to the idea that reason has direct access to reality or that the actuality of what is known must, in some sense, be "in" the knower. If this is dismissed as "metaphysical speculation" only because, "we could only know such through impressions, which are merely appearances, and there is no knowable relationship between appearances and reality," that simply begs the question.

    I think Hume's argument hangs on the appearance of humility here. To be sure, he is saying that most of the past thinkers of East and West are grievously mistaken, but he isn't saying their beliefs are false, just that they cannot be known to be true. Yet this applies just as much to Hume's own narrative of how the mind works, which is what justifies this skepticism. Hence, what we have is an argument to global gnosis about the limits of knowledge made from a position of ignorance.

    Arguably, when we see a rock shatter a window, we are experiencing a cause. Likewise, when we encounter evil, we are experiencing it. Hume thinks we cannot be experiencing what we think we are experiencing. But it is helpful to turn around and ask what would constitute the experience of a cause? If we had a discrete "cause sense" like smell, and experienced it when a rock smashed a window, surely Hume would just dismiss this as mere appearance as well. So there is actually no evidence that can falsify Hume here; the conclusion that we cannot experience causes is axiomatic, which just brings us back to why we ought to accept his axioms. It's important that his axioms don't just limit us to the senses (after all, the Peripatetic Axiom: "whatever is in the intellect is first in the senses" doesn't result in these consequences) but also declare what can be known through the senses a priori.

    This is more obvious when we get to his claim in Book III that we never experience vice or evil. Arguably, we do. What else is disgust, outrage, pain, etc.? These are surely sensations. And pain is a sensation that is continuous with touch and hearing. So why can we not sense badness? Again, it's axiomatic that these sensations are internal (and in a certain sense, all sensation is internal because it only deals in appearances).




    But I could see our experience of our own mind being different than our sense based experience.Fire Ologist

    I don't see how this helps. In virtue of what is Hume's introspection more right than those of pre-modern thinkers or modern phenomenologists, etc. such that we should dismiss their understanding of how the mind works and accept Hume's? Consider also the idea that the act of understanding is luminous (reflexive). Hume can deny this on the grounds of introspection, but why ought we believe he introspects more correctly than his opponents?
  • Thoughts on Epistemology
    Because they have withstood defeater screening across the routes of justification available to me: testimony, reasoning, sensory experience, linguistic clarity, and logical consistency. If new defeaters arise, I will adjust. But until then, the best explanation for their stability is that they are tethered to truthSam26

    But you have already allowed that cultural-historical regressions might lead to a case where a culture widely accepts that a true idea/theory has been "defeated" when it hasn't been. How do you know that you're not in that situation?

    The move of: "scholasticism lost ground because it was properly defeated, but if secular naturalism and exclusive humanism lose acceptance that will be simply a regression," seems arbitrary unless you can show why some beliefs are actually true and cannot be the result of regression/error. That is, an apparent defeater or error is not solid evidence that a theory/idea is actually wrong, since you have already allowed that whole cultures can misidentify defeaters and errors for long periods.
  • Hume and legitimate beliefs
    Hume's anthropology/psychology is what justifies his skeptical positions. Book II of the Treatise lays this out pretty well. The senses—impressions—are appearances. They do not bring the mind into direct contact with reality. Ideas are themselves derivative of impressions (as manipulated and analyzed by reason and imagination) and so they also do not bring us into contact with reality. Following the "Scandal of Deduction," "relations of ideas" cannot ever tell us anything we don't in a sense already know, and at any rate the ideas themselves are internal to the thinker. Reason is instrumental; it is, as Hume puts it, "inert" and has no direct contact with reality, nor any appetite for it (no drive towards union or knowledge).

    So, if all we have access to is appearances, what then is the relationship between appearances and reality? Hume bars speculation here. Any knowledge of the relationship would need to come to us through appearances themselves, but then appearances, in being appearances, can never inform us as to this question. Appearances might be arbitrarily or only accidentally related to reality (consider the arguments against causality).

    It should be obvious from this starting point that skepticism cannot be escaped if we accept the premises. At no point are we ever in contact with reality, and any relationship between reality and the appearances we do have access to is forever obscured.

    The question then is, should we accept Hume's anthropology and psychology and his metaphysics of appearances? I don't think we should. For one, he doesn't really argue for it; he merely asserts it as obvious, e.g., in Book II (and then refers to this assertion as a "proof" going forward, e.g. 3.1.1.8). But it isn't obvious. A great many thinkers have disagreed here, including almost everyone prior to the Enlightenment (plus plenty since).

    A problem for Hume here is that, per his own epistemology, he cannot possibly know what he is asserting here. He cannot know the reality of how the mind works for the same reason he cannot know causes in the classical sense, etc. Not only that, but he doesn't even have any strong probabilistic warrant here, just an appeal to "shrugging and going back to billiards." Is it more "pragmatic" to believe Hume here though? Certainly, he cannot claim that he is "more likely" to be right based on inductive inference. He arguably is cutting off the branch he sits on, giving himself no warrant for asserting a hotly contested set of premises.

    Now, to be fair, this inability to justify his own claims is exactly what we should expect if he is correct, but that hardly neutralizes the way in which the epistemology is self-undermining. We might also think that, prima facie, an epistemology that cannot justify even our most bedrock beliefs is likely to be a defective epistemology. Hume and the many who follow him normally justify these fairly radical claims by making them seem to be paragons of humility. However, arguing for the ignorance of all from one's own ignorance is arguably quite a presumption.

    Second, we should take a good hard look at any philosophy that demands an appearances versus reality distinction but then denies access to reality. If "reality" is inaccessible, then we have no warrant for positing it. Only appearances show up. But if there are only appearances, then appearances just are reality.

    Third, if appearances are arbitrarily related to what they are supposedly "appearances of," then they aren't actually appearances of those things in any meaningful sense. We might as well call them free-floating, spontaneous, uncaused apparitions if we're going to deny that there is a reality that is in any way the cause of appearances (and no, I don't think the Kantian "limiting relation" is strong enough to secure the status of "appearances" as appearances; it is just the spectral relation left over by the sheer dogmatic presupposition that "phenomena" are "appearances of").
  • Thoughts on Epistemology


    Right, but if you cannot be sure that you have true beliefs now why should you trust your own beliefs about the long term trend of knowledge or epistemology more generally?



    :up:

    I am interested though, since you recommended a Bayesian approach, how you think it is able to deal with the problems of selection narratives. The structure of Bayes' theorem by itself does not seem to get around the problem that fitness vis-á-vis reproduction/survival does not necessarily track truth.
  • In a free nation, should opinions against freedom be allowed?


    And I believe that a society that strives for constant liberation from anything restrictive and oppressive is liberated to the point of freedom from beingAstorre

    D.C. Schindler makes this point in his cleverly labeled Freedom From Reality: The Diabolical Nature of Modern Liberty. The second part of the title is clever too because he means "diabolical" in the original Greek context as the opposite of symbolical, a slip towards sheer multiplicity and potency.



    I think that's right, but I'd go a bit further and say that the very idea that core components of becoming fully actualized as a human being, such as being a good mother, a good citizen, a good priest, a good soldier, a good teacher, a good friend, neighbor, etc. are "restrictions" is presupposing a defective notion of liberty. Freedom has to be a freedom towards a truly good end or else it ultimately turns out to be arbitrariness, which is the opposite of freedom.

    As Hegel demonstrates in the Philosophy of Right, freedom as sheer lack of constraint, taken to its limit, reveals itself to be contradictory. No choice is possible without lapsing into determinacy, which is itself seen as a lack of freedom. So, choice itself contradictorily becomes a negation of freedom.
  • Thoughts on Epistemology


    I think that is a fair reading. IIRC, Gödel was something of a Platonist, so his reading of axioms would also not be that they are unjustified.



    :up:



    Well, if you acknowledge that regression can occur, then it seems that defeaters can appear to pile up against a position, and yet this can itself be a sort of illusion or product of pathological justification. So my question then is: "how do you know that what you take to be defeaters and progressive evolution really are such?"

    For instance, scholasticism didn't disappear. It still has a fairly robust presence in philosophical publications, perhaps more than a century ago. How are you sure that your identification of errors and defeaters is correct?

    Or in the religion example, you seem to accept that even if it comes to be affirmed, it will be false. But if it comes to be affirmed, surely people will be able to argue, with justification, that defeaters and errors piled up within the secular naturalist/exclusive humanist paradigm, and they will claim that this is precisely why it collapsed. Indeed, plenty of thinkers (and not just religious ones) do claim that errors and defeaters against naturalism have indeed piled up, and that it is on its way out. The fact that evolutionary narratives of knowledge seem to be self-refuting is precisely one of the issues here.
  • Thoughts on Epistemology


    :up:

    I think that's a good answer. A difficulty in epistemology that I think is often under addressed is that the idea that knowledge "progresses" (e.g., "scientific progress," "moral progress," etc.) needs to be justified itself. The original Enlightenment justification for this was theological, and so if it is adopted in a secular naturalist context it needs another sort of justification. However, historically, it does not seem that technological, scientific, moral, or philosophical progress is assured. They don't seem to have always occurred; periods of regression show up as well. The early secular narratives that put forth the idea that only "superstition" blocks the path to progress seem too simplistic to account for this (they seem like downright ideological propaganda a century on TBH).

    However, there are still plenty of issues. What constitutes an "error" or "failure" is itself dependent on goals and understandings that are always shifting. Consider the contemporary traditionalist critique that crashing fertility rates are a sign that exclusive humanism is maladaptive. Well, if it really does go extinct because its population falls by more than half each generation, and the dominant paradigm a century from now is something more in line with the traditionalist ethos, shall it thus be true that secular exclusive humanism was discovered to be a sort of error? From the exclusive humanist perspective, this would not prove it was in error; its own extinction would not be evidence of the truth of religion, tradition, etc., only of those traditions' reproductive value (indeed, anti-natalists would probably argue that religion is reproductively successful precisely because it is false). But this shows that "success" might arguably correlate with falsehood not truth.

    I think your response works best in terms of technological questions where "success" is fairly easy to define. Either a plane crashes or it doesn't. It becomes much more difficult in political, social, moral, philosophical, etc. questions. For example, arguably the main liability to Scholastic thought in the early modern period was not its apparent falsity (or its dogmatically asserted "dogmatism") but that it absolutely did not lend itself to pamphlets aimed at a common audience, the new dominant market for philosophy, and that its institutions became prime targets for political violence and expropriation.

    So, there is the issue that past "successes" and "errors" are being defined in terms of current practice. There is a bit of a "history is written by the victors" problem. There is also the problem posed by Hoffman's "Fitness Beats Truth Theorem" and similar arguments, where selection-based approaches to belief do not ensure that beliefs are true. Fitness does not seem to be equivalent to truth in how information (or beliefs/memes) replicates. But then if our justification for our own beliefs rests on a selection model, this ends up being self-refuting. If our selection theory is true, we ought not believe our own beliefs are true, because their fitness is only loosely related to their truth.

    Nonetheless, I think selection can be an important factor in explaining progress, just not the only factor. The other issue here is that it would only suggest that knowledge will be produced in the long run, not that we have it now. But if we aren't likely to have true beliefs now, then we ought not believe in our own progress narratives (a similar sort of issue). Hence, I think a stronger linkage is needed.
  • Thoughts on Epistemology
    For example, if T is not traditional-T but rather pragmatic-T or communal-T, then of course JTB is undermined.Leontiskos


    I was thinking rather the opposite. The reason people fiddle with T is to make it so that we can possess "knowledge" and access "truth" while still maintaining a view of J (and B) that makes it impossible to possess knowledge and have access to truth in their traditional sense.

    The problem I see is that this just seems like equivocation. The problems of global fallibilism appear to go away because "knowledge" and "truth" have been redefined, but they aren't actually being dealt with. Now we can say things like: "I know p, but of course p might be false."



    Might this not risk instrumentalizing knowledge? But the purpose of at least some knowledge isn't to do anything, or demonstrate any competency.

    Also, how might this apply to moral knowledge? If one discovers that the dominant norms of one's society are, in fact, evil, how does one end up demonstrating this understanding? Pretty much by definition, one's community will think you are in error (violating U). But it does seem possible to be right about what is just, or choiceworthy, when everyone around you is wrong, and deems you to be in error, and "misusing language." For example, when Saint Gregory of Nyssa first began making a concrete Christian justification for the total abolition of slavery, this was a pretty wild claim. When he said "slavery is unjust," arguably he could be accused of misusing the term "just" in his context. And yet we tend to think he was absolutely correct here, and that his society would later come to agree with him and largely abolish slavery because he was correct.

    It ensures that both justification and understanding are grounded in observable criteria within language-games.Sam26

    What about shifting contexts? As a Marxist, I might be able to justify and demonstrate understanding of the labor theory of value to other Marxists. I might also believe the theory is true. However, we have pretty good reason to think the labor theory of value is false. Can I know something that is false?

    More problematically, suppose I have become versed in both Protestant and Catholic/Orthodox language games. Can I both understand and know that the Eucharist is the real body and blood of Christ, and that it is not the real body and blood of Christ because I can justify both and demonstrate a competent understanding of both?

    Obviously, we might object that I cannot actually believe both (barring some sort of power of self-hypnosis perhaps), so I fail the B criteria on at least one of these. However, it seems possible that I could act like I believe both. The B criteria here seems ineluctably private, and so not "observable."

    The problem I see with grounding J in current practice is that many forms of J do not seem to secure, or even lead towards truth. Some seem to positively block access to truth. So, referring back to current practice and use doesn't secure T. This would mean that knowledge exists just in case current practice and use corresponds to what is true (I think it's fair to say that no one except for the relativist vis-á-vis truth thinks this is always the case). But then there still needs to be some linkage between justification, use, and practice (@J's issue if I understand it right). Just because current practice requires that I cut out a victim's heart to keep the sun from going out won't make my justified belief, through which I demonstrate mastery of the relevant language game, true; it must also be true that this practice actually keeps the sun from going out.

    But then J and U must have something to do with truth, or else they seem irrelevant, and likewise if B and U can be arbitrarily related to T, they will only ever accidentally line up with it. Presumably, J links them. But sometimes J requires that we contravene established practices that demonstrate U as well. We might decide that we have to start speaking about DNA or justice differently, before we have convinced anyone else.

    I think this relates to another question. Practices and language clearly evolve over time. What causes them to change the way they do? Presumably, this is how J might relate to T and U.

    In my own work I have drawn a parallel between these hinges and Gödel’s incompleteness theorems: just as Gödel showed that no consistent formal system strong enough for arithmetic can prove all the truths it contains or even establish its own consistency from within, Wittgenstein shows that epistemic systems rest on unprovable certainties. Both reveal a structural limit on internal justification. Far from undermining knowledge, these limits are enabling conditions: mathematics requires axioms it cannot justify, and our epistemic practices require hinges that stand fast without proof.Sam26
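    (For concreteness, a rough informal statement of the two theorems being invoked, in my own shorthand rather than Sam26's wording: for any consistent, effectively axiomatized theory $T$ that includes basic arithmetic,

    \[
    T \nvdash G_T, \qquad T \nvdash \neg G_T, \qquad T \nvdash \mathrm{Con}(T),
    \]

    where $G_T$ is the Gödel sentence constructed for $T$ and $\mathrm{Con}(T)$ is the arithmetized statement that $T$ is consistent. Both are limits on what $T$ can prove from inside itself; neither says anything about whether $T$'s axioms can be questioned or justified from outside the system.)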

    I am not sure about this comparison; axioms are justified and questioned all the time. If you tried to present a system with arbitrary axioms, or ones that seemed prima facie false, no one would be likely to take them seriously. The gold standard is that they seem self-evident (arguably, a sort of justification). There have been intense debates over axioms, which can take place because "justification" is not itself bound by any axiomatized system. After all, what are the axioms for English, German, or Latin? Axioms are assessed by intuition, consequence, coherence, explanatory success, or even aesthetics, etc. Reasons/justifications are given.

    But the same is true vis-a-vis these examples of hinge propositions in philosophy. They are challenged and justified. That's the whole history of philosophy. And I wouldn't want to accuse Kant and Hegel of misusing language (torturing the reader perhaps), but maybe just of being wrong or unjustified in their conclusions (particularly Kant!).

    Plus, even if we accept that we must accept some things as true to engage in justification, this does not seem to entail that they are true. So the linkage between belief and truth still remains an open question.

    On this point, Wittgenstein’s contribution is not to propose another model of knowledge beside JTB, but to dissolve the demand for an ultimate account of justification outside our forms of life. The factivity of know remains untouched, as does its commitment to belief. What changes is our view of justification: no longer a timeless condition, it is an activity rooted in our shared background. When Wittgenstein says that “knowledge is in the end based on acknowledgment” (OC §378), he is not abandoning JTB but pointing to the human practices in which justification has its weight.Sam26

    This account is right in line with the shifting meaning of "justification." But it seems to me to leave open the same question.

    The history of "justification" as a theological term turned philosophical is itself telling here. To be "justified" was originally an internal process, a change in that which is justified. It meant "to be made righteous." With Luther, it is displaced to external divine judgement, an imputation. Then it ends up becoming a philosophical external imputation that devolves down to either the community or the individual. A "justification," of claims to be in contact with reality (in possession of knowledge) on the basis of appearances needs some metaphysics of how appearances relate to reality. If this linkage doesn't exist, I am not sure how J ever falls into place or how T would ever show up in our experience. But if J is about the private and communal imputation of status in the first place, and not about a relationship between the knower and known, how could it ever bridge the gap?Count Timothy von Icarus
  • The value of the given / the already-given


    There are the "Four Thoughts" one reflects on before Lojong practice in Tibetan Buddhism.

    There is first reflection on the preciousness and extreme rarity of human birth. This motivates us not to waste our time, but to practice Dharma.

    Next is dwelling on the idea that everything is transient; death is certain, the time of death uncertain. We might die today, and we shall surely die sometime not so long from now.

    And this leads to dwelling on karma and the defects of samsara, which in turn leads us to take refuge in the Three Jewels (Buddha, Dharma, Sangha) and Bodhicitta, as well as recognizing the grounding of all transience in emptiness.

    But this is in many ways very similar to Christian practice, although it diverges in the metaphysical contemplation.

    Catholics use ACTS to guide prayer:

    Adoration (of the Holy Trinity, the Theotokos, angelic hosts, and saints, but of course only worship of the Father, Son, and Holy Spirit).

    Contrition: the acknowledgement of sins and all the ways we fall short and "miss the mark" of Christlike life and love.

    Thanksgiving: the one you're mentioning, which is now contextualized.

    Supplication: directed at the good of ourselves and others

    Dwelling on death and the impermanence of all things is likewise very common as a recommended practice for meditation ("meditation" in traditional Christianity being much like the common modern usage of "contemplation," focusing on pondering a particular subject, while Christian "contemplation" is much more like Buddhist "meditation"; I have no idea how these terms flipped).

    Since you come from that background, I'm sure you're familiar with the motif of portraits of Orthodox monks in their monastery's ossuaries, where they sit contemplating the skulls of their deceased brothers (or sisters, I suppose) by the light of the altar. Some Catholic saints are also often depicted with a skull for similar reasons. I have heard of Eastern monks even sleeping in their own eventual caskets as a meditation on death.

    This all helps to focus on what "really matters," and what one ought to be thankful for. I think the repeated practice and conscious effort are probably more important than the exact form. Adoration leads to thanksgiving, since by contrast it shows how little lesser goods matter, while illuminating the infinite plenitude man is called to participate in through theosis. There is even a focus on attaining a view of non-duality in some Christian praxis, as you see in Meister Eckhart and even Dante's Divine Comedy, although this is a bigger focus in Hinduism.

    I recall some similar motifs in Hindu practice as well.

    There have been varying attempts to "secularize" these. I think the temptation with secularization is to drift into therapeuticization, the reduction of practice to an instrumental lever for self-assessed "well-being." For example, if dwelling on death bums you out, or actually makes you less likely to "strive to achieve your goals" or less motivated for the "daily grind" (which it well might, that tends to be one of the explicit goals of this practice) then arguably it is running counter to the therapeutic goal (although maybe only in the short term). The issue of prioritizing ends looms large here. Dwelling on death might be therapeutic, but it depends on the ends. Is being thankful a proper end or only useful instrumentally?

    Also, in an exclusive humanist frame, death is simply the end, and worldly goods the measure of all goods. I find it hard to see how virtue doesn't end up instrumentalized here, unless perhaps a premium is put on freedom, and freedom is conceived of as requiring the virtues (but even then, we might ask, why is freedom good?). A greater appreciation of current goods seems easy to justify here, but a justification and embrace of suffering seems more difficult (not that it can't exist, e.g., Nietzsche's amor fati).

    In modern framings, philosophy and psychology are normally quarantined from "spirituality." Hence, the challenge for a strictly "philosophical" sort of exercise in the modern sense will be that of exclusive humanism. Exclusive humanism doesn't necessarily rule out these moves, but they have to "cash out" in the sorts of goods it recognizes, and if a taste for the "spiritual" is a function of private preferences, then they will be something more like a hobby unless they can be justified ethically or instrumentally, whereas in the contexts mentioned they already have a firm justification.
  • Thoughts on Epistemology


    Well, on a "pragmatism all the way down" account I assume that J would be a mix of cultural norms and personal norms that have been wired into us via induction through past experiences. A reductionist might want to boil this down to "fire together, wire together." We receive constant feedback, are constantly interacting with the world, and so develop a set of predictive, justificatory practices.

    Consider the position that "causality" is just a sort of pragmatic narrative-making exercise. The idea that empiricism is incompatible with causality goes quite a ways back, and obviously there is disagreement about how strong the arguments against causes are. However, they have generally been taken seriously, even if Russell's full eliminativism hasn't been universally embraced. At any rate, if the objects of experience cannot be said to be the cause of our perceptions of them or of our intellectual understanding of them (or if all we have is a Kantian "limiting relation"), except by "useful" convention, then in what way are experience and understanding ever "justificatory?" It seems they are so only just in case we come to find them useful ("find them so," not that they are so, since to claim that something is truly useful in the old sense of "truly" is simply to invite the same set of problems).

    You can find such a path fleshed out in different aesthetic modes though. There is the Continental version, which offers a sort of metaphysical explanation of how pragmatic seeking generates reality (and so truth) through phenomenological arguments, a linguistic variety that tries to be bare bones, and a conventionally naturalist, eliminativist variety (which appeals to neuroscience and evolutionary theory, while allowing that these are of course "true" only so far as it goes).

    It seems to me though that the outlines of such a path can be seen in Locke, Hume, Kant, and more so by the time of Dewey and James. The basic issue is: "do the acceptable sorts of 'justification' ever secure the truth of what is being justified?" If not, how is the J criterion ever met? One option here has been to leave J in place and to start fiddling with the definition of T. I don't think B is unproblematic either, though, to the extent that the analysis tends to focus on linguistic propositions; how those are supposed to relate to truth is another question.

    That's why I say that J, T, and B are more loaded than they might first appear. For J, what constitutes justification? Is there an ecstasis, a going out to the known, or an act of understanding that is justificatory, or is it all discursive relationships between observations or the deductive unfolding of tautologies? Is the dissolution of duality possible, and the experience of this dissolution justificatory, as in Indian thought, or must that sort of justification be written off as unreasonable? For T, is the essence of T just "P is true if P," which of course offers no real linkage between reality and appearances unless one comes from J or B? Is T "in" the senses prior to the intellect, and "in" things?
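    (The thin reading of T that I have in mind is essentially the disquotational schema, which one might write roughly as

    \[
    \mathrm{True}(\ulcorner P \urcorner) \leftrightarrow P,
    \]

    for each sentence $P$. Taken by itself, this says nothing about how appearances, beliefs, or justificatory practices hook onto whatever makes $P$ the case.)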

    Then for B, I recently read a very interesting book by Peter Harrison, "Some New World," which is a genealogical account of modern naturalism. Part of his thesis is that "belief" itself was radically redefined due to the theological conflagrations of the early modern period. This is where B starts to get concretized as a certain linguistic formulation, which brings all the problems that go along with the way meaning is contextual, bound up in language, etc. Sometimes, you'll see the claim that Plato is the origin point of JTB, and that the Gettier problem is very old in Western and Indian thought. But I don't think this is wholly accurate, because the understanding of the terms shifts so much that the new context becomes important.

    The history of "justification" as a theological term turned philosophical is itself telling here. To be "justified" was originally an internal process, a change in that which is justified. It meant "to be made righteous." With Luther, it is displaced to external divine judgement, an imputation. Then it ends up becoming a philosophical external imputation that devolves down to either the community or the individual. A "justification," of claims to be in contact with reality (in possession of knowledge) on the basis of appearances needs some metaphysics of how appearances relate to reality. If this linkage doesn't exist, I am not sure how J ever falls into place or how T would ever show up in our experience. But if J is about the private and communal imputation of status in the first place, and not about a relationship between the knower and known, how could it ever bridge the gap?



    I'm a bit puzzled about what you are getting at here.Ludwig V

    Sorry if I wasn't clear. I just meant that if P is true, then P cannot also be false. That's all. The reason I thought this was relevant gets back to this point @J was making:

    That is indeed a problem. But we can't solve all the problems at the same time. For the purpose of defining knowledge, we can assume that we have a concept of truth and worry about what it is on another occasion.Ludwig V

    The problem, as I see it, is that global fallibilism seems to make any sort of meta-knowledge impossible. There is no way to decisively justify that one has knowledge, as opposed to a justified but false belief. But is it problematic for a theory of knowledge if it is impossible to know if one ever has it? I would think so.

    As he put it:

    To avoid the circularity, you have to posit X as true without knowing it to be true,J

    But of course, we could reply here that you "know it to be true" just in case you have a justified belief that it is true, and it is true. I don't think that answers J's question though, because we still have to assume the "it is true" part. The problem is the one mentioned above, the relationship between truth and appearances, and a justification that is presumably based on appearances (or else a direct relationship to truth). If appearances are arbitrarily related to truth, or there is no clear relationship, that gap is unbridgeable from the outset.

    What is the point of saying knowledge must be "justified" if justification doesn't secure truth, or worse, might only be accidentally related to it?
  • Thoughts on Epistemology
    My concerns with JTB are all about how the truth of P is supposed to be establishedJ

    I would think it isn't. We just act like it is true until we are prompted to reconsider.

    To put it another way, the possibility of p being false seems to me to be irrelevant to the question of knowledge. What is relevant is whether p is or is not false, on the assumption that if it is not false, it is true.Ludwig V

    Doesn't that have to answer the possibility question? If P is true, it cannot possibly be false.

    But the question isn't whether P is itself possibly false. Rather, it is, do we think P is possibly false. If we think P is possibly false, then it hardly seems that we can know that we know P, since to know that one knows P is to have the justified true belief that P is itself a JTB (which presupposes that P is true, in which case it is impossible for it to be false).
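    (To make the step explicit in the usual epistemic-logic shorthand, this is just factivity applied twice:

    \[
    K K P \;\rightarrow\; K P \;\rightarrow\; P,
    \]

    where $K$ reads "it is known that." So any admission that P may be false, in the sense of withdrawing the claim that P is true, takes the claim to know that one knows P down with it.)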

    Well, my approach would be to explain that certainty and doubt, possibility and impossibility, etc. are meaningless without a concept of truth.Ludwig V

    What sort of concept though? Rorty's move to redefine truth as "what our peers let us get away with" is a conception of truth.

    I do think this is one of the key problems for JTB. What is meant by T is generally thin, and by being thin it is opaque. It doesn't avoid having to posit a certain metaphysics of reality versus appearances/belief, but it often isn't clear about what this metaphysics is.

    As for affirmation, I think the "retranslation" approach would just redefine affirmation. To affirm P, rightly understood, doesn't mean "P is the case, regardless of what we all believe." We've redefined truth, so affirmation is different now too. Affirming P is a sort of endorsement. "It is good to believe P," where "good" is also "hurrah for..."



    Well, I was going to respond to you before about "infallibilists," but I figured it might be beside the point of the thread. The classic examples of infallibilists are first and foremost Descartes, but then also generally the "Neoplatonists." But the "Neoplatonists" generally have it that even creatures are inexhaustible, and so we can always be wrong about particular judgements related to them. Likewise, we can always be wrong about the mutable world around us to the extent that it is always changing and not wholly intelligible in itself (e.g., there are no perfect triangles or circles in nature).

    To understand something fully is to understand its context, which, taken exhaustively, is to understand everything. Likewise, to understand something exhaustively is to understand its causes, which would mean fathoming the First Cause and Principle. But this is deemed impossible. Eriugena denies any sort of exhaustive (and so infallible) knowledge in the opening pages of the Periphyseon, drawing on Dionysius the Areopagite and Saint Gregory the Theologian as authorities. Saint Maximus makes a similar point in the Ambigua.

    What then is infallible? Well, one can hardly understand a triangle and have actually understood nothing (understanding is not created ex nihilo). Illusion is parasitic on being, on their view. Appearances cannot be arbitrarily related to reality (or else they wouldn't even be properly "appearances," but rather free-standing acausal apparitions). I suppose the phenomenological distinction between the object of knowledge and intentionality and the knower is important here too.

    I think the difficulties for truth and knowledge crop up when the metaphysics of reality versus appearances is ignored, and so we default into this thin idea of "p is true if p." There is no explanation of how the being of p relates intrinsically to the thought of p. Appeals to cognitive science or the physics of perception don't end up being able to bridge this gap if they themselves are viewed as largely a matter of pattern recognition within appearances. Fallibilism will be unavoidable, except perhaps within the realm of our own experiences (a sort of solipsistic tendency).

    What is odd to me is that, rather than going back to the drawing board on the metaphysics in question, the historical move has generally been to instead equivocate on "truth" and "knowledge" such that we can be led to epistemic nihilism, but that's ok because truth and knowledge are really just these other things (e.g., useful behavior). But epistemic nihilism never resulted in people poofing out of existence, or going catatonic. They always continued to act "pragmatically" anyhow, so it's unclear what the value of such a rhetorical exercise is supposed to be. Although I will add that the fact that people are incapable of living like they believe nihilism is true is precisely what you would expect if their intellects were being informed by the world around them; they would be unable to shake off their understanding. No matter how hard they reasoned about the groundlessness of their own knowledge, they would still run from rabid dogs like Pyrrho or climb a tree to get away from raging bull elephants like Sanjaya.
