• Positivism in Philosophy
    There is a pretty massive conflation common in this area of thought between "science" and "empiricism." This is pivotal in how different varieties of empiricism often justify themselves. They present themselves as responsible for the scientific and technological revolution that led to the "Great Divergence" between Asia and Europe in the 19th century, and this allows for claims to the effect that a rejection of "empiricism" is a rejection of science and technology, or, in some versions, that empiricism is destined to triumph through a sort of process of natural selection, since it will empower its users through greater technological and economic advances.

    But this narrative equivocates on two different usages of "empiricism," one extremely broad, the other extremely narrow. In the broad usage, empiricism covers any philosophy making use of "experience" and any philosophy that suggests the benefits of experimentation and the scientific method. By this definition though, even the backwards Scholastics were "empiricists." Hell, one even sees Aristotle claimed as an empiricist in this sense, or claims that Plotinus was one because his thought deals with experience. By contrast, the narrow usage tends to mean something more like positivism, particularly its later evolutions.

    I have written about this before, but suffice to say, I think empirical evidence for this connection is actually quite weak. As I wrote earlier:

    However, historically, the "new Baconian science," the new mechanistic view of nature, and nominalism pre-date the "Great Divergence" in technological and economic development between the West and India and China by centuries. If the "new science," mechanistic view, and nominalism led to the explosion in technological and economic development, it didn't do it quickly. The supposed effect spread quite rapidly when it finally showed up, but this was long after the initial cause that is asserted to explain it.

    Nor was there a similar "great divergence" in technological progress between areas dominated by rationalism as opposed to empiricism within the West itself. Nor does it seem that refusing to embrace the empiricist tradition's epistemology and (anti)metaphysics has stopped people from becoming influential scientific figures or inventors. I do think there is obviously some sort of connection between the "new science" and the methods used for technological development, but I don't think it's nearly as straightforward as the empiricist version of their own "Whig history" likes to think.

    In particular, I think one could argue that technology progressed in spite of (and was hampered by) materialism. Some of the paradigm-shifting insights of information theory and complexity studies didn't require digital computers to come about; rather, they had been held up by the dominant metaphysics (and indeed the people who kicked off these revolutions faced a lot of persecution for this reason).

    By its own standards, if empiricism wants to justify itself, it should do so through something like a peer-reviewed study showing that holding to logical positivism, eliminativism, or some similar view, tends to make people more successful scientists or inventors. The tradition should remain skeptical of its own "scientific merits" until this evidence is produced, right? :joke:


    I suppose it doesn't much matter because it seems like the endgame of the empiricist tradition has bifurcated into two main streams. One denies that much of anything can be known, or that knowledge in anything like the traditional sense even exists (and yet it holds on to the epistemic assumptions that lead to this conclusion!), and the other embraces behaviorism/eliminativism, a sort of extreme commitment to materialist scientism, which tends towards a sort of anti-philosophy where philosophies are themselves just information patterns undergoing natural selection. The latter tends to collapse into the former due to extreme nominalism though.

    I should qualify that, though: there seems to be a third, "common sense" approach that brackets out any systematic thinking and focuses on particular areas of philosophy, particularly in the sciences, and a lot of interesting work is done here.
  • What is real? How do we know what is real?


    Pluralism, as I understand it, allows different epistemological perspectives, with different conceptions of what is true within those perspectives.

    This just seems like relativism though, as in "what is true is relative to systems that theoretical reason (truth) cannot decide between." Here is why I think this:

    Either different epistemic positions contradict each other or they don't.

    If they don't, then they all agree even if they approach things differently.

    Whereas, if there can be different epistemological perspectives that contradict one another and they are equally true (or not-true, depending on which perspective we take up?) then what determines which perspective we take up? Surely not truth, theoretical reason, since now the truth sought by theoretical reason is itself dependent on the perspectives themselves (which can contradict each other).

    If we appeal to "usefulness" here, it seems we are appealing to practical reason. But there is generally a convertibility between the practical and theoretical, such that practical reason tells us what is "truly good." Yet this cannot be the case here, since the truth about goodness varies by system. Hence, "usefulness" faces the same difficulty.

    Dialetheist logics normally justify themselves in very particular ways, e.g. through paradoxes of self-reference. So their scope is limited to rare instances with something like a "truth-glut." We might find these cases interesting, and still think they can be resolved, perhaps through a consideration of material logic and concrete reasons for why some alternative system is appropriate for these specific outliers. But this isn't the same thing as allowing for different epistemologies and so different truths.

    The straightforward denial of truth, e.g. moral anti-realism, actually seems less pernicious to me here. Reason simply doesn't apply to some wide domain (e.g. ethics), as opposed to applying sometimes, but unclearly and vaguely.

    It also encourages discussion between perspectives, including how conceptions of truth may or may not converge.

    And a denial of contradiction doesn't? Why? Does denying contradiction or having faith in the unity of reason require declaring oneself infallible?

    You could frame it the opposite way just as easily. Because in relativism one need not worry about apparent contradiction, one need not keep at apparent paradoxes looking for solutions. Nor does one need to fear that opposing positions might prove one wrong. One can be right even if one is shown to be wrong, and so we can rest content in our beliefs. As reason becomes a matter of something akin to "taste" it arguably becomes easier to dismiss opposing positions out of hand.

    This at least comports with common experiences in the fields where relativism has become dominant, where students and professors report frequent self-censorship and "struggle session" events within the context of an ideology that nonetheless promotes a plurality of equally valid epistemologies and "ways of knowing." Marxism is in decline, but this is also still an area where "history" (power) is often appealed to as the final authority ("being on the right side of history").

    Relativism (about truth) would deny even this perspectival account as incoherent. (A very broad-brush picture of a hugely complicated subject, of course.)

    IDK, this seems like how most relativism re truth is framed (as opposed to being framed via anti-realism, which is the common framing for ethics).
  • What is real? How do we know what is real?


    I was just thinking of more straightforward examples, like if we had never seen an animal, nor any picture or drawing, it could still be described to us. Or, had we never seen a volcano erupt, it could still be explained in terms of comparisons to fire, etc. I just wanted to head off the counter that we don't always need sense experience to competently speak of things. No one has ever seen a phoenix, but we can learn to speak of them too.

    This works because you can use comparisons, analogy, composition and division, etc. But some prior exposure to things is necessary. It can't just be linguistic signs and their meanings (it could be just signs, if you take light interacting with the eye, etc. as a sign).

    The causal priority of things is needed to explain why speech and stipulated signs are one way and not any other. If one wants to say that the act of knowing water and knowing 'water' are co-constituting, one still needs the prior being of water to explain why knowing water and 'water' is not the same thing as knowing fire and 'fire' or why, if 'water' was used for fire, that would involve knowing a different thing with the same stipulated sign.

    Actually, I see this latter point has come up here.

    No doubt, how we act vis-à-vis fire would be different from how we act vis-à-vis water, even if we called fire 'water,' but this action, and the "usefulness" driving it, doesn't spring from the aether uncaused, but has to do with differences between fire and water.

    I am a big fan of some thinkers who put a heavy focus on language here, such as Sokolowski or parts of Gadamer, but I also think it's precisely philosophers of language who are apt to make claims like: "that which can be understood is language." Would a mechanic tend to make the same claim? Is understanding how to fix a blown head gasket primarily a matter of language? Or throwing a good knuckleball? What are the limits of knowing for people with aphasia who can no longer produce or understand language (or both)? I think that's a difficulty with co-constitution narratives as well. They tend to make language completely sui generis, and then it must become all encompassing because it is disconnected from the rest of being. I think it makes more sense to situate the linguistic sign relationship within the larger categories of signs.



    What is it the critic wants to conclude - that our use of the word is grounded in a pre-linguistic understanding of what water is? Perhaps we learn to drink and wash before we learn to speak. But learning to drink and wash is itself learning what water is. There is no neat pre-linguistic concept standing behind the word, only the way we interact with water as embodied beings embedded in and interacting with the world. Our interaction with water is our understanding of water.

    This just might be a misunderstanding. Some pre-linguistic understanding of water might well exist (indeed, this seems clear in babies), but that's not really the issue. The point is that water exists, has determinate being/actuality, prior to being interacted with by man. Otherwise, we wouldn't interact with water any differently than we would fire, except through the accidents of co-constitution. But co-constitution that is one way, instead of any other (e.g. water for washing, fire for cooking) presupposes that there are prior, determinate properties of both. There would be no reason for us to interact with any one thing differently from any other if this weren't the case. "Act follows on being."

    So on one hand we have a triadic {water – concept-of-water – use of water}; on the other just water being used.

    This misunderstands the triadic sign relation. The "concept of water" could be the sign vehicle in some relation, but in general it will be the interpretant.

    A basic sign relation in the case of sight would be:

    Object: water
    Sign vehicle: light waves bouncing off water and to the eye
    Interpretant: person
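
    As a toy sketch (purely illustrative, not drawn from any semiotics library or formalism), the same relation as a data structure, with the object kept distinct from both the vehicle and the interpretant:

    from dataclasses import dataclass

    @dataclass
    class TriadicSign:
        obj: str           # the thing itself, prior to being signified
        vehicle: str       # what carries the sign to the interpretant
        interpretant: str  # that for which the vehicle signifies the object

    # The case of sight described above:
    seeing_water = TriadicSign(obj="water",
                               vehicle="light waves bouncing off water to the eye",
                               interpretant="person")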

    But light only reflects off water differently than off a tree because the two are already determinately different. The difference doesn't come after the fact. Co-constitution theories have difficulty with this because they often lack a notion of essences/essential properties (or a strong one) and so they are left with the problem that whenever something is known differently it has seemingly changed and become a new thing.
  • What is real? How do we know what is real?


    You and Tim objecting to formal modal logic robs you both of the opportunity to present your arguments clearly.

    I objected to the weak modal formulation of essences, which is hardly the same thing. But yes, there are other ways to conceive of modality as well. For someone who argues that formalisms are merely tools selected for based on usefulness, you sure do like to appeal to them a lot as sources of authority and arbiters of metaphysics though.

    What was "Banno 's Principle? "It is easier to disagree with something if you start out by misunderstanding it." A bit rich coming from someone who frequently wants to post about "Aristotelian essence" and "Aristotelian logic," but seems to be unwilling to read about the basics of either.

    The suggestion that formal logic is restricted to analytic philosophy is demonstrably ridiculous

    :roll:



    What should I do? Is it OK for me to just shoot you, in order to eliminate dissent? Should I do what the One True Explanation of Everything demands, even if that leads to abomination?

    :roll:

    Slow down, you're going to run out of straw over there. I suppose if you think that "the truly best way to do things" involves shooting dissenters, that says more about you.

    Funny how we are now moving over to the ideas entertained in the thread on Faith. I wonder why.

    Hey, we made it 15 pages before the "Banno starts bringing up the religion of everyone who disagrees with him" bigotry phase of the thread. I'd say that's pretty good.
  • What is real? How do we know what is real?


    You know, those basic liberal virtues. How much worse would a world be in which only the One True Explanation Of Everything was acceptable, uncriticised?

    I assume the unstated premises here are that the "One True Explanation of Everything" isn't really true and is only not criticized out of force, otherwise, it sounds like a world that would be immeasurably better—a world free from error and ignorance and in harmony.

    I mean, what's the assumption here otherwise, that there would be a One True Explanation that was demonstrably true, and yet it would be bad if people didn't criticize it and demand error over truth and the worse over the better? (The old elevation of potency as "freedom" I suppose).

    Pluralists can accept many truths within different practices - physics, literature, religion, without affirming logical contradictions. But this doesn’t mean that "2+2=5" and "2+2=4" are both true. Pluralism has limits, governed by coherence, utility, and discursive standards.

    I think this is a much more wholesome response than supposing that some amongst us have access to the One True Explanation and the One True Logic.

    Yes, it's the bolded that seems to lead to the problems described here. You keep setting infallibilism and "absolute knowledge" up in a dichotomy with a pluralism based on utility, but this is a false dichotomy. Most fallibilists are not relativists. All that is required is a faith in reason (i.e. not misology), not "knowing everything."

    From the thread:

    Misology is not best expressed in the radical skeptic, who questions the ability of reason to comprehend or explain anything. For in throwing up their arguments against reason, they grant it an explicit sort of authority. Rather, misology is best exhibited in the demotion of reason to a lower sort of "tool," one that must be used with other, higher goals/metrics in mind. The radical skeptic leaves reason alone, abandons it. According to Schindler, the misologist "ruins reason."

    If we return to our caricatures we will find that neither seems to fully reject reason. The fundamentalist will make use of modern medical treatments and accept medical explanations, except where they have decided that dogma must trump reason. Likewise, our radical student might be more than happy to invoke statistics and reasoned argument in positioning their opposition to some right-wing effort to curb social welfare spending.

    Where reason cannot be trusted, where dogma, or rather power relations or pragmatism must reign over it, is determined by needs, desires, aesthetic sentiment, etc. A good argument is good justification for belief/action... except when it isn't, when it can be dismissed on non-rational grounds. In this way, identity, power, etc. can come to trump argument. What decides when reason can be dismissed? In misology, it certainly isn't reason itself.
  • What is real? How do we know what is real?


    In that sense, our version of reality or truth functions similarly to how language works; it doesn’t have a grounding outside of our shared conventions and practices.

    I guess I probably wouldn't agree with the ideas behind this, so that might be a difference.

    The position isn’t that truth is mere popularity, but that truth is built through ongoing conversation and agreement. What counts as true is what survives criticism, investigation, and revision within a community over time. So instead of certainty, we have a fallible and evolving consensus. Tradition, in such a context, is something that should be investigated and revised if necessary.

    Right, I have no problems with fallibilism and a circular epistemology. A certain sort of fallibilism seems necessary to defuse the idea that one must know everything to know anything.

    But I also don't think it's helpful to conflate a rejection of relativism with a positive assertion of foundationalism and infallibilism, which I seem to recall Rorty doing at times. Precisely because one need not know everything to know anything, it does not seem necessary to have an "ultimate" or "One True" in sight to make judgements. Rorty sounds sensible to me sometimes, but then there is stuff like the idea that a screwdriver itself, its properties, recommends nothing to us about its use, which strikes me as obviously wrong.

    Anyhow, those all seem to me like points related to fallibilism and foundationalism. But with:

    Humans work to create better ways to live together,

    We settle, at least for a while, on what works

    having conversations about improvement,

    In a relativism based on anti-realism (which I'm aware no one in this thread has suggested) there is simply no fact of the matter about these criteria you've mentioned. Nothing "works better" than anything else. So, we can debate in terms of "what works," or "is good," but, per the old emotivist maxim, "this is good" just means "hoorah for this!" That seems to me to still reduce to power relations.

    In a relativism where truth about "what works" and "better" changes with social context (where there is no human telos), where "we decide" (as individuals or as a community), none of those claims about "what works" or "what constitutes improvement" is grounded in any sort of underlying "goodness" or "working" that is separate from current belief and desire. That is, there is nothing outside the "playing field" of power politics. Rather, whether something "truly works" or is "truly good" depends on what people are driven towards at the moment.

    This certainly still seems to me to be very open to a reduction towards power. Truth as "justification within a society," for instance, seems obviously open to becoming a power struggle. One can just look at real-life examples from totalitarian societies or a limit case in fiction like 1984 and "Brave New World." "Brave New World" is probably the more difficult case because it's obviously a case with a tremendous amount of manipulation, yet one where people are positively inclined towards the system, and even their own manipulation.

    Now, I am all for the idea that the human good is always filtered through some particular culture or historical moment, and that this will change its general "shape." It's the denial of any prior form to this good that I think sets up the devolution into power politics. Likewise, human knowledge is always filtered through a particular culture and historical moment. Yet there are things that are prior to any culture or historical moment, and which thus determine the shape of human knowledge in all cultures and historical moments. The being of an ant or tiger for instance, is prior to culture, as its own organic whole, and so there is a truth to it that is filtered through culture, but not dependent upon it.

    Edit: the other thing with him (and a lot of other relativistic arguments) is the heavy reliance on debunking. But debunking only works if we have a true dichotomy such that showing ~A is equivalent with showing B. Returning to the earlier topic of skeptical and straight solutions, it seems to me like a lot of skeptical solutions likewise rely heavily on debunking.
  • What is real? How do we know what is real?


    I think you'd see, rereading, that this isn't accurate.

    How so? I'm genuinely confused here. What exactly would be your explanation of why relativism and pluralism re truth are wrong?



    A thorny issue. I suppose one's understanding of signs is important here, as well as the proper ordering of the sciences/philosophy (if one is supposed at all).

    If metaphysics has priority, we can say that water has to be before it is known. The being, its actuality, is called in to explain the sign and evolutions in what the sign evokes.

    [image: diagram of the triadic sign model]


    But the role of the object is collapsed with that of the interpretant in the Saussurean model that has so much influence on post-structuralism, and that paints a different picture.

    [image: diagram of the Saussurean dyadic sign model]

    And the difference between these two models lies in this question: in the second model, what is signified, the object, or an interpretation called forth by the sign (the meaning)? That seems to be the essence of the question here to me.

    It would be question begging to simply assume the prior model of course, but I think one can argue to it on a number of grounds.

    First, a system of signs that only ever refers to other signs would seem to be, in information theoretic terms, about nothing but those signs. This would be the idea, rarely explicitly endorsed, but sometimes implied, that books about botany aren't about plants, but are rather about words, pictures, and models, or that one primarily learns about models, mathematics, and words when one studies physics. There is also the question of the phenomenological content associated with signs. Where would that come from?

    Second is the old question: "why these signs (with their content)?" This is the old question of quiddity that Aristotle is primarily interested in. Then also, "since these appear to be contingent, why do these signs exist at all?" (the expanded question of Avicenna and Aquinas). Which is, to my mind, a question of how potential is made actual.



    Yes, in a way, but I think reality comes first. I think we have to have some familiarity with water before we have any sensible familiarity with "water."

    I agree as a rule, although the tricky thing is that one might indeed become familiar with something first through signs that refer to some other thing. We can learn about things through references to what is similar, including through abstract references. Likewise, we can compose, divide, and concatenate from past experience and share this with others so that what we are talking about refers to no prior extra-mental actuality (at least not in any direct sense).

    But this cannot be the case for everything, else it would seem that our words would have no content. Our speech would be a sort of empty rule following, akin to the Chinese Room.

    Mary the Color Scientist can know so much about color because she has been exposed to the rest of the world, just sans color. But if she had no experiences at all, it's hard to see how she could "know" much of anything.

    Now I suppose this doesn't require some prior actuality behind sense experience and signs. They could move themselves. It's just that if they did move themselves there wouldn't be any explanation for why they do so one way and not any other.
  • RIP Alasdair MacIntyre


    He is definitely worth checking out. After Virtue is the classic for a reason, but I actually think folks who don't agree with After Virtue might find his later stuff (particularly "Whose Justice? Which Rationality?") more fruitful. His background in Marxism (and thus Hegel), as well as Nietzsche, gives him a historicism at odds with a lot of Thomism. He makes an argument that "rationality" is always embedded in tradition and that tradition is a means of knowing/being rational.

    In some ways, he builds on the post-modern theorists, who he is in dialogue with (particularly Foucault). However, he remains a critic of modernity here. He points out that the Enlightenment liberal tradition is self-undermining, and this is precisely why it has bottomed out in relativism and perspectivism and has such a deep problem with a "slide towards multiplicity." He then goes about defending his preferred tradition as a tradition (as opposed to denying it is one).

    Interesting stuff, although I might modify bits of it. It seems to me that reason can be broader, more truly catholic (always relating to the whole and so always bringing itself beyond itself) and still always filtered through some particular tradition. This is perhaps a sort of form/individuating matter distinction we could make. Tradition unfolds in history according to reason, as one of its particular modes. But it can also attain this form more or less well, in the same way an animal can be healthy or sick. (For those of us Solovyev fans, perhaps this can even be the Providential unfolding towards theosis.) Modernity is sick because the Enlightenment has built in contradictions (and arguably has kept sublating new contradictions as it consumed nationalism and socialism through competition).

    This element of his thought is what makes it particularly annoying to see MacIntyre occasionally lumped in with strawmanned critiques of modernity, as though such critiques amount to simply asserting the superiority of antiquity or the Middle Ages, and must involve a denial of women's rights, technology, etc. At least Weaver brings this sort of response on himself (and does say some rather churlish things). Or even Schindler, who is perhaps a deeper thinker, is still partly responsible for being taken this way due to his polemical style and adages like "liberalism is the form of evil in the modern world." MacIntyre always struck me as more subtle and diplomatic, with something for most people to like.
  • What is real? How do we know what is real?


    I am not a relativist about truth

    No? And yet to the question of where relativism applies you say that this itself is subject to relativism.

    Of course, so much so that I'd hesitate to talk about "truths" here at all. Or maybe I don't understand what a non-context-dependent truth about a philosophy would be.

    Presuming that philosophy includes epistemology, ethics, metaphysics, logic, and natural philosophy/the philosophy of the special sciences, this would mean that there are no non-relativized, non-pluralized truths vis-à-vis most of human knowledge though, no?

    But the very claim that the truths of philosophy are relative is a (presumably non-context-dependent and potentially contradictory?) claim about metaphysics and knowledge.

    At any rate, I'm curious: if one is not a relativist, but assumes that there aren't truths about epistemology, metaphysics, logic, or ethics, how does one go about demonstrating that relativism is not correct? What would be your counterargument to the relativist?

    I'm assuming there is some misunderstanding here because it seems to me obviously impossible to accept that there are no non-pluralized truths about philosophy generally and to move from this to an anti-relativist stance, particularly if we have also affirmed that the question of whether or not any topic is relativistic or not is itself relativistic.

    What is the argument against relativism given these starting premises?


    Nor do I think that acknowledging "pluralistic, context-dependent truths" makes someone a relativist.

    I agree here. The truth of "it is raining" is context-dependent, for example. However, if one expands pluralism to the whole of logic, epistemology, metaphysics, ethics, and the philosophy of nature, or if the justification of logic and predication rests on "we decide," I cannot see how a fairly all-encompassing relativism wouldn't be the result.
  • What is real? How do we know what is real?


    Distinctions between our intuitions about the real, actual, existing, etc. are the bread and butter of metaphysics. Indeed, words like actual, virtual, essential, substance, form, information, idea, being, potency, existence, etc. come from this tradition, which influenced the development of English.

    Of course, it's unhelpful to make vague distinctions. But that's not generally what metaphysics does (at least, there is some attempt at clarification). When people refer to the "common sense" meanings of such terms, they are sort of appealing to the residue of millennia of metaphysical and theological speculation. It is also unhelpful to use the same terms in different ways, and metaphysics often does this. It does this precisely because it is always striving to make these terms more definite. I would say that historically, the terms are vague precisely because so many people have tried to clarify them in different ways.

    Formalism is one way to try to clarify terms. But a difficulty here is that not all explanations and understandings are equally easy to formalize. Hegel's dialectic couldn't be formalized until the 1980s with major advances in mathematics, particularly category theory. Analogical predication, a core feature of classical metaphysics, has yet to be convincingly formalized. Indeed, arguably logic is rightly the domain of univocal predication alone.

    Certainly, discussions of logic and the form of arguments and discourse can inform metaphysics. But I think the influence tends to go more in the other direction. Metaphysics informs logic (material and formal) and informs the development of formalisms. This can make pointing to formalisms circular if they are used to justify a metaphysical position.
  • RIP Alasdair MacIntyre
    The last of an era. Well, almost: Charles Taylor is still alive.



    He certainly has an interesting thesis in After Virtue. Arguably, the "apocalypse" thesis can be applied to a much wider area than ethics alone, really to our entire metaphysical vocabulary re substance, essence, causes, etc.

    That'd be my pet radical claim. The move to "modernity," including what MacIntyre looks into in ethics, is defined by the elevation of potency over actuality (often in terms of potency as "freedom"). And if one says: "hey now, my preferred modern area of thought doesn't even have a clear conception of actuality or potency," or "but potency is covered differently in each system these days," my response will be "exactly!" QED. :cool:
  • What is real? How do we know what is real?


    Subject to certain purposes, you might say.


    And these are true measures of usefulness, or only "useful" measures for usefulness? The problem is that this seems to head towards an infinite regress. Something is "useful" according to some "pragmatic metric," which is itself only a "good metric" for determining "usefulness" according to some other pragmatically selected metric. It has to stop somewhere, generally in power, popularity contests, tradition, or sheer "IDK, I just prefer it."

    So:

    what we can point to is broad agreement,

    So popularity makes something true? Truth is like democracy?

    shared standards

    Tradition makes something true?

    and better or worse outcomes within a community or set of practices.

    Better or worse according to who? Truly better or worse?

    I hope you can see why I don't think this gets us past "everything is politics and power relations." I think Nietzsche was spot on as a diagnostician for where this sort of thing heads.



    Well, either there is a truth about which truths are "pluralistic, context-dependent truths" or there isn't, right? Is "which truths are pluralistic, context-dependent truths?" a question for which the answers are themselves "pluralistic, context-dependent truths?"

    To be sure, if one starts throwing around all sorts of capitalized concepts without explaining them, they will be confusing. I would generally assume that when someone asks a pluralist re truth about what is "really true," though, they are asking about the existence of truths that are not ultimately dependent on what some individual or community currently considers to be useful or true.

    The mistake comes when we think we've consulted the Philosophical Dictionary in the Sky and discovered what is Really Real.

    A "mistake." Are you saying it would be wrong to affirm this? Curious. Would this be another of those "non-serious" philosophies that we can dismiss? But let me ask, are they "truly non-serious,' or would truths about which philosophies are "wrong," "mistakes," or "unserious" be "pluralistic, context-dependent truths?"

    Second, what separates a pluralism that sees assertions of non-pluralism as mistakes from the "crude pluralism" discussed earlier? The problems of the "unity of dogmatism and relativism," the way they reinforce one another, do not seem resolved here.
  • What is real? How do we know what is real?


    What I got from @Banno seems to be that pluralistic or context-based truths don’t mean that every contradiction is true. Instead, truths depend on the situation, purpose, or point of view.

    Of course. Just the ones that are useful to affirm are "true"... and "false." Maybe neither too. Perhaps in the interest of greater tolerance we shall proclaim in this case that there both is and is-not a One True Truth (TM)?

    But that doesn't really seem to work. To say "is" and "is-not" here is really just to deny "is." Yet can it be "wrong" to affirm the "One True Truth" in this case?

    When contradictions happen, it usually means they come from different ways of looking at things - not that truth doesn’t exist.

    Why not just: "there are different ways of describing the same thing that might be equally correct. Some might be more useful in some situations. And some might appear to contradict each other if one is not careful with one's distinctions, simplifying assumptions, definitions, clarifications, etc." as opposed to the idea that something can be both true and not-true depending on what is useful?

    I imagine you’re unlikely to be a Rorty fan, but didn’t he say that truth is not about getting closer to some metaphysical reality; it’s about what vocabularies and beliefs serve us best at a given time?

    Yes, is what Rorty says true? I know Rorty says it is "more useful." Is it truly more useful? I would deny it. But there are either facts about what is "truly more useful" or there aren't. If there aren't, and we are both just asserting sentiment, then won't this just become a power struggle? (I like my chances against Rorty since I still have a heartbeat).

    Well it may well be useful for one's survival to accept that Big Brother is right, so at one level (that of ruthless pragmatism) sure. But being compelled to believe something out of fear of jail or death is a different matter altogether, isn't it?

    Yes, but if you're the one doing the "compelling" it can be plenty "useful" right? Truth becoming a power relation doesn't ensure that you always win the power game.
  • Which is the bigger threat: Nominalism or Realism?


    Have you finished Olesky's book? I have not made it all the way through, but I think his exact objections are covered in depth (that's at least what the introduction and chapter summaries suggest; I only got through the discussion on Scotus and Ockham).

    Lots of thinkers have called nominalism "diabolical" or demonic though. There is a history here. It's not just that they think they are "bad ideas." Something akin to nominalism shows up at the very outset of Western philosophy, but never gets much traction. By late antiquity, it is all but gone.

    A lot of philosophy in this later period (late-antiquity to the late medieval period) is focused on self-cultivation, ethics, excellence, and "being like God." "Being like God" was the explicit goal of late-antique philosophical and monastic education in many surviving guides (even the biographies of Pagan sages paint them as "saints"). Here, reason plays an essential role in self-determination, freedom, the transcending of human finitude, and ultimately "being more like God." Reason itself also has a strong erotic and transformative element. This is a theme in Pagan, Christian, Islamic, and even Jewish thought to varying degrees.

    Hume's formulation that "reason is, and ought only to be, the slave of the passions," and his general outlook on reason (wholly discursive and non-erotic), hedonism, and nominalism/knowledge, very closely parallel what the old sages describe as the paradigmatic state of spiritual sickness (e.g. "slavery in sin"). The Philokalia, for instance, describes this sort of perception/experience of being in terms of "what can I use this for to meet my desires" (i.e., the Baconian view of nature) as the "demonic" mode of experience. That is, they consider something like utilitarianism (Mill mentions Bentham), and decide that this is how one thinks in the grips of demonic fantasy (for a whole host of complex reasons). The state of Hume's ideal, which is for him insulated from dangerous "fanaticism" and "enthusiasm," is in some ways pretty much a description of the state of the damned in Dante's Hell.

    Nominalism also tends towards the "diabolical" in the term's original sense (where it is opposed to the "symbolical"). It will tend to focus on multiplicity and division. But the "slide towards multiplicity/potency/matter" is the very definition of evil in classical thought. Evil is privation, and matter is, ultimately, privation on its own.

    The "pragmatic" (and so generally volanturist) recommendations of a number of nominalist thinkers line up pretty well with what Pagan and Christian thinkers thought was the state of a soul "enslaved by the passions and appetites," with a corrupt and malfunctioning nous. It's an orientation towards a hunger that turns out to be a sort of self-consuming, self-negating nothingness (the Satanocentrism of Dante's material cosmos, or perhaps R. Scott Bakker's image of Hell as inchoate sheer appetite, and the consumption of the other, and thus total frustration of Eros-as-union—or Byung-Chul Han's "Inferno of the Same" recommend themselves as images here).

    The idea is not that nominalists alone, or nominalists specifically, are uniquely "evil." A realist might easily allow that it is better to be led by a virtuous nominalist liberal than by a vice-addled realist. A liberal nominalist society might be organized more virtuously than a corrupt realist one (e.g. the Papacy of Dante's time).

    The point is more about how nominalism will make it impossible to identify virtue as virtue and vice as vice in the long run. Indeed, that is, I would imagine, a big motivation for the polemics, the idea that broad currents in modern thought slowly make vice into a virtue. I think there is a strong case for this re pleonexia (acquisitiveness) in capitalism.

    And I think this is how you get forceful takes like Weaver's:

    Like Macbeth, Western man made an evil decision, which has become the efficient and final cause of other evil decisions. Have we forgotten our encounter with the witches on the heath? It occurred in the late fourteenth century, and what the witches said to the protagonist of this drama was that man could realize himself more fully if he would only abandon his belief in the existence of transcendentals. The powers of darkness were working subtly, as always, and they couched this proposition in the seemingly innocent form of an attack upon universals. The defeat of logical realism in the great medieval debate was the crucial event in the history of Western culture; from this flowed those acts which issue now in modern decadence.

    By contrast, the modern tends to approach philosophy more like Hume on average than a Plotinus or an Origen. At the end of the day, you kick back from it and play billiards. It's not that serious. Daoism appeals more to some people, nominalist pragmatism to others, realism to still others, etc. It is, in some sense, a matter of taste.

    So, when some raving realist says: "don't drink that, it's poison!" the response is likely to be: "well that's quite rude to call it poison. I quite like it." But of course the response here assumes that "poison" is uttered as a matter of taste. The realist thinks they have good reason to think it is really poison. It's a fundamental disconnect. This is why nominalist rebuttals will tend to be less organized.

    This is all speaking in very broad terms of course. I am just speaking to the broad pitch of the rhetoric and where it seems to have its history. Nominalism, voluntarism, and the elevation of potency over actuality are anathema to broad swaths of the history of Western thought, but the elevation of desire also puts it in conflict with a lot of Eastern thought (which is why the latter has become a popular alternative).
  • What is real? How do we know what is real?


    Is there some conclusion that you would like to draw from all this?

    Yes, that the one-sentence explanation of essences you've offered is metaphysically insubstantial (which was @Wayfarer's point in the other thread). Now, there are attempts to use this basic conceptual machinery to develop a more robust notion of essence. The point made in the articles referenced earlier is that the machinery itself is perhaps inadequate for this task (or perhaps requires modification). It's hard to start with a system designed with nominalist presuppositions and work one's way back to essences, perhaps impossible.

    In particular, if it leaves open the possibility that "essential" is only predicated of things accidentally, it is not even really a theory of essences in anything like the classical sense, more a method of stipulation that could be developed into a workable theory of essences.
  • What is faith


    I agree whole-heartedly that the notion that one has grasped an Absolute Truth is extremely dangerous. It makes it impossible to acknowledge and tolerate any disagreement. I cannot think of a situation in which this might be a Good Thing, but I can think of many in which it is clearly a Bad Thing.

    What about propositions such as: "other groups of humans should not be enslaved?" or "all humans deserve dignity and some groups are not 'subhuman'?" Or "one ought not molest children?"

    Are these extremely dangerous absolutes we should be open to reconsidering?

    At any rate, what you're saying clearly can't be "Absolutely True," itself, right? :wink:
  • What is real? How do we know what is real?


    So if "One Truth" (I guess I will start capitalizing it too) is "unhelpful," does that mean we affirm mutually contradictory truths based on what is "useful" at the time?

    Or, if not, if truth does not contradict truth, then it seems to me that we still have "one" truth and not a plurality of sui generis "truths" (plural).

    As I mentioned earlier, a difficulty with social "usefulness" being the ground of truth is that usefulness is itself shaped by current power relations. It is not "useful" to contradict the Party in 1984 (the same being true in Stalin's Soviet Union or North Korea). Does this mean "Big Brother is always right," because everyone in society has been engineered towards agreeing? Because this has become useful to affirm?
  • How do we recognize a memory?


    There are indeed a lot of different "types of memory," and perhaps "faculties" involved in different sorts. I figure episodic memory is what we're focused on here, although the same thing also applies to crystallized knowledge recall (i.e. that we can tell facts we have made up, fictions, and lies, from facts that we think we genuinely know).



    My favorite book on this sort of thing is Sokolowski's "The Phenomenology of the Human Person." He talks a lot about imagination. In imagining, we can either self-insert or imagine in a "third person" way. One does not have "third person" episodic memories, but this doesn't seem like the key difference.

    Obviously, they are phenomenologically distinct, and it would be problematic if they weren't. Pace some of the much (over)hyped studies on prompting "false memories," these only really work in vague cases. You might be able to get someone to misremember being lost in the mall when they were three, but you're not going to prompt them into thinking they went to college when they didn't, etc.

    I might instead look for the difference chiefly in their being physically/metaphysically distinct. When considering the formation of a memory, we are talking about the senses, memory, and intellect being informed by some external actuality. Whereas, with imagination, we are dividing and composing stored forms. The two processes look quite different from a purely physical standpoint (although the same areas of the brain get used for imagining, perception, and memory, but to different degrees).

    It would make sense that an actual stimulus would tend to leave a deeper impression than a virtual/synthetic one, and that we would indeed have an anatomy that structures this sort of difference into our experience, since an inability to keep imaginings and memories straight would be very deleterious for human life. Although, if consciousness is purely an epiphenomenon, there actually wouldn't be any benefit to memory and imagination being phenomenologically distinct (another mystery of psycho-physical harmony).

    If I were building an android for instance, I would "tag" real versus synthetic experiences so as to keep them distinct during recall to avoid accidents like looking for food that never existed, etc.
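
    A minimal sketch of what such tagging might look like (a hypothetical design of my own, not any real robotics API):

    from dataclasses import dataclass, field
    from enum import Enum

    class Provenance(Enum):
        PERCEIVED = "perceived"   # formed from an external stimulus
        SYNTHETIC = "synthetic"   # composed from stored forms (imagination)

    @dataclass
    class Episode:
        content: str
        provenance: Provenance

    @dataclass
    class MemoryStore:
        episodes: list[Episode] = field(default_factory=list)

        def recall(self, include_synthetic: bool = False) -> list[Episode]:
            # By default, keep synthetic episodes out of recall, so the android
            # never goes looking for food that was only ever imagined.
            return [e for e in self.episodes
                    if include_synthetic or e.provenance is Provenance.PERCEIVED]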
  • What is faith


    More and more it's the extended family/(intentional) community, at least in the ideal case (for religious intellectuals).

    But it's not like the alternatives don't enforce power dynamics. The power dynamic in more self-consciously "progressive" thought just tends to be the exceptional individual destroying other power relations so as to increase individual freedom on behalf of the "masses" (a move favoring the exceptional individual most of course), and then the (progressive) state stepping in to remove friction between individuals and to correct various "market failures."

    However, since individuals liberated from culture (particularly exceptional ones) tend to have a lot of friction, and because markets fail a lot and entrench, rather than revalue, existing disparities, the state (and activist) has to do a lot of intervention and reeducation. Hence, they need to have a lot of power.
  • What is faith


    Isn't your take informed by a bias that values traditionalism and is suspicious, perhaps even hostile towards political radicalism (particularly of the Left)? Is your use of irony as Rorty uses it? Is 'unseriousness' how they would describe it, or is that your description for it? There's a further question in what counts as a politically radical circle?

    The constant use of irony and humor is sort of a defining feature of the Alt-Right and something they are self-consciously aware of. It's why their biggest voices, and now the presidential administration itself, often advances ideas through vague but provocative "funny" memes. E.g., Trump as the new Pope, joke memes about deportations, etc.

    Tucker Carlson fit this mold quite well (who does the Two Minutes Hate better?). He also fits the mold of the sarcastic "exceptional individual who sees through all the bullshit" (the audience being implicitly one as well, a style incredibly popular since at least Nietzsche).

    I wouldn't put this on the left in particular. If anything it is bigger for aspects of the right. The entire Manosphere ideology would seem to make meaningful romantic relationships impossible. Everything is transactional and defined in an economic calculus defined by evolutionary psychology, with catchphrases like "alpha seed and beta need" or "alpha fucks and beta is for the bucks." One cannot "fall in love" without risking becoming a sucker and a "cuck." But the obsession with being "cuckolded" goes beyond romance, and expands to all realms of social life. Hence, one must "keep it real," which means being a strong willed egoistic utility maximizer with one's gaze firmly on those goods which diminish when shared so as to "get one's share."

    Simone de Beauvoir's analysis of gender relationships in terms of Hegel's Lord-Bondsman dialectic is spot on here. The "pick up artist" craves female validation (sex being one of the last goods to be commodified) but makes woman incapable of giving him recognition because he has denigrated her into a being lacking in dignity.

    Likewise, the right-wing fixation on warrior culture, war, and apocalypse, which seems akin to 1914 in many ways, is a desire for war precisely because "nothing matters/is serious." It's the desire for war, apocalypse, crisis, etc. precisely because of this sort of spiritual constipation and the fear of degenerating into Nietzsche's "Last Men," i.e. into the "consumers/workers" they are so likely to be seen as by those in authority.

    But it's certainly still a factor in the left as well, in different ways. The political left has done more to lead the way on undermining all claims to authority, advancing the idea that everything comes down to power relations, and yet there is still shock that people no longer trust sources of authority, such as doctors or scientists.

    Anyhow, re traditionalism, I see no reason to prefer tradition for the sake of tradition alone. All tradition was new at some point. But iconoclasm, the destruction and denigration of tradition for its own sake, for the sake of an amorphously defined "progress" that has no clear view of human flourishing, or "to liberate the exceptional individual," strikes me as the more common problem. There are indeed people who value tradition for tradition's sake, but they have far less influence than those who value desacralizing everything in the name of "progress."

    It is the person restrained by custom who most benefits from its destruction. This is unlikely to be the meek and gentle.
  • What is real? How do we know what is real?


    It's the thing we were discussing. If water was not H2O in Aristotle's day, would this mean that being H2O is neither essential nor necessary for water, or that water itself changes? "Or could heat be caloric?" might be a similar sort of question (or was it?)
  • What is real? How do we know what is real?


    Lots of thinkers. If it's "there are something like essences, but they change," we can consider Hegel, a number of Hegelians, Whitehead, maybe Heidegger (unfolding), a lot of contemporary process philosophers, etc.

    If it's "there is nothing like an essence (in the classical sense) but what classical metaphysicians called essences changes" then Deleuze, Kuhn, Butler, Merleau-Ponty, constructivists generally, etc.
  • What is faith


    You'd imagine this is fairly common today. Why do you find this more pernicious?

    First, because people end up offending others without realizing it and holding on to a sort of subtle bigotry.

    But more importantly, I think it ties into a large problem in liberal, particularly Anglo-American culture, where nothing can be taken seriously and nothing can be held sacred. Deleuze and Guattari talk about this sort of "desacralization" that occurs under capitalism. I think it leads to a sort of emotional and spiritual constipation. Feeling deeply about anything (thymos), or especially being deeply intellectually invested in an ideal (Logos), as opposed to being properly "pragmatic" (which normally means a focus on safety and epithumia, sensible pleasures) is seen as a sort of failing. This is born out of an all-consuming fear of "fanaticism" and "enthusiasm" (something Charles Taylor also documents).

    Part of what made Donald Trump's campaign so transgressive was the return to a focus on thymos, whereas elites have long had a common habit of complaining that people were not "voting according to their economic interests" (which apparently ought to have been their aim vis-à-vis politics, the common good).

    Today, even in politically radical circles, it seems everything must be covered in several layers of irony and unseriousness. Indeed, all pervasive irony is particularly a hallmark of the alt-right. To care about anything too deeply is to be vulnerable, potentially a "fanatic," or worse "a sucker."

    This tendency can also lead towards a sort of elitism, which I think Deneen explains well using Mill:

    Custom may have once served a purpose, Mill acknowledges—in an earlier age, when “men of strong bodies or minds” might flout “the social principle,” it was necessary for “law and discipline, like the Popes struggling against the Emperors, [to] assert a power over the whole man, claiming to control all his life in order to control his character.”9 But custom had come to dominate too extensively; and that “which threatens human nature is not the excess, but the deficiency, of personal impulses and preferences.”10 The unleashing of spontaneous, creative, unpredictable, unconventional, often offensive forms of individuality was Mill’s goal. Extraordinary individuals—the most educated, the most creative, the most adventurous, even the most powerful—freed from the rule of custom, might transform society.

    “Persons of genius,” Mill acknowledges, “are always likely to be a small minority”; yet such people, who are “more individual than any other people,” less capable of “fitting themselves, without hurtful compression, into any of the small number of moulds which society provides,” require “an atmosphere of freedom.”11 Society must be remade for the benefit of this small, but in Mill’s view vital, number. A society based on custom constrained individuality, and those who craved most to be liberated from its shackles were not “ordinary” people but people who thrived on breaking out of the customs that otherwise governed society. Mill called for a society premised around “experiments in living”: society as test tube for the sake of geniuses who are “more individual.”

    We live today in the world Mill proposed. Everywhere, at every moment, we are to engage in experiments in living. Custom has been routed: much of what today passes for culture—with or without the adjective “popular”—consists of mocking sarcasm and irony. Late night television is the special sanctuary of this liturgy. Society has been transformed along Millian lines in which especially those regarded as judgmental are to be special objects of scorn, in the name of nonjudgmentalism. Mill understood better than contemporary Millians that this would require the “best” to dominate the “ordinary.” The rejection of custom demanded that society’s most “advanced” elements have greater political representation. For Mill, this would be achieved through an unequal distribution of voting rights...

    Society today has been organized around the Millian principle that “everything is allowed,” at least so long as it does not result in measurable (mainly physical) harm. It is a society organized for the benefit of the strong, as Mill recognized. By contrast, a Burkean society is organized for the benefit of the ordinary—the majority who benefit from societal norms that the strong and the ordinary alike are expected to follow. A society can be shaped for the benefit of most people by emphasizing mainly informal norms and customs that secure the path to flourishing for most human beings; or it can be shaped for the benefit of the extraordinary and powerful by liberating all from the constraint of custom.

    Deneen goes on to cite Burke's at least plausible response that it is actually "innovators" who have the greatest tendency to be tyrannical.
  • What is real? How do we know what is real?


    That's an interesting point, although I am not sure if it would challenge notions of essences or substantial form directly. Essences and substantial form do not in any way rule out equivocal predication, what they are supposed to do is explain the possibility of univocal predication.



    Thinking through this question now -- Kuhn's Structure of Scientific Revolutions is what I have in mind, but with a more materialist mindset which doesn't give in to the notion that nature itself changes with the sciences.

    C.S. Lewis published a book made from his lectures on the "model of nature" underlying medieval and renaissance literature (The Discarded Image) just two years after Kuhn, and it contains a similar, although not very developed, insight about the role of paradigms, obviously coming from a very different background. I always thought that Kuhn (and particularly later interpreters) was perhaps overreaching a bit. My argument would be that models, language, theories, etc. are not what we know, but rather means of knowing. Hence, when a model, or paradigm, or in Lewis's terminology a "backcloth," changes, we are not transported to a new world or dealing with new things, so much as making use of refined tools. But, I will allow that if one has already accepted:

    A. That truth is primarily a property of sentences;
    B. Representationalism; and
    C. The empiricist epistemic starting positions that tend to make arguments from underdetermination nigh impossible to defeat;

    then the theories that turn natural science into more of a question of sociology start to make a lot of sense. I guess part of what initially made me skeptical here, though, is just the wide plurality of "skeptical solutions" (as opposed to "straight solutions" à la Kripke) leading in radically different directions in 20th-century thought.
  • What is real? How do we know what is real?


    IIRC, Spade actually gets to some of the well-known problems with "bundle metaphysics" fairly early on.

    At any rate, it seems fairly unobjectionable that "in every possible world all cats have the property of being cats." Something cannot be a cat without being a cat, and it cannot be a cat if it is not-a-cat. As a metaphysical theory though, this doesn't say much.

    For one thing, we might ask, "can a thing have both the property of being a cat and the property of being a dog?" If not, why? If so, doesn't it seem that, were we in the presence of such a being, we would be in the presence of some sort of third entity, a chimera, instead of being in the presence of both "a cat and a dog"? Can we be in the presence of one thing and two essences, two distinct types of thing? The metaphysical notion of measure is left unresolved here.

    We might also ask: "what sorts of things have essences?" Our definition is unhelpful here. Is warm water essentially different from cold water? Well, warm water is necessarily warm, so we could say that it is warm in every possible world. If it wasn't warm, it wouldn't be "warm water," but would instead be something else, like "cold water," "hot water," etc. Does this mean there is no possible world where my cup of "hot tea" cools, because hot tea is necessarily hot? Rather, there would only be possible worlds where my hot tea is spontaneously replaced with an essentially different "lukewarm tea," and then that is replaced by a "cold tea."

    We could ask the same question re ice and steam or black cats and white cats. A black cat is necessarily black in any world where it is a "black cat." Yet if Fluffy the cat falls into a vat of paint, will we be faced with a sort of sorcery whereby one being has been spontaneously replaced with an essentially different sort of being?

    Does a chair or table have an essence? But then there are all sorts of chairs and tables with different properties, and all sorts of things might be used as improvised chairs or tables.

    What makes something have an essence? Or what makes a property essential? If the answer cannot go further than: "a property is essential just in case some thing has it in every possible world," then that doesn't actually seem to tell us anything about positively distinguishing between essential and accidental properties at all. If someone denies that water is essentially H2O, arguing instead that water is only essentially clear, potable, and wet, what decides between these?

    Without more, it seems that the answer would have to be either: "we just know essences when we see them," or "we decide based on what it is useful to consider essential." The former isn't much of a theory, whereas the latter says that "essential" is itself always predicated per accidens, which is actually a denial of essences and essential properties, and so not much of a theory of essences.






    I tend to favor the epistemic side over the ontological side -- I understand it's basically a "player's choice," but it's my preference. The reverse of "How do you know unless you start with what is?" is "How do you know what is unless you start with what you know?"

    That's what Przywara says. You will always have a sort of "passing back and forth" between questions of the "metaontic" and "metaepistemic" in first philosophy, although he does give a slight nod to the ontic here, in that even framing an "act of knowing" presupposes something about "act." He ties this back, very interestingly, to the instability of "creaturely being," where essence does not explain existence.

    This was a problem for Plotinus as well. Even if being and being known are two sides of the same coin, they seem to imply some sort of composite action, an essential difference. There is the being and the knowing of being, or "being the knowing of being." Yet if the One is absolutely simple, this distinction causes a problem (hence "real" versus "conceptual" distinctions). It's the same deal with the "life of the Trinity."

    I think it's in virtue of the things our species relies upon water for -- drinking, cooking, bathing, etc.

    Yes, but we might argue that water is only good for these things because of what it is in the first place. There is a (prior) reason (cause → actuality → form) determining why people want to drink water but not molten rock. Usefulness and human behavior flow from some sort of determinate being, both in terms of man himself and what he interacts with.

    At least, that seems quite reasonable to me.
  • What is faith


    In the context of atheism, it seems to me like there are two general modes of bigotry. The first is your (earlier) Sam Harris or Nietzsche type, which tends towards hostile "arguments from psychoanalysis" that paint anyone of faith as irrational, child-like, weak-willed, etc. This sort of view is straightforwardly bigoted, and in the hands of many of the "New Atheists" it often gets paired with a fairly extensive ignorance of the topic they have decided to address.

    I actually don't think this is the most pernicious sort of anti-religious bigotry, for the same reason that bigoted fundamentalists are themselves not as dangerous as their noxious views might suggest. In either case, the bigotry is so overt that everyone sees it, and of course plenty of atheists think the more aggressive of the New Atheists are just obnoxious.

    The more pernicious sort of bigotry, to my mind, seems to be much more common in the upper classes, and tends to get practiced by people who are "accepting of religion" or even identify with a certain faith (although it tends to be people for whom this is more of a cultural identity). In this view, religion is fine—provided it is not taken very seriously. It's ok to be a Baptist or a Catholic, so long as you're not one of those ones, the ones who take it too seriously, allowing it to expand beyond the realm of private taste.

    And this means nodding along with sacrilege and blasphemy, preferably joining in. You're supposed to nod along when someone refers to the Eucharist as a "Jeez-It," etc. It's a bit like the old Roman sacrifices to the emperor. One must prove one's allegiance to the secular liberal order above all else—burning one's incense to Caesar—and then one is free to practice the local faith in private. This is a sort of tolerance of faith just so long as it is rendered meaningless, a mere matter of taste, and a taste that conforms to the dominant culture.

    I've read plenty of African Americans describe a similar phenomenon, although there the dominant culture has largely come around on this sort of thing.

    This comes out in two ways:

    First, it's not uncommon to see comments directed at religious groups or ethnic/class groups that would be considered "beyond the pale" if they were directed at races or made on the basis of sex. Liberalism has a particular focus on biological identity, precisely because people do not choose these things. Whereas religion, ethnicity, and class are things that the upwardly mobile individual must shed upon attaining to the global "middle class" (which is really more of an economic elite, comparatively speaking).

    BTW, I also think this sentiment is why so much moral debate on homosexuality and trans-sexualism focuses on whether or not it is "natural" (whether people are "born this way," i.e. not a choice). I don't think this framing is helpful though. I would tend to think the bigotry and cruelty are unjust regardless of whether they are based on "naturalness" (that is, it is not necessarily just to oppress someone for their choices either). Whereas, at the same time, something's being "natural" hardly makes it acceptable. Rape is perhaps "natural," but we hardly want to defend that.

    Second, religious beliefs are only allowed a sort of freedom from condemnation inasmuch as they accord with liberal norms. So, things like not ordaining female priests, viewing fornication as a form of sin (against the "Sexual Revolution"), and more conservative positions on divorce (sacrament versus contract between individuals) get decried. This, of itself, is not a problem. Some religious beliefs might be bigoted, unjust, etc. The problem is that, because "religious belief" has become merely a matter of "private taste," disagreements on such issues simply get written off as always a sort of bigotry. Yet, it seems to me that there is a sort of rational argument to be had re fornication, pornography, gluttony, acquisitiveness, etc. that it is not helpful to dismiss in this way.
  • Which is the bigger threat: Nominalism or Realism?
    BTW, collectivist attacks on the "individual" from the direction of nominalism are not the only ones. You could also consider here what Nietzsche says about the "I" and ego, Hume's "bundle of sensations," or post-modern dissolutions. If your concern is the primacy of individuals, I am not sure nominalism will prove particularly friendly.
  • Which is the bigger threat: Nominalism or Realism?


    There has never been a nominalist, or rather, individualist country

    There have been countries dominated by nominalist ideology though. They can be individualistic or collectivist. That's the whole idea. There is no such thing as human nature. Thus, the state can engineer man into whatever it needs man to be. If this means making man into an ant-like collectivist, why is this wrong? If man lacks a nature, it can hardly be because it goes against his nature, or his "natural individuality."

    In what sense is the "individual" the right unit for society? Again, what even constitutes an "individual?" "Individual" is just a name applied to sense data. Yet human infants die on their own. Individual men do not tend to last long in the wild on their own either. Therefore, the proper unit of individuality for "man" is arguably the tribe. "Children," "women," and "men" are merely parts of this "natural" whole. They are accidents, not substance. The substance is the society. (Or at least, this view is just as good a way to view things as any other, at least from a metaphysical perspective.)

    That's the reasoning for collectivist nominalists. If you say, "no, 'man' refers to discrete individuals," I will just ask, "why must this be so?" This would seem to assume some sort of essence that is filled out by unique particulars, the very thing we have already denied. I will just maintain that the more useful measure of the individual is the society, and that you are referring to mere parts, just as a "hand" or "eye" is not a proper whole, neither is this "human being" of which you speak (these are merely cells in the proper organism of state/party).
  • Which is the bigger threat: Nominalism or Realism?


    No, the fact that something looks like a human being makes something a "human being".

    Something is a human being because it looks like what? As you said:

    It means that everything we know about human beings is derived from the senses and experience.

    How does one collocation of sense data "look like a human being" in any definitive sense? It seems we are just attaching names to regularities in sense data, right? By what criteria do we attach such names? Supposing I'm a racist and I do not find it "useful" to attach the name "human" to Asians, why am I wrong about what a human being is? It's just an ensemble of sense data after all.

    And what about any particular ensemble of sense data makes it worthy of dignity?


    It's more like "realism is false because no one can find universal or abstract object". One of the common objections from nominalism against realism is that forms and universals and abstract objects cannot be found.

    I rephrased it as I did because what you're saying is straightforwardly question begging. The realist claims we see humanity every time we see a man. To expect to "see" (sense) a universal as one would a particular isn't a critique of realism, it's just failing to understand it.
  • Which is the bigger threat: Nominalism or Realism?


    Simply that he looks like other human beings.

    If something "looks like a human being" we should treat it with dignity because...?

    One thing is for certain, we are not developing these general ideas by looking at forms and essences.

    "Nominalism is true because realism is certainly false." Good one.

    The notion that one attaches and removes dignity to terms and definitions in order to dignify a human being is precisely the threat that I’m talking about. When one dehumanizes, like calling people rats for example, nothing at all changes in any individual human being outside the realist skull, but his treatment of them certainly does.


    Surely if that's the threat then people's treatment of each other must have improved markedly after 1500, when nominalism became ascendant. More nominalist Protestant nations like the US must have treated minorities better, and the Soviet Union and communist China must have been particular exemplars of upright behavior. In terms of the voluntarism that tends to accompany nominalism, I am aware of a society called "the Third Reich" that vastly prioritized the will, which should have resolved the problems of intellectualism in ethics. Let me just flip to my history book to confirm this...
  • Which is the bigger threat: Nominalism or Realism?


    What makes something a "human being?" "Usefulness?" The judgements of some "language community?" Real-life caricatures like H.P. Lovecraft seemed to doubt that the "gibbering French-Canadians," the residue of the Acadians, the Portuguese, the Welsh, etc. were truly possessed of the same humanity embodied by "good New England stock." More to the point, I have seen people here and elsewhere argue that, not only should elective abortion be legal, and not only is it completely unproblematic, but that this is in part because, prior to passage through the birth canal, the entity in question "is not human."

    Humanity, it seems, can be defined many ways. It can even be defined in such a way that no dignity attaches to the term, such that we shouldn't be any more concerned about killing inconvenient humans than we would inconvenient rodents. So, in virtue of what is the sort of definition you're looking for, one that says 'killing men is wrong,' more accurate vis-à-vis our term "man?"

    If Lovecraft was wrong, what was he wrong about? He can hardly have been wrong about his own concept of "man."
  • Which is the bigger threat: Nominalism or Realism?


    But if someone kills another for some the sake of some name like “country” or “God”, then we have an instance of destroying what is boundlessly more valuable for the sake of an idea or figment.

    Kills another what exactly? :wink:
  • What is real? How do we know what is real?


    Before Copernicus there was overwhelming evidence of the spheres having and will always being in existence.

    This is a difficulty for truth as primarily a property of (linguistic) propositions instead of "the adequacy of an intellect to being." Epicycles get superseded as a theory, and so all propositions about them (outside of the purely observational) take on the binary value of false. Phlogiston and caloric theory are superseded, and so likewise we are left with a bunch of false sentences, despite these theories representing real advances in knowledge. But one might suppose these are more or less adequate understandings of reality, truth being a case of contrary, as opposed to contradictory, opposition (like light and darkness).

    I think the underdetermination argument is what undermines this notion -- it's what I'd guess now, but it could be that we're reading patterns into the past that we accept now which are predictive and make sense,

    But this is, at most, an epistemic issue. The step further, that claims that essences themselves change, has to say that the water that carved out the Grand Canyon isn't the same water we see causing erosion today. I think there are a host of problems with that though, not least of which is "why should our ideas about things evolve in one way instead of any other if there is no actuality that is prior to our speech about things?"

    Just think about applying this same sort of relationship to: "who killed JFK?" Would the killer be Oswald today and then become someone else if we discover decisive evidence that someone else killed JFK? But how could we "discover evidence" of a fact that doesn't exist until we acknowledge said fact? And would it be ok to have punished Oswald while he was "still the killer?" Obviously, this is absurd, but if it is absurd we need a reason for why this sort of change only applies to identities like "water" and not to "JFK's killer."

    Second, if water was not H2O in 1600, then by today's definition, water was not water then. But then in virtue of what do we speak of some enduring thing, "water," that changes over time as theories change? It seems to me that, on this view, it might be better to say that nothing is changing so much as one thing is replacing another. If things just are whatever dominant opinion says they are, then things are popping into and out of being as theories change.

    The other question is, how could we be wrong in these sorts of cases if what a thing is depends on what we currently think of it? How can we discover that "fire is not phlogiston" if there is no such thing as fire outside of ideas like phlogiston theory?

    IDK, perhaps some of these can be ironed out but it certainly seems more intuitive to me that fire and water are, and remain, what they are.
  • What is real? How do we know what is real?


    Yes, this is akin to the problem of circularity in Locke, where the nominal essence by which different things are defined ends up determining the real essence by which they are identified as a certain nominal essence. Nothing seems particularly essential in this formulation, as it's all subject to revision.

    But this certainly goes against the intuition that water was water millions of years ago, one which is supported by plenty of empirical evidence.

    This is, to my mind, a general weakness of formulating a theory of essences by beginning with language and naming. It puts the effect before the cause, and one needs a sort of circularity to resolve things.
  • What is real? How do we know what is real?


    Paul Vincent Spade's The Warp and Woof of Metaphysics might frame things in terms you are more familiar with.

    Or Klima's comparison, but I would say the first is more direct and accessible.

    As Spade (along with many others) remarks, there is confusion because: "In analytic philosophy, there is a view called “Aristotelian essentialism”— by both its supporters and its opponents — that in fact has nothing to do with Aristotle."

    Note that subject can be said in two ways, in terms of logic or in terms of metaphysics. In logic, the subject is what any predicate is "attached" to. In metaphysics, subject can be said properly or improperly, as of a thing or as of a thing's underlying substrate. Substance is said of particular things primarily. It is in secondary substance that we see essence, the "type of thing" something is.

    The logic interacts with the metaphysics, but I am not really sure what to recommend for that aside from just reading the Categories (and Porphyry's Isagoge), the Physics, and the Metaphysics. I have not found a really good summary the way I have for some parts of Plato.
  • Demonstrating Intelligent Design from the Principle of Sufficient Reason


    That's a good example. What is impossible/contradictory is not always obvious. That is one of the risks when talking about potential/possibility in loose terms. We end up affirming the "possibility" of any words we can smash together without obvious contradictions.

    This can get sort of out of hand in "bundle" and "pin cushion" theories of predication. E.g. a subject just is a "bundle of predicates," or "predicates attached to some bare haecceity" (the pin cushion that makes things individual). It would seem that anything can become anything else here, because the subject is completely bare.

    Anyhow, an interesting thing is that "the first number that violates the Goldbach Conjecture" is a rigid designator. It uniquely specifies a number (if it exists). You could think of such a designator in terms of the shortest program that would retrieve a number too (it is easy to check if a number fits the criteria). But, strangely, this ability to uniquely specify the number fails to reveal its identity. It's a sort of recreation of the Meno Paradox. You don't really know what you're looking for until you've found it.
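    To make the "shortest program" gloss concrete, here is a minimal Python sketch (my own illustration; the function names are mine). Checking whether any given even number violates the conjecture is easy and always terminates; retrieving the number the designator picks out is an unbounded search that halts only if such a number exists:

        def is_prime(n: int) -> bool:
            # Trial-division primality test; slow but obviously correct.
            if n < 2:
                return False
            i = 2
            while i * i <= n:
                if n % i == 0:
                    return False
                i += 1
            return True

        def violates_goldbach(n: int) -> bool:
            # True iff n is an even number > 2 that is NOT a sum of two primes.
            if n <= 2 or n % 2 != 0:
                return False
            return not any(is_prime(p) and is_prime(n - p)
                           for p in range(2, n // 2 + 1))

        def first_goldbach_violation() -> int:
            # The "program" the rigid designator corresponds to: search the
            # evens in order. This halts just in case the conjecture is false;
            # otherwise it runs forever -- the designator uniquely specifies
            # its referent without revealing it.
            n = 4
            while True:
                if violates_goldbach(n):
                    return n
                n += 2

    The check is decidable for each candidate, but whether the search ever returns is exactly what the conjecture leaves open.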
  • Which is the bigger threat: Nominalism or Realism?


    Are the particulars not as worthy of being loved, admired, or understood as the abstractions and universals the realist holds dear?

    I'm not sure if this makes much sense as a critique. A lot of realism is extremely person centered and sees a strong telos at work in history (the history of particulars). Valuing particulars is not really what is at stake.

    Actually, I think some realists attack nominalists precisely for destroying particulars and turning them into a formless "will soup." Note that personalism and phenomenology seem to be biggest in traditional Christian philosophy, which tends to be unrelentingly realist.

    I'm hoping someone can point me in the direction of those who see realism as a threat, and we can continue this ancient battle on an even footing.

    While there are strong similarities between anti-nominalist critiques from a variety of Eastern and Western sources, I think nominalist critiques of realism will be more diverse because nominalism tends towards greater plurality (for better or worse). There are simply skeptical critiques ("you cannot know that because you cannot know the noumenal") and there are critiques rooted in a view of freedom primarily as power/potency ("your ideas are keeping you from realizing maximal freedom"). The critique of an eliminative materialist is going to be different from that of a Nietzschean, which will be different from that of a skeptical liberal, etc.



    Ockham is singled out by lots of people, it's a bit of a trope. But it's really the voluntarism that's more important. Arguably, the nominalism is just a means to his voluntarism.

    But it's not like all realism comes from the angle of intellectualism. Some seek to find a unity of intellect and will above all distinctions (more common in the East because the "nous" and "heart" do not map neatly to intellect/will, e.g. Palamas; but even for Aquinas the distinction of will and intellect in God is purely conceptual, not real, while in Eckhart there is the "darkness above the light" beyond all distinctions as "Unground"). Yet these will tend towards voluntarism being "more wrong," particularly as respects man. Or, in the Philokalic tradition, we might even say that voluntarism is the state of the sick soul, the presence of the diabolical, linear reason.

    I don't think these critiques are totally off-base, although Ockham and Scotus might be bad targets. The later anthropology that comes to dominate modern thought in thinkers like Hume, and its conception of reason in particular, is very close to the description of the mind in the condition of sin/under demonic influence in writers like Evagrius. So they are diametrically opposed in a fairly strong sense. But I think people tend to confuse "moral opprobrium" with a more "philosophical opprobrium" here.
  • What is real? How do we know what is real?


    “ . . . a position no one familiar with philosophical inquiry could take seriously.”

    Do you not see the irony in having to write off fairly popular opinions in philosophy as "unserious" here? Grayling is responding to other professional philosophers. Harris's stories come from professional conferences as well, e.g. a speaker at an ethics conference who claimed that we could not say whether or not another culture was wrong if they tore out the eyes of every third-born infant out of mere custom because "it's their culture." Nietzsche is, I would imagine, the long-running title holder for "most popular thinker in the West."

    But in virtue of what are positions to be dismissed as "unserious?" Again, it seems to me that you have to start with some (more or less foundational) premises here to avoid the problems mentioned in the "Dogmatism and Relativism" thread. And like I said before, the problem you mention seems to apply to affirming all sorts of premises, not just "foundational" ones. People will either affirm what follows from more or less obvious or well-supported premises or they won't. In Harris's example, "tearing infants' eyes out is not good for them" would be the obvious premise in question.

    The person denying this premise seems factually wrong. Are they wrong in a moral sense? With that particular premise, I'd say yes. In particular, if they allow children to be blinded when they could have otherwise prevented it, or blind a child because "when in Rome do as the Romans," that seems particularly bad. Whereas, while "act is prior to potency" might be more "foundational," it's hardly blameworthy to have failed to consider it. Those terms need some serious unpacking. "What is known best in itself" is not generally "what is known best to us." What is known best to us is particulars, stuff like "blinding children isn't good for them." If people have any rational responsibility at all (through action or negligence)—and I would tend to say they do—it will tend to lie most heavily here, in these sorts of concrete judgements.
  • What is real? How do we know what is real?


    That sentence isn't meant to be a definition of essential properties. It's a response to representationalism.



    We had a thread on this a while back. I think my answer might be a very qualified "yes," as presented through the notions of virtual quantity in Aquinas and similar notions in Platonism and Hegel. Perhaps "more intelligible in itself" would be better.
  • What is real? How do we know what is real?


    And I'm sure you're right that a "crude relativist" could leave a discussion worse off than they found it, by accusing people who aren't relativists of being wrong. I hope we agree that this doesn't characterize a position that anyone could take seriously.

    Lots of people take that sort of view seriously. I see it all the time. We were just in a thread where total anti-realism and relativism re values were being argued, and where "good argument" was framed entirely in terms of persuasion, not truth (i.e. a "good argument" is one that gets people to agree with you—gets you what you want—i.e. a power relation). Virtually every ethics thread on this site has at least one user popping in to add that value is subjective and "objective value" a sort of delusion. You yourself seem to think the skeptic, if not the relativist, has extremely strong arguments here. It's also the view I was brought up with. Relativism is very popular. It's certainly less popular re theoretical reason, but it is hardly a fringe position there either.

    Let me ask, did A.C. Grayling make the "cognitive relativism" thesis discussed here up? Is he objecting to a view no one has advocated for? I posted the thread here because his description of cognitive relativism reminded me of a thesis I had seen presented here before.

    Sam Harris likewise opens up the Moral Landscape by running through a number of troubling encounters he had with extreme relativists at academic conferences, and quoting a number of similar positions. If relativism is a hallucination, it's apparently a group one.



    I may never understand your rhetorical habit of contrasting Position A with a Position B that no one has ever espoused!

    Fictionalism, etc. are popular opinions. Open up a mainstream introductory text on metaphysics, something like the Routledge Contemporary Introduction, and you will find it introduced there as a major position.

    No one has ever espoused these positions? I have personally espoused them :rofl:! That was the default I was brought up with. And I have had plenty of conversations on this site with people espousing extreme forms of nominalism. We get someone (normally a new user) popping in to assert epistemic nihilism every few weeks. I assume they rarely stick around because epistemic nihilism makes philosophy fairly boring.

    Go check out the sorts of questions that get asked of credentialed philosophers on AskPhilosophy. Some people are genuinely confused about how anyone could not be a relativist.

    So, I may have misunderstood what you were getting at, but I hardly think I have hallucinated the existence of relativist positions.

    But in terms of your particular framing, you said the problem was:

    In short, if you start from premises you believe you can show to be foundational, does that commit you to also saying that everything that follows is rationally obligatory?


    To which I said:

    What would the opposite of this be? You start with premises that are foundational and then refuse to affirm what follows from them?

    So:
    P
    P→Q

    But then we affirm:
    ~Q, or refuse to affirm Q.

    Yes, this is what most people would call "irrational." No?

    To which you replied:

    That's why indisputably foundational premises might be abandoned in favor of something closer to epistemic stance voluntarism. This may not be a worry for you, but many philosophers, myself included, are concerned about the consequences of rational obligation which do seem to follow, as you correctly show, from allegedly indisputable premises. The idea that there is only one right way to see the world, and only one view to take about disagreements, seems counter to how philosophy actually proceeds, in practice, and also morally questionable.

    But what you're saying isn't a problem just for "foundational premises," it literally is a problem for affirming any proposition at all. To say:

    P
    P → Q

    Is to say that you believe that the person affirming ~Q is mistaken (or that some further distinction is needed, etc.). Assuming the principle of non-contradiction, it is to say that there is a right way to describe the world and that the right way includes P and Q, not ~P and ~Q. An appeal to "voluntarism" as resolving the issue of disagreement just seems to me like relativism. How is it not?
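    To put the point in a worked form (a minimal Lean sketch of my own, purely to illustrate the inference, not anything anyone in the thread has offered): once P and P → Q are affirmed, Q is forced, and affirming ~Q alongside them is outright contradictory.

        -- Modus ponens: affirming P and P → Q leaves no room to withhold Q.
        example (P Q : Prop) (hp : P) (hpq : P → Q) : Q := hpq hp

        -- Adding ¬Q to the same premises proves False: the position is inconsistent.
        example (P Q : Prop) (hp : P) (hpq : P → Q) (hnq : ¬Q) : False :=
          hnq (hpq hp)

    Nothing here depends on the premises being "foundational"; any affirmed antecedent and conditional generate the same obligation.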

    If you don't want P to imply that the person affirming ~P is mistaken, you need all judgements to be hypothetical. Perhaps that is your solution? I recall you saying that we can reason about values, but only ever generate a "hypothetical ought." All I can say is that this would seem to imply a far-reaching skepticism. Doesn't this imply that we could never say "P is wrong," but only "if you adopted these premises, with these inference rules, P would be wrong?"

    And again, I am not sure how holding to premises non-hypothetically necessarily precludes considering that it might be we ourselves who are in error, or attempting to resolve seeming contradictions through distinctions.

Count Timothy von Icarus
