• What is right and what is wrong and how do we know?


    Are you saying that if truth only depends on us, then we should already know all truths, but since we don’t, truth must exist independently of human practices?Tom Storm

Sure, that would seem to be one consequence of the idea that truth only exists within (is created by/is dependent upon) human practices and language. How could an unspoken, unacknowledged truth exist?

All I’m suggesting is that we interact with our environment and build stories, models and conversations to explain things. What we call truth emerges from a process. This is in constant flux and never reaches capital-T Truth. But many different models will be useful for certain purposes.Tom Storm

    Might I suggest that this seems to be conflating two different things? One idea is fallibilism, the idea that we never know everything, or know something exhaustively. To know something exhaustively is, in a sense, to understand all its causes and its entire context, which is arguably to know everything. Nonetheless, this is not normally taken to mean that one must know absolutely everything to know anything at all.

    But the other idea is that truth is actually generated by and dependent on "stories and conversations," which are themselves driven by "usefulness" (and so too, apparently, any truth about usefulness itself). It's this latter thesis that I am objecting to. The former has a long pedigree. The latter only seems to show up as a position for Plato to make jokes about and then millennia later as an "ironic" post-modern position.

    Saying the world was flat made sense in the context of what we knew at the time. Now it makes sense to say it is a sphere. Today most of us obviously prefer the latter, and it's more justifiable. But where will we be in 1000 years? Will we still think of the world as a material entity, or might we come to see it as a product of consciousness, rather than a physical object? I note also that there is an emerging community of flat earthers and globe deniers. Is Trump one of these? :wink:Tom Storm


    Well, my confusion is that "makes sense in the context of" is not normally taken to be a synonym for "is true." Is the idea that these are the same thing? Perhaps it "made sense" to sacrifice people to make sure the sun didn't disappear in the context of Aztec civilization, but surely it wasn't true that the continued shining of the sun was dependent on cutting victims' hearts out on an altar.

    Yet the idea that our conversations and practices are generative of all truths would suggest just this: that "makes sense to" is synonymous with "is true."

    It’s not that anything could be true just because we say it is. Things in the world still constrain what we can do. Our conversations and practices are built around those constraints. We find some statements “useful” precisely because they help us navigate reality as it seems to behave.Tom Storm

    Ok, did reality truly behave this way before we found it useful to say it is so? Either it did, and there was a truth about these "constraints" that lies prior to, and is, in fact, the true cause of, human practices (i.e., these constraints were actually, really the case, that is, truly the case) or else it was our own sense of "usefulness" that made the constraints truly exist in the first place. Or, did these constraints which shape practice and conversations actually exist, but it wasn't true that they existed (which is an odd thing to say)?

    If practices are necessary for truth, it would seem you cannot posit constraints that lie prior to practices as the cause of those practices without denying the truth of those constraints. For they only become truly existent when declared so in practice.

    And as noted earlier, there are the two other difficulties:

    A. If there is no fact about what is truly useful, then "usefulness" is just whatever appears useful. Cutting out the hearts of sacrificial victims once seemed useful, and so apparently it was, for instance. But since usefulness determines truth, truth is simply determined by appearances.

    B. Since such a position isn't popular, it is false on its own terms.
  • What is right and what is wrong and how do we know?
    We’re never omniscient; we just get better at describing the world in ways that work for us.Tom Storm

    If truth only exists inside the context of human practices—is indeed dependent on them—what truths could we possibly be missing such that we are not omniscient? Wouldn't our (collective) lack of possession of all truths itself show that all truths aren't actually dependent on us and our practices, for how could they exist without our knowing of them if our practices make them true?

    You didn't answer any of the questions directly, but I think they demonstrate the problem here. Did the Earth lack a shape prior to man and his practices? Or did it have a shape but it wasn't true that it had that shape? If man once again began to believe the Earth is flat would it "become flat again?" And if it wasn't round before man decided it was round, in virtue of what did evidence suggesting the Earth was round exist?

    Truth, in that sense, isn’t about matching reality, it’s about what proves useful in our ongoing conversations.Tom Storm

    So, the fact that one cannot raise livestock to live on by mating males to males or sheep to pigs is a result of our "conversations?" But why would such a conversation arise if it wasn't already the case that one cannot mate males to males or sheep to pigs (or their precursor wild ancestors)? Why would people find it "useful" to formulate such truths if they weren't already the case, and why does it seem prima facie ludicrous that it "would be true that sheep and pigs could produce offspring just in case everyone found it 'useful' to affirm this?" This is the problem with the dependence claim.

    More generally, it seems to make 'usefulness' a metaphysical primitive. If there are no facts prior to usefulness, because usefulness is what generates all facts, then in virtue of what are some things deemed useful and not others? Yet surely what seems useful has prior causes, and there are facts about those prior causes. For instance, the reason it never seemed 'useful' to mate boars to wild goats is because it was already true that they cannot produce offspring because they are different species.

    Further, what is actually, truly "useful" on this account seems to be "whatever is currently said/believed to be 'useful'" since there do not seem to be any possible facts about usefulness that are external to current belief and practice. But this straightforwardly collapses any appearance/reality distinction.

    And since the vast majority of people don't believe this (they don't find it useful to affirm this theory as true), wouldn't it be false according to its own definition (a sort of self-refutation through the appeal to current practice)? Rorty famously said "truth is what our peers let us get away with saying," but his own theory (which is similar) wasn't embraced by his peers, unless his "peers" are not other humans, or fellow philosophers, but just a small parochial clique of philosophers, in which case it doesn't seem to have been "true" even in its own loose terms.

    but if the Earth were to die of heat death, that fact would become irrelevant and effectively vanishTom Storm

    If all men died out, it would cease to be true that man ever existed? So likewise, if we carry out a successful genocide and people come to forget about it or don't find it "useful" to bring up, it ceases to have ever occurred?
  • On emergence and consciousness


    Hey, I didn't say it wasn't absurd, insane even. But his argument that it follows from some commonly held positions is not bad. I think the idea is that the good naturalist should be an eliminativist and that the "non-reductive naturalist" is in a sense kidding themselves or else really a dualist because they believe in an irreducible, "strongly emergent" (and thus fundamental) causally efficacious mental activity/substance whose behavior cannot be reduced to or wholly explained by physical laws. The second option isn't really physicalist, it's dualistic, and it's arguably not "naturalist" if it makes intentionality causally fundamental (granted, that term is stretched ridiculously far in different directions).

    It's admirable in a way. In the same way that Luther is admirable when, challenged by Erasmus that God would be evil if He creates creatures without freedom just to consign them to an eternal torment they could never have avoided, Luther simply claims that God's "goodness" is wholly equivocal (i.e., that God is evil in human terms, or as he puts it, that we are evil in God's terms, which is still the same thing). It's following things out to their conclusion at least... (I suppose it's one way of solving the ol' "Problem of Evil" :meh: )
  • On emergence and consciousness


    According to Alex Rosenberg down at Duke, this is simply true of all of us. The reason paradoxes emerge is because the "self" and "intentionality" are akin to ghosts and magic, i.e., superstitious folk explanations. Neurons are not about anything. They don't "think" or "believe" anything. Everything, including neurons, is just elementary particles, and these are not "about" anything either. Hence, it is quite impossible for "thoughts" to be "about" anything else, or for there to be "intentional" states about the future. Five hundred years of scientific progress back this up. Mechanism cannot spontaneously produce aboutness and there is only mechanism. Ergo, intentionality doesn't exist. Ergo, the self doesn't exist!

    He wrote a 400 page book "about" this (titled with his name, which is of course not a reference to a self). It has some good advice, like "solve unhappiness with medication, not introspection, since you are your neurochemistry and some tools work better than others for producing chemical changes," although when he gives this advice it's not wholly clear "who" or what is capable of even being "unhappy." It's a fun book from the parts I read though.

    Now, we could laugh at Rosenberg, but he has some serious philosophical chops and knowledge (much more so than a Dawkins, etc.) and I think he's interesting because his target isn't so much the non-physicalist (a lost cause of course) but the "non-reductive naturalist," who he wants to convert by showing that naturalism entails his point of view.
  • What is right and what is wrong and how do we know?


    , but science only cares about quantifications and causal connections and works entirely within the structure of thought of its paradigms.Constance

    Only on a particularly deflationary view of "science." At any rate, those who embrace such a view, and who stick to a "hard" empiricism and naturalism also often tend towards denying causality. But in such contexts, consciousness itself, reference, intentionality, etc. are every bit as "queer" as "evaluative judgement."

    Might I suggest though that this is an unhelpful starting point for framing a metaphysics of goodness, given that that camp largely tends to deny goodness, or else to put forth some sort of reductive, mechanistic view of it as reducible to "brain states?" I mean, your earlier point about kerosene (or presumably also one's own beloved, or anything else) being reducible to empirical data seems to already have assumed an answer about ethics. Yet it can hardly be one that it is "good" to affirm.



    I should have said fragile rather than vulnerable, perhaps. Pragmatically truth serves a purpose which remains stable while a given truth is of use to us. And you’re right in 1000 years much of present science may well be understood as factually wrong. But this doesn’t mean current scientific understanding isn’t useful now.Tom Storm

    Well, are our current theories wrong now, and just not understood as such? Or are they "true" now and will become false at some point in the future? I assume you have "scientific progress" in mind, but supposing some sort of apocalypse where science reverts towards pre-modern beliefs (i.e., animist spirits as primary causal agents, geocentrism, etc.) would these beliefs "become true again?" Likewise, were geocentrism and the "flatness of Earth" facts when dominant practice, discourse, and belief still led to their affirmation? (And wouldn't "scientific progress" be arbitrary if "truth" did change like this?)

    If the answer to these questions is negative, then I would contend that this shows that truth is not posterior to (i.e., dependent upon) practice, language, culture, etc. (although it may be filtered through them).

    I'm not convinced. Do we need an extra “truth” hovering behind that to explain why knowledge and intelligibility are possible?Tom Storm

    Well, I am not sure about truth "hovering behind" anything, but the notion that truth isn't posterior to practices and beliefs resolves the problems highlighted by the questions above. Was the Earth truly flat when dominant practices and beliefs affirmed it as such? If not, the truth of the Earth's roundness cannot have been dependent upon those practices. Indeed, if the reality (truth) of things just is whatever the dominant practice/culture says they are, how could beliefs ever fail to be "pragmatic" and why would they ever change? We are always omniscient in that case, just so long as we don't disagree.

    The fact that human practices generate and sustain standards of intelligibility is all the explanation we really need.Tom Storm

    From whence this intelligibility? Does man or his practices generate it ex nihilo?

    Facts only have meaning in the context of a set of practices. Without us facts effectively vanish.Tom Storm

    Well, here's your answer for why you need a truth that isn't dependent on man, otherwise you have to affirm this sort of thing. So, did the Earth lack a shape before man, and then man brought the determinate shape of the Earth into being by "practicing?" Or did it have a shape when the dinosaurs walked the Earth, but it wasn't true that it had a shape, and it wasn't a fact that it had any particular shape?

    Even accepting your claim (which seems extreme), we face the problem of why practices and beliefs should be one way and not any other. Since there are no facts outside of practice and language, it follows that there can be no prior facts that determine practice and language themselves. And, since there are no facts outside of current belief and practice, no facts can explain how or why beliefs and practices change and evolve.

    All that aside, it seems a little grandiose to me, the idea that man makes everything what it is through his speech, judgements, etc. That sort of expansive constructivism or linguistic idealism (different varieties of the same thesis I suppose) seems to me to just repeat Genesis 1, only with God cut out and man pasted in God's place. You know: "In the beginning the language community hovered over the formless deep and called forth trees and stars..." And to the extent that there is no true actuality prior to man's practices, he seems to be very much the voluntarist God of the Reformers, with wholly inscrutable origins and reason for "creating."

    I am quite sympathetic to the problem that, if truth is the intellect's grasp of being, there can hardly be "mind-independent truths," but the solution of making truth dependent on man leads to some bizarre conclusions, especially if man is considered to be contingent.
  • What is right and what is wrong and how do we know?


    I won't deny that some people use "inter-subjective" to mean essentially the same thing as "objective" once meant. This seems to me to be an unwillingness to argue against the conflation of "objectivity" with some sort of Kantian "noumenal."

    Facts about Germany’s surrender or the date of a declaration remain fixed because our institutions and habits of checking evidence are stable. If those forms of life were gone, the way we talk about truth would likely be gone too, but that does not make present truths vulnerable. It only means there is no view from nowhere that holds them beyond these kinds of practices.Tom Storm


    How so? Given your description, if our institutions, habits of checking evidence, and systems of intelligibility change—which they do—it seems like the facts change, and so it absolutely could cease to be true that Germany surrendered during WWII, no? You say I am confusing inter-subjectivity with instability, but then seem to present an understanding about the truth of past events that makes such truths unstable. That is, current systems and practices become prior to past history.

    With historical facts, intersubjectivity is essentially grounded in agreed methods for checking evidence.

    So if man goes extinct, are there no facts about human history?

    This goes back to something I mentioned earlier about a particular metaphysics of language and appearances where "Socrates must 'step outside his humanity' to make the universal pronouncement that 'all men are mortal,'" or where one must "step outside history, culture, and language" to say anything about what is other or prior to these. This is the most common metaphysical underpinning for historical anti-realism I am aware of. I also think it's somewhat self-refuting since it tends to rely on a metaphysical presumption that truths are bounded by "language games," but then (often dogmatically) absolutizes this particular metaphysics of language and appearances to make this apparently universal claim.

    I guess I would just repeat what I've already said about the difficulty of containing reason within "norms and practices" or "paradigms" and their difficulties:

    To address your earlier question about the limits of reason, I would point out that the claim that reason cannot adjudicate between paradigms or world-views is, of course, a gnostic claim. One presumably knows this if one claims it to be so. Yet to have recognized a boundary is to already have stepped over it.

    Now, if we claim that reason is in a sense isolated within "world-views and paradigms," we face the odd situation where some world-views and paradigms resolutely deny our claim. They instead claim that knowing involves ecstasis, it is transcendent, and always related to the whole, and so without limit—already with the whole and beyond any limit. And such views have quite a long history.

    Our difficulty is that, if reason just is "reason within a paradigm," then it seems that this view of reason cannot be so limited, for it denies this limit and it is an authority on itself. Our criticism that this other paradigm errs would seem to be limited to our own paradigm.

    The positive gnostic claim, to have grokked past the limits of intelligibility and seen the end of reason from the other side, faces an additional challenge here if we hold to the assumption that any such universal claim must be "from nowhere," and itself issued from "outside any paradigm," since it is also generally being claimed that precisely this sort of "stepping outside" is impossible. But perhaps this is simply a misguided assumption. After all, one need not "step out of one's humanity" to know that "all men are mortal." One can know this about all men while still always being a particular man.

    So, those are my initial thoughts on the idea that reason cannot adjudicate between paradigms. It seems this must remain true only for some paradigms, and one might suppose that being limited in this way is itself a deficiency. After all, what is left once one gives up totally on reason as an adjudicator? It would seem to me that all that remains is power struggles (and indeed, some thinkers go explicitly in this direction). Further, the ability to selectively decide that reason ceases to apply in some cases seems obviously prone to abuse (real world examples abound)—in a word, it's misology.

    But none of this requires stepping outside paradigms, except in the sense that reason may draw us outside our paradigms (and indeed this happens, MacIntyre—RIP—was drawn from Marxism to Thomism). To know something new is to change, to have gone beyond what one already was. That's Plato's whole point about the authority of the rational part of the soul. The desire for truth and goodness leads beyond the given of current belief and desire, and hence beyond our finitude.

    I'll just add that the absolute, to be truly absolute, cannot be "objective" reality as set over and against appearances, but must encompass the whole of reality and all appearances. Appearances are moments in the whole, and are revelatory of the whole. Appearances are then not a sort of barrier between the knower and known, but the going out of the known to the knower—and because all knowing is also in some sense becoming—the ecstasis of the knower, their going out beyond what they already are in union with the known.
    Count Timothy von Icarus





    Now of course, we might allow that all human knowledge is always filtered through culture, language, history, etc. (as well as human nature), but this does not require that these are the ground of, or prior to, truth itself. For if there were no truth (no potential for knowledge, no intelligibility) the former couldn't exist in the first place.
  • What is right and what is wrong and how do we know?


    It's common for moral objectivists to trot out low hanging fruit such as murdering babies when attempting to demonstrate their worldview, since it has a >99% agreement rate among "normal" folk. But ignore topics like welfare assistance which has a 40/60 split.LuckyR

    Well, it's also common for anti-realists, even professional analytic philosophers, to assume that the realist must be committed to the idea that "most people's judgements re value must be mostly true, most of the time." But such a view is in fact a development of the "sentimental" anti-realist theories of the Enlightenment that want to ground value in some sort of "common denominator" in sentiment.

    Obviously, the vast majority of earlier Western and Eastern thought radically disagrees. They generally argue that man is fundamentally deluded about value. Epictetus claims that most masters are slaves (whereas a slave might be truly free) because the majority suffers from vice and is deluded about value and freedom. The difference between apparent and real goods is so extreme that many who have tried to point this out to people have died grisly deaths. Socrates was executed. As Plato puts it in the Allegory of the Cave, when the philosopher descends back into the cave to try to guide others out he will likely be beaten. So too, Christ (God himself!) was crucified for his moral and spiritual teachings, and Saint James, Saint Peter, and Saint Paul followed him in execution. Boethius lost everything and died for doing the right thing. Dante ended up writing the Commedia under a sentence of being burnt to death. Nor is the East particularly different in the fundamental theme here, man's inability to determine what is truly desirable.

    This also highlights a pretty key difference, which is that the norm in pre-modern philosophy is that the saint or sage is the measure of value, as opposed to the "common sentiment." Buddha, not the masses, knows the truth about suffering, etc.

    I'm a little unclear what it would mean for something like Germany to not be objective. Does this mean it is not an objective fact that Germany surrendered in WWII? Is it not an objective fact that the Declaration of Independence was signed on July 4th, 1776? Are there objective rules to chess? What about objective truths of arithmetic (which is often considered a "game" like chess)?

    If they are "intersubjective" does this mean that if all relevant subjectivity changes, the truth changes too? So if in the far future man's understanding of history becomes radically confused and people come to think that: "Adolf Hitler was the first president of the United States," and all surviving evidence somehow points in this direction, does it become true? I would assert that this is ridiculous. Adolf Hitler cannot "become" the first president of the USA anymore than George Washington can cease to have been its first president, regardless of what man currently thinks. Hence, I'd consider that fact to be "objective" in the sense that it is not dependent on current belief and opinion.

    Now, sometimes "objective" gets used more as a synonym for "noumenal," or as "holding in a view from nowhere," but I find this unhelpful. Arguably, on this definition, no potential knowledge is objective, and so it fails to be a useful category (or fails to make a useful distinction). If knowledge is "the intellect's grasp of being," and truth "being qua knowable," then the idea of "mind-independent" truths is a contradiction in terms. Yet it would also seem to be nonsense to say that something "is the case" but that it also is "not true that it is the case."

    It does not seem there can be "being qua knowable" with nothing to know any more than there can be good—"being qua desirable"—with nothing to desire. However, it seems that there can certainly be mistaken beliefs and opinions about what is true and desirable.
  • Mental to mental causation is not possible if mental events are related
    Wouldn't B exist potentially before it is actual? So A doesn't need to contain B, it just must contain what brings B from potentiality into actuality. We might say that A contains B virtually. But in reality, causes are never isolated. So our analysis is really about many things interacting together so as to bring B from potential to actual.

    So for instance, before I studied Arabic I had the potential to know Arabic and there were things in the world capable of actualizing that potential (which I was capable of seeking out). That potential was actualized (state B). Having A contain the potential for B doesn't change A and make A something other than A. A could not have been the "cause" of B if it wasn't capable of bringing B from potency to act. By definition, anything that can be caused has the potential to be caused earlier.
  • References for discussion of mental-to-mental causation?
    I also thought David Bentley Hart's "All Things Are Full of Gods" was pretty good on this topic tooCount Timothy von Icarus

    Reminds me:

    Psyche: ...Mechanical processes are series of brute events, determined by purely physical causes, obedient to impersonal laws, whereas thinking is a process determined by symbolic associations and rational implications. Yes, perhaps the electrical events in the neurology of the brain can serve as vehicles of transcription for thoughts; but they can’t be the same things as the semeiotic and logical contents of those thoughts. The firing of one neuron might induce another neuron to fire, which leads to another firing in turn, as a result of physical necessity, but certainly not as the result of logical necessity. The strictly consecutive structure of a rational deduction— that simple equation, that elementary syllogism— simply isn’t, and can’t be reduced to, a series of biochemical contingencies, and the conceptual connections between a premise and a conclusion can’t be the same thing—or follow the same “causal” path—as the organic connections of cerebral neurology. One can’t be mapped onto the other. Nor, by the same token, should the semantics and syntax of reasoning be able to direct the flow of physical causes and effects in the brain. Not, at any rate, if anything like the supposed “causal closure of the physical” is true. So, really, the syllogism as an event in the brain should, by all rights, be quite impossible. And, while we’re at it, I might note that consecutive reasoning is irreducibly teleological: one thought doesn’t physically cause its sequel; rather, the sequence is guided by a kind of inherent futurity in reasoning— the will of the mind to find a rational resolution to a train of premises and conclusions— that elicits that sequel from its predecessor. Teleology is intrinsic to reasoning and yet repugnant to mechanism.

    Oh, really, don’t you see the problem here, Phaesty? There can’t be both a complete neurophysiological account of a rational mental act and also a complete account in terms of semeiotic content and logical intentionality; and yet physicalism absolutely requires the former while every feat of reasoning consists entirely in the latter. The predicament becomes all the more utterly absurd the more one contemplates it. If, for instance, you seem to arrive at a particular belief as a result of a deductive argument— say, the belief that Socrates is mortal— physicalist orthodoxy obliges you to say that that belief is actually only a neurological event, mindlessly occasioned by some other neurological event. On the physicalist view of things, no one has ever really come to believe anything based on reasons; and yet the experience of reaching a conclusion tells us the opposite.
  • Evidence of Consciousness Surviving the Body


    The Gnostics called NPCs "hylics" (the spiritual seed of Cain). Those who were conscious but dragged down into worldly concerns and their own bodies were "psychics" (the seed of Abel). Those who had glimpsed the realities outside materiality were "pneumatics" (the seed of Seth). Granted, some of them seemed to believe that people reincarnated until everyone eventually attained the gnosis and escaped the world-prison/simulation of Yaldabaoth, the deluded, evil creator of the material cosmos.
  • Arguments From Underdetermination and the Realist Response


    That doesn’t sound like moral anti-realism to me.Ludwig V

    It strikes me as a sop thrown to common moral sentiment in the context of his broader philosophy. Of course Hume doesn't deny the appearance of evaluative facts (sentiment), or that they seem important to us (indeed, nothing else could be "important"). Yet what his starting point has led to is a move that privatizes and subjectivizes value such that the distinction between reality and appearances collapses. On this view, what is "truly desirable" is just whatever happens to be desired. There are only appearances, so appearances just are the reality (for the person experiencing them). It leads to a sort of Protagorean relativism that he tries to paper over with an appeal to "common sentiment." But all desire ultimately bottoms out in inscrutable, irrational, and irreducibly private impulse. And if one differs from "common sentiment" there are no "reasons" to go along with it.

    Obviously, the older ethics grounded in final causes also starts from the good as what is desired/sought. However, it says that there is a truth about what is actually "most desirable." If someone's highest desire (what they currently believe and feel, i.e., appearances) is to be a wastrel who drinks all day and scrounges off their parents, the older view would deny that this is what is actually "most desirable" given a true understanding of the good and healthy appetites. Hume has eliminated this distinction. He can appeal to the "general point of view," yet this is really just an appeal to "whatever people currently say is good." The "general point of view" in the context of "Brave New World" sees that dystopia as eminently desirable, and so apparently it is. Slavery, child brides, etc. would be acceptable so long as general sentiment holds this to be so. Hume cannot claim that any socio-historical "general point of view" is better than any others without smuggling reason back into the picture (nor does he seem to have a strong argument for why this view should have any hold on the egoist).

    This is what makes it an "anti-realism." Our desires (appearances) become the measure of goodness. Hume's change here is analogous to making "whatever we happen to believe" the measure of truth (for each individual), which is exactly what Protagoras does for truth tout court (as opposed to only practical reason).

    Now, we might ask, on this view, are things we regret "bad?" I think an honest reading would have it that a late-night shot of tequila is "good for us when we want to drink it" and only becomes "bad" when we wake up hungover. Cheating is good when one does it, and becomes bad if we later regret it, etc. We may, of course, desire things based on how we think we will feel in the future (such a view doesn't preclude "thinking ahead"). However, if goodness just is sentiment then it must shift as our sentiment does. Otherwise, it would be the case that extra-subjective facts about what we will desire determines what is good (and so not sentiment and appetite, but facts as they relate to sentiment and appetite). That's a small but crucial distinction. We either go with the horn that makes "good" just "whatever is currently desired," or we start allowing goodness to rest in fact-related causes of sentiment/appetite (which presumably relate to "what man is" and "what rational creatures are," i.e. telos).

    I’ve never understood metaphysics and I don’t know enough about the doctrine to dissect this. But it looks as if metaphysics and logic reflect each other here and that someone who accepts the doctrine of transcendentals agrees that there is a distinction that is at least very similar to the modern fact/value distinction.Ludwig V

    They are pretty similar. I'd say the latter view is just the old one with the reality/appearances distinction collapsed, often paired with a denial that there can be any consistent relationships between how the world is and how man is, and what any individual man will find most fulfilling (i.e., a denial of human nature). I tend to find denials of human nature farcical, because they invariably have to be walked back with so many caveats as to simply reintroduce the idea of a nature in some modified form. It is clear that man is a certain sort of thing. We do not expect that our children might someday soon spin themselves into cocoons and emerge weeks later with wings, because this is not the sort of thing man does. We know that we will fall if we leap off a precipice, and we understand that we are at no risk of floating away into the sky when we step outdoors. Things possess stable natures; what they are determines how they interact with everything else. That doesn't wash out individuality; it allows it to have some sort of ordering so that it isn't arbitrary. But again, theology lurks in the background here. Man was seen as in the image of God, and a view of divine liberty had emerged where liberty is most fully revealed in inscrutable arbitrariness.






    OK. Would you mind explaining what the arguments are that you consider to be quite strong? I’m intrigued by the idea of appetites associated with reason.

    The desire to know truth for its own sake is the most obvious. The desire for truth over falsity, which is a desire and so evaluative, is IMO a prerequisite for any rationality at all. Elsewise, we ought only affirm what we otherwise feel like affirming. A "good argument" and "good evidence" would otherwise just mean "arguments and evidence that affirm what I am already predisposed (by whatever irrational sentiments I just so happen to have) to affirm." To have "good argument," "good reasoning," and "good evidence" ordered to the end of truth/knowledge, and not only accidentally related to it, presupposes the appetite for truth qua truth. Otherwise, we ought only affirm truth over falsity whenever it just so happens to fulfill or lead towards an unrelated, irrational desire we happen to possess.

    So too for practical reason:

    Questioning and what’s really good

    In the case of just and beautiful things, many people are content with what are believed to be so, even if they aren’t really so. [But] nobody is satisfied to acquire things that are merely believed to be good . . . everyone wants the things that really are good and disdains mere belief here. (Republic 505d)

    Here Socrates is saying that regarding the things, experiences, relationships, and so forth, that we get for ourselves, we want to be sure that they really are good, rather than just being what we, or other people, think is good. We don’t want to live in a “fool’s paradise,” thinking that we’re experiencing what’s really good, when in fact it isn’t really good.

    Even if we could be sure that we would remain in this fool’s paradise for our entire lives, and never find out that we had been mistaken, we hate the thought that that might be the case—that what we take to be really good might not really be good. If that were the case, we feel, our lives would have been wasted, whether or not we ever found out that they were wasted. We can joke about how other people are “blissfully ignorant,” but I have yet to meet a person who says that she would choose to have less information about what’s really good, if by doing so she could be sure of getting lots of what she currently thinks is good. The notion of choice, itself, seems to be oriented toward finding out (if possible) what’s really good, rather than just being guided by one’s current desires or one’s current opinions about what’s good.

    Robert M. Wallace - Philosophical Mysticism in Plato, Hegel, and the Present

    That is, it seems implausible that the nihilist/anti-realist would want to be ignorant of the Good if it truly exists. To be sure, it is often the case that we are unhappy about what we discover to be true. In some cases, we might even prefer not to be informed about the details of certain events. However, it does not seem plausible that a person would prefer to be deluded when it comes to the fundamental nature of the world and their relation to it. But this is, of course, indicative of an appetite to know what is best itself, a sort of open-ended, rational appetite for knowledge and goodness as such.

    It is also precisely this open-ended desire that makes it possible for us to transcend current beliefs and desires, to always question them. Likewise, it is what allows for any sort of coherent second-order volitions, such that we desire to have or not have certain other desires. Such a capacity is essential to any sort of rational freedom, since otherwise we would just be mechanically pursuing whatever desires we just so happen to have started out with (with reason as a wholly instrumental computational tool that simply tries to find its way towards these predetermined ends). Any sort of rational freedom requires an ability to judge which desires are worthy of pursuing, and also which should be uprooted (this is the whole idea of virtue being the development of habits of acting rightly, but also of desiring to act rightly, such that one enjoys what one does).

    Now, the fact that a truly self-determining freedom requires rational appetites doesn't prove that such a thing must exist. Perhaps we lack it. However, the desire for freedom itself (freedom as good in itself) does suggest exactly this. No doubt, some men deny that they have any appetite for truth, what is "truly good," or freedom. Yet such an appetite surely exists in many. It's precisely what has motivated men across the ages to do things like give up all their wealth, forswear sex and children, and retreat into the desert to live as Sufi ascetics, or what motivates Marxist atheists who deny an afterlife to embrace suffering and anonymous deaths in order to further the struggle towards "what is truly best."
  • The Concept of 'God': What Does it Mean and, Does it Matter?


    Are you familiar with any of the physicists who suggest that information is ontologically basic and that matter and energy emerge from it? Sometimes it is dependent, but sometimes it is put on par with energy, or even prior to it, which is a pretty abstruse conversation. Suffice to say, I am not sure how matter could exist without the other, so if it is prior to it, it must be a sort of logical or ontological priority (unless one holds to the idea that information is a sui generis product of mind and perspective, which some do).

    If mass/energy is the potential to receive form, it's nothing without some informing determinacy. But we have no (observable) potential as such, only a certain sort of potentiality in fields, even "void" being filled with all sorts of activity (which seems, by definition, to include information). Indeed, nothing could be measurable without information (one of the better arguments for why it cannot be a sort of "illusion" produced by mind; the "difference that makes a difference" seems to be prior).




    You seem to be suggesting that our memories could be copied to another form and re-attached to our souls after death.

    If memories and persons were "nothing but information" this would be the case. It follows from many versions of computational theory of mind. We could even be reinstantiated in a clever information processing system made up of paper towel rolls and rubber bands, if it had the same data structure. I actually find this a bit absurd, but it's not an unpopular idea. On a classical view, Laplace's Demon (or any Omega Point) should be able to resurrect us at will in any media of sufficient complexity, although the media we are formed from would have nothing to do with our experience of ourselves and our environment (a knock against such theories perhaps, since it suggests a sort of skepticism, and a greater ubiquity of Boltzmann Brains or stray Boltzmann thoughts flitting through the aether under dual aspect theories).

    Sure, this is logically possible, but it's an ad hoc hypothesis that lacks supporting evidence. If this is something that occurs, I wonder why the deity bothers at all with brain-storage of memories, and why she fails to help out dementia patients with access to this resource.

    I am not convinced that the exact substrate for producing minds is not important. Note that, while Augustine's experiences might be passed around in innumerable ways, they are only experienced when human minds interact with these records.

    I won't speculate about God except to say that the existence of a regular third person substrate through which we can affect one another, and yet not simply merge into one another, seems to be a prerequisite for us to be individuals at all.
  • The End of the Western Metadiscourse?


    I consider liberalism not as a set of ideals, striving for which we will certainly build paradise, but as a system for searching for a certain point of compromise of aspirations. From the moment of the formulation of the ideas of liberalism until today, it has coped well with challenges in the long term. And, it must be said, this is not some great invention of mankind, but a tracing of the structure of nature: It is not the strongest/dexterous/fastest that survives, but the most adaptable. Authoritarianism is bad (not to mention totalitarianism) not because it violates human rights, but because it is less flexible than liberalism in the long term. As a temporary solution, authoritarianism is very good and much more effective than liberalism (provided that it is sovereign authoritarianism)

    At the same time, if we constitute an ideal, instead of constantly searching for points of compromise and adaptability, we will get a great brake that will lead to decline.

    This is where, in my opinion, today's problem arises: Liberalism has ceased to moderately seek this compromise, has ceased to adapt sensitively, its strengths have taken on some extreme form, and the ideas themselves have become dogmatized, instead of working dynamically.
    Astorre


    Well, liberalism has had its evangelical moments. Revolutionary France initially began by setting up "sister republics" everywhere it could. In the early to mid-20th century Superman could, with no sense of irony, be a hero fighting for "truth, justice, and the American way." Even the early phase of neo-liberalism often framed its aims in fairly utopian terms, with the whole of humanity being lifted out of want and oppression to participate in a techno-optimist vision where things would be "getting better all the time."

    I'd agree though that a key theoretical component of liberalism has involved eschewing any strong interpretation of the human good or human purpose, and privatizing all appeals to transcendent ends (booting the relevant religious and philosophical ideas from politics). It also tends to dissolve most traditional forms of identity (the capitalist component does a lot of the work here). I am just saying that the theory on this front often wasn't put into practice. Also, when most of the population was Christian there was a strong cultural pull towards a particular conception of ultimate ends and value that helped align the public, but that is no longer the case.

    I'd argue that what we're seeing now though is that liberalism, without these deviations, isn't actually "adaptive." Civilizations require the pursuit of arduous goods. They require heroism and self-sacrifice, and a capacity to resist serious temptations (since liberalism is always prone to slipping towards oligarchy or dictatorship). Sans any strong ordering ends, any vision of what we are defending or what liberalism should "adapt towards," why don't self-interested utility maximizers (which is what liberalism tells us we are) with power take advantage of their ability to direct the system towards their own ends?

    Fukuyama, a champion of liberalism, allows that its anthropology is too thin. It focuses almost wholly on the goods of epithumia—bodily pleasure, safety (and so wealth), etc. He tries to reintroduce some notion of thymos, of the desire for recognition, honor, etc., something people used to get from community membership and religion (things liberalism tends to erode). People have tended to focus on his "End of History" thesis, but it's the "Last Man" thesis that has proven most prescient. This is the idea that people will rebel against being reduced to bovine worker/consumers, at becoming Nietzsche's Last Men, and so seek to destroy the very system they live under in a quest for recognition. This phenomenon is certainly hard at work on the far right.

    For some reason Fukuyama never gets around to logos, the desire for truth, and to do what one thinks is "truly best" (morally, etc.). The thread in liberalism you outline seems to actively undermine these desires, or at least their consistent ordering, but they are arguably the most powerful, being the desires that lead monks to renounce all wealth and sex, or which lead atheist Marxist revolutionaries to nonetheless embrace painful, anonymous deaths in pursuit of what is "truly best." When these appetites (thymos and logos) have no direction, and no positive education, they don't cease to exist. Often, they end up turning against liberalism. The sort of procedural, safety-focused politics of Rawls, etc. might appeal to the sort of people who become academics, but they probably should be tested at martial arts gyms and churches, where I would imagine they might not do as well.

    While "Why Nations Fail" is a flawed book, it does also do a good job laying out why it is in elite's self-interest (economically construed) to subvert liberalism, even at the cost of lowering their own national military and economic power. Given this incentive structure, a society with no bias towards any particular final end, which only justifies liberalism in terms of its ability to allow "the most people to reach whatever ends they happen to find attractive" would seem to be inherently unstable because liberalism simply doesn't make sense for those who have the power to subvert and take control of it.

    (Just one data point on thymos might be the very low percentage of people in the wealthiest liberal states who say they would be willing to defend their country if it was attacked when compared to the rest of the world.)
  • The End of the Western Metadiscourse?


    He famously claimed that with the rise of liberal democracy around the world, we had reached the end of history, a Hegelian-like pinnacle of political and philosophical organization. But the recent trends away from liberalism and toward various forms of autocracy and totalitarianism around the world would seem to argue against the idea that history has been moving in the one direction Fukuyama described.

    Well, the core of Fukuyama's thesis isn't that every country will soon become a liberal democracy, nor that no liberal democracies will cease to be so, but that no ideological challenge to liberal democracy will emerge to rival liberalism for legitimacy (the way communism, fascism, and reactionary monarchism once did). So far, I would say he has been proven right on this over three and a half decades.

    The liberal backsliding since 2008 isn't actually out of line with his core thesis, although it does run against the general optimism of the 1989 article and 1992 book. Illiberal leaders in previously liberal countries do not justify their authoritarianism or interventions in opposition to liberalism. In general, they position themselves as saviors of liberalism. On both the right and the left, the need for norm breaking interventions is justified in terms of the need to secure liberalism against opposing "illiberal forces." That is certainly how Trump positions himself for instance. He is saving liberal democracy from illiberal "woke mobs" and "elites" and his economic interventions aren't positioned against free enterprise and capitalism per se, but against bad state actors who are "ripping us off" by not abiding by true free market principles. He sells his policies in liberal terms.

    Likewise, dictators across the world still feel the need to have rump legislatures, to hold votes on reforms, etc. They still feel the need to hold sham elections. Even Assad did this during the civil war. They still go by "president" or "prime minister" instead of "king," "emperor," "emir" or "shah." When they attack the West, they normally do so while tacitly accepting the values of liberalism. They deride the West as not being truly democratic, as having become an oligarchy, or just as often, as having fallen into a sort of technocratic socialism. Such criticisms accept liberal values however. When they attack "Western values" such as LGBT issues, they do so using the same language used by conservative liberals within the West, speaking to "freedom to differ" and "freedom of religion" or "freedom for traditions."

    Yet they decidedly do not recommend some sort of alternative ideology the way the Soviet Union did. China occasionally positions itself as a sort of alternative, but not in any coherent way. They aren't evangelical about their form of state-capitalism, trying to force it on their allies, or trying to boost it internationally as a popular movement.

    The one standout counter-example is Islamism. Obviously, the Iranian supreme leader does have an alternative title (although many of the trappings of liberalism are still embraced). The caliphs of the Islamic State are a more obvious counterexample. Yet this isn't really a good counterexample because such a movement can never have global appeal (and seems potentially in decline).

    I think it is worth pointing this out because, to 's question, it makes it incredibly hard to predict what will come next. There is no coherent replacement ideology to rival neoliberalism, just neoliberalism destroying itself due to its own internal contradictions. When states fall away from liberalism, it isn't towards some new ideology, but merely towards a sort of degraded, more oligarchic liberalism. I don't think this trend will reverse though because the issues that drive it are endemic to liberalism itself and the solutions for liberalism's problems proposed within the West are almost always "more liberalism!" (just of conservative or progressive varieties). Maybe China represents an alternative model with ideological appeal. It is not clear to me that Russia does. Russia seems more like the far end of degraded liberalism, a liberal constitution with an oligarchic dictatorship in practice.

    I am also a bit skeptical of any real multi-polar reversal any time soon. Iran and its "Axis of Resistance's" military embarrassment and Russia's performance in Ukraine suggest that no real military rebalance has taken place. It would be foolish to assume that China's military reforms and technological efforts would prove quite as inept and ineffective, but it also wouldn't be very surprising if they were (e.g. scandals like their missiles being filled with water instead of fuel, etc.). Unless China can get into the export game with comparable hardware and promises of assistance, there won't be a security rebalance.
  • Arguments From Underdetermination and the Realist Response


    I'll respond to the rest later but I wanted to point out a potential miscommunication:

    1) No. 2) Because “good” is an evaluation and “x is y” is a description.

    Right, I am aware of the distinction. But it isn't a "logical distinction" in the sense that it is something that is discovered about how logic works or syntax works. It is a distinction made based on a certain metaphysical theory (generally, anti-realism). It's akin to how emotivism claims that the logical function of "good" is equivalent to "hooray for." The distinction follows from the metaphysics in the same way that someone who accepts the Doctrine of Transcendentals will acknowledge a distinctive logical function for One, Good, and True and their derivatives (Something, Thing, Beautiful, etc.), in that they are transcategorical and that they are conceptual/logical (as opposed to real) distinctions that add nothing to Being but which are coextensive with it.

    Now, certainly that distinction is very helpful, if you accept the metaphysics. But it's the metaphysics driving the recognition of the logical distinction (in each case).

    As noted earlier, I don't think "good" always indicates or approves of an action. On something like an Aristotelian account, the goodness of actions is always parasitic. Goodness is primarily descriptive there and grounded in final causes, and particularly in beings (organisms). Even in common language today though, "good" often seems to be used in strictly descriptive ways. Nor do I think that what makes a claim "evaluative" is generally clear.

    "That's hot" can be a claim recommending action. It can also be merely descriptive. "That's too big," is often a claim recommending action, but it can also be descriptive. Context determines if it is taken to recommend action or not. But more to the point, no one thinks that because "that's too big," or "that will break it" might recommend action, that they are not also, and often simultaneously fact claims and descriptions. Their being evaluative in one context doesn't remove their descriptive nature.

    Anyhow, perhaps I interpreted this wrong, but you seemed to be supporting the general fact/value distinction in light of the logical distinction. If so, I would say this argument is circular. It would be like arguing for moral realism on the grounds that the Doctrine of Transcendentals makes a different logical distinction re "Good."

    Note that the move to subjectivize value here could just as well be made for all descriptions. We could reinterpret all descriptive claims to the effect that "x is y" as "I believe/feel that x is y" (indeed, some have recommended this). This would, IMO, do violence to natural language in the same way that it does violence to natural language to assume that if "y = good" a claim always ceases to be a fact claim, or to assume that if "y = good" the *real* meaning is "hooray for x." Some people make a differentiation between first person declarative and third person informational statements. I find this distinction more useful, but it cuts across claims of value and "facts" and does not presuppose the two are exclusive.

    I don’t think that “x is y”, of itself, suggests that we should affirm it or should not affirm it, except in specific contexts

    Sure, there isn't always assertoric force. But in every language I am aware of, assertoric force is the default. In any case, there obviously often is assertoric force. If there is, then "x is y" is equivalent with "it is true that x is y." Now, we might not believe that "x is y," but surely if it is really true we ought to affirm it, right?

    Although, I suppose it's true that for the values anti-realist "y is true" never implies "affirm y," and the move to affirm y must always come from irrational, inchoate sentiment. I am not convinced that this doesn't result in nihilism and misology if taken to its logical conclusion however. Whereas the counter to the effect that we have a "sentimental" desire for truth qua truth ("all men desire to know") is just reintroducing the rational appetites with the adjective "sentimental" tacked on.

    Hume doesn’t disagree with you.


    I might have been unclear. I am referring to the section in Book III where he says that we never sense (touch, smell, see, etc.) vice or badness.

    "Take any action allow’d to be vicious: Wilful murder, for instance. Examine it in all lights, and see if you can find that matter of fact, or real existence, which you call vice. … The vice entirely escapes you, as long as you consider the object."


    This is very similar to his claim that we never sense causes. But, prima facie, a valid response is to say that when one sees a ball smash a window one has just seen a ball cause a window to shatter. I am not sure what Hume expects a cause or badness to "look like" or "feel like" or whatever. We certainly do directly experience the appetites (pain, revulsion/nausea, beauty, craving), and on many accounts of goodness it is being qua desirable. Under that view, this division seems bizarre. The appetites are part of sensuous experience and relate to what is sensed directly and immediately. I am aware that Hume's empiricism denies this by claiming all of these are "impressions of reflection," and thus, by definition, internal, private, and subjective. But this is another case of an argument from mere stipulation. In reality, touch is continuous with pain; we don't cross a threshold that neatly divides them, for instance.

    Obviously, Hume is writing to his own context. In that context, morality tended to be thought of in terms of rules (which shows up in all the examples he uses). When I say Hume is being influenced by theology, I don't mean that Hume adopts this framing because of his personal theology. He might have been an atheist. I mean that he is going with the ideas dominant in the Reformed/Calvinist context he lived in. I bring it up because if one rejects that source for the framing, one might question whether it is worth sticking with its categories. The idea that "good" involves something like "thou shalt," or that "ought" primarily denotes duty or obligation (or even action), is a product of that context.

    Likewise, one need not suppose that Hume rejects final and formal causality on theological grounds to accept that he is writing in a context where final and formal causality have already been excised from "scientific/philosophical discourse" primarily on theological grounds. Obviously, atheists sometimes defend mechanistic causality with religious zeal, but they often do so because they see it as a historical product of science. My only point is that it wasn't. It's a genealogical argument. Of course mechanism might be justified on other grounds; this only attacks the claim that it is primarily "scientific." There weren't experiments run to "rule out final causes" (indeed, biology, medicine, and the social sciences all still rely on them); it was a theological/philosophical position to exclude them from consideration. This sort of thing still comes up in stuff like the Libet experiments.
  • Idealism in Context


    Is this right? I don't know Scholastic philosophy very deeply, but I thought that the concept of intelligibility meant that we can know what is real in the physical world as well.J

    Yes, although "physical" retains its original meaning here in that it is "being qua changing" or "mobile being." This would contain the subject matter of modern physics but is more expansive. A big difference is that it includes final causes and the mutable's relationship to the immutable, the material to the intelligible. The other big difference would be a denial that everything can be reduced to mathematics (mathematical physics) or claims to the "unreality" of non-mathematical properties.

    But the basic idea of realism is there. A tree is itself a being. We can know it. A rock has less unity but it's still knowable. Things aren't exhaustively knowable because this would entail knowledge of their entire context and their causes. As Saint Maximus puts it: "For all created things are defined, in their essence and in their way of developing, by their own logoi and by the logoi of the beings that provide their external context. Through these logoi they find their defining limits." To know anything in total one must know everything, which is impossible. This leads to a limited sort of fallibilism.

    "Knowledge" of the physical is more expansive than in modern forms of naturalism however. Experiment and observation are important (key elements of the scientific method were developed across this period) but we don't rule out the phenomenological grasp of things and the act of understanding as we might under empiricism.

    And lastly, there is no mental/physical dualism. Physicalism is supposed to be a "monism," but in all its forms it struggles with an unresolved dualism. The theory of signs dominant in scholasticism doesn't set up this dichotomy in the same way, though.

    and ask him whether, when we see an apple, we are seeing something that is really there, more or less as presented to our senses, wouldn't he say yesJ

    Yes, the actuality of an apple is in the intervening media, in the senses, and in the intellect, where "in" is used more metaphorically.

    But to 's point, the idea of "mind-independent" truths or "mind-independent" values would have been dismissed as nonsense. True and Good are Transcendentals. They don't add anything to being. There is not a thing and then its truth, as if the truth is some sort of additional thing sitting outside the being of the thing (as would be the case in early analytic theories of truth as primarily relating to propositions as abstract objects). True and Good are logical/conceptual distinctions, not real ones. Truth is being as apprehended by the intellect, the mind's grasp of being. Goodness is being qua desirable, from the perspective of appetite (love in the highest sense). A "mind-independent" truth or value would be a contradiction in terms. So too would a "mind-independent" being.

    To return to Berkeley, I think the loss of these notions (or at least of fidelity to them) is why Berkeley's invocations of God start to seem ad hoc.



    That's a good point. The reduction of mechanism to mathematics itself starts to look more idealist than mechanistic. I would argue that one might consider many forms of ontic structural realism popular among "physicalists" to be a sort of idealism.
  • References for discussion of mental-to-mental causation?
    I also thought David Bentley Hart's "All Things Are Full of Gods" was pretty good on this topic too, if not particularly original. But it covers a very wide array of topics in a quite long dialogue, and in a somewhat rambling manner, so it might not be ideal for this particular subject.
  • References for discussion of mental-to-mental causation?


    Yup, that's the one. There is even an abridged version that is acted out on YouTube, although it is not particularly easy to follow lol.
  • What is right and what is wrong and how do we know?


    But science cannot be about absolutes because there is nothing in the discovery that cannot be second guessed and this is true because, at its most basic level, it is a language construction and ALL that language produces can be second guessed--

    Right. Plato attributes this open-endedness to reason itself, and in a way, G.E. Moore seems to have merely hit on this vis-à-vis practical reason with the "Open Question" argument. D.C. Schindler and Robert Wallace's books on Plato are quite good on this point. It is this open-endedness that gives reason the authority to lead, because it can always bring us beyond our own finitude—beyond current belief and desire.


    this is the nature of contingency itself: One spoken thing has its meaning only in context. One would have to reach out of contextuality itself to posit an absolute, and this is absurd

    Well, this objection works against the early analytic notion of the "absolute," which was bound up with their conception of "abstract objects" and the notion that "objectivity approaches truth at the limit." It comes out of a certain view of naturalism where the perspective of consciousness is a sort of barrier to be overcome, the much maligned but often reproduced "view from nowhere." However, such a consideration of the "absolute" has probably had a longer life as a punching bag for continentals than it did as a position actually embraced by large numbers of philosophers.

    I would think though that to be properly absolute, in the sense the term is normally used outside that context, is not to be "a reality as set over and against (and outside) all appearances," but rather to include all of reality and appearance. Appearances are really appearances, and so they cannot fall outside the absolute. Hegel's Absolute does not exclude any of its "moments" for instance.

    It's like how in the Republic Socrates wants to show Glaucon that Justice is both desirable for itself and on account of something else (both relative and intrinsic, and so not "outside" the relative). Likewise, the Good is not on the Divided Line. It cannot simply be the furthermost point on the line, but encompasses the whole, which is also why the philosopher king must descend back down into the cave for the sake of the whole, and why Socrates must at this point "break into" his own story from without, to refer us to the historical Socrates (the "saint") who is wholly outside the confines of the dramatic narrative. I forget who said it, but it's a great quote: "at the center of the Republic sits a life, not an argument."

    The absolute need not require a view from nowhere, however. Socrates need not step outside his own humanity to know that "all men are mortal." Similarly, the claim that Socrates can only know that he is mortal within a specific language game itself presupposes a specific metaphysics of language and truth. Likewise, to claim that nothing is immutable is to seemingly make an immutable claim. Some things do not seem subject to revision though. One need not step outside history (or language) to point out that it will never be true that "Adolf Hitler was the first President of the United States." Donald Trump will never become the inventor of the telephone. Dogs will not become cats without ceasing to be dogs.

    This is relevant as far as grounding the human good in human nature goes. Sometimes, one sees the claim that “there is no such thing as human nature.” Prima facie, such a claim cannot be anything but farcical if it is not walked back with so many caveats as to simply reintroduce the idea of a nature in some modified form. It is clear that man is a certain sort of thing. We do not expect that our children might some day soon spin themselves into cocoons and emerge weeks later with wings, because this is not the sort of thing man does. We know that we will fall if we leap off a precipice, and we understand that we are at no risk of floating away into the sky when we step outdoors. Things possess stable natures; what they are determines how they interact with everything else. Beans do not sprout by being watered in kerosene and being set ablaze, nor can cats live on a diet of rocks. Attempts to wholly remove any notion of “human nature” invariably get walked back with notions like "facticity," “modes of being,” etc. (Generally, the original idea of a "nature" is presented as a sort of straw man in these cases).

    Hence, when it comes to ethics, a blanket denial of “human nature” will not do. It is not the case that children benefit as much from healthy, regular meals as from having mercury dumped into their water. While the political theory underlying today’s hegemonic ideology, neoliberalism, might sometimes attempt to consider man as an essenceless, abstract, “choosing agent,” it can never truly commit to this in practice. In terms of actual praxis, no theory can wholly elevate a procedural right over all notions of the human good. We recognize that people need access to certain things to thrive and to become self-governing “agents” in the first place. One cannot have a republic of infants, or the severely brain damaged.

    What does it mean for something to be good or bad that is non question begging.Constance

    If someone offers you your favorite meal to eat and a rancid, rotting fish, is it difficult to decide which option is better? Or is it hard to choose between being awarded $5,000 and having to stick your hand in a blender?

    Well, at least the appearance of goodness seems obvious. Rational, ends-directed thought would be incoherent without it. Denying goodness, as an appearance, seems on par with denying one's own consciousness and reason itself (not that naturalism and empiricism haven't driven some to just this!).

    Ethics and politics come in because it is also apparent that what appears good to us is not always what is actually best. The ubiquitous phenomenon of regret, sometimes immediate, is enough to ground this conclusion, I would think. Ethics and politics, then, are there to explore what is truly desirable, and not merely apparently desirable, or what is said to be desirable by others. The "good," on this view, is "that towards which all things aim" (Nicomachean Ethics, Book I) when considered from the perspective of ends, and from a higher level it is "being qua desirable."

    Just as to say "a man is on the hill," is the same as to say "one man is on the hill," so it is the same to say "a man is on the hill" and "it is true that a man is on the hill" (assuming assertoric force, which is implied in most contexts). "One" and "true" aren't adding anything to being here. The being of "a duck" is that of "one true duck." So too, "good" doesn't add to being, but is being under a certain conceptual (not real) distinction. That's the core of the Doctrine of Transcendental in a nutshell (and how this got mixed into "goodness is something that sits outside the world" is beyond me, since folks like Saint Augustine and Saint Thomas clearly think goodness is absolutely everywhere).

    The appetites of a man are not the appetites of a bee or a sheep. There is contextuality here. Likewise, what a man thinks will fulfill his desires is not equivalent to what actually will. Ethics and politics want to uncover what truly fulfills desire, so that we can "live a good life," "be good people," etc.
  • Identification of properties with sets


    Indeed. But wouldn't this be because the mirror reflects any light, and not just red? Likewise, we call certain things "magnetic" but it isn't that electromagnetism is wholly absent from other things. All sorts of things emit photons, but we only call those that emit discernible amounts of light "bright." Likewise we call a boulder "heavy" and yet it would be relatively weightless on Pluto. Yet this sort of variance only makes sense if properties are relational and involve interaction (or are revealed in interaction).

    I think the property itself is often conceived of as the actuality that is prior to any specific interaction that reveals the property (there must be something that causes things to interact one way and not any other). That's the original idea of a "nature." However, it's questionable if this "actuality" can be thought of without the interaction itself. It is rather a potency/power that is actualized in the interaction (e.g., salt only dissolves in water when in water, lemons only taste sour when in the mouth, etc.). Yet a particular potency/power to act in a given way must itself be actual. That is, it isn't a sheer potency to act in any way at all, but a potential to interact in specific ways. The property is a (sometimes confusing) way of grouping this potency and actuality.
  • Identification of properties with sets


    So does the redness of the apple disappear when we turn out the lights, or the water solubility of the salt cease when the water it is dissolved in evaporates?

    It seems to me that some notion of potential (often invoked in process metaphysics) is needed as well. Consider the apple. When we turn off the light, it ceases to "appear red" to anyone. Yet if we turn on the light, anyone with healthy eyes will be able to see the redness. The apple seems to be potentially red even when this event meeting is not occurring. And this is not the same way in which it is "potentially blue," in that we could cover it in blue paint, but rather that if anyone saw it with healthy eyes under normal conditions, the redness would appear.

    If properties are only the actual event meetings, then they would seem to come from nothing. But our experience is that what we call properties are quite stable. Hence, I would say the redness of apples has to do with their potential to appear red in normal lighting to anyone with healthy vision. Even "thing-in-itself" ontologies allow that nothing appears red when no one is looking, but the appearance and the color are distinguished for this reason. But this could be explained as a potentiality grounded in process.
  • Arguments From Underdetermination and the Realist Response



    Surely you can see that those two statements have very different force? One implies an instruction or command, or recommendation. The other doesn't. "It is common for people to take a summer vacation" is an observation which does not have the force of a recommendation or instruction, while "It is good for people to take a summer vacation" does not imply that it is common and is compatible with it being rare to do so, but it does imply that one should. When the surgeon holds out his hand and calls "scalpel", it's an instruction and the surgeon expects the nurse to put one in his hand; when the nurse holds up a scalpel and asks what it is, the same word is a description - there is no expectation that the nurse will put it in his hand.

    I'm not denying a difference between commands and recommendations and descriptions, just the idea that such descriptions involving values are actually commands or expressions of emotion. Such theories do violence to language.

    The idea that "good" always refers to something like "thou shalt" is a product of Reformation volanturist theology, the tradition that shapes Hume. To say that all value claims are about "thou shalt" isn't to observer an ironclad law of philosophy or language. It's just the (originally explicitly theological) premise that shaped Hume's context, i.e., "there is no intrinsic value (teloi) because intrinsic value would be a constraint on the divine will. Thus, value must be about divine command."

    "This is a great car," does not mean "thou shalt drive my car," or even "I should drive my car," just as "this is good (healthy) food" does not directly convert to "thou shalt eat this food," or even "you ought to eat this food." This is even more obvious when we move to the beings that most properly possess goodness. "Peter is a good man," need not mean "thou shalt choose Peter," or "I recommend Peter." It can, but it needn't; it can be merely descriptive.

    Centuries of war waged against intrinsic value in the language haven't been able to paper over these issues. While "that's a good tiger" might seem a bit odd in English, descriptive value statements made in slightly different ways are still common and natural. Hence, "that tiger is a perfect specimen," or "that is a perfect tiger," is generally about the tiger as tiger, not recommending the tiger or commanding us to do anything vis-à-vis the tiger. So too, "that is a pathetic, miserable bush," isn't telling us to do anything vis-à-vis the bush, but is normally telling us something about the bush as a bush.


    So let me ask a pointed question: does the descriptive statement "x is y" essentially mean "you ought to affirm that 'x is y' is true"? If not, then why, if y is "good," would it automatically change to "you ought to do y"? To be sure, we ought to choose the good and avoid the bad. But we also want to affirm truth and reject falsity. And yet we don't say that "x is true" becomes equivalent to "affirm x," and so "x is good" shouldn't be subject to this sort of transformation either.

    On the same point, "ought" is often taken to imply duty, and I think this is the same sort of deficient theological extension. "This food is good, you ought to try it," and "she likes you, you should ask her out," do not imply "you have a duty to eat this food," or "you have a duty to ask her out."

    So why do you think that the fact/value distinction is a distinctive error of empiricism - or even an error at all?
    (As I remember it, Aristotle even asserts that "Reason, by itself, moves nothing". That's what motivates his construction of the practical syllogism.)

    I'm pretty sure we've discussed this before. Hume's variant would be something like saying only the animal (sensible) and vegetative soul ever move the body. The rational soul (or at least a fractured part of it) is acknowledged, but powerless. Hume doesn't argue for this position in Book II though, he just stipulates this as a definition. He does not take up the influential arguments for the appetites associated with reason, but simply declares they cannot exist. But I consider the phenomenological and psychological arguments made for such appetites to be quite strong, and Hume's declarations to be quite destructive, so I have no idea why we should take them seriously.

    The fact/value distinction in Hume (see Book II) is justified in a circular fashion from this premise. Reason only ever deals with facts. It can never motivate action (stipulated, not argued). Value must motivate action. Therefore, value is not a fact. He will use this in Book III to claim that when we investigate "vice" (disvalue) or presumably "virtue," we can never actually experience them anywhere. I can only say here that this seems obviously false and that millennia of thinkers disagreed. We live in a world shot through with value. We experience obscenity, depravity, cruelty, etc.

    Hume's argument, that "virtue and vice" don't show up in our "sense data," is extended into the seeming reductio claims of later empiricists and phenomenologists, that we also don't experience cats, trees, the sun, etc., but only an inchoate sense stream, and so these too are less real abstractions. If the one argument from abstraction is valid, I don't see why the other isn't, although the conclusions seem absurd. They suppose that the healthy man is surrounded by unreal abstractions and that the person having a stroke and the infant "see reality as it is" by experiencing it in an inchoate fashion. The sage is most deluded and the infant and victim of brain damage lifted up, just as the earlier argument makes the psychopath, the damaged and malformed soul, into the measure of the moral truth of the world. But as Hegel points out in the chapter on sense certainty in the Phenomenology, this process bottoms out in the completely contentless.
  • Wisdom: Cultivation, Context, and Challenges


    Gee, that's tough. I would say that wisdom is like a skill, in that you can develop it through practice and habit. But unlike a skill (like medicine, or woodworking) it doesn't have a particular end it produces (like health, or a table). Rather, it involves knowing which ends are actually worth pursuing. Teens know about celebrities who get everything they want and then become miserable. That's a great example; the wrong ends were sought.

    That's practical wisdom. Theoretical wisdom is more like science, but involves understanding how everything fits together. It's not unrelated to practical wisdom. If you know how the world fits together, presumably you also know what is worth pursuing. If you know how life fits together, you can be "at home in the world."

    Or something like that. I think it's perhaps easier, if you know of some famous scientists, to differentiate between those who were merely brilliant and those who were brilliant and also seem wise. But it's hard to put one's finger on the difference easily.
  • Idealism in Context
    For instance, I only discovered John Vervaeke's lectures in 2022, but his original 'Awakening from the Meaning Crisis' series comprises 52 hours of material! And that there really is such a crisis, I have no doubt, although it's never hard for the naysayers to say 'prove it' and then shoot at anything that's offered by way of argument.Wayfarer

    I've thought about this sort of thing for a bit, and I'm trying to put my finger on some patterns. For instance, there is a sort of "perennial problem" fallacy. It works like this: if a problem is always around—e.g., "people have always committed suicide and become addicts"—this is used as somehow precluding the idea that this problem could become particularly acute in a given epoch or place. But the driving causes of Russia's abysmal male life expectancy after the fall of the USSR show that this sort of thing can vary wildly, having the same level of effect as major wars. Or less dramatically:

    [chart omitted]


    I came across a different explanation recently as well. In the Problem of Pain, C.S. Lewis takes up the modern reduction of all virtues to kindness and the claim that our era is kinder than past eras. Perhaps it is so, although he does ask if kindness is really a virtue if it doesn't cost one anything. I think there is a valid point there in that we might be said to be coasting on past successes.

    But either way, his more salient point was that, even if our own era excels in one particular virtue, it doesn't mean it excels in every virtue. For instance, even if we are kinder, this does not mean that we possess greater fortitude, greater prudence, greater courage, greater chastity, greater hope, greater temperance, etc. Even if our era was "the best" that still wouldn't mean that there would be nothing worthy of emulation in prior epochs. For, even if we are the most scientific and kind, it would still be the case that if we were also the most temperate and had the greatest fortitude things could be much better. I'd even say that it's possible that an increase in intellectual virtue and techne has helped paper over declines in other areas, or to set the stage for vice (just consider the cornucopia of addictive drugs unlocked by our innovations).

    I think here about the Amish who, for all their faults (and they are many and severe) manage to outlive and build up greater wealth than their neighbors (despite huge household sizes to split inheritances), all whilst eschewing centuries of technological progress, largely due to an ability to foster a few key virtues.

    Then there is the issue of framing. This article from the Guardian is a good example, and a reminder that propaganda takedowns are sometimes subtle (consider who they felt the need to focus on first and the title they chose). It focuses on homesteading and a sort of ascetic way of life and explains it by saying:

    The “paradox of choice” is the theory that humans, when offered too many options, become overwhelmed and unhappy. If liberal consumer capitalism is underpinned by the belief that individual autonomy and choice should be society’s highest values, then perhaps the trad movement is one response to the decision paralysis of modern liberal life.

    Faced with a dizzying barrage of technological, social and consumer choices, some people prefer fewer options: duties rather than rights, constraints rather than freedoms, defined roles rather than elastic identities.

    That narrowing is part of a larger reaction against modernity, a frustrated feeling that our secular technological age promised progress and instead brought loneliness, worsening material prospects and a numbing onslaught of social media, spam, porn, gambling, gaming and AI slop, with the cold hand of capitalism – or Satan, or both – extending further into our lives with every chime, buzz and click.


    That is, the problem is almost that things are "too good" or at least "too free." But I wish a self-described expert in this area would at least offer up the way this sort of movement is justified in its own terms, and not only in liberal terms. The internal interpretation there would not be that we have "too much freedom," but—in line with Plato, Epictetus, Rumi, etc.—that freedom requires virtue, and that these efforts help to foster virtue. It is rather consumerist neoliberalism that educates us in vice, and so deprives us of liberty. That is, you don't automatically become self-determining and self-governing by turning 18 and avoiding severe misfortune. It is, rather, considerable work, and involves a sort of habit formation and training.

    I wouldn't call this a fallacy so much as an inability to step outside a particular frame (in particular, a specifically modern notion of liberty). So, a group that is acting precisely to achieve greater self-determination and liberty is instead described as fleeing liberty for comfort (which is, ironically, how the decadence of modernity is often described as well). My point would be that both critiques have a good deal of teeth, but the critique of liberalism will probably tend to hit harder because its dominance makes it harder to escape (whereas fringe movements are contained within a voluntarist system where membership is "at will").
  • Arguments From Underdetermination and the Realist Response


    There is some value to recognizing that statements of value (evaluations) are not in the same logical category as statements of fact.Ludwig V

    Sure, but that isn't new to Hume. The separation of practical and theoretical reason was centuries old. It's precisely the assumption that there are no final causes (and perhaps, no facts about goodness) that allows for a novel move here. Prior thinkers hadn't missed the difference between "ought" and "is;" yet they thought there could be descriptive statements about the good and beautiful (just as we can speak about what "ought to happen" given purely descriptive predictive models).

    Here is a pretty typical intro to philosophy text in the empiricist tradition:

    Descriptive Claims: How the World Is

    Descriptive claims make statements about how the world is. They describe the facts of something, what you observe to be the case without any form of evaluation or judgment. For example, “the weather today is sunny” is a descriptive claim because it simply describes what someone observes.

    Evaluative Claims: How the World Ought to Be

    Evaluative claims make statements about how the world ought to be. They express judgments of value: what is good, just, fair, beautiful, healthy, important, etc. Instead of simply describing, evaluative claims interpret facts or assert what should be the case.

    https://openstax.org/books/introduction-philosophy/pages/8-1-the-fact-value-distinction#:~:text=Instead%20of%20simply%20describing%2C%20evaluative,sunshine%E2%80%9D%20is%20an%20evaluative%20claim.

    Note, it is assumed that we can "describe truth," but only "interpret" value. "Facts" are juxtaposed with value here. "Description" is juxtaposed with "judgement." Obviously, in the Continental tradition, you're more likely to see the claim that both are interpretive and involve judgement. Here, this is even expanded to "healthy."

    Anyhow, there is a difference between:

    "People ought to be kind to their mothers." and
    "It is good for people to be kind to their mothers."

    You can see this in the fact that if you replace "good" in the second statement with "common" you get a straightforwardly descriptive statement: "It is common for people to be kind to their mothers." Now, if the claim is that the latter can be said to be "describing a fact," yet the former cannot, I'm not sure how this isn't just assuming: "One can never describe facts about value" (which tends to flow from "there are no facts about values"). An appeal to "judgment" makes no difference here, because this applies to what we think is true sans issues of value as well.

    Yet, if we allow for "facts about values" we might get something like:
    X is better (more choice-worthy) than Y. ("is," fact)
    Therefore, choose X. ("ought")

    When framed this way, it seems a bit strange to have the objection that we cannot move from "X is more worthy of choice than Y" to "X should be chosen over Y." Yet, if someone is really a stickler on this, we can always include an ought premise to the effect of: "one ought always to choose the better over the worse." I am just not sure why this is needed. It seems to me to be akin to demanding that every logical argument have the additional premise "we ought to affirm the true over the false" tacked on to it. Granted, I see no problem in adding either, since they seem obviously true.
  • Wisdom: Cultivation, Context, and Challenges


    I am certainly interested in folk actually setting out their ontological commitments. And talk of "good and bad" could mean adaptively optimal and its privation, or transcendently perfect and its privation.apokrisis

    Well, I wouldn't want to change the subject to something besides the point. As near as I can see, the only commitment of a "reality versus appearance distinction" vis-à-vis value is some sort of value realism, such that what is good, choice-worthy, adaptive (as opposed to maladaptive), etc. doesn't just amount to what appears or is said to be so. Or to put it negatively, that it is possible for us to be wrong about what is most desirable or choice-worthy, and that our being wrong in this respect doesn't just reduce to our experiencing regret at some later point, or people disagreeing with us.

    I am not sure if I would set the "adaptive" and the "perfect" against each other either. Your use of "transcendent" in these posts seems to suggest that you think that what is "transcendent" is somehow outside and absent from what is transcended. Yet that isn't "transcendent" in the sense the term is generally employed in terms of goodness. Nothing is being transcended in your usage. A better term would be "extra-worldly." But, suffice to say, few ethicists appeal to a "good that lies wholly outside our world," but rather one that is, to paraphrase Saint Augustine, "within everything yet contained by nothing."

    And wisdom can't "know precisely which ends are actually good". That is an appeal to transcendent truths that float above the real world. It can only, in the usual fallible and pragmatic fashion, keep testing, keep exploring, by being alert to its own failings and getting out the clever thinking to figure a better world model out.apokrisis

    I'd like to unpack this because it seems to me like another unsupportable dichotomy. You seem to be saying that either:

    "It must be impossible for us to distinguish between what appears good and what is actually good"
    or
    "There must exist "'transcendent truths that float above the real world' to measure what is truly good."

    I am not sure if these are the only two options. You do say "know precisely," and I am curious what "precisely" is supposed to be doing there. Is it supposed to be indicating some sort of infallibility? If so, might that be lurching towards strawman territory? That is, the suggestion that there is only your position or else "declaring yourself infallible due to your relation to a measuring stick outside the real world" seems a bit much. Surely, one might be a realist vis-à-vis values, and still a fallibilist, as the great majority of realists have been. Indeed, for all those thinkers you seem to have in mind for whom Goodness is ultimately the Divine Essence, Goodness itself is precisely unknowable.

    Like I said earlier, I am very skeptical about uses of the term "transcendent" because it is regularly used equivocally on this forum and in professional philosophy. Which thinkers exactly appealed to "transcendent truths that float above the real world?" I will not deny that at least some thinkers may have advanced such a theory, but none of the main figures in the realist tradition do. Where it shows up, it seems to largely be a product of early analytic thought with its notions of abstract objects. This is, however, extremely far from something like the medieval Doctrine of Transcendentals, where Good, True, One, etc. are merely conceptual distinctions that add nothing to Being (being as viewed in a particular aspect, e.g. vis-à-vis the appetites for Good).

    This is the same mistake Sam Harris makes:

    Harris rejects any “transcendent source of value,” as being irrelevant to well-being, since it must “bear [no] relationship to the actual or potential experiences of conscious beings.”1 Likewise, he describes “the Platonic Form of the Good” as existing “independent of the experiences of conscious beings.”2 Further, he argues that Christians cannot truly dedicate themselves to the pursuit of God “for its own sake,” since—ultimately—people are only following God because they desire the extrinsic rewards won through God’s favor, or fear the extrinsic punishments of God’s wrath.3

    Clearly, Harris has not understood Plato, and his characterization of “Christianity” throughout The Moral Landscape bears little resemblance to the philosophies of St. Thomas, the Church Fathers, or many other influential Christian thinkers. Indeed, the very idea that “God’s good” could be arbitrarily related to what is “good for us,” only makes sense within the context of a very particular sort of voluntarist theology.

    Much more could be said here, but it is sufficient to point out that Plato’s Good and the God of St. Thomas are not “independent” of the good experienced by creatures. Nor are they independent even of what merely appears to be good to creatures. For Plato, when we choose what merely appears good, as opposed to what is truly better, we are still choosing “that which appears good” in virtue of its participation in the Good. The Good is not absent from “good appearances.” This is brought out even more strongly in St. Thomas, who arguably elevates “the Beautiful”—alongside “the One,” “the True,” and “the Good”—as a transcendental property of being itself. Likewise, for Aristotle, God is the “First Cause” precisely because God is the end to which all things are oriented and striving.i By definition, this excludes God’s being wholly independent from the desires and well-being of creatures.

    What appears to be deficient here is Harris’s understanding of the concepts of transcendence and the absolute. The transcendent is not absent from what it transcends. An infinite Good—one that is truly without limits—is not bracketed off by the finite and missing from it. Likewise, the absolute cannot be “reality as separated from all appearances or subjectivity.” The absolute—to be properly absolute—must include all of reality and appearances. Appearances are part of reality in that they really are appearances.ii Harris seems to be conflating something like the notion of “objectivity” (as in, “being as seen from ‘the view from nowhere’”) with the idea of a transcendent and absolute Good. Hence, he uses good reasons for dismissing the idea of an “objective good” (at least under this flawed definition of “objectivity”) as a way to dismiss any notion of transcendent good.

    You can always use Peirce's religiosity against me.apokrisis

    I am not "using Peirce's religiosity" against you. I am pointing out how Peirce avoids the issue you've fallen into. Consider, if I don't currently find it good, or adaptive to agree with you, how could I possibly be wrong about this if proper adaptation is just defined in terms of current belief? The fact that you think that article can be dismissed as "religiosity" is itself telling though.

    In his trichotomy of tychasticism, anancasticism, and agapasticism, we can see that what he calls love, a systems scientist would call constraint.apokrisis

    Only by doing violence to the original text, I would say.

    Peirce is railing against the tone of his times – the mindless competitiveness that the Victorian understanding of Darwinism was meant to condone.apokrisis

    He is surely doing this, but he is doing more. He is making a point he makes throughout his papers, that "reasonableness" is incoherent without an ordering to an end.

    Well surely only because they lost the war?apokrisis

    The idea that "Nazism would be good if only they had won," is one of the absurdities of popular reductionist forms of anti-realism, yes.
    But it is soft soaping the tough business of forging understandings of how competition and cooperation can be rebuilt in the new circumstances that human history keeps presenting.apokrisis

    Why ought they be rebuilt? If whoever wins is justified, has properly adapted, I don't see how nihilism doesn't follow.

    And that is if wisdom and cleverness are cognitive processes, then how does that relate to the evolved structure of brains and nervous systems? If one isn’t minded to treat these things as gifts given by God to humans, but instead naturally evolved traits, then how does one make sense of their evolutionary continuity with mammalian neurobiology?apokrisis

    Again, the dichotomy, either your narrative, or an appeal to God as the proximate cause. Does that not seem like a strawman?

    Further, just because one needs a brain to be wise does not mean that wisdom is best explained in terms of brains. This is akin to claiming that we are best able to understand flight (the principles of lift, etc.), by looking at the individual cells making up the wings of all the animals that fly, or conducting a close examination of wings.

    Yet this is demonstrably not the best way to understand flight or lift. We did not learn to build flying machines through an intensive study of the chemistry at work in insect or bird wings. Indeed, there is much we still do not know about how those cells work (and the same is true for brains). Rather, we mastered the more general, generating principles at work across all instances of heavier than air flight in nature. The fact that “the cells in insects' wings are necessary for flight” need not compel us to conclude that flight is best understood through a study of these cells, just as the fact that we need our brains to “know goodness” need not suggest that the goodness is itself something that can be best known through studying neurons.
  • What is right and what is wrong and how do we know?


    A path is made by walking on it; ethics are made by questioning our actions.unenlightened

    An excellent point. You've put it much better than I, but that's the basic idea I was trying to get across here:

    However, I want to suggest that one can have "the best," or an "infinite good" in mind as a goal without knowing such a good... That's sort of the root of Plato's notion. It is the desire for what is "truly good" not merely what "appears good" or "is said to be good by others" that allows us to transcend current beliefs and opinions, to move beyond our own finitude in a self-determining pursuit of the Good. The object is not known at the outset, it is merely desired (that's the whole idea of the "erotic ascent").Count Timothy von Icarus

    "Metaethics" often deals with questions of moral truth, etc., but I think there is a very real question about what sort of things will be needed to "walk any path" regardless of where that path leads. That is, to my mind, the most important sort of "meta" question. A metavirtue here is one which is required for any such questioning, and to abide by what one finds, and I think these can indeed be identified. Plato's notion of the "rule of the rational part of the soul," seems to me to be an excellent candidate for instance. Even if we might want to radically rework the underlying anthropology, the habits described seem necessary for all inquirers.
  • Consciousness and events
    This topic is very interesting from a sociological perspective because multiverse theories have become very popular as a solution to the Fine Tuning Problem, and yet "Consciousness Causes Collapse" (the von Neumann–Wigner interpretation) solves the problem just as well.

    Why do we exist in a universe fine tuned for conscious life? Because only such life collapses potentiality into actuality. All possibilities exist, but only those with life become actual. Problem solved. If the degree of collapse corresponds to the degree of consciousness, perhaps there is even something like an iron law leading towards de Chardin's Omega Point. Makes as much sense as the multiverse.

    At least, prima facie, I am not sure why "everything possible actually happens, it is just impossible to ever observe that this is so," is considered more plausible than this. I can see why some might find both absurd, but the preference for one over the other seems hard to explain as anything other than an aesthetic preference.
  • Wisdom: Cultivation, Context, and Challenges


    I don't think anyone has mentioned God except for you. It seems to me that you are trying to set up the following dichotomy:

    "Either wisdom is just adaptation (cleverness) or else one must explain wisdom in terms of God."

    But this seems to me to be a false dichotomy. Again, my objection was that any action can be seen as an "adaptation" towards some end, but wisdom generally consists precisely in knowing which ends are actually good to achieve. It is better to know what is worth doing than to be very clever about doing evil or feeding vices. You seem to be collapsing any distinction between apparent and real value however. Yet it seems obvious that people can be very clever in pursuing merely apparent goods, and that this is typically what we mean by "being unwise."

    Such a collapse seems to indicate something like a "values anti-realism." What is good, beautiful, just, etc. is just whatever appears as such or is said to be such. However, realism here is not the exclusive domain of theists; far from it. Plus, I'm not really sure what you think an appeal to God is supposed to do here. Arguably, some appeals to God, such as divine command theory, have much more in common with anti-realism than realism; they just shift whose opinion matters. Meanwhile, even a realism grounded in God doesn't need to appeal to God to explain anything and everything about value.

    I'm not sure what to make of the appeal to Peirce. I don't think his agapism runs into this problem because it has an end it is oriented to. His "reasonableness" is not merely procedural and instrumental. There is a summum bonum (indeed there must be one for rational action), and this is certainly the case for the Scholastics he was drawing on as well.

    "Three modes of evolution have thus been brought before us: evolution by fortuitous variation, evolution by mechanical necessity, and evolution by creative love. We may term them tychastic evolution, or tychasm, anancastic evolution, or anancasm, and agapastic evolution, or agapasm. The doctrines which represent these as severally of principal importance we may term tychasticism, anancasticism, and agapasticism. On the other hand the mere propositions that absolute chance, mechanical necessity, and the law of love are severally operative in the cosmos may receive the names of tychism, anancism, and agapism."

    C. S. Peirce, Evolutionary Love


    More explicitly in terms of the "maladaptive:"

    The gospel of Christ says that progress comes from every individual merging his individuality in sympathy with his neighbors. On the other side, the conviction of the nineteenth century is that progress takes place by virtue of every individual's striving for himself with all his might and trampling his neighbor under foot whenever he gets a chance to do so. This may accurately be called the Gospel of Greed...

    Well, political economy has its formula of redemption, too. It is this: Intelligence in the service of greed ensures the justest prices, the fairest contracts, the most enlightened conduct of all the dealings between men, and leads to the summum bonum, food in plenty and perfect comfort. Food for whom? Why, for the greedy master of intelligence.

    Ibid.

    289. Everybody can see that the statement of St. John is the formula of an evolutionary philosophy, which teaches that growth comes only from love, from I will not say self-sacrifice, but from the ardent impulse to fulfill another's highest impulse. Suppose, for example, that I have an idea that interests me. It is my creation. It is my creature; for as shown in last July's Monist, it is a little person. I love it; and I will sink myself in perfecting it. It is not by dealing out cold justice to the circle of my ideas that I can make them grow, but by cherishing and tending them as I would the flowers in my garden. The philosophy we draw from John's gospel is that this is the way mind develops; and as for the cosmos, only so far as it yet is mind, and so has life, is it capable of further evolution. Love, recognizing germs of loveliness in the hateful, gradually warms it into life, and makes it lovely. That is the sort of evolution which every careful student of my essay "The Law of Mind" must see that synechism calls for.

    My original qualm was that there seemed to be no distinction between the adaptive and maladaptive if wisdom is just adaptation and cleverness. But Peirce gives himself grounds for such a distinction. Something like the Nazis' rise to power could be described as a sort of anancasm because it worked largely through coercion. It was a maladaptive response to the post-war economic and political pressures the Weimar Republic faced.

    Hence the earlier mention of Whitehead on this same point; there is something similar there (and too often missing in analytic thought). I think Aldous Huxley is intellectually in the same vein too. Although, the cosmic teleology here seems to me closer to St. Maximus the Confessor than anything else I can think of.
  • What is right and what is wrong and how do we know?


    To say ethics is the study of ends presupposes the value of an end.

    I'm not sure if it does. That there are ends that people act for seems obvious. If we were wrong about this, it seems we should be wrong about just about anything and everything. Moreover, there are obviously better or worse means towards different ends. If I want to sail off my island and have at my disposal logs to make a raft or stones, the logs shall work far better for achieving my end.

    One states an end, a purpose to one's actions, and no matter what this is, there is another question latent and ignored: What good is this?


    Right, this is how I would put it: aside from the question about what is good vis-à-vis any particular end (e.g., winning a race), we can also ask about the choice-worthiness of that end itself. Ends can be ordered to other ends. For instance, we might want to win a race against fleet-footed Achilles because, if we do, he will agree to release our captured friend. So, we face the problem of grounding proximate ends and means in some sort of end that is choice-worthy for its own sake, i.e., "an end that is sought for its own sake."

    Are there any such ends? Are there many?

    A common answer here has been pleasure. Yet it has generally been agreed that simple pleasures—from food, sex, drugs, etc.—are inferior to the "pleasure" derived from participation in common goods (i.e., the good of being a good husband, a good parent, a good citizen, a good soldier, a good priest, a good artist, etc.) or a sort of intellectual pleasure associated with wisdom, contemplative understanding of the world and one's role in it, or as Hegel says: "being at home in the world." Some people use "pleasure" only for "bodily pleasures," but I think we can speak about different sorts analogously. Sometimes, we seek bodily pleasures so as to accomplish one of the "higher" sorts of pleasures, as when we want to be in a good mood so as to help someone else better. Yet happiness as a sort of overall flourishing, "living a good life" and "being a good person," seems to be sought for its own sake.

    Anyhow, that's just a candidate for what is good in itself, and sought for its own sake.

    Others deny that there can be anything worth seeking for its own sake. So all judgement about ends relies on an infinite regress of finite ends ordered to other finite ends, which must, in virtue of our being finite creatures, bottom out somewhere, seemingly in irrational impulse. However, these folks still think we can live "better or worse lives," and "learn to deal with this ungroundedness." So, even for them, there can still be an ethics and a politics (politics as an art for producing good/virtuous societies)—a study of living a better or worse life.

    However, I want to suggest that one can have "the best," or an "infinite good" in mind as a goal without knowing such a good. Indeed, if the human mind has an infinite appetite for goodness, as philosophers of all stripes from Platonists to atheists have claimed, then it seems that, on account of our finitude, we can never fully know or be "mentally conformed" to such a good, since it surpasses us. Yet we can be "always ever more open to it." That's sort of the root of Plato's notion. It is the desire for what is "truly good" not merely what "appears good" or "is said to be good by others" that allows us to transcend current beliefs and opinions, to move beyond our own finitude in a self-determining pursuit of the Good. The object is not known at the outset, it is merely desired (that's the whole idea of the "erotic ascent").

    Normally, traditions that build on Plato—Boethius, the Golden Age Islamic thinkers, many of the Patristics, the Scholastics, etc.—also posit a sort of "knowing by becoming" here. Praxis is essential (e.g., contemplation, ascetic labors, etc.). But within these schools it isn't "knowing the good" that comes first, but knowing what essentially precludes knowing and consistently willing the good, which is being divided against oneself and controlled by one's passions and lower appetites, rather than the rational appetite for goodness or truth as such. Hence, ethics here begins from a sort of "meta" position, from looking at what must be the case for any ethical life regardless of what goodness and justice turn out to be. Indeed, much of what Plato puts out there would seem to hold even if "good" just means "what I myself will prefer." It applies to anyone not embracing full nihilism, in that being ruled over by one's appetites and passions will only lead to good outcomes by accident (and we know from experience that it will often result in disaster).

    Which is all to say that ends can be quite unknown and we can still have an ethics.
  • What is right and what is wrong and how do we know?


    The problem itself is, of course, messy, as the OP notes, but does this make ethics itself reducible to the thinking only, that is, ethics being the kind of thing that is made and conventional only, and not discovered. If ethics is essentially discoverable, then this implies something outside of thought , addressed by thought to determine how to understand it.

    I don't see how it could be. If ethics is the study of ends, of what is sought, then it seems clear that some ends are not sought merely as a matter of convention. People do not seek happiness and avoid suffering as a sort of convention. We do not desire food, oxygen, warmth, etc. by convention; and yet these do seem to be chief "goods." That it is, at least ceteris paribus, bad to be blinded, to have one's hand cut off, to suffer brain injury, etc. does not seem to be a matter of convention. Convention itself is only coherent if it springs from a sort of goal-directedness that already presupposes value, else there would be no reason to follow conventions.

    As to discoveries, surely some moral insights are discovered. Newton famously drank mercury because he thought it was good for him. Yet today, knowing what we know about the effects of mercury ingestion on the body, we can say that, all else equal, it is bad for people to have mercury slipped into their food and drink. This is knowledge of value that must be discovered though.
  • The Concept of 'God': What Does it Mean and, Does it Matter?


    Indeed, but it seems that memories can be stored in other ways. For instance, Saint Augustine goes on at length about memory in the Confessions but he also recounts many of his early memories. And yet now these memories are, in a sense, stored in a text that millions have read and shared in. Likewise, if we write a reminder note for ourselves, and it prompts us to act, it is serving as a sort of medium term memory storage device. A pen and paper do the same for short term memory when doing arithmetic.

    Augustine's memories were originally recorded on parchment and vellum, but have since spread to paper, hard drives, etc. They are recorded as sound waves on magnetic tapes and optical disks as well, and in a sense, in the bodies of all those who have experienced these and now also remember them. Yet physically, an optical disk is very different from paper, which is very different from a magnetic tape, which is very different from a sound wave. The physical substrate does not seem to matter much. It is the information (form) that matters, and arguably this is "immaterial" in a number of senses.
  • What is right and what is wrong and how do we know?


    You can observe brain activities corresponding to pleasure, pain and even consciousness on functional MRI scans.

    We know these states "correspond" to pleasure or pain because people tell us they do. A huge amount of neuroscience in this general area presupposes that people are accurate reporters of real, private, mental states. If we didn't assume that, did not presuppose it as fact, then all of our "measurable, third-person data" would only tell us about how different stimuli cause different responses in different parts of the body, e.g., "do this and people emit this sort of sound wave." This is why some philosophers and neuroscientists argue that we should declare consciousness a sort of unscientific illusion.

    Anyhow, if this counts as "observing" inner life, how is goodness not observed? Isn't medical and veterinary science incoherent without the good of the body, health? Isn't most of the field of psychology incoherent without the assumption of a mind and what is good for it? "Psychology" is itself the "discourse of the soul." So too, engineering as a science, architecture, etc., all sorts of arts and sciences, are quite incoherent without a notion of goodness. How can one decide between a good bridge and a bad one, or a good water treatment plant and a bad one, without ends you want to achieve? If a building that falls down is just as good as one that stands, or a treatment that kills patients is just as good as one that heals them, these disciplines disappear.

    Hence, the good (ends, desirability, choice-worthiness) seems to be everywhere. Further, if it is in the mind, and the mind comes from the physical, then ends, desirability, etc. come from the physical.

    I guess that's my point. Your division here seems to beg the question, and I don't think it's actually a wise thing to just assume. IMHO, it's unclear exactly why pleasure should be so different from goodness, one "real" and the other illusory, for instance.
  • What is right and what is wrong and how do we know?


    Things like that need to be stopped, not because they’re bad but because they hurt people

    Yet if hurting people isn't bad, why ought we try to prevent it?

    It’s a society‘s responsibility to protect its members.

    Even if this responsibility existed, if it isn't good to live up to one's responsibilities (or wrong not to) then I am not sure what this amounts to.

    I guess there seem to be two things. Whether it is called such or not, there seems to be a sort of social-level morality being invoked, right (i.e., what societies ought or ought not to do)? However, at the same time, societies are made up of individuals, and if they do not value this social morality and it has no claim on them, then how does it apply?
  • Arguments From Underdetermination and the Realist Response


    If we explain naturalism in terms of "natural" we need to explain what "natural" means, and this is generally where there is equivocation. What makes a cause "natural?" (Also, it seems to me that people who are on board with eliminating causation still consider themselves naturalists; yet if there are no causes this definition won't work).

    When naturalism is defined as the view that "the natural is all there is," then for something to be "natural" is just to say it exists. If ghosts and spirits and magic exist, then they must be natural. It's the same issue as Hempel's Dilemma.
  • What is right and what is wrong and how do we know?


    Your original question was: "Is right and wrong just a matter of thinking something is right (e.g. it is right to save and improve lives) and something is wrong (e.g. theft, fraud, rape, robbery, enslaving, torture and murder are wrong)?"

    But here:

    Different because scientific theories, e.g. the theory of gravity, are about something physical outside one's mind... Morals and laws are psychosocial constructs.

    aren't you presupposing the answer to this question? It seems to me to get close to: "Facts about morality are different because morality is only in the mind." Or, "moral anti-realism is true because moral anti-realism is true."

    There is no objective measure of right and wrong in the universe, the way we can objectively measure the gravity on Earth and on the Moon.

    There is no objective way to measure pleasure or pain, nor consciousness itself. Are these illusory too? Are the only things that exist that which can be measured (presumably quantified)? Yet if nothing really exists except for that which can be quantified, then it would still seem that the illusion that such things exist must itself truly exist. For surely we experience values, beauty, pleasure, etc. And yet is "illusion" something that can be quantified? If not, then we must reject the idea that morality, beauty, etc. are illusions, and must simply say that most of our experiences aren't even illusory; they are nothing at all.

    Our morals and laws arise out of the dynamic interactions of our genes, environments, nutrients, and experiences.

    Ah, well, the things you've mentioned morality arising from are "physical things outside the mind," no? So how does something that is not a "physical thing" (e.g., goodness) arise from physical things? There must be some sort of convertibility, or else such an arising would not be possible. But if physical things relate to value in this manner, then it seems to me that there is no reason why value should be exclusively "in the mind." What is in the mind "arises" from the "physical," and so the physical seems to somehow contain, at least virtually, values, etc.
  • What is right and what is wrong and how do we know?


    "There is nothing either good or bad, but thinking makes it so." - William Shakespeare, Act 2, Scene 2, "Hamlet".

    Anyhow, I believe the correct response here is: "There are more things in heaven and earth... than are dreamt of in your philosophy" - William Shakespeare, Act 1, Scene 5, "Hamlet". :smile:
  • What is right and what is wrong and how do we know?


    Morals and laws are different from physical things like the shape of the Earth. Morals and laws are mental constructs which come from our beliefs,

    Different how? Are scientific theories not "mental constructs?" What about understandings of history? Now if morals are "mental constructs" what causes them? Presumably, they do not spring from the aether uncaused into our minds fully formed, but have causes that lie outside of us. But this makes them, at least in this respect, like scientific models and theories, no?

    e.g. apostasy and blasphemy are considered wrongs in Islam and are punishable by the death penalty in some Muslim-majority countries, while apostasy from Islam and blasphemy against Islam are not considered wrongs in Western countries and are not punished

    Right, behaviors and norms vary. But this is true as relates to all sorts of factual claims. With the advent of germ theory, some parts of the world started to boil their water to sterilize it in order to curb outbreaks of infectious diseases. Other parts didn't, and recommended other procedures to try to fight epidemics. Norms and public policy varied based on what was thought to be best, the same as is going on in your example. In some places, young Earth creationism is taught as the origin of the world. In others, the narrative explains that the world is roughly 4.5 billion years old and that life slowly evolved on it over billions of years. This is a question of fact, and yet norms about it, what is taught, etc., vary by time, place, and culture in much the same ways that laws and moral beliefs vary.
  • Arguments From Underdetermination and the Realist Response


    More than that, there are, in my book, two versions of empiricism. One of them has been popular in philosophy and leads to the empiricism of appearances, ideas or sense-data. The other is mostly unspoken but is the foundation of science; this version understands experience in a common-sense way and doesn't posit theoretical objects that boast of being irrefutable and turn out to prevent us from understanding the stars or anything else.

    Ah yes, I mention this. But I think this leads to an unfortunate and common conflation where the second sort of "empiricism" is appealed to in order to justify the first sort, such that all scientific progress is called on as evidence for the superiority of the first sort of empiricism, and a rejection of empiricism is said to be a rejection of science.

    As noted in the OP, if we go with the second version, then Hegel is an empiricist, and figures like Aristotle, Albertus Magnus, and Archimedes would be more "empiricist" than the original Empiricists.

    This way of justifying the first sort of empiricism isn't just flawed on the grounds that it equivocates. As far as I am aware, it has no good empirical support either. The Great Divergence, whereby Europe pulled dramatically ahead of China and India in economic and military development, doesn't track well with the (re)emergence of empiricist philosophy. Areas where rationalists dominated did not lag behind in military and economic might. Many famous inventors and scientists did not hold to the first sort of philosophy. And I have never heard of an experimental study finding that having more empiricist (first sense) philosophical views makes one a more successful scientist or inventor.

    It's easy to see how the two often become mixed together though. I think this is especially acute in metaethics, where empiricist epistemic presuppositions essentially amount to metaphysical presuppositions. "Examine the sense data; there are no values (or universals, or facts about meaning, etc.) to be found." But of course, our lives are full of apparent universals (wholes) and values. The critic can rightly claim that these, in fact, seem to be everywhere. Phenomenology seems to find them, as did the philosophy of the past. So all the heavy lifting seems to be done by what is assumed to be admissible from experience.

    At any rate, in arguments such as J.L. Mackie's "queerness argument" against values, I think it's clear that the epistemological presuppositions do all the work, and essentially assume the conclusion. But the conclusion that all prior talk about ethics, goodness, beauty, etc. is a sort of "error" is a radical, and in a sense, skeptical conclusion. Yet to my point in the OP, if our epistemology leads us to this—to dismiss claims as seemingly obvious as "it is bad to have my arm broken," or "it is bad for children to be poisoned at school" as lacking any epistemic grounding (i.e., not possibly being facts)—then I'd say this is an indication that we simply have a bad epistemology. Doing science does not require such views. But this is particularly true as the same basic arguments once used to dispatch goodness and beauty have since been leveled at truth. With the deflation of truth into emotivism, such an epistemology becomes straightforwardly self-refuting.

    The same sort of thing that happens with "empiricism" happens with "naturalism." Both have been equated with accepting or rejecting science to such a degree that virtually no one says that they aren't a naturalist. Yet this just leads to a huge amount of equivocation, where "naturalism" can be either extremely expansive or "only reductive, mechanistic materialism." I think it is, in general, an increasingly useless term. It's also subject to Hempel's Dilemma, where "natural" just comes to mean "whatever there is good evidence for."
  • What is right and what is wrong and how do we know?


    Hamlet is pithy on this point, but I prefer Milton's Satan:

    A mind not to be changed by place or time.
    The mind is its own place, and in itself
    Can make a Heaven of Hell, a Hell of Heaven.
    What matter where, if I be still the same,
    And what I should be, all but less than he
    Whom thunder hath made greater? Here at least
    We shall be free; th’ Almighty hath not built
    Here for his envy, will not drive us hence:
    Here we may reign secure; and, in my choice,
    To reign is worth ambition, though in Hell:
    Better to reign in Hell than serve in Heaven.


    Note, Satan always relates everything to himself. He begins this first epic speech ostensibly talking about another demon, but then it is all "me mine me me" for most of it. He is almost always speaking in similes that relate the rest of reality to himself. God, very strikingly for Paradise Lost, never uses any similes. I think it's a brilliant, subtle, linguistic point about elements of the "New Science" and mechanistic philosophy of Milton's day, which makes everything a matter of private valuation.

    On the question of different laws holding at different times, if this is meant to suggest skepticism, I would just point out that the same holds for all issues of truth. For most of human history, people held disparate beliefs about the shape of the Earth, how infectious diseases spread, etc. What individuals believed on these issues has tended to be a function of the answers they grew up around. If your culture thought the Earth was flat, you probably did too. But surely this doesn't give us grounds to believe that there is "no fact of the matter," or that the shape of the Earth varies depending on which cultural context you are currently in.




    In practice, this seems to be what many "ethics" end up reducing to.

Count Timothy von Icarus
