• What is real? How do we know what is real?


    But does this principle also mean that everything you and I think and do is similarly poised between "determined by prior actuality" and "having no reasons at all"?

    Sure. Thinking is an act, a change. It either occurs for some reason or it doesn't. Thinking is a move from potentially thinking something to actually thinking it. If something is thought (or perceived) for no reason, there is no reason why it should be any one thought instead of any other.

    Apart from the metaphysical difficulties around causes versus reasons

    Here is a potential confusion. We might say we think or do something "for no reason at all," when what we really mean is that we acted without any rational deliberation. These aren't the same thing. We might affirm something because we are angry or hungry, or to curry favor, and call this irrational, but it would not be acting for no reason at all. Acting according to one's appetites is still acting for a reason. Conversely, just because there is a reason why someone does something (e.g., stealing because one is a kleptomaniac due to one's personal history, chemical imbalances, etc.) doesn't make it "reasonable."

    Causes and reasons are fairly synonymous in some senses. If by "reasons" you have "rational justifications" in mind, those wouldn't occur spontaneously either.

    As Kenneth Gallagher puts it for mobile, changing being:

    "For no being insofar as it is changing is its own ground of being. Every state of a changing being is contingent: it was not a moment ago and will not be a moment from now. Therefore the grasping of a being as changing is the grasping of it as not intelligible in itself—as essentially referred to something other than itself."

    it also raises the unpleasant specter of there being only one reasonable way to think and do.

    I am not sure how it directly relates to this. A metaphysics of act wouldn't, in general, tend to suggest this in any rigid sense. What it would suggest is that there are unreasonable ways to think.

    All reasonable ways of thinking either share something in common or they don't. If they share nothing in common, in virtue of what would they all be called reasonable? More to the point, what would these multiple, sui generis "types of rationality" look like? How do they relate differentially to truth?

    On the other hand, if all ways of thinking and acting are reasonable, then being unreasonable (or incorrect) is impossible, and being "reasonable" doesn't seem to mean much of anything. To think or act at all would be to be reasonable.
  • Are moral systems always futile?


    It's important for building up a coherent ethics and moving to a "metaphysics of goodness." From a practical perspective, I don't think it's necessary to go that deep (indeed, most people will find it annoying or impossible). That's sort of the great thing about it: it's useful even if you don't want to go all the way into the Doctrine of Transcendentals and the ultimate grounding of value.

    But prima facie it's quite hard to attack virtue ethics as at least a solid set of principles for self-development and moral action. To provide a strong rebuttal of virtue ethics requires demonstrating that, ceteris paribus (and not just in bizarre counterexamples), it isn't more desirable to be courageous instead of cowardly or rash, that prudence isn't better on average than being impulsive or indecisive, that having fortitude isn't better than being weak-willed, etc.




    One of MacIntyre's points is that any notion of "just desert" or human excellence requires some notion of man's telos. Otherwise, there is no standard by which to judge excellence.

    Contemporary liberal political theory tends to focus on rights instead of just rewards/punishments. It doesn't turn to just desert because liberalism makes man's telos a "private," individual question. This is at odds with how politicians and citizens in liberal states (on the left and right), as well as most lay philosophers here, actually talk. They constantly appeal to just desert and excellence. It's very hard not to. Even fatalists do this. Denying excellence and any human telos seems to be almost as difficult a feat to carry off as radical skepticism.

    Anyhow, even if we are skeptical of our knowledge of man's "natural ends," it will still be the case that at least some virtues will be a prerequisite for even discovering these ends (or discovering that no such ends exist). Hence, we can at least say that: "the virtues important to the good life of man are those virtues necessary for discovering the good life of man" (a catchphrase of MacIntyre's). Here is a paper sort of walking through this step by step. Plato's "being ruled by the rational part of the soul" turns out to be a fairly ideal metavirtue (a virtue required for the attainment of any other virtues, regardless of what they turn out to be). Also, because moral virtue is also epistemic virtue, even the relativist cannot simply write it off. They will also need some virtues in order to become confirmed in their relativism or anti-realism.
  • What is real? How do we know what is real?


    The leap from "no determinate causes" to "no reason at all" in particular still eludes me.

    "Act follows on being." What a thing does, how it interacts with other things or parts of itself, depends upon what it is. Otherwise, anything could essentially be (and act as) anything else.

    For something to have no determinate cause would mean that it is caused by "nothing in particular," as opposed to some determinate being that acts in a determinate way. But "nothing in particular" doesn't act in any determinate way for any reason. Being nothing, it cannot act according to its being, but must act for "no reason at all."

    This doesn't rule out stochastic action (e.g. some interpretations of quantum mechanics), but it does rule out action that is not determined by prior actuality. To give this up would be to give up on things having causes and on the world being intelligible. If potency can move to act for no reason at all, then there is no limit on how much this can occur, since it is completely undetermined. This gets at the radical skeptic's worry: "what if we and our memories just popped into existence randomly five seconds ago and will vanish the same way in another five seconds?" If potency moves itself, the skeptic has reason to be concerned.

    It's also the case that things are only knowable through their interactions. If interactions are not determinate, then neither is knowledge. If the being itself is unintelligible, epistemic nihilism follows.
  • What is real? How do we know what is real?


    There are hierarchies of distinctions. Distinctions like act/potency or part/whole are more general than any particular science (e.g. physics, logic, mathematics, etc.). It seems to me that nothing could be more general than being/existence.

    What is true for the higher level must be true for the lower. What is true of parts and wholes in mathematics cannot be distinct from physics, or physics from biology. If it were, we wouldn't have "science" but a great multitude of unrelated sciences, and no rational way to demarcate different sciences.

    "Tie back" raises the problem once again. Why does it do so? In what way? The priority of existence to human experience wouldn't guarantee the fidelity of our descriptions of that existence. Why does the key fit?

    See the post above. Just think through what it means to say that thought is not determined by being in any determinate way. You would be positing a distinction between thought and "objective reality," but then saying "but maybe objective reality has no determinate effect on thought." This undercuts the distinction itself, rendering it contentless.

    A realm of thought alone with random causes would be identical to a realm of thought influenced randomly and indeterminately by some extrinsic "realm of reality." Nothing would connect the two.

    How is it the case that the world, and our experience of it, is so structured? Does the PNC and its cousins represent spade-turning principles about both thinking and being, in the same way, and for the same reasons?

    I don't think there is any spade turning at all. All that is required is to affirm the priority of actuality over potency. Something actual (determinate) must move the mind from potency to actuality—must move it so that thought is one way and not any other. To deny this would be to say that thought occurs "for no determinate reason at all." This severs any real connection to a "being prior to thought," while also rendering the world unintelligible and philosophy ultimately pointless.

    But the world doesn't seem unintelligible. At any rate, at the end of the day being is either intelligible, and things do happen for reasons (act is prior to potency) or it isn't. If it isn't, all philosophy is wrong. Things can be any way at all.


    A bent key is no longer a key? And if you can bend it back it becomes a key again I suppose? Come on, the analogy doesn't make any sense.

    Oh, I agree. I don't want it to be aporetic at all. It's just a hard question to answer, when the analogy is extended to logical primitives.

    It's hard because it isn't a good analogy.

    Why does my key turn my lock? Because it fits in, lifts the pins, and gives me leverage to turn the deadbolt.

    And again. Why puff it up in this way? No one, least of all me, is saying anything like this.

    Because this is what the denial of the primacy of actuality (the priority of determinate being) to potency entails. It means that potency moves itself to actuality "for no reason at all," due to no prior actuality. This means things do not have causes or reasons.

    Either the mind is moved to actuality by prior actuality (by determinate properties of being) or it isn't. If it isn't, then it is moved by "nothing at all" (by nothing actual). Epistemic PNC follows from metaphysical PNC.
  • What is real? How do we know what is real?


    Now I think what I'm supposed to imagine next is that both questions get an explanation or a deconstructive answer that can resolve my puzzlement. To the first question, the reply is, "Because that's what 'your housekey' means. You can't have 'your housekey' without it having both those attributes: it fits your lock, and only your lock. So if you understand 'your housekey', there is no further question to be asked about it." To the second question, the reply is, "Because that's how an object comes to be yours: you possess it, it's been made for you and given to you. Also, since it's an important object in your life, you'll have it to hand, and shouldn't be surprised that this is the case. Are you still puzzled about why you live in a world in which all people fortunate enough to be housed have keys? You just do; that is your world; there's nothing special about you."

    But that isn't what "your housekey" means. If someone changes the locks on my door while I'm out, my key doesn't cease to be mine. And if I bend the key, it won't turn the lock, even though it is still the same key and the same lock. Nor do we possess keys "because we just do." The fact that you have a key in your pocket, and whether or not it fits your door, has intelligible causes. If we allow "why does my key turn my lock?" to become an aporia, then what won't be?

    This is, IMO, simply a bad analogy. Chess is not a good analogy for logic and truth. House keys aren't either. Not all analogies are appropriate. This is all I mean by "deflecting," the move into seemingly only tangentially related and wholly unjustified analogies.


    They resemble each other so closely yet have such different objects? Or am I wrong about that? Must I simply accept that the "key" of logic fits the "lock" of the world? Is it the case that, just as you can't have "my housekey" without understanding "my uniquely fitting key", you can't have (p v ~p) without understanding "our description of the world" or perhaps "what we do, talking about the world"?

    What would it mean for them to have different objects? It would mean that thought is arbitrarily related to reality as far as I can tell. How could anyone, ever, justify such a claim? A reality versus appearance distinction can only have content if there is something more than appearances. If there are only appearances (thought), then appearances are reality. Whereas, if reality is arbitrarily, randomly related to appearances then you don't really have one sort of being that encompasses both sides of the reality/appearance distinction, but two sui generis, unrelated "types of being." Appearances would be their own, discrete sort of being.

    Now let me ask, why should we posit any sort of unique, sui generis being that is unrelated to any thought or experience anyone has ever had, or could ever have?

    If reality, the actuality of things, does not determinately affect thought, it isn't worthy of the name. It's just some irrelevant, arguably incoherent bare posit.
  • What is real? How do we know what is real?


    PNC can be formulated as a metaphysical, epistemic, or semantic principle. Ultimately, the latter two will tie back to the first if it is affirmed, because being (existence) is prior to being experienced and being spoken about.

    As a purely logical principle, it might be considered normative, in that it would not be rational to affirm what is necessarily impossible, though no doubt people can affirm all sorts of things (whether they can actually believe them is another matter).

    As a metaphysical principle, it might boil down to the idea that being and not-being, existence and non-existence (in any determinate form under which we might signify or think about anything), are exclusive. They are exclusive because if being and non-being could be the same, things could both have and not have determinate existence, be both something and nothing, collapsing the most basic of all distinctions that allow anything to be any particular thing at all.

    The metaphysical principle is not primarily about truth then, nor affirmation and negation. Truth has to do with the relationship between the intellect and being. We do not say of a rock or tree that it is "true." Rather, things said or thought about rocks or trees are true. The metaphysical principle is about being (which obviously has a close relationship to truth since anything that is truly is).
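    To illustrate the purely logical formulation (a sketch of my own, in Lean, rather than anything from the thread): as a theorem of logic, PNC is provable from the meaning of conjunction and negation alone, which is part of why the metaphysical reading has to be argued for separately rather than simply read off the formalism.

    ```lean
    -- PNC as a logical principle: for any proposition p,
    -- it is not the case that p and not-p both hold.
    -- In Lean, ¬p is p → False, so given a pair ⟨p, ¬p⟩
    -- we obtain False by applying the right component to the left.
    theorem pnc (p : Prop) : ¬(p ∧ ¬p) :=
      fun h => h.right h.left
    ```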





    I'd suggest it is a law about use of language which is truth preserving.

    If nothing truly is and is-not, then this is necessarily so. One cannot preserve truth while affirming something that is never true. But it is never true that something is also nothing, that existence is non-existence, that being is non-being, etc.

    The difficulty here is that it is absolutely true that we might think we have identified contradictions where there are none. There might be qualifications and distinctions that dissolve apparent contradiction. But no qualification or distinction can dissolve "something is also nothing" and "existence is also non-existence." This is basic.
  • What is real? How do we know what is real?

    Usefulness isn't determined by some rule. That's kinda the point.

    Or apparently by anything more distinct than "what I currently desire." But logic involves what is true, so this makes truth simply a consequence of whatever one desires.

    If we do not accept that the frog can be both alive and dead, then a logic that allows this is not suitable.

    Conversely, if we do accept it, then such a logic would be suitable? So logic has to do with our current beliefs, and whatever we feel. But then discursive reason isn't about truth; it comes down to desire, feeling, and thus ultimately to power.

    Everyone agrees about the often contradictory truths announced by the state in 1984. They all play the same language game. To not play is made not useful; it's doubleplusungood. But that doesn't make the game's truth claims true.



    Reason is simply consistent thinking. You start with premises, and then work out what they entail.

    This is just a restatement of "reason is nothing but discursive ratio" without addressing any of the problems it entails (mentioned in the post you are responding to).

    For one, you note that we must "start with premises" to have reason at all. So, are our initial premises about entailment itself irrational because we must begin with them? Are they outside reason? Is all of reason based on unjustified (and unjustifiable) starting points then?

    Again, give me a discursive argument while assuming absolutely no inference rules. You can't, by definition. Without assuming that some things follow from other things there is nothing to link one assertion with another, and all you have is a bare posit. If reason is just discursive rule following, then such starting points aren't rationally justifiable and you get something like the appeal to bare "usefulness" above.

    "Logics" without LNC exist. Trivial logics that allow us to both affirm and deny anything expressible exist. Explosion exists. If reason is just rule-following, then there is nothing unique about LNC; it's just another rule that can be asserted or not, with consequences for the structure of whatever "game" we are playing.
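    The explosion mentioned here can be made concrete. As an illustrative sketch (my own, in Lean, not part of the exchange): once a single contradiction is admitted, every proposition whatsoever becomes provable, which is exactly what makes a trivial logic trivial.

    ```lean
    -- Principle of explosion (ex falso quodlibet):
    -- from p together with ¬p, any proposition q follows.
    -- `absurd` derives False from hp and hnp, and from False
    -- anything at all can be concluded.
    theorem explosion (p q : Prop) (hp : p) (hnp : ¬p) : q :=
      absurd hp hnp
    ```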

    I'd allow that LNC is self-evident. But for something to be self-evident requires that there be a non-discursive grasp on truth. Your justification above for why denials of LNC do not make sense, for instance, is straightforwardly circular. It has to assume what it sets out to prove.

    Either not all justification is discursive (some things are self-evident, and known non-discursively, e.g. what has often been meant by intellectus in this context), or all justification ultimately rests on unjustifiable and unknowable assertions.
  • What is real? How do we know what is real?


    If I'm misrepresenting you, surely you can lay out what determines usefulness then.

    This:



    This is an example of how the choice of logics might be made. Pick one that does the job you want done, or that will extend and enhance the conversation.

    Leaves "use" as an unanalyzable primitive. And a trivial logic absolutely can be "used" to do all sorts of things. It can be used, for instance, to prove that if a frog is alive it is also not-alive. That's a use. Some people don't want to "enhance conversation." They find domination and power most useful. Do logic and truth conform to this use as well?

    What I'd maintain a trivial logic cannot do is conform to what is actually true, which is the point of logic, not some amorphous "usefulness." If usefulness instead of truth grounds logic, what you have is relativism based on whatever is felt to be useful (more Nietzschean than where Wittgenstein was going, if you ask me).
  • Demonstrating Intelligent Design from the Principle of Sufficient Reason


    This was the original idea though; natural laws were "active." Hence the change in philosophy of nature/natural science from a language of "desires," "inclinations," etc. to one of "laws" and "obedience" (which, as you might suppose, had strong theological undertones and motivations). Part of what made Hume's initial attack on causality and induction more effective was that the dominant view at the time was one of a sort of "active laws" that were the source of regularity in nature. Hence, we see Hegel writing to contest this issue a few decades later, pointing out the preposterousness of the idea of some "natural law" shooing the planets into their orbits like rambunctious school children who would otherwise go shooting off any which way.

    Part of this was the occult nature of the early mechanistic picture. Gravity was itself "spooky action at a distance," as was electromagnetism later. Life and consciousness also posed difficulties. The corpuscular metaphysics of the day needed something to account for this non-locality.

    "Things act the way they do because of what they are" is a later reintroduction of material causation, while information theory helped bring back a sort of formal causality. This has helped causation make a robust comeback from the death Hume (and later Russell and others) wanted to pronounce over it.



    A lot has been written on the old sea battle. I think it might be more helpful to look at how Aristotle defines chance in the Physics. There, chance arises out of a confluence of the actions of discrete, relatively self-determining natures as they attempt to attain their natural ends. Contingency exists because of the locus of self-determination in things as they exist at some point. We don't have fatalism here, but neither do we have more expansive notions of libertarian free will. Beings' actions are what determine future moves from potency to actuality and modality is primarily handled in terms of potentiality.

    Aristotle also has falsity related to truth as contrary opposition as opposed to contradictory opposition (i.e. as affirmation and negation) and this opens up reformulations of LEM that potentially resolve some of the issues here.



    So you've just moved from the Aristotelian definition of "possible" where excluded middle is violated because "possible" means neither has nor has not the property,

    That isn't what it means to have a potential, though. Consider Aristotle's theory of the acquisition of knowledge in De Anima. A human infant potentially knows French. The reception of form through the senses allows this potency to move to actuality (first actuality). A rock, by contrast, cannot learn French. It lacks that potential. This does not require both having and not having the same property without qualification (which would be a contradiction, not a failure of excluded middle, at any rate).

    A person who doesn't know French has the power of learning French; they potentially have knowledge of French. The person who speaks French but is not currently speaking French has the property of being able to speak French at will (first actuality), whereas the person who is actively speaking French is actualizing that power (second actuality). These distinctions exist, in part, to avoid violating the excluded middle and to solve the Meno Paradox without Plato's implausible recollection theory.

    Nor does the notion of a possible world necessarily violate the law of identity. If identity were defined by accidents then you would be a different person when you were to the west of your house as opposed to the east. Indeed, if relation defined identity you could stay the same and still become a different person if something else changed relative to you.

    Arguably, the most expansive conceptualizations of possibility can start to erode identity, but this is because they end up collapsing any distinction between substance and accidents. So, for instance, if we say "a frog can potentially turn into a rabbit," we might ask "does a frog have the potential to become a rabbit?" It would seem not, in which case this "possibility" would really just be an act of sorcery, replacing one thing (a frog) with a different thing (a rabbit). But some thinkers (e.g. Ockham) were uncomfortable with even this sort of distinction, because it seemed like a limit on divine sovereignty (i.e., "things are whatever God tells them to be"). I'll allow that a sort of maximalist conception of possibility can run into trouble here, but a notion of possible worlds need not have this problem.
  • What is real? How do we know what is real?


    I'm not sure that qualifies as an answer, even generously. Unless usefulness is an unanalyzable bedrock?

    I've asked @Banno this question many times and never received anything but deflection. His notion of use seems to bottom out in a sheer voluntarist will. The need to speak to any causes of this "usefulness" is treated as some sort of pseudoproblem, for vague reasons. Apparently, any logic or notion of truth is "useful" just because others are "playing the same game." Yet this doesn't seem true. If everyone told you that commonly accepted logic proved that jumping off the Brooklyn Bridge would make you immortal, and you saw them jumping off the Brooklyn Bridge, you'd still be stupid for following them.

    For instance:

    Now we might be tempted to ask why p v ~p is so much more useful than p ^ ~p. But isn't one answer here just that we can do more with it?

    "We decide" if a frog can be both living and not living at the same time based on how useful this is to us? How this position would not result in an all-encompassing relativism is beyond me. Nothing grounds logic or truth except the bare assertion of "usefulness" and a sort of appeal to democratization. How the bishop moves is somewhat arbitrary. You can make a chess variant where it moves differently. Can you make a frog be alive and not alive by having a language community agree to speak of it in certain ways? No doubt it would be useful to have one's cake, to not eat it, and to eat it too. It's just that this is impossible regardless of how useful it would be.

    Nor does it make much sense that a frog could be both alive and not alive just in case we find it useful for the frog to be such. Seems to me it'll either be dead or alive without much regard for our uses.
  • What is faith


    It's often more difficult to come up with definitions for notions other than substance (things), since such concepts will always inhere in something else. For instance, one never has a "fast motion" without some thing moving, or "red" without there being something (light, a ball, etc.) that is red. More general principles will tend to be harder to define because they can be analogously predicated under many aspects.

    So, for instance, one set of definitions in this thread has focused on faith as the persistence of belief (or even "belief without evidence," although I find the latter sort of ridiculous). But the persistence of beliefs is arguably just one thing that results from faith. St. Paul's dramatic conversion on the road to Damascus is often considered to be a quintessential example of an event defined by faith, but in fact this event involves him abandoning most of his most firmly held beliefs—beliefs that he had been, up to that moment, willing to fight and risk his life for.

    On the flip side, the radical skeptic is also persistent in their beliefs. No evidence can move them from their skepticism, and yet this immobility is because they lack faith in anything.

    The mistake here might be akin to claiming that flight is defined by the flapping of wings. While the two go together, there can be flight without flapping or flapping without flight.

    Other definitions in this thread seem to use "faith" more akin to trust. People "have faith" in airplanes, doctors, etc. But this is perhaps more a sort of trust in people and institutions, as opposed to the deeper uses of "faith." Again, we might suppose there is a relationship here of the sort where faith entails a sort of trust but is not reducible to trust.

    Faith, when discussed as a theological virtue, at least suggests this. It suggests fortitude in assent to the illumination of faith (perseverance and immobility), and it suggests trust, but goes beyond either of these.
  • The Forms


    It's not too far of a stretch to see how this suggests biosemiosis (signs and sign relations) as fundamental to cognition (and indeed to organic processes generally.) So here we're encountering the metaphysics of meaning, to which Platonic and Aristotelian principles still have considerable relevance.

    There is a historical relation too in that biosemiotics and the invocation of semiotics in physics almost always involves the tripartite semiotics received through Charles Sanders Peirce. But Peirce was himself a lifelong student of the Scholastics, and received his semiotics through them. His model, although it has some very important new developments, still looks just like Saint Augustine's semiotic triad in De Dialectica in its main structure and elements.

    John Deely's "Four Ages of Understanding" traces this history. It's an interesting work, although it is pretty deficient as a history of philosophy outside of tracing the history of semiotics. He writes off Neoplatonism entirely, despite its huge influence on the reception of Aristotle he is speaking to, which I found sort of odd.
  • The Forms


    Seeing is a power of humans, one every sighted person is innately familiar with. I don't think a reductionist account is the only true account of sight, nor necessarily the best. Perhaps more importantly, I don't think one must "know everything in order to know anything." The continued existence of some mystery vis-à-vis a phenomenon does not preclude us having any knowledge about it.

    But why appeal to the complexity of the brain in particular? Sight also involves light, and light waves/photons have more than enough mystery to make the same sort of argument. This is the problem with "neuroscience as first philosophy." It's an even worse candidate than epistemology because it is itself reliant on the principles of other sciences (e.g. physics).

    Anyhow, this still seems to be misunderstanding the concept of form. The form is, in part, the actuality in things that is responsible for their appearance. Being is prior to interaction. Something needs to first exist in order to interact so as to appear a certain way. Appearance—perception—is also prior to the development of language. Form is not primarily about explaining language, although it might do that too. It's about what must lie prior to language and perception (else our determinate perceptions would be caused by "nothing in particular," in which case they essentially wouldn't have causes at all). The form of things isn't just their appearances though (which you seem to be suggesting), nor what is said of them, but rather is upstream of each of those, because being (existing) is a prerequisite for interaction and being known.

    In the broadest sense, a thing's form is what makes anything any thing at all, its particular actuality or "act of existence" by which it is some thing and not "nothing in particular." If form were instead, as you seem to suggest, merely "what brains perceive and talk about," then "brains" themselves would have no true existence as anything distinct, and so would have no determinate powers, ruling out the very possibility of a "science of brains." Brains themselves would be merely "something brains perceive and talk about." This appears to me to be a rather vicious circle.

    Besides this, as was pointed out above, form doesn't just explain perception, but the ability to reason about things and to attain intellectual knowledge (as opposed to sense knowledge). Things are not just perceivable, they are also intelligible. Form is what is communicated to the intellect such that things are known as more than mere collocations of sensation.

    But, perhaps more to the point, even if one rejects any notion of form, it still wouldn't be the case that form is just about what is perceived and spoken of. Metaphysicians might be wrong, but they would be wrong about form as a basic metaphysical principle, not as a property of perception.
  • The Forms


    Right, Perl is very good on this. I suppose one of the difficulties here is the modern phobia that appearances might be arbitrarily, randomly related to reality. Now, to appear a certain way to man is to act in a certain way, and since "act follows on being," we might suppose that things must reveal something of their reality in their appearances. The classical assumption here is that if something acts on (interacts with) man in some particular way, then the definiteness of this interaction, that it is one way instead of any other, must be attributable to some prior actuality in both the thing and in man. Otherwise, the phenomenological elements of the experience would be what they are "for no reason at all," or, on the side of the acting thing perceived, it would be acting for "no reason at all."

    But I feel pretty safe in this assumption. If things do happen for no reason at all, if the world is not intelligible, then philosophy and science are a lost cause. However, they certainly do not seem to be lost causes.

    One interesting thing to note is that this fear of arbitrariness and randomness is almost always placed on the "world/thing" side of the ledger. Yet the elevation of potency over act that such a fear presupposes could apply just as well to man himself. Maybe man, his perceptual organs, his cognition, etc., is what acts entirely arbitrarily in relation to the world? We would each be "hallucinating our own world" for "no reason at all, according to no nature or prior actuality." If this seems implausible, which I think it does, I am not sure why flipping the same concern over to the "world" side should be any more plausible.
  • The Forms


    But anyone using the word 'round' is using it because they are engaging with the world around them and they see 'round' things.

    Indeed, that was precisely my point.

    Imo, if we want to explain the actual reasons why we use the word round, you have to talk about an immensely complicated brain and how it interacts with the rest of a very complicated world in an intractable manner - from the perspective of our own intelligibility - to infer something about how it represents or embodies structure out in the world.

    I don't think neuroscience is any more properly first philosophy than philosophy of language, particularly if it leads to the radical skepticism you lay out in the rest of the post (a skepticism at odds with plenty of neuroscience itself).

    For instance, the claim that "none of us actually know how or why we personally are able to perceive and point out 'roundness' in the world" is simply not one many people, including scientists, are going to agree with. There are great mysteries related to consciousness, but how (and that) things possess shape, how their shape is communicated through intervening media to a person, and how the sense organs engage this information, is well understood in some respects. At any rate, doubts that "anything is really round" involve a quite expansive skepticism.

    However, even if we grant this skepticism, it wouldn't follow that the very diverse, well-developed tradition of metaphysical theories endorsing a notion of form would be rendered contentless. I'm not following this jump at all. This would be like saying that, because different interpretations of quantum mechanics are not currently decisively testable against one another, they fail to say anything unique about the world at all. A metaphysics of form might be wrong (although skepticism precludes even saying this much), or it might be unjustified, but it isn't "not saying anything," or a theory about word use.


    So I guess my conclusion is that appealing to forms and word-use is not meaningfully different.

    One might indeed criticize a metaphysics of form in any number of ways, but to say that such a broad and well-developed area of philosophy is contentless would seem to simply demonstrate a total lack of familiarity with it.

    C.S. Peirce, John Deely, John Poinsot, etc. have very well developed theories of the causality particular to signs and the way in which form is communicated. These theories might be misguided, but they are not reducible to "word use." Indeed, the most popular criticism of the via antiqua by those who were well acquainted with it (e.g. William of Ockham) was that it was too complex, not that it failed to say anything.

    For example, Nathan Lyons' "Signs in the Dust":

    [The] particular expression of intentional existence—intentional species existing in a material medium between cogniser and cognised thing— will be our focus...

    In order to retrieve this aspect of Aquinas’ thought today we must reformulate his medieval understanding of species transmission and reception in the terms of modern physics and physiology.11 On the modern picture organisms receive information from the environment in the form of what we can describe roughly as energy and chemical patterns. 12 These patterns are detected by particular senses: electromagnetic radiation = vision, mechanical energy = touch, sound waves = hearing, olfactory and gustatory chemicals = smell and taste.13 When they impinge on an appropriate sensory organ, these patterns are transformed (‘transduced’ is the technical term) into signals (neuronal ‘action potentials’) in the nervous system, and then delivered to the brain and processed. To illustrate, suppose you walk into a clearing in the bush and see a eucalyptus tree on the far side. Your perception of the eucalypt is effected by means of ambient light—that is, ambient electromagnetic energy—in the environment bouncing off the tree and taking on a new pattern of organisation. The different chemical structure of the leaves, the bark, and the sap reflect certain wavelengths of light and not others; this selective reflection modifies the structure of the energy as it bounces off the tree, and this patterned structure is perceived by your eye and brain as colour....

    These energy and chemical patterns revealed by modern empirical science are the place that we should locate Aquinas’ sensory species today.14 The patterns are physical structures in physical media, but they are also the locus of intentional species, because their structure is determined by the structure of the real things that cause them. The patterns thus have a representational character in the sense that they disperse a representative form of the thing into the surrounding media. In Thomistic perception, therefore, the form of the tree does not ‘teleport’ into your mind; it is communicated through normal physical mechanisms as a pattern of physical matter and energy.

    The interpretation of intentions in the medium I am suggesting here is in keeping with a number of recent readers of Aquinas who construe his notion of extra-mental species as information communicated by physical means.18 Eleonore Stump notes that ‘what Aquinas refers to as the spiritual reception of an immaterial form . . . is what we are more likely to call encoded information’, as when a street map represents a city or DNA represents a protein. 19... Gyula Klima argues that ‘for Aquinas, intentionality or aboutness is the property of any form of information carried by anything about anything’, so that ‘ordinary causal processes, besides producing their ordinary physical effects according to the ordinary laws of nature, at the same time serve to transfer information about the causes of these processes in a natural system of encoding’.22

    The upshot of this reading of Aquinas is that intentional being is in play even in situations where there is not a thinking, perceiving, or even sensing subject present. The phenomenon of representation which is characteristic of knowledge can thus occur in any physical media and between any existing thing, including inanimate things, because for Aquinas the domain of the intentional is not limited to mind or even to life, but includes to some degree even inanimate corporeality.

    This interpretation of intentions in the medium in terms of information can be reformulated in terms of the semiotics we have retrieved from Aquinas, Cusa, and Poinsot to produce an account of signs in the medium. On this analysis, Aquinas’ intentions in the medium, which are embedded chemical patterns diffused through environments, are signs. More precisely, these patterns are sign-vehicles that refer to signifieds, namely the real things (like eucalyptus trees) that have patterned the sign-vehicles in ways that reflect their physical form.24 It is through these semiotic patterns that the form of real things is communicated intentionally through inanimate media. This is the way that we can understand, for example, Cusa’s observation that if sensation is to occur ‘between the perceptible object and the senses there must be a medium through which the object can replicate a form [speciem] of itself, or a sign [signum] of itself’ (Comp. 4.8). This process of sensory semiosis proceeds on my analysis through the intentional replication of real things in energy and chemical sign-patterns, which are dispersed around the inanimate media of physical environments.

    Or there is John Deely's work, or something like Robert Sokolowski's "Phenomenology of the Human Person," etc., all of which include quite determinate statements on how form ties into perception (and language downstream of perception).

    Anyhow, take a gander at: https://thephilosophyforum.com/discussion/comment/987328 and I'll ask, "how is scientific knowledge possible if principles don't exist?"

    Second, do things have any determinate being? If so, that's all form, in the broadest sense, is saying. To be skeptical about form in this broad sense seems to entail radical skepticism; it's to say "the properties of all things are unknowable, and indeed we cannot know if they have any determinate properties at all." But to the skeptic, I'd ask: "if things have no determinate properties, why should they cause determinate perceptions?" Particularly, given the appeal to "brains" (which never produce consciousness without constant interaction with a conducive environment), why should brains ever produce one sort of cognition instead of any other if brains do not possess a determinate nature and determinate properties? There can be no "neuroscience" if there is nothing determinate that can be said about brains.

    Is there actually much difference between my 'structure' and your 'forms' (in the most generic sense of structure)? Maybe I just prefer the former word without the connotations of the latter... other similar words might be 'patterns', 'regularities', etc, etc.

    Form is often described as "intrinsic structure" or "organization." Appeals to "regularities" are often reductive though, tending towards smallism. While some invocations of form are reductive, many are not.

    Paul Vincent Spade's article "The Warp and Woof of Metaphysics" is a pretty good introduction to Aristotelian essences (an example of intrinsic structure) and how they tie into predication, for instance: https://pvspade.com/Logic/docs/WarpWoo1.pdf

    I actually mentioned the common use of "regularities" and "patterns" (always in scare quotes!) earlier in this thread. Either the Kant-like (Kant-lite?) skepticism here is absolute, and we get subjective idealism, or it isn't, and those terms must have some determinate form and content.
  • The Myopia of Liberalism


    I'm not sure if Mill was necessarily motivated by racism. His initial examples on slavery are from ancient Europe. He just has a view of "natural man" as wholly without liberty that I find questionable.

    Looking over the historical moments you cite, all I can do is repeat that such a picture would have us believe that some monolithic thing called liberalism never gave a damn about morals or justice or good government, caring only for individual freedoms no matter the cost, tearing down whatever was necessary to achieve them, etc., etc.

    I read Rawls a pretty long time ago now, and I am not sure if I had a chance to finish the book. I don't think liberalism "never gave a damn about morals, justice, or good government." Liberal theorists often write about these at great length. They just tend towards defining them in terms of the individual. Good government is a priority, and can be given extremely expansive focus in progressive liberalism, but it's also there primarily to enable the freedom of the individual to flourish. There is a marked contrast here with classical and Marxist political theory. Individual freedom is generally raised up over morality because moral questions are privatized to the extent possible (with "rights" holding down what morality enters into the public sphere). This only makes sense. If you have a theory of government that avoids giving answers on man's telos, instead making this a private, individualized matter, then what is important is enabling the private exploration and attainment of that telos, whatever the individual determines it to be.

    Lots of liberal theory sounds utopian, that's true. I remember thinking that with Rawls. But this is also true of plenty of Marxist theory, and even some "Third Position" crypto-fascist theory that stays away from racism and militarism. It's certainly true of Hegel as well. Obviously, no theory is realized perfectly, but I think a useful question is whether contradictions or intrinsic tensions in a theory lead to problems in its implementation. I think this is the case in both liberalism and Marxism. It's perhaps most obvious in conservative liberalism, which necessarily sidelines and renders irrelevant the very cultural and religious institutions it most wants to conserve by excluding them from the state, and thus from the broadest conception of the common good (through a commitment to "small government"), while also fetishizing a market that tends to bulldoze culture.

    I do recall one specific explanation from Rawls to the effect that his theory must exclude a notion of "just desert." This tends to hold just as true for conservative liberal theories, which instead look at rights. By contrast, progressive and conservative politicians and citizens are constantly justifying their platforms in terms of just desert. The difficulty for liberal theorists is that, once the question of man's ultimate telos is privatized, one loses any ability to ground standards of excellence, and thus of desert.

    This is maybe the most important contradiction, in that it seems to contradict human nature. I have seen many an avowed moral anti-realist, and many people who claim that political theory should not be based on morality, but I've never seen them discuss any political topic at length without falling into continually invoking standards of excellence and desert. A focus on rights alone leaves political theory chestless (and arguably, losing any sense of telos also leaves any notion of rights critically undermined; Fukuyama, a great advocate of liberalism, expresses just this concern at the end of The End of History and the Last Man).
  • The Forms


    Principles might be a better way to understand it.


    The epistemic issues raised by multiplicity and ceaseless change are addressed by Aristotle’s distinction between principles and causes. Aristotle presents this distinction early in the Physics through a criticism of Anaxagoras.1 Anaxagoras posits an infinite number of principles at work in the world. Were Anaxagoras correct, discursive knowledge would be impossible. For instance, if we wanted to know “how bows work,” we would have to come to know each individual instance of a bow shooting an arrow, since there would be no unifying principle through which all bows work. Yet we cannot come to know an infinite multitude in a finite time.2

    However, an infinite (or practically infinite) number of causes does not preclude meaningful knowledge if we allow that many causes might be known through a single principle (a One), which manifests at many times and in many places (the Many). Further, such principles do seem to be knowable. For instance, the principle of lift allows us to explain many instances of flight, both as respects animals and flying machines. Moreover, a single unifying principle might be relevant to many distinct sciences, just as the principle of lift informs both our understanding of flying organisms (biology) and flying machines (engineering). 

    For Aristotle, what are “better known to us” are the concrete particulars experienced directly by the senses. By contrast, what are “better known in themselves” are the more general principles at work in the world.3,i Since every effect is a sign of its causes, we can move from the unmanageable multiplicity of concrete particulars to a deeper understanding of the world.ii

    For instance, individual insects are what are best known to us. In most parts of the world, we can directly experience vast multitudes of them simply by stepping outside our homes. However, there are 200 million insects for each human on the planet, and perhaps 30 million insect species.4 If knowledge could only be acquired through the experience of particulars, it seems that we could only ever come to know an infinitesimally small amount of what there is to know about insects. However, the entomologist is able to understand much about insects because they understand the principles that are unequally realized in individual species and particular members of those species.iii

    Plato's Theory of Forms is a particular metaphysical explanation of unifying principles. Whether it was even originally intended as the sort of naive "two-worlds Platonism" that is often associated with Plato today is an open question (I, for one, am doubtful). But either way, Aristotle and then the Neo-Platonists make some useful elucidations of the theory (how much they are really altering it is also an open question).

    Plato didn't think there was a form for every generalizable term. This is why examples using artifacts are not good counterexamples for pointing out problems with the Theory of Forms. One of the points of the theory is to be able to distinguish between substance and accidents/relation, but in artifact examples these become easily confused. Hence, books might not be a great example. Plato's student Aristotle rejects the idea that Homer's Iliad would have a definition and also casts doubt on even simple artifacts having essences, and I think he is in line with his old master here. If there has to be a form for every term, and there are potentially infinite, relatively arbitrary terms, then the forms would be useless for doing what they are called in to do.

    Plato rejects materialist attempts to explain everything on the basis of that of which it was made. According to Plato, the entities that best merit the title “beings” are the intelligible Forms, which material objects imperfectly copy. These Forms are not substances in the sense of being either ordinary objects as opposed to properties or the subjects of change. Rather they are the driving principles that give structure and purpose to everything else. At Sophist (255c), Plato also draws a distinction between things that exist “in themselves” and things that exist “in relation to something else”. Though its precise nature is subject to interpretation, this distinction can be seen as a precursor to Aristotle’s distinction between substances and non-substances described in the next section, and later followers of Aristotle often adopt Plato’s terminology.

    https://plato.stanford.edu/entries/substance/
  • The inhuman system


    To start, I just wanted to say that it's fun to discuss things with you.

    Thanks, you as well.

    I have mostly read stuff on Taoism that is tied to its contemporary formulations, so that might be the discrepancy. I don't know much about the historical development and it's quite possible that the focus on self-cultivation comes through later thinkers and cross-pollination between Confucius' tradition and Buddhism, both of which have a sort of virtue ethics.

    I've seen Chuang Tzu presented as laying out a sort of model for self-cultivation in some anecdotes. For instance, there is a butcher who becomes incredibly skilled in his trade, and it is because he has ceased to try to impose a sort of false constraint on his art, or even to "see a cow" (IIRC), but has instead learned to "flow" totally with nature. This interpretation might rest on later additions though.

    This is, at first glance, very different from the Western focus on knowledge of intelligible forms, but I don't think it's quite as different as it would seem. Aristotle is, for instance, in very many ways, creating a process metaphysics in the Physics. Unchanging form is an abstraction (and in later formulations, only "static" in the infinite, simple being of the Logos, but never in the world of nature). And there is also the very strong conception of nature as a whole, substantial forms being only what makes things relatively more or less unified, whole, and intelligible, and the goal of action in harmony and accord with nature (which tends to slip out of modern ethics).
  • The Forms


    It's not that all predication is equivocation, but that ordinary language is flexible and dependent on context.

    Right, but if one does not distinguish between univocal and equivocal usage then common facts such as "running involves legs," become unequivocally false because refrigerators, rivers, roads, and noses all "run." Ordinary language involves equivocal, analogical, and univocal predication. Form, the actuality of things, relates to the latter two.

    This is not a threat to logic, which can happily rely on univocal terms.

    Maybe not to formal logic, but the primary use of logic, including in the natural sciences, uses natural language. So, it would be problematic if equivocity rendered something like natural language syllogisms invariably subject to vagueness.

    But would this leave formal logic in a good place? Any term used in formal logic, say M for "man," couldn't correspond univocally to any natural language usage of "man" if all terms were subject to the same vagueness as "game." Formal logic and natural language would be talking about different things.

    Our understanding of words is shaped by practical use, not metaphysical essences. In this view, terms like "round" or "red" don't require metaphysical forms to function meaningfully in context, nor does logic.

    And what determines practical use? Here is the argument: existence is prior to speech. There are round things, ants, trees, etc. prior to speech, and prior to the existence of any "language community." For example, the Earth is spherical, and it was spherical prior to man deciding what the token 'spherical' should mean. It was true that the Earth was spherical prior to any man declaring it as such.

    Unless "practical use" is determined by nothing at all, or by nothing but the sheer human will, as uninformed by the world around it, then it will be informed by the being of things (through the senses). A term like "round" is "practically useful" precisely because round things exist prior to the creation of the term (or of language itself). Children who have not learned the word "round" presumably still experience round things (and indeed they are capable of of sorting shapes prior to learning their names). Experience is prior to naming. But the form is called in to explain how things are round, ants, trees, etc., not primarily to explain how words work.

    Nor does realism suppose any sort of metaphysical super glue between tokens and forms in the way you present it. Indeed, Plato has Socrates spend a lot of time exploring how people mean quite different things by using the same token. If Plato held the naive view you attribute to him, then the opening books of the Republic, where "justice" is being defined and used in radically different ways, shouldn't exist.
  • What is real? How do we know what is real?


    You seem to be trading on an equivocal idea of intuition. Self-evidence obtains when something is true by definition. We don't need intuition to see it, it is obvious by virtue of the meaning of the terms. If you make a statement that contradicts itself, it is clear that you haven't asserted anything because you have asserted two things which cancel each other out.

    I'll agree that there are multiple notions of "intuition" and "understanding" that are unhelpfully related but distinct. I was referring to "what is self-evident," which is often attributed to "intuition" because it does not rely on discursive justification, but is rather the starting point for discursive justification (and in some philosophy, also its ending point).

    I don't know if I would necessarily identify the self-evident with "what is true by definition." If one takes "definition" in the Aristotelian sense (i.e., things have a proper definition), then definitions are generally not self-evident, whereas if one takes more nominalist accounts, then the definition might be nothing more than stipulation (which must be empirically arrived at, and so is not self-evident either).

    Maybe I should have said "intellectus," but I don't think many people are familiar with that term.



    We can theorize further and posit noesis, direct knowledge, innate intelligibility and so on, but we have no way of testing those theories.

    True, but this is equally the case for the opposite claim that reason is nothing but discursive ratio/computation. And that claim faces the problem of being wholly unable to explain the phenomenological aspects of understanding and knowledge (hence eliminative materialism), nor how "something computes so hard it begins to have first person experiences and understanding." So too for the symbol grounding problem, the Chinese Room, etc.

    More radical forms of empiricism start from the presupposition that the phenomenological side of cognition is "off limits," but when this has tended to bottom out in either the denial of consciousness (eliminativism) or the denial of truth and almost all forms of knowledge, one might question whether empiricism has become self-refuting at this point (or at least proven to be a bad epistemology). At any rate, even empiricists tend to accept that empiricism is not justifiable in the terms of empiricism. But the difficulty here is the tendency toward equivocation between "empiricism" as any observation and experiment at all (in which case the Scholastics and even the Neoplatonists are "empiricists") and the more radical tradition coming out of Hume and continuing in logical positivism, etc. This equivocation is used to argue that rejecting "empiricism" is tantamount to rejecting "science and the scientific method," even though plenty of famous inventors and scientists have rejected the more radical philosophical school, but not "science."



    Such a trivial logic would, by the very fact that no one agrees with it, have the singular misfortune of being quite useless.

    Why does no one agree with it? You seem to be saying that trivial logics are useless just because no one accepts them, but then this leaves totally unexplained how the choice of logic is made in the first place. Presumably, the world did not begin populated by people, with certain logics already popular or not popular.

    How would it be "useful" to affirm anything and everything that can be expressed just in case other people also agreed to do the same? There seem to be obvious problems with this.

    Rather, I'd say that trivial logics are unpopular because they are useless. People's preferences for different logics have to do with how they represent being.
  • GETTIER – Why Plato Would Reject Justified True Belief (a Platonic dialogue)


    Do you mean Borges' Library of Babel or the story from Genesis or something else? (The first is one of my favorite ways to think about this sort of thing).
  • The Forms


    Right, but it's worth pointing out that this is sometimes denied (i.e., there is no truth about "what a thing is") and people still try to do ontology with this assumption. Although, when they—as they often do—appeal to "regularities," "patterns," and "constraints" that are prior to the act of "naming things what they are," I do think there is a problem, since these terms themselves either have some form or are simply contentless hand-waving to avoid a slip into an absolute voluntarism (where the will makes anything what it is by a bare act of choice).

    IMO, this mostly comes down to the elevation of potency over actuality. When the order is inverted, then one always has limitless possibility first, and only after any (arbitrary) definiteness. Voluntarism plays a large role here. It becomes the will (of the individual, God, the collective language community, or a sort of "world will") that makes anything what it is through an initial act of naming/stipulation. But prior to that act, there is only potency without form and will.

    Presumably though, you need knowledge of an object in order to have any volitions towards that object. This is why I think knowing (even if it is just sense knowledge) must be prior to willing, and so acquisition of forms prior to "rules of language," and of course, act before potency (since potency never moves to act by itself, unless it does so for no reason at all, randomly).

    Edit: I suppose another fault line here that ties into your post (which I agree with) is: "truth as a property of being" versus "truth solely as a property of sentences." In the latter, nothing is true until a language has been created, and so nothing can truly be anything until a linguistic context exists. That might still require form to explain though, because again, it seems some knowledge must lie prior to naming.
  • The Forms

    “In virtue of what are all just acts just, or all round things round?”—is itself misleading. It presumes there must be some essence or metaphysical commonality underlying all uses of a term. But why should this be so? Why should there be a thing that is common to all our uses of a word? Why should we not, for example, use the same word to name different things? And if one looks at the uses to which we put our words, it seems that this is indeed what we do. The red sports car and the red sunset are not the same colour, despite our using the same word for both. The round hill and the round ring are quite different.

    Yes, those would be instances of equivocal predication or pros hen predication, etc. What's the claim here though, that all predication is equivocal? Then you don't have logic. That terms are never predicated univocally? Then you also don't have logic.

    But a basketball and a baseball are not spherical in different ways, nor is red paint splashed on a wall a different red than a hockey stick painted with the same red paint. "Some predication is equivocal" is not a good argument for "no predication is univocal."

    Who in "analytic logic" says otherwise?

    There simply need be nothing common to all red or round things. And perhaps the same is true for the Just. Rather there may be many, diverse and overlapping similarities. The classic example here is of a game: we use the word "game" quite successfully despite not having at hand a rule that sets out for us what counts as a game. And indeed, it seems that were any such rule proposed, it would be a simple matter to find or invent a counter instance, a game that does not fit the rule. Yet we manage to use many, many words without access to such rules.


    Is all predication supposed to be vague in this way? That seems pretty problematic. That'd be supposing all terms are vague.

    A basic syllogism such as:

    Socrates is a man.
    All men are mortal.
    Therefore, Socrates is mortal.

    Would be in jeopardy if "man" is some vague notion of this sort. Certainly, one couldn't do geometry this way. Imagine trigonometry with a triangle defined in this way.
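
    To make the worry concrete, here is the syllogism rendered in standard first-order logic (my own illustrative formalization, not something from the original post); the inference goes through only if "man" is predicated univocally in both premises:

    ```latex
    \forall x\, \big(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\big) \quad \text{(All men are mortal)}
    \mathrm{Man}(s) \quad \text{(Socrates is a man)}
    \therefore\ \mathrm{Mortal}(s) \quad \text{(Socrates is mortal)}
    ```

    If "Man" in the first premise and "Man" in the second were merely family-resemblance terms with no common content, the middle term would shift in meaning between the premises, and the argument would commit the fallacy of four terms rather than instantiating a valid form.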

    We might see this more clearly by asking how we learn what is red, what is round, or what is just. We don't learn to use these words by becoming familiar with a form for each. We learn to use these words by engaging in the world and with those around us

    This is simply question begging if taken as an argument against realism though. Perception, including perception of language, involves forms in realism. The form is what is transmitted to the intellect. As an argument (as opposed to say, just laying out an alternative theory, I'm not sure of your goal here) this would be akin to: "nominalism is correct because nominalist theories say so."

    Anyhow, how does one figure out how to "apply a rule for the word round" if there are not first round things? The form is, first and foremost, called in to explain the existence of round things; second, our perceptions of them; and then language. It is not primarily about language, because language was never considered "first philosophy" before the advent of analytic philosophy (i.e., "being and thought are prior to speaking"). People must be able to identify roundness to use words to refer to it.
  • The Myopia of Liberalism


    BTW, I think this is fair if the measuring point is 1925 (a century). But what if we use 1975, half a century? Or the end of the Cold War, when neo-liberalism was really taking off and liberalism ceased to have any competition to "keep it honest." Certainly, there has been some expansion of rights since then, but also a lot of backwards steps.

    Since then, median wages across the developed world have stagnated despite gains in productivity from the information revolution, while wages for lower income workers have actually tended to fall in real terms. Economic growth has tended to almost totally benefit a small elite, and economic mobility has been declining. The Black-White wealth gap in the US expanded to become larger than under Jim Crow, while America's underclass endures homicide rates above those of the Latin American states used to justify refugee status (or states with active civil wars in some cases).

    Plenty of other similar stuff I'm sure you're familiar with. My point would be that if a trend extends across half a century, and appears to be accelerating, it isn't a hiccup.



    Mill was against the institution of slavery as practiced, on liberal grounds. However, in "Considerations on Representative Government," he calls for compulsion over “uncivilized” peoples in order that they might lead productive economic lives, even if they must be “for a while compelled to it,” including through the institution of “personal slavery.” This is very similar to Locke's justification of slavery as "freedom from indolence," many of the American Founders' justifications of slavery as "temporary but necessary," and liberal justifications of colonialism up through the 20th century.

    I don't think these are equivalent to something like Kant or Hegel's statements on race, because these sorts of justifications were used in revised form by liberal theorists and statesmen through the end of colonialism, and the justifications of some of the more unsavory parts of neo-liberalism look very similar. If freedom is primarily (or at least largely) the freedom to consume, then "economic growth" becomes a justification for all sorts of actions because it is "emancipatory in the long run." Also, these justifications come directly out of the liberal vision of freedom and the "state of nature" anthropology; they aren't some sort of ancillary comment tacked on to theories that would otherwise negate such views.

    The justification for colonialism also looks a lot like the justification for tearing down the Church and forcing monks and nuns out of the monasteries and convents in France, Italy, and Spain (or the mass executions of clergy in France). The people have to be "freed from custom" to live more individualist, productive lives. Hence, it isn't just a sentiment grounded in racism; the same logic extended to the Infernal Columns' actions in the Vendée in France.
  • The Myopia of Liberalism


    OK, as long as we don't equate these alleged problems with "the apocalyptic decline of Western civilization"!

    I maintain that Western Civilization has been in serious decline since the death of Marcus Aurelius and the ascension of his son to the purple! :cool: :rofl:

    Extreme cherry-picking, wouldn't you say? :smile:

    No, and it seems absurd to me to call this cherry picking when all the major liberal states engaged in absolutely massive colonial projects that they justified in the terms of liberalism, for most of their history, across most of the world's landmass, affecting most of the human population, largely stopping only when military defeat forced them to stop (and arguably, they just continued it by other means under neo-liberalism via less direct coercive measures to force liberalization, e.g. in Eastern Europe, Asia, Latin America, etc., including backing armed groups, coups, etc.)

    Locke and Mill I pick because they are foundational liberal theorists, but I could just as well point to America's Founding Fathers (the oldest example of liberalism in practice) or the justification of colonial rule and slavery by the liberal scions of the French Revolution. The big drive for abolition (which only targeted the most egregious practice of this sort) came from Christianity (as it did in Europe at the end of antiquity, where slavery was largely abolished), not liberalism. The "Battle Hymn of the Republic" is not a secular or deistic ode, for instance.

    Did some liberals object to these practices on liberal grounds? Yes, particularly to the exceptionally egregious institution of African chattel slavery, although even most of those who balked at hereditary slavery nonetheless championed colonial expansion over the rest of the American continent on liberal grounds (and the subsequent genocide of the native population).

    But saying that just because there were some unheeded liberal voices against colonial expansion across North America, into India, into almost all of Africa, into China (attempted but partly repelled), and the Middle East, or say, opening Japan to trade with artillery fire, etc., that this isn't "real liberalism" would be a bit like saying collectivization wasn't "real communism" because a handful of communists opposed it.





    Democracy can constrain liberalism?

    I think so. Desegregation was unpopular, even in the North, where it largely had to do with busing for schools (e.g. riots in Boston). It would have lost as a ballot question, even if African Americans had been allowed to vote. If you're familiar with the way democracy interacted with sectarian politics in pre-war Lebanon or post-war Iraq, I think you can find lots of examples of this sort of tension between democracy and individual liberty.

    The Western liberal states have benefited from largely homogenous populations, so they haven't had these same tensions (lately). But that's because of both huge, sometimes coercive campaigns to create homogeneity and titanic rounds of ethnic cleansing to sort people across Europe.

    Besides, those that are sceptical about democracy (or neoliberalism) are nearly all simply angry about how badly the whole system is working currently: that it's only the rich or those close to power that benefit, or that there is corruption or inefficiency or useless bureaucracy. It's really only a very few people that are inherently against democracy, as the vast majority believe that "the people" are still quite rational and capable of handling a democracy.

    Right, skepticism over "illiberal democracy" doesn't tend to result in a wholesale abandonment of democracy. Rather, complaints against Brexit, Trump, Erdogan, Orban, etc. are generally against "populism" and a democracy that is "too direct." Hence, advocacy for changes like a switch to closed-list parliamentary systems, where party elites pick the MPs and people just vote for a party and its platform, or advocacy for ranked-choice voting specifically as a means to preclude radical shifts in policy (both of these might be good ideas, BTW).

    For progressives, checking democracy generally involves strengthening the reach and independence of the administrative state (the "deep state" of career professionals, technocracy), often at the expense of the directly elected executive, and using courts (and so appeals to other elites, judges) to expand rights that voters cannot overturn. Or progressives recommend something like a city manager system, where the executive is selected by elected representatives with the help of the administrative state itself. Whereas conservatives have tended to just want to weaken the state so that it cannot be wielded by the "people" against the individual.
  • The 'Hotel Manager' Indictment


    One my lecturers in philosophy wryly pointed out that Hume’s condemnation at the end of his Treatise actually applies to the Treatise. ‘Take any book of scholastic metaphysic…’ The lecturer compared Hume, like the positivists after him, to the Uroboros, the mythical snake that swallows itself. ‘The hardest part’, he would say with a mischievous grin, ‘is the last bite’.

    :rofl:

    Historically at least, this seems to have proven quite true.
  • The Myopia of Liberalism


    Gotcha, so I guess our disagreement might be this: I think the rather titanic problems of liberalism in the current moment, not least of which are the rise of the far-right and long-term discontent over the replacement migration strategy vis-à-vis growth, the long-term problems of globalization in the developing world (where in at least some instances it appears to retard growth and good governance), and the looming ecological crises, are not accidental to liberalism itself, but directly attributable to it.




    As mentioned earlier, I think Locke and Mill's justification of enslaving populations by force to "liberate them from indolence," is a prime example. Cold War colonial war rhetoric is also a good example. Obviously, the wars were so difficult because people in the occupied states largely did not want to remain part of the colonial empires. But, they had to be "freed by force" because the communitarian ideologies (Marxism, political Islam, etc.) that held sway with large segments of the population oppressed individual and market rights (liberal freedom).

    Hobbes grounds the state entirely in the atomized individual in the "state of nature." The state has legitimacy just insofar as it is a better choice for individuals qua individuals to actualize their individual freedom (generally as fulfilling whatever desires they happen to have). This is the core assumption of "social contract theory," which is certainly still present in contemporary liberal theorists. An ideal society maximizes liberty for individuals as individuals (including liberty vis-à-vis desires for material goods, which is why "economic growth" and consumption play such an outsized role in liberal theory and welfare economics).

    This is the ordering of the higher (common good) to the lower, the whole to the parts (in line with reductionist tendencies in materialist thought). The common good becomes merely a collocation of individual goods. The "veil of ignorance" is all about the individual, for instance, and indeed the individual as initially abstracted from all community and common goods or social identity.

    By contrast, there is Hegel, one of the great critics of social contract theory:

    “My particular end should become identified with the universal end… otherwise the state is left in the air. The state is actual only when its members have a feeling of their own self-hood and it is stable only when public and private ends are identical. It has often been said that the end of the state is the happiness of the citizens. That is perfectly true. If all is not well with them, if their subjective aims are not satisfied, if they do not find that the state as such is the means to their satisfaction, then the footing of the state itself is insecure.”

    A common critique of liberalism is that this conception of the state (which often finds its way into legal decisions, particularly in the US through the Federalist Papers) only works so long as custom, culture, etc. continue to bind individuals together as wholes. Yet liberalism, and particularly capitalism, undermine all such connections, making liberalism self-undermining.
  • The inhuman system


    I don't know if it's that much of a contradiction. I suppose that quote, taken alone, could be read in a very Nietzschean or Sartrean light, but I have always seen Taoist notions of freedom set in opposition to the former, often as their polar opposite (although I think they are opposites that might meet at their limit). E.g.,:

    In the Taoist philosophy, we find a strikingly different idea of freedom [from the Western one based on the absence of all constraints]. The Taoist conceives of freedom from the very opposite direction: instead of focusing on an absence of external constraint or coercion, the Taoist focuses on modifying the self that can be in conflict with external constraints. Instead of being critical of the external environment and requesting the environment to give room to the individual's desires or will, the Taoist requires the individual to be critical of him/herself, and to be in harmony with his/her environment.

    This Taoist idea of freedom logically starts from a realization that the constrained and the constraints are mutually dependent; without the constrained, the constraints would not exist as constraints. The founder of the Taoist philosophy Lao Tzu says: "Honor great misfortunes as you honor your own person. Only because you have your own person, you will have great misfortunes. Without a person, how could there be misfortunes?" Furthermore, the kind of misfortunes or constraints one has depends on the kind of individual one is. Limitations vary from one individual to another. As Lao Tzu's great follower, Chuang Tzu, says: "Fish live in water and thrive, but if men tried to live in water they would die." This clearly applies not only to the natural limitations of fish and humans, but to all subject-object relations. An individual's particular desires and ambitions also define particular constraints. Any anticipation or desire will bring a set of constraints. To shop-lifters the video monitors installed in stores are big constraints, but to the rest of us, they are nothing but video monitors. To smokers "No Smoking" signs are constraints, but non-smokers consider them to be protection.

    The more one desires or expects, the less one is free, because there are more constraints one has to break in order to have the desires satisfied or expectations fulfilled. We often think that powerful people have more freedom. But that is not always true, for they usually have more desires and ambitions. My two-year-old daughter has never felt short of money, even though she does not have any; but Donald Trump does...

    https://scholarworks.gvsu.edu/cgi/viewcontent.cgi?article=1561&context=gvr

    There is obviously a similarity here with Indian thought and with Western pagan thought, with its struggle for ataraxia and apatheia (as well as the fruits of contemplation, e.g. "enlightenment" or "henosis," which have a more positive element).

    The emphasis on self-cultivation—and the role of the sage, the daoshi, and the zhenren—seem to follow the intuition of other traditions that the renunciatory move often isn't spontaneous, but rather requires received wisdom, reflexive discipline, and guidance—in a word, cultivation. This is a positive element in freedom though. The very fact that "most people" are too caught up in striving is evidence of how freedom is constrained through a sort of deficient cultivation and pernicious social forces. A person doesn't simply attain to renunciation through a natural maturation process. It's just that this move isn't (as much) dependent on external resources (but they still play a role, particularly in the teachings of the sage). This is like how Epictetus claims he became free while a slave, but that most masters are slaves, while he still credits exposure to Rufus's teaching as decisive.

    But, I don't agree with purely renunciatory philosophy, because it tends to generate a reflexive freedom that, taken to its logical conclusion, bottoms out in the contentless negative freedom so dominant in modern Western thought. One needs a more robust consideration of man's telos and the notion of the summum bonum to which all goods can be logically ordered.

    This leads towards the later Pagan goal of philosophy as "becoming like unto God," present in the Christian tradition as theosis. But whereas the Pagan tradition tends to wash out all human particularity and concern for any telos of history (and so concern for mankind generally), this is not how the Desert Fathers and their descendants saw things. Dispassion means mastery over the passions, but not a constant suppression of the appetites and passions; rather, their purification and reorientation towards what is truly Good, Beautiful, and True (hence the Philokalia, the "love of Beauty," as the ascetic manual of Eastern Christendom, versus philosophy, the "love of wisdom"). This comes out of a more robust notion of natures with intelligible final causes, the human telos, and a metaphysics of Goodness. You can see this sort of dynamic in Attar of Nishapur's Sufi classic, The Conference of the Birds, which focuses on the total abrogation of the Self, versus Dante's Commedia, which can combine a robust respect for particularity and history right up to its climax in the beatific vision.

    The difference leads to a much more optimistic vision of the chances of "enlightenment" for all individuals and participation in the spiritual life. It also tends towards a broader notion of the cultivation required to develop reflexive and social freedom (the cenobitic monastic community as opposed to the life of the hermit, requires a social focus, even if on the small scale). But this makes perfect sense to me, because these things aren't easy to explain, and do require structure and guidance to be accessible to most people.

    Maybe the Incarnation is another important difference. This has infinite being breaking into the finite world, and the Church as the immanent, social, mystical body of Christ realized in history, through the Marian mode of the Church, "man giving birth to God and freedom in thoughts and deeds."

    But I don't think this is an outright contradiction. Rather, contradictions in each level of freedom give birth to a higher level (Hegelian style), leading from negative freedom, to reflexive freedom, to authenticity (since self-discipline can be turned against nature), to social freedom, to moral freedom, the capacity of societies to recognize and achieve the human good.
  • The inhuman system


    But in my experience with unhappy people, which is extensive, as I work in mental health and addiction - people often forget or overlook how fortunate their situation is and how much they tend to catastrophize.

    Yes, good point. I think that's very true. I think the OP sort of gets at the social forces that lead to that catastrophizing and lack of appreciation. It perhaps misses some others though. The work on deteriorating mental health for women and girls tends to highlight different, although related issues.

    I suppose that goes along with the discussion of renunciatory traditions, in that they attempt this sort of reevaluation.

    I would say that the amount of material goods one needs will tend to vary by culture and time. For example, to be unable to afford a private vehicle or phone in many contexts is now to be unable to find work in a culture that places a huge premium on work. But I don't think that necessarily means cars and phones do that much for happiness (useful as they no doubt are), at least not as a prerequisite. Or perhaps a better way to put it is that they take on special relevance in a culture where they are almost required for membership and recognition.
  • The inhuman system


    Well, fair enough, he might not even have been a real person. The text appears to be an accretion. Siddhartha Gautama might have been a better example.
  • GETTIER – Why Plato Would Reject Justified True Belief (a Platonic dialogue)


    Just to make sure I am understanding you, are you saying LLMs "produce knowledge" for us, or that something like ChatGPT actually "knows" things? Or is this a case of analogy?

    I would say LLMs contain knowledge in the same way books do. Indeed, they mostly just slam together text predictively from books and similar documents. I suppose it could be argued that they can also produce novel knowledge through synthesis, but this would still seem to me more like a "book that writes itself," than a "knower."
  • The inhuman system


    But possibly the best thing to do when one is fretting over how distorted and ambitious humans are is to go out and help others

    Isn't this precisely what people like Lao Tzu and St. Francis thought they were doing when they told people to stop following worldly ambitions: helping others?
  • The inhuman system


    Or perhaps the list of material goods you have mentioned are simply not the most important things for happiness? I don't think people kill themselves more often "because they have it too good," at any rate. So wouldn't this be evidence that those material goods are only so important, and that we should stop hyper-focusing on them, particularly on growth in consumption (to the point of courting ecological disaster)?

    This reminds me of Byung-Chul Han's point that in the "achievement society" it is a "personal failure" not to achieve happiness.

    Of relevance:

    Say we have a privileged, wealthy guy with a "good family" who cares for him. He has lots of opportunities. And he follows the middle to upper class dictum: "get good grades and rack up accomplishments so you can go to a good college, and do the same there so you can get a good job, and then you can do what you want."

    He does this. No extraordinary evil befalls him. He has no extraordinary vices. Maybe he drinks or smokes pot a bit too much, or plays too many video games, or has a porn habit, or cannot get a girlfriend, or cannot keep to just one. Maybe not. Nothing out of the ordinary.

    And he's miserable. He's prime bait for radical ideologies of one sort or another precisely because he "did everything he was told," and is miserable. This isn't an uncommon phenomenon. That's sort of the recruiting mantra of radicals on the right and left, although it certainly helps if people struggle in the labor market or are "overeducated." We could imagine this sort of thing playing out across many gradations. It can even happen to the ultra wealthy (perhaps particularly to the ultra wealthy).

    Here is Han's point: in the autoexploitative context of modern liberalism, this man's unhappiness is a personal failure. The self is a project, and its happiness is a goal that has to be achieved as an accomplishment.

    And there are lots of men and women who have encountered this sort of "personal failure." Millions it would seem. So the question is, at what point do we stop thinking this is an aggregate of millions of personal, individual failures and begin to say it is a systematic, social failure or a philosophical failure?



    Hmmm. Do I believe this? Probably not.

    You don't think good, or at least adequate parenting, education, etc. are prerequisites for "living a better life," developing self-control, or having the capacity to be a good citizen?

    I feel like this is at least obvious in extreme cases. If you leave a baby in the wild it will die. If you raise a child like a zoo animal, keeping them in a cage and throwing them food scraps, they will not be free to develop into a flourishing human.

    But my point would be that we don't hit some threshold of "mere adequacy," after which human beings simply "become fully self-determining (as much as man can be) upon biological adulthood." This requires cultivation.

    I think there's some irony, or maybe contradiction, here. To a large extent, cultivation and education are the agents that immerse us in the sea of social expectations.

    Yes, one is not free to become a "good father," a "just leader," or a "good teacher" without fulfilling social expectations either. One of the problems of defining freedom simply as freedom from external constraint is that it makes social expectations a check on freedom. Indeed, a purely negative freedom makes all human relationships, including marriage and parenthood, limits on our freedom. As Hegel points out in the opening of the Philosophy of Right, taken to an extreme, such a negative freedom collapses into contradiction. One cannot make any determinate choice without in some way limiting oneself. Such a freedom is limited by choice itself, a contradiction. It also reveals itself to be arbitrariness to the extent that it demands that "what is free" is "determined by nothing," making it random action, the exact opposite of freedom.

    This is often where "authenticity as freedom" goes off the rails. Authenticity is important, but without reflexive freedom it is just following impulse and instinct.

    Hence the need for a more robust notion of reflexive and social freedom:


    To quickly define these terms:

    Negative Freedom is defined by a subject’s freedom relative to the external world. It is freedom from external barriers that restrict one’s ability to act, e.g., the government or thieves seizing your tools so that you cannot work.

    Reflexive Freedom is defined by a subject’s freedom relative to themselves. To quote Hegel, “individuals are free if their actions are solely guided by their own intentions.” Thus, “man is a free being [when he] is in a position not to let himself be determined by natural drives,” i.e., when his actions are not subject to contingency. Later philosophers have also noted that authenticity, and thus the free space and guidance needed for us to discover our authentic selves, is another component of reflexive freedom.

    Social Freedom is required because reflexive freedom only looks inward; it does not tie individual choices to any objective moral code. This being the case, an individual possessing such freedom may still choose to deprive others of their freedom. (This is the contradiction inherent in globalizing Nietzsche’s “revaluation of all values.”)

    (Note: I have borrowed from and modified Axel Honneth’s work in Freedom’s Right in drawing up this typology)

    Since individuals will invariably have conflicting goals, there is no guarantee that anyone will be able to achieve such a self-directed way of life. Negative freedom is also contradictory because “the rational [reflexive] can come on the scene only as a restriction on [negative] freedom.” E.g., being free to become a doctor means being free to choose restrictions on one’s actions, because that role entails certain duties.

    Social Freedom then is the collective resolution of these contradictions through the creation of social institutions. Ideally, institutions objectify morality in such a way that individuals’ goals align, allowing people to freely choose actions that promote each other’s freedom and wellbeing. Institutions achieve this by shaping the identities of their members, such that they derive their “feeling of selfhood” from, and recognize “[their] own essence” in, membership.

    In the language of contemporary economics, we would say that institutions change members’ tastes, shifting their social welfare function such that they increasingly weigh the welfare of others when ranking “social states.” In doing so, institutions help resolve collective action problems, prisoners’ dilemmas, etc. They allow citizens to transition into preferring social welfare over maximal individual advantage.

    We are free when we do what it is that we want to do, and we can only be collectively free when we are guided into supporting one another’s freedom. Otherwise, there will always be some who are not free. Further, those who appear to have freedom will not be truly free. They will not be free to pursue any course they’d like, as they must always fear losing their freedom — losing their status — and becoming just another of the oppressed. Further, we do not have to balance freedom and happiness. Freedom entails happiness, as people will not do what makes them miserable if they are free to do otherwise.

    “My particular end should become identified with the universal end… otherwise the state is left in the air. The state is actual only when its members have a feeling of their own self-hood and it is stable only when public and private ends are identical. It has often been said that the end of the state is the happiness of the citizens. That is perfectly true. If all is not well with them, if their subjective aims are not satisfied, if they do not find that the state as such is the means to their satisfaction, then the footing of the state itself is insecure.”

    — This and all quotes above from Hegel’s Philosophy of Right

  • What is real? How do we know what is real?


    That's an interesting question. I have heard, but not really looked into, the idea that Egyptian Memphite Theology contains an early version of Plato's Theory of Forms. This wouldn't be totally out of left field, because there has always been a story attached to Plato that he went to study with the Egyptian priests and learned their wisdom when he was young. Michael Sugrue, among others, also connects Parmenides and thus Plato to older traditions coming out of India and the Orphic tradition (hence the transmigration of souls according to one's karmic/virtuous actions in life).

    This would place these ideas pretty far back, but not necessarily before the Bronze Age collapse. However, and I might be conflating later notions of Brahman with earlier ones here, those notions of Brahman as "fundamental reality" would seem to go all the way back to the second millennium. Likewise, ancient Sumerian and Egyptian myths (and Native American ones) seem to at least have the idea of illusion vis-à-vis magic (but this is perhaps not quite the same thing).

    You might be on to something, but I would guess that there is also a more primordial grounding of a reality/appearance distinction in the phenomena of dreams, optical illusions, mistakes of judgement, the fallibility of memory, and deception in warfare and hunting (which has apparently always been around). Yet I could absolutely see how civilization, and the problem of standards, media of exchange, and commerce could inflate this notion into something with greater depth.



    Why would any one agree or not agree to a logic? That's the question.

    Odd.

    You might find this helpful: https://philosophy.stackexchange.com/questions/91492/whats-with-philosophers-and-their-use-of-quotation-marks/91501#91501
  • The Myopia of Liberalism
    I am curious if proponents of liberalism would at least agree with this analysis though:

    There are three broad principles of the modern liberal state: capitalism, democracy, and liberalism.

    In my view, it seems that both capitalism and democracy are subservient to liberalism. For instance, progressive liberals are willing to constrain capitalism precisely because they see it as conflicting with liberalism. Conservatives are skeptical of this, but only because they don't see unrestrained capitalism as at odds with liberalism.

    There are pretty vocal groups on the left and right who are skeptical about democracy, precisely because democracy can constrain liberalism. Hence, I would say liberalism is the highest principle. "Freedom over all else," with freedom obviously being the ideal of freedom in the liberal tradition.
  • The Myopia of Liberalism


    That would make sense if Trumpism and the rise of the far-right in Europe were some sort of foreign, extrinsic force attacking liberalism from without, rather than something produced by liberalism's own contradictions. But Trump is just an extension of trends within liberalism (and not just the US) going back decades.

    Likewise: "you either affirm neo-liberalism and late-capitalism or you want a return of polio, Jim Crow, slavery, etc." seems a bit much. Do any of the traditionalist critiques you have in mind argue for a return to such things? I doubt it. So what's the claim then, that all of the advancements you've listed were primarily caused by liberalism and would simply be unachievable without it? That without liberalism one must have slavery?

    Historically, liberalism actually opposed many of the things you mentioned, often quite aggressively, and only grudgingly acceded to them due to its need to combat socialism (or when it was outright defeated on these issues). The end of child labor, the advent of state pension systems, the end of Jim Crow—these were all grudgingly accepted by liberalism after being advocated for primarily by socialists (but to be fair, by progressive liberals too). Often (as in the case of federal support for the end of segregation) liberalism explicitly embraced these as a "lesser evil" in the fight to contain communism. For instance, liberalism only "stopped colonialism" after killing millions of people trying to sustain it (and in the French case, seriously considering using nuclear weapons on Vietnam to "keep it free"). The military defeat of the liberal nations then becomes, in the revisionist history, "something nice liberalism did." But the liberal states didn't give up their colonies because of "open-ended liberal debate"; they gave them up because they lost on the battlefield or risked imminently doing so if they tried to force the issue.

    Competition was a check on liberalism, and in order to compete liberalism had to sublate elements of nationalism and socialism, making them core parts of liberal norms (e.g. "an Algeria for Algerians," not just liberal rights for Algerians under a French state; the adoption of core elements of the socialist platform into virtually all liberal states). I don't think it's any coincidence that median wages across the developed world stagnated, despite robust productivity growth, as soon as liberalism's last opponent collapsed, or that standard of living and life expectancy began to decline following the triumphant "End of History" victory. What the "End of History" did was let liberalism and capitalism go back to being more fully themselves—back towards the Gilded Age, but with Christianity also increasingly out of the way as a rival/check on capitalism.

    Polio and measles are interesting things to mention, since these are coming back in the US thanks to the liberal ethos and erosion of all sources of authority outside the coercive power of the state.

    At any rate, critiques of liberalism do not claim that liberalism and capitalism erode tradition and culture overnight. Indeed, that they don't is part of the problem re the politics of replacement migration. They have done so steadily, however, which is precisely why these critiques claim we have Trump, a Europe that is unable to stomach short-term declines in consumption to check Russian aggression, etc. Yet the dominant, hegemonic, now-globalized force at "the End of History," which faces no real rivals, somehow manages to defend itself by claiming its flaws are mostly really just attacks from without (this is, IMO, only plausible in the case of radical Islam).



    Right, that's a pretty common response, and in line with Fukuyama's argument. Liberalism is inevitable and human nature. I disagree on that obviously. I will just note that this same claim was long advanced by reactionary monarchists in much the same way. Monarchy was natural. It was in line with human nature itself. It was inevitable, and evidence to the contrary was a temporary aberration. This was also the claim of Marxism. It too was inevitable. Both collapsed, and there were signs that they would do so prior to the fact. For my part, I don't think humanity can reach the End of History while life entails the contradictions that liberalism embodies. Just the problem of the Last Man identified by Fukuyama should have been enough to clue him into that IMO.

    But the end of the Ancien Régime, the Tsardom, and the Soviet Union were not the apocalypse. The end of liberalism will not be the "end of the world," even if "it is easier to imagine the end of the world than the end of capitalism." It will almost certainly be painful though, as was the death of the monarchies, but that was hardly a reason to keep traditional monarchies around forever, or to be a reactionary.
  • What is real? How do we know what is real?


    The most important phrase, perhaps, is the first, since it links intelligibility with "anything being anything" -- thinking with being, in other words. I believe this is probably true, as a description of consciousness in the world. And that may be good enough, since philosophy doesn't pretend to tell us what philosophy (thinking) would be like, if no one were doing it! It does, however, often try to talk about what the world is like, unmediated by the experience of human consciousness. From that perspective, can we say that "there can be no 'physical order' without an intelligible order by which things are what they are"? We simply don't know.

    This skepticism relies on a particular metaphysics of consciousness and appearances. In materialism, potency is king, thus contingency and accident reign over necessity and essence. Consciousness becomes an accidental, contingent representation of being (i.e. representationalism). Appearances are, or at least can be, completely arbitrarily related to reality. It's a world where intelligibility must be projected onto things, or at least "constructed by the mind" (and this also involves a different conception of reason and knowledge). Obviously, if appearances can be arbitrarily related to reality, so can language (by contrast, we could consider someone like Hegel, who would argue that the historical evolution of language and logic cannot be arbitrary).

    That's sort of the deep separation between modern and pre-modern metaphysics: their understanding of appearances/eidos. Plotinus and a number of his followers take up the criticisms of Sextus Empiricus on what are, in some ways, very modern questions of the sort you mention, but have a much easier time resolving the problems because of these differences. So too with Aquinas's consideration of the question of whether "the mind knows its own phantasms instead of things."

    I don't want to get into all that, except to say that the priority of potency (sheer possibility) over actuality is a presupposition of the "raw material world that can be arbitrarily related to intelligibility." First there is potency, and then there is something that actualizes something from it (presumably, potency itself). One of the counterarguments is that this doesn't make sense: sheer potency, being nothing in particular, cannot result in any specific actuality; act is always prior to potency. But modern thought tends towards just denying this. Actuality comes either out of God's sheer, indeterminate power/potential (all act coming from the Divine Will), or, in later atheist cosmology, our actual world springs from the possibility space of possible worlds "for no reason at all" (and, being a mathematized world, intentionality and meaning are either illusory or spring from man's will as power).

    Which is just to say that, while there are other issues, I think this might boil down to the priority of act over potency or vice versa. That's very abstract, but one should expect such for the root of major metaphysical differences.

    I wrote that, and then recalled I had a good quote on this re causes, and why causes used to bridge the sort of gap you're bringing up, but do not with Humean causation:

    Forms had explanatory power in the older realist framework, not because general belief in that power was supposed to replace the empirical work of discovering and characterizing how they operated, but because confidence that there were such causal powers helped to account for the order of nature and the very possibility of successful scientific inquiry.

    It is commonly said that modern science neglects formal causes but attends to efficient and material causes; but classically understood, efficient and material causes cannot function or even be conceived without formal causes, for it is form which informs matter, giving concrete objects their power to act on other objects. The loss of formal causality is thus in a sense the loss of efficient and material causality as well—an implication that is not quite fully realized until we see it brilliantly explored in the philosophy of David Hume.

    Of course, the gravity of the loss of teleology is also evident in the realm of ethics. Ockham was no libertine or relativist, but he prepared the way for the intractable confusion of modern moral reflection. Morality is concerned with ends, and humans, having the natures they do, need to acquire certain further qualities or forms—virtues—which help them fulfill their essential natures and achieve their ultimate end. Alasdair MacIntyre has most famously traced the inevitable failure of the Enlightenment project to explain morality without teleology. Ockham’s denial of forms and formal causality is unquestionably part of the conceptual disaster that left Enlightenment thinkers with only misunderstood fragments of a once very different project of moral theorizing.

    There is another, even more basic, implication of the nominalist rejection of forms and formal causality. In the realist framework, the intrinsic connection between causes and effects was particularly important for explaining how the mind knows the world; concepts formed by the mind, insofar as they are causally connected to things which are the foundation of those concepts, necessarily retain some intrinsic connection to those things. While we can be mistaken in particular judgments, we can be assured of the basic soundness of the mind’s power, thanks to the intrinsic connection between concept and object. The kind of radical skepticism Descartes proposed, even if only methodologically, was simply never entertained through most of the middle ages.

    More classical versions of skepticism, usually having to do with the fallibility of the senses, were commonplace, but the possibility of a complete incongruity between the mind and reality—such that even mathematical concepts could be the product of some deceptive manipulation and have no connection to the mathematical “realities” they seem to represent—this was not available in a realist framework for which concepts are formally and so essentially related to their objects. Ockham’s nominalist innovations almost immediately raised the specter of such radical doubt; this was noticed not only by the first generation of Ockham’s critics, but even by Ockham himself, who proposed thought experiments about God manipulating our minds to make us think things that are not true. For Ockham, such thought experiments were possible not only because of God’s absolute transcendent power, but because the human mind retained for him no intrinsic connection to an intelligible order. Ockham was no skeptic, and he was no Descartes; indeed, he was rather confident in the reliability of human cognition. But the law of unintended consequences applies in the history of philosophy as elsewhere, and it was only a matter of time before some philosopher exploited, as fully as Descartes did, the new opportunity of skepticism made possible by the nominalist rejection of forms and formal causality.

    Accordingly, Thomists and other critics of Ockham have tended to present traditional realism, with its forms or natures, as the solution to the modern problem of knowledge. It seems to me that it does not quite get to the heart of the matter. A genuine realist should see “forms” not merely as a solution to a distinctly modern problem of knowledge, but as part of an alternative conception of knowledge, a conception that is not so much desired and awaiting defense, as forgotten and so no longer desired. Characterized by forms, reality had an intrinsic intelligibility, not just in each of its parts but as a whole. With forms as causes, there are interconnections between different parts of an intelligible world, indeed there are overlapping matrices of intelligibility in the world, making possible an ascent from the more particular, posterior, and mundane to the more universal, primary, and noble. In short, the appeal to forms or natures does not just help account for the possibility of trustworthy access to facts, it makes possible a notion of wisdom, traditionally conceived as an ordering grasp of reality.

    Preoccupied with overcoming Cartesian skepticism, it often seems as if philosophy’s highest aspiration is merely to secure some veridical cognitive events. Rarely sought is a more robust goal: an authoritative and life-altering wisdom. Notice: even if contemporary philosophers came to a consensus about how to overcome Cartesian doubt and secure certainty, it is not clear that this would do anything to repair the fragmentation and democratization of the disciplines, or to make it more plausible that there could be an ordered hierarchy of sciences, with a highest science, acknowledged as queen of the rest—whether we call it first philosophy, or metaphysics, or wisdom

    "What's Wrong With Ockham?"
  • The 'Hotel Manager' Indictment


    It's an interesting subject. The empiricist tradition often justifies itself by pointing to the fruits of modern science and ascribing these to its philosophy (and thus to its rejection of much of what came before). However, historically, the "new Baconian science," the new mechanistic view of nature, and nominalism pre-date the "Great Divergence" in technological and economic development between the West and India and China by centuries. If the "new science," the mechanistic view, and nominalism led to the explosion in technological and economic development, they didn't do so quickly. The supposed effect spread quite rapidly when it finally showed up, but this was long after the initial cause that is asserted to explain it.

    Nor was there a "great divergence" in technological progress between areas dominated by rationalism as opposed to empiricism. Nor does it seem that refusing to embrace the Anglo-empiricist tradition's epistemology and metaphysics has precluded people from becoming influential scientific figures or inventors. I do think there is obviously some sort of connection between the "new science" and the methods used for technological development, but I don't think it's nearly as straightforward as the empiricist version of "Whig history" likes to think.

    In particular, I think one could argue that technology progressed in spite of (and was hampered by) materialism. Some of the paradigm shifting insights of information theory and complexity studies didn't require digital computers to come about, rather they had been precluded (held up) by the dominant metaphysics (and indeed the people who kicked off these revolutions faced a lot of persecution for this reason).

    By its own standards, if empiricism wants to justify itself, it should do so through something like a peer reviewed study showing that holding to logical positivism, or some similar view, tends to make people more successful scientists or inventors. The tradition should remain skeptical of its own "scientific merits" until this evidence is produced, right? :joke:


    I suppose it doesn't much matter because it seems like the endgame of the empiricist tradition has bifurcated into two main streams. One denies that much of anything can be known, or that knowledge in anything like the traditional sense even exists (and yet it holds on to the epistemic assumptions that lead to this conclusion!) and the other embraces behaviorism/eliminativism, a sort of extreme commitment to materialist scientism, that tends towards a sort of anti-philosophy where philosophies are themselves just information patterns undergoing natural selection. The latter tends to collapse into the former due to extreme nominalism though.



    By all means, I'm just laying out the case as I see it. That the sciences involve discussions of causality, identity, emergence, or universals just seems to me like a good starting point for common ground.

    Anyhow, people wedded to the Wittgensteinian approach are constantly making metaphysical assertions on TFP and other venues. Rorty's use of Wittgenstein is a fine example. And these assertions are based on an analysis of language as prior to metaphysics (which I'd argue just results in implicitly presupposing an unclarified metaphysics for that analysis of language). Yet if the very thing in question is the existence of, or role of, metaphysics/first philosophy, and one turns to philosophy of language as the arbiter of this question, it seems that one is already picking an answer, no?

    Now, the classical metaphysician must do something similar, but it's explicit: "Metaphysics is first philosophy because it is most general." This is different from: "there is no first philosophy, or if there is, we must be skeptical about it, thus we must analyze language to decide the issue" (i.e., philosophy of language is de facto first philosophy, and will decide the issue of first philosophy as first philosophy).

    The reason I think this is often not profitable is because, as noted above, I think metaphysical assumptions are inescapable. So, the analysis simply builds in certain assumptions. I think I pointed this out to you before vis-à-vis the extreme authority given to Wittgenstein's "rule following argument," which is often taken as "this is something that is always and irrefutably true about rule following," not "this is what is true about rule following given we grant Wittgenstein his metaphysical and epistemic presuppositions." Wittgenstein's assumptions, the premises in the argument, then have to later be analyzed in light of the conclusions of that very same argument, i.e., "here is what can be said about epistemology and metaphysics, and how we can justify them, given what we have already said about language." The horse cannot pull the cart on the grounds that we have already set the cart before the horse.

    That is why I tend to be skeptical of the approach in a nutshell. There is nothing wrong with wrangling about definitions IMO, it's a time honored tradition.



    I don't like the term 'universal' much because I think it's loaded with metaphysical baggage, and it really doesn't mean anything more that 'general'

    Does swapping in "general" for "universal" resolve the issue of "in virtue of what are different things the same as respects some feature?"

    It seems that "general" would just require the same sort of metaphysical assumptions. Likewise, how does saying "things that share properties actually share 'morphologies'" resolve the issues of universals and natural kinds?

    And of course I don't see universals coming into play, but just a human capacity to generalize on account of the ability to recognize patterns and regularities, as I already noted above.


    I mean, this just seems to me like: "we will call them 'morphologies' to presuppose nominalism without debate." I don't see how a word swap justifies such a move though. We could call them "tropes" just as well, but it doesn't remove the issue of presuppositions.

    Edit: actually, to be fair, I think "patterns and regularities" could probably qualify as universals. It's perhaps more profitable and accurate to define nominalism as the rejection of form and nature than of universals, since only the most extreme, self-refuting sorts of nominalism deny "universals" in a broad sense (normally in the context of anti-rationalism).
  • What is real? How do we know what is real?


    You seems to be suggesting that if one is not following an explicit rule, one is acting arbitrarily. Do you really want to make such a claim?

    No, I am suggesting that if one chooses something "for no reason at all," then one is acting arbitrarily. Now, you suggest that there is "no reason," no prior truth to point to, in selecting any one of the infinite possible logics. Such a selection is instead based on the fact that "others have already chosen to agree to some game rules," and so it is "useful to agree." (Note here that the very truth of your argument for the utility of "agreement" would itself depend on a particular logic. In a trivial logic, it is trivial to prove that it is actually better to pick a logic that no other human agrees with, because one can prove anything expressible in such a logic.)

    Anyhow, were this true, it would mean that each individual "picks a logic" only because other people have already picked it. Yet there cannot be an infinite regress of people choosing, say, to affirm the LNC just because other people have already agreed to it. The choice has to bottom out in some arbitrary first mover. So, the dominant "custom" is just that, arbitrary custom. This also means that the pluralist, in bucking the dominant custom, is also being arbitrary. The utility of a logic would then rest in "how many other people agree to it."

    I think this is obviously not the case. Trivial logics, which are very common, lack utility because it is not the case that every statement one can formulate in language is both true and not true. That's absurd. It's false. And people eschew trivial logics because they allow for false conclusions; they are not actually truth-preserving, only "truth preserving" vis-à-vis some deflated notion of truth.
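    The point about trivial logics can be made precise via the principle of explosion (ex falso quodlibet): in any logic that validates it, a single contradiction makes every statement whatsoever provable. A minimal sketch in Lean (the theorem name here is my own label):

    ```lean
    -- Ex falso quodlibet: once both P and ¬P are provable,
    -- any proposition Q follows, regardless of its content.
    theorem explosion (P Q : Prop) (h : P) (hn : ¬P) : Q :=
      absurd h hn
    ```

    This is why "truth preservation" in a trivial logic is empty: a system that proves everything discriminates between nothing.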

    But at any rate, simply choosing to affirm something as true solely because "other people are doing it" is not good reasoning. This is the old: "if everyone jumped off the Brooklyn Bridge, would you do it too?"
  • The inhuman system


    And regarding politics.... Are the current politics working for us? Who is accountable for the mass migration and the issues surrounding them? Who is responsible for the housing crisis, the climate change crisis, and so on? Our politicians? They shift and change every few years, but these problems persist

    I mentioned in a recent thread here why I am skeptical of attempts to pin these problems on either "progressives" or "conservatives:"

    I'm a bit skeptical of narratives that try to pin all these problems on just the (mis)rule of leaders on one side of the political spectrum. The problems being discussed (difficulty getting good jobs, huge numbers of applicants for each job, over qualified workers, unaffordable housing, low quality services, welfare expenses becoming unaffordable, etc.) are endemic to the West. You see the same sorts of complaints re Canada, France, Germany, Sweden, Spain, the US, etc. Yet different sides of the political spectrum have had very varying degrees of long term control across these different states.

    Nor is it clear that things are better anywhere else. Housing is increasingly unaffordable in the US, yet it is one of the most affordable rental and ownership markets in the world. It's "hell" in Canada and the UK, yet rent- and mortgage-to-income ratios are actually a good deal worse in most of the developing world.

    Certainly, Japan and Korea might shed some light on things. These are wealthy states that haven't experimented with the neo-liberal ideal of the free movement of labor across borders (migration on a fairly unparalleled scale, e.g. to the extent that German children born today will be minorities in Germany before they are middle-aged) to nearly the same degree. This, and differing cultures, has given them a different blend of problems (e.g. too much work instead of not enough; homes losing value as investments, or even being given away for free, which is a total loss for someone). Yet some of the other problems are very much the same, or even more acute (e.g. the gender-politics gap/war is probably the worst in the ROK, scarcity vis-à-vis healthcare services, etc.).

    That said, I think one can identify neo-liberalism and globalization as a key driver of many of the issues, although ocean acidification, global warming, and sea level rise, as well as other forms of environmental degradation related to overconsumption (e.g. microplastics), have an earlier etiology. Neoliberalism was originally pushed by the political right, but now the left has found itself in the odd place of defending it and late-capitalism (while advocating for more transfer payments from underfunded welfare states to moderate it).

    G.K. Chesterton has a great quote here: "The whole modern world has divided itself into Progressives and Conservatives. The business of Progressives is to go on making mistakes. The business of Conservatives is making sure they never get fixed."

    However, in the case of globalization and neo-liberalism, the roles are sort of reversed. Of course, while pithy, it's not entirely accurate. Progressivism has done some great things, like universal education (even if they messed up the execution) or ending Jim Crow. It's also done some very bad things, like the unrestrained market/cultural forces that have allowed the Black-White wealth gap to grow even larger than under Jim Crow (or the Israeli-Palestinian gap), and then conservatives have chosen to defend these same bad things after having resisted them.

Count Timothy von Icarus
