Comments

  • What is real? How do we know what is real?

    Usefulness isn't determined by some rule. That's kinda the point.

    Or apparently by anything more distinct than "what I currently desire." But logic involves what is true, so this makes truth simply a consequence of whatever one desires.

    If we do not accept that the frog can be both alive and dead, then a logic that allows this is not suitable.

    Conversely, if we do accept it, then such a logic would be suitable? So logic has to do with our current beliefs, and whatever we feel. But then discursive reason isn't about truth; it comes down to desire, feeling, and thus ultimately to power.

    Everyone agrees about the often contradictory truths announced by the state in 1984. They all play the same language game. To not play is made not useful; it's double-plus ungood. But that doesn't make the game's truth claims true.



    Reason is simply consistent thinking. You start with premises, and then work out what they entail.

    This is just a restatement of "reason is nothing but discursive ratio" without addressing any of the problems it entails (mentioned in the post you are responding to).

    For one, you note that we must "start with premises" to have reason at all. So, are our initial premises about entailment itself irrational because we must begin with them? Are they outside reason? Is all of reason based on unjustified (and unjustifiable) starting points then?

    Again, give me a discursive argument while assuming absolutely no inference rules. You can't, by definition. Without assuming that some things follow from other things there is nothing to link one assertion with another, and all you have is a bare posit. If reason is just discursive rule following, then such starting points aren't rationally justifiable and you get something like the appeal to bare "usefulness" above.

    "Logics" without LNC exist. Trivial logics that allow us to both affirm and deny anything expressible exist. Explosion exists. If reason is just rule-following, then there is nothing unique about LNC, it's just another rule that can be asserted or not asserted, with consequences for the structure of whatever "game" we are playing.

    I'd allow that LNC is self-evident. But for something to be self-evident requires that there be a non-discursive grasp on truth. Your justification above for why denials of LNC don't make sense is straightforwardly circular, for instance. It has to assume what it sets out to prove.

    Either not all justification is discursive (some things are self-evident, and known non-discursively, e.g. what has often been meant by intellectus in this context), or all justification ultimately rests on unjustifiable and unknowable assertions.
  • What is real? How do we know what is real?


    If I'm misrepresenting you, surely you can lay out what determines usefulness then.

    This:



    This is an example of how the choice of logics might be made. Pick one that does the job you want done, or that will extend and enhance the conversation.

    Leaves "use" as an unanalyzable primitive. And a trivial logic absolutely can be "used" to do all sorts of things. It can be used, for instance, to prove that if a frog is alive it is also not-alive. That's a use. Some people don't want to "enhance conversation." They find domination and power most useful. Does logic and truth conform to this use as well?

    What I'd maintain a trivial logic cannot do is conform to what is actually true, which is the point of logic, not some amorphous "usefulness." If usefulness instead of truth grounds logic, what you have is relativism based on whatever is felt to be useful (more Nietzschean than where Wittgenstein was going if you ask me).
  • Demonstrating Intelligent Design from the Principle of Sufficient Reason


    This was the original idea, though: natural laws were "active." Hence the change in philosophy of nature/natural science from a language of "desires," "inclinations," etc. to one of "laws" and "obedience" (which, as you might suppose, had strong theological undertones and motivations). Part of what made Hume's initial attack on causality and induction more effective was that the dominant view at the time was one of a sort of "active laws" that were the source of regularity in nature. Hence, we see Hegel writing to contest this issue a few decades later, pointing out the preposterousness of the idea of some "natural law" shooing the planets into their orbits like rambunctious school children who would otherwise go shooting off any which way.

    Part of this was the occult nature of the early mechanistic picture. Gravity was itself "spooky action at a distance," as was electromagnetism later. Life and consciousness also posed difficulties. The corpuscular metaphysics of the day needed something to account for this non-locality.

    "Things act the way they do because of that they are," is a later reintroduction of material causation, while information theory helped bring back a sort of formal causality. This has helped causation make a robust comeback from the death Hume (and later Russell and others) wanted to pronounce over it.



    A lot has been written on the old sea battle. I think it might be more helpful to look at how Aristotle defines chance in the Physics. There, chance arises out of a confluence of the actions of discrete, relatively self-determining natures as they attempt to attain their natural ends. Contingency exists because of the locus of self-determination in things as they exist at some point. We don't have fatalism here, but neither do we have more expansive notions of libertarian free will. Beings' actions are what determine future moves from potency to actuality, and modality is primarily handled in terms of potentiality.

    Aristotle also has falsity related to truth as contrary opposition as opposed to contradictory opposition (i.e. as affirmation and negation) and this opens up reformulations of LEM that potentially resolve some of the issues here.



    So you've just moved from the Aristotelian definition of "possible" where excluded middle is violated because "possible" means neither has nor has not the property,

    That isn't what it means to have a potential though. Consider Aristotle's theory of the acquisition of knowledge in De Anima. A human infant potentially knows French. The reception of form through the senses allows this potency to move to actuality (first actuality). A rock, by contrast, cannot learn French. It lacks that potential. This does not require both having and not having the same property without qualification (which would be a contradiction, not a violation of excluded middle, at any rate).

    A person who doesn't know French has the power of learning French; they potentially have knowledge of French. The person who speaks French but is not currently speaking French has the property of being able to speak French at will (first actuality), whereas the person who is actively speaking French is actualizing that power (second actuality). These distinctions exist, in part, to avoid violating the excluded middle and to solve the Meno Paradox without Plato's implausible recollection theory.

    Nor does the notion of a possible world necessarily violate the law of identity. If identity were defined by accidents then you would be a different person when you were to the west of your house as opposed to the east. Indeed, if relation defined identity you could stay the same and still become a different person if something else changed relative to you.

    Arguably, the most expansive conceptualizations of possibility can start to erode identity, but this is because they end up collapsing any distinction between substance and accidents. So for instance, if we say "a frog can potentially turn into a rabbit," we might ask "does a frog have the potential to become a rabbit?" It would seem not, in which case this "possibility" would really just be an act of sorcery, replacing one thing (a frog) with a different thing (a rabbit). But some thinkers (e.g. Ockham) were uncomfortable with even this sort of distinction, because it seemed like a limit on divine sovereignty (i.e., "things are whatever God tells them to be"). I'll allow that a sort of maximalist conception of possibility can run into trouble here, but a notion of possible worlds need not have this problem.
  • What is real? How do we know what is real?


    I'm not sure that qualifies as an answer, even generously. Unless usefulness is an unanalyzable bedrock?

    I've asked this question to @Banno many times and never received anything but deflection. His notion of use seems to bottom out in a sheer voluntarist will. The need to speak to any causes of this "usefulness" is some sort of pseudoproblem, for vague reasons. Apparently, any logic or notion of truth is "useful" just because others are "playing the same game." Yet this doesn't seem true. If everyone told you that commonly accepted logic proved that jumping off the Brooklyn Bridge would make you immortal, and you saw them jumping off the Brooklyn Bridge, you'd still be stupid for following them.

    For instance:

    Now we might be tempted to ask why p v ~p is so much more useful than p ^ ~p. But isn't one answer here just that we can do more with it?

    "We decide" if a frog can be both living and not living at the same time based on how useful this is to us? How this position would not result in an all encompassing relativism is beyond me. Nothing grounds logic or truth except the bare assertion of "usefulness" and a sort of appeal to democratization. How the bishop moves is somewhat arbitrary. You can make a chess variant where it moves differently. Can you make a frog be alive and not alive by having a language community agree to speak of it in certain ways? No doubt it would be useful to have one's cake, to not eat it, and to eat it to. It's just that this is impossible regardless of how useful it would be.

    Nor does it make much sense that a frog could be both alive and not alive just in case we find it useful for the frog to be such. Seems to me it'll either be dead or alive without much regard for our uses.
  • What is faith


    It's often more difficult to come up with definitions for notions other than substance (things), since such concepts will always inhere in something else. For instance, one never has a "fast motion" without some thing moving, or "red" without there being something (light, a ball, etc.) that is red. More general principles will tend to be harder to define because they can be analogously predicated under many aspects.

    So, for instance, one set of definitions in this thread has focused on faith as the persistence of belief (or even "belief without evidence," although I find the latter sort of ridiculous). But the persistence of beliefs is arguably just one thing that results from faith. St. Paul's dramatic conversion on the road to Damascus is often considered to be a quintessential example of an event defined by faith, but in fact this event involves him abandoning most of his most firmly held beliefs—beliefs that he had, up to that moment, been willing to fight and risk his life for.

    On the flip side, the radical skeptic is also persistent in their beliefs. No evidence can move them from their skepticism, and yet this immobility is because they lack faith in anything.

    The mistake here might be akin to claiming that flight is defined by the flapping of wings. While the two go together, there can be flight without flapping or flapping without flight.

    Other definitions in this thread seem to use "faith" more akin to trust. People "have faith" in airplanes, doctors, etc. But this is perhaps more a sort of trust in people and institutions, as opposed to the deeper uses of "faith." Again, we might suppose there is a relationship here of the sort where faith entails a sort of trust but is not reducible to trust.

    Faith, when discussed as a theological virtue, at least suggests this. It suggests fortitude in assent to the illumination of faith (perseverance and immobility), and it suggests trust, but goes beyond either of these.
  • The Forms


    It's not too far of a stretch to see how this suggests biosemiosis (signs and sign relations) as fundamental to cognition (and indeed to organic processes generally.) So here we're encountering the metaphysics of meaning, to which Platonic and Aristotelian principles still have considerable relevance.

    There is a historical relation too in that biosemiotics and the invocation of semiotics in physics almost always involves the tripartite semiotics received through Charles Sanders Peirce. But Peirce was himself a lifelong student of the Scholastics, and received his semiotics through them. His model, although it has some very important new developments, still looks just like Saint Augustine's semiotic triad in De Dialectica in its main structure and elements.

    John Deely's "Four Ages of Understanding" traces this history. It's an interesting work, although it is pretty deficient as a history of philosophy outside of tracing the history of semiotics. He writes off Neoplatonism entirely despite its huge influence on the reception of Aristotle he is speaking to, which I found sort of odd.
  • The Forms


    Seeing is a power of humans, one every sighted person is innately familiar with. I don't think a reductionist account is the only true account of sight, nor necessarily the best. Perhaps more importantly, I don't think one must "know everything in order to know anything." The continued existence of some mystery vis-à-vis a phenomenon does not preclude us having any knowledge about it.

    But why appeal to the complexity of the brain in particular? Sight also involves light, and light waves/photons have more than enough mystery to make the same sort of argument. This is the problem with "neuroscience as first philosophy." It's an even worse candidate than epistemology because it is itself reliant on the principles of other sciences (e.g. physics).

    Anyhow, this still seems to be misunderstanding the concept of form. The form is, in part, the actuality in things that is responsible for their appearance. Being is prior to interaction. Something needs to first exist in order to interact so as to appear a certain way. Appearance—perception—is also prior to the development of language. Form is not primarily about explaining language, although it might do that too. It's about what must lie prior to language and perception (else our determinate perceptions would be caused by "nothing in particular," in which case they essentially wouldn't have causes at all). The form of things isn't just their appearances though (which you seem to be suggesting), nor what is said of them, but rather is upstream of each of those, because being (existing) is a prerequisite for interaction and being known.

    In the broadest sense, a thing's form is what makes anything any thing at all: its particular actuality or "act of existence" by which it is some thing and not "nothing in particular." If form were instead, as you seem to suggest, merely "what brains perceive and talk about," then "brains" themselves would have no true existence as anything distinct, and so would have no determinate powers, ruling out the very possibility of a "science of brains." Brains themselves would be merely "something brains perceive and talk about." This appears to me to be a rather vicious circle.

    Besides this, as has been pointed out, form doesn't just explain perception, but also the ability to reason about things and to attain intellectual knowledge (as opposed to sense knowledge). Things are not just perceivable; they are also intelligible. Form is what is communicated to the intellect such that things are known as more than mere collocations of sensation.

    But, perhaps more to the point, even if one rejects any notion of form, it still wouldn't be the case that form is just about what is perceived and spoken of. Metaphysicians might be wrong, but they would be wrong about form as a basic metaphysical principle, not as a property of perception.
  • The Forms


    Right, Perl is very good on this. I suppose one of the difficulties here is the modern phobia that appearances might be arbitrarily, randomly related to the things that appear. Now, to appear a certain way to man is to act in a certain way, and since "act follows on being," we might suppose that things must reveal something of their reality in their appearances. The classical assumption here is that if something acts (interacts) with man in some particular way, then the definiteness of this interaction, that it is one way instead of any other, must be attributable to some prior actuality in both the thing and in man. Otherwise, the phenomenological elements of the experience would be what they are "for no reason at all," or, on the side of the acting thing perceived, it would be acting for "no reason at all."

    But I feel pretty safe in this assumption. If things do happen for no reason at all, if the world is not intelligible, then philosophy and science are a lost cause. However, they certainly do not seem to be lost causes.

    One interesting thing to note is that this fear of arbitrariness and randomness is almost always placed on the "world/thing" side of the ledger. Yet the elevation of potency over act that such a fear presupposes could apply just as well to man himself. Maybe man, his perceptual organs, his cognition, etc. are what act entirely arbitrarily in relation to the world? We would each be "hallucinating our own world" for "no reason at all, according to no nature or prior actuality." If this seems implausible, which I think it does, I am not sure why flipping the same concern over to the "world" should be any less implausible.
  • The Forms


    But anyone using the word 'round' is using it because they are engaging with the world around them and they see 'round' things.

    Indeed, that was precisely my point.

    Imo, if we want to explain the actual reasons why we use the word round, you have to talk about an immensely complicated brain and how it interacts with the rest of a very complicated world in an intractable manner - from the perspective of our own intelligibility - to infer something about how it represents or embodies structure out in the world in the world.

    I don't think neuroscience is any more properly first philosophy than philosophy of language, particularly if it leads to the radical skepticism you lay out in the rest of the post (a skepticism at odds with plenty of neuroscience itself).

    For instance, the claim that "none of us actually know how or why we personally are able to perceive and point out 'roundness' in the world" is simply not one many people, including scientists, are going to agree with. There are great mysteries related to consciousness, but how (and that) things possess shape, how their shape is communicated through intervening media to a person, and how the sense organs engage this information, is well understood in some respects. At any rate, doubts that "anything is really round" involve a quite expansive skepticism.

    However, even if we grant this skepticism, it wouldn't follow that the very diverse, well-developed tradition of metaphysical theories endorsing a notion of form would be rendered contentless. I'm not following this jump at all. This would be like saying that, because different interpretations of quantum mechanics are not currently decisively testable against one another, they fail to say anything unique about the world at all. A metaphysics of form might be wrong (although skepticism precludes even saying this much), or it might be unjustified, but it isn't "not saying anything," or a theory about word use.


    So I guess my conclusion is that appealing to forms and word-use is not meaningfully different.

    One might indeed criticize a metaphysics of form in any number of ways, but to say that such a broad and well-developed area of philosophy is contentless would seem to simply demonstrate a total lack of familiarity with it.

    C.S. Peirce, John Deely, John Poinsot, etc. have very well developed theories of the causality particular to signs and the way in which form is communicated. These theories might be misguided, but they are not reducible to "word use." Indeed, the most popular criticism of the via antiqua by those who were well acquainted with it (e.g. William of Ockham) was that it was too complex, not that it failed to say anything.

    For example, Nathan Lyons' "Signs in the Dust":

    [The] particular expression of intentional existence—intentional species existing in a material medium between cogniser and cognised thing— will be our focus...

    In order to retrieve this aspect of Aquinas’ thought today we must reformulate his medieval understanding of species transmission and reception in the terms of modern physics and physiology.11 On the modern picture organisms receive information from the environment in the form of what we can describe roughly as energy and chemical patterns. 12 These patterns are detected by particular senses: electromagnetic radiation = vision, mechanical energy = touch, sound waves = hearing, olfactory and gustatory chemicals = smell and taste.13 When they impinge on an appropriate sensory organ, these patterns are transformed (‘transduced’ is the technical term) into signals (neuronal ‘action potentials’) in the nervous system, and then delivered to the brain and processed. To illustrate, suppose you walk into a clearing in the bush and see a eucalyptus tree on the far side. Your perception of the eucalypt is effected by means of ambient light—that is, ambient electromagnetic energy—in the environment bouncing off the tree and taking on a new pattern of organisation. The different chemical structure of the leaves, the bark, and the sap reflect certain wavelengths of light and not others; this selective reflection modifies the structure of the energy as it bounces off the tree, and this patterned structure is perceived by your eye and brain as colour....

    These energy and chemical patterns revealed by modern empirical science are the place that we should locate Aquinas’ sensory species today.14 The patterns are physical structures in physical media, but they are also the locus of intentional species, because their structure is determined by the structure of the real things that cause them. The patterns thus have a representational character in the sense that they disperse a representative form of the thing into the surrounding media. In Thomistic perception, therefore, the form of the tree does not ‘teleport’ into your mind; it is communicated through normal physical mechanisms as a pattern of physical matter and energy.

    The interpretation of intentions in the medium I am suggesting here is in keeping with a number of recent readers of Aquinas who construe his notion of extra-mental species as information communicated by physical means.18 Eleonore Stump notes that ‘what Aquinas refers to as the spiritual reception of an immaterial form . . . is what we are more likely to call encoded information’, as when a street map represents a city or DNA represents a protein. 19... Gyula Klima argues that ‘for Aquinas, intentionality or aboutness is the property of any form of information carried by anything about anything’, so that ‘ordinary causal processes, besides producing their ordinary physical effects according to the ordinary laws of nature, at the same time serve to transfer information about the causes of these processes in a natural system of encoding’.22

    The upshot of this reading of Aquinas is that intentional being is in play even in situations where there is not a thinking, perceiving, or even sensing subject present. The phenomenon of representation which is characteristic of knowledge can thus occur in any physical media and between any existing thing, including inanimate things, because for Aquinas the domain of the intentional is not limited to mind or even to life, but includes to some degree even inanimate corporeality.

    This interpretation of intentions in the medium in terms of information can be reformulated in terms of the semiotics we have retrieved from Aquinas, Cusa, and Poinsot to produce an account of signs in the medium. On this analysis, Aquinas’ intentions in the medium, which are embeded chemical patterns diffused through environments, are signs. More precisely, these patterns are sign-vehicles that refer to signifieds, namely the real things (like eucalyptus trees) that have patterned the sign-vehicles in ways that reflect their physical form.24 It is through these semiotic patterns that the form of real things is communicated intentionally through inanimate media. This is the way that we can understand, for example, Cusa’s observation that if sensation is to occur ‘between the perceptible object and the senses there must be a medium through which the object can replicate a form [speciem] of itself, or a sign [signum] of itself ’ (Comp. 4.8). This process of sensory semiosis proceeds on my analysis through the intentional replication of real things in energy and chemical sign-patterns, which are dispersed around the inanimate media of physical environments

    Or there is John Deely's work, or something like Robert Sokolowski's "Phenomenology of the Human Person," etc., all of which include quite determinate statements on how form ties into perception (and language downstream of perception).

    Anyhow, take a gander at: https://thephilosophyforum.com/discussion/comment/987328 and I'll ask, "how is scientific knowledge possible if principles don't exist?"

    Second, do things have any determinate being? If so, that's all form, in the broadest sense, is saying. To be skeptical about form in this broad sense seems to entail radical skepticism; it's to say "the properties of all things are unknowable, and indeed we cannot know if they have any determinate properties at all." But to the skeptic, I'd ask: "if things have no determinate properties, why should they cause determinate perceptions?" Particularly, given the appeal to "brains" (which do not ever produce consciousness without constant interaction with a conducive environment), why should brains ever produce one sort of cognition instead of any other if brains do not possess a determinate nature/properties? There can be no "neuroscience" if there is nothing determinate that can be said about brains.

    Is there actually much difference between my 'structure' and your 'forms' (in the most generic sense of structure)? Maybe I just prefer the former word without the connotations of the latter... other similar words might be 'patterns', 'regularities', etc, etc.

    Form is often described as "intrinsic structure" or "organization." Appeals to "regularities" are often reductive though, tending towards smallism. While some invocations of form are reductive, many are not.

    Paul Vincent Spade's article "The Warp and Woof of Metaphysics" is a pretty good introduction on Aristotelian essences (an example of intrinsic structure) and how they tie in to predication for instance: https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://pvspade.com/Logic/docs/WarpWoo1.pdf&ved=2ahUKEwjt6su-mqGNAxWaw_ACHUQVOqQQFnoECCgQAQ&usg=AOvVaw1XwkMjcPAAZ0aM2Ne2b-c-

    I actually mentioned the common use of "regularities" and "patterns" (always in scare quotes!) earlier in this thread. Either the Kant-like (Kant-lite?) skepticism here is absolute, and we get subjective idealism, or it isn't, and those terms must have some determinate form and content.
  • The Myopia of Liberalism


    I'm not sure if Mill was necessarily motivated by racism. His initial examples on slavery are from ancient Europe. He just has a view of "natural man" as wholly without liberty that I find questionable.

    Looking over the historical moments you cite, all I can do is repeat that such a picture would have us believe that some monolithic thing called liberalism never gave a damn about morals or justice or good government, caring only for individual freedoms no matter the cost, tearing down whatever was necessary to achieve them, etc., etc.

    I read Rawls a pretty long time ago now, and I am not sure if I had a chance to finish the book. I don't think liberalism "never gave a damn about morals, justice, or good government." Liberal theorists often write about these at great length. They just tend towards defining them in terms of the individual. Good government is a priority, and can be given extremely expansive focus in progressive liberalism, but it's also there primarily to enable the freedom of the individual to flourish. There is a marked contrast here with classical and Marxist political theory. Individual freedom is generally raised up over morality because moral questions are privatized to the extent possible (with "rights" holding down what morality enters into the public sphere). This only makes sense. If you have a theory of government that avoids giving answers on man's telos, instead making this a private, individualized matter, then what is important is enabling the private exploration and attainment of that telos, whatever the individual determines it to be.

    Lots of liberal theory sounds utopian, that's true. I remember thinking that with Rawls. But this is also true of plenty of Marxist theory, and even some "Third Position" crypto-fascist theory that stays away from racism and militarism. It's certainly true of Hegel as well. Obviously, no theory is realized perfectly, but I think a useful question is if contradictions or intrinsic tensions in the theory lead towards problems in their implementation. I think this is the case in both liberalism and Marxism. It's perhaps most obvious in conservative liberalism in the way it necessarily sidelines and renders irrelevant the very cultural and religious institutions it most wants to conserve by excluding them from the state and thus the broadest conception of the common good (through a commitment to "small government") while also fetishizing a market that tends to bulldoze culture.

    I do recall one specific explanation from Rawls to the effect that his theory must exclude a notion of "just desert." This tends to hold just as true for conservative liberal theories, which instead look at rights. By contrast, progressive and conservative politicians and citizens are constantly justifying their platforms in terms of just desert. The difficulty for liberal theorists is that, once the question of man's ultimate telos is privatized, one loses any ability to ground standards of excellence, and thus of desert.

    This is maybe the most important contradiction, in that it seems to contradict human nature. I have seen many an avowed moral anti-realist, and many people who claim that political theory should not be based on morality, but I've never seen them discuss any political topic at length without falling into continually invoking standards of excellence and desert. A focus on rights alone leaves political theory chestless (and arguably losing any sense of telos also leaves all notion of rights critically undermined. Fukuyama, a great advocate of liberalism, expresses just this concern at the end of The End of History and the Last Man.)
  • The Forms


    Principles might be a better way to understand it.


    The epistemic issues raised by multiplicity and ceaseless change are addressed by Aristotle’s distinction between principles and causes. Aristotle presents this distinction early in the Physics through a criticism of Anaxagoras.1 Anaxagoras posits an infinite number of principles at work in the world. Were Anaxagoras correct, discursive knowledge would be impossible. For instance, if we wanted to know “how bows work,” we would have to come to know each individual instance of a bow shooting an arrow, since there would be no unifying principle through which all bows work. Yet we cannot come to know an infinite multitude in a finite time.2

    However, an infinite (or practically infinite) number of causes does not preclude meaningful knowledge if we allow that many causes might be known through a single principle (a One), which manifests at many times and in many places (the Many). Further, such principles do seem to be knowable. For instance, the principle of lift allows us to explain many instances of flight, both as respects animals and flying machines. Moreover, a single unifying principle might be relevant to many distinct sciences, just as the principle of lift informs both our understanding of flying organisms (biology) and flying machines (engineering). 

    For Aristotle, what are “better known to us” are the concrete particulars experienced directly by the senses. By contrast, what are “better known in themselves” are the more general principles at work in the world.3,i Since every effect is a sign of its causes, we can move from the unmanageable multiplicity of concrete particulars to a deeper understanding of the world.ii

    For instance, individual insects are what are best known to us. In most parts of the world, we can directly experience vast multitudes of them simply by stepping outside our homes. However, there are 200 million insects for each human on the planet, and perhaps 30 million insect species.4 If knowledge could only be acquired through the experience of particulars, it seems that we could only ever come to know an infinitesimally small amount of what there is to know about insects. However, the entomologist is able to understand much about insects because they understand the principles that are unequally realized in individual species and particular members of those species.iii

    Plato's Theory of Forms is a particular metaphysical explanation of unifying principles. Whether it was even originally intended as the sort of naive "two worlds Platonism" that is often associated with Plato today is an open question (I for one am doubtful). But either way, Aristotle and then the Neo-Platonists make some useful elucidations of the theory (how much they are really altering it is also an open question).

    Plato didn't think there was a form for every generalizable term. This is why examples using artifacts are not good counterexamples for pointing out problems with the Theory of Forms. One of the points of the theory is to be able to distinguish between substance and accidents/relation, but in artifact examples these become easily confused. Hence, books might not be a great example. Plato's student Aristotle rejects the idea that Homer's Iliad would have a definition and also casts doubt on even simple artifacts having essences, and I think he is in line with his old master here. If there has to be a form for every term, and there are potentially infinite, relatively arbitrary terms, then the forms would be useless for doing what they are called in to do.

    Plato rejects materialist attempts to explain everything on the basis of that of which it was made. According to Plato, the entities that best merit the title “beings” are the intelligible Forms, which material objects imperfectly copy. These Forms are not substances in the sense of being either ordinary objects as opposed to properties or the subjects of change. Rather they are the driving principles that give structure and purpose to everything else. At Sophist (255c), Plato also draws a distinction between things that exist “in themselves” and things that exist “in relation to something else”. Though its precise nature is subject to interpretation, this distinction can be seen as a precursor to Aristotle’s distinction between substances and non-substances described in the next section, and later followers of Aristotle often adopt Plato’s terminology.

    https://plato.stanford.edu/entries/substance/
  • The inhuman system


    To start, I just wanted to say that it's fun to discuss things with you.

    Thanks, you as well.

    I have mostly read stuff on Taoism that is tied to its contemporary formulations, so that might be the discrepancy. I don't know much about the historical development and it's quite possible that the focus on self-cultivation comes through later thinkers and cross-pollination between Confucius' tradition and Buddhism, both of which have a sort of virtue ethics.

    I've seen Chuang Tzu presented as laying out a sort of model for self-cultivation in some anecdotes. So for instance, there is a butcher who becomes incredibly skilled in his trade, and it is because he has ceased to try to implement a sort of false constraint on his art, or even to "see a cow" (IIRC), but has instead learned to "flow" totally with nature. This interpretation might rest on later additions though.

    This is, at first glance, very different from the Western focus on knowledge of intelligible forms, but I don't think it's quite as different as it would seem. Aristotle is, for instance, in very many ways, creating a process metaphysics in the Physics. Unchanging form is an abstraction (and in later formulations, only "static" in the infinite, simple being of the Logos, but never in the world of nature). And there is also the very strong conception of nature as a whole, substantial forms only being what makes things relatively more or less wholes, unities, and intelligible, and the goal of action in harmony and accord with nature (which tends to slip out of modern ethics).
  • The Forms


    It's not that all predication is equivocation, but that ordinary language is flexible and dependent on context.

    Right, but if one does not distinguish between univocal and equivocal usage, then common facts such as "running involves legs" become unequivocally false, because refrigerators, rivers, roads, and noses all "run." Ordinary language involves equivocal, analogical, and univocal predication. Form, the actuality of things, relates to the latter two.

    This is not a threat to logic, which can happily rely on univocal terms.

    Maybe not to formal logic, but the primary use of logic, including in the natural sciences, uses natural language. So, it would be problematic if equivocity rendered something like natural language syllogisms invariably subject to vagueness.

    But would this leave formal logic in a good place? Any term used in formal logic, say M for "man," couldn't correspond univocally to any natural language usage of "man" if all terms were subject to the same vagueness as "game." Formal logic and natural language would be talking about different things.

    Our understanding of words is shaped by practical use, not metaphysical essences. In this view, terms like "round" or "red" don't require metaphysical forms to function meaningfully in context, nor does logic.

    And what determines practical use? Here is the argument: existence is prior to speech. There are round things, ants, trees, etc. prior to speech, and prior to the existence of any "language community." For example, the Earth is spherical and it was spherical prior to man deciding what the token 'spherical' should mean. It was true that the Earth was spherical prior to any man declaring it as such.

    Unless "practical use" is determined by nothing at all, or by nothing but the sheer human will, as uninformed by the world around it, then it will be informed by the being of things (through the senses). A term like "round" is "practically useful" precisely because round things exist prior to the creation of the term (or of language itself). Children who have not learned the word "round" presumably still experience round things (and indeed they are capable of of sorting shapes prior to learning their names). Experience is prior to naming. But the form is called in to explain how things are round, ants, trees, etc., not primarily to explain how words work.

    Nor does realism suppose any sort of metaphysical super glue between tokens and forms in the way you present it. Indeed, Plato has Socrates spend a lot of time exploring how people mean quite different things by using the same token. If Plato held the naive view you attribute to him, then the opening books of the Republic, where "justice" is being defined and used in radically different ways, shouldn't exist.
  • What is real? How do we know what is real?


    You seem to be trading on an equivocal idea of intuition. Self-evidence obtains when something is true by definition. We don't need intuition to see it, it is obvious by virtue of the meaning of the terms. If you make a statement that contradicts itself, it is clear that you haven't asserted anything because you have asserted two things which cancel each other out.

    I'll agree that there are multiple notions of "intuition" and "understanding" that are unhelpfully related but distinct. I was referring to "what is self-evident," which is often attributed to "intuition" because it does not rely on discursive justification, but is rather the starting point for discursive justification (and in some philosophy, also its ending point).

    I don't know if I would necessarily identify the self-evident with "what is true by definition." If one takes "definition" in the Aristotelian sense (i.e., things have a proper definition), then definitions are generally not self-evident, whereas if one takes more nominalist accounts, then the definition might be nothing more than stipulation (which must be empirically arrived at, and so is not self-evident either).

    Maybe I should have said "intellectus," but I don't think many people are familiar with that term.



    We can theorize further and posit noesis, direct knowledge, innate intelligibility and so on, but we have no way of testing those theories.

    True, but this is equally the case for the opposite claim that reason is nothing but discursive ratio/computation. And that claim faces the problem of being wholly unable to explain either the phenomenological aspects of understanding and knowledge (hence eliminative materialism) or how "something computes so hard it begins to have first person experiences and understanding." So too for the symbol grounding problem, the Chinese Room, etc.

    More radical forms of empiricism start from the presupposition that the phenomenological side of cognition is "off limits," but when this has tended to bottom out in either the denial of consciousness (eliminativism) or the denial of truth and almost all forms of knowledge, one might question whether empiricism has become self-refuting at this point (or at least proven to be a bad epistemology). At any rate, even empiricists tend to accept that empiricism is not justifiable in the terms of empiricism. But the difficulty here is the tendency for there to be equivocation between "empiricism" as any observation and experiment at all (in which case the Scholastics and even the Neoplatonists are "empiricists") and the more radical tradition coming out of Hume and continuing in logical positivism, etc. The latter equivocation is used to argue that rejecting "empiricism" is tantamount to rejecting "science and the scientific method," even though plenty of famous inventors and scientists have rejected the more radical philosophical school, but not "science."



    Such a trivial logic would, by the very fact that no one agrees with it, have the singular misfortune of being quite useless.

    Why does no one agree with it? You seem to be saying that trivial logics are useless just because no one accepts them, but then this leaves totally unexplained how the choice of logic is made in the first place. Presumably, the world did not begin populated by people, with certain logics already popular or not popular.

    How would it be "useful" to affirm anything and everything that can be expressed just in case other people also agreed to do the same? There seems to be obvious problems with this.

    Rather, I'd say that trivial logics are unpopular because they are useless. People's preferences for different logics have to do with how they represent being.
  • GETTIER – Why Plato Would Reject Justified True Belief (a Platonic dialogue)


    Do you mean Borges' Library of Babel or the story from Genesis or something else? (The first is one of my favorite ways to think about this sort of thing).
  • The Forms


    Right, but it's worth pointing out that this is sometimes denied (i.e., there is no truth about "what a thing is") and people still try to do ontology with this assumption. Although, when they—as they often do—appeal to "regularities," "patterns," and "constraints" that are prior to the act of "naming things what they are," I do think there is a problem, since these terms themselves either have some form or are simply contentless hand-waving to avoid a slip into an absolute voluntarism (where the will makes anything what it is by a bare act of choice).

    IMO, this mostly comes down to the elevation of potency over actuality. When the order is inverted, one always has limitless possibility first, and any (arbitrary) definiteness only afterward. Voluntarism plays a large role here. It becomes the will (of the individual, God, the collective language community, or a sort of "world will") that makes anything what it is through an initial act of naming/stipulation. But prior to that act, there is only potency without form and will.

    Presumably though, you need knowledge of an object in order to have any volitions towards that object. This is why I think knowing (even if it is just sense knowledge) must be prior to willing, and so acquisition of forms prior to "rules of language," and of course, act before potency (since potency never moves to act by itself, unless it does so for no reason at all, randomly).

    Edit: I suppose another fault line here that ties into your post (which I agree with) is: "truth as a property of being" versus "truth solely as a property of sentences." In the latter, nothing is true until a language has been created, and so nothing can truly be anything until a linguistic context exists. That might still require form to explain though, because again, it seems some knowledge must lie prior to naming.
  • The Forms

    “In virtue of what are all just acts just, or all round things round?”—is itself misleading. It presumes there must be some essence or metaphysical commonality underlying all uses of a term. But why should this be so? Why should there be a thing that is common to all our uses of a word? Why should we not, for example, use the same word to name different things? And if one looks at the uses to which we put our words, it seems that this is indeed what we do. The red sports car and the red sunset are not the same colour, despite our using the same word for both. The round hill and the round ring are quite different.

    Yes, those would be instances of equivocal predication or pros hen predication, etc. What's the claim here though, that all predication is equivocal? Then you don't have logic. That terms are never predicated univocally? Then you also don't have logic.

    But a basketball and baseball are not spherical in different ways, nor is red paint splashed on a wall here a different red than a hockey stick painted with the same red paint. "Some predication is equivocal" is not a good argument for "no predication is univocal."

    Who in "analytic logic" says otherwise?

    There simply need be nothing common to all red or round things. And perhaps the same is true for the Just. Rather there may be many, diverse and overlapping similarities. The classic example here is of a game: we use the word "game" quite successfully despite not having at hand a rule that sets out for us what counts as a game. And indeed, it seems that were any such rule proposed, it would be a simple matter to find or invent a counter instance, a game that does not fit the rule. Yet we manage to use many, many words without access to such rules.


    Is all predication supposed to be vague in this way? That seems pretty problematic. That'd be supposing all terms are vague.

    A basic syllogism such as:

    Socrates is a man.
    All men are mortal.
    Therefore Socrates is a mortal.

    Would be in jeopardy if "man" is some vague notion of this sort. Certainly, one couldn't do geometry this way. Imagine trigonometry with a triangle defined in this way.
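
    Put roughly in first-order form (a sketch; the predicate and constant names are just illustrative), the syllogism's validity depends on "man" picking out one and the same predicate in both premises:

    1. Man(socrates)                 (Socrates is a man)
    2. ∀x (Man(x) → Mortal(x))       (all men are mortal)
    3. Mortal(socrates)              (from 1 and 2, universal instantiation and modus ponens)

    If "man" were only a loose family-resemblance term, so that premise 1 used Man₁ and premise 2 used Man₂, the argument would have four terms and nothing would follow.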

    We might see this more clearly by asking how we learn what is red, what is round, or what is just. We don't learn to use these words by becoming familiar with a form for each. We learn to use these words by engaging in the world and with those around us

    This is simply question begging if taken as an argument against realism though. Perception, including perception of language, involves forms in realism. The form is what is transmitted to the intellect. As an argument (as opposed to say, just laying out an alternative theory, I'm not sure of your goal here) this would be akin to: "nominalism is correct because nominalist theories say so."

    Anyhow, how does one figure out how to "apply a rule for the word round" if there are not first round things? The form is, first and foremost, called in to explain the existence of round things, second our perceptions of them, and then language. It is not primarily about language because language was never considered "first philosophy" before the advent of analytic philosophy (i.e., "being and thought are prior to speaking"). People must be able to identify roundness to use words to refer to it.
  • The Myopia of Liberalism


    BTW, I think this is fair if the measuring point is 1925 (a century). But what if we use 1975, half a century? Or the end of the Cold War, when neo-liberalism was really taking off and liberalism ceased to have any competition to "keep it honest." Certainly, there has been some expansion of rights since then, but also a lot of backwards steps.

    Since then, median wages across the developed world have stagnated despite gains in productivity from the information revolution, while wages for lower income workers have actually tended to fall in real terms. Economic growth has tended to almost totally benefit a small elite, and economic mobility has been declining. The Black-White wealth gap in the US expanded to become larger than under Jim Crow, while America's underclass endures homicide rates above those of the Latin American states used to justify refugee status (or states with active civil wars in some cases).

    Plenty of other similar stuff I'm sure you're familiar with. My point would be that if a trend extends across half a century, and appears to be accelerating, it isn't a hiccup.



    Mill was against the institution of slavery as practiced, on liberal grounds. However, in "Considerations on Representative Government," he calls for compulsion over “uncivilized” peoples in order that they might lead productive economic lives, even if they must be “for a while compelled to it,” including through the institution of “personal slavery.” This is very similar to Locke's justification of slavery as "freedom from indolence," many of the American Founders' justifications of slavery as "temporary but necessary," and liberal justifications of colonialism up through the 20th century.

    I don't think these are equivalent to something like Kant or Hegel's statements on race, because these sorts of justifications were used in revised form by liberal theorists and statesmen through the end of colonialism, and the justifications of some of the more unsavory parts of neo-liberalism look very similar. If freedom is primarily (or at least largely) freedom to consume, then "economic growth" becomes a justification for all sorts of actions because it is "emancipatory in the long run." Also, they come directly out of the vision of freedom and the "state of nature" anthropology; they aren't some sort of ancillary comment tacked on to theories that would otherwise negate such views.

    The justification for colonialism also looks a lot like the justification for tearing down the Church and forcing monks and nuns out of the monasteries and convents in France, Italy, and Spain (or the mass executions of clergy in France). The people have to be "freed from custom" to live more individualist, productive lives. Hence, it isn't just a sentiment grounded in racism; the same logic extended to the Infernal Columns' actions in the Vendée in France.
  • The Myopia of Liberalism


    OK, as long as we don't equate these alleged problems with "the apocalyptic decline of Western civilization"!

    I maintain that Western Civilization has been in serious decline since the death of Marcus Aurelius and the ascension of his son to the purple! :cool: :rofl:

    Extreme cherry-picking, wouldn't you say? :smile:

    No, and it seems absurd to me to call this cherry picking when all the major liberal states engaged in absolutely massive colonial projects that they justified in the terms of liberalism, for most of their history, across most of the world's landmass, affecting most of the human population, largely stopping only when military defeat forced them to stop (and arguably, they just continued it by other means under neo-liberalism via less direct coercive measures to force liberalization, e.g. in Eastern Europe, Asia, Latin America, etc., including backing armed groups, coups, etc.)

    Locke and Mill I pick because they are foundational liberal theorists, but I could just as well point to America's Founding Fathers (the oldest example of liberalism in practice) or the justification of colonial rule and slavery by the liberal scions of the French Revolution. The big drive for abolition (which only targeted the most egregious practice of this sort) came from Christianity (as it did in Europe at the end of antiquity, where slavery was largely abolished), not liberalism. The "Battle Hymn of the Republic" is not a secular or deistic ode, for instance.

    Did some liberals object to these practices on liberal grounds? Yes, particularly to the exceptionally egregious institution of African chattel slavery, although even most of those who balked at hereditary slavery nonetheless championed colonial expansion over the rest of the American continent on liberal grounds (and the subsequent genocide of the native population).

    But saying that just because there were some unheeded liberal voices against colonial expansion across North America, into India, into almost all of Africa, into China (attempted but partly repelled), and the Middle East, or say, opening Japan to trade with artillery fire, etc., that this isn't "real liberalism" would be a bit like saying collectivization wasn't "real communism" because a handful of communists opposed it.





    Democracy can constrain liberalism?

    I think so. Desegregation was unpopular, even in the North where it largely had to do with bussing for schools (e.g. riots in Boston). It would have lost as a ballot question, even if African Americans were allowed to vote. If you're familiar with the way democracy interacted with sectarian politics in pre-war Lebanon or post-war Iraq, I think you can find lots of examples of this sort of tension between democracy and individual liberty.

    The Western liberal states have benefited from largely homogenous populations, so they haven't had these same tensions (lately). But that's because of both huge, sometimes coercive campaigns to create homogeneity and titanic rounds of ethnic cleansing to sort people across Europe.

    Besides, those that are sceptical about democracy (or neoliberalism) are nearly everybody simply angry about how badly the whole system is working currently: that it's only the rich or those close to power that benefit, or that there is corruption or inefficiency or useless bureaucracy. It's really only a very few people that are inherently against democracy as the vast majority believe that "the people" are still quite rational and capable of handling a democracy.

    Right, skepticism over "illiberal democracy" doesn't tend to result in a wholesale abandonment of democracy. Rather, complaints against Brexit, Trump, Erdogan, Orban, etc. are generally against "populism" and a democracy that is "too direct." Hence, advocacy for changes like a switch to closed list parliamentary systems, where party elites pick the MPs and people just vote for a party and their platform, or the advocacy for rank choice voting specifically as a means to preclude radical shifts in policy (both of these policies might be good ideas BTW).

    For progressives, checking democracy generally involves strengthening the reach and independence of the administrative state (the "deep state" of career professionals, technocracy), often at the expense of the directly elected executive, and using courts (and so appeals to other elites, judges) to expand rights that voters cannot overturn. Or progressives recommend something like a city manager system, where the executive is selected by elected representatives with the help of the administrative state itself. Whereas conservatives have tended to just want to weaken the state so that it cannot be wielded by the "people" against the individual.
  • The 'Hotel Manager' Indictment


    One of my lecturers in philosophy wryly pointed out that Hume’s condemnation at the end of his Treatise actually applies to the Treatise. ‘Take any book of scholastic metaphysic…’ The lecturer compared Hume, like the positivists after him, to the Uroboros, the mythical snake that swallows itself. ‘The hardest part’, he would say with a mischievous grin, ‘is the last bite’.

    :rofl:

    Historically at least, this seems to have proven quite true.
  • The Myopia of Liberalism


    Gotcha, so I guess our disagreement might be this: I think the rather titanic problems of liberalism in the current moment, not least of which is the rise of the far-right and long term discontent over the replacement migration strategy vis-à-vis growth, the long term problems of globalization in the developing world (where in at least some instances it appears to retard growth and good governance), and the looming ecological crises, are not accidental to liberalism itself, but directly attributable to it.




    As mentioned earlier, I think Locke and Mill's justification of enslaving populations by force to "liberate them from indolence," is a prime example. Cold War colonial war rhetoric is also a good example. Obviously, the wars were so difficult because people in the occupied states largely did not want to remain part of the colonial empires. But, they had to be "freed by force" because the communitarian ideologies (Marxism, political Islam, etc.) that held sway with large segments of the population oppressed individual and market rights (liberal freedom).

    Hobbes grounds the state entirely in the atomized individual in the "state of nature." The state has legitimacy just insofar as it is a better choice for individuals qua individuals to actualize their individual freedom (generally as fulfilling whatever desires they happen to have). This is the core assumption of "social contract theory," which is certainly still present in contemporary liberal theorists. An ideal society maximizes liberty for individuals as individuals (including liberty vis-à-vis desires for material goods, which is why "economic growth" and consumption play such an outsized role in liberal theory and welfare economics).

    This is the ordering of the higher (the common good) to the lower, the whole to the parts (in line with reductionist tendencies in materialist thought). The common good becomes merely a collocation of individual goods. The "veil of ignorance" is all about the individual for instance, and indeed the individual as initially abstracted from all community and common goods or social identity.

    By contrast, there is Hegel, one of the great critics of social contract theory:

    “My particular end should become identified with the universal end… otherwise the state is left in the air. The state is actual only when its members have a feeling of their own self-hood and it is stable only when public and private ends are identical. It has often been said that the end of the state is the happiness of the citizens. That is perfectly true. If all is not well with them, if their subjective aims are not satisfied, if they do not find that the state as such is the means to their satisfaction, then the footing of the state itself is insecure.”

    A common critique of liberalism is that this conception of the state (which often finds its way into legal decisions, particularly in the US through the Federalist Papers) only works so long as custom, culture, etc. continue to bind individuals together as wholes. Yet liberalism, and particularly capitalism, undermine all such connections, making liberalism self-undermining.
  • The inhuman system


    I don't know if it's that much of a contradiction. I suppose that quote, taken alone, could be read in a very Nietzschean or Sartrean light, but I have always seen Taoist notions of freedom set in opposition to the former, often as their polar opposite (although I think they are opposites that might meet at their limit). E.g.:

    In the Taoist philosophy, we find a strikingly different idea of freedom [from the Western one based on the absence of all constraints]. The Taoist conceives of freedom from the very opposite direction: instead of focusing on an absence of external constraint or coercion, the Taoist focuses on modifying the self that can be in conflict with external constraints. Instead of being critical of the external environment and requesting the environment to give room to the individual's desires or will, the Taoist requires the individual to be critical of him/herself, and to be in harmony with his/her environment.

    This Taoist idea of freedom logically starts from a realization that the constrained and the constraints are mutually dependent; without the constrained, the constraints would not exist as constraints. The founder of the Taoist philosophy Lao Tzu says: "Honor great misfortunes as you honor your own person. Only because you have your own person, you will have great misfortunes. Without a person, how could there be misfortunes?" Furthermore, the kind of misfortunes or constraints one has depends on the kind of individual one is. Limitations vary from one individual to another. As Lao Tzu's great follower, Chuang Tzu, says: "Fish live in water and thrive, but if men tried to live in water they would die." This clearly applies not only to the natural limitations of fish and humans, but to all subject-object relations. An individual's particular desires and ambitions also define particular constraints. Any anticipation or desire will bring a set of constraints. To shop-lifters the video monitors installed in stores are big constraints, but to the rest of us, they are nothing but video monitors. To smokers "No Smoking" signs are constraints, but non-smokers consider them to be protection.

    The more one desires or expects, the less one is free, because there are more constraints one has to break in order to have the desires satisfied or expectations fulfilled. We often think that powerful people have more freedom. But that is not always true, for they usually have more desires and ambitions. My two-year-old daughter has never felt short of money, even though she does not have any; but Donald Trump does...

    https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://scholarworks.gvsu.edu/cgi/viewcontent.cgi%3Farticle%3D1561%26context%3Dgvr%23:~:text%3DThe%2520Taoist%2520conceives%2520of%2520freedom,in%2520conflict%2520with%2520external%2520constraints.&ved=2ahUKEwiPwabp45iNAxVUM9AFHfTmOQQQ5YIJegQIFhAA&usg=AOvVaw0pNzCylqHtaoKsyMf_b3Mt

    There is obviously a similarity here with Indian thought and with Western pagan thought, with its struggle for ataraxia and apatheia (as well as the fruits of contemplation, e.g. "enlightenment" or "henosis," which have a more positive element).

    The emphasis on self-cultivation—and the role of the sage, the daoshi, and the zhenren—seems to follow the intuition of other traditions that the renunciatory move often isn't spontaneous, but rather requires received wisdom, reflexive discipline, and guidance—in a word, cultivation. This is a positive element in freedom though. The very fact that "most people" are too caught up in striving is evidence of how freedom is constrained through a sort of deficient cultivation and pernicious social forces. A person doesn't simply attain to renunciation through a natural maturation process. It's just that this move isn't (as much) dependent on external resources (but they still play a role, particularly in the teachings of the sage). This is like how Epictetus claims he became free while a slave, but that most masters are slaves, while he still credits exposure to Rufus's teaching as decisive.

    But, I don't agree with purely renunciatory philosophy because, taken to its logical conclusion, it tends to generate a reflexive freedom that actually bottoms out in the contentless negative freedom so dominant in modern Western thought. One needs a more robust consideration of man's telos and the notion of the summum bonum to which all goods can be logically ordered.

    This leads towards the later Pagan goal of philosophy as "becoming like unto God," present in the Christian tradition as theosis. But whereas the Pagan tradition tends to wash out all human particularity and concern for any telos of history (and so concern for mankind generally), this is not how the Desert Fathers and their descendants saw things. Dispassion means mastery over the passions, but not a constant suppression of the appetites and passions; rather, their purification and reorientation towards what is truly Good, Beautiful, and True (hence the Philokalia, the "love of Beauty," as the ascetic manual of Eastern Christendom, versus philosophy, the "love of wisdom"). This comes out of a more robust notion of natures with intelligible final causes, the human telos, and a metaphysics of Goodness. You can see this sort of dynamic in Attar of Nishapur's Sufi classic, the Conference of the Birds, which focuses on the total abrogation of the Self, versus Dante's Commedia, which can combine a robust respect for particularity and history right up to its climax in the beatific vision.

    The difference leads to a much more optimistic vision of the chances of "enlightenment" for all individuals and participation in the spiritual life. It also tends towards a broader notion of the cultivation required to develop reflexive and social freedom (the cenobitic monastic community as opposed to the life of the hermit, requires a social focus, even if on the small scale). But this makes perfect sense to me, because these things aren't easy to explain, and do require structure and guidance to be accessible to most people.

    Maybe the Incarnation is another important difference. This has infinite being breaking into the finite world, and the Church as the immanent, social, mystical body of Christ realized in history, through the Marian mode of the Church, "man giving birth to God and freedom in thoughts and deeds."

    But I don't think this is an outright contradiction. Rather, contradictions in each level of freedom give birth to a higher level (Hegelian style), leading from negative freedom, to reflexive freedom, to authenticity (since self-discipline can be turned against nature), to social freedom, to moral freedom, the capacity of societies to recognize and achieve the human good.
  • The inhuman system


    But in my experience with unhappy people, which is extensive, as I work in mental health and addiction - people often forget or overlook how fortunate their situation is and how much they tend to catastrophize.

    Yes, good point. I think that's very true. I think the OP sort of gets at the social forces that lead to that catastrophizing and lack of appreciation. It perhaps misses some others though. The work on deteriorating mental health for women and girls tends to highlight different, although related issues.

    I suppose that goes along with the discussion of renunciatory traditions, in that they attempt this sort of reevaluation.

    I would say that the amount of material goods one needs will tend to vary by culture and time. For example, to be unable to afford a private vehicle or phone in many contexts is now to be unable to find work in a culture that places a huge premium on work. But I don't think that necessarily means cars and phones do that much for happiness (useful as they no doubt are), at least not as a prerequisite. Or perhaps a better way to put it is that they take on special relevance in a culture where they are almost required for membership and recognition.
  • The inhuman system


    Well, fair enough, he might not even have been a real person. The text appears to be an accretion. Siddhartha Gautama might have been a better example.
  • GETTIER – Why Plato Would Reject Justified True Belief (a Platonic dialogue)


    Just to make sure I am understanding you, are you saying LLMs "produce knowledge" for us, or that something like ChatGPT actually "knows" things? Or is this a case of analogy?

    I would say LLMs contain knowledge in the same way books do. Indeed, they mostly just slam together text predictively from books and similar documents. I suppose it could be argued that they can also produce novel knowledge through synthesis, but this would still seem to me more like a "book that writes itself," than a "knower."
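
    To make the "book that writes itself" idea concrete, here is a toy sketch (my own illustration, not how any actual LLM is implemented): it counts which words follow which in a tiny made-up corpus and then generates text by repeatedly sampling a likely next word. Real LLMs learn these statistics with neural networks over vast corpora, but the basic "predict the next token from prior text" move is the same.

    ```python
    import random
    from collections import defaultdict

    # Toy "book that writes itself": tally next-word counts from a tiny
    # hypothetical corpus, then generate text by sampling likely continuations.
    corpus = (
        "the frog is alive . the frog is not dead . "
        "the cat is alive . the cat sat on the mat ."
    ).split()

    # follows[w] lists every word observed to come right after w.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start, length=12, seed=0):
        """Produce text by repeatedly picking a word seen to follow the last one."""
        random.seed(seed)
        word, out = start, [start]
        for _ in range(length):
            if word not in follows:
                break
            word = random.choice(follows[word])
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # stitches together phrases drawn from the "books"
    ```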
  • The inhuman system


    But possibly the best thing to do when one is fretting over how distorted and ambitious humans are is to go out and help others

    Isn't helping others precisely what people like Lao Tzu and St. Francis thought they were doing by telling people to stop chasing worldly ambitions?
  • The inhuman system


    Or perhaps the material goods you have listed are simply not the most important things for happiness? I don't think people kill themselves more often "because they have it too good," at any rate. So wouldn't this be evidence that those material goods are only so important and that we should stop hyperfocusing on them, particularly on growth in consumption (to the point of courting ecological disaster)?

    This reminds me of Byung-Chul Han's point that in the "achievement society" it is a "personal failure" not to achieve happiness.

    Of relevance:

    Say we have a privileged, wealthy guy with a "good family" who cares for him. He has lots of opportunities. And he follows the middle to upper class dictum: "get good grades and rack up accomplishments so you can go to a good college, and do the same there so you can get a good job, and then you can do what you want."

    He does this. No extraordinary evil befalls him. He has no extraordinary vices. Maybe he drinks or smokes pot a bit too much, or plays too many video games, or has a porn habit, or cannot get a girlfriend, or cannot keep to just one. Maybe not. Nothing out of the ordinary.

    And he's miserable. He's prime bait for radical ideologies of one sort or another precisely because he "did everything he was told," and is miserable. This isn't an uncommon phenomenon. That's sort of the recruiting mantra of radicals on the right and left, although it certainly helps if people struggle in the labor market or are "overeducated." We could imagine this sort of thing playing out across many gradations. It can even happen to the ultra wealthy (perhaps particularly to the ultra wealthy).

    Here is Han's point: in the autoexploitative context of modern liberalism, this man's unhappiness is a personal failure. The self is a project, and its happiness is a goal that has to be achieved as an accomplishment.

    And there are lots of men and women who have encountered this sort of "personal failure." Millions it would seem. So the question is, at what point do we stop thinking this is an aggregate of millions of personal, individual failures and begin to say it is a systematic, social failure or a philosophical failure?



    Hmmm. Do I believe this? Probably not.

    You don't think good, or at least adequate parenting, education, etc. are prerequisites for "living a better life," developing self-control, or having the capacity to be a good citizen?

    I feel like this is at least obvious in extreme cases. If you leave a baby in the wild it will die. If you raise a child like a zoo animal, keeping them in a cage and throwing them food scraps, they will not be free to develop into a flourishing human.

    But my point would be that we don't hit some threshold of "mere adequacy," after which human beings simply "become fully self-determining (as much as man can be) upon biological adulthood." This requires cultivation.

    I think there's some irony, or maybe contradiction, here. To a large extent, cultivation and education are the agents that immerse us in the sea of social expectations.

    Yes, one is not free to become a "good father," a "just leader," or a "good teacher," without fulfilling social expectations either. One of the problems of defining freedom simply as freedom from external constraint is that it makes social expectations a check on freedom. Indeed, a purely negative freedom makes all human relationships, including marriage and parenthood, limits on our freedom. As Hegel points out in the opening of the Philosophy of Right, taken to an extreme, such a negative freedom collapses into contradiction. One cannot make any determinate choice without in some way limiting oneself. Such a freedom is limited by choice itself, a contradiction. It also reveals itself to be arbitrariness to the extent that it demands that "what is free" is "determined by nothing," making it random action, the exact opposite of freedom.

    This is often where "authenticity as freedom" goes off the rails. Authenticity is important, but without reflexive freedom it is just following impulse and instinct.

    Hence the need for a more robust notion of reflexive and social freedom:


    To quickly define these terms:

    Negative Freedom is defined by a subject’s freedom relative to the external world. It is freedom from external barriers that restrict one’s ability to act, e.g., the government or thieves seizing your tools so that you cannot work.

    Reflexive Freedom is defined by a subject’s freedom relative to themselves. To quote Hegel, “individuals are free if their actions are solely guided by their own intentions.” Thus, “man is a free being [when he] is in a position not to let himself be determined by natural drives,” i.e., when his actions are not subject to contingency. Later philosophers have also noted that authenticity, and thus the free space and guidance needed for us to discover our authentic selves, is another component of reflexive freedom.

    Social Freedom is required because reflexive freedom only looks inward; it does not tie individual choices to any objective moral code. This being the case, an individual possessing such freedom may still choose to deprive others of their freedom. (This is the contradiction inherent in globalizing Nietzsche’s “revaluation of all values”).

    (Note: I have borrowed from and modified Axel Honneth’s work in Freedom’s Right in drawing up this typology)

    Since individuals will invariably have conflicting goals, there is no guarantee that anyone will be able to achieve such a self-directed way of life. Negative freedom is also contradictory because “the rational [reflexive] can come on the scene only as a restriction on [negative] freedom.” E.g., being free to become a doctor means being free to choose restrictions on one’s actions because that role entails certain duties.

    Social Freedom then is the collective resolution of these contradictions through the creation of social institutions. Ideally, institutions objectify morality in such a way that individuals’ goals align, allowing people to freely choose actions that promote each other’s freedom and wellbeing. Institutions achieve this by shaping the identities of their members, such that they derive their “feeling of selfhood” from, and recognize “[their] own essence” in, membership.

    In the language of contemporary economics, we would say that institutions change members’ tastes, shifting their social welfare function such that they increasingly weigh the welfare of others when ranking “social states.” In doing so, institutions help resolve collective action problems, prisoners’ dilemmas, etc. (a toy numerical sketch follows this passage). They allow citizens to transition into preferencing social welfare over maximal individual advantage.

    We are free when we do what it is that we want to do, and we can only be collectively free when we are guided into supporting one another’s freedom. Otherwise, there will always be some who are not free. Further, those who appear to have freedom will not be truly free. They will not be free to pursue any course they’d like, as they must always fear losing their freedom — losing their status — and becoming just another of the oppressed. Further, we do not have to balance freedom and happiness. Freedom entails happiness, as people will not do what makes them miserable if they are free to do otherwise.

    “My particular end should become identified with the universal end… otherwise the state is left in the air. The state is actual only when its members have a feeling of their own self-hood and it is stable only when public and private ends are identical. It has often been said that the end of the state is the happiness of the citizens. That is perfectly true. If all is not well with them, if their subjective aims are not satisfied, if they do not find that the state as such is the means to their satisfaction, then the footing of the state itself is insecure.”

    — This and all quotes above from Hegel’s Philosophy of Right
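
    To make the welfare-function point above a bit more concrete, here is a toy numerical sketch (my own illustration, using a standard hypothetical prisoner's dilemma payoff matrix and a made-up weight w for how much each player counts the other's payoff). With w = 0, defection is each player's best response no matter what the other does; once the other's welfare is weighted heavily enough, cooperation becomes the best response instead.

    ```python
    # Hypothetical 2x2 prisoner's dilemma: payoff[(my_move, their_move)] = (mine, theirs).
    payoff = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }

    def utility(my_move, their_move, w):
        """Own payoff plus w times the other player's payoff (other-regarding tastes)."""
        mine, theirs = payoff[(my_move, their_move)]
        return mine + w * theirs

    def best_response(their_move, w):
        """Which move maximizes my utility, given the other player's move."""
        return max(("C", "D"), key=lambda move: utility(move, their_move, w))

    for w in (0.0, 0.5, 1.0):
        print(w, best_response("C", w), best_response("D", w))
    # w = 0.0 -> D D (the classic dilemma); w = 1.0 -> C C (cooperation dominates)
    ```

    Nothing hangs on these particular numbers; the point is only that changing how members rank "social states" changes which actions are individually rational, which is the sense in which institutions can dissolve collective action problems.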

  • What is real? How do we know what is real?


    That's an interesting question. I have heard, but not really looked into, the idea that Egyptian Memphite Theology contains an early version of Plato's Theory of Forms. This wouldn't be totally out of left field because there has always been a story attached to Plato that he went to study with the Egyptian priests and learned their wisdom when he was young. Michael Sugrue, among others, also connects Parmenides and thus Plato to older traditions coming out of India and the Orphic tradition (hence the transmigration of souls according to one's karmic/virtuous actions in life).

    This would place these ideas pretty far back, but not necessarily before the Bronze Age collapse. However, and I might be conflating later notions of Brahman with earlier ones here, those notions of Brahman as "fundamental reality" would seem to go all the way back to the second millennium. Likewise, ancient Sumerian and Egyptian myths (and Native American ones) seem to at least have the idea of illusion vis-à-vis magic (but this is perhaps not quite the same thing).

    You might be on to something, but I would guess that there is also a more primordial grounding of a reality/appearance distinction in the phenomena of dreams, optical illusions, mistakes of judgement, the fallibility of memory, and deception in warfare and hunting (which has apparently always been around). Yet I could absolutely see how civilization, and the problem of standards, media of exchange, and commerce could inflate this notion into something with greater depth.



    Why would any one agree or not agree to a logic? That's the question.

    Odd.

    You might find this helpful: https://philosophy.stackexchange.com/questions/91492/whats-with-philosophers-and-their-use-of-quotation-marks/91501#91501
  • The Myopia of Liberalism
    I am curious if proponents of liberalism would at least agree with this analysis though:

    There are three broad principles of the modern liberal state: capitalism, democracy, and liberalism.

    In my view, it seems that both capitalism and democracy are subservient to liberalism. For instance, progressive liberals are willing to constrain capitalism precisely because they see it as conflicting with liberalism. Conservatives are skeptical of this, but only because they don't see unrestrained capitalism as at odds with liberalism.

    There are pretty vocal groups on the left and right who are skeptical about democracy, precisely because democracy can constrain liberalism. Hence, I would say liberalism is the highest principle. "Freedom over all else," with freedom obviously being the ideal of freedom in the liberal tradition.
  • The Myopia of Liberalism


    That would make sense if Trumpism and the rise of the far-right in Europe were some sort of foreign, extrinsic force attacking liberalism from without, rather than something produced by liberalism's own contradictions. But Trump is just an extension of trends within liberalism (and not just the US) going back decades.

    Likewise: "you either affirm neo-liberalism and late-capitalism or you want a return of polio, Jim Crow, slavery, etc." seems a bit much. Do any of the traditionalist critiques you have in mind argue for a return to such things? I doubt it. So what's the claim then, that all of the advancements you've listed were primarily caused by liberalism and would simply be unachievable without it? That without liberalism one must have slavery?

    Historically, liberalism actually opposed many of the things you mentioned, often quite aggressively, and only grudgingly acceded to them due to its need to combat socialism (or when it was outright defeated on these issues). The end of child labor, the advent of state pension systems, the end of Jim Crow—these were all grudgingly accepted by liberalism after being advocated for primarily by socialists (but to be fair, progressive liberals too). Often (as in the case of federal support for the end of segregation) liberalism explicitly embraced these in terms of them being a "lesser evil" in the fight to contain communism. For instance, liberalism only "stopped colonialism" after killing millions of people trying to sustain it (and in the French case, seriously considering using nuclear weapons on Vietnam to "keep it free"). The military defeat of the liberal nations then becomes, in the revisionist history, "something nice liberalism did." But the liberal states didn't give up their colonies because of "open ended liberal debate," but because they lost on the battlefield or risked imminently doing so if they tried to force the issue.

    Competition was a check on liberalism and in order to compete liberalism had to sublate elements of nationalism and socialism, making them core parts of liberal norms (e.g. "an Algeria for Algerians," not just liberal rights for Algerians under a French state; the adoption of core elements of the socialist platform into virtually all liberal states). I don't think it's any coincidence that median wages across the developed world stagnated, despite robust productivity growth, as soon as liberalism's last opponent collapsed, or that standard of living and life expectancy began to decline following the triumphant "End of History" victory. What the "End of History" did was let liberalism and capitalism go back to being more fully themselves—back towards the Gilded Age, but with Christianity also increasingly out of the way as a rival/check on capitalism.

    Polio and measles are interesting things to mention, since these are coming back in the US thanks to the liberal ethos and erosion of all sources of authority outside the coercive power of the state.

    At any rate, critiques of liberalism do not claim that liberalism and capitalism erode tradition and culture overnight. Indeed, that they don't is part of the problem re the politics of replacement migration. They have done so steadily however, which is precisely why these critiques claim we have Trump, a Europe that is unable to stomach short term declines in consumption to check Russian aggression, etc. Yet the dominant, hegemonic, now globalized force at "the End of History," which faces no real rivals, somehow manages to defend itself by claiming its flaws are mostly really just attacks from without (this is, IMO, only plausible in the case of radical Islam).



    Right, that's a pretty common response, and in line with Fukuyama's argument. Liberalism is inevitable and human nature. I disagree on that obviously. I will just note that this same claim was long advanced by reactionary monarchists in much the same way. Monarchy was natural. It was in line with human nature itself. It was inevitable, and evidence to the contrary was a temporary aberration. This was also the claim of Marxism. It too was inevitable. Both collapsed, and there were signs that they would do so prior to the fact. For my part, I don't think humanity can reach the End of History while life entails the contradictions that liberalism embodies. Just the problem of the Last Man identified by Fukuyama should have been enough to clue him into that IMO.

    But the end of the Ancien Régime, the Tsardom, and the Soviet Union were not the apocalypse. The end of liberalism will not be the "end of the world," even if "it is easier to imagine the end of the world than the end of capitalism." It will almost certainly be painful though, as was the death of the monarchies, but that was hardly a reason to keep traditional monarchies around forever, or to be a reactionary.
  • What is real? How do we know what is real?


    The most important phrase, perhaps, is the first, since it links intelligibility with "anything being anything" -- thinking with being, in other words. I believe this is probably true, as a description of consciousness in the world. And that may be good enough, since philosophy doesn't pretend to tell us what philosophy (thinking) would be like, if no one were doing it! It does, however, often try to talk about what the world is like, unmediated by the experience of human consciousness. From that perspective, can we say that "there can be no 'physical order' without an intelligible order by which things are what they are"? We simply don't know.

    This skepticism relies on a particular metaphysics of consciousness and appearances. In materialism, potency is king, thus contingency and accident reign over necessity and essence. Consciousness becomes an accidental, contingent representation of being (i.e. representationalism). Appearances are, or at least can be, completely arbitrarily related to reality. It's a world where intelligibility must be projected onto things, or at least "constructed by the mind" (and this also involves a different conception of reason and knowledge). Obviously, if appearances can be arbitrarily related to reality, so can language (by contrast, we could consider someone like Hegel, who would argue that the historical evolution of language and logic cannot be arbitrary).

    That's sort of the deep separation between modern and pre-modern metaphysics, their understanding of appearances/eidos. Plotinus and a number of his followers take up the criticisms of Sextus Empiricus on what are, in some ways, very modern questions of the sort you mention, but have a much easier time resolving the problems because of these differences. So too Aquinas' consideration of the question of "if the mind knows its own phantasms instead of things."

    I don't want to get into all that, except to say the priority of potency (sheer possibility) over actuality is a presupposition of the "raw material world that can be arbitrarily related to intelligibility." First there is potency, and then there is something that actualizes something from it (presumably, from potency itself). One of the counterarguments against this is that this doesn't make sense: sheer potency, being nothing in particular, cannot result in any specific actuality; act is always prior to potency. But modern thought tends towards just denying this. Actuality comes either out of God's sheer, indeterminate power/potential (all act coming from the Divine Will), or in later atheist cosmology our actual world springs from the possibility space of possible worlds "for no reason at all" (and being a mathematized world, intentionality and meaning are either illusory or spring from man's will as power).

    Which is just to say that, while there are other issues, I think this might boil down to the priority of act over potency or vice versa. That's very abstract, but one should expect such for the root of major metaphysical differences.

    I wrote that, and then recalled I had a good quote on this re causes, and why causes used to bridge the sort of gap you're bringing up, but do not with Humean causation:

    Forms had explanatory power in the older realist framework, not because general belief in that power was supposed to replace the empirical work of discovering and characterizing how they operated, but because confidence that there were such causal powers helped to account for the order of nature and the very possibility of successful scientific inquiry.

    It is commonly said that modern science neglects formal causes but attends to efficient and material causes; but classically understood, efficient and material causes cannot function or even be conceived without formal causes, for it is form which informs matter, giving concrete objects their power to act on other objects. The loss of formal causality is thus in a sense the loss of efficient and material causality as well—an implication that is not quite fully realized until we see it brilliantly explored in the philosophy of David Hume.

    Of course, the gravity of the loss of teleology is also evident in the realm of ethics. Ockham was no libertine or relativist, but he prepared the way for the intractable confusion of modern moral reflection. Morality is concerned with ends, and humans, having the natures they do, need to acquire certain further qualities or forms—virtues—which help them fulfill their essential natures and achieve their ultimate end. Alasdair MacIntyre has most famously traced the inevitable failure of the Enlightenment project to explain morality without teleology. Ockham’s denial of forms and formal causality is unquestionably part of the conceptual disaster that left Enlightenment thinkers with only misunderstood fragments of a once very different project of moral theorizing.

    There is another, even more basic, implication of the nominalist rejection of forms and formal causality. In the realist framework, the intrinsic connection between causes and effects was particularly important for explaining how the mind knows the world; concepts formed by the mind, insofar as they are causally connected to things which are the foundation of those concepts, necessarily retain some intrinsic connection to those things. While we can be mistaken in particular judgments, we can be assured of the basic soundness of the mind’s power, thanks to the intrinsic connection between concept and object. The kind of radical skepticism Descartes proposed, even if only methodologically, was simply never entertained through most of the middle ages.

    More classical versions of skepticism, usually having to do with the fallibility of the senses, were commonplace, but the possibility of a complete incongruity between the mind and reality—such that even mathematical concepts could be the product of some deceptive manipulation and have no connection to the mathematical “realities” they seem to represent—this was not available in a realist framework for which concepts are formally and so essentially related to their objects. Ockham’s nominalist innovations almost immediately raised the specter of such radical doubt; this was noticed not only by the first generation of Ockham’s critics, but even by Ockham himself, who proposed thought experiments about God manipulating our minds to make us think things that are not true. For Ockham, such thought experiments were possible not only because of God’s absolute transcendent power, but because the human mind retained for him no intrinsic connection to an intelligible order. Ockham was no skeptic, and he was no Descartes; indeed, he was rather confident in the reliability of human cognition. But the law of unintended consequences applies in the history of philosophy as elsewhere, and it was only a matter of time before some philosopher exploited, as fully as Descartes did, the new opportunity of skepticism made possible by the nominalist rejection of forms and formal causality.

    Accordingly, Thomists and other critics of Ockham have tended to present traditional realism, with its forms or natures, as the solution to the modern problem of knowledge. It seems to me that it does not quite get to the heart of the matter. A genuine realist should see “forms” not merely as a solution to a distinctly modern problem of knowledge, but as part of an alternative conception of knowledge, a conception that is not so much desired and awaiting defense, as forgotten and so no longer desired. Characterized by forms, reality had an intrinsic intelligibility, not just in each of its parts but as a whole. With forms as causes, there are interconnections between different parts of an intelligible world, indeed there are overlapping matrices of intelligibility in the world, making possible an ascent from the more particular, posterior, and mundane to the more universal, primary, and noble. In short, the appeal to forms or natures does not just help account for the possibility of trustworthy access to facts, it makes possible a notion of wisdom, traditionally conceived as an ordering grasp of reality.

    Preoccupied with overcoming Cartesian skepticism, it often seems as if philosophy’s highest aspiration is merely to secure some veridical cognitive events. Rarely sought is a more robust goal: an authoritative and life-altering wisdom. Notice: even if contemporary philosophers came to a consensus about how to overcome Cartesian doubt and secure certainty, it is not clear that this would do anything to repair the fragmentation and democratization of the disciplines, or to make it more plausible that there could be an ordered hierarchy of sciences, with a highest science, acknowledged as queen of the rest—whether we call it first philosophy, or metaphysics, or wisdom

    "What's Wrong With Ockham?"
  • The 'Hotel Manager' Indictment


    It's an interesting subject. The empiricist tradition often justifies itself by pointing to the fruits of modern science and ascribing these to its philosophy (and thus to its rejection of much of what came before). However, historically, the "new Baconian science," the new mechanistic view of nature, and nominalism pre-date the "Great Divergence" in technological and economic development between the West and India and China by centuries. If the "new science," mechanistic view, and nominalism led to the explosion in technological and economic development, it didn't do it quickly. The supposed effect spread quite rapidly when it finally showed up, but this was long after the initial cause that is asserted to explain it.

    Nor was there a "great divergence," in technological progress between areas dominated by rationalism as opposed to empiricism. Nor does it seem that refusing to embrace the Anglo-empiricist tradition's epistemology and metaphysics has precluded people from becoming influential scientific figures or inventors. I do think there is obviously some sort of connection between the "new science" and the methods used for technological development, but I don't think it's nearly as straightforward as the empiricist version of "Whig history" likes to think.

    In particular, I think one could argue that technology progressed in spite of (and was hampered by) materialism. Some of the paradigm shifting insights of information theory and complexity studies didn't require digital computers to come about, rather they had been precluded (held up) by the dominant metaphysics (and indeed the people who kicked off these revolutions faced a lot of persecution for this reason).

    By its own standards, if empiricism wants to justify itself, it should do so through something like a peer reviewed study showing that holding to logical positivism, or some similar view, tends to make people more successful scientists or inventors. The tradition should remain skeptical of its own "scientific merits" until this evidence is produced, right? :joke:


    I suppose it doesn't much matter because it seems like the endgame of the empiricist tradition has bifurcated into two main streams. One denies that much of anything can be known, or that knowledge in anything like the traditional sense even exists (and yet it holds on to the epistemic assumptions that lead to this conclusion!) and the other embraces behaviorism/eliminativism, a sort of extreme commitment to materialist scientism, that tends towards a sort of anti-philosophy where philosophies are themselves just information patterns undergoing natural selection. The latter tends to collapse into the former due to extreme nominalism though.



    By all means, I'm just laying out the case as I see it. That the sciences involve discussions of causality, identity, emergence, or universals just seems to me like a good starting point for common ground.

    Anyhow, people wedded to the Wittgensteinian approach are constantly making metaphysical assertions on TFP and other venues. Rorty's use of Wittgenstein is a fine example. And these assertions are based on an analysis of language as prior to metaphysics (which I'd argue just results in implicitly presupposing an unclarified metaphysics for that analysis of language). Yet if the very thing in question is the existence of, or role of, metaphysics/first philosophy, and one turns to philosophy of language as the arbiter of this question, it seems that one is already picking an answer, no?

    Now, the classical metaphysician must do something similar, but it's explicit. "Metaphysics is first philosophy because it is most general." This is different from: "there is no first philosophy, or if there is, we must be skeptical about it, thus we must analyze language to decide the issue" (i.e., philosophy of language is de facto first philosophy, and will decide the issue of first philosophy as first philosophy).

    The reason I think this is often not profitable is because, as noted above, I think metaphysical assumptions are inescapable. So, the analysis just builds in certain assumptions. I think I pointed this out to you before vis-à-vis the extreme authority given to Wittgenstein's "rule following argument," which is often taken as "this is something that is always and irrefutably true about rule following," not "this is what is true about rule following given we grant Wittgenstein his metaphysical and epistemic presuppositions." Wittgenstein's assumptions, premises in the argument, then have to later be analyzed in light of the conclusions of that very same argument, i.e., "here is what can be said about epistemology and metaphysics, and how we can justify them, given what we have already said about language." The horse cannot pull the cart on the grounds that we have already set the cart before the horse.

    That is why I tend to be skeptical of the approach in a nutshell. There is nothing wrong with wrangling about definitions IMO, it's a time honored tradition.



    I don't like the term 'universal' much because I think it's loaded with metaphysical baggage, and it really doesn't mean anything more than 'general'

    Does swapping in "general" for "universal" resolve the issue of "in virtue of what are different things the same as respects some feature?"

    It seems that "general" would just require the same sort of metaphysical assumptions. Likewise, how does saying "things that share properties actually share 'morphologies'" resolve the issues of universals and natural kinds?

    And of course I don't see universals coming into play, but just a human capacity to generalize on account of the ability to recognize patterns and regularities, as I already noted above.


    I mean, this just seems to me like: "we will call them 'morphologies' to presuppose nominalism without debate." I don't see how a word swap justifies such a move though. We could call them "tropes" just as well, but it doesn't remove the issue of presuppositions.

    Edit: actually, to be fair, I think "patterns and regularities" could probably qualify as universals. It's perhaps more profitable and accurate to define nominalism as the rejection of form and nature than of universals, since only the most extreme, self-refuting sorts of nominalism deny "universals" in a broad sense (normally in the context of anti-rationalism).
  • What is real? How do we know what is real?


    You seem to be suggesting that if one is not following an explicit rule, one is acting arbitrarily. Do you really want to make such a claim?

    No, I am suggesting that if one chooses something "for no reason at all," then one is acting arbitrarily. Now, you suggest that there is "no reason," no prior truth to point to, in selecting any one of the infinite possible logics. Such a selection is instead based on the fact that "others have already chosen to agree to some game rules," and so it is "useful to agree." (Note here, though, that the very truth of your argument for the utility of "agreement" would itself depend on a particular logic. In a trivial logic, it is trivial to prove that it is actually better to pick a logic that no other human agrees with, because one can prove anything expressible in such a logic.)

    Anyhow, were this true, it would mean that each individual "picks a logic" only because other people have already picked it. Yet there cannot be an infinite regress of people picking, say to affirm LNC, just because other people have already agreed to it. The choice has to bottom out in some arbitrary first mover. So, the dominant "custom" is just that, arbitrary custom. This also means that the pluralist, in bucking the dominant custom, is also being arbitrary. The utility of a logic rests in "how many other people agree to it."

    I think this is obviously not the case. Trivial logics, which are very common, lack utility because it is not the case that every statement one can formulate in language is both true and not true. That's absurd. It's false. And people eschew trivial logics because they allow for false conclusions; they are not actually truth preserving, only "truth preserving" vis-à-vis some deflated notion of truth.
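
    For reference, the standard derivation of explosion (nothing specific to this exchange, just the textbook argument) shows why admitting a single contradiction, together with the usual inference rules, makes every statement provable:

    ```latex
    % Ex contradictione quodlibet: from A and not-A, any B whatsoever follows.
    \begin{align*}
    &1.\ A \land \lnot A && \text{(assumed contradiction)}\\
    &2.\ A               && \text{(from 1, conjunction elimination)}\\
    &3.\ \lnot A         && \text{(from 1, conjunction elimination)}\\
    &4.\ A \lor B        && \text{(from 2, disjunction introduction)}\\
    &5.\ B               && \text{(from 3 and 4, disjunctive syllogism)}
    \end{align*}
    ```

    Paraconsistent logics block this by rejecting one of the steps (usually disjunctive syllogism); a trivial logic simply accepts the conclusion for every B.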

    But at any rate, simply choosing to affirm something as true solely because "other people are doing it," is not good reasoning. This is the old: "if everyone jumped off the Brooklyn Bridge would you do it too?"
  • The inhuman system


    And regarding politics.... Are the current politics working for us? Who is accountable for the mass migration and the issues surrounding them? Who is responsible for the housing crisis, the climate change crisis, and so on? Our politicians? They shift and change every few years, but these problems persist

    I mentioned in a recent thread here why I am skeptical of attempts to pin these problems on either "progressives" or "conservatives:"

    I'm a bit skeptical of narratives that try to pin all these problems on just the (mis)rule of leaders on one side of the political spectrum. The problems being discussed (difficulty getting good jobs, huge numbers of applicants for each job, overqualified workers, unaffordable housing, low quality services, welfare expenses becoming unaffordable, etc.) are endemic to the West. You see the same sorts of complaints re Canada, France, Germany, Sweden, Spain, the US, etc. Yet different sides of the political spectrum have had widely varying degrees of long term control across these different states.

    Nor is it clear that things are better anywhere else. Housing is increasingly unaffordable in the US, yet it is one of the most affordable rental and ownership markets in the world. It's "hell" in Canada and the UK, yet income to rental/mortgage rates are actually a good deal worse in most of the developing world.

    Certainly, Japan and Korea might shed some light on things. These are wealthy states that haven't experimented with the neo-liberal ideal of the free movement of labor across borders (migration on a fairly unparalleled scale, e.g. to the extent that German children born today will be minorities in Germany before they are middle-aged) to nearly the same degree. This, and differing cultures, has given them a different blend of problems (e.g. too much work instead of not enough; homes losing value as investments, or even being given away for free, which is a total loss for someone). Yet some of the other problems are very much the same, or even more acute (e.g. the gender-politics gap/war is probably the worst in ROK, scarcity vis-à-vis healthcare services, etc.).

    That said, I think one can identify neo-liberalism and globalization as a key driver of many of the issues, although ocean acidification, global warming, and sea level rise as well as other forms of environmental degradation related to overconsumption (e.g. micro plastics) have an earlier etiology. Neoliberalism was originally pushed by the political right, but now the left has found itself in the odd place of defending it and late-capitalism (just while advocating for more transfer payments from underfunded welfare states to moderate it).

    G.K. Chesterton has a great quote here: "The whole modern world has divided itself into Progressives and Conservatives. The business of Progressives is to go on making mistakes. The business of Conservatives is making sure they never get fixed."

    However, in the case of globalization and neo-liberalism, the roles are sort of reversed. Of course, while pithy, it's not entirely accurate. Progressivism has done some great things, like universal education (even if they messed up the execution) or ending Jim Crow. It's also done some very bad things, like the unrestrained market/cultural forces that have allowed the Black-White wealth gap to grow even larger than under Jim Crow (or the Israeli - Palestinian gap), and then conservatives have chosen to defend these same bad things after having resisted them.
  • What is real? How do we know what is real?


    You can have a logical system that is just "rule following" all the way down. You can choose whatever logic you prefer of any of the infinite possible logics over any others. But it's of no use unless the folk you are talking to agree.

    Why would they agree or not agree? It's arbitrary after all right? Whoever has the most power (will to power?) enforces the truth by force?
  • Habemus papam (?) POLL


    a Christian man

    I was going to make the old joke about 100% of popes being Christian, but then considering some of the pontiffs I am not even sure if this is true :rofl:. It takes some pretty significant disregard for the faith to buy the office with bribes, use it to aggrandize oneself through violence, and to spend most of one's time in veritable orgies.
  • The Myopia of Liberalism


    My father is a Zionist, and when I was 14 he moved the family to Israel with the expectation of settling there permanently. It didn’t work out for various reasons and we returned to the States, but what I learned in the year we spent there was that a democracy based on Jewish nationalism is not a robust democracy. Even if the intent is equal treatment for Jews and non-Jews alike, in practice the biases in favor of Jewish religion and culture translate into the institutionalization of unequal treatment. I’m an atheist, but I would never dream of prioritizing atheist migrants over religious ones, any more than I would prioritize white migrants over people of color simply because whites happen to be the dominant population of the U.S. We tried that for 40 years when strict immigration limits were set in the 1920’s to keep out Catholics and Jews from eastern and southern Europe, as well as Asians, under the pretext that they could not assimilate ‘American’ values.

    Well, first I'd just point out that Israel is very much a liberal ethno-state, not a confessional state. Most modern liberal states were founded as ethno-states (and in Europe this involved a vast project of ethnic cleansing). "A German state for the German people." "An Algerian state for Algerians (and not just equal rights in a French state)." "A Kurdish state for Kurds and a Tibet for its people, etc."

    This is still a mainstream idea; nationalism was sublated by liberalism and remains a pillar of its legitimacy. But Israel faced particular internal and external challenges that stopped it from embracing the liberal drive to homogeneity seen in Europe. This probably has more to do with the size and cohesion of their minority population than anything else. As German children born today face becoming minorities in Germany by middle age, we are seeing a fascist party become a major political player in a state where citizens undergo an extensive protective indoctrination against fascism and where the far right faces significant state repression and censorship. The bet of globalized neo-liberalism was that it could digest and destroy culture rapidly enough to make replacement migration a feasible solution to falling birthrates. I think there is ample evidence in the rise of the far-right to show that this bet is not paying off. The leftist response so far seems to be mostly to advocate for greater state powers of coercion, and perhaps even a turn away from popular democracy, to resolve this issue. However, I am skeptical that this would work even if they could win elections (which I am also skeptical of).

    Anyhow, my original point wasn't whether such restrictions would be wise, but rather that I cannot see how they have any immediate bearing on the liberty of current citizens, save for liberalism's particular ideological preference for severing the individual from custom and culture as a means of promoting "liberty."

    However, when large numbers of migrants of a particular background have concentrated in Europe they have at times advocated for their own communitarian festivals, corporate events, legal systems, and ways of life. And liberalism has, in general, not been willing to accommodate this. Liberalism can be indifferent to the culture and religion of new arrivals precisely because it bars culture, custom, and religion from public life and from defining the common good around which the polity is organized. It's an "any color you like, so long as it's white," sort of thing, i.e. "any culture you like, so long as it remains irrelevant to the realities of the market and administrative state."

    But I'd argue that the only reason liberalism can be this accommodating is because of capitalism's tendency to eradicate culture and particularity in the first place. Also, it's worked so far because the cumulative effects of the replacement migration strategy are back-loaded. It's one thing when 10% of a population is recent arrivals. It's another when that climbs to 50% in a generation, particularly when new arrivals become a sizeable majority in younger age brackets and the working-age population, while natives make up the lion's share of an extremely expensive pensioner population whose benefits crowd out investment in the young (now majority minority). In many ways, it seems like a recipe for a return to ethnopolitics in the long run (which we're already seeing today), and it's a dynamic driven by neo-liberalism's growth fetish.

    Frankly, I think we tend to underestimate both the ability of migrants to assimilate to the dominant culture and their ability to contribute to its economy. We undervalue the tremendous motivation involved in choosing to leave one’s home country for a foreign land.

    It's the capacity of modern liberalism/capitalism to break down culture that matters here. There are minority populations who migrated into locales across the globe who remain culturally, religiously, and even genetically distinct even after centuries in their new homeland (e.g. minorities in Iraq or Syria). The sort of assimilation seen in modern liberalism is not a historical norm. It's not that pluralistic societies didn't exist in the past, they absolutely did. It's that they remained pluralistic generation after generation, rather than becoming rapidly homogeneous.

    This history is precisely what gives the lie to liberalism's claim to be "value neutral" and to not engage in positive indoctrination. In general, when we see assimilation in history, as opposed to long term pluralism, it is because of (often coercive) regimes to attain homogeneity (e.g. Rome in the western half of the empire, Chinese imperial policy, the Spanish Reconquista, etc.).


    It’s easy to attack, but more difficult to lay yourself on the line by committing to a detailed alternative to liberal politics that others can then pick apart.

    "Just offer a realistic alternative to a totalitarian and now globally hegemonic force." A tough ask! Unfortunately, I think humanity will have to weather the collapse of liberalism and it's attendant ecological disasters before any decisive break is possible. Maybe the God of progress will save us, but I doubt it.

    What likely comes after liberalism has been described variously as a sort of "techno-feudalism," a combination of technocratic rule and "consent-based" corporate (often patronage-centric) governance for those with the skills or connections to still be "economically viable" in the era of artificial intelligence. We can already see this new system coming into being in the new "company towns" (Musk just won his vote for his "Starbase" community), planned cities like Telosa or the Saudi "Line," or the global elite's capacity to create legal city-states like Próspera, or communities like Fisher Island and the Villages (I am shocked that one named Galt's Gulch hasn't opened). "Voluntary" citizenship in such communities (based on "consent") will be incentivized by the collapse in funding for public services that is already hitting developed countries that are still early in their demographic crises.

    Changes in military technology that privilege small elite cadres with expensive training and equipment are occurring on a scale not seen since the stirrup led to feudalism and the rise of the mounted, hereditary knight in Europe. What happens when the mobilization and buy-in of "the people" is no longer necessary for winning wars? Or, as Michelle Alexander says of the plight of inner-city African American populations, what happens when elites no longer want to exploit the people's labor but just see them as a problem/burden to be contained? Probably nothing good for public services, particularly when this is paired with the divisive demographic conditions mentioned above.

    Hence the emergence of "consent-based," marketized corporate citizenship as an "escape" from low-quality public safety and services (we already see this dynamic with US school districts and localities to some extent). This is, at least, the sort of theory that is appealing to the West's oligarchs these days, and it has a certain sort of economic logic to it.

    What this opens up is the prospect for the return of the polis as the fundamental political unit, a more natural scale for governance. And this in turn opens up the possibility of creating communities that aren't based on liberalism. This is far more difficult today, because liberalism says: "you're perfectly free to pursue your own alternatives, it's just that these must have no real authority (power) over individuals, and you have to pay for liberalism's institutions first and then use whatever is left over for any alternative, while also being unable to force members to pay for any alternative in the way the state does." So, one reform to push for is to say "let state funding follow the citizen to their communities of choice, rather than corralling it into liberal institutions and saying 'you're free to fund your own alternative,' as if this wasn't essentially an economic impossibility for most of the population."

    So for instance, if you don't want your child to undergo 180 days of liberal indoctrination a year for the first two decades of their life, the economic reality for most families will mean spending a substantial share of all household income on education (if decent options even exist in a locale the family can afford, which is unlikely for most). Likewise, one can organize for an alternative civil society and provision of welfare only after one has paid for the liberal version, something that is only slightly plausible today because the liberal version has been financed by tremendous debt, but which will become impossible when taxes are raised to fund the tsunami of liabilities that are already on the books.

    A polis based around a more robust conception of the common good would do many things differently. For instance, the purpose of education would be the development of virtue and happiness, not workforce preparation and enabling people to meet whatever desires they happen to develop. It would probably provide for civil defense through universal citizen military service instead of a standing professional (and increasingly mercenary) army/police force. And it would foster specific festivals and common events/rhythms, as opposed to restricting these in deference to individual and commercial interests (e.g. the old Church calendar had holidays very regularly, as opposed to just a handful during which much commerce still continues).

    Catholic philosophy has the most robust development of such alternatives, but arguably this is simply because Catholicism has maintained the only strong university/intellectual system outside of liberalism. Orthodoxy certainly has many of the same intellectual resources. Protestant Christian nationalism, by contrast, tends towards "liberalism with Christian characteristics," and so is self-undermining.

    The irony is that entry into Catholic intentional communities sort of requires becoming a liberal exceptional individual first, both due to the need to cultivate heterodox beliefs one will not be exposed to in liberal institutions and because it requires wealth.

    Not that other formulations wouldn't be possible; I have just not seen them developed. I hope they will be, since, although a confessional society can be pluralistic (and indeed, can safeguard religious freedom in a way liberalism will always deny), a secular alternative would nonetheless be beneficial for many people. Yet I know of no secular intentional communities outside the history of rapidly collapsing communes or ethnic colonies. In general, secular criticisms of liberalism are just that, criticisms, not alternatives, perhaps because they are often wedded to the materialist metaphysics that leads to liberalism in the first place.
  • GETTIER – Why Plato Would Reject Justified True Belief (a Platonic dialogue)


    It seems to me like you are getting at the role of understanding in knowledge, which has a phenomenological component. If truth just involved discursive justification and assigning the right truth values to linguistic utterances or symbolic strings, then LLMs would "know," right?

    The crux seems to be here:

    Socrates: Well spoken. For even though your example is a thought experiment, it arises from the realm of contingency: it depends on concrete circumstances—who hires whom, how many coins someone carries—all things that do not follow from the essence of the matter, but from chance. But what comes from chance is not necessary—and what is not necessary cannot be the object of knowledge. Therefore, Ed: your example remains in the domain of the opinionable, because it aims at something that neither always is nor is so by its own nature—but only through external conditions. And in this very point, it becomes clear that it can never touch epistēmē in the sense of being. And can it surprise us, then, that logos fails when it seeks to grasp what is not stable?

    This is precisely what Aristotle would say re episteme. The example relies on a coincidence of accidents. Could it be reconstructed with per se predication? I don't think I've ever seen it done.

    The difficulty of limiting knowledge to being is of course explaining discursive knowledge in the realm of becoming, which does seem to exist. This requires a robust metaphysics, a "metaphysics of knowledge," which is made difficult by the tendency of modern thought to put either epistemology (early modern) or philosophy of language (linguistic turn) before metaphysics.

    I would guess we have pretty similar opinions here. We had a recent thread on this and my thoughts were:


    I tend to see these sorts of issues as indicative of the fact that "justified true belief" is simply a bad way to define knowledge. It's a definition that recommends itself by being analytically quite easy to work with; however, this is a bit like the guy who lost his keys on the lawn and looks under the street light for them instead because "that's where he can see."

    If knowledge involves the adequacy of the intellect to being, then simply affirming true propositions with proper discursive justification is not all there is to knowledge. Truth is primarily a property of the intellect, and only analogically predicated of linguistic utterances (as signs of truth in the intellect). When someone thinks p is true for bad reasons, and p is true, there is an adequacy of the intellect to being insomuch as truth is properly affirmed, but this will not involve the fuller adequacy that comes with understanding (which we would tend to call "knowledge.")

    I think the empiricist tendencies in analytic thought tend to lead to a neglect of the role of understanding in knowledge. However, even if one dismisses any faculty of noesis/intellectus (which I wouldn't), I still think the phenomenology of knowledge suggests a big role for understanding (and thus a relevant role for problems of vagueness). With vagueness, it seems we can have properly justified true belief and still lack "knowledge" in a strong sense. Knowledge is understanding, and if "the truth is the whole," it is also in some sense inexhaustible. A "model" that tries to make truth primarily a binary property of propositions is going to miss this (and has other problems if truth/falsity represent contrary instead of contradictory opposition).

    As you seem well aware, ancient conceptions of knowledge have an erotic element, one that involves an ecstasis, a going out to the known, and a penetration of the self by the known. There is an element of "knowing by becoming." One cannot capture this in an understanding based on "sense data + computational reason (ratio)."

    I am not sure about the appeal to Plato's theory of mimesis at the end though. This has always struck me as quite a reach, and fairly implausible. Why not simply take up the later position of Plato's student that the mind is potentially all things, but that a move from potency to act always relies on act (actuality as form (eidos) delivered through the senses)? This also jibes with information-theoretic analyses of perception quite well.

    When we get to the "metaphysics of knowledge" I don't even know if it is appropriate to call knowledge (or at least what is most fully knowledge) a "belief." When we are sure that there are cars in the oncoming traffic lane and that we mustn't drive into them, I think this is not simply a case of sense data + ratio (computational reason) = propositional belief. The reason we find it quite impossible to ignore such knowledge lies, IMHO, more in the co-identity of knower and known in such cases (a union). People find it impossible to believe otherwise because their intellect is "informed" by truth in the senses (sense knowledge), or what we might call the communication of actuality.

    That's a fairly Aristotelian/Neoplatonic view, and less strictly Platonic of course.
  • Toilets and Ablutions


    Someone can correct me if I am wrong, but my knowledge of pre-modern architecture and bathing facilities suggests these were distinct. I am not sure there is that much to it beyond physical necessity: having to run indoor plumbing into multiple rooms is what precluded keeping them separate. Larger, wealthier homes often do put the toilet in an adjoining room, but you put them close because otherwise you would need to run separate pipe, and even if you are rich you still want things to be easy and quick to fix.

    But, as the taboo on nudity grows, it also makes sense to colocate them, since both require disrobing.
  • The inhuman system


    It's really only because you were born into that way of seeing things that you are able to raise the questions you have here.

    Pretty much every issue brought up by the OP has been kicked around since the earliest days of philosophy. For instance, the idea of cultural relativism is in Herodotus. The insufficiency of wealth, status, and sensible goods is a very old trope in philosophy. Rebellion against the notion that one must hit certain achievements by a certain age has been around at least as long as young Roman men were being told to pursue ascent up the cursus honorum. Liberalism likes to pretend it has a monopoly on critique and "free thought," but I don't think this is historically accurate. What is unique to liberalism is the "right" of all such thoughts to be fully privatized, thus meaningless, and thus rendered harmless matters of taste (until they aren't, and one is jailed for them). It's not that liberal societies don't jail people for speech, it's that they avoid having to do so quite so often because they have individualized speech to the point where it is less effective and offensive.

    This is an unrealistically romantic view of human nature. It's a good thing to be able to stand back and look at our competitive behaviors and evaluate their usefulness, but to claim that they are somehow unnatural or avoidable is just not true. We had a family of foxes in our back yard a couple of summers ago. The pups were always play fighting and wrestling. I think humans are just as naturally competitive and aggressive as those foxes. Cooperation is also a valuable approach to social living. It's not a question of getting rid of competitiveness, it's a question of balance.


    That's a fair point. I think this needs a distinction. There is competition over finite goods that diminish when shared and competition within the context of common goods that grow with participation. Playing basketball, for instance, is a common good. You compete, yes, but it's much more fun to play an opponent than to play alone, whereas extrinsic rewards associated with high level basketball often are the sort of finite goods that diminish when shared (e.g. wealth).

    Competition over individual goods is part of life, but is resolved by the recognition of common goods. The good of a "good marriage" or a "good family life," for instance, is not reducible to the individual goods received by each individual member involved. The problem the OP identifies stems from a dominant conception of reason and desire in liberalism in particular, which reduces all common goods to individual goods, and man to an atomized, "rational" utility maximizer in terms of such goods. Such a view will tend to make cooperation "just another strategy" within competition. The former is ordered to the latter, instead of vice versa.

    Other civilizations and societies in the past have been just as "inhuman" as ours today is. If you're looking for a return to some state-of-nature, your goal is unrealistic - naive. As I see it, the changes you are talking about are, always have been, and can only be personal, not political ones. As I noted, the irony is that it is the breakdown of norms that allows us to see the things you have seen.

    The assertion that the issues in the OP have "always been around and recognized" seems to contradict the claim that it is only modernity and liberalism that allow OP to recognize such things, no?

    At any rate, here is why I think the bolded is wrong.

    1. The development of self-determination and self-governance, which allows man to overcome the issues mentioned in the OP, to live a flourishing life, to attain to liberty, and—crucially—to be a good citizen capable of participating in communal self-rule, all require cultivation and education.

    2. Man cannot cultivate and educate himself entirely by himself. We are dependent early in life, and our ability to become more self-determining (e.g. able to provide for our needs, able to transcend the tyranny of immediate desire and gratification, etc.) must be positively fostered and cultivated.

    Because of 1 and 2, the solutions to the issues in the OP cannot simply be privatized and individualized. Politics is, by definition, the science of the common good. One cannot exclude from politics the cultivation by which man becomes able to participate in common goods and self-rule. This is at least acknowledged by progressive liberalism to some degree, and by conservative liberalism through its vestigial (dying) respect for the liberal arts.

    The reduction of the common good to a collocation of individual goods is the key dogma of liberalism, the result of its ahistoric and unrealistic "state of nature" anthropology.

    Of course, the problems mentioned in the OP are perennial, but that doesn't mean they cannot be made better or worse, just as one wouldn't argue that the recent surge in female suicides is a non-issue because some women have always committed suicide. I'd argue that their "privatization" makes them particularly insoluble, which is why liberalism has tended to make the problems more acute. In particular, its anthropology says such problems simply shouldn't exist, hence they are a "personal failure." Probably the most striking example of this is Manosphere dating advice, which reduces romantic love entirely to transactional private enjoyment while using the language of the market to justify this move (advice for women is actually not all that different). Here, it's worth noting that Adam Smith and David Hume were close friends.

    Of relevance:

    Depression is a narcissistic malady. It derives from overwrought, pathologically distorted self-reference. The narcissistic-depressive subject has exhausted itself and worn itself down. Without a world to inhabit, it has been abandoned by the Other. Eros and depression are opposites. Eros pulls the subject out of itself, toward the Other. Depression, in contrast, plunges the subject into itself. Today’s narcissistic “achievement-subject” seeks out success above all. Finding success validates the One through the Other. Thereby, the Other is robbed of otherness and degrades into a mirror of the One — a mirror affirming the latter’s image. This logic of recognition ensnares the narcissistic achievement-subject more deeply in the ego. The corollary is success-induced depression: the depressive achievement-subject sinks into, and suffocates in, itself. Eros, in contrast, makes possible experience of the Other’s otherness, which leads the One out of a narcissistic inferno. It sets into motion freely willed self-renunciation, freely willed self-evacuation. A singular process of weakening lays hold of the subject of love — which, however, is accompanied by a feeling of strength. This feeling is not the achievement of the One, but the gift of the Other.

    Today, love is being positivized into a formula for enjoyment. Above all, love is supposed to generate pleasant feelings. It no longer represents plot, narration, or drama — only inconsequential emotion and arousal. It is free from the negativity of injury, assault, or crashing. To fall (in love) would already be too negative. Yet it is precisely such negativity that constitutes love: “Love is not a possibility, is not due to our initiative, is without reason; it invades and wounds us.” Achievement society —which is dominated by ability, and where everything is possible and everything occurs as an initiative and a project— has no access to love as something that wounds or incites passion.


    - "The Agony of Eros," Byung-Chul Han

    Norms are no more illusions than any other aspect of human social life. Yes, they're stories, but all the things we know about the world are stories. Every human idea is a story. Your OP is a story. Humans tell stories. All our mental and social life is made up of stories. Stories are at the heart of human nature.

    Right. If everything is a story/fiction, then there can be no dichotomy between story/fiction and reality/non-fiction. For a distinction to have content, "appearance" must differ from "reality." If it's "appearance all the way down," then appearances are reality.

    Is the world we live in "primarily built on fear, ego, and greed?" No, of course not.

    IDK, Homo oeconomicus, equipped with a Humean reason that is "enslaved to the passions," is a sociopath but for any remainder of "pro-social sentiment" shaping their preferences. This is the anthropology that globalized neo-liberal late-capitalism is based on.

    That people are generally better than the self-image they are indoctrinated with, I would agree with. For instance, despite the popularity of moral anti-realism or egoism, hardly anyone who espouses it acts like it is true, to their credit.

    But at any rate, this is obviously a common sentiment, one that is, based on polling, becoming more common, particularly among the young. I hardly see how it's condescending to dislike the current culture. For it to be condescending to dislike what another likes on this matter would seem to suggest that "are the current conditions of modern neo-liberalism good?" is primarily a question of taste. It would be bad form to question others' taste on a question of subjective preference. However, I think one of the very defects of the dominant culture is precisely that it makes "is society good right now?" a question of privatized taste and not one of facts about human flourishing and the common good (of course, it allows some facts, namely those about consumption, GDP growth, etc.)

Count Timothy von Icarus