• The Unity of Dogmatism and Relativism


    A Catholic intellectual. It seems to me that many of the prominent advocates of Platonism and traditional philosophy generally are Catholic. This is something I wrestle with, as I'm not Catholic, rather more a lapsed Anglican. But the metaphysics of 'the Good' seems to me to imply a real qualitative dimension, a true good or summum bonum. That will fit naturally with belief in God but rather uneasily with cosmopolitan secularism, I would have thought.

    I think this is partly an accident. There are still a large number of Catholic universities with large philosophy programs, and that's where a lot of this sort of work gets done and where it is more popular/not met with disapproval. So you get a system where Catholics are introduced to it more, and where non-Catholics go to Catholic settings to work in the area and become Catholic. Either process tends to make the area of study more dominated by Catholics. Given trends in Orthodoxy, and podcast guests I've heard, I would imagine we would see a not dissimilar phenomenon in Eastern European/Middle Eastern Christian-university scholarship but for the fact that they publish in a plethora of different languages and so end up more divided.

    Robert Wallace (at Cornell, a secular land-grant college) hits on some extremely similar themes but doesn't seem to identify with organized religion at all. Indeed, his big point is that organized religion, particularly Christianity, tends to make God non-transcendent (pace the Patristics and Medievals).

    It seems to me that a major part of what’s going on in the world of “religion” and “spirituality,” in our time, is a sorting out of the issue of what is genuinely transcendent. Much conventional religion seems to be stuck in the habit of conceiving of God as a separate being, despite the fact that when it’s carefully examined, such a being would be finite and thus wouldn’t really transcend the world at all. Plus, it’s hard to know how we would know anything about such a being, which is defined as being both separate from us and inaccessible to our physical senses. In response to these difficulties, more or less clearly understood, many people have ceased to believe in such a being, and ceased to support whole-heartedly the institutions that appear to preach such a being. Thus we have the apparent “secularization” of major parts of (at least) European and North American societies.

    Wallace - on his blog

    But he identifies a number of religious thinkers with his conception of the truly transcendent and transcendent love/reason tied to the Good. "Plato, Plotinus, St Paul, St Athanasius, St Augustine, Meister Eckhart, Rumi, Hegel, Emerson, Whitman, Whitehead, Tillich, Rahner." I would add Merton here, who is probably the biggest English-speaking Catholic intellectual in the past century, and John Paul II, probably the biggest recent Catholic intellectual period (IDK, maybe Edith Stein?)

    Some of his key points re Plato and Hegel are quite similar to Schindler it seems though:

    Further, in contrast to the presumptuous self-limitation of reason within modernity, Schindler avers that reason is ecstatic, that it is “always out beyond itself” and “always already with the whole.” The result of this ek-stasis is that reason is already intimately related to beings through the intelligibility of the whole; thus, reason is catholic.

    Review of Schindler's "The Catholicity of Reason" (small c "catholic" here, not "Roman Catholic.")

    But nothing in Schindler's framing really seems to point towards political conservatism or necessarily just Roman Catholicism.



    Is their error the same as the undergraduate's error?

    Broadly speaking yes, although each thinker has their own unique attack on reason and it comes into their thought in different ways.

    I have seen a lot of very flattering analyses of Hume. I don't think I've ever seen one on his moral philosophy that wasn't highly critical. For example:

    Hume's answers to these questions [about why to be moral] reveal the underlying weakness of his account. For he tries to conclude in the Treatise that it is to our long-term advantage to be just, when all that his premises warrant is the younger Rameau's [Diderot] conclusion that it is often to our long-term advantage that people in general should be just. And he has to invoke, to some degree in the Treatise and more strongly in the Enquiry, what he calls 'the communicated passion of sympathy': we find it agreeable that some quality is agreeable to others because we are so constructed that we naturally sympathize with those others. The younger Rameau's answer would have been: 'Sometimes we do, sometimes we do not; and when we do not, why should we?'

    Once reason is made "a slave of the passions," it can no longer get round the passions and appetites to decide moral issues. Aristotle's idea of the virtues as habits or skills that can be trained (to some degree) or educated has the weight of common sense and empirical experience behind it. We might have a talent for some virtues, but we also can build on those talents. But if passion comes first, then discourse about the "good human life," or "the political ideal," loses its purchase, its ability to dictate which virtues we should like to develop.

    Nietzsche's attack on reason is different, and leads to different problems. In the final book added to the Gay Science in later editions, he is focused on the tyranny of old ideas over us. The rule of reason becomes a sort of tyranny across his work, and there is a great focus on a sort of freedom that must be sought (within the confines of a sort of classical fatalism).

    But how might our freedom be properly expressed and executed? Here is where the "no true Nietzschean" problem springs up, for followers on the left and right are sure that the other's moral standpoints are incompatible with Nietzsche, but seem unable to articulate why in any sort of systematic manner (e.g., "anti-Semitism isn't Nietzschean because he didn't like it"). The separation of reason from the will, and the adoption of Hume's bundle of drives ("congress of souls" in BG&E), makes it unclear exactly who or what is being freed, and how this avoids being just another sort of tyranny, even if a temporary one.

    The identity movements of the recent epoch run into similar problems. I recall a textbook on psychology that claimed that a focus on quantitative methodology represented "male dominance," and that the sciences as a whole must be more open to qualitative, "female oriented," methods as an equally valid way of knowing. The problem here is not that a greater focus on qualitative methods might not be warranted, it's the grounding of the argument in identity as opposed to reason. For it seems to imply that if we are men, or if the field is dominated by men, that there is in fact no reason to shift to qualitative methods, because each sex has their preferred methodology grounded solely in identity, making both equally valid.

    Rawls might be another example. In grounding social morality in the desires of the abstract "rational agent," debates become interminable. We might try to imagine ourselves "behind the veil of ignorance," but we can't actually place ourselves there. Thus, we all come to it with different desires, and since desires determine justice, we still end up with many "justices." The debate, then, becomes unending, since reason is only a tool, and everything must circle back to conflicting desires. Argumentation becomes, at best, a power move to try to corral others' desires to our position.



    Socrates, Plato, and Aristotle were zetetic skeptics.

    I don't know what this is supposed to mean. Socrates, Plato, and Aristotle didn't think rational inquiry was useful? Is Plato sceptical of dialectic having any utility? This would seem strange.

    Plato (I wouldn't lump Aristotle in here) does seem to imply at times that words deal with the realm of appearances, but he also seems to allow that they can point to, aid in the remembrance of, knowledge (e.g. the Meno teaching scene). A person must be ruled over by the rational part of the soul to leave the cave, but they can also be assisted in leaving if they are willing. Plato never gets around to an inquiry on semiotics, but I would imagine he would agree with something like the early Augustine, where signs are reminders pointing back to our essential connection with the proper subjects of "knowledge."

    I would tend to agree with assessments that the divided line is not a demarcation of a dichotomy, opinion lying discrete from knowledge. Being in Plato is a unity. The appearance is still part of the whole; there is a strong non-dualism in Plato brought out in Plotinus, Eckhart, etc. And this is why we are not cut off completely in a world of appearances. Indeed, the appearance/reality distinction has no content if all we ever can experience/intuit/know is appearance. Then appearance is just reality.



    The problem of misologic is raised at the center or heart of Plato's Phaedo. Simply put, Socrates wants to provide his friends with arguments to support belief in the immortality of the soul. The arguments fail to accomplish this. Those whose trust in reasoned argument is excessive and unreasonable are shattered. They may become haters of argument because it has failed them.

    The interlude on misology is a warning against abandoning reason when one has discovered that what has seemed to be a good argument turns out to be a bad one. To drive this home, Plato next has Socrates advance three (arguably four) arguments about why the soul is not like a harmony, which are of varying quality.

    I don't get how you get a reading out of the interlude to the effect of "don't trust reason too much, or be lovers of wisdom, because then you will get let down." It is "if you get let down, don't stop being lovers of reason."

    The cure involves, as the action of the dialogue shows, a shift from logos to mythos. Socrates turns from the problem of sound arguments to the soundness of those who make and judge arguments. Socrates' human wisdom, his knowledge of his ignorance, is more than just knowing that he is ignorant. It is knowing how to think and live in ignorance.

    Plato uses mythos for a number of reasons. At the end of the Republic, it is arguably a nice story for those who failed to grasp the full import of the dialogue. Sometimes he uses it to demonstrate the essentially ecstatic and transcendent nature of reason (the Phaedrus), and sometimes it is as you say, a way around an insoluble problem (the Phaedo).

    This move in the Phaedo and other places is often referred to as the "second sailing." Being unable to catch the right "wind" to resolve the appearance/reality distinction and explain the forms, Plato switches to another form of communication. He likens this to how sailors who cannot catch the wind must sometimes pull out the oars.

    But Plato seems to catch the wind in The Republic, where this subject is tackled more head on.

    To the point, consider the following:

    ...is the man who holds that there are fair things but doesn’t hold that there is beauty itself and who, if someone leads him to the knowledge of it, isn’t able to follow—is he, in your opinion, living in a dream or is he awake?

    Does Plato think it is impossible to learn of beauty itself or for someone to be led to it?
  • The Unity of Dogmatism and Relativism


    One thing to remember is that people are not inherently rational. It takes effort, oftentimes training, and a willingness to be wrong. Most people are rationalizing. In other words, they have an outcome they want to see and create justifications that support the conclusion they want, while only critically critiquing to reject anything which goes against what they want.

    This is certainly true, but lack of reason is not the same thing as disrespect for reason, or the claim that it plays no role in justifying certain claims. See below:



    Dogmatists and relativists are irrational in a similar way.

    This is sort of missing the point. Many people who agree on the authority of reason re claims and justification act irrationally at times. That isn't misology. The similarity between the dogmatist and the relativist lies in their claims that reason and argument simply cannot apply to/judge their claims. For example, argumentation about evolution is simply irrelevant because the question must be decided by faith, or argumentation and justification re moral claims is simply irrelevant/lacks any authority because moral claims are decided by power and argument is only relevant as an exercise of power. What the two share is not general "irrationality," but the claim that rationality has no authority or cannot be trusted.

    Many influential thinkers have attacked reason: Martin Luther, Rousseau, Hume, etc. That doing so seems particularly popular writ large now is where the relevance of college classes comes in.
  • The Unity of Dogmatism and Relativism


    The argument of the OP rests on an analysis of the weaknesses of pragmatism and discourses of power relations. The claim is made that truth is relative on those occasions when it suits the purposes of those in charge, and is absolute on other occasions.

    That's not really it; that would be a much more narrow diagnosis. The argument is that the validity of reason and argument is discarded selectively, and that this is a commonality in unquestioned dogmatism and relativism. It doesn't really matter why it is done; that will take many forms.



    Would it be more accurate to call this fallibilism rather than relativism? The possibility, or even inevitability, of error or lack of certainty does not mean that epistemic justification is relative.
  • The Unity of Dogmatism and Relativism


    In my attempt to make the OP short enough, I may not have explained the phenomenon I am getting at. The relativist enters into misology when they deny the validity of argument and reason in grounding their opinion. Reduction or elimination vis-à-vis ethics is not necessarily misological, and it's unclear if it can rightly be called relativist either. To say something doesn't really exist, that it is really just some better known thing, is not to say that it is relative.

    Someone who says something like, "based on this analysis and these arguments, I think morality reduces to statements of emotion," is not engaged in misology. Misology would enter the picture when the claim is something like "because all debates about morality are actually just power struggles, disputation resolves nothing in ethics. Rather, we must pragmatically pursue what we find good through power, and argument is just a means of shifting power relations." (This is pretty much the position of the Sophists.)

    The person who reduces or eliminates ethics isn't really a relativist. They are not saying "what is good depends on power, aesthetic taste, etc." They are making a rationally grounded claim about the content of moral propositions, that statements like "rape is evil" are equivalent to something like "rape is not to my taste and I do not want people to do it for this reason." Good doesn't depend on context in this case; it simply doesn't really exist. But the eliminativist position is often conflated with the relativist position, particularly because relativists will selectively employ the language and arguments of the eliminativist when it fits their needs (misology).

    Edit: And note that the elimination must narrowly define what type of "good" turns out to be illusory. If all concepts of "good" turn out to be emotion, then we do end up at misology. For now what makes an argument or any criterion of judgement "good" has had the rug pulled out from under it.
  • Indirect Realism and Direct Realism


    I'm not sure, you could think about the sunset itself having the quality of being beautiful, as we do of people.

    I was going to say the same thing.

    Anyhow, some things have to be reality rather than appearance. The appearances versus reality distinction starts to lose its content if everything known or perceived is appearance.

    That the statements "I see stars," after getting bonked on the head and "the car is red," are different is obvious from a naive standpoint, but it becomes difficult to pull the two apart if there is only appearance. Indeed, what's the point of calling things "appearances" at all if they are all we've got? Without a "reality" to compare to, isn't appearance just reality?

    This seems like a problem for those particular forms of indirect realism that claim that only appearance is experienced or known, which granted is not many of them.
  • Types of faith. What variations are there?


    It might be that this way of thinking comes out of libertarian intuitions. Given libertarian free will, we cannot "know" how a person/persons will act because they are "free" to choose between possible courses of action. Truth values about future acts are indeterminate, not probabilistic in such a view, hence "faith in," generally applying to choice/persons.

    But this might highlight some of the coherency problems in naive libertarianism, because our ability to have such "informed" faith presupposes that choices are determined by things that exist prior to them, in which case free will would not seem to be wholly undetermined.
  • The Unity of Dogmatism and Relativism


    I honestly have no clue who he is outside of having had the book recommended to me. The book doesn't seem particularly conservative so far; the argument about misology would seem to apply anywhere on the political spectrum and the discussion of Plato has a lot in common with Robert Wallace, who I wouldn't think is conservative (who knows, maybe he is?).

    But if I'm that conservative for reading Schindler, I am equally a qualified liberal for having read Honneth, the heir to that great bastion of "Cultural Marxism" ... the Frankfurt School :scream:.

    IDK, Honneth didn't strike me as super liberal. I once saw a book I liked by Leon Kass denounced in back-to-back reviews as, on the one hand, the work of an arch-conservative who worked with Bush II, and on the other, the work of "a denizen of liberal post-modern academia blaspheming the Bible."
  • Thought Versus Communication


    I find the terminology on this sort of thing incredibly inconsistent and frustrating lol. :rofl:

    Are there good parallel words for "visualizing" that apply to taste, sound, touch, etc.?

    Imagining seems like it should involve images; that's the root of the word, right? But then I've seen phenomenologists call mere visualization "picturing," whereas "imagining" involves the displacement of us as an agent into some sort of imagined setting.

    The problem is that then you can talk about both "picturing" or "imagining" sounds, smells, touch, etc.

    Same problem with the idea of "mental images." Wouldn't an imagined sound be more a "mental recording?"

    But then "images" and "recordings" are themselves records of some object. Yet as Husserl says, "my centaur is my centaur"; my imagined centaur isn't an "image" of some real centaur, but my own creation. It's a funny area.
  • Types of faith. What variations are there?


    Well, consider a statement made during the Korean War such as, "the situation is extremely dire, and the UN forces have collapsed into a chaotic rout. However, I have faith in General Ridgway to sort the situation out. He's an excellent commander and he's pulled a rabbit out of a hat before."

    The person making the statement doesn't, and wouldn't claim to "know" that organization will be restored and the Chinese offensive halted. However, their assessment isn't blind either. They have a faith in the character and abilities of the general, and this is not "blind" but based on past experiences.

    And perhaps we could say that the general benefits from "faithful officers," who have "faith in" him and so execute his commands even if they do not understand or agree with them.

    The "faith in," and "faith that," distinction targets persons and propositions respectively. The problem with discussions of faith, particularly in religion, is that these two uses end up mixed ambiguously.

    There is a similar distinction between "knowing how," and "knowing that." Knowing how to ride a bike doesn't seem to tie neatly to propositions. But religious practices sometimes fit the "knowing how" distinction better, and this seems to lead to confusions when religion is thought of in terms of a set of propositions.



    Would it be fair to say this is more a "faith in" institutions, rather than a "faith that" given claims are true?
  • Thought Versus Communication
    Interestingly, many models of how consciousness is "produced" would suggest that thought itself is also a sort of communication, as well as the product of intensive communication between different specialized systems.

    Language is no doubt extremely useful for both interpersonal communication and thought, so it seems hard to differentiate which would be more important in the development of humans' linguistic capabilities. It's like asking what made us develop cars, the fact that they go fast or the fact that we can put ourselves and stuff in them. Well, both clearly.

    Phenomenological explanations of language tend to emphasize that the intersubjective, communicative facets of language and the intrasubjective, "thought-focused" ones are probably best thought of as mutually reinforcing, rather than one reducing to the other, and I think this is a wise assessment.
  • Types of faith. What variations are there?
    There are lots of distinctions. Let me just throw out the few I am aware of and see if that helps:

    "Faith that..." versus "faith in x..." is the most popular distinction in philosophy of religion. "Faith that..." applies to propositions and facts, whereas "faith in" is about persons or groups of persons. "I have faith that it will rain, the garden will be fine," is saying a different sort of thing when compared to "don't worry, she'll come through, I have faith in Edith." Faith in persons entails a sort of regard and respect for the "trustworthiness of an agent."

    Effective distinctions:

    Demonic Faith: "Thou believest that there is one God; thou doest well: the devils also believe, and tremble." (James 2:19). This would be faith in a self-evident or well supported fact, rather than a personal "faith in/regard for." "Belief in the obvious is to no one's intellectual or moral credit," is the point. This is a default, rather than radical skepticism.

    Dead Faith: "Even so faith, if it hath not works, is dead, being alone." (James 2:17) This is faith that produces no more than sentiment or desire, without directing one's life. I've seen people call this incontinent faith, following Aristotle, as well.

    Justifying/Saving Faith: Trust in/that which motivates life changing/defining action and strong emotion

    Indwelling/Supernatural Faith: this one is more particular to the Christian tradition. It is faith pouring from the indwelling of the Holy Spirit, and is connected to the ideals of catharsis, illumination, theosis, and deification.

    Levels of maturity in faith - faith AND reason

    In a number of places Saint Paul makes a distinction between those who are "babes in Christ," who must have a soft and nurturing faith, and be given "spiritual milk," those who have moved on to vegetables, and those who must chew over "spiritual meat." Origen has a fairly common view of how to interpret this, which is that new faith is largely emotional and experiential, not grounded in knowledge or practice. Over time, faith develops like a virtue (habit/skill) and is challenged and deepened by the intellect and reason.

    The babe might need the milk of the fairly straightforward expressions in the Gospel of Luke. Those of developed faith must chew over the meanings of the Pentateuch, the Canticle of Canticles, etc. These books confuse novices of weak faith who read in a fleshly, literal manner, whereas deep faith is informed by analogical, typological, and anagogic interpretation ("the spirit gives life, the flesh profits nothing," John 6). The unity of being hangs together in the Good, the Beautiful, and the True (the Good only in Plato; all three in Plotinus, Augustine, Bonaventure, Aquinas, Eckhart, etc.), and the unity of faith encompasses gnosis and logos as well as pathos. In this view, faith is not opposed to reason, but rather fused into it ever more deeply, as is practice/techne. There is a blend of techne, episteme, phronesis (discernment), nous, and sophia; whereas faith today is often understood primarily as nous/intuition.

    I think the main philosophical thing to draw out here is the idea of faith as habit/practice/techne in addition to initial nous, the natural role of reason in techne, and then the progression from techne to a blend of techne and sophia (Aristotle's uses of these terms). We could also think of the progression in explanation in Plato from mythos to logos, and particularly the "unity of reason," and primacy of the whole in true knowledge.

    This is not how Saint Paul and the author of Hebrews are always interpreted. Medieval fideists rejected this view, opting for a more intuitionist faith separated from logos. Luther builds on this, and has some rough things to say about reason. However, this blended, progressive view remains strong in Catholic, Orthodox, and Coptic/Oriental thought, and is certainly not absent in Protestantism, but is less ubiquitous there.

    Obviously, this can be applied outside of theological settings, in terms of how skepticism is overcome, particularly in circular, fallibilist epistemologies.

    You could also consider the idea of faith as being necessary for beginning any inquiry (faith in your ability to learn, faith in the intelligibility of the subject, etc.) found in Saint Augustine and Anselm of Canterbury.
  • Supervenience Problems: P-Regions and B-Minimal Properties


    I'd have to go back to the original article to see how it is intended. It does seem to me that the idea of the P-Region corresponding to the actual region responsible for any real instance of M is more useful, simply because the B-minimal idea seems to cover the "set of P-regions capable of producing M," in a better way.

    I think this is actually useful for my intended uses. If some elements of perception are tied to B-minimal properties that are instantiated outside the brain, and if any one instance of M is realized by only one P-Region, then it seems that some elements of perception have a one-to-one correspondence with physical properties of "external objects."




    Here, M is perfectly multiply realizable, because B-min(P) is multiply realizable: all sorts of things can apply pressures greater and lesser than y.

    I almost agree, but I think you're a bit off on how a B-minimal property is multiply realizable here. It is multiply realizable in the sense you mention: many different physical systems can apply greater than or equal to y pressure. It is not the case that the B-minimal property itself is multiply realizable, for the property in this case just is "produces greater than or equal to y pressure." If M changes, so that it is no longer in the on state, it is necessarily the case that whatever P is, it is no longer generating greater than or equal to y pressure, which means its B-minimal properties have changed.

    That's how they get into a one-to-one correspondence. Something is B-minimal if and only if changing it is going to change M. So if M changes, P must too. Granted, B-min(P) can be actualized in many systems (sets of possible P-Regions).
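    The pressure example can be put in a short toy sketch (a minimal illustration only; the threshold value and the realizers are hypothetical, not anyone's actual formalism). The point it shows: many distinct P-Regions can instantiate the same B-minimal property, but M's state tracks that property one-to-one.

```python
# Toy model: M is "on" exactly when applied pressure >= Y.
# Y and the realizer pressures are hypothetical values for illustration.

Y = 10.0  # threshold pressure (arbitrary units)

def b_minimal(pressure: float) -> bool:
    """The B-minimal property: 'produces pressure >= Y'."""
    return pressure >= Y

def m_state(pressure: float) -> str:
    """M's state is fixed one-to-one by the B-minimal property."""
    return "on" if b_minimal(pressure) else "off"

# Multiple realizability: distinct physical systems (P-Regions)
# can all instantiate the very same B-minimal property.
realizers = {"piston": 12.0, "spring": 10.0, "thumb": 4.0}

for name, pressure in realizers.items():
    print(name, m_state(pressure))

# If M flips state, b_minimal must have changed too: there is
# no way for m_state to differ while b_minimal stays the same.
```

    Here "piston" and "spring" both realize the "on" state through different pressures, while M cannot change without the B-minimal property changing.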
  • Is philosophy just idle talk?
    Socrates, Boethius, Origen, and many others died for living out/preaching their philosophy. The latter two were subjected to prolonged torture first, and at least in Origen's case we know he never recanted despite this. It isn't all idle talk; sometimes it's deadly serious. :scream:

    Might be best to take ourselves as blessed to not have these concerns.
  • Analysis of Goodness


    The virtues are the skills and talents needed to attain eudaimonia. There are many, so speaking of "attaining virtue," singular, would be similar to saying one needs to "attain skill," or "talent," to be a good musician. It's true, but there are particular forms. The English-language history is interesting here because, if MacIntyre's sources in After Virtue are to be believed, speaking of a single "virtue," as in "the singular skill of being good," didn't enter English discourse until the 18th century.

    Plato does attempt to unify the virtues in the Protagoras, but in the sense that all virtues are born of knowledge, not that there is a single excellence required for "the good life." And of course Plato has a unified idea of the good, but that's not the same thing, although modern discourse has tended to flatten out "virtue" such that they start to become synonymous. "One must be virtuous to be a good person," becomes a tautology.

    The point is not that the virtues are wholly dependent on one's vocation or social status; Aristotle's analysis applies across these distinctions. It's that they are seated and expressed within the context of an entire life, which necessarily includes the aforementioned, rather than being applied to individual acts (this follows from eudaimonia also being achieved across a lifetime and its legacy). In an ethics based on the moral value of individual acts, the focus on skill/habit tends to get lost.

    The polis shows up most robustly in contrast to thinkers like Hume, for whom morality must be about the concerns of the individual. For both Plato and Aristotle, there is a strong sense of a "shared good," e.g., Socrates' claims that it would make no sense for him to make his fellow citizens worse in the Apology. The point here is that there is nothing like the tendency to think in terms of "trade-offs," the way there is in modern ethical discourse, where we are always concerned with how much utility an individual must give up to obey some precept and "shared good," is just defined as "an instance where every individual benefits as an individual from the same good."
  • Analysis of Goodness


    Is virtue (arete) unrelated to perfection?

    Arete could also be translated as "excellence," and for Aristotle it was deeply related to perfection. This would be the use of perfection in sentences like "the glove is a perfect fit," or "thanks for working on my car, it's been running perfectly (without malfunction) ever since." This conception of perfection is grounded in function or purpose (telos).

    But the idea of "virtue," singular, as opposed to the "virtues," is a modern innovation. The virtues were those excellences a person needed to fulfill their social role, and they might vary depending on the sort of person you were. The virtues required of a knight are not necessarily the same as those required by a nun, or a teacher, etc.

    With the shift to market economies and mass production, social roles took on a declining importance in how people defined their lives. The products of people's labor were no longer largely consumed in the immediate community, so work could no longer be tied back to one's role in supporting the community (alienation). Thus, efforts were made to recontextualize ethics in terms of universal laws or principles: "what can be said to be good in every case."

    There is an argument to be made that this is a mistaken outlook. Trying to develop ethics outside of a social context is like trying to develop a view of "the differences between men and women sans culture." It doesn't work because people don't live outside of a culture and community; ethics isn't practiced individually.

    There are places where fairly objective lists of the virtues still exist in our modern world, and these tend to be professions with defined "practices." For example, it would probably be far easier for us to reach agreement on what makes someone a "good scientist" or "good doctor" than on what makes someone a "good person." This is the sort of analysis for which the virtues were originally intended. Aristotle sets out the "life of contemplation" as the highest sort of life, but maintains that one may be virtuous and flourish in other types of life.

    There are, of course, candidates for metavirtues required of all people if they are to fulfill their roles. But modern ethics tends to focus more on the ethical nature of individual acts than on "a good life," which is another complication. I personally find that the older focus on the entire life or life narrative, "count no man happy/wise until he is dead" (Solon/The Book of Sirach), works better. Framing good/bad in terms of free-floating "acts" in a life makes it impossible to get a grip on the necessary context for ethical behavior.
  • Wittgenstein’s creative sublimation of Kant


    Fallacy of composition or division

    Both, I guess. The fallacy of composition is to assume that because a certain type/element of thought is linguistic, that all aspects of it must be — e.g., if there are things we like about a piece of music or art that we can't put into words, this je ne sais quoi isn't contained in "thought."

    But there does seem to be a fallacy of division as well, in that it is in language that thought most obviously "hangs together" as a whole, and yet its linguistic nature doesn't reach all the way down.

    In general, I think philosophy of language also tends to underestimate the value of language in non-social contexts, the way in which it is a tool for imagination, planning, and problem solving. I've seen some convincing speculation on how our capabilities for language may have grown out of both social and "internal" use, some from Daniel Dennett, funnily enough.



    Back to Kant and what the individual sees and knows (and can't know).

    Interestingly, solipsism was sort of a going concern from the pre-Socratics onward in the West, and it shows up as early as the Brihadaranyaka Upanishad in the East. As far as I am aware, the position that "we do not mean things by words," i.e., that our words don't sometimes reflect our internal mental states or refer to things/people around us, is an entirely modern conception. It seems to grow out of the twin tendencies towards reduction and the elimination of difficult concepts: the assumption that the limits of the (currently) formalizable represent the limits of possible knowledge.
  • Wittgenstein’s creative sublimation of Kant


    Perhaps more to the point, do we actually think that people with aphasia have no content to their thoughts, that there is simply "no one home," in there doing "any thinking" once they lose the ability to either produce or comprehend language? If they can produce but not comprehend speech, or vice versa, how does the loss of one half of the speech world affect their status? What about the person with agnosia who has no trouble with language but cannot use sense perception to identify objects or people (and thus cannot name them)?

    Sometimes people recover from these conditions if they are brought on by stroke or another form of brain injury. In general, their narratives reveal a radical absence of "essential" elements of consciousness, and yet a continued stream of consciousness they can recall. What appears to be "thought" shows up in the absence of linguistic capabilities (e.g., "I must call the ambulance" existing in the absence of an ability to recognize numbers on a phone, to produce intelligible speech once 911 has been dialed, or to understand the other person on the line, in neuroanatomist Jill Bolte Taylor's case).

    The brain is a system of systems; language is a faculty built on top of prior systems, taking advantage of them. It can seem all-encompassing vis-à-vis experience precisely because it utilizes so many systems. When we imagine a scene described by an author, we're employing the same systems we use to process incoming sense-organ data. Lions clearly do not have a word for "gazelle," and yet it would be strange if they couldn't recognize one from any other object. People with aphasia don't necessarily have agnosia; just because names seem wed to "object recognition" in healthy people doesn't suggest that you can't lose one without the other. Language as the defining aspect of thought or mental life appears to be a sort of synecdoche, or maybe a fallacy of composition.



    It might be worth bringing up Davidson's famous "swampman" argument where he denies most physicalist interpretations of philosophy of mind. In his view, an atom for atom copy of himself couldn't understand language because it would "lack a causal history," associated with language. That is, Swampman would live out the rest of Davidson's life just like he would, speaking and listening, but would have no thoughts. This just seems implausible in light of what we know about learning and language. I don't believe he was ever married, which makes a certain sort of sense here. I'd maintain that it would be difficult to have raised a toddler, having to continually remind them to "use their words," to communicate themselves, and then argue that thought cannot continually outrun the limits of language/exist prior to it.
  • Wittgenstein’s creative sublimation of Kant


    Dogs cannot set out the rule they are following. We can

    That's partly my point. Once we know what a rule is, we can make rules for dogs, cats, ourselves, etc. Kripke's point about rule following is patently false, at least the way it is presented there. I might buy that we learn "what rules are" through our interactions with others, but it's also clear we can develop and implement private rules.

    Anyhow, unfortunately, we can only set out the rules the dog is following. If we could set out the rules that we follow, then philosophy of language wouldn't be in the state it is in.

    I will leave my comments on Davidson's theory there.
  • Wittgenstein’s creative sublimation of Kant


    Gotcha. I am only vaguely familiar with Davidson. I assumed "a process of ‘triangulation’ must occur, whereby the content of the thought someone is having is ‘fixed’ by the way in which someone else correlates the responses he makes to something in the world," suggested an ongoing process.

    The latter example still seems like a problem, unless we're going to say that "someone else," doesn't need to mean "some other experiencing entity."

    "...it is impossible to make sense of what it is to follow a rule correctly, unless this means that what one is doing is following the practice of others who are like-minded"

    Is this true, though? Tolkien nerds can certainly correct each other on the proper use of the Elven language, but was the language not a language system until Tolkien shared it? Surely it had rules before then. Once one knows what a rule is, it seems completely possible to make up your own, in isolation, e.g., Allan Calhamer inventing the game Diplomacy, Naismith inventing basketball, etc. That we can create rules in the absence of a community, and that others can then learn them, is how we get things like the mystery of the Zodiac Killer's ciphers (Ted Cruz, of course) or the related issue of languages that are "dead" for thousands of years before being decoded.
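    To make the private-rule point concrete, here is a minimal sketch (the key, message, and names are all invented for illustration): a Caesar substitution cipher stands in for Tolkien's Elvish or the Zodiac ciphers. The rule is perfectly determinate before anyone else knows it, and a later decoder can recover it from the artifact alone, without ever having shared a practice with its inventor.

```python
# Hypothetical sketch: a privately invented rule (a Caesar cipher) that
# others can later recover without any shared practice.

KEY = 7  # a shift chosen privately, never shared with anyone

def encode(text: str, key: int) -> str:
    """Shift each letter by `key` places; leave other characters alone."""
    return "".join(
        chr((ord(c) - 97 + key) % 26 + 97) if c.isalpha() else c
        for c in text.lower()
    )

def decode(text: str, key: int) -> str:
    return encode(text, -key)

ciphertext = encode("meet at dawn", KEY)

# A later "decoder" who tries all 26 shifts recovers both the rule and
# the message from the artifact alone, Rosetta-Stone style.
candidates = [decode(ciphertext, k) for k in range(26)]
assert "meet at dawn" in candidates
```

The rule-following here is checkable and correctable in isolation: the inventor can misapply their own key and catch the error, which is exactly what the community-practice account seems to rule out.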

    On the other end of the spectrum, it's possible to get a dog to follow rules and perform acts based on verbal commands, but the rule following there hardly seems like it can "fix" the content of thought.
  • Wittgenstein’s creative sublimation of Kant


    On one interpretation, Davidson’s transcendental argument is based on his account of what it takes for a thought to have content, for which he argues that a process of ‘triangulation’ must occur, whereby the content of the thought someone is having is ‘fixed’ by the way in which someone else correlates the responses he makes to something in the world. Thus, Davidson argues, if there were no other people, the content of our thoughts would be totally indeterminate, and we would in effect have no thoughts at all...

    The counterfactual seems tough here. If there is a lone astronaut on a mission out past the Moon, and a freak particle accelerator accident somehow generates a black hole that tears the Earth apart, so that our astronaut is now the lone surviving human, would her thoughts lose their content?

    It doesn't even seem obvious that we must be around other minds for our thoughts to have content. We can imagine a human child raised by highly sophisticated robots. The robots have no subjective experience, but they are able to function well enough to keep the child alive and run her through the basics of a K-12 education, responding to her prompts the way a much more advanced ChatGPT might. Do her thoughts lack content? It's not obvious that they should.

    I guess this sort of gets at my point about foundationalism: the need to ground the obvious substance of everyday experience, instead of beginning with it as Aristotle suggests.

    Saint Augustine says, "understanding is the reward of faith. Therefore, seek not to understand that you may believe, but believe that you may understand." (Tractate 29) To which Anselm adds "For I believe even this: that unless I believe, I shall not understand." (Proslogion, drawing on Isaiah 7:9) This could be taken as a religious platitude, but in fact Augustine applies it against the same sort of solipsistic and relativist concerns common to modern philosophy.

    His point, laid out most fully in Contra Academicos, is that learning itself requires taking experience as it comes. We can doubt anything. Yet if we doubt every letter in our physics textbook, we shall never learn physics. Only after we have digested the topic can we have an informed opinion about its validity, and this will be the case even if no firm "foundation" exists (which is the case in modern physics; we know the middle scales better than the smallest or largest ones). This is true for social concerns and solipsism too. We can doubt that our parents are our parents, for we could have been switched at birth, but it would be insane to refuse filial devotion to our parents for this reason. Augustine's point is less clear in the context of modern culture, where it isn't seen as so shameful to lack filial devotion. The modern example here might be posting nude pictures to the internet because you assume other minds might not exist (whereas the ancients didn't much care about nudity).

    So, regardless of whether conversation is required to give thoughts content, it is clear that in our case, it is an important component of how our thoughts come to have content.
  • Currently Reading
    I just reread Boethius' "The Consolation of Philosophy."

    Such an amazing book. If ethics were going to be taught in schools (and it really should be), I would put this up there with Aristotle.

    It's very similar to Saint Augustine, who does an excellent job fusing Plato and Aristotle in his ethics, but Boethius manages to be far more concise while also being far less ostentatiously religious (a pro for modern audiences). That, and the back and forth of poetry and dialogue is really great.

    The only weak part is his framing of the privation theory of evil, which is not particularly convincing.

    Also been reading the Analects. There are some interesting similarities to Aristotle in Confucius. MacIntyre's After Virtue sold me on the idea that modern ethics is fundamentally flawed, but he largely looks back at the Western, particularly the Aristotelian, tradition. I wanted to explore the Platonist/Patristic tradition more (Boethius) and that of China, since they also seem to avoid the fall into emotivism and excessive individualism MacIntyre describes re the moderns.
  • Wittgenstein’s creative sublimation of Kant


    But is it idealism?

    I think we can avoid this question if we take to heart Wittgenstein's advice in PI about not looking for all-encompassing theories. Unfortunately, Wittgenstein himself doesn't always take this advice, and some of his disciples in particular fail to heed it when they attempt to develop a theory of all language, or even all communication, solely in terms of "games." That language is sometimes usefully thought of as a game does not entail that we must always think of it as such, or attempt some sort of "reduction."

    If you go through an introductory text on philosophy of language, you're likely to find a steady stream of mutually exclusive claims about how language is "just" (reducible to):

    - Signs representing propositions (abstract objects);
    - Verification or truth conditions;
    - Games;
    - The communication of internal mental states; etc.

    There are good arguments for each, and also significant flaws in each, and in general they also tend to totally ignore the broader field of semiotics, leaving the field a bit "free-floating" relative to other philosophical areas of inquiry that certainly seem relevant (e.g., metaphysics, philosophical anthropology, etc.).

    I personally really like the work of Robert Sokolowski, who capably weaves together Husserl, much of philosophy of language, Aristotle, and Aquinas to develop a solid theory of philosophical anthropology in a way that jibes well with Wittgenstein's sentiment. However, it doesn't go overboard in trying to reduce the human experience or its horizons to language. There is a practical element. He follows Aristotle's advice in the Ethics that "sometimes with complex things you need to start at the end or the middle, with what is most familiar, not with a clear foundation/beginning," and that "we shouldn't expect hyper-detailed answers for the most complex phenomena."

    This allows him the space to develop a theory where conversation and intersubjectivity are essential to the human experience, and how we come to "say things about things," without getting "stuck in the box of language," or "the cabinet of the mind." That is, he says we should start with language because it is dominant in our lives and philosophical discourse, uniquely human, and on the surface of our experience to analyze. Then, from the intersection of language and phenomenology, we can get into how predication works, how intelligibilities are perceived/communicated, etc. without having to reduce everything to language or necessarily ground the discourse in language in a strict sense. There is room for metaphysics, etc., but we start with what is most obvious, "what people say," and phenomenological experience, then make our way from there, without using these as a "foundation" in the strict sense of "all phenomena must be traced back to and explained in terms of our foundation."

    It seems to me like perhaps the biggest misstep in modern philosophy is the obsession with foundationalism, although the 20th-century tendency to claim all other positions were "meaningless," or the jump to make all difficult philosophical questions into "pseudo-problems" or else eliminate (or massively deflate) the difficult term, are up there too. By my count, there have been serious attempts to eliminate causation, truth, logic, meaning, qualia, our own consciousness, etc. Surely these can't all be dispensed with, or we'll have no philosophy left.
  • A re-definition of {analytic} that seems to overcome ALL objections that anyone can possibly have


    Expressions of language that are stipulated to be true are from the current correct model of the actual world. If someone says that the current number of members of congress is {a stale bologna sandwich} then they are wrong.

    Are they wrong in virtue of the fact that a bologna sandwich was never elected to Congress or are they wrong in virtue of the fact that the database hasn't included that as an axiom?

    proven completely true entirely on the basis of its meaning

    Ok, so you can have your magic database, and I will make my own. In mine, the current congressman for the 12th District is a stale bologna sandwich. This is axiomatic and can be "proven completely true entirely on the basis of its meaning."

    Is it now the case that it is both completely true and also false that a bologna sandwich is a member of Congress? Or is your database right and my database is wrong? If yours is right and mine is wrong, in virtue of what is your database correct and mine incorrect? It can't be in virtue of the meanings of terms alone, for I have a unique integer code that says that a bologna sandwich is a member of Congress by definition.

    Might it be that yours is correct because it is true in virtue of how the proposition relates to states of affairs and not the meaning ascribed to some code? :chin:
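    The stalemate can be made vivid with a toy sketch (all the names and entries are invented for illustration): two rival "axiom databases," each of which "verifies" its entries purely by text lookup, and neither of which has any internal resource for adjudicating between them.

```python
# Hypothetical sketch: two rival "axiom databases" that each certify
# their own stipulations by lookup alone.

db_yours = {"12th District representative": "a human congressman"}
db_mine = {"12th District representative": "a stale bologna sandwich"}

def verified(db: dict, subject: str, predicate: str) -> bool:
    """'Proof' by stipulation: true iff the pairing appears in the database."""
    return db.get(subject) == predicate

# Each database certifies its own axiom with equal confidence...
assert verified(db_yours, "12th District representative", "a human congressman")
assert verified(db_mine, "12th District representative", "a stale bologna sandwich")

# ...and nothing internal to either database can say which stipulation
# matches the actual state of affairs. That adjudication is empirical.
```

Both lookups succeed equally well, which is the point: "verified entirely on the basis of text" cannot distinguish the database that tracks states of affairs from the one that doesn't.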
  • A re-definition of {analytic} that seems to overcome ALL objections that anyone can possibly have


    Axiom is a proposition regarded as self-evidently true without proof.

    Is it self-evidently true that Moscow must be the capital of Russia?



    Cat is animal.
    Cat is plant.

    But after the update, the system has two expressions for the same word cat, which are contradictory.

    It seems to me the larger issue is if you were simply to put something completely false in, e.g. "the US House of Representatives has 572 members."

    This is false. How do we know it is false? Not because "The US House of Representatives" fails to be synonymous with "has 572 members." It is a contingent fact. If something like the Wyoming rule were ever passed, the House very well could end up with that many members, but that still wouldn't make the fact true by definition. It isn't an analytic truth. Most true propositions are not analytic.

    We could debate propositions related to natural kinds, e.g., whether "carbon" was synonymous with "the element with 6 protons in its nucleus" before "element" had its current definition and before anyone knew what a proton was. However, the more obvious case where this breaks down is propositions like "Moscow is the capital of Russia." Well, it is right now. It wasn't when Saint Petersburg was the capital, though, and it might not be in the future. Moscow simply is not synonymous with "the capital of Russia"; "Moscow is the capital of Russia" is not a tautology, it is not analytic.

    Saying, "what if we collected all possible non-analytical truths and then declared them true by axiom, won't that turn them into analytical truths," is totally missing what an analytical truth is. It turns non-analytical truths into tautologies only in the context of our made-up language. But our made-up language could just as easily contain false axioms. How would we determine which is which? How do we determine which true "axioms" to include in our language? Well, for all those truths that aren't real tautologies, it would still require sense data, because they are simply not analytical truths. You can't "turn a truth analytic" by axiom (at least not in any context in which the distinction is remotely useful).

    The distinction was about truths simpliciter, not about "what can be made analytical in some arbitrary system." Absolutely no one denies that you can make a system where "Paris is the capital of Mexico," is true by definition, and that in that system, that proposition will be true by definition, a tautology, and thus "analytical." But that's really missing the point of both why the distinction was ever relevant and Quine and others' critique of it, which is not about truth in the context of some one arbitrary system. You have to overdose on deflation and think of truth as just "what formal systems say about statements," to get to this.
  • On Carcinization


    God is a Lobster, or a double pincer

    God, I hope not. We'd be in for it.

  • On Carcinization


    Nature is way ahead of you.
  • A re-definition of {analytic} that seems to overcome ALL objections that anyone can possibly have


    So the discovery of the periodic table occurred because one day someone said "this is by definition true"? We know water is H2O because someone happened to declare "this is true by definition"? People don't come to know that dogs bark by hearing dogs bark, but rather because one day someone declared that "dogs bark" is axiomatically true? Come on.

    You are disagreeing that there can be a correct model of the world because you don't understand
    how it is updated?

    The empirical fact/analytic distinction relates to how facts are discovered/verified. Things like "water is composed of hydrogen and oxygen" were discovered empirically.

    You're fundamentally misunderstanding what the distinction is and why it is important. Even assuming some sort of magical list of all true statements, it would still be the case that the way one verifies that the fact statements on the list are true is through sense experience. There is a reason the distinction is literally between analytical truths and facts: they are not the same thing, and what makes them different is what is required to verify them.

    "Water is H2O" is a good example in that this was not known for most of human history. Establishing the synonymy of "water" and "H2O" requires matters of fact, which is part of Quine's point.

    But more to the point, even if you don't buy those critiques, it still remains the case that "analytic" never referred to matters of fact. Kant's definition of an analytic truth is a truth whose negation is a contradiction. "Ravenna is not the capital of the Roman Empire" is not a contradiction, even though Ravenna was at one point a capital of that empire. "Ravenna" names a city in northern Italy; it is not a synonym for "capital of the Roman Empire."
  • A re-definition of {analytic} that seems to overcome ALL objections that anyone can possibly have


    Ah, so when the Roman capital moves to Milan, people learn about this and memorize it... how exactly? How exactly did people come to memorize the fact that Senator Obama had become President Obama? Your solution involves totally ignoring how facts are actually known, and you still haven't explained why/how false axioms wouldn't be added. In virtue of what are facts verified as true so that they can be stated as true by definition? The periodic table wasn't a given to humanity; it had to be discovered, etc.

    Hume's Fork is about how we come to know truths; the distinction is about how people can come to know things. A magical, inviolable database where all true statements exist and no false ones sort of misses the point of the debate.
  • A re-definition of {analytic} that seems to overcome ALL objections that anyone can possibly have


    One wonders, then, what the utility of language classes is, or how it is that people living in a foreign country come to speak its language. :roll:

    But it seems like the point stands: how does one differentiate between true and false axioms such as "Michelle is the tallest woman in the room," "Springfield is the capital of Illinois," "Mogadishu is the capital of Florida," "weed is a slang term for marijuana," "Alfred the Great is a slang term for cocaine," or "Helium has an atomic number of 8"? These aren't going to be shown to be true or false analytically, and you could make any of them "axioms" even though some are false.
  • A re-definition of {analytic} that seems to overcome ALL objections that anyone can possibly have


    With my redefinition of the {analytic} side of the analytic/synthetic distinction any and all knowledge
    that can be completely verified as true entirely on the basis of text <is> stipulated to be analytic.

    Yes, and most (arguably all) facts fail to actually fall into your category.

    That "cats are animals" is verified as true on the basis of the axiom {cats are animals}.

    No, "cats are animals," is verified by experience. Consider that we could just as easily stipulate "cats are racecars," "cats are robots," and "cats are rocket ships" as axioms. Then, using the same processes, we could "verify" that these are analytical truths entirely on the basis of the text/axioms.

    Your concept of what makes something an analytical truth would entail that literally any arbitrarily chosen axiom is "an analytical truth." But this is clearly nonsense; cats are not racecars just because we have stated an axiom that "cats are racecars." How does one distinguish between bad "axioms" that are clearly nonsense, such as "cats are sailboats," and good axioms that are true, like "cats are animals"? Through experience of what cats are.
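    A minimal sketch of the point, with invented axioms throughout: if "verification entirely on the basis of text" just means lookup among the stipulated axioms, the procedure certifies nonsense exactly as readily as truth, and fails genuine truths that were never stipulated.

```python
# Invented axioms for illustration. "Verification entirely on the basis
# of text" reduces to set membership, which cannot tell good axioms from bad.

axioms = {"cats are animals", "cats are racecars", "cats are sailboats"}

def analytically_true(statement: str) -> bool:
    # "True" here just means: the text appears among the stipulated axioms.
    return statement in axioms

assert analytically_true("cats are animals")      # a truth, certified
assert analytically_true("cats are racecars")     # nonsense, certified just as easily
assert not analytically_true("cats are mammals")  # a truth, but never stipulated

# Only experience of what cats are can separate these three cases;
# nothing in the lookup procedure does.
```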

    No one thought of solving the distinction by advancing the solution that "if you arbitrarily declare all true things true by definition and all false things false by definition, then every truth becomes analytical," because it totally misses why the distinction is useful in the first place. It presupposes that you already know what is true and what is false. But then how do you find out which "axiom" is true so as to posit it in the first place? Certainly not on the basis of the words' meanings alone. Nothing about the term "Ravenna" entails "capital of the late Western Roman Empire," for instance.
  • A re-definition of {analytic} that seems to overcome ALL objections that anyone can possibly have


    Isn't it an objection to say that the definitions of the terms in play are arbitrary and not tied to reality? Or more to what I think Quine's point was, you would have to do a lot of empirical work to figure out what definitions to put into your database. That is, they aren't actually analytical truths because what you have put into the database has been determined not by definitions, but by empirical inquiry.

    Otherwise it's just saying something like: given A is true, and given A = B, B is true. But the interest in analytical a priori truths was generally motivated by the idea that they were aspects of reality that could be known with certainty, and which might act as a foundation for the justification of knowledge, not by the truism that "anything defined as true by definition is true by definition, given we accept that definition."

    Going out and cataloging a bunch of non-analytical truths (empirical facts), throwing them into a database, and then saying "I have now defined every fact as true by definition," doesn't solve the problem. Particularly, it fails to solve the problem if you embrace any theory of truth other than completely deflationary ones. For, "cats are a type of sailboat" could no doubt be defined as an "analytical truth," by fiat and entered into a database, but this would not make it true that cats are a type of sailboat. This would, under most theories of truth, just make it a falsehood.

    There is a reason facts were not considered analytical, why Hume's Fork, which kicks off the distinction, distinguishes between "relations of ideas" (analytical) and "matters of fact." You could make innumerable databases purporting to be "true models of the world," that vary in what they define as true. How would one compare these databases and determine which "analytical truths" are actually true? They would have to go out and observe the world... which means, in point of fact, the truths aren't analytical because determining their truth value does not depend on their definition (because empirical facts aren't analytical).
  • A re-definition of {analytic} that seems to overcome ALL objections that anyone can possibly have


    If we have this sort of nearly infinite database that corresponds to all facts about the world, why even bother with genus and species? What is the value of encoding "cats are contained in animals," and in virtue of what does the database decide that a given entity should be contained in a given category? Questions like "are 'pet rocks' really 'pets'?" would seem to need answering to allow for a full categorization of all "facts."

    In scholastic philosophy genus and difference are predicated of things as known by us, as conceptualized or as present “in the mind.” They arise when the intellect reflects on itself and on what it contains. But it seems like the system you are envisioning:

    A. Has no mind.
    B. Could simply contain all the facts uniquely specifying each individual cat, or each atom in each cat, etc.

    Getting into species and genus seems difficult because people disagree about them and they disagree about how they relate to actual ontological differences. For example, "do species actually exist?" is a topic of debate in the philosophy of biology. What it means to be "living" is itself contested. Do viruses fall under the category of "living?"

    Amorphous terms like "post-modern," "fascism," etc. don't seem to clearly map to entities in any sort of definitive fashion. Rather, it would seem that the database would need to incorporate each individual's beliefs and judgements vis-à-vis species and genus across time as independent facts. Facts like "Mount Washington is the tallest mountain in the Presidential Range" are based on evolving social conventions, and even what constitutes a distinct mountain, and not just another peak on the same mountain, is not based on firm criteria.

    Yet without categorical distinctions, the database seems to turn into nothing but a phase-space map of the universe, or a Laplace's demon, in which case it seems hard to see how it is easier to get facts out of it than to simply observe the world (unless we also envision a computer of unlimited computational power attached to it).

    Secondly, essences appear to be able to evolve over time. "Communism," today is not the same concept/category that it was in 1848. "Essences" are not what they were for Aristotle. Would the database need to have time/culture dependent categorization?

    Further, we can consider the problem of defining supervenience relations. In virtue of what will a given subatomic particle be said to be "part of" a given candle flame or cat, and won't this change moment to moment?
  • How to do nothing with Words.


    How so? Counterfactual analysis is probably the single biggest tool used in the philosophy of causation. If Aristotle never writes anything, it is clear that our world would be quite different, due to innumerable small changes across the ages. It's not even clear if there would be a Kant or a Prussia. More directly, if Kant had never read Aristotle and never been introduced to his ideas, it seems reasonable to assume his thought would have been quite different. What are the chances that Kant derives Aristotle's exact categories for judgement had he not already been using those categories because of Aristotle?

    So where is the absence of any causal link?

    Is the objection that reading Hume or Aristotle didn't necessitate Kant's work? That's certainly true, but there is a useful distinction between "x uniquely determines y" and "x plays a causal role in y." No one cigarette is going to "cause" lung disease, but years of smoking would seem to, with each cigarette playing a causal role.

    Animals evolving to survive on land didn't uniquely specify the invention of cars, but it seems to be a necessary precondition for their invention. That everything can be causally traced back to the beginning of the universe per prevailing physics is sometimes raised as an objection to the entire concept of causation, but I don't think this really holds water.

  • Supervenience Problems: P-Regions and B-Minimal Properties


    Let M be the property of an object P being able to depress a pressure sensitive plate. You can remove or add matter to P, and M still holds. Moreover, you can replace P with a different material with the same mass, and M still holds.

    I don't think so.

    Let's take B-minimal properties first. If all we care about is the plate being depressed, then the B-minimal properties will include the plate apparatus and the property of x amount of force pressing down on the plate. For simplicity, let's think of the plate being depressed or not as a binary, on/off. In this case the B-minimal properties of the object on the plate would be something like "weighing y pounds/grams" where y = the absolute minimum needed to depress the plate.

    In our real-world example, the object on the plate might very well weigh more than y, but y doesn't change. Y is the absolute minimum to flip the plate's binary setting from off to on (given all other conditions of the system). Y is fixed by the characteristics of the thing it is trying to explain; it is defined as the minimum force needed to produce an "on" state in the plate. Any change in the amount of force needed to define the on/off state (any change in M) requires that B-min(P) also change. This is because of how B-min(P) itself is defined: as the minimal properties needed to produce M (in this case, an "on" state).

    So, like I said, P is multiply realizable here in one sense. The object on the plate can have all sorts of different chemical compositions. But the B-minimal property of exerting x amount of force on the plate doesn't change when the chemical composition changes. And if the amount of force needed to produce an "on" state is changed or redefined (an M change), then by definition B-min(P) must be redefined since it is defined by exactly what is needed to produce M.
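    The plate example can be made concrete with a toy sketch (a minimal illustration only; the threshold value and names like `plate_on` are my own, not drawn from any formal treatment of supervenience):

    ```python
    # Toy model of the pressure-plate example: the plate's "on" state (M) is
    # fixed by the B-minimal property "exerting at least y units of force,"
    # not by the composition of the object doing the pressing.

    Y_THRESHOLD = 10.0  # y: the minimum force needed to flip the plate to "on"

    def plate_on(force: float) -> bool:
        """M: the plate's binary on/off state, determined entirely by force."""
        return force >= Y_THRESHOLD

    # Multiple realizability: physically different objects share the same
    # B-minimal property (same force), so they produce the same M.
    lead_weight_force = 12.0   # a lead weight
    water_jug_force = 12.0     # a jug of water, totally different composition
    assert plate_on(lead_weight_force) == plate_on(water_jug_force)

    # But an M-difference requires a B-minimal difference: the only way to
    # change the plate's state is to change the force relative to y.
    assert plate_on(9.0) != plate_on(12.0)
    ```

    Note that redefining what counts as "on" (an M change) just means changing `Y_THRESHOLD`, which is exactly the point above: B-min(P) is redefined whenever M is, because it is defined in terms of M.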

    The P-Region concept is trickier. The way I understand it, we are talking about the actual spatio-temporal region involved in producing M given any one actual instance of M. If something in P-Reg(P) can be removed and M doesn't change, then it shouldn't be in the P-Region in the first place. If something can be added to P-Reg(P) and not change M, then it shouldn't be in the P-Region because it is not essential to M. P-Reg(P) is defined such that something is only included in it if it plays an essential role in producing M. This means P-Reg(P) cannot change unless M changes (no multiple realizability).

    I did consider whether we could redefine the P-Region such that the P-Region is "any possible spatio-temporal region that generates M." This turns the P-Region into a set of physical ensembles.

    The problem here is that it essentially turns the P-Region concept into the B-minimal concept. Under this version, the P-Region would just be the set of all possible physical systems with the B-minimal properties associated with M. So, this version of the P-Region has the same sort of multiple realizability that B-Minimal properties do, but at the cost of becoming virtually the same thing.

    But in defining how supervenience works in the real world, I think the unmodified version is more intuitive.

    With the modified version and B-Minimal properties, supervenience becomes defined in terms of a single given mental state and its relation to a set of possible physical systems, rather than a single physical system.
  • Postmodernism and Mathematics


    The whole point of the "9/11 didn't happen" meme popular in places like 4chan isn't that people actually think that the government falsified the construction of the Twin Towers in some objective sense, and then faked an attack on non-existent buildings. That would be too ridiculous even for those circles. The point is that history is whatever people in power say it is (and that Alt-Right activists possess this same power to change history). Objective history is inaccessible, a myth. The history we live with is malleable. It's a joke, but a joke aimed at an in-crowd that has come to see the past as socially constructed.

    This is what is normally referred to as anti-realism, in the philosophy of history at least.


    Are there people who really believe that Taylor Swift's entire career was a "psyop" to build up a media figure who could be leveraged for political gains? I'm sure there are, but the whole wave of attacks on her has an air of unreality. The audience isn't supposed to see it as objective truth; the point is precisely that it is ridiculous, as this gets it into the mainstream media, which in turn makes it real in a way, because once something is in mass media people need to take a side based on their identity allegiances. It's trolling, which is at the heart of the Alt-Right. And at the heart of that sort of political trolling is the same sort of "performative transgression" you see in third-wave feminist actions like the "Slut Walk."

    This is a movement that happily rejoiced in the term "alternative facts."

    Another main route for anti-realism to enter the far-right has been through esoterica, particularly Julius Evola and Rene Guenon. On places like 4chan it is not rare to have people talking about tulpas, creating realities through concentrated thought (thinking something is true makes it so), although this is generally partially ironic (like everything in the Alt-Right). Hence their God who was created from memetic energy or whatever. Everything is ironic and unreal, a sort of trolling of the "real" to show its total groundlessness. The Christchurch shooter covered his weapons in meme jokes because even terror attacks are covered in a level of irony and unreality: DFW's sincere post-irony in the flesh.

    The subtext behind declaring every mass shooting a "hoax" is that "you can never be sure what is happening in current events." In a world where consensus reality has collapsed, identity has primacy and determines the world narrative. Daniel Friberg doesn't urge "rebutting" or "debunking" leftist "lies" but "deconstructing their narratives" in "metapolitical warfare." When Mark Brahmin lays out his plan for a new religion based on worship of Apollo he is not claiming the Greco-Roman gods are "real," but that they were real and can be again. (And we can consider all the neopagans and the ubiquitous references to "LARPing" here too.)

    Adherents to this religion are meant to forge religious Männerbünde: elite male groups of cultural critics and creators, metapolitical warriors. Their goal? The overthrow of "Saturn"—representative of perceived dysgenic, anti-Aryan forces in religion, politics, and society—followed by the establishment of a Nordicist "eugenic cult" and the erection of Apollonian temples and idols.


    This certainly looks a lot like the campus projects that grew out of continental philosophy, at least.
  • Supervenience Problems: P-Regions and B-Minimal Properties


    I don't think any of these attempted precisions are aimed at ruling out multiple realizability. Multiple realizability is a feature, not a bug of supervenience, and I haven't seen anyone actually trying to rule it out.

    Yes, exactly. That's why P-Regions not allowing for multiple realizability seems like it might be a bug. In general, it seems we would like to have multiple realizability because it suggests that M is dependent on P, not that the two just vary together.

    I don't see how that follows. Supervenience with P-regions or B-minimal properties is still an asymmetric relation: There can be no M-differences without P-differences, but the reverse does not hold.

    A physical entity is part of the P-Region if and only if it is essential to M. This facet is what rules out multiple realizability. If a physical entity can be removed from the system and it doesn't affect M, then by definition it is not included in the P-Region. We are now in a situation where P cannot vary unless M also varies (P is in fact defined in terms of M so that this is the case).

    The problem is the same for B-Minimal Properties, but less acute. By definition, a property of a physical system is B-minimal if and only if it is essential to M. So again, if M varies, P must vary. It is a bidirectional correspondence.

    The difference is that B-Minimal Properties themselves seem like they might be multiply realizable through disparate ensembles of matter and energy. If we consider a person looking at an apple, the apple and an indiscernibly similar fake apple might have different physical makeups but instantiate identical B-minimal processes vis-à-vis the viewing subject.
  • What is Logic?


    The above point tells us that logic is not just simple symbolic formula manipulation.

    I'm a bit more cautious about that. It seems like the die is already cast on logic generally referring to formal systems in philosophy. I was searching around for a good term to refer to the idea of "what we use logic to describe in nature," but I haven't thought of a catchy one.

    "Logos" has a nice ring, but it's quite pregnant with mystical and theological connotations, and I don't necessarily want to imply all of those.
  • Postmodernism and Mathematics


    This is true, and it's worth noting that many of the "big names" associated with the movement rejected the label. It seems like only younger scholars ever came around to embracing it.



    Well, to my broader point, it certainly seems like elements of the right have taken Baudrillard's thesis in "The Gulf War Did Not Take Place" to heart. If you look at narratives on the war in Ukraine, what can be said to have "actually happened," invocations of hyperreality, or the ubiquitous claims that wartime events are "psyops," it seems at least something has seeped in.

    Perhaps we can't rightly call anti-realism vis-à-vis history (or even contemporary events) post-modern, but it certainly gets lumped in with the term, and it's a cornerstone of Alt-Right thought.

    I generally find myself agreeing with Freinacht (who does seem to embrace the pomo label) on the ways in which the movement is itself post-modern. At the very least, it is emblematic of the problems many post-modern thinkers were striving to identify re globalization and late-stage capitalism. I think "blame" narratives miss the mark, because in many cases theorists were diagnosing problems, and this is unfairly conflated with their advocating for those same problems.

    https://metamoderna.org/4-things-that-make-the-alt-right-postmodern/


    If we allow that critical theory and identity movements fit under the umbrella of post modernism then the relationship is even more obvious because the Alt-Right is both a self-conscious reaction to these movements, while also itself being a similar sort of identity movement employing similar methods of critique.
  • Postmodernism and Mathematics


    Right, but the question was: "did elements of the Nu/Alt-Right grow out of/use ideas from post-modernism?" not "does Nick Land understand Deleuze in particular?"

    The attacks on science and the concept of accelerationism in particular don't change much in content when employed by their new users.

    By way of example, we might allow that Karl Marx seems to have misread Hegel in some core respects, but he certainly didn't misread or fail to understand everything Hegel was laying down. Nor would it be unfair to say Marxism clearly grows out of Left-Hegelianism.

    However, like I said, it seems unreasonable to assume that someone who had a successful career as an academic publishing on Deleuze, and who wasn't subject to particular criticism until after he adopted controversial political opinions, completely misread his sources. I don't even know if these sorts of questions are answerable. You get no clear summary of Plato in Aristotle, and lots of contravening opinion, but that Plato's star pupil failed to understand him seems unlikely, even if no clear answer lies in the text.

    As for the quote, the debates about Derrida are interminable. The claim of his critics is not that he didn't ever voice positions akin to that quote; this is easy to verify. The question is whether other parts of his work contradict that sentiment, or whether it becomes "truth for me, but not for thee" in practice. I'm not really interested enough to care who was actually right here, and it's irrelevant to the point about the modern right being influenced by post-modernism.
  • Human beings: the self-contradictory animal


    This is an interesting, well framed post. But I find myself disagreeing with most of it, so I'll just throw out why.

    Philosophical inquiry, try as it may to find some sort of light, has led us deeper and deeper into the questions. At this point in history, we now must immediately be suspicious of ourselves if we ever claim some answer has been made clear - this could give rise to the much maligned "objectivity" or worse, the terror of "dogma".

    I would consider what the fear of error presupposes itself.

    Meanwhile, if the fear of falling into error introduces an element of distrust into science, which without any scruples of that sort goes to work and actually does know, it is not easy to understand why, conversely, a distrust should not be placed in this very distrust, and why we should not take care lest the fear of error is not just the initial error. As a matter of fact, this fear presupposes something, indeed a great deal, as truth, and supports its scruples and consequences on what should itself be examined beforehand to see whether it is truth. It starts with ideas of knowledge as an instrument, and as a medium; and presupposes a distinction of ourselves from this knowledge. More especially it takes for granted that the Absolute stands on one side, and that knowledge on the other side, by itself and cut off from the Absolute, is still something real; in other words, that knowledge, which, by being outside the Absolute, is certainly also outside truth, is nevertheless true — a position which, while calling itself fear of error, makes itself known rather as fear of the truth.

    G.W.F. Hegel, The Phenomenology of Spirit, §74


    If we try to say what identity is, we are stuck battling between some sort of platonic fairy essence, or universal natural kind fairy, and

    I am sympathetic to your point here as a big fan of process physics, but I think conceiving of the Platonist's universal as a sort of magical "fairy essence" is a little off the mark. Plato is nondualist in a key respect, in the sense that Shankara, Meister Eckhart, or Plotinus deny real ontic division. Plato and later Platonists like Porphyry, Plotinus, and Proclus (P⁴ as I call 'em) rather embrace an idea of veridical hierarchy, where what is "more real" is more real in virtue of being less contingent, less a bundle of external causes, and thus more fully itself and self-determining.

    I only point this out because I long dismissed Plato as positing some sort of "spirit realm" of forms distinct from the world we live in, and only later realized this is a cheapening of what Plato has to offer. It might be better to think of Plato as a sort of objective idealist rather than any sort of dualist, and his conception of the universal flows from his idealism and anthropology.

    Aristotelian essence would likewise not be a "fairy essence." Essence is a facet of nature related to form. For Aristotle, secondary substance is discovered through experience and the process of abstraction (and thought is definitively processual in Aristotle). I don't think this conception of essence is necessarily at odds with "the motions of physics that constantly redefine particular things and deny identity a chance to take hold, ever." Aristotelian essence can be cashed out in information-theoretic/pancomputationalist conceptions of physics (which are essentially processual), where the form is just the informational ensemble corresponding to morphisms between "types" abstracted in consciousness. The question of whether such forms truly "exist" would be related to the question "do numbers exist?" But for Aristotle, forms, number, shape, etc. exist exactly where they are instantiated in the natural world, so there doesn't seem to be much "fairy" or supernatural about them. Granted, there are also many ways to formulate essence that do clash with physics.

    If we try to point out a thing in itself, we can show that we are pointing in the wrong direction, pointing always instead at the phenomenon of our own devise, never at the thing that is the only thing we were seeking.

    Doesn't this assertion clash with the above assertion re skepticism and dogma? Certainly many thinkers have challenged Kant's formulation of "phenomenal vs thing-in-itself." The biggest charge against this is precisely that it results from Kant's own dogmatic presuppositions. Aside from that, per Berkeley, Kant is just simply wrong and confused here, positing things he has no reason for positing. Point being, this assertion re the limits of knowledge is itself grounded in its own metaphysical assertion.


    If we try to point at our own minds, being the Pointer, as if mental constructs were things in themselves, we spin off into an impossible picture of a self-reflective object that never grasps any object at all, and that is a now immaterial "self", or we spin off into mind/body dualism that is irreconcilable, incapable of physical unity. And we have to invent ghosts or spirit or ego as placeholders without any better grounding than the fairy Platonic forms. More Deus ex Machina to move the plot along.

    This seems to be a real problem. This is why a number of thinkers (e.g. Jensen) say the way to avoid "being trapped in the box" of ideas or language, etc. is to simply never get inside the box to begin with. What is required is a paradigm shift. That is, getting trapped in the box is evidence of bad starting suppositions. As Aristotle says in the Nicomachean Ethics, "sometimes we need to start at the end, with what is most familiar to us." Much of modern philosophy is the denial of this: the assertion of foundationalism and the need to start from the beginning, with what is least familiar to us. Its problems might simply suggest a flaw in methodology.

    If we are amazed that my words here have allowed you to read this far in the post, we should be amazed, because the meaning of words is like identity, or essence, or self - a placeholder so that we might use these words at all, and the pursuit of "objective meaning" is a useless pursuit because meaning is more like use in the first place, and "meaning" has no real use anymore. As usual, putting aside what my words here might possibly mean, words themselves do not seem sturdy enough to move us out of the gate. And now I remind myself that all wisdom can only be recorded in words, so even if I found wisdom, why would I think I could communicate it in words?


    Should we be amazed? It seems prima facie unreasonable to say words don't mean anything. To quote J.S. Mill, "one must have made some significant advances in philosophy to believe such a thing."

    If language is ONLY use in games, what game are we playing with ourselves when we engage in internal monologue? What game are mammals playing when we can almost universally recognize aggression or fear in each other based on facial expressions? It seems to me like a full accounting of language requires that the idea of a "game" and "use" be stretched to the point where they no longer reflect their original content. In PI, Wittgenstein warns against such all encompassing theorizing and reduction. But considering all of PI grows out of a Saint Augustine quote, I find myself wishing Wittgenstein had engaged a bit with the semiotic theory of that author, because I think it would clear a lot up.

    The problem might lie in the search for "objective meaning" itself. Words cannot mean things "of themselves." We have a fundamentally broken paradigm if we must assert such a thing. But much hay has been made over showing how the positivist paradigm (objectivity approaches truth at the limit) is wrong, and then turning around to claim that this means we must dispense with "meaning" and "truth" entirely. Rocks do not understand words by having them carved into them. Humans understand words.

    From the outset with Saint Augustine, semiotics has involved a tripartite model: the object known / the sign by which it is known / the interpretant who knows. Philosophy of language early in the 20th century largely ignored this model, to its own peril. Thus we end up with a formulation where the sign represents an insurmountable barrier between object and interpretant, rather than the very means by which the two are linked, a strange formulation.

    And then there is freedom, that base existential condition that is what it is to be a human being, in a world so over-crowded with necessity and determined forces that there can be no room for freedom. Of course the logic that demands we see freedom is impossible, is the same logic that showed us logic itself may be built of the illogical.

    Only if we assume that determinism and freedom are mutually exclusive. But consider that uncaused randomness also precludes freedom. For our actions to be our own, they must be determined by our memories, desires, beliefs, etc. I think Leibniz makes a very solid case that determinism is a prerequisite for freedom, not anathema to it. Determinism is only a problem for libertarian formulations of free will as "uncaused." I think these are ultimately contradictory: if something is uncaused, determined by nothing, then it is random and arbitrary. Random action isn't free will, although it also isn't determinist.

    The other main objections to compatibilist free will tend to be grounded in reductionism and smallism, and I find the empirical evidence for these claims to be weak at best. That is, "atoms don't have meaning and purpose, and all facts about humans are reducible to facts about atoms, so reason and purpose must be illusory," is not a claim I think is particularly well supported by the sciences. Actual reductions (not the unifications they are often confused for) have been very rare in the sciences, and mature fields like chemistry, or physics itself, have yet to be reduced. Is a century long enough to declare that reductionism shouldn't be the default assumption?

Count Timothy von Icarus
